Heterologous

The term heterologous has several meanings in biology.
Gene expression
In cell biology and protein biochemistry, heterologous expression means that a protein is experimentally put into a cell that does not normally make (i.e., express) that protein. Heterologous (meaning 'derived from a different organism') refers to the fact that often the transferred protein was initially cloned from or derived from a different cell type or a different species from the recipient.
Typically the protein itself is not transferred, but instead the 'correctly edited' genetic material coding for the protein (the complementary DNA or cDNA) is added to the recipient cell. The genetic material that is transferred typically must be within a format that encourages the recipient cell to express the cDNA as a protein (i.e., it is put in an expression vector).
Methods for transferring foreign genetic material into a recipient cell include transfection and transduction. The choice of recipient cell type is often based on an experimental need to examine the protein's function in detail, and the most prevalent recipients, known as heterologous expression systems, are chosen usually because they are easy to transfer DNA into or because they allow for a simpler assessment of the protein's function.
Stem cells
In stem cell biology, a heterologous transplant refers to cells from a mixed population of donor cells. This is in contrast to an autologous transplant, where the cells are derived from the same individual, and an allogeneic transplant, where the donor cells are HLA-matched to the recipient. A heterologous source of therapeutic cells will have a much greater availability than either autologous or allogeneic cellular therapies.
Structural biology
In structural biology, a heterologous association is a binding mode between the protomers of a protein structure. In a heterologous association, each protomer contributes a different set of residues to the binding interface. In contrast, two protomers form an isologous association when they contribute the same set of residues to the protomer-protomer interface.
See also
Autologous
Homologous
Homology (biology)
Heterogeneous
References
Protein structure
Pleiotropy

Pleiotropy (from Greek pleion 'more' and tropos 'way') occurs when one gene influences two or more seemingly unrelated phenotypic traits. Such a gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, because the gene codes for a product that is used by a myriad of cell types or by different targets that share the same signaling function.
Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and gender).
An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for enzyme phenylalanine hydroxylase, that affects multiple systems, such as the nervous and integumentary system.
Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
History
Pleiotropic traits had been previously recognized in the scientific community but had not been experimented on until Gregor Mendel's 1866 pea plant experiment. Mendel recognized that certain pea plant traits (seed coat color, flower color, and axial spots) seemed to be inherited together; however, their correlation to a single gene was never proven. The term "pleiotropie" was first coined by Ludwig Plate in his Festschrift, which was published in 1910. He originally defined pleiotropy as occurring when "several characteristics are dependent upon ... [inheritance]; these characteristics will then always appear together and may thus appear correlated". This definition is still used today.
After Plate's definition, Hans Gruneberg was the first to study the mechanisms of pleiotropy. In 1938 Gruneberg published an article dividing pleiotropy into two distinct types: "genuine" and "spurious" pleiotropy. "Genuine" pleiotropy is when two distinct primary products arise from one locus. "Spurious" pleiotropy, on the other hand, is either when one primary product is utilized in different ways or when one primary product initiates a cascade of events with different phenotypic consequences. Gruneberg came to these distinctions after experimenting on rats with skeletal mutations. He recognized that "spurious" pleiotropy was present in the mutation, while "genuine" pleiotropy was not, thus partially invalidating his own original theory. Through subsequent research, it has been established that Gruneberg's definition of "spurious" pleiotropy is what we now identify simply as "pleiotropy".
In 1941 American geneticists George Beadle and Edward Tatum further invalidated Gruneberg's definition of "genuine" pleiotropy, advocating instead for the "one gene-one enzyme" hypothesis that was originally introduced by French biologist Lucien Cuénot in 1903. This hypothesis shifted future research regarding pleiotropy towards how a single gene can produce various phenotypes.
In the mid-1950s Richard Goldschmidt and Ernst Hadorn, through separate individual research, reinforced the faultiness of "genuine" pleiotropy. A few years later, Hadorn partitioned pleiotropy into a "mosaic" model (which states that one locus directly affects two phenotypic traits) and a "relational" model (which is analogous to "spurious" pleiotropy). These terms are no longer in use but have contributed to the current understanding of pleiotropy.
By accepting the one gene-one enzyme hypothesis, scientists instead focused on how uncoupled phenotypic traits can be affected by genetic recombination and mutations, applying it to populations and evolution. This view of pleiotropy, "universal pleiotropy", defined as locus mutations being capable of affecting essentially all traits, was first implied by Ronald Fisher's Geometric Model in 1930. This mathematical model illustrates how evolutionary fitness depends on the independence of phenotypic variation from random changes (that is, mutations). It theorizes that an increasing phenotypic independence corresponds to a decrease in the likelihood that a given mutation will result in an increase in fitness. Expanding on Fisher's work, Sewall Wright provided more evidence in his 1968 book Evolution and the Genetics of Populations: Genetic and Biometric Foundations by using molecular genetics to support the idea of "universal pleiotropy". The concepts of these various studies on evolution have seeded numerous other research projects relating to individual fitness.
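Read quantitatively, Fisher's argument is usually summarized as follows (the notation below is an assumption added for illustration, not taken from this article): for an organism at distance d from the phenotypic optimum in n independent trait dimensions, a random mutation of phenotypic magnitude r is beneficial with probability approximately

$$P(\text{beneficial}) \;\approx\; 1 - \Phi\!\left(\frac{r\sqrt{n}}{2d}\right),$$

where Φ is the standard normal cumulative distribution function. Mutations of larger effect, and mutations that perturb more traits at once (larger n), are therefore less likely to improve fitness, which is the sense in which universal pleiotropy constrains adaptation.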
In 1957 evolutionary biologist George C. Williams theorized that antagonistic effects will be exhibited during an organism's life cycle if its genes are closely linked and pleiotropic. Natural selection favors genes that are more beneficial prior to reproduction than after (leading to an increase in reproductive success). Knowing this, Williams argued that if only close linkage were present, then beneficial traits would occur both before and after reproduction due to natural selection. This, however, is not observed in nature, and thus antagonistic pleiotropy contributes to the slow deterioration with age (senescence).
Mechanism
Pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. The underlying mechanism is genes that code for a product that is either used by various cells or has a cascade-like signaling function that affects various targets.
Polygenic traits
Most genetic traits are polygenic in nature: controlled by many genetic variants, each of small effect. These genetic variants can reside in protein coding or non-coding regions of the genome. In this context pleiotropy refers to the influence that a specific genetic variant, e.g., a single nucleotide polymorphism or SNP, has on two or more distinct traits.
Genome-wide association studies (GWAS) and machine learning analysis of large genomic datasets have led to the construction of SNP based polygenic predictors for human traits such as height, bone density, and many disease risks. Similar predictors exist for plant and animal species and are used in agricultural breeding.
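To illustrate how such SNP-based predictors work in principle, here is a minimal sketch of a linear polygenic score; the rs-identifiers, effect sizes, and genotypes are hypothetical placeholders, not values from any published predictor.

```python
# Minimal sketch of a linear polygenic predictor (illustrative values only).
# Each SNP contributes (per-allele effect size) x (number of effect alleles, 0/1/2);
# real predictors use thousands to millions of SNPs fit by GWAS / machine learning.

effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}  # hypothetical per-allele effects
genotype     = {"rs0001": 2,    "rs0002": 1,     "rs0003": 0}     # hypothetical allele counts for one person

polygenic_score = sum(effect_sizes[snp] * genotype[snp] for snp in effect_sizes)
print(f"Polygenic score: {polygenic_score:+.2f} (relative to the population mean)")
```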
One measure of pleiotropy is the fraction of genetic variance that is common between two distinct complex human traits: e.g., height vs bone density, breast cancer vs heart attack risk, or diabetes vs hypothyroidism risk. This has been calculated for hundreds of pairs of traits. In most cases examined, the genomic regions controlling each trait are largely disjoint, with only modest overlap.
Thus, at least for complex human traits so far examined, pleiotropy is limited in extent.
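One way to picture this "fraction of genetic variance in common" measure is the hedged sketch below; it assumes per-SNP variance contributions are already known for both traits, whereas the actual studies use more sophisticated estimators, and the loci and numbers are invented.

```python
# Sketch: fraction of genetic variance shared between two traits,
# given hypothetical per-SNP variance contributions (each dict sums to 1.0).
var_trait_A = {"rs1": 0.30, "rs2": 0.20, "rs3": 0.50}
var_trait_B = {"rs3": 0.10, "rs4": 0.60, "rs5": 0.30}

shared_snps = set(var_trait_A) & set(var_trait_B)
# How much of each trait's genetic variance sits at loci shared with the other trait?
shared_A = sum(var_trait_A[s] for s in shared_snps)
shared_B = sum(var_trait_B[s] for s in shared_snps)
print(f"Shared loci explain {shared_A:.0%} of trait A and {shared_B:.0%} of trait B variance")
```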
Models for the origin
One basic model of pleiotropy's origin describes a single gene locus that contributes to the expression of a certain trait. The locus affects the expressed trait only through changing the expression of other loci. Over time, that locus would come to affect two traits by interacting with a second locus. Directional selection for both traits during the same time period would increase the positive correlation between the traits, while selection on only one trait would decrease the positive correlation between the two traits. Eventually, traits that underwent directional selection simultaneously became linked by a single gene, resulting in pleiotropy.
The "pleiotropy-barrier" model proposes a logistic growth pattern for the increase of pleiotropy over time. This model differentiates between the levels of pleiotropy in evolutionarily younger and older genes subjected to natural selection. It suggests a higher potential for phenotypic innovation in evolutionarily newer genes due to their lower levels of pleiotropy.
Other more complex models compensate for some of the basic model's oversights, such as multiple traits or assumptions about how the loci affect the traits. They also propose the idea that pleiotropy increases the phenotypic variation of both traits since a single mutation on a gene would have twice the effect.
Evolution
Pleiotropy can have an effect on the evolutionary rate of genes and allele frequencies. Traditionally, models of pleiotropy have predicted that the evolutionary rate of genes is related negatively with pleiotropy: as the number of traits of an organism increases, the evolutionary rates of genes in the organism's population decrease. For a long time this relationship was not clearly found in empirical studies. However, a study based on human disease genes revealed evidence of a lower evolutionary rate in genes with higher pleiotropy.
In mating, for many animals the signals and receptors of sexual communication may have evolved simultaneously as the expression of a single gene, instead of the result of selection on two independent genes, one that affects the signaling trait and one that affects the receptor trait. In such a case, pleiotropy would facilitate mating and survival. However, pleiotropy can act negatively as well. A study on seed beetles found that intralocus sexual conflict arises when selection for certain alleles of a gene that are beneficial for one sex causes expression of potentially harmful traits by the same gene in the other sex, especially if the gene is located on an autosomal chromosome.
Pleiotropic genes act as an arbitrating force in speciation. William R. Rice and Ellen E. Hostert (1993) concluded that the observed prezygotic isolation in their studies was a product of pleiotropy's balancing role in indirect selection. By imitating the traits of all-infertile hybridized species, they noticed that the fertilization of eggs was prevented in all eight of their separate studies, a likely effect of pleiotropic genes on speciation. Likewise, stabilizing selection on pleiotropic genes allows allele frequencies to be altered.
Studies on fungal evolutionary genomics have shown pleiotropic traits that simultaneously affect adaptation and reproductive isolation, converting adaptations directly to speciation. A particularly telling case of this effect is host specificity in pathogenic ascomycetes, specifically in Venturia, the fungus responsible for apple scab. Each of these parasitic fungi adapts to a host, and they are only able to mate within a shared host after obtaining resources. Since a single toxin gene or virulence allele can grant the ability to colonize the host, adaptation and reproductive isolation are instantly facilitated, which in turn pleiotropically causes adaptive speciation. The studies on fungal evolutionary genomics will further elucidate the earliest stages of divergence as a result of gene flow, and provide insight into pleiotropically induced adaptive divergence in other eukaryotes.
Antagonistic pleiotropy
Sometimes, a pleiotropic gene may be both harmful and beneficial to an organism, which is referred to as antagonistic pleiotropy. This may occur when the trait is beneficial for the organism's early life, but not its late life. Such "trade-offs" are possible since natural selection affects traits expressed earlier in life, when most organisms are most fertile, more than traits expressed later in life.
This idea is central to the antagonistic pleiotropy hypothesis, which was first developed by G.C. Williams in 1957. Williams suggested that some genes responsible for increased fitness in the younger, fertile organism contribute to decreased fitness later in life, which may give an evolutionary explanation for senescence. An example is the p53 gene, which suppresses cancer but also suppresses stem cells, which replenish worn-out tissue.
Unfortunately, the process of antagonistic pleiotropy may result in an altered evolutionary path with delayed adaptation, in addition to effectively cutting the overall benefit of any alleles by roughly half. However, antagonistic pleiotropy also lends greater evolutionary "staying power" to genes controlling beneficial traits, since an organism with a mutation to those genes would have a decreased chance of successfully reproducing, as multiple traits would be affected, potentially for the worse.
Sickle cell anemia is a classic example of the mixed benefit given by the staying power of pleiotropic genes, as the mutation to Hb-S provides the fitness benefit of malaria resistance to heterozygotes as sickle cell trait, while homozygotes have significantly lowered life expectancy—what is known as "heterozygote advantage". Since both of these states are linked to the same mutated gene, large populations today are susceptible to sickle cell despite it being a fitness-impairing genetic disorder.
Examples
Albinism
Albinism results from mutation of the TYR gene, which encodes the enzyme tyrosinase. This mutation causes the most common form of albinism. The mutation alters the production of melanin, thereby affecting melanin-related and other dependent traits throughout the organism. Melanin is a substance made by the body that is used to absorb light and provides coloration to the skin. Indications of albinism are the absence of color in an organism's eyes, hair, and skin, due to the lack of melanin. Some forms of albinism are also known to have symptoms that manifest through rapid eye movements, light sensitivity, and strabismus.
Autism and schizophrenia
Pleiotropy in genes has been linked between certain psychiatric disorders as well. Deletion in the 22q11.2 region of chromosome 22 has been associated with schizophrenia and autism. Schizophrenia and autism are linked to the same gene deletion but manifest very differently from each other. The resulting phenotype depends on the stage of life at which the individual develops the disorder. Childhood manifestation of the gene deletion is typically associated with autism, while adolescent and later expression of the gene deletion often manifests in schizophrenia or other psychotic disorders. Though the disorders are linked by genetics, there is no increased risk found for adult schizophrenia in patients who experienced autism in childhood.
A 2013 study also genetically linked five psychiatric disorders, including schizophrenia and autism. The link was a single nucleotide polymorphism of two genes involved in calcium channel signaling with neurons. One of these genes, CACNA1C, has been found to influence cognition. It has been associated with autism, as well as linked in studies to schizophrenia and bipolar disorder. These particular studies show clustering of these diseases within patients themselves or families. The estimated heritability of schizophrenia is 70% to 90%, therefore the pleiotropy of genes is crucial since it causes an increased risk for certain psychotic disorders and can aid psychiatric diagnosis.
Phenylketonuria (PKU)
A common example of pleiotropy is the human disease phenylketonuria (PKU). This disease causes mental retardation and reduced hair and skin pigmentation, and can be caused by any of a large number of mutations in the single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, which converts the amino acid phenylalanine to tyrosine. Depending on the mutation involved, this conversion is reduced or ceases entirely. Unconverted phenylalanine builds up in the bloodstream and can lead to levels that are toxic to the developing nervous system of newborn and infant children. The most dangerous form of this is called classic PKU, which is common in infants. The baby seems normal at first but actually incurs permanent intellectual disability. This can cause symptoms such as mental retardation, abnormal gait and posture, and delayed growth. Because tyrosine is used by the body to make melanin (a component of the pigment found in the hair and skin), failure to convert normal levels of phenylalanine to tyrosine can lead to fair hair and skin.
The frequency of this disease varies greatly. In the United States, PKU occurs in nearly 1 in 10,000 births. Due to newborn screening, doctors are able to detect PKU in a baby sooner. This allows them to start treatment early, preventing the baby from suffering the severe effects of PKU. PKU is caused by a mutation in the PAH gene, whose role is to instruct the body on how to make phenylalanine hydroxylase. Phenylalanine hydroxylase converts phenylalanine taken in through the diet into tyrosine and other compounds that the body can use. The mutation often decreases the effectiveness or rate at which the hydroxylase breaks down phenylalanine, which is what causes phenylalanine to build up in the body.
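The quoted incidence of roughly 1 in 10,000 births also allows a quick estimate of carrier frequency, assuming a single autosomal recessive locus in Hardy-Weinberg equilibrium (a textbook simplification added here for illustration, not a figure from this article):

$$q^2 = \tfrac{1}{10{,}000} \;\Rightarrow\; q = 0.01,\qquad 2pq = 2(0.99)(0.01) \approx 0.0198,$$

i.e., roughly 1 person in 50 carries one non-functional PAH allele without being affected.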
Sickle cell anemia
Sickle cell anemia is a genetic disease that causes deformed red blood cells with a rigid, crescent shape instead of the normal flexible, round shape. It is caused by a change in one nucleotide, a point mutation in the HBB gene. The HBB gene encodes information to make the beta-globin subunit of hemoglobin, which is the protein red blood cells use to carry oxygen throughout the body. Sickle cell anemia occurs when the HBB gene mutation causes both beta-globin subunits of hemoglobin to change into hemoglobin S (HbS).
Sickle cell anemia is a pleiotropic disease because the expression of a single mutated HBB gene produces numerous consequences throughout the body. The mutated hemoglobin forms polymers and clumps together causing the deoxygenated sickle red blood cells to assume the disfigured sickle shape. As a result, the cells are inflexible and cannot easily flow through blood vessels, increasing the risk of blood clots and possibly depriving vital organs of oxygen. Some complications associated with sickle cell anemia include pain, damaged organs, strokes, high blood pressure, and loss of vision. Sickle red blood cells also have a shortened lifespan and die prematurely.
Marfan syndrome
Marfan syndrome (MFS) is an autosomal dominant disorder which affects 1 in 5,000–10,000 people. MFS arises from a mutation in the FBN1 gene, which encodes the glycoprotein fibrillin-1, a major constituent of the extracellular microfibrils which form connective tissues. Over 1,000 different mutations in FBN1 have been found to result in abnormal function of fibrillin, which consequently leads to connective tissue that progressively elongates and weakens. Because these fibers are found in tissues throughout the body, mutations in this gene can have a widespread effect on certain systems, including the skeletal, cardiovascular, and nervous systems, as well as the eyes and lungs.
Without medical intervention, prognosis of Marfan syndrome can range from moderate to life-threatening, with 90% of known causes of death in diagnosed patients relating to cardiovascular complications and congestive cardiac failure. Other characteristics of MFS include an increased arm span and decreased upper to lower body ratio.
"Mini-muscle" allele
A gene recently discovered in laboratory house mice, termed "mini-muscle", causes, when mutated, a 50% reduction in hindlimb muscle mass as its primary effect (the phenotypic effect by which it was originally identified). In addition to smaller hindlimb muscle mass, the mutant mice exhibit lower heart rates during physical activity and higher endurance. Mini-muscle mice also exhibit larger kidneys and livers. All of these morphological deviations influence the behavior and metabolism of the mouse. For example, mice with the mini-muscle mutation were observed to have a higher per-gram aerobic capacity. The mini-muscle allele shows Mendelian recessive behavior. The mutation is a single nucleotide polymorphism (SNP) in an intron of the myosin heavy polypeptide 4 gene.
DNA repair proteins
DNA repair pathways that repair damage to cellular DNA use many different proteins. These proteins often have other functions in addition to DNA repair. In humans, defects in some of these multifunctional proteins can cause widely differing clinical phenotypes. As an example, mutations in the XPB gene, which encodes the largest subunit of the basal transcription factor II H, have several pleiotropic effects. Cells carrying XPB mutations are deficient in nucleotide excision repair of DNA and in the quite separate process of gene transcription. In humans, XPB mutations can give rise to the cancer-prone disorder xeroderma pigmentosum or the non-cancer-prone multisystem disorder trichothiodystrophy. Another example in humans is the ERCC6 gene, which encodes a protein that mediates DNA repair, transcription, and other cellular processes throughout the body. Mutations in ERCC6 are associated with disorders of the eye (retinal dystrophy), heart (cardiac arrhythmias), and immune system (lymphocyte immunodeficiency).
Chickens
Chickens exhibit various traits affected by pleiotropic genes. Some chickens exhibit the frizzle feather trait, in which their feathers all curl outward and upward rather than lying flat against the body. Frizzle feather was found to stem from a deletion in the genomic region coding for α-keratin. This gene seems to pleiotropically lead to other abnormalities such as increased metabolism, higher food consumption, accelerated heart rate, and delayed sexual maturity.
Domesticated chickens underwent a rapid selection process that led to unrelated phenotypes having high correlations, suggesting pleiotropic, or at least close linkage, effects between comb mass and physiological structures related to reproductive abilities. Both males and females with larger combs have higher bone density and strength, which allows females to deposit more calcium into eggshells. This linkage is further evidenced by the fact that two of the genes, HAO1 and BMP2, affecting medullary bone (the part of the bone that transfers calcium into developing eggshells) are located at the same locus as the gene affecting comb mass. HAO1 and BMP2 also display pleiotropic effects with commonly desired domestic chicken behavior; those chickens who express higher levels of these two genes in bone tissue produce more eggs and display less egg incubation behavior.
See also
cis-regulatory element
Enhancer (genetics)
Epistasis
Genetic correlation
Metabolic network
Metabolic supermice
Polygene
References
External links
Pleiotropy is 100 years old
Evolutionary developmental biology
Genetics concepts
Food

Food is any substance consumed by an organism for nutritional support. Food is usually of plant, animal, or fungal origin and contains essential nutrients such as carbohydrates, fats, proteins, vitamins, or minerals. The substance is ingested by an organism and assimilated by the organism's cells to provide energy, maintain life, or stimulate growth. Different species of animals have different feeding behaviours that satisfy the needs of their metabolisms and have evolved to fill a specific ecological niche within specific geographical contexts.
Omnivorous humans are highly adaptable and have adapted to obtain food in many different ecosystems. Humans generally use cooking to prepare food for consumption. The majority of the food energy required is supplied by the industrial food industry, which produces food through intensive agriculture and distributes it through complex food processing and food distribution systems. This system of conventional agriculture relies heavily on fossil fuels, which means that the food and agricultural systems are one of the major contributors to climate change, accounting for as much as 37% of total greenhouse gas emissions.
The food system has significant impacts on a wide range of other social and political issues, including sustainability, biological diversity, economics, population growth, water supply, and food security. Food safety and security are monitored by international agencies like the International Association for Food Protection, the World Resources Institute, the World Food Programme, the Food and Agriculture Organization, and the International Food Information Council.
Definition and classification
Food is any substance consumed to provide nutritional support and energy to an organism. It can be raw, processed, or formulated and is consumed orally by animals for growth, health, or pleasure. Food is mainly composed of water, lipids, proteins, and carbohydrates. Minerals (e.g., salts) and organic substances (e.g., vitamins) can also be found in food. Plants, algae, and some microorganisms use photosynthesis to make some of their own nutrients. Water is found in many foods and has been defined as a food by itself. Water and fiber have low energy densities, or calories, while fat is the most energy-dense component. Some inorganic (non-food) elements are also essential for plant and animal functioning.
Human food can be classified in various ways, either by related content or by how it is processed. The number and composition of food groups can vary. Most systems include four basic groups that describe their origin and relative nutritional function: Vegetables and Fruit, Cereals and Bread, Dairy, and Meat. Studies that look into diet quality group food into whole grains/cereals, refined grains/cereals, vegetables, fruits, nuts, legumes, eggs, dairy products, fish, red meat, processed meat, and sugar-sweetened beverages. The Food and Agriculture Organization and World Health Organization use a system with nineteen food classifications: cereals, roots, pulses and nuts, milk, eggs, fish and shellfish, meat, insects, vegetables, fruits, fats and oils, sweets and sugars, spices and condiments, beverages, foods for nutritional uses, food additives, composite dishes and savoury snacks.
Food sources
In a given ecosystem, food forms a web of interlocking chains with primary producers at the bottom and apex predators at the top. Other aspects of the web include detritivores (that eat detritus) and decomposers (that break down dead organisms). Primary producers include algae, plants, bacteria and protists that acquire their energy from sunlight. Primary consumers are the herbivores that consume the plants, and secondary consumers are the carnivores that consume those herbivores. Some organisms, including most mammals and birds, have a diet consisting of both animals and plants, and they are considered omnivores. The chain ends with the apex predators, the animals that have no known predators in their ecosystem. Humans are considered apex predators.
Humans are omnivores, finding sustenance in vegetables, fruits, cooked meat, milk, eggs, mushrooms and seaweed. Cereal grain is a staple food that provides more food energy worldwide than any other type of crop. Corn (maize), wheat, and rice account for 87% of all grain production worldwide. Just over half of the world's crops are used to feed humans (55 percent), with 36 percent grown as animal feed and 9 percent for biofuels. Fungi and bacteria are also used in the preparation of fermented foods like bread, wine, cheese and yogurt.
Photosynthesis
During photosynthesis, energy from the sun is absorbed and used to transform water and carbon dioxide in the air or soil into oxygen and glucose. The oxygen is then released, and the glucose stored as an energy reserve. Photosynthetic plants, algae and certain bacteria often represent the lowest point of the food chains, making photosynthesis the primary source of energy and food for nearly all life on earth.
Plants also absorb important nutrients and minerals from the air, natural waters, and soil. Carbon, oxygen and hydrogen are absorbed from the air or water and are the basic nutrients needed for plant survival. The three main nutrients absorbed from the soil for plant growth are nitrogen, phosphorus and potassium, with other important nutrients including calcium, sulfur, magnesium, iron, boron, chlorine, manganese, zinc, copper, molybdenum and nickel.
Microorganisms
Bacteria and other microorganisms also form the lower rungs of the food chain. They obtain their energy from photosynthesis or by breaking down dead organisms, waste or chemical compounds. Some form symbiotic relationships with other organisms to obtain their nutrients. Bacteria provide a source of food for protozoa, who in turn provide a source of food for other organisms such as small invertebrates. Other organisms that feed on bacteria include nematodes, fan worms, shellfish and a species of snail.
In the marine environment, plankton (which includes bacteria, archaea, algae, protozoa and microscopic fungi) provide a crucial source of food to many small and large aquatic organisms.
Without bacteria, life would scarcely exist, because bacteria convert atmospheric nitrogen into nutritious ammonia. Ammonia is the precursor to proteins, nucleic acids, and most vitamins. Since the advent of the industrial process for nitrogen fixation, the Haber–Bosch process, the majority of ammonia in the world is human-made.
Plants
Plants as a food source are divided into seeds, fruits, vegetables, legumes, grains and nuts. Where plants fall within these categories can vary, with botanically described fruits such as the tomato, squash, pepper and eggplant or seeds like peas commonly considered vegetables. Food is a fruit if the part eaten is derived from the reproductive tissue, so seeds, nuts and grains are technically fruit. From a culinary perspective, fruits are generally considered the remains of botanically described fruits after grains, nuts, seeds and fruits used as vegetables are removed. Grains can be defined as seeds that humans eat or harvest, with cereal grains (oats, wheat, rice, corn, barley, rye, sorghum and millet) belonging to the Poaceae (grass) family and pulses coming from the Fabaceae (legume) family. Whole grains are foods that contain all the elements of the original seed (bran, germ, and endosperm). Nuts are dry fruits, distinguishable by their woody shell.
Fleshy fruits (distinguishable from dry fruits like grain, seeds and nuts) can be further classified as stone fruits (cherries and peaches), pome fruits (apples, pears), berries (blackberry, strawberry), citrus (oranges, lemon), melons (watermelon, cantaloupe), Mediterranean fruits (grapes, fig), tropical fruits (banana, pineapple). Vegetables refer to any other part of the plant that can be eaten, including roots, stems, leaves, flowers, bark or the entire plant itself. These include root vegetables (potatoes and carrots), bulbs (onion family), flowers (cauliflower and broccoli), leaf vegetables (spinach and lettuce) and stem vegetables (celery and asparagus).
The carbohydrate, protein and lipid content of plants is highly variable. Carbohydrates are mainly in the form of starch, fructose, glucose and other sugars. Most vitamins are found in plant sources, with the exceptions of vitamin D and vitamin B12. Mineral content can also be plentiful or sparse. Fruit can consist of up to 90% water, contains high levels of simple sugars that contribute to its sweet taste, and has a high vitamin C content. Compared to fleshy fruit (excepting bananas), vegetables are high in starch, potassium, dietary fiber, folate and vitamins, and low in fat and calories. Grains are more starch-based, and nuts have a high protein, fibre, vitamin E and B content. Seeds are a good source of food for animals because they are abundant and contain fibre and healthful fats, such as omega-3 fats. Complicated chemical interactions can enhance or depress the bioavailability of certain nutrients. Phytates can prevent the release of some sugars and vitamins.
Animals that only eat plants are called herbivores; those that mostly eat fruits are known as frugivores, leaf and shoot eaters are folivores (such as pandas), and wood eaters are termed xylophages (such as termites). Frugivores include a diverse range of species from annelids to elephants, chimpanzees and many birds. About 182 fish species consume seeds or fruit. Domesticated and wild animals use many types of grasses, adapted to different locations, as their main source of nutrients.
Humans eat thousands of plant species; there may be as many as 75,000 edible species of angiosperms, of which perhaps 7,000 are often eaten. Plants can be processed into breads, pasta, cereals, juices and jams, or raw ingredients such as sugar, herbs, spices and oils can be extracted. Oilseeds are pressed to produce rich oils: sunflower, flaxseed, rapeseed (including canola oil) and sesame.
Many plants and animals have coevolved in such a way that the fruit is a good source of nutrition to the animal who then excretes the seeds some distance away, allowing greater dispersal. Even seed predation can be mutually beneficial, as some seeds can survive the digestion process. Insects are major eaters of seeds, with ants being the only real seed dispersers. Birds, although being major dispersers, only rarely eat seeds as a source of food and can be identified by their thick beak that is used to crack open the seed coat. Mammals eat a more diverse range of seeds, as they are able to crush harder and larger seeds with their teeth.
Animals
Animals are used as food either directly or indirectly. This includes meat, eggs, shellfish and dairy products like milk and cheese. They are an important source of protein and are considered complete proteins for human consumption as they contain all the essential amino acids that the human body needs. One steak, chicken breast or pork chop contains about 30 grams of protein. One large egg has 7 grams of protein. A serving of cheese has about 15 grams of protein. And 1 cup of milk has about 8 grams of protein. Other nutrients found in animal products include calories, fat, essential vitamins (including B12) and minerals (including zinc, iron, calcium, magnesium).
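Using only the per-serving figures quoted above, the minimal sketch below compares them with a commonly cited reference intake of about 0.8 g of protein per kilogram of body weight per day; the reference value and the body weight are assumptions added for illustration, not figures from this article.

```python
# Protein per serving, taken from the figures quoted in the text (grams).
servings = {
    "steak / chicken breast / pork chop": 30,
    "large egg": 7,
    "serving of cheese": 15,
    "cup of milk": 8,
}

body_weight_kg = 70                 # hypothetical adult
target_g = 0.8 * body_weight_kg     # ~56 g/day reference intake (assumption)

total_g = sum(servings.values())
print(f"One of each serving supplies {total_g} g of protein, "
      f"about {total_g / target_g:.0%} of a {target_g:.0f} g/day reference intake.")
```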
Food products produced by animals include milk produced by mammary glands, which in many cultures is drunk or processed into dairy products (cheese, butter, etc.). Eggs laid by birds and other animals are eaten and bees produce honey, a reduced nectar from flowers that is used as a popular sweetener in many cultures. Some cultures consume blood, such as in blood sausage, as a thickener for sauces, or in a cured, salted form for times of food scarcity, and others use blood in stews such as jugged hare.
Taste
Animals, specifically humans, typically have five different types of tastes: sweet, sour, salty, bitter, and umami. The differing tastes are important for distinguishing between foods that are nutritionally beneficial and those which may contain harmful toxins. As animals have evolved, the tastes that provide the most energy are the most pleasant to eat while others are not enjoyable, although humans in particular can acquire a preference for some substances which are initially unenjoyable. Water, while important for survival, has no taste.
Sweetness is almost always caused by a type of simple sugar such as glucose or fructose, or disaccharides such as sucrose, a molecule combining glucose and fructose. Sourness is caused by acids, such as vinegar in alcoholic beverages. Sour foods include citrus, specifically lemons and limes. Sour is evolutionarily significant as it can signal a food that may have gone rancid due to bacteria. Saltiness is the taste of alkali metal ions such as sodium and potassium. It is found in almost every food in low to moderate proportions to enhance flavor. Bitter taste is a sensation considered unpleasant characterised by having a sharp, pungent taste. Unsweetened dark chocolate, caffeine, lemon rind, and some types of fruit are known to be bitter. Umami, commonly described as savory, is a marker of proteins and characteristic of broths and cooked meats. Foods that have a strong umami flavor include cheese, meat and mushrooms.
While most animals' taste buds are located in their mouths, some insects' taste receptors are located on their legs and some fish have taste buds along their entire bodies. Dogs, cats and birds have relatively few taste buds (chickens have about 30), adult humans have between 2,000 and 4,000, while catfish can have more than a million. Herbivores generally have more than carnivores, as they need to tell which plants may be poisonous. Not all mammals share the same tastes: some rodents can taste starch, cats cannot taste sweetness, and several carnivores (including hyenas, dolphins, and sea lions) have lost the ability to sense up to four of the five taste modalities found in humans.
Digestion
Food is broken into nutrient components through the digestive process. Proper digestion consists of mechanical processes (chewing, peristalsis) and chemical processes (digestive enzymes and microorganisms). The digestive systems of herbivores and carnivores are very different, as plant matter is harder to digest. Carnivores' mouths are designed for tearing and biting, compared to the grinding action found in herbivores. Herbivores, however, have comparatively longer digestive tracts and larger stomachs to aid in digesting the cellulose in plants.
Food safety
According to the World Health Organization (WHO), about 600 million people worldwide get sick and 420,000 die each year from eating contaminated food. Diarrhea is the most common illness caused by consuming contaminated food, with about 550 million cases and 230,000 deaths from diarrhea each year. Children under five years of age account for 40% of the burden of foodborne illness, with 125,000 deaths each year.
A 2003 World Health Organization (WHO) report concluded that about 30% of reported food poisoning outbreaks in the WHO European Region occur in private homes. According to the WHO and CDC, in the USA alone, annually, there are 76 million cases of foodborne illness leading to 325,000 hospitalizations and 5,000 deaths.
From 2011 to 2016, on average, there were 668,673 cases of foodborne illness and 21 deaths each year. In addition, during this period, 1,007 food poisoning outbreaks with 30,395 cases of food poisoning were reported.
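The figures above translate into rough per-case rates; the sketch below simply divides the numbers quoted in the text and adds nothing else.

```python
# Rough rates derived from the WHO and CDC figures quoted above (per year).
who_cases, who_deaths = 600_000_000, 420_000               # worldwide
us_cases, us_hosp, us_deaths = 76_000_000, 325_000, 5_000  # USA

print(f"Worldwide case-fatality rate: {who_deaths / who_cases:.3%}")   # ~0.070%
print(f"US hospitalization rate:      {us_hosp / us_cases:.3%}")       # ~0.428%
print(f"US case-fatality rate:        {us_deaths / us_cases:.4%}")     # ~0.0066%
```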
See also
Food pairing
List of food and drink monuments
References
Further reading
Collingham, E. M. (2011). The Taste of War: World War Two and the Battle for Food
Katz, Solomon (2003). The Encyclopedia of Food and Culture, Scribner
Mobbs, Michael (2012). Sustainable Food Sydney: NewSouth Publishing,
Nestle, Marion (2007). Food Politics: How the Food Industry Influences Nutrition and Health, University Presses of California, revised and expanded edition,
The Future of Food (2015). A panel discussion at the 2015 Digital Life Design (DLD) Annual Conference. "How can we grow and enjoy food, closer to home, further into the future? MIT Media Lab's Kevin Slavin hosts a conversation with food artist, educator, and entrepreneur Emilie Baltz, professor Caleb Harper from MIT Media Lab's CityFarm project, the Barbarian Group's Benjamin Palmer, and Andras Forgacs, the co-founder and CEO of Modern Meadow, who is growing 'victimless' meat in a lab. The discussion addresses issues of sustainable urban farming, ecosystems, technology, food supply chains and their broad environmental and humanitarian implications, and how these changes in food production may change what people may find delicious ... and the other way around." Posted on the official YouTube Channel of DLD
External links
Food Timeline
Food, BBC Radio 4 discussion with Rebecca Spang, Ivan Day and Felipe Fernandez-Armesto (In Our Time, 27 December 2001)
Food watchlist articles
Decarboxylation

Decarboxylation is a chemical reaction that removes a carboxyl group and releases carbon dioxide (CO2). Usually, decarboxylation refers to a reaction of carboxylic acids, removing a carbon atom from a carbon chain. The reverse process, which is the first chemical step in photosynthesis, is called carboxylation, the addition of CO2 to a compound. Enzymes that catalyze decarboxylations are called decarboxylases or, the more formal term, carboxy-lyases (EC number 4.1.1).
In organic chemistry
The term "decarboxylation" usually means replacement of a carboxyl group with a hydrogen atom:
Decarboxylation is one of the oldest known organic reactions. It is one of the processes assumed to accompany pyrolysis and destructive distillation.
Overall, decarboxylation depends upon stability of the carbanion synthon R−, although the anion may not be a true chemical intermediate. Typically, carboxylic acids decarboxylate slowly, but carboxylic acids with an α electron-withdrawing group (e.g. β-keto acids, β-nitriles, α-nitro acids, or arylcarboxylic acids) decarboxylate easily. Decarboxylation of sodium chlorodifluoroacetate generates difluorocarbene:

CF2ClCO2Na → :CF2 + NaCl + CO2
Decarboxylation is an important step in the malonic ester and acetoacetic ester syntheses and in the Knoevenagel condensation; these reactions allow keto acids to serve as a stabilizing protecting group for carboxylic acid enols.
For the free acids, conditions that deprotonate the carboxyl group (possibly protonating the electron-withdrawing group to form a zwitterionic tautomer) accelerate decarboxylation. A strong base is key to ketonization, in which a pair of carboxylic acids combine to give the eponymous functional group:

2 RCO2H → RC(O)R + CO2 + H2O
Transition metal salts, especially copper compounds, facilitate decarboxylation via carboxylate complex intermediates. Metals that catalyze cross-coupling reactions thus treat aryl carboxylates as an aryl anion synthon; this synthetic strategy is the decarboxylative cross-coupling reaction.
Upon heating in cyclohexanone, amino acids decarboxylate. In the related Hammick reaction, uncatalyzed decarboxylation of a picolinic acid gives a stable carbene that attacks a carbonyl electrophile.
Oxidative decarboxylations are generally radical reactions. These include the Kolbe electrolysis and Hunsdiecker-Kochi reactions. The Barton decarboxylation is an unusual radical reductive decarboxylation.
As described above, most decarboxylations start with a carboxylic acid or its alkali metal salt, but the Krapcho decarboxylation starts with methyl esters. In this case, the reaction begins with halide-mediated cleavage of the ester, forming the carboxylate.
In biochemistry
Decarboxylations are pervasive in biology. They are often classified according to the cofactors that catalyze the transformations. Biotin-coupled processes effect the decarboxylation of malonyl-CoA to acetyl-CoA. Thiamine (T:) is the active component for decarboxylation of alpha-keto acids, including pyruvate, which loses CO2 to give a thiamine-bound hydroxyethyl intermediate.
Pyridoxal phosphate promotes decarboxylation of amino acids. Flavin-dependent decarboxylases are involved in transformations of cysteine.
Iron-based hydroxylases operate by reductive activation of O2, using the decarboxylation of alpha-ketoglutarate as an electron donor. The overall reaction can be depicted as:

alpha-ketoglutarate + O2 + substrate–H → succinate + CO2 + substrate–OH
Decarboxylation of amino acids
Common biosynthetic decarboxylations of amino acids to amines are:
tryptophan to tryptamine
phenylalanine to phenylethylamine
tyrosine to tyramine
histidine to histamine
serine to ethanolamine
glutamic acid to GABA
lysine to cadaverine
arginine to agmatine
ornithine to putrescine
5-HTP to serotonin
L-DOPA to dopamine
Other decarboxylation reactions from the citric acid cycle include:
pyruvate to acetyl-CoA (see pyruvate decarboxylation)
oxalosuccinate to α-ketoglutarate
α-ketoglutarate to succinyl-CoA.
Fatty acid synthesis
Straight-chain fatty acid synthesis occurs by recurring reactions involving decarboxylation of malonyl-CoA.
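As a concrete instance of this recurring step (standard biochemistry summarized here for illustration, not detail drawn from this article), each elongation cycle of fatty acid synthesis couples decarboxylation of the malonyl group to carbon-carbon bond formation:

acyl-ACP (n carbons) + malonyl-ACP → β-ketoacyl-ACP (n + 2 carbons) + CO2 + ACP

The loss of CO2 makes the condensation effectively irreversible, which is why the cell first invests ATP to carboxylate acetyl-CoA to malonyl-CoA.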
Case studies
Upon heating, Δ9-tetrahydrocannabinolic acid decarboxylates to give the psychoactive compound Δ9-tetrahydrocannabinol. When cannabis is heated in vacuum, the decarboxylation of tetrahydrocannabinolic acid (THCA) appears to follow first-order kinetics. The log fraction of THCA present decreases steadily over time, and the rate of decrease varies according to temperature. At 10-degree increments from 100 to 140 °C, half of the THCA is consumed in 30, 11, 6, 3, and 2 minutes; hence the rate constant follows Arrhenius' law, ranging between 10^−8 and 10^−5 in a linear relationship between log rate constant and inverse temperature. However, modelling of decarboxylation of salicylic acid with a water molecule had suggested an activation barrier of 150 kJ/mol for a single molecule in solvent, much too high for the observed rate. Therefore, it was concluded that this reaction, conducted in the solid phase in plant material with a high fraction of carboxylic acids, follows pseudo-first-order kinetics in which a nearby carboxylic acid participates without affecting the observed rate constant. Two transition states corresponding to indirect and direct keto-enol routes are possible, with energies of 93 and 104 kJ/mol. Both intermediates involve protonation of the alpha carbon, disrupting one of the double bonds of the aromatic ring and permitting the beta-keto group (which takes the form of an enol in THCA and THC) to participate in decarboxylation.
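The half-lives quoted above imply first-order rate constants via k = ln 2 / t½, and a plot of ln k against 1/T gives an apparent Arrhenius activation energy. The sketch below does exactly that with the numbers from the text; the fitted value (on the order of 90 kJ/mol, close to the computed transition-state energies mentioned above) is illustrative, not a figure reported by the cited work.

```python
import math

# Half-lives of THCA quoted in the text: 100-140 °C in 10-degree steps.
temps_C  = [100, 110, 120, 130, 140]
half_min = [30, 11, 6, 3, 2]

# First-order kinetics: k = ln(2) / t_half (half-lives converted to seconds).
ks    = [math.log(2) / (t * 60) for t in half_min]
inv_T = [1.0 / (T + 273.15) for T in temps_C]   # 1/T in 1/K
ln_k  = [math.log(k) for k in ks]

# Least-squares fit of the Arrhenius form ln k = ln A - Ea / (R T).
n = len(ks)
mx, my = sum(inv_T) / n, sum(ln_k) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(inv_T, ln_k))
         / sum((x - mx) ** 2 for x in inv_T))
Ea_kJ = -slope * 8.314 / 1000

print("Rate constants (1/s):", [f"{k:.1e}" for k in ks])
print(f"Apparent activation energy: {Ea_kJ:.0f} kJ/mol")
```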
In beverages stored for long periods, very small amounts of benzene may form from benzoic acid by decarboxylation catalyzed by the presence of ascorbic acid.
The addition of catalytic amounts of cyclohexenone has been reported to catalyze the decarboxylation of amino acids. However, using such catalysts may also yield an amount of unwanted by-products.
References
Substitution reactions
Digestion

Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth, where mechanical digestion begins with the action of mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogen carbonate (bicarbonate), which provides the ideal pH for amylase to work; and other electrolytes (sodium, potassium and chloride ions). About 30% of starch is hydrolyzed into disaccharides in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It then travels down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid, while also aiding lubrication. Hydrochloric acid provides the acidic pH needed by pepsin. At the same time that protein digestion is occurring, mechanical mixing occurs by peristalsis, waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which are further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake.
When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum, where it mixes with digestive enzymes from the pancreas and bile from the liver, and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic (about 5.6 to 6.9). Some vitamins, such as biotin and vitamin K (K2, MK-7) produced by bacteria in the colon, are also absorbed into the blood there. Absorption of water, simple sugars and alcohol also takes place in the stomach. Waste material (feces) is eliminated from the rectum during defecation.
Digestive system
Digestive systems take many forms. There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled.
Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted to a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient.
Secretion systems
Bacteria use several systems to obtain nutrients from other organisms in their environments.
Channel transport system
In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacterium. It is a simple system, which consists of only three protein subunits: the ABC protein, the membrane fusion protein (MFP), and the outer membrane protein. This secretion system transports various chemical species, from ions and drugs to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa.
Molecular syringe
A type III secretion system uses a molecular syringe through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium.
Conjugation machinery
The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system.
In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria.
Release of outer membrane vesicles
In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective.
Gastrovascular cavity
The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut.
A plant such as the Venus flytrap, which can make its own food through photosynthesis, does not eat and digest its prey for the traditional objectives of harvesting energy and carbon; instead, it mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat.
Phagosome
A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells.
Specialised organs and behaviours
To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others.
Beaks
Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak.
The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid.
Tongue
The tongue is a skeletal muscle on the floor of the mouth of most vertebrates that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue rolls food particles into a bolus, which is then transported down the esophagus through peristalsis.
The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract.
Teeth
Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape, and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception, the ability of sensation when chewing; for example, when biting into something too hard for the teeth, such as a chipped plate mixed into food, the teeth send a message to the brain that the object cannot be chewed, and chewing stops.
The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat.
Crop
A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds.
Certain insects may have a crop or enlarged esophagus.
Abomasum
Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated and chewed slowly to completely mix it with saliva and to reduce the particle size.
Fibre, especially cellulose and hemicellulose, is primarily broken down in these chambers (the reticulo-rumen) into the volatile fatty acids acetic acid, propionic acid and butyric acid by microbes (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.
The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine.
Specialised behaviours
Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation.
Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins).
Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten.
Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components.
In earthworms
An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop, where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically break down the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body.
Overview of vertebrate digestion
In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps:
Ingestion: placing food into the mouth (entry of food in the digestive system),
Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures,
Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and
Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation.
Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.).
The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digesta before excretion.
In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation.
Human digestion process
The human gastrointestinal tract is around 9 metres (30 feet) long. Food digestion physiology varies between individuals and upon other factors such as the characteristics of the food and the size of the meal, and the process of digestion normally takes between 24 and 72 hours.
Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by mechanical mastication and swallowed into the esophagus, from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin, which would damage the walls of the stomach, so mucus and bicarbonate are secreted for protection. In the stomach, further release of enzymes breaks the food down further, and this is combined with the churning action of the stomach. Proteins are mainly digested in the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. The larger part of digestion takes place in the small intestine, helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in the emulsification of fats and also activates lipases.
In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.
Neural and biochemical control mechanisms
Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase.
The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata, and the signal is then routed through the vagus nerve, triggering the release of acetylcholine. Gastric secretion at this phase rises to 40% of the maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal cell (acid-secreting) and G cell (gastrin-secreting) activity via D cell secretion of somatostatin.
The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, the presence of food in the stomach and a decrease in pH. Distention activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine.
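For orientation, the relation between pH and hydrogen-ion concentration, and the rough acid strength implied by the figures above, can be sketched as follows. This is a minimal illustration that assumes an idealized, fully dissociated strong acid and ignores the dilution and buffering by food that keep gastric pH in the quoted 1–3 range:

```python
import math

# pH is the negative base-10 logarithm of the hydrogen-ion concentration
# (in mol/L): pH = -log10([H+]).
def hydrogen_ion_concentration(ph: float) -> float:
    return 10 ** (-ph)

for ph in (1.0, 2.0, 3.0):
    print(f"pH {ph:.0f} -> [H+] ~ {hydrogen_ion_concentration(ph):.4f} mol/L")

# Idealized estimate for 0.5% w/v HCl (5 g per litre, molar mass ~36.46 g/mol).
molarity = 5.0 / 36.46
print(f"0.5% w/v HCl ~ {molarity:.3f} mol/L -> pH ~ {-math.log10(molarity):.2f}")
```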
The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. The enterogastric reflex inhibits the vagal nuclei, activating sympathetic fibers that cause the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
Breakdown into nutrients
Protein digestion
Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides.
Fat digestion
Digestion of some fats can begin in the mouth, where lingual lipase breaks down some short chain lipids into diglycerides. However, fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver, which helps in the emulsification of fats for absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids, mono- and di-glycerides, but no glycerol.
Carbohydrate digestion
In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in the simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine.
Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.
Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine.
DNA and RNA digestion
DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas.
Non-destructive digestion
Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes.
After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood.
Digestive hormones
There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. Connections to metabolic control (largely the glucose-insulin system) have been uncovered.
Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in the stomach. The secretion is inhibited by low pH.
Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme.
Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme.
Gastric inhibitory peptide (GIP) – is in the duodenum and decreases the stomach churning in turn slowing the emptying in the stomach. Another function is to induce insulin secretion.
Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin.
Significance of pH
Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment.
The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens.
In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.
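In simplified form, and assuming the neutralizing base is delivered mainly as sodium bicarbonate in pancreatic juice, the neutralization in the duodenum can be written as:

$\mathrm{HCl + NaHCO_3 \rightarrow NaCl + H_2O + CO_2}$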
See also
Digestive system of gastropods
Digestive system of humpback whales
Evolution of the mammalian digestive system
Discovery and development of proton pump inhibitors
Erepsin
Gastroesophageal reflux disease
References
External links
Human Physiology – Digestion
NIH guide to digestive system
The Digestive System
How does the Digestive System Work?
Digestive system
Metabolism | 0.765635 | 0.997311 | 0.763576 |
Elution | In analytical and organic chemistry, elution is the process of extracting one material from another by washing with a solvent: washing of loaded ion-exchange resins to remove captured ions, or eluting proteins or other biopolymers from a gel electrophoresis or chromatography column.
In a liquid chromatography experiment, for example, an analyte is generally adsorbed by ("bound to") an adsorbent in a liquid chromatography column. The adsorbent, a solid phase called the "stationary phase", is a powder coated onto a solid support. Based on its composition, the adsorbent can have varying affinities to "hold onto" other molecules, forming a thin film on the surface of its particles. Elution then is the process of removing analytes from the adsorbent by running a solvent, called an "eluent", past the adsorbent–analyte complex. As the solvent molecules "elute", or travel down through the chromatography column, they can either pass by the adsorbent–analyte complex or displace the analyte by binding to the adsorbent in its place. After the solvent molecules displace the analyte, the analyte can be carried out of the column for analysis. As the mobile phase leaves the column it is called the "eluate", and it typically flows into a detector or is collected by a fraction collector for compositional analysis.
Predicting and controlling the order of elution is a key aspect of column chromatographic and column electrophoretic methods.
Eluotropic series
An eluotropic series is a listing of solvents in order of eluting power for a given adsorbent. The "eluting power" of a solvent is largely a measure of how well the solvent can "pull" an analyte off the adsorbent to which it is attached. This often happens when the eluent adsorbs onto the stationary phase, displacing the analyte. Such series are useful for determining the solvents needed for chromatography of chemical compounds. Normally such a series progresses from non-polar solvents, such as n-hexane, to polar solvents such as methanol or water. The order of solvents in an eluotropic series depends both on the stationary phase and on the compound used to determine the order.
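As a rough illustration, such an ordering for a polar stationary phase like silica might be encoded as below. The exact order and the solvents included vary with the adsorbent, so this list is an assumption for demonstration only:

```python
# Approximate eluotropic series for a polar adsorbent, weakest to strongest
# eluting power (illustrative ordering; adsorbent-dependent in practice).
ELUOTROPIC_SERIES = [
    "n-hexane",
    "toluene",
    "dichloromethane",
    "ethyl acetate",
    "acetone",
    "methanol",
    "water",
]

def stronger_eluent(current: str) -> str:
    """Suggest the next, more strongly eluting solvent in the series."""
    i = ELUOTROPIC_SERIES.index(current)
    return ELUOTROPIC_SERIES[min(i + 1, len(ELUOTROPIC_SERIES) - 1)]

print(stronger_eluent("ethyl acetate"))  # -> "acetone"
```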
Eluent
The eluent or eluant is the "carrier" portion of the mobile phase. It moves the analytes through the chromatograph. In liquid chromatography, the eluent is the liquid solvent; in gas chromatography, it is the carrier gas.
Eluate
The eluate contains the analyte material that emerges from the chromatograph. It specifically includes both the analytes and coeluting solutes passing through the column, while the eluent is only the carrier.
Elution time and elution volume
The "elution time" of a solute is the time between the start of the separation (the time at which the solute enters the column) and the time at which the solute elutes. In the same way, the elution volume is the volume of eluent required to cause elution. Under standard conditions for a known mix of solutes in a certain technique, the elution volume may be enough information to identify solutes. For instance, a mixture of amino acids may be separated by ion-exchange chromatography. Under a particular set of conditions, the amino acids will elute in the same order and at the same elution volume.
Antibody elution
Antibody elution is the process of removing antibodies that are attached to their targets, such as the surface of red blood cells. Techniques include using heat, a freeze-thaw cycle, ultrasound, acids or organic solvents. No single method is best in all situations.
See also
Chromatography
Desorption
Electroelution
Gradient elution in high performance liquid chromatography
Leaching
References
External links
Chemistry glossary
Eluotropic series
Analytical chemistry
Chromatography | 0.772055 | 0.988999 | 0.763561 |
Energy transformation | Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
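For a concrete, idealized example of this near-lossless exchange, the speed gained by an object falling freely in a vacuum follows from equating its lost potential energy mgh with its gained kinetic energy ½mv². The sketch below assumes an arbitrary drop height chosen for illustration:

```python
import math

G = 9.81  # standard gravitational acceleration, m/s^2

def impact_speed(drop_height_m: float) -> float:
    """Speed after free fall in vacuum, from m*g*h = (1/2)*m*v^2 (mass cancels)."""
    return math.sqrt(2 * G * drop_height_m)

# Hypothetical 20 m drop: essentially all potential energy becomes kinetic energy.
print(f"{impact_speed(20.0):.1f} m/s")  # ~19.8 m/s
```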
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states, and so is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
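The familiar quantitative expression of this limit is the Carnot bound on the efficiency of any heat engine operating between a hot reservoir at temperature T_h and a cold reservoir at T_c (both in kelvin); the temperatures below are purely illustrative:

$$\eta_{\max} = 1 - \frac{T_c}{T_h}, \qquad \text{e.g. } T_h = 800\ \mathrm{K},\ T_c = 300\ \mathrm{K} \;\Rightarrow\; \eta_{\max} = 1 - \tfrac{300}{800} = 0.625.$$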
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
Release of energy from gravitational potential
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.
On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
Release of energy from radioactive potential
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
Release of energy from hydrogen fusion potential
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
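The electrical power recoverable at such a dam can be estimated from the rate at which gravitational potential energy is released. The sketch below uses a made-up flow rate, head, and overall efficiency chosen only for illustration:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_watts(flow_m3_per_s: float, head_m: float, efficiency: float) -> float:
    """Electrical power P = rho * g * Q * h * eta for a hydroelectric turbine."""
    return RHO_WATER * G * flow_m3_per_s * head_m * efficiency

# Hypothetical plant: 100 m^3/s through a 50 m head at 90% overall efficiency.
print(f"{hydro_power_watts(100.0, 50.0, 0.90) / 1e6:.1f} MW")  # ~44.1 MW
```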
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples
Examples of sets of energy conversions in machines
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
Kinetic energy of steam converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
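Because the stages operate in series, the overall fuel-to-electricity efficiency is roughly the product of the stage efficiencies. The stage values below are assumed purely for illustration and are not measurements of any particular plant:

```python
from math import prod

# Assumed stage efficiencies for a coal-fired plant (illustrative only):
stage_efficiencies = {
    "combustion / boiler": 0.90,    # chemical -> thermal (flue gas to steam)
    "steam cycle / turbine": 0.45,  # thermal -> mechanical (thermodynamically limited)
    "generator": 0.985,             # mechanical -> electrical
}

overall = prod(stage_efficiencies.values())
print(f"overall efficiency ~ {overall:.0%}")  # ~40%, below the ~50% quoted for the best gas-fired plants
```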
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas converted to the linear piston movement
Linear piston movement converted to rotary crankshaft movement
Rotary crankshaft movement passed into transmission assembly
Rotary movement passed out of transmission assembly
Rotary movement passed through a differential
Rotary movement passed out of differential to drive wheels
Rotary movement of drive wheels converted to linear motion of the vehicle
Other energy conversions
There are many different machines and transducers that convert one energy form into another. A short list of examples follows:
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (electricity) (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electric energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat → electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
See also
Chaos theory
Conservation law
Conservation of energy
Conservation of mass
Energy accounting
Energy quality
Groundwater energy balance
Laws of thermodynamics
Noether's theorem
Ocean thermal energy conversion
Thermodynamic equilibrium
Thermoeconomics
Uncertainty principle
References
Further reading
Energy Transfer and Transformation | Core knowledge science
Energy (physics) | 0.766607 | 0.996015 | 0.763552 |
Reading comprehension | Reading comprehension is the ability to process written text, understand its meaning, and to integrate with what the reader already knows. Reading comprehension relies on two abilities that are connected to each other: word reading and language comprehension. Comprehension specifically is a "creative, multifaceted process" that is dependent upon four language skills: phonology, syntax, semantics, and pragmatics.
Some of the fundamental skills required in efficient reading comprehension are the ability to:
know the meaning of words,
understand the meaning of a word from a discourse context,
follow the organization of a passage and to identify antecedents and references in it,
draw inferences from a passage about its contents,
identify the main thought of a passage,
ask questions about the text,
answer questions asked in a passage,
visualize the text,
recall prior knowledge connected to text,
recognize confusion or attention problems,
recognize the literary devices or propositional structures used in a passage and determine its tone,
understand the situational mood (agents, objects, temporal and spatial reference points, causal and intentional inflections, etc.) conveyed for assertions, questioning, commanding, refraining, etc., and
determine the writer's purpose, intent, and point of view, and draw inferences about the writer (discourse-semantics).
Comprehension skills that can be applied as well as taught to all reading situations include:
Summarizing
Sequencing
Inferencing
Comparing and contrasting
Drawing conclusions
Self-questioning
Problem-solving
Relating background knowledge
Distinguishing between fact and opinion
Finding the main idea, important facts, and supporting details.
There are many reading strategies to use in improving reading comprehension and inferences; these include improving one's vocabulary, critical text analysis (intertextuality, actual events vs. narration of events, etc.), and practising deep reading.
The ability to comprehend text is influenced by the readers' skills and their ability to process information. If word recognition is difficult, students tend to use too much of their processing capacity to read individual words which interferes with their ability to comprehend what is read.
Overview
Some people learn comprehension skills through education or instruction and others learn through direct experiences. Proficient reading depends on the ability to recognize words quickly and effortlessly. It is also determined by an individual's cognitive development, which is "the construction of thought processes".
There are specific characteristics that determine how successfully an individual will comprehend text, including prior knowledge about the subject, well-developed language, and the ability to make inferences from methodical questioning and comprehension monitoring. Questions such as "Why is this important?" and "Do I need to read the entire text?" are examples of passage questioning.
Comprehension strategy instruction often involves initially aiding the students through social and imitation learning, wherein teachers explain genre styles and model both top-down and bottom-up strategies, and familiarize students with the required complexity of text comprehension. After this modeling stage, the second stage involves the gradual release of responsibility, wherein over time teachers give students individual responsibility for using the learned strategies independently, with remedial instruction as required; this helps with error management.
The final stage involves leading the students to a self-regulated learning state; with more and more practice and assessment, this leads to overlearning, and the learned skills become reflexive or "second nature". The teacher as reading instructor is a role model of a reader for students, demonstrating what it means to be an effective reader and the rewards of being one.
Reading comprehension levels
Reading comprehension involves two levels of processing, shallow (low-level) processing and deep (high-level) processing.
Deep processing involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words. Shallow processing involves structural and phonemic recognition, the processing of sentence and word structure, i.e. first-order logic, and their associated sounds. This theory was first identified by Fergus I. M. Craik and Robert S. Lockhart.
Comprehension levels are observed through neuroimaging techniques like functional magnetic resonance imaging (fMRI). fMRI is used to determine the specific neural pathways of activation across two conditions: narrative-level comprehension, and sentence-level comprehension. Images showed that there was less brain region activation during sentence-level comprehension, suggesting a shared reliance with comprehension pathways. The scans also showed an enhanced temporal activation during narrative levels tests, indicating this approach activates situation and spatial processing.
In general, neuroimaging studies have found that reading involves three overlapping neural systems: networks active in visual, orthography-phonology (angular gyrus), and semantic functions (anterior temporal lobe with Broca's and Wernicke's areas). However, these neural networks are not discrete, meaning these areas have several other functions as well. The Broca's area involved in executive functions helps the reader to vary depth of reading comprehension and textual engagement in accordance with reading goals.
The role of vocabulary
Reading comprehension and vocabulary are inextricably linked together. The ability to decode or identify and pronounce words is self-evidently important, but knowing what the words mean has a major and direct effect on knowing what any specific passage means while skimming reading material. It has been shown that students with a smaller vocabulary than other students comprehend less of what they read. It has also been suggested that, to improve comprehension, it is good practice to work on word groups and complex vocabulary, such as homonyms and other words with multiple meanings, and words with figurative meanings such as idioms, similes, collocations and metaphors.
Andrew Biemiller argues that teachers should give out topic-related words and phrases before reading a book to students; such teaching should include topic-related word groups, synonyms of words, and their meaning within the context. He further says teachers should familiarize students with sentence structures in which these words commonly occur. According to Biemiller, this intensive approach gives students opportunities to explore the topic beyond its discourse – freedom of conceptual expansion. However, there is no evidence to suggest the primacy of this approach. Incidental morphemic analysis of words – prefixes, suffixes and roots – has also been considered as a way to improve understanding of vocabulary, though it has proved to be an unreliable strategy for improving comprehension and is no longer used to teach students.
Vocabulary is important as it is what connects a reader to the text, while helping develop background knowledge, their own ideas, communicating, and learning new concepts. Vocabulary has been described as "the glue that holds stories, ideas, and content together...making comprehension accessible". This greatly reflects the important role that vocabulary plays. Especially when studying various pieces of literature, it is important to have this background vocabulary, otherwise readers will become lost rather quickly. Because of this, teachers focus a great deal of attention to vocabulary programs and implementing them into their weekly lesson plans.
History
Initially most comprehension teaching was based on imparting selected techniques for each genre that, when taken together, would allow students to be strategic readers. However, from the 1930s, testing of the various methods never seemed to win support in empirical research. One such strategy for improving reading comprehension is the technique called SQ3R introduced by Francis Pleasant Robinson in his 1946 book Effective Study.
Between 1969 and 2000, a number of "strategies" were devised for teaching students to employ self-guided methods for improving reading comprehension. In 1969 Anthony V. Manzo designed, and found empirical support for, ReQuest, or the Reciprocal Questioning Procedure, a traditional teacher-centered approach notable for its sharing of "cognitive secrets". It was the first method to convert a fundamental theory such as social learning into teaching methods through the use of cognitive modeling between teachers and students.
Since the turn of the 20th century, comprehension lessons usually consist of students answering teacher's questions or writing responses to questions of their own, or from prompts of the teacher. This detached whole group version only helped students individually to respond to portions of the text (content area reading), and improve their writing skills. In the last quarter of the 20th century, evidence accumulated that academic reading test methods were more successful in assessing rather than imparting comprehension or giving a realistic insight. Instead of using the prior response registering method, research studies have concluded that an effective way to teach comprehension is to teach novice readers a bank of "practical reading strategies" or tools to interpret and analyze various categories and styles of text.
Common Core State Standards (CCSS) have been implemented in hopes that students test scores would improve. Some of the goals of CCSS are directly related to students and their reading comprehension skills, with them being concerned with students learning and noticing key ideas and details, considering the structure of the text, looking at how the ideas are integrated, and reading texts with varying difficulties and complexity.
Reading strategies
There are a variety of strategies used to teach reading, and strategies are key to helping with reading comprehension. They vary according to the challenges involved, such as new concepts, unfamiliar vocabulary, and long and complex sentences. Trying to deal with all of these challenges at the same time may be unrealistic. Strategies should also fit the ability, aptitude and age level of the learner. Some of the strategies teachers use are: reading aloud, group work, and more reading exercises.
Reciprocal teaching
In the 1980s, Annemarie Sullivan Palincsar and Ann L. Brown developed a technique called reciprocal teaching that taught students to predict, summarize, clarify, and ask questions for sections of a text. The use of strategies like summarizing after each paragraph has come to be seen as effective for building students' comprehension. The idea is that students will develop stronger reading comprehension skills on their own if the teacher gives them explicit mental tools for unpacking text.
Instructional conversations
"Instructional conversations", or comprehension through discussion, create higher-level thinking opportunities for students by promoting critical and aesthetic thinking about the text. According to Vivian Thayer, class discussions help students to generate ideas and new questions. (Goldenberg, p. 317).
Dr. Neil Postman has said, "All our knowledge results from questions, which is another way of saying that question-asking is our most important intellectual tool" (Response to Intervention). There are several types of questions that a teacher should focus on: remembering, testing understanding, applying or solving, inviting synthesis or creating, and evaluating or judging. Teachers should model these types of questions through "think-alouds" before, during, and after reading a text. When a student can relate a passage to an experience, another book, or other facts about the world, they are "making a connection". Making connections helps students understand the author's purpose and the fiction or non-fiction story.
Text factors
There are factors that, once discerned, make it easier for the reader to understand the written text. One such factor is the genre, like folktales, historical fiction, biographies or poetry. Each genre has its own characteristics for text structure that, once understood, help the reader comprehend it. A story is composed of a plot, characters, setting, point of view, and theme. Informational books provide real-world knowledge for students and have unique features such as: headings, maps, vocabulary, and an index. Poems are written in different forms and the most commonly used are: rhymed verse, haikus, free verse, and narratives. Poetry uses devices such as: alliteration, repetition, rhyme, metaphors, and similes. "When children are familiar with genres, organizational patterns, and text features in books they're reading, they're better able to create those text factors in their own writing." Another factor is arranging the text according to perceptual span, with a text display suited to the age level of the reader.
Non-verbal imagery
Non-verbal imagery refers to media that utilize schemata to make planned or unplanned connections, commonly used within a context such as a passage, an experience, or one's imagination. Notable examples are emojis, emoticons, and cropped and uncropped images, which are used to elicit humor and support comprehension.
Visualization
Visualization is a "mental image" created in a person's mind while reading text. This "brings words to life" and helps improve reading comprehension. Asking sensory questions will help students become better visualizers.
Students can practice visualizing before seeing the picture of what they are reading by imagining what they "see, hear, smell, taste, or feel" when they are reading a page of a picture book aloud. They can share their visualizations, then check their level of detail against the illustrations.
Partner reading
Partner reading is a strategy created for reading pairs. The teacher chooses two appropriate books for the students to read. First, the pupils and their partners must read their own book. Once they have completed this, they are given the opportunity to write down their own comprehension questions for their partner. The students swap books, read them out loud to one another and ask one another questions about the book they have read.
There are different levels of this strategy:
1) Lower-level readers, who need extra help recording the strategies.
2) Average readers, who still need some help.
3) Good readers, who require no help.
Students at a very good level are a few years ahead of the other students.
This strategy:
Provides a model of fluent reading and helps students learn decoding skills by offering positive feedback.
Provides direct opportunities for a teacher to circulate in the class, observe students, and offer individual remediation.
Multiple reading strategies
There are a wide range of reading strategies suggested by reading programs and educators. Effective reading strategies may differ for second language learners, as opposed to native speakers. The National Reading Panel identified positive effects only for a subset, particularly summarizing, asking questions, answering questions, comprehension monitoring, graphic organizers, and cooperative learning. The Panel also emphasized that a combination of strategies, as used in Reciprocal Teaching, can be effective. The use of effective comprehension strategies that provide specific instructions for developing and retaining comprehension skills, with intermittent feedback, has been found to improve reading comprehension across all ages, specifically those affected by mental disabilities.
Reading different types of texts requires the use of different reading strategies and approaches. Making reading an active, observable process can be very beneficial to struggling readers. A good reader interacts with the text in order to develop an understanding of the information before them. Some good reader strategies are predicting, connecting, inferring, summarizing, analyzing and critiquing. There are many resources and activities educators and instructors of reading can use to help with reading strategies in specific content areas and disciplines. Some examples are graphic organizers, talking to the text, anticipation guides, double entry journals, interactive reading and note taking guides, chunking, and summarizing.
The use of effective comprehension strategies is highly important when learning to improve reading comprehension. These strategies provide specific instructions for developing and retaining comprehension skills across all ages. Applying methods to attain an overt phonemic awareness with intermittent practice has been found to improve reading in early ages, specifically those affected by mental disabilities.
The importance of interest
A common finding among researchers is the importance of readers, and specifically students, being interested in what they are reading. It has been reported by students that they are more likely to finish books if they are the ones that choose them. They are also more likely to remember what they read if they were interested, as interest causes them to pay attention to the minute details.
Reading strategies
There are various reading strategies that help readers recognize what they are learning, which allows them to further understand themselves as readers and to recognize what information they have comprehended. These strategies also activate the approaches that good readers use when reading and understanding a text.
Think-Alouds
When reading a passage, it is good to vocalize what one is reading, as well as the mental processes occurring while reading. This can take many different forms, a few being: asking oneself questions about the reading or the text, making connections with prior knowledge or previously read texts, noticing when one struggles, and rereading where needed. These tasks help readers think about their reading and whether it is fully understood, which helps them notice what changes or tactics might need to be considered.
Know, Want to know, Learned
Know, Want to know, and Learned (KWL) is often used by teachers and their students, but it is a great tactic for all readers when considering their own knowledge. The reader goes through the knowledge that they already have, thinks about what they want to know or the knowledge they want to gain, and finally thinks about what they have learnt after reading. This allows readers to reflect on the prior knowledge they have, and also to recognize what knowledge they have gained and comprehended from their reading.
Comprehension strategies
Research studies on reading and comprehension have shown that highly proficient, effective readers utilize a number of different strategies to comprehend various types of texts, strategies that can also be used by less proficient readers in order to improve their comprehension. These include:
Making Inferences: In everyday terms we refer to this as "reading between the lines". It involves connecting various parts of texts that are not directly linked in order to form a sensible conclusion. A form of assumption, the reader speculates what connections lie within the texts. They also make predictions about what might occur next.
Planning and Monitoring: This strategy centers around the reader's mental awareness and their ability to control their comprehension by way of awareness. By previewing text (via outlines, table of contents, etc.) one can establish a goal for reading: "what do I need to get out of this"? Readers use context clues and other evaluation strategies to clarify texts and ideas, and thus monitoring their level of understanding.
Asking Questions: To solidify one's understanding of passages of texts, readers inquire and develop their own opinion of the author's writing, character motivations, relationships, etc. This strategy involves allowing oneself to be completely objective in order to find various meanings within the text.
Self-Monitoring: Asking oneself questions about reading strategies, whether they are getting confused or having trouble paying attention.
Determining Importance: Pinpointing the important ideas and messages within the text. Readers are taught to identify direct and indirect ideas and to summarize the relevance of each.
Visualizing: With this sensory-driven strategy, readers form mental and visual images of the contents of text. Being able to connect visually allows for a better understanding of the text through emotional responses.
Synthesizing: This method involves marrying multiple ideas from various texts in order to draw conclusions and make comparisons across different texts; with the reader's goal being to understand how they all fit together.
Making Connections: A cognitive approach also referred to as "reading beyond the lines", which involves:
(A) finding a personal connection to reading, such as personal experience, previously read texts, etc. to help establish a deeper understanding of the context of the text, or (B) thinking about implications that have no immediate connection with the theme of the text.
Assessment
There are informal and formal assessments to monitor an individual's comprehension ability and use of comprehension strategies. Informal assessments are generally conducted through observation and the use of tools, like story boards, word sorts, and interactive writing. Many teachers use formative assessments to determine whether a student has mastered the content of the lesson. Formative assessments can be verbal, as in a "Think-Pair-Share" or "Partner Share", or can take forms such as a "ticket out the door" or a digital summarizer. Formal assessments are district or state assessments that evaluate all students on important skills and concepts. Summative assessments are typically given at the end of a unit to measure a student's learning.
Running records
A popular assessment undertaken in numerous primary schools around the world is the running record. Running records are a helpful tool in regard to reading comprehension. The tool assists teachers in analyzing specific patterns in student behaviors and planning appropriate instruction. By conducting running records, teachers are given an overview of students' reading abilities and learning over a period of time.
In order for teachers to conduct a running record properly, they must sit beside a student and make sure that the environment is as relaxed as possible so the student does not feel pressured or intimidated. It is best if the running record assessment is conducted during reading, to avoid distractions. Another alternative is asking an education assistant to conduct the running record in a separate room whilst you teach or supervise the class. Quietly observe the student's reading and record during this time. There is a specific code for recording which most teachers understand. Once the student has finished reading, ask them to retell the story as best as they can. After the completion of this, ask them comprehension questions to test them on their understanding of the book. At the end of the assessment, add up the running record score (a simple scoring sketch is given after the list of steps below) and file the assessment sheet away. After the completion of the running record assessment, plan strategies that will improve the students' ability to read and understand the text.
Overview of the steps taken when conducting a Running Record assessment:
Select the text
Introduce the text
Take a running record
Ask for retelling of the story
Ask comprehension questions
Check fluency
Analyze the record
Plan strategies to improve the students' reading/understanding ability
File results away.
Difficult or complex content
Reading difficult texts
Some texts, like those in philosophy, literature or scientific research, may appear more difficult to read because of the prior knowledge they assume, the tradition from which they come, or their tone, such as criticizing or parodying. The philosopher Jacques Derrida explained his opinion about complicated texts: "In order to unfold what is implicit in so many discourses, one would have each time to make a pedagogical outlay that is just not reasonable to expect from every book. Here the responsibility has to be shared out, mediated; the reading has to do its work and the work has to make its reader." Other philosophers, however, believe that a writer who has something to say should be able to make the message readable to a wide audience.
Hyperlinks
Embedded hyperlinks in documents or Internet pages have been found to make different demands on the reader than traditional text. Authors such as Nicholas Carr and psychologists such as Maryanne Wolf contend that the internet may have a negative impact on attention and reading comprehension. Some studies report increased demands of reading hyperlinked text in terms of cognitive load, or the amount of information actively maintained in one's mind (also see working memory). One study showed that going from about 5 hyperlinks per page to about 11 per page reduced college students' understanding (assessed by multiple choice tests) of articles about alternative energy. This can be attributed to the decision-making process (deciding whether to click on it) required by each hyperlink, which may reduce comprehension of surrounding text.
On the other hand, other studies have shown that if a short summary of the link's content is provided when the mouse pointer hovers over it, then comprehension of the text is improved. "Navigation hints" about which links are most relevant improved comprehension. Finally, the background knowledge of the reader can partially determine the effect hyperlinks have on comprehension. In a study of reading comprehension with subjects who were familiar or unfamiliar with art history, texts which were hyperlinked to one another hierarchically were easier for novices to understand than texts which were hyperlinked semantically. In contrast, those already familiar with the topic understood the content equally well with both types of organization.
In interpreting these results, it may be useful to note that the studies mentioned were all performed in closed content environments, not on the internet. That is, the texts used only linked to a predetermined set of other texts which was offline. Furthermore, the participants were explicitly instructed to read on a certain topic in a limited amount of time. Reading text on the internet may not have these constraints.
Professional development
The National Reading Panel noted that comprehension strategy instruction is difficult for many teachers as well as for students, particularly because they were not taught this way and because it is a demanding task. They suggested that professional development can increase teachers' and students' willingness to use reading strategies, but admitted that much remains to be done in this area.
The directed listening and thinking activity is a technique available to teachers to aid students in learning how to read and in reading comprehension, although it can be difficult for students who are new to it. There is often some debate when considering the relationship between reading fluency and reading comprehension; there is evidence of a direct correlation between the two, with fluency and comprehension together leading to better understanding of written material, across all ages. The National Assessment of Educational Progress assessed U.S. student performance in reading at grade 12 from both the public and private school populations and found that only 37 percent of students had proficient skills. The majority, 72 percent of the students, were at or above basic skills, and 28 percent of the students were below the basic level.
See also
Balanced literacy
Baseball Study
Directed listening and thinking activity
English as a second or foreign language
Fluency
Levels-of-processing
Phonics
Readability
Reading
Reading for special needs
Simple view of reading
SQ3R
Synthetic phonics
Whole language
Notes
References
Sources
Further reading
External links
Info, Tips, and Strategies for PTE Read Aloud, Express English Language Training Center
English Reading Comprehension Skills, Andrews University
SQ3R Reading Strategy And How to Apply It, ProductiveFish
Vocabulary Instruction and Reading comprehension – From the ERIC Clearinghouse on Reading English and Communication.
ReadWorks.org | The Solution to Reading Comprehension
Education in the United States
Learning to read
Comprehension | 0.765562 | 0.997357 | 0.763538 |
Saprotrophic nutrition | Saprotrophic nutrition or lysotrophic nutrition is a process of chemoheterotrophic extracellular digestion involved in the processing of decayed (dead or waste) organic matter. It occurs in saprotrophs, and is most often associated with fungi (for example Mucor) and with soil bacteria. Saprotrophic microscopic fungi are sometimes called saprobes.
Saprotrophic plants or bacterial flora are called saprophytes (sapro- 'rotten material' + -phyte 'plant'), although it is now believed that all plants previously thought to be saprotrophic are in fact parasites of microscopic fungi or of other plants. In fungi, the saprotrophic process is most often facilitated through the active transport of such materials through endocytosis within the internal mycelium and its constituent hyphae.
Various word roots relating to decayed matter (detritus, sapro-, lyso-), to eating and nutrition (-vore, -phage, -troph), and to plants or life forms (-phyte, -obe) produce various terms, such as detritivore, detritophage, saprotroph, saprophyte, saprophage, and saprobe; their meanings overlap, although technical distinctions (based on physiologic mechanisms) narrow the senses. For example, biologists can make usage distinctions based on macroscopic swallowing of detritus (as in earthworms) versus microscopic lysis of detritus (as with mushrooms).
Process
As matter decomposes within a medium in which a saprotroph is residing, the saprotroph breaks such matter down into its composites.
Proteins are broken down into their amino acid composites through the breaking of peptide bonds by proteases.
Lipids are broken down into fatty acids and glycerol by lipases.
Starch is broken down by amylases into simple disaccharides such as maltose.
Cellulose, a major component of plant cell walls and therefore a major constituent of decaying matter, is broken down into glucose by cellulases.
These products are re-absorbed into the hypha through the cell wall by endocytosis and passed on throughout the mycelium complex. This facilitates the passage of such materials throughout the organism and allows for growth and, if necessary, repair.
Conditions
In order for a saprotrophic organism to facilitate optimal growth and repair, favourable conditions and nutrients must be present. Optimal conditions refer to several conditions which optimise the growth of saprotrophic organisms, such as:
Presence of water: 80–90% of the mass of the fungi is water, and the fungi require excess water for absorption due to the evaporation of internally retained water.
Presence of oxygen: Very few saprotrophic organisms can endure anaerobic conditions as evidenced by their growth above media such as water or soil.
Neutral-acidic pH: Neutral or mildly acidic conditions, at or just below pH 7, are required.
Low-medium temperature: The majority of saprotrophic organisms require low to moderate temperatures for optimum growth.
The majority of nutrients taken in by such organisms must be able to provide carbon, proteins, vitamins and, in some cases, ions. Due to the carbon composition of the majority of organisms, dead and organic matter provide rich sources of disaccharides and polysaccharides such as maltose and starch, and of the monosaccharide glucose.
See also
Chemoautotrophic nutrition
Decomposers
Detritivore
Holozoic nutrition
Mycorrhizal fungi and soil carbon storage
Parasitic nutrition
Photoautotrophic nutrition
Saprotrophic bacteria
Wood-decay fungus
References
Further reading
Nutrition by type
Mycology
Dead wood | 0.767871 | 0.994279 | 0.763478 |
Psychodynamics | Psychodynamics, also known as psychodynamic psychology, in its broadest sense, is an approach to psychology that emphasizes systematic study of the psychological forces underlying human behavior, feelings, and emotions and how they might relate to early experience. It is especially interested in the dynamic relations between conscious motivation and unconscious motivation.
The term psychodynamics is also used to refer specifically to the psychoanalytical approach developed by Sigmund Freud (1856–1939) and his followers. Freud was inspired by the theory of thermodynamics and used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido or psi) in an organically complex brain.
There are four major schools of thought regarding psychological treatment: psychodynamic, cognitive-behavioral, biological, and humanistic treatment. In the treatment of psychological distress, psychodynamic psychotherapy tends to be a less intensive (once- or twice-weekly) modality than the classical Freudian psychoanalysis treatment (of 3–5 sessions per week). Psychodynamic therapies depend upon a theory of inner conflict, wherein repressed behaviours and emotions surface into the patient's consciousness; generally, one's conflict is unconscious.
Since the 1970s, psychodynamics has largely been abandoned as not fact-based; Freudian psychoanalysis has been criticized as pseudoscience.
Overview
In general, psychodynamics is the study of the interrelationship of various parts of the mind, personality, or psyche as they relate to mental, emotional, or motivational forces especially at the unconscious level. The mental forces involved in psychodynamics are often divided into two parts: (a) the interaction of the emotional and motivational forces that affect behavior and mental states, especially on a subconscious level; (b) inner forces affecting behavior: the study of the emotional and motivational forces that affect behavior and states of mind.
Freud proposed that psychological energy was constant (hence, emotional changes consisted only in displacements) and that it tended to rest (point attractor) through discharge (catharsis).
In mate selection psychology, psychodynamics is defined as the study of the forces, motives, and energy generated by the deepest of human needs.
In general, psychodynamics studies the transformations and exchanges of "psychic energy" within the personality. A focus in psychodynamics is the connection between the energetics of emotional states in the Id, ego and super-ego as they relate to early childhood developments and processes. At the heart of psychological processes, according to Freud, is the ego, which he envisions as battling with three forces: the id, the super-ego, and the outside world. The id is the unconscious reservoir of libido, the psychic energy that fuels instincts and psychic processes. The ego serves as the general manager of personality, making decisions regarding the pleasures that will be pursued at the id's demand, the person's safety requirements, and the moral dictates of the superego that will be followed. The superego refers to the repository of an individual's moral values, divided into the conscience – the internalization of a society's rules and regulations – and the ego-ideal – the internalization of one's goals. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behaviour or mental states in terms of innate emotional forces or processes.
History
Freud used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido) in an organically complex brain. The idea for this came from his first year adviser, Ernst von Brücke at the University of Vienna, who held the view that all living organisms, including humans, are basically energy-systems to which the principle of the conservation of energy applies. This principle states that "the total amount of energy in any given physical system is always constant, that energy quanta can be changed but not annihilated, and that consequently when energy is moved from one part of the system, it must reappear in another part." This principle is at the very root of Freud's ideas, whereby libido, which is primarily seen as sexual energy, is transformed into other behaviours. However, it is now clear that the term energy in physics means something quite different from the term energy in relation to mental functioning.
Psychodynamics was initially further developed by Carl Jung, Alfred Adler and Melanie Klein. By the mid-1940s and into the 1950s, the general application of the "psychodynamic theory" had been well established.
In his 1988 book Introduction to Psychodynamics – a New Synthesis, psychiatrist Mardi J. Horowitz states that his own interest and fascination with psychodynamics began during the 1950s, when he heard Ralph Greenson, a popular local psychoanalyst who spoke to the public on topics such as "People who Hate", speak on the radio at UCLA. In his radio discussion, according to Horowitz, he "vividly described neurotic behavior and unconscious mental processes and linked psychodynamics theory directly to everyday life."
In the 1950s, American psychiatrist Eric Berne built on Freud's psychodynamic model, particularly that of the "ego states", to develop a psychology of human interactions called transactional analysis, which, according to physician James R. Allen, is a cognitive-behavioral approach to treatment and "a very effective way of dealing with internal models of self and others as well as other psychodynamic issues".
Around the 1970s, a growing number of researchers began departing from the psychodynamics model and Freudian subconscious. Many felt that the evidence was over-reliant on imaginative discourse in therapy, and on patient reports of their state-of-mind. These subjective experiences are inaccessible to others. Philosopher of science Karl Popper argued that much of Freudianism was untestable and therefore not scientific. In 1975 literary critic Frederick Crews began a decades-long campaign against the scientific credibility of Freudianism. This culminated in Freud: The Making of an Illusion which aggregated years of criticism from many quarters. Medical schools and psychology departments no longer offer much training in psychodynamics, according to a 2007 survey. An Emory University psychology professor explained, “I don’t think psychoanalysis is going to survive unless there is more of an appreciation for empirical rigor and testing.”
Freudian analysis
According to American psychologist Calvin S. Hall, from his 1954 Primer in Freudian Psychology:
At the heart of psychological processes, according to Freud, is the ego, which he sees battling with three forces: the id, the super-ego, and the outside world. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behavior or mental states in terms of innate emotional forces or processes. In his writings about the "engines of human behavior", Freud used the German word Trieb, a word that can be translated into English as either instinct or drive.
In the 1930s, Freud's daughter Anna Freud began to apply Freud's psychodynamic theories of the "ego" to the study of parent-child attachment and especially deprivation and in doing so developed ego psychology.
Jungian analysis
At the turn of the 20th century, during these decisive years, a young Swiss psychiatrist named Carl Jung had been following Freud's writings and had sent him copies of his articles and his first book, the 1907 Psychology of Dementia Praecox, in which he upheld the Freudian psychodynamic viewpoint, although with some reservations. That year, Freud invited Jung to visit him in Vienna. The two men, it is said, were greatly attracted to each other, and they talked continuously for thirteen hours. This led to a professional relationship in which they corresponded on a weekly basis, for a period of six years.
Carl Jung's contributions in psychodynamic psychology include:
The psyche tends toward wholeness.
The self is composed of the ego, the personal unconscious, and the collective unconscious. The collective unconscious contains the archetypes, which manifest in ways particular to each individual.
Archetypes are composed of dynamic tensions and arise spontaneously in the individual and collective psyche. Archetypes are autonomous energies common to the human species. They give the psyche its dynamic properties and help organize it. Their effects can be seen in many forms and across cultures.
The Transcendent Function: The emergence of the third resolves the split between dynamic polar tensions within the archetypal structure.
The recognition of the spiritual dimension of the human psyche.
The role of images which spontaneously arise in the human psyche (images include the interconnection between affect, images, and instinct) to communicate the dynamic processes taking place in the personal and collective unconscious, images which can be used to help the ego move in the direction of psychic wholeness.
Recognition of the multiplicity of psyche and psychic life, that there are several organizing principles within the psyche, and that they are at times in conflict.
See also
Ernst Wilhelm Brücke
Yisrael Salanter
Cathexis
Object relations theory
Reaction formation
Robert Langs
References
Further reading
Brown, Junius Flagg & Menninger, Karl Augustus (1940). The Psychodynamics of Abnormal Behavior, 484 pages, McGraw-Hill Book Company, inc.
Weiss, Edoardo (1950). Principles of Psychodynamics, 268 pages, Grune & Stratton
Pearson Education (1970). The Psychodynamics of Patient Care Prentice Hall, 422 pgs. Stanford University: Higher Education Division.
Jean Laplanche et J.B. Pontalis (1974). The Language of Psycho-Analysis, Editeur: W. W. Norton & Company,
Shedler, Jonathan. "That was Then, This is Now: An Introduction to Contemporary Psychodynamic Therapy", PDF
PDM Task Force. (2006). Psychodynamic Diagnostic Manual. Silver Spring, MD. Alliance of Psychoanalytic Organizations.
Hutchinson, E.(ED.) (2017).Essentials of human behavior: Integrating person, environment, and the life course. Thousand Oaks, CA: Sage.
Freudian psychology
Psychoanalysis | 0.766171 | 0.996473 | 0.763469 |
Hormesis | Hormesis is a two-phased dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. Within the hormetic zone, the biological response to low-dose amounts of some stressors is generally favorable. An example is the breathing of oxygen, which is required in low amounts (in air) via respiration in living animals, but can be toxic in high amounts, even in a managed clinical setting.
In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors.
In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. In pharmacology, the hormetic zone is similar to the therapeutic window.
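The biphasic pattern described above is often summarised as an inverted-U or J-shaped curve around the untreated control. The short Python sketch below combines a saturating low-dose stimulation term with a sigmoidal high-dose inhibition term to reproduce that qualitative shape; every parameter value in it is an arbitrary assumption chosen for illustration, not a fitted model of any real agent.

```python
# Illustrative (not fitted) biphasic "hormetic" dose-response curve:
# a saturating low-dose stimulation term combined with a sigmoidal
# high-dose inhibition term. All parameter values are arbitrary assumptions.

def hormetic_response(dose, control=100.0, stim_max=20.0, stim_k=1.0,
                      tox_max=100.0, tox_k=50.0, hill=2.0):
    """Response relative to an untreated control of 100 (arbitrary units)."""
    stimulation = stim_max * dose / (stim_k + dose)                   # low-dose benefit
    inhibition = tox_max * dose**hill / (tox_k**hill + dose**hill)    # high-dose toxicity
    return control + stimulation - inhibition

if __name__ == "__main__":
    for d in [0, 0.5, 1, 5, 10, 50, 200]:
        print(f"dose {d:>5}: response {hormetic_response(d):6.1f}")
```

With these illustrative values, the printed responses rise slightly above the control value of 100 at low doses and fall well below it at high doses, mirroring the hormetic zone and toxicity zone described above.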
In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood.
Etymology
The term "hormesis" derives from Greek hórmēsis for "rapid motion, eagerness", itself from ancient Greek to excite. The same Greek root provides the word hormone. The term "hormetics" is used for the study of hormesis. The word hormesis was first reported in English in 1943.
History
A form of hormesis famous in antiquity was Mithridatism, the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac, polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance, the Swiss doctor Paracelsus said, "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison."
German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt, who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule. Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology, volume 33, pp. 517–541.
In 2004, Edward Calabrese evaluated the concept of hormesis. Over 600 substances show a U-shaped dose–response relationship; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]"
Examples
Carbon monoxide
Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter). The majority of endogenous carbon monoxide is produced by heme oxygenase; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism. In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent.
Regarding the hormetic curve graph:
Deficiency zone: an absence of carbon monoxide signaling has toxic implications
Hormetic zone / region of homeostasis: small amount of carbon monoxide has a positive effect:
essential as a neurotransmitter
beneficial as a pharmaceutical
Toxicity zone: excessive exposure results in carbon monoxide poisoning
Oxygen
Many organisms maintain a hormesis relationship with oxygen, which follows a hormetic curve similar to carbon monoxide:
Deficiency zone: hypoxia / asphyxia
Hormetic zone / region of homeostasis
Toxicity zone: oxidative stress
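Read as a partition of the dose axis, the three zones listed for carbon monoxide and oxygen can be sketched as a simple classifier. In the sketch below, the two threshold values are placeholders chosen for illustration only, not physiological data.

```python
# Minimal sketch of the three-zone reading of a hormetic dose axis used for the
# carbon monoxide and oxygen examples above. Threshold values are placeholders.

def hormetic_zone(dose, deficiency_below=1.0, toxicity_above=10.0):
    if dose < deficiency_below:
        return "deficiency zone"
    if dose > toxicity_above:
        return "toxicity zone"
    return "hormetic zone / region of homeostasis"

for d in (0.2, 5.0, 50.0):
    print(d, "->", hormetic_zone(d))
```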
Physical exercise
Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk.
Mitohormesis
The possible effect of small amounts of oxidative stress is under laboratory research. Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been regarded as unwanted byproducts of oxidative phosphorylation in mitochondria by the proponents of the free-radical theory of aging promoted by Denham Harman. The free-radical theory states that compounds inactivating ROS would lead to a reduction of oxidative stress and thereby produce an increase in lifespan, although this theory holds only in basic research. However, in more than 19 clinical trials, "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span."
Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene, vitamin A or vitamin E may increase disease prevalence in humans.
Alcohol
Alcohol is believed to be hormetic in preventing heart disease and stroke, although the benefits of light drinking may have been exaggerated. The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome, therefore whether benefits of alcohol are derived from the behavior of consuming alcoholic drinks or as a homeostasis factor in normal physiology via metabolites from commensal microbiota remains unclear.
In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans, a roundworm frequently used in biological studies, that were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. However, worms exposed to 0.005% did not develop normally (their development was arrested). The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet.
Methylmercury
In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury, a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville, stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection and that mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds.
Radiation
Ionizing radiation
Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average.
In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year and a subset of the population (1,000 people) received a total dose over 4,000 mSv over ten years. In the widely used linear no-threshold model used by regulatory bodies, the expected cancer deaths in this population would have been 302 with 70 caused by the extra ionizing radiation, with the remainder caused by natural background radiation. The observed cancer rate, though, was quite low at 7 cancer deaths when 232 would be predicted by the LNT model had they not been exposed to the radiation from the building materials. Ionizing radiation hormesis appears to be at work.
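For readers unfamiliar with how such LNT projections are formed, the sketch below shows their basic linear form: the predicted number of excess deaths is proportional to the collective dose. The 5%-per-sievert nominal risk coefficient and the example inputs are assumptions made here for illustration; they are not taken from, and do not reproduce, the detailed dosimetry behind the Taiwan figures quoted above.

```python
# Sketch of the form of a linear no-threshold (LNT) estimate: predicted excess
# cancer deaths scale linearly with collective dose. The 5%-per-sievert nominal
# risk coefficient and the inputs below are illustrative assumptions only.

def lnt_excess_deaths(n_people, mean_dose_sv, risk_per_sv=0.05):
    collective_dose = n_people * mean_dose_sv        # person-sieverts
    return collective_dose * risk_per_sv

print(lnt_excess_deaths(1_000, 0.2))   # -> 10.0 excess deaths under these assumptions
```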
Chemical and ionizing radiation combined
No experiment can be performed in perfect isolation. Thick lead shielding around a chemical dose experiment, to rule out the effects of ionizing radiation, is rarely built and rigorously controlled for in the laboratory, and certainly not in the field. Likewise, the same applies for ionizing radiation studies. Ionizing radiation is released when an unstable atom decays, creating two new substances and energy in the form of an electromagnetic wave. The resulting materials are then free to interact with any environmental elements, and the energy released can also act as a catalyst in further ionizing radiation interactions.
The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour.
Nucleotide excision repair
Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. The DNA damaging (genotoxic) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure.
Applications
Effects in aging
One of the areas where the concept of hormesis has been explored extensively with respect to its applicability is aging. Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists proposed that exposing cells and organisms to mild stress should result in the adaptive or hormetic response with various biological benefits. This idea has preliminary evidence showing that repetitive mild stress exposure may have anti-aging effects in laboratory models. Some mild stresses used for such studies on the application of hormesis in aging research and interventions are heat shock, irradiation, prooxidants, hypergravity, and food restriction. Such compounds that may modulate stress responses in cells have been termed "hormetins".
Controversy
Hormesis suggests that low doses of otherwise dangerous substances may have benefits. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulations of some well-known toxic substances in the US.
Radiation controversy
The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation. This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans.
Nonetheless, many countries including the Czech Republic, Germany, Austria, Poland, and the United States have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, that is, a beneficial impact of small doses of radiation on human health. Countries such as Germany and Austria have at the same time imposed very strict antinuclear regulations, which has been described as a radiophobic inconsistency.
The United States National Research Council (part of the National Academy of Sciences), the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses.
The United States–based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and that radiation protection authorities should continue to apply the LNT model for purposes of risk estimation.
A 2005 report commissioned by the French National Academy concluded that evidence for hormesis occurring at low doses is sufficient and LNT should be reconsidered as the methodology used to estimate risks from low-level sources of radiation, such as deep geological repositories for nuclear waste.
Policy consequences
Hormesis remains largely unknown to the public, and taking it into account would require a change in how policy assesses the exposure risk of small doses of a possible toxin.
See also
Calorie restriction
Michael Ristow
Petkau effect
Radiation hormesis
Stochastic resonance
Mithridatism
Antifragility
Xenohormesis
References
External links
International Dose-Response Society
Clinical pharmacology
Radiobiology
Toxicology
Health paradoxes | 0.767879 | 0.994232 | 0.76345 |
Regulation | Regulation is the management of complex systems according to a set of rules and trends. In systems theory, these types of rules exist in various fields of biology and society, but the term has slightly different meanings according to context. For example:
in government, typically regulation (or its plural) refers to the delegated legislation which is adopted to enforce primary legislation, including land-use regulation
in economy: regulatory economics
in finance: Financial regulation
in business, industry self-regulation occurs through self-regulatory organizations and trade associations which allow industries to set and enforce rules with less government involvement; and,
in biology, gene regulation and metabolic regulation allow living organisms to adapt to their environment and maintain homeostasis;
in psychology, self-regulation theory is the study of how individuals regulate their thoughts and behaviors to reach goals.
Forms
Regulation in the social, political, psychological, and economic domains can take many forms: legal restrictions promulgated by a government authority, contractual obligations (for example, contracts between insurers and their insureds), self-regulation in psychology, social regulation (e.g. norms), co-regulation, third-party regulation, certification, accreditation or market regulation.
State-mandated regulation is government intervention in the private market in an attempt to implement policy and produce outcomes which might not otherwise occur, ranging from consumer protection to faster growth or technological advancement.
The regulations may prescribe or proscribe conduct ("command-and-control" regulation), calibrate incentives ("incentive" regulation), or change preferences ("preferences shaping" regulation). Common examples of regulation include limits on environmental pollution, laws against child labor or other employment regulations, minimum wage laws, regulations requiring truthful labelling of the ingredients in food and drugs, food and drug safety regulations establishing minimum standards of testing and quality for what can be sold, and zoning and development approval regulations. Much less common are controls on market entry, or price regulation.
One critical question in regulation is whether the regulator or government has sufficient information to make ex-ante regulation more efficient than ex-post liability for harm and whether industry self-regulation might be preferable. The economics of imposing or removing regulations relating to markets is analysed in empirical legal studies, law and economics, political science, environmental science, health economics, and regulatory economics.
Power to regulate should include the power to enforce regulatory decisions. Monitoring is an important tool used by national regulatory authorities in carrying out the regulated activities.
In some countries (in particular the Scandinavian countries) industrial relations are to a very high degree regulated by the labour market parties themselves (self-regulation) in contrast to state regulation of minimum wages etc.
Measurement
Regulation can be assessed for different countries through various quantitative measures. The Global Indicators of Regulatory Governance by World Bank's Global Indicators Group scores 186 countries on transparency around proposed regulations, consultation on their content, the use of regulatory impact assessments and the access to enacted laws on a scale from 0 to 5. The V-Dem Democracy indices include the regulatory quality indicator. The QuantGov project at the Mercatus Center tracks the count of regulations by topic for United States, Canada, and Australia.
History
Regulation of businesses existed in the ancient early Egyptian, Indian, Greek, and Roman civilizations. Standardized weights and measures existed to an extent in the ancient world, and gold may have operated to some degree as an international currency. In China, a national currency system existed and paper currency was invented. Sophisticated law existed in Ancient Rome. In the European Early Middle Ages, law and standardization declined with the Roman Empire, but regulation existed in the form of norms, customs, and privileges; this regulation was aided by the unified Christian identity and a sense of honor regarding contracts.
Modern industrial regulation can be traced to the Railway Regulation Act 1844 in the United Kingdom, and succeeding Acts. Beginning in the late 19th and 20th centuries, much of regulation in the United States was administered and enforced by regulatory agencies which produced their own administrative law and procedures under the authority of statutes. Legislators created these agencies to require experts in the industry to focus their attention on the issue. At the federal level, one of the earliest institutions was the Interstate Commerce Commission which had its roots in earlier state-based regulatory commissions and agencies. Later agencies include the Federal Trade Commission, Securities and Exchange Commission, Civil Aeronautics Board, and various other institutions. These institutions vary from industry to industry and at the federal and state level. Individual agencies do not necessarily have clear life-cycles or patterns of behavior, and they are influenced heavily by their leadership and staff as well as the organic law creating the agency. In the 1930s, lawmakers believed that unregulated business often led to injustice and inefficiency; in the 1960s and 1970s, concern shifted to regulatory capture, which led to extremely detailed laws creating the United States Environmental Protection Agency and Occupational Safety and Health Administration.
See also
Regulatory economics
Regulatory state
Regulatory capture
Deregulation
References
External links
Centre on Regulation in Europe (CERRE)
New Perspectives on Regulation (2009) and Government and Markets: Toward a New Theory of Regulation (2009)
US/Canadian Regulatory Cooperation: Schmitz on Lessons from the European Union, Canadian Privy Council Office Commissioned Study
A Comparative Bibliography: Regulatory Competition on Corporate Law
Wikibooks
Legal and Regulatory Issues in the Information Economy
Lawrence A. Cunningham, A Prescription to Retire the Rhetoric of 'Principles-Based Systems' in Corporate Law, Securities Regulation and Accounting (2007)
Economics of regulation
Public policy | 0.768056 | 0.993999 | 0.763447 |
Plant taxonomy | Plant taxonomy is the science that finds, identifies, describes, classifies, and names plants. It is one of the main branches of taxonomy (the science that finds, describes, classifies, and names living things).
Plant taxonomy is closely allied to plant systematics, and there is no sharp boundary between the two. In practice, "plant systematics" involves relationships between plants and their evolution, especially at the higher levels, whereas "plant taxonomy" deals with the actual handling of plant specimens. The precise relationship between taxonomy and systematics, however, has changed along with the goals and methods employed.
Plant taxonomy is well known for being turbulent, and traditionally not having any close agreement on circumscription and placement of taxa. See the list of systems of plant taxonomy.
Background
Classification systems serve the purpose of grouping organisms by characteristics common to each group. Plants are distinguished from animals by various traits: they have cell walls made of cellulose, they commonly exhibit polyploidy, and they show sedentary growth. Whereas animals must eat organic molecules, plants are able to convert energy from light into organic energy by the process of photosynthesis. The basic unit of classification is the species, a group of individuals able to breed amongst themselves and bearing mutual resemblance; a broader classification is the genus. Several genera make up a family, and several families an order.
History of classification
The botanical term angiosperm, or flowering plant, comes from the Greek (; 'bottle, vessel') and (; 'seed'); in 1690, the term Angiospermae was coined by Paul Hermann, albeit in reference to only a small subset of the species that are known as angiosperms, today. Hermann's Angiospermae included only flowering plants possessing seeds enclosed in capsules, distinguished from his Gymnospermae, which were flowering plants with achenial or schizo-carpic fruits (the whole fruit, or each of its pieces, being here regarded as a seed and naked). The terms Angiospermae and Gymnospermae were used by Carl Linnaeus in the same sense, albeit with restricted application, in the names of the orders of his class Didynamia.
The terms angiosperms and gymnosperm fundamentally changed meaning in 1827, when Robert Brown determined the existence of truly-naked ovules in the Cycadeae and Coniferae. The term gymnosperm was, from then-on, applied to seed plants with naked ovules, and the term angiosperm to seed plants with enclosed ovules. However, for many years after Brown's discovery, the primary division of the seed plants was seen as between monocots and dicots, with gymnosperms as a small subset of the dicots.
In 1851, Hofmeister discovered the changes occurring in the embryo-sac of flowering plants, and determined the correct relationships of these to the Cryptogamia. This fixed the position of Gymnosperms as a class distinct from Dicotyledons, and the term Angiosperm then, gradually, came to be accepted as the suitable designation for the whole of the flowering plants (other than Gymnosperms), including the classes of Dicotyledons and Monocotyledons. This is the sense in which the term is used, today.
In most taxonomies, the flowering plants are treated as a coherent group; the most popular descriptive name has been Angiospermae, with Anthophyta (lit. 'flower-plants') a second choice (both unranked). The Wettstein system and Engler system treated them as a subdivision (Angiospermae). The Reveal system also treated them as a subdivision (Magnoliophytina), but later split it to Magnoliopsida, Liliopsida, and Rosopsida. The Takhtajan system and Cronquist system treat them as a division (Magnoliophyta). The Dahlgren system and Thorne system (1992) treat them as a class (Magnoliopsida). The APG system of 1998, and the later 2003 and 2009 revisions, treat the flowering plants as an unranked clade without a formal Latin name (angiosperms). A formal classification was published alongside the 2009 revision in which the flowering plants rank as a subclass (Magnoliidae).
The internal classification of this group has undergone considerable revision. The Cronquist system, proposed by Arthur Cronquist in 1968 and published in its full form in 1981, is still widely used but is no longer believed to accurately reflect phylogeny. A consensus about how the flowering plants should be arranged has recently begun to emerge through the work of the Angiosperm Phylogeny Group (APG), which published an influential reclassification of the angiosperms in 1998. Updates incorporating more recent research were published as the APG II system in 2003, the APG III system in 2009, and the APG IV system in 2016.
Traditionally, the flowering plants are divided into two groups,
Dicotyledoneae or Magnoliopsida
Monocotyledoneae or Liliopsida
to which the Cronquist system ascribes the classes Magnoliopsida (from "Magnoliaceae") and Liliopsida (from "Liliaceae"). Other descriptive names allowed by Article 16 of the ICBN include Dicotyledones or Dicotyledoneae, and Monocotyledones or Monocotyledoneae, which have a long history of use. In plain English, their members may be called "dicotyledons" ("dicots") and "monocotyledons" ("monocots"). The Latin behind these names refers to the observation that the dicots most often have two cotyledons, or embryonic leaves, within each seed. The monocots usually have only one, but the rule is not absolute either way. From a broad diagnostic point of view, the number of cotyledons is neither a particularly handy nor a reliable character.
Recent studies, as per the APG, show that the monocots form a monophyletic group (a clade), but that the dicots are paraphyletic; nevertheless, the majority of dicot species fall into a clade with the eudicots (or tricolpates), with most of the remaining going into another major clade with the magnoliids (containing about 9,000 species). The remainder includes a paraphyletic grouping of early-branching taxa known collectively as the basal angiosperms, plus the families Ceratophyllaceae and Chloranthaceae.
Plantae, the Plant Kingdom
The plant kingdom is divided into a hierarchy of ranked groups, from divisions down through classes, orders, and families to genera and species.
Identification, classification and description of plants
Three goals of plant taxonomy are the identification, classification and description of plants. The distinction between these three goals is important and often overlooked.
Plant identification is a determination of the identity of an unknown plant by comparison with previously collected specimens or with the aid of books or identification manuals. The process of identification connects the specimen with a published name. Once a plant specimen has been identified, its name and properties are known.
Plant classification is the placing of known plants into groups or categories to show some relationship. Scientific classification follows a system of rules that standardizes the results, and groups successive categories into a hierarchy. For example, the family to which the lilies belong is classified as follows:
Kingdom: Plantae
Division: Magnoliophyta
Class: Liliopsida
Order: Liliales
Family: Liliaceae
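Because a scientific classification is a strict hierarchy of successive ranks, the example above maps naturally onto an ordered data structure. The following Python sketch encodes the lily classification as listed; the dictionary representation is an illustrative choice made here, not a format prescribed by the ICN or used by any particular taxonomic database.

```python
# Minimal sketch of the ranked hierarchy listed above for the lily family.
# The dictionary representation is an illustrative choice, not a standard
# interchange format used by taxonomic databases.

lily_classification = {
    "Kingdom": "Plantae",
    "Division": "Magnoliophyta",
    "Class": "Liliopsida",
    "Order": "Liliales",
    "Family": "Liliaceae",
}

def rank_path(classification):
    """Join the ranks from the most inclusive group to the least inclusive."""
    return " > ".join(f"{rank}: {taxon}" for rank, taxon in classification.items())

print(rank_path(lily_classification))
# Kingdom: Plantae > Division: Magnoliophyta > Class: Liliopsida > Order: Liliales > Family: Liliaceae
```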
The classification of plants results in an organized system for the naming and cataloging of future specimens, and ideally reflects scientific ideas about inter-relationships between plants. The set of rules and recommendations for formal botanical nomenclature, which covers plants along with algae and fungi, is the International Code of Nomenclature for algae, fungi, and plants, abbreviated as the ICN.
Plant description is a formal description of a newly discovered species, usually in the form of a scientific paper using ICN guidelines. The names of these plants are then registered on the International Plant Names Index along with all other validly published names.
Classification systems
These include:
APG system (angiosperm phylogeny group)
APG II system (angiosperm phylogeny group II)
APG III system (angiosperm phylogeny group III)
APG IV system (angiosperm phylogeny group IV)
Bessey system (a system of plant taxonomy)
Cronquist system (taxonomic classification of flowering plants)
Melchior system
Online databases
Ecocrop
EPPO Code
GRIN
See Category: Online botany databases
See also
American Society of Plant Taxonomists
Biophysical environment
Botanical nomenclature
Citrus taxonomy
Environmental protection
Herbarium
History of plant systematics
International Association for Plant Taxonomy
Taxonomy of cultivated plants
References
Sources
External links
Plant systematics
Tracking Plant Taxonomy Updates discussion group on Facebook | 0.767864 | 0.994234 | 0.763437 |
Process modeling | The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model.
Overview
Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development.
The goals of a process model are to be:
Descriptive
Track what actually happens during a process
Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently.
Prescriptive
Define the desired processes and how they should/could/might be performed.
Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance.
Explanatory
Provide explanations about the rationale of processes.
Explore and evaluate the several possible courses of action based on rational arguments.
Establish an explicit link between processes and the requirements that the model needs to fulfill.
Pre-defines points at which data can be extracted for reporting purposes.
Purpose
From a theoretical point of view, the meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process modeling is aimed at providing guidance for method engineers and application developers.
The activity of modeling a business process usually predicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are typically needed to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture.
Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create greater capabilities in analyzing and planning a change. One real-world example is corporate mergers and acquisitions: understanding the processes in both companies in detail allows management to identify redundancies, resulting in a smoother merger.
Process modeling has always been a key aspect of business process reengineering, and continuous improvement approaches seen in Six Sigma.
Classification of process models
By coverage
There are five types of coverage where the term process model has been defined differently:
Activity-oriented: related set of activities conducted for the specific purpose of product definition; a set of partially ordered steps intended to reach a goal.
Product-oriented: series of activities that cause sensitive product transformations to reach the desired product.
Decision-oriented: set of related decisions conducted for the specific purpose of product definition.
Context-oriented: sequence of contexts causing successive product transformations under the influence of a decision taken in a context.
Strategy-oriented: allow building models representing multi-approach processes and plan different possible ways to elaborate the product based on the notion of intention and strategy.
By alignment
Processes can be of different kinds. These definitions "correspond to the various ways in which a process can be modelled".
Strategic processes
investigate alternative ways of doing a thing and eventually produce a plan for doing it
are often creative and require human co-operation; thus, alternative generation and selection from an alternative are very critical activities
Tactical processes
help in the achievement of a plan
are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement
Implementation processes
are the lowest level processes
are directly concerned with the details of the what and how of plan implementation
By granularity
Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand.
Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description, as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model, where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).
By flexibility
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.
Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.
Quality of methods
As the quality of process models is discussed here, there is a need to elaborate on the quality of modeling techniques as an important element of the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The discussion below therefore concentrates both on the quality of process modeling techniques and on the quality of process models, to clearly differentiate the two.
Various frameworks have been developed to help in understanding the quality of process modeling techniques. One example is the quality-based modeling evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures for making an objective assessment of these properties possible.
This framework also has the advantage of providing a uniform and formal description of the model elements within one or several model types using a single modeling technique.
In short, this makes possible the assessment of both the product quality and the process quality of modeling techniques with regard to a set of properties that have been defined beforehand.
Quality properties that relate to business process modeling techniques discussed in the literature are:
Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains.
Arbitrariness: the degree of freedom one has when modeling one and the same domain.
Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain.
Comprehensibility: the ease with which the way of working and way of modeling are understood by participants.
Coherence: the degree to which the individual sub models of a way of modeling constitute a whole.
Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling.
Efficiency: the degree to which the modeling process uses resources such as time and people.
Effectiveness: the degree to which the modeling process achieves its goal.
To assess the quality of the Q-ME framework, it has been used to illustrate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique.
It is stated that the evaluation of the Q-ME framework to the DEMO modeling techniques has revealed the shortcomings of Q-ME. One particular is that it does not include quantifiable metric to express the quality of business modeling technique which makes it hard to compare quality of different techniques in an overall rating.
There is also a systematic approach to quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Metamodelling techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technique level than on the individual model level.
Cardoso, Mendling, Neumann and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research done by Mendling et al., who argued that without using quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance costs and perhaps inefficient execution of the process in question.
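As an illustration of what such count-based and control-flow metrics compute, the following is a minimal Python sketch. It assumes a hypothetical, simplified representation of a process model as a dictionary mapping each node to its type and successor list; the node-type names are illustrative, while the control-flow-complexity weights follow Cardoso's definition (an XOR-split adds its fan-out, an OR-split adds 2^n - 1 for n outgoing branches, an AND-split adds 1).

```python
# Minimal sketch of count-based and control-flow complexity metrics for a
# process model, assuming a hypothetical representation:
# node -> (node type, list of successor nodes).

def size_metrics(model):
    """Simple count metrics: number of nodes and number of arcs."""
    nodes = len(model)
    arcs = sum(len(successors) for _, successors in model.values())
    return nodes, arcs

def control_flow_complexity(model):
    """Approximate control-flow complexity (CFC) of the model."""
    cfc = 0
    for kind, successors in model.values():
        n = len(successors)
        if kind == "xor-split":
            cfc += n              # XOR-split contributes its fan-out
        elif kind == "or-split":
            cfc += 2 ** n - 1     # OR-split contributes 2^n - 1
        elif kind == "and-split":
            cfc += 1              # AND-split contributes 1
    return cfc

example = {
    "start":   ("start-event", ["check"]),
    "check":   ("task",        ["decide"]),
    "decide":  ("xor-split",   ["approve", "reject"]),
    "approve": ("task",        ["end"]),
    "reject":  ("task",        ["end"]),
    "end":     ("end-event",   []),
}

print(size_metrics(example))             # (6, 6)
print(control_flow_complexity(example))  # 2, from the single XOR-split
```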
The quality of modeling technique is important in creating models that are of quality and contribute to the correctness and usefulness of models.
Quality of models
The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on.
An enormous amount of research has been done on the quality of models, but less attention has been paid to the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main categories of guidelines and frameworks in practice for this purpose: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.
Hommes, quoting Wang et al. (1994), notes that the main characteristics of model quality can be grouped into two categories, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling, and it is independent of the purpose for which the model is used.
Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity).
A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often considered.
A broader approach is to base the framework on semiotics rather than linguistics, as was done by Krogstie in the top-down quality framework known as SEQUAL. It defines several quality aspects based on the relationships between a model, knowledge externalisation, the domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests that have been carried out.
According to previous research done by Moody et al., using the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality were identified:
Syntactic quality: assesses the extent to which the model conforms to the grammar rules of the modeling language being used.
Semantic quality: whether the model accurately represents user requirements
Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process. That is, the model should enable its interpreters to make use of it to fulfil their needs.
From the research it was noticed that the quality framework was both easy to use and useful for evaluating the quality of process models; however, it had limitations regarding reliability and the identification of defects. These limitations led to a refinement of the framework through subsequent research done by Krogstie. The refined framework is the SEQUAL framework of Krogstie et al. (1995), refined further by Krogstie & Jørgensen (2002), which included three more quality aspects:
Physical quality: whether the externalized model is persistent and available for the audience to make sense of it.
Empirical quality: whether the model is modeled according to the established regulations regarding a given language.
Social quality: This regards the agreement between the stakeholders in the modeling domain.
Dimensions of Conceptual Quality framework
The modeling domain is the set of all statements that are relevant and correct for describing the problem domain. The language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. The model externalization is the conceptual representation of the problem domain.
It is defined as the set of statements about the problem domain that are actually made. The social actor interpretation and the technical actor interpretation are the sets of statements that the actors, both human model users and the tools that interact with the model, respectively 'think' the conceptual representation of the problem domain contains.
Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model.
In later work, Krogstie et al. stated that while the extension of the SEQUAL framework has fixed some of the limitations of the initial framework, other limitations remain.
In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.
Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. Their work discusses the quality framework in relation to active process models and suggests a revised framework based on this.
Further work by Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research.
The other framework in use is the Guidelines of Modeling (GoM), which is based on general accounting principles and includes six principles: correctness, clarity, relevance, comparability, economic efficiency and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems.
Comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model.
Relevance relates to the model and the situation being represented. Comparability involves the ability to compare models, that is, the semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed cost reductions and revenue increases arising from its use.
Since the purpose of organizations is in most cases the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires an accepted differentiation between diverse views within modeling.
Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional.
The two frameworks, SEQUAL and GoM, have the limitation that they cannot easily be used by people who are not competent in modeling. They provide major quality criteria but are not readily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models tries to bridge this gap for non-experts in modeling, but it is mostly theoretical and no systematic empirical tests have been carried out to support their use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models.
The results reveal that an increase in size of a model appears to reduce its quality and comprehensibility.
Further work by Mendling et al. investigates the connection between metrics and understanding. While some metrics are confirmed regarding their effect, personal factors of the modeler – such as competence – are also revealed as important for understanding the models.
The empirical surveys carried out so far still do not give clear guidelines or ways of evaluating the quality of process models, yet a clear set of guidelines is needed to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice.
Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically.
From this research, the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this labelling rule results in models that are better understood than those using alternative labelling styles.
From the earlier research on ways to evaluate process model quality, it has been seen that a process model's size, structure, modularity and the expertise of the modeler affect its overall comprehensibility.
Based on these findings, a set of guidelines was presented as the Seven Process Modeling Guidelines (7PMG). These guidelines recommend the verb-object labelling style, as well as limits on the number of elements in a model, the application of structured modeling, and the decomposition of a process model. The guidelines are as follows (a minimal sketch of how some of them might be checked automatically follows the list):
G1 Minimize the number of elements in a model
G2 Minimize the routing paths per element
G3 Use one start and one end event
G4 Model as structured as possible
G5 Avoid OR routing elements
G6 Use verb-object activity labels
G7 Decompose a model with more than 50 elements
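The following is a minimal sketch of how some of these guidelines (G1, G2, G3, G5 and G7) might be checked automatically. It reuses the hypothetical dictionary representation of a process model from the earlier sketch; the fan-out threshold is illustrative, and G4 and G6 are omitted because structuredness and label style require more than simple counting.

```python
# Minimal sketch of automated checks for a subset of the 7PMG guidelines,
# assuming the hypothetical representation node -> (node type, successor list).

def check_7pmg(model, max_elements=50, max_fanout=3):
    """Report violations of guidelines G1/G7, G2, G3 and G5 for a toy model."""
    findings = []
    if len(model) > max_elements:                              # G1 / G7
        findings.append(f"G1/G7: more than {max_elements} elements, consider decomposing")
    starts = [n for n, (kind, _) in model.items() if kind == "start-event"]
    ends = [n for n, (kind, _) in model.items() if kind == "end-event"]
    if len(starts) != 1 or len(ends) != 1:                     # G3
        findings.append("G3: expected exactly one start and one end event")
    for node, (kind, successors) in model.items():
        if len(successors) > max_fanout:                       # G2
            findings.append(f"G2: {node} has {len(successors)} outgoing paths")
        if kind in ("or-split", "or-join"):                    # G5
            findings.append(f"G5: OR routing element {node}")
    return findings
```

Running this on the toy example from the previous sketch returns an empty list, since that small model already satisfies the checked guidelines.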
7PMG nevertheless has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented.
It does suggest ways of organizing the structure of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out.
The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen on the one hand as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritization of the guidelines.
See also
Model selection
Process (science)
Process architecture
Process calculus
Process flow diagram
Process ontology
Process Specification Language
References
External links
Modeling processes regarding workflow patterns
American Productivity and Quality Center (APQC), a worldwide organization for process and performance improvement
The Application of Petri Nets to Workflow Management, W.M.P. van der Aalst, 1998.
Business process management
Systems engineering
Process theory | 0.783646 | 0.9742 | 0.763427 |
Agglutination (biology) | Agglutination is the clumping of particles. The word agglutination comes from the Latin agglutinare (glueing to).
Agglutination is a reaction in which particles (as red blood cells or bacteria) suspended in a liquid collect into clumps usually as a response to a specific antibody.
This occurs in biology in two main examples:
The clumping of cells such as bacteria or red blood cells in the presence of an antibody or complement. The antibody or other molecule binds multiple particles and joins them, creating a large complex. This increases the efficacy of microbial elimination by phagocytosis as large clumps of bacteria can be eliminated in one pass, versus the elimination of single microbial antigens.
When people are given blood transfusions of the wrong blood group, the antibodies react with the incorrectly transfused blood group and as a result, the erythrocytes clump up and stick together causing them to agglutinate. The coalescing of small particles that are suspended in a solution; these larger masses are then (usually) precipitated.
In immunohematology
Hemagglutination
Hemagglutination is the process by which red blood cells agglutinate, meaning clump or clog. The agglutinin involved in hemagglutination is called hemagglutinin. In cross-matching, donor red blood cells and the recipient's serum or plasma are incubated together. If agglutination occurs, this indicates that the donor and recipient blood types are incompatible.
When a person produces antibodies against their own red blood cells, as in cold agglutinin disease and other autoimmune conditions, the cells may agglutinate spontaneously. This is called autoagglutination and it can interfere with laboratory tests such as blood typing and the complete blood count.
Leukoagglutination
Leukoagglutination occurs when the particles involved are white blood cells.
An example is the PH-L form of phytohaemagglutinin.
In microbiology
Agglutination is commonly used as a method of identifying specific bacterial antigens and the identity of such bacteria, and therefore is an important technique in diagnosis.
History of discoveries
Two bacteriologists, Herbert Edward Durham (-1945) and Max von Gruber (1853–1927), discovered specific agglutination in 1896. The clumping became known as the Gruber-Durham reaction. Gruber introduced the term agglutinin (from the Latin) for any substance that caused agglutination of cells.
French physician Fernand Widal (1862–1929) put Gruber and Durham's discovery to practical use later in 1896, using the reaction as the basis for a test for typhoid fever. Widal found that blood serum from a typhoid carrier caused a culture of typhoid bacteria to clump, whereas serum from a typhoid-free person did not. This Widal test was the first example of serum diagnosis.
Austrian physician Karl Landsteiner found another important practical application of the agglutination reaction in 1900. Landsteiner's agglutination tests and his discovery of ABO blood groups were the start of the science of blood transfusion and serology, which has made transfusion possible and safer.
See also
Agglutination-PCR
Blocking antibody
Coagulation
Immune system
Macrophage
Mannan oligosaccharides (MOS)
References
Immunologic tests
Hematology | 0.768762 | 0.993059 | 0.763426 |
Hypothetico-deductive model | The hypothetico-deductive model or method is a proposed description of the scientific method. According to it, scientific inquiry proceeds by formulating a hypothesis in a form that can be falsifiable, using a test on observable data where the outcome is not yet known. A test outcome that could have and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. A test outcome that could have, but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.
Example
One example of an algorithmic statement of the hypothetico-deductive method is as follows:
1. Use your experience: Consider the problem and try to make sense of it. Gather data and look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture (hypothesis): When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce predictions from the hypothesis: if you assume 2 is true, what consequences follow?
4. Test (or experiment): Look for evidence (observations) that conflict with these predictions in order to disprove 2. It is a fallacy or error in one's reasoning to seek 3 directly as proof of 2. This formal fallacy is called affirming the consequent.
One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth.
Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2. (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong.")
Discussion
Additionally, as pointed out by Carl Hempel (1905–1997), this simple view of the scientific method is incomplete; a conjecture can also incorporate probabilities, e.g., the drug is effective about 70% of the time. Tests, in this case, must be repeated to substantiate the conjecture (in particular, the probabilities). In this and other cases, we can quantify a probability for our confidence in the conjecture itself and then apply a Bayesian analysis, with each experimental result shifting the probability either up or down. Bayes' theorem shows that the probability will never reach exactly 0 or 100% (no absolute certainty in either direction), but it can still get very close to either extreme. See also confirmation holism.
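As a worked illustration of this kind of Bayesian updating, the short Python sketch below applies Bayes' theorem after each test outcome. The likelihood values are illustrative assumptions rather than measured quantities.

```python
# Minimal sketch of Bayesian updating of confidence in a conjecture: each test
# result shifts the probability assigned to the conjecture up or down.
# The likelihoods (0.9, 0.5, 0.1) are illustrative assumptions.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(H | observation) from P(H) and the two likelihoods."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

confidence = 0.5                       # initial confidence in the conjecture
for outcome in ["pass", "pass", "fail", "pass"]:
    if outcome == "pass":              # prediction confirmed by the test
        confidence = update(confidence, 0.9, 0.5)
    else:                              # prediction contradicted by the test
        confidence = update(confidence, 0.1, 0.5)
    print(round(confidence, 3))
# Confidence rises with corroborations and drops with failures,
# but never reaches exactly 0 or 1.
```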
Qualification of corroborating evidence is sometimes raised as philosophically problematic. The raven paradox is a famous example. The hypothesis that 'all ravens are black' would appear to be corroborated by observations of only black ravens. However, 'all ravens are black' is logically equivalent to 'all non-black things are non-ravens' (this is the contrapositive form of the original implication). 'This is a green tree' is an observation of a non-black thing that is a non-raven and therefore corroborates 'all non-black things are non-ravens'. It appears to follow that the observation 'this is a green tree' is corroborating evidence for the hypothesis 'all ravens are black'. Attempted resolutions may distinguish:
non-falsifying observations as to strong, moderate, or weak corroborations
investigations that do or do not provide a potentially falsifying test of the hypothesis.
Evidence contrary to a hypothesis is itself philosophically problematic. Such evidence is called a falsification of the hypothesis. However, under the theory of confirmation holism it is always possible to save a given hypothesis from falsification. This is so because any falsifying observation is embedded in a theoretical background, which can be modified in order to save the hypothesis. Karl Popper acknowledged this but maintained that a critical approach respecting methodological rules that avoided such immunizing stratagems is conducive to the progress of science.
Physicist Sean Carroll claims the model ignores underdetermination.
Versus other research models
The hypothetico-deductive approach contrasts with other research models such as the inductive approach or grounded theory. In the data percolation methodology,
the hypothetico-deductive approach is included in a paradigm of pragmatism by which four types of relations between the variables can exist: descriptive, of influence, longitudinal or causal. The variables are classified in two groups, structural and functional, a classification that drives the formulation of hypotheses and the statistical tests to be performed on the data so as to increase the efficiency of the research.
See also
Confirmation bias
Deductive-nomological
Explanandum and explanans
Inquiry
Models of scientific inquiry
Philosophy of science
Pragmatism
Scientific method
Verifiability theory of meaning
Will to believe doctrine
Types of inference
Strong inference
Abductive reasoning
Deductive reasoning
Inductive reasoning
Analogy
Citations
References
Scientific method
Philosophy of science
Conceptual models | 0.771942 | 0.988951 | 0.763412 |
Protein structure prediction | Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its secondary and tertiary structure from primary structure. Structure prediction is different from the inverse problem of protein design.
Protein structure prediction is one of the most important goals pursued by computational biology and addresses Levinthal's paradox. Accurate structure prediction has important applications in medicine (for example, in drug design) and biotechnology (for example, in novel enzyme design).
Starting in 1994, the performance of current methods has been assessed every two years in the Critical Assessment of Structure Prediction (CASP) experiment. A continuous evaluation of protein structure prediction web servers is performed by the community project Continuous Automated Model EvaluatiOn (CAMEO3D).
Protein structure and terminology
Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the main chain about the two torsion angles φ and ψ at the Cα atom (see figure). This conformational flexibility is responsible for differences in the three-dimensional structure of proteins. The peptide bonds in the chain are polar, i.e. they have separated positive and negative charges (partial charges): in the carbonyl group, which can act as a hydrogen bond acceptor, and in the NH group, which can act as a hydrogen bond donor. These groups can therefore interact in the protein structure. Proteins consist mostly of 20 different types of L-α-amino acids (the proteinogenic amino acids). These can be classified according to the chemistry of the side chain, which also plays an important structural role. Glycine takes on a special position, as it has the smallest side chain, only one hydrogen atom, and therefore can increase the local flexibility in the protein structure. Cysteine, in contrast, can react with another cysteine residue to form a cystine and thereby form a cross-link stabilizing the whole structure.
The protein structure can be considered as a sequence of secondary structure elements, such as α helices and β sheets. In these secondary structures, regular patterns of H-bonds are formed between the main chain NH and CO groups of spatially neighboring amino acids, and the amino acids have similar φ and ψ angles.
The formation of these secondary structures efficiently satisfies the hydrogen bonding capacities of the peptide bonds. The secondary structures can be tightly packed in the protein core in a hydrophobic environment, but they can also present at the polar protein surface. Each amino acid side chain has a limited volume to occupy and a limited number of possible interactions with other nearby side chains, a situation that must be taken into account in molecular modeling and alignments.
α-helix
The α-helix is the most abundant type of secondary structure in proteins. The α-helix has 3.6 amino acids per turn with an H-bond formed between every fourth residue; the average length is 10 amino acids (3 turns) or 10 Å but varies from 5 to 40 (1.5 to 11 turns). The alignment of the H-bonds creates a dipole moment for the helix with a resulting partial positive charge at the amino end of the helix. Because this region has free NH2 groups, it will interact with negatively charged groups such as phosphates. The most common location of α-helices is at the surface of protein cores, where they provide an interface with the aqueous environment. The inner-facing side of the helix tends to have hydrophobic amino acids and the outer-facing side hydrophilic amino acids. Thus, every third or fourth amino acid along the chain will tend to be hydrophobic, a pattern that can be quite readily detected. In the leucine zipper motif, a repeating pattern of leucines on the facing sides of two adjacent helices is highly predictive of the motif. A helical-wheel plot can be used to show this repeated pattern. Other α-helices buried in the protein core or in cellular membranes have a higher and more regular distribution of hydrophobic amino acids, and are highly predictive of such structures. Helices exposed on the surface have a lower proportion of hydrophobic amino acids. Amino acid content can be predictive of an α-helical region. Regions richer in alanine (A), glutamic acid (E), leucine (L), and methionine (M) and poorer in proline (P), glycine (G), tyrosine (Y), and serine (S) tend to form an α-helix. Proline destabilizes or breaks an α-helix but can be present in longer helices, forming a bend.
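As a small illustration of the helical-wheel idea, the sketch below uses the fact that an α-helix has 3.6 residues per turn, so each successive residue advances 360/3.6 = 100 degrees around the helix axis; listing residues by that angle shows whether hydrophobic residues fall on one face. The example sequence and the crude hydrophobicity set are illustrative.

```python
# Minimal sketch of a helical-wheel listing: residue i sits at (i * 100) mod 360
# degrees around the helix axis. The hydrophobic set and sequence are illustrative.

HYDROPHOBIC = set("AVLIMFWC")

def helical_wheel(sequence):
    for i, res in enumerate(sequence):
        angle = (i * 100) % 360
        face = "hydrophobic" if res in HYDROPHOBIC else "polar/charged"
        print(f"{res}{i + 1:>3}  {angle:>3} deg  {face}")

helical_wheel("LKALEEKLKALEEK")   # made-up amphipathic-style sequence
```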
β-sheet
β-sheets are formed by H-bonds between an average of 5–10 consecutive amino acids in one portion of the chain with another 5–10 farther down the chain. The interacting regions may be adjacent, with a short loop in between, or far apart, with other structures in between. Every chain may run in the same direction to form a parallel sheet, every other chain may run in the reverse chemical direction to form an antiparallel sheet, or the chains may be parallel and antiparallel to form a mixed sheet. The pattern of H-bonding is different in the parallel and antiparallel configurations. Each amino acid in the interior strands of the sheet forms two H-bonds with neighboring amino acids, whereas each amino acid on the outside strands forms only one bond with an interior strand. Looking across the sheet at right angles to the strands, more distant strands are rotated slightly counterclockwise to form a left-handed twist. The Cα-atoms alternate above and below the sheet in a pleated structure, and the R side groups of the amino acids alternate above and below the pleats. The Φ and Ψ angles of the amino acids in sheets vary considerably in one region of the Ramachandran plot. It is more difficult to predict the location of β-sheets than of α-helices. The situation improves somewhat when the amino acid variation in multiple sequence alignments is taken into account.
Loops
Some parts of the protein have a fixed three-dimensional structure but do not form any regular structures. They should not be confused with disordered or unfolded segments of proteins or random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. These parts are frequently called "loops" because they connect β-sheets and α-helices. Loops are usually located at the protein surface, and therefore mutations of their residues are more easily tolerated. Having more substitutions, insertions, and deletions in a certain region of a sequence alignment may be an indication of a loop. The positions of introns in genomic DNA may correlate with the locations of loops in the encoded protein. Loops also tend to have charged and polar amino acids and are frequently a component of active sites.
Protein classification
Proteins may be classified according to both structural and sequential similarity. For structural classification, the sizes and spatial arrangements of secondary structures described in the above paragraph are compared in known three-dimensional structures. Classification based on sequence similarity was historically the first to be used. Initially, similarity based on alignments of whole sequences was performed. Later, proteins were classified on the basis of the occurrence of conserved amino acid patterns. Databases that classify proteins by one or more of these schemes are available.
In considering protein classification schemes, it is important to keep several observations in mind. First, two entirely different protein sequences from different evolutionary origins may fold into a similar structure. Conversely, the sequence of an ancient gene for a given structure may have diverged considerably in different species while at the same time maintaining the same basic structural features. Recognizing any remaining sequence similarity in such cases may be a very difficult task. Second, two proteins that share a significant degree of sequence similarity either with each other or with a third sequence also share an evolutionary origin and should share some structural features also. However, gene duplication and genetic rearrangements during evolution may give rise to new gene copies, which can then evolve into proteins with new function and structure.
Terms used for classifying protein structures and sequences
The more commonly used terms for evolutionary and structural relationships among proteins are listed below. Many additional terms are used for various kinds of structural features found in proteins. Descriptions of such terms may be found at the CATH Web site, the Structural Classification of Proteins (SCOP) Web site, and a Glaxo Wellcome tutorial on the Swiss bioinformatics Expasy Web site.
Active site a localized combination of amino acid side groups within the tertiary (three-dimensional) or quaternary (protein subunit) structure that can interact with a chemically specific substrate and that provides the protein with biological activity. Proteins of very different amino acid sequences may fold into a structure that produces the same active site.
Architecture is the relative orientations of secondary structures in a three-dimensional structure without regard to whether or not they share a similar loop structure.
Fold (topology) a type of architecture that also has a conserved loop structure.
Blocks is a conserved amino acid sequence pattern in a family of proteins. The pattern includes a series of possible matches at each position in the represented sequences, but there are not any inserted or deleted positions in the pattern or in the sequences. By way of contrast, sequence profiles are a type of scoring matrix that represents a similar set of patterns that includes insertions and deletions.
Class a term used to classify protein domains according to their secondary structural content and organization. Four classes were originally recognized by Levitt and Chothia (1976), and several others have been added in the SCOP database. Three classes are given in the CATH database: mainly-α, mainly-β, and α–β, with the α–β class including both alternating α/β and α+β structures.
Core the portion of a folded protein molecule that comprises the hydrophobic interior of α-helices and β-sheets. The compact structure brings together side groups of amino acids into close enough proximity so that they can interact. When comparing protein structures, as in the SCOP database, core is the region common to most of the structures that share a common fold or that are in the same superfamily. In structure prediction, core is sometimes defined as the arrangement of secondary structures that is likely to be conserved during evolutionary change.
Domain (sequence context) a segment of a polypeptide chain that can fold into a three-dimensional structure irrespective of the presence of other segments of the chain. The separate domains of a given protein may interact extensively or may be joined only by a length of polypeptide chain. A protein with several domains may use these domains for functional interactions with different molecules.
Family (sequence context) a group of proteins of similar biochemical function that are more than 50% identical when aligned. This same cutoff is still used by the Protein Information Resource (PIR). A protein family comprises proteins with the same function in different organisms (orthologous sequences) but may also include proteins in the same organism (paralogous sequences) derived from gene duplication and rearrangements. If a multiple sequence alignment of a protein family reveals a common level of similarity throughout the lengths of the proteins, PIR refers to the family as a homeomorphic family. The aligned region is referred to as a homeomorphic domain, and this region may comprise several smaller homology domains that are shared with other families. Families may be further subdivided into subfamilies or grouped into superfamilies based on respective higher or lower levels of sequence similarity. The SCOP database reports 1296 families and the CATH database (version 1.7 beta), reports 1846 families.
When the sequences of proteins with the same function are examined in greater detail, some are found to share high sequence similarity. They are obviously members of the same family by the above criteria. However, others are found that have very little, or even insignificant, sequence similarity with other family members. In such cases, the family relationship between two distant family members A and C can often be demonstrated by finding an additional family member B that shares significant similarity with both A and C. Thus, B provides a connecting link between A and C. Another approach is to examine distant alignments for highly conserved matches.
At a level of identity of 50%, proteins are likely to have the same three-dimensional structure, and the identical atoms in the sequence alignment will also superimpose within approximately 1 Å in the structural model. Thus, if the structure of one member of a family is known, a reliable prediction may be made for a second member of the family, and the higher the identity level, the more reliable the prediction. Protein structural modeling can be performed by examining how well the amino acid substitutions fit into the core of the three-dimensional structure.
Family (structural context) as used in the FSSP database (Families of structurally similar proteins) and the DALI/FSSP Web site, two structures that have a significant level of structural similarity but not necessarily significant sequence similarity.
Fold similar to structural motif, includes a larger combination of secondary structural units in the same configuration. Thus, proteins sharing the same fold have the same combination of secondary structures that are connected by similar loops. An example is the Rossman fold comprising several alternating α helices and parallel β strands. In the SCOP, CATH, and FSSP databases, the known protein structures have been classified into hierarchical levels of structural complexity with the fold as a basic level of classification.
Homologous domain (sequence context) an extended sequence pattern, generally found by sequence alignment methods, that indicates a common evolutionary origin among the aligned sequences. A homology domain is generally longer than motifs. The domain may include all of a given protein sequence or only a portion of the sequence. Some domains are complex and made up of several smaller homology domains that became joined to form a larger one during evolution. A domain that covers an entire sequence is called the homeomorphic domain by PIR (Protein Information Resource).
Module a region of conserved amino acid patterns comprising one or more motifs and considered to be a fundamental unit of structure or function. The presence of a module has also been used to classify proteins into families.
Motif (sequence context) a conserved pattern of amino acids that is found in two or more proteins. In the Prosite catalog, a motif is an amino acid pattern that is found in a group of proteins that have a similar biochemical activity, and that often is near the active site of the protein. Examples of sequence motif databases are the Prosite catalog and the Stanford Motifs Database.
Motif (structural context) a combination of several secondary structural elements produced by the folding of adjacent sections of the polypeptide chain into a specific three-dimensional configuration. An example is the helix-loop-helix motif. Structural motifs are also referred to as supersecondary structures and folds.
Position-specific scoring matrix (sequence context, also known as weight or scoring matrix) represents a conserved region in a multiple sequence alignment with no gaps. Each matrix column represents the variation found in one column of the multiple sequence alignment.
Position-specific scoring matrix—3D (structural context) represents the amino acid variation found in an alignment of proteins that fall into the same structural class. Matrix columns represent the amino acid variation found at one amino acid position in the aligned structures.
Primary structure the linear amino acid sequence of a protein, which chemically is a polypeptide chain composed of amino acids joined by peptide bonds.
Profile (sequence context) a scoring matrix that represents a multiple sequence alignment of a protein family. The profile is usually obtained from a well-conserved region in a multiple sequence alignment. The profile is in the form of a matrix with each column representing a position in the alignment and each row one of the amino acids. Matrix values give the likelihood of each amino acid at the corresponding position in the alignment. The profile is moved along the target sequence to locate the best scoring regions by a dynamic programming algorithm. Gaps are allowed during matching and a gap penalty is included in this case as a negative score when no amino acid is matched. A sequence profile may also be represented by a hidden Markov model, referred to as a profile HMM. (A minimal scoring sketch is given after this list of terms.)
Profile (structural context) a scoring matrix that represents which amino acids should fit well and which should fit poorly at sequential positions in a known protein structure. Profile columns represent sequential positions in the structure, and profile rows represent the 20 amino acids. As with a sequence profile, the structural profile is moved along a target sequence to find the highest possible alignment score by a dynamic programming algorithm. Gaps may be included and receive a penalty. The resulting score provides an indication as to whether or not the target protein might adopt such a structure.
Quaternary structure the three-dimensional configuration of a protein molecule comprising several independent polypeptide chains.
Secondary structure the interactions that occur between the C, O, and NH groups on amino acids in a polypeptide chain to form α-helices, β-sheets, turns, loops, and other forms, and that facilitate the folding into a three-dimensional structure.
Superfamily a group of protein families of the same or different lengths that are related by distant yet detectable sequence similarity. Members of a given superfamily thus have a common evolutionary origin. Originally, Dayhoff defined the cutoff for superfamily status as being a chance of 10⁻⁶ that the sequences are not related, on the basis of an alignment score (Dayhoff et al. 1978). Proteins with few identities in an alignment of the sequences but with a convincingly common number of structural and functional features are placed in the same superfamily. At the level of three-dimensional structure, superfamily proteins will share common structural features such as a common fold, but there may also be differences in the number and arrangement of secondary structures. The PIR resource uses the term homeomorphic superfamilies to refer to superfamilies that are composed of sequences that can be aligned from end to end, representing a sharing of single sequence homology domain, a region of similarity that extends throughout the alignment. This domain may also comprise smaller homology domains that are shared with other protein families and superfamilies. Although a given protein sequence may contain domains found in several superfamilies, thus indicating a complex evolutionary history, sequences will be assigned to only one homeomorphic superfamily based on the presence of similarity throughout a multiple sequence alignment. The superfamily alignment may also include regions that do not align either within or at the ends of the alignment. In contrast, sequences in the same family align well throughout the alignment.
Supersecondary structure a term with a similar meaning to structural motif.
Tertiary structure the three-dimensional or globular structure formed by the packing together or folding of secondary structures of a polypeptide chain.
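Referring back to the position-specific scoring matrix and profile entries above, the following is a minimal sketch of scanning a target sequence with a small PSSM. For simplicity it is gapless (no dynamic programming or gap penalties, unlike the profile matching described above), and the matrix values, default penalty and sequences are illustrative.

```python
# Minimal sketch: gapless scan of a target sequence with a small PSSM.
# The columns, log-odds-style scores and default penalty are illustrative.

pssm = [                       # one dict per column of the conserved region
    {"G": 2.0, "A": 1.0},      # position 1 favours G
    {"K": 1.5, "R": 1.2},      # position 2 favours K or R
    {"T": 2.0, "S": 1.0},      # position 3 favours T
]
DEFAULT = -1.0                 # score for any residue not listed in a column

def window_score(window):
    return sum(col.get(res, DEFAULT) for col, res in zip(pssm, window))

def best_match(sequence):
    """Slide the PSSM along the sequence and return (score, start index)."""
    scores = [(window_score(sequence[i:i + len(pssm)]), i)
              for i in range(len(sequence) - len(pssm) + 1)]
    return max(scores)

print(best_match("MAGKTPLVGRT"))   # best score at the 'GKT' window
```

In real profile searches, this simple window scoring is replaced by gapped dynamic programming, as described in the profile entries above.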
Secondary structure
Secondary structure prediction is a set of techniques in bioinformatics that aim to predict the local secondary structures of proteins based only on knowledge of their amino acid sequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likely alpha helices, beta strands (often termed extended conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm (or similar e.g. STRIDE) applied to the crystal structure of the protein. Specialized algorithms have been developed for the detection of specific well-defined patterns such as transmembrane helices and coiled coils in proteins.
The best modern methods of secondary structure prediction in proteins were claimed to reach 80% accuracy after using machine learning and sequence alignments; this high accuracy allows the use of the predictions as a feature to improve fold recognition and ab initio protein structure prediction, the classification of structural motifs, and the refinement of sequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weekly benchmarks such as LiveBench and EVA.
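The per-residue accuracy figures quoted here are usually reported as the three-state Q3 score, the fraction of residues whose predicted state (helix, strand or coil) matches the reference assignment. A minimal sketch with made-up strings:

```python
# Minimal sketch of the three-state (Q3) accuracy measure: fraction of residues
# whose predicted state (H = helix, E = strand, C = coil/other) matches the
# DSSP-derived reference. Both strings are made-up examples.

predicted = "CCHHHHHHCCEEEECC"
observed  = "CCHHHHHCCCEEEECC"

q3 = sum(p == o for p, o in zip(predicted, observed)) / len(observed)
print(round(q3, 3))   # 0.938 for this toy pair
```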
Background
Early methods of secondary structure prediction, introduced in the 1960s and early 1970s, focused on identifying likely alpha helices and were based mainly on helix-coil transition models. Significantly more accurate predictions that included beta sheets were introduced in the 1970s and relied on statistical assessments based on probability parameters derived from known solved structures. These methods, applied to a single sequence, are typically at most about 60–65% accurate, and often underpredict beta sheets. Since the 1980s, artificial neural networks have been applied to the prediction of protein structures.
The evolutionary conservation of secondary structures can be exploited by simultaneously assessing many homologous sequences in a multiple sequence alignment, by calculating the net secondary structure propensity of an aligned column of amino acids. In concert with larger databases of known protein structures and modern machine learning methods such as neural nets and support vector machines, these methods can achieve up to 80% overall accuracy in globular proteins. The theoretical upper limit of accuracy is around 90%, partly due to idiosyncrasies in DSSP assignment near the ends of secondary structures, where local conformations vary under native conditions but may be forced to assume a single conformation in crystals due to packing constraints. Moreover, the typical secondary structure prediction methods do not account for the influence of tertiary structure on formation of secondary structure; for example, a sequence predicted as a likely helix may still be able to adopt a beta-strand conformation if it is located within a beta-sheet region of the protein and its side chains pack well with their neighbors. Dramatic conformational changes related to the protein's function or environment can also alter local secondary structure.
Historical perspective
To date, over 20 different secondary structure prediction methods have been developed. One of the first algorithms was the Chou–Fasman method, which relies predominantly on probability parameters determined from relative frequencies of each amino acid's appearance in each type of secondary structure. The original Chou–Fasman parameters, determined from the small sample of structures solved in the mid-1970s, produce poor results compared to modern methods, though the parameterization has been updated since it was first published. The Chou–Fasman method is roughly 50–60% accurate in predicting secondary structures.
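A simplified sketch in the spirit of such propensity-based prediction: slide a short window along the sequence and flag windows whose average helix propensity exceeds 1. The propensity values below are rounded, illustrative numbers, and the full Chou–Fasman method adds nucleation and extension rules that this sketch omits.

```python
# Simplified propensity-window sketch; the per-residue helix propensities are
# rounded, illustrative values, and the window/threshold rules are simplified.

HELIX_PROPENSITY = {
    "A": 1.42, "E": 1.51, "L": 1.21, "M": 1.45, "Q": 1.11, "K": 1.16,
    "F": 1.13, "I": 1.08, "V": 1.06, "D": 1.01, "T": 0.83, "S": 0.77,
    "R": 0.98, "C": 0.70, "N": 0.67, "Y": 0.69, "H": 1.00, "W": 1.08,
    "G": 0.57, "P": 0.57,
}

def helix_windows(sequence, window=6, threshold=1.0):
    """Return (start, window sequence, average propensity) for helix-like windows."""
    hits = []
    for i in range(len(sequence) - window + 1):
        avg = sum(HELIX_PROPENSITY[res] for res in sequence[i:i + window]) / window
        if avg > threshold:
            hits.append((i, sequence[i:i + window], round(avg, 2)))
    return hits

print(helix_windows("MKAELLEGAPKTGSPG"))   # made-up example sequence
```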
The next notable program was the GOR method, an information theory-based method. It uses the more powerful probabilistic technique of Bayesian inference. The GOR method takes into account not only the probability of each amino acid having a particular secondary structure, but also the conditional probability of the amino acid assuming each structure given the contributions of its neighbors (it does not assume that the neighbors have that same structure). The approach is both more sensitive and more accurate than that of Chou and Fasman because amino acid structural propensities are only strong for a small number of amino acids such as proline and glycine. Weak contributions from each of many neighbors can add up to strong effects overall. The original GOR method was roughly 65% accurate and is dramatically more successful in predicting alpha helices than beta sheets, which it frequently mispredicted as loops or disorganized regions.
Another big step forward was the use of machine learning methods. Artificial neural network methods were used first. As training sets, they use solved structures to identify common sequence motifs associated with particular arrangements of secondary structures. These methods are over 70% accurate in their predictions, although beta strands are still often underpredicted due to the lack of three-dimensional structural information that would allow assessment of the hydrogen bonding patterns that can promote formation of the extended conformation required for the presence of a complete beta sheet. PSIPRED and JPRED are some of the best-known programs based on neural networks for protein secondary structure prediction. Next, support vector machines have proven particularly useful for predicting the locations of turns, which are difficult to identify with statistical methods.
Extensions of machine learning techniques attempt to predict more fine-grained local properties of proteins, such as backbone dihedral angles in unassigned regions. Both SVMs and neural networks have been applied to this problem. More recently, real-value torsion angles can be accurately predicted by SPINE-X and successfully employed for ab initio structure prediction.
Other improvements
It is reported that in addition to the protein sequence, secondary structure formation depends on other factors. For example, it is reported that secondary structure tendencies depend also on local environment, solvent accessibility of residues, protein structural class, and even the organism from which the proteins are obtained. Based on such observations, some studies have shown that secondary structure prediction can be improved by addition of information about protein structural class, residue accessible surface area and also contact number information.
Tertiary structure
The practical role of protein structure prediction is now more important than ever. Massive amounts of protein sequence data are produced by modern large-scale DNA sequencing efforts such as the Human Genome Project. Despite community-wide efforts in structural genomics, the output of experimentally determined protein structures—typically by time-consuming and relatively expensive X-ray crystallography or NMR spectroscopy—is lagging far behind the output of protein sequences.
Protein structure prediction remains an extremely difficult and unresolved undertaking. The two main problems are the calculation of protein free energy and finding the global minimum of this energy. A protein structure prediction method must explore the space of possible protein structures, which is astronomically large. These problems can be partially bypassed in "comparative" or homology modeling and fold recognition methods, in which the search space is pruned by the assumption that the protein in question adopts a structure that is close to the experimentally determined structure of another homologous protein. In contrast, the de novo protein structure prediction methods must explicitly resolve these problems. The progress and challenges in protein structure prediction have been reviewed by Zhang.
Before modelling
Most tertiary structure modelling methods, such as Rosetta, are optimized for modelling the tertiary structure of single protein domains. A step called domain parsing, or domain boundary prediction, is usually done first to split a protein into potential structural domains. As with the rest of tertiary structure prediction, this can be done comparatively from known structures or ab initio with the sequence only (usually by machine learning, assisted by covariation). The structures for individual domains are docked together in a process called domain assembly to form the final tertiary structure.
Ab initio protein modelling
Energy- and fragment-based methods
Ab initio- or de novo- protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project and Rosetta@Home). Although these computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) make ab initio structure prediction an active research field.
As of 2009, a 50-residue protein could be simulated atom-by-atom on a supercomputer for 1 millisecond. As of 2012, comparable stable-state sampling could be done on a standard desktop with a new graphics card and more sophisticated algorithms. Much larger simulation timescales can be achieved using coarse-grained modeling.
Evolutionary covariation to predict 3D contacts
As sequencing became more commonplace in the 1990s, several groups used protein sequence alignments to predict correlated mutations, and it was hoped that these coevolved residues could be used to predict tertiary structure (using the analogy to distance constraints from experimental procedures such as NMR). The assumption is that when single-residue mutations are slightly deleterious, compensatory mutations may occur to restabilize residue-residue interactions.
This early work used what are known as local methods to calculate correlated mutations from protein sequences, but suffered from indirect false correlations, which result from treating each pair of residues as independent of all other pairs.
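A minimal sketch of one such "local" covariation measure, mutual information between two alignment columns, computed on a toy alignment; as noted above, pairwise scores of this kind also pick up indirect correlations, which the global methods described next correct for.

```python
# Minimal sketch of a local covariation measure: mutual information between two
# columns of a multiple sequence alignment. The toy alignment is illustrative.
from collections import Counter
from math import log2

alignment = [            # one string per homologous sequence
    "AKLVE",
    "AKIVE",
    "GRLTD",
    "GRITD",
    "AKLVE",
]

def mutual_information(col_i, col_j):
    n = len(alignment)
    pairs = Counter((seq[col_i], seq[col_j]) for seq in alignment)
    pi = Counter(seq[col_i] for seq in alignment)
    pj = Counter(seq[col_j] for seq in alignment)
    return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pairs.items())

# Columns 0 and 1 co-vary perfectly (A with K, G with R), as do columns 0 and 4,
# so both pairs score higher than an uncorrelated pair would (which scores ~0).
print(round(mutual_information(0, 1), 3))
print(round(mutual_information(0, 4), 3))
```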
In 2011, a different, and this time global, statistical approach demonstrated that predicted coevolved residues were sufficient to predict the 3D fold of a protein, provided there are enough sequences available (>1,000 homologous sequences are needed). The method, EVfold, uses no homology modeling, threading or 3D structure fragments and can be run on a standard personal computer even for proteins with hundreds of residues. The accuracy of the contacts predicted using this and related approaches has now been demonstrated on many known structures and contact maps, including the prediction of experimentally unsolved transmembrane proteins.
Comparative protein modeling
Comparative protein modeling uses previously solved structures as starting points, or templates. This is effective because it appears that although the number of actual proteins is vast, there is a limited set of tertiary structural motifs to which most proteins belong. It has been suggested that there are only around 2,000 distinct protein folds in nature, though there are many millions of different proteins. Comparative protein modeling can be combined with evolutionary covariation in structure prediction.
These methods may also be split into two groups:
Homology modeling is based on the reasonable assumption that two homologous proteins will share very similar structures. Because a protein's fold is more evolutionarily conserved than its amino acid sequence, a target sequence can be modeled with reasonable accuracy on a very distantly related template, provided that the relationship between target and template can be discerned through sequence alignment. It has been suggested that the primary bottleneck in comparative modelling arises from difficulties in alignment rather than from errors in structure prediction given a known-good alignment. Unsurprisingly, homology modelling is most accurate when the target and template have similar sequences.
Protein threading scans the amino acid sequence of an unknown structure against a database of solved structures. In each case, a scoring function is used to assess the compatibility of the sequence to the structure, thus yielding possible three-dimensional models. This type of method is also known as 3D-1D fold recognition due to its compatibility analysis between three-dimensional structures and linear protein sequences. This method has also given rise to methods performing an inverse folding search by evaluating the compatibility of a given structure with a large database of sequences, thus predicting which sequences have the potential to produce a given fold.
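A minimal sketch of the 3D-1D idea behind fold recognition: each position of a known structure is reduced to an environment class, and a compatibility table scores how well each residue type fits each class. The environment classes, table values and sequences below are illustrative, and real threading methods additionally handle gaps and pairwise interaction terms.

```python
# Minimal 3D-1D compatibility sketch: a structure reduced to environment classes
# (burial combined with secondary structure) and an illustrative scoring table.

environments = ["buried-H", "buried-H", "exposed-C", "exposed-E", "buried-E"]

compat = {   # residue -> {environment class: score}; values are illustrative
    "L": {"buried-H": 1.2,  "buried-E": 1.0,  "exposed-C": -0.5, "exposed-E": -0.2},
    "K": {"buried-H": -0.8, "buried-E": -0.9, "exposed-C": 0.7,  "exposed-E": 0.6},
    "V": {"buried-H": 0.9,  "buried-E": 1.1,  "exposed-C": -0.4, "exposed-E": -0.1},
    "S": {"buried-H": -0.3, "buried-E": -0.4, "exposed-C": 0.4,  "exposed-E": 0.3},
}

def threading_score(sequence):
    """Gapless compatibility of a sequence with the structure's environments."""
    return sum(compat[res][env] for res, env in zip(sequence, environments))

for seq in ("LLKSV", "KKSLL"):
    print(seq, round(threading_score(seq), 2))
# The sequence whose hydrophobic residues fall on the buried positions scores higher.
```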
Modeling of side-chain conformations
Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and the self-consistent mean field methods. The side chain conformations with low energy are usually determined on the rigid polypeptide backbone and using a set of discrete side chain conformations known as "rotamers." The methods attempt to identify the set of rotamers that minimize the model's overall energy.
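A minimal sketch of the underlying combinatorial problem: choose one rotamer per residue so that the total of self energies and pairwise clash penalties is minimal. Real methods use dead-end elimination or self-consistent mean-field techniques rather than the brute-force enumeration below, and the residues, rotamers and energy values are toy numbers.

```python
# Minimal sketch of side-chain packing as combinatorial energy minimization.
# Residues, rotamers and energies are toy values; real methods avoid brute force.
from itertools import product

self_energy = {                  # residue -> {rotamer: energy}
    "R1": {"a": 0.0, "b": 0.5},
    "R2": {"a": 0.2, "b": 0.0},
    "R3": {"a": 0.1, "b": 0.4},
}
pair_energy = {                  # clash penalties for specific rotamer pairs
    (("R1", "a"), ("R2", "a")): 2.0,
    (("R2", "b"), ("R3", "a")): 1.5,
}
residues = list(self_energy)

def total_energy(assignment):
    """Self energies plus pairwise penalties for a {residue: rotamer} choice."""
    e = sum(self_energy[res][rot] for res, rot in assignment.items())
    for i in range(len(residues)):
        for j in range(i + 1, len(residues)):
            key = ((residues[i], assignment[residues[i]]),
                   (residues[j], assignment[residues[j]]))
            e += pair_energy.get(key, 0.0)
    return e

best = min((dict(zip(residues, combo))
            for combo in product(*(self_energy[r] for r in residues))),
           key=total_energy)
print(best, total_energy(best))   # lowest-energy rotamer assignment
```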
These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling. Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, −60°) values.
Rotamer libraries can be backbone-independent, secondary-structure-dependent, or backbone-dependent. Backbone-independent rotamer libraries make no reference to backbone conformation, and are calculated from all available side chains of a certain type (for instance, the first example of a rotamer library, done by Ponder and Richards at Yale in 1987). Secondary-structure-dependent libraries present different dihedral angles and/or rotamer frequencies for α-helix, β-sheet, or coil secondary structures. Backbone-dependent rotamer libraries present conformations and/or frequencies dependent on the local backbone conformation as defined by the backbone dihedral angles φ and ψ, regardless of secondary structure.
The modern versions of these libraries as used in most software are presented as multidimensional distributions of probability or frequency, where the peaks correspond to the dihedral-angle conformations considered as individual rotamers in the lists. Some versions are based on very carefully curated data and are used primarily for structure validation, while others emphasize relative frequencies in much larger data sets and are the form used primarily for structure prediction, such as the Dunbrack rotamer libraries.
Side-chain packing methods are most useful for analyzing the protein's hydrophobic core, where side chains are more closely packed; they have more difficulty addressing the looser constraints and higher flexibility of surface residues, which often occupy multiple rotamer conformations rather than just one.
Quaternary structure
In the case of complexes of two or more proteins, where the structures of the proteins are known or can be predicted with high accuracy, protein–protein docking methods can be used to predict the structure of the complex. Information of the effect of mutations at specific sites on the affinity of the complex helps to understand the complex structure and to guide docking methods.
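As a rough illustration of pose ranking in rigid-body docking, the sketch below scores candidate poses by counting receptor–ligand contacts within a distance cutoff; affinity or mutation data of the kind mentioned above could, in principle, be encoded as required contacts. All coordinates and the 8 Å cutoff are arbitrary toy values, not a real docking protocol:

```python
import math

def count_contacts(receptor, ligand, cutoff=8.0):
    """Number of receptor/ligand point pairs closer than `cutoff` angstroms."""
    return sum(
        1
        for r in receptor
        for l in ligand
        if math.dist(r, l) < cutoff
    )

# Made-up C-alpha coordinates for a receptor and two candidate ligand poses
receptor = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (8.0, 0.0, 0.0)]
poses = {
    "pose1": [(3.0, 5.0, 0.0), (7.0, 5.0, 0.0)],
    "pose2": [(30.0, 0.0, 0.0), (34.0, 0.0, 0.0)],  # far from the receptor
}
scores = {name: count_contacts(receptor, pose) for name, pose in poses.items()}
print(scores, "best:", max(scores, key=scores.get))
```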
Software
A great number of software tools for protein structure prediction exist. Approaches include homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction. In particular, deep learning based on long short-term memory has been used for this purpose since 2007, when it was successfully applied to protein homology detection and to the prediction of the subcellular localization of proteins.
Some recent successful methods based on the CASP experiments include I-TASSER, HHpred and AlphaFold. In 2021, AlphaFold was reported to perform best.
Knowing the structure of a protein often allows functional prediction as well. For instance, collagen is folded into a long, extended fiber-like chain, which makes it a fibrous protein. Recently, several techniques have been developed to predict protein folding and thus protein structure, for example I-TASSER and AlphaFold.
AI methods
AlphaFold was one of the first AIs to predict protein structures. It was introduced by Google's DeepMind in the 13th CASP competition, which was held in 2018. AlphaFold relies on a neural network approach, which directly predicts the 3D coordinates of all non-hydrogen atoms for a given protein using the amino acid sequence and aligned homologous sequences. The AlphaFold network consists of a trunk which processes the inputs through repeated layers, and a structure module which introduces an explicit 3D structure. Earlier neural networks for protein structure prediction used LSTM.
Since AlphaFold outputs protein coordinates directly, AlphaFold produces predictions in graphics processing unit (GPU) minutes to GPU hours, depending on the length of protein sequence.
Current AI methods and databases of predicted protein structures
AlphaFold2 was introduced in CASP14 and is capable of predicting protein structures to near-experimental accuracy. AlphaFold was swiftly followed by RoseTTAFold and later by OmegaFold and the ESM Metagenomic Atlas. In a recent study, Sommer et al. 2022 demonstrated the application of protein structure prediction in genome annotation, specifically in identifying functional protein isoforms using computationally predicted structures, available at https://www.isoform.io. This study highlights the promise of protein structure prediction as a genome annotation tool and presents a practical, structure-guided approach that can be used to enhance the annotation of any genome.
The European Bioinformatics Institute together with DeepMind have constructed the AlphaFold – EBI database for predicted protein structures.
Evaluation of automatic structure prediction servers
CASP, which stands for Critical Assessment of Techniques for Protein Structure Prediction, is a community-wide experiment for protein structure prediction that has taken place every two years since 1994. CASP provides an opportunity to assess the quality of available human, non-automated methodology (human category) and of automatic servers for protein structure prediction (server category, introduced in CASP7).
The CAMEO3D Continuous Automated Model EvaluatiOn Server evaluates automated protein structure prediction servers on a weekly basis using blind predictions for newly released protein structures. CAMEO publishes the results on its website.
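For a flavor of the model-versus-experiment comparisons such evaluations rely on, the sketch below computes a coordinate RMSD between a predicted and an experimental structure, assuming the two are already superposed and list the same atoms in the same order (real assessments first perform an optimal superposition and also report measures such as GDT_TS or lDDT; the coordinates here are toy values):

```python
import math

def rmsd(predicted, experimental):
    """Root-mean-square deviation between two equally sized coordinate lists."""
    if len(predicted) != len(experimental) or not predicted:
        raise ValueError("need two equally sized, non-empty coordinate lists")
    sq = sum(math.dist(p, e) ** 2 for p, e in zip(predicted, experimental))
    return math.sqrt(sq / len(predicted))

# Toy C-alpha traces for a predicted model and an experimental structure
predicted    = [(0.0, 0.0, 0.0), (3.8, 0.1, 0.0), (7.5, 0.3, 0.2)]
experimental = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
print(f"C-alpha RMSD: {rmsd(predicted, experimental):.2f} angstroms")
```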
See also
Protein design
Protein function prediction
Protein–protein interaction prediction
Gene prediction
Protein structure prediction software
De novo protein structure prediction
Molecular design software
Molecular modeling software
Modelling biological systems
Fragment libraries
Lattice proteins
Statistical potential
Structure atlas of human genome
Protein circular dichroism data bank
References
Further reading
External links
, Protein Structure Prediction Center, CASP experiments
ExPASy Proteomics tools – list of prediction tools and servers
Bioinformatics
Protein structure
Protein methods
Hydrochloride
In chemistry, a hydrochloride is an acid salt resulting, or regarded as resulting, from the reaction of hydrochloric acid with an organic base (e.g. an amine). An alternative name is chlorhydrate, which comes from French. An archaic alternative name is muriate, derived from hydrochloric acid's ancient name: muriatic acid.
Uses
Converting amines into their hydrochlorides is a common way to improve their water solubility, which can be desirable for substances used in medications. The European Pharmacopoeia lists more than 200 hydrochlorides as active ingredients in medications. These hydrochlorides, compared to free bases, may more readily dissolve in the gastrointestinal tract and be absorbed into the bloodstream more quickly. Additionally, many hydrochlorides of amines have a longer shelf-life than their respective free bases.
Amine hydrochlorides represent latent forms of a more reactive free base. In this regard, formation of an amine hydrochloride confers protection. This effect is illustrated by the hydrochlorides of the amino acids. Glycine methyl ester hydrochloride is a shelf-stable salt that can be readily converted to a reactive glycine methyl ester, a compound that is not shelf-stable.
See also
Chloride, inorganic salts of hydrochloric acid
Free base (chemistry)
Quaternary ammonium cation
References
Acid salts
Organochlorides
Salts
Protist
A protist or protoctist is any eukaryotic organism that is not an animal, land plant, or fungus. Protists do not form a natural group, or clade, but are a polyphyletic grouping of several independent clades that evolved from the last eukaryotic common ancestor.
Protists were historically regarded as a separate taxonomic kingdom known as Protista or Protoctista. With the advent of phylogenetic analysis and electron microscopy studies, the use of Protista as a formal taxon was gradually abandoned. In modern classifications, protists are spread across several eukaryotic clades called supergroups, such as Archaeplastida (photoautotrophs that includes land plants), SAR, Obazoa (which includes fungi and animals), Amoebozoa and Excavata.
Protists represent an extremely large genetic and ecological diversity in all environments, including extreme habitats. Their diversity, larger than for all other eukaryotes, has only been discovered in recent decades through the study of environmental DNA and is still in the process of being fully described. They are present in all ecosystems as important components of the biogeochemical cycles and trophic webs. They exist abundantly and ubiquitously in a variety of forms that evolved multiple times independently, such as free-living algae, amoebae and slime moulds, or as important parasites. Together, they compose an amount of biomass that doubles that of animals. They exhibit varied types of nutrition (such as phototrophy, phagotrophy or osmotrophy), sometimes combining them (in mixotrophy). They present unique adaptations not present in multicellular animals, fungi or land plants. The study of protists is termed protistology.
Definition
There is not a single accepted definition of what protists are. As a paraphyletic assemblage of diverse biological groups, they have historically been regarded as a catch-all taxon that includes any eukaryotic organism (i.e., living beings whose cells possess a nucleus) that is not an animal, a land plant or a dikaryon fungus. Because of this definition by exclusion, protists encompass almost all of the broad spectrum of biological characteristics expected in eukaryotes.
They are generally unicellular, microscopic eukaryotes. Some species can be purely phototrophic (generally called algae), or purely heterotrophic (traditionally called protozoa), but there is a wide range of mixotrophic protists which exhibit both phagotrophy and phototrophy together. They have different life cycles, trophic levels, modes of locomotion, and cellular structures. Some protists can be pathogens.
Examples of basic protist forms that do not represent evolutionary cohesive lineages include:
Algae, which are photosynthetic protists. Traditionally called "protophyta", they are found within most of the big evolutionary lineages or supergroups, intermingled with heterotrophic protists which are traditionally called "protozoa". There are many multicellular and colonial examples of algae, including kelp, red algae, some types of diatoms, and some lineages of green algae.
Flagellates, which bear eukaryotic flagella. They are found in all lineages, reflecting that the common ancestor of all living eukaryotes was a flagellated heterotroph.
Amoebae, which usually lack flagella but move through changes in the shape and motion of their protoplasm to produce pseudopodia. They have evolved independently several times, leading to major radiations of these lifeforms. Many lineages lack a solid shape ("naked amoebae"). Some of them have special forms, such as the "heliozoa", amoebae with microtubule-supported pseudopodia radiating from the cell, with at least three independent origins. Others, referred to as "testate amoebae", grow a shell around the cell made from organic or inorganic material.
Slime molds, which are amoebae capable of producing stalked reproductive structures that bear spores, often through aggregative multicellularity (numerous amoebae aggregating together). This type of multicellularity has evolved at least seven times among protists.
Fungus-like protists, which can produce hyphae-like structures and are often saprophytic. They have evolved multiple times, often very distantly from true fungi. For example, the oomycetes (water molds) or the myxomycetes.
Parasitic protists, such as Plasmodium falciparum, the cause of malaria.
The names of some protists (called ambiregnal protists), because of their mixture of traits similar to both animals and plants or fungi (e.g. slime molds and flagellated algae like euglenids), have been published under either or both of the ICN and the ICZN codes.
Classification
The evolutionary relationships of protists have been explained through molecular phylogenetics, the sequencing of entire genomes and transcriptomes, and electron microscopy studies of the flagellar apparatus and cytoskeleton. New major lineages of protists and novel biodiversity continue to be discovered, resulting in dramatic changes to the eukaryotic tree of life. The newest classification systems of eukaryotes, revised in 2019, do not recognize the formal taxonomic ranks (kingdom, phylum, class, order...) and instead only recognize clades of related organisms, making the classification more stable in the long term and easier to update. In this new cladistic scheme, the protists are divided into various wide branches informally named supergroups:
Archaeplastida — consists of groups that have evolved from a photosynthetic common ancestor that obtained chloroplasts directly through a single event of endosymbiosis with a cyanobacterium:
Picozoa (1 species), non-photosynthetic predators.
Glaucophyta (26 species), unicellular algae found in freshwater and terrestrial environments.
Rhodophyta (5,000–6,000 species), mostly multicellular marine algae that lost chlorophyll and only harvest light energy through phycobiliproteins.
Rhodelphidia (2 species), predators with non-photosynthetic plastid.
Viridiplantae or Chloroplastida, containing both green algae and land plants which are not protists. The green algae comprise many lineages of varying diversity, such as Chlorophyta (7,000), Prasinodermophyta (10), Zygnematophyceae (4,000), Charophyceae (877), Klebsormidiophyceae (48) or Coleochaetophyceae (36).
Sar, SAR or Harosa – a clade of three highly diverse lineages exclusively containing protists.
Stramenopiles is a wide clade of photosynthetic and heterotrophic organisms that evolved from a common ancestor with hairs in one of their two flagella. The photosynthetic stramenopiles, called Ochrophyta, are a monophyletic group that acquired chloroplasts from secondary endosymbiosis with a red alga. Among these, the best known are: the unicellular or colonial Bacillariophyta (>60,000 species), known as diatoms; the filamentous or genuinely multicellular Phaeophyta (2,000 species), known as brown algae; and the Chrysomonadea (>1,200 species). The heterotrophic stramenopiles are more diverse in forms, ranging from fungi-like organisms such as the Hyphochytrea, Oomycota and Labyrinthulea, to various kinds of protozoa such as the flagellates Opalinata and Bicosoecida.
Alveolata contains three of the most well-known groups of protists: Apicomplexa, a parasitic group with species harmful to humans and animals; Dinoflagellata, an ecologically important group as a main component of the marine microplankton and a main cause of algal blooms; and Ciliophora (4,500 species), the extremely diverse and well-studied group of mostly free-living heterotrophs known as ciliates.
Rhizaria is a morphologically diverse lineage mostly comprising heterotrophic amoebae, flagellates and amoeboflagellates, and some unusual algae (Chlorarachniophyta) and spore-forming parasites. The most familiar rhizarians are Foraminifera and Radiolaria, groups of large and abundant marine amoebae, many of them macroscopic. Much of the rhizarian diversity lies within the phylum Cercozoa, filled with free-living flagellates which usually have pseudopodia, as well as Phaeodaria, a group previously considered radiolarian. Other groups comprise various amoebae like Vampyrellida or are important parasites like Phytomyxea, Paramyxida or Haplosporida.
Haptista — includes the Haptophyta algae and the heterotrophic Centrohelida, which are "heliozoan"-type amoebae.
Cryptista — closely related to Archaeplastida, it includes the Cryptophyta algae, with a plastid of red algal origin, and two obscure relatives with two flagella, katablepharids and Palpitomonas.
Discoba — includes many lineages previously grouped under the paraphyletic "Excavata": the Jakobida, flagellates with bacterial-like mitochondrial genomes; Tsukubamonas, a free-living flagellate; and the Discicristata clade, which unites well-known phyla Heterolobosea and Euglenozoa. Heterolobosea includes amoebae, flagellates and amoeboflagellates with complex life cycles, and the unusual Acrasida, a group of slime molds. Euglenozoa encompasses a clade of algae with chloroplasts of green algal origin and many groups of anaerobic, parasitic or free-living heterotrophs.
Metamonada — a clade of completely anaerobic protozoa, primarily flagellates. Some are gut symbionts of animals, others are free-living (for example, Paratrimastix pyriformis), and others are well-known parasites (for example, Giardia lamblia).
Amorphea — unites two huge clades:
Amoebozoa (2,400 species) is a large group of heterotrophic protists, mostly amoebae. Many lineages are slime molds that produce spore-releasing fruiting bodies, such as Myxogastria, Dictyostelia and Protosporangiida, and are often studied by mycologists. Within the non-fruiting amoebae, the Tubulinea contain many naked amoebae (such as Amoeba itself) and a well-studied order of testate amoebae known as Arcellinida. Other non-fruiting amoebozoans are Variosea, Discosea and Archamoebae.
Obazoa includes the two kingdoms Metazoa (animals) and Fungi, and their closest protist relatives inside a clade known as Opisthokonta. The opisthokont protists are Nucleariida, Ichthyosporea, Pluriformea, Filasterea, Choanoflagellata and the elusive Tunicaraptor (1 species). They are flagellated or amoeboid heterotrophs of vital importance in the search for the genes that allow animal multicellularity. Sister groups to Opisthokonta are Apusomonadida (28 species) and Breviatea (4 species).
Many smaller lineages do not belong to any of these supergroups, and are usually poorly known groups with limited data, often referred to as 'orphan groups'. Some, such as the CRuMs clade, Malawimonadida and Ancyromonadida, appear to be related to Amorphea. Others, like Hemimastigophora (10 species) and Provora (7 species), appear to be related to or within Diaphoretickes, a clade that unites SAR, Archaeplastida, Haptista and Cryptista. A mysterious protist species, Meteora sporadica, is more closely related to the latter two of these orphan groups.
Although the root of the tree is still unresolved, several possible topologies of the eukaryotic tree of life have been proposed that arrange these supergroups relative to one another.
History
Early concepts
From the start of the 18th century, the popular term "infusion animals" (later infusoria) referred to protists, bacteria and small invertebrate animals. In the mid-18th century, while Swedish scientist Carl von Linnaeus largely ignored the protists, his Danish contemporary Otto Friedrich Müller was the first to introduce protists to the binomial nomenclature system.
In the early 19th century, German naturalist Georg August Goldfuss introduced Protozoa (meaning 'early animals') as a class within Kingdom Animalia, to refer to four very different groups: infusoria (ciliates), corals, phytozoa (such as Cryptomonas) and jellyfish. Later, in 1845, Carl Theodor von Siebold was the first to establish Protozoa as a phylum of exclusively unicellular animals consisting of two classes: Infusoria (ciliates) and Rhizopoda (amoebae, foraminifera). Other scientists did not consider all of them part of the animal kingdom, and by the middle of the century they were regarded within the groupings of Protozoa (early animals), Protophyta (early plants), Phytozoa (animal-like plants) and Bacteria (mostly considered plants). Microscopic organisms were increasingly constrained in the plant/animal dichotomy. In 1858, the palaeontologist Richard Owen was the first to define Protozoa as a separate kingdom of eukaryotic organisms, with "nucleated cells" and the "common organic characters" of plants and animals, although he also included sponges within protozoa.
Origin of the protist kingdom
In 1860, British naturalist John Hogg proposed Protoctista (meaning 'first-created beings') as the name for a fourth kingdom of nature (the other kingdoms being Linnaeus' plant, animal and mineral) which comprised all the lower, primitive organisms, including protophyta, protozoa and sponges, at the merging bases of the plant and animal kingdoms.
In 1866 the 'father of protistology', German scientist Ernst Haeckel, addressed the problem of classifying all these organisms as a mixture of animal and vegetable characters, and proposed Protistenreich (Kingdom Protista) as the third kingdom of life, comprising primitive forms that were "neither animals nor plants". He grouped both bacteria and eukaryotes, both unicellular and multicellular organisms, as Protista. He retained the Infusoria in the animal kingdom, until German zoologist Otto Butschli demonstrated that they were unicellular. At first, he included sponges and fungi, but in later publications he explicitly restricted Protista to predominantly unicellular organisms or colonies incapable of forming tissues. He clearly separated Protista from true animals on the basis that the defining character of protists was the absence of sexual reproduction, while the defining character of animals was the blastula stage of animal development. He also returned the terms protozoa and protophyta as subkingdoms of Protista.
Butschli considered the kingdom to be too polyphyletic and rejected the inclusion of bacteria. He fragmented the kingdom into protozoa (only nucleated, unicellular animal-like organisms), while bacteria and the protophyta were a separate grouping. This strengthened the old dichotomy of protozoa/protophyta from German scientist Carl Theodor von Siebold, and the German naturalists asserted this view over the worldwide scientific community by the turn of the century. However, British biologist C. Clifford Dobell in 1911 brought attention to the fact that protists functioned very differently compared to the animal and vegetable cellular organization, and gave importance to Protista as a group with a different organization that he called "acellularity", shifting away from the dogma of German cell theory. He coined the term protistology and solidified it as a branch of study independent from zoology and botany.
In 1938, American biologist Herbert Copeland resurrected Hogg's label, arguing that Haeckel's term Protista included anucleated microbes such as bacteria, which the term Protoctista (meaning "first established beings") did not. Under his four-kingdom classification (Monera, Protoctista, Plantae, Animalia), the protists and bacteria were finally split apart, recognizing the difference between anucleate (prokaryotic) and nucleate (eukaryotic) organisms. To firmly separate protists from plants, he followed Haeckel's blastular definition of true animals, and proposed defining true plants as those with chlorophyll a and b, carotene, xanthophyll and production of starch. He also was the first to recognize that the unicellular/multicellular dichotomy was invalid. Still, he kept fungi within Protoctista, together with red algae, brown algae and protozoans. This classification was the basis for Whittaker's later definition of Fungi, Animalia, Plantae and Protista as the four kingdoms of life.
In the popular five-kingdom scheme published by American plant ecologist Robert Whittaker in 1969, Protista was defined as eukaryotic "organisms which are unicellular or unicellular-colonial and which form no tissues". Just as the prokaryotic/eukaryotic division was becoming mainstream, Whittaker, a decade after Copeland's system, recognized the fundamental division of life between the prokaryotic Monera and the eukaryotic kingdoms: Animalia (ingestion), Plantae (photosynthesis), Fungi (absorption) and the remaining Protista.
In the five-kingdom system of American evolutionary biologist Lynn Margulis, the term "protist" was reserved for microscopic organisms, while the more inclusive kingdom Protoctista (or protoctists) included certain large multicellular eukaryotes, such as kelp, red algae, and slime molds. Some use the term protist interchangeably with Margulis' protoctist, to encompass both single-celled and multicellular eukaryotes, including those that form specialized tissues but do not fit into any of the other traditional kingdoms.
Phylogenetics and modern concepts
The five-kingdom model remained the accepted classification until the development of molecular phylogenetics in the late 20th century, when it became apparent that protists are a paraphyletic group from which animals, fungi and plants evolved, and the three-domain system (Bacteria, Archaea, Eukarya) became prevalent. Today, protists are not treated as a formal taxon, but the term is commonly used for convenience in two ways:
Phylogenetic definition: protists are a paraphyletic group. A protist is any eukaryote that is not an animal, land plant or fungus, thus excluding many unicellular groups like the fungal Microsporidia, Chytridiomycetes and yeasts, and the non-unicellular Myxozoan animals included in Protista in the past.
Functional definition: protists are essentially those eukaryotes that are never multicellular, that either exist as independent cells, or if they occur in colonies, do not show differentiation into tissues. While common in popular usage, this definition excludes the variety of non-colonial multicellularity types that protists exhibit, such as aggregative (e.g. choanoflagellates) or complex multicellularity (e.g. brown algae).
Kingdoms Protozoa and Chromista
There is, however, one classification of protists based on traditional ranks that lasted until the 21st century. The British protozoologist Thomas Cavalier-Smith, since 1998, developed a six-kingdom model: Bacteria, Animalia, Plantae, Fungi, Protozoa and Chromista. In his context, paraphyletic groups take preference over clades: both protist kingdoms Protozoa and Chromista contain paraphyletic phyla such as Apusozoa, Eolouka or Opisthosporidia. Additionally, red and green algae are considered true plants, while the fungal groups Microsporidia, Rozellida and Aphelida are considered protozoans under the phylum Opisthosporidia. This scheme endured until 2021, the year of his last publication.
Diversity
Species diversity
According to molecular data, protists dominate eukaryotic diversity, accounting for a vast majority of environmental DNA sequences or operational taxonomic units (OTUs). However, their species diversity is severely underestimated by traditional methods that differentiate species based on morphological characteristics. The number of described protistan species is very low (ranging from 26,000 to 74,400 as of 2012) in comparison to the diversity of plants, animals and fungi, which are historically and biologically well-known and studied. The predicted number of species also varies greatly, ranging from 1.4×10 to 1.6×10, and in several groups the number of predicted species is arbitrarily doubled. Most of these predictions are highly subjective.
Molecular techniques such as DNA barcoding are being used to compensate for the lack of morphological diagnoses, but this has revealed an unknown vast diversity of protists that is difficult to accurately process because of the exceedingly large genetic divergence between the different protistan groups. Several different molecular markers need to be used to survey the vast protistan diversity, because there is no universal marker that can be applied to all lineages.
Biomass
Protists make up a large portion of the biomass in both marine and terrestrial ecosystems. It has been estimated that protists account for 4 gigatons (Gt) of biomass across the entire planet Earth. This amount is smaller than 1% of all biomass, but is still double the amount estimated for all animals (2 Gt). Together, protists, animals, archaea (7 Gt) and fungi (12 Gt) account for less than 10% of the total biomass of the planet, because plants (450 Gt) and bacteria (70 Gt) account for the remaining 80% and 15%, respectively.
Ecology
Protists are highly abundant and diverse in all types of ecosystems, especially free-living (i.e. non-parasitic) groups. An unexpectedly enormous, taxonomically undescribed diversity of eukaryotic microbes is detected everywhere in the form of environmental DNA or RNA. The richest protist communities appear in soil, followed by ocean and freshwater habitats.
Phagotrophic protists (consumers) are the most diverse functional group in all ecosystems, with three main taxonomical groups of phagotrophs: Rhizaria (mainly Cercozoa in freshwater and soil habitats, and Radiolaria in oceans), ciliates (most abundant in freshwater and second most abundant in soil) and non-photosynthetic stramenopiles (third most represented overall, higher in soil than in oceans). Phototrophic protists (producers) appear in lower proportions, probably constrained by intense predation. They exist in similar abundance in both oceans and soil. They are mostly dinophytes in oceans, chrysophytes in freshwater, and Archaeplastida in soil.
Marine
Marine protists are highly diverse, have a fundamental impact on biogeochemical cycles (particularly, the carbon cycle) and are at the base of the marine trophic networks as part of the plankton.
Phototrophic marine protists located in the photic zone as phytoplankton are vital primary producers in the oceanic systems. They fix as much carbon as all terrestrial plants together. The smallest fractions, the picoplankton (<2 μm) and nanoplankton (2–20 μm), are dominated by several different algae (prymnesiophytes, pelagophytes, prasinophytes); fractions larger than 5 μm are instead dominated by diatoms and dinoflagellates. The heterotrophic fraction of marine picoplankton encompasses primarily early-branching stramenopiles (e.g. bicosoecids and labyrinthulomycetes), as well as alveolates, ciliates and radiolarians; protists of lower frequency include cercozoans and cryptophytes.
Mixotrophic marine protists, while not very researched, are present abundantly and ubiquitously in the global oceans, on a wide range of marine habitats. In metabarcoding analyses, they constitute more than 12% of the environmental sequences. They are an important and underestimated source of carbon in eutrophic and oligotrophic habitats. Their abundance varies seasonally. Planktonic protists are classified into various functional groups or 'mixotypes' that present different biogeographies:
Constitutive mixotrophs, also called 'phytoplankton that eat', have the innate ability to photosynthesize. They have diverse feeding behaviors: some require phototrophy, others phagotrophy, and others are obligate mixotrophs. They are responsible for harmful algal blooms. They dominate the eukaryotic microbial biomass in the photic zone, in eutrophic and oligotrophic waters across all climate zones, even in non-bloom conditions. They account for significant, often dominant predation of bacteria.
Non-constitutive mixotrophs acquire the ability to photosynthesize by stealing chloroplasts from their prey. They can be divided into two: generalists, which can use chloroplasts stolen from a variety of prey (e.g. oligotrich ciliates), or specialists, which have developed the need to only acquire chloroplasts from a few specific prey. The specialists are further divided into two: plastidic, those which contain differentiated plastids (e.g. Mesodinium, Dinophysis), and endosymbiotic, those which contain endosymbionts (e.g. mixotrophic Rhizaria such as Foraminifera and Radiolaria, dinoflagellates like Noctiluca). Both plastidic and generalist non-constitutive mixotrophs have similar biogeographies and low abundance, mostly found in eutrophic coastal waters. Generalist ciliates can account for up to 50% of ciliate communities in the photic zone. The endosymbiotic mixotrophs are the most abundant non-constitutive type.
Freshwater
Freshwater planktonic protist communities are characterized by a higher "beta diversity" (i.e. highly heterogeneous between samples) than soil and marine plankton. The high diversity can be a result of the hydrological dynamic of recruiting organisms from different habitats through extreme floods. The main freshwater producers (chrysophytes, cryptophytes and dinophytes) behave alternatively as consumers (mixotrophs). At the same time, strict consumers (non-photosynthetic) are less abundant in freshwater, implying that the consumer role is partly taken by these mixotrophs.
Soil
Soil protist communities are ecologically the richest. This may be due to the complex and highly dynamic distribution of water in the sediment, which creates extremely heterogeneous environmental conditions. The constantly changing environment promotes the activity of only one part of the community at a time, while the rest remains inactive; this phenomenon promotes high microbial diversity in prokaryotes as well as protists. Only a small fraction of the detected diversity of soil-dwelling protists has been described (8.1% as of 2017). Soil protists are also morphologically and functionally diverse, with four major categories:
Photoautotrophic soil protists, or algae, are as abundant as their marine counterparts. Given the importance of marine algae, soil algae may provide a larger contribution to the global carbon cycle than previously thought, but the magnitude of their carbon fixation has yet to be quantified. Most soil algae belong to the supergroups Stramenopiles (diatoms, Xanthophyceae and Eustigmatophyceae) and Archaeplastida (Chlorophyceae and Trebouxiophyceae). There is also the presence of environmental DNA from dinoflagellates and haptophytes in soil, but no living forms have been seen.
Fungus-like protists are present abundantly in soil. Most environmental sequences belong to the Oomycetes (Stramenopiles), an osmotrophic and saprotrophic group that contains free-living and parasitic species of other protists, fungi, plants and animals. Another important group in soil are slime molds (found in Amoebozoa, Opisthokonta, Rhizaria and Heterolobosea), which reproduce by forming fruiting bodies known as sporocarps (originated from a single cell) and sorocarps (from aggregations of cells).
Phagotrophic protists are abundant and essential in soil ecosystems. As bacterial grazers, they have a significant role in the foodweb: they excrete nitrogen in the form of ammonium (NH4+), making it available to plants and other microbes. Many soil protists are also mycophagous, and facultative (i.e. non-obligate) mycophagy is a widespread evolutionary feeding mode among soil protozoa. Amoeboflagellates like the glissomonads and cercomonads (in Rhizaria) are among the most abundant soil protists: they possess both flagella and pseudopodia, a morphological variability well suited for foraging between soil particles. Testate amoebae (e.g. arcellinids and euglyphids) have shells that protect against desiccation and predation, and their contribution to the silica cycle through the biomineralization of shells is as important as that of forest trees.
Parasitic soil protists (in Apicomplexa) are diverse, ubiquitous and have an important role as parasites of soil-dwelling invertebrate animals. In Neotropical forests, environmental DNA from the apicomplexan gregarines dominates protist diversity.
Parasitic
Parasitic protists represent around 15–20% of all environmental DNA in marine and soil systems, but only around 5% in freshwater systems, where chytrid fungi likely fill that ecological niche. In oceanic systems, parasitoids (i.e. those which kill their hosts, e.g. Syndiniales) are more abundant. In soil ecosystems, true parasites (i.e. those which do not kill their hosts) are primarily animal-hosted Apicomplexa (Alveolata) and plant-hosted oomycetes (Stramenopiles) and plasmodiophorids (Rhizaria). In freshwater ecosystems, parasitoids are mainly Perkinsea and Syndiniales (Alveolata), as well as the fungal Chytridiomycota. True parasites in freshwater are mostly oomycetes, Apicomplexa and Ichthyosporea.
Some protists are significant parasites of animals (e.g. five species of the parasitic genus Plasmodium cause malaria in humans and many others cause similar diseases in other vertebrates), plants (the oomycete Phytophthora infestans causes late blight in potatoes) or even of other protists.
Around 100 protist species can infect humans. Two papers from 2013 have proposed virotherapy, the use of viruses to treat infections caused by protozoa.
Researchers from the Agricultural Research Service are taking advantage of protists as pathogens to control red imported fire ant (Solenopsis invicta) populations in Argentina. Spore-producing protists such as Kneallhazia solenopsae (now recognized as a sister clade or the closest relative to the fungus kingdom) can reduce red fire ant populations by 53–100%. Researchers have also been able to infect phorid fly parasitoids of the ant with the protist without harming the flies. This turns the flies into a vector that can spread the pathogenic protist between red fire ant colonies.
Biology
Physiological adaptations
While, in general, protists are typical eukaryotic cells and follow the same principles of physiology and biochemistry described for those cells within the "higher" eukaryotes (animals, fungi or plants), they have evolved a variety of unique physiological adaptations that do not appear in those eukaryotes.
Osmoregulation. Freshwater protists without cell walls are able to regulate their osmosis through contractile vacuoles, specialized organelles that periodically excrete fluid high in potassium and sodium through a cycle of diastole and systole. The cycle stops when the cells are placed in a medium with different salinity, until the cell adapts.
Energetic adaptations. The last eukaryotic common ancestor was aerobic, bearing mitochondria for oxidative metabolism. Many lineages of free-living and parasitic protists have independently evolved and adapted to inhabit anaerobic or microaerophilic habitats, by modifying the early mitochondria into hydrogenosomes, organelles that generate ATP anaerobically through fermentation of pyruvate. In a parallel manner, in the microaerophilic trypanosomatid protists, the fermentative glycosome evolved from the peroxisome.
Sensory adaptations. Many flagellates and probably all motile algae exhibit a positive phototaxis (i.e. they swim or glide toward a source of light). For this purpose, they exhibit three kinds of photoreceptors or "eyespots": (1) receptors with light antennae, found in many green algae, dinoflagellates and cryptophytes; (2) receptors with opaque screens; and (3) complex ocelloids with intracellular lenses, found in one group of predatory dinoflagellates, the Warnowiaceae. Additionally, some ciliates orient themselves in relation to the Earth's gravitational field while moving (geotaxis), and others swim in relation to the concentration of dissolved oxygen in the water.
Endosymbiosis. Protists have an accentuated tendency to include endosymbionts in their cells, and these have produced new physiological opportunities. Some associations are more permanent, such as Paramecium bursaria and its endosymbiont Chlorella; others more transient. Many protists contain captured chloroplasts, chloroplast-mitochondrial complexes, and even eyespots from algae. The xenosomes are bacterial endosymbionts found in ciliates, sometimes with a methanogenic role inside anaerobic ciliates.
Sexual reproduction
Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock. Oxidative stress, which leads to DNA damage, also appears to be an important factor in the induction of sex in protists.
Eukaryotes emerged in evolution more than 1.5 billion years ago. The earliest eukaryotes were protists. Although sexual reproduction is widespread among multicellular eukaryotes, until recently it seemed unlikely that sex could be a primordial and fundamental characteristic of eukaryotes. The main reason for this view was that sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these "early-branching" protists that were thought to predate the emergence of meiosis and sex (such as Giardia lamblia and Trichomonas vaginalis) are now known to descend from ancestors capable of meiosis and meiotic recombination, because they have a core set of meiotic genes that are present in sexual eukaryotes. Most of these meiotic genes were likely present in the common ancestor of all eukaryotes, which was likely capable of facultative (non-obligate) sexual reproduction.
This view was further supported by a 2011 study on amoebae. Amoebae have been regarded as asexual organisms, but the study describes evidence that most amoeboid lineages are ancestrally sexual, and that the majority of asexual groups likely arose recently and independently. Even in the early 20th century, some researchers interpreted phenomena related to chromidia (chromatin granules free in the cytoplasm) in amoebae as sexual reproduction.
Sex in pathogenic protists
Some commonly found protist pathogens such as Toxoplasma gondii are capable of infecting and undergoing asexual reproduction in a wide variety of animals – which act as secondary or intermediate host – but can undergo sexual reproduction only in the primary or definitive host (for example: felids such as domestic cats in this case).
Some species, for example Plasmodium falciparum, have extremely complex life cycles that involve multiple forms of the organism, some of which reproduce sexually and others asexually. However, it is unclear how frequently sexual reproduction causes genetic exchange between different strains of Plasmodium in nature and most populations of parasitic protists may be clonal lines that rarely exchange genes with other members of their species.
The pathogenic parasitic protists of the genus Leishmania have been shown to be capable of a sexual cycle in the invertebrate vector, likened to the meiosis undertaken in the trypanosomes.
Fossil record
Mesoproterozoic
By definition, all eukaryotes before the existence of plants, animals and fungi are considered protists. For that reason, this section contains information about the deep ancestry of all eukaryotes.
All living eukaryotes, including protists, evolved from the last eukaryotic common ancestor (LECA). Descendants of this ancestor are known as "crown-group" or "modern" eukaryotes. Molecular clocks suggest that LECA originated between 1200 and more than 1800 million years ago (Ma). Based on all molecular predictions, modern eukaryotes reached morphological and ecological diversity before 1000 Ma in the form of multicellular algae capable of sexual reproduction, and unicellular protists capable of phagocytosis and locomotion. However, the fossil record of modern eukaryotes is very scarce around this period, which contradicts the predicted diversity.
Instead, the fossil record of this period contains "stem-group eukaryotes". These fossils cannot be assigned to any known crown group, so they probably belong to extinct lineages that originated before LECA. They appear continuously throughout the Mesoproterozoic fossil record (1650–1000 Ma). They present defining eukaryote characteristics such as complex cell wall ornamentation and cell membrane protrusions, which require a flexible endomembrane system. However, they had a major distinction from crown eukaryotes: the composition of their cell membrane. Unlike crown eukaryotes, which produce "crown sterols" for their cell membranes (e.g. cholesterol and ergosterol), stem eukaryotes produced "protosterols", which appear earlier in the biosynthetic pathway.
Crown sterols, while metabolically more expensive, may have granted several evolutionary advantages for LECA's descendants. Specific unsaturation patterns in crown sterols protect against osmotic shock during desiccation and rehydration cycles. Crown sterols can also receive ethyl groups, thus enhancing cohesion between lipids and adapting cells against extreme cold and heat. Moreover, the additional steps in the biosynthetic pathway allow cells to regulate the proportion of different sterols in their membranes, in turn allowing for a wider habitable temperature range and unique mechanisms such as asymmetric cell division or membrane repair under exposure to UV light. A more speculative role of these sterols is their protection against the Proterozoic changing oxygen levels. It is theorized that all of these sterol-based mechanisms allowed LECA's descendants to live as extremophiles of their time, diversifying into ecological niches that experienced cycles of desiccation and rehydration, daily extremes of high and low temperatures, and elevated UV radiation (such as mudflats, rivers, agitated shorelines and subaerial soil).
In contrast, the named mechanisms were absent in stem-group eukaryotes, as they were only capable of producing protosterols. Instead, these protosterol-based life forms occupied open marine waters. They were facultative anaerobes that thrived in Mesoproterozoic waters, which at the time were low on oxygen. Eventually, during the Tonian period (Neoproterozoic era), oxygen levels increased and the crown eukaryotes were able to expand to open marine environments thanks to their preference for more oxygenated habitats. Stem eukaryotes may have been driven to extinction as a result of this competition. Additionally, their protosterol membranes may have posed a disadvantage during the cold of the Cryogenian "Snowball Earth" glaciations and the extreme global heat that came afterwards.
Neoproterozoic
Modern eukaryotes began to appear abundantly in the Tonian period (1000–720 Ma), fueled by the proliferation of red algae. The oldest fossils assigned to modern eukaryotes belong to two photosynthetic protists: the multicellular red alga Bangiomorpha (from 1050 Ma), and the chlorophyte green alga Proterocladus (from 1000 Ma). Abundant fossils of heterotrophic protists appear later, around 900 Ma, with the emergence of fungi. For example, the oldest fossils of Amoebozoa are vase-shaped microfossils resembling modern testate amoebae, found in 800 million-year-old rocks. Radiolarian shells are found abundantly in the fossil record after the Cambrian period (~500 Ma), but more recent paleontological studies are beginning to interpret some Precambrian fossils as the earliest evidence of radiolarians.
See also
Evolution of sexual reproduction
Protist locomotion
Footnotes
References
Bibliography
General
Hausmann, K., N. Hulsmann, R. Radek. Protistology. Schweizerbart'sche Verlagsbuchshandlung, Stuttgart, 2003.
Margulis, L., J.O. Corliss, M. Melkonian, D.J. Chapman. Handbook of Protoctista. Jones and Bartlett Publishers, Boston, 1990.
Margulis, L., K.V. Schwartz. Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, 3rd ed. New York: W.H. Freeman, 1998.
Margulis, L., L. Olendzenski, H.I. McKhann. Illustrated Glossary of the Protoctista, 1993.
Margulis, L., M.J. Chapman. Kingdoms and Domains: An Illustrated Guide to the Phyla of Life on Earth. Amsterdam: Academic Press/Elsevier, 2009.
Schaechter, M. Eukaryotic microbes. Amsterdam, Academic Press, 2012.
Physiology, ecology and paleontology
Fontaneto, D. Biogeography of Microscopic Organisms. Is Everything Small Everywhere? Cambridge University Press, Cambridge, 2011.
Moore, R. C., and other editors. Treatise on Invertebrate Paleontology. Protista, part B (vol. 1, Charophyta, vol. 2, Chrysomonadida, Coccolithophorida, Charophyta, Diatomacea & Pyrrhophyta), part C (Sarcodina, Chiefly "Thecamoebians" and Foraminiferida) and part D (Chiefly Radiolaria and Tintinnina). Boulder, Colorado: Geological Society of America; & Lawrence, Kansas: University of Kansas Press.
External links
UniEuk Taxonomy App
Tree of Life: Eukaryotes
Tsukii, Y. (1996). Protist Information Server (database of protist images). Laboratory of Biology, Hosei University. Protist Information Server. Updated: March 22, 2016.
Obsolete eukaryote taxa
Paraphyletic groups
Academic writing
Academic writing or scholarly writing refers primarily to nonfiction writing that is produced as part of academic work in accordance with the standards of a particular academic subject or discipline, including:
reports on empirical fieldwork or research in facilities for the natural sciences or social sciences,
monographs in which scholars analyze culture, propose new theories, or develop interpretations from archives,
as well as undergraduate versions of all of these.
Academic writing typically uses a more formal tone and follows specific conventions. Central to academic writing is its intertextuality, or an engagement with existing scholarly conversations through meticulous citing or referencing of other academic work, which underscores the writer's participation in the broader discourse community. However, the exact style, content, and organization of academic writing can vary depending on the specific genre and publication method. Despite this variation, all academic writing shares some common features, including a commitment to intellectual integrity, the advancement of knowledge, and the rigorous application of disciplinary methodologies.
Academic style
Academic writing often features prose register that is conventionally characterized by "evidence...that the writer(s) have been persistent, open-minded and disciplined in the study"; that prioritizes "reason over emotion or sensual perception"; and that imagines a reader who is "coolly rational, reading for information, and intending to formulate a reasoned response."
Three linguistic patterns that correspond to these goals across fields and genres, include the following:
a balance of caution and certainty, or a balance of hedging and boosting;
explicit cohesion through a range of cohesive ties and moves; and
the use of compressed noun phrases, rather than dependent clauses, for adding detail.
The stylistic means of achieving these conventions will differ by academic discipline, seen, for example, in the distinctions between writing in history versus engineering, or writing in physics versus philosophy. Biber and Gray propose further differences in the complexity of academic writing between disciplines, seen, for example, in the distinctions between writing in the humanities versus writing in the sciences. In the humanities, academic style is often seen in elaborated complex texts, while in the sciences, academic style is often seen in highly structured concise texts. These stylistic differences are thought to be related to the types of knowledge and information being communicated in these two broad fields.
One theory that attempts to account for these differences in writing is known as "discourse communities".
Criticism
Academic style has often been criticized for being too full of jargon and hard to understand by the general public. In 2022, Joelle Renstrom argued that the COVID-19 pandemic has had a negative impact on academic writing and that many scientific articles now "contain more jargon than ever, which encourages misinterpretation, political spin, and a declining public trust in the scientific process."
Discourse community
A discourse community is a group of people that shares mutual interests and beliefs. "It establishes limits and regularities...who may speak, what may be spoken, and how it is to be said; in addition, [rules] prescribe what is true and false, what is reasonable and what foolish, and what is meant and what not."
The concept of a discourse community is vital to academic writers across all disciplines, for the academic writer's purpose is to influence how their community understands its field of study: whether by maintaining, adding to, revising, or contesting what that community regards as "known" or "true." To effectively communicate and persuade within their field, academic writers are motivated to adhere to the conventions and standards set forth by their discourse community. Such adherence ensures that their contributions are intelligible and recognized as legitimate.
Discourse community constraints
Constraints are the discourse community's accepted rules and norms of writing that determine what can and cannot be said in a particular field or discipline. They define what constitutes an acceptable argument. Every discourse community expects to see writers construct their arguments using the community's conventional style of language, vocabulary, and sources, which are the building blocks of any argument in that community.
Writing for a discourse community
For writers to become familiar with some of the constraints of the discourse community they are writing for, across most discourse communities, writers must:
identify the novelty of their position
make a claim, or thesis
acknowledge prior work and situate their claim in a disciplinary context
offer warrants for one's view based on community-specific arguments and procedures
The structure and presentation of arguments can vary based on the discourse community the writer is a part of. For example, a high school student would typically present arguments differently than a college student. It is important for academic writers to familiarize themselves with the conventions of their discourse community by analyzing existing literature within the field. Such an in-depth understanding will enable writers to convey their ideas and arguments more effectively, ensuring that their contributions resonate with and are valued by their peers in the discourse community.
Writing Across the Curriculum (WAC) is a comprehensive educational initiative designed not only to enhance student writing proficiency across diverse disciplinary contexts but also to foster faculty development and interdisciplinary dialogue. The Writing Across the Curriculum Clearinghouse provides resources for such programs at all levels of education.
Novel argument
In a discourse community, academic writers build on the ideas of previous writers to establish their own claims. Successful writers know the importance of conducting research within their community and applying the knowledge gained to their own work. By synthesizing and expanding upon existing ideas, writers are able to make novel contributions to the discourse.
Intertextuality
Intertextuality is the combining of past writings into original, new pieces of text. According to Julia Kristeva, all texts are part of a larger network of intertextuality, meaning they are connected to prior texts through various links, such as allusions, repetitions, and direct quotations, whether they are acknowledged or not. Writers (often unwittingly) make use of what has previously been written and thus some degree of borrowing is inevitable. One of the key characteristics of academic writing across disciplines is the use of explicit conventions for acknowledging intertextuality, such as citation and bibliography. The conventions for marking intertextuality vary depending on the discourse community, with examples including MLA, APA, IEEE, and Chicago styles.
Summarizing and integrating other texts in academic writing is often metaphorically described as "entering the conversation," a metaphor attributed to Kenneth Burke.
Key elements
While the need for appropriate references and the avoidance of plagiarism are undisputed in academic and scholarly writing, the appropriate style is still a matter of debate. Some aspects of writing are universally accepted as important, while others are more subjective and open to interpretation.
Style
Contrary to stereotype, published academic research is not particularly syntactically complex; it is instead a fairly low-involvement register characterized by the modification of nominal elements through hedging and refining elaborations, often presented as sequences of objects of prepositions such as what, where, when, and whom.
Logical structure
Writing should be organized in a manner which demonstrates clarity of thought.
Appropriate references
Generally speaking, the range and organization of references illustrate the writer's awareness of the current state of knowledge in the field (including major current disagreements or controversies); typically the expectation is that these references will be formatted in the relevant disciplinary citation system.
Bibliography
Typically, this lists those materials read as background, evidencing wider reading, and will include the sources of individual citations.
Avoidance of plagiarism
Plagiarism, the "wrongful appropriation of another author's language, thoughts, ideas, or expressions", and the representation of them as one's own original work, is considered academic dishonesty, and can lead to severe consequences. However, the delineation of plagiarism is not always straightforward, as interpretations of what constitutes plagiarism can vary significantly across different cultures. This complexity is further amplified by the advent of advanced technologies, including artificial intelligence (AI), which have both complicated the detection of plagiarism and introduced new considerations in defining originality and authorship.
Academic genres
Academic writing encompasses many different genres, indicating the many different kinds of authors, audiences and activities engaged in the academy and the variety of kinds of messages sent among various people engaged in the academy. The partial list below indicates the complexity of academic writing and the academic world it is part of.
By researchers for other researchers
Scholarly monograph, in many types and varieties
Chapter in an edited volume
Book review
Conference paper
Essay; usually short, between 1,500 and 6,000 words in length
Explication; usually a short factual note explaining some part of a particular work; e.g. its terminology, dialect, allusions or coded references
Literature review or review essay; a summary and careful comparison of previous academic work published on a specific topic
Research article
Research proposal
Site description and plan (e.g. in archeology)
Technical report
Translation
Journal article (e.g. History Today); usually presenting a digest of recent research
Technical or administrative forms
Brief; short summary, often instructions for a commissioned work
Peer review report
Proposal for research or for a book
White paper; detailed technical specifications and/or performance report
Collating the work of others
Anthology; collection, collation, ordering and editing of the work of others
Catalogue raisonné; the definitive collection of the work of a single artist, in book form
Collected works; often referred to as the 'critical edition'. The definitive collection of the work of a single writer or poet, in book form, carefully purged of publishers' errors and later forgeries, etc.
Monograph or exhibition catalog; usually containing exemplary works, and a scholarly essay. Sometimes contains new work by a creative writer, responding to the work
Transcribing, selecting and ordering oral testimony (e.g. oral history recordings)
Research and planning
Empirical research
Experimental plan
Laboratory report
Raw data collection plan
Research proposal, including research questions
Structured notes
Newer forms
Collaborative writing, especially using the internet
Hypertext, often incorporating new media and multimedia forms within the text
Performative writing (see also: belles-lettres)
By graduate students for their advisors and committees
Doctoral dissertation, completed over a number of years, often in excess of 20,000 words in length
Master's thesis (in some regions referred to as a master's dissertation), often completed within a year and between 6,000 and 20,000 words in length.
Thesis or dissertation proposal
By undergraduate students for their instructors
Research paper; longer essay involving library research, 3000 to 6000 words in length
Book report
Exam essays
By instructors for students
Exam questions
Instructional pamphlet, or hand-out, or reading list
Presentations; usually short, often illustrated
Syllabus
Summaries of knowledge for researchers, students or general public
Annotated bibliography
Annotated catalogue, often of an individual or group's papers and/or library
Simplified graphical representation of knowledge; e.g. a map, or refining a display generated from a database. There will often be a 'key' or written work incorporated with the final work
Creating a timeline or chronological plan. There will often be a 'key' or written work incorporated with the final work
Devising a classification scheme; e.g. for animals, or newly arisen sub-cultures, or a radically new style of design
Encyclopedia entry or handbook chapter
Disseminating knowledge outside the academy
Call for papers
Documentary film script or TV script or radio script
Obituary
Opinion; an academic may sometimes be asked to give an expert written opinion, for use in a legal case before a court of law
Newspaper opinion article
Public speech or lecture
Review of a book, film, exhibition, event, etc.
Think-tank pamphlet, position paper, or briefing paper
Personal forms often for general public
These are acceptable to some academic disciplines, e.g. Cultural studies, Fine art, Feminist studies, Queer theory, Literary studies
Artist's book or chapbook
Autobiography
Belles-lettres; stylish or aesthetic writing on serious subjects, often with reference to one's personal experience
Commonplace book
Diary or weblog
Memoir; usually a short work, giving one's own memories of a famous person or event
Notebooks
Emotions in higher-education academic writing
Participating in higher education writing can entail high stakes. For instance, one's GPA may be influenced by writing performance in a class and the consequent grade received, potentially stirring negative emotions such as confusion and anxiety. Research on emotions and writing indicates that there is a relationship between writing identity and displaying emotions within an academic atmosphere. Instructors cannot simply read off a student's writing identity and prescribe the form it should take. The structure of higher education, particularly within universities, is in a state of continual evolution, shaping and developing student writing identities. This dynamic can contribute positively to one's academic writing identity in higher education. However, higher education tends not to value mistakes, which makes it difficult for students to develop an academic identity and can lead to a lack of confidence when submitting assignments. A student must learn to be confident enough to adapt and refine previous writing styles to succeed.
Academic writing can be seen as stressful, uninteresting, and difficult. In the university setting, these perceptions and the emotions they provoke can contribute to student dropout. However, fear and anxiety are less likely to take hold during academic writing development when self-efficacy is high. External factors, such as difficulty finding the time and space to complete assignments, can also diminish enjoyment of academic writing. Studies have shown that core members of a writing-focused "community of practice" report more positive experiences than those on its periphery. Overall, negative emotions, lack of confidence, and prescriptive notions about what an academic writing identity should resemble can hinder a student's ability to succeed.
Format
A commonly recognized format for presenting original research in the social and applied sciences is known as IMRD, an initialism that refers to the usual ordering of subsections:
Introduction (Overview of relevant research and objective of current study)
Method (Assumptions, questions, procedures described in replicable or at least reproducible detail)
Results (Presentation of findings; often includes visual displays of quantitative data such as charts and plots), and
Discussion (Analysis, Implications, Suggested next steps)
Standalone methods sections are atypical in presenting research in the humanities; other common formats in the applied and social sciences are IMRAD (which offers an "Analysis" section separate from the implications presented in the "Discussion" section) and IRDM (found in some engineering subdisciplines, which features Methods at the end of the document).
Other common sections in academic documents are:
Abstract
Acknowledgments
Indices
Bibliography
List of references
Appendix/Addendum, any addition to a document
See also
Academia
Academic authorship
Academic ghostwriting
Academic journal
Academic publishing
Author editing
Creative class
Criticism
Expository writing
Knowledge worker
Persuasive writing or rhetoric
Publishing
Research paper mill
Rhetorical device
Scientific writing
Scientific publishing
Scholarly method
Scholarly skywriting
Style guide
References
Further reading
General
C. Bazerman, J. Little, T. Chavkin, D. Fouquette, L. Bethel, and J. Garufis (2005). Writing across the curriculum. Parlor Press and WAC Clearinghouse. https://wac.colostate.edu/books/referenceguides/bazerman-wac/
C. Bazerman & D. Russell (1994). Landmark essays in writing across the curriculum. Davis, CA: Hermagoras Press.
Borg, Erik (2003). 'Discourse Community', English Language Teaching (ELT) Journal, Vol. 57, Issue 4, pp. 398–400
Coinam, David (2004). 'Concordancing Yourself: A Personal Exploration of Academic Writing', Language Awareness, Vol. 13, Issue 1, pp. 49–55
Goodall, H. Lloyd Jr. (2000). Writing Qualitative Inquiry: Self, Stories, and Academic Life (Walnut Creek, CA: Left Coast Press)
Johns, Ann M. (1997). Text, Role and Context: Developing Academic Literacies (Cambridge: Cambridge University Press)
King, Donald W., Carol Tenopir, Songphan Choemprayong, and Lei Wu (2009). 'Scholarly Journal Information Seeking and Reading Patterns of Faculty at Five U.S. Universities', Learned Publishing, Vol. 22, Issue 2, pp. 126–144
Kouritzin, Sandra G., Nathalie A. C. Piquemal, and Renee Norman, eds (2009). Qualitative Research: Challenging the Orthodoxies in Standard Academic Discourse(s) (New York: Routledge)
Lincoln, Yvonna S, and Norman K Denzin (2003). Turning Points in Qualitative Research: Tying Knots in a Handkerchief (Walnut Creek, CA; Oxford: AltaMira Press)
Luey, Beth (2010). Handbook for Academic Authors, 5th edn (Cambridge: Cambridge University Press)
Murray, Rowena, and Sarah Moore (2006). The Handbook of Academic Writing: A Fresh Approach (Maidenhead: Open University Press)
Nash, Robert J. (2004). Liberating Scholarly Writing: The Power of Personal Narrative (New York; London: Teachers College Press)
Paltridge, Brian (2004). 'Academic Writing', Language Teaching, Vol. 37, Issue 2, pp. 87–105
Pelias, Ronald J. (1999). Writing Performance: Poeticizing the Researcher's Body (Carbondale, IL: Southern Illinois University Press)
Prior, Paul A. (1998). Writing/Disciplinarity: A Sociohistoric Account of Literate Activity in the Academy (Mahwah, NJ; London: Lawrence Erlbaum)
Rhodes, Carl and Andrew D. Brown (2005). 'Writing Responsibly: Narrative Fiction and Organization Studies', The Organization: The Interdisciplinary Journal of Organizations and Society, Vol. 12, Issue 4, pp. 467–491
Richards, Janet C., and Sharon K. Miller (2005). Doing Academic Writing in Education: Connecting the Personal and the Professional (Mahwah, NJ: Lawrence Erlbaum)
The University of Sydney. (2019). Academic Writing.
Architecture, design and art
Crysler, C. Greig (2002). Writing Spaces: Discourses of Architecture, Urbanism and the Built Environment (London: Routledge)
Francis, Pat (2009). Inspiring Writing in Art and Design: Taking a Line for a Write (Bristol; Chicago: Intellect)
Frayling, Christopher (1993). 'Research in Art and Design', Royal College of Art Research Papers, Vol. 1, Issue 1, pp. 1–5
Piotrowski, Andrzej (2008). 'The Spectacle of Architectural Discourses', Architectural Theory Review, Vol. 13, Issue 2, pp. 130–144
Bibliography
Baldo, Shannon. "Elves and Extremism: the use of Fantasy in the Radical Environmentalist Movement." Young Scholars in Writing: Undergraduate Research in Writing and Rhetoric 7 (Spring 2010): 108–15. Print.
Greene, Stuart. "Argument as Conversation: The Role of Inquiry in Writing a Researched Argument." n. page. Print.
Kantz, Margaret. "Helping Students Use Textual Sources Persuasively." College English 52.1 (1990): 74–91. Print.
Porter, James. "Intertextuality and the Discourse Community." Rhetoric Review 5.1 (1986): 34–47. Print.
Writing | 0.768517 | 0.99323 | 0.763315 |
Isometry | In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". If the transformation is from a metric space to itself, it is a kind of geometric transformation known as a motion.
Introduction
Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space.
In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry;
the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space $M$ involves an isometry from $M$ into a quotient set of the space of Cauchy sequences on $M.$
The original space is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace.
Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
Definition
Let $X$ and $Y$ be metric spaces with metrics (e.g., distances) $d_X$ and $d_Y$. A map $f\colon X \to Y$ is called an isometry or distance-preserving map if for any $a, b \in X$,
$d_Y\!\left(f(a), f(b)\right) = d_X(a, b).$
An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d, i.e., $d(a, b) = 0$ if and only if $a = b$. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding.
A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse.
The inverse of a global isometry is also a global isometry.
Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y.
The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group.
There is also the weaker notion of path isometry or arcwise isometry:
A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective.
This term is often abridged to simply isometry, so one should take care to determine from context which type is intended.
Examples
Any reflection, translation and rotation is a global isometry on Euclidean spaces; a numerical check of this is sketched below. See also Euclidean group.
The map $x \mapsto |x|$ in $\mathbb{R}$ is a path isometry but not a (general) isometry. Note that unlike an isometry, this path isometry does not need to be injective.
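To make the first example concrete, the following sketch (illustrative only; the sample points, angle and shift are arbitrary choices, not drawn from any reference implementation) numerically checks that a rotation followed by a translation of the Euclidean plane preserves all pairwise distances:

```python
import numpy as np

def euclidean_isometry(points, angle, shift):
    """Apply a rotation by `angle` followed by a translation by `shift`
    to an array of 2D points; both operations are global isometries of R^2."""
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])
    return points @ rotation.T + shift

def pairwise_distances(points):
    """Return the matrix of Euclidean distances between all pairs of points."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0], [-3.0, 1.5]])
images = euclidean_isometry(points, angle=0.7, shift=np.array([2.0, -1.0]))

# Distances are preserved up to floating-point error, as the definition requires.
assert np.allclose(pairwise_distances(points), pairwise_distances(images))
```

Replacing the rotation matrix with a non-orthogonal matrix such as a shear makes the final assertion fail, which illustrates that not every invertible linear map is an isometry.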
Isometries between normed spaces
The following theorem is due to Mazur and Ulam: any surjective isometry between normed vector spaces over $\mathbb{R}$ is affine.
Definition: The midpoint of two elements $x$ and $y$ in a vector space is the vector $\tfrac{x + y}{2}$.
Linear isometry
Given two normed vector spaces $V$ and $W$, a linear isometry is a linear map $A\colon V \to W$ that preserves the norms:
$\|A v\| = \|v\|$ for all $v \in V.$
Linear isometries are distance-preserving maps in the above sense.
They are global isometries if and only if they are surjective.
In an inner product space, the above definition reduces to
$\langle v, v \rangle = \langle A v, A v \rangle$ for all $v,$ which is equivalent to saying that $A^{\dagger} A = \operatorname{Id}_V.$ This also implies that isometries preserve inner products, as
$\langle A u, A v \rangle = \langle u, A^{\dagger} A v \rangle = \langle u, v \rangle.$
Linear isometries are not always unitary operators, though, as those require additionally that $V = W$ and $A A^{\dagger} = \operatorname{Id}_V$ (i.e. the domain and codomain coincide and $A^{\dagger}$ defines a coisometry).
By the Mazur–Ulam theorem, any isometry of normed vector spaces over $\mathbb{R}$ is affine.
A linear isometry also necessarily preserves angles; therefore, a linear isometry transformation is a conformal linear transformation.
Examples
A linear map from $\mathbb{C}^n$ to itself is an isometry (for the dot product) if and only if its matrix is unitary.
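A minimal numerical illustration of this statement might look as follows; the particular unitary matrix and test vector are arbitrary choices made for the example:

```python
import numpy as np

# A 2x2 unitary matrix: a real rotation combined with a complex phase.
theta, phi = 0.4, 1.1
U = np.array([[np.cos(theta),                    -np.sin(theta)],
              [np.exp(1j * phi) * np.sin(theta),  np.exp(1j * phi) * np.cos(theta)]])

# Unitarity: U^dagger U equals the identity matrix.
assert np.allclose(U.conj().T @ U, np.eye(2))

# The associated linear map preserves the norm induced by the dot product.
rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
```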
Manifold
An isometry of a manifold is any (smooth) mapping of that manifold into itself, or into another manifold that preserves the notion of distance between points.
The definition of an isometry requires the notion of a metric on the manifold; a manifold with a (positive-definite) metric is a Riemannian manifold, one with an indefinite metric is a pseudo-Riemannian manifold. Thus, isometries are studied in Riemannian geometry.
A local isometry from one (pseudo-)Riemannian manifold to another is a map which pulls back the metric tensor on the second manifold to the metric tensor on the first. When such a map is also a diffeomorphism, such a map is called an isometry (or isometric isomorphism), and provides a notion of isomorphism ("sameness") in the category Rm of Riemannian manifolds.
Definition
Let $R = (M, g)$ and $R' = (M', g')$ be two (pseudo-)Riemannian manifolds, and let $f\colon R \to R'$ be a diffeomorphism. Then $f$ is called an isometry (or isometric isomorphism) if
$g = f^{*} g',$
where $f^{*} g'$ denotes the pullback of the rank (0, 2) metric tensor $g'$ by $f$.
Equivalently, in terms of the pushforward $f_{*}$, we have that for any two vector fields $v, w$ on $M$ (i.e. sections of the tangent bundle $TM$),
$g(v, w) = g'\!\left(f_{*} v, f_{*} w\right).$
If $f$ is a local diffeomorphism such that $g = f^{*} g'$, then $f$ is called a local isometry.
Properties
A collection of isometries typically form a group, the isometry group. When the group is a continuous group, the infinitesimal generators of the group are the Killing vector fields.
The Myers–Steenrod theorem states that every isometry between two connected Riemannian manifolds is smooth (differentiable). A second form of this theorem states that the isometry group of a Riemannian manifold is a Lie group.
Riemannian manifolds that have isometries defined at every point are called symmetric spaces.
Generalizations
Given a positive real number ε, an ε-isometry or almost isometry (also called a Hausdorff approximation) is a map $f\colon X \to Y$ between metric spaces such that
for $x, x' \in X$ one has $|d_Y(f(x), f(x')) - d_X(x, x')| < \varepsilon,$ and
for any point $y \in Y$ there exists a point $x \in X$ with $d_Y(y, f(x)) < \varepsilon.$
That is, an ε-isometry preserves distances to within ε and leaves no element of the codomain further than ε away from the image of an element of the domain. Note that ε-isometries are not assumed to be continuous.
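As an illustration, the following sketch checks the two conditions above on finite point samples; the function and variable names are hypothetical, and the example map (truncation to the integer part) is only one simple instance of an ε-isometry:

```python
import itertools
import math

def is_eps_isometry(f, X, Y, d_X, d_Y, eps):
    """Check whether f: X -> Y is an eps-isometry on finite samples X and Y.

    Condition 1: pairwise distances are distorted by less than eps.
    Condition 2: every point of Y lies within eps of some image point f(x).
    """
    distorts_little = all(
        abs(d_Y(f(a), f(b)) - d_X(a, b)) < eps
        for a, b in itertools.combinations(X, 2)
    )
    almost_onto = all(min(d_Y(y, f(x)) for x in X) < eps for y in Y)
    return distorts_little and almost_onto

# Example: truncating to the integer part maps a half-integer grid onto the
# integers, distorting distances by at most 0.5, so it is a 1-isometry here.
X = [k / 2 for k in range(-6, 7)]
Y = list(range(-3, 4))
d = lambda a, b: abs(a - b)
print(is_eps_isometry(math.floor, X, Y, d, d, eps=1.0))  # True
```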
The restricted isometry property characterizes nearly isometric matrices for sparse vectors.
Quasi-isometry is yet another useful generalization.
One may also define an element $a$ in an abstract unital C*-algebra to be an isometry:
$a$ is an isometry if and only if $a^{*} a = 1.$
Note that, as mentioned in the introduction, this is not necessarily a unitary element because one does not in general have that the left inverse is also a right inverse.
On a pseudo-Euclidean space, the term isometry means a linear bijection preserving magnitude. See also Quadratic spaces.
See also
Beckman–Quarles theorem
The second dual of a Banach space as an isometric isomorphism
Euclidean plane isometry
Flat (geometry)
Homeomorphism group
Involution
Isometry group
Motion (geometry)
Myers–Steenrod theorem
3D isometries that leave the origin fixed
Partial isometry
Scaling (geometry)
Semidefinite embedding
Space group
Symmetry in mathematics
Footnotes
References
Bibliography
Functions and mappings
Metric geometry
Symmetry
Equivalence (mathematics)
Riemannian geometry | 0.766988 | 0.995202 | 0.763308 |
Microscale thermophoresis | Microscale thermophoresis (MST) is a technology for the biophysical analysis of interactions between biomolecules. Microscale thermophoresis is based on the detection of a temperature-induced change in fluorescence of a target as a function of the concentration of a non-fluorescent ligand. The observed change in fluorescence is based on two distinct effects. On the one hand it is based on a temperature related intensity change (TRIC) of the fluorescent probe, which can be affected by binding events. On the other hand, it is based on thermophoresis, the directed movement of particles in a microscopic temperature gradient. Any change of the chemical microenvironment of the fluorescent probe, as well as changes in the hydration shell of biomolecules result in a relative change of the fluorescence detected when a temperature gradient is applied and can be used to determine binding affinities. MST allows measurement of interactions directly in solution without the need of immobilization to a surface (immobilization-free technology).
Applications
Affinity
between any kind of biomolecules including proteins, DNA, RNA, peptides, small molecules, fragments and ions
for interactions with high molecular weight complexes, large molecule assemblies, even with liposomes, vesicles, nanodiscs, nanoparticles and viruses
in any buffer, including serum and cell lysate
in competition experiments (for example with substrate and inhibitors)
Stoichiometry
Thermodynamic parameters
MST has been used to estimate the enthalpic and entropic contributions to biomolecular interactions.
Additional information
Sample property (homogeneity, aggregation, stability)
Multiple binding sites, cooperativity
Technology
MST is based on the quantifiable detection of a fluorescence change in a sample when a temperature change is applied. The fluorescence of a target molecule can be extrinsic or intrinsic (aromatic amino acids) and is altered in temperature gradients due to two distinct effects. On the one hand, there is the temperature related intensity change (TRIC), which describes the intrinsic property of fluorophores to change their fluorescence intensity as a function of temperature. The extent of the change in fluorescence intensity is affected by the chemical environment of the fluorescent probe, which can be altered in binding events due to conformational changes or proximity of ligands. On the other hand, MST is also based on the directed movement of molecules along temperature gradients, an effect termed thermophoresis. A spatial temperature difference ΔT leads to a change in molecule concentration in the region of elevated temperature, quantified by the Soret coefficient $S_T$: $c_\text{hot}/c_\text{cold} = \exp(-S_T\,\Delta T)$. Both TRIC and thermophoresis contribute to the recorded signal in MST measurements in the following way: $\frac{\partial}{\partial T}(cF) = c\,\frac{\partial F}{\partial T} + F\,\frac{\partial c}{\partial T}$. The first term in this equation, $c\,\partial F/\partial T$, describes TRIC as a change in fluorescence intensity (F) as a function of temperature (T), whereas the second term, $F\,\partial c/\partial T$, describes thermophoresis as the change in particle concentration (c) as a function of temperature. Thermophoresis depends on the interface between molecule and solvent. Under constant buffer conditions, thermophoresis probes the size, charge and solvation entropy of the molecules. The thermophoresis of a fluorescently labeled molecule A typically differs significantly from the thermophoresis of a molecule-target complex AT due to size, charge and solvation entropy differences. This difference in the molecule's thermophoresis is used to quantify the binding in titration experiments under constant buffer conditions.
The thermophoretic movement of the fluorescently labelled molecule is measured by monitoring the fluorescence distribution F inside a capillary. The microscopic temperature gradient is generated by an IR-laser, which is focused into the capillary and is strongly absorbed by water. The temperature of the aqueous solution in the laser spot is raised by ΔT = 1–10 K. Before the IR-laser is switched on, a homogeneous fluorescence distribution $F_\text{cold}$ is observed inside the capillary. When the IR-laser is switched on, two effects occur on the same time-scale, both contributing to the new fluorescence distribution $F_\text{hot}$. The thermal relaxation induces a binding-dependent drop in the fluorescence of the dye due to its local environment-dependent response to the temperature jump (TRIC). At the same time, molecules typically move from the locally heated region to the outer cold regions. The local concentration of molecules decreases in the heated region until it reaches a steady-state distribution.
While the mass diffusion D dictates the kinetics of depletion, $S_T$ determines the steady-state concentration ratio $c_\text{hot}/c_\text{cold} = \exp(-S_T\,\Delta T) \approx 1 - S_T\,\Delta T$ under a temperature increase ΔT. The normalized fluorescence $F_\text{norm} = F_\text{hot}/F_\text{cold}$ measures mainly this concentration ratio, in addition to TRIC $\partial F/\partial T$. In the linear approximation we find: $F_\text{norm} = 1 + \left(\frac{\partial F}{\partial T} - S_T\right)\Delta T$. Due to the linearity of the fluorescence intensity and the thermophoretic depletion, the normalized fluorescence from the unbound molecule $F_\text{norm}(A)$ and the bound complex $F_\text{norm}(AT)$ superpose linearly. By denoting x the fraction of molecules bound to targets, the changing fluorescence signal during the titration of target T is given by: $F_\text{norm} = (1-x)\,F_\text{norm}(A) + x\,F_\text{norm}(AT)$.
Quantitative binding parameters are obtained by using a serial dilution of the binding substrate. By plotting $F_\text{norm}$ against the logarithm of the different concentrations of the dilution series, a sigmoidal binding curve is obtained. This binding curve can be fitted directly with the nonlinear solution of the law of mass action, yielding the dissociation constant $K_D$.
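In practice this fit is carried out with standard non-linear least-squares software. The sketch below is only a schematic illustration: the target concentration, ligand dilution series, signal levels and noise are all invented for the example, and the model is the usual quadratic solution of the law of mass action rather than code from any particular MST instrument:

```python
import numpy as np
from scipy.optimize import curve_fit

def fraction_bound(ligand, kd, target):
    """Fraction of fluorescent target bound, from the law of mass action
    (quadratic solution, valid when target depletion is not negligible)."""
    s = ligand + target + kd
    return (s - np.sqrt(s**2 - 4.0 * ligand * target)) / (2.0 * target)

def f_norm(ligand, kd, f_unbound, f_bound, target=5e-9):
    """Normalized fluorescence as a linear superposition of the unbound
    and bound states, weighted by the bound fraction x."""
    x = fraction_bound(ligand, kd, target)
    return (1.0 - x) * f_unbound + x * f_bound

# Hypothetical 16-point serial dilution of the ligand (molar units)
# and the corresponding noisy normalized-fluorescence readings.
ligand = 1e-3 / 2.0 ** np.arange(16)
rng = np.random.default_rng(1)
data = f_norm(ligand, kd=2e-7, f_unbound=0.90, f_bound=0.80) + rng.normal(0, 0.002, 16)

popt, pcov = curve_fit(f_norm, ligand, data, p0=[1e-6, 0.9, 0.8], bounds=(0, np.inf))
print(f"fitted KD ≈ {popt[0]:.2e} M")
```

For ligand concentrations far above the target concentration, the same model reduces to the simpler hyperbolic form x = [L]/([L] + K_D).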
References
Biochemistry methods
Protein methods
Biophysics
Molecular biology
Laboratory techniques | 0.779753 | 0.978899 | 0.763299 |
Translational research | Translational research (also called translation research, translational science, or, when the context is clear, simply translation) is research aimed at translating (converting) results in basic research into results that directly benefit humans. The term is used in science and technology, especially in biology and medical science. As such, translational research forms a subset of applied research.
The term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities. In the context of biomedicine, translational research is also known as bench to bedside. In the field of education, it is defined as research which translates concepts to classroom practice.
Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines. Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature.
Although translational research is relatively new, there are now several major research centers focused on it. In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards. Furthermore, some universities acknowledge translational research as its own field in which to study for a PhD or graduate certificate.
Definitions
Translational research is aimed at solving particular problems; the term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities.
In the field of education, it is defined for school-based education by the Education Futures Collaboration (www.meshguides.org) as research which translates concepts to classroom practice. Examples of translational research are commonly found in education subject association journals and in the MESHGuides which have been designed for this purpose.
In bioscience, translational research is a term often used interchangeably with translational medicine or translational science or bench to bedside. The adjective "translational" refers to the "translation" (the term derives from the Latin for "carrying over") of basic scientific findings in a laboratory setting into potential treatments for disease.
Biomedical translational research adopts a scientific investigation/enquiry into a given problem facing medical/health practices: it aims to "translate" findings in fundamental research into practice. In the field of biomedicine, it is often called "translational medicine", defined by the European Society for Translational Medicine (EUSTM) as "an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside and community", from laboratory experiments through clinical trials, to therapies, to point-of-care patient applications. The end point of translational research in medicine is the production of a promising new treatment that can be used clinically. Translational research arose in response to the long time it often takes for a medical discovery to be applied in practice within a health system. For this reason, translational research tends to be carried out most effectively in dedicated university science departments or in dedicated, stand-alone research centers. Since 2009, the field has had specialized journals, the American Journal of Translational Research and Translational Research, dedicated to translational research and its findings.
Translational research in biomedicine is broken down into different stages. In a two-stage model, T1 research refers to the "bench-to-bedside" enterprise of translating knowledge from the basic sciences into the development of new treatments and T2 research refers to translating the findings from clinical trials into everyday practice, although this model is actually referring to the two "roadblocks" T1 and T2. Waldman et al. propose a scheme going from T0 to T5. T0 is laboratory (before human) research. In T1-translation, new laboratory discoveries are first translated to human application, which includes phase I & II clinical trials. In T2-translation, candidate health applications progress through clinical development to engender the evidence base for integration into clinical practice guidelines. This includes phase III clinical trials. In T3-translation, dissemination into community practices happens. T4-translation seeks to (1) advance scientific knowledge to paradigms of disease prevention, and (2) move health practices established in T3 into population health impact. Finally, T5-translation focuses on improving the wellness of populations by reforming suboptimal social structures.
Comparison to basic research or applied research
Basic research is the systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and is performed without thought of practical ends. It results in general knowledge and understanding of nature and its laws. For instance, basic biomedical research focuses on studies of disease processes using, for example, cell cultures or animal models without consideration of the potential utility of that information.
Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses the research communities' accumulated theories, knowledge, methods, and techniques, for a specific, often state, business, or client-driven purpose. Translational research forms a subset of applied research. In life-sciences, this was evidenced by a citation pattern between the applied and basic sides in cancer research that appeared around 2000. In fields such as psychology, translational research is seen as a bridging between applied research and basic research types. The field of psychology defines translational research as the use of basic research to develop and test applications, such as treatment.
Challenges and criticisms
Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines, and the importance of basic research in improving our understanding of basic biological facts (e.g. the function and structure of DNA) that go on to transform applied medical research. Examples of failed translational research in the pharmaceutical industry include the failure of anti-aβ therapeutics in Alzheimer's disease. Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature.
Translational research-facilities in life-sciences
In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards.
The National Center for Advancing Translational Sciences (NCATS) was established on December 23, 2011.
Although translational research is relatively new, it is being recognized and embraced globally. Some major centers for translational research include:
About 60 hubs of the Clinical and Translational Science Awards program.
Texas Medical Center, Houston, Texas, United States
Translational Research Institute (Australia), Brisbane, Queensland, Australia.
University of Rochester, Rochester, New York, United States has a dedicated Clinical and Translational Science Institute
Stanford University Medical Center, Stanford, California, United States.
Translational Genomics Research Institute, Phoenix, Arizona, United States.
Maine Medical Center in Portland, Maine, United States has a dedicated translational research institute.
Scripps Research Institute, Florida, United States, has a dedicated translational research institute.
UC Davis Clinical and Translational Science Center, Sacramento, California
Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, Pennsylvania
Weill Cornell Medicine has a Clinical and Translational Science Center.
Hansjörg Wyss Institute for Biologically Inspired Engineering at Harvard University in Boston, Massachusetts, United States.
Additionally, translational research is now acknowledged by some universities as a dedicated field in which to study a PhD or graduate certificate in a medical context. These institutions currently include Monash University in Victoria, Australia; the University of Queensland Diamantina Institute in Brisbane, Australia; Duke University in Durham, North Carolina, United States; Creighton University in Omaha, Nebraska; Emory University in Atlanta, Georgia; and The George Washington University in Washington, D.C.
Industry and academic interactions to promote translational science initiatives have been carried out by various global organizations such as the European Commission, GlaxoSmithKline and the Novartis Institute for Biomedical Research.
See also
Biological engineering
Clinical and Translational Science (journal)
Clinical trials
Implementation research
Personalized medicine
Systems biology
Translational research informatics
Research practice gap (Knowledge transfer)
References
External links
Translational Research Institute
NIH Roadmap
American Journal of Translational Research
Center for Comparative Medicine and Translational Research
OSCAT2012: Conference on translational medicine
Medical research
Research
Nursing research | 0.77062 | 0.990496 | 0.763296 |
Latent heat | Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition, like melting or condensation.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics.
When a body is heated at constant temperature, by thermal radiation in a microwave field for example, it may expand by an amount described by its latent heat with respect to volume (latent heat of expansion), or increase its pressure by an amount described by its latent heat with respect to pressure.
Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas.
In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor.
If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.
The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.
Meteorology
In meteorology, latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has been commonly measured with the Bowen ratio technique, or more recently since the mid-1900s by the eddy covariance method.
History
Background
Evaporative cooling
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. No heat was withdrawn from the ether, yet the ether boiled and its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching 7 °F (−14 °C). Another thermometer showed that the room temperature was constant at 65 °F (18 °C). In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day."
Latent heat
The English word latent comes from Latin latēns, meaning lying hidden. The term latent heat was introduced into calorimetry around 1750 by Joseph Black, commissioned by producers of Scotch whisky in search of ideal quantities of fuel and water for their distilling process to study system changes, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath.
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate if heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone.
Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which was, say, melted from ice, whereas the other was heated from merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than the other sample, thus melting the ice absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied, thus it was "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required).
Quantifying latent heat
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice now stored, as it were, an additional 8 “degrees of heat” in a form which Black called sensible heat, manifested as temperature, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were, so to speak, stored as latent heat, not manifesting itself. (In modern thermodynamics the idea of heat contained has been abandoned, so sensible heat and latent heat have been redefined. They do not reside anywhere.)
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”).
Finally Black increased the temperature of and vaporized respectively two equal masses of water through even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale.
James Prescott Joule
Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and the sensible heat as an energy that was indicated by the thermometer, relating the latter to thermal energy.
Specific latent heat
A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property:
$L = \frac{Q}{m}.$
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.
From this definition, the latent heat for a given mass of a substance is calculated by
$Q = m L$ (a short worked example is given after the list below),
where:
Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU),
m is the mass of the substance (in kg or in lb), and
L is the specific latent heat for a particular substance (in kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization.
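For example, taking the commonly tabulated approximate values for water of about 334 kJ/kg for fusion and 2,257 kJ/kg for vaporization, the defining relation gives the following energies; the masses and the function name in this sketch are chosen only for illustration:

```python
# Approximate specific latent heats of water (kJ per kg).
L_FUSION_WATER = 334.0         # melting ice at 0 degrees C
L_VAPORIZATION_WATER = 2257.0  # boiling water at 100 degrees C

def latent_heat_energy(mass_kg, specific_latent_heat_kj_per_kg):
    """Energy Q = m * L (in kJ) absorbed or released during a phase change."""
    return mass_kg * specific_latent_heat_kj_per_kg

# Melting 0.5 kg of ice versus vaporizing the same mass of water:
print(latent_heat_energy(0.5, L_FUSION_WATER))        # 167.0 kJ
print(latent_heat_energy(0.5, L_VAPORIZATION_WATER))  # 1128.5 kJ
```

The large ratio between the two results reflects why vaporization dominates the energy budget of processes such as evaporative cooling.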
Table of specific latent heats
Specific latent heats and change-of-phase temperatures (at standard pressure) are tabulated for many common fluids and gases. For water, for example, the specific latent heat of fusion is about 334 kJ/kg at its melting point of 0 °C, and the specific latent heat of vaporization is about 2,257 kJ/kg at its boiling point of 100 °C.
Specific latent heat for condensation of water in clouds
The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function:
$L_\text{water}(T) \approx \left(2500.8 - 2.36\,T + 0.0016\,T^{2} - 0.00006\,T^{3}\right)~\text{J/g},$
where the temperature $T$ is taken to be the numerical value in °C.
For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function:
$L_\text{ice}(T) \approx \left(2834.1 - 0.29\,T - 0.004\,T^{2}\right)~\text{J/g}.$
Variation with temperature (or pressure)
As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
See also
Bowen ratio
Eddy covariance flux (eddy correlation, eddy flux)
Sublimation (physics)
Specific heat capacity
Enthalpy of fusion
Enthalpy of vaporization
Ton of refrigeration; the power required to freeze or melt 2000 lb of water in 24 hours
Notes
References
Thermochemistry
Atmospheric thermodynamics
Thermodynamics
Physical phenomena | 0.765289 | 0.99738 | 0.763284 |
MARTINI | Martini is a coarse-grained (CG) force field developed by Marrink and coworkers at the University of Groningen, initially developed in 2004 for molecular dynamics simulation of lipids, later (2007) extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parametrized with the aim of reproducing thermodynamic properties.
In 2021, a new version of the force field was published, dubbed Martini 3.
Overview
For the Martini force field 4 bead categories have been defined: Q (charged), P (polar), N (nonpolar), and C (apolar). These bead types are in turn split in 4 or 5 different levels, giving a total of 20 beadtypes. For the interactions between the beads, 10 different interaction levels are defined (O-IX). The beads can be used at normal size (4:1 mapping), S-size (small, 3:1 mapping) or T-size (tiny, 2:1 mapping). The S-particles are mainly used in ring structures whereas the T-particles are currently used in nucleic acids only. Bonded interactions (bonds, angles, dihedrals, and impropers) are derived from atomistic simulations of crystal structures.
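The 4:1 mapping can be pictured as replacing each group of roughly four heavy atoms with a single interaction site, for example at the group's centre of mass. The following sketch is only an illustration of that idea; the atom grouping, coordinates and function name are invented, and actual Martini mappings are defined per molecule in the published topologies:

```python
import numpy as np

def map_to_bead(atom_positions, atom_masses):
    """Collapse a group of heavy atoms into one coarse-grained bead
    located at the group's centre of mass (one possible mapping choice)."""
    positions = np.asarray(atom_positions, dtype=float)
    masses = np.asarray(atom_masses, dtype=float)
    return (positions * masses[:, None]).sum(axis=0) / masses.sum()

# Hypothetical butane-like fragment: four carbons mapped onto a single apolar bead.
carbons = [[0.00, 0.0, 0.0],
           [0.15, 0.0, 0.0],
           [0.30, 0.1, 0.0],
           [0.45, 0.1, 0.0]]
bead = map_to_bead(carbons, atom_masses=[12.011] * 4)
print(bead)  # centre of mass of the four-carbon group, in nm
```

The smaller S- and T-beads follow the same idea, with roughly three or two heavy atoms per interaction site.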
Use
The Martini force field has become one of the most used coarse grained force fields in the field of molecular dynamics simulations for biomolecules. The original 2004 and 2007 papers have been cited 1850 and 3400 times, respectively. The force field has been implemented in three major simulation codes: GROningen MAchine for Chemical Simulations (GROMACS), GROningen MOlecular Simulation (GROMOS), and Nanoscale Molecular Dynamics (NAMD). Notable successes are simulations of the clustering behavior of syntaxin-1A, the simulations of the opening of mechanosensitive channels (MscL) and the simulation of the domain partitioning of membrane peptides.
Parameter sets
Lipids
The initial papers contained parameters for water, simple alkanes, organic solvents, surfactants, a wide range of lipids and cholesterol. They semiquantitatively reproduce the phase behavior of bilayers, along with other bilayer properties and more complex bilayer behavior.
Proteins
Compatible parameters for proteins were introduced by Monticelli et al. Secondary structure elements, like alpha helices and beta sheets (β-sheets), are constrained. Martini proteins are often simulated in combination with an elastic network, such as Elnedyn, to maintain the overall structure. However, the use of the elastic network restricts the use of the Martini force field for the study of large conformational changes (e.g. folding). The GōMartini approach introduced by Poma et al. removes this limitation.
Carbohydrates
Compatible parameters were released in 2009.
Nucleic acids
Compatible parameters were released for DNA in 2015 and RNA in 2017.
Other
Parameters for different other molecules, including carbon nanoparticles, ionic liquids, and a number of polymers, are available from the Martini website.
See also
GROMACS
VOTCA
Comparison of software for molecular mechanics modeling
Comparison of force field implementations
References
External links
Force fields (chemistry) | 0.775879 | 0.983763 | 0.763281 |
Biotechnology | Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services.
The term biotechnology was first used by Károly Ereky in 1919 to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances.
Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another and, consequently, creating new traits or modifying existing ones.
Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese.
The applications of biotechnology are diverse and have led to the development of essential products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites.
Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields.
Definition
The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock. As per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology.
Biotechnology can also be described as laboratory research and development, supported by bioinformatics, that explores, extracts, exploits, and produces from living organisms and other sources of biomass by means of biochemical engineering. In this view, high value-added products can be planned (for example, reproduced by biosynthesis), forecasted, formulated, developed, manufactured, and marketed with the aim of sustainable operations (recovering the large initial investment in R & D) and of gaining durable patent rights (exclusive rights for sales, which first require national and international approval of the results of animal and human experiments, especially in the pharmaceutical branch of biotechnology, to prevent undetected side-effects or safety concerns with the products). The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology.
By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.
History
Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.
Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology.
Early biotechnology also included the fermentation of beer. These processes were introduced in early Mesopotamia, Egypt, China and India, and still use the same basic biological methods. In brewing, malted grains (containing enzymes) convert starch from grains into sugar, and specific yeasts are then added to produce beer. In this process, carbohydrates in the grains break down into alcohols, such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form.
Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of science to change species. These accounts contributed to Darwin's theory of natural selection.
For thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.
In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, the fermentation of corn starch by Clostridium acetobutylicum, to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.
Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led to the purification of the antibiotic compound formed by the mold by Howard Florey, Ernst Boris Chain and Norman Heatley – to form what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.
The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium).
The MOSFET was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET, where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
A factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products.
Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production.
Examples
Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics, vegetable oil, biofuels), and environmental uses.
For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology, for example:
Bioinformatics (or "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization as well as analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale". Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector. A toy example of this kind of computational analysis is sketched after this list.
Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch of biotechnology is used mostly by the refining and combustion industries, principally for the production of bio-oils from photosynthetic micro-algae.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the green revolution, a platform for eradicating world hunger by using technologies that enable the production of plants that are more fertile and more resistant to biotic and abiotic stress, and that promote the application of environmentally friendly fertilizers and the use of biopesticides; it is mainly focused on the development of agriculture. On the other hand, some uses of green biotechnology involve microorganisms to clean up and reduce waste.
Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, siRNA and diagnostic tests.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods.
"Yellow biotechnology" refers to the use of biotechnology in food production (food industry), for example in making wine (winemaking), cheese (cheesemaking), and beer (brewing) by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches.
Gray biotechnology is dedicated to environmental applications, and is focused on the maintenance of biodiversity and the removal of pollutants.
Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist the extreme environmental conditions of arid regions; it is also related to innovation, the creation of agricultural techniques and the management of resources.
Violet biotechnology is related to legal, ethical and philosophical issues surrounding biotechnology.
Microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity (the space bioeconomy).
Dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and crops.
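As a toy illustration of the kind of computational analysis referred to in the bioinformatics entry above, the following Python sketch computes the GC content and reverse complement of a short, made-up DNA sequence; the sequence and the function names are purely illustrative and are not taken from any particular genomics library.
# Minimal, self-contained sketch of a routine bioinformatics calculation.
def gc_content(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq.upper()))

dna = "ATGGCGTACGTTAGC"  # made-up example sequence
print("GC content:", round(gc_content(dna), 3))
print("Reverse complement:", reverse_complement(dna))
Real bioinformatics pipelines apply the same idea, programmatic manipulation of sequence data, at the scale of whole genomes and proteomes.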
Medicine
In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening). In 2021, nearly 40% of the total company value of pharmaceutical biotech companies worldwide was attributable to companies active in oncology, with neurology and rare diseases being the other two big application areas.
Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs. Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup.
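As a rough sketch of the kind of correlation pharmacogenomics looks for, the following Python snippet groups a small, entirely hypothetical set of patients by their genotype at a single made-up variant and compares the average drug-response score between the groups; real studies use much larger cohorts and formal statistical tests.
# Hypothetical data: (genotype at one variant, measured drug-response score).
patients = [
    ("AA", 0.82), ("AA", 0.77), ("AG", 0.55),
    ("AG", 0.60), ("GG", 0.31), ("GG", 0.28),
]

by_genotype = {}
for genotype, score in patients:
    by_genotype.setdefault(genotype, []).append(score)

for genotype in sorted(by_genotype):
    scores = by_genotype[genotype]
    mean = sum(scores) / len(scores)
    print(f"{genotype}: mean response {mean:.2f} (n={len(scores)})")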
Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology – biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology, and, as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.
Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.
Agriculture
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology.
Examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.
Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops increased by a factor of 94. 10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially in 29 countries such as the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.
Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato. To date, genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
GM crops also provide a number of ecological benefits, if not used in excess. Insect-resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.
Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. Transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in India and other countries.
Industrial
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy.
Synthetic biology is considered one of the essential cornerstones in industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. Jointly biotechnology and synthetic biology play a crucial role in generating cost-effective products with nature-friendly features by using bio-based production instead of fossil-based. Synthetic biology can be used to engineer model microorganisms, such as Escherichia coli, by genome editing tools to enhance their ability to produce bio-based products, such as bioproduction of medicines and biofuels. For instance, E. coli and Saccharomyces cerevisiae in a consortium could be used as industrial microbes to produce precursors of the chemotherapeutic agent paclitaxel by applying the metabolic engineering in a co-culture approach to exploit the benefits from the two microbes.
Another example of synthetic biology applications in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli by CRISPR and CRISPRi systems toward the production of a chemical known as 1,4-butanediol, which is used in fiber manufacturing. To produce 1,4-butanediol, the authors altered the metabolic regulation of Escherichia coli with CRISPR, inducing a point mutation in the gltA gene, knocking out the sad gene, and knocking in six genes (cat1, sucD, 4hbd, cat2, bld, and bdh), whereas the CRISPRi system was used to knock down three competing genes (gabD, ybgC, and tesB) that affect the biosynthesis pathway of 1,4-butanediol. Consequently, the yield of 1,4-butanediol significantly increased from 0.9 to 1.8 g/L.
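The engineering steps and yields described above can be summarized in a few lines of code; the Python sketch below simply tabulates the reported edits and computes the roughly two-fold improvement in 1,4-butanediol titer (the gene lists and titers are those quoted in the text).
# Edits reported for the engineered E. coli 1,4-butanediol strain (as quoted above).
edits = {
    "CRISPR point mutation": ["gltA"],
    "CRISPR knockout": ["sad"],
    "CRISPR knock-in": ["cat1", "sucD", "4hbd", "cat2", "bld", "bdh"],
    "CRISPRi knockdown": ["gabD", "ybgC", "tesB"],
}
titer_before_g_per_l = 0.9
titer_after_g_per_l = 1.8

for kind, genes in edits.items():
    print(f"{kind}: {', '.join(genes)}")
fold = titer_after_g_per_l / titer_before_g_per_l
print(f"Titer improvement: {fold:.1f}-fold ({titer_before_g_per_l} -> {titer_after_g_per_l} g/L)")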
Environmental
Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation. The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or a hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology; whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.
Many cities have installed CityTrees, which use biotechnology to filter pollutants from urban atmospheres.
Regulation
The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ.
Database for the GMOs used in the EU
The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union. The information is provided in English.
Learning
In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges.
References and notes
External links
What is Biotechnology? – A curated collection of resources about the people, places and technologies that have enabled biotechnology
Miller–Urey experiment
The Miller–Urey experiment (or Miller experiment) was an experiment in chemical synthesis carried out in 1952 that simulated the conditions thought at the time to be present in the atmosphere of the early, prebiotic Earth. It is seen as one of the first successful experiments demonstrating the synthesis of organic compounds from inorganic constituents in an origin of life scenario. The experiment used methane (CH4), ammonia (NH3), and hydrogen (H2) in a 2:2:1 ratio, together with water (H2O). Applying an electric arc (simulating lightning) resulted in the production of amino acids.
It is regarded as a groundbreaking experiment, and the classic experiment investigating the origin of life (abiogenesis). It was performed in 1952 by Stanley Miller, supervised by Nobel laureate Harold Urey at the University of Chicago, and published the following year. At the time, it supported Alexander Oparin's and J. B. S. Haldane's hypothesis that the conditions on the primitive Earth favored chemical reactions that synthesized complex organic compounds from simpler inorganic precursors.
After Miller's death in 2007, scientists examining sealed vials preserved from the original experiments were able to show that more amino acids were produced in the original experiment than Miller was able to report with paper chromatography. While evidence suggests that Earth's prebiotic atmosphere might have typically had a composition different from the gas used in the Miller experiment, prebiotic experiments continue to produce racemic mixtures of simple-to-complex organic compounds, including amino acids, under varying conditions. Moreover, researchers have shown that transient, hydrogen-rich atmospheres – conducive to Miller-Urey synthesis – would have occurred after large asteroid impacts on early Earth.
History
Foundations of organic synthesis and the origin of life
Until the 19th century, there was considerable acceptance of the theory of spontaneous generation, the idea that "lower" animals, such as insects or rodents, arose from decaying matter. However, several experiments in the 19th century – particularly Louis Pasteur's swan neck flask experiment in 1859 – disproved the theory that life arose from decaying matter. Charles Darwin published On the Origin of Species that same year, describing the mechanism of biological evolution. While Darwin never publicly wrote about the first organism in his theory of evolution, in a letter to Joseph Dalton Hooker, he speculated: "But if (and oh what a big if) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, light, heat, electricity etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes [...]"
At this point, it was known that organic molecules could be formed from inorganic starting materials, as Friedrich Wöhler had described the synthesis of urea from ammonium cyanate (the Wöhler synthesis) in 1828. Several other early seminal works in the field of organic synthesis followed, including Alexander Butlerov's synthesis of sugars from formaldehyde and Adolph Strecker's synthesis of the amino acid alanine from acetaldehyde, ammonia, and hydrogen cyanide. In 1913, Walther Löb synthesized amino acids by exposing formamide to silent electric discharge, so scientists were beginning to produce the building blocks of life from simpler molecules, but these efforts were not intended to simulate any prebiotic scheme, nor were they considered relevant to origin of life questions.
But the scientific literature of the early 20th century contained speculations on the origin of life. In 1903, physicist Svante Arrhenius hypothesized that the first microscopic forms of life, driven by the radiation pressure of stars, could have arrived on Earth from space in the panspermia hypothesis. In the 1920s, Leonard Troland wrote about a primordial enzyme that could have formed by chance in the primitive ocean and catalyzed reactions, and Hermann J. Muller suggested that the formation of a gene with catalytic and autoreplicative properties could have set evolution in motion. Around the same time, Alexander Oparin's and J. B. S. Haldane's "Primordial soup" ideas were emerging, which hypothesized that a chemically-reducing atmosphere on early Earth would have been conducive to organic synthesis in the presence of sunlight or lightning, gradually concentrating the ocean with random organic molecules until life emerged. In this way, frameworks for the origin of life were coming together, but by the mid-20th century these hypotheses still lacked direct experimental evidence.
Stanley Miller and Harold Urey
At the time of the Miller–Urey experiment, Harold Urey was a Professor of Chemistry at the University of Chicago who had a distinguished career, including receiving the Nobel Prize in Chemistry in 1934 for his isolation of deuterium and leading efforts to use gaseous diffusion for uranium isotope enrichment in support of the Manhattan Project. In 1952, Urey postulated that the high temperatures and energies associated with large impacts in Earth's early history would have provided an atmosphere of methane (CH4), water (H2O), ammonia (NH3), and hydrogen (H2), creating the reducing environment necessary for the Oparin-Haldane "primordial soup" scenario.
Stanley Miller arrived at the University of Chicago in 1951 to pursue a PhD under nuclear physicist Edward Teller, another prominent figure in the Manhattan Project. Miller began to work on how different chemical elements were formed in the early universe, but, after a year of minimal progress, Teller was to leave for California to establish Lawrence Livermore National Laboratory and further nuclear weapons research. Miller, having seen Urey lecture on his 1952 paper, approached him about the possibility of a prebiotic synthesis experiment. While Urey initially discouraged Miller, he agreed to allow Miller to try for a year. By February 1953, Miller had mailed a manuscript as sole author reporting the results of his experiment to Science. Urey refused to be listed on the manuscript because he believed his status would cause others to underappreciate Miller's role in designing and conducting the experiment and so encouraged Miller to take full credit for the work. Despite this, the setup is still most commonly referred to by both their names. After not hearing from Science for a few weeks, a furious Urey wrote to the editorial board demanding an answer, stating, "If Science does not wish to publish this promptly we will send it to the Journal of the American Chemical Society." Miller's manuscript was eventually published in Science in May 1953.
Experiment
In the original 1952 experiment, methane (CH4), ammonia (NH3), and hydrogen (H2) were all sealed together in a 2:2:1 ratio (1 part H2) inside a sterile 5-L glass flask connected to a 500-mL flask half-full of water (H2O). The gas chamber was intended to represent Earth's prebiotic atmosphere, while the water simulated an ocean. The water in the smaller flask was boiled such that water vapor entered the gas chamber and mixed with the "atmosphere". A continuous electrical spark was discharged between a pair of electrodes in the larger flask. The spark passed through the mixture of gases and water vapor, simulating lightning. A condenser below the gas chamber allowed aqueous solution to accumulate into a U-shaped trap at the bottom of the apparatus, which was sampled.
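To make the 2:2:1 CH4:NH3:H2 ratio concrete, the short Python sketch below converts it into mole fractions and partial pressures for an assumed total pressure; the specific pressure value is an illustrative assumption, not a figure from Miller's paper.
# Mole fractions and partial pressures for the 2:2:1 CH4:NH3:H2 mixture.
ratio = {"CH4": 2, "NH3": 2, "H2": 1}
total_parts = sum(ratio.values())
total_pressure_atm = 1.0  # assumed nominal total pressure, for illustration only

for gas, parts in ratio.items():
    x = parts / total_parts  # mole fraction
    print(f"{gas}: mole fraction {x:.2f}, partial pressure {x * total_pressure_atm:.2f} atm")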
After a day, the solution that had collected at the trap was pink, and after a week of continuous operation the solution was deep red and turbid, which Miller attributed to organic matter adsorbed onto colloidal silica. The boiling flask was then removed, and mercuric chloride (a poison) was added to prevent microbial contamination. The reaction was stopped by adding barium hydroxide and sulfuric acid, and evaporated to remove impurities. Using paper chromatography, Miller identified five amino acids present in the solution: glycine, α-alanine and β-alanine were positively identified, while aspartic acid and α-aminobutyric acid (AABA) were less certain, due to the spots being faint.
Materials and samples from the original experiments remained in 2017 under the care of Miller's former student, Jeffrey Bada, a professor at UCSD's Scripps Institution of Oceanography who also conducts origin of life research. The apparatus used to conduct the experiment has been on display at the Denver Museum of Nature and Science.
Chemistry of experiment
In 1957 Miller published research describing the chemical processes occurring inside his experiment. Hydrogen cyanide (HCN) and aldehydes (e.g., formaldehyde) were demonstrated to form as intermediates early on in the experiment due to the electric discharge. This agrees with current understanding of atmospheric chemistry, as HCN can generally be produced from reactive radical species in the atmosphere that arise when CH4 and nitrogen break apart under ultraviolet (UV) light. Similarly, aldehydes can be generated in the atmosphere from radicals resulting from CH4 and H2O decomposition and other intermediates like methanol. Several energy sources in planetary atmospheres can induce these dissociation reactions and subsequent hydrogen cyanide or aldehyde formation, including lightning, ultraviolet light, and galactic cosmic rays.
For example, here is a set of photochemical reactions of species in the Miller-Urey atmosphere that can result in formaldehyde:
H2O + hv → H + OH
CH4 + OH → CH3 + H2O
CH3 + OH → CH3OH
CH3OH + hv → CH2O (formaldehyde) + H2
A photochemical path to HCN from NH3 and CH4 is:
NH3 + hv → NH2 + H
NH2 + CH4 → NH3 + CH3
NH2 + CH3 → CH5N
CH5N + hv → HCN + 2H2
Other active intermediate compounds (acetylene, cyanoacetylene, etc.) have been detected in the aqueous solution of Miller–Urey-type experiments, but the immediate HCN and aldehyde production, the production of amino acids accompanying the plateau in HCN and aldehyde concentrations, and slowing of amino acid production rate during HCN and aldehyde depletion provided strong evidence that Strecker amino acid synthesis was occurring in the aqueous solution.
Strecker synthesis describes the reaction of an aldehyde, ammonia, and HCN to a simple amino acid through an aminoacetonitrile intermediate:
CH2O + HCN + NH3 → NH2-CH2-CN (aminoacetonitrile) + H2O
NH2-CH2-CN + 2H2O → NH3 + NH2-CH2-COOH (glycine)
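A simple sanity check on the two Strecker steps written above is that every element must balance across each reaction. The Python sketch below counts C, H, N and O in each species (with the formulas written out by hand) and confirms that both steps conserve atoms; it is only a bookkeeping illustration, not a model of the chemistry.
from collections import Counter

# Hand-written element counts for each species in the two Strecker steps above.
species = {
    "CH2O": {"C": 1, "H": 2, "O": 1},
    "HCN": {"C": 1, "H": 1, "N": 1},
    "NH3": {"N": 1, "H": 3},
    "H2O": {"H": 2, "O": 1},
    "aminoacetonitrile": {"C": 2, "H": 4, "N": 2},  # NH2-CH2-CN
    "glycine": {"C": 2, "H": 5, "N": 1, "O": 2},    # NH2-CH2-COOH
}

def atoms(side):
    """Total element counts for a list of (coefficient, species) pairs."""
    total = Counter()
    for coeff, name in side:
        for element, count in species[name].items():
            total[element] += coeff * count
    return total

reactions = [
    ([(1, "CH2O"), (1, "HCN"), (1, "NH3")], [(1, "aminoacetonitrile"), (1, "H2O")]),
    ([(1, "aminoacetonitrile"), (2, "H2O")], [(1, "NH3"), (1, "glycine")]),
]
for left, right in reactions:
    print("balanced" if atoms(left) == atoms(right) else "not balanced")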
Furthermore, water and formaldehyde can react via Butlerov's reaction to produce various sugars like ribose.
The experiments showed that simple organic compounds, including the building blocks of proteins and other macromolecules, can abiotically be formed from gases with the addition of energy.
Related experiments and follow-up work
Contemporary experiments
There were a few similar spark discharge experiments contemporaneous with Miller-Urey. An article in The New York Times (March 8, 1953) titled "Looking Back Two Billion Years" describes the work of Wollman M. MacNevin at Ohio State University, before the Miller Science paper was published in May 1953. MacNevin was passing 100,000V sparks through methane and water vapor and produced "resinous solids" that were "too complex for analysis." Furthermore, K. A. Wilde submitted a manuscript to Science on December 15, 1952, before Miller submitted his paper to the same journal in February 1953. Wilde's work, published on July 10, 1953, used voltages up to only 600V on a binary mixture of carbon dioxide (CO2) and water in a flow system and did not note any significant reduction products. According to some, the reports of these experiments explain why Urey was rushing Miller's manuscript through Science and threatening to submit to the Journal of the American Chemical Society.
By introducing an experimental framework to test prebiotic chemistry, the Miller–Urey experiment paved the way for future origin of life research. In 1961, Joan Oró produced milligrams of the nucleobase adenine from a concentrated solution of HCN and NH3 in water. Oró found that several amino acids were also formed from HCN and ammonia under those conditions. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere. Other researchers also began using UV-photolysis in prebiotic schemes, as the UV flux would have been much higher on early Earth. For example, UV-photolysis of water vapor with carbon monoxide was found to yield various alcohols, aldehydes, and organic acids. In the 1970s, Carl Sagan used Miller-Urey-type reactions to synthesize and experiment with complex organic particles dubbed "tholins", which likely resemble particles formed in hazy atmospheres like that of Titan.
Modified Miller–Urey experiments
Much work has been done since the 1950s toward understanding how Miller-Urey chemistry behaves in various environmental settings. In 1983, testing different atmospheric compositions, Miller and another researcher repeated experiments with varying proportions of H2, H2O, N2, CO2 or CH4, and sometimes NH3. They found that the presence or absence of NH3 in the mixture did not significantly impact amino acid yield, as NH3 was generated from N2 during the spark discharge. Additionally, CH4 proved to be one of the most important atmospheric ingredients for high yields, likely due to its role in HCN formation. Much lower yields were obtained with more oxidized carbon species in place of CH4, but similar yields could be reached with a high H2/CO2 ratio. Thus, Miller-Urey reactions work in atmospheres of other compositions as well, depending on the ratio of reducing and oxidizing gases. More recently, Jeffrey Bada and H. James Cleaves, graduate students of Miller, hypothesized that the production of nitrites, which destroy amino acids, in CO2 and N2-rich atmospheres may explain low amino acids yields. In a Miller-Urey setup with a less-reducing (CO2 + N2 + H2O) atmosphere, when they added calcium carbonate to buffer the aqueous solution and ascorbic acid to inhibit oxidation, yields of amino acids greatly increased, demonstrating that amino acids can still be formed in more neutral atmospheres under the right geochemical conditions. In a prebiotic context, they argued that seawater would likely still be buffered and ferrous iron could inhibit oxidation.
In 1999, after Miller suffered a stroke, he donated the contents of his laboratory to Bada. In an old cardboard box, Bada discovered unanalyzed samples from modified experiments that Miller had conducted in the 1950s. In a "volcanic" apparatus, Miller had amended an aspirating nozzle to shoot a jet of steam into the reaction chamber. Using high-performance liquid chromatography and mass spectrometry, Bada's lab analyzed old samples from a set of experiments Miller conducted with this apparatus and found some higher yields and a more diverse suite of amino acids. Bada speculated that injecting the steam into the spark could have split water into H and OH radicals, leading to more hydroxylated amino acids during Strecker synthesis. In a separate set of experiments, Miller added hydrogen sulfide (H2S) to the reducing atmosphere, and Bada's analyses of the products suggested order-of-magnitude higher yields, including some amino acids with sulfur moieties.
A 2021 work highlighted the importance of the high-energy free electrons present in the experiment. It is these electrons that produce ions and radicals, and represent an aspect of the experiment that needs to be better understood.
After comparing Miller–Urey experiments conducted in borosilicate glassware with those conducted in Teflon apparatuses, a 2021 paper suggests that the glass reaction vessel acts as a mineral catalyst, implicating silicate rocks as important surfaces in prebiotic Miller-Urey reactions.
Early Earth's prebiotic atmosphere
While there is a lack of geochemical observations to constrain the exact composition of the prebiotic atmosphere, recent models point to an early "weakly reducing" atmosphere; that is, early Earth's atmosphere was likely dominated by CO2 and N2 and not CH4 and NH3 as used in the original Miller–Urey experiment. This is explained, in part, by the chemical composition of volcanic outgassing. Geologist William Rubey was one of the first to compile data on gases emitted from modern volcanoes and concluded that they are rich in CO2, H2O, and likely N2, with varying amounts of H2, sulfur dioxide (SO2), and H2S. Therefore, if the redox state of Earth's mantle — which dictates the composition of outgassing – has been constant since formation, then the atmosphere of early Earth was likely weakly reducing, but there are some arguments for a more-reducing atmosphere for the first few hundred million years.
While the prebiotic atmosphere could have had a different redox condition than that of the Miller–Urey atmosphere, the modified Miller–Urey experiments described in the above section demonstrated that amino acids can still be abiotically produced in less-reducing atmospheres under specific geochemical conditions. Furthermore, harkening back to Urey's original hypothesis of a "post-impact" reducing atmosphere, a recent atmospheric modeling study has shown that an iron-rich impactor with a minimum mass around 4×1020 – 5×1021 kg would be enough to transiently reduce the entire prebiotic atmosphere, resulting in a Miller-Urey-esque H2-, CH4-, and NH3-dominated atmosphere that persists for millions of years. Previous work has estimated from the lunar cratering record and composition of Earth's mantle that between four and seven such impactors reached the Hadean Earth.
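A back-of-the-envelope calculation illustrates why an impactor of that mass could matter: metallic iron reduces water roughly as 3Fe + 4H2O → Fe3O4 + 4H2, so the hydrogen yield scales with the delivered iron. The Python sketch below applies this stoichiometry to the lower end of the quoted mass range, under the simplifying (and purely illustrative) assumptions that the impactor is entirely iron and reacts completely.
# Rough H2 yield from an iron impactor via 3 Fe + 4 H2O -> Fe3O4 + 4 H2.
impactor_mass_kg = 4e20       # lower end of the mass range quoted above
molar_mass_fe_kg = 0.05585    # kg/mol
molar_mass_h2_kg = 0.002016   # kg/mol

moles_fe = impactor_mass_kg / molar_mass_fe_kg
moles_h2 = moles_fe * 4 / 3   # stoichiometric ratio of H2 to Fe
mass_h2_kg = moles_h2 * molar_mass_h2_kg

present_atmosphere_kg = 5.1e18  # approximate mass of Earth's present atmosphere, for scale
print(f"H2 produced: {mass_h2_kg:.1e} kg "
      f"(~{mass_h2_kg / present_atmosphere_kg:.0f} times the mass of today's atmosphere)")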
A large factor controlling the redox budget of early Earth's atmosphere is the rate of atmospheric escape of H2 after Earth's formation. Atmospheric escape – common to young, rocky planets — occurs when gases in the atmosphere have sufficient kinetic energy to overcome gravitational energy. It is generally accepted that the timescale of hydrogen escape is short enough such that H2 made up < 1% of the atmosphere of prebiotic Earth, but, in 2005, a hydrodynamic model of hydrogen escape predicted escape rates two orders of magnitude lower than previously thought, maintaining a hydrogen mixing ratio of 30%. A hydrogen-rich prebiotic atmosphere would have large implications for Miller-Urey synthesis in the Hadean and Archean, but later work suggests solutions in that model might have violated conservation of mass and energy. That said, during hydrodynamic escape, lighter molecules like hydrogen can "drag" heavier molecules with them through collisions, and recent modeling of xenon escape has pointed to a hydrogen atmospheric mixing ratio of at least 1% or higher at times during the Archean.
Taken together, the view that early Earth's atmosphere was weakly reducing, with transient instances of highly-reducing compositions following large impacts is generally supported.
Extraterrestrial sources of amino acids
Conditions similar to those of the Miller–Urey experiments are present in other regions of the Solar System, often substituting ultraviolet light for lightning as the energy source for chemical reactions. The Murchison meteorite that fell near Murchison, Victoria, Australia in 1969 was found to contain an amino acid distribution remarkably similar to Miller-Urey discharge products. Analysis of the organic fraction of the Murchison meteorite with Fourier-transform ion cyclotron resonance mass spectrometry detected over 10,000 unique compounds, albeit at very low (ppb–ppm) concentrations. In this way, the organic composition of the Murchison meteorite is seen as evidence of Miller-Urey synthesis outside Earth.
Comets and other icy outer-solar-system bodies are thought to contain large amounts of complex carbon compounds (such as tholins) formed by processes akin to Miller-Urey setups, darkening the surfaces of these bodies. Some argue that comets bombarding the early Earth could have provided a large supply of complex organic molecules along with the water and other volatiles; however, very low concentrations of biologically-relevant material combined with uncertainty surrounding the survival of organic matter upon impact make this difficult to determine.
Relevance to the origin of life
The Miller–Urey experiment was proof that the building blocks of life could be synthesized abiotically from gases, and introduced a new prebiotic chemistry framework through which to study the origin of life. Simulations of protein sequences present in the last universal common ancestor (LUCA), or the last shared ancestor of all extant species today, show an enrichment in simple amino acids that were available in the prebiotic environment according to Miller-Urey chemistry. This suggests that the genetic code from which all life evolved was rooted in a smaller suite of amino acids than those used today. Thus, while creationist arguments focus on the fact that Miller–Urey experiments have not generated all 22 genetically-encoded amino acids, this does not actually conflict with the evolutionary perspective on the origin of life.
Another common misconception is that the racemic (containing both L and D enantiomers) mixture of amino acids produced in a Miller–Urey experiment is also problematic for abiogenesis theories, as life on Earth today uses L-amino acids. While it is true that Miller-Urey setups produce racemic mixtures, the origin of homochirality is a separate area in origin of life research.
Recent work demonstrates that magnetic mineral surfaces like magnetite can be templates for the enantioselective crystallization of chiral molecules, including RNA precursors, due to the chiral-induced spin selectivity (CISS) effect. Once an enantioselective bias is introduced, homochirality can then propagate through biological systems in various ways. In this way, enantioselective synthesis is not required of Miller-Urey reactions if other geochemical processes in the environment are introducing homochirality.
Finally, Miller-Urey and similar experiments primarily deal with the synthesis of monomers; polymerization of these building blocks to form peptides and other more complex structures is the next step of prebiotic chemistry schemes. Polymerization requires condensation reactions, which are thermodynamically unfavored in aqueous solutions because they expel water molecules. Scientists as far back as John Desmond Bernal in the late 1940s thus speculated that clay surfaces would play a large role in abiogenesis, as they might concentrate monomers. Several such models for mineral-mediated polymerization have emerged, such as the interlayers of layered double hydroxides like green rust over wet-dry cycles. Some scenarios for peptide formation have been proposed that are even compatible with aqueous solutions, such as the hydrophobic air-water interface and a novel "sulfide-mediated α-aminonitrile ligation" scheme, where amino acid precursors come together to form peptides. Polymerization of life's building blocks is an active area of research in prebiotic chemistry.
Amino acids identified
Below is a table of amino acids produced and identified in the "classic" 1952 experiment, as analyzed by Miller in 1952 and more recently by Bada and collaborators with modern mass spectrometry, the 2008 re-analysis of vials from the volcanic spark discharge experiment, and the 2010 re-analysis of vials from the H2S-rich spark discharge experiment. While not all proteinogenic amino acids have been produced in spark discharge experiments, it is generally accepted that early life used a simpler set of prebiotically-available amino acids.
References
External links
A simulation of the Miller–Urey Experiment along with a video Interview with Stanley Miller by Scott Ellis from CalSpace (UCSD)
Origin-Of-Life Chemistry Revisited: Reanalysis of famous spark-discharge experiments reveals a richer collection of amino acids were formed.
Miller–Urey experiment explained
Miller experiment with Lego bricks
"Stanley Miller's Experiment: Sparking the Building Blocks of Life" on PBS
The Miller-Urey experiment website
Details of 2008 re-analysis
Sintering
Sintering or frittage is the process of compacting and forming a solid mass of material by pressure or heat without melting it to the point of liquefaction. Sintering happens as part of a manufacturing process used with metals, ceramics, plastics, and other materials. The atoms/molecules in the sintered material diffuse across the boundaries of the particles, fusing the particles together and creating a solid piece.
Since the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points, such as tungsten and molybdenum. The study of sintering in metallurgical powder-related processes is known as powder metallurgy.
An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall to a glacier, or the formation of a hard snowball by pressing loose snow together.
The material produced by sintering is called sinter. The word sinter comes from the Middle High German sinter, a cognate of English cinder.
General sintering
Sintering is generally considered successful when the process reduces porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity. In some special cases, sintering is carefully applied to enhance the strength of a material while preserving porosity (e.g. in filters or catalysts, where gas adsorption is a priority). During the sintering process, atomic diffusion drives powder surface elimination in different stages, starting at the formation of necks between powders to final elimination of small pores at the end of the process.
The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. It forms new but lower-energy solid-solid interfaces with a net decrease in total free energy. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometers, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.
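One way to see why fine particles dominate ceramic technology is to estimate the capillary (Laplace) pressure across a curved surface, Δp = 2γ/r, which grows rapidly as the particle radius shrinks. The Python sketch below evaluates this relation for a few particle sizes using a surface energy of about 1 J/m2, an assumed, typical order of magnitude rather than a value for any specific material.
# Capillary (Laplace) pressure across a spherical surface: delta_p = 2 * gamma / r.
gamma_j_per_m2 = 1.0              # assumed surface energy (typical order of magnitude)
radii_m = [10e-6, 1e-6, 100e-9]   # 10 um, 1 um and 100 nm particles

for r in radii_m:
    delta_p_mpa = 2 * gamma_j_per_m2 / r / 1e6
    print(f"r = {r:.0e} m  ->  delta_p = {delta_p_mpa:.1f} MPa")
The steep increase below a few micrometres mirrors the statement above that curvature-driven effects become very large for fine particles.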
The ratio of bond area to particle size is a determining factor for properties such as strength and electrical conductivity. To yield the desired bond area, temperature and initial grain size are precisely controlled over the sintering process. At steady state, the particle radius and the vapor pressure are proportional to (p0)^(2/3) and to (p0)^(1/3), respectively.
The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, particle count would decrease and pores would be destroyed. Pore elimination is fastest in samples with many pores of uniform size because the boundary diffusion distance is smallest. During the latter portions of the process, boundary and lattice diffusion from the boundary become important.
Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, particle size, particle distribution, material composition, and often other properties of the sintering environment itself.
Ceramic sintering
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects. Sintering and vitrification (which requires higher temperatures) are the two main mechanisms behind the strength and stability of ceramics. Sintered ceramic objects are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide, and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering.
Sintering begins when sufficient temperatures have been reached to mobilize the active elements in the ceramic material, which can start below their melting point (typically at 50–80% of their melting point), e.g. as premelting. When sufficient sintering has taken place, the ceramic body will no longer break down in water; additional sintering can reduce the porosity of the ceramic, increase the bond area between ceramic particles, and increase the material strength.
Industrial procedures to create ceramic objects via sintering of powders generally include:
mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry
spray-drying the slurry
putting the spray dried powder into a mold and pressing it to form a green body (an unsintered ceramic item)
heating the green body at low temperature to burn off the binder
sintering at a high temperature to fuse the ceramic particles together.
All the characteristic temperatures associated with phase transformation, glass transitions, and melting points, occurring during a sinterisation cycle of a particular ceramic's formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sinterisation is associated with a remarkable shrinkage of the material because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material.
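In practice, the onset of sintering can be read off such an expansion-temperature curve as the point where the sample stops expanding and begins to shrink. The short Python sketch below locates that point in a small, invented dilatometer data set (temperature in °C versus relative length change in %); the numbers are made up for illustration and do not describe any particular formulation.
# Find the approximate sintering onset: the temperature where shrinkage begins.
# Invented dilatometer data: (temperature in C, relative length change dL/L0 in %).
curve = [
    (200, 0.10), (400, 0.25), (600, 0.40), (800, 0.52),
    (900, 0.55), (1000, 0.40), (1100, -0.60), (1200, -2.10),
]

onset_c = None
for (t_prev, dl_prev), (t_next, dl_next) in zip(curve, curve[1:]):
    if dl_next < dl_prev:   # expansion turns into shrinkage
        onset_c = t_prev
        break

print(f"Approximate sintering onset: {onset_c} C")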
Sintering is performed at high temperature. Additionally, a second and/or third external force (such as pressure, electric current) could be used. A commonly used second external force is pressure. Sintering performed by only heating is generally termed "pressureless sintering", which is possible with graded metal-ceramic composites, utilising a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.
To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Sintering of metallic powders
Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus En of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder for lower sintering temperatures, but depends upon the density of the final product:
E_n/E = (D/d)^3.4
where D is the density, E is Young's modulus and d is the maximum density of iron.
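A minimal numerical illustration of this density dependence is sketched below in Python; it takes the exponent of 3.4 from the relation above and assumes a nominal Young's modulus of about 210 GPa for fully dense iron, both of which should be read as illustrative inputs rather than measured values.
# Young's modulus of sintered iron vs. relative density, E_n = E * (D/d)**3.4.
e_full_gpa = 210.0   # assumed nominal Young's modulus of fully dense iron
exponent = 3.4       # exponent as given in the relation above

for relative_density in [0.80, 0.90, 0.95, 1.00]:   # D/d
    e_sintered_gpa = e_full_gpa * relative_density ** exponent
    print(f"D/d = {relative_density:.2f}  ->  E_n = {e_sintered_gpa:.0f} GPa")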
Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.
A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide.
Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.
Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.
Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.
Sintering of powders containing precious metals such as silver and gold is used to make small jewelry items. Evaporative self-assembly of colloidal silver nanocubes into supercrystals has been shown to allow the sintering of electrical joints at temperatures lower than 200 °C.
Advantages
Particular advantages of the powder technology include:
Very high levels of purity and uniformity in starting materials
Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
Stabilization of the details of repetitive operations, by control of grain size during the input stages
Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
No deformation needed to produce directional elongation of grains
Capability to produce materials of controlled, uniform porosity.
Capability to produce nearly net-shaped objects.
Capability to produce materials which cannot be produced by any other technology.
Capability to fabricate high-strength material like turbine blades.
Increased mechanical strength for handling after sintering.
The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.
Disadvantages
Particular disadvantages of the powder technology include:
sintering cannot create uniform sizes
micro- and nanostructures produced before sintering are often destroyed.
Plastics sintering
Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.
Liquid phase sintering
For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si3N4, WC, SiC, and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. The process of liquid phase sintering has three stages:
rearrangement – As the liquid melts capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
solution-precipitation – In areas where capillary pressures are high (particles are close together) atoms will preferentially go into solution and then precipitate in areas of lower chemical potential where particles are not close or in contact. This is called contact flattening. This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur where smaller particles will go into solution preferentially and precipitate on larger particles leading to densification.
final densification – densification of solid skeletal network, liquid movement from efficiently packed regions into pores.
For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering was successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films.
Electric current assisted sintering
These techniques employ electric currents to drive or enhance sintering. English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.
In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.
In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia. The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush.
Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating, was patented by G. F. Taylor in 1932. This originated sintering methods employing pulsed or alternating current, eventually superimposed to a direct current. Those techniques have been developed over many decades and summarized in more than 640 patents.
Of these technologies, the best known are resistance sintering (also called hot pressing) and spark plasma sintering, while electro sinter forging is the latest advancement in this field.
Spark plasma sintering
In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of the metallic/ceramic powder compacts. However, after commercialization it was determined there is no plasma, so the proper name is spark sintering as coined by Lenel. The electric field driven densification supplements sintering with a form of hot pressing, enabling lower temperatures and shorter times than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make it (highly) unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as field assisted sintering technique (FAST), electric field assisted sintering (EFAS), and direct current sintering (DCS) have been implemented by the sintering community. Using a direct current (DC) pulse as the electric current, spark plasma, spark impact pressure, joule heating, and an electrical field diffusion effect would be created. By modifying the graphite die design and its assembly, it is possible to perform pressureless sintering in a spark plasma sintering facility. This modified die design setup is reported to synergize the advantages of both conventional pressureless sintering and spark plasma sintering techniques.
Electro sinter forging
Electro sinter forging is an electric current assisted sintering (ECAS) technology that originated from capacitor discharge sintering. It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, nitinol and other metals and intermetallics. It is characterized by a very low sintering time, allowing machines to sinter at the same speed as a compaction press.
Pressureless sintering
Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occur with more traditional hot pressing methods.
The powder compact (if a ceramic) can be created by slip casting, injection moulding, and cold isostatic pressing. After presintering, the final green compact can be machined to its final shape before being sintered.
Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used.
Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for CRH method. Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than CRH temperature mode.
In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. By definition, the relative density, ρrel, in the open-porosity phase is lower than 90%. Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples.
Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant.
Microwave sintering
In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple and others exhibit run-away behavior, so it is restricted in usefulness. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required and there are improvements in the product properties.
A failing of microwave sintering is that it generally sinters only one compact at a time, so overall productivity turns out to be poor except for situations involving one of a kind sintering, such as for artists. As microwaves can only penetrate a short distance in materials with a high conductivity and a high permeability, microwave sintering requires the sample to be delivered in powders with a particle size around the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product.
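As a rough numerical illustration of why conductive powders must be fine for microwave sintering, the classical skin-depth formula for good conductors can be evaluated; the sketch below is not part of the original article, and the material values are assumed order-of-magnitude inputs.

```python
# Illustrative sketch (not from the article): classical skin-depth estimate for a
# good conductor, delta = sqrt(rho / (pi * f * mu0 * mu_r)).
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(resistivity_ohm_m: float, frequency_hz: float, mu_r: float = 1.0) -> float:
    """Penetration (skin) depth in metres for a good conductor."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * MU0 * mu_r))

f = 2.45e9  # common microwave frequency, Hz
for name, rho, mur in [("copper", 1.7e-8, 1.0), ("iron (mu_r ~ 100 assumed)", 1.0e-7, 100.0)]:
    print(f"{name}: skin depth ~ {skin_depth(rho, f, mur) * 1e6:.2f} um at 2.45 GHz")
```

For metals the result is on the order of a micrometre, which is why the powder particle size must be comparable to this penetration depth.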
This technique is acknowledged to be quite effective in maintaining fine grains/nano sized grains in sintered bioceramics. Magnesium phosphates and calcium phosphates are examples that have been processed through the microwave sintering technique.
Densification, vitrification and grain growth
Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. Under certain conditions of chemistry and orientation, some grains may grow rapidly at the expense of their neighbours during sintering. This phenomenon, known as abnormal grain growth (AGG), results in a bimodal grain size distribution that has consequences for the mechanical, dielectric and thermal performance of the sintered material.
For densification to occur at a quick pace it is essential to have (1) a substantial amount of liquid phase, (2) near-complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The driving force for densification is the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometre develop capillary pressures on the order of megapascals for silicate liquids, and higher still for a metal such as liquid cobalt, whose surface tension is considerably greater.
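As a rough check of the magnitudes just described, the Young–Laplace relation P ≈ 2γ/r can be evaluated; the sketch below is illustrative only, and the surface-tension values are assumed, typical order-of-magnitude figures rather than data from the article.

```python
# Minimal sketch: Young-Laplace estimate of the capillary pressure P ~ 2*gamma/r
# developed in inter-particle "capillaries". Surface tensions below are assumed values.
def capillary_pressure(surface_tension_n_per_m: float, radius_m: float) -> float:
    """Capillary pressure in Pa for a capillary of the given radius."""
    return 2.0 * surface_tension_n_per_m / radius_m

for name, gamma in [("silicate melt (~0.3 N/m assumed)", 0.3),
                    ("liquid cobalt (~1.9 N/m assumed)", 1.9)]:
    for radius in (0.5e-6, 0.05e-6):  # corresponding to ~0.1-1 um capillary diameters
        p_mpa = capillary_pressure(gamma, radius) / 1e6
        print(f"{name}, r = {radius * 1e6:.2f} um: P ~ {p_mpa:.1f} MPa")
```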
Densification requires constant capillary pressure where just solution-precipitation material transfer would not produce densification. For further densification, additional particle movement while the particle undergoes grain-growth and grain-shape changes occurs. Shrinkage would result when the liquid slips between particles and increases pressure at points of contact causing the material to move away from the contact areas, forcing particle centers to draw near each other.
The sintering of liquid-phase materials requires a fine-grained solid phase to create the needed capillary pressures, which scale inversely with particle diameter, and the liquid content must be sufficient to generate the required capillary pressure, else the process ceases. The vitrification rate depends upon the pore size, the viscosity and amount of liquid phase present (which together set the viscosity of the overall composition), and the surface tension. Temperature strongly controls the process because at higher temperatures the viscosity decreases and the liquid content increases. Therefore, changes to the composition and processing will affect the vitrification process.
Sintering mechanisms
Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the "sintering mechanisms" or "matter transport mechanisms".
In solid state sintering, the six common mechanisms are:
surface diffusion – diffusion of atoms along the surface of a particle
vapor transport – evaporation of atoms which condense on a different surface
lattice diffusion from surface – atoms from surface diffuse through lattice
lattice diffusion from grain boundary – atom from grain boundary diffuses through lattice
grain boundary diffusion – atoms diffuse along grain boundary
plastic deformation – dislocation motion causes flow of matter.
Mechanisms 1–3 above are non-densifying (i.e. do not cause the pores and the overall ceramic body to shrink) but can still increase the area of the bond or "neck" between grains; they take atoms from the surface and rearrange them onto another surface or part of the same surface. Mechanisms 4–6 are densifying – atoms are moved from the bulk material or the grain boundaries to the surface of pores, thereby eliminating porosity and increasing the density of the sample.
Grain growth
A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in GB shifted positions relative to the lattice in the crystals. Due to the shifted positioning of the atoms in the GB they have a higher energy state when compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible.
The drive to minimize this energy leads to coarsening of the microstructure toward a metastable state within the specimen. This involves minimizing the total GB area and changing the topological structure of the grain network to lower its energy. Grain growth can be either normal or abnormal; normal grain growth is characterized by the uniform growth and size of all the grains in the specimen, whereas abnormal grain growth occurs when a few grains grow much larger than the remaining majority.
Grain boundary energy/tension
The atoms in the GB are normally in a higher energy state than their equivalents in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension, σGB. The extra energy that the atoms possess is called the grain boundary energy, γGB. The grain will tend to minimize this extra energy by making the grain boundary area smaller, and this change requires energy.
"Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow that:
with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered."[pg 478]
The GB tension can also be thought of as the attractive forces between the atoms at the surface; the tension between these atoms is due to the fact that there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension). When the surface area becomes bigger the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument
σGB = γGB
holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area.
For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by
dG = σGB dA = γGB dA + A dγGB
which gives
σGB = γGB + A (dγGB/dA)
σGB is normally expressed in units of N/m, while γGB is normally expressed in units of J/m^2, since they are different physical properties.
Mechanical equilibrium
In a two-dimensional isotropic material the grain boundary tension would be the same for all grains. This would give an angle of 120° at a GB junction where three grains meet, producing a hexagonal pattern that is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that, to stay as close to equilibrium as possible, grains with fewer than six sides will bend their GBs to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with their curvature away from itself. A grain with six boundaries (i.e. a hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. In three dimensions the structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice, arrays of grains are always unstable and thus always grow until prevented by a counterforce.
Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate towards its center of curvature. The consequence of this is that grains with fewer than six sides will decrease in size while grains with more than six sides will increase in size.
Grain growth occurs due to motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, therefore grain boundaries will move toward their center of curvature. Smaller particles have a smaller radius of curvature (i.e. higher curvature), which results in smaller grains losing atoms to larger grains and shrinking. This is a process called Ostwald ripening: large grains grow at the expense of small grains.
Grain growth in a simple model is found to follow:
G^m = G0^m + K t
Here G is the final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:
K = K0 exp(−Q/RT)
Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material-dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering.
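A minimal numerical sketch of the grain-growth law quoted above is given below; K0, Q and m are hypothetical placeholder values chosen only to show the temperature sensitivity, not material data from the article.

```python
# Sketch of the simple grain-growth law G^m = G0^m + K*t with K = K0*exp(-Q/(R*T)).
# K0, Q and m are hypothetical placeholder values.
import math

R_GAS = 8.314  # J/(mol*K)

def grain_size(t_s: float, T_K: float, G0_m: float,
               K0: float = 1e-8, Q: float = 3.0e5, m: float = 3.0) -> float:
    K = K0 * math.exp(-Q / (R_GAS * T_K))
    return (G0_m**m + K * t_s) ** (1.0 / m)

# One hour at two sintering temperatures, starting from 1 um grains
for T in (1500.0, 1700.0):
    G = grain_size(3600.0, T, 1e-6)
    print(f"T = {T:.0f} K: G ~ {G * 1e6:.1f} um after 1 h")
```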
Reducing grain growth
Solute ions
If a dopant is added to the material (for example, Nd in BaTiO3) the impurity will tend to segregate to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to the concave surface), the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move, the solute concentration on the side opposite the direction of motion will be higher and will therefore have a higher chemical potential. This increased chemical potential acts as a backforce to the original chemical-potential gradient that drives grain boundary movement. The decrease in net driving force reduces the grain boundary velocity and therefore grain growth.
Fine second phase particles
If particles of a second phase which are insoluble in the matrix phase are added to the powder in the form of a much finer powder, this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion by diffusion of atoms from one grain to the other, it will be hindered by the insoluble particle. This is because it is energetically favourable for particles to reside in the grain boundaries, so they exert a force opposing grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force per particle as
F = πrλ
where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is
f = (4/3)πr^3 N
assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:
n = 3f/(2πr^2)
Now, assuming that the grains grow only due to the influence of curvature, the driving force of growth is 2λ/R, where (for a homogeneous grain structure) R approximates the mean diameter of the grains. With this, the critical diameter that has to be reached before the grains cease to grow is set by:
n F = 2λ/D_crit
This can be reduced to
D_crit = 4r/(3f)
so the critical diameter of the grains is dependent on the size and volume fraction of the particles at the grain boundaries.
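The reconstructed Zener estimate above lends itself to a one-line numerical check; the particle radius and volume fraction used below are hypothetical example inputs.

```python
# Sketch of the Zener estimate: critical grain diameter D_crit ~ 4*r/(3*f) for
# pinning particles of radius r at volume fraction f (example numbers only).
def zener_limit(particle_radius_m: float, volume_fraction: float) -> float:
    return 4.0 * particle_radius_m / (3.0 * volume_fraction)

# 50 nm inclusions at 2 vol%
print(f"D_crit ~ {zener_limit(50e-9, 0.02) * 1e6:.1f} um")  # ~3.3 um
```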
It has also been shown that small bubbles or cavities can act as inclusions.
More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.
Sintering of catalysts
Sintering is an important cause for loss of catalytic activity, especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process.
Small catalyst particles provide the highest possible relative surface area, and high reaction temperatures increase reaction rates; both factors generally increase the activity of a catalyst. However, these are also the circumstances under which sintering occurs. Specific materials may also increase the rate of sintering. On the other hand, alloying catalysts with other materials can reduce sintering. Rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed.
For many supported metal catalysts, sintering starts to become a significant effect at elevated temperatures (typically several hundred degrees Celsius). Catalysts that operate at higher temperatures, such as automotive catalytic converters, use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica, carbon or alumina.
See also
Selective laser sintering – a rapid prototyping technology that includes Direct Metal Laser Sintering (DMLS)
– a pioneer of sintering methods
References
Further reading
External links
Particle-Particle-Sintering – a 3D lattice kinetic Monte Carlo simulation
Sphere-Plate-Sintering – a 3D lattice kinetic Monte Carlo simulation
Industrial processes
Metalworking
Plastics industry
Metallurgical processes
Zeta potential
Zeta potential is the electrical potential at the slipping plane. This plane is the interface which separates mobile fluid from fluid that remains attached to the surface.
Zeta potential is a scientific term for electrokinetic potential in colloidal dispersions. In the colloidal chemistry literature, it is usually denoted using the Greek letter zeta (ζ), hence ζ-potential. The usual units are volts (V) or, more commonly, millivolts (mV). From a theoretical viewpoint, the zeta potential is the electric potential in the interfacial double layer (DL) at the location of the slipping plane relative to a point in the bulk fluid away from the interface. In other words, zeta potential is the potential difference between the dispersion medium and the stationary layer of fluid attached to the dispersed particle.
The zeta potential is caused by the net electrical charge contained within the region bounded by the slipping plane, and also depends on the location of that plane. Thus, it is widely used for quantification of the magnitude of the charge. However, zeta potential is not equal to the Stern potential or electric surface potential in the double layer, because these are defined at different locations. Such assumptions of equality should be applied with caution. Nevertheless, zeta potential is often the only available path for characterization of double-layer properties.
The zeta potential is an important and readily measurable indicator of the stability of colloidal dispersions. The magnitude of the zeta potential indicates the degree of electrostatic repulsion between adjacent, similarly charged particles in a dispersion. For molecules and particles that are small enough, a high zeta potential will confer stability, i.e., the solution or dispersion will resist aggregation. When the potential is small, attractive forces may exceed this repulsion and the dispersion may break and flocculate. So, colloids with high zeta potential (negative or positive) are electrically stabilized while colloids with low zeta potentials tend to coagulate or flocculate.
Zeta potential can also be used to estimate the pKa of complex polymers, which is otherwise difficult to measure accurately using conventional methods. This can help in studying the ionisation behaviour of various synthetic and natural polymers under various conditions and in establishing standardised dissolution-pH thresholds for pH-responsive polymers.
Measurement
A number of instrumental techniques exist for measuring zeta potential. A zeta potential analyzer can measure solid, fibrous, or powdered material. A motor in the instrument creates an oscillating flow of electrolyte solution through the sample. Several sensors monitor other factors, so that the accompanying software can calculate the zeta potential; temperature, pH, conductivity, pressure, and streaming potential are all measured in the instrument for this reason.
Zeta potential can also be calculated from an experimentally determined electrophoretic mobility or dynamic electrophoretic mobility using theoretical models.
Electrokinetic phenomena and electroacoustic phenomena are the usual sources of data for calculation of zeta potential. (See Zeta potential titration.)
Electrokinetic phenomena
Electrophoresis is used for estimating zeta potential of particulates, whereas streaming potential/current is used for porous bodies and flat surfaces.
In practice, the zeta potential of dispersion is measured by applying an electric field across the dispersion. Particles within the dispersion with a zeta potential will migrate toward the electrode of opposite charge with a velocity proportional to the magnitude of the zeta potential.
This velocity is measured using the technique of the laser Doppler anemometer. The frequency shift or phase shift of an incident laser beam caused by these moving particles is measured as the particle mobility, and this mobility is converted to the zeta potential by inputting the dispersant viscosity and dielectric permittivity, and the application of the Smoluchowski theories.
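As a sketch of the mobility-to-zeta conversion mentioned above, the Smoluchowski relation ζ = ημ/(εrε0) can be applied directly when the double layer is thin (κa ≫ 1); the mobility value below is an assumed, illustrative input, not the instrument's actual algorithm.

```python
# Sketch: converting an electrophoretic mobility to zeta potential with the
# Smoluchowski relation zeta = eta * mu / (eps_r * eps_0), thin-double-layer limit.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def zeta_from_mobility(mobility_m2_per_Vs: float,
                       viscosity_pa_s: float = 0.89e-3,  # water, ~25 C
                       rel_permittivity: float = 78.5) -> float:
    return viscosity_pa_s * mobility_m2_per_Vs / (rel_permittivity * EPS0)

mu = 2.0e-8  # m^2/(V*s), assumed example mobility
print(f"zeta ~ {zeta_from_mobility(mu) * 1e3:.1f} mV")  # ~25.6 mV
```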
Electrophoresis
Electrophoretic mobility is proportional to electrophoretic velocity, which is the measurable parameter. There are several theories that link electrophoretic mobility with zeta potential. They are briefly described in the article on electrophoresis and in details in many books on colloid and interface science.
There is an IUPAC Technical Report prepared by a group of world experts on the electrokinetic phenomena.
From the instrumental viewpoint, there are three different experimental techniques: microelectrophoresis, electrophoretic light scattering, and tunable resistive pulse sensing. Microelectrophoresis has the advantage of yielding an image of the moving particles. On the other hand, it is complicated by electro-osmosis at the walls of the sample cell. Electrophoretic light scattering is based on dynamic light scattering. It allows measurement in an open cell which eliminates the problem of electro-osmotic flow except for the case of a capillary cell. It can also be used to characterize very small particles, but at the price of losing the ability to display images of the moving particles. Tunable resistive pulse sensing (TRPS) is an impedance-based measurement technique that measures the zeta potential of individual particles based on the duration of the resistive pulse signal. The translocation duration of nanoparticles is measured as a function of voltage and applied pressure. From the dependence of the inverse translocation time on the applied voltage, the electrophoretic mobility, and thus the zeta potential, is calculated. The main advantage of the TRPS method is that it allows for simultaneous size and surface charge measurements on a particle-by-particle basis, enabling the analysis of a wide spectrum of synthetic and biological nano/microparticles and their mixtures.
All these measuring techniques may require dilution of the sample. Sometimes this dilution might affect properties of the sample and change zeta potential. There is only one justified way to perform this dilution – by using equilibrium supernatant. In this case, the interfacial equilibrium between the surface and the bulk liquid would be maintained and zeta potential would be the same for all volume fractions of particles in the suspension. When the diluent is known (as is the case for a chemical formulation), additional diluent can be prepared. If the diluent is unknown, equilibrium supernatant is readily obtained by centrifugation.
Streaming potential, streaming current
The streaming potential is an electric potential that develops during the flow of liquid through a capillary. In nature, a streaming potential may occur at a significant magnitude in areas with volcanic activities. The streaming potential is also the primary electrokinetic phenomenon for the assessment of the zeta potential at the solid material-water interface. A corresponding solid sample is arranged in such a way to form a capillary flow channel. Materials with a flat surface are mounted as duplicate samples that are aligned as parallel plates. The sample surfaces are separated by a small distance to form a capillary flow channel. Materials with an irregular shape, such as fibers or granular media, are mounted as a porous plug to provide a pore network, which serves as capillaries for the streaming potential measurement. Upon the application of pressure on a test solution, liquid starts to flow and to generate an electric potential. This streaming potential is related to the pressure gradient between the ends of either a single flow channel (for samples with a flat surface) or the porous plug (for fibers and granular media) to calculate the surface zeta potential.
As an alternative to the streaming potential, the measurement of streaming current offers another approach to the surface zeta potential. Most commonly, the classical equations derived by Marian Smoluchowski are used to convert streaming potential or streaming current results into the surface zeta potential.
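The sketch below illustrates the kind of conversion referred to above, using the classical Helmholtz–Smoluchowski form ζ = (dU/dΔp)·ησ/(εrε0); it neglects surface conductance, and the slope and conductivity values are hypothetical example inputs rather than data from the article.

```python
# Sketch of a Helmholtz-Smoluchowski conversion from a streaming-potential slope
# dU/dp to zeta potential; surface conductance is neglected, inputs are assumed.
EPS0 = 8.854e-12  # F/m

def zeta_from_streaming_potential(dU_dp_V_per_Pa: float,
                                  conductivity_S_per_m: float,
                                  viscosity_pa_s: float = 0.89e-3,
                                  rel_permittivity: float = 78.5) -> float:
    return dU_dp_V_per_Pa * viscosity_pa_s * conductivity_S_per_m / (rel_permittivity * EPS0)

slope = -2.0e-5  # V/Pa (hypothetical measured slope)
sigma = 1.5e-3   # S/m (hypothetical dilute electrolyte)
print(f"zeta ~ {zeta_from_streaming_potential(slope, sigma) * 1e3:.0f} mV")
```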
Applications of the streaming potential and streaming current method for the surface zeta potential determination consist of the characterization of surface charge of polymer membranes, biomaterials and medical devices, and minerals.
Electroacoustic phenomena
There are two electroacoustic effects that are widely used for characterizing zeta potential: colloid vibration current and electric sonic amplitude. There are commercially available instruments that exploit these effects for measuring dynamic electrophoretic mobility, which depends on zeta potential.
Electroacoustic techniques have the advantage of being able to perform measurements in intact samples, without dilution. Published and well-verified theories allow such measurements at volume fractions up to 50%. Calculation of zeta potential from the dynamic electrophoretic mobility requires information on the densities of the particles and the liquid. In addition, for larger particles exceeding roughly 300 nm in size, information on the particle size is required as well.
Calculation
The best-known and most widely used theory for calculating zeta potential from experimental data is that developed by Marian Smoluchowski in 1903. This theory was originally developed for electrophoresis; however, an extension to electroacoustics is now also available. Smoluchowski's theory is powerful because it is valid for dispersed particles of any shape and any concentration. However, it has its limitations:
Detailed theoretical analysis proved that Smoluchowski's theory is valid only for a sufficiently thin double layer, when the Debye length, 1/κ, is much smaller than the particle radius, a:
κa ≫ 1
The model of the "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic and electroacoustic theories. This model is valid for most aqueous systems because the Debye length is typically only a few nanometers in water. The model breaks only for nano-colloids in a solution with ionic strength approaching that of pure water.
Smoluchowski's theory neglects the contribution of surface conductivity. This is expressed in modern theories as the condition of a small Dukhin number:
Du ≪ 1
The development of electrophoretic and electroacoustic theories with a wider range of validity was a purpose of many studies during the 20th century. There are several analytical theories that incorporate surface conductivity and eliminate the restriction of the small Dukhin number for both the electrokinetic and electroacoustic applications.
Early pioneering work in that direction dates back to Overbeek and Booth.
Modern, rigorous electrokinetic theories that are valid for any zeta potential, and often any κa, stem mostly from the Soviet Ukrainian (Dukhin, Shilov, and others) and Australian (O'Brien, White, Hunter, and others) schools. Historically, the first was the Dukhin–Semenikhin theory. A similar theory was created ten years later by O'Brien and Hunter. Assuming a thin double layer, these theories yield results that are very close to the numerical solution provided by O'Brien and White. There are also general electroacoustic theories that are valid for any values of Debye length and Dukhin number.
Henry's equation
When κa lies between the large values for which simple analytical models are available and the low values for which numerical calculations are needed, Henry's equation can be used, provided the zeta potential is low. For a nonconducting sphere, Henry's equation is μe = (2 εrs ε0 ζ / 3η) f1(κa), where μe is the electrophoretic mobility, εrs the relative permittivity of the dispersion medium, ε0 the vacuum permittivity, η the dynamic viscosity, and f1 is the Henry function, one of a collection of functions which vary smoothly from 1.0 to 1.5 as κa approaches infinity.
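A short sketch of Henry's equation is given below; it uses an approximation of the Henry function attributed to Ohshima, and both that approximation and the numerical inputs should be treated as illustrative assumptions rather than the definitive formulation.

```python
# Sketch of Henry's equation mu_e = (2*eps_r*eps_0*zeta/(3*eta)) * f1(kappa*a),
# with an Ohshima-type approximation for the Henry function f1.
import math

EPS0 = 8.854e-12

def henry_f1(ka: float) -> float:
    """Approximation of f1; tends to 1 as ka -> 0 and to 1.5 as ka -> infinity."""
    delta = 2.5 / (1.0 + 2.0 * math.exp(-ka))
    return 1.0 + 1.0 / (2.0 * (1.0 + delta / ka) ** 3)

def mobility(zeta_V: float, ka: float,
             viscosity_pa_s: float = 0.89e-3, rel_permittivity: float = 78.5) -> float:
    return 2.0 * rel_permittivity * EPS0 * zeta_V * henry_f1(ka) / (3.0 * viscosity_pa_s)

for ka in (0.1, 1.0, 10.0, 100.0):
    print(f"ka = {ka:>5}: f1 = {henry_f1(ka):.3f}, mu_e ~ {mobility(0.03, ka):.2e} m^2/(V*s)")
```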
References
Colloidal chemistry
Anachronism
An anachronism (from the Greek , 'against' and , 'time') is a chronological inconsistency in some arrangement, especially a juxtaposition of people, events, objects, language terms and customs from different time periods. The most common type of anachronism is an object misplaced in time, but it may be a verbal expression, a technology, a philosophical idea, a musical style, a material, a plant or animal, a custom, or anything else associated with a particular period that is placed outside its proper temporal domain.
An anachronism may be either intentional or unintentional. Intentional anachronisms may be introduced into a literary or artistic work to help a contemporary audience engage more readily with a historical period. Anachronism can also be used intentionally for purposes of rhetoric, propaganda, comedy, or shock. Unintentional anachronisms may occur when a writer, artist, or performer is unaware of differences in technology, terminology and language, customs and attitudes, or even fashions between different historical periods and eras.
Types
The metachronism-prochronism contrast is nearly synonymous with parachronism-anachronism, and involves postdating-predating respectively.
Parachronism
A parachronism (from the Greek , "on the side", and , "time") postdates. It is anything that appears in a time period in which it is not normally found (though not sufficiently out of place as to be impossible).
This may be an object, idiomatic expression, technology, philosophical idea, musical style, material, custom, or anything else so closely bound to a particular time period as to seem strange when encountered in a later era. They may be objects or ideas that were once common but are now considered rare or inappropriate. They can take the form of obsolete technology or outdated fashion or idioms.
Prochronism
A prochronism (from the Greek , "before", and , "time") predates. It is an impossible anachronism which occurs when an object or idea has not yet been invented when the situation takes place, and therefore could not have possibly existed at the time. A prochronism may be an object not yet developed, a verbal expression that had not yet been coined, a philosophy not yet formulated, a breed of animal not yet evolved or bred, or use of a technology that had not yet been created.
Metachronism
A metachronism (from the Greek , "after", and , "time") postdates. It is the use of older cultural artifacts in modern settings which may seem inappropriate. For example, it could be considered metachronistic for a modern-day person to be depicted wearing a top hat or writing with a quill.
Politically motivated anachronism
Works of art and literature promoting a political, nationalist or revolutionary cause may use anachronism to depict an institution or custom as being more ancient than it actually is, or otherwise intentionally blur the distinctions between past and present. For example, the 19th-century Romanian painter Constantin Lecca depicts the peace agreement between Ioan Bogdan Voievod and Radu Voievod—two leaders in Romania's 16th-century history—with the flags of Moldavia (blue-red) and of Wallachia (yellow-blue) seen in the background. These flags date only from the 1830s: anachronism promotes legitimacy for the unification of Moldavia and Wallachia into the Kingdom of Romania at the time the painting was made. The Russian artist Vasily Vereshchagin, in his painting Suppression of the Indian Revolt by the English, depicts the aftermath of the Indian Rebellion of 1857, when mutineers were executed by being blown from guns. In order to make the argument that the method of execution would again be utilized by the British if another rebellion broke out in India, Vereshchagin depicted the British soldiers conducting the executions in late 19th-century uniforms.
Art and literature
Anachronism is used especially in works of imagination that rest on a historical basis. Anachronisms may be introduced in many ways: for example, in the disregard of the different modes of life and thought that characterize different periods, or in ignorance of the progress of the arts and sciences and other facts of history. They vary from glaring inconsistencies to scarcely perceptible misrepresentation. Anachronisms may be the unintentional result of ignorance, or may be a deliberate aesthetic choice.
Sir Walter Scott justified the use of anachronism in historical literature: "It is necessary, for exciting interest of any kind, that the subject assumed should be, as it were, translated into the manners as well as the language of the age we live in." However, as fashions, conventions and technologies move on, such attempts to use anachronisms to engage an audience may have quite the reverse effect, as the details in question are increasingly recognized as belonging neither to the historical era being represented, nor to the present, but to the intervening period in which the artwork was created. "Nothing becomes obsolete like a period vision of an older period", writes Anthony Grafton; "Hearing a mother in a historical movie of the 1940s call out 'Ludwig! Ludwig van Beethoven! Come in and practice your piano now!' we are jerked from our suspension of disbelief by what was intended as a means of reinforcing it, and plunged directly into the American bourgeois world of the filmmaker."
It is only since the beginning of the 19th century that anachronistic deviations from historical reality have jarred on a general audience. C. S. Lewis wrote:
Anachronisms abound in the works of Raphael and Shakespeare, as well as in those of less celebrated painters and playwrights of earlier times. Carol Meyers says that anachronisms in ancient texts can be used to better understand the stories by asking what the anachronism represents. Repeated anachronisms and historical errors can become an accepted part of popular culture, such as the belief that Roman legionaries wore leather armor.
Comical anachronism
Comedy fiction set in the past may use anachronism for humorous effect. Comedic anachronism can be used to make serious points about both historical and modern society, such as drawing parallels to political or social conventions.
Future anachronism
Even with careful research, science fiction writers risk anachronism as their works age because they cannot predict all political, social, and technological change.
For example, many books, television shows, radio productions and films nominally set in the mid-21st century or later refer to the Soviet Union, to Saint Petersburg in Russia as Leningrad, to the continuing struggle between the Eastern and Western Blocs and to divided Germany and divided Berlin. Star Trek has suffered from future anachronisms; instead of "retconning" these errors, the 2009 film retained them for consistency with older franchises.
Buildings or natural features, such as the World Trade Center in New York City, can become out of place once they disappear, with some works having been edited to remove the World Trade Center to avoid this situation.
Futuristic technology may appear alongside technology which would be obsolete by the time in which the story is set. For example, in the stories of Robert A. Heinlein, interplanetary space travel coexists with calculation using slide rules.
Language anachronism
Language anachronisms in novels and films are quite common, both intentional and unintentional. Intentional anachronisms inform the audience more readily about a film set in the past. In this regard, language and pronunciation change so fast that most modern people (even many scholars) would find it difficult, or even impossible, to understand a film with dialogue in 15th-century English; thus, audiences willingly accept characters speaking an updated language, and modern slang and figures of speech are often used in these films.
Unconscious anachronism
Unintentional anachronisms may occur even in what are intended as wholly objective and accurate records or representations of historic artifacts and artworks, because the perspectives of historical recorders are conditioned by the assumptions and practices of their own times, in a form of cultural bias. One example is the attribution of historically inaccurate beards to various medieval tomb effigies and figures in stained glass in records made by English antiquaries of the late 16th and early 17th centuries. Working in an age in which beards were in fashion and widespread, the antiquaries seem to have unconsciously projected the fashion back into an era in which they were rare.
In academia
In historical writing, the most common type of anachronism is the adoption of the political, social or cultural concerns and assumptions of one era to interpret or evaluate the events and actions of another. The anachronistic application of present-day perspectives to comment on the historical past is sometimes described as presentism. Empiricist historians, working in the traditions established by Leopold von Ranke in the 19th century, regard this as a great error, and a trap to be avoided. Arthur Marwick has argued that "a grasp of the fact that past societies are very different from our own, and ... very difficult to get to know" is an essential and fundamental skill of the professional historian; and that "anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history".
Detection of forgery
The ability to identify anachronisms may be employed as a critical and forensic tool to demonstrate the fraudulence of a document or artifact purporting to be from an earlier time. Anthony Grafton discusses, for example, the work of the 3rd-century philosopher Porphyry, of Isaac Casaubon (1559–1614), and of Richard Reitzenstein (1861–1931), all of whom succeeded in exposing literary forgeries and plagiarisms, such as those included in the "Hermetic Corpus", through – among other techniques – the recognition of anachronisms. The detection of anachronisms is an important element within the scholarly discipline of diplomatics, the critical analysis of the forms and language of documents, developed by the Maurist scholar Jean Mabillon (1632–1707) and his successors René-Prosper Tassin (1697–1777) and Charles-François Toustain (1700–1754). The philosopher and reformer Jeremy Bentham wrote at the beginning of the 19th century:
Examples are:
The exposure by Lorenzo Valla in 1440 of the so-called Donation of Constantine, a decree purportedly issued by the Emperor Constantine the Great in either 315 or 317 AD, as a later forgery, depended to a considerable degree on the identification of anachronisms, such as references to the city of Constantinople (a name not in fact bestowed until 330 AD).
A large number of apparent anachronisms in the Book of Mormon have served to convince critics that the book was written in the 19th century, and not, as its adherents claim, in pre-Columbian America.
The use of 19th- and 20th-century anti-semitic terminology demonstrates that the purported "Franklin Prophecy" (attributed to Benjamin Franklin, who died in 1790) is a forgery.
The "William Lynch speech", an address, supposedly delivered in 1712, on the control of slaves in Virginia, is now considered to be a 20th-century forgery, partly on account of its use of anachronistic terms such as "program" and "refueling".
See also
Anachronisms in the Book of Mormon
Anatopism
Evolutionary anachronism
Invented traditions
List of stories set in a future now past
Retrofuturism
Skeuomorph
Society for Creative Anachronism
Steampunk
Tiffany Problem
Whig history
References
Bibliography
External links
Homogeneous catalysis
In chemistry, homogeneous catalysis is catalysis where the catalyst is in the same phase as the reactants, principally by a soluble catalyst in a solution. In contrast, heterogeneous catalysis describes processes where the catalysts and substrate are in distinct phases, typically solid and gas, respectively. The term is used almost exclusively to describe solutions and implies catalysis by organometallic compounds. Homogeneous catalysis is an established technology that continues to evolve. An illustrative major application is the production of acetic acid. Enzymes are examples of homogeneous catalysts.
Examples
Acid catalysis
The proton is a pervasive homogeneous catalyst because water is the most common solvent. Water forms protons by the process of self-ionization of water. In an illustrative case, acids accelerate (catalyze) the hydrolysis of esters:
CH3CO2CH3 + H2O ⇌ CH3CO2H + CH3OH
At neutral pH, aqueous solutions of most esters do not hydrolyze at practical rates.
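To illustrate why the reaction is impractically slow at neutral pH, specific acid catalysis can be sketched as an observed pseudo-first-order rate constant proportional to the hydronium concentration; the second-order rate constant below is a hypothetical placeholder, used only to show the relative scaling with pH.

```python
# Illustrative sketch of specific acid catalysis: k_obs = k_H * [H3O+].
# The second-order constant k_H is hypothetical.
k_H = 1.0e-4  # L/(mol*s), hypothetical catalytic rate constant

def k_obs(pH: float) -> float:
    return k_H * 10.0 ** (-pH)

for pH in (7.0, 3.0, 1.0):
    print(f"pH {pH}: k_obs ~ {k_obs(pH):.2e} s^-1 ({k_obs(pH) / k_obs(7.0):.0e}x the neutral rate)")
```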
Transition metal-catalysis
Hydrogenation and related reactions
A prominent class of reductive transformations are hydrogenations. In this process, H2 is added to unsaturated substrates. A related methodology, transfer hydrogenation, involves the transfer of hydrogen from one substrate (the hydrogen donor) to another (the hydrogen acceptor). Related reactions entail "HX additions" where X = silyl (hydrosilylation) and CN (hydrocyanation). Most large-scale industrial hydrogenations – margarine, ammonia, benzene-to-cyclohexane – are conducted with heterogeneous catalysts. Fine chemical syntheses, however, often rely on homogeneous catalysts.
Carbonylations
Hydroformylation, a prominent form of carbonylation, involves the addition of H and "C(O)H" across a double bond. This process is almost exclusively conducted with soluble rhodium- and cobalt-containing complexes.
A related carbonylation is the conversion of alcohols to carboxylic acids. MeOH and CO react in the presence of homogeneous catalysts to give acetic acid, as practiced in the Monsanto process and Cativa processes. Related reactions include hydrocarboxylation and hydroesterifications.
Polymerization and metathesis of alkenes
A number of polyolefins, e.g. polyethylene and polypropylene, are produced from ethylene and propylene by Ziegler-Natta catalysis. Heterogeneous catalysts dominate, but many soluble catalysts are employed especially for stereospecific polymers. Olefin metathesis is usually catalyzed heterogeneously in industry, but homogeneous variants are valuable in fine chemical synthesis.
Oxidations
Homogeneous catalysts are also used in a variety of oxidations. In the Wacker process, acetaldehyde is produced from ethene and oxygen. Many non-organometallic complexes are also widely used in catalysis, e.g. for the production of terephthalic acid from xylene. Alkenes are epoxidized and dihydroxylated by metal complexes, as illustrated by the Halcon process and the Sharpless dihydroxylation.
Enzymes (including metalloenzymes)
Enzymes are homogeneous catalysts that are essential for life but are also harnessed for industrial processes. A well-studied example is carbonic anhydrase, which catalyzes the release of CO2 into the lungs from the bloodstream. Enzymes possess properties of both homogeneous and heterogeneous catalysts. As such, they are usually regarded as a third, separate category of catalyst. Water is a common reagent in enzymatic catalysis. Esters and amides are slow to hydrolyze in neutral water, but the rates are sharply affected by metalloenzymes, which can be viewed as large coordination complexes. Acrylamide is prepared commercially by the enzyme-catalyzed hydration of acrylonitrile, a process practiced on a large scale.
Advantages and disadvantages
Advantages
Homogeneous catalysts are often more selective than heterogeneous catalysts.
For exothermic processes, homogeneous catalysts dissipate heat readily into the solvent.
Homogeneous catalysts are easier to characterize, making their reaction mechanisms amenable to rational manipulation.
Disadvantages
The separation of homogeneous catalysts from products can be challenging. In some cases involving high activity catalysts, the catalyst is not removed from the product. In other cases, distillation can extract volatile organic products.
Homogeneous catalysts have limited thermal stability compared to heterogeneous catalysts. Many organometallic complexes degrade below 100 °C. Some pincer-based catalysts, however, operate near 200 °C.
See also
Concurrent tandem catalysis
References
Catalysis
Bicarbonate
In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO3−.
Bicarbonate serves a crucial biochemical role in the physiological pH buffering system.
The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The name lives on as a trivial name.
Chemical properties
The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid, HNO3. The bicarbonate ion carries a formal charge of −1 and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid, H2CO3, and the conjugate acid of the carbonate ion, CO32−, as shown by these equilibrium reactions:
CO32− + 2 H2O ⇌ HCO3− + H2O + OH− ⇌ H2CO3 + 2 OH−
H2CO3 + 2 H2O ⇌ HCO3− + H3O+ + H2O ⇌ CO32− + 2 H3O+.
A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality.
Physiological role
Bicarbonate is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of CO2 in the body is converted into carbonic acid (H2CO3), which is the conjugate acid of bicarbonate (HCO3−) and can quickly turn into it.
With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling.
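The behaviour of this buffer is conveniently described by the Henderson–Hasselbalch equation (see the "See also" list below), pH = 6.1 + log10([HCO3−]/(0.03 × pCO2)); the sketch that follows uses the usual clinical constants for plasma and illustrative input values.

```python
# Sketch of the bicarbonate buffer via the Henderson-Hasselbalch equation,
# pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)), with [HCO3-] in mmol/L and pCO2 in mmHg.
import math

def plasma_pH(bicarbonate_mM: float, pCO2_mmHg: float) -> float:
    return 6.1 + math.log10(bicarbonate_mM / (0.03 * pCO2_mmHg))

print(f"Normal ([HCO3-] 24 mM, pCO2 40 mmHg): pH ~ {plasma_pH(24.0, 40.0):.2f}")  # ~7.40
print(f"Raised pCO2 (60 mmHg, same [HCO3-]):  pH ~ {plasma_pH(24.0, 60.0):.2f}")
```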
Additionally, bicarbonate plays a key role in the digestive system. It raises the internal pH of the stomach, after highly acidic digestive juices have finished in their digestion of food. Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach.
Bicarbonate in the environment
Bicarbonate is the dominant form of dissolved inorganic carbon in sea water, and in most fresh waters. As such it is an important sink in the carbon cycle.
Some plants like Chara utilize carbonate and produce calcium carbonate (CaCO3) as a result of biological metabolism.
In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH.
The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle.
Other uses
The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking.
Ammonium bicarbonate is used in digestive biscuit manufacture.
Diagnostics
In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051).
The parameter standard bicarbonate concentration (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of 40 mmHg (5.33 kPa), full oxygen saturation and 36 °C.
Bicarbonate compounds
Sodium bicarbonate
Potassium bicarbonate
Caesium bicarbonate
Magnesium bicarbonate
Calcium bicarbonate
Ammonium bicarbonate
Carbonic acid
See also
Carbon dioxide
Carbonate
Carbonic anhydrase
Hard water
Arterial blood gas test
Henderson–Hasselbalch equation
References
External links
Amphoteric compounds
Anions
Bicarbonates
Xenobiology
Xenobiology (XB) is a subfield of synthetic biology, the study of synthesizing and manipulating biological devices and systems. The name "xenobiology" derives from the Greek word xenos, which means "stranger, alien". Xenobiology is a form of biology that is not (yet) familiar to science and is not found in nature. In practice, it describes novel biological systems and biochemistries that differ from the canonical DNA–RNA-20 amino acid system (see central dogma of molecular biology). For example, instead of DNA or RNA, XB explores nucleic acid analogues, termed xeno nucleic acids (XNA), as information carriers. It also focuses on an expanded genetic code and the incorporation of non-proteinogenic amino acids, or “xeno amino acids”, into proteins.
Difference between xeno-, exo-, and astro-biology
"Astro" means "star" and "exo" means "outside". Both exo- and astrobiology deal with the search for naturally evolved life in the Universe, mostly on other planets in the circumstellar habitable zone. (These are also occasionally referred to as xenobiology.) Whereas astrobiologists are concerned with the detection and analysis of life elsewhere in the Universe, xenobiology attempts to design forms of life with a different biochemistry or different genetic code than on planet Earth.
Aims
Xenobiology has the potential to reveal fundamental knowledge about biology and the origin of life. In order to better understand the origin of life, it is necessary to know why life evolved seemingly via an early RNA world to the DNA-RNA-protein system and its nearly universal genetic code. Was it an evolutionary "accident" or were there constraints that ruled out other types of chemistries? By testing alternative biochemical "primordial soups", researchers expect to better understand the principles that gave rise to life as we know it.
Xenobiology offers an approach to developing industrial production systems with novel capabilities by means of biopolymer engineering and pathogen resistance. In all organisms, the genetic code encodes 20 canonical amino acids that are used for protein biosynthesis. In rare cases, special amino acids such as selenocysteine or pyrrolysine can be incorporated into proteins by the translational apparatus of some organisms. Together, these 20 + 2 amino acids are known as the 22 proteinogenic amino acids. By using additional amino acids from among the over 700 known to biochemistry, the capabilities of proteins may be altered to give rise to more efficient catalytic or material functions. The EC-funded project Metacode, for example, aims to incorporate metathesis (a useful catalytic function so far not known in living organisms) into bacterial cells. Another reason why XB could improve production processes lies in the possibility of reducing the risk of virus or bacteriophage contamination during cultivation, since XB cells would no longer provide suitable host cells, rendering them more resistant (an approach called semantic containment).
Xenobiology offers the option to design a "genetic firewall", a novel biocontainment system, which may help to strengthen and diversify current bio-containment approaches. One concern with traditional genetic engineering and biotechnology is horizontal gene transfer to the environment and possible risks to human health. One major idea in XB is to design alternative genetic codes and biochemistries so that horizontal gene transfer is no longer possible. Additionally, alternative biochemistries allow for new synthetic auxotrophies. The idea is to create an orthogonal biological system that would be incompatible with natural genetic systems.
Scientific approach
In xenobiology, the aim is to design and construct biological systems that differ from their natural counterparts on one or more fundamental levels. Ideally these new-to-nature organisms would be different in every possible biochemical aspect exhibiting a very different genetic code. The long-term goal is to construct a cell that would store its genetic information not in DNA but in an alternative informational polymer consisting of xeno nucleic acids (XNA), different base pairs, using non-canonical amino acids and an altered genetic code. So far cells have been constructed that incorporate only one or two of these features.
Xeno nucleic acids (XNA)
Originally this research on alternative forms of DNA was driven by the question of how life evolved on Earth and why RNA and DNA were selected by (chemical) evolution over other possible nucleic acid structures. Two hypotheses for the selection of RNA and DNA as life's backbone are that they are favored under the conditions found on Earth, or that they were coincidentally present in pre-life chemistry and continue to be used now. Systematic experimental studies aiming at the diversification of the chemical structure of nucleic acids have resulted in completely novel informational biopolymers. So far a number of XNAs with new chemical backbones or leaving groups of the DNA have been synthesized, e.g. hexose nucleic acid (HNA), threose nucleic acid (TNA), glycol nucleic acid (GNA) and cyclohexenyl nucleic acid (CeNA). The incorporation of XNA into a plasmid, in the form of 3 HNA codons, was accomplished as early as 2003. This XNA was used in vivo (E. coli) as a template for DNA synthesis. This study, using a binary (G/T) genetic cassette and two non-DNA bases (Hx/U), was extended to CeNA, while GNA currently seems to be too alien for the natural biological system to use as a template for DNA synthesis. Extended bases using a natural DNA backbone could, likewise, be transliterated into natural DNA, although to a more limited extent.
Aside from being used as extensions to template DNA strands, XNA activity has been tested for use in genetic catalysis. Although proteins are the most common components of cellular enzymatic activity, nucleic acids are also used in the cell to catalyze reactions. A 2015 study found that several different kinds of XNA, most notably FANA (2'-fluoroarabino nucleic acid), as well as HNA, CeNA and ANA (arabino nucleic acid), could be used to cleave RNA during post-transcriptional RNA processing, acting as XNA enzymes, hence the name XNAzymes. FANA XNAzymes also showed the ability to ligate DNA, RNA and XNA substrates. Although XNAzyme studies are still preliminary, this study was a step toward the search for synthetic circuit components that are more efficient than their DNA and RNA counterparts and that can regulate DNA, RNA, and their own XNA substrates.
Expanding the genetic alphabet
While XNAs have modified backbones, other experiments target the replacement or enlargement of the genetic alphabet of DNA with unnatural base pairs. For example, DNA has been designed that has – instead of the four standard bases A, T, G, and C – six bases A, T, G, C, and the two new ones P and Z (where Z stands for 6-amino-5-nitro-3-(1′-β-D-2′-deoxyribofuranosyl)-2(1H)-pyridone, and P stands for 2-amino-8-(1′-β-D-2′-deoxyribofuranosyl)imidazo[1,2-a]-1,3,5-triazin-4(8H)-one). In a systematic study, Leconte et al. tested the viability of 60 candidate bases (yielding potentially 3600 base pairs) for possible incorporation into DNA.
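To see why enlarging the alphabet matters for coding capacity, note that the number of possible triplet codons grows as the cube of the number of letters; a simple count (basic combinatorics, not taken from any particular study) gives:

```latex
4^3 = 64 \ \text{codons for } \{\mathrm{A, T, G, C}\} \qquad \text{versus} \qquad 6^3 = 216 \ \text{codons for } \{\mathrm{A, T, G, C, P, Z}\}
```

so a six-letter alphabet more than triples the number of codons that could, in principle, be assigned to amino acids or other functions.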
In 2002, Hirao et al. developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation toward a genetic code for protein synthesis containing a non-standard amino acid. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription, and afterward Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) was discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that the genetic alphabet expansion significantly augments DNA aptamer affinities for target proteins.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, alongside the four naturally occurring nucleotides, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides.
Novel polymerases
Neither the XNA nor the unnatural bases are recognized by natural polymerases. One of the major challenges is to find or create novel types of polymerases that will be able to replicate these new-to-nature constructs. In one case a modified variant of the HIV reverse transcriptase was found to be able to PCR-amplify an oligonucleotide containing a third type of base pair.
Pinheiro et al. (2012) demonstrated that the method of polymerase evolution and design successfully led to the storage and recovery of genetic information (less than 100 bp in length) from six alternative genetic polymers based on simple nucleic acid architectures not found in nature, xeno nucleic acids.
Genetic code engineering
One of the goals of xenobiology is to rewrite the genetic code. The most promising approach to change the code is the reassignment of seldom used or even unused codons.
In an ideal scenario, the genetic code is expanded by one codon, which has been liberated from its old function and fully reassigned to a non-canonical amino acid (ncAA) ("code expansion"). As these methods are laborious to implement, some shortcuts can be applied ("code engineering"), for example in bacteria that are auxotrophic for specific amino acids and that at some point in the experiment are fed isostructural analogues instead of the canonical amino acids for which they are auxotrophic. In that situation, the canonical amino acid residues in native proteins are substituted with the ncAAs. Even the insertion of multiple different ncAAs into the same protein is possible. Finally, the repertoire of 20 canonical amino acids can not only be expanded, but also reduced to 19.
By reassigning transfer RNA (tRNA)/aminoacyl-tRNA synthetase pairs, the codon specificity can be changed. Cells endowed with such aminoacyl-tRNA synthetases are thus able to read mRNA sequences that make no sense to the existing gene expression machinery. Altering the codon:tRNA synthetase pairs may lead to the in vivo incorporation of non-canonical amino acids into proteins.
In the past reassigning codons was mainly done on a limited scale. In 2013, however, Farren Isaacs and George Church at Harvard University reported the replacement of all 321 TAG stop codons present in the genome of E. coli with synonymous TAA codons, thereby demonstrating that massive substitutions can be combined into higher-order strains without lethal effects. Following the success of this genome wide codon replacement, the authors continued and achieved the reprogramming of 13 codons throughout the genome, directly affecting 42 essential genes.
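As a toy illustration of the string-level operation behind such codon replacement (not the actual multiplex genome-engineering workflow used in the E. coli recoding work), the following sketch scans an in-frame coding sequence and swaps every TAG stop codon for the synonymous TAA; the sequence is made up for the example:

```python
def reassign_stop_codons(cds: str, old: str = "TAG", new: str = "TAA") -> str:
    """Replace 'old' codons with 'new' at codon boundaries of an in-frame CDS."""
    assert len(cds) % 3 == 0, "coding sequence must be a whole number of codons"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(new if codon == old else codon for codon in codons)

# Hypothetical in-frame open reading frame ending in a TAG (amber) stop codon.
cds = "ATGGCTGGTTAG"
print(reassign_stop_codons(cds))  # prints ATGGCTGGTTAA
```

In the real genome-wide effort only the genuine TAG stop codons were targeted, with care taken over overlapping genetic elements; the sketch shows only the basic replacement step.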
An even more radical change in the genetic code is the change of a triplet codon to a quadruplet or even quintuplet codon, pioneered by Sisido in cell-free systems and by Schultz in bacteria. Finally, non-natural base pairs can be used to introduce novel amino acids into proteins.
Directed evolution
The goal of substituting DNA with XNA may also be reached by another route, namely by engineering the environment instead of the genetic modules. This approach has been successfully demonstrated by Marlière and Mutzel with the production of an E. coli strain whose DNA is composed of standard A, C and G nucleotides but has the synthetic thymine analogue 5-chlorouracil instead of thymine (T) in the corresponding positions of the sequence. These cells are then dependent on externally supplied 5-chlorouracil for growth, but otherwise they look and behave like normal E. coli. These cells, however, are not yet fully auxotrophic for the xeno-base, since they still grow on thymine when it is supplied to the medium.
Biosafety
Xenobiological systems are designed to convey orthogonality to natural biological systems. A (still hypothetical) organism that uses XNA, different base pairs and polymerases and has an altered genetic code will hardly be able to interact with natural forms of life on the genetic level. Thus, these xenobiological organisms represent a genetic enclave that cannot exchange information with natural cells. Altering the genetic machinery of the cell leads to semantic containment. In analogy to information processing in IT, this safety concept is termed a "genetic firewall". The concept of the genetic firewall seems to overcome a number of limitations of previous safety systems. A first experimental demonstration of the theoretical concept of the genetic firewall was achieved in 2013 with the construction of a genomically recoded organism (GRO). In this GRO all known UAG stop codons in E. coli were replaced by UAA codons, which allowed for the deletion of release factor 1 and the reassignment of the UAG translation function. The GRO exhibited increased resistance to T7 bacteriophage, showing that alternative genetic codes do reduce genetic compatibility. This GRO, however, is still very similar to its natural "parent" and cannot be regarded as having a genetic firewall. The possibility of reassigning the function of a large number of triplets opens the prospect of strains that combine XNA, novel base pairs, new genetic codes, and so on, and that cannot exchange any information with the natural biological world.
Regardless of the changes leading to a semantic containment mechanism in new organisms, any novel biochemical system still has to undergo toxicological screening. XNA, novel proteins and other components might represent novel toxins or have an allergenic potential that needs to be assessed.
Governance and regulatory issues
Xenobiology might challenge the regulatory framework, as current laws and directives deal with genetically modified organisms and do not directly mention chemically or genomically modified organisms. Taking into account that real xenobiology organisms are not expected in the next few years, policy makers have some time at hand to prepare for this upcoming governance challenge. Since 2012, the following groups have picked up the topic as a developing governance issue: policy advisers in the US, four national biosafety boards in Europe, the European Molecular Biology Organisation, and the European Commission's Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in three opinions (on definition; on risk assessment methodologies and safety aspects; and on risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology).
See also
Auxotrophy
Biological dark matter
Body plan
Directed evolution
Expanded genetic code
Foldamer
Hachimoji DNA
Hypothetical types of biochemistry
Life definitions
Nucleic acid analogue
Purple Earth hypothesis
RNA world
Shadow biosphere
References
External links
XB1: The First Conference on Xenobiology May 6–8, 2014. Genoa, Italy.
XB2: The Second Conference on Xenobiology May 24–26, 2016. Berlin, Germany.
Bioinformatics
Biotechnology
Synthetic biology | 0.773236 | 0.987047 | 0.763221 |
Foundational Model of Anatomy | The Foundational Model of Anatomy Ontology (FMA) is a reference ontology for the domain of human anatomy. It is a symbolic representation of the canonical, phenotypic structure of an organism; a spatial-structural ontology of anatomical entities and relations which form the physical organization of an organism at all salient levels of granularity.
FMA is developed and maintained by the Structural Informatics Group at the University of Washington.
Description
The FMA ontology contains approximately 75,000 classes and over 120,000 terms, as well as more than 2.1 million relationship instances drawn from over 168 relationship types.
See also
Terminologia Anatomica
Anatomography
References
External links
The Foundational Model of Anatomy Ontology
The Foundational Model of Anatomy Browser
FMA Ontology Browser
Bioinformatics
Ontology (information science)
Anatomical terminology | 0.777647 | 0.981447 | 0.763219 |
Polymer | A polymer is a substance or material that consists of very large molecules, or macromolecules, that are constituted by many repeating subunits derived from one or more species of monomers. Due to their broad spectrum of properties, both synthetic and natural polymers play essential and ubiquitous roles in everyday life. Polymers range from familiar synthetic plastics such as polystyrene to natural biopolymers such as DNA and proteins that are fundamental to biological structure and function. Polymers, both natural and synthetic, are created via polymerization of many small molecules, known as monomers. Their consequently large molecular mass, relative to small molecule compounds, produces unique physical properties including toughness, high elasticity, viscoelasticity, and a tendency to form amorphous and semicrystalline structures rather than crystals.
Polymers are studied in the fields of polymer science (which includes polymer chemistry and polymer physics), biophysics and materials science and engineering. Historically, products arising from the linkage of repeating units by covalent chemical bonds have been the primary focus of polymer science. An emerging important area now focuses on supramolecular polymers formed by non-covalent links. Polyisoprene of latex rubber is an example of a natural polymer, and the polystyrene of styrofoam is an example of a synthetic polymer. In biological contexts, essentially all biological macromolecules—i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides—are purely polymeric, or are composed in large part of polymeric components.
Etymology
The term "polymer" derives . The term was coined in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition. The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger, who spent the next decade finding experimental evidence for this hypothesis.
Common examples
Polymers are of two types: naturally occurring and synthetic (man-made).
Natural
Natural polymeric materials such as hemp, shellac, amber, wool, silk, and natural rubber have been used for centuries. A variety of other natural polymers exist, such as cellulose, which is the main constituent of wood and paper.
Space polymer
Hemoglycin (previously termed hemolithin) is a space polymer that is the first polymer of amino acids found in meteorites.
Synthetic
The list of synthetic polymers, roughly in order of worldwide demand, includes polyethylene, polypropylene, polystyrene, polyvinyl chloride, synthetic rubber, phenol formaldehyde resin (or Bakelite), neoprene, nylon, polyacrylonitrile, PVB, silicone, and many more. More than 330 million tons of these polymers are made every year (2015).
Most commonly, the continuously linked backbone of a polymer used for the preparation of plastics consists mainly of carbon atoms. A simple example is polyethylene ('polythene' in British English), whose repeat unit or monomer is ethylene. Many other structures do exist; for example, elements such as silicon form familiar materials such as silicones, examples being Silly Putty and waterproof plumbing sealant. Oxygen is also commonly present in polymer backbones, such as those of polyethylene glycol, polysaccharides (in glycosidic bonds), and DNA (in phosphodiester bonds).
Synthesis
Polymerization is the process of combining many small molecules known as monomers into a covalently bonded chain or network. During the polymerization process, some chemical groups may be lost from each monomer. This happens in the polymerization of PET polyester. The monomers are terephthalic acid (HOOCC6H4COOH) and ethylene glycol (HOCH2CH2OH) but the repeating unit is OCC6H4COOCH2CH2O, which corresponds to the combination of the two monomers with the loss of two water molecules. The distinct piece of each monomer that is incorporated into the polymer is known as a repeat unit or monomer residue.
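This condensation can be written schematically as follows (a simplified overall scheme that ignores the end groups of a finite chain):

```latex
n\,\mathrm{HOOC{-}C_6H_4{-}COOH} + n\,\mathrm{HO{-}CH_2CH_2{-}OH} \longrightarrow [\mathrm{{-}OC{-}C_6H_4{-}CO{-}O{-}CH_2CH_2{-}O{-}}]_n + 2n\,\mathrm{H_2O}
```

Each repeat unit that is formed corresponds to one molecule of each monomer minus two molecules of water, as noted above.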
Synthetic methods are generally divided into two categories, step-growth polymerization and chain polymerization. The essential difference between the two is that in chain polymerization, monomers are added to the chain one at a time only, such as in polystyrene, whereas in step-growth polymerization chains of monomers may combine with one another directly, such as in polyester. Step-growth polymerization can be divided into polycondensation, in which low-molar-mass by-product is formed in every reaction step, and polyaddition.
Newer methods, such as plasma polymerization do not fit neatly into either category. Synthetic polymerization reactions may be carried out with or without a catalyst. Laboratory synthesis of biopolymers, especially of proteins, is an area of intensive research.
Biological synthesis
There are three main classes of biopolymers: polysaccharides, polypeptides, and polynucleotides.
In living cells, they may be synthesized by enzyme-mediated processes, such as the formation of DNA catalyzed by DNA polymerase. The synthesis of proteins involves multiple enzyme-mediated processes to transcribe genetic information from the DNA to RNA and subsequently translate that information to synthesize the specified protein from amino acids. The protein may be modified further following translation in order to provide appropriate structure and functioning. There are other biopolymers such as rubber, suberin, melanin, and lignin.
Modification of natural polymers
Naturally occurring polymers such as cotton, starch, and rubber were familiar materials for years before synthetic polymers such as polyethene and perspex appeared on the market. Many commercially important polymers are synthesized by chemical modification of naturally occurring polymers. Prominent examples include the reaction of nitric acid and cellulose to form nitrocellulose and the formation of vulcanized rubber by heating natural rubber in the presence of sulfur. Ways in which polymers can be modified include oxidation, cross-linking, and end-capping.
Structure
The structure of a polymeric material can be described at different length scales, from the sub-nm length scale up to the macroscopic one. There is in fact a hierarchy of structures, in which each stage provides the foundations for the next one.
The starting point for the description of the structure of a polymer is the identity of its constituent monomers. Next, the microstructure essentially describes the arrangement of these monomers within the polymer at the scale of a single chain. The microstructure determines the possibility for the polymer to form phases with different arrangements, for example through crystallization, the glass transition or microphase separation.
These features play a major role in determining the physical and chemical properties of a polymer.
Monomers and repeat units
The identity of the repeat units (monomer residues, also known as "mers") comprising a polymer is its first and most important attribute. Polymer nomenclature is generally based upon the type of monomer residues comprising the polymer. A polymer which contains only a single type of repeat unit is known as a homopolymer, while a polymer containing two or more types of repeat units is known as a copolymer. A terpolymer is a copolymer which contains three types of repeat units.
Polystyrene is composed only of styrene-based repeat units, and is classified as a homopolymer. Polyethylene terephthalate, even though produced from two different monomers (ethylene glycol and terephthalic acid), is usually regarded as a homopolymer because only one type of repeat unit is formed. Ethylene-vinyl acetate contains more than one variety of repeat unit and is a copolymer. Some biological polymers are composed of a variety of different but structurally related monomer residues; for example, polynucleotides such as DNA are composed of four types of nucleotide subunits.
{| class="wikitable" style="text-align:left; font-size:90%;" width="80%"
|-
| class="hintergrundfarbe6" align="center" colspan="4" |Homopolymers and copolymers (examples)
|- style="vertical-align:top" class="hintergrundfarbe2"
|
|
|
|
|- style="vertical-align:top"
| Homopolymer polystyrene
| Homopolymer polydimethylsiloxane, a silicone. The main chain is formed of silicon and oxygen atoms.
| The homopolymer polyethylene terephthalate has only one repeat unit.
| Copolymer styrene-butadiene rubber: The repeat units based on styrene and 1,3-butadiene form two repeating units, which can alternate in any order in the macromolecule, making the polymer thus a random copolymer.
|}
A polymer containing ionizable subunits (e.g., pendant carboxylic groups) is known as a polyelectrolyte or ionomer, when the fraction of ionizable units is large or small respectively.
Microstructure
The microstructure of a polymer (sometimes called configuration) relates to the physical arrangement of monomer residues along the backbone of the chain. These are the elements of polymer structure that require the breaking of a covalent bond in order to change. Various polymer structures can be produced depending on the monomers and reaction conditions: a polymer may consist of linear macromolecules, each containing only one unbranched chain. In the case of unbranched polyethylene, this chain is a long-chain n-alkane. There are also branched macromolecules with a main chain and side chains; in the case of polyethylene the side chains would be alkyl groups. In particular, unbranched macromolecules can be semi-crystalline in the solid state, with ordered crystalline chain sections.
While branched and unbranched polymers are usually thermoplastics, many elastomers have wide-meshed cross-linking between the "main chains". Close-meshed crosslinking, on the other hand, leads to thermosets. Highly branched polymers are amorphous, and the molecules in the solid interact randomly.
{| class="wikitable" style="text-align:center; font-size:90%;" width="60%"
|- class="hintergrundfarbe2"
| Linear, unbranched macromolecule
| Branched macromolecule
|Semi-crystalline structure of an unbranched polymer
| Slightly cross-linked polymer (elastomer)
| Highly cross-linked polymer (thermoset)
|}
Polymer architecture
An important microstructural feature of a polymer is its architecture and shape, which relates to the way branch points lead to a deviation from a simple linear chain. A branched polymer molecule is composed of a main chain with one or more substituent side chains or branches. Types of branched polymers include star polymers, comb polymers, polymer brushes, dendronized polymers, ladder polymers, and dendrimers. There exist also two-dimensional polymers (2DP) which are composed of topologically planar repeat units. A polymer's architecture affects many of its physical properties including solution viscosity, melt viscosity, solubility in various solvents, glass-transition temperature and the size of individual polymer coils in solution. A variety of techniques may be employed for the synthesis of a polymeric material with a range of architectures, for example living polymerization.
Chain length
A common means of expressing the length of a chain is the degree of polymerization, which quantifies the number of monomers incorporated into the chain. As with other molecules, a polymer's size may also be expressed in terms of molecular weight. Since synthetic polymerization techniques typically yield a statistical distribution of chain lengths, the molecular weight is expressed in terms of weighted averages. The number-average molecular weight (Mn) and weight-average molecular weight (Mw) are most commonly reported. The ratio of these two values (Mw / Mn) is the dispersity (Đ), which is commonly used to express the width of the molecular weight distribution.
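As a concrete illustration of these averages, the short sketch below computes Mn, Mw and the dispersity Đ for a hypothetical, made-up chain-length distribution (the numbers are purely illustrative):

```python
# Hypothetical distribution: {molar mass in g/mol: number of chains with that mass}
distribution = {10_000: 50, 50_000: 30, 100_000: 20}

total_chains = sum(distribution.values())
total_mass = sum(M * n for M, n in distribution.items())

Mn = total_mass / total_chains                                    # number-average
Mw = sum(M**2 * n for M, n in distribution.items()) / total_mass  # weight-average
dispersity = Mw / Mn

print(f"Mn = {Mn:,.0f} g/mol, Mw = {Mw:,.0f} g/mol, Đ = {dispersity:.2f}")
# -> Mn = 40,000 g/mol, Mw = 70,000 g/mol, Đ = 1.75
```

A dispersity of exactly 1 would correspond to perfectly uniform chains; real synthetic polymers typically have Đ well above 1.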
The physical properties of a polymer strongly depend on the length (or equivalently, the molecular weight) of the polymer chain. One important example of the physical consequences of the molecular weight is the scaling of the viscosity (resistance to flow) in the melt. The influence of the weight-average molecular weight on the melt viscosity η depends on whether the polymer is above or below the onset of entanglements. Below the entanglement molecular weight, the melt viscosity scales roughly in direct proportion to chain length (η ∝ Mw), whereas above the entanglement molecular weight it rises much more steeply (η ∝ Mw^3.4). In the latter case, increasing the polymer chain length 10-fold would increase the viscosity over 1000 times. Increasing chain length furthermore tends to decrease chain mobility, increase strength and toughness, and increase the glass-transition temperature (Tg). This is a result of the increase in chain interactions such as van der Waals attractions and entanglements that come with increased chain length. These interactions tend to fix the individual chains more strongly in position and resist deformations and matrix breakup, both at higher stresses and higher temperatures.
Monomer arrangement in copolymers
Copolymers are classified either as statistical copolymers, alternating copolymers, block copolymers, graft copolymers or gradient copolymers. In the schematic figure below, Ⓐ and Ⓑ symbolize the two repeat units.
{| class="wikitable" style="text-align:center; font-size:90%;"
|- class="hintergrundfarbe2"
| Random copolymer
| Gradient copolymer
| rowspan="2" | Graft copolymer
|- class="hintergrundfarbe2"
| Alternating copolymer
| Block copolymer
|}
Alternating copolymers possess two regularly alternating monomer residues, –A–B–A–B– ([AB]n). An example is the equimolar copolymer of styrene and maleic anhydride formed by free-radical chain-growth polymerization. A step-growth copolymer such as Nylon 66 can also be considered a strictly alternating copolymer of diamine and diacid residues, but is often described as a homopolymer with the dimeric residue of one amine and one acid as a repeat unit.
Periodic copolymers have more than two species of monomer units in a regular sequence.
Statistical copolymers have monomer residues arranged according to a statistical rule. A statistical copolymer in which the probability of finding a particular type of monomer residue at a particular point in the chain is independent of the types of surrounding monomer residue may be referred to as a truly random copolymer. For example, the chain-growth copolymer of vinyl chloride and vinyl acetate is random.
Block copolymers have long sequences of different monomer units. Polymers with two or three blocks of two distinct chemical species (e.g., A and B) are called diblock copolymers and triblock copolymers, respectively. Polymers with three blocks, each of a different chemical species (e.g., A, B, and C) are termed triblock terpolymers.
Graft or grafted copolymers contain side chains or branches whose repeat units have a different composition or configuration than the main chain. The branches are added on to a preformed main chain macromolecule.
Monomers within a copolymer may be organized along the backbone in a variety of ways. A copolymer containing a controlled arrangement of monomers is called a sequence-controlled polymer. Alternating, periodic and block copolymers are simple examples of sequence-controlled polymers.
Tacticity
Tacticity describes the relative stereochemistry of chiral centers in neighboring structural units within a macromolecule. There are three types of tacticity: isotactic (all substituents on the same side), atactic (random placement of substituents), and syndiotactic (alternating placement of substituents).
{| class="wikitable" style="text-align:center; font-size:90%;" width="60%"
|- class="hintergrundfarbe2"
|Isotactic
| Syndiotactic
| Atactic (i. e. random)
|}
Morphology
Polymer morphology generally describes the arrangement and microscale ordering of polymer chains in space. The macroscopic physical properties of a polymer are related to the interactions between the polymer chains.
Disordered polymers: In the solid state, atactic polymers, polymers with a high degree of branching and random copolymers form amorphous (i.e. glassy structures). In melt and solution, polymers tend to form a constantly changing "statistical cluster", see freely-jointed-chain model. In the solid state, the respective conformations of the molecules are frozen. Hooking and entanglement of chain molecules lead to a "mechanical bond" between the chains. Intermolecular and intramolecular attractive forces only occur at sites where molecule segments are close enough to each other. The irregular structures of the molecules prevent a narrower arrangement.
Linear polymers with periodic structure, low branching and stereoregularity (e.g. not atactic) have a semi-crystalline structure in the solid state. In simple polymers (such as polyethylene), the chains are present in the crystal in zigzag conformation. Several zigzag conformations form dense chain packs, called crystallites or lamellae. The lamellae are much thinner than the polymers are long (often about 10 nm). They are formed by more or less regular folding of one or more molecular chains. Amorphous structures exist between the lamellae. Individual molecules can lead to entanglements between the lamellae and can also be involved in the formation of two (or more) lamellae (such chains are then called tie molecules). Several lamellae form a superstructure, a spherulite, often with a diameter in the range of 0.05 to 1 mm.
The type and arrangement of (functional) residues of the repeat units affects or determines the crystallinity and the strength of the secondary valence bonds. In isotactic polypropylene, the molecules form a helix. Like the zigzag conformation, such helices allow a dense chain packing. Particularly strong intermolecular interactions occur when the residues of the repeating units allow the formation of hydrogen bonds, as in the case of p-aramid. The formation of strong intramolecular associations may produce diverse folded states of single linear chains with distinct circuit topology. Crystallinity and superstructure are always dependent on the conditions of their formation, see also: crystallization of polymers. Compared to amorphous structures, semi-crystalline structures lead to a higher stiffness, density, melting temperature and higher resistance of a polymer.
Cross-linked polymers: Wide-meshed cross-linked polymers are elastomers and cannot be molten (unlike thermoplastics); heating cross-linked polymers only leads to decomposition. Thermoplastic elastomers, on the other hand, are reversibly "physically crosslinked" and can be molten. Block copolymers in which a hard segment of the polymer has a tendency to crystallize and a soft segment has an amorphous structure are one type of thermoplastic elastomers: the hard segments ensure wide-meshed, physical crosslinking.
Crystallinity
When applied to polymers, the term crystalline has a somewhat ambiguous usage. In some cases, the term crystalline finds identical usage to that used in conventional crystallography. For example, the structure of a crystalline protein or polynucleotide, such as a sample prepared for x-ray crystallography, may be defined in terms of a conventional unit cell composed of one or more polymer molecules with cell dimensions of hundreds of angstroms or more. A synthetic polymer may be loosely described as crystalline if it contains regions of three-dimensional ordering on atomic (rather than macromolecular) length scales, usually arising from intramolecular folding or stacking of adjacent chains. Synthetic polymers may consist of both crystalline and amorphous regions; the degree of crystallinity may be expressed in terms of a weight fraction or volume fraction of crystalline material. Few synthetic polymers are entirely crystalline. The crystallinity of polymers is characterized by their degree of crystallinity, ranging from zero for a completely non-crystalline polymer to one for a theoretical completely crystalline polymer. Polymers with microcrystalline regions are generally tougher (can be bent more without breaking) and more impact-resistant than totally amorphous polymers. Polymers with a degree of crystallinity approaching zero or one will tend to be transparent, while polymers with intermediate degrees of crystallinity will tend to be opaque due to light scattering by crystalline or glassy regions. For many polymers, crystallinity may also be associated with decreased transparency.
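One common way to estimate the weight-fraction degree of crystallinity is from differential scanning calorimetry data (stated here in its simplest form, without the corrections that particular polymers may require):

```latex
X_c = \frac{\Delta H_m}{\Delta H_m^{0}}
```

where ΔHm is the measured melting enthalpy of the sample and ΔHm0 is the melting enthalpy of a hypothetical 100% crystalline sample of the same polymer.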
Chain conformation
The space occupied by a polymer molecule is generally expressed in terms of radius of gyration, which is an average distance from the center of mass of the chain to the chain itself. Alternatively, it may be expressed in terms of pervaded volume, which is the volume spanned by the polymer chain and scales with the cube of the radius of gyration.
The simplest theoretical models for polymers in the molten, amorphous state are ideal chains.
Properties
Polymer properties depend on their structure, and they are divided into classes according to their physical basis. Many physical and chemical properties describe how a polymer behaves as a continuous macroscopic material. They are classified as bulk properties, or intensive properties according to thermodynamics.
Mechanical properties
The bulk properties of a polymer are those most often of end-use interest. These are the properties that dictate how the polymer actually behaves on a macroscopic scale.
Tensile strength
The tensile strength of a material quantifies how much elongating stress the material will endure before failure. This is very important in applications that rely upon a polymer's physical strength or durability. For example, a rubber band with a higher tensile strength will hold a greater weight before snapping. In general, tensile strength increases with polymer chain length and crosslinking of polymer chains.
Young's modulus of elasticity
Young's modulus quantifies the elasticity of the polymer. It is defined, for small strains, as the ratio of rate of change of stress to strain. Like tensile strength, this is highly relevant in polymer applications involving the physical properties of polymers, such as rubber bands. The modulus is strongly dependent on temperature. Viscoelasticity describes a complex time-dependent elastic response, which will exhibit hysteresis in the stress-strain curve when the load is removed. Dynamic mechanical analysis or DMA measures this complex modulus by oscillating the load and measuring the resulting strain as a function of time.
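For small strains the definition can be written compactly (standard elasticity notation, not specific to polymers):

```latex
E = \frac{\sigma}{\varepsilon} = \frac{F/A}{\Delta L / L_0}
```

where σ is the tensile stress (force F per cross-sectional area A) and ε is the strain (elongation ΔL relative to the original length L0).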
Transport properties
Transport properties such as diffusivity describe how rapidly molecules move through the polymer matrix. These are very important in many applications of polymers for films and membranes.
The movement of individual macromolecules occurs by a process called reptation in which each chain molecule is constrained by entanglements with neighboring chains to move within a virtual tube. The theory of reptation can explain polymer molecule dynamics and viscoelasticity.
Phase behavior
Crystallization and melting
Depending on their chemical structures, polymers may be either semi-crystalline or amorphous. Semi-crystalline polymers can undergo crystallization and melting transitions, whereas amorphous polymers do not. In polymers, crystallization and melting do not suggest solid-liquid phase transitions, as in the case of water or other molecular fluids. Instead, crystallization and melting refer to the phase transitions between two solid states (i.e., semi-crystalline and amorphous). Crystallization occurs above the glass-transition temperature (Tg) and below the melting temperature (Tm).
Glass transition
All polymers (amorphous or semi-crystalline) go through glass transitions. The glass-transition temperature (Tg) is a crucial physical parameter for polymer manufacturing, processing, and use. Below Tg, molecular motions are frozen and polymers are brittle and glassy. Above Tg, molecular motions are activated and polymers are rubbery and viscous. The glass-transition temperature may be engineered by altering the degree of branching or crosslinking in the polymer or by the addition of plasticizers.
Whereas crystallization and melting are first-order phase transitions, the glass transition is not. The glass transition shares features of second-order phase transitions (such as discontinuity in the heat capacity, as shown in the figure), but it is generally not considered a thermodynamic transition between equilibrium states.
Mixing behavior
In general, polymeric mixtures are far less miscible than mixtures of small molecule materials. This effect results from the fact that the driving force for mixing is usually entropy, not interaction energy. In other words, miscible materials usually form a solution not because their interaction with each other is more favorable than their self-interaction, but because of an increase in entropy and hence free energy associated with increasing the amount of volume available to each component. This increase in entropy scales with the number of particles (or moles) being mixed. Since polymeric molecules are much larger and hence generally have much higher specific volumes than small molecules, the number of molecules involved in a polymeric mixture is far smaller than the number in a small molecule mixture of equal volume. The energetics of mixing, on the other hand, is comparable on a per-volume basis for polymeric and small molecule mixtures. This tends to increase the free energy of mixing for polymer solutions, making solvation less favorable and thereby making concentrated solutions of polymers far rarer than those of small molecules.
Furthermore, the phase behavior of polymer solutions and mixtures is more complex than that of small molecule mixtures. Whereas most small molecule solutions exhibit only an upper critical solution temperature phase transition (UCST), at which phase separation occurs with cooling, polymer mixtures commonly exhibit a lower critical solution temperature phase transition (LCST), at which phase separation occurs with heating.
In dilute solutions, the properties of the polymer are characterized by the interaction between the solvent and the polymer. In a good solvent, the polymer appears swollen and occupies a large volume. In this scenario, intermolecular forces between the solvent and monomer subunits dominate over intramolecular interactions. In a bad solvent or poor solvent, intramolecular forces dominate and the chain contracts. In the theta solvent, or the state of the polymer solution where the value of the second virial coefficient becomes 0, the intermolecular polymer-solvent repulsion balances exactly the intramolecular monomer-monomer attraction. Under the theta condition (also called the Flory condition), the polymer behaves like an ideal random coil. The transition between the states is known as a coil–globule transition.
Inclusion of plasticizers
Inclusion of plasticizers tends to lower Tg and increase polymer flexibility. Addition of the plasticizer will also modify dependence of the glass-transition temperature Tg on the cooling rate. The mobility of the chain can further change if the molecules of plasticizer give rise to hydrogen bonding formation. Plasticizers are generally small molecules that are chemically similar to the polymer and create gaps between polymer chains for greater mobility and fewer interchain interactions. A good example of the action of plasticizers is related to polyvinylchlorides or PVCs. A uPVC, or unplasticized polyvinylchloride, is used for things such as pipes. A pipe has no plasticizers in it, because it needs to remain strong and heat-resistant. Plasticized PVC is used in clothing for a flexible quality. Plasticizers are also put in some types of cling film to make the polymer more flexible.
Chemical properties
The attractive forces between polymer chains play a large part in determining the polymer's properties. Because polymer chains are so long, they have many such interchain interactions per molecule, amplifying the effect of these interactions on the polymer properties in comparison to attractions between conventional molecules. Different side groups on the polymer can lend the polymer to ionic bonding or hydrogen bonding between its own chains. These stronger forces typically result in higher tensile strength and higher crystalline melting points.
The intermolecular forces in polymers can be affected by dipoles in the monomer units. Polymers containing amide or carbonyl groups can form hydrogen bonds between adjacent chains; the partially positively charged hydrogen atoms in N-H groups of one chain are strongly attracted to the partially negatively charged oxygen atoms in C=O groups on another. These strong hydrogen bonds, for example, result in the high tensile strength and melting point of polymers containing urethane or urea linkages. Polyesters have dipole-dipole bonding between the oxygen atoms in C=O groups and the hydrogen atoms in H-C groups. Dipole bonding is not as strong as hydrogen bonding, so a polyester's melting point and strength are lower than Kevlar's (Twaron), but polyesters have greater flexibility. Polymers with non-polar units such as polyethylene interact only through weak Van der Waals forces. As a result, they typically have lower melting temperatures than other polymers.
When a polymer is dispersed or dissolved in a liquid, such as in commercial products like paints and glues, the chemical properties and molecular interactions influence how the solution flows and can even lead to self-assembly of the polymer into complex structures. When a polymer is applied as a coating, the chemical properties will influence the adhesion of the coating and how it interacts with external materials, such as superhydrophobic polymer coatings leading to water resistance. Overall the chemical properties of a polymer are important elements for designing new polymeric material products.
Optical properties
Polymers such as PMMA and HEMA:MMA are used as matrices in the gain medium of solid-state dye lasers, also known as solid-state dye-doped polymer lasers. These polymers have a high surface quality and are also highly transparent, so that the laser properties are dominated by the laser dye used to dope the polymer matrix. These types of lasers, which also belong to the class of organic lasers, are known to yield very narrow linewidths, which is useful for spectroscopy and analytical applications. An important optical parameter of the polymers used in laser applications is the change in refractive index with temperature, dn/dT. For the polymers mentioned here, dn/dT ≈ −1.4 × 10−4 K−1 in the 297 ≤ T ≤ 337 K range.
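As a rough worked example of what this coefficient implies (simple arithmetic on the quoted figure, assuming dn/dT stays approximately constant over the range):

```latex
\Delta n \approx \frac{dn}{dT}\,\Delta T \approx (-1.4 \times 10^{-4}\ \mathrm{K^{-1}}) \times 40\ \mathrm{K} \approx -5.6 \times 10^{-3}
```

i.e., heating such a matrix across the full 40 K range lowers its refractive index by roughly 0.006.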
Electrical properties
Most conventional polymers such as polyethylene are electrical insulators, but the development of polymers containing π-conjugated bonds has led to a wealth of polymer-based semiconductors, such as polythiophenes. This has led to many applications in the field of organic electronics.
Applications
Nowadays, synthetic polymers are used in almost all walks of life. Modern society would look very different without them. The widespread use of polymers is connected to their unique properties: low density, low cost, good thermal/electrical insulation properties, high resistance to corrosion, low-energy-demanding polymer manufacture and facile processing into final products. For a given application, the properties of a polymer can be tuned or enhanced by combination with other materials, as in composites. Their application makes it possible to save energy (lighter cars and planes, thermally insulated buildings), protect food and drinking water (packaging), save land and lower the use of fertilizers (synthetic fibres), preserve other materials (coatings), and protect and save lives (hygiene, medical applications). A representative, non-exhaustive list of applications is given below.
Clothing, sportswear and accessories: polyester and PVC clothing, spandex, sport shoes, wetsuits, footballs and billiard balls, skis and snowboards, rackets, parachutes, sails, tents and shelters.
Electronic and photonic technologies: organic field effect transistors (OFET), light emitting diodes (OLED) and solar cells, television components, compact discs (CD), photoresists, holography.
Packaging and containers: films, bottles, food packaging, barrels.
Insulation: electrical and thermal insulation, spray foams.
Construction and structural applications: garden furniture, PVC windows, flooring, sealing, pipes.
Paints, glues and lubricants: varnish, adhesives, dispersants, anti-graffiti coatings, antifouling coatings, non-stick surfaces, lubricants.
Car parts: tires, bumpers, windshields, windscreen wipers, fuel tanks, car seats.
Household items: buckets, kitchenware, toys (e.g., construction sets and Rubik's cube).
Medical applications: blood bag, syringes, rubber gloves, surgical suture, contact lenses, prosthesis, controlled drug delivery and release, matrices for cell growth.
Personal hygiene and healthcare: diapers using superabsorbent polymers, toothbrushes, cosmetics, shampoo, condoms.
Security: personal protective equipment, bulletproof vests, space suits, ropes.
Separation technologies: synthetic membranes, fuel cell membranes, filtration, ion-exchange resins.
Money: polymer banknotes and payment cards.
3D printing.
Standardized nomenclature
There are multiple conventions for naming polymer substances. Many commonly used polymers, such as those found in consumer products, are referred to by a common or trivial name. The trivial name is assigned based on historical precedent or popular usage rather than a standardized naming convention. Both the American Chemical Society (ACS) and IUPAC have proposed standardized naming conventions; the ACS and IUPAC conventions are similar but not identical. Examples of the differences between the various naming conventions are given in the table below:
In both standardized conventions, the polymers' names are intended to reflect the monomer(s) from which they are synthesized (source-based nomenclature) rather than the precise nature of the repeating subunit. For example, the polymer synthesized from the simple alkene ethene is called polyethene, retaining the -ene suffix even though the double bond is removed during the polymerization process:
n CH2=CH2 → −(CH2−CH2)n−
However, IUPAC structure based nomenclature is based on naming of the preferred constitutional repeating unit.
IUPAC has also issued guidelines for abbreviating new polymer names. 138 common polymer abbreviations are also standardized in the standard ISO 1043-1.
Characterization
Polymer characterization spans many techniques for determining the chemical composition, molecular weight distribution, and physical properties. Select common techniques include the following:
Size-exclusion chromatography (also called gel permeation chromatography), sometimes coupled with static light scattering, can be used to determine the number-average molecular weight, weight-average molecular weight, and dispersity.
Scattering techniques, such as static light scattering and small-angle neutron scattering, are used to determine the dimensions (radius of gyration) of macromolecules in solution or in the melt. These techniques are also used to characterize the three-dimensional structure of microphase-separated block polymers, polymeric micelles, and other materials.
Wide-angle X-ray scattering (also called wide-angle X-ray diffraction) is used to determine the crystalline structure of polymers (or lack thereof).
Spectroscopy techniques, including Fourier-transform infrared spectroscopy, Raman spectroscopy, and nuclear magnetic resonance spectroscopy, can be used to determine the chemical composition.
Differential scanning calorimetry is used to characterize the thermal properties of polymers, such as the glass-transition temperature, crystallization temperature, and melting temperature. The glass-transition temperature can also be determined by dynamic mechanical analysis.
Thermogravimetry is a useful technique to evaluate the thermal stability of the polymer.
Rheology is used to characterize the flow and deformation behavior. It can be used to determine the viscosity, modulus, and other rheological properties. Rheology is also often used to determine the molecular architecture (molecular weight, molecular weight distribution, branching) and to understand how the polymer can be processed.
Degradation
Polymer degradation is a change in the properties—tensile strength, color, shape, or molecular weight—of a polymer or polymer-based product under the influence of one or more environmental factors, such as heat, light, and the presence of certain chemicals, oxygen, and enzymes. This change in properties is often the result of bond breaking in the polymer backbone (chain scission) which may occur at the chain ends or at random positions in the chain.
Although such changes are frequently undesirable, in some cases, such as biodegradation and recycling, they may be intended to prevent environmental pollution. Degradation can also be useful in biomedical settings. For example, a copolymer of polylactic acid and polyglycolic acid is employed in hydrolysable stitches that slowly degrade after they are applied to a wound.
The susceptibility of a polymer to degradation depends on its structure. Epoxies and chains containing aromatic functionalities are especially susceptible to UV degradation while polyesters are susceptible to degradation by hydrolysis. Polymers containing an unsaturated backbone degrade via ozone cracking. Carbon based polymers are more susceptible to thermal degradation than inorganic polymers such as polydimethylsiloxane and are therefore not ideal for most high-temperature applications.
The degradation of polyethylene occurs by random scission—a random breakage of the bonds that hold the atoms of the polymer together. When heated above 450 °C, polyethylene degrades to form a mixture of hydrocarbons. In the case of chain-end scission, monomers are released and this process is referred to as unzipping or depolymerization. Which mechanism dominates will depend on the type of polymer and temperature; in general, polymers with no or a single small substituent in the repeat unit will decompose via random-chain scission.
The sorting of polymer waste for recycling purposes may be facilitated by the use of the resin identification codes developed by the Society of the Plastics Industry to identify the type of plastic.
Product failure
Failure of safety-critical polymer components can cause serious accidents, such as fire in the case of cracked and degraded polymer fuel lines. Chlorine-induced cracking of acetal resin plumbing joints and polybutylene pipes has caused many serious floods in domestic properties, especially in the US in the 1990s. Traces of chlorine in the water supply attacked polymers present in the plumbing, a problem which occurs faster if any of the parts have been poorly extruded or injection molded. Attack of the acetal joint occurred because of faulty molding, leading to cracking along the threads of the fitting where there is stress concentration.
Polymer oxidation has caused accidents involving medical devices. One of the oldest known failure modes is ozone cracking caused by chain scission when ozone gas attacks susceptible elastomers, such as natural rubber and nitrile rubber. They possess double bonds in their repeat units which are cleaved during ozonolysis. Cracks in fuel lines can penetrate the bore of the tube and cause fuel leakage. If cracking occurs in the engine compartment, electric sparks can ignite the gasoline and can cause a serious fire. In medical use degradation of polymers can lead to changes of physical and chemical characteristics of implantable devices.
Nylon 66 is susceptible to acid hydrolysis, and in one accident, a fractured fuel line led to a spillage of diesel into the road. If diesel fuel leaks onto the road, accidents to following cars can be caused by the slippery nature of the deposit, which is like black ice. Furthermore, the asphalt concrete road surface will suffer damage as a result of the diesel fuel dissolving the asphaltenes from the composite material, resulting in the degradation of the asphalt surface and the structural integrity of the road.
History
Polymers have been essential components of commodities since the early days of humankind. The use of wool (keratin), cotton and linen fibres (cellulose) for garments, paper reed (cellulose) for paper are just a few examples of how ancient societies exploited polymer-containing raw materials to obtain artefacts. The latex sap of "caoutchouc" trees (natural rubber) reached Europe in the 16th century from South America long after the Olmec, Maya and Aztec had started using it as a material to make balls, waterproof textiles and containers.
The chemical manipulation of polymers dates back to the 19th century, although at the time the nature of these species was not understood. The behaviour of polymers was initially rationalised according to the theory proposed by Thomas Graham which considered them as colloidal aggregates of small molecules held together by unknown forces.
Notwithstanding the lack of theoretical knowledge, the potential of polymers to provide innovative, accessible and cheap materials was immediately grasped. The work carried out by Braconnot, Parkes, Ludersdorf, Hayward and many others on the modification of natural polymers determined many significant advances in the field. Their contributions led to the discovery of materials such as celluloid, galalith, parkesine, rayon, vulcanised rubber and, later, Bakelite: all materials that quickly entered industrial manufacturing processes and reached households as garments components (e.g., fabrics, buttons), crockery and decorative items.
In 1920, Hermann Staudinger published his seminal work "Über Polymerisation", in which he proposed that polymers were in fact long chains of atoms linked by covalent bonds. His work was debated at length, but eventually it was accepted by the scientific community. Because of this work, Staudinger was awarded the Nobel Prize in 1953.
After the 1930s polymers entered a golden age during which new types were discovered and quickly given commercial applications, replacing naturally-sourced materials. This development was fuelled by an industrial sector with a strong economic drive and it was supported by a broad academic community that contributed innovative syntheses of monomers from cheaper raw material, more efficient polymerisation processes, improved techniques for polymer characterisation and advanced, theoretical understanding of polymers.
Since 1953, six Nobel prizes have been awarded in the area of polymer science, excluding those for research on biological macromolecules. This further testifies to its impact on modern science and technology. As Lord Todd summarised in 1980, "I am inclined to think that the development of polymerization is perhaps the biggest thing that chemistry has done, where it has had the biggest effect on everyday life".
See also
Ideal chain
Catenation
Inorganic polymer
Important publications in polymer chemistry
Oligomer
Polymer adsorption
Polymer classes
Polymer engineering
Polymery (botany)
Reactive compatibilization
Sequence-controlled polymer
Shape-memory polymer
Sol–gel process
Supramolecular polymer
Thermoplastic
Thermosetting polymer
References
Bibliography
External links
Libretext in Polymer chemistry
How to Analyze Polymers Using X-ray Diffraction
The Macrogalleria
Introduction to Polymers
Glossary of Polymer Abbreviations
Polymer chemistry
Soft matter
Materials science | 0.763872 | 0.999142 | 0.763217 |
Systematic theology | Systematic theology, or systematics, is a discipline of Christian theology that formulates an orderly, rational, and coherent account of the doctrines of the Christian faith. It addresses issues such as what the Bible teaches about certain topics or what is true about God and his universe. It also builds on biblical disciplines, church history, as well as biblical and historical theology. Systematic theology shares its systematic tasks with other disciplines such as constructive theology, dogmatics, ethics, apologetics, and philosophy of religion.
Method
With a methodological tradition that differs somewhat from biblical theology, systematic theology draws on the core sacred texts of Christianity, while simultaneously investigating the development of Christian doctrine over the course of history, particularly through philosophy, ethics, social sciences, and natural sciences. Using biblical texts, it attempts to compare and relate all of scripture, which leads to a systematized statement on what the whole Bible says about particular issues.
Within Christianity, different traditions (both intellectual and ecclesial) approach systematic theology in different ways, impacting (a) the method employed to develop the system, (b) the understanding of theology's task, (c) the doctrines included in the system, and (d) the order in which those doctrines appear. Even with such diversity, it is generally the case that works that one can describe as systematic theologies begin with revelation and conclude with eschatology.
Since it is focused on truth, systematic theology is also framed to interact with and address the contemporary world. Many authors have explored this area, including Charles Gore, John Walvoord, Lindsay Dewar, and Charles Moule. This process concludes with applications to contemporary issues.
In a seminal article, "Principles of Systematic Theology," Anglican theologian John Webster describes systematic theology as proceeding along a series of principles, which he draws from various theologians including Thomas Aquinas:
The Trinity: The Ontological Principle (principium essendi)
Scripture: The External/Objective Cognitive Principle (principium cognoscendi externum)
The Redeemed Intelligence of the Saints: The Internal/Subjective Cognitive Principle (principium cognoscendi internum)
Categories
Since it takes a systematic approach, systematic theology organizes truth under different headings, and there are certain basic areas (or categories), although the exact list may vary slightly. These are:
Angelology – The study of angels
Bibliology – The study of the Bible
Hamartiology – The study of sin
Christology – The study of Christ
Ecclesiology – The study of the church
Eschatology – The study of the end times
Pneumatology – The study of the Holy Spirit
Soteriology – The study of salvation
Theological anthropology – The study of the nature of humanity
Theology proper – The study of the character of God
History
The establishment and integration of varied Christian ideas and Christianity-related notions, including diverse topics and themes of the Bible, in a single, coherent and well-ordered presentation is a relatively late development. The first known church father who referred to the notion of devising a comprehensive understanding of the principles of Christianity was Clement of Alexandria in the 3rd century, who stated thus: "Faith is then, so to speak, a comprehensive knowledge of the essentials." Clement himself, along with his follower Origen, attempted to create some systematic theology in their numerous surviving writings. In Eastern Orthodoxy, an early example is provided by John of Damascus's 8th-century Exposition of the Orthodox Faith, in which he attempts to set in order and demonstrate the coherence of the theology of the classic texts of the Eastern theological tradition.
In the West, Peter Lombard's 12th-century Sentences, wherein he thematically collected a great series of quotations of the Church Fathers, became the basis of a medieval scholastic tradition of thematic commentary and explanation. Thomas Aquinas's Summa Theologiae best exemplifies this scholastic tradition. The Lutheran scholastic tradition of a thematic, ordered exposition of Christian theology emerged in the 16th century with Philipp Melanchthon's Loci Communes, and was countered by a Calvinist scholasticism, which is exemplified by John Calvin's Institutes of the Christian Religion.
In the 19th century, primarily in Protestant groups, a new kind of systematic theology arose that attempted to demonstrate that Christian doctrine formed a more coherent system premised on one or more fundamental axioms. Such theologies often involved a more drastic pruning and reinterpretation of traditional belief in order to cohere with the axiom or axioms. Friedrich Daniel Ernst Schleiermacher, for example, produced Der christliche Glaube nach den Grundsätzen der evangelischen Kirche (The Christian Faith According to the Principles of the Protestant Church) in the 1820s, in which the fundamental idea is the universal presence among humanity, sometimes more hidden, sometimes more explicit, of a feeling or awareness of 'absolute dependence'.
See also
Biblical exegesis
Biblical theology
:Category:Systematic theologians
Christian apologetics
Christian theology
Constructive theology
Dispensationalist theology
Dogmatic Theology
Feminist theology
Hermeneutics
Historicism (Christianity)
Liberal Christianity
Liberation theology
Philosophical theology
Philosophy of religion
Political theology
Postliberal theology
Process theology
Theology of Anabaptism
References
Resources
Barth, Karl (1956–1975). Church Dogmatics. (thirteen volumes) Edinburgh: T&T Clark.
Berkhof, Hendrikus (1979). Christian Faith: An Introduction to the Study of the Faith. Grand Rapids: Eerdmans.
Berkhof, Louis (1996). Systematic Theology. Grand Rapids: Wm. B. Eerdmans Publishing Co.
Bloesch, Donald G. (2002–2004). Christian Foundations (seven volumes). InterVarsity Press.
Calvin, John (1559). Institutes of the Christian Religion.
Chafer, Lewis Sperry (1948). Systematic Theology. Grand Rapids: Kregel
Chemnitz, Martin (1591). Loci Theologici. St. Louis: Concordia Publishing House, 1989.
Erickson, Millard (1998). Christian Theology (2nd ed.). Grand Rapids: Baker, 1998.
Frame, John. Theology of Lordship
Fruchtenbaum, Arnold (1989). Israelology: The Missing Link in Systematic Theology. Tustin, CA: Ariel Ministries
Fruchtenbaum, Arnold (1998). Messianic Christology. Tustin, CA: Ariel Ministries
Geisler, Norman L. (2002–2004). Systematic Theology (four volumes). Minneapolis: Bethany House.
Grenz, Stanley J. (1994). Theology for the Community of God. Grand Rapids: Eerdmans.
Grider, J. Kenneth (1994). A Wesleyan-Holiness Theology
Grudem, Wayne (1995). Systematic Theology. Zondervan.
Hodge, Charles (1960). Systematic Theology. Grand Rapids: Wm. B. Eerdmans Publishing Co.
Jenson, Robert W. (1997–1999). Systematic Theology. Oxford: Oxford University Press.
Melanchthon, Philipp (1543). Loci Communes. St. Louis: Concordia Publishing House, 1992.
Miley, John. Systematic Theology. 1892.
Newlands, George (1994). God in Christian Perspective. Edinburgh: T&T Clark.
Oden, Thomas C. (1987–1992). Systematic Theology (3 volumes). Peabody, MA: Prince Press.
Pannenberg, Wolfhart (1988–1993). Systematic Theology. Grand Rapids: Wm. B. Eerdmans Publishing Co.
Pieper, Francis (1917–1924). Christian Dogmatics. St. Louis: Concordia Publishing House.
Reymond, Robert L. (1998). A New Systematic Theology of the Christian Faith (2nd ed.). Word Publishing.
Schleiermacher, Friedrich (1928). The Christian Faith. Edinburgh: T&T Clark.
St. Augustine of Hippo (354–430). De Civitate Dei
Thielicke, Helmut (1974–1982). The Evangelical Faith. Edinburgh: T&T Clark.
Thiessen, Henry C. (1949). Systematic Theology. Grand Rapids: Wm. B. Eerdmans Publishing Co.
Tillich, Paul. Systematic Theology. (3 volumes).
Turretin, Francis (3 parts, 1679–1685). Institutes of Elenctic Theology.
Van Til, Cornelius (1974). An Introduction to Systematic Theology. P & R Press.
Watson, Richard. Theological Institutes. 1823.
Weber, Otto. (1981–1983) Foundations of Dogmatics. Grand Rapids: Eerdmans.
Christian theology
Christian terminology | 0.767169 | 0.994845 | 0.763214 |
AIDA (marketing) | The AIDA marketing model is a model within the class known as hierarchy of effects models or hierarchical models, all of which imply that consumers move through a series of steps or stages when they make purchase decisions. These models are linear, sequential models built on an assumption that consumers move through a series of cognitive (thinking) and affective (feeling) stages culminating in a behavioural (doing e.g. purchase or trial) stage.
Steps proposed by the AIDA model
The steps proposed by the AIDA model are as follows:
Attention – The consumer becomes aware of a category, product or brand (usually through advertising)
↓
Interest – The consumer becomes interested by learning about brand benefits & how the brand fits with lifestyle
↓
Desire – The consumer develops a favorable disposition towards the brand
↓
Action – The consumer forms a purchase intention, shops around, engages in trial or makes a purchase
Some of the contemporary variants of the model replace attention with awareness. The common thread among all hierarchical models is that advertising operates as a stimulus (S) and the purchase decision is a response (R). In other words, the AIDA model is an applied stimulus-response model. A number of hierarchical models can be found in the literature including Lavidge's hierarchy of effects, DAGMAR and variants of AIDA. Hierarchical models have dominated advertising theory, and, of these models, the AIDA model is one of the most widely applied.
As consumers move through the hierarchy of effects they pass through both a cognitive processing stage and an affective processing stage before any action occurs. Thus the hierarchy of effects models all include Cognition (C)- Affect (A)- Behaviour (B) as the core steps in the underlying behavioral sequence. Some texts refer to this sequence as Learning → Feeling → Doing or C-A-B (cognitive -affective-behavioral) models.
Cognition (Awareness/learning) → Affect (Feeling/ interest/ desire) → Behavior (Action e.g. purchase/ trial/ consumption/ usage/ sharing information)
The basic AIDA model is one of the longest serving hierarchical models, having been in use for more than a century. Using a hierarchical system, such as AIDA, provides the marketer with a detailed understanding of how target audiences change over time, and provides insights as to which types of advertising messages are likely to be more effective at different junctures. Moving from step to step, the total number of prospects diminishes. This phenomenon is sometimes described as a "purchase funnel". A relatively large number of potential purchasers become aware of a product or brand, and then a smaller subset becomes interested, with only a relatively small proportion moving through to the actual purchase. This effect is also known as a "customer funnel", "marketing funnel", or "sales funnel".
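As a simple numerical illustration of this funnel effect (the audience size and stage-to-stage conversion rates below are invented for the example, not empirical figures), the number of prospects remaining at each stage can be computed in a few lines of Python:

def purchase_funnel(prospects, conversion_rates):
    # Return how many prospects remain after each stage, given the
    # fraction of people who move on from the previous stage.
    remaining = prospects
    stages = {}
    for stage, rate in conversion_rates.items():
        remaining = int(remaining * rate)
        stages[stage] = remaining
    return stages

# Illustrative rates only: 40% of aware prospects become interested,
# half of those develop desire, and a quarter of those act.
rates = {"Attention": 1.0, "Interest": 0.40, "Desire": 0.50, "Action": 0.25}
print(purchase_funnel(10_000, rates))
# {'Attention': 10000, 'Interest': 4000, 'Desire': 2000, 'Action': 500}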
The model is also used extensively in selling and advertising. According to the original model, "the steps to be taken by the seller at each stage are as follows:
Stage I. Secure attention.
Stage II. Hold attention Through Interest.
Stage III. Arouse Desire.
Stage IV. Create Confidence and Belief.
Stage V. Secure Decision and Action.
Stage VI. Create Satisfaction."
Criticisms
A major deficiency of the AIDA model and other hierarchical models is the absence of post-purchase effects such as satisfaction, consumption, repeat patronage behaviour and other post-purchase behavioural intentions such as referrals or participating in the preparation of online product reviews. Other criticisms include the model's reliance on a linear nature and a hierarchical sequence. In empirical studies, the model has been found to be a poor predictor of actual consumer behaviour. In addition, an extensive review of the literature surrounding advertising effects, carried out by Vakratsas and Ambler, found little empirical support for the hierarchical models.
Another important criticism of the hierarchical models is their reliance on the concept of a linear, hierarchical response process. Indeed, some research suggests that consumers process promotional information via dual pathways, namely both cognitive (thinking) and affective (feeling) simultaneously. This insight has led to the development of a class of alternative models, known as integrative models.
Variants
In order to redress some of the model's deficiencies, a number of contemporary hierarchical models have modified or expanded the basic AIDA model. Some of these include post purchase stages, while other variants feature adaptations designed to accommodate the role of new, digital and interactive media, including social media and brand communities. However, all follow the basic sequence which includes Cognition- Affect- Behaviour.
Selected variants of AIDA:
Basic AIDA Model: Awareness → Interest → Desire → Action
Lavidge et al.'s Hierarchy of Effects: Awareness → Knowledge → Liking → Preference → Conviction → Purchase
McGuire's model: Presentation → Attention → Comprehension → Yielding → Retention → Behavior.
Modified AIDA Model: Awareness → Interest → Conviction → Desire → Action (purchase or consumption)
AIDAS Model: Attention → Interest → Desire → Action → Satisfaction
AISDALSLove model: Awareness → Interest → Search → Desire → Action → Like/dislike → Share → Love/Hate
Origins
The term, AIDA and the overall approach are commonly attributed to American advertising and sales pioneer, E. St. Elmo Lewis. In one of his publications on advertising, Lewis postulated at least three principles to which an advertisement should conform:
According to F. G. Coolsen, "Lewis developed his discussion of copy principles on the formula that good copy should attract attention, awaken interest, and create conviction." In fact, the formula with three steps appeared anonymously in the February 9, 1898, issue of Printers' Ink: "The mission of an advertisement is to sell goods. To do this, it must attract attention, of course; but attracting attention is only an auxiliary detail. The announcement should contain matter which will interest and convince after the attention has been attracted" (p. 50).
On January 6, 1910 Lewis gave a talk in Rochester on the topic "Is there a science back of advertising?" in which he said:
The importance of attracting the attention of the reader as the first step in copywriting was recognized early in the advertising literature as is shown by the Handbook for Advertisers and Guide to Advertising:
A precursor to Lewis was Joseph Addison Richards (1859–1928), an advertising agent from New York City who succeeded his father in the direction of one of the oldest advertising agencies in the United States. In 1893, Richards wrote an advertisement for his business containing virtually all steps from the AIDA model, but without hierarchically ordering the individual elements:
Between December 1899 and February 1900, the Bissell Carpet Sweeper Company organized a contest for the best written advertisement. Fred Macey, chairman of the Fred Macey Co. in Grand Rapids (Michigan), who was considered an advertising expert at that time, was assigned the task to examine the submissions to the company. In arriving at a decision, he considered inter alia each advertisement in the following respect:
The first published instance of the general concept, however, was in an article by Frank Hutchinson Dukesmith (1866–1935) in 1904. Dukesmith's four steps were attention, interest, desire, and conviction. The first instance of the AIDA acronym was in an article by C. P. Russell in 1921 where he wrote:
The model's usefulness was not confined solely to advertising. The basic principles of the AIDA model were widely adopted by sales representatives who used the steps to prepare effective sales presentations following the publication, in 1911, of Arthur Sheldon's book, Successful Selling. To the original model, Sheldon added satisfaction to stress the importance of repeat patronage.
AIDA is a linchpin of the Promotional part of the 4Ps of the Marketing mix, the mix itself being a key component of the model connecting customer needs through the organisation to the marketing decisions.
Theoretical developments in hierarchy of effects models
The marketing and advertising literature has spawned a number of hierarchical models. In a survey of more than 250 papers, Vakratsas and Ambler (1999) found little empirical support for any of the hierarchies of effects. In spite of that criticism, some authors have argued that hierarchical models continue to dominate theory, especially in the area of marketing communications and advertising.
All hierarchy of effects models exhibit several common characteristics. Firstly, they are all linear, sequential models built on an assumption that consumers move through a series of steps or stages involving cognitive, affective and behavioral responses that culminate in a purchase. Secondly, all hierarchy of effects models can be reduced to three broad stages – Cognitive→ Affective (emotions)→Behavioral (CAB).
Three broad stages implicit in all hierarchy of effects models:
Cognition (Awareness or learning)
↓
Affect (Feeling, interest or desire)
↓
Behavior (Action)
Recent modifications of the AIDA model have expanded the number of steps. Some of these modifications have been designed to accommodate theoretical developments, by including customer satisfaction (e.g. the AIDAS model) while other alternative models seek to accommodate changes in the external environment such as the rise of social media (e.g. the AISDALSLove model).
In the AISDALSLove model, the new phases are 'Search' (after Interest), when consumers actively search for information about the brand or product; 'Like/dislike' (after Action), as an element of the post-purchase phase; 'Share', when consumers share their experiences of the brand with other consumers; and 'Love/hate', a deep feeling towards the branded product that can become a long-term effect of advertising. Finally, the 'S' added in the AIDAS model stands for Satisfaction, suggesting the likelihood that a customer might become a repeat customer, provide positive referrals or engage in other brand advocacy behaviours following purchase.
Other theorists, including Christian Betancur (2014) and Rossiter and Percy (1985), have proposed that need recognition should be included as the initial stage of any hierarchical model. Betancur, for example, has proposed a more complete process: the NAITDASE model (in Spanish: NAICDASE). Betancur's model begins with the identification of a Need (the consumer's perception of an opportunity or a problem). Following the Attention and Interest stages, consumers form feelings of Trust (i.e., Confidence). Without trust, customers are unlikely to move forward towards the Desire and Action stages of the process. Purchase is not the end stage in this model, as it is not the goal of the client; therefore, the final two stages are the Satisfaction of previously identified and agreed needs and the Evaluation by the customer of the whole process. If the evaluation is positive, the customer will repurchase and recommend the product to others (customer loyalty).
In Betancur's model, trust is a key element in the purchase process, and must be achieved through important elements including:
Business and personal image (including superior brand support).
Empathy with this customer.
Professionalism (knowledge of the product and master of the whole process from the point of view of the customer).
Ethics without exceptions.
Competitive Superiority (to meet the needs and requirements of this customer).
Commitment during the process and toward customer satisfaction.
Trust (or Confidence) is the glue that bonds society and makes relations solid and reliable.
Cultural references
In the film Glengarry Glen Ross, written by David Mamet, the character Blake (played by Alec Baldwin) delivers a speech in which the AIDA model is visible on a chalkboard in the scene. A minor difference between the fictional account of the model and the model as it is commonly used is that the "A" in Blake's motivational talk is defined as attention rather than awareness and the "D" as decision rather than desire.
See also
Advertising models
Notes
References
Geml, Richard and Lauer, Hermann: Marketing- und Verkaufslexikon, 4. Auflage, Stuttgart 2008.
External links
Marketing techniques
Selling techniques
Promotion and marketing communications | 0.768544 | 0.993051 | 0.763203 |
ATPase | ATPases (, Adenosine 5'-TriPhosphatase, adenylpyrophosphatase, ATP monophosphatase, triphosphatase, SV40 T-antigen, ATP hydrolase, complex V (mitochondrial electron transport), (Ca2+ + Mg2+)-ATPase, HCO3−-ATPase, adenosine triphosphatase) are a class of enzymes that catalyze the decomposition of ATP into ADP and a free phosphate ion or the inverse reaction. This dephosphorylation reaction releases energy, which the enzyme (in most cases) harnesses to drive other chemical reactions that would not otherwise occur. This process is widely used in all known forms of life.
Some such enzymes are integral membrane proteins (anchored within biological membranes), and move solutes across the membrane, typically against their concentration gradient. These are called transmembrane ATPases.
Functions
Transmembrane ATPases import metabolites necessary for cell metabolism and export toxins, wastes, and solutes that can hinder cellular processes. An important example is the sodium-potassium pump (Na+/K+ATPase) that maintains the cell membrane potential. Another example is the hydrogen potassium ATPase (H+/K+ATPase or gastric proton pump) that acidifies the contents of the stomach. ATPase is genetically conserved in animals; therefore cardenolides, toxic steroids produced by plants that act on ATPases, are general and effective animal toxins that act dose-dependently.
Besides exchangers, other categories of transmembrane ATPase include co-transporters and pumps (however, some exchangers are also pumps). Some of these, like the Na+/K+ATPase, cause a net flow of charge, but others do not. These are called electrogenic transporters and electroneutral transporters, respectively.
Structure
The Walker motifs are a telltale protein sequence motif for nucleotide binding and hydrolysis. They are found in almost all natural ATPases, with the notable exception of tyrosine kinases. The Walker motifs commonly form a beta sheet-turn-alpha helix that is self-organized as a nest (protein structural motif). This is thought to be because modern ATPases evolved from small NTP-binding peptides that had to be self-organized.
Protein design has been able to replicate the ATPase function (weakly) without using natural ATPase sequences or structures. Importantly, while all natural ATPases have some beta-sheet structure, the designed "Alternative ATPase" lacks beta sheet structure, demonstrating that this life-essential function is possible with sequences and structures not found in nature.
Mechanism
ATP synthase (also called F0F1-ATP synthase) is a charge-transferring complex that catalyzes the synthesis of ATP by coupling it to the movement of ions through the membrane.
The coupling of ATP hydrolysis and transport is a chemical reaction in which a fixed number of solute molecules are transported for each ATP molecule hydrolyzed; for the Na+/K+ exchanger, this is three Na+ ions out of the cell and two K+ ions inside per ATP molecule hydrolyzed.
Transmembrane ATPases make use of ATP's chemical potential energy by performing mechanical work: they transport solutes in the opposite direction of their thermodynamically preferred direction of movement—that is, from the side of the membrane with low concentration to the side with high concentration. This process is referred to as active transport.
For instance, inhibiting vesicular H+-ATPases would result in a rise in the pH within vesicles and a drop in the pH of the cytoplasm.
All of the rotary ATPases share a common basic structure. Each rotary ATPase is composed of two major components: F0/A0/V0 and F1/A1/V1. They are connected by one to three stalks that maintain stability, control rotation, and prevent rotation in the wrong direction. One stalk is used to transmit torque. The number of peripheral stalks depends on the type of ATPase: F-ATPases have one, A-ATPases have two, and V-ATPases have three. The F1 catalytic domain is located on the N-side of the membrane, is involved in the synthesis and hydrolysis of ATP, and takes part in oxidative phosphorylation. The F0 transmembrane domain is involved in the movement of ions across the membrane.
The bacterial F0F1-ATPase consists of the soluble F1 domain and the transmembrane F0 domain, which is composed of several subunits with varying stoichiometry. Two subunits, γ and ε, form the central stalk and are linked to F0. F0 contains a c-subunit oligomer in the shape of a ring (the c-ring). The a subunit sits close to the b2 subunit and makes up the stalk that connects the transmembrane subunits to the α3β3 and δ subunits. F-ATP synthases are identical in appearance and function, except for the mitochondrial F0F1-ATP synthase, which contains 7–9 additional subunits.
The electrochemical potential is what causes the c-ring to rotate in a clockwise direction for ATP synthesis. This causes the central stalk and the catalytic domain to change shape. As H+ moves from the P-side of the membrane to the N-side, rotation of the c-ring drives the production of three ATP molecules per full turn. The counterclockwise rotation of the c-ring is driven by ATP hydrolysis, and ions move from the N-side to the P-side, which helps to build up the electrochemical potential.
Transmembrane ATP synthases
The ATP synthase of mitochondria and chloroplasts is an anabolic enzyme that harnesses the energy of a transmembrane proton gradient as an energy source for adding an inorganic phosphate group to a molecule of adenosine diphosphate (ADP) to form a molecule of adenosine triphosphate (ATP).
This enzyme works when a proton moves down the concentration gradient, giving the enzyme a spinning motion. This unique spinning motion bonds ADP and inorganic phosphate (Pi) together to create ATP.
ATP synthase can also function in reverse, that is, use energy released by ATP hydrolysis to pump protons against their electrochemical gradient.
Classification
There are different types of ATPases, which can differ in function (ATP synthesis and/or hydrolysis), structure (F-, V- and A-ATPases contain rotary motors) and in the type of ions they transport.
Rotary ATPases
F-ATPases (F1FO-ATPases) in mitochondria, chloroplasts and bacterial plasma membranes are the prime producers of ATP, using the proton gradient generated by oxidative phosphorylation (mitochondria) or photosynthesis (chloroplasts).
F-ATPases lacking a delta/OSCP subunit move sodium ions instead. They are proposed to be called N-ATPases, since they seem to form a distinct group that is further apart from usual F-ATPases than A-ATPases are from V-ATPases.
V-ATPases (V1VO-ATPases) are primarily found in eukaryotic vacuoles, catalysing ATP hydrolysis to transport solutes and lower pH in organelles like proton pump of lysosome.
A-ATPases (A1AO-ATPases) are found in Archaea and some extremophilic bacteria. They are arranged like V-ATPases, but function like F-ATPases mainly as ATP synthases.
Many homologs that are not necessarily rotary exist.
P-ATPases (E1E2-ATPases) are found in bacteria, fungi and in eukaryotic plasma membranes and organelles, and function to transport a variety of different ions across membranes.
E-ATPases are cell-surface enzymes that hydrolyze a range of NTPs, including extracellular ATP. Examples include ecto-ATPases, CD39s, and ecto-ATP/Dases, all of which are members of a "GDA1 CD39" superfamily.
AAA proteins are a family of ring-shaped P-loop NTPases.
P-ATPase
P-ATPases (sometimes known as E1-E2 ATPases) are found in bacteria and also in eukaryotic plasma membranes and organelles. The name derives from the transient attachment of inorganic phosphate to an aspartate residue at the time of activation. The function of P-ATPases is to transport a variety of different compounds, like ions and phospholipids, across a membrane using ATP hydrolysis for energy. There are many different classes of P-ATPases, each of which transports a specific type of ion. P-ATPases may be composed of one or two polypeptides, and can usually take two main conformations, E1 and E2.
Human genes
Na+/K+ transporting: ATP1A1, ATP1A2, ATP1A3, ATP1A4, ATP1B1, ATP1B2, ATP1B3, ATP1B4
Ca++ transporting: ATP2A1, ATP2A2, ATP2A3, ATP2B1, ATP2B2, ATP2B3, ATP2B4, ATP2C1, ATP2C2
Mg++ transporting: ATP3
H+/K+ exchanging: ATP4A
H+ transporting, mitochondrial: ATP5A1, ATP5B, ATP5C1, ATP5C2, ATP5D, ATP5E, ATP5F1, ATP5G1, ATP5G2, ATP5G3, ATP5H, ATP5I, ATP5J, ATP5J2, ATP5L, ATP5L2, ATP5O, ATP5S, MT-ATP6, MT-ATP8
H+ transporting, lysosomal: ATP6AP1, ATP6AP2, ATP6V1A, ATP6V1B1, ATP6V1B2, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, ATP6V1E2, ATP6V1F, ATP6V1G1, ATP6V1G2, ATP6V1G3, ATP6V1H, ATP6V0A1, ATP6V0A2, ATP6V0A4, ATP6V0B, ATP6V0C, ATP6V0D1, ATP6V0D2, ATP6V0E
Cu++ transporting: ATP7A, ATP7B
Class I, type 8: ATP8A1, ATP8B1, ATP8B2, ATP8B3, ATP8B4
Class II, type 9: ATP9A, ATP9B
Class V, type 10: ATP10A, ATP10B, ATP10D
Class VI, type 11: ATP11A, ATP11B, ATP11C
H+/K+ transporting, nongastric: ATP12A
type 13: ATP13A1, ATP13A2, ATP13A3, ATP13A4, ATP13A5
See also
ATP synthase
ATP synthase alpha/beta subunits
AAA proteins
P-ATPase
References
External links
"ATP synthase - a splendid molecular machine"
Electron microscopy structures of ATPases from the EM Data Bank(EMDB)
EC 3.6.1
EC 3.6.3
Integral membrane proteins
Copper enzymes | 0.773128 | 0.987154 | 0.763196 |
Matrix (biology) | In biology, matrix (: matrices) is the material (or tissue) in between a eukaryotic organism's cells.
The structure of connective tissue is an extracellular matrix, which is found in various connective tissues and serves as a jelly-like substance in place of cytoplasm. Fingernails and toenails grow from matrices.
Tissue matrices
Extracellular matrix (ECM)
The main ingredients of the extracellular matrix are glycoproteins secreted by the cells. The most abundant glycoprotein in the ECM of most animal cells is collagen, which forms strong fibers outside the cells. In fact, collagen accounts for about 40% of the total protein in the human body. The collagen fibers are embedded in a network woven from proteoglycans. A proteoglycan molecule consists of a small core protein with many carbohydrate chains covalently attached, so that it may be up to 95% carbohydrate. Large proteoglycan complexes can form when hundreds of proteoglycans become noncovalently attached to a single long polysaccharide molecule.

Some cells are attached to the ECM by still other ECM glycoproteins such as fibronectin. Fibronectin and other ECM proteins bind to cell surface receptor proteins called integrins that are built into the plasma membrane. Integrins span the membrane and bind on the cytoplasmic side to associated proteins attached to microfilaments of the cytoskeleton. The name integrin is based on the word integrate: integrins are in a position to transmit signals between the ECM and the cytoskeleton and thus to integrate changes occurring outside and inside the cell.

Current research on fibronectin, other ECM molecules, and integrins is revealing the influential role of the ECM in the lives of cells. By communicating with a cell through integrins, the ECM can regulate a cell's behavior. For example, some cells in a developing embryo migrate along specific pathways by matching the orientation of their microfilaments to the "grain" of fibers in the ECM. Researchers are also learning that the ECM around a cell can influence the activity of genes in the nucleus. Information about the ECM probably reaches the nucleus by a combination of mechanical and chemical signaling pathways. Mechanical signaling involves fibronectin, integrins, and microfilaments of the cytoskeleton. Changes in the cytoskeleton may in turn trigger chemical signaling pathways inside the cell, leading to changes in the set of proteins being made by the cell and therefore changes in the cell's function. In this way, the ECM of a particular tissue may help coordinate the behavior of all the cells within that tissue. Direct connections between cells also function in this coordination.
Bone matrix
Bone is a form of connective tissue found in the body, composed largely of hardened hydroxyapatite-containing collagen. In larger mammals, it is arranged in osteon regions. Bone matrix allows mineral salts such as calcium to be stored and provides protection for internal organs and support for locomotion.
Cartilage matrix
Cartilage is another form of connective tissue found in the body, providing a smooth surface for joints and a mechanism for growth of bones during development.
Subcellular matrices
Mitochondrial matrix
In the mitochondrion, the matrix contains soluble enzymes that catalyze the oxidation of pyruvate and other small organic molecules.
Nuclear matrix
In the cell nucleus, the matrix is the insoluble fraction that remains after extracting the soluble DNA.
Golgi matrix
The Golgi matrix is a protein scaffold around the Golgi apparatus, made up of golgins, GRASPs and various other proteins on the cytoplasmic side of the Golgi apparatus, involved in keeping its shape and in membrane stacking.
Matrix (medium)
A matrix is also a medium in which bacteria are grown (cultured). For instance, a Petri dish of agar may be the matrix for culturing a sample swabbed from a patient's throat.
See also
Matricity
Tissues and cells
Germinal matrix
Hair matrix cell
Molecular biology
Matrix attachment region
Matrix metalloproteinase
Matrix protein
Bioinformatics and sequence evolution
PAM matrix
Position-specific scoring matrix
Similarity matrix
Substitution matrix
Botany and agriculture
Matrix Planting
Population biology and ecology
Matrix population models
References
Cell anatomy
Connective tissue
Organelles
Matrices (biology) | 0.774059 | 0.985955 | 0.763187 |
Cognitive skill | Cognitive skills are skills of the mind, as opposed to other types of skills such as motor skills or social skills. Some examples of cognitive skills are literacy, self-reflection, logical reasoning, abstract thinking, critical thinking, introspection and mental arithmetic. Cognitive skills vary in processing complexity, and can range from more fundamental processes such as perception and various memory functions, to more sophisticated processes such as decision making, problem solving and metacognition.
Specialisation of functions
Cognitive science has provided theories of how the brain works, and these have been of great interest to researchers who work in the empirical fields of brain science. A fundamental question is whether cognitive functions, for example visual processing and language, are autonomous modules, or to what extent the functions depend on each other. Research evidence points towards a middle position, and it is now generally accepted that there is a degree of modularity in aspects of brain organisation. In other words, cognitive skills or functions are specialised, but they also overlap or interact with each other. Deductive reasoning, for example, has been shown to be related to either visual or linguistic processing depending on the task, although some aspects of it differ from both. All in all, research evidence does not provide strong support for classical models of cognitive psychology.
Cognitive functioning
Cognitive functioning refers to a person's ability to process thoughts. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem-solving. Examples include the verbal, spatial, psychomotor, and processing-speed ability." Cognition mainly refers to things like memory, speech, and the ability to learn new information. The brain is usually capable of learning new skills in the aforementioned areas, typically in early childhood, and of developing personal thoughts and beliefs about the world. Old age and disease may affect cognitive functioning, causing memory loss and trouble thinking of the right words while speaking or writing ("drawing a blank"). Multiple sclerosis (MS), for example, can eventually cause memory loss, an inability to grasp new concepts or information, and depleted verbal fluency.
Humans generally have a high capacity for cognitive functioning from birth, so almost every person is capable of learning or remembering. Intelligence is assessed with IQ tests and similar instruments, although these have issues with accuracy and completeness. In such tests, patients may be asked a series of questions, or to perform tasks, with each measuring a cognitive skill, such as level of consciousness, memory, awareness, problem-solving, motor skills, analytical abilities, or other similar concepts. Early childhood is when the brain is most malleable, orienting to tasks that are relevant in the person's environment.
See also
Adaptive behavior
Adaptive functioning
Intelligence Quotient (IQ)
Cognition
Cognitive Abilities Test
Jungian cognitive functions
Notes
References
NCME - Glossary of Important Assessment and Measurement Terms [cognitive ability]
Cognition
Skills | 0.766491 | 0.995677 | 0.763178 |
Homogenization (chemistry) | Homogenization or homogenisation is any of several processes used to make a mixture of two mutually non-soluble liquids the same throughout. This is achieved by turning one of the liquids into a state consisting of extremely small particles distributed uniformly throughout the other liquid. A typical example is the homogenization of milk, wherein the milk fat globules are reduced in size and dispersed uniformly through the rest of the milk.
Definition
Homogenization (from "homogeneous"; Greek homogenes: homos, same + genos, kind) is the process of converting two immiscible liquids (i.e., liquids that are not mutually soluble in all proportions) into an emulsion (a mixture of two or more liquids that are generally immiscible). Sometimes two types of homogenization are distinguished: primary homogenization, when the emulsion is created directly from the separate liquids, and secondary homogenization, when the emulsion is created by the reduction in size of droplets in an existing emulsion.
Homogenization is achieved by a mechanical device called a homogenizer.
Application
One of the oldest applications of homogenization is in milk processing. It is normally preceded by "standardization" (the mixing of milk from several different herds or dairies to produce a more consistent raw milk prior to processing). The fat in milk normally separates from the water and collects at the top. Homogenization breaks the fat into smaller sizes so it no longer separates, allowing the sale of non-separating milk at any fat specification.
Methods
Milk homogenization is accomplished by mixing large amounts of harvested milk, then forcing the milk at high pressure through small holes. Milk homogenization is an essential tool of the milk food industry to prevent creating various levels of flavor and fat concentration.
Another application of homogenization is in soft drinks like cola products. The ingredient mixture is subjected to intense homogenization, at pressures of up to 35,000 psi, so that the various constituents do not separate out during storage or distribution.
See also
Ultrasonic homogenizer
French pressure cell press
Homogenizer
Cell disruption
References
Unit operations
Food processing
Laboratory techniques | 0.769487 | 0.991787 | 0.763167 |
Tacticity | Tacticity (from , "relating to arrangement or order") is the relative stereochemistry of adjacent chiral centers within a macromolecule. The practical significance of tacticity rests on the effects on the physical properties of the polymer. The regularity of the macromolecular structure influences the degree to which it has rigid, crystalline long range order or flexible, amorphous long range disorder. Precise knowledge of tacticity of a polymer also helps understanding at what temperature a polymer melts, how soluble it is in a solvent and its mechanical properties.
A tactic macromolecule in the IUPAC definition is a macromolecule in which essentially all the configurational (repeating) units are identical. Tacticity is particularly significant in vinyl polymers of the type - where each repeating unit with a substituent R on one side of the polymer backbone is followed by the next repeating unit with the substituent on the same side as the previous one, the other side as the previous one or positioned randomly with respect to the previous one. In a hydrocarbon macromolecule with all carbon atoms making up the backbone in a tetrahedral molecular geometry, the zigzag backbone is in the paper plane with the substituents either sticking out of the paper or retreating into the paper. This projection is called the Natta projection after Giulio Natta. Monotactic macromolecules have one stereoisomeric atom per repeat unit, ditactic to n-tactic macromolecules have more than one stereoisomeric atom per unit.
Describing tacticity
Diads
Two adjacent structural units in a polymer molecule constitute a diad. Diads overlap: each structural unit is considered part of two diads, one diad with each neighbor. If a diad consists of two identically oriented units, the diad is called an m diad (formerly meso diad, as in a meso compound, now proscribed). If a diad consists of units oriented in opposition, the diad is called an r diad (formerly racemo diad, as in a racemic compound, now proscribed). In the case of vinyl polymer molecules, an m diad is one in which the substituents are oriented on the same side of the polymer backbone: in the Natta projection, they both point into the plane or both point out of the plane.
Triads
The stereochemistry of macromolecules can be defined even more precisely with the introduction of triads. An isotactic triad (mm) is made up of two adjacent m diads, a syndiotactic triad (also spelled syndyotactic) (rr) consists of two adjacent r diads, and a heterotactic triad (rm) is composed of an m diad adjacent to an r diad. The mass fraction of isotactic (mm) triads is a common quantitative measure of tacticity.
When the stereochemistry of a macromolecule is considered to be a Bernoulli process, the triad composition can be calculated from the probability Pm of a diad being . For example, when this probability is 0.25 then the probability of finding:
an isotactic triad is Pm2, or 0.0625
a heterotactic triad is 2Pm(1–Pm), or 0.375
a syndiotactic triad is (1–Pm)2, or 0.5625
with a total probability of 1. Similar relationships with diads exist for tetrads.
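As a worked illustration of this Bernoullian calculation (a minimal sketch written for this article, not code from any polymer-analysis package), the triad fractions can be computed for any value of Pm; the following Python reproduces the figures quoted above for Pm = 0.25:

def bernoullian_triads(p_m):
    # Triad fractions for a Bernoullian (zero-order Markov) chain,
    # where p_m is the probability that any given diad is an m diad.
    mm = p_m ** 2                # isotactic: two adjacent m diads
    mr = 2 * p_m * (1 - p_m)     # heterotactic: m then r, or r then m
    rr = (1 - p_m) ** 2          # syndiotactic: two adjacent r diads
    return mm, mr, rr

mm, mr, rr = bernoullian_triads(0.25)
print(mm, mr, rr)       # 0.0625 0.375 0.5625
print(mm + mr + rr)     # 1.0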
Tetrads, pentads, etc.
The definition of tetrads and pentads introduce further sophistication and precision to defining tacticity, especially when information on long-range ordering is desirable. Tacticity measurements obtained by carbon-13 NMR are typically expressed in terms of the relative abundance of various pentads within the polymer molecule, e.g. mmmm, mrrm.
Other conventions for quantifying tacticity
The primary convention for expressing tacticity is in terms of the relative weight fraction of triad or higher-order components, as described above. An alternative expression for tacticity is the average length of m and r sequences within the polymer molecule; the average m-sequence length may be approximated from the relative abundance of pentads.
Polymers
Isotactic polymers
Isotactic polymers are composed of isotactic macromolecules (IUPAC definition). In isotactic macromolecules all the substituents are located on the same side of the macromolecular backbone. An isotactic macromolecule consists of 100% m diads, though IUPAC also allows the term for macromolecules with at least 95% m diads if that looser usage is explained. Polypropylene formed by Ziegler–Natta catalysis is an example of an isotactic polymer. Isotactic polymers are usually semicrystalline and often form a helix configuration.
Syndiotactic polymers
In syndiotactic or syntactic macromolecules the substituents have alternate positions along the chain. The macromolecule comprises 100% r diads, though IUPAC also allows the term for macromolecules with at least 95% r diads if that looser usage is explained. Syndiotactic polystyrene, made by metallocene catalysis polymerization, is crystalline with a melting point of 161 °C. Gutta percha is also an example of a syndiotactic polymer.
Atactic polymers
In atactic macromolecules the substituents are placed randomly along the chain. The percentage of m diads is understood to be between 45 and 55% unless otherwise specified, but it could be any value other than 0 or 100% if that usage is clarified. With the aid of spectroscopic techniques such as NMR, it is possible to pinpoint the composition of a polymer in terms of the percentages for each triad.
Polymers that are formed by free-radical mechanisms such as polyvinyl chloride are usually atactic. Due to their random nature atactic polymers are usually amorphous. In hemi isotactic macromolecules every other repeat unit has a random substituent.
Atactic polymers are technologically very important. A good example is polystyrene (PS). If a special catalyst is used in its synthesis it is possible to obtain the syndiotactic version of this polymer, but most industrial polystyrene produced is atactic. The two materials have very different properties because the irregular structure of the atactic version makes it impossible for the polymer chains to stack in a regular fashion. The result is that, whereas syndiotactic PS is a semicrystalline material, the more common atactic version cannot crystallize and forms a glass instead. This example is quite general in that many polymers of economic importance are atactic glass formers.
Eutactic polymers
In eutactic macromolecules, substituents may occupy any specific (but potentially complex) sequence of positions along the chain. Isotactic and syndiotactic polymers are instances of the more general class of eutactic polymers, which also includes heterogeneous macromolecules in which the sequence consists of substituents of different kinds (for example, the side-chains in proteins and the bases in nucleic acids).
Head/tail configuration
In vinyl polymers the complete configuration can be further described by defining polymer head/tail configuration. In a regular macromolecule all monomer units are normally linked in a head to tail configuration so that all β-substituents are separated by three carbon atoms. In head to head configuration this separation is only by two carbon atoms and the separation with tail to tail configuration is by four atoms. Head/tail configurations are not part of polymer tacticity but should be taken into account when considering polymer defects.
Techniques for measuring tacticity
Tacticity may be measured directly using proton or carbon-13 NMR. This technique enables quantification of the tacticity distribution by comparison of peak areas or integral ranges corresponding to known diads (r, m), triads (mm, rm+mr, rr) and/or higher order n-ads, depending on spectral resolution. In cases of limited resolution, stochastic methods such as Bernoullian or Markovian analysis may also be used to fit the distribution and predict higher n-ads and calculate the isotacticity of the polymer to the desired level.
Other techniques sensitive to tacticity include x-ray powder diffraction, secondary ion mass spectrometry (SIMS), vibrational spectroscopy (FTIR) and especially two-dimensional techniques. Tacticity may also be inferred by measuring another physical property, such as melting temperature, when the relationship between tacticity and that property is well-established.
References
External links
Tacticity @ École Polytechnique Fédérale de Lausanne
Application of spectroscopy in polymer charactisation @ University of California, Los Angeles
Polymer chemistry
Stereochemistry | 0.776495 | 0.982835 | 0.763166 |
Behavior | Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary.
Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector.
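As an illustrative sketch only (the field names and the count-based encoding are assumptions made for this example, not a standard representation from the behavior informatics literature), a behavior instance and a simple behavior vector might look like this in Python:

from dataclasses import dataclass, field

@dataclass
class Behavior:
    actor: str                   # who performs the behavior
    operation: str               # the action taken
    target: str                  # whom or what the action interacts with
    properties: dict = field(default_factory=dict)  # e.g. time, place, status

def behavior_vector(behaviors, vocabulary):
    # Encode a sequence of behaviors as a count vector over a fixed
    # vocabulary of (actor, operation) pairs.
    counts = {pair: 0 for pair in vocabulary}
    for b in behaviors:
        key = (b.actor, b.operation)
        if key in counts:
            counts[key] += 1
    return [counts[pair] for pair in vocabulary]

log = [Behavior("customer", "view", "product_page", {"time": "10:02"}),
       Behavior("customer", "purchase", "item_42", {"amount": 19.99})]
vocab = [("customer", "view"), ("customer", "purchase")]
print(behavior_vector(log, vocab))   # [1, 1]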
Models
Biology
Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli".
A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny).
Behaviors can be either innate or learned from the environment.
Behaviour can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.
Human behavior
The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior.
Animal behavior
Ethology is the scientific and objective study of animal behavior, usually with a focus on behavior under natural conditions, and viewing behavior as an evolutionarily adaptive trait. Behaviorism is a term that also describes the scientific and objective study of animal behavior, usually referring to measured responses to stimuli or trained behavioral responses in a laboratory context, without a particular emphasis on evolutionary adaptivity.
Consumer behavior
Consumer behavior involves the processes consumers go through and the reactions they have towards products or services. It has to do with consumption and the steps consumers take around purchasing and consuming goods and services. Consumers recognize needs or wants, and go through a process to satisfy them. Consumer behavior is the process they go through as customers, which includes the types of products purchased, the amount spent, the frequency of purchases, and what influences them to make the purchase decision or not.
Circumstances that influence consumer behaviour are varied, with contributions from both internal and external factors. Internal factors include attitudes, needs, motives, preferences and perceptual processes, whilst external factors include marketing activities, social and economic factors, and cultural aspects. Doctor Lars Perner of the University of Southern California claims that there are also physical factors that influence consumer behavior, for example, if a consumer is hungry, then this physical feeling of hunger will influence them so that they go and purchase a sandwich to satisfy the hunger.
Consumer decision making
Lars Perner presents a model that outlines the decision-making process involved in consumer behaviour. The process begins with the identification of a problem, wherein the consumer acknowledges an unsatisfied need or desire. The consumer then seeks information: for low-involvement products, the search tends to rely on internal resources, retrieving alternatives from memory, whereas for high-involvement products the search is typically more extensive, involving activities like reviewing reports, reading reviews, or seeking recommendations from friends.
The consumer will then evaluate his or her alternatives, comparing price and quality, making trade-offs between products, and narrowing down the choice by eliminating the less appealing products until one is left. Once this has been identified, the consumer will purchase the product.
Finally, the consumer will evaluate the purchase decision and the purchased product, weighing factors such as value for money, quality of the goods, and the purchase experience. However, this logical process does not always happen this way; people are emotional and irrational creatures. According to the psychologist Robert Cialdini, people make decisions with emotion and then justify them with logic.
How the 4P's influence consumer behavior
The Marketing mix (4 P's) are a marketing tool and stand for Price, Promotion, Product, and Placement.
Due to the significant impact of business-to-consumer marketing on consumer behavior, the four elements of the marketing mix, known as the 4 P's (product, price, place, and promotion), exert a notable influence on consumer behavior. The price of a good or service is largely determined by the market, as businesses will set their prices to be similar to that of other businesses so as to remain competitive whilst making a profit. When market prices for a product are high, it will cause consumers to purchase less and use purchased goods for longer periods of time, meaning they are purchasing the product less often. Alternatively, when market prices for a product are low, consumers are more likely to purchase more of the product, and more often.
The way that promotion influences consumer behavior has changed over time. In the past, large promotional campaigns and heavy advertising would convert into sales for a business, but nowadays businesses can have success on products with little or no advertising. This is due to the Internet and in particular social media. They rely on word of mouth from consumers using social media, and as products trend online, so sales increase as products effectively promote themselves. Thus, promotion by businesses does not necessarily result in consumer behavior trending towards purchasing products.
The way that the product influences consumer behavior is through consumer willingness to pay and consumer preferences. Even if a company has a long history of products in the market, consumers will still pick a cheaper, very similar product over that company's product, because of their willingness to pay, or their willingness to part with the money they have earned. The product also influences consumer behavior through customer preferences. For example, take Pepsi vs Coca-Cola: a Pepsi drinker is less likely to purchase Coca-Cola, even if it is cheaper and more convenient. This is due to the preference of the consumer, and no matter how hard the opposing company tries, it will not be able to force the customer to change their mind.
Product placement in the modern era has little influence on consumer behavior, due to the availability of goods online. If a customer can purchase a good from the comfort of their home instead of purchasing in-store, then the placement of products is not going to influence their purchase decision.
In management
Behavior outside of psychology includes:
Organizational
In management, behaviors are associated with desired or undesired focuses. Managers generally note what the desired outcome is, but behavioral patterns can take over. These patterns refer to how often the desired behavior actually occurs. Before a behavior occurs, antecedents come into play: the stimuli that influence the behavior that is about to happen. After the behavior occurs, consequences follow, consisting of rewards or punishments.
Social behavior
Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects the other. This is due to an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when one gives, one will receive the same. This behavior can be affected by both the qualities of the individual and the environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two—the organism and its environment. This means that, in regards to humans, social behavior can be determined by both the individual characteristics of the person, and the situation they are in.
Behavior informatics
Behavior informatics, also called behavior computing, explores behavior intelligence and behavior insights from the informatics and computing perspectives.
Different from applied behavior analysis from the psychological perspective, BI builds computational theories, systems and tools to qualitatively and quantitatively model, represent, analyze, and manage behaviors of individuals, groups and/or organizations.
Health
Health behavior refers to a person's beliefs and actions regarding their health and well-being. Health behaviors are direct factors in maintaining a healthy lifestyle. Health behaviors are influenced by the social, cultural, and physical environments in which we live. They are shaped by individual choices and external constraints. Positive behaviors help promote health and prevent disease, while the opposite is true for risk behaviors. Health behaviors are early indicators of population health. Because of the time lag that often occurs between certain behaviors and the development of disease, these indicators may foreshadow the future burdens and benefits of health-risk and health-promoting behaviors.
Correlates
A variety of studies have examined the relationship between health behaviors and health outcomes (e.g., Blaxter 1990) and have demonstrated their role in both morbidity and mortality.
These studies have identified seven features of lifestyle which were associated with lower morbidity and higher subsequent long-term survival (Belloc and Breslow 1972):
Avoiding snacks
Eating breakfast regularly
Exercising regularly
Maintaining a desirable body weight
Moderate alcohol intake
Not smoking
Sleeping 7–8 hours per night
Health behaviors impact upon individuals' quality of life, by delaying the onset of chronic disease and extending active lifespan. Smoking, alcohol consumption, diet, gaps in primary care services and low screening uptake are all significant determinants of poor health, and changing such behaviors should lead to improved health.
For example, in the US, Healthy People 2000, published by the United States Department of Health and Human Services, lists increased physical activity, changes in nutrition, and reductions in tobacco, alcohol, and drug use as important for health promotion and disease prevention.
Treatment approach
Any interventions are matched to the needs of each individual in an ethical and respectful manner. The health belief model encourages increasing individuals' perceived susceptibility to negative health outcomes and making individuals aware of the severity of such outcomes, for example through health promotion messages. In addition, the health belief model suggests focusing on the benefits of health behaviors and on the fact that barriers to action can be overcome. The theory of planned behavior suggests using persuasive messages that address behavioral beliefs in order to increase readiness to perform a behavior, known as intention. The theory of planned behavior also advocates addressing normative beliefs and control beliefs in any attempt to change behavior. Challenging normative beliefs is not enough, however: following through on the intention requires self-efficacy, built from the individual's mastery of problem solving and task completion, to bring about a positive change. Self-efficacy is often cemented through standard persuasive techniques.
See also
Applied behavior analysis
Behavioral cusp
Behavioral economics
Behavioral genetics
Behavioral sciences
Cognitive bias
Evolutionary physiology
Experimental analysis of behavior
Human sexual behavior
Herd behavior
Instinct
Mere-measurement effect
Motivation
Normality (behavior)
Organizational studies
Radical behaviorism
Reasoning
Rebellion
Social relation
Theories of political behavior
Work behavior
References
General
Cao, L. (2014). Behavior Informatics: A New Perspective. IEEE Intelligent Systems (Trends and Controversies), 29(4): 62–80.
Perner, L. (2008), Consumer behavior. University of Southern California, Marshall School of Business. Retrieved from http://www.consumerpsychologist.com/intro_Consumer_Behavior.html
Further reading
Bateson, P. (2017). Behaviour, Development and Evolution. Open Book Publishers, Cambridge.
External links
What is behavior? Baby don't ask me, don't ask me, no more at Earthling Nature.
behaviorinformatics.org
Links to review articles by Eric Turkheimer and co-authors on behavior research
Links to IJCAI2013 tutorial on behavior informatics and computing | 0.765408 | 0.997051 | 0.76315 |
Quantitative trait locus | A quantitative trait locus (QTL) is a locus (section of DNA) that correlates with variation of a quantitative trait in the phenotype of a population of organisms. QTLs are mapped by identifying which molecular markers (such as SNPs or AFLPs) correlate with an observed trait. This is often an early step in identifying the actual genes that cause the trait variation.
Definition
A quantitative trait locus (QTL) is a region of DNA which is associated with a particular phenotypic trait, which varies in degree and which can be attributed to polygenic effects, i.e., the product of two or more genes, and their environment. These QTLs are often found on different chromosomes. The number of QTLs which explain variation in the phenotypic trait indicates the genetic architecture of a trait. For example, it may indicate that plant height is controlled by many genes of small effect, or by a few genes of large effect.
Typically, QTLs underlie continuous traits (those traits which vary continuously, e.g. height) as opposed to discrete traits (traits that have two or several character values, e.g. red hair in humans, a recessive trait, or smooth vs. wrinkled peas used by Mendel in his experiments).
Moreover, a single phenotypic trait is usually determined by many genes. Consequently, many QTLs are associated with a single trait.
Another use of QTLs is to identify candidate genes underlying a trait. The DNA sequence of any genes in this region can then be compared to a database of DNA for genes whose function is already known, this task being fundamental for marker-assisted crop improvement.
History
Mendelian inheritance was rediscovered at the beginning of the 20th century. As Mendel's ideas spread, geneticists began to connect Mendel's rules of inheritance of single factors to Darwinian evolution. For early geneticists, it was not immediately clear that the smooth variation in traits like body size (i.e., incomplete dominance) was caused by the inheritance of single genetic factors. Darwin himself observed that inbred features of fancy pigeons were inherited in accordance with Mendel's laws (although he did not actually know about Mendel's ideas when he made the observation), but it was not obvious that the features selected by fancy pigeon breeders could similarly explain quantitative variation in nature.
An early attempt by William Ernest Castle to unify the laws of Mendelian inheritance with Darwin's theory of speciation invoked the idea that species become distinct from one another as one species or the other acquires a novel Mendelian factor. Castle's conclusion was based on the observation that novel traits that could be studied in the lab and that show Mendelian inheritance patterns reflect a large deviation from the wild type, and Castle believed that acquisition of such features is the basis of "discontinuous variation" that characterizes speciation. Darwin discussed the inheritance of similar mutant features but did not invoke them as a requirement of speciation. Instead Darwin used the emergence of such features in breeding populations as evidence that mutation can occur at random within breeding populations, which is a central premise of his model of selection in nature. Later in his career, Castle would refine his model for speciation to allow for small variation to contribute to speciation over time. He also was able to demonstrate this point by selectively breeding laboratory populations of rats to obtain a hooded phenotype over several generations.
Castle's was perhaps the first attempt made in the scientific literature to direct evolution by artificial selection of a trait with continuous underlying variation; however, the practice had previously been widely employed in agriculture to obtain livestock or plants with favorable features from populations showing quantitative variation in traits like body size or grain yield.
Castle's work was among the first to attempt to unify the recently rediscovered laws of Mendelian inheritance with Darwin's theory of evolution. Still, it would be almost thirty years until the theoretical framework for evolution of complex traits would be widely formalized. In an early summary of the theory of evolution of continuous variation, Sewall Wright, a graduate student who trained under Castle, summarized contemporary thinking about the genetic basis of quantitative natural variation: "As genetic studies continued, ever smaller differences were found to mendelize, and any character, sufficiently investigated, turned out to be affected by many factors." Wright and others formalized population genetics theory that had been worked out over the preceding 30 years explaining how such traits can be inherited and create stably breeding populations with unique characteristics. Quantitative trait genetics today leverages Wright's observations about the statistical relationship between genotype and phenotype in families and populations to understand how certain genetic features can affect variation in natural and derived populations.
Quantitative traits
Polygenic inheritance refers to inheritance of a phenotypic characteristic (trait) that is attributable to two or more genes and can be measured quantitatively. Multifactorial inheritance refers to polygenic inheritance that also includes interactions with the environment. Unlike monogenic traits, polygenic traits do not follow patterns of Mendelian inheritance (discrete categories). Instead, their phenotypes typically vary along a continuous gradient depicted by a bell curve.
An example of a polygenic trait is human skin color variation. Several genes factor into determining a person's natural skin color, so modifying only one of those genes can change skin color slightly or in some cases, such as for SLC24A5, moderately. Many disorders with genetic components are polygenic, including autism, cancer, diabetes and numerous others. Most phenotypic characteristics are the result of the interaction of multiple genes.
Multifactorially inherited diseases are said to constitute the majority of genetic disorders affecting humans which will result in hospitalization or special care of some kind.
Multifactorial traits in general
Traits controlled both by the environment and by genetic factors are called multifactorial.
Usually, multifactorial traits outside of illness result in what we see as continuous characteristics in organisms, especially in humans, such as height, skin color, and body mass. All of these phenotypes are complicated by a great deal of give-and-take between genes and environmental effects. The continuous distribution of traits such as height and skin color described above reflects the action of genes that do not manifest typical patterns of dominance and recessiveness. Instead the contributions of each involved locus are thought to be additive. Writers have distinguished this kind of inheritance as polygenic, or quantitative inheritance.
Thus, due to the nature of polygenic traits, inheritance will not follow the same pattern as a simple monohybrid or dihybrid cross. Polygenic inheritance can be explained as Mendelian inheritance at many loci, resulting in a trait which is normally distributed. If n is the number of involved loci, then the coefficients of the binomial expansion of (a + b)^(2n) will give the frequency distribution of the possible allele combinations. For sufficiently high values of n, this binomial distribution will begin to resemble a normal distribution. From this viewpoint, a disease state will become apparent at one of the tails of the distribution, past some threshold value. Disease states of increasing severity will be expected the further one goes past the threshold and away from the mean.
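A minimal simulation can make this concrete. The sketch below assumes n biallelic loci with equal, purely additive effects and an allele frequency of 0.5 at every locus; all parameter values are illustrative rather than taken from any real trait.

```python
# Minimal sketch of an additive polygenic model approaching a normal
# distribution. The number of loci, allele frequency and population size
# are illustrative assumptions, not values from any particular study.
import numpy as np

rng = np.random.default_rng(0)

n_loci = 20           # number of contributing loci (n)
n_individuals = 10_000
allele_freq = 0.5     # frequency of the "+" allele at every locus

# Each individual carries 2n alleles (two per locus); with purely additive
# effects the trait value is just the count of "+" alleles, i.e. a
# Binomial(2n, p) draw per individual.
trait = rng.binomial(2 * n_loci, allele_freq, size=n_individuals)

print("mean:    ", trait.mean(), " expected:", 2 * n_loci * allele_freq)
print("variance:", trait.var(), " expected:",
      2 * n_loci * allele_freq * (1 - allele_freq))
# As n_loci grows, a histogram of `trait` becomes indistinguishable from a
# normal distribution, which is the point made in the text above.
```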
Heritable disease and multifactorial inheritance
A mutation resulting in a disease state is often recessive, so both alleles must be mutant in order for the disease to be expressed phenotypically. A disease or syndrome may also be the result of the expression of mutant alleles at more than one locus. When more than one gene is involved, with or without the presence of environmental triggers, we say that the disease is the result of multifactorial inheritance.
The more genes involved in the cross, the more the distribution of the genotypes will resemble a normal, or Gaussian distribution. This shows that multifactorial inheritance is polygenic, and genetic frequencies can be predicted by way of a polyhybrid Mendelian cross. Phenotypic frequencies are a different matter, especially if they are complicated by environmental factors.
The paradigm of polygenic inheritance as being used to define multifactorial disease has encountered much disagreement. Turnpenny (2004) discusses how simple polygenic inheritance cannot explain some diseases such as the onset of Type I diabetes mellitus, and that in cases such as these, not all genes are thought to make an equal contribution.
The assumption of polygenic inheritance is that all involved loci make an equal contribution to the symptoms of the disease. This should result in a normal (Gaussian) distribution of genotypes. When it does not, the idea of polygenetic inheritance cannot be supported for that illness.
Examples
Well-known examples of diseases with both genetic and environmental components include Type I diabetes mellitus, discussed above, as well as atopic diseases such as eczema or dermatitis, spina bifida (open spine), and anencephaly (open skull).
While schizophrenia is widely believed to be multifactorially genetic by biopsychiatrists, no characteristic genetic markers have been determined with any certainty.
If it is shown that the brothers and sisters of the patient have the disease, then there is a strong chance that the disease is genetic and that the patient will also be a genetic carrier. This is not quite enough as it also needs to be proven that the pattern of inheritance is non-Mendelian. This would require studying dozens, even hundreds of different family pedigrees before a conclusion of multifactorial inheritance is drawn. This often takes several years.
If multifactorial inheritance is indeed the case, then the chance of the patient contracting the disease is reduced only if cousins and more distant relatives have the disease. It must be stated that while multifactorially-inherited diseases tend to run in families, inheritance will not follow the same pattern as a simple monohybrid or dihybrid cross.
If a genetic cause is suspected and little else is known about the illness, then it remains to be seen exactly how many genes are involved in the phenotypic expression of the disease. Once that is determined, the question must be answered: if two people have the required genes, why are there differences in expression between them? Generally, what makes the two individuals different are likely to be environmental factors. Due to the involved nature of genetic investigations needed to determine such inheritance patterns, this is not usually the first avenue of investigation one would choose to determine etiology.
QTL mapping
For organisms whose genomes are known, one might now try to exclude genes in the identified region whose function is known with some certainty not to be connected with the trait in question. If the genome is not available, it may be an option to sequence the identified region and determine the putative functions of genes by their similarity to genes with known function, usually in other genomes. This can be done using BLAST, an online tool that allows users to enter a primary sequence and search for similar sequences within the BLAST database of genes from various organisms. Note that a QTL is often not the actual gene underlying the phenotypic trait, but rather a region of DNA that is closely linked with that gene.
Another interest of statistical geneticists using QTL mapping is to determine the complexity of the genetic architecture underlying a phenotypic trait. For example, they may be interested in knowing whether a phenotype is shaped by many independent loci or by a few loci, and whether those loci interact. This can provide information on how the phenotype may be evolving.
In a recent development, classical QTL analyses have been combined with gene expression profiling, i.e., by DNA microarrays. Such expression QTLs (eQTLs) describe cis- and trans-controlling elements for the expression of often disease-associated genes. Observed epistatic effects have been found useful for identifying the responsible gene, by cross-validating genes within the interacting loci against metabolic pathway and scientific literature databases.
Analysis of variance
The simplest method for QTL mapping is analysis of variance (ANOVA, sometimes called "marker regression") at the marker loci. In this method, in a backcross, one may calculate a t-statistic to compare the averages of the two marker genotype groups. For other types of crosses (such as the intercross), where there are more than two possible genotypes, one uses a more general form of ANOVA, which provides a so-called F-statistic. The ANOVA approach for QTL mapping has three important weaknesses. First, we do not receive separate estimates of QTL location and QTL effect. QTL location is indicated only by looking at which markers give the greatest differences between genotype group averages, and the apparent QTL effect at a marker will be smaller than the true QTL effect as a result of recombination between the marker and the QTL. Second, we must discard individuals whose genotypes are missing at the marker. Third, when the markers are widely spaced, the QTL may be quite far from all markers, and so the power for QTL detection will decrease.
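A minimal sketch of this single-marker comparison is given below, using synthetic backcross-style data; the sample size, QTL effect and noise level are arbitrary assumptions.

```python
# Sketch of single-marker "marker regression" in a backcross: compare
# phenotype means between the two marker genotype classes with a t-test.
# The data are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 200
genotype = rng.integers(0, 2, size=n)      # 0 = AA, 1 = Aa at the marker
qtl_effect = 1.5                           # assumed additive effect
phenotype = 10 + qtl_effect * genotype + rng.normal(0, 2, size=n)

t_stat, p_value = stats.ttest_ind(phenotype[genotype == 0],
                                  phenotype[genotype == 1])
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
# A small p-value suggests linkage to a QTL, but (as noted in the text) the
# apparent effect at the marker underestimates the true QTL effect whenever
# the marker and the QTL are separated by recombination.
```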
Interval mapping
Lander and Botstein developed interval mapping, which overcomes the three disadvantages of analysis of variance at marker loci. Interval mapping is currently the most popular approach for QTL mapping in experimental crosses. The method makes use of a genetic map of the typed markers, and, like analysis of variance, assumes the presence of a single QTL. In interval mapping, each locus is considered one at a time and the logarithm of the odds ratio (LOD score) is calculated for the model that the given locus is a true QTL. The odds ratio is related to the Pearson correlation coefficient between the phenotype and the marker genotype for each individual in the experimental cross.
The term 'interval mapping' is used for estimating the position of a QTL within two markers (often indicated as a 'marker bracket'). Interval mapping was originally based on maximum likelihood, but very good approximations are also possible with simple regression.
The principle for QTL mapping is as follows (a minimal regression-based sketch illustrating these steps is given after the list):
1) The likelihood can be calculated for a given set of parameters (particularly QTL effect and QTL position) given the observed data on phenotypes and marker genotypes.
2) The estimates for the parameters are those where the likelihood is highest.
3) A significance threshold can be established by permutation testing.
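As an illustration of these steps, the following sketch computes a LOD score from the simple-regression approximation mentioned above and estimates a significance threshold by permutation. The genotype and phenotype data, the QTL effect, and the small number of permutations are all invented for the example.

```python
# Rough regression approximation to interval mapping at a single position:
# the LOD score compares a one-QTL model against a no-QTL (mean-only) model.
# Data are synthetic; the permutation count is kept deliberately small.
import numpy as np

rng = np.random.default_rng(2)

n = 200
genotype = rng.integers(0, 2, size=n).astype(float)   # marker/pseudomarker
phenotype = 10 + 1.0 * genotype + rng.normal(0, 2, size=n)

def lod_score(pheno, geno):
    # Residual sum of squares under the null (mean-only) model ...
    rss0 = np.sum((pheno - pheno.mean()) ** 2)
    # ... and under the single-QTL model (simple linear regression on genotype).
    slope, intercept = np.polyfit(geno, pheno, 1)
    rss1 = np.sum((pheno - (intercept + slope * geno)) ** 2)
    # Standard identity under normal errors: LOD = (n/2) * log10(RSS0 / RSS1).
    return (len(pheno) / 2) * np.log10(rss0 / rss1)

observed = lod_score(phenotype, genotype)

# Step 3: permutation test, shuffling phenotypes to break any genotype link.
perm = [lod_score(rng.permutation(phenotype), genotype) for _ in range(200)]
threshold = np.quantile(perm, 0.95)

print(f"LOD = {observed:.2f}, 95% permutation threshold = {threshold:.2f}")
```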
Conventional methods for the detection of quantitative trait loci (QTLs) are based on a comparison of single QTL models with a model assuming no QTL. For instance, in the "interval mapping" method the likelihood for a single putative QTL is assessed at each location on the genome. However, QTLs located elsewhere on the genome can have an interfering effect. As a consequence, the power of detection may be compromised, and the estimates of locations and effects of QTLs may be biased (Lander and Botstein 1989; Knapp 1991). Even nonexisting so-called "ghost" QTLs may appear (Haley and Knott 1992; Martinez and Curnow 1992). Therefore, multiple QTLs could be mapped more efficiently and more accurately by using multiple QTL models. One popular approach to handle QTL mapping where multiple QTLs contribute to a trait is to iteratively scan the genome and add known QTLs to the regression model as they are identified. This method, termed composite interval mapping, determines both the location and effect sizes of QTLs more accurately than single-QTL approaches, especially in small mapping populations where correlation between genotypes in the mapping population may be problematic.
Composite interval mapping (CIM)
In this method, one performs interval mapping using a subset of marker loci as covariates. These markers serve as proxies for other QTLs to increase the resolution of interval mapping, by accounting for linked QTLs and reducing the residual variation. The key problem with CIM concerns the choice of suitable marker loci to serve as covariates; once these have been chosen, CIM turns the model selection problem into a single-dimensional scan. The choice of marker covariates has not been solved, however. Not surprisingly, the appropriate markers are those closest to the true QTLs, and so if one could find these, the QTL mapping problem would be complete anyway.
Inclusive composite interval mapping (ICIM) has also been proposed as a potential method for QTL mapping.
Family-pedigree based mapping
Family-based QTL mapping, or family-pedigree based mapping (linkage and association mapping), involves multiple families instead of a single family. Family-based QTL mapping has been the only way to map genes for which experimental crosses are difficult to make. However, due to some advantages, plant geneticists are now attempting to incorporate some of the methods pioneered in human genetics. The use of a family-pedigree based approach has been discussed (Bink et al. 2008), and family-based linkage and association mapping has been successfully implemented (Rosyara et al. 2009).
See also
Association mapping
Family-based QTL mapping
Epistasis
Dominance (genetics)
Expression quantitative trait loci (eQTL)
Genetic predisposition
Nested association mapping
Oncogene
Genetic susceptibility
References
Bink MCAM, Boer MP, ter Braak CJF, Jansen J, Voorrips RE, van de Weg WE: Bayesian analysis of complex traits in pedigreed plant populations. Euphytica 2008, 161:85–96.
Rosyara U.R., J.L. Gonzalez-Hernandez, K.D. Glover, K.R. Gedye and J.M. Stein. 2009. Family-based mapping of quantitative trait loci in plant breeding populations with resistance to Fusarium head blight in wheat as an illustration Theoretical Applied Genetics 118:1617–1631
Garnier, Sophie; Truong, Vinh: Genome-Wide Haplotype Analysis of Cis Expression Quantitative Trait Loci in Monocytes.
External links
Plant Breeding and Genomics on eXtension.org
INTERSNP – a software for genome-wide interaction analysis (GWIA) of case-control SNP data and analysis of quantitative traits
Precision Mapping of Quantitative Trait Loci
QTL Cartographer
Complex Trait Consortium
A Statistical Framework for Quantitative Trait Mapping
GeneNetwork
GridQTL
QTL discussion forum
A list of computer programs for genetic analysis including QTL analysis
Quantitative Trait Locus (QTL) Analysis @ Scitable
Mapping Quantitative Trait Loci
What are Quantitative Trait Loci? – University of Warwick
Classical genetics
Statistical genetics
Quantitative trait loci
Genetic epidemiology
Quantitative genetics | 0.774503 | 0.985339 | 0.763148 |
Aspartate transaminase | Aspartate transaminase (AST) or aspartate aminotransferase, also known as AspAT/ASAT/AAT or (serum) glutamic oxaloacetic transaminase (GOT, SGOT), is a pyridoxal phosphate (PLP)-dependent transaminase enzyme that was first described by Arthur Karmen and colleagues in 1954. AST catalyzes the reversible transfer of an α-amino group between aspartate and glutamate and, as such, is an important enzyme in amino acid metabolism. AST is found in the liver, heart, skeletal muscle, kidneys, brain, red blood cells and gall bladder. Serum AST level, serum ALT (alanine transaminase) level, and their ratio (AST/ALT ratio) are commonly measured clinically as biomarkers for liver health. The tests are part of blood panels.
The half-life of total AST in the circulation approximates 17 hours and, on average, 87 hours for mitochondrial AST. Aminotransferase is cleared by sinusoidal cells in the liver.
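As a simple illustration of what such a half-life implies, the sketch below treats clearance as a first-order (exponential) process using the roughly 17-hour figure quoted above; this is an idealization rather than a clinical model.

```python
# First-order clearance sketch: fraction of cytosolic AST activity remaining
# after a given time, assuming simple exponential elimination with the
# ~17 h circulating half-life quoted above. Purely illustrative.
half_life_h = 17.0

def fraction_remaining(t_hours, t_half=half_life_h):
    return 0.5 ** (t_hours / t_half)

for t in (17, 34, 72):
    print(f"after {t:>2} h: {fraction_remaining(t):.2f} of the initial activity")
```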
Function
Aspartate transaminase catalyzes the interconversion of aspartate and α-ketoglutarate to oxaloacetate and glutamate.
L-Aspartate (Asp) + α-ketoglutarate ↔ oxaloacetate + L-glutamate (Glu)
As a prototypical transaminase, AST relies on PLP (Vitamin B6) as a cofactor to transfer the amino group from aspartate or glutamate to the corresponding ketoacid. In the process, the cofactor shuttles between PLP and the pyridoxamine phosphate (PMP) form. The amino group transfer catalyzed by this enzyme is crucial in both amino acid degradation and biosynthesis. In amino acid degradation, following the conversion of α-ketoglutarate to glutamate, glutamate subsequently undergoes oxidative deamination to form ammonium ions, which are excreted as urea. In the reverse reaction, aspartate may be synthesized from oxaloacetate, which is a key intermediate in the citric acid cycle.
Isoenzymes
Two isoenzymes are present in a wide variety of eukaryotes. In humans:
GOT1/cAST, the cytosolic isoenzyme, derives mainly from red blood cells and heart.
GOT2/mAST, the mitochondrial isoenzyme, is present predominantly in liver.
These isoenzymes are thought to have evolved from a common ancestral AST via gene duplication, and they share a sequence homology of approximately 45%.
AST has also been found in a number of microorganisms, including E. coli, H. mediterranei, and T. thermophilus. In E. coli, the enzyme is encoded by the aspC gene and has also been shown to exhibit the activity of an aromatic-amino-acid transaminase.
Structure
X-ray crystallography studies have been performed to determine the structure of aspartate transaminase from various sources, including chicken mitochondria, pig heart cytosol, and E. coli. Overall, the three-dimensional polypeptide structure for all species is quite similar. AST is dimeric, consisting of two identical subunits, each with approximately 400 amino acid residues and a molecular weight of approximately 45 kD. Each subunit is composed of a large and a small domain, as well as a third domain consisting of the N-terminal residues 3-14; these few residues form a strand, which links and stabilizes the two subunits of the dimer. The large domain, which includes residues 48-325, binds the PLP cofactor via an aldimine linkage to the ε-amino group of Lys258. Other residues in this domain – Asp 222 and Tyr 225 – also interact with PLP via hydrogen bonding. The small domain consists of residues 15-47 and 326-410 and represents a flexible region that shifts the enzyme from an "open" to a "closed" conformation upon substrate binding.
The two independent active sites are positioned near the interface between the two domains. Within each active site, a couple arginine residues are responsible for the enzyme's specificity for dicarboxylic acid substrates: Arg386 interacts with the substrate's proximal (α-)carboxylate group, while Arg292 complexes with the distal (side-chain) carboxylate.
In terms of secondary structure, AST contains both α and β elements. Each domain has a central sheet of β-strands with α-helices packed on either side.
Mechanism
Aspartate transaminase, as with all transaminases, operates via dual substrate recognition; that is, it is able to recognize and selectively bind two amino acids (Asp and Glu) with different side-chains. In either case, the transaminase reaction consists of two similar half-reactions that constitute what is referred to as a ping-pong mechanism. In the first half-reaction, amino acid 1 (e.g., L-Asp) reacts with the enzyme-PLP complex to generate ketoacid 1 (oxaloacetate) and the modified enzyme-PMP. In the second half-reaction, ketoacid 2 (α-ketoglutarate) reacts with enzyme-PMP to produce amino acid 2 (L-Glu), regenerating the original enzyme-PLP in the process. Formation of a racemic product (D-Glu) is very rare.
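In the absence of products, the kinetics of such a ping-pong (bi-bi) mechanism are commonly described by the rate law v = Vmax[A][B] / (Ka[B] + Kb[A] + [A][B]). The sketch below evaluates this expression with arbitrary placeholder constants, not measured values for aspartate transaminase.

```python
# Illustrative ping-pong bi-bi rate law (no product inhibition):
#   v = Vmax*[A]*[B] / (Ka*[B] + Kb*[A] + [A]*[B])
# The kinetic constants are arbitrary placeholders, not values measured for
# aspartate transaminase.
def ping_pong_rate(a, b, vmax=100.0, ka=1.0, kb=0.5):
    """a = [aspartate], b = [alpha-ketoglutarate], in arbitrary units."""
    return vmax * a * b / (ka * b + kb * a + a * b)

# Rates over a small grid of substrate concentrations. In double-reciprocal
# (Lineweaver-Burk) form, varying [A] at several fixed [B] would give the
# parallel lines that are diagnostic of ping-pong kinetics.
for b in (0.5, 1.0, 2.0):
    print(f"[B] = {b}:", [round(ping_pong_rate(a, b), 1)
                          for a in (0.5, 1.0, 2.0, 5.0)])
```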
The specific steps for the half-reaction of Enzyme-PLP + aspartate ⇌ Enzyme-PMP + oxaloacetate are as follows (see figure); the other half-reaction (not shown) proceeds in the reverse manner, with α-ketoglutarate as the substrate.
Internal aldimine formation: First, the ε-amino group of Lys258 forms a Schiff base linkage with the aldehyde carbon to generate an internal aldimine.
Transaldimination: The internal aldimine then becomes an external aldimine when the ε-amino group of Lys258 is displaced by the amino group of aspartate. This transaldimination reaction occurs via a nucleophilic attack by the deprotonated amino group of Asp and proceeds through a tetrahedral intermediate. At this point, the carboxylate groups of Asp are stabilized by the guanidinium groups of the enzyme's Arg386 and Arg292 residues.
Quinonoid formation: The hydrogen attached to the α-carbon of Asp is then abstracted (Lys258 is thought to be the proton acceptor) to form a quinonoid intermediate.
Ketimine formation: The quinonoid is reprotonated, but now at the aldehyde carbon, to form the ketimine intermediate.
Ketimine hydrolysis: Finally, the ketimine is hydrolyzed to form PMP and oxaloacetate.
This mechanism is thought to have multiple partially rate-determining steps. However, it has been shown that the substrate binding step (transaldimination) drives the catalytic reaction forward.
Clinical significance
AST is similar to alanine transaminase (ALT) in that both enzymes are associated with liver parenchymal cells. The difference is that ALT is found predominantly in the liver, with clinically negligible quantities found in the kidneys, heart, and skeletal muscle, while AST is found in the liver, heart (cardiac muscle), skeletal muscle, kidneys, brain, and red blood cells. As a result, ALT is a more specific indicator of liver inflammation than AST, as AST may be elevated also in diseases affecting other organs, such as myocardial infarction, acute pancreatitis, acute hemolytic anemia, severe burns, acute renal disease, musculoskeletal diseases, and trauma.
AST was defined as a biochemical marker for the diagnosis of acute myocardial infarction in 1954. However, the use of AST for such a diagnosis is now redundant and has been superseded by the cardiac troponins.
Laboratory tests should always be interpreted using the reference range from the laboratory that performed the test.
See also
Alanine transaminase (ALT/ALAT/SGPT)
Transaminases
References
Further reading
External links
AST - Lab Tests Online
AST: MedlinePlus Medical Encyclopedia
Liver function tests
EC 2.6.1
Glutamate (neurotransmitter) | 0.765012 | 0.997555 | 0.763142 |
Globular protein | In biochemistry, globular proteins or spheroproteins are spherical ("globe-like") proteins and are one of the common protein types (the others being fibrous, disordered and membrane proteins). Globular proteins are somewhat water-soluble (forming colloids in water), unlike the fibrous or membrane proteins. There are multiple fold classes of globular proteins, since there are many different architectures that can fold into a roughly spherical shape.
The term globin can refer more specifically to proteins including the globin fold.
Globular structure and solubility
The term globular protein is quite old (dating probably from the 19th century) and is now somewhat archaic given the hundreds of thousands of proteins and the more elegant and descriptive structural motif vocabulary. The globular nature of these proteins could be determined even without modern structural techniques, using only methods such as ultracentrifugation or dynamic light scattering.
The spherical structure is induced by the protein's tertiary structure. The molecule's apolar (hydrophobic) amino acids are buried in the molecule's interior, whereas polar (hydrophilic) amino acids face outwards, allowing dipole–dipole interactions with the solvent, which explains the molecule's solubility.
Globular proteins are only marginally stable because the free energy released when the protein folds into its native conformation is relatively small. This is because protein folding incurs an entropic cost: whereas the primary sequence of a polypeptide chain can adopt numerous conformations, the native globular structure restricts it to only a few. The resulting decrease in randomness is offset by non-covalent interactions, such as hydrophobic interactions, that stabilize the structure.
Protein folding
Although it is still not fully understood how proteins fold up naturally, new evidence has helped advance understanding. Part of the protein folding problem is that several non-covalent, weak interactions are formed, such as hydrogen bonds and Van der Waals interactions. The mechanism of protein folding is currently being studied via several techniques. Even from the denatured state, a protein can fold into its correct structure.
Globular proteins seem to have two mechanisms for protein folding, either the diffusion-collision model or the nucleation-condensation model, although recent findings have shown that some globular proteins, such as PTP-BL PDZ2, fold with characteristic features of both models. These new findings have shown that the transition states of proteins may affect the way they fold. The folding of globular proteins has also recently been connected to the treatment of diseases, and anti-cancer ligands have been developed which bind to the folded but not the natural protein. These studies have shown that the folding of globular proteins affects their function.
By the second law of thermodynamics, the free energy difference between unfolded and folded states is contributed by enthalpy and entropy changes. As the free energy difference in a globular protein that results from folding into its native conformation is small, it is marginally stable, thus providing a rapid turnover rate and effective control of protein degradation and synthesis.
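The following sketch illustrates this marginal-stability argument numerically, computing the folding free energy as ΔG = ΔH − TΔS from order-of-magnitude values that are assumed purely for illustration and do not describe any particular protein.

```python
# Marginal stability sketch: the folding free energy is the small difference
# between large, opposing enthalpic and entropic terms. The numbers are
# illustrative orders of magnitude only, not values for a specific protein.
delta_H = -250.0   # kJ/mol, favorable enthalpy change on folding (assumed)
delta_S = -0.70    # kJ/(mol*K), unfavorable conformational entropy (assumed)
T = 310.0          # K, roughly physiological temperature

delta_G = delta_H - T * delta_S
print(f"delta_G(folding) ~ {delta_G:.0f} kJ/mol")   # about -33 kJ/mol: marginal
```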
Role
Unlike fibrous proteins which only play a structural function, globular proteins can act as:
Enzymes, by catalyzing organic reactions taking place in the organism in mild conditions and with a great specificity. Different esterases fulfill this role.
Messengers, by transmitting messages to regulate biological processes. This function is done by hormones, i.e. insulin etc.
Transporters of other molecules through membranes
Stocks of amino acids.
Regulatory roles are also performed by globular proteins rather than fibrous proteins.
Structural proteins, e.g., actin and tubulin, which are globular and soluble as monomers, but polymerize to form long, stiff fibers
Members
Among the best-known globular proteins is hemoglobin, a member of the globin protein family. Other globular proteins include the alpha, beta and gamma globulins; the gamma globulins comprise the immunoglobulins IgA, IgD, IgE, IgG and IgM. See protein electrophoresis for more information on the different globulins. Nearly all enzymes with major metabolic functions are globular in shape, as well as many signal transduction proteins.
Albumins are also globular proteins, although, unlike all of the other globular proteins, they are completely soluble in water. They are not soluble in oil.
References
Proteins by structure
Protein structure | 0.772551 | 0.987813 | 0.763136 |
Applied ontology | Applied ontology is the application of Ontology for practical purposes. This can involve employing ontological methods or resources to specific domains,
such as management, relationships, biomedicine, information science or geography. Alternatively, applied ontology can aim more generally at developing improved methodologies for recording and organizing knowledge.
Much work in applied ontology is carried out within the framework of the Semantic Web. Ontologies can structure data and add useful semantic content to it, such as definitions of classes and relations between entities, including subclass relations. The semantic web makes use of languages designed to allow for ontological content, including the Resource Description Framework (RDF) and the Web Ontology Language (OWL).
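As an illustration, the sketch below uses the Python rdflib library to build a tiny RDF graph containing OWL classes, an RDFS subclass relation and a labeled instance; the example.org namespace and the class names are invented for the example.

```python
# Minimal sketch of adding ontological structure to data with rdflib.
# The ex: namespace, class names and instance are invented; only the
# RDF, RDFS and OWL vocabularies themselves are standard.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology/")
g = Graph()
g.bind("ex", EX)

# Declare two classes and a subclass relation between them.
g.add((EX.Enzyme, RDF.type, OWL.Class))
g.add((EX.Protein, RDF.type, OWL.Class))
g.add((EX.Enzyme, RDFS.subClassOf, EX.Protein))

# Attach an instance and a human-readable label.
g.add((EX.Hexokinase, RDF.type, EX.Enzyme))
g.add((EX.Hexokinase, RDFS.label, Literal("hexokinase")))

print(g.serialize(format="turtle"))
```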
Applying ontology to relationships
The challenge of applying ontology is ontology's emphasis on a world view orthogonal to epistemology. The emphasis is on being rather than on doing (as implied by "applied") or on knowing. This is explored by philosophers and pragmatists like Fernando Flores and Martin Heidegger.
One way in which that emphasis plays out is in the concept of "speech acts": acts of promising, ordering, apologizing, requesting, inviting or sharing. The study of these acts from an ontological perspective is one of the driving forces behind relationship-oriented applied ontology. This can involve concepts championed by ordinary language philosophers like Ludwig Wittgenstein.
Applying ontology can also involve looking at the relationship between a person's world and that person's actions. The context or clearing is highly influenced by the being of the subject or the field of being itself. This view is highly influenced by the philosophy of phenomenology, the works of Heidegger, and others.
Ontological perspectives
Social scientists adopt a number of approaches to ontology. Some of these are:
Realism - the idea that facts are "out there" just waiting to be discovered;
Empiricism - the idea that we can observe the world and evaluate those observations in relation to facts;
Positivism - which focuses on the observations themselves, attending more to claims about facts than to facts themselves;
Grounded theory - which seeks to derive theories from facts;
Engaged theory - which moves across different levels of interpretation, linking different empirical questions to ontological understandings;
Postmodernism - which regards facts as fluid and elusive, and recommends focusing only on observational claims.
Data ontology
Ontologies can be used for structuring data in a machine-readable manner. In this context, an ontology is a controlled vocabulary of classes that can be placed in hierarchical relations with each other. These classes can represent entities in the real world which data is about. Data can then be linked to the formal structure of these ontologies to aid dataset interoperability, along with retrieval and discovery of information. The classes in an ontology can be limited to a relatively narrow domain (such as an ontology of occupations), or expansively cover all of reality with highly general classes (such as in Basic Formal Ontology).
Applied ontology is a quickly growing field. It has found major applications in areas such as biological research, artificial intelligence, banking, healthcare, and defense.
See also
Foundation ontology
Applied philosophy
John Searle
Bertrand Russell
Barry Smith, ontologist with a focus on biomedicine
Nicola Guarino, researcher in the formal ontology of information systems
References
External links
Applied philosophy
Applied ontology | 0.786316 | 0.97051 | 0.763127 |
Pharmaceutical manufacturing | Pharmaceutical manufacturing is the process of industrial-scale synthesis of pharmaceutical drugs as part of the pharmaceutical industry. The process of drug manufacturing can be broken down into a series of unit operations, such as milling, granulation, coating, tablet pressing, and others.
Scale-up considerations
Cooling
While a laboratory may use dry ice as a cooling agent for reaction selectivity, this process gets complicated on an industrial scale. The cost to cool a typical reactor to this temperature is large, and the viscosity of the reagents typically also increases as the temperature lowers, leading to difficult mixing. This results in added costs to stir harder and replace parts more often, or it results in a non-homogeneous reaction. Finally, lower temperatures can result in crusting of reagents, intermediates, and byproducts to the reaction vessel over time, which will impact the purity of the product.
Stoichiometry
Different stoichiometric ratios of reagents can result in different ratios of products formed. On the industrial scale, adding a large amount of reagent A to reagent B may take time. During this addition period, the reagent A already in the vessel is exposed to a much higher stoichiometric amount of reagent B until all of A has been added; this imbalance can lead to reagent A reacting prematurely, and to the resulting products reacting further with the large excess of reagent B.
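A toy calculation can illustrate the imbalance: the sketch below tracks the instantaneous B-to-A ratio in the vessel while A is dosed gradually into a full charge of B, ignoring consumption. The quantities and dosing schedule are arbitrary illustrative values.

```python
# Toy illustration of the dosing imbalance described above. Reagent A is
# added gradually to a full charge of reagent B; consumption is ignored so
# the numbers only show the instantaneous excess seen by the added A.
import numpy as np

total_A = 100.0      # mol of A to be dosed in
initial_B = 100.0    # mol of B charged up front (1:1 overall stoichiometry)

for frac in np.linspace(0.1, 1.0, 10):   # fraction of A added so far
    A_in_vessel = total_A * frac
    ratio = initial_B / A_in_vessel      # molar ratio B : (A added so far)
    print(f"{frac:4.0%} of A added -> B/A ratio in vessel ~ {ratio:4.1f}")
# Early in the addition, the first portions of A face a roughly 10-fold
# excess of B even though the overall recipe is 1:1, which is exactly the
# imbalance discussed in the text.
```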
Solvent extractions
Whether to add organic solvent into aqueous solvent, or vice versa, becomes important on the industrial scale. Depending on the solvents used, emulsions can form, and the time needed for the layers to separate can be extended if the mixing between solvents is not optimal. When adding organic solvent to an aqueous one, stoichiometry must be considered again, as the excess of water can hydrolyze organic compounds under even mildly acidic or basic conditions. More broadly, the location of the chemical plant can affect the ambient temperature of the reaction vessel; a difference of even a couple of degrees can yield very different levels of extraction between plants located in different countries.
Unit operations
Powder feeding in continuous manufacturing
In continuous manufacturing, input raw materials and energy are fed into the system at a constant rate, and at the same time, a constant extraction of output products is achieved. The process's performance is heavily dependent on the stability of the material flowrate. For powder-based continuous processes, it is critical to feed powders consistently and accurately into the subsequent unit operations of the process line, as feeding is typically the first unit operation. Feeders have been designed to achieve performance reliability, feed rate accuracy, and minimal disturbances, and accurate and consistent delivery of materials by well-designed feeders ensures overall process stability. Loss-in-weight (LIW) feeders are typically selected for pharmaceutical manufacturing; they control material dispensing by weight at a precise rate and minimize the flowrate variability caused by changes in fill level and material bulk density. Importantly, feeding performance is strongly dependent on powder flow properties.
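The sketch below gives a highly simplified picture of loss-in-weight control: the feed rate is estimated from the decrease in hopper weight over each control interval, and the screw speed is trimmed proportionally toward the setpoint. The gains, noise level and rates are invented for illustration; real feeders use considerably more elaborate control and refill logic.

```python
# Simplified loss-in-weight (LIW) feeding sketch. All parameters (gain,
# noise, discharge coefficient, setpoint) are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

setpoint = 2.0        # kg/h target feed rate
screw_speed = 50.0    # arbitrary units; 0.04 kg/h per unit is assumed below
kp = 5.0              # proportional gain (assumed)
dt = 1.0 / 60.0       # 1-minute control interval, in hours
hopper = 20.0         # kg of powder currently in the hopper

for step in range(5):
    # Actual discharge depends on screw speed, plus load-cell noise.
    actual_rate = 0.04 * screw_speed + rng.normal(0.0, 0.05)
    new_hopper = hopper - actual_rate * dt
    measured_rate = (hopper - new_hopper) / dt       # loss in weight per time
    hopper = new_hopper
    screw_speed += kp * (setpoint - measured_rate)   # simple proportional trim
    print(f"step {step}: measured {measured_rate:.2f} kg/h, "
          f"screw speed -> {screw_speed:.1f}")
```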
Powder blending
In the pharmaceutical industry, a wide range of excipients may be blended together with the active pharmaceutical ingredient to create the final blend used to manufacture the solid dosage form. The range of materials that may be blended (excipients, API), presents a number of variables which must be addressed to achieve target product quality attributes. These variables may include the particle size distribution (including aggregates or lumps of material), particle shape (spheres, rods, cubes, plates, and irregular), presence of moisture (or other volatile compounds), particle surface properties (roughness, cohesion), and powder flow properties.
Milling
During the drug manufacturing process, milling is often required in order to reduce the average particle size in a drug powder. There are a number of reasons for this, including increasing homogeneity and dosage uniformity, increasing bioavailability, and increasing the solubility of the drug compound. In some cases, repeated powder blending followed by milling is conducted to improve the manufacturability of the blends.
Granulation
In general, there are two types of granulation: wet granulation and dry granulation. Granulation can be thought of as the opposite of milling; it is the process by which small particles are bound together to form larger particles, called granules. Granulation is used for several reasons. Granulation prevents the "demixing" of components in the mixture, by creating a granule which contains all of the components in their required proportions, improves flow characteristics of powders (because small particles do not flow well), and improves compaction properties for tablet formation.
Hot melt extrusion
Hot melt extrusion is utilized in pharmaceutical solid oral dose processing to enable delivery of drugs with poor solubility and bioavailability. Hot melt extrusion has been shown to molecularly disperse poorly soluble drugs in a polymer carrier increasing dissolution rates and bioavailability. The process involves the application of heat, pressure and agitation to mix materials together and 'extrude' them through a die. Twin-screw high shear extruders blend materials and simultaneously break up particles. The resulting particles can be blended and compressed into tablets or filled into capsules.
Documentation
The documentation of activities by pharmaceutical manufacturers is a license-to-operate endeavor, supporting both the quality of the product produced and satisfaction of regulators who oversee manufacturing operations and determine whether a manufacturing process may continue or must be terminated and remediated.
Site Master File (SMF)
A Site Master File is a document in the pharmaceutical industry which provides information about the production and control of manufacturing operations. The document is created by a manufacturer. The Site Master file contains specific and factual GMP information about the production and control of pharmaceutical manufacturing operations carried out at the named site and any closely integrated operations at adjacent and nearby buildings. If only part of a pharmaceutical operation is carried out on the site, the site master file needs to describe only those operations, e.g., analysis, packaging.
See also
Chemical engineering
Chemical reactor
Medical science liaison
Pharmaceutical formulation
Pharmaceutical packaging
Site Master File
3D drug printing
References
Drug manufacturing | 0.772644 | 0.987651 | 0.763102 |
Metabolic network modelling | Metabolic network modelling, also known as metabolic network reconstruction or metabolic pathway analysis, allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. A reconstruction breaks down metabolic pathways (such as glycolysis and the citric acid cycle) into their respective reactions and enzymes, and analyzes them within the perspective of the entire network. In simplified terms, a reconstruction collects all of the relevant metabolic information of an organism and compiles it in a mathematical model. Validation and analysis of reconstructions can allow identification of key features of metabolism such as growth yield, resource distribution, network robustness, and gene essentiality. This knowledge can then be applied to create novel biotechnology.
In general, the process to build a reconstruction is as follows:
Draft a reconstruction
Refine the model
Convert model into a mathematical/computational representation
Evaluate and debug model through experimentation
The related method of flux balance analysis seeks to mathematically simulate metabolism in genome-scale reconstructions of metabolic networks.
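As an illustration of the flux balance analysis formulation, the sketch below maximizes an objective flux subject to the steady-state constraint S·v = 0 and simple flux bounds, using an invented three-reaction network rather than a real reconstruction.

```python
# Tiny flux balance analysis (FBA) sketch on an invented network:
#   R1:  -> A      (uptake)
#   R2: A -> B     (conversion)
#   R3: B ->       (export; taken here as the objective flux)
# Maximize the objective subject to steady state S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1-R3).
S = np.array([
    [1, -1,  0],   # A
    [0,  1, -1],   # B
])
b_eq = np.zeros(2)                      # steady-state condition S v = 0
bounds = [(0, 10), (0, 10), (0, 10)]    # lower/upper flux bounds per reaction
c = [0, 0, -1]                          # linprog minimizes, so negate objective

result = linprog(c, A_eq=S, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal fluxes:", result.x)              # expected: [10, 10, 10]
print("objective (export flux):", -result.fun)  # expected: 10
```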
Genome-scale metabolic reconstruction
A metabolic reconstruction provides a highly mathematical, structured platform on which to understand the systems biology of metabolic pathways within an organism. The integration of biochemical metabolic pathways with rapidly available, annotated genome sequences has developed what are called genome-scale metabolic models. Simply put, these models correlate metabolic genes with metabolic pathways. In general, the more information about physiology, biochemistry and genetics is available for the target organism, the better the predictive capacity of the reconstructed models. Mechanically speaking, the process of reconstructing prokaryotic and eukaryotic metabolic networks is essentially the same. Having said this, eukaryote reconstructions are typically more challenging because of the size of genomes, coverage of knowledge, and the multitude of cellular compartments. The first genome-scale metabolic model was generated in 1995 for Haemophilus influenzae. The first reconstruction of a multicellular organism, C. elegans, was generated in 1998. Since then, many reconstructions have been formed. For a list of reconstructions that have been converted into a model and experimentally validated, see http://sbrg.ucsd.edu/InSilicoOrganisms/OtherOrganisms.
Drafting a reconstruction
Resources
Because the timescale for the development of reconstructions is so recent, most reconstructions have been built manually. However, now, there are quite a few resources that allow for the semi-automatic assembly of these reconstructions that are utilized due to the time and effort necessary for a reconstruction. An initial fast reconstruction can be developed automatically using resources like PathoLogic or ERGO in combination with encyclopedias like MetaCyc, and then manually updated by using resources like PathwayTools. These semi-automatic methods allow for a fast draft to be created while allowing the fine tune adjustments required once new experimental data is found. It is only in this manner that the field of metabolic reconstructions will keep up with the ever-increasing numbers of annotated genomes.
Databases
Kyoto Encyclopedia of Genes and Genomes (KEGG): a bioinformatics database containing information on genes, proteins, reactions, and pathways. The ‘KEGG Organisms’ section, which is divided into eukaryotes and prokaryotes, encompasses many organisms for which gene and DNA information can be searched by typing in the enzyme of choice.
BioCyc, EcoCyc, and MetaCyc: BioCyc is a collection of 3,000 pathway/genome databases (as of Oct 2013), with each database dedicated to one organism. For example, EcoCyc is a highly detailed bioinformatics database on the genome and metabolic reconstruction of Escherichia coli, including thorough descriptions of E. coli signaling pathways and regulatory network. The EcoCyc database can serve as a paradigm and model for any reconstruction. Additionally, MetaCyc, an encyclopedia of experimentally defined metabolic pathways and enzymes, contains 2,100 metabolic pathways and 11,400 metabolic reactions (Oct 2013).
ENZYME: An enzyme nomenclature database (part of the ExPASy proteomics server of the Swiss Institute of Bioinformatics). After searching for a particular enzyme on the database, this resource gives the reaction that is catalyzed. ENZYME has direct links to other gene/enzyme/literature databases such as KEGG, BRENDA, and PUBMED.
BRENDA: A comprehensive enzyme database that allows for an enzyme to be searched by name, EC number, or organism.
BiGG: A knowledge base of biochemically, genetically, and genomically structured genome-scale metabolic network reconstructions.
metaTIGER: A collection of metabolic profiles and phylogenomic information on a taxonomically diverse range of eukaryotes, which provides novel facilities for viewing and comparing the metabolic profiles between organisms.
Tools for metabolic modeling
Pathway Tools: A bioinformatics software package that assists in the construction of pathway/genome databases such as EcoCyc. Developed by Peter Karp and associates at the SRI International Bioinformatics Research Group, Pathway Tools has several components. Its PathoLogic module takes an annotated genome for an organism and infers probable metabolic reactions and pathways to produce a new pathway/genome database. Its MetaFlux component can generate a quantitative metabolic model from that pathway/genome database using flux-balance analysis. Its Navigator component provides extensive query and visualization tools, such as visualization of metabolites, pathways, and the complete metabolic network.
ERGO: A subscription-based service developed by Integrated Genomics. It integrates data from every level including genomic, biochemical data, literature, and high-throughput analysis into a comprehensive user friendly network of metabolic and nonmetabolic pathways.
KEGGtranslator: an easy-to-use stand-alone application that can visualize and convert KEGG files (KGML formatted XML-files) into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g., MIRIAM annotations) beyond the scope of the KGML document, and amends missing components to fragmentary reactions within the pathway to allow simulations on those. KEGGtranslator converts these files to SBML, BioPAX, SIF, SBGN, SBML with qualitative modeling extension, GML, GraphML, JPG, GIF, LaTeX, etc.
ModelSEED: An online resource for the analysis, comparison, reconstruction, and curation of genome-scale metabolic models. Users can submit genome sequences to the RAST annotation system, and the resulting annotation can be automatically piped into the ModelSEED to produce a draft metabolic model. The ModelSEED automatically constructs a network of metabolic reactions, gene-protein-reaction associations for each reaction, and a biomass composition reaction for each genome to produce a model of microbial metabolism that can be simulated using Flux Balance Analysis.
MetaMerge: algorithm for semi-automatically reconciling a pair of existing metabolic network reconstructions into a single metabolic network model.
CoReCo: algorithm for automatic reconstruction of metabolic models of related species. The first version of the software used KEGG as a reaction database to link with the EC number predictions from CoReCo. Its automatic gap filling, using atom maps of all the reactions, produces functional models ready for simulation.
Tools for literature
PUBMED: This is an online library developed by the National Center for Biotechnology Information, which contains a massive collection of medical journals. Using the link provided by ENZYME, the search can be directed towards the organism of interest, thus recovering literature on the enzyme and its use inside of the organism.
Methodology to draft a reconstruction
A reconstruction is built by compiling data from the resources above. Database tools such as KEGG and BioCyc can be used in conjunction with each other to find all the metabolic genes in the organism of interest. These genes will be compared to closely related organisms that have already developed reconstructions to find homologous genes and reactions. These homologous genes and reactions are carried over from the known reconstructions to form the draft reconstruction of the organism of interest. Tools such as ERGO, Pathway Tools and Model SEED can compile data into pathways to form a network of metabolic and non-metabolic pathways. These networks are then verified and refined before being made into a mathematical simulation.
The predictive aspect of a metabolic reconstruction hinges on the ability to predict the biochemical reaction catalyzed by a protein using that protein's amino acid sequence as an input, and to infer the structure of a metabolic network based on the predicted set of reactions. A network of enzymes and metabolites is drafted to relate sequences and function. When an uncharacterized protein is found in the genome, its amino acid sequence is first compared to those of previously characterized proteins to search for homology. When a homologous protein is found, the proteins are considered to have a common ancestor and their functions are inferred as being similar. However, the quality of a reconstruction model is dependent on its ability to accurately infer phenotype directly from sequence, so this rough estimation of protein function will not be sufficient. A number of algorithms and bioinformatics resources have been developed for refinement of sequence homology-based assignments of protein functions:
InParanoid: Identifies eukaryotic orthologs by looking only at in-paralogs.
CDD: Resource for the annotation of functional units in proteins. Its collection of domain models utilizes 3D structure to provide insights into sequence/structure/function relationships.
InterPro: Provides functional analysis of proteins by classifying them into families and predicting domains and important sites.
STRING: Database of known and predicted protein interactions.
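The homology-based transfer of function described above can be illustrated with a minimal sketch. The sequences, annotations, and identity threshold below are hypothetical placeholders; real pipelines rely on alignment tools such as BLAST and on the curated resources listed above, rather than on raw percent identity over pre-aligned sequences.

```python
# Minimal sketch of homology-based function assignment (hypothetical data).
# Real reconstructions use BLAST-style alignments and curated resources such
# as InterPro or CDD; here, pre-aligned sequences of equal length are assumed.

def percent_identity(a: str, b: str) -> float:
    """Fraction of identical positions between two pre-aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical annotated proteins (sequence -> known enzymatic function).
annotated = {
    "MKTAYIAKQR": "phosphofructokinase (EC 2.7.1.11)",
    "MKTAHIVKQW": "hexokinase (EC 2.7.1.1)",
}

query = "MKTAYIAKQW"  # uncharacterized protein from the target genome

# Transfer the function of the best-scoring homolog if identity is high enough.
best_seq, best_id = max(
    ((seq, percent_identity(query, seq)) for seq in annotated),
    key=lambda pair: pair[1],
)
if best_id >= 0.8:  # arbitrary threshold for this illustration
    print(f"Inferred function: {annotated[best_seq]} ({best_id:.0%} identity)")
else:
    print("No confident homology-based assignment; manual curation needed")
```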
Once proteins have been established, more information about the enzyme structure, reactions catalyzed, substrates and products, mechanisms, and more can be acquired from databases such as KEGG, MetaCyc and NC-IUBMB. Accurate metabolic reconstructions require additional information about the reversibility and preferred physiological direction of an enzyme-catalyzed reaction, which can come from databases such as BRENDA or MetaCyc.
Model refinement
An initial metabolic reconstruction of a genome is typically far from perfect due to the high variability and diversity of microorganisms. Often, metabolic pathway databases such as KEGG and MetaCyc will have "holes", meaning that there is a conversion from a substrate to a product (i.e., an enzymatic activity) for which there is no known protein in the genome that encodes the enzyme that facilitates the catalysis. What can also happen in semi-automatically drafted reconstructions is that some pathways are falsely predicted and don't actually occur in the predicted manner. Because of this, a systematic verification is made in order to make sure no inconsistencies are present and that all the entries listed are correct and accurate. Furthermore, previous literature can be researched in order to support any information obtained from one of the many metabolic reaction and genome databases. This provides an added level of assurance for the reconstruction that the enzyme and the reaction it catalyzes do actually occur in the organism.
Enzyme promiscuity and spontaneous chemical reactions can damage metabolites. This metabolite damage, and its repair or pre-emption, create energy costs that need to be incorporated into models. It is likely that many genes of unknown function encode proteins that repair or pre-empt metabolite damage, but most genome-scale metabolic reconstructions only include a fraction of all genes.
Any new reaction not present in the databases needs to be added to the reconstruction. This is an iterative process that cycles between the experimental phase and the coding phase. As new information is found about the target organism, the model will be adjusted to predict the metabolic and phenotypical output of the cell. The presence or absence of certain reactions of the metabolism will affect the amount of reactants/products that are present for other reactions within the particular pathway, because the products of one reaction become the reactants for another: in the presence of different enzymes or catalysts they can combine with other compounds to form new compounds.
Francke et al. provide an excellent example as to why the verification step of the project needs to be performed in significant detail. During a metabolic network reconstruction of Lactobacillus plantarum, the model showed that succinyl-CoA was one of the reactants for a reaction that was a part of the biosynthesis of methionine. However, an understanding of the physiology of the organism would have revealed that due to an incomplete tricarboxylic acid pathway, Lactobacillus plantarum does not actually produce succinyl-CoA, and the correct reactant for that part of the reaction was acetyl-CoA.
Therefore, systematic verification of the initial reconstruction will bring to light several inconsistencies that can adversely affect the final interpretation of the reconstruction, which is to accurately comprehend the molecular mechanisms of the organism. Furthermore, the simulation step also ensures that all the reactions present in the reconstruction are properly balanced. To sum up, a fully accurate reconstruction can lead to greater insight into the functioning of the organism of interest.
Metabolic stoichiometric analysis
A metabolic network can be broken down into a stoichiometric matrix in which the rows represent the compounds of the reactions, while the columns of the matrix correspond to the reactions themselves. Stoichiometry quantifies the ratios in which substrates and products participate in a chemical reaction. In order to deduce what the metabolic network suggests, recent research has centered on a few approaches, such as extreme pathways, elementary mode analysis, flux balance analysis, and a number of other constraint-based modeling methods.
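As a minimal illustration of this representation, consider a hypothetical three-reaction toy network; the matrix below is a placeholder, but it shows the convention of one row per metabolite and one column per reaction, together with the steady-state condition S·v = 0 on which constraint-based methods build.

```python
import numpy as np

# Hypothetical toy network:  R1: -> A,   R2: A -> B,   R3: B ->
# Rows = metabolites (A, B); columns = reactions (R1, R2, R3).
# Entries are stoichiometric coefficients: negative for consumption,
# positive for production.
S = np.array([
    [1, -1,  0],   # A
    [0,  1, -1],   # B
])

# At steady state a flux vector v must satisfy S @ v = 0, i.e. every
# internal metabolite is produced and consumed at equal rates.
v = np.array([2.0, 2.0, 2.0])
print(S @ v)                   # [0. 0.]
print(np.allclose(S @ v, 0))   # True: v lies in the null space of S
```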
Extreme pathways
Price, Reed, and Papin, from the Palsson lab, use a method of singular value decomposition (SVD) of extreme pathways in order to understand regulation of human red blood cell metabolism. Extreme pathways are convex basis vectors that consist of steady-state functions of a metabolic network. For any particular metabolic network, there is always a unique set of extreme pathways available. Furthermore, Price, Reed, and Papin define a constraint-based approach, in which constraints such as mass balance and maximum reaction rates delimit a 'solution space' within which all feasible flux distributions fall. A kinetic model approach can then be used to determine a single solution that falls within the extreme pathway solution space. The study therefore combines constraint-based and kinetic approaches to understand human red blood cell metabolism. In conclusion, using extreme pathways, the regulatory mechanisms of a metabolic network can be studied in further detail.
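The published analysis is considerably more involved; the sketch below only illustrates the SVD step itself, applied with NumPy to a hypothetical matrix whose columns stand in for extreme pathway flux vectors.

```python
import numpy as np

# Hypothetical matrix P whose columns represent extreme pathway flux vectors
# (rows = reactions). Real extreme pathways are computed from the
# stoichiometric matrix with specialised algorithms; these values are
# placeholders for illustration only.
P = np.array([
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 1.0],
])

# Singular value decomposition: P = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(P, full_matrices=False)

# The relative size of the singular values indicates how much of the pathway
# structure is captured by the first few decomposition modes.
fractions = s**2 / np.sum(s**2)
for i, f in enumerate(fractions, start=1):
    print(f"mode {i}: {f:.1%}")
```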
Elementary mode analysis
Elementary mode analysis closely matches the approach used by extreme pathways. Similar to extreme pathways, there is always a unique set of elementary modes available for a particular metabolic network. These are the smallest sub-networks that allow a metabolic reconstruction network to function in steady state. According to Stelling (2002), elementary modes can be used to understand cellular objectives for the overall metabolic network. Furthermore, elementary mode analysis takes into account stoichiometry and thermodynamics when evaluating whether a particular metabolic route or network is feasible and likely for a set of proteins/enzymes.
Minimal metabolic behaviors (MMBs)
In 2009, Larhlimi and Bockmayr presented a new approach called "minimal metabolic behaviors" for the analysis of metabolic networks. Like elementary modes or extreme pathways, these are uniquely determined by the network, and yield a complete description of the flux cone. However, the new description is much more compact. In contrast with elementary modes and extreme pathways, which use an inner description based on generating vectors of the flux cone, MMBs use an outer description of the flux cone. This approach is based on sets of non-negativity constraints. These can be identified with irreversible reactions, and thus have a direct biochemical interpretation. One can characterize a metabolic network by MMBs and the reversible metabolic space.
Flux balance analysis
A different technique to simulate the metabolic network is to perform flux balance analysis. This method uses linear programming, but in contrast to elementary mode analysis and extreme pathways, only a single solution results in the end. Linear programming is used to find the maximum of a chosen objective function, so flux balance analysis yields a single solution to the optimization problem. In a flux balance analysis approach, exchange fluxes are assigned only to those metabolites that enter or leave the particular network; metabolites that are consumed within the network are not assigned any exchange flux value. Also, the exchange fluxes, along with the enzyme-catalyzed reactions, can have constraints ranging from a negative to a positive value (e.g., -10 to 10).
Furthermore, this approach can determine whether the reaction stoichiometry is in line with predictions by providing fluxes for the balanced reactions. Flux balance analysis can also highlight the most effective and efficient pathway through the network for achieving a particular objective function. In addition, gene knockout studies can be performed using flux balance analysis: the reaction catalyzed by the enzyme corresponding to the deleted gene is given a flux constraint of 0, which effectively removes it from the analysis.
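A minimal flux balance analysis of a toy network like the one sketched earlier can be posed as a linear program. The example below uses scipy.optimize.linprog purely for illustration; the network, bounds, objective, and knockout are hypothetical, and dedicated constraint-based modeling packages are normally used in practice.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1: -> A (uptake), R2: A -> B, R3: B -> (secretion, objective)
S = np.array([
    [1, -1,  0],   # A
    [0,  1, -1],   # B
])

c = np.array([0, 0, -1])              # linprog minimises, so maximise v3 as -v3
bounds = [(0, 10), (0, 10), (0, 10)]  # hypothetical flux constraints

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)       # expected: [10. 10. 10.]

# Simulated gene knockout: the flux of R2 (the reaction catalysed by the
# deleted gene's enzyme) is constrained to zero and the problem re-solved.
bounds_ko = [(0, 10), (0, 0), (0, 10)]
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds_ko, method="highs")
print("objective after knockout:", res_ko.x[2])   # 0.0: production is lost
```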
Dynamic simulation and parameter estimation
In order to perform a dynamic simulation with such a network it is necessary to construct an ordinary differential equation system that describes the rates of change in each metabolite's concentration or amount. To this end, a rate law, i.e., a kinetic equation that determines the rate of reaction based on the concentrations of all reactants, is required for each reaction. Software packages that include numerical integrators, such as COPASI or SBMLsimulator, are then able to simulate the system dynamics given an initial condition. Often these rate laws contain kinetic parameters with uncertain values. In many cases it is desired to estimate these parameter values with respect to given time-series data of metabolite concentrations. The system is then supposed to reproduce the given data. For this purpose, the distance between the given data set and the result of the simulation, i.e., the numerically or, in a few cases, analytically obtained solution of the differential equation system, is computed. The values of the parameters are then estimated to minimize this distance. One step further, it may be desired to estimate the mathematical structure of the differential equation system, because the real rate laws are not known for the reactions within the system under study. To this end, the program SBMLsqueezer allows automatic creation of appropriate rate laws for all reactions within the network.
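The following sketch is not the workflow of any specific tool named above; it shows how such a simulation can be set up with SciPy for a hypothetical two-step pathway with mass-action rate laws, and how parameter estimation reduces to minimizing a cost function that measures the distance between simulated and measured concentrations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-step pathway A -> B -> C with mass-action rate laws:
#   dA/dt = -k1*A,   dB/dt = k1*A - k2*B,   dC/dt = k2*B
def rhs(t, y, k1, k2):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

k1, k2 = 0.8, 0.3           # placeholder kinetic parameters
y0 = [1.0, 0.0, 0.0]        # initial concentrations of A, B, C
t_eval = np.linspace(0, 20, 50)

sol = solve_ivp(rhs, (0, 20), y0, args=(k1, k2), t_eval=t_eval)
print(sol.y[:, -1])         # concentrations at the final time point

# Parameter estimation minimises the distance between simulation and data,
# e.g. scipy.optimize.minimize(cost, x0=[1.0, 1.0], args=(t_data, y_data)).
def cost(params, t_data, y_data):
    s = solve_ivp(rhs, (t_data[0], t_data[-1]), y0,
                  args=tuple(params), t_eval=t_data)
    return float(np.sum((s.y - y_data) ** 2))
```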
Synthetic accessibility
Synthetic accessibility is a simple approach to network simulation whose goal is to predict which metabolic gene knockouts are lethal. The synthetic accessibility approach uses the topology of the metabolic network to calculate the sum of the minimum number of steps needed to traverse the metabolic network graph from the inputs, those metabolites available to the organism from the environment, to the outputs, metabolites needed by the organism to survive. To simulate a gene knockout, the reactions enabled by the gene are removed from the network and the synthetic accessibility metric is recalculated. An increase in the total number of steps is predicted to cause lethality. Wunderlich and Mirny showed that this simple, parameter-free approach predicted knockout lethality in E. coli and S. cerevisiae as well as elementary mode analysis and flux balance analysis did, in a variety of media.
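A rough sketch of this idea on a hypothetical metabolite-reaction network is shown below; it is not the authors' implementation, and the network, inputs, and outputs are placeholders.

```python
from collections import deque

# Hypothetical network: each reaction consumes and produces metabolites.
reactions = {
    "R1": ({"glc"}, {"g6p"}),    # (substrates, products)
    "R2": ({"g6p"}, {"pyr"}),
    "R3": ({"glc"}, {"pyr"}),    # alternative, shorter route
}
inputs, outputs = {"glc"}, {"pyr"}

def synthetic_accessibility(reactions, inputs, outputs):
    """Sum of the minimum number of reaction steps needed to reach each output."""
    dist = {m: 0 for m in inputs}
    frontier = deque(inputs)
    while frontier:
        m = frontier.popleft()
        for subs, prods in reactions.values():
            # a reaction can fire once all of its substrates are reachable
            if m in subs and subs <= dist.keys():
                d = 1 + max(dist[s] for s in subs)
                for p in prods:
                    if p not in dist or d < dist[p]:
                        dist[p] = d
                        frontier.append(p)
    if not all(o in dist for o in outputs):
        return float("inf")               # an essential output is unreachable
    return sum(dist[o] for o in outputs)

baseline = synthetic_accessibility(reactions, inputs, outputs)
# Knockout of the gene enabling R3 removes that reaction from the network;
# the metric increases, which the method interprets as a deleterious effect.
knockout = {name: r for name, r in reactions.items() if name != "R3"}
print(baseline, synthetic_accessibility(knockout, inputs, outputs))   # 1 2
```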
Applications of a reconstruction
Several inconsistencies exist between gene, enzyme, reaction databases, and published literature sources regarding the metabolic information of an organism. A reconstruction is a systematic verification and compilation of data from various sources that takes into account all of the discrepancies.
The combination of relevant metabolic and genomic information of an organism.
Metabolic comparisons can be performed between various organisms of the same species as well as between different organisms.
Analysis of synthetic lethality
Predict adaptive evolution outcomes
Use in metabolic engineering for high value outputs
Reconstructions and their corresponding models allow the formulation of hypotheses about the presence of certain enzymatic activities and the production of metabolites that can be experimentally tested, complementing the primarily discovery-based approach of traditional microbial biochemistry with hypothesis-driven research. The results of these experiments can uncover novel pathways and metabolic activities and resolve discrepancies in previous experimental data. Information about the chemical reactions of metabolism and the genetic background of various metabolic properties (sequence to structure to function) can be utilized by genetic engineers to modify organisms to produce high value outputs, whether those products are medically relevant like pharmaceuticals; high value chemical intermediates such as terpenoids and isoprenoids; or biotechnological outputs like biofuels and polyhydroxybutyrates, also known as bioplastics.
Metabolic network reconstructions and models are used to understand how an organism or parasite functions inside of the host cell. For example, if the parasite serves to compromise the immune system by lysing macrophages, then the goal of metabolic reconstruction/simulation would be to determine the metabolites that are essential to the organism's proliferation inside of macrophages. If the proliferation cycle is inhibited, then the parasite would not continue to evade the host's immune system. A reconstruction model serves as a first step to deciphering the complicated mechanisms surrounding disease. These models can also look at the minimal genes necessary for a cell to maintain virulence. The next step would be to use the predictions and postulates generated from a reconstruction model and apply them to discover novel biological functions such as drug-engineering and drug delivery techniques.
See also
Computational systems biology
Computer simulation
Flux balance analysis
Fluxomics
Metabolic control analysis
Metabolic flux analysis
Metabolic network
Metabolic pathway
Biochemical systems equation
Metagenomics
References
Further reading
Overbeek R, Larsen N, Walunas T, D'Souza M, Pusch G, Selkov Jr, Liolios K, Joukov V, Kaznadzey D, Anderson I, Bhattacharyya A, Burd H, Gardner W, Hanke P, Kapatral V, Mikhailova N, Vasieva O, Osterman A, Vonstein V, Fonstein M, Ivanova N, Kyrpides N. (2003) The ERGO genome analysis and discovery system. Nucleic Acids Res. 31(1):164-71
Whitaker, J.W., Letunic, I., McConkey, G.A. and Westhead, D.R. metaTIGER: a metabolic evolution resource. Nucleic Acids Res. 2009 37: D531-8.
External links
ERGO
GeneDB
KEGG
PathCase Case Western Reserve University
BRENDA
BioCyc and Cyclone - provides an open source Java API to the pathway tool BioCyc to extract Metabolic graphs.
EcoCyc
MetaCyc
SEED
ModelSEED
ENZYME
SBRI Bioinformatics Tools and Software
TIGR
Pathway Tools
metaTIGER
Stanford Genomic Resources
Pathway Hunter Tool
IMG The Integrated Microbial Genomes system, for genome analysis by the DOE-JGI.
Systems Analysis, Modelling and Prediction Group at the University of Oxford, Biochemical reaction pathway inference techniques.
efmtool provided by Marco Terzer
SBMLsqueezer
Cellnet analyzer from Klamt and von Kamp
Copasi
gEFM A graph-based tool for EFM computation
Biological engineering
Biomedical engineering
Systems biology
Bioinformatics
Genomics
Metabolism | 0.793959 | 0.961133 | 0.7631 |
Evidence of common descent | Evidence of common descent of living organisms has been discovered by scientists researching in a variety of disciplines over many decades, demonstrating that all life on Earth comes from a single ancestor. This forms an important part of the evidence on which evolutionary theory rests, demonstrates that evolution does occur, and illustrates the processes that created Earth's biodiversity. It supports the modern evolutionary synthesis—the current scientific theory that explains how and why life changes over time. Evolutionary biologists document evidence of common descent, all the way back to the last universal common ancestor, by developing testable predictions, testing hypotheses, and constructing theories that illustrate and describe its causes.
Comparison of the DNA sequences of organisms has revealed that organisms that are phylogenetically close have a higher degree of DNA sequence similarity than organisms that are phylogenetically distant. Genetic fragments such as pseudogenes, regions of DNA that are orthologous to a gene in a related organism but are no longer active and appear to be undergoing a steady process of degeneration from cumulative mutations, support common descent alongside the universal biochemical organization and molecular variance patterns found in all organisms. Additional genetic information conclusively supports the relatedness of life and has allowed scientists (since the discovery of DNA) to develop phylogenetic trees: constructions of organisms' evolutionary relatedness. It has also led to the development of molecular clock techniques to date taxon divergence times and to calibrate these with the fossil record.
Fossils are important for estimating when various lineages developed in geologic time. As fossilization is an uncommon occurrence, usually requiring hard body parts and death near a site where sediments are being deposited, the fossil record only provides sparse and intermittent information about the evolution of life. Evidence of organisms prior to the development of hard body parts such as shells, bones and teeth is especially scarce, but exists in the form of ancient microfossils, as well as impressions of various soft-bodied organisms. The comparative study of the anatomy of groups of animals shows structural features that are fundamentally similar (homologous), demonstrating phylogenetic and ancestral relationships with other organisms, most especially when compared with fossils of ancient extinct organisms. Vestigial structures and comparisons of embryonic development are major contributors to anatomical resemblance in concordance with common descent. Since metabolic processes do not leave fossils, research into the evolution of the basic cellular processes is done largely by comparison of existing organisms' physiology and biochemistry. Many lineages diverged at different stages of development, so it is possible to determine when certain metabolic processes appeared by comparing the traits of the descendants of a common ancestor.
Evidence from animal coloration was gathered by some of Darwin's contemporaries; camouflage, mimicry, and warning coloration are all readily explained by natural selection. Special cases like the seasonal changes in the plumage of the ptarmigan, camouflaging it against snow in winter and against brown moorland in summer provide compelling evidence that selection is at work. Further evidence comes from the field of biogeography because evolution with common descent provides the best and most thorough explanation for a variety of facts concerning the geographical distribution of plants and animals across the world. This is especially obvious in the field of insular biogeography. Combined with the well-established geological theory of plate tectonics, common descent provides a way to combine facts about the current distribution of species with evidence from the fossil record to provide a logically consistent explanation of how the distribution of living organisms has changed over time.
The development and spread of antibiotic resistant bacteria provides evidence that evolution due to natural selection is an ongoing process in the natural world. Natural selection is ubiquitous in all research pertaining to evolution; all of the following examples in each section of the article document the process. Alongside this are observed instances of the separation of populations of species into sets of new species (speciation). Speciation has been observed in the lab and in nature, and multiple modes of speciation have been described and documented with examples. Furthermore, evidence of common descent extends from direct laboratory experimentation with the selective breeding of organisms—historically and currently—and other controlled experiments involving many of the topics in the article. This article summarizes the varying disciplines that provide the evidence for evolution and the common descent of all life on Earth, accompanied by numerous and specialized examples, indicating a compelling consilience of evidence.
Evidence from comparative physiology and biochemistry
Genetics
Some of the strongest evidence for common descent comes from gene sequences. Comparative sequence analysis examines the relationship between the DNA sequences of different species, producing several lines of evidence that confirm Darwin's original hypothesis of common descent. If the hypothesis of common descent is true, then species that share a common ancestor inherited that ancestor's DNA sequence, as well as mutations unique to that ancestor. More closely related species have a greater fraction of identical sequence and shared substitutions compared to more distantly related species.
The simplest and most powerful evidence is provided by phylogenetic reconstruction. Such reconstructions, especially when done using slowly evolving protein sequences, are often quite robust and can be used to reconstruct a great deal of the evolutionary history of modern organisms (and even in some instances of the evolutionary history of extinct organisms, such as the recovered gene sequences of mammoths or Neanderthals). These reconstructed phylogenies recapitulate the relationships established through morphological and biochemical studies. The most detailed reconstructions have been performed on the basis of the mitochondrial genomes shared by all eukaryotic organisms, which are short and easy to sequence; the broadest reconstructions have been performed either using the sequences of a few very ancient proteins or by using ribosomal RNA sequence.
Phylogenetic relationships extend to a wide variety of nonfunctional sequence elements, including repeats, transposons, pseudogenes, and mutations in protein-coding sequences that do not change the amino-acid sequence. While a minority of these elements might later be found to harbor function, in aggregate they demonstrate that identity must be the product of common descent rather than common function.
Universal biochemical organisation and molecular variance patterns
All known extant (surviving) organisms are based on the same biochemical processes: genetic information encoded as nucleic acid (DNA, or RNA for many viruses), transcribed into RNA, then translated into proteins (that is, polymers of amino acids) by highly conserved ribosomes. Perhaps most tellingly, the Genetic Code (the "translation table" between DNA and amino acids) is the same for almost every organism, meaning that a piece of DNA in a bacterium codes for the same amino acid as in a human cell. ATP is used as energy currency by all extant life. A deeper understanding of developmental biology shows that common morphology is, in fact, the product of shared genetic elements. For example, although camera-like eyes are believed to have evolved independently on many separate occasions, they share a common set of light-sensing proteins (opsins), suggesting a common point of origin for all sighted creatures. Another example is the familiar vertebrate body plan, whose structure is controlled by the homeobox (Hox) family of genes.
DNA sequencing
Comparison of DNA sequences allows organisms to be grouped by sequence similarity, and the resulting phylogenetic trees are typically congruent with traditional taxonomy, and are often used to strengthen or correct taxonomic classifications. Sequence comparison is considered a measure robust enough to correct erroneous assumptions in the phylogenetic tree in instances where other evidence is scarce. For example, neutral human DNA sequences are approximately 1.2% divergent (based on substitutions) from those of their nearest genetic relative, the chimpanzee, 1.6% from gorillas, and 6.6% from baboons. Genetic sequence evidence thus allows inference and quantification of genetic relatedness between humans and other apes. The sequence of the 16S ribosomal RNA gene, a vital gene encoding a part of the ribosome, was used to find the broad phylogenetic relationships between all extant life. The analysis by Carl Woese resulted in the three-domain system, arguing for two major splits in the early evolution of life. The first split led to modern Bacteria and the subsequent split led to modern Archaea and Eukaryotes.
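Such divergence figures are, in essence, the fraction of positions that differ between aligned neutral sequences. The sketch below illustrates the calculation on short made-up sequences whose pattern of pairwise distances mirrors the relatedness described above; the numbers themselves are placeholders, not measured values.

```python
import itertools

# Hypothetical aligned neutral DNA fragments (placeholders, not real data).
seqs = {
    "human":   "ACGTACGTACGTACGTACGT",
    "chimp":   "ACGTACGTACGAACGTACGT",
    "gorilla": "ACGTACGAACGAACGTACGT",
    "baboon":  "ACGAACGAACGAACGTTCGA",
}

def divergence(a: str, b: str) -> float:
    """Fraction of aligned positions at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# More closely related taxa show smaller pairwise divergence.
for sp1, sp2 in itertools.combinations(seqs, 2):
    print(f"{sp1:7s} vs {sp2:7s}: {divergence(seqs[sp1], seqs[sp2]):.0%}")
```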
Some DNA sequences are shared by very different organisms. It has been predicted by the theory of evolution that the differences in such DNA sequences between two organisms should roughly reflect both the biological difference between them according to their anatomy and the time that has passed since the two organisms separated in the course of evolution, as seen in fossil evidence. The rate of accumulating such changes should be low for some sequences, namely those that code for critical RNA or proteins, and high for others that code for less critical RNA or proteins; but for every specific sequence, the rate of change should be roughly constant over time. These results have been experimentally confirmed. Two examples are DNA sequences coding for rRNA, which is highly conserved, and DNA sequences coding for fibrinopeptides, amino acid chains discarded during the formation of fibrin, which are highly non-conserved.
Proteins
Proteomic evidence also supports the universal ancestry of life. Vital proteins, such as the ribosome, DNA polymerase, and RNA polymerase, are found in everything from the most primitive bacteria to the most complex mammals. The core part of the protein is conserved across all lineages of life, serving similar functions. Higher organisms have evolved additional protein subunits, largely affecting the regulation and protein-protein interaction of the core. Other overarching similarities between all lineages of extant organisms, such as DNA, RNA, amino acids, and the lipid bilayer, give support to the theory of common descent. Phylogenetic analyses of protein sequences from various organisms produce similar trees of relationship between all organisms. The chirality of DNA, RNA, and amino acids is conserved across all known life. As there is no functional advantage to right- or left-handed molecular chirality, the simplest hypothesis is that the choice was made randomly by early organisms and passed on to all extant life through common descent. Further evidence for reconstructing ancestral lineages comes from junk DNA such as pseudogenes, "dead" genes that steadily accumulate mutations.
Pseudogenes
Pseudogenes are segments of noncoding DNA, derived from genes, that are no longer transcribed into RNA to direct protein synthesis. Some noncoding DNA has known functions, but much of it has no known function and is called "junk DNA". Pseudogenes are an example of a vestige, since replicating them uses energy, making them a waste in many cases. A pseudogene can be produced when a coding gene accumulates mutations that prevent it from being transcribed, making it non-functional. But since it is not transcribed, it may disappear without affecting fitness, unless it has provided some beneficial function as non-coding DNA. Non-functional pseudogenes may be passed on to later species, thereby labeling the later species as descended from the earlier species.
Other mechanisms
A large body of molecular evidence supports a variety of mechanisms for large evolutionary changes, including: genome and gene duplication, which facilitates rapid evolution by providing substantial quantities of genetic material under weak or no selective constraints; horizontal gene transfer, the process of transferring genetic material to another cell that is not an organism's offspring, allowing for species to acquire beneficial genes from each other; and recombination, capable of reassorting large numbers of different alleles and of establishing reproductive isolation. The endosymbiotic theory explains the origin of mitochondria and plastids (including chloroplasts), which are organelles of eukaryotic cells, as the incorporation of an ancient prokaryotic cell into an ancient eukaryotic cell. Rather than evolving eukaryotic organelles slowly, this theory offers a mechanism for a sudden evolutionary leap by incorporating the genetic material and biochemical composition of a separate species. Evidence supporting this mechanism has been found in the protist Hatena: as a predator it engulfs a green algal cell, which subsequently behaves as an endosymbiont, nourishing Hatena, which in turn loses its feeding apparatus and behaves as an autotroph.
Since metabolic processes do not leave fossils, research into the evolution of the basic cellular processes is done largely by comparison of existing organisms. Many lineages diverged when new metabolic processes appeared, and it is theoretically possible to determine when certain metabolic processes appeared by comparing the traits of the descendants of a common ancestor or by detecting their physical manifestations. As an example, the appearance of oxygen in the Earth's atmosphere is linked to the evolution of photosynthesis.
Specific examples from comparative physiology and biochemistry
Chromosome 2 in humans
Evidence for the evolution of Homo sapiens from a common ancestor with chimpanzees is found in the number of chromosomes in humans as compared to all other members of Hominidae. All members of Hominidae have 24 pairs of chromosomes, except humans, who have only 23 pairs. Human chromosome 2 is a result of an end-to-end fusion of two ancestral chromosomes.
The evidence for this includes:
The correspondence of chromosome 2 to two ape chromosomes. The closest human relative, the chimpanzee, has near-identical DNA sequences to human chromosome 2, but they are found in two separate chromosomes. The same is true of the more distant gorilla and orangutan.
The presence of a vestigial centromere. Normally a chromosome has just one centromere, but in chromosome 2 there are remnants of a second centromere.
The presence of vestigial telomeres. These are normally found only at the ends of a chromosome, but in chromosome 2 there are additional telomere sequences in the middle.
Chromosome 2 thus presents strong evidence in favour of the common descent of humans and other apes. According to J. W. Ijdo, "We conclude that the locus cloned in cosmids c8.1 and c29B is the relic of an ancient telomere-telomere fusion and marks the point at which two ancestral ape chromosomes fused to give rise to human chromosome 2."
Cytochrome c and b
A classic example of biochemical evidence for evolution is the variance of the ubiquitous (i.e. all living organisms have it, because it performs very basic life functions) protein cytochrome c in living cells. The variance of cytochrome c of different organisms is measured in the number of differing amino acids, each differing amino acid being a result of a base pair substitution, a mutation. If each differing amino acid is assumed to be the result of one base pair substitution, it can be calculated how long ago the two species diverged by multiplying the number of base pair substitutions by the estimated time it takes for a substituted base pair of the cytochrome c gene to be successfully passed on. For example, if the average time it takes for a base pair of the cytochrome c gene to mutate is N years, and the cytochrome c protein of monkeys differs from that of humans by one amino acid, this leads to the conclusion that the two species diverged N years ago.
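The arithmetic of this molecular-clock estimate is simple; the sketch below uses placeholder values for both the substitution interval and the number of differences.

```python
# Molecular-clock arithmetic as described above (placeholder values only).
years_per_substitution = 20_000_000   # hypothetical: one fixed substitution per N years
amino_acid_differences = 1            # e.g. one differing residue between two species

# Each amino acid difference is assumed to reflect one base-pair substitution.
estimated_divergence = amino_acid_differences * years_per_substitution
print(f"estimated divergence: {estimated_divergence:,} years ago")
```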
The primary structure of cytochrome c consists of a chain of about 100 amino acids. Many higher order organisms possess a chain of 104 amino acids.
The cytochrome c molecule has been extensively studied for the glimpse it gives into evolutionary biology. Chickens and turkeys have identical sequence homology (amino acid for amino acid), as do pigs, cows and sheep. Humans and chimpanzees share the identical molecule, while rhesus monkeys differ by a single amino acid: the 66th amino acid is isoleucine in the former and threonine in the latter.
What makes these homologous similarities particularly suggestive of common ancestry in the case of cytochrome c, in addition to the fact that the phylogenies derived from them match other phylogenies very well, is the high degree of functional redundancy of the cytochrome c molecule. The different existing configurations of amino acids do not significantly affect the functionality of the protein, which indicates that the base pair substitutions are not part of a directed design, but the result of random mutations that are not subject to selection.
In addition, Cytochrome b is commonly used as a region of mitochondrial DNA to determine phylogenetic relationships between organisms due to its sequence variability. It is considered most useful in determining relationships within families and genera. Comparative studies involving cytochrome b have resulted in new classification schemes and have been used to assign newly described species to a genus, as well as deepen the understanding of evolutionary relationships.
Endogenous retroviruses
Endogenous retroviruses (or ERVs) are remnant sequences in the genome left from ancient viral infections in an organism. The retroviruses (or virogenes) are always passed on to the next generation of the organism that received the infection. This leaves the virogene in the genome. Because this event is rare and random, finding identical chromosomal positions of a virogene in two different species suggests common ancestry. Cats (Felidae) present a notable instance of virogene sequences demonstrating common descent. The standard phylogenetic tree for Felidae has smaller cats (Felis chaus, Felis silvestris, Felis nigripes, and Felis catus) diverging from larger cats such as the subfamily Pantherinae and other carnivores. The fact that small cats have an ERV where the larger cats do not suggests that the gene was inserted into the ancestor of the small cats after the larger cats had diverged. Another example of this is with humans and chimps. Humans contain numerous ERVs that comprise a considerable percentage of the genome. Sources vary, but 1% to 8% has been proposed. Humans and chimps share seven different occurrences of virogenes, while all primates share similar retroviruses congruent with phylogeny.
Recent African origin of modern humans
Mathematical models of evolution, pioneered by the likes of Sewall Wright, Ronald Fisher and J. B. S. Haldane and extended via diffusion theory by Motoo Kimura, allow predictions about the genetic structure of evolving populations. Direct examination of the genetic structure of modern populations via DNA sequencing has allowed verification of many of these predictions. For example, the Out of Africa theory of human origins, which states that modern humans developed in Africa and a small sub-population migrated out (undergoing a population bottleneck), implies that modern populations should show the signatures of this migration pattern. Specifically, post-bottleneck populations (Europeans and Asians) should show lower overall genetic diversity and a more uniform distribution of allele frequencies compared to the African population. Both of these predictions are borne out by actual data from a number of studies.
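The qualitative effect of such a bottleneck on diversity can be illustrated with a toy founder-event simulation; the population sizes, allele numbers, and bottleneck severity below are arbitrary, and real tests of the model rely on genome-wide data and the diffusion-theory expectations mentioned above.

```python
import random

random.seed(1)

def heterozygosity(pop):
    """Expected heterozygosity: 1 minus the sum of squared allele frequencies."""
    freqs = [pop.count(a) / len(pop) for a in set(pop)]
    return 1 - sum(f * f for f in freqs)

# Ancestral population with many alleles at one neutral locus (toy numbers).
ancestral = [random.randrange(20) for _ in range(2000)]

# Bottleneck: a small random sample founds the migrant population,
# which then re-expands by resampling with replacement.
founders = random.sample(ancestral, 10)
migrants = [random.choice(founders) for _ in range(2000)]

print(f"ancestral heterozygosity:       {heterozygosity(ancestral):.3f}")
print(f"post-bottleneck heterozygosity: {heterozygosity(migrants):.3f}")
# The migrant population retains fewer alleles and lower diversity, mirroring
# the reduced diversity observed in non-African human populations.
```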
Evidence from comparative anatomy
Comparative study of the anatomy of groups of animals or plants reveals that certain structural features are basically similar. For example, the basic structure of all flowers consists of sepals, petals, stigma, style and ovary; yet the size, colour, number of parts and specific structure are different for each individual species. The neural anatomy of fossilized remains may also be compared using advanced imaging techniques.
Atavisms
Once thought of as a refutation to evolutionary theory, atavisms are "now seen as potent evidence of how much genetic potential is retained...after a particular structure has disappeared from a species". "Atavisms are the reappearance of a lost character typical of remote ancestors and not seen in the parents or recent ancestors..." and are an "[indication] of the developmental plasticity that exists within embryos..." Atavisms occur because genes for previously existing phenotypical features are often preserved in DNA, even though the genes are not expressed in some or most of the organisms possessing them. Numerous examples have documented the occurrence of atavisms alongside experimental research triggering their formation. Due to the complexity and interrelatedness of the factors involved in the development of atavisms, both biologists and medical professionals find it "difficult, if not impossible, to distinguish [them] from malformations."
Some examples of atavisms found in the scientific literature include:
Hind limbs in whales. (see figure 2a)
Reappearance of limbs in limbless vertebrates.
Back pair of flippers on a bottlenose dolphin.
Extra toes of the modern horse.
Human tails (not pseudo-tails) and extra nipples in humans.
Re-evolution of sexuality from parthenogenesis in oribatid mites.
Teeth in chickens.
Dewclaws in dogs.
Reappearance of wings on wingless stick insects and earwigs.
Atavistic muscles in several birds and mammals such as the beagle and the jerboa.
Extra toes in guinea pigs.
Evolutionary developmental biology and embryonic development
Evolutionary developmental biology is the biological field that compares the developmental process of different organisms to determine ancestral relationships between species. The genomes of a wide variety of organisms contain a small fraction of genes that control development; Hox genes are an example of these nearly universal genes, pointing to an origin of common ancestry. Embryological evidence comes from comparing the similarities of different organisms' embryos during development. Remains of ancestral traits often appear and disappear at different stages of the embryological development process.
Some examples include:
Hair growth and loss (lanugo) during human development.
Development and degeneration of a yolk sac.
Terrestrial frogs and salamanders passing through the larval stage within the egg—with features of typically aquatic larvae—but hatch ready for life on land.
The appearance of gill-like structures (pharyngeal arch) in vertebrate embryo development. Note that in fish, the arches continue to develop as branchial arches while in humans, for example, they give rise to a variety of structures within the head and neck.
Homologous structures and divergent (adaptive) evolution
If widely separated groups of organisms are originated from a common ancestry, they are expected to have certain basic features in common. The degree of resemblance between two organisms should indicate how closely related they are in evolution:
Groups with little in common are assumed to have diverged from a common ancestor much earlier in geological history than groups with a lot in common;
In deciding how closely related two animals are, a comparative anatomist looks for structures that are fundamentally similar, even though they may serve different functions in the adult. Such structures are described as homologous and suggest a common origin.
In cases where the similar structures serve different functions in adults, it may be necessary to trace their origin and embryonic development. A similar developmental origin suggests they are the same structure, and thus likely derived from a common ancestor.
When a group of organisms share a homologous structure that is specialized to perform a variety of functions in adaptation to different environmental conditions and modes of life, it is called adaptive radiation. The gradual spreading of organisms with adaptive radiation is known as divergent evolution.
Nested hierarchies and classification
Taxonomy is based on the fact that all organisms are related to each other in nested hierarchies based on shared characteristics. Most existing species can be organized rather easily in a nested hierarchical classification. This is evident from the Linnaean classification scheme. Based on shared derived characters, closely related organisms can be placed in one group (such as a genus), several genera can be grouped together into one family, several families can be grouped together into an order, etc. The existence of these nested hierarchies was recognized by many biologists before Darwin, but he showed that his theory of evolution with its branching pattern of common descent could explain them. Darwin described how common descent could provide a logical basis for classification:
Evolutionary trees
An evolutionary tree (of Amniota, for example, the last common ancestor of mammals and reptiles, and all its descendants) illustrates the initial conditions causing evolutionary patterns of similarity (e.g., all Amniotes produce an egg that possesses the amnios) and the patterns of divergence amongst lineages (e.g., mammals and reptiles branching from the common ancestry in Amniota). Evolutionary trees provide conceptual models of evolving systems once thought limited in the domain of making predictions out of the theory. However, the method of phylogenetic bracketing is used to infer predictions with far greater probability than raw speculation. For example, paleontologists use this technique to make predictions about nonpreservable traits in fossil organisms, such as feathered dinosaurs, and molecular biologists use the technique to posit predictions about RNA metabolism and protein functions. Thus evolutionary trees are evolutionary hypotheses that refer to specific facts, such as the characteristics of organisms (e.g., scales, feathers, fur), providing evidence for the patterns of descent, and a causal explanation for modification (i.e., natural selection or neutral drift) in any given lineage (e.g., Amniota). Evolutionary biologists test evolutionary theory using phylogenetic systematic methods that measure how much the hypothesis (a particular branching pattern in an evolutionary tree) increases the likelihood of the evidence (the distribution of characters among lineages). The severity of tests for a theory increases if the predictions "are the least probable of being observed if the causal event did not occur." "Testability is a measure of how much the hypothesis increases the likelihood of the evidence."
Vestigial structures
Evidence for common descent comes from the existence of vestigial structures. These rudimentary structures are often homologous to structures that correspond in related or ancestral species. A wide range of structures exist such as mutated and non-functioning genes, parts of a flower, muscles, organs, and even behaviors. This variety can be found across many different groups of species. In many cases they are degenerated or underdeveloped. The existence of vestigial organs can be explained in terms of changes in the environment or modes of life of the species. Those organs are typically functional in the ancestral species but are now either semi-functional, nonfunctional, or re-purposed.
Scientific literature concerning vestigial structures abounds. One study compiled 64 examples of vestigial structures found in the literature across a wide range of disciplines within the 21st century. The following non-exhaustive list summarizes Senter et al. alongside various other examples:
The presence of remnant mitochondria (mitosomes) that have lost the ability to synthesize ATP in Entamoeba histolytica, Trachipleistophora hominis, Cryptosporidium parvum, Blastocystis hominis, and Giardia intestinalis.
Remnant chloroplast organelles (leucoplasts) in non-photosynthetic species such as Plasmodium falciparum, Toxoplasma gondii, Astasia longa, Anthophysa vegetans, Ciliophrys infusionum, Pteridomonas danica, Paraphysomonas, Spumella and Epifagus virginiana.
Missing stamens (unvascularized staminodes) on Gilliesia and Gethyum flowers.
Non-functioning androecium in female flowers and non-functioning gynoecium in male flowers of the cactus species Consolea spinosissima.
Remnant stamens on female flowers of Fragaria virginiana; all species in the genus Schiedea; and on Penstemon centranthifolius, P. rostriflorus, P. ellipticus, and P. palmeri.
Vestigial anthers on Nemophila menziesii.
Reduced hindlimbs and pelvic girdle embedded in the muscles of extant whales (see figure 2b). Occasionally, the genes that code for longer extremities cause a modern whale to develop legs. On 28 October 2006, a four-finned bottlenose dolphin was caught and studied due to its extra set of hind limbs. These legged Cetacea display an example of an atavism predicted from their common ancestry.
Nonfunctional hind wings in Carabus solieri and other beetles.
Remnant eyes (and eye structures) in animals that have lost sight such as blind cavefish (e.g. Astyanax mexicanus), mole rats, snakes, spiders, salamanders, shrimp, crayfish, and beetles.
Vestigial eye in the extant Rhineura floridana and remnant jugal in the extinct Rhineura hatcherii (reclassified as Protorhineura hatcherii).
Functionless wings in flightless birds such as ostriches, kiwis, cassowaries, and emus.
The presence of the plica semilunaris in the human eye—a vestigial remnant of the nictitating membrane.
Harderian gland in primates.
Reduced hind limbs and pelvic girdle structures in legless lizards, skinks, amphisbaenians, and some snakes.
Reduced and missing olfactory apparatus in whales that still possess vestigial olfactory receptor subgenomes.
Vestigial teeth in narwhal.
Rudimentary digits of Ateles geoffroyi, Colobus guereza, and Perodicticus potto.
Vestigial dental primordia in the embryonic tooth pattern in mice.
Reduced or absent vomeronasal organ in humans and Old World monkeys.
Presence of non-functional sinus hair muscles in humans used in whisker movement.
Degenerating palmaris longus muscle in humans.
Teleost fish, anthropoid primates (Simians), guinea pigs, some bat species, and some Passeriformes have lost the ability to synthesize vitamin C (ascorbic acid), yet still possess the genes involved. This inability is due to mutations of the L-gulono-γ-lactone oxidase (GLO) gene— and in primates, teleost fish, and guinea pigs it is irreversible.
Remnant abdominal segments in cirripedes (barnacles).
Non-mammalian vertebrate embryos depend on nutrients from the yolk sac. The genomes of humans and other mammals contain broken, non-functioning genes that code for the production of yolk, alongside the presence of an empty yolk sac with the embryo.
Dolphin embryonic limb buds.
Leaf formation in some cacti species.
Presence of a vestigial endosymbiont Lepidodinium viride within the dinoflagellate Gymnodinium chlorophorum.
The species Dolabrifera dolabrifera has an ink gland but is "incapable of producing ink or its associated anti-predator proteins".
Specific examples from comparative anatomy
Insect mouthparts and appendages
Many different species of insects have mouthparts derived from the same embryonic structures, indicating that the mouthparts are modifications of a common ancestor's original features. These include a labrum (upper lip), a pair of mandibles, a hypopharynx (floor of mouth), a pair of maxillae, and a labium. (Fig. 2c) Evolution has caused enlargement and modification of these structures in some species, while it has caused the reduction and loss of them in other species. The modifications enable the insects to exploit a variety of food materials.
Insect mouthparts and antennae are considered homologues of insect legs. Parallel developments are seen in some arachnids: The anterior pair of legs may be modified as analogues of antennae, particularly in whip scorpions, which walk on six legs. These developments provide support for the theory that complex modifications often arise by duplication of components, with the duplicates modified in different directions.
Pelvic structure of dinosaurs
As with the pentadactyl limb in mammals, the pelvis provides an example of homologous structures among dinosaurs. The earliest dinosaurs split into two distinct orders, the Saurischia and Ornithischia, and species are classified as one or the other in accordance with what the fossils demonstrate. Figure 2d shows that early saurischians resembled early ornithischians. Each order of dinosaur has slightly differing pelvic bones, providing evidence of common descent. Additionally, modern birds show a similarity to ancient saurischian pelvic structures, indicating the evolution of birds from dinosaurs. This can also be seen in Figure 5c as the Aves branch off the Theropoda suborder.
Pentadactyl limb
The pattern of limb bones called the pentadactyl limb is an example of homologous structures (Fig. 2e). It is found in all classes of tetrapods (i.e. from amphibians to mammals). It can even be traced back to the fins of certain fossil fishes from which the first amphibians evolved, such as Tiktaalik. The limb has a single proximal bone (humerus), two distal bones (radius and ulna), a series of carpals (wrist bones), followed by five series of metacarpals (palm bones) and phalanges (digits). Throughout the tetrapods, the fundamental structures of pentadactyl limbs are the same, indicating that they originated from a common ancestor. But in the course of evolution, these fundamental structures have been modified. They have become superficially different and unrelated structures to serve different functions in adaptation to different environments and modes of life. This phenomenon is shown in the forelimbs of mammals. For example:
In monkeys, the forelimbs are much elongated, forming a grasping hand used for climbing and swinging among trees.
Pigs have lost their first digit, while the second and fifth digits are reduced. The remaining two digits are longer and stouter than the rest and bear a hoof for supporting the body.
In horses, the forelimbs are highly adapted for strength and support. Fast and long-distance running is possible due to the extensive elongation of the third digit that bears a hoof.
The mole has a pair of short, spade-like forelimbs for burrowing.
Anteaters use their enlarged third digit for tearing into ant and termite nests.
In cetaceans, the forelimbs become flippers for steering and maintaining equilibrium during swimming.
In bats, the forelimbs have become highly modified and evolved into functioning wings. Four digits have become elongated, while the hook-like first digit remains free and is used to grip.
Recurrent laryngeal nerve in giraffes
The recurrent laryngeal nerve is a fourth branch of the vagus nerve, which is a cranial nerve. In mammals, its path is unusually long. As a part of the vagus nerve, it comes from the brain, passes through the neck down to the heart, rounds the dorsal aorta and returns up to the larynx, again through the neck. (Fig. 2f)
This path is suboptimal even for humans, but for giraffes it becomes even more so. Due to the length of their necks, the recurrent laryngeal nerve may be several metres long, despite its optimal route being a distance of just several inches.
The indirect route of this nerve is the result of evolution of mammals from fish, which had no neck and had a relatively short nerve that innervated one gill slit and passed near the gill arch. Since then, the gill it innervated has become the larynx and the gill arch has become the dorsal aorta in mammals.
Route of the vas deferens
Similar to the laryngeal nerve in giraffes, the vas deferens is part of the male anatomy of many vertebrates; it transports sperm from the epididymis in anticipation of ejaculation. In humans, the vas deferens routes up from the testicle, looping over the ureter, and back down to the urethra and penis. It has been suggested that this is due to the descent of the testicles during the course of human evolution—likely associated with temperature. As the testicles descended, the vas deferens lengthened to accommodate the accidental "hook" over the ureter.
Evidence from paleontology
When organisms die, they often decompose rapidly or are consumed by scavengers, leaving no permanent evidence of their existence. However, occasionally, some organisms are preserved. The remains or traces of organisms from a past geologic age embedded in rocks by natural processes are called fossils. They are extremely important for understanding the evolutionary history of life on Earth, as they provide direct evidence of evolution and detailed information on the ancestry of organisms. Paleontology is the study of past life based on fossil records and their relations to different geologic time periods.
For fossilization to take place, the traces and remains of organisms must be quickly buried so that weathering and decomposition do not occur. Skeletal structures or other hard parts of the organisms are the most commonly occurring form of fossilized remains. There are also some trace "fossils" showing moulds, cast or imprints of some previous organisms.
As an animal dies, the organic materials gradually decay, such that the bones become porous. If the animal is subsequently buried in mud, mineral salts infiltrate into the bones and gradually fill up the pores. The bones harden into stones and are preserved as fossils. This process is known as petrification. If dead animals are covered by wind-blown sand, and if the sand is subsequently turned into mud by heavy rain or floods, the same process of mineral infiltration may occur. Apart from petrification, the dead bodies of organisms may be well preserved in ice, in hardened resin of coniferous trees (figure 3a), in tar, or in anaerobic, acidic peat. Fossilization can sometimes be a trace, an impression of a form. Examples include leaves and footprints, the fossils of which are made in layers that then harden.
Fossil record
It is possible to decipher how a particular group of organisms evolved by arranging its fossil record in a chronological sequence. Such a sequence can be determined because fossils are mainly found in sedimentary rock. Sedimentary rock is formed by layers of silt or mud on top of each other; thus, the resulting rock contains a series of horizontal layers, or strata. Each layer contains fossils typical for a specific time period when they formed. The lowest strata contain the oldest rock and the earliest fossils, while the highest strata contain the youngest rock and more recent fossils.
A succession of animals and plants can also be seen from fossil discoveries. By studying the number and complexity of different fossils at different stratigraphic levels, it has been shown that older fossil-bearing rocks contain fewer types of fossilized organisms, and they all have a simpler structure, whereas younger rocks contain a greater variety of fossils, often with increasingly complex structures.
For many years, geologists could only roughly estimate the ages of various strata and the fossils found. They did so, for instance, by estimating the time for the formation of sedimentary rock layer by layer. Today, by measuring the proportions of radioactive and stable elements in a given rock, the ages of fossils can be more precisely dated by scientists. This technique is known as radiometric dating.
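The underlying decay arithmetic can be sketched as follows, assuming a simple closed system with no initial daughter isotope; the half-life and the measured ratio are placeholders, and real radiometric dating combines several isotope systems with various corrections.

```python
import math

# Age from the ratio of daughter (D) to remaining parent (P) atoms, assuming a
# closed system with no daughter atoms present initially:
#   t = (t_half / ln 2) * ln(1 + D / P)
def radiometric_age(daughter: float, parent: float, half_life_years: float) -> float:
    return (half_life_years / math.log(2)) * math.log(1 + daughter / parent)

# Hypothetical measurement: 0.10 daughter atoms per 1.00 remaining parent atom,
# with a placeholder half-life of 1.25 billion years.
age = radiometric_age(daughter=0.10, parent=1.00, half_life_years=1.25e9)
print(f"{age:.3e} years")
```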
Throughout the fossil record, many species that appear at an early stratigraphic level disappear at a later level. This is interpreted in evolutionary terms as indicating the times when species originated and became extinct. Geographical regions and climatic conditions have varied throughout Earth's history. Since organisms are adapted to particular environments, the constantly changing conditions favoured species that adapted to new environments through the mechanism of natural selection.
Extent of the fossil record
Despite the relative rarity of suitable conditions for fossilization, an estimated 250,000 fossil species have been named. The number of individual fossils this represents varies greatly from species to species, but many millions of fossils have been recovered: for instance, more than three million fossils from the last ice age have been recovered from the La Brea Tar Pits in Los Angeles. Many more fossils are still in the ground, in various geological formations known to contain a high fossil density, allowing estimates of the total fossil content of the formation to be made. An example of this occurs in South Africa's Beaufort Formation (part of the Karoo Supergroup, which covers most of South Africa), which is rich in vertebrate fossils, including therapsids (reptile-mammal transitional forms). It has been estimated that this formation contains 800 billion vertebrate fossils. Paleontologists have documented numerous transitional forms and have constructed "an astonishingly comprehensive record of the key transitions in animal evolution". Conducting a survey of the paleontological literature, one would find that there is "abundant evidence for how all the major groups of animals are related, much of it in the form of excellent transitional fossils".
Limitations
The fossil record is an important source for scientists when tracing the evolutionary history of organisms. However, because of limitations inherent in the record, it does not preserve fine-grained sequences of intermediate forms between related groups of species. This lack of continuity in the fossil record is a major limitation in tracing the descent of biological groups. When transitional fossils are found that show intermediate forms in what had previously been a gap in knowledge, they are often popularly referred to as "missing links".
There is a gap of about 100 million years between the beginning of the Cambrian period and the end of the Ordovician period. The early Cambrian is the period from which numerous fossils of sponges, cnidarians (e.g., jellyfish), echinoderms (e.g., eocrinoids), molluscs (e.g., snails) and arthropods (e.g., trilobites) are known. Arandaspis, the first animal known to possess the typical features of vertebrates, dates to the later Ordovician period. Thus few, if any, fossils of an intermediate type between invertebrates and vertebrates have been found, although likely candidates include the Burgess Shale animal Pikaia gracilens and its Maotianshan shales relatives Myllokunmingia, Yunnanozoon, Haikouella lanceolata, and Haikouichthys.
Some of the reasons for the incompleteness of fossil records are:
In general, the probability that an organism becomes fossilized is very low;
Some species or groups are less likely to become fossils because they are soft-bodied;
Some species or groups are less likely to become fossils because they live (and die) in conditions that are not favourable for fossilization;
Many fossils have been destroyed through erosion and tectonic movements;
Most fossils are fragmentary;
Some evolutionary change occurs in populations at the limits of a species' ecological range, and as these populations are likely small, the probability of fossilization is lower (see punctuated equilibrium);
Similarly, when environmental conditions change, the population of a species is likely to be greatly reduced, such that any evolutionary change induced by these new conditions is less likely to be fossilized;
Most fossils convey information about external form, but little about how the organism functioned;
Using present-day biodiversity as a guide, the fossils unearthed appear to represent only a small fraction of the large number of species of organisms that lived in the past.
Specific examples from paleontology
Evolution of the horse
Due to an almost-complete fossil record found in North American sedimentary deposits from the early Eocene to the present, the horse provides one of the best examples of evolutionary history (phylogeny).
This evolutionary sequence starts with a small animal called Hyracotherium (commonly referred to as Eohippus), which lived in North America about 54 million years ago and then spread to Europe and Asia. Fossil remains of Hyracotherium show it to have differed from the modern horse in three important respects: it was a small animal (the size of a fox), lightly built and adapted for running; the limbs were short and slender, and the feet elongated so that the digits were almost vertical, with four digits in the forelimbs and three digits in the hindlimbs; and the incisors were small, the molars having low crowns with rounded cusps covered in enamel.
The probable course of development of horses from Hyracotherium to Equus (the modern horse) involved at least 12 genera and several hundred species. The major trends seen in the horse's development in response to changing environmental conditions may be summarized as follows:
Increase in size (from 0.4 m to 1.5 m — from 15 in to 60 in);
Lengthening of limbs and feet;
Reduction of lateral digits;
Increase in length and thickness of the third digit;
Increase in width of incisors;
Replacement of premolars by molars; and
Increases in tooth length and in the crown height of the molars.
Fossilized plants found in different strata show that the marshy, wooded country in which Hyracotherium lived became gradually drier. Survival now depended on the head being in an elevated position for gaining a good view of the surrounding countryside, and on a high turn of speed for escape from predators, hence the increase in size and the replacement of the splayed-out foot by the hoofed foot. The drier, harder ground would make the original splayed-out foot unnecessary for support. The changes in the teeth can be explained by assuming that the diet changed from soft vegetation to grass. A dominant genus from each geological period has been selected (see figure 3e) to show the slow alteration of the horse lineage from its ancestral to its modern form.
Transition from fish to amphibians
Prior to 2004, paleontologists had found fossils of amphibians with necks, ears, and four legs, in rock no older than 365 million years old. In rocks more than 385 million years old they could only find fish, without these amphibian characteristics. Evolutionary theory predicted that since amphibians evolved from fish, an intermediate form should be found in rock dated between 365 and 385 million years ago. Such an intermediate form should have many fish-like characteristics, conserved from 385 million years ago or more, but also many amphibian characteristics. In 2004, an expedition to islands in the Canadian Arctic searching specifically for this fossil form in rocks that were 375 million years old discovered fossils of Tiktaalik. Some years later, however, scientists in Poland found evidence of fossilised tetrapod tracks predating Tiktaalik.
Evidence from biogeography
Data about the presence or absence of species on various continents and islands (biogeography) can provide evidence of common descent and shed light on patterns of speciation.
Continental distribution
All organisms are adapted to their environment to a greater or lesser extent. If the abiotic and biotic factors within a habitat are capable of supporting a particular species in one geographic area, then one might assume that the same species would be found in a similar habitat in a similar geographic area, e.g. in Africa and South America. This is not the case. Plant and animal species are discontinuously distributed throughout the world:
Africa has Old World monkeys, apes, elephants, leopards, giraffes, and hornbills.
South America has New World monkeys, cougars, jaguars, sloths, llamas, and toucans.
Deserts in North and South America have native cacti, but deserts in Africa, Asia, and Australia instead have native succulent euphorbs, which resemble cacti but are very different (Rhipsalis baccifera is the only cactus with a natural range outside the Americas).
Even greater differences can be found if Australia is taken into consideration, though it occupies the same latitude as much of South America and Africa. Marsupials like kangaroos, bandicoots, and quolls make up about half of Australia's indigenous mammal species. By contrast, marsupials are today totally absent from Africa and form a smaller portion of the mammalian fauna of South America, where opossums, shrew opossums, and the monito del monte occur. The only living representatives of primitive egg-laying mammals (monotremes) are the echidnas and the platypus. The short-beaked echidna (Tachyglossus aculeatus) and its subspecies populate Australia, Tasmania, New Guinea, and Kangaroo Island, while the long-beaked echidna (Zaglossus bruijni) lives only in New Guinea. The platypus lives in the waters of eastern Australia. Platypuses have been introduced to Tasmania, King Island, and Kangaroo Island. These monotremes are totally absent in the rest of the world. On the other hand, Australia is missing many groups of placental mammals that are common on other continents (carnivorans, artiodactyls, shrews, squirrels, lagomorphs), although it does have indigenous bats and murine rodents; many other placentals, such as rabbits and foxes, have been introduced there by humans.
Other examples of animal distribution include bears, found on all continents excluding Africa, Australia and Antarctica, and the polar bear, found only in the Arctic Circle and adjacent land masses. Penguins are found only around the South Pole despite similar weather conditions at the North Pole. Families of sirenians are distributed around the earth's waters: manatees are found only in the waters of western Africa, northern South America, and the West Indies, while the related family, the dugongs, occurs only in Oceanian waters north of Australia and along the coasts surrounding the Indian Ocean. The now extinct Steller's sea cow resided in the Bering Sea.
The same kinds of fossils are found from areas known to be adjacent to one another in the past but that, through the process of continental drift, are now in widely divergent geographic locations. For example, fossils of the same types of ancient amphibians, arthropods and ferns are found in South America, Africa, India, Australia and Antarctica, which can be dated to the Paleozoic Era, when these regions were united as a single landmass called Gondwana.
Island biogeography
Types of species found on islands
Evidence from island biogeography has played an important and historic role in the development of evolutionary biology. For purposes of biogeography, islands are divided into two classes. Continental islands are islands, like Great Britain and Japan, that have at one time or another been part of a continent. Oceanic islands, like the Hawaiian Islands, the Galápagos Islands and St. Helena, on the other hand, are islands that have formed in the ocean and never been part of any continent. Oceanic islands have distributions of native plants and animals that are unbalanced in ways that make them distinct from the biotas found on continents or continental islands. Oceanic islands do not have native terrestrial mammals (they do sometimes have bats and seals), amphibians, or freshwater fish. In some cases they have terrestrial reptiles (such as the iguanas and giant tortoises of the Galápagos Islands) but often (such as in Hawaii) they do not. This is despite the fact that when species such as rats, goats, pigs, cats, mice, and cane toads are introduced to such islands by humans they often thrive. Starting with Charles Darwin, many scientists have conducted experiments and made observations that have shown that the types of animals and plants found, and not found, on such islands are consistent with the theory that these islands were colonized accidentally by plants and animals that were able to reach them. Such accidental colonization could occur by air, as with plant seeds carried by migratory birds or bats and insects blown out over the sea by the wind, or by sea, as with plant seeds such as coconuts that can survive immersion in salt water and reptiles that can survive for extended periods on rafts of vegetation carried to sea by storms.
Endemism
Many of the species found on remote islands are endemic to a particular island or group of islands, meaning they are found nowhere else on earth. Examples of species endemic to islands include many flightless birds of New Zealand, lemurs of Madagascar, the Komodo dragon of Komodo, the dragon's blood tree of Socotra, the tuatara of New Zealand, and others. However, many such endemic species are related to species found on other nearby islands or continents; the relationship of the animals found on the Galápagos Islands to those found in South America is a well-known example. All of these facts, the types of plants and animals found on oceanic islands, the large number of endemic species found on oceanic islands, and the relationship of such species to those living on the nearest continents, are most easily explained if the islands were colonized by species from nearby continents that evolved into the endemic species now found there.
Other types of endemism do not have to involve islands in the strict sense. Islands can also mean isolated lakes or remote and isolated areas. Examples include the highlands of Ethiopia, Lake Baikal, the fynbos of South Africa, and the forests of New Caledonia, among others. Examples of endemic organisms living in isolated areas include the kagu of New Caledonia, cloud rats of the Luzon tropical pine forests of the Philippines, the boojum tree (Fouquieria columnaris) of the Baja California peninsula, and the Baikal seal.
Adaptive radiations
Oceanic islands are frequently inhabited by clusters of closely related species that fill a variety of ecological niches, often niches that are filled by very different species on continents. Such clusters, like the finches of the Galápagos, Hawaiian honeycreepers, members of the sunflower family on the Juan Fernandez Archipelago and wood weevils on St. Helena, are called adaptive radiations because they are best explained by a single species colonizing an island (or group of islands) and then diversifying to fill available ecological niches. Such radiations can be spectacular; 800 species of the fruit fly genus Drosophila, nearly half the world's total, are endemic to the Hawaiian islands. Another illustrative example from Hawaii is the silversword alliance, which is a group of thirty species found only on those islands. Members range from the silverswords that flower spectacularly on high volcanic slopes to trees, shrubs, vines and mats that occur at various elevations from mountain top to sea level, and in Hawaiian habitats that vary from deserts to rainforests. Their closest relatives outside Hawaii, based on molecular studies, are tarweeds found on the west coast of North America. These tarweeds have sticky seeds that facilitate distribution by migrant birds. Additionally, nearly all of the species on the islands can be crossed and the hybrids are often fertile, and they have been hybridized experimentally with two of the west coast tarweed species as well. Continental islands have less distinct biota, but those that have been long separated from any continent also have endemic species and adaptive radiations, such as the 75 lemur species of Madagascar, and the eleven extinct moa species of New Zealand.
Ring species
A ring species is a connected series of populations, each of which can interbreed with its neighbors, with at least two "end" populations which are too distantly related to interbreed, though with the potential for gene flow between all the populations. Ring species represent speciation and have been cited as evidence of evolution. They illustrate what happens over time as populations genetically diverge, specifically because they represent, in living populations, what normally happens over time between long deceased ancestor populations and living populations, in which the intermediates have become extinct. Richard Dawkins says that ring species "are only showing us in the spatial dimension something that must always happen in the time dimension".
Specific examples from biogeography
Distribution of Glossopteris
The combination of continental drift and evolution can sometimes be used to predict what will be found in the fossil record. Glossopteris is an extinct genus of seed fern plants from the Permian. Glossopteris appears in the fossil record around the beginning of the Permian on the ancient continent of Gondwana. Continental drift explains the current biogeography of the tree. Present-day Glossopteris fossils are found in Permian strata in southeast South America, southeast Africa, all of Madagascar, northern India, all of Australia, all of New Zealand, and scattered on the southern and northern edges of Antarctica. During the Permian, these continents were connected as Gondwana (see figure 4c), in agreement with magnetic striping, other fossil distributions, and glacial scratches pointing away from the temperate climate of the South Pole during the Permian.
Metatherian distribution
The history of metatherians (the clade containing marsupials and their extinct, primitive ancestors) provides an example of how evolutionary theory and the movement of continents can be combined to make predictions concerning fossil stratigraphy and distribution. The oldest metatherian fossils are found in present-day China. Metatherians spread westward into modern North America (still attached to Eurasia) and then to South America, which was connected to North America until around 65 mya. Marsupials reached Australia via Antarctica about 50 mya, shortly after Australia had split off, suggesting a single dispersion event of just one species. Evolutionary theory suggests that the Australian marsupials descended from the older ones found in the Americas. Geologic evidence suggests that between 30 and 40 million years ago South America and Australia were still part of the Southern Hemisphere supercontinent of Gondwana and that they were connected by land that is now part of Antarctica. Therefore, when combining the models, scientists could predict that marsupials migrated from what is now South America, through Antarctica, and then to present-day Australia between 40 and 30 million years ago. The first marsupial fossil found in Antarctica, belonging to the extinct family Polydolopidae, was discovered on Seymour Island on the Antarctic Peninsula in 1982. Further fossils have subsequently been found, including members of the marsupial orders Didelphimorphia (opossums) and Microbiotheria, as well as ungulates and a member of the enigmatic extinct order Gondwanatheria, possibly Sudamerica ameghinoi.
Migration, isolation, and distribution of the camel
The history of the camel provides an example of how fossil evidence can be used to reconstruct migration and subsequent evolution. The fossil record indicates that the evolution of camelids started in North America (see figure 4e), from which, six million years ago, they migrated across the Bering Strait into Asia and then to Africa, and 3.5 million years ago through the Isthmus of Panama into South America. Once isolated, they evolved along their own lines, giving rise to the Bactrian camel and dromedary in Asia and Africa and the llama and its relatives in South America. Camelids then became extinct in North America at the end of the last ice age.
Evidence from selection
Scientists have observed and documented a multitude of events where natural selection is in action. The best-known examples are antibiotic resistance in the medical field and laboratory experiments documenting evolution's occurrence. Natural selection is integral to common descent in that the long-term operation of selection pressures can lead to the diversity of life on earth as found today. All adaptations, documented and undocumented alike, are caused by natural selection (and a few other minor processes). It is well established that, "...natural selection is a ubiquitous part of speciation...", and it is the primary driver of speciation.
Artificial selection and experimental evolution
Artificial selection demonstrates the diversity that can exist among organisms that share a relatively recent common ancestor. In artificial selection, one species is bred selectively at each generation, allowing only those organisms that exhibit desired characteristics to reproduce. These characteristics become increasingly well developed in successive generations. Artificial selection was successful long before science discovered its genetic basis. Examples of artificial selection include dog breeding, genetically modified food, flower breeding, and the cultivation of foods such as wild cabbage, among others.
Experimental evolution uses controlled experiments to test hypotheses and theories of evolution. In one early example, William Dallinger set up an experiment shortly before 1880, subjecting microbes to heat with the aim of forcing adaptive changes. His experiment ran for around seven years, and his published results were acclaimed, but he did not resume the experiment after the apparatus failed.
A large-scale example of experimental evolution is Richard Lenski's multi-generation experiment with Escherichia coli. Lenski observed that some strains of E. coli evolved a complex new ability, the ability to metabolize citrate, after tens of thousands of generations. The evolutionary biologist Jerry Coyne commented as a critique of creationism, saying, "the thing I like most is it says you can get these complex traits evolving by a combination of unlikely events. That's just what creationists say can't happen." In addition to the metabolic changes, the different bacterial populations were found to have diverged with respect to both morphology (the overall size of the cell) and fitness (which was measured in competition with the ancestors).
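Such competition assays are commonly summarized as a relative fitness value, the ratio of the two strains' realized (Malthusian) growth rates over the assay. The sketch below illustrates that calculation with hypothetical cell counts; the numbers are assumptions for illustration, not data from the Lenski experiment.

```python
import math

def relative_fitness(evolved_start, evolved_end, ancestor_start, ancestor_end):
    """Relative fitness of an evolved strain competed against its ancestor.

    Computed as the ratio of realized (Malthusian) growth rates over the
    assay: w = ln(E_end / E_start) / ln(A_end / A_start).
    A value above 1 means the evolved strain outgrew its ancestor.
    """
    return math.log(evolved_end / evolved_start) / math.log(ancestor_end / ancestor_start)

# Hypothetical cell densities (cells per ml) at the start and end of a
# one-day competition: the evolved strain grows ~100-fold, the ancestor
# ~64-fold, giving a relative fitness of about 1.11.
print(relative_fitness(5e5, 5e7, 5e5, 3.2e7))
```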
Invertebrates
Historical lead tolerance in Daphnia
A study of species of Daphnia and lead pollution in the 20th century predicted that an increase in lead pollution would lead to strong selection for lead tolerance. Researchers were able to use "resurrection ecology", hatching decades-old Daphnia eggs from the time when lakes were heavily polluted with lead. The hatchlings in the study were compared to current-day Daphnia, and demonstrated "dramatic fitness differences between old and modern phenotypes when confronted with a widespread historical environmental stressor". Essentially, the modern-day Daphnia were unable to resist or tolerate high levels of lead (due to the huge reduction of lead pollution in 21st-century lakes). The old hatchlings, however, were able to tolerate high lead pollution. The authors concluded that "by employing the techniques of resurrection ecology, we were able to show clear phenotypic change over decades...".
Peppered moths
A classic example was the phenotypic change, light-to-dark color adaptation, in the peppered moth, due to pollution from the Industrial Revolution in England.
Microbes
Antimicrobial resistance
The development and spread of antibiotic-resistant bacteria is evidence for the process of evolution of species. Thus the appearance of vancomycin-resistant Staphylococcus aureus, and the danger it poses to hospital patients, is a direct result of evolution through natural selection. The rise of Shigella strains resistant to the synthetic antibiotic class of sulfonamides also demonstrates the generation of new information as an evolutionary process. Similarly, the appearance of DDT resistance in various forms of Anopheles mosquitoes, and the appearance of myxomatosis resistance in breeding rabbit populations in Australia, are both evidence of the existence of evolution in situations of evolutionary selection pressure in species in which generations occur rapidly.
All classes of microbes develop resistance: fungi (antifungal resistance), viruses (antiviral resistance), protozoa (antiprotozoal resistance), and bacteria (antibiotic resistance). This is to be expected when considering that all life exhibits a universal genetic code and is therefore subject to the process of evolution through its various mechanisms.
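The speed with which a resistant variant can spread under such selection pressure follows from elementary population genetics. The sketch below iterates the standard haploid selection recursion, p' = p(1 + s) / (1 + ps); the starting frequency and selective advantage are illustrative assumptions, not measurements from any particular pathogen.

```python
def selection_trajectory(p0, s, generations):
    """Frequency of a beneficial (e.g. drug-resistant) haploid variant over time.

    Iterates the standard recursion p' = p * (1 + s) / (1 + p * s), where s is
    the variant's selective advantage relative to the susceptible wild type.
    """
    frequencies = [p0]
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
        frequencies.append(p)
    return frequencies

# Illustrative assumption: a resistant variant starting at 0.1% frequency with
# a 5% per-generation advantage becomes the majority type within about
# 150 generations.
trajectory = selection_trajectory(p0=0.001, s=0.05, generations=200)
print(round(trajectory[150], 3), round(trajectory[200], 3))  # ~0.6 ~0.95
```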
Nylon-eating bacteria
Another example of organisms adapting to human-caused conditions is the nylon-eating bacteria: a strain of Flavobacterium capable of digesting certain byproducts of nylon 6 manufacturing. There is scientific consensus that the capacity to synthesize nylonase most probably developed as a single-step mutation that survived because it improved the fitness of the bacteria possessing the mutation. This is seen as a good example of evolution through mutation and natural selection that has been observed as it occurs and could not have come about until the production of nylon by humans.
Plants and fungi
Monkeyflower radiation
Both subspecies Mimulus aurantiacus puniceus (red-flowered) and Mimulus aurantiacus australis (yellow-flowered) of monkeyflowers are isolated due to the preferences of their hummingbird and hawkmoth pollinators. Most subspecies in the M. aurantiacus radiation are yellow-flowered; however, both M. a. ssp. puniceus and M. a. ssp. flemingii are red. Phylogenetic analysis suggests two independent origins of red-colored flowers that arose due to cis-regulatory mutations in the gene MaMyb2, which is present in all M. aurantiacus subspecies. Further research suggested that two independent mutations did not take place; instead, one MaMyb2 allele was transferred via introgressive hybridization.
Radiotrophic fungi
Radiotrophic fungi provide a striking example of natural selection taking place after a nuclear accident. Radiotrophic fungi appear to use the pigment melanin to convert gamma radiation into chemical energy for growth and were first discovered in 2007 as black molds growing inside and around the Chernobyl Nuclear Power Plant. Research at the Albert Einstein College of Medicine showed that three melanin-containing fungi, Cladosporium sphaerospermum, Wangiella dermatitidis, and Cryptococcus neoformans, increased in biomass and accumulated acetate faster in an environment in which the radiation level was 500 times higher than in the normal environment.
Vertebrates
Guppies
While studying guppies (Poecilia reticulata) in Trinidad, biologist John Endler detected selection at work on the fish populations. To rule out alternative possibilities, Endler set up a highly controlled experiment to mimic the natural habitat by constructing ten ponds within a laboratory greenhouse at Princeton University. Each pond contained gravel to exactly match that of the natural ponds. After capturing a random sample of guppies from ponds in Trinidad, he raised and mixed them to create similar genetically diverse populations and measured each fish (spot length, spot height, spot area, relative spot length, relative spot height, total patch area, and standard body lengths). For the experiment he added Crenicichla alta (P. reticulata's main predator) in four of the ponds, Rivulus hartii (a non-predator fish) in four of the ponds, and left the remaining two ponds empty with only the guppies. After 10 generations, comparisons were made between each pond's guppy populations and measurements were taken again. Endler found that the populations had evolved dramatically different color patterns in the control and non-predator pools and drab color patterns in the predator pool. Predation pressure had caused a selection against standing out from background gravel.
In parallel with this experiment, Endler conducted a field experiment in Trinidad in which he caught guppies from ponds where they had predators and relocated them to upstream ponds where the predators did not live. After 15 generations, Endler found that the relocated guppies had evolved dramatic and colorful patterns. Essentially, both experiments showed convergence due to similar selection pressures (i.e. predator selection against contrasting color patterns and sexual selection for contrasting color patterns).
In a later study by David Reznick, the field population was examined 11 years after Endler relocated the guppies to the high streams. The study found that the populations had evolved in a number of different ways: bright color patterns, late maturation, larger sizes, smaller litter sizes, and larger offspring within litters. Further studies of P. reticulata and their predators in the streams of Trinidad have indicated that varying modes of selection through predation have changed not only the guppies' color patterns, sizes, and behaviors, but also their life histories and life history patterns.
Humans
Natural selection is observed in contemporary human populations, with recent findings demonstrating that the population at risk of the severe debilitating disease kuru has significant over-representation of an immune variant of the prion protein gene, G127V, versus non-immune alleles. Scientists postulate that one of the reasons for the rapid selection of this genetic variant is the lethality of the disease in non-immune persons. Other reported evolutionary trends in other populations include a lengthening of the reproductive period and reductions in cholesterol levels, blood glucose, and blood pressure.
A well known example of selection occurring in human populations is lactose tolerance. Lactose intolerance is the inability to metabolize lactose, because of a lack of the required enzyme lactase in the digestive system. The normal mammalian condition is for the young of a species to experience reduced lactase production at the end of the weaning period (a species-specific length of time). In humans, in non-dairy consuming societies, lactase production usually drops about 90% during the first four years of life, although the exact drop over time varies widely. Lactase activity persistence in adults is associated with two polymorphisms: C/T 13910 and G/A 22018 located in the MCM6 gene. This gene difference eliminates the shutdown in lactase production, making it possible for members of these populations to continue consumption of raw milk and other fresh and fermented dairy products throughout their lives without difficulty. This appears to be an evolutionarily recent (around 10,000 years ago [and 7,500 years ago in Europe]) adaptation to dairy consumption, and has occurred independently in both northern Europe and east Africa in populations with a historically pastoral lifestyle.
Italian wall lizards
In 1971, ten adult specimens of Podarcis sicula (the Italian wall lizard) were transported from the Croatian island of Pod Kopište to the island of Pod Mrčaru (about 3.5 km to the east). Both islands lie in the Adriatic Sea near Lastovo, where the lizards founded a new bottlenecked population. The two islands have similar size, elevation, and microclimate, and a general absence of terrestrial predators, and the P. sicula population expanded for decades without human interference, even out-competing the (now locally extinct) Podarcis melisellensis population.
In the 1990s, scientists returned to Pod Mrčaru and found that the lizards there differed greatly from those on Kopište. While mitochondrial DNA analyses have verified that P. sicula currently on Mrčaru are genetically very similar to the Kopište source population, the new Mrčaru population of P. sicula had a larger average size, shorter hind limbs, lower maximal sprint speed and altered response to simulated predatory attacks compared to the original Kopište population. These changes were attributed to "relaxed predation intensity" and greater protection from vegetation on Mrčaru.
In 2008, further analysis revealed that the Mrčaru population of P. sicula have significantly different head morphology (longer, wider, and taller heads) and increased bite force compared to the original Kopište population. This change in head shape corresponded with a shift in diet: Kopište P. sicula are primarily insectivorous, but those on Mrčaru eat substantially more plant matter. The changes in foraging style may have contributed to a greater population density and decreased territorial behavior of the Mrčaru population.
Another difference found between the two populations was the discovery, in the Mrčaru lizards, of cecal valves, which slow down food passage and provide fermenting chambers, allowing commensal microorganisms to convert cellulose to nutrients digestible by the lizards. Additionally, the researchers discovered that nematodes were common in the guts of Mrčaru lizards, but absent from Kopište P. sicula, which do not have cecal valves. The cecal valves, which occur in less than 1 percent of all known species of scaled reptiles, have been described as an "adaptive novelty, a brand new feature not present in the ancestral population and newly evolved in these lizards".
PCB resistance in codfish
An example involving the direct observation of gene modification due to selection pressures is the resistance to PCBs in codfish. After General Electric dumped polychlorinated biphenyls (PCBs) in the Hudson River from 1947 through 1976, tomcods (Microgadus tomcod) living in the river were found to have evolved an increased resistance to the compound's toxic effects. The tolerance to the toxins is due to a change in the coding section of a specific gene. Genetic samples were taken from the fish in eight bodies of water in northeastern North America: the St. Lawrence River, Miramichi River, Margaree River, Squamscott River, Niantic River, the Shinnecock Basin, the Hudson River, and the Hackensack River. Genetic analysis found that in the population of tomcods in the four southernmost rivers, the gene AHR2 (aryl hydrocarbon receptor 2) was present as an allele with two amino acid deletions. This deletion conferred resistance to PCB in the fish species and was found in 99% of Hudson River tomcods, 92% in the Hackensack River, 6% in the Niantic River, and 5% in Shinnecock Bay. This pattern along the sampled bodies of water suggests a direct correlation between selective pressure and the evolution of PCB resistance in the Atlantic tomcod.
PAH resistance in killifish
A similar study was done regarding the polycyclic aromatic hydrocarbons (PAHs) that pollute the waters of the Elizabeth River in Portsmouth, Virginia. This chemical is a product of creosote, a type of tar. The Atlantic killifish (Fundulus heteroclitus) has evolved a resistance to PAHs involving the AHR gene, the same gene that mutated within the tomcods in the Hudson River. This particular study focused on the resistance to "acute toxicity and cardiac teratogenesis" caused by PAHs.
Urban wildlife
Urban wildlife is a broad and easily observable case of human-caused selection pressure on wildlife. With the growth of human habitats, different animals have adapted to survive within these urban environments. These types of environments can exert selection pressures on organisms, often leading to new adaptations. For example, the weed Crepis sancta, found in France, has two types of seed, heavy and fluffy. The heavy ones land near the parent plant, whereas the fluffy seeds float further away on the wind. In urban environments, seeds that float far often land on infertile concrete. Within about 5–12 generations, the weed evolves to produce significantly heavier seeds than its rural relatives. Other examples of urban wildlife are rock pigeons and species of crows adapting to city environments around the world; African penguins in Simon's Town; baboons in South Africa; and a variety of insects living in human habitations. Studies have found striking changes in animals' (more specifically mammals') behavior and physical brain size due to their interactions with human-created environments.
White Sands lizards
Animals that exhibit ecotonal variations allow for research concerning the mechanisms that maintain population differentiation. A wealth of information about natural selection, genotypic and phenotypic variation, adaptation and ecomorphology, and social signaling has been acquired from studies of three species of lizards located in the White Sands desert of New Mexico. Holbrookia maculata, Aspidoscelis inornatus, and Sceloporus undulatus exhibit ecotonal populations that match both the dark soils and the white sands in the region. Research conducted on these species has found significant phenotypic and genotypic differences between the dark and light populations due to strong selection pressures. For example, H. maculata exhibits the strongest phenotypic difference (its light-colored population matches the substrate most closely), coinciding with the least gene flow between its populations and the highest genetic differences when compared to the other two lizard species.
New Mexico's White Sands are a recent geologic formation (approximately 6000 years old to possibly 2000 years old). This recent origin of these gypsum sand dunes suggests that species exhibiting lighter-colored variations have evolved in a relatively short time frame. The three lizard species previously mentioned have been found to display variable social signal coloration in coexistence with their ecotonal variants. Not only have the three species convergently evolved their lighter variants due to the selection pressures from the environment, they have also evolved ecomorphological differences: morphology, behavior (in this case, escape behavior), and performance (in this case, sprint speed) collectively. Roches' work found surprising results in the escape behavior of H. maculata and S. undulatus. When dark morphs were placed on white sands, their startle response was significantly diminished. This result could be due to varying factors relating to sand temperature or visual acuity; however, regardless of the cause, "...failure of mismatched lizards to sprint could be maladaptive when faced with a predator".
Evidence from speciation
Speciation is the evolutionary process by which new biological species arise. Biologists research species using different theoretical frameworks for what constitutes a species (see species problem and species complex) and there exists debate with regard to delineation. Nevertheless, much of the current research suggests that, "...speciation is a process of emerging genealogical distinctness, rather than a discontinuity affecting all genes simultaneously" and, in allopatry (the most common form of speciation), "reproductive isolation is a byproduct of evolutionary change in isolated populations, and thus can be considered an evolutionary accident". Speciation occurs as the result of the latter (allopatry); however, a variety of differing agents have been documented and are often defined and classified in various forms (e.g. peripatric, parapatric, sympatric, polyploidization, hybridization, etc.). Instances of speciation have been observed in both nature and the laboratory. A.-B Florin and A. Ödeen note that, "strong laboratory evidence for allopatric speciation is lacking..."; however, contrary to laboratory studies (focused specifically on models of allopatric speciation), "speciation most definitely occurs; [and] the vast amount of evidence from nature makes it unreasonable to argue otherwise". Coyne and Orr compiled a list of 19 laboratory experiments on Drosophila presenting examples of allopatric speciation by divergent selection concluding that, "reproductive isolation in allopatry can evolve as a byproduct of divergent selection".
Research documenting speciation is abundant. Biologists have documented numerous examples of speciation in nature—with evolution having produced far more species than any observer would consider necessary. For example, there are well over 350,000 described species of beetles. Examples of speciation come from the observations of island biogeography and the process of adaptive radiation, both explained previously. Evidence of common descent can also be found through paleontological studies of speciation within geologic strata. The examples described below represent different modes of speciation and provide strong evidence for common descent.
Not all speciation research directly observes divergence from start to finish. This is due to research delimitation and ambiguity in definitions, and it occasionally leads research towards historical reconstructions. In light of this, examples abound, and the following are by no means exhaustive, comprising only a small fraction of the instances observed. Once again, note the established fact that "...natural selection is a ubiquitous part of speciation..." and is the primary driver of speciation; hereinafter, examples of speciation will therefore often interdepend and correspond with selection.
Fossils
Limitations exist within the fossil record when considering the concept of what constitutes a species. Paleontologists largely rely on a different framework: the morphological species concept. Due to the absence of information such as reproductive behavior or genetic material in fossils, paleontologists distinguish species by their phenotypic differences. Extensive investigation of the fossil record has led to numerous theories concerning speciation (in the context of paleontology) with many of the studies suggesting that stasis, punctuation, and lineage branching are common. In 1995, D. H. Erwin, et al. published a major work—New Approaches to Speciation in the Fossil Record—which compiled 58 studies of fossil speciation (between 1972 and 1995) finding most of the examples suggesting stasis (involving anagenesis or punctuation) and 16 studies suggesting speciation. Despite stasis appearing to be the predominant conclusion at first glance, this particular meta-study investigated deeper, concluding that, "...no single pattern appears dominant..." with "...the preponderance of studies illustrating both stasis and gradualism in the history of a single lineage". Many of the studies conducted utilize seafloor sediments that can provide a significant amount of data concerning planktonic microfossils. The succession of fossils in stratigraphy can be used to determine evolutionary trends among fossil organisms. In addition, incidences of speciation can be interpreted from the data and numerous studies have been conducted documenting both morphological evolution and speciation.
Globorotalia
Extensive research on the planktonic foraminifer Globorotalia truncatulinoides has provided insight into paleobiogeographical and paleoenvironmental studies alongside the relationship between the environment and evolution. In an extensive study of the paleobiogeography of G. truncatulinoides, researchers found evidence that suggested the formation of a new species (via the sympatric speciation framework). Cores taken of the sediment containing the three species G. crassaformis, G. tosaensis, and G. truncatulinoides found that before 2.7 Ma, only G. crassaformis and G. tosaensis existed. A speciation event occurred at that time, whereby intermediate forms existed for quite some time. Eventually G. tosaensis disappears from the record (suggesting extinction) but exists as an intermediate between the extant G. crassaformis and G. truncatulinoides. This record of the fossils also matched the already existing phylogeny constructed by morphological characters of the three species. See figure 6a.
Radiolaria
In a large study of five species of radiolarians (Calocycletta caepa, Pterocanium prismatium, Pseudocubus vema, Eucyrtidium calvertense, and Eucyrtidium matuyamai), the researchers documented considerable evolutionary change in each lineage. Alongside this, trends with the closely related species E. calvertense and E. matuyamai showed that about 1.9 Mya E. calvertense invaded a new region of the Pacific, becoming isolated from the main population. The stratigraphy of this species clearly shows that this isolated population evolved into E. matuyamai. It then reinvaded the region of the still-existing and static E. calvertense population, whereupon a sudden decrease in the body size of E. calvertense occurred. Eventually the invader E. matuyamai disappeared from the stratum (presumably due to extinction), coinciding with a halt in the size reduction of the E. calvertense population. From that point on, the change in size leveled off. The authors suggest competition-induced character displacement.
Rhizosolenia
Researchers conducted measurements on 5,000 Rhizosolenia (a planktonic diatom) specimens from eight sedimentary cores in the Pacific Ocean. The core samples spanned two million years and were dated using measurements of magnetic field reversals in the sediment. All the core samples yielded a similar pattern of divergence: a single lineage (R. bergonii) occurring before 3.1 Mya and two morphologically distinct lineages (daughter species: R. praebergonii) appearing after. The parameters used to measure the samples were consistent throughout each core. An additional study of the daughter species R. praebergonii found that, after the divergence, it invaded the Indian Ocean.
Turborotalia
A recent study involved the planktonic foraminifer Turborotalia. The authors extracted "51 stratigraphically ordered samples from a site within the oceanographically stable tropical North Pacific gyre". Two hundred individual specimens were examined using ten specific morphological traits (size, compression index, chamber aspect ratio, chamber inflation, aperture aspect ratio, test height, test expansion, umbilical angle, coiling direction, and the number of chambers in the final whorl). Utilizing multivariate statistical clustering methods, the study found that the species continued to evolve non-directionally within the Eocene from 45 Ma to about 36 Ma. However, from 36 Ma to approximately 34 Ma, the stratigraphic layers showed two distinct clusters with significantly different defining characteristics, rather than a single species. The authors concluded that speciation must have occurred and that the two new species were descended from the prior species.
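As a rough illustration of what a multivariate clustering analysis of morphological traits involves, the sketch below generates synthetic measurements for two hypothetical morphotypes and checks whether k-means recovers two groups. It is a stand-in for the general approach, not the authors' actual pipeline or data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 individuals x 10 morphological traits, drawn
# from two hypothetical morphotypes whose trait means are slightly shifted.
morphotype_a = rng.normal(loc=0.0, scale=1.0, size=(100, 10))
morphotype_b = rng.normal(loc=1.5, scale=1.0, size=(100, 10))
traits = np.vstack([morphotype_a, morphotype_b])

# Cluster the trait matrix and ask whether two groups separate cleanly.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(traits)
print(np.bincount(labels))  # roughly [100 100] if the two morphotypes separate
```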
Vertebrates
There exists evidence for vertebrate speciation despite limitations imposed by the fossil record. Studies have been conducted documenting similar patterns seen in marine invertebrates. For example, extensive research documenting rates of morphological change, evolutionary trends, and speciation patterns in small mammals has significantly contributed to the scientific literature.
A study of four mammalian genera, Hyopsodus, Pelycodus, Haplomylus (three from the Eocene), and Plesiadapis (from the Paleocene), found that, through a large number of stratigraphic layers and specimen sampling, each group exhibited "gradual phyletic evolution, overall size increase, iterative evolution of small species, and character divergence following the origin of each new lineage". The authors of this study concluded that speciation was discernible. Another study concerning morphological trends and rates of evolution found that European arvicolid rodents radiated into 52 distinct lineages over a time frame of 5 million years while documenting examples of phyletic gradualism, punctuation, and stasis.
Invertebrates
Drosophila melanogaster
William R. Rice and George W. Salt found experimental evidence of sympatric speciation in the common fruit fly. They collected a population of Drosophila melanogaster from Davis, California, and placed the pupae into a habitat maze. Newborn flies had to investigate the maze to find food. The flies had three choices to make in finding food: light versus dark (phototaxis), up versus down (geotaxis), and the scent of acetaldehyde versus the scent of ethanol (chemotaxis). This eventually divided the flies into 42 spatio-temporal habitats. They then cultured two strains that chose opposite habitats. One of the strains emerged early, immediately flying upward in the dark, attracted to the acetaldehyde. The other strain emerged late and immediately flew downward, attracted to light and ethanol. Pupae from the two strains were then placed together in the maze and allowed to mate at the food site, after which they were collected. A selective penalty was imposed on the female flies that switched habitats: none of their gametes would pass on to the next generation. After 25 generations of this mating test, the experiment showed reproductive isolation between the two strains. They repeated the experiment without the penalty against habitat switching and the result was the same: reproductive isolation was produced.
Gall wasps
A study of the gall-forming wasp species Belonocnema treatae found that populations inhabiting different host plants (Quercus geminata and Q. virginiana) exhibited different body size and gall morphology alongside a strong expression of sexual isolation. The study hypothesized that B. treatae populations inhabiting different host plants would show evidence of divergent selection promoting speciation. The researchers sampled gall wasp species and oak tree localities, measured body size (right hand tibia of each wasp), and counted gall chamber numbers. In addition to measurements, they conducted mating assays and statistical analyses. Genetic analysis was also conducted on two mtDNA sites (416 base pairs from cytochrome C and 593 base pairs from cytochrome oxidase) to "control for the confounding effects of time since divergence among allopatric populations".
In an additional study, the researchers studied two gall wasp species, B. treatae and Disholcaspis quercusvirens, and found strong morphological and behavioral variation among host-associated populations, further documenting the prerequisites to speciation.
Hawthorn fly
One example of evolution at work is the case of the hawthorn fly, Rhagoletis pomonella, also known as the apple maggot fly, which appears to be undergoing sympatric speciation. Different populations of hawthorn fly feed on different fruits. A distinct population emerged in North America in the 19th century some time after apples, a non-native species, were introduced. This apple-feeding population normally feeds only on apples and not on the historically preferred fruit of hawthorns, while the current hawthorn-feeding population does not normally feed on apples. Several lines of evidence suggest that speciation is occurring: six out of thirteen allozyme loci differ; hawthorn flies mature later in the season and take longer to mature than apple flies; and there is little evidence of interbreeding (researchers have documented a 4–6% hybridization rate).
London Underground mosquito
The London Underground mosquito is a species of mosquito in the genus Culex found in the London Underground. It evolved from the overground species Culex pipiens. This mosquito, although first discovered in the London Underground system, has been found in underground systems around the world. It is suggested that it may have adapted to human-made underground systems since the last century from local above-ground Culex pipiens, although more recent evidence suggests that it is a southern mosquito variety related to Culex pipiens that has adapted to the warm underground spaces of northern cities.
The two forms have very different behaviours, are extremely difficult to cross-mate, and have different allele frequencies, consistent with genetic drift during a founder event. More specifically, this mosquito, Culex pipiens molestus, breeds all year round, is cold intolerant, and bites rats, mice, and humans, in contrast to the above-ground species Culex pipiens, which is cold tolerant, hibernates in the winter, and bites only birds. When the two varieties were cross-bred, the eggs were infertile, suggesting reproductive isolation.
The genetic data indicate that the molestus form in the London Underground mosquito appears to have a common ancestry, rather than the population at each station being related to the nearest aboveground population (i.e. the pipiens form). Byrne and Nichols' working hypothesis was that adaptation to the underground environment had occurred locally in London only once. These widely separated populations are distinguished by very minor genetic differences, which suggest that the molestus form developed only once: a single mtDNA difference is shared among the underground populations of ten Russian cities, and a single fixed microsatellite difference occurs in populations spanning Europe, Japan, Australia, the Middle East and Atlantic islands.
Snapping shrimp and the isthmus of Panama
Debate exists over when the Isthmus of Panama closed. Much of the evidence supports a closure approximately 2.7 to 3.5 mya using "...multiple lines of evidence and independent surveys". However, a recent study suggests an earlier, transient bridge existed 13 to 15 mya. Regardless of the timing of the isthmus closure, biologists can study the species on the Pacific and Caribbean sides in what has been called "one of the greatest natural experiments in evolution". Studies of snapping shrimp in the genus Alpheus have provided direct evidence of allopatric speciation events and contributed to the literature concerning rates of molecular evolution. Phylogenetic reconstructions using "multilocus datasets and coalescent-based analytical methods" support the relationships of the species in the group, and molecular clock techniques support the separation of 15 pairs of Alpheus species between 3 and 15 million years ago.
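Molecular clock estimates of this kind rest on a simple relation: if each lineage accumulates substitutions at a calibrated rate r, a pairwise genetic distance d between two species corresponds to a divergence time of roughly t = d / (2r). The sketch below applies that relation to made-up numbers; the rate and distance are assumptions for illustration, not values from the Alpheus studies.

```python
def divergence_time_mya(pairwise_distance, rate_per_lineage_per_myr):
    """Approximate divergence time (millions of years) from a molecular clock.

    pairwise_distance: genetic distance between two species
        (substitutions per site).
    rate_per_lineage_per_myr: calibrated substitution rate per site, per
        lineage, per million years. Both lineages accumulate changes, so the
        distance grows as d = 2 * r * t, giving t = d / (2 * r).
    """
    return pairwise_distance / (2 * rate_per_lineage_per_myr)

# Illustrative assumption: a mitochondrial distance of 0.07 substitutions per
# site and a calibrated rate of 0.01 per site per lineage per million years
# imply a split roughly 3.5 million years ago.
print(divergence_time_mya(0.07, 0.01))  # 3.5
```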
Plants
The botanist Verne Grant pioneered the field of plant speciation with his research and major publications on the topic. As stated before, many biologists rely on the biological species concept, with some modern researchers utilizing the phylogenetic species concept. Debate exists in the field concerning which framework should be applied in the research. Regardless, reproductive isolation is the primary role in the process of speciation and has been studied extensively by biologists in their respective disciplines.
Both hybridization and polyploidy have also been found to be major contributors to plant speciation. With the advent of molecular markers, "hybridization [is] considerably more frequent than previously believed". In addition to these two modes leading to speciation, pollinator preference and isolation, chromosomal rearrangements, and divergent natural selection have become critical to the speciation of plants. Furthermore, recent research suggests that sexual selection, epigenetic drivers, and the creation of incompatible allele combinations caused by balancing selection also contribute to the formation of new species. Instances of these modes have been researched in both the laboratory and in nature. Studies have also suggested that, due to "the sessile nature of plants... [it increases] the relative importance of ecological speciation...."
Hybridization between two different species sometimes leads to a distinct phenotype. This phenotype can also be fitter than the parental lineage, and as such, natural selection may then favor these individuals. Eventually, if reproductive isolation is achieved, it may lead to a separate species. However, reproductive isolation between hybrids and their parents is particularly difficult to achieve, and thus hybrid speciation is considered a rare event. Nevertheless, hybridization resulting in reproductive isolation is considered an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals.
Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations. Hybridization without change in chromosome number is called homoploid hybrid speciation. It is considered very rare but has been shown in Heliconius butterflies and sunflowers. Polyploid speciation, which involves changes in chromosome number, is a more common phenomenon, especially in plant species.
Polyploidy is a mechanism that has caused many rapid speciation events in sympatry because, for example, tetraploid x diploid matings often result in sterile triploid progeny. Not all polyploids are reproductively isolated from their parental plants, and gene flow may still occur, for example through triploid hybrid x diploid matings that produce tetraploids, or matings between meiotically unreduced gametes from diploids and gametes from tetraploids. It has been suggested that many of the existing plant and most animal species have undergone an event of polyploidization in their evolutionary history. Reproduction of successful polyploid species is sometimes asexual, by parthenogenesis or apomixis, as for unknown reasons many asexual organisms are polyploid. Rare instances of polyploid mammals are known, but they most often result in prenatal death.
Researchers consider reproductive isolation as key to speciation. A major aspect of speciation research is to determine the nature of the barriers that inhibit reproduction. Botanists often consider the zoological classifications of prezygotic and postzygotic barriers as inadequate. The examples provided below give insight into the process of speciation.
Mimulus peregrinus
The creation of a new allopolyploid species of monkeyflower (Mimulus peregrinus) was observed on the banks of the Shortcleuch Water—a river in Leadhills, South Lanarkshire, Scotland. Parented from the cross of the two species Mimulus guttatus (containing 14 pairs of chromosomes) and Mimulus luteus (containing 30-31 pairs from a chromosome duplication), M. peregrinus has six copies of its chromosomes (caused by the duplication of the sterile hybrid triploid). Due to the nature of these species, they have the ability to self-fertilize. Because of its number of chromosomes it is not able to pair with M. guttatus, M. luteus, or their sterile triploid offspring. M. peregrinus will either die, producing no offspring, or reproduce with itself effectively leading to a new species.
Raphanobrassica
Raphanobrassica includes all intergeneric hybrids between the genera Raphanus (radish) and Brassica (cabbages, etc.). The Raphanobrassica is an allopolyploid cross between the radish (Raphanus sativus) and cabbage (Brassica oleracea). Plants of this parentage are now known as radicole. Two other fertile forms of Raphanobrassica are known. Raparadish, an allopolyploid hybrid between Raphanus sativus and Brassica rapa, is grown as a fodder crop. "Raphanofortii" is the allopolyploid hybrid between Brassica tournefortii and Raphanus caudatus. The Raphanobrassica is a fascinating plant, because (in spite of its hybrid nature) it is not sterile. This has led some botanists to propose that the accidental hybridization of a flower by pollen of another species in nature could be a mechanism of speciation common in higher plants.
Senecio (groundsel)
The Welsh groundsel is an allopolyploid, a plant that contains sets of chromosomes originating from two different species. Its ancestor was Senecio × baxteri, an infertile hybrid that can arise spontaneously when the closely related groundsel (Senecio vulgaris) and Oxford ragwort (Senecio squalidus) grow alongside each other. Sometime in the early 20th century, an accidental doubling of the number of chromosomes in an S. × baxteri plant led to the formation of a new fertile species.
The York groundsel (Senecio eboracensis) is a hybrid species of the self-incompatible Senecio squalidus (also known as Oxford ragwort) and the self-compatible Senecio vulgaris (also known as common groundsel). Like S. vulgaris, S. eboracensis is self-compatible; however, it shows little or no natural crossing with its parent species, and is therefore reproductively isolated, indicating that strong breeding barriers exist between this new hybrid and its parents. It resulted from a backcrossing of the F1 hybrid of its parents to S. vulgaris. S. vulgaris is native to Britain, while S. squalidus was introduced from Sicily in the early 18th century; therefore, S. eboracensis has speciated from those two species within the last 300 years.
Other hybrids descended from the same two parents are known. Some are infertile, such as S. × baxteri. Other fertile hybrids are also known, including S. vulgaris var. hibernicus, now common in Britain, and the allohexaploid S. cambrensis, which according to molecular evidence probably originated independently at least three times in different locations. Morphological and genetic evidence support the status of S. eboracensis as separate from other known hybrids.
Thale cress
Kirsten Bomblies et al. from the Max Planck Institute for Developmental Biology discovered two genes in the thale cress plant, Arabidopsis thaliana. When both genes are inherited by an individual, they trigger a reaction in the hybrid plant that turns its own immune system against it. In the parents, the genes were not detrimental, but they evolved separately to react defectively when combined. To test this, Bomblies crossed 280 genetically different strains of Arabidopsis in 861 distinct ways and found that 2 percent of the resulting hybrids were necrotic. Along with displaying the same indicators, the 20 plants also shared a comparable collection of genetic activity in a group of 1,080 genes. In almost all of the cases, Bomblies discovered that only two genes were required to cause the autoimmune response. Bomblies looked at one hybrid in detail and found that one of the two genes belonged to the NB-LRR class, a common group of disease resistance genes involved in recognizing new infections. When Bomblies removed the problematic gene, the hybrids developed normally. Over successive generations, these incompatibilities could create divisions between different plant strains, reducing their chances of successful mating and turning distinct strains into separate species.
Tragopogon (salsify)
Tragopogon is one example where hybrid speciation has been observed. In the early 20th century, humans introduced three species of salsify into North America. These species, the western salsify (Tragopogon dubius), the meadow salsify (Tragopogon pratensis), and the oyster plant (Tragopogon porrifolius), are now common weeds in urban wastelands. In the 1950s, botanists found two new species in the regions of Idaho and Washington, where the three already known species overlapped. One new species, Tragopogon miscellus, is a tetraploid hybrid of T. dubius and T. pratensis. The other new species, Tragopogon mirus, is also an allopolyploid, but its ancestors were T. dubius and T. porrifolius. These new species are usually referred to as "the Ownbey hybrids" after the botanist who first described them. The T. mirus population grows mainly by reproduction of its own members, but additional episodes of hybridization continue to add to the T. mirus population.
T. dubius and T. pratensis lived alongside each other in Europe but were never able to hybridize. A study published in March 2011 found that when these two plants were introduced to North America in the 1920s, they mated and doubled the number of chromosomes in their hybrid Tragopogon miscellus, allowing for a "reset" of its genes, which in turn allows for greater genetic variation. Professor Doug Soltis of the University of Florida said, "We caught evolution in the act...New and diverse patterns of gene expression may allow the new species to rapidly adapt in new environments".
Vertebrates
Blackcap
The bird species Sylvia atricapilla, commonly referred to as the blackcap, lives in Germany; during the winter, most individuals fly southwest to Spain while a smaller group flies northwest to Great Britain. Gregor Rolshausen from the University of Freiburg found that the genetic separation of the two populations is already in progress. The differences have arisen in about 30 generations. With DNA sequencing, individuals can be assigned to the correct group with 85% accuracy. Stuart Bearhop from the University of Exeter reported that birds wintering in England tend to mate only among themselves, and not usually with those wintering in the Mediterranean. It is still an inference to say that the populations will become two different species, but researchers expect it due to the continued genetic and geographic separation.
Mollies
The shortfin molly (Poecilia mexicana) is a small fish that lives in the Sulfur Caves of Mexico. Years of study on the species have found that two distinct populations of mollies—the dark interior fish and the bright surface water fish—are becoming more genetically divergent. The populations have no obvious barrier separating the two; however, it was found that the mollies are hunted by a large water bug (Belostoma spp.). Tobler collected the bug and both types of mollies, placed them in large plastic bottles, and put them back in the cave. After a day, it was found that, in the light, the cave-adapted fish endured the most damage, with four out of every five stab wounds coming from the water bug's sharp mouthparts. In the dark, the situation was the opposite. The mollies' senses can detect a predator's threat in their own habitat, but not in the other one. Moving from one habitat to the other significantly increases the risk of dying. Tobler plans further experiments but believes that it is a good example of the rise of a new species.
Polar bear
Natural selection, geographic isolation, and speciation in progress are illustrated by the relationship between the polar bear (Ursus maritimus) and the brown bear (Ursus arctos). The two are considered separate species throughout their ranges; however, it has been documented that they possess the capability to interbreed and produce fertile offspring. This introgressive hybridization has occurred both in the wild and in captivity and has been documented and verified with DNA testing. The oldest known fossil evidence of polar bears dates to around 130,000 to 110,000 years ago; however, molecular data has revealed varying estimates of divergence time. Mitochondrial DNA analysis has given an estimate of 150,000 years ago while nuclear genome analysis has shown an approximate divergence of 603,000 years ago. Recent research using complete genomes (rather than mtDNA or partial nuclear genomes) establishes the divergence of polar and brown bears between 479 and 343 thousand years ago. Despite the differing divergence estimates, molecular research suggests the sister species have undergone a highly complex process of speciation and admixture.
The polar bear has acquired anatomical and physiological differences from the brown bear that allow it to comfortably survive in conditions that the brown bear likely could not. Notable examples include the ability to swim sixty miles or more at a time in freezing waters, fur that blends with the snow and keeps it warm in the arctic environment, an elongated neck that makes it easier to keep its head above water while swimming, and oversized, heavily matted webbed feet that act as paddles when swimming. It has also evolved small papillae and vacuole-like suction cups on the soles of its feet to make it less likely to slip on the ice, alongside smaller ears for a reduction of heat loss, eyelids that act like sunglasses, accommodations for its all-meat diet, a large stomach capacity to enable opportunistic feeding, and the ability to fast for up to nine months while recycling its urea.
Evidence from coloration
Animal coloration provided important early evidence for evolution by natural selection, at a time when little direct evidence was available. Three major functions of coloration were discovered in the second half of the 19th century, and subsequently used as evidence of selection: camouflage (protective coloration); mimicry, both Batesian and Müllerian; and aposematism. After the circumstantial evidence provided by Darwin in On the Origin of Species, and given the absence of mechanisms for genetic variation or heredity at that time, naturalists including Darwin's contemporaries, Henry Walter Bates and Fritz Müller sought evidence from what they could observe in the field.
Mimicry and aposematism
Bates and Müller described forms of mimicry that now carry their names, based on their observations of tropical butterflies. These highly specific patterns of coloration are readily explained by natural selection, since predators such as birds which hunt by sight will more often catch and kill insects that are less good mimics of distasteful models than those that are better mimics; but the patterns are otherwise hard to explain. Darwinists such as Alfred Russel Wallace and Edward Bagnall Poulton, and in the 20th century Hugh Cott and Bernard Kettlewell, sought evidence that natural selection was taking place. The efficacy of mimicry in butterflies was demonstrated in controlled experiments by Jane Van Zandt Brower in 1958.
Camouflage
In 1889, Wallace noted that snow camouflage, especially plumage and pelage that changed with the seasons, suggested an obvious explanation as an adaptation for concealment. Poulton's 1890 book, The Colours of Animals, written during Darwinism's lowest ebb, used all the forms of coloration to argue the case for natural selection. Cott described many kinds of camouflage, mimicry and warning coloration in his 1940 book Adaptive Coloration in Animals, and in particular his drawings of coincident disruptive coloration in frogs convinced other biologists that these deceptive markings were products of natural selection. Kettlewell experimented on peppered moth evolution, showing that the species had adapted as pollution changed the environment; this provided compelling evidence of Darwinian evolution.
Evidence from behavior
Some primitive reflexes are critical for the survival of neonates. There is evidence confirming that closely related species share more similar primitive reflexes, such as the type of fur-grasping in primates and their relationship to manual dexterity. The exact selection pressures for their development are not fully determined and some reflexes are understood to have evolved multiple times independently (convergent evolution).
Evidence from mathematical modeling and simulation
Computer science allows the iteration of self-changing complex systems to be studied, providing a mathematical understanding of the nature of the processes behind evolution and evidence for the hidden causes of known evolutionary events. The evolution of specific cellular mechanisms, such as spliceosomes that can turn the cell's genome into a vast workshop of billions of interchangeable parts that can create tools that in turn create us, can now be studied for the first time in an exact way.
"It has taken more than five decades, but the electronic computer is now powerful enough to simulate evolution", assisting bioinformatics in its attempt to solve biological problems.
Computational evolutionary biology has enabled researchers to trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone. It has compared entire genomes permitting the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in speciation. It has also helped build complex computational models of populations to predict the outcome of the system over time and track and share information on an increasingly large number of species and organisms.
Future endeavors aim to reconstruct a now more complex tree of life.
Christoph Adami, a professor at the Keck Graduate Institute, made this point in Evolution of biological complexity:
To make a case for or against a trend in the evolution of complexity in biological evolution, complexity must be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that, because natural selection forces genomes to behave as a natural "Maxwell Demon", within a fixed environment, genomic complexity is forced to increase.
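As a loose, toy illustration of the idea in this quote (this is not Adami's Avida system; the bit-string genomes, fitness rule, and parameters below are invented solely for the sketch), one can watch the information a population stores about a fixed environment rise under selection:

```python
import random
from math import log2

# Toy sketch: under selection in a fixed environment, a population of bit-string
# "genomes" comes to store more information about that environment.
random.seed(0)
ENV = [random.randint(0, 1) for _ in range(32)]              # fixed "environment"
POP = [[random.randint(0, 1) for _ in ENV] for _ in range(200)]

def fitness(genome):
    # Invented fitness: number of sites matching the environment.
    return sum(g == e for g, e in zip(genome, ENV))

def stored_information(pop):
    # Summed per-site information (bits): 1 - H(p), with H the binary entropy of
    # the frequency of '1' at each site across the population.
    total = 0.0
    for site in zip(*pop):
        p = sum(site) / len(site)
        h = 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
        total += 1.0 - h
    return total

for gen in range(61):
    POP.sort(key=fitness, reverse=True)
    survivors = POP[: len(POP) // 2]                         # truncation selection
    POP = [[(1 - b) if random.random() < 0.01 else b         # point mutation
            for b in random.choice(survivors)]
           for _ in range(200)]
    if gen % 20 == 0:
        print(gen, round(stored_information(POP), 1), "bits")
# The summed information climbs toward the genome length (32 bits here) as
# selection "writes" the environment into the population's genomes.
```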
David J. Earl and Michael W. Deem, professors at Rice University, made this point in Evolvability is a selectable trait:
Not only has life evolved, but life has evolved to evolve. That is, correlations within protein structure have evolved, and mechanisms to manipulate these correlations have evolved in tandem. The rates at which the various events within the hierarchy of evolutionary moves occur are not random or arbitrary but are selected by Darwinian evolution. Sensibly, rapid or extreme environmental change leads to selection for greater evolvability. This selection is not forbidden by causality and is strongest on the largest-scale moves within the mutational hierarchy. Many observations within evolutionary biology, heretofore considered evolutionary happenstance or accidents, are explained by selection for evolvability. For example, the vertebrate immune system shows that the variable environment of antigens has provided selective pressure for the use of adaptable codons and low-fidelity polymerases during somatic hypermutation. A similar driving force for biased codon usage as a result of productively high mutation rates is observed in the hemagglutinin protein of influenza A.
"Computer simulations of the evolution of linear sequences have demonstrated the importance of recombination of blocks of sequence rather than point mutagenesis alone. Repeated cycles of point mutagenesis, recombination, and selection should allow in vitro molecular evolution of complex sequences, such as proteins." Evolutionary molecular engineering, also called directed evolution or in vitro molecular evolution involves the iterated cycle of mutation, multiplication with recombination, and selection of the fittest of individual molecules (proteins, DNA, and RNA). Natural evolution can be relived showing us possible paths from catalytic cycles based on proteins to based on RNA to based on DNA.In Vitro Molecular Evolution. Isgec.org (4 August 1975). Retrieved on 2011-12-06.
The Avida artificial life software platform has been used to explore common descent and natural selection. It has been used to demonstrate that natural selection can favor altruism, something that had been predicted but is difficult to test empirically. At the higher replication rates allowed by the platform it becomes observable.
See also
Nothing in Biology Makes Sense Except in the Light of Evolution
References
Sources
Darwin, Charles 24 November 1859. On the Origin of Species by means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life. London: John Murray, Albemarle Street. 502 pages. Reprinted: Gramercy (22 May 1995).
External links
National Academies Evolution Resources
TalkOrigins Archive – 29+ Evidences for Macroevolution: The Scientific Case for Common Descent
TalkOrigins Archive – Transitional Vertebrate Fossils FAQ
Understanding Evolution: Your one-stop source for information on evolution
National Academy Press: Teaching About Evolution and the Nature of Science
Evolution — Provided by PBS.
Evolution News from Genome News Network (GNN)
Evolution by Natural Selection — An introduction to the logic of the theory of evolution by natural selection
Howstuffworks.com — How Evolution Works
15 Evolutionary Gems
Evolutionary biology
Gibbs free energy
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol G) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure-volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as
G(p, T) = U + pV − TS = H − TS,
where:
is the internal energy of the system
is the enthalpy of the system
is the entropy of the system
is the temperature of the system
is the volume of the system
is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change (ΔG, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system to its surroundings, minus the work of the pressure forces.
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in G is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as ΔG° = ΔH° − TΔS°, where H is enthalpy, T is absolute temperature, and S is entropy.
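A quick numeric check of this defining relation, using approximate textbook values for the melting of ice assumed here only for illustration (ΔH ≈ +6.01 kJ/mol, ΔS ≈ +22.0 J/(mol·K)):

```python
# Minimal sketch: evaluate delta_G = delta_H - T * delta_S for a process.

def gibbs_change(delta_H, delta_S, T):
    """Return the Gibbs free energy change (J/mol) at temperature T (K)."""
    return delta_H - T * delta_S

delta_H = 6.01e3   # J/mol, enthalpy of fusion of ice (approximate)
delta_S = 22.0     # J/(mol K), entropy of fusion (approximate)

for T in (263.15, 273.15, 283.15):
    dG = gibbs_change(delta_H, delta_S, T)
    print(f"T = {T:.2f} K  ->  dG = {dG:+.1f} J/mol")
# dG > 0 below about 273 K (melting not spontaneous), roughly 0 at the melting
# point, and < 0 above it (melting spontaneous), as expected.
```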
Overview
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-Pressure Volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
Definitions
The Gibbs free energy is defined as
G(p, T) = H − TS,
which is the same as
G(p, T) = U + pV − TS,
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes:
dG = −S dT + V dp + Σi μi dNi + Σi Xi dai,
where:
μi is the chemical potential of the i-th chemical component. (SI unit: joules per particle or joules per mole)
Ni is the number of particles (or number of moles) composing the i-th chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by
G/N = G°/N + kT ln(p/p°),
or more conveniently as its chemical potential:
G/n = μ = μ° + RT ln(p/p°).
In non-ideal systems, fugacity comes into play.
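Under the ideal-gas assumption, the pressure dependence above amounts to a Gibbs energy change of nRT ln(p2/p1) for an isothermal change of pressure; a minimal sketch:

```python
from math import log

R = 8.314  # J/(mol K), gas constant

def delta_G_isothermal(n, T, p1, p2):
    """Gibbs energy change (J) for n moles of ideal gas taken isothermally
    from pressure p1 to p2 (both pressures in the same units)."""
    return n * R * T * log(p2 / p1)

# One mole expanded from 100 kPa to 10 kPa at 298.15 K:
print(round(delta_G_isothermal(1.0, 298.15, 100e3, 10e3)))  # about -5708 J
```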
Derivation
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
The definition of G from above is
G = U + pV − TS.
Taking the total differential, we have
dG = dU + p dV + V dp − T dS − S dT.
Replacing dU with the result from the first law for a reversible process, dU = T dS − p dV + Σi μi dNi, gives
dG = −S dT + V dp + Σi μi dNi.
The natural variables of G are then p, T, and {Ni}.
Homogeneous systems
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU:
U = TS − pV + Σi μi Ni.
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G:
G = Σi μi Ni.
This result shows that the chemical potential of a substance is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
Gibbs free energy of reactions
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is G = U + pV − TS, and an infinitesimal change in G, at constant temperature and pressure, yields
dG = dU + p dV − T dS.
By the first law of thermodynamics, a change in the internal energy U is given by
dU = δQ + δW,
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −p dV + δWx, where −p dV is the mechanical work of compression/expansion done on or by the system and δWx is all other forms of work, which may include electrical, magnetic, etc. Then
dU = δQ − p dV + δWx,
and the infinitesimal change in G is
dG = δQ − T dS + δWx.
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath), δQ ≤ T dS, and so it follows that
dG ≤ δWx.
Assuming that only mechanical work is done, this simplifies to
dG ≤ 0.
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
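A toy numerical illustration of this minimization (the standard chemical potentials below are invented placeholders): for an ideal A ⇌ B mixture at constant temperature and pressure, scanning the reaction coordinate shows G falling to a minimum, where its derivative vanishes.

```python
from math import log, exp

R, T = 8.314, 298.15
mu_A0, mu_B0 = 0.0, -2000.0          # invented standard chemical potentials (J/mol)

def G(x):
    """Molar Gibbs energy of an ideal A/B mixture at extent of conversion x (A -> B)."""
    mixing = R * T * (x * log(x) + (1 - x) * log(1 - x)) if 0 < x < 1 else 0.0
    return (1 - x) * mu_A0 + x * mu_B0 + mixing

xs = [i / 1000 for i in range(1, 1000)]
x_eq = min(xs, key=G)                # G decreases toward equilibrium, where dG/dx = 0
K = exp(-(mu_B0 - mu_A0) / (R * T))  # expected equilibrium constant for A <-> B
print(round(x_eq, 3), round(x_eq / (1 - x_eq), 2), round(K, 2))
# The composition ratio x/(1-x) at the minimum matches K (about 2.24 here).
```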
In electrochemical thermodynamics
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf ℰ, an electrical work term appears in the expression for the change in Gibbs energy:
dG = −S dT + V dp + ℰ dQele,
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination (ℰ, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
(∂ℰ/∂T) at constant p and Qele = −(∂S/∂Qele) at constant T and p.
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is
ΔQele = −n0F0,
where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by
ΔH = −n0F0(ℰ − T dℰ/dT),
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
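As a sketch of how these directly measurable quantities are used (the emf, its temperature coefficient, and n below are invented illustrative numbers, not data for any particular cell): with ΔG = −nFℰ and ΔS = nF(dℰ/dT), the reaction enthalpy follows from ΔH = ΔG + TΔS.

```python
F = 96485.0   # C/mol, Faraday constant

def cell_thermodynamics(n, emf, demf_dT, T):
    """Reaction Gibbs energy, entropy and enthalpy (per mole of reaction) from a
    cell emf and its temperature coefficient (illustrative sketch only)."""
    dG = -n * F * emf          # J/mol
    dS = n * F * demf_dT       # J/(mol K)
    dH = dG + T * dS           # J/mol
    return dG, dS, dH

# Invented numbers: emf = 1.10 V, d(emf)/dT = -4.0e-4 V/K, n = 2, T = 298.15 K
dG, dS, dH = cell_thermodynamics(2, 1.10, -4.0e-4, 298.15)
print(round(dG / 1000, 1), "kJ/mol,", round(dS, 1), "J/(mol K),", round(dH / 1000, 1), "kJ/mol")
```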
Useful identities to derive the Nernst equation
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
ΔrG = ΔrG° + RT ln Qr (see chemical equilibrium),
ΔrG° = −RT ln Keq (for a system at chemical equilibrium),
ΔrG = wele,rev = −nFEcell (for a reversible electrochemical process at constant temperature and pressure),
ΔrG° = −nFE°cell (definition of E°cell),
and rearranging gives
nFEcell = nFE°cell − RT ln Qr,
Ecell = E°cell − (RT/nF) ln Qr,
which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
wele, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F, Faraday constant (charge per mole of electrons),
Ecell, cell potential,
E°cell, standard cell potential.
Moreover, we also have
Keq = e^(−ΔrG°/RT),
which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium
Qr = Keq
and
ΔrG = 0.
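A short numeric sketch of the Nernst relation derived above, Ecell = E°cell − (RT/nF) ln Qr (the standard potential and reaction quotient below are placeholders):

```python
from math import log

R = 8.314      # J/(mol K)
F = 96485.0    # C/mol

def nernst(E_standard, n, Q, T=298.15):
    """Cell potential (V) from the standard potential, the number of electrons
    transferred n, and the reaction quotient Q, via E = E0 - (R T / n F) ln Q."""
    return E_standard - (R * T) / (n * F) * log(Q)

# Placeholder values: E0 = 1.10 V, n = 2, and a product-rich mixture with Q = 1000:
print(round(nernst(1.10, 2, 1000.0), 3), "V")   # about 1.011 V
# At equilibrium Q = K and the driving potential goes to zero, consistent with dG = 0.
```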
Standard Gibbs energy change of formation
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
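The relation ΔfG˚ = −RT ln K converts directly between formation energies and equilibrium constants; a minimal sketch with a placeholder value:

```python
from math import exp, log

R = 8.314  # J/(mol K)

def K_from_dG(dG_standard, T=298.15):
    """Equilibrium constant from a standard Gibbs energy change (J/mol)."""
    return exp(-dG_standard / (R * T))

def dG_from_K(K, T=298.15):
    """Standard Gibbs energy change (J/mol) from an equilibrium constant."""
    return -R * T * log(K)

# Placeholder value: a formation reaction with dG0 = -20 kJ/mol at 298.15 K
K = K_from_dG(-20e3)
print(f"K = {K:.2e}")                           # about 3.2e+03
print(round(dG_from_K(K) / 1000, 1), "kJ/mol")  # recovers -20.0
```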
Graphical interpretation by Gibbs
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
See also
Bioenergetics
Calphad (CALculation of PHAse Diagrams)
Critical point (thermodynamics)
Electron equivalent
Enthalpy-entropy compensation
Free entropy
Gibbs–Helmholtz equation
Grand potential
Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients
Spinodal – Spinodal Curves (Hessian matrix)
Standard molar entropy
Thermodynamic free energy
UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients
Notes and references
External links
IUPAC definition (Gibbs energy)
Gibbs Free Energy – Georgia State University
Physical quantities
State functions
Thermodynamic free energy
Applied behavior analysis
Applied behavior analysis (ABA), also called behavioral engineering, is a scientific discipline that applies the principles of learning based upon respondent and operant conditioning to change behavior of social significance. ABA is the applied form of behavior analysis; the other two are radical behaviorism (or the philosophy of the science) and the experimental analysis of behavior (or basic experimental research).
The term applied behavior analysis has replaced behavior modification because the latter approach suggested changing behavior without clarifying the relevant behavior-environment interactions. In contrast, ABA changes behavior by first assessing the functional relationship between a targeted behavior and the environment, a process known as a functional behavior assessment. Further, the approach seeks to develop socially acceptable alternatives for maladaptive behaviors, often through administering differential reinforcement contingencies.
Although service delivery providers commonly implement empirically validated interventions for individuals with autism, ABA has been utilized in a range of other areas, including applied animal behavior, organizational behavior management, substance abuse, behavior management in classrooms, acceptance and commitment therapy, and athletic exercise, among others.
ABA has been rejected or strongly criticized by most members of the autism rights movement due to the perception that it reinforces autistic people for behaving like non-autistic people and suppresses autistic traits instead of accepting autistic behaviors such as hand flapping or other visible forms of stimming. Also, some forms of ABA and its predecessors used aversives in the past, such as electric shocks.
Definition
ABA is an applied science devoted to developing procedures which will produce observable changes in behavior. It is to be distinguished from the experimental analysis of behavior, which focuses on basic experimental research, but it uses principles developed by such research, in particular operant conditioning and classical conditioning. Behavior analysis adopts the viewpoint of radical behaviorism, treating thoughts, emotions, and other covert activity as behavior that is subject to the same responses as overt behavior. This represents a shift away from methodological behaviorism, which restricts behavior-change procedures to behaviors that are overt, and was the conceptual underpinning of behavior modification.
Behavior analysts also emphasize that the science of behavior must be a natural science as opposed to a social science. As such, behavior analysts focus on the observable relationship of behavior with the environment, including antecedents and consequences, without resort to "hypothetical constructs".
History
The beginnings of ABA can be traced back to Teodoro Ayllon and Jack Michael's study "The psychiatric nurse as a behavioral engineer" (1959) that they published in the Journal of the Experimental Analysis of Behavior (JEAB). Ayllon and Michael were training the staff at a psychiatric hospital how to use a token economy based on the principles of operant conditioning for patients with schizophrenia and intellectual disability, which led researchers at the University of Kansas to start the Journal of Applied Behavior Analysis (JABA) in 1968.
A group of researchers at the University of Washington, including Donald Baer, Sidney W. Bijou, Bill Hopkins, Jay Birnbrauer, Todd Risley, and Montrose Wolf, applied the principles of behavior analysis to treat autism, manage the behavior of children and adolescents in juvenile detention centers, and organize employees who required proper structure and management in businesses. In 1968, Baer, Bijou, Risley, Birnbrauer, Wolf, and James Sherman joined the Department of Human Development and Family Life at the University of Kansas, where they founded the Journal of Applied Behavior Analysis.
Notable graduate students from the University of Washington include Robert Wahler, James Sherman, and Ivar Lovaas. Lovaas established the UCLA Young Autism Project while teaching at the University of California, Los Angeles. In 1965, Lovaas published a series of articles that described a pioneering investigation of the antecedents and consequences that maintained a problem behavior, including the use of electric shock on autistic children to suppress stimming and meltdowns (described as "self-stimulatory behavior" and "tantrum behaviors" respectively) and to coerce "affectionate" behavior, and relied on the methods of errorless learning which was initially used by Charles Ferster to teach nonverbal children to speak. Lovaas also described how to use social (secondary) reinforcers, teach children to imitate, and what interventions (including electric shocks) may be used to reduce aggression and life-threatening self-injury.
In 1987, Lovaas published the study, "Behavioral treatment and normal educational and intellectual functioning in young autistic children". The experimental group in this study received an average of 40 hours per week in a 1:1 teaching setting at a table using errorless discrete trial training (DTT). The treatment is done at home with parents involved, and the curriculum is highly individualized with a heavy emphasis on teaching eye contact, fine and gross motor imitation, academics, and language. Aversives and reinforcement were used to motivate learning and reduce non-desired behaviors. Early development of the therapy in the 1960s involved use of electric shocks, scolding, and the withholding of food. By the time the children were enrolled in this study, such aversives were abandoned, and a loud "no", electric shock, or slap to the thigh were used only as a last resort to reduce aggressive and self-stimulatory behaviors. The outcome of this study indicated 47% of the experimental group (9/19) went on to lose their autism diagnosis and were described as indistinguishable from their typically developing adolescent peers. This included passing general education without assistance and forming and maintaining friendships. These gains were maintained as reported in the 1993 study, "Long-term outcome for children with autism who received early intensive behavioral treatment". Lovaas' work went on to be recognized by the US Surgeon General in 1999, and his research was replicated in university and private settings. The "Lovaas Method" went on to become known as early intensive behavioral intervention (EIBI).
Over the years, "behavior analysis" gradually superseded "behavior modification"; that is, from simply trying to alter problematic behavior, behavior analysts sought to understand the function of that behavior, what reinforcement histories (i.e., attention seeking, escape, sensory stimulation, etc.) promote and maintain it, and how it can be replaced by successful behavior. ABA's prioritization of compliance and behavioral modification over an individual's needs can lead to harmful consequences, including prompt dependency, loss of intrinsic motivation, and even psychological trauma. Curtailing of self-soothing behaviors is potentially classifiable as a form of abuse.
While ABA seems to be intrinsically linked to autism intervention, it is also used in a broad range of other areas. Recent notable areas of research in the Journal of Applied Behavior Analysis include autism, classroom instruction with typically developing students, pediatric feeding therapy, and substance use disorders. Other applications of ABA include applied animal behavior, consumer behavior analysis, forensic behavior analysis, behavioral medicine, behavioral neuroscience, clinical behavior analysis, organizational behavior management, schoolwide positive behavior interventions and support, and contact desensitization for phobias.
Characteristics
Baer, Wolf, and Risley's 1968 article is still used as the standard description of ABA. It lists the following seven characteristics of ABA. Another resource for the characteristics of applied behavior analysis is the textbook Behavior Modification: Principles and Procedures.
Applied: ABA focuses on the social significance of the behavior studied. For example, a non-applied researcher may study eating behavior because this research helps to clarify metabolic processes, whereas the applied researcher may study eating behavior in individuals who eat too little or too much, trying to change such behavior so that it is more acceptable to the persons involved. It is also based on trying to improve the everyday life of clients that are receiving it.
Behavioral: ABA is pragmatic; it asks how it is possible to get an individual to do something effectively. To answer this question, the behavior itself must be objectively measurable and observable. This is designed so that when someone is trying to determine a target behavior, it is able to be observed and understood by anyone. Verbal descriptions are treated as behavior in themselves, and not as substitutes for the behavior described.
Analytic: Behavior analysis is successful when the analyst understands and can manipulate the events that control a target behavior. This may be relatively easy to do in the lab, where a researcher is able to arrange the relevant events, but it is not always easy, or ethical, in an applied situation. In order to consider something to fall under the spectrum of analytic, it must demonstrate a functional relationship and it must be provable. Baer et al. outline two methods that may be used in applied settings to demonstrate control while maintaining ethical standards. These are the reversal design and the multiple baseline design. In the reversal design, the experimenter first measures the behavior of choice, introduces an intervention, and then measures the behavior again. Then, the intervention is removed, or reduced, and the behavior is measured yet again. The intervention is effective to the extent that the behavior changes and then changes back in response to these manipulations. The multiple baseline method may be used for behaviors that seem irreversible. Here, several behaviors are measured and then the intervention is applied to each in turn. The effectiveness of the intervention is revealed by changes in just the behavior to which the intervention is being applied.
Technological: The description of analytic research must be clear and detailed, so that any competent researcher can repeat it accurately. The goal is to make sure that anyone can implement and understand what is being explained. Cooper et al. describe a good way to check this: Have a person trained in applied behavior analysis read the description and then act out the procedure in detail. If the person makes any mistakes or has to ask any questions then the description needs improvement.
Conceptually Systematic: Behavior analysis should not simply produce a list of effective interventions. Rather, to the extent possible, these methods should be grounded in the principles of applied behavioral analysis. This is aided by the use of theoretically meaningful terms, such as "secondary reinforcement" or "errorless discrimination" where appropriate.
Effective: Though analytic methods should be theoretically grounded, they must be effective. Interventions also must be relevant to the client and/or culture. An analyst must ask themselves if the intervention is working. The intervention must also produce a positive change. If an intervention does not produce a large enough effect for practical use, then the analysis has failed.
Generality: Behavior analysts should aim for interventions that are generally applicable; the methods should work in different environments, apply to more than one specific behavior, and have long-lasting effects. This generalizability should be implemented from the very beginning of the intervention. When first starting a new intervention, it is a good idea for that to take place in a natural environment for the client.
Other proposed characteristics
In 2005, Heward et al. suggested the addition of the following five characteristics:
Accountable: To be accountable means that ABA must be able to demonstrate that its methods are effective. This requires repeatedly measuring the effect of interventions (success, failure or no effect at all), and, if necessary, making changes that improve their effectiveness.
Public: The methods, results, and theoretical analyses of ABA must be published and open to scrutiny. There are no hidden treatments or mystical, metaphysical explanations.
Doable: To be generally useful, interventions should be available to a variety of individuals, who might be teachers, parents, therapists, or even those who wish to modify their own behavior. With proper planning and training, many interventions can be applied by almost anyone willing to invest the effort.
Empowering: ABA provides tools that give the practitioner feedback on the results of interventions. These allow clinicians to assess their skill level and build confidence in their effectiveness.
Optimistic: According to several leading authors, behavior analysts have cause to be optimistic that their efforts are socially worthwhile, for the following reasons:
The behaviors impacted by behavior analysis are largely determined by learning and controlled by manipulable aspects of the environment.
Practitioners can improve performance by direct and continuous measurements.
As a practitioner uses behavioral techniques with positive outcomes, they become more confident of future success.
The literature provides many examples of success in teaching individuals considered previously unteachable.
Use as therapy for autism
Although BCBA certification does not require any autism training, a large majority of ABA practitioners specialize in autism, and ABA itself is often mistakenly considered synonymous with therapy for autism. Practitioners often use ABA-based techniques to teach adaptive behaviors to, or diminish challenging behaviors presented by, individuals with autism.
Despite many years of research indicating that early intensive behavioral intervention—the traditional form of ABA that relies on discrete trial training—improves the intellectual performance of those with ASD, most of these studies lack random assignment and there is need for larger sample sizes. A 2018 Cochrane review of five controlled trials found weak evidence indicating that ABA may be effective for some autistic children, noting a high risk of bias in the studies included in the review. The effectiveness of ABA therapies for autism may be overall limited by diagnostic severity, age of intervention, and IQ. Despite this, however, ABA has nevertheless been recommended for people with intellectual disabilities.
In 2018, a meta-analysis in the Cochrane database concluded that some recent research is beginning to suggest that, because of the heterogeneity of ASD, there are two different ABA teaching approaches to acquiring spoken language: children with higher receptive language skills respond to 2.5–20 hours per week of the naturalistic approach, whereas children with lower receptive language skills need 25 hours per week of discrete trial training—the structured and intensive form of ABA. A 2023 multi-site randomized controlled trial of 164 participants showed similar findings.
Quality of evidence
Conflicts of interest, methodological concerns, and a high risk of bias pervade most ABA studies. A 2019 meta-analysis noted that "methodological rigor remains a pressing concern" in research into ABA's use as therapy for autism; while the authors found some evidence in favour of behavioral interventions, the effects disappeared when they limited the scope of their review to randomized controlled trial designs and outcomes for which there was no risk of detection bias.
One study revealed extensive undisclosed conflicts of interest (COI) in published ABA studies. 84% of studies published in top behavioral journals over a period of one year had at least one author with a COI involving their employment, either as an ABA clinical provider or a training consultant to ABA clinical providers. However, only 2% of these studies disclosed the COI.
Low-quality evidence is likewise a concern in some research reporting on the potential harms of ABA on autistic children.
Another concern is that ABA research only measures behavior as a means of success, which has led to a lack of qualitative research about autistic experiences of ABA, a lack of research examining the internal effects of ABA and a lack of research for autistic children who are non-speaking or have comorbid intellectual disabilities (which is concerning considering this is one of the major populations that intensive ABA focuses on). Research is also lacking about whether ABA is effective long-term and very little longitudinal outcomes have been studied.
Ethical concerns
Researchers and advocates have denounced the ABA ethical code as too lenient, citing its failure to restrict or clarify the use of aversives, the absence of an autism or child development education requirement for ABA therapists, and its emphasis on parental consent rather than the consent of the person receiving services. This emphasis on parental consent stems from ABA viewing the parent as the client, a stance which has been criticized for centering benefits to the parent, not the child, in behavioral interventions. Numerous researchers have argued that ABA is abusive and can increase symptoms of post-traumatic stress disorder (PTSD) in people undergoing the intervention. Some bioethicists argue that employing ABA violates the principles of justice and nonmaleficence and infringes on the autonomy of both autistic children and their parents.
Two 2020 reviews found that very few studies directly reported on or investigated possible harms; although a significant number of studies mentioned adverse events in their analysis of why people withdrew from them, there was no effort to monitor or collect data on adverse outcomes.
Justin B. Leaf and others examined and responded to several of these criticisms of ABA in three papers published in 2018, 2019, and 2022, respectively, in which they questioned the evidence for such criticisms, concluding that the claim that all ABA is abusive has no basis in the published literature. Others have published similar responses.
Use of aversives
Lovaas incorporated aversives into some of the ABA practices he developed, including employing electric shocks, slapping, and shouting to modify undesirable behavior. Although the use of aversives in ABA became less common over time, and in 2012 their use was described as inconsistent with contemporary practice, aversives persisted in some ABA programs. In comments made in 2014 to the US Food and Drug Administration (FDA), a clinician previously employed by the Judge Rotenberg Educational Center claimed that "all textbooks used for thorough training of applied behavior analysts include an overview of the principles of punishment, including the use of electrical brain stimulation."
Views of the autistic community
Proponents of neurodiversity dispute the value of eliminating autistic behaviors, maintaining that it forces autistic people to mask their true personalities and conform to a narrow conception of normality. Masking is associated with suicidality and poor long-term mental health. Some autistic advocates contend that it is cruel to try to make autistic people behave as if they were non-autistic without consideration for their well-being, criticizing ABA's framing of autism as a tragedy in need of treatment. Instead, these critics advocate for increased social acceptance of harmless autistic traits and therapies focused on improving quality of life. The Autistic Self Advocacy Network, for example, campaigns against the use of ABA in autism. The European Council of Autistic People (EUCAP) published a 2024 position statement expressing deep concern about the harm caused by ABA being overlooked. They emphasize that most surveyed autistic individuals view ABA as harmful, abusive, and counterproductive to their well-being. EUCAP advocates for a variety of support methods and the inclusion of autistic individuals in decision-making processes regarding their care.
A 2020 study examined perspectives of autistic adults that received ABA as children and found that the overwhelming majority reported that "behaviorist methods create painful lived experiences", that ABA led to the "erosion of the true actualizing self", and that they felt they had a "lack of self-agency within interpersonal experiences".
Concepts
Behavior
Behavior refers to the movement of some part of an organism that changes some aspect of the environment. Often, the term behavior refers to a class of responses that share physical dimensions or functions, and in that case a response is a single instance of that behavior. If a group of responses have the same function, this group may be called a response class. Repertoire refers to the various responses available to an individual; the term may refer to responses that are relevant to a particular situation, or it may refer to everything a person can do.
Operant conditioning
Operant behavior is the so-called "voluntary" behavior that is sensitive to, or controlled by its consequences. Specifically, operant conditioning refers to the three-term contingency that uses stimulus control, in particular an antecedent contingency called the discriminative stimulus (SD) that influences the strengthening or weakening of behavior through such consequences as reinforcement or punishment. The term is used quite generally, from reaching for a candy bar, to turning up the heat to escape an aversive chill, to studying for an exam to get good grades.
Respondent (classical) conditioning
Respondent (classical) conditioning is based on innate stimulus-response relationships called reflexes. In his experiments with dogs, Pavlov usually used the salivary reflex, namely salivation (unconditioned response) following the taste of food (unconditioned stimulus). Pairing a neutral stimulus, for example a bell (conditioned stimulus), with food caused the dog to salivate (conditioned response). Thus, in classical conditioning, the conditioned stimulus becomes a signal for a biologically significant consequence. Note that in respondent conditioning, unlike operant conditioning, the response does not produce a reinforcer or punisher (e.g., the dog does not get food because it salivates).
Reinforcement
Reinforcement is the key element in operant conditioning and in most behavior change programs. It is the process by which behavior is strengthened. If a behavior is followed closely in time by a stimulus and this results in an increase in the future frequency of that behavior, then the stimulus is a positive reinforcer. If the removal of an event serves as a reinforcer, this is termed negative reinforcement. There are multiple schedules of reinforcement that affect the future probability of behavior. One account of reinforcement in practice reads: "[H]e would get Beth to comply by hugging him and giving her food as a reward."
Punishment
Punishment is a process by which a consequence immediately follows a behavior which decreases the future frequency of that behavior. As with reinforcement, a stimulus can be added (positive punishment) or removed (negative punishment). Broadly, there are three types of punishment: presentation of aversive stimuli (e.g., pain), response cost (removal of desirable stimuli as in monetary fines), and restriction of freedom (as in a 'time out'). Punishment in practice can often result in unwanted side effects. Some other potential unwanted effects include resentment over being punished, attempts to escape the punishment, expression of pain and negative emotions associated with it, and the punished individual coming to associate the punishment with the person delivering it. ABA therapists state that punishment is used infrequently, as a last resort, or when the behavior poses a direct threat.
Extinction
Extinction is the technical term to describe the procedure of withholding/discontinuing reinforcement of a previously reinforced behavior, resulting in the decrease of that behavior. The behavior is then set to be extinguished (Cooper et al.). Extinction procedures are often preferred over punishment procedures, as many punishment procedures are deemed unethical and in many states prohibited. Nonetheless, extinction procedures must be implemented with utmost care by professionals, as they are generally associated with extinction bursts. An extinction burst is the temporary increase in the frequency, intensity, and/or duration of the behavior targeted for extinction. Other characteristics of an extinction burst include (a) extinction-produced aggression—the occurrence of an emotional response to an extinction procedure, often manifested as aggression—and (b) extinction-induced response variability—the occurrence of novel behaviors that did not typically occur prior to the extinction procedure. These novel behaviors are a core component of shaping procedures.
Discriminated operant and three-term contingency
In addition to a relation being made between behavior and its consequences, operant conditioning also establishes relations between antecedent conditions and behaviors. This differs from S–R formulations (if A, then B), replacing them with an AB-because-of-C formulation. In other words, the relation between a behavior (B) and its context (A) exists because of consequences (C); more specifically, the relationship between A and B is established by prior consequences that have occurred in similar contexts. This antecedent–behavior–consequence contingency is termed the three-term contingency. A behavior which occurs more frequently in the presence of an antecedent condition than in its absence is called a discriminated operant. The antecedent stimulus is called a discriminative stimulus (SD). The fact that the discriminated operant occurs only in the presence of the discriminative stimulus is an illustration of stimulus control. More recently, behavior analysts have also focused on conditions that occur prior to the immediate circumstances of the behavior of concern and that increase or decrease the likelihood of the behavior occurring. These conditions have been referred to variously as "setting events", "establishing operations", and "motivating operations" by various researchers in their publications.
Verbal behavior
B. F. Skinner's classification system of behavior analysis has been applied to treatment of a host of communication disorders. Skinner's system includes:
Tact – a verbal response evoked by a non-verbal antecedent and maintained by generalized conditioned reinforcement.
Mand – behavior under control of motivating operations maintained by a characteristic reinforcer.
Intraverbals – verbal behavior for which the relevant antecedent stimulus was other verbal behavior, but which does not share the response topography of that prior verbal stimulus (e.g., responding to another speaker's question).
Autoclitic – secondary verbal behavior which alters the effect of primary verbal behavior on the listener. Examples involve quantification, grammar, and qualifying statements (e.g., the differential effects of "I think..." vs. "I know...")
Skinner's account of verbal behavior was famously critiqued by the linguist Noam Chomsky, who argued at length that a behavioral view of language cannot explain the complexity of human language. On this view, while behaviorist techniques can be used to teach language, they are a poor framework for explaining the fundamentals of language. Considering Chomsky's critique, some argue it may be more appropriate for language to be taught by a speech-language pathologist rather than a behaviorist.
For an assessment of verbal behavior from Skinner's system, see Assessment of Basic Language and Learning Skills.
Measuring behavior
When measuring behavior, there are both dimensions of behavior and quantifiable measures of behavior. In applied behavior analysis, the quantifiable measures are a derivative of the dimensions. These dimensions are repeatability, temporal extent, and temporal locus.
Repeatability
Response classes occur repeatedly throughout time—i.e., how many times the behavior occurs.
Count is the number of occurrences in behavior.
Rate/frequency is the number of instances of behavior per unit of time.
Celeration is the measure of how the rate changes over time.
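As a rough illustration of these repeatability measures, the following Python sketch computes count, rate, and a simple celeration estimate; the session data, the per-minute time base, and the "per week" celeration convention are invented for illustration and are not drawn from any ABA software or dataset.

sessions = [
    {"day": 1, "count": 12, "minutes": 30},  # 12 responses observed in 30 minutes
    {"day": 8, "count": 24, "minutes": 30},  # one week later
]

def rate_per_minute(count, minutes):
    # Rate: number of responses divided by observation time.
    return count / minutes

first, last = sessions[0], sessions[-1]
rate_first = rate_per_minute(first["count"], first["minutes"])  # 0.4 responses/min
rate_last = rate_per_minute(last["count"], last["minutes"])     # 0.8 responses/min

# Celeration: how the rate changes over time, expressed here as the factor
# by which the rate changed per week (x2.0 per week in this invented case).
weeks = (last["day"] - first["day"]) / 7
celeration = (rate_last / rate_first) ** (1 / weeks)

print(f"count={last['count']}, rate={rate_last:.2f}/min, celeration=x{celeration:.1f}/week")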
Temporal extent
Temporal extent concerns the amount of time a behavior occupies, measured as its duration. Schirmer, Meck, and Penney explore the timing of temporal information, that is, the rhythm and duration of behavior. Emotional meaning is conveyed by the duration of a behavior in correspondence with body and vocal expressions. Using the striatal beat frequency (SBF) model, they highlight the essential role of the striatum in timing, synchronizing cortical oscillations. At the onset of an event, ventral tegmental inputs reset the cortical phase, which initiates timing. During the event, the oscillations are monitored by neurons that identify unique phase patterns for different behavior durations. When the event finishes, the striatum decodes these patterns to aid memory storage and the comparison of event durations. Researchers have also described socio-temporal processes that attach social meaning to time, allowing social significance to affect how the timing of acts is perceived.
Temporal locus
Temporal locus refers to when a behavior occurs relative to other events. Latency specifically measures the time that elapses between the onset of a stimulus and the behavior that follows. This is important in behavioral research because it quantifies how quickly an individual responds to external stimuli, providing insights into their perceptual and cognitive processing rates. Two measurements define temporal locus: response latency and interresponse time.
For example, children being treated with morphine exhibit longer response latencies in delayed matching of a simple task, and these children seem to have greater difficulty with social ability. This means that such children require more time to recall information when given the stimulus.
Interresponse time refers to the amount of time that elapses between two consecutive instances of a behavior, and it helps in understanding the pattern and frequency of a behavior over a period of time. Use of psychiatric medications may reduce the rate of response and, correspondingly, lengthen interresponse times. As responding declines, these medications may effectively reduce interest as well.
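The two temporal locus measures can be illustrated with a short Python sketch; all timestamps below are invented for illustration rather than taken from an actual observation session.

stimulus_time = 0.0                      # moment the stimulus is presented (seconds)
response_times = [2.5, 9.0, 14.5, 23.0]  # moments each response occurred (seconds)

# Response latency: time from the stimulus to the first response.
latency = response_times[0] - stimulus_time

# Interresponse times: time elapsed between consecutive responses.
irts = [later - earlier for earlier, later in zip(response_times, response_times[1:])]
mean_irt = sum(irts) / len(irts)

print(f"latency={latency:.1f} s, IRTs={irts}, mean IRT={mean_irt:.1f} s")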
Derivative measures
Derivative measures are additional metrics derived from primary data, often by combining or transforming dimensional quantities to offer deeper insights into a phenomenon. Despite not being directly tied to specific dimensions, these measures provide valuable supplemental information. In applied behavior analysis (ABA), for example, percentage is a derivative measure that quantifies the ratio of specific responses to total responses, offering a nuanced understanding of behavior and assisting in evaluating progress and intervention effectiveness.
Trials-to-criterion, another ABA derivative measure, tracks the number of response opportunities needed to achieve a set level of performance. This metric aids behavior analysts in assessing skill acquisition and mastery, influencing decisions on program adjustments and teaching methods.
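A minimal Python sketch of the two derivative measures described above; the trial outcomes and the three-consecutive-correct mastery criterion are invented for illustration.

trials = [False, False, True, True, False, True, True, True, True, True]

# Percentage: ratio of correct responses to total response opportunities.
percentage_correct = 100 * sum(trials) / len(trials)  # 70.0

# Trials-to-criterion: number of trials needed until a preset mastery
# criterion is met, here defined (arbitrarily) as three consecutive correct responses.
def trials_to_criterion(outcomes, consecutive_needed=3):
    streak = 0
    for trial_number, correct in enumerate(outcomes, start=1):
        streak = streak + 1 if correct else 0
        if streak >= consecutive_needed:
            return trial_number
    return None  # criterion not yet met in the observed trials

print(percentage_correct, trials_to_criterion(trials))  # 70.0 8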
Applied behavior analysis relies on meticulous measurement and impartial evaluation of observable behavior as a foundational principle. Without accurate data collection and analysis, behavior analysts lack the essential information to assess intervention effectiveness and make informed decisions about program modifications. Therefore, precise measurement and assessment play a pivotal role in ABA practice, guiding practitioners to enhance behavioral outcomes and drive significant change.
Behavior analysts utilize a few distinct techniques to gather information. Some of the ways of collecting data include:
Frequency
This technique refers to the number of times that a target behavior was observed and counted. In the published article On Terms: Frequency and Rate in Applied Behavior Analysis, the authors state that two major texts, one from the Behavior Analyst Certification Board, pair the word "frequency" with two different terms—one text pairing it with "count" and the other with "rate". Despite one major text using the word "count" interchangeably with "frequency", both texts advise readers not to report counts of behavior without referencing the time base of the observation. When that advice is followed, the count and time information together provide a rate. The authors of the article suggest that, when assessing behavior measurement in applied behavior analysis (ABA), the term "rate" should be used instead of "count" to refer to frequency. Any references to counts without information about observation time should be avoided.
In the Annals of Clinical Psychiatry article Applied Behavioral Analytic Interventions for children with Autism: A Description and Review of Treatment Research, the authors point out how frequency is used to keep track of adaptive and maladaptive behaviors. By doing so, ABA therapists and clinicians are able to create a customized program for the patient. The authors note that tracked frequency—specifically the frequency of requesting behaviors during play, language, imitation, and socialization—can also serve as a variable for predicting treatment outcome.
Rate
The same as frequency, but expressed within a specified period of time.
Duration
This measurement refers to the amount of time that someone engaged in a behavior.
Fluency
Fluency is a gauge of how smoothly a behavior is performed. It is associated with behaviors that are used over a long duration and can be performed with confidence. Three outcomes are associated with fluency:
The ability to retain the behavior or action
Maintain the behavior while there are disruptions
The ability to transfer the behavior to other applications
Fluency increases the response speed and accuracy of a behavior. However, when a new stimulus different from the practiced one is introduced, responding becomes slower or less accurate, with more false alarms. Fluency relies on repeated action, so the effort required for the behavior is lessened to the point where the individual can focus more on other aspects of the behavior.
There are two types of approaches to fluency:
Unassisted approach - Individual practice of a certain behavior. A target for response speed and accuracy is set within a timeframe and readjusted according to difficulty.
Assisted approach - Behavior practiced with the assistance of a teacher or another individual.
In the unassisted approach, the learner ultimately needs to demonstrate the achieved target behavior to someone. The assisted approach has the limitation of requiring another individual to assist, which can be time-consuming for both people.
Response latency
Latency refers to how much time elapses after a particular stimulus has been presented before the target behavior occurs.
Analyzing behavior change
Experimental control
In applied behavior analysis, all experiments should include the following:
At least one participant
At least one behavior (dependent variable)
At least one setting
A system for measuring the behavior and ongoing visual analysis of data
At least one treatment or intervention condition
Manipulations of the independent variable so that its effects on the dependent variable may be quantitatively or qualitatively analyzed
An intervention that will benefit the participant in some way (behavioral cusp)
Methodologies developed through ABA research
Task analysis
Task analysis is a process in which a task is analyzed into its component parts so that those parts can be taught through the use of chaining: forward chaining, backward chaining and total task presentation. Task analysis has been used in organizational behavior management, a behavior analytic approach to changing the behaviors of members of an organization (e.g., factories, offices, or hospitals). Behavioral scripts often emerge from a task analysis. Bergan conducted a task analysis of the behavioral consultation relationship and Thomas Kratochwill developed a training program based on teaching Bergan's skills. A similar approach was used for the development of microskills training for counselors. Ivey would later call this "behaviorist" phase a very productive one and the skills-based approach came to dominate counselor training during 1970–90. Task analysis was also used in determining the skills needed to access a career. In education, Engelmann (1968) used task analysis as part of the methods to design the direct instruction curriculum.
Chaining
The skill to be learned is broken down into small units for easy learning. For example, a person learning to brush teeth independently may start with learning to unscrew the toothpaste cap. Once they have learned this, the next step may be squeezing the tube, etc.
For problem behavior, chains can also be analyzed and the chain can be disrupted to prevent the problem behavior. Some behavior therapies, such as dialectical behavior therapy, make extensive use of behavior chain analysis, although dialectical behavior therapy is not philosophically behavior analytic.
Two commonly used types of chaining in ABA are forward chaining and backward chaining. Forward chaining starts with the first step and continues until the final step, while backward chaining begins with the last step and moves backward to the first step.
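As a rough sketch, the following Python snippet shows a hypothetical task analysis for tooth brushing and the order in which steps become teaching targets under forward versus backward chaining (in backward chaining, earlier steps are typically completed or prompted by the teacher while the learner first performs the final step). The step list is illustrative only.

# A task analysis broken into component steps; in practice the steps would
# be tailored to the individual learner.
task_analysis = [
    "unscrew the toothpaste cap",
    "squeeze toothpaste onto the brush",
    "brush all tooth surfaces",
    "rinse mouth and brush",
]

forward_chaining_order = list(task_analysis)             # teach step 1 first, then step 2, ...
backward_chaining_order = list(reversed(task_analysis))  # teach the final step first

print("Forward chaining teaches steps in this order:", forward_chaining_order)
print("Backward chaining teaches steps in this order:", backward_chaining_order)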
Prompting
A prompt is a cue that is used to encourage a desired response from an individual. Prompts are often categorized into a prompt hierarchy from most intrusive to least intrusive, although there is some controversy about what is considered most intrusive: prompts that are physically intrusive or prompts that are hardest to fade (e.g., verbal). In order to minimize errors and ensure a high level of success during learning, prompts are given in a most-to-least sequence and faded systematically. During this process, prompts are faded as quickly as possible so that the learner does not come to depend on them and eventually behaves appropriately without prompting.
Types of prompts
Prompters might use any or all of the following to suggest the desired response:
Vocal prompts: Words or other vocalizations
Visual prompts: A visual cue or picture
Gestural prompts: A physical gesture
Positional prompt: e.g., the target item is placed close to the individual.
Modeling: Modeling the desired response. This type of prompt is best suited for individuals who learn through imitation and can attend to a model.
Physical prompts: Physically manipulating the individual to produce the desired response. There are many degrees of physical prompts, from quite intrusive (e.g., the teacher places a hand on the learner's hand) to minimally intrusive (e.g., a slight tap).
This is not an exhaustive list of prompts; the nature, number, and order of prompts are chosen to be the most effective for a particular individual.
Fading
The overall goal is for an individual to eventually not need prompts. As an individual gains mastery of a skill at a particular prompt level, the prompt is faded to a less intrusive prompt. This ensures that the individual does not become overly dependent on a particular prompt when learning a new behavior or skill.
One of the primary decisions made when teaching a new behavior is how to fade the prompts. A plan should be set up to fade the prompts in an organized fashion. For instance, fading the physical prompt of guiding a child's hands might follow this sequence: (a) supporting the wrists, (b) touching the hands lightly, (c) touching the forearm or elbow, and (d) withdrawing physical contact altogether. Fading ensures that the child does not become overly dependent on a specific prompt while mastering a new skill.
Thinning a reinforcement schedule
Thinning is often confused with fading. Fading refers to a prompt being removed, whereas thinning refers to an increase in the time or number of responses required between reinforcements. Periodic thinning that produces a 30% decrease in reinforcement has been suggested as an efficient way to thin. Schedule thinning is often an important and neglected issue in contingency management and token economy systems, especially when these are developed by unqualified practitioners (see professional practice of behavior analysis).
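One way to make the arithmetic concrete is the following Python sketch; it is only an interpretation of the 30% figure mentioned above (each thinning step reducing reinforcement density by roughly 30%), not a prescribed protocol, and the starting fixed-ratio schedule is invented.

requirement = 2.0   # start at FR2: reinforce every 2nd response (invented starting point)

for step in range(4):
    density = 100 / requirement  # reinforcers earned per 100 responses
    print(f"step {step}: FR{requirement:.1f} (~{density:.0f} reinforcers per 100 responses)")
    # Cutting reinforcement density by 30% means dividing it by 0.7, which is
    # the same as multiplying the response requirement by roughly 1.43.
    requirement /= 0.7

# In practice the requirement would be rounded to a whole number of responses
# and only thinned further once responding remains stable at the current schedule.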
Generalization
Generalization is the expansion of a student's performance ability beyond the initial conditions set for acquisition of a skill. Generalization can occur across people, places, and materials used for teaching. For example, once a skill is learned in one setting, with a particular instructor, and with specific materials, the skill is taught in more general settings with more variation from the initial acquisition phase. For instance, if a student has successfully mastered learning colors at the table, the teacher may take the student around the house or school and generalize the skill in these more natural environments with other materials. Behavior analysts have spent a considerable amount of time studying factors that lead to generalization.
Shaping
Shaping involves gradually modifying the existing behavior into the desired behavior. If the student engages with a dog by hitting it, then they could have their behavior shaped by reinforcing interactions in which they touch the dog more gently. Over many interactions, successful shaping would replace the hitting behavior with patting or other gentler behavior. Shaping is based on a behavior analyst's thorough knowledge of operant conditioning principles and extinction. Recent efforts to teach shaping have used simulated computer tasks.
One teaching technique found to be effective with some students, particularly children, is the use of video modeling (the use of taped sequences as exemplars of behavior). It can be used by therapists to assist in the acquisition of both verbal and motor responses, in some cases for long chains of behavior.
Another example of shaping is a toddler learning to walk. The child is reinforced for crawling, then standing, taking a few steps, and eventually walking. While learning to walk, the child is praised with claps and excitement.
Interventions based on an FBA
Functional behavioral assessment (FBA) is an individualized problem-solving process that may be used to address problem behavior. An evaluation is initiated to identify the causes of a problem behavior. This interactive evaluation includes gathering data about the environmental circumstances that occur prior to an identified problem behavior and the resulting rewards that reinforce the behavior. The collected data are then used to identify and implement individualized interventions aimed at lessening problem behaviors and expanding positive behavioral outcomes.
Critical to behavior analytic interventions is the concept of a systematic behavioral case formulation with a functional behavioral assessment or analysis at the core. This approach should apply a behavior analytic theory of change (see Behavioral change theories). This formulation should include a thorough functional assessment, a skills assessment, a sequential analysis (behavior chain analysis), an ecological assessment, a look at existing evidence-based behavioral models for the problem behavior (such as Fordyce's model of chronic pain) and then a treatment plan based on how environmental factors influence behavior. Some argue that behavior analytic case formulation can be improved with an assessment of rules and rule-governed behavior. Some of the interventions that result from this type of conceptualization involve training specific communication skills to replace the problem behaviors as well as specific setting, antecedent, behavior, and consequence strategies.
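As a toy illustration of the kind of antecedent–behavior–consequence (ABC) data gathered during a functional behavioral assessment, the following Python sketch tallies recorded consequences to suggest a hypothesis about what maintains the behavior; all records are invented for illustration.

from collections import Counter

# Each record notes what happened right before the behavior (antecedent),
# the behavior itself, and what followed it (consequence).
abc_records = [
    {"antecedent": "asked to do homework", "behavior": "screams", "consequence": "task removed"},
    {"antecedent": "asked to clean room", "behavior": "screams", "consequence": "task removed"},
    {"antecedent": "parent on the phone", "behavior": "screams", "consequence": "attention given"},
]

# Tallying consequences suggests a hypothesis about what reinforces the
# behavior; in this invented data, escape from demands appears most often.
consequence_counts = Counter(record["consequence"] for record in abc_records)
print(consequence_counts.most_common())  # [('task removed', 2), ('attention given', 1)]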
Other species
ABA has been successfully used in other species. Morris uses ABA to reduce feather-plucking in the black vulture (Coragyps atratus).
Major journals
Applied behavior analysts publish in many journals. Some examples of "core" behavior analytic journals are:
Applied Animal Behaviour Science
Behavioral Health and Medicine
Behavior Analysis: Research and Practice
Behavior and Philosophy
Behavior and Social Issues
Behavior Modification
Behavior Therapy
Journal of Applied Behavior Analysis
Journal of Behavior Analysis of Offender and Victim: Treatment and Prevention
Journal of Behavior Analysis of Sports, Health, Fitness, and Behavioral Medicine
Journal of Contextual Behavioral Science
Journal of Early and Intensive Behavior Interventions
Journal of Organizational Behavior Management
Journal of Positive Behavior Interventions
Journal of the Experimental Analysis of Behavior
Perspectives on Behavior Science (formerly The Behavior Analyst until 2018)
The Behavioral Development Bulletin
The Behavior Analyst Today
The International Journal of Behavioral Consultation and Therapy
The Journal of Behavioral Assessment and Intervention in Children
The Journal of Speech-Language Pathology and Applied Behavior Analysis
The Psychological Record
See also
Association for Behavior Analysis International
Behavior analysis of child development
Behavior therapy
Behavioral activation
Educational psychology
Parent management training
Professional practice of behavior analysis
References
Sources
Further reading
External links
Applied Behavior Analysis: Overview and Summary of Scientific Support
Functional Behavioral Assessment, The IRIS Center – U.S. Department of Education, Office of Special Education Programs Grant and Vanderbilt University
Behavior analysis
Behavior
Behavior modification
Behavioral concepts
Behaviorism
Life coaching
Mind control
Industrial and organizational psychology
Personal development
Autism pseudoscience
Declarative knowledge | Declarative knowledge is an awareness of facts that can be expressed using declarative sentences. It is also called theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that. It is not restricted to one specific use or purpose and can be stored in books or on computers.
Epistemology is the main discipline studying declarative knowledge. Among other things, it studies the essential components of declarative knowledge. According to a traditionally influential view, it has three elements: it is a belief that is true and justified. As a belief, it is a subjective commitment to the accuracy of the believed claim while truth is an objective aspect. To be justified, a belief has to be rational by being based on good reasons. This means that mere guesses do not amount to knowledge even if they are true. In contemporary epistemology, additional or alternative components have been suggested. One proposal is that no contradicting evidence is present. Other suggestions are that the belief was caused by a reliable cognitive process and that the belief is infallible.
Types of declarative knowledge can be distinguished based on the source of knowledge, the type of claim that is known, and how certain the knowledge is. A central contrast is between a posteriori knowledge, which arises from experience, and a priori knowledge, which is grounded in pure rational reflection. Other classifications include domain-specific knowledge and general knowledge, knowledge of facts, concepts, and principles as well as explicit and implicit knowledge.
Declarative knowledge is often contrasted with practical knowledge and knowledge by acquaintance. Practical knowledge consists of skills, like knowing how to ride a horse. It is a form of non-intellectual knowledge since it does not need to involve true beliefs. Knowledge by acquaintance is a familiarity with something based on first-hand experience, like knowing the taste of chocolate. This familiarity can be present even if the person does not possess any factual information about the object. Some theorists also contrast declarative knowledge with conditional knowledge, prescriptive knowledge, structural knowledge, case knowledge, and strategic knowledge.
Declarative knowledge is required for various activities, such as labeling phenomena as well as describing and explaining them. It can guide the processes of problem-solving and decision-making. In many cases, its value is based on its usefulness in achieving one's goals. However, its usefulness is not always obvious and not all instances of declarative knowledge are valuable. A lot of knowledge taught at school is declarative knowledge. It is said to be stored as explicit memory and can be learned through rote memorization of isolated, singular, facts. But in many cases, it is advantageous to foster a deeper understanding that integrates the new information into wider structures and connects it to pre-existing knowledge. Sources of declarative knowledge are perception, introspection, memory, reasoning, and testimony.
Definition and semantic field
Declarative knowledge is an awareness or understanding of facts. It can be expressed through spoken and written language using declarative sentences and can thus be acquired through verbal communication. Examples of declarative knowledge are knowing "that Princess Diana died in 1997" or "that Goethe was 83 when he finished writing Faust". Declarative knowledge involves mental representations in the form of concepts, ideas, theories, and general rules. Through these representations, the person stands in a relationship to a particular aspect of reality by depicting what it is like. Declarative knowledge tends to be context-independent: it is not tied to any specific use and may be employed for many tasks. It includes a wide range of phenomena and encompasses both knowledge of individual facts and general laws. An example for individual facts is knowing that the atomic mass of gold is 196.97 u. Knowing that the color of leaves of some trees changes in autumn, on the other hand, belongs to general laws. Due to its verbal nature, declarative knowledge can be stored in media like books and harddisks. It may also be processed using computers and plays a key role in various forms of artificial intelligence, for example, in the knowledge base of expert systems.
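As a toy illustration of how declarative knowledge can be stored and processed by a computer, the following Python sketch represents individual facts and a general law as data and derives a new fact from them; the facts, the relation names, and the rule are illustrative inventions rather than an actual expert-system implementation.

# Facts are stored as subject-relation-object triples; a simple rule then
# derives new facts from the general law that "is_a" is transitive.
facts = {
    ("gold", "atomic_mass_u", 196.97),   # an individual fact
    ("kangaroo", "is_a", "marsupial"),
    ("marsupial", "is_a", "mammal"),     # a general classification
}

def derive(knowledge):
    # Repeatedly apply the transitivity rule until no new facts appear.
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                new_fact = (a, "is_a", c)
                if r1 == r2 == "is_a" and b == b2 and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(("kangaroo", "is_a", "mammal") in derive(facts))  # True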
Terms like theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that are used as synonyms of declarative knowledge and express its different aspects. Theoretical knowledge is knowledge of what is the case, in the past, present, or future independent of a practical outlook concerning how to achieve a specific goal. Descriptive knowledge is knowledge that involves descriptions of actual or speculative objects, events, or concepts. Propositional knowledge asserts that a proposition or claim about the world is true. This is often expressed using a that-clause, as in "knowing that kangaroos hop" or "knowing that 2 + 2 = 4". For this reason, it is also referred to as knowledge-that. Declarative knowledge contrasts with non-declarative knowledge, which does not concern the explicit comprehension of factual information regarding the world. In this regard, practical knowledge in the form of skills and knowledge by acquaintance as a type of experiential familiarity are not forms of declarative knowledge. The main discipline investigating declarative knowledge is called epistemology. It tries to determine its nature, how it arises, what value it has, and what its limits are.
Components
A central issue in epistemology is to determine the components or essential features of declarative knowledge. This field of inquiry is called the analysis of knowledge. It aims to provide the conditions that are individually necessary and jointly sufficient for a state to amount to declarative knowledge. In this regard, it is similar to how a chemist breaks down a sample by identifying all the chemical elements composing it.
A traditionally influential view states that declarative knowledge has three essential features: it is (1) a belief that is (2) true and (3) justified. This position is referred to as the justified-true-belief theory of knowledge and is often seen as the standard view. This view faced significant criticism following a series of counterexamples given by Edmund Gettier in the latter half of the 20th century. In response, various alternative theories of the elements of declarative knowledge have been suggested. Some see justified true belief as a necessary condition that is not sufficient by itself and discuss additional components that are needed. Another response is to deny that justification is needed and seek a different component to replace it. Some theorists, like Timothy Williamson, reject the idea that declarative knowledge can be deconstructed into various constituent parts. They argue instead that it is a basic and unanalyzable epistemological state.
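Stated schematically, the tripartite view described above can be written as follows (a plain LaTeX restatement of the justified-true-belief analysis, with S standing for a subject and p for a proposition):

S \text{ knows that } p \iff
\begin{cases}
\text{(1) } p \text{ is true,} \\
\text{(2) } S \text{ believes that } p, \\
\text{(3) } S \text{ is justified in believing that } p.
\end{cases}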
Belief
One commonly accepted component of knowledge is belief. In this sense, whoever knows that whales are animals automatically also believes that whales are animals. A belief is a mental state that affirms that something is the case. As an attitude toward a proposition, it belongs to the subjective side of knowledge. Some theorists, like Luis Villoro, distinguish between weak and strong beliefs. Having a weak belief implies that the person merely presumes that something is the case. They guess that the claim is probably correct while acknowledging at the same time that they might very well be mistaken about it. This contrasts with strong belief, which implies a substantial commitment to the believed claim. It involves certainty in the form of being sure about it. For declarative knowledge, this stronger sense of belief is relevant.
A few epistemologists, like Katalin Farkas, claim that, at least in some cases, knowledge is not a form of belief but a different type of mental state. One argument for this position is based on statements like "I don't believe it, I know it", which may be used to express that the person is very certain and has good reason to affirm this claim. However, this argument is not generally accepted since knowing something does not imply that the person disbelieves the claim. A further explanation is to hold that this statement is a linguistic tool to emphasize that the person is well-informed. In this regard, it only denies that a weak belief exists without rejecting that a stronger form of belief is involved.
Truth
Beliefs are either true or false depending on whether they accurately represent reality. Truth is usually seen as one of the essential components of knowledge. This means that it is impossible to know a claim that is false. For example, it is possible to believe that Hillary Clinton won the 2016 US Presidential election but nobody can know it because this event did not occur. That a proposition is true does not imply that it is common knowledge, that an irrefutable proof exists, or that someone is thinking about it. Instead, it only means that it presents things as they are. For example, when flipping a coin, it may be true that it will land heads even if it is not possible to predict this with certainty. Truth is an objective factor of knowledge that goes beyond the mental sphere of belief since it usually depends on what the world outside the person's mind is like.
Some epistemologists hold that there are at least some forms of knowledge that do not require truth. For example, Joseph Thomas Tolliver argues that some mental states amount to knowledge only because of the causes and effects they have. This is the case even if they do not represent anything and are therefore neither true nor false. A different outlook is found in the field of the anthropology of knowledge, which studies how knowledge is acquired, stored, retrieved, and communicated. In this discipline, knowledge is often understood in a very wide sense that is roughly equivalent to understanding and culture. In this regard, the main interest is usually about how people ascribe truth values to meaning-contents, like when affirming an assertion, independent of whether this assertion is true or false. Despite these positions, it is widely accepted in epistemology that truth is an essential component of declarative knowledge.
Justification
In epistemology, justification means that a claim is supported by evidence or that a person has good reasons for believing it. This implies some form of appraisal in relation to an evaluative standard of rationality. For example, a person who just checked their bank account and saw that their balance is 500 dollars has a good reason to believe that they have 500 dollars in their bank account. However, justification by itself does not imply that a belief is true. For example, if someone reads the time from their clock they may form a justified belief about the current time even if the clock stopped a while ago and shows a false time now. If a person has a justified belief then they are often able to articulate what this belief is and to provide arguments stating the reasons supporting it. However, this ability to articulate one's reasons is not an essential requirement of justification.
Justification is usually included as a component of knowledge to exclude lucky guesses. For example, a compulsive gambler flipping a coin may be certain that it will land heads this time without a good reason for this belief. In this case, the belief does not amount to knowledge even if it turns out that it was true. This observation can be easily explained by including justification as an essential component. This implies that the gambler's belief does not amount to knowledge because it lacks justification. In this regard, mere true opinion is not enough to establish knowledge. A central issue in epistemology concerns the standards of justification, i.e., what conditions have to be fulfilled for a belief to be justified. Internalists understand justification as a purely subjective component, akin to belief. They claim that a belief is justified if it stands in the right relation to other mental states of the believer. For example, perceptual experiences can justify beliefs about the perceived object. This contrasts with externalists, who claim that justification involves objective factors that are external to the person's mind. Such factors can include causal relations with the object of the belief or that reliable cognitive processes are responsible for the formation of the belief.
A closely related issue concerns the question of how the different mental states have to be related to each other to be justified. For example, one belief may be supported by another belief. However, it is questionable whether this is sufficient for justification if the second belief is itself not justified. For example, a person may believe that Ford cars are cheaper than BMWs because they heard this from a friend. However, this belief may not be justified if there is no good reason to think that the friend is a reliable source of information. This can lead to an infinite regress since whatever reason is provided for the friend's reliability may itself lack justification. Three popular responses to this problem are foundationalism, coherentism, and infinitism. According to foundationalists, some reasons are foundational and do not depend on other reasons for their justification. Coherentists also reject the idea that an infinite chain of reasons is needed and argue that different beliefs can mutually support each other without one being more basic than the others. Infinitists, on the other hand, accept the idea that an infinite chain of reasons is required.
Many debates concerning the nature of declarative knowledge focus on the role of justification, specifically whether it is needed at all and what else might be needed to complement it. Influential in this regard was a series of thought experiments by Edmund Gettier. They present concrete cases of justified true beliefs that fail to amount to knowledge. The reason for their failure is a type of epistemic luck. This means that the justification is not relevant to whether the belief is true. In one thought experiment, Smith and Jones apply for a job and before officially declaring the result, the company president tells Smith that Jones will get the job. Smith saw that Jones has 10 coins in his pocket so he comes to form the justified belief that the successful candidate has 10 coins in his pocket. In the end, it turns out that Smith gets the job after all. By lucky coincidence, Smith also has 10 coins in his pocket. Gettier claims that, because of this coincidence, Smith's belief that the successful candidate has 10 coins in his pocket does not amount to knowledge. The belief is justified and true but the justification is not relevant to the truth.
Others
In response to Gettier's thought experiments, various further components of declarative knowledge have been suggested. Some of them are intended as additional elements besides belief, truth, and justification while others are understood as replacements for justification.
According to defeasibility theory, an additional factor besides having evidence in favor of the belief is that no defeating evidence is present. Defeating evidence of a belief is evidence that undermines the justification of the belief. For example, if a person looks outside the window and sees a rainbow then this impression justifies their belief that there is a rainbow. However, if the person just ate a psychedelic drug then this is defeating evidence since it undermines the reliability of their experiences. Defeasibility theorists claim that, in this case, the belief does not amount to knowledge because defeating evidence is present. As an additional component of knowledge, they require that the person has no defeating evidence of the belief. Some theorists demand the stronger requirement that there is no true proposition that would defeat the belief, independent of whether the person is aware of this proposition or not. A closely related theory holds that beliefs can only amount to knowledge if they are not inferred from a falsehood.
A further theory is based on the idea that knowledge states should be responsive to what the world is like. One suggested component in this regard is that the belief is safe or sensitive. This means that the person has the belief because it is true but that they would not hold the belief if it was false. In this regard, the person's belief tracks the state of the world.
Some theories do not try to provide additional requirements but instead propose replacing justification with alternative components. For example, according to some forms of reliabilism, a true belief amounts to knowledge if it was formed through a reliable cognitive process. A cognitive process is reliable if it produces mostly true beliefs in actual situations and would also do so in counterfactual situations.
Examples of reliable processes are perception and reasoning. An outcome of reliabilism is that knowledge is not restricted to humans. The reason is that reliable belief-formation processes may also be present in other animals, like dogs, apes, or rats, even if they do not possess justification for their beliefs. Virtue epistemology is a closely related approach that understands knowledge as the manifestation of epistemic virtues. It agrees with regular forms of reliabilism that knowledge is not a matter of luck but puts additional emphasis on the evaluative aspect of knowledge and the underlying skills responsible for it.
According to causal theories of knowledge, a necessary element of knowing a fact is that this fact somehow caused the knowledge of it. This is the case, for example, if a belief about the color of a house is based on a perceptual experience, which causally connects the house to the belief. This causal connection does not have to be direct and can be mediated through steps like activating memories and drawing inferences.
In many cases, the goal of suggesting additional components is to avoid cases of epistemic luck. In this regard, some theorists have argued that the additional component would have to ensure that the belief is true. This approach is reflected in the idea that knowledge implies a form of certainty. But it sets the standards of knowledge very high and may require that a belief has to be infallible to amount to knowledge. This means that the justification ensures that the belief is true. For example, Richard Kirkham argues that the justification required for knowledge must be based on self-evident premises that deductively entail the held belief. Such a position leads to a form of skepticism about knowledge since the great majority of regular beliefs do not live up to these requirements. It would imply that people know very little and that most who claim to know a certain fact are mistaken. However, a more common view among epistemologists is that knowledge does not require infallibility and that many knowledge claims in everyday life are true.
Types
Declarative knowledge arises in many forms. It is possible to distinguish between them based on the type of content of what is known. For example, empirical knowledge is knowledge of observable facts while conceptual knowledge is an understanding of general categorizations and theories as well as the relations between them. Other examples are ethical, religious, scientific, mathematical, and logical knowledge as well as self-knowledge. A further distinction focuses on the mode of how something is known. On a causal level, different sources of knowledge correspond to different types of declarative knowledge. Examples are knowledge through perception, introspection, memory, reasoning, and testimony.
On a logical level, forms of knowledge can be distinguished based on how a knowledge claim is supported by its premises. This classification corresponds to the different forms of logical reasoning, such as deductive and inductive reasoning. A closely related categorization focuses on the strength of the source of the justification. It distinguishes between probabilistic and apodictic knowledge. The distinction between a priori and a posteriori knowledge, on the other hand, focuses on the type of the source. These classifications overlap with each other at various points. For example, a priori knowledge is closely connected to apodictic, conceptual, deductive, and logical knowledge. A posteriori knowledge, on the other hand, is linked to probabilistic, empirical, inductive, and scientific knowledge. Self-knowledge may be identified with introspective knowledge.
The distinction between a priori and a posteriori knowledge is determined by the role of experience and matches the contrast between empirical and non-empirical knowledge. A posteriori knowledge is knowledge from experience. This means that experience, like regular perception, is responsible for its formation and justification. Knowing that the door of one's house is green is one example of a posteriori knowledge since some form of sensory observation is required. For a priori knowledge, on the other hand, no experience is required. It is based on pure rational reflection and can neither be verified nor falsified through experience. Examples are knowing that 7 + 5 = 12 or that whatever is red everywhere is not blue everywhere. In this context, experience means primarily sensory observation but can also include related processes, like introspection and memory. However, it does not include all conscious phenomena. For example, having a rational insight into the solution of a mathematical problem does not mean that the resulting knowledge is a posteriori. And knowing that 7 + 5 = 12 is a priori knowledge even though some form of consciousness is involved in learning what symbols like "7" and "+" mean and in becoming aware of the associated concepts.
One classification distinguishes between knowledge of facts, concepts, and principles. Knowledge of facts pertains to the association of concrete information, for example, that the red color on a traffic light means stop or that Christopher Columbus sailed in 1492 from Spain to America. Knowledge of concepts applies to more abstract and general ideas that group together many individual phenomena. For example, knowledge of the concept of jogging implies knowing how it differs from walking and running as well as being able to apply this concept to concrete cases. Knowledge of principles is an awareness of general patterns of cause and effect, including rules of thumb. It is a form of understanding how things work and being aware of the explanation of why something happened the way it did. Examples are that if there is lightning then there will be thunder or if a person robs a bank then they may go to jail. Similar classifications distinguish between declarative knowledge of persons, events, principles, maxims, and norms.
Declarative knowledge is traditionally identified with explicit knowledge and contrasted with tacit or implicit knowledge. Explicit knowledge is knowledge of which the person is aware and which can be articulated. It is stored in explicit memory. Implicit knowledge, on the other hand, is a form of embodied knowledge that the person cannot articulate. The traditional association of declarative knowledge with explicit knowledge is not always accepted in the contemporary literature. Some theorists argue that there are forms of implicit declarative knowledge. A putative example is a person who has learned a concept and is now able to correctly classify objects according to this concept even though they are not able to provide a verbal rationale for their decision.
A further contrast is between domain-specific and general knowledge. Domain-specific knowledge applies to a narrow subject or a particular task but is useless outside this focus. General knowledge, on the other hand, concerns wide topics or has general applications. For example, declarative knowledge of the rules of grammar belongs to general knowledge while having memorized the lines of the poem The Raven is domain-specific knowledge. This distinction is based on a continuum of cases that are more or less general without a clear-cut line between the types. According to Paul Kurtz, there are six types of descriptive knowledge: knowledge of available means, of consequences, of particular facts, of general causal laws, of established values, and of basic needs. Another classification distinguishes between structural knowledge and perceptual knowledge.
Contrast with other forms of knowledge
Declarative knowledge is often contrasted with other types of knowledge. A common classification in epistemology distinguishes it from practical knowledge and knowledge by acquaintance. All of them can be expressed with the verb "to know" but their differences are reflected in the grammatical structures used to articulate them. Declarative knowledge is usually expressed with a that-clause, as in "Ann knows that koalas sleep most of the time". For practical knowledge, a how-clause is used instead, for example, "Dave knows how to read the time on a clock". Knowledge by acquaintance can be articulated using a direct object without a preposition, as in "Emily knows Obama personally".
Practical knowledge consists of skills. Knowing how to ride a horse or how to play the guitar are forms of practical knowledge. The terms "procedural knowledge" and "knowledge-how" are often used as synonyms. It differs from declarative knowledge in various aspects. It is usually imprecise and cannot be proven by deducing it from premises. It is non-propositional and, for the most part, cannot be taught in abstract without concrete exercise. In this regard, it is a form of non-intellectual knowledge. It is tied to a specific goal and its value lies not in being true, but rather in how effective it is to accomplish its goal. Practical knowledge can be present without any beliefs and may even involve false beliefs. For example, an experienced ball player may know how to catch a ball despite having false beliefs. They may believe that their eyes continuously track the ball. But, in truth, their eyes perform a series of abrupt movements that anticipate the ball's trajectory rather than following it. Another difference is that declarative knowledge is commonly only ascribed to animals with highly developed minds, like humans. Practical knowledge, on the other hand, is more prevalent in the animal kingdom. For example, ants know how to walk through the kitchen despite presumably lacking the mental capacity for the declarative knowledge that they are walking through the kitchen.
Declarative knowledge is also different from knowledge by acquaintance, which is also known as objectual knowledge, and knowledge-of. Knowledge by acquaintance is a form of familiarity or direct awareness that a person has with another person, a thing, or a place. For example, a person who has tasted the flavor of chocolate knows chocolate in this sense, just like a person who visited Lake Taupō knows Lake Taupō. Knowledge by acquaintance does not imply that the person can provide factual information about the object. It is a form of non-inferential knowledge that depends on first-hand experience. For example, a person who has never left their home country may acquire a lot of declarative knowledge about other countries by reading books without any knowledge by acquaintance.
Knowledge by acquaintance plays a central role in the epistemology of Bertrand Russell. He holds that it is more basic than other forms of knowledge since to understand a proposition, one has to be acquainted with its constituents. According to Russell, knowledge by acquaintance covers a wide range of phenomena, such as thoughts, feelings, desires, memory, introspection, and sense data. It can happen in relation to particular things and universals. Knowledge of physical objects, on the other hand, belongs to declarative knowledge, which he calls knowledge by description. It also has a central role to play since it extends the realm of knowledge to things that lie beyond the personal sphere of experience.
Some theorists, like Anita Woolfolk et al., contrast declarative knowledge and procedural knowledge with conditional knowledge. According to this view, conditional knowledge is about knowing when and why to use declarative and procedural knowledge. For many issues, like solving math problems and learning a foreign language, it is not sufficient to know facts and general procedures if the person does not know under which situations to use them. To master a language, for example, it is not enough to acquire declarative knowledge of verb forms if one lacks conditional knowledge of when it is appropriate to use them. Some theorists understand conditional knowledge as one type of declarative knowledge and not as a distinct category.
A further distinction is between declarative or descriptive knowledge in contrast to prescriptive knowledge. Descriptive knowledge represents what the world is like. It describes and classifies what phenomena are there and in what relations they stand toward each other. It is interested in what is true independently of what people want. Prescriptive knowledge is not about what things actually are like but what they should be like. This concerns specifically the question of what purposes people should follow and how they should act. It guides action by showing what people should do to fulfill their needs and desires. In this regard, it has a more subjective component since it depends on what people want. Some theorists equate prescriptive knowledge with procedural knowledge. But others distinguish them based on the claim that prescriptive knowledge is about what should be done while procedural knowledge is about how to do it. Other classifications contrast declarative knowledge with structural knowledge, meta knowledge, heuristic knowledge, control knowledge, case knowledge, and strategic knowledge.
Some theorists argue that one type of knowledge is more basic than others. For example, Robert E. Haskell claims that declarative knowledge is the basic form of knowledge since it constitutes a general framework of understanding. According to him, it is a precondition for acquiring other forms of knowledge. However, this position is not generally accepted and philosophers like Gilbert Ryle defend the opposing thesis that declarative knowledge presupposes procedural knowledge.
Value
Declarative knowledge plays a central role in human understanding of the world. It underlies activities such as labeling phenomena, describing them, explaining them, and communicating with others about them. The value of declarative knowledge depends in part on its usefulness in helping people achieve their objectives. For example, to treat a disease, knowledge of its symptoms and possible cures is beneficial. Or if a person has applied for a new job then knowing where and when the interview takes place is important. Due to its context-independence, declarative knowledge can be used for a great variety of tasks and because of its compact nature, it can be easily stored and retrieved. Declarative knowledge can be useful for procedural knowledge, for example, by knowing the list of steps needed to execute a skill. It also has a key role in understanding and solving problems and can guide the process of decision-making. A related issue in the field of epistemology concerns the question of whether declarative knowledge is more valuable than true belief. This is not obvious since, for many purposes, true belief is as useful as knowledge to achieve one's goals.
Declarative knowledge is primarily desired in cases where it is immediately useful. But not all forms of knowledge are useful. For example, indiscriminately memorizing phone numbers found in a foreign phone book is unlikely to result in useful declarative knowledge. However, it is often difficult to assess the value of knowledge if one does not foresee a situation where it would be useful. In this regard, it can happen that the value of apparently useless knowledge is only discovered much later. For example, Maxwell's equations linking magnetism to electricity were considered useless at the time of discovery until experimental scientists discovered how to detect electromagnetic waves. Occasionally, knowledge may have a negative value, for example, when it hinders someone to do what would be needed because their knowledge of associated dangers paralyzes them.
Learning
The value of knowledge is specifically relevant in the field of education. It is needed to decide which of the vast amount of knowledge should become part of the curriculum to be passed on to students. Many types of learning at school involve the acquisition of declarative knowledge. One form of declarative knowledge learning is so-called rote learning. It is a memorization technique in which the claim to be learned is repeated again and again until it is fully memorized. Other forms of declarative knowledge learning focus more on developing an understanding of the subject. This means that the learner should not only be able to repeat the claim but also to explain, describe, and summarize it. For declarative knowledge to be useful, it is often advantageous if it is embedded in a meaningful structure. For example, learning about new concepts and ideas involves developing an understanding of how they are related to each other and to what is already known.
According to Ellen Gagné, learning declarative knowledge happens in four steps. In the first step, the learner comes into contact with the material to be learned and apprehends it. Next, they translate this information into propositions. Following that, the learner's memory triggers and activates related propositions. As the last step, new connections are established and inferences are drawn. A similar process is described by John V. Dempsey, who stresses that the new information must be organized, divided, and linked to existing knowledge. He distinguishes between learning that involves recalling information in contrast to learning that only requires being able to recognize patterns. A related theory is defended by Anthony J. Rhem. He holds that the process of learning declarative knowledge involves organizing new information into groups. Next, links between the groups are drawn and the new information is connected to pre-existing knowledge.
Some theorists, like Robert Gagné and Leslie Briggs, distinguish between types of declarative knowledge learning based on the cognitive processes involved: learning of labels and names, of facts and lists, and of organized discourse. Learning labels and names requires forming a mental connection between two elements. Examples include memorizing foreign vocabulary and learning the capital city of each state. Learning facts involves relationships between concepts, for example, that "Ann Richards was the governor of Texas in 1991". This process is usually easier if the person is not dealing with isolated facts but possesses a network of information into which the new fact is integrated. The case for learning lists is similar since it involves the association of many items. Learning organized discourse encompasses not discrete facts or items but a wider comprehension of the meaning present in an extensive body of information.
Various sources of declarative knowledge are discussed in epistemology. They include perception, introspection, memory, reasoning, and testimony. Perception is usually understood as the main source of empirical knowledge. It is based on the senses, like seeing that it is raining when looking out the window. Introspection is similar to perception but provides knowledge of the internal sphere and not of external objects. An example is directing one's attention to a pain in one's toe to assess whether it has intensified.
Memory differs from perception and introspection in that it does not produce new knowledge but merely stores and retrieves pre-existing knowledge. As such, it depends on other sources. It is similar to reasoning in this regard, which starts from a known fact and arrives at new knowledge by drawing inferences from it. Empiricists hold that this is the only way reason can arrive at knowledge while rationalists contend that some claims can be known by pure reason independent of additional sources. Testimony is different from the other sources since it does not have its own cognitive faculty. Rather, it is grounded in the notion that people can acquire knowledge through communication with others, for example, by speaking to someone or by reading a newspaper. Some religious philosophers include religious experiences (through the so-called sensus divinitatis) as a source of knowledge of the divine. However, such claims are controversial.
References
Citations
Sources
Concepts in epistemology
Psychological concepts
Intelligence
Mental content
Definitions of knowledge | 0.765103 | 0.997322 | 0.763054 |
Hardy–Weinberg principle | In population genetics, the Hardy–Weinberg principle, also known as the Hardy–Weinberg equilibrium, model, theorem, or law, states that allele and genotype frequencies in a population will remain constant from generation to generation in the absence of other evolutionary influences. These influences include genetic drift, mate choice, assortative mating, natural selection, sexual selection, mutation, gene flow, meiotic drive, genetic hitchhiking, population bottleneck, founder effect, inbreeding and outbreeding depression.
In the simplest case of a single locus with two alleles denoted A and a with frequencies f(A) = p and f(a) = q, respectively, the expected genotype frequencies under random mating are f(AA) = p2 for the AA homozygotes, f(aa) = q2 for the aa homozygotes, and f(Aa) = 2pq for the heterozygotes. In the absence of selection, mutation, genetic drift, or other forces, allele frequencies p and q are constant between generations, so equilibrium is reached.
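The relationship between allele and genotype frequencies is simple enough to compute directly. As an illustrative sketch (not part of the original formulation), the following Python function returns the expected Hardy–Weinberg genotype proportions for a given frequency p of allele A:

```python
def hardy_weinberg_proportions(p):
    """Expected genotype frequencies (AA, Aa, aa) for allele frequency p of A."""
    q = 1.0 - p  # frequency of allele a
    return p * p, 2.0 * p * q, q * q

# Example: p = 0.7 gives AA = 0.49, Aa = 0.42, aa = 0.09 (summing to 1)
print(hardy_weinberg_proportions(0.7))
```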
The principle is named after G. H. Hardy and Wilhelm Weinberg, who first demonstrated it mathematically. Hardy's paper was focused on debunking the view that a dominant allele would automatically tend to increase in frequency (a view possibly based on a misinterpreted question at a lecture). Today, tests for Hardy–Weinberg genotype frequencies are used primarily to test for population stratification and other forms of non-random mating.
Derivation
Consider a population of monoecious diploids, where each organism produces male and female gametes at equal frequency, and has two alleles at each gene locus. We assume that the population is so large that it can be treated as infinite. Organisms reproduce by random union of gametes (the "gene pool" population model). A locus in this population has two alleles, A and a, that occur with initial frequencies f0(A) = p and f0(a) = q, respectively. The allele frequencies at each generation are obtained by pooling together the alleles from each genotype of the same generation according to the expected contribution from the homozygote and heterozygote genotypes, which are 1 and 1/2, respectively:

ft(A) = ft(AA) + (1/2) ft(Aa)   (1)

ft(a) = ft(aa) + (1/2) ft(Aa)   (2)
The different ways to form genotypes for the next generation can be shown in a Punnett square, where the proportion of each genotype is equal to the product of the row and column allele frequencies from the current generation.
The sum of the entries is p2 + 2pq + q2 = 1, as the genotype frequencies must sum to one.
Note again that as p + q = 1, the binomial expansion of (p + q)2 = p2 + 2pq + q2 = 1 gives the same relationships.
Summing the elements of the Punnett square or the binomial expansion, we obtain the expected genotype proportions among the offspring after a single generation:

f1(AA) = p2

f1(Aa) = 2pq

f1(aa) = q2
These frequencies define the Hardy–Weinberg equilibrium. It should be mentioned that the genotype frequencies after the first generation need not equal the genotype frequencies from the initial generation, e.g. f1(AA) ≠ f0(AA) in general. However, the genotype frequencies for all future times will equal the Hardy–Weinberg frequencies, e.g. ft(AA) = f1(AA) = p2 for t > 1. This follows since the genotype frequencies of the next generation depend only on the allele frequencies of the current generation which, as calculated by equations (1) and (2), are preserved from the initial generation:

f1(A) = f1(AA) + (1/2) f1(Aa) = p2 + pq = p(p + q) = p = f0(A)

f1(a) = f1(aa) + (1/2) f1(Aa) = q2 + pq = q(p + q) = q = f0(a)
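This one-generation convergence can be illustrated numerically. The short Python sketch below (an illustration with arbitrarily chosen, non-equilibrium starting genotype frequencies) iterates random mating by pooling gametes and shows that the genotype frequencies reach Hardy–Weinberg proportions after a single generation and remain constant thereafter:

```python
def next_generation(f_AA, f_Aa, f_aa):
    """Genotype frequencies after one round of random mating (random union of gametes)."""
    p = f_AA + 0.5 * f_Aa  # allele frequency of A
    q = f_aa + 0.5 * f_Aa  # allele frequency of a
    return p * p, 2 * p * q, q * q

# Arbitrary (non-equilibrium) starting genotype frequencies
freqs = (0.5, 0.1, 0.4)
for generation in range(4):
    freqs = next_generation(*freqs)
    print(generation + 1, [round(f, 4) for f in freqs])
# After generation 1 the proportions no longer change.
```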
For the more general case of dioecious diploids [organisms are either male or female] that reproduce by random mating of individuals, it is necessary to calculate the genotype frequencies from the nine possible matings between each parental genotype (AA, Aa, and aa) in either sex, weighted by the expected genotype contributions of each such mating. Equivalently, one considers the six unique diploid-diploid combinations:

(AA, AA), (AA, Aa), (AA, aa), (Aa, Aa), (Aa, aa), (aa, aa)

and constructs a Punnett square for each, so as to calculate its contribution to the next generation's genotypes. These contributions are weighted according to the probability of each diploid-diploid combination, which follows a multinomial distribution with the current genotype frequencies as parameters. For example, the probability of the mating combination (AA, aa) is 2 ft(AA) ft(aa) and it can only result in the Aa genotype. Overall, the resulting genotype frequencies are calculated as:

ft+1(AA) = [ft(AA) + (1/2) ft(Aa)]2

ft+1(Aa) = 2 [ft(AA) + (1/2) ft(Aa)] [ft(aa) + (1/2) ft(Aa)]

ft+1(aa) = [ft(aa) + (1/2) ft(Aa)]2
As before, one can show that the allele frequencies at time t + 1 equal those at time t, and so, are constant in time. Similarly, the genotype frequencies depend only on the allele frequencies, and so, after time t = 1 they are also constant in time.
If in either monoecious or dioecious organisms, either the allele or genotype proportions are initially unequal in either sex, it can be shown that constant proportions are obtained after one generation of random mating. If dioecious organisms are heterogametic and the gene locus is located on the X chromosome, it can be shown that if the allele frequencies are initially unequal in the two sexes [e.g., XX females and XY males, as in humans], the allele frequency in the heterogametic sex 'chases' the allele frequency in the homogametic sex of the previous generation, until an equilibrium is reached at the weighted average of the two initial frequencies.
Deviations from Hardy–Weinberg equilibrium
The seven assumptions underlying Hardy–Weinberg equilibrium are as follows:
organisms are diploid
only sexual reproduction occurs
generations are nonoverlapping
mating is random
population size is infinitely large
allele frequencies are equal in the sexes
there is no migration, gene flow, admixture, mutation or selection
Violations of the Hardy–Weinberg assumptions can cause deviations from expectation. How this affects the population depends on the assumptions that are violated.
Random mating. The HWP states the population will have the given genotypic frequencies (called Hardy–Weinberg proportions) after a single generation of random mating within the population. When the random mating assumption is violated, the population will not have Hardy–Weinberg proportions. A common cause of non-random mating is inbreeding, which causes an increase in homozygosity for all genes.
If a population violates one of the following four assumptions, the population may continue to have Hardy–Weinberg proportions each generation, but the allele frequencies will change over time.
Selection, in general, causes allele frequencies to change, often quite rapidly. While directional selection eventually leads to the loss of all alleles except the favored one (unless one allele is dominant, in which case recessive alleles can survive at low frequencies), some forms of selection, such as balancing selection, lead to equilibrium without loss of alleles.
Mutation will have a very subtle effect on allele frequencies through the introduction of new alleles into a population. Mutation rates are of the order 10−4 to 10−8, and the change in allele frequency will be, at most, the same order. Recurrent mutation will maintain alleles in the population, even if there is strong selection against them.
Migration genetically links two or more populations together. In general, allele frequencies will become more homogeneous among the populations. Some models for migration inherently include nonrandom mating (Wahlund effect, for example). For those models, the Hardy–Weinberg proportions will normally not be valid.
Small population size can cause a random change in allele frequencies. This is due to a sampling effect, and is called genetic drift. Sampling effects are most important when the allele is present in a small number of copies.
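The effect of finite population size can be illustrated with a simple Wright–Fisher-style simulation. The Python sketch below (an illustration with arbitrary parameters, not a model discussed in the text) resamples the 2N allele copies each generation and shows how allele frequencies wander by drift, in contrast to the infinite-population assumption:

```python
import random

def drift_trajectory(p0, N, generations, seed=1):
    """Simulate allele frequency under genetic drift in a population of N diploids."""
    random.seed(seed)
    p, trajectory = p0, [p0]
    for _ in range(generations):
        # Each of the 2N allele copies in the next generation is drawn independently
        copies = sum(1 for _ in range(2 * N) if random.random() < p)
        p = copies / (2 * N)
        trajectory.append(p)
    return trajectory

print(drift_trajectory(p0=0.5, N=50, generations=20))
```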
In real world genotype data, deviations from Hardy–Weinberg Equilibrium may be a sign of genotyping error.
Sex linkage
Where the A gene is sex linked, the heterogametic sex (e.g., mammalian males; avian females) have only one copy of the gene (and are termed hemizygous), while the homogametic sex (e.g., human females) have two copies. The genotype frequencies at equilibrium are p and q for the heterogametic sex but p2, 2pq and q2 for the homogametic sex.
For example, in humans red–green colorblindness is an X-linked recessive trait. In western European males, the trait affects about 1 in 12, (q = 0.083) whereas it affects about 1 in 200 females (0.005, compared to q2 = 0.007), very close to Hardy–Weinberg proportions.
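For an X-linked allele, the frequency observed in hemizygous males estimates q directly, and the expected frequency of affected females is q2. A minimal Python sketch of this calculation (using the rounded allele frequency quoted above, purely for illustration):

```python
def x_linked_expected_frequencies(q):
    """Expected frequency of an X-linked recessive phenotype in each sex,
    given the allele frequency q (equal to the frequency in hemizygous males)."""
    return {"males": q, "females": q ** 2}

# Rounded allele frequency for red-green colorblindness quoted in the text
print(x_linked_expected_frequencies(0.083))  # expected female frequency is near 0.0069
```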
If a population is brought together with males and females with a different allele frequency in each subpopulation (males or females), the allele frequency of the male population in the next generation will follow that of the female population because each son receives its X chromosome from its mother. The population converges on equilibrium very quickly.
Generalizations
The simple derivation above can be generalized for more than two alleles and polyploidy.
Generalization for more than two alleles
Consider an extra allele frequency, r. The two-allele case is the binomial expansion of (p + q)2, and thus the three-allele case is the trinomial expansion of (p + q + r)2.
More generally, consider the alleles A1, ..., An given by the allele frequencies p1 to pn;

giving for all homozygotes:

f(AiAi) = pi2

and for all heterozygotes (i ≠ j):

f(AiAj) = 2 pi pj
Generalization for polyploidy
The Hardy–Weinberg principle may also be generalized to polyploid systems, that is, for organisms that have more than two copies of each chromosome. Consider again only two alleles. The diploid case is the binomial expansion of:

(p + q)2

and therefore the polyploid case is the binomial expansion of:

(p + q)c

where c is the ploidy, for example with tetraploid (c = 4):

f(AAAA) = p4, f(AAAa) = 4p3q, f(AAaa) = 6p2q2, f(Aaaa) = 4pq3, f(aaaa) = q4
Whether the organism is a 'true' tetraploid or an amphidiploid will determine how long it will take for the population to reach Hardy–Weinberg equilibrium.
Complete generalization
For n distinct alleles in c-ploids, the genotype frequencies in the Hardy–Weinberg equilibrium are given by individual terms in the multinomial expansion of (p1 + p2 + ⋯ + pn)c:

(p1 + p2 + ⋯ + pn)c = Σ [c! / (c1! c2! ⋯ cn!)] p1c1 p2c2 ⋯ pncn

where the sum runs over all non-negative integers c1, ..., cn with c1 + c2 + ⋯ + cn = c.
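The complete generalization can also be computed directly from the multinomial expansion. The Python sketch below (illustrative only) enumerates the expected genotype frequencies for an arbitrary number of alleles and ploidy using the multinomial coefficient:

```python
from math import factorial, prod
from itertools import combinations_with_replacement
from collections import Counter

def hw_genotype_frequencies(allele_freqs, ploidy):
    """Expected genotype frequencies for arbitrary allele number and ploidy."""
    result = {}
    for genotype in combinations_with_replacement(range(len(allele_freqs)), ploidy):
        counts = Counter(genotype)  # how many copies of each allele the genotype carries
        coeff = factorial(ploidy) // prod(factorial(c) for c in counts.values())
        freq = coeff * prod(allele_freqs[i] ** c for i, c in counts.items())
        result[genotype] = freq
    return result

# Tetraploid, two alleles with p = 0.6, q = 0.4: coefficients 1, 4, 6, 4, 1
print(hw_genotype_frequencies([0.6, 0.4], ploidy=4))
```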
Significance tests for deviation
Testing deviation from the HWP is generally performed using Pearson's chi-squared test, using the observed genotype frequencies obtained from the data and the expected genotype frequencies obtained using the HWP. For systems where there are large numbers of alleles, this may result in data with many empty possible genotypes and low genotype counts, because there are often not enough individuals present in the sample to adequately represent all genotype classes. If this is the case, then the asymptotic assumption of the chi-squared distribution will no longer hold, and it may be necessary to use a form of Fisher's exact test, which requires a computer to solve. More recently a number of MCMC methods of testing for deviations from HWP have been proposed (Guo & Thompson, 1992; Wigginton et al. 2005).
Example chi-squared test for deviation
This data is from E. B. Ford (1971) on the scarlet tiger moth, for which the phenotypes of a sample of the population were recorded. Genotype–phenotype distinction is assumed to be negligibly small. The null hypothesis is that the population is in Hardy–Weinberg proportions, and the alternative hypothesis is that the population is not in Hardy–Weinberg proportions.
From this, allele frequencies can be calculated:
and
So the Hardy–Weinberg expectation is:
Pearson's chi-squared test states:
There is 1 degree of freedom (degrees of freedom for test for Hardy–Weinberg proportions are # genotypes − # alleles). The 5% significance level for 1 degree of freedom is 3.84, and since the χ2 value is less than this, the null hypothesis that the population is in Hardy–Weinberg frequencies is not rejected.
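As a sketch of how such a test is carried out in practice, the following Python code computes the chi-squared statistic for a biallelic locus. The genotype counts below are made up for illustration and are not Ford's data, which are not reproduced here:

```python
def hwe_chi_squared(n_AA, n_Aa, n_aa):
    """Pearson chi-squared statistic for departure from Hardy-Weinberg proportions."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # allele frequency of A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts; compare the statistic with the 5% critical value of 3.84 (1 d.f.)
print(round(hwe_chi_squared(490, 410, 100), 2))
```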
Fisher's exact test (probability test)
Fisher's exact test can be applied to testing for Hardy–Weinberg proportions. Since the test is conditional on the allele frequencies, p and q, the problem can be viewed as testing for the proper number of heterozygotes. In this way, the hypothesis of Hardy–Weinberg proportions is rejected if the number of heterozygotes is too large or too small. The conditional probabilities for the heterozygote, given the allele frequencies, are given in Emigh (1980) as

P(n12 | n1) = [n! / (n11! n12! n22!)] × 2^(n12) × [n1! n2! / (2n)!]

where n11, n12, n22 are the observed numbers of the three genotypes, AA, Aa, and aa, respectively, and n1 is the number of A alleles, where n1 = 2n11 + n12 (and n2 = 2n22 + n12 is the number of a alleles).
An example
Using one of the examples from Emigh (1980), we can consider the case where n = 100, and p = 0.34. The possible observed heterozygotes and their exact significance levels are given in Table 4.
Using this table, one must look up the significance level of the test based on the observed number of heterozygotes. For example, if one observed 20 heterozygotes, the significance level for the test is 0.007. As is typical for Fisher's exact test for small samples, the gradation of significance levels is quite coarse.
However, a table like this has to be created for every experiment, since the tables are dependent on both n and p.
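Such a table can be generated by enumeration. The Python sketch below normalizes the multinomial weights of the possible heterozygote counts, so the constant factors in the conditional probability cancel; the values n = 100 and p = 0.34 (hence n1 = 68) follow Emigh's example above, but the printed probabilities are illustrative and not taken from his tables:

```python
from math import comb

def heterozygote_probabilities(n, n1):
    """Exact probability of each possible heterozygote count, conditional on the
    sample size n and the number n1 of A alleles, under Hardy-Weinberg proportions."""
    n2 = 2 * n - n1
    weights, total = {}, 0
    # n12 must have the same parity as n1 and leave non-negative homozygote counts
    for n12 in range(n1 % 2, min(n1, n2) + 1, 2):
        n11 = (n1 - n12) // 2
        # Multinomial coefficient for (n11, n12, n22) times 2^n12 allele orderings
        weight = comb(n, n11) * comb(n - n11, n12) * (2 ** n12)
        weights[n12] = weight
        total += weight
    return {k: v / total for k, v in weights.items()}

probs = heterozygote_probabilities(n=100, n1=68)
# Point probability of exactly 20 heterozygotes (a significance level would sum a tail)
print(probs.get(20))
```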
Equivalence tests
The equivalence tests are developed in order to establish sufficiently good agreement of the observed genotype frequencies and Hardy–Weinberg equilibrium. Let M denote the family of the genotype distributions under the assumption of Hardy–Weinberg equilibrium. The distance between a genotype distribution P and Hardy–Weinberg equilibrium is defined by d(P, M) = min over Q in M of d(P, Q), where d is some distance. The equivalence test problem is given by H0: d(P, M) ≥ ε and H1: d(P, M) < ε, where ε is a tolerance parameter. If the hypothesis H0 can be rejected then the population is close to Hardy–Weinberg equilibrium with a high probability. The equivalence tests for the biallelic case are developed among others in Wellek (2004). The equivalence tests for the case of multiple alleles are proposed in Ostrovski (2020).
Inbreeding coefficient
The inbreeding coefficient, F (see also F-statistics), is one minus the observed frequency of heterozygotes over that expected from Hardy–Weinberg equilibrium:

F = 1 − O(f(Aa)) / E(f(Aa))

where the expected value from Hardy–Weinberg equilibrium is given by

E(f(Aa)) = 2pq
For example, for Ford's data above:
For two alleles, the chi-squared goodness of fit test for Hardy–Weinberg proportions is equivalent to the test for inbreeding, F = 0.
The inbreeding coefficient is unstable as the expected value approaches zero, and thus not useful for rare and very common alleles. For E(f(Aa)) = 0, F is undefined.
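A direct computation of the inbreeding coefficient from genotype counts can be sketched as follows (illustrative Python code with made-up counts):

```python
def inbreeding_coefficient(n_AA, n_Aa, n_aa):
    """F = 1 - observed heterozygosity / expected heterozygosity under Hardy-Weinberg."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)
    q = 1 - p
    observed_het = n_Aa / n
    expected_het = 2 * p * q
    if expected_het == 0:
        raise ValueError("F is undefined when the expected heterozygosity is zero")
    return 1 - observed_het / expected_het

# Made-up genotype counts, for illustration only
print(round(inbreeding_coefficient(298, 489, 213), 3))
```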
History
Mendelian genetics was rediscovered in 1900. However, it remained somewhat controversial for several years as it was not then known how it could cause continuous characteristics. Udny Yule (1902) argued against Mendelism because he thought that dominant alleles would increase in the population. The American William E. Castle (1903) showed that without selection, the genotype frequencies would remain stable. Karl Pearson (1903) found one equilibrium position with values of p = q = 0.5. Reginald Punnett, unable to counter Yule's point, introduced the problem to G. H. Hardy, a British mathematician, with whom he played cricket. Hardy was a pure mathematician and held applied mathematics in some contempt; his view of biologists' use of mathematics comes across in his 1908 paper where he describes this as "very simple":
To the Editor of Science: I am reluctant to intrude in a discussion concerning matters of which I have no expert knowledge, and I should have expected the very simple point which I wish to make to have been familiar to biologists. However, some remarks of Mr. Udny Yule, to which Mr. R. C. Punnett has called my attention, suggest that it may still be worth making...
Suppose that Aa is a pair of Mendelian characters, A being dominant, and that in any given generation the number of pure dominants (AA), heterozygotes (Aa), and pure recessives (aa) are as p:2q:r. Finally, suppose that the numbers are fairly large, so that mating may be regarded as random, that the sexes are evenly distributed among the three varieties, and that all are equally fertile. A little mathematics of the multiplication-table type is enough to show that in the next generation the numbers will be as (p + q)2:2(p + q)(q + r):(q + r)2, or as p1:2q1:r1, say.
The interesting question is: in what circumstances will this distribution be the same as that in the generation before? It is easy to see that the condition for this is q2 = pr. And since q12 = p1r1, whatever the values of p, q, and r may be, the distribution will in any case continue unchanged after the second generation
The principle was thus known as Hardy's law in the English-speaking world until 1943, when Curt Stern pointed out that it had first been formulated independently in 1908 by the German physician Wilhelm Weinberg. William Castle in 1903 also derived the ratios for the special case of equal allele frequencies, and it is sometimes (but rarely) called the Hardy–Weinberg–Castle Law.
Derivation of Hardy's equations
Hardy's statement begins with a recurrence relation for the frequencies p, 2q, and r. These recurrence relations follow from fundamental concepts in probability, specifically independence, and conditional probability. For example, consider the probability of an offspring from the generation being homozygous dominant. Alleles are inherited independently from each parent. A dominant allele can be inherited from a homozygous dominant parent with probability 1, or from a heterozygous parent with probability 0.5. To represent this reasoning in an equation, let represent inheritance of a dominant allele from a parent. Furthermore, let and represent potential parental genotypes in the preceding generation.
The same reasoning, applied to the other genotypes yields the two remaining recurrence relations. Equilibrium occurs when each proportion is constant between subsequent generations. More formally, a population is at equilibrium at generation t when

pt+1 = pt, qt+1 = qt, and rt+1 = rt
By solving these equations necessary and sufficient conditions for equilibrium to occur can be determined. Again, consider the frequency of homozygous dominant animals. Equilibrium implies
First consider the case, where , and note that it implies that and . Now consider the remaining case, where :
where the final equality holds because the allele proportions must sum to one. In both cases, q2 = pr. It can be shown that the other two equilibrium conditions imply the same equation. Together, the solutions of the three equilibrium equations imply sufficiency of Hardy's condition for equilibrium. Since the condition always holds for the second generation, all succeeding generations have the same proportions.
Numerical example
Estimation of genotype distribution
An example computation of the genotype distribution given by Hardy's original equations is instructive. The phenotype distribution from Table 3 above will be used to compute Hardy's initial genotype distribution. Note that the p and q values used by Hardy are not the same as those used above.
As checks on the distribution, compute
and
For the next generation, Hardy's equations give
Again as checks on the distribution, compute
and
which are the expected values. The reader may demonstrate that subsequent use of the second-generation values for a third generation will yield identical results.
Estimation of carrier frequency
The Hardy–Weinberg principle can also be used to estimate the frequency of carriers of an autosomal recessive condition in a population based on the frequency of sufferers.
Let us assume an estimated 1 in 2500 babies are born with cystic fibrosis; this is about the frequency of homozygous individuals (q2) observed in Northern European populations. We can use the Hardy–Weinberg equations to estimate the carrier frequency, the frequency of heterozygous individuals, 2pq.
As q is small we can take p, that is 1 − q, to be 1.
We therefore estimate the carrier rate to be 2pq ≈ 2q = 2 × √(1/2500) = 1/25, which is about the frequency observed in Northern European populations.
This can be simplified to the carrier frequency being about twice the square root of the birth frequency.
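As a quick numerical check of this rule of thumb, a short Python sketch (using the same illustrative birth frequency as in the example above):

```python
from math import sqrt

def carrier_frequency(birth_frequency):
    """Approximate heterozygous carrier frequency for a recessive condition with the
    given frequency of affected births, assuming Hardy-Weinberg proportions."""
    q = sqrt(birth_frequency)  # frequency of the recessive allele
    p = 1 - q
    return 2 * p * q           # close to 2 * q when q is small

print(carrier_frequency(1 / 2500))  # roughly 1 in 25
```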
Graphical representation
It is possible to represent the distribution of genotype frequencies for a bi-allelic locus within a population graphically using a de Finetti diagram. This uses a triangular plot (also known as trilinear, triaxial or ternary plot) to represent the distribution of the three genotype frequencies in relation to each other. It differs from many other such plots in that the direction of one of the axes has been reversed. The curved line in the diagram is the Hardy–Weinberg parabola and represents the state where alleles are in Hardy–Weinberg equilibrium. It is possible to represent the effects of natural selection and its effect on allele frequency on such graphs. The de Finetti diagram was developed and used extensively by A. W. F. Edwards in his book Foundations of Mathematical Genetics.
See also
F-statistics
Fixation index
QST_(genetics)
Wahlund effect
Regression toward the mean
Multinomial distribution (Hardy–Weinberg is a trinomial distribution with probabilities (p2, 2pq, q2))
Additive disequilibrium and z statistic
Population genetics
Genetic diversity
Founder effect
Population bottleneck
Genetic drift
Inbreeding depression
Coefficient of inbreeding
Coefficient of relationship
Natural selection
Fitness
Genetic load
Notes
References
Citations
Sources
Edwards, A.W.F. 1977. Foundations of Mathematical Genetics. Cambridge University Press, Cambridge (2nd ed., 2000).
Ford, E.B. (1971). Ecological Genetics, London.
External links
EvolutionSolution (at bottom of page)
Hardy–Weinberg Equilibrium Calculator
genetics Population Genetics Simulator
HARDY C implementation of Guo & Thompson 1992
Source code (C/C++/Fortran/R) for Wigginton et al. 2005
Online de Finetti Diagram Generator and Hardy–Weinberg equilibrium tests
Online Hardy–Weinberg equilibrium tests and drawing of de Finetti diagrams
Hardy–Weinberg Equilibrium Calculator
Population genetics
Classical genetics
Statistical genetics
Sexual selection | 0.765534 | 0.996732 | 0.763032 |
Computer graphics (computer science) | Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
Overview
Computer graphics studies manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
Connected studies include:
Applied mathematics
Computational geometry
Computational topology
Computer vision
Image processing
Information visualization
Scientific visualization
Applications of computer graphics include:
Print design
Digital art
Special effects
Video games
Visual effects
History
There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics.
As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and correspondingly have lower acceptance rates).
Subfields
A broad classification of major subfields in computer graphics might be:
Geometry: ways to represent and process surfaces
Animation: ways to represent and manipulate motion
Rendering: algorithms to reproduce light transport
Imaging: image acquisition or image editing
Geometry
The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).
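To make the idea of a boundary representation concrete, the following Python sketch (an illustration, not a reference implementation) stores a polygonal mesh as a shared list of vertex positions plus triangles that index into it, the indexed-face-set structure most commonly used in practice:

```python
# A tetrahedron as an indexed triangle mesh: vertex positions are sampled
# (Lagrangian) points on the surface, and faces reference them by index.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [
    (0, 2, 1),
    (0, 1, 3),
    (0, 3, 2),
    (1, 2, 3),
]

def face_normal(face):
    """Unnormalized normal of a triangular face via the cross product of two edges."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

print([face_normal(f) for f in faces])
```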
Geometry subfields include:
Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.
Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.
Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.
Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces.
Subdivision surfaces
Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory.
Animation
The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful computationally.
Animation subfields include:
Performance capture
Character animation
Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)
Rendering
Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.
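As a small illustration of the scattering side of this split, the following Python sketch (a toy example, not any particular renderer's code) evaluates a Lambertian (ideal diffuse) reflection model, one of the simplest scattering descriptions a shader can implement:

```python
import math

def lambertian_shade(normal, light_dir, albedo, light_intensity):
    """Reflected radiance from an ideal diffuse surface lit by one directional light."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    # The dot product of unit vectors is the cosine of the angle of incidence
    n, l = normalize(normal), normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    # The Lambertian BRDF is albedo / pi, independent of the viewing direction
    return (albedo / math.pi) * light_intensity * cos_theta

print(lambertian_shade((0, 0, 1), (0, 1, 1), albedo=0.8, light_intensity=3.0))
```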
Rendering subfields include:
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Scattering: Models of scattering (how light interacts with the surface at a given point) and shading (how material properties vary across the surface) are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function (BSDF). The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (There is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
Non-photorealistic rendering
Physically based rendering – concerned with generating images according to the laws of geometric optics
Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs
Relighting – recent area concerned with quickly re-rendering scenes
Notable researchers
Arthur Appel
James Arvo
Brian A. Barsky
Jim Blinn
Jack E. Bresenham
Loren Carpenter
Edwin Catmull
James H. Clark
Robert L. Cook
Franklin C. Crow
Paul Debevec
David C. Evans
Ron Fedkiw
Steven K. Feiner
James D. Foley
David Forsyth
Henry Fuchs
Andrew Glassner
Henri Gouraud (computer scientist)
Donald P. Greenberg
Eric Haines
R. A. Hall
Pat Hanrahan
John Hughes
Jim Kajiya
Takeo Kanade
Kenneth Knowlton
Marc Levoy
Martin Newell (computer scientist)
James O'Brien
Ken Perlin
Matt Pharr
Bui Tuong Phong
Przemyslaw Prusinkiewicz
William Reeves
David F. Rogers
Holly Rushmeier
Peter Shirley
James Sethian
Ivan Sutherland
Demetri Terzopoulos
Kenneth Torrance
Greg Turk
Andries van Dam
Henrik Wann Jensen
Gregory Ward
John Warnock
J. Turner Whitted
Lance Williams
Applications for their use
Bitmap Design / Image Editing
Adobe Photoshop
Corel Photo-Paint
GIMP
Krita
Vector drawing
Adobe Illustrator
CorelDRAW
Inkscape
Affinity Designer
Sketch
Architecture
VariCAD
FreeCAD
AutoCAD
QCAD
LibreCAD
DataCAD
Corel Designer
Video editing
Adobe Premiere Pro
Sony Vegas
Final Cut
DaVinci Resolve
Cinelerra
VirtualDub
Sculpting, Animation, and 3D Modeling
Blender 3D
Wings 3D
ZBrush
Sculptris
SolidWorks
Rhino3D
SketchUp
3ds Max
Cinema 4D
Maya
Houdini
Digital composition
Nuke
Blackmagic Fusion
Adobe After Effects
Natron
Rendering
V-Ray
RedShift
RenderMan
Octane Render
Mantra
Lumion (Architectural visualization)
Other applications examples
ACIS - geometric core
Autodesk Softimage
POV-Ray
Scribus
Silo
Hexagon
Lightwave
See also
Computer facial animation
Computer science
Computer science and engineering
Computer graphics
Digital geometry
Digital image editing
Geometry processing
IBM PCPG, (1980s)
Painter's algorithm
Stanford Bunny
Utah Teapot
References
Further reading
Foley et al. Computer Graphics: Principles and Practice.
Shirley. Fundamentals of Computer Graphics.
Watt. 3D Computer Graphics.
External links
A Critical History of Computer Graphics and Animation
History of Computer Graphics series of articles
Industry
Industrial labs doing "blue sky" graphics research include:
Adobe Advanced Technology Labs
MERL
Microsoft Research – Graphics
Nvidia Research
Major film studios notable for graphics research include:
ILM
PDI/Dreamworks Animation
Pixar
Mean-field theory | In physics and probability theory, Mean-field theory (MFT) or Self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other.
The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost.
MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium.
Origins
The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations.
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean-field".
Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation.
Validity
In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not.
Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.
Formal approach (Hamiltonian)
The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

H = H0 + ΔH

has the following upper bound:

F ≤ F0 ≡ ⟨H⟩0 − T S0

where S0 is the entropy, and F and F0 are Helmholtz free energies. The average ⟨⋯⟩0 is taken over the equilibrium ensemble of the reference system with Hamiltonian H0. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as

H0 = Σi hi(ξi)

where ξi are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation.
For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,

H = Σ(i,j)∈P Vi,j(ξi, ξj)

where P is the set of pairs that interact, the minimising procedure can be carried out formally. Define Tri as the generalized sum of the observable over the degrees of freedom ξi of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by
where P0(i)(ξi) is the probability to find the reference system in the state specified by the variables ξi. This probability is given by the normalized Boltzmann factor

P0(i)(ξi) = e−β hi(ξi) / Z0(i)

where Z0(i) is the partition function. Thus
In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations
where the mean field is given by
Applications
Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.
Ising model
Formal derivation
The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective field Hamiltonian,
the variational free energy is
By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is
which is the ensemble average of spin. This simplifies to
Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins.
Non-interacting spins approximation
Consider the Ising model on a d-dimensional lattice. The Hamiltonian is given by

H = −J Σ⟨i,j⟩ σi σj − h Σi σi

where the Σ⟨i,j⟩ indicates summation over the pair of nearest neighbors ⟨i, j⟩, and σi = ±1 and σj are neighboring Ising spins.
Let us transform our spin variable by introducing the fluctuation from its mean value mi ≡ ⟨σi⟩. We may rewrite the Hamiltonian as

H = −J Σ⟨i,j⟩ (mi + δσi)(mj + δσj) − h Σi σi

where we define δσi ≡ σi − mi; this is the fluctuation of the spin.
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
The mean field approximation consists of neglecting this second-order fluctuation term:

H ≈ HMF ≡ −J Σ⟨i,j⟩ (mi mj + mi δσj + mj δσi) − h Σi σi
These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.
Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant. This yields

mi = m

The summation over neighboring spins can be rewritten as Σ⟨i,j⟩ = (1/2) Σi Σj∈nn(i), where nn(i) means "nearest neighbor of i", and the 1/2 prefactor avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression

HMF = (J m2 N z)/2 − (h + J z m) Σi σi

where z is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field heff = h + J z m, which is the sum of the external field h and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension d, z = 2d).
Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain

Z = e−β J m2 N z / 2 [2 cosh((h + J z m)/(kBT))]N

where N is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization m as a function of heff:

m = tanh((h + J z m)/(kBT))
We thus have two equations between m and heff, allowing us to determine m as a function of temperature. This leads to the following observation:
For temperatures greater than a certain critical value Tc, the only solution is m = 0. The system is paramagnetic.
For T < Tc, there are two non-zero solutions: m = ±m0. The system is ferromagnetic.
Tc is given by the following relation: kBTc = zJ.
This shows that MFT can account for the ferromagnetic phase transition.
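The self-consistency condition can be solved numerically by simple fixed-point iteration. The Python sketch below (a toy illustration with arbitrary parameters, setting kB = 1) iterates the relation m = tanh((h + Jzm)/T) given above and shows how a spontaneous magnetization appears below Tc = zJ:

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, h=0.0, iterations=1000):
    """Solve the mean-field self-consistency equation m = tanh((h + J*z*m)/T) by iteration."""
    m = 0.1  # small initial guess breaks the m = 0 symmetry when h = 0
    for _ in range(iterations):
        m = math.tanh((h + J * z * m) / T)
    return m

# For z = 4 and J = 1 (kB = 1), the mean-field critical temperature is T_c = z*J = 4
for T in (2.0, 3.9, 4.1, 6.0):
    print(T, round(mean_field_magnetization(T), 4))
```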
Application to other systems
Similarly, MFT can be applied to other types of Hamiltonian as in the following cases:
To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap Δ.
The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)).
To determine the elastic properties of a composite material.
Variational minimisation like mean field theory can also be used in statistical inference.
Extension to time-dependent mean fields
In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this isn't always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.
See also
Dynamical mean field theory
Mean field game theory
References
Statistical mechanics
Concepts in physics
Electronic structure methods | 0.769294 | 0.991829 | 0.763007 |
Natural environment | The natural environment or natural world encompasses all biotic and abiotic things occurring naturally, meaning in this case not artificial. The term is most often applied to Earth or some parts of Earth. This environment encompasses the interaction of all living species, climate, weather and natural resources that affect human survival and economic activity.
The concept of the natural environment can be distinguished as components:
Complete ecological units that function as natural systems without massive civilized human intervention, including all vegetation, microorganisms, soil, rocks, plateaus, mountains, the atmosphere and natural phenomena that occur within their boundaries and their nature.
Universal natural resources and physical phenomena that lack clear-cut boundaries, such as air, water and climate, as well as energy, radiation, electric charge and magnetism, not originating from civilized human actions.
In contrast to the natural environment is the built environment. In built environments, such as urban settings and areas of agricultural land conversion, humans have fundamentally transformed landscapes and the natural environment is simplified into a human one. Even acts which seem less extreme, such as building a mud hut or a photovoltaic system in the desert, produce a modified, artificial environment. Though many animals build things to provide a better environment for themselves, they are not human, hence beaver dams and the works of mound-building termites are thought of as natural.
People cannot find absolutely natural environments on Earth, and naturalness usually varies in a continuum, from 100% natural in one extreme to 0% natural in the other. The massive environmental changes of humanity in the Anthropocene have fundamentally affected all natural environments including: climate change, biodiversity loss and pollution from plastic and other chemicals in the air and water. More precisely, we can consider the different aspects or components of an environment, and see that their degree of naturalness is not uniform. For instance, in an agricultural field, the mineralogic composition of the soil may be similar to that of an undisturbed forest soil, while its structure is quite different.
Composition
Earth science generally recognizes four spheres, the lithosphere, the hydrosphere, the atmosphere and the biosphere as correspondent to rocks, water, air and life respectively. Some scientists include as part of the spheres of the Earth, the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere, as well as the pedosphere (to soil) as an active and intermixed sphere. Earth science (also known as geoscience, the geographical sciences or the Earth Sciences), is an all-embracing term for the sciences related to the planet Earth. There are four major disciplines in earth sciences, namely geography, geology, geophysics and geodesy. These major disciplines use physics, chemistry, biology, chronology and mathematics to build a qualitative and quantitative understanding of the principal areas or spheres of Earth.
Geological activity
The Earth's crust or lithosphere, is the outermost solid surface of the planet and is chemically, physically and mechanically different from underlying mantle. It has been generated greatly by igneous processes in which magma cools and solidifies to form solid rock. Beneath the lithosphere lies the mantle which is heated by the decay of radioactive elements. The mantle though solid is in a state of rheic convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Volcanoes result primarily from the melting of subducted crust material or of rising mantle at mid-ocean ridges and mantle plumes.
Water on Earth
Most water is found in various kinds of natural body of water.
Oceans
An ocean is a major body of saline water and a component of the hydrosphere. Approximately 71% of the surface of the Earth (an area of some 362 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 meters (9,800 ft) deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several separate oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. The deep seabeds are more than half the Earth's surface, and are among the least-modified natural environments. The major oceanic divisions are defined in part by the continents, various archipelagos and other criteria, these divisions are : (in descending order of size) the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean and the Arctic Ocean.
Rivers
A river is a natural watercourse, usually freshwater, flowing toward an ocean, a lake, a sea or another river. A few rivers simply flow into the ground and dry up completely without reaching another body of water.
The water in a river is usually in a channel, made up of a stream bed between banks. In larger rivers there is often also a wider floodplain shaped by waters over-topping the channel. Flood plains may be very wide in relation to the size of the river channel. Rivers are a part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs and the release of water stored in glaciers and snowpacks.
Small rivers may also be called by several other names, including stream, creek and brook. Their current is confined within a bed and stream banks. Streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology.
Lakes
A lake (from Latin lacus) is a terrain feature, a body of water that is localized to the bottom of basin. A body of water is considered a lake when it is inland, is not part of an ocean and is larger and deeper than a pond.
Natural lakes on Earth are generally found in mountainous areas, rift zones and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.
Ponds
A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by their current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools.
Human impact on water
Humans impact the water in different ways such as modifying rivers (through dams and stream channelization), urbanization and deforestation. These impact lake levels, groundwater conditions, water pollution, thermal pollution, and marine pollution. Humans modify rivers through direct channel manipulation: they build dams and reservoirs and manipulate the direction of rivers and the paths of water. Dams can usefully create reservoirs and hydroelectric power. However, reservoirs and dams may negatively impact the environment and wildlife. Dams stop fish migration and the movement of organisms downstream. Urbanization affects the environment because of deforestation and changing lake levels, groundwater conditions, etc. Deforestation and urbanization go hand in hand. Deforestation may cause flooding, declining stream flow and changes in riverside vegetation. The changing vegetation occurs because when trees cannot get adequate water they start to deteriorate, leading to a decreased food supply for the wildlife in an area.
Atmosphere, climate and weather
The atmosphere of the Earth serves as a key factor in sustaining the planetary ecosystem. The thin layer of gases that envelops the Earth is held in place by the planet's gravity. Dry air consists of 78% nitrogen, 21% oxygen, 1% argon, inert gases and carbon dioxide. The remaining gases are often referred to as trace gases. The atmosphere includes greenhouse gases such as carbon dioxide, methane, nitrous oxide and ozone. Filtered air includes trace amounts of many other chemical compounds. Air also contains a variable amount of water vapor and suspensions of water droplets and ice crystals seen as clouds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash and meteoroids. Various industrial pollutants also may be present, such as chlorine (elementary or in compounds), fluorine compounds, elemental mercury, and sulphur compounds such as sulphur dioxide (SO2).
The ozone layer of the Earth's atmosphere plays an important role in reducing the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.
Layers of the atmosphere
Principal layers
Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From highest to lowest, these layers are:
Exosphere: The outermost layer of Earth's atmosphere extends from the exobase upward, mainly composed of hydrogen and helium.
Thermosphere: The top of the thermosphere is the bottom of the exosphere, called the exobase. Its height varies with solar activity and ranges from about . The International Space Station orbits in this layer, between . Put another way, the thermosphere is Earth's second-highest atmospheric layer, extending from approximately 260,000 feet at the mesopause to the thermopause at altitudes ranging from 1,600,000 to 3,300,000 feet.
Mesosphere: The mesosphere extends from the stratopause to . It is the layer where most meteors burn up upon entering the atmosphere.
Stratosphere: The stratosphere extends from the tropopause to about . The stratopause, which is the boundary between the stratosphere and mesosphere, typically is at .
Troposphere: The troposphere begins at the surface and extends to between at the poles and at the equator, with some variation due to weather. The troposphere is mostly heated by transfer of energy from the surface, so on average the lowest part of the troposphere is warmest and temperature decreases with altitude. The tropopause is the boundary between the troposphere and stratosphere.
Other layers
Within the five principal layers determined by temperature there are several layers determined by other properties.
The ozone layer is contained within the stratosphere. It is mainly located in the lower portion of the stratosphere from about , though the thickness varies seasonally and geographically. About 90% of the ozone in our atmosphere is contained in the stratosphere.
The ionosphere: The part of the atmosphere that is ionized by solar radiation, stretches from and typically overlaps both the exosphere and the thermosphere. It forms the inner edge of the magnetosphere.
The homosphere and heterosphere: The homosphere includes the troposphere, stratosphere and mesosphere. The upper part of the heterosphere is composed almost completely of hydrogen, the lightest element.
The planetary boundary layer is the part of the troposphere that is nearest the Earth's surface and is directly affected by it, mainly through turbulent diffusion.
Effects of global warming
The dangers of global warming are being increasingly studied by a wide global consortium of scientists. These scientists are increasingly concerned about the potential long-term effects of global warming on our natural environment and on the planet. Of particular concern is how climate change and global warming caused by anthropogenic, or human-made releases of greenhouse gases, most notably carbon dioxide, can act interactively and have adverse effects upon the planet, its natural environment and humans' existence. It is clear the planet is warming, and warming rapidly. This is due to the greenhouse effect, which is caused by greenhouse gases, which trap heat inside the Earth's atmosphere because of their more complex molecular structure which allows them to vibrate and in turn trap heat and release it back towards the Earth. This warming is also responsible for the extinction of natural habitats, which in turn leads to a reduction in wildlife population. The most recent report from the Intergovernmental Panel on Climate Change (the group of the leading climate scientists in the world) concluded that the earth will warm anywhere from 2.7 to almost 11 degrees Fahrenheit (1.5 to 6 degrees Celsius) between 1990 and 2100.
Efforts have been increasingly focused on the mitigation of greenhouse gases that are causing climatic changes, and on developing adaptive strategies to global warming to assist humans, other animal and plant species, ecosystems, regions and nations in adjusting to the effects of global warming. Some examples of recent collaboration to address climate change and global warming include:
The United Nations Framework Convention on Climate Change, an international treaty to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.
The Kyoto Protocol, which is the protocol to the international Framework Convention on Climate Change treaty, again with the objective of reducing greenhouse gases in an effort to prevent anthropogenic climate change.
The Western Climate Initiative, to identify, evaluate, and implement collective and cooperative ways to reduce greenhouse gases in the region, focusing on a market-based cap-and-trade system.
A particularly profound challenge is to distinguish natural environmental dynamics from environmental changes that fall outside natural variability. A common solution is to adopt a static view that neglects natural variation. Methodologically, this view can be defended when looking at slowly changing processes and short time series, but the problem arises when fast processes become essential to the object of study.
Climate
Climate looks at the statistics of temperature, humidity, atmospheric pressure, wind, rainfall, atmospheric particle count and other meteorological elements in a given region over long periods of time. Weather, on the other hand, is the present condition of these same elements over periods up to two weeks.
Climates can be classified according to the average and typical ranges of different variables, most commonly temperature and precipitation. The most commonly used classification scheme is the one originally developed by Wladimir Köppen. The Thornthwaite system, in use since 1948, uses evapotranspiration as well as temperature and precipitation information to study animal species diversity and the potential impacts of climate changes.
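As an illustration of classifying climate from temperature alone, the sketch below assigns the first letter of a Köppen-style class from monthly mean temperatures. It is deliberately simplified: the real Köppen scheme also uses precipitation (for the arid B climates and for subtype letters), some variants use −3 °C instead of 0 °C for the C/D boundary, and the function and variable names are ours.

```python
def koppen_first_letter(monthly_mean_temps_c):
    """Very simplified Koppen-style main class from 12 monthly mean temperatures (deg C).

    Ignores precipitation, so arid (B) climates cannot be detected here.
    """
    warmest = max(monthly_mean_temps_c)
    coldest = min(monthly_mean_temps_c)
    if warmest < 10:
        return "E"  # polar
    if coldest >= 18:
        return "A"  # tropical
    if coldest >= 0:   # some definitions use -3 C here
        return "C"  # temperate
    return "D"      # continental

# Example: a mid-latitude station with warm summers and freezing winters
example = [-5, -3, 2, 8, 14, 19, 22, 21, 16, 9, 3, -2]
print(koppen_first_letter(example))  # D
```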
Weather
Weather is a set of all the phenomena occurring in a given atmospheric area at a given time. Most weather phenomena occur in the troposphere, just below the stratosphere. Weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for the average atmospheric conditions over longer periods of time. When used without qualification, "weather" is understood to be the weather of Earth.
Weather occurs due to density (temperature and moisture) differences between one place and another. These differences can occur due to the sun angle at any particular spot, which varies with latitude and distance from the tropics. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. On the Earth's surface, temperatures usually range ±40 °C (−40 °F to 104 °F) annually. Over thousands of years, changes in the Earth's orbit have affected the amount and distribution of solar energy received by the Earth and influenced long-term climate.
Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes due to differences in compressional heating. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. The atmosphere is a chaotic system, and small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout human history, and there is evidence that civilized human activity such as agriculture and industry has inadvertently modified weather patterns.
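To make the altitude–temperature–pressure relationships concrete, the sketch below applies two standard approximations: the dry adiabatic lapse rate (about 9.8 °C of cooling per kilometre for a rising, unsaturated air parcel) and the isothermal barometric formula for pressure. Both are textbook idealisations rather than a forecast model, and the numerical inputs are only illustrative.

```python
import math

DRY_ADIABATIC_LAPSE = 9.8      # K per km, cooling of a rising unsaturated parcel
SEA_LEVEL_PRESSURE = 101325.0  # Pa
SCALE_HEIGHT_KM = 8.5          # typical atmospheric scale height (isothermal approx.)

def parcel_temperature(surface_temp_c, altitude_km):
    """Temperature of a dry air parcel lifted adiabatically from the surface."""
    return surface_temp_c - DRY_ADIABATIC_LAPSE * altitude_km

def pressure(altitude_km):
    """Pressure from the isothermal barometric formula."""
    return SEA_LEVEL_PRESSURE * math.exp(-altitude_km / SCALE_HEIGHT_KM)

for h in (0, 1, 3, 5):
    print(f"{h} km: T ~ {parcel_temperature(20.0, h):5.1f} C, p ~ {pressure(h)/100:7.1f} hPa")
```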
Life
Evidence suggests that life on Earth has existed for about 3.7 billion years. All known life forms share fundamental molecular mechanisms, and based on these observations, theories on the origin of life attempt to find a mechanism explaining the formation of a primordial single cell organism from which all life originates. There are many different hypotheses regarding the path that might have been taken from simple organic molecules via pre-cellular life to protocells and metabolism.
Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli and reproduction. Life may also be said to be simply the characteristic state of organisms. In biology, the science of living organisms, "life" is the condition which distinguishes active organisms from inorganic matter, including the capacity for growth, functional activity and the continual change preceding death.
A diverse variety of living organisms (life forms) can be found in the biosphere on Earth, and properties common to these organisms—plants, animals, fungi, protists, archaea, and bacteria—are a carbon- and water-based cellular form with complex organization and heritable genetic information. Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations. More complex living organisms can communicate through various means.
Ecosystems
An ecosystem (also called an environment) is a natural unit consisting of all plants, animals, and micro-organisms (biotic factors) in an area functioning together with all of the non-living physical (abiotic) factors of the environment.
Central to the ecosystem concept is the idea that living organisms are continually engaged in a highly interrelated set of relationships with every other element constituting the environment in which they exist. Eugene Odum, one of the founders of the science of ecology, stated: "Any unit that includes all of the organisms (i.e.: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem."
The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy, and the emergent premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.
A more significant number or variety of species or biological diversity of an ecosystem may contribute to greater resilience of an ecosystem because there are more species present at a location to respond to change and thus "absorb" or reduce its effects. This reduces the effect before the ecosystem's structure changes to a different state. This is not universally the case and there is no proven relationship between the species diversity of an ecosystem and its ability to provide goods and services on a sustainable level.
The term ecosystem can also pertain to human-made environments, such as human ecosystems and human-influenced ecosystems. It can describe any situation where there is relationship between living organisms and their environment. Fewer areas on the surface of the earth today exist free from human contact, although some genuine wilderness areas continue to exist without any forms of human intervention.
Biogeochemical cycles
Global biogeochemical cycles are critical to life, most notably those of water, oxygen, carbon, nitrogen and phosphorus.
The nitrogen cycle is the transformation of nitrogen and nitrogen-containing compounds in nature. It is a cycle which includes gaseous components.
The water cycle, is the continuous movement of water on, above, and below the surface of the Earth. Water can change states among liquid, vapour, and ice at various places in the water cycle. Although the balance of water on Earth remains fairly constant over time, individual water molecules can come and go.
The carbon cycle is the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of the Earth.
The oxygen cycle is the movement of oxygen within and between its three main reservoirs: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for the modern Earth's atmospheric composition and life.
The phosphorus cycle is the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. The atmosphere does not play a significant role in the movements of phosphorus, because phosphorus and phosphorus compounds are usually solids at the typical ranges of temperature and pressure found on Earth.
Wilderness
Wilderness is generally defined as a natural environment on Earth that has not been significantly modified by human activity. The WILD Foundation goes into more detail, defining wilderness as: "The most intact, undisturbed wild natural areas left on our planet – those last truly wild places that humans do not control and have not developed with roads, pipelines or other industrial infrastructure." Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, solitude, and recreation. Wilderness is deeply valued for cultural, spiritual, moral, and aesthetic reasons. Some nature writers believe wilderness areas are vital for the human spirit and creativity.
The word, "wilderness", derives from the notion of wildness; in other words that which is not controllable by humans. The word's etymology is from the Old English wildeornes, which in turn derives from wildeor meaning wild beast (wild + deor = beast, deer). From this point of view, it is the wildness of a place that makes it a wilderness. The mere presence or activity of people does not disqualify an area from being "wilderness". Many ecosystems that are, or have been, inhabited or influenced by activities of people may still be considered "wild". This way of looking at wilderness includes areas within which natural processes operate without very noticeable human interference.
Wildlife includes all non-domesticated plants, animals and other organisms. Domesticating wild plant and animal species for human benefit has occurred many times all over the planet, and has a major impact on the environment, both positive and negative. Wildlife can be found in all ecosystems. Deserts, rain forests, plains, and other areas—including the most developed urban sites—all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by civilized human factors, most scientists agree that wildlife around the world is (now) impacted by human activities.
Challenges
It is the common understanding of natural environment that underlies environmentalism — a broad political, social and philosophical movement that advocates various actions and policies in the interest of protecting what nature remains in the natural environment, or restoring or expanding the role of nature in this environment. While true wilderness is increasingly rare, wild nature (e.g., unmanaged forests, uncultivated grasslands, wildlife, wildflowers) can be found in many locations previously inhabited by humans.
Goals for the benefit of people and natural systems, commonly expressed by environmental scientists and environmentalists include:
Elimination of pollution and toxicants in air, water, soil, buildings, manufactured goods, and food.
Preservation of biodiversity and protection of endangered species.
Conservation and sustainable use of resources such as water, land, air, energy, raw materials, and natural resources.
Halting human-induced global warming, which represents pollution, a threat to biodiversity, and a threat to human populations.
Shifting from fossil fuels to renewable energy in electricity, heating and cooling, and transportation, which addresses pollution, global warming, and sustainability. This may include public transportation and distributed generation, which have benefits for traffic congestion and electric reliability.
Shifting from meat-intensive diets to largely plant-based diets in order to help mitigate biodiversity loss and climate change.
Establishment of nature reserves for recreational purposes and ecosystem preservation.
Sustainable and less polluting waste management including waste reduction (or even zero waste), reuse, recycling, composting, waste-to-energy, and anaerobic digestion of sewage sludge.
Reducing profligate consumption and clamping down on illegal fishing and logging.
Slowing and stabilisation of human population growth.
Reducing the import of second hand electronic appliances from developed countries to developing countries.
Criticism
In some cultures the term environment is meaningless because there is no separation between people and what they view as the natural world, or their surroundings. Specifically in the United States and Arabian countries many native cultures do not recognize the "environment", or see themselves as environmentalists.
See also
Biophilic design
Citizen's dividend
Conservation movement
Environmental history of the United States
Gaia hypothesis
Geological engineering
Greening
Index of environmental articles
List of conservation topics
List of environmental books
List of environmental issues
List of environmental websites
Natural capital
Natural history
Natural landscape
Nature-based solutions
Sustainability
Sustainable agriculture
Timeline of environmental history
References
Further reading
Allaby, Michael, and Chris Park, eds. A dictionary of environment and conservation (Oxford University Press, 2013), with a British emphasis.
External links
UNEP - United Nations Environment Programme
BBC - Science and Nature.
Science.gov – Environment & Environmental Quality
Habitat
Earth
Exothermic reaction
In thermochemistry, an exothermic reaction is a "reaction for which the overall standard enthalpy change ΔH⚬ is negative." Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as "... a reaction for which the overall standard Gibbs energy change ΔG⚬ is negative." A strongly exothermic reaction will usually also be exergonic because ΔH⚬ makes a major contribution to ΔG⚬. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.
Examples
Examples are numerous: combustion, the thermite reaction, combining strong acids and bases, polymerizations. As an example in everyday life, hand warmers make use of the oxidation of iron to achieve an exothermic reaction:
4Fe + 3O2 → 2Fe2O3 ΔH⚬ = −1648 kJ/mol
A particularly important class of exothermic reactions is combustion of a hydrocarbon fuel, e.g. the burning of natural gas:
CH4 + 2O2 → CO2 + 2H2O ΔH⚬ = −890 kJ/mol
These sample reactions are strongly exothermic.
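To put the methane figure in practical terms, the short sketch below converts the standard enthalpy of combustion given above (−890 kJ per mole of CH4) into the heat released per kilogram of fuel; the molar mass is the standard value and the script is only a unit-conversion illustration.

```python
# Heat released by burning methane, from the standard enthalpy of combustion above.
DELTA_H_COMBUSTION = -890.0   # kJ per mol CH4 (negative: heat is released)
MOLAR_MASS_CH4 = 16.04        # g per mol

heat_per_kg_kj = -DELTA_H_COMBUSTION / MOLAR_MASS_CH4 * 1000.0
print(f"Burning 1 kg of CH4 releases about {heat_per_kg_kj/1000:.1f} MJ")  # ~55.5 MJ
```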
Uncontrolled exothermic reactions, those leading to fires and explosions, are wasteful because it is difficult to capture the released energy. Nature effects combustion reactions under highly controlled conditions, avoiding fires and explosions, in aerobic respiration so as to capture the released energy, e.g. for the formation of ATP.
Measurement
The enthalpy of a chemical system is essentially its energy. The enthalpy change ΔH for a reaction is equal to the heat q transferred out of (or into) a closed system at constant pressure without in- or output of electrical energy. Heat production or absorption in a chemical reaction is measured using calorimetry, e.g. with a bomb calorimeter. One common laboratory instrument is the reaction calorimeter, where the heat flow from or into the reaction vessel is monitored. The heat release and corresponding energy change, ΔcombH, of a combustion reaction can be measured particularly accurately.
The measured heat energy released in an exothermic reaction is converted to ΔH⚬ in Joule per mole (formerly cal/mol). The standard enthalpy change ΔH⚬ is essentially the enthalpy change when the stoichiometric coefficients in the reaction are considered as the amounts of reactants and products (in mole); usually, the initial and final temperature is assumed to be 25 °C. For gas-phase reactions, ΔH⚬ values are related to bond energies to a good approximation by:
ΔH⚬ = total bond energy of reactants − total bond energy of products
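As a worked example of this approximation, the sketch below estimates ΔH⚬ for the combustion of methane from typical average bond enthalpies (tabulated values vary slightly between textbooks, so the result only roughly reproduces the measured −890 kJ/mol).

```python
# Typical average bond enthalpies in kJ/mol (textbook values; tables differ slightly).
BOND_ENTHALPY = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

# CH4 + 2 O2 -> CO2 + 2 H2O
bonds_broken = 4 * BOND_ENTHALPY["C-H"] + 2 * BOND_ENTHALPY["O=O"]   # reactants
bonds_formed = 2 * BOND_ENTHALPY["C=O"] + 4 * BOND_ENTHALPY["O-H"]   # products

delta_h = bonds_broken - bonds_formed   # reactants minus products, as above
print(f"Estimated ΔH ~ {delta_h} kJ/mol (measured: about -890 kJ/mol)")
```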
In an exothermic reaction, by definition, the enthalpy change has a negative value:
ΔH = Hproducts − Hreactants < 0
where a larger value (the higher energy of the reactants) is subtracted from a smaller value (the lower energy of the products). For example, when hydrogen burns:
2H2 (g) + O2 (g) → 2H2O (g)
ΔH⚬ = −483.6 kJ/mol
See also
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Exergonic
Endergonic reaction
Exergonic reaction
Exothermic process
Endothermic reaction
Endotherm
References
External links
Thermochemistry
Vitality
Vitality is the capacity to live, grow, or develop. Vitality is also the characteristic that distinguishes living from non-living things. To experience vitality is regarded as a basic psychological drive and, in philosophy, a component to the will to live. As such, people seek to maximize their vitality or their experience of vitality—that which corresponds to an enhanced physiological capacity and mental state.
Overview
The pursuit and maintenance of health and vitality have been at the forefront of medicine and natural philosophy throughout history. Life depends upon various biological processes known as vital processes. Historically, these vital processes have been viewed as having either mechanistic or non-mechanistic causes. The latter point of view is characteristic of vitalism, the doctrine that the phenomena of life cannot be explained by purely chemical and physical mechanisms.
Prior to the 19th century, theoreticians often held that human lifespan had been less limited in the past, and that aging was due to a loss of, and failure to maintain, vitality.
A commonly held view was that people are born with finite vitality, which diminishes over time until illness and debility set in, and finally death.
Religion
In traditional cultures, the capacity for life is often directly equated with the soul or the breath. This can be found in the Hindu concept of prana, where vitality in the body derives from a subtle principle in the air and in food, as well as in Hebrew and ancient Greek texts.
Jainism
Vitality and DNA damage
Low vitality or fatigue is a common complaint by older patients and may reflect an underlying medical illness. Vitality level was measured in 2,487 Copenhagen patients using a standardized, subjective, self-reported vitality scale and was found to be inversely related to DNA damage (as measured in peripheral blood mononuclear cells). DNA damage indicates cellular dysfunction.
See also
Urban vitality
Vitalism
References
Jain philosophical concepts
Natural philosophy
Philosophy of life
Quality of life
Limnology
Limnology is the study of inland aquatic ecosystems.
The study of limnology includes aspects of the biological, chemical, physical, and geological characteristics of fresh and saline, natural and man-made bodies of water. This includes the study of lakes, reservoirs, ponds, rivers, springs, streams, wetlands, and groundwater. Water systems are often categorized as either running (lotic) or standing (lentic).
Limnology includes the study of the drainage basin, movement of water through the basin and biogeochemical changes that occur en route. A more recent sub-discipline of limnology, termed landscape limnology, studies, manages, and seeks to conserve these ecosystems using a landscape perspective, by explicitly examining connections between an aquatic ecosystem and its drainage basin. Recently, the need to understand global inland waters as part of the Earth system created a sub-discipline called global limnology. This approach considers processes in inland waters on a global scale, like the role of inland aquatic ecosystems in global biogeochemical cycles.
Limnology is closely related to aquatic ecology and hydrobiology, which study aquatic organisms and their interactions with the abiotic (non-living) environment. While limnology has substantial overlap with freshwater-focused disciplines (e.g., freshwater biology), it also includes the study of inland salt lakes.
History
The term limnology was coined by François-Alphonse Forel (1841–1912) who established the field with his studies of Lake Geneva. Interest in the discipline rapidly expanded, and in 1922 August Thienemann (a German zoologist) and Einar Naumann (a Swedish botanist) co-founded the International Society of Limnology (SIL, from Societas Internationalis Limnologiae). Forel's original definition of limnology, "the oceanography of lakes", was expanded to encompass the study of all inland waters, and influenced Benedykt Dybowski's work on Lake Baikal.
Prominent early American limnologists included G. Evelyn Hutchinson and Ed Deevey. At the University of Wisconsin-Madison, Edward A. Birge, Chancey Juday, Charles R. Goldman, and Arthur D. Hasler contributed to the development of the Center for Limnology.
General limnology
Physical properties
Physical properties of aquatic ecosystems are determined by a combination of heat, currents, waves and other seasonal distributions of environmental conditions. The morphometry of a body of water depends on the type of feature (such as a lake, river, stream, wetland, estuary etc.) and the structure of the earth surrounding the body of water. Lakes, for instance, are classified by their formation, and zones of lakes are defined by water depth. River and stream system morphometry is driven by underlying geology of the area as well as the general velocity of the water. Stream morphometry is also influenced by topography (especially slope) as well as precipitation patterns and other factors such as vegetation and land development. Connectivity between streams and lakes relates to the landscape drainage density, lake surface area and lake shape.
Other types of aquatic systems which fall within the study of limnology are estuaries. Estuaries are bodies of water classified by the interaction of a river and the ocean or sea. Wetlands vary in size, shape, and pattern however the most common types, marshes, bogs and swamps, often fluctuate between containing shallow, freshwater and being dry depending on the time of year. The volume and quality of water in underground aquifers rely on the vegetation cover, which fosters recharge and aids in maintaining water quality.
Light interactions
Light zonation is the concept of how the amount of sunlight penetration into water influences the structure of a body of water. These zones define various levels of productivity within an aquatic ecosystems such as a lake. For instance, the depth of the water column which sunlight is able to penetrate and where most plant life is able to grow is known as the photic or euphotic zone. The rest of the water column which is deeper and does not receive sufficient amounts of sunlight for plant growth is known as the aphotic zone. The amount of solar energy present underwater and the spectral quality of the light that are present at various depths have a significant impact on the behavior of many aquatic organisms. For example, zooplankton's vertical migration is influenced by solar energy levels.
Thermal stratification
Similar to light zonation, thermal stratification or thermal zonation is a way of grouping parts of the water body within an aquatic system based on the temperature of different lake layers. The less turbid the water, the more light is able to penetrate, and thus heat is conveyed deeper in the water. Heating declines exponentially with depth in the water column, so the water will be warmest near the surface but progressively cooler moving downwards. There are three main sections that define thermal stratification in a lake. The epilimnion is closest to the water surface and absorbs long- and shortwave radiation to warm the water surface. During cooler months, wind shear can contribute to cooling of the water surface. The thermocline is an area within the water column where water temperatures rapidly decrease. The bottom layer is the hypolimnion, which tends to have the coldest water because its depth restricts sunlight from reaching it. In temperate lakes, fall-season cooling of surface water results in turnover of the water column, where the thermocline is disrupted, and the lake temperature profile becomes more uniform. In cold climates, when water cools below 4 °C (the temperature of maximum density) many lakes can experience an inverse thermal stratification in winter. These lakes are often dimictic, with a brief spring overturn in addition to longer fall overturn. The relative thermal resistance is the energy needed to mix these strata of different temperatures.
Lake heat budget
An annual heat budget, also written θa, is the total amount of heat needed to raise the water from its minimum winter temperature to its maximum summer temperature. It can be calculated by integrating over depth the area of the lake at each depth interval (Az) multiplied by the difference between the summer (θsz) and winter (θwz) temperatures at that depth, i.e. the integral of Az(θsz − θwz) with respect to depth.
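A minimal numerical version of this calculation is sketched below: given summer and winter temperature profiles and the lake area at a series of depths, it approximates the integral of Az(θsz − θwz) over depth with the trapezoidal rule. The depth, area and temperature numbers are invented for illustration, and real heat budgets are usually also normalised by the lake's surface area and converted to heat units using the density and specific heat of water.

```python
# Hypothetical hypsographic and temperature data for a small lake.
depths_m       = [0,      5,      10,     15,    20]     # depth below surface
areas_m2       = [1.0e6,  8.0e5,  5.0e5,  2.0e5, 0.0]    # lake area at each depth
summer_temps_c = [22.0,   18.0,   10.0,   6.0,   5.0]
winter_temps_c = [2.0,    3.0,    4.0,    4.0,   4.0]

def annual_heat_budget(depths, areas, t_summer, t_winter):
    """Trapezoidal approximation of the integral of Az*(theta_sz - theta_wz) dz."""
    total = 0.0
    for i in range(len(depths) - 1):
        f0 = areas[i] * (t_summer[i] - t_winter[i])
        f1 = areas[i + 1] * (t_summer[i + 1] - t_winter[i + 1])
        total += 0.5 * (f0 + f1) * (depths[i + 1] - depths[i])
    return total  # units here: m^3 * K (multiply by rho * c of water for joules)

print(f"{annual_heat_budget(depths_m, areas_m2, summer_temps_c, winter_temps_c):.3g} m^3 K")
```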
Chemical properties
The chemical composition of water in aquatic ecosystems is influenced by natural characteristics and processes including precipitation, underlying soil and bedrock in the drainage basin, erosion, evaporation, and sedimentation. All bodies of water have a certain composition of both organic and inorganic elements and compounds. Biological reactions also affect the chemical properties of water. In addition to natural processes, human activities strongly influence the chemical composition of aquatic systems and their water quality.
Allochthonous sources of carbon or nutrients come from outside the aquatic system (such as plant and soil material). Carbon sources from within the system, such as algae and the microbial breakdown of aquatic particulate organic carbon, are autochthonous. In aquatic food webs, the portion of biomass derived from allochthonous material is then named "allochthony". In streams and small lakes, allochthonous sources of carbon are dominant while in large lakes and the ocean, autochthonous sources dominate.
Oxygen and carbon dioxide
Dissolved oxygen and dissolved carbon dioxide are often discussed together due to their coupled role in respiration and photosynthesis. Dissolved oxygen concentrations can be altered by physical, chemical, and biological processes and reactions. Physical processes including wind mixing can increase dissolved oxygen concentrations, particularly in surface waters of aquatic ecosystems. Because dissolved oxygen solubility is linked to water temperature, changes in temperature affect dissolved oxygen concentrations, as warmer water has a lower capacity to "hold" oxygen than colder water. Biologically, both photosynthesis and aerobic respiration affect dissolved oxygen concentrations. Photosynthesis by autotrophic organisms, such as phytoplankton and aquatic algae, increases dissolved oxygen concentrations while simultaneously reducing carbon dioxide concentrations, since carbon dioxide is taken up during photosynthesis. All aerobic organisms in the aquatic environment take up dissolved oxygen during aerobic respiration, while carbon dioxide is released as a byproduct of this reaction. Because photosynthesis is light-limited, both photosynthesis and respiration occur during the daylight hours, while only respiration occurs during dark hours or in dark portions of an ecosystem. The balance between dissolved oxygen production and consumption is calculated as the aquatic metabolism rate.
Vertical changes in the concentrations of dissolved oxygen are affected by both wind mixing of surface waters and the balance between photosynthesis and respiration of organic matter. These vertical changes, known as profiles, are based on similar principles as thermal stratification and light penetration. As light availability decreases deeper in the water column, photosynthesis rates also decrease, and less dissolved oxygen is produced. This means that dissolved oxygen concentrations generally decrease with depth, because photosynthesis is not replenishing the dissolved oxygen that is being taken up through respiration. During periods of thermal stratification, water density gradients prevent oxygen-rich surface waters from mixing with deeper waters. Prolonged periods of stratification can result in the depletion of bottom-water dissolved oxygen; when dissolved oxygen concentrations are below 2 milligrams per liter, waters are considered hypoxic. When dissolved oxygen concentrations are approximately 0 milligrams per liter, conditions are anoxic. Both hypoxic and anoxic waters reduce available habitat for organisms that respire oxygen, and contribute to changes in other chemical reactions in the water.
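The hypoxia and anoxia thresholds mentioned above translate directly into a simple classification, sketched below; the function name and the exact cut-off used for "anoxic" are ours (in practice, values very close to zero are treated as anoxic).

```python
def oxygen_status(dissolved_oxygen_mg_per_l: float) -> str:
    """Classify water by dissolved oxygen concentration (mg/L)."""
    if dissolved_oxygen_mg_per_l <= 0.1:   # effectively zero
        return "anoxic"
    if dissolved_oxygen_mg_per_l < 2.0:    # below ~2 mg/L
        return "hypoxic"
    return "oxygenated"

for value in (8.5, 1.4, 0.0):
    print(value, "->", oxygen_status(value))
```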
Nitrogen and phosphorus
Nitrogen and phosphorus are ecologically significant nutrients in aquatic systems. Nitrogen is generally present as a gas in aquatic ecosystems however most water quality studies tend to focus on nitrate, nitrite and ammonia levels. Most of these dissolved nitrogen compounds follow a seasonal pattern with greater concentrations in the fall and winter months compared to the spring and summer. Phosphorus has a different role in aquatic ecosystems as it is a limiting factor in the growth of phytoplankton because of generally low concentrations in the water. Dissolved phosphorus is also crucial to all living things, is often very limiting to primary productivity in freshwater, and has its own distinctive ecosystem cycling.
Biological properties
Role in ecology
Lakes "are relatively easy to sample, because they have clear-cut boundaries (compared to terrestrial ecosystems) and because field experiments are relatively easy to perform", which makes them especially useful for ecologists who try to understand ecological dynamics.
Lake trophic classification
One way to classify lakes (or other bodies of water) is with the trophic state index. An oligotrophic lake is characterized by relatively low levels of primary production and low levels of nutrients. A eutrophic lake has high levels of primary productivity due to very high nutrient levels. Eutrophication of a lake can lead to algal blooms. Dystrophic lakes have high levels of humic matter and typically have yellow-brown, tea-coloured waters. These categories do not have rigid specifications; the classification system can be seen as more of a spectrum encompassing the various levels of aquatic productivity.
Tropical limnology
Tropical limnology is a unique and important subfield of limnology that focuses on the distinct physical, chemical, biological, and cultural aspects of freshwater systems in tropical regions. The physical and chemical properties of tropical aquatic environments are different from those in temperate regions, with warmer and more stable temperatures, higher nutrient levels, and more complex ecological interactions. Moreover, the biodiversity of tropical freshwater systems is typically higher, human impacts are often more severe, and there are important cultural and socioeconomic factors that influence the use and management of these systems.
Professional organizations
People who study limnology are called limnologists. These scientists largely study the characteristics of inland fresh-water systems such as lakes, rivers, streams, ponds and wetlands. They may also study non-oceanic bodies of salt water, such as the Great Salt Lake. There are many professional organizations related to limnology and other aspects of the aquatic science, including the Association for the Sciences of Limnology and Oceanography, the Asociación Ibérica de Limnología, the International Society of Limnology, the Polish Limnological Society, the Society of Canadian Limnologists, and the Freshwater Biological Association.
See also
References
Further reading
Gerald A. Cole, Textbook of Limnology, 4th ed. (Waveland Press, 1994)
Stanley Dodson, Introduction to Limnology (2005),
A.J.Horne and C.R. Goldman: Limnology (1994),
G. E. Hutchinson, A Treatise on Limnology, 3 vols. (1957–1975) - classic but dated
H.B.N. Hynes, The Ecology of Running Waters (1970)
Jacob Kalff, Limnology (Prentice Hall, 2001)
B. Moss, Ecology of Fresh Waters (Blackwell, 1998)
Robert G. Wetzel and Gene E. Likens, Limnological Analyses, 3rd ed. (Springer-Verlag, 2000)
Patrick E. O'Sullivan and Colin S. Reynolds The Lakes Handbook: Limnology and limnetic ecology
Hydrography
Lakes
Rivers
Systems ecology
Aquatic ecology
Water
Adenine
Adenine (symbol A or Ade) is a purine nucleotide base. It is one of the four nucleobases in the nucleic acids of DNA, the other three being guanine (G), cytosine (C), and thymine (T). Adenine derivatives have various roles in biochemistry including cellular respiration, in the form of both the energy-rich adenosine triphosphate (ATP) and the cofactors nicotinamide adenine dinucleotide (NAD), flavin adenine dinucleotide (FAD) and Coenzyme A. It also has functions in protein synthesis and as a chemical component of DNA and RNA. The shape of adenine is complementary to either thymine in DNA or uracil in RNA.
Pure adenine can exist as an independent molecule. When connected into DNA, a covalent bond is formed between the deoxyribose sugar and N9, one of adenine's ring nitrogen atoms (thereby removing the existing hydrogen atom). The remaining structure is called an adenine residue, as part of a larger molecule. Adenosine is adenine reacted with ribose, as used in RNA and ATP; deoxyadenosine is adenine attached to deoxyribose, as used to form DNA.
Structure
Adenine forms several tautomers, compounds that can be rapidly interconverted and are often considered equivalent. However, in isolated conditions, i.e. in an inert gas matrix and in the gas phase, mainly the 9H-adenine tautomer is found.
Biosynthesis
Purine metabolism involves the formation of adenine and guanine. Both adenine and guanine are derived from the nucleotide inosine monophosphate (IMP), which in turn is synthesized from a pre-existing ribose phosphate through a complex pathway using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as the coenzyme tetrahydrofolate.
Manufacturing method
Patented Aug. 20, 1968, the currently recognized method of industrial-scale production of adenine is a modified form of the formamide method. In this method, formamide is heated at 120 °C in a sealed flask for 5 hours to form adenine. The yield is greatly increased by using phosphorus oxychloride (phosphoryl chloride) or phosphorus pentachloride as an acid catalyst and by carrying out the reaction under sunlight or ultraviolet light. After the 5 hours have passed and the formamide–phosphorus oxychloride–adenine solution cools down, water is added to the flask containing the formamide and the newly formed adenine. The water–formamide–adenine solution is then poured through a filtering column of activated charcoal. The water and formamide molecules, being small, pass through the charcoal and into the waste flask; the large adenine molecules, however, attach or "adsorb" to the charcoal due to the van der Waals forces between the adenine and the carbon in the charcoal. Because charcoal has a large surface area, it is able to capture the majority of molecules above a certain size (greater than water and formamide). To recover the adenine from the charcoal, ammonia gas dissolved in water (aqua ammonia) is poured onto the activated charcoal–adenine material to liberate the adenine into the ammonia–water solution. The solution containing water, ammonia, and adenine is then left to air dry; the adenine loses solubility as the ammonia gas that previously made the solution basic (and capable of dissolving adenine) escapes, causing it to crystallize into a pure white powder that can be stored.
Function
Adenine is one of the two purine nucleobases (the other being guanine) used in forming nucleotides of the nucleic acids. In DNA, adenine binds to thymine via two hydrogen bonds to assist in stabilizing the nucleic acid structures. In RNA, which is used for protein synthesis, adenine binds to uracil.
Adenine forms adenosine, a nucleoside, when attached to ribose, and deoxyadenosine when attached to deoxyribose. It forms adenosine triphosphate (ATP), a nucleoside triphosphate, when three phosphate groups are added to adenosine. Adenosine triphosphate is used in cellular metabolism as one of the basic methods of transferring chemical energy between chemical reactions. ATP is thus a derivative of adenine, adenosine, cyclic adenosine monophosphate, and adenosine diphosphate.
Structures: adenosine (A) and deoxyadenosine (dA).
History
In older literature, adenine was sometimes called Vitamin B4. Because it is synthesized by the body and is not essential to obtain from the diet, it does not meet the definition of a vitamin and is no longer part of the vitamin B complex. However, two B vitamins, niacin and riboflavin, bind with adenine to form the essential cofactors nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD), respectively. Hermann Emil Fischer was one of the early scientists to study adenine.
It was named in 1885 by Albrecht Kossel after Greek ἀδήν aden "gland", in reference to the pancreas, from which Kossel's sample had been extracted.
Experiments performed in 1961 by Joan Oró have shown that a large quantity of adenine can be synthesized from the polymerization of ammonia with five hydrogen cyanide (HCN) molecules in aqueous solution; whether this has implications for the origin of life on Earth is under debate.
On August 8, 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA and RNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space. In 2011, physicists reported that adenine has an "unexpectedly variable range of ionization energies along its reaction pathways" which suggested that "understanding experimental data on how adenine survives exposure to UV light is much more complicated than previously thought"; these findings have implications for spectroscopic measurements of heterocyclic compounds, according to one report.
References
External links
Vitamin B4 MS Spectrum
Nucleobases
Purines
Vitamins
WebQuest
A WebQuest is an inquiry-oriented lesson format in which most or all the information that learners work with comes from the web. These can be created using various programs, including a simple word processing document that includes links to websites.
Distinguishing characteristics
A WebQuest is distinguished from other Internet-based research by four characteristics. First, it is classroom-based. Second, it emphasizes higher-order thinking (such as analysis, creativity, or criticism) rather than just acquiring information. And third, the teacher preselects the sources, emphasizing information use rather than information gathering. Finally, though solo WebQuests are not unknown, most WebQuests are group work with the task frequently being split into roles.
Structure
A WebQuest has 6 essential parts: introduction, task, process, resources, evaluation, and conclusion. The original paper on WebQuests had a component called guidance instead of evaluation.
Task
The task is the formal description of what the students will produce in the WebQuest. The task should be meaningful and fun. Creating the task is the most difficult and creative part of developing a WebQuest.
Process
The steps the students should take to accomplish the task. It is frequently profitable to reinforce the written process with some demonstrations.
Resources
The resources the students should use. Providing these helps focus the exercise on processing information rather than just locating it. Though the instructor may search for the online resources as a separate step, it is good to incorporate them as links within the process section where they will be needed rather than just including them as a long list elsewhere. Having off-line resources like visiting lecturers and sculptures can contribute greatly to the interest of the students.
Evaluation
The way in which the students' performance will be evaluated. The standards should be fair, clear, consistent, and specific to the tasks set.
Conclusion
Time set aside for reflection and discussion of possible extensions.
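For authors who prefer to start from a blank page, the short script below generates a plain-text outline containing the six parts described above; the section names follow this article, while the prompts inside each section are only illustrative placeholders.

```python
# The six essential parts of a WebQuest, with illustrative placeholder prompts.
SECTIONS = {
    "Introduction": "Hook the learners and set the scene.",
    "Task": "Describe the product the students will create.",
    "Process": "List the steps, roles, and demonstrations.",
    "Resources": "Link the pre-selected web (and offline) sources.",
    "Evaluation": "State the rubric: fair, clear, consistent, specific.",
    "Conclusion": "Prompt reflection and possible extensions.",
}

def webquest_outline(title: str) -> str:
    """Build a plain-text WebQuest skeleton with the six standard sections."""
    lines = [title, "=" * len(title), ""]
    for name, prompt in SECTIONS.items():
        lines += [name, "-" * len(name), prompt, ""]
    return "\n".join(lines)

print(webquest_outline("Exploring the Water Cycle"))
```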
Use in education
Webquests can be a valuable addition to a collaborative classroom. One of the goals is to increase critical thinking by employing higher levels of Bloom’s Taxonomy and Webb’s Depth of Knowledge. This is a goal of the American educational system's Common Core and many new American state standards for public education. Since most webquests are done in small collaborative groups, they can foster cooperative learning and collaborative activities. Students will often be assigned roles, allowing them to roleplay in different positions, and learn how to deal with conflict within the group.
Webquests can be a versatile tool for teaching students. They can be used to introduce new knowledge, to deepen knowledge, or to allow students to test hypotheses as part of a final interaction with knowledge. The integration of computers and the Internet also increase students’ competency with technology. By having specific task lists, students can stay on task. By having specific sources of information, students can focus on using resources to answer questions rather than vetting resources to use which is a different skill altogether.
In inclusive classrooms (classrooms that have students of varying exceptionalities interacting such as learning disabled, language impaired, or giftedness) tasks can be differentiated to a skill level or collaborative groups for the same level of task. A skill level may have students with learning disabilities working on a basic task to meet the minimum standard of learning skills and gifted students pushing their task to the higher end of the learning skill. More commonly, groups are composed of learners of all skill levels and completing the same level of task. This is typically easier because the teacher is only creating one webquest, but can cause less student interaction from lower students and less learning from higher students.
Limitations of WebQuests
WebQuests are only one tool in a teacher's toolboxes. They are not appropriate to every learning goal. In particular, they are weak in teaching factual total recall, simple procedures, and definitions.
WebQuests also usually require good reading skills, so are not appropriate to the youngest classrooms or to students with language and reading difficulties without accommodations. One might ask an adult to assist with the reading or use screen-reading technologies, such as VoiceOver or Jaws.
How WebQuests are developed
Learners typically complete WebQuests as cooperative groups. Each learner within a group can be given a "role," or specific area to research. WebQuests may take the form of role-playing scenarios, where students take on the personas of professional researchers or historical figures.
A teacher can search for WebQuests on a particular topic or they can develop their own using a web editor like Microsoft FrontPage or Adobe Dreamweaver. These tools allow learners to complete various tasks using other cognitive tools (e.g. Microsoft Word, PowerPoint, Access, Excel, and Publisher). With the focus of education increasingly being turned to differentiated instruction, teachers are using WebQuests more frequently. WebQuests also help to address the different learning styles of individual students. The range of activities associated with a WebQuest can engage almost any student.
WebQuests may be created by anyone; typically they are developed by educators. The first part of a WebQuest is the introduction. This describes the WebQuest and gives the purpose of the activity. The next part describes what students will do. Then there is a list of what to do and how to do it. There is usually a list of links to follow to complete the activity.
Finally, WebQuests do not have to be developed as a true web site. They may be developed and implemented using lower threshold (less demanding) technologies, (e.g. they may be saved as a word document on a local computer).
Many Webquests are being developed by college students across the United States as a requirement for their k-12 planning e-portfolio.
Developments in WebQuest methodologies
The WebQuest methodology has been transferred to language learning in the 3D virtual world Second Life to create a more immersive and interactive experience.
Tools
WebQuests are simple webpages, and they can be built with any software that allows you to create websites. Tech-savvy users can develop HTML in Notepad or Notepad++, while others will want to use the templates available in word processing suites like Microsoft Word and OpenOffice. More advanced web development software, like Dreamweaver and FrontPage, will give you the most control over the design of your webquest. Webquest templates allow educators to get a jump start on the development of WebQuest by providing a pre-designed format which generally can be easily edited. These templates are categorized as "Framed" or "Unframed," and they can have a navigation bar at the top, bottom, left, or right of the content.
There are several websites that are specifically geared towards creating webquests. Questgarden, Zunal, and Teacherweb all allow teachers to create accounts, and these websites walk them through the process of creating a webquest. OpenWebQuest is a similar service, although it is based in Greece and much of the website is in Greek. These websites offer little control over design, but they make the creation process very simple and straightforward.
Alternatively, teachers can use one of a number of free website services to create their own website and structure it as a webquest. Wordpress and Edublogs both allow users to create free blogs, and navigation menus can be created to string a series of pages into a webquest. This option offers a greater deal of flexibility than pre-made webquests, but it requires a little more technical know-how.
References
Further reading
Dodge, B. (1995a). "Some thoughts about Webquests". retrieved November 16, 2007 from About WebQuests at webquest.sdsu.edu.
Dodge, B. (1995b). "WebQuests: A technique for Internet-based learning". Distance Educator, 1(2), 10–13.
Further reading
WebQuest.org, Bernie Dodge's WebQuest site.
OpenWebQuest platform, Open source webquest platform (in Greek).
Questgarden.com, QuestGarden, by Bernie Dodge.
Create a WebQuest at createwebquest.com.
eric.ed.gov, education search engine.
WebQuest at Discovery School website
Online Webquest Generator developed by University of Alicante.
MHSebQuests at eduscapes.com.
HSWebQuest at aacps.org.
Zunal.Com, Zunal Free WebQuest Application and Hosting, by Zafer Unal.
Webquest.es, Free WebQuest Application and Hosting with drupal, by Silvia Martinez.
Webkwestie.nl, Dé Nederlandsetalige website voor WebQuests, opgezet door John Demmers.
Educational technology
San Diego State University
Acylation
In chemistry, acylation is a broad class of chemical reactions in which an acyl group is added to a substrate. The compound providing the acyl group is called the acylating agent. Substrates that can be acylated, and the corresponding products, include the following:
alcohols, which give esters
amines, which give amides
arenes or alkenes, which give ketones
A particularly common type of acylation is acetylation, the addition of the acetyl group. Closely related to acylation is formylation, which employs sources of "HCO+" in place of "RCO+".
Examples
Because they form a strong electrophile when treated with Lewis acids, acyl halides are commonly used as acylating agents. For example, Friedel–Crafts acylation uses acetyl chloride as the agent and aluminum chloride as a catalyst to add an acetyl group to benzene:
This reaction is an example of electrophilic aromatic substitution.
Acyl halides and acid anhydrides of carboxylic acids are also common acylating agents. In some cases, active esters exhibit comparable reactivity. All react with amines to form amides and with alcohols to form esters by nucleophilic acyl substitution.
Acylation can be used to prevent rearrangement reactions that would normally occur in alkylation. To do this an acylation reaction is performed, then the carbonyl is removed by Clemmensen reduction or a similar process.
Acylation in biology
Protein acylation is the post-translational modification of proteins via the attachment of functional groups through acyl linkages. Protein acylation has been observed as a mechanism controlling biological signaling. One prominent type is fatty acylation, the addition of fatty acids to particular amino acids (e.g. myristoylation, palmitoylation or palmitoleoylation). Different types of fatty acids engage in global protein acylation. Palmitoleoylation is an acylation type where the monounsaturated fatty acid palmitoleic acid is covalently attached to serine or threonine residues of proteins. Palmitoleoylation appears to play a significant role in the trafficking, targeting, and function of Wnt proteins.
See also
Hydroacylation
Acetyl
Ketene
References
Organic reactions
Carboxylation
Carboxylation is a chemical reaction in which a carboxylic acid is produced by treating a substrate with carbon dioxide. The opposite reaction is decarboxylation. In chemistry, the term carbonation is sometimes used synonymously with carboxylation, especially when applied to the reaction of carbanionic reagents with CO2. More generally, carbonation usually describes the production of carbonates.
Organic chemistry
Carboxylation is a standard conversion in organic chemistry. Specifically carbonation (i.e. carboxylation) of Grignard reagents and organolithium compounds is a classic way to convert organic halides into carboxylic acids.
Sodium salicylate, precursor to aspirin, is commercially prepared by treating sodium phenolate (the sodium salt of phenol) with carbon dioxide at high pressure (100 atm) and high temperature (390 K) – a method known as the Kolbe-Schmitt reaction. Acidification of the resulting salicylate salt gives salicylic acid.
Many detailed procedures are described in the journal Organic Syntheses.
Carboxylation catalysts include N-Heterocyclic carbenes and catalysts based on silver.
Carboxylation in biochemistry
Carbon-based life originates from carboxylation that couples atmospheric carbon dioxide to a sugar. The process is usually catalysed by the enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO), which is possibly the single most abundant protein on Earth.
Many carboxylases, including Acetyl-CoA carboxylase, Methylcrotonyl-CoA carboxylase, Propionyl-CoA carboxylase, and Pyruvate carboxylase require biotin as a cofactor. These enzymes are involved in various biogenic pathways. In the EC scheme, such carboxylases are classed under EC 6.3.4, "Other Carbon—Nitrogen Ligases".
Another example is the posttranslational modification of glutamate residues, to γ-carboxyglutamate, in proteins. It occurs primarily in proteins involved in the blood clotting cascade, specifically factors II, VII, IX, and X, protein C, and protein S, and also in some bone proteins. This modification is required for these proteins to function. Carboxylation occurs in the liver and is performed by γ-glutamyl carboxylase (GGCX). GGCX requires vitamin K as a cofactor and performs the reaction in a processive manner. γ-carboxyglutamate binds calcium, which is essential for its activity. For example, in prothrombin, calcium binding allows the protein to associate with the plasma membrane in platelets, bringing it into close proximity with the proteins that cleave prothrombin to active thrombin after injury.
See also
Decarboxylation
Carboxy-lyases
References
Organic reactions
Post-translational modification
Molecular entity
In chemistry and physics, a molecular entity, or chemical entity, is "any constitutionally or isotopically distinct atom, molecule, ion, ion pair, radical, radical ion, complex, conformer, etc., identifiable as a separately distinguishable entity". A molecular entity is any singular entity, irrespective of its nature, used to concisely express any type of chemical particle that can exemplify some process: for example, atoms, molecules, ions, etc. can all undergo a chemical reaction.
Chemical species is the macroscopic equivalent of molecular entity and refers to sets or ensembles of molecular entities.
According to IUPAC, "The degree of precision necessary to describe a molecular entity depends on the context. For example 'hydrogen molecule' is an adequate definition of a certain molecular entity for some purposes, whereas for others it is necessary to distinguish the electronic state and/or vibrational state and/or nuclear spin, etc. of the hydrogen molecule."
See also
New chemical entity
Chemical Entities of Biological Interest
Notes and references
Chemical nomenclature
Data and information visualization
Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization). When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentational or explanatory visualization), it is typically called information graphics.
Data visualization is concerned with visually presenting sets of primarily quantitative raw data in a schematic form. The visual formats used in data visualization include tables, charts and graphs (e.g. pie charts, bar charts, line charts, area charts, cone charts, pyramid charts, donut charts, histograms, spectrograms, cohort charts, waterfall charts, funnel charts, bullet graphs, etc.), diagrams, plots (e.g. scatter plots, distribution plots, box-and-whisker plots), geospatial maps (such as proportional symbol maps, choropleth maps, isopleth maps and heat maps), figures, correlation matrices, percentage gauges, etc., which sometimes can be combined in a dashboard.
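As a small, concrete example of such a chart, the sketch below draws a labelled bar chart with the widely used matplotlib library (any plotting library would do; the category names and values are invented for illustration).

```python
import matplotlib.pyplot as plt

# Invented example data: responses per category in a small survey.
categories = ["A", "B", "C", "D"]
values = [23, 45, 12, 30]

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(categories, values, color="steelblue")
ax.set_title("Survey responses by category")   # titles and axis labels aid understanding
ax.set_xlabel("Category")
ax.set_ylabel("Number of responses")
fig.tight_layout()
plt.show()
```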
Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization include maps (such as tree maps), animations, infographics, Sankey diagrams, flow charts, network diagrams, semantic networks, entity-relationship diagrams, venn diagrams, timelines, mind maps, etc.
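For the more abstract, relational data typical of information visualization, a network diagram is a common choice; the sketch below draws a tiny example with the networkx and matplotlib libraries (the nodes and edges are invented).

```python
import matplotlib.pyplot as plt
import networkx as nx

# A tiny invented collaboration network.
G = nx.Graph()
G.add_edges_from([
    ("Alice", "Bob"), ("Alice", "Carol"), ("Bob", "Carol"),
    ("Carol", "Dave"), ("Dave", "Erin"),
])

pos = nx.spring_layout(G, seed=42)              # force-directed layout
nx.draw(G, pos, with_labels=True, node_color="lightsteelblue", edge_color="gray")
plt.title("A small network diagram")
plt.show()
```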
Emerging technologies like virtual, augmented and mixed reality have the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user's visual perception and cognition. In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected from databases, information systems, file systems, documents, business and financial data, etc. (presentational and exploratory visualization) which is different from the field of scientific visualization, where the goal is to render realistic images based on physical and spatial scientific data to confirm or reject hypotheses (confirmatory visualization).
Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion. Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider non-technical audience in a visually appealing, engaging and accessible manner, but also by domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research. In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models. In business, data and information visualization can constitute a part of data storytelling, where they are paired with a coherent narrative structure or storyline to contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience to make a decision or take an action in order to create business value. This can be contrasted with the field of statistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them perform exploratory data analysis or to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important.
The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines of descriptive statistics (as early as the 18th century), visual communication, graphic design, cognitive science and, more recently, interactive computer graphics and human-computer interaction. Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science. The neighboring field of visual analytics marries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information. On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminate misinformation, manipulate public perception and divert public opinion toward a certain agenda. Thus data visualization literacy has become an important component of data and information literacy in the information age akin to the roles played by textual, mathematical and visual literacy in the past.
Overview
The field of data and information visualization has emerged "from research in human–computer interaction, computer science, graphics, visual design, psychology, and business methods. It is increasingly applied as a critical component in scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, and drug discovery".
Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."
Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.), statistics (hypothesis test, regression, PCA, etc.), data mining (association mining, etc.), and machine learning methods (clustering, classification, decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and is typically followed by more analytical or formal analysis, such as statistical hypothesis testing.
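As an illustration of this workflow (visual exploration suggesting a hypothesis that is then checked formally), the following is a minimal, non-authoritative sketch in Python. It assumes the NumPy, matplotlib and SciPy packages are available; the variables and data are synthetic placeholders rather than a real dataset.

```python
# A minimal sketch: explore a relationship visually, then follow up with a
# formal statistical test. Assumes NumPy, matplotlib and SciPy are installed;
# the data below are synthetic placeholders, not real observations.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
advertising = rng.uniform(10, 100, size=60)              # hypothetical predictor
sales = 3.0 * advertising + rng.normal(0, 40, size=60)   # hypothetical response

# Step 1: visual exploration. Does the scatter suggest a relationship?
plt.scatter(advertising, sales)
plt.xlabel("Advertising spend")
plt.ylabel("Sales")
plt.title("Exploratory scatter plot")
plt.show()

# Step 2: formal follow-up. Test the hypothesis the plot suggested.
r, p_value = stats.pearsonr(advertising, sales)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```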
To communicate information clearly and efficiently, data visualization uses statistical graphics, plots, information graphics and other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable, and usable, but can also be reductive. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps in data analysis or data science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".
Indeed, Fernanda Viegas and Martin M. Wattenberg suggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.
Data visualization is closely related to information graphics, information visualization, scientific visualization, exploratory data analysis and statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.
In the commercial environment data visualization is often referred to as dashboards. Infographics are another very common form of data visualization.
Principles
Characteristics of effective graphical displays
Edward Tufte has explained that users of information displays are executing particular analytical tasks such as making comparisons. The design principle of the information graphic should support the analytical task. As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.
In his 1983 book The Visual Display of Quantitative Information, Edward Tufte defines 'graphical displays' and principles for effective graphical display in the following passage:
"Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should:
show the data
induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
avoid distorting what the data has to say
present many numbers in a small space
make large data sets coherent
encourage the eye to compare different pieces of data
reveal the data at several levels of detail, from a broad overview to the fine structure
serve a reasonably clear purpose: description, exploration, tabulation, or decoration
be closely integrated with the statistical and verbal descriptions of a data set.
Graphics reveal data. Indeed, graphics can be more precise and revealing than conventional statistical computations."
For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."
Not applying these principles may result in misleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte, chartjunk refers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.
The Congressional Budget Office summarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report.
Quantitative messages
Author Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message:
Time-series: A single variable is captured over a period of time, such as the unemployment rate or temperature measures over a 10-year period. A line chart may be used to demonstrate the trend over time.
Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by sales persons (the category, with each sales person a categorical subdivision) during a single period. A bar chart may be used to show the comparison across the sales persons.
Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market.
Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show comparison of the actual versus the reference amount.
Frequency distribution: Shows the number of observations of a particular variable for given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis. A boxplot helps visualize key statistics about the distribution, such as median, quartiles, outliers, etc.
Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message.
Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison.
Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used.
Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part of exploratory data analysis.
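The sketch below is a minimal illustration of how two of the message types above (a time series and a frequency distribution) might be rendered in Python; it assumes NumPy and matplotlib are installed and uses synthetic placeholder data rather than real measurements.

```python
# A minimal sketch of two of Few's message types using synthetic data:
# a time series rendered as a line chart and a frequency distribution
# rendered as a histogram. Assumes NumPy and matplotlib are installed.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(2005, 2025)
unemployment = 5 + np.cumsum(rng.normal(0, 0.4, size=years.size))  # placeholder series
returns = rng.normal(8, 15, size=200)                              # placeholder yearly returns (%)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Time-series message: one variable captured over time.
ax1.plot(years, unemployment)
ax1.set_title("Time series (line chart)")
ax1.set_xlabel("Year")
ax1.set_ylabel("Unemployment rate (%)")

# Frequency-distribution message: counts of observations per interval.
ax2.hist(returns, bins=10)
ax2.set_title("Frequency distribution (histogram)")
ax2.set_xlabel("Stock market return (%)")
ax2.set_ylabel("Number of years")

plt.tight_layout()
plt.show()
```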
Visual perception and data visualization
A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.
Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).
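A minimal sketch of this comparison, assuming matplotlib is installed and using hypothetical market-share figures, renders the same values once as a bar chart (length encoding) and once as a pie chart (area and angle encoding):

```python
# A minimal sketch contrasting length encoding (bar chart) with area/angle
# encoding (pie chart) for the same hypothetical market-share data.
# Assumes matplotlib is installed.
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D"]          # hypothetical competitors
share = [28, 25, 24, 23]               # similar values are hard to rank by area

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

ax1.bar(labels, share)                 # differences in bar length are easy to compare
ax1.set_title("Bar chart (length encoding)")
ax1.set_ylabel("Market share (%)")

ax2.pie(share, labels=labels)          # nearly equal wedges are hard to compare
ax2.set_title("Pie chart (area/angle encoding)")

plt.tight_layout()
plt.show()
```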
Human perception/cognition and data visualization
Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations. Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving. Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means of data exploration.
Studies have shown that individuals used on average 19% fewer cognitive resources, and were 4.5% better able to recall details, when using data visualization compared with text.
History
The modern study of visualization started with computer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization in Scientific Computing. Since then there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH". They have been devoted to the general topics of data visualization, information visualization and scientific visualization, and more specific areas such as volume visualization.
In 1786, William Playfair published the first presentation graphics.
There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines. Michael Friendly and Daniel J Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Stellar data, or information such as the location of stars, have been visualized on the walls of caves (such as those found in Lascaux Cave in Southern France) since the Pleistocene era. Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information.
The first documented data visualization can be traced back to 1160 B.C. with the Turin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about quarrying of those resources. Such maps can be categorized as thematic cartography, which is a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, and ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example, Linear B tablets of Mycenae provided a visualization of information regarding Late Bronze Age era trades in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns, earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC, and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy in Alexandria would serve as reference standards until the 14th century.
The invention of paper and parchment allowed further development of visualizations throughout history. A graph from the 10th or possibly 11th century, intended as an illustration of planetary movement, was used in an appendix of a textbook in monastery schools. The graph apparently was meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, for the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time.
By the 16th century, techniques and instruments for precise observation and measurement of physical quantities, and geographic and celestial position were well-developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately. Very early, the measure of time led scholars to develop innovative ways of visualizing the data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596).
The French philosopher and mathematician René Descartes and Pierre de Fermat developed analytic geometry and the two-dimensional coordinate system, which heavily influenced the practical methods of displaying and calculating values. Fermat and Blaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data. According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics. In the second half of the 20th century, Jacques Bertin used quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".
John Tukey and Edward Tufte pushed the bounds of data visualization; Tukey with his new statistical approach of exploratory data analysis and Tufte with his book "The Visual Display of Quantitative Information" paved the way for refining data visualization techniques for audiences beyond statisticians. With the progression of technology came the progression of data visualization; starting with hand-drawn visualizations and evolving into more technical applications – including interactive designs leading to software visualization.
Programs like SAS, SOFA, R, Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other, more specialized data visualization tools, such as the D3.js library and the programming languages Python and JavaScript, help to make the visualization of quantitative data possible. Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs like The Data Incubator or paid programs like General Assembly.
Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization. The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.
Terminology
Data visualization involves specific terminology, some of which is derived from statistics. For example, author Stephen Few defines two types of data, which are used in combination to support a meaningful analysis or visualization:
Categorical: Represent groups of objects with a particular characteristic. Categorical variables can be either nominal or ordinal. Nominal variables, for example gender, have no inherent order between their categories. Ordinal variables are categories with an order, for example the age group someone falls into.
Quantitative: Represent measurements, such as the height of a person or the temperature of an environment. Quantitative variables can be either continuous or discrete. Continuous variables capture the idea that measurements can always be made more precisely, while discrete variables have only a finite number of possibilities, such as a count of some outcomes or an age measured in whole years.
The distinction between quantitative and categorical variables is important because the two types require different methods of visualization.
Two primary types of information displays are tables and graphs.
A table contains quantitative data organized into rows and columns with categorical labels. It is primarily used to look up specific values. For example, a table might have categorical column labels representing the name (a qualitative variable) and age (a quantitative variable), with each row of data representing one person (the sampled experimental unit or category subdivision).
A graph is primarily used to show relationships among data and portrays values encoded as visual objects (e.g., lines, bars, or points). Numerical values are displayed within an area delineated by one or more axes. These axes provide scales (quantitative and categorical) used to label and assign values to the visual objects. Many graphs are also referred to as charts.
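The following minimal sketch illustrates the table/graph distinction in Python, assuming the pandas and matplotlib packages are available; the names and ages are hypothetical.

```python
# A minimal sketch of the table/graph distinction: the same small dataset
# shown as a table (for looking up values) and as a bar chart (for comparing
# values visually). Assumes pandas and matplotlib are installed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "name": ["Ana", "Ben", "Carla", "Dev"],   # categorical (nominal) variable
    "age": [34, 29, 41, 37],                  # quantitative (discrete) variable
})

print(df.to_string(index=False))              # table: look up a specific value

df.plot.bar(x="name", y="age", legend=False)  # graph: compare values visually
plt.ylabel("Age (years)")
plt.tight_layout()
plt.show()
```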
Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound. In "Visualization Analysis and Design" Tamara Munzner writes "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."
Techniques
Other techniques
Cartogram
Cladogram (phylogeny)
Concept Mapping
Dendrogram (classification)
Information visualization reference model
Grand tour
Graph drawing
Heatmap
HyperbolicTree
Multidimensional scaling
Parallel coordinates
Problem solving environment
Treemapping
Interactivity
Interactive data visualization enables direct actions on a graphical plot to change elements and link between multiple plots.
Interactive data visualization has been a pursuit of statisticians since the late 1960s. Examples of the developments can be found on the American Statistical Association video lending library.
Common interactions include the following (a minimal code sketch of linked brushing appears after this list):
Brushing: works by using the mouse to control a paintbrush, directly changing the color or glyph of elements of a plot. The paintbrush is sometimes a pointer and sometimes works by drawing an outline of sorts around points; the outline is sometimes irregularly shaped, like a lasso. Brushing is most commonly used when multiple plots are visible and some linking mechanism exists between the plots. There are several different conceptual models for brushing and a number of common linking mechanisms. Brushing scatterplots can be a transient operation, in which points in the active plot retain their new characteristics only while they are enclosed or intersected by the brush, or it can be a persistent operation, so that points retain their new appearance after the brush has been moved away. Transient brushing is usually chosen for linked brushing, as we have just described.
Painting: Persistent brushing is useful when we want to group the points into clusters and then proceed to use other operations, such as the tour, to compare the groups. It is becoming common terminology to call the persistent operation painting.
Identification: which could also be called labeling or label brushing, is another plot manipulation that can be linked. Bringing the cursor near a point or edge in a scatterplot, or a bar in a barchart, causes a label to appear that identifies the plot element. It is widely available in many interactive graphics, and is sometimes called mouseover.
Scaling: maps the data onto the window, and changes in the area of the mapping function help us learn different things from the same plot. Scaling is commonly used to zoom in on crowded regions of a scatterplot, and it can also be used to change the aspect ratio of a plot, to reveal different features of the data.
Linking: connects elements selected in one plot with elements in another plot. The simplest kind of linking is one-to-one, where both plots show different projections of the same data, and a point in one plot corresponds to exactly one point in the other. When using area plots, brushing any part of an area has the same effect as brushing it all and is equivalent to selecting all cases in the corresponding category. Even when some plot elements represent more than one case, the underlying linking rule still links one case in one plot to the same case in other plots. Linking can also be by a categorical variable, such as a subject id, so that all data values corresponding to that subject are highlighted in all the visible plots.
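The following is a minimal sketch of brushing and linking, assuming the Altair (version 5) and vega_datasets Python packages are available; it is an illustration of the technique, not a definitive implementation. Dragging a rectangular brush over the scatter plot highlights the selected points and filters the linked bar chart.

```python
# A minimal sketch of brushing and linking, assuming Altair 5 and the
# vega_datasets sample-data package. Dragging a rectangular brush on the
# scatter plot highlights the selected points and filters the linked bar chart.
import altair as alt
from vega_datasets import data

cars = data.cars()
brush = alt.selection_interval()  # the rectangular "brush"

scatter = alt.Chart(cars).mark_point().encode(
    x="Horsepower:Q",
    y="Miles_per_Gallon:Q",
    color=alt.condition(brush, "Origin:N", alt.value("lightgray")),
).add_params(brush)

linked_bars = alt.Chart(cars).mark_bar().encode(
    y="Origin:N",
    x="count()",
    color="Origin:N",
).transform_filter(brush)

chart = scatter & linked_bars     # vertical concatenation; the selection is shared
chart.save("linked_brushing.html")
```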
Other perspectives
There are different approaches to the scope of data visualization. One common focus is on information presentation, as in Friedman (2008). Friendly (2008) presumes two main parts of data visualization: statistical graphics and thematic cartography. Along these lines, the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization:
Articles & resources
Displaying connections
Displaying data
Displaying news
Displaying websites
Mind maps
Tools and services
All these subjects are closely related to graphic design and information representation.
On the other hand, from a computer science perspective, Frits H. Post in 2002 categorized the field into sub-fields:
Information visualization
Interaction techniques and architectures
Modelling techniques
Multiresolution methods
Visualization algorithms and techniques
Volume visualization
Writing in the Harvard Business Review, Scott Berinato developed a framework for approaching data visualisation. To start thinking visually, users must consider two questions: 1) what you have and 2) what you're doing. The first step is identifying what data you want visualised; it may be data-driven, like profit over the past ten years, or a conceptual idea, like how a specific organisation is structured. Once this question is answered, one can then focus on whether one is trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Berinato combines these questions to give four types of visual communication that each have their own goals.
These four types of visual communication are as follows:
idea illustration (conceptual & declarative).
Used to teach, explain and/or simplify concepts. For example, organisation charts and decision trees.
idea generation (conceptual & exploratory).
Used to discover, innovate and solve problems. For example, a whiteboard after a brainstorming session.
visual discovery (data-driven & exploratory).
Used to spot trends and make sense of data. This type of visual is more common with large and complex data where the dataset is somewhat unknown and the task is open-ended.
everyday data-visualisation (data-driven & declarative).
The most common and simple type of visualisation used for affirming and setting context. For example, a line graph of GDP over time.
Applications
Data and information visualization insights are being applied in areas such as:
Scientific research
Digital libraries
Data mining
Information graphics
Financial data analysis
Health care
Market studies
Manufacturing production control
Crime mapping
eGovernance and Policy Modeling
Digital Humanities
Data Art
Organization
Notable academic and industry laboratories in the field are:
Adobe Research
IBM Research
Google Research
Microsoft Research
Panopticon Software
Scientific Computing and Imaging Institute
Tableau Software
University of Maryland Human-Computer Interaction Lab
Conferences in this field, ranked by significance in data visualization research, are:
IEEE Visualization: An annual international conference on scientific visualization, information visualization, and visual analytics. Conference is held in October.
ACM SIGGRAPH: An annual international conference on computer graphics, convened by the ACM SIGGRAPH organization. Conference dates vary.
Conference on Human Factors in Computing Systems (CHI): An annual international conference on human–computer interaction, hosted by ACM SIGCHI. Conference is usually held in April or May.
Eurographics: An annual Europe-wide computer graphics conference, held by the European Association for Computer Graphics. Conference is usually held in April or May.
For further examples, see: :Category:Computer graphics organizations
Data presentation architecture
Data presentation architecture (DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge.
Historically, the term data presentation architecture is attributed to Kelly Lautt: "Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value of Business Intelligence. Data presentation architecture weds the science of numbers, data and statistics in discovering valuable information from data and making it usable, relevant and actionable with the arts of data visualization, communications, organizational psychology and change management in order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
Objectives
DPA has two main objectives:
To use data to provide knowledge in the most efficient manner possible (minimize noise, complexity, and unnecessary data or detail given each audience's needs and roles)
To use data to provide knowledge in the most effective manner possible (provide relevant, timely and complete data to each audience member in a clear and understandable manner that conveys important meaning, is actionable and can affect understanding, behavior and decisions)
Scope
With the above objectives in mind, the actual work of data presentation architecture consists of:
Creating effective delivery mechanisms for each audience member depending on their role, tasks, locations and access to technology
Defining important meaning (relevant knowledge) that is needed by each audience member in each context
Determining the required periodicity of data updates (the currency of the data)
Determining the right timing for data presentation (when and how often the user needs to see the data)
Finding the right data (subject area, historical reach, breadth, level of detail, etc.)
Utilizing appropriate analysis, grouping, visualization, and other presentation formats
Related fields
DPA work shares commonalities with several other fields, including:
Business analysis in determining business goals, collecting requirements, mapping processes.
Business process improvement in that its goal is to improve and streamline actions and decisions in furtherance of business goals
Data visualization in that it uses well-established theories of visualization to add or highlight meaning or importance in data presentation.
Digital humanities explores more nuanced ways of visualising complex data.
Information architecture, but information architecture's focus is on unstructured data and therefore excludes both analysis (in the statistical/data sense) and direct transformation of the actual content (data, for DPA) into new entities and combinations.
HCI and interaction design, since many of the principles in how to design interactive data visualisation have been developed cross-disciplinary with HCI.
Visual journalism and data-driven journalism or data journalism: Visual journalism is concerned with all types of graphic facilitation of the telling of news stories, and data-driven and data journalism are not necessarily told with data visualisation. Nevertheless, the field of journalism is at the forefront in developing new data visualisations to communicate data.
Graphic design, conveying information through styling, typography, position, and other aesthetic concerns.
See also
Analytics
Big data
Climate change art
Color coding in data visualization
Computational visualistics
Information art
Data management
Data physicalization
Data Presentation Architecture
Data profiling
Data warehouse
Geovisualization
Grand Tour (data visualisation)
imc FAMOS (1987), graphical data analysis
Infographics
Information design
Information management
List of graphical methods
List of information graphics software
List of countries by economic complexity, example of Treemapping
Patent visualisation
Software visualization
Statistical analysis
Visual analytics
Warming stripes
Notes
References
Further reading
Kawa Nazemi (2014). Adaptive Semantics Visualization Eurographics Association.
Andreas Kerren, John T. Stasko, Jean-Daniel Fekete, and Chris North (2008). Information Visualization – Human-Centered Issues and Perspectives. Volume 4950 of LNCS State-of-the-Art Survey, Springer.
Spence, Robert Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, .
Jeffrey Heer, Stuart K. Card, James Landay (2005). "Prefuse: a toolkit for interactive information visualization" . In: ACM Human Factors in Computing Systems CHI 2005.
Ben Bederson and Ben Shneiderman (2003). The Craft of Information Visualization: Readings and Reflections. Morgan Kaufmann.
Colin Ware (2000). Information Visualization: Perception for design. Morgan Kaufmann.
Stuart K. Card, Jock D. Mackinlay and Ben Shneiderman (1999). Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann Publishers.
Schwabish, Jonathan A. 2014. "An Economist's Guide to Visualizing Data." Journal of Economic Perspectives, 28 (1): 209–34.
External links
Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization, An illustrated chronology of innovations by Michael Friendly and Daniel J. Denis.
Duke University-Christa Kelleher Presentation-Communicating through infographics-visualizing scientific & engineering information-March 6, 2015
Scientific method | The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. The scientific method involves careful observation coupled with rigorous scepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.
Although procedures vary between fields, the underlying process is often similar. The scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of the hypothesis, then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.
History
The history of scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. The development of rules for scientific reasoning has not been straightforward; scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge.
Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Epicurus, Alhazen, Avicenna, Al-Biruni, Roger Bacon, and William of Ockham.
In the scientific revolution of the 16th and 17th centuries some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke, the rationalist approach described by René Descartes, and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon, and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. There was particular development aided by theoretical works by the skeptic Francisco Sanches, and by idealists as well as empiricists such as John Locke, George Berkeley, and David Hume. C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since.
The term "scientific method" emerged in the 19th century as a result of the significant institutional development of science, when terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience", appeared. Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel and John Stuart Mill engaged in debates over "induction" and "facts" and were focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was conducted as powerful scientific theories extended beyond the realm of the observable.
Modern use and critical thought
The term "scientific method" came into popular use in the twentieth century; Dewey's 1910 book, How We Think, inspired popular guidelines, appearing in dictionaries and science textbooks, although there was little consensus over its meaning. Although use of the term grew through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science such as Thomas Kuhn and Paul Feyerabend had questioned the universality of the "scientific method" and in doing so largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science; Karl Popper, and Gauch 2003, disagree with Feyerabend's claim.
Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method", in which he espouses two ethical principles, and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. As myths are beliefs, they are subject to the narrative fallacy as Taleb points out. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta methodology.
Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples". But algorithmic methods, such as disproof of an existing theory by experiment, have been used since Alhacen (1027) and his Book of Optics, and Galileo (1638) and his Two New Sciences, and The Assayer, which still stand as scientific method.
Elements of inquiry
Overview
The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time. This model can be seen to underlie the scientific revolution.
The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct. However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order.
Factors of scientific inquiry
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.

The scientific method is an iterative, cyclical process through which information is continually revised. It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:
Characterizations (observations, definitions, and measurements of the subject of inquiry)
Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
Predictions (inductive and deductive reasoning from the hypothesis or theory)
Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method".
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.
An iterative, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again.
While this schema outlines a typical hypothesis/testing method, many philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.
Characterizations
The basic elements of the scientific method are illustrated by the following example, from the discovery of the structure of DNA, which occurred from 1944 to 1953.
In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, which can take the form of expansive empirical research.
A scientific question can refer to the explanation of a specific observation, as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
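As a minimal illustration of such statistical manipulations, the sketch below fits a simple linear regression and reports a correlation coefficient using SciPy; it assumes NumPy and SciPy are installed, and the "measurements" are synthetic placeholders rather than real data.

```python
# A minimal sketch of correlation and regression applied to tabulated
# measurements. Assumes NumPy and SciPy are installed; the measurements
# below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
temperature = np.linspace(10, 35, 30)                       # hypothetical measurements (deg C)
reaction_rate = 0.8 * temperature + rng.normal(0, 2, 30)    # hypothetical response

result = stats.linregress(temperature, reaction_rate)
print(f"correlation r = {result.rvalue:.2f}")
print(f"fitted line: rate = {result.slope:.2f} * T + {result.intercept:.2f}")
print(f"p-value for a non-zero slope: {result.pvalue:.3g}")
```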
Definition
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.
Hypothesis development
Linus Pauling proposed that DNA might be a triple helix: "The structure that we propose is a three-chain structure, each chain being a helix" – Linus Pauling. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure.
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that accords with the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
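One common formalization of this rule of thumb penalizes model complexity explicitly. The sketch below, a non-authoritative illustration assuming only NumPy and using synthetic data, compares a simple and a needlessly complex fit with the Akaike information criterion (AIC), under which the lower score is preferred:

```python
# A minimal sketch of preferring the simpler of two adequately fitting models,
# using the Akaike information criterion (AIC = 2k + n*ln(RSS/n)) as one
# common formalization of parsimony. Assumes NumPy; the data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)  # the "truth" is linear plus noise

def aic(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss = np.sum(residuals**2)
    k = degree + 1                      # number of fitted parameters
    return 2 * k + x.size * np.log(rss / x.size)

print(f"AIC, linear fit (degree 1):  {aic(1):.1f}")
print(f"AIC, complex fit (degree 9): {aic(9):.1f}")
# The complex fit reduces residual error slightly but is penalized for its
# extra parameters, so the simpler hypothesis is typically preferred.
```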
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts.
Predictions from the hypothesis
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. (By June 1952, Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.) This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".

Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.
For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
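The quantitative content of that prediction can be stated compactly. The following is a standard textbook result, quoted here for illustration rather than drawn from the sources above: for light passing a body of mass M at impact parameter b,

```latex
\theta_{\mathrm{GR}} = \frac{4GM}{c^{2}b}, \qquad \theta_{\mathrm{Newtonian}} = \frac{2GM}{c^{2}b}
```

For a ray grazing the Sun these give roughly 1.75 and 0.87 arcseconds respectively; Eddington's 1919 measurements favoured the larger, relativistic value.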
Experiments
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction image, Photo 51, which showed an X-shape (see Cynthia Wolberger (2021), "Photograph 51 explained"), and was able to confirm the structure was helical: "The instant I saw the picture my mouth fell open and my pulse began to race." The X-shaped pattern of the B-form of DNA clearly indicated crucial details of its helical structure to Watson and Crick.
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
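As an illustrative sketch only (the variable names and numbers below are invented for the example), the logic of contrasting a treatment sample with a control sample can be expressed in a few lines of Python, here using a simple permutation test to ask whether the observed difference could plausibly arise by chance:

```python
import random

def permutation_test(treatment, control, n_perm=10_000, seed=0):
    """Estimate how often a difference in means at least as large as the
    observed one arises when group labels are shuffled at random."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        t = pooled[:len(treatment)]
        c = pooled[len(treatment):]
        diff = sum(t) / len(t) - sum(c) / len(c)
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm  # observed difference and permutation p-value

# Hypothetical measurements under two conditions; only the condition is varied.
treated = [5.1, 4.9, 5.3, 5.6, 5.0]
untreated = [4.2, 4.5, 4.1, 4.4, 4.3]
print(permutation_test(treated, untreated))
```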
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane.
The institutions that fund and host large-scale research thereby reduce the research function to a cost/benefit calculation, expressed as money, and the time and attention of the researchers to be expended, in exchange for a report to their constituents. Current large instruments, such as CERN's Large Hadron Collider (LHC), LIGO, the National Ignition Facility (NIF), the International Space Station (ISS), and the James Webb Space Telescope (JWST), entail expected costs of billions of dollars and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers involved require shared access to such machines and their adjunct infrastructure.
Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. Records also assist others in reproducing the experimental results. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the equinoxes, while controlled experiments can be seen in the works of al-Battani (853–929 CE) and Alhazen (965–1039 CE).
Communication and iteration
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing. After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, they resumed model-building. (On Sunday, February 8, 1953, Maurice Wilkins gave Watson and Crick permission to work on models, as Wilkins would not be building models until Franklin left DNA research.) Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. As Watson recalled, "Suddenly I became aware that an adenine-thymine pair held together by two hydrogen bonds was identical in shape to a guanine-cytosine pair held together by at least two hydrogen bonds." They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: by 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air". In 1079, Ibn Mu'adh, in his Treatise on Twilight, was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays.
This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
Confirmation
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.
If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated.
The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.
Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center.
Foundational principles
Honesty, openness, and falsifiability
The guiding principles of science are the pursuit of accuracy and the creed of honesty; openness, by contrast, is a matter of degree. Openness is restricted by the general rigour of scepticism, and by the question of what falls outside science altogether.
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry. His ideas stand in the context of the scale of data-driven and big science, which has seen the importance of honesty, and consequently of reproducibility, increase. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony.
Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong.
Theory's interactions with observation
Science has limits. Those limits usually concern questions that lie outside science's domain, such as matters of faith. Science has other limits as well, as it seeks to make true statements about reality. The nature of truth, and the discussion of how scientific statements relate to reality, is best left to the philosophy of science. More immediately topical limitations show themselves in the observation of reality.
It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework. As science is an unfinished project, this does lead to difficulties, namely that false conclusions may be drawn from limited information.
An example, used by Hanson to illustrate the concept, is the work of Kepler and Brahe. Despite observing the same sunrise, the two scientists came to different conclusions, their differing conceptual frameworks leading to differing interpretations. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate, the larger the aperture—this fact is now fundamental for optical system design. Another historic example is the discovery of Neptune, credited as being found via mathematics because previous observers did not know what they were looking at.
Empiricism, rationalism, and more pragmatic views
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations. It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial.
The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms of rationalism, which holds that knowledge is created by the human intellect; later clarified by Popper to be built on prior theory. The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth.
In 1877, C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless. This "hyperbolic doubt" Peirce argues against here is of course just another name for Cartesian doubt associated with René Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted.
A strong formulation of the scientific method is not always aligned with a form of empiricism in which the empirical data is put forward in the form of experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism.
Rationality
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours. The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences.
Beliefs and biases
Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.
The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).
A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.
Another important human bias that plays a role is a preference for new, surprising statements (see Appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.
Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed its elements become easier to believe.
notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.
Deductive and inductive reasoning
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative versus ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths; the other derives from those fundamental truths more specific principles.
Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.
An example of how inductive and deductive reasoning work can be found in the history of gravitational theory. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion in the Principia (first published in 1687).
Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference in Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did. Though today's Standard Model of physics suggests that we still do not fully understand at least some of the concepts surrounding Einstein's theory, it holds to this day and continues to be built on deductively.
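The size of that discrepancy can be stated as a worked equation (the standard relativistic result, not a derivation from the text above): for an orbit of semi-major axis a and eccentricity e around a mass M, general relativity predicts an extra perihelion advance per orbit of

\[
\Delta\phi = \frac{6\pi G M}{c^{2} a \left(1 - e^{2}\right)},
\]

which for Mercury accumulates to roughly 43 arcseconds per century, in close agreement with the anomaly Le Verrier had identified.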
A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning will get used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges.
This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure.
Certainty, probabilities, and statistical inference
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent. Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation — certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.
Measurements in scientific work are usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
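A minimal sketch of the first of these approaches, estimating a quantity and its uncertainty from repeated measurements (the numbers are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical repeated measurements of the same quantity, e.g. a length in mm.
measurements = [12.03, 11.98, 12.10, 12.05, 11.95, 12.07]

best_estimate = mean(measurements)
sample_sd = stdev(measurements)                        # spread of individual readings
standard_error = sample_sd / len(measurements) ** 0.5  # uncertainty of the mean itself

print(f"{best_estimate:.3f} +/- {standard_error:.3f}")
```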
In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified — and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances.
In statistical analysis, expected and unexpected bias is a large factor. Research questions, the collection of data, and the interpretation of results are all subject to greater scrutiny than in comfortably logical environments. Statistical models go through a process of validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in peer review, after all. More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context. Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology.
Lack of familiarity with statistical methodologies can result in erroneous conclusions. The interaction of multiple probabilities is one area where, for example, medical professionals have shown a lack of proper understanding. Bayes' theorem is the mathematical principle laying out how standing probabilities are adjusted given new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny.
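A minimal sketch of that kind of update, using the boy or girl paradox as the worked case (the enumeration assumes the usual idealisation of equally likely, independent sexes):

```python
from itertools import product
from fractions import Fraction

# All equally likely two-child families: BB, BG, GB, GG.
families = list(product("BG", repeat=2))

# Condition on the new information "at least one child is a girl".
consistent = [f for f in families if "G" in f]
p_two_girls = Fraction(sum(f == ("G", "G") for f in consistent), len(consistent))
print(p_two_girls)  # 1/3, not 1/2

# The same update written via Bayes' theorem:
# P(GG | >=1 girl) = P(>=1 girl | GG) * P(GG) / P(>=1 girl) = 1 * (1/4) / (3/4) = 1/3
```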
Beyond the commonly associated survey methodology of field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.
Methods of inquiry
Hypothetico-deductive method
The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as it is not the hypothesis but its implications that are tested. Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true, however, this does not prove the theory definitively.
The logic of this testing is what affords this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the subsequent tests show the implications to be false, it follows that the hypothesis was false also. If the tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, since the implication (A ⇒ B) does not allow one to infer A from B; only the contrapositive (¬B ⇒ ¬A) is valid logic. Positive outcomes, however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".
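Stated in standard notation (a restatement of the logic just described, with H a hypothesis and O an observable implication):

\[
\frac{H \Rightarrow O \qquad \neg O}{\neg H} \;\;\text{(modus tollens: a failed prediction refutes)}
\qquad\qquad
\frac{H \Rightarrow O \qquad O}{H} \;\;\text{(affirming the consequent: invalid)}
\]

A confirmed prediction therefore corroborates H without proving it.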
Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning—the search for the most plausible explanation via logical inference. This is the case, for example, in biology, where general laws are few and valid deductions rely on solid presuppositions.
Inductive method
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the hypothetico-deductive method, though, it was often put aside as something of a "fishing expedition". It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge arises after the collection of data through inductive reasoning.
Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".
The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are. That measure of certainty can, however, reach quite high degrees, for example in the determination of large primes, which are used in encryption software.
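The primality case can be made concrete with a probabilistic test such as Miller–Rabin (named here only as an illustration; the text above does not specify a particular test). Each round a number survives shrinks the chance that it is composite, so the inductive conclusion "n is prime" can be made as certain as desired without ever becoming a deductive proof. A minimal sketch:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: a composite n passes a single round with probability
    at most 1/4, so the error after `rounds` rounds is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # n is prime with very high probability

print(is_probable_prime(2**127 - 1))  # True: a known Mersenne prime
```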
Mathematical modelling
Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main components: simplification/abstraction, and a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality, that is, how truth is derived; and the simplifying steps taken in the abstraction of the given system are meant to remove factors that do not bear relevance and thereby reduce unexpected errors. These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken before the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below.
Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive—but they don't have to be. An example here are Monte-Carlo simulations. These generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.
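A minimal sketch of such a simulation, using the textbook example of estimating π from random points (chosen purely to illustrate the Monte-Carlo idea):

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Monte-Carlo estimate of pi: the fraction of random points in the unit
    square that fall inside the quarter circle approaches pi/4."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # ~3.14, with error shrinking roughly as 1/sqrt(samples)
```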
Scientific inquiry
Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combine explanations to produce new ones.
Properties of scientific inquiry
Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles.
Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.
Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
Heuristics
Confirmation theory
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: what criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias. Though different thinkers emphasize different aspects, a good theory:
is accurate (the trivial element);
is consistent, both internally and with other relevant currently accepted theories;
has explanatory power, meaning its consequences extend beyond the data it is required to explain;
has unificatory power, in that it organizes otherwise confused and isolated phenomena;
and is fruitful for further research.
In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to:
parsimony in causal explanations
and look for invariant observations.
Scientists will sometimes also list the very subjective criteria of "formal elegance" which can indicate multiple different things.
The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules. Also, as Bird notes, criteria such as these do not necessarily decide between alternative theories.
It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.
Parsimony
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the simplest explanation for the phenomena, or the simplest formulation of a theory, is recommended by the principle of parsimony. Scientists go as far as to call simple proofs of complex statements beautiful.
The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen in Paul Krugman's account of his own process, in which he makes it explicit to "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored", thus touching on the need to bridge the common bias against other circles of thought.
Elegance
Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.
Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetic" is hard to characterise, but is essentially about a sort of familiarity. Argument based on "elegance" is contentious, however, and over-reliance on familiarity will breed stagnation.
Invariance
Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century. The basic idea here is that good structures to look for are those independent of perspective, an idea that featured earlier, for example, in Mill's methods of difference and agreement—methods that would be referred back to in the context of contrast and invariance. But as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied. As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress".
An example can be found in one of Einstein's thought experiments, that of a lab suspended in empty space, which illustrates a useful invariant observation. Einstein imagined the absence of gravity and an experimenter free floating in the lab. If an entity now pulls the lab upwards, accelerating uniformly, the experimenter would perceive the resulting force as gravity; the entity, however, would feel the work needed to accelerate the lab continuously. Through this experiment Einstein was able to equate gravitational and inertial mass, something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity".
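In modern notation, the equated quantities can be written out as a short worked statement (a restatement for illustration, not Einstein's original formulation):

\[
F = m_{i}\,a \quad\text{(response to acceleration)}, \qquad F = m_{g}\,g \quad\text{(weight in a gravitational field)},
\]

and since every object in the lab is observed to fall with the same acceleration, the two masses must be equal, \(m_{i} = m_{g}\); a uniformly accelerated lab is then locally indistinguishable from one at rest in a uniform gravitational field.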
The discussion on invariance in physics is often had in the more specific context of symmetry. The Einstein example above, in the parlance of Mill would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. And discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical.
Related principles here are falsifiability and testability. The opposite of something being hard-to-vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories to be falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.
Philosophy and discourse
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded. Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations and attempted definitions of the method of science. Unificationists argue for the existence of a unified definition that is useful (or at least 'works' in every context of science). Pluralists argue that the sciences are too fractured for a universal definition of method to be useful. And others argue that the very attempt at definition is already detrimental to the free flow of ideas.
Additionally, there have been views on the social framework in which science is done, and the impact of science's social environment on research. There is also 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education.
Pluralism
Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory.
Unificationism
Unificationism, in science, was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.
Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world.
Epistemological anarchism
Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'. As has been argued before him, however, this is uneconomical; problem solvers and researchers are to be prudent with their resources during their inquiry.
A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.
Education
In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students' and teachers' conception of science. This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work.
How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps: observation, hypothesis, prediction, experiment.
This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences. It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.
The taught presentation of science has had to defend itself against criticisms such as:
it pays no regard to the social context of science,
it suggests a singular methodology of deriving knowledge,
it overemphasises experimentation,
it oversimplifies science, giving the impression that following a scientific process automatically leads to knowledge,
it gives the illusion of determination; that questions necessarily lead to some kind of answers and answers are preceded by (specific) questions,
and, it holds that scientific theories arise from observed phenomena only.
The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). They, too, influenced international science education, and the standards measured for have shifted since from the singular hypothesis-testing method to a broader conception of scientific methods. These scientific methods, which are rooted in scientific practices and not epistemology, are described as the 3 dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.
The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation. Education's approach was heavily influenced by John Dewey's How We Think (1910). Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).
Sociology of knowledge
The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.
Thought collectives
A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style, that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.
Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.
Situated cognition and relativism
On the idea of Fleck's thought collectives, sociologists built the concept of situated cognition: that the perspective of the researcher fundamentally affects their work; and, beyond that, more radical views.
Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions.
Kuhn and Feyerabend acknowledged Hanson's pioneering work, although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.
Limits of method
Role of chance in discovery
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.
Relationship with statistics
When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values.
The particular points raised are statistical ("The smaller the studies conducted in a scientific field, the less likely the research findings are to be true" and "The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.") and economical ("The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true" and "The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.") Hence: "Most research findings are false for most research designs and for most fields" and "As shown, the majority of modern biomedical research is operating in areas with very low pre- and poststudy probability for true findings." However: "Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds," which means that *new* discoveries will come from research that, when that research started, had low or very low odds (a low or very low chance) of succeeding. Hence, if the scientific method is used to expand the frontiers of knowledge, research into areas that are outside the mainstream will yield the newest discoveries.
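The paper's central quantity, the post-study probability that a claimed finding is true, can be sketched in a few lines (this simplification ignores the bias and multiple-team corrections the paper also develops, and the example odds are invented):

```python
def positive_predictive_value(prior_odds, alpha=0.05, power=0.8):
    """Ioannidis (2005), simplified: PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
    where R is the pre-study odds that the probed relationship is true."""
    return power * prior_odds / (power * prior_odds + alpha)

# Exploratory, "hot" field with 1-in-20 pre-study odds: most positives are false.
print(positive_predictive_value(prior_odds=1 / 20))   # ~0.44

# Confirmatory setting with even pre-study odds: positives are far more reliable.
print(positive_predictive_value(prior_odds=1.0))      # ~0.94
```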
Science of complex systems
Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling.
In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method, as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. As one author notes, "a scientific discovery remains incomplete without considerations of the social practices that condition it".
Relationship with mathematics
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture.
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought but certainty is not found, in Proofs and Refutations what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way in which our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system; Wittgenstein 1921, Tractatus Logico-Philosophicus 5.13.) Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology, or more abstractly, from homological algebra.
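The running example of Proofs and Refutations, Euler's polyhedron formula, shows the pattern in miniature (a standard worked case, stated here for illustration): for the vertices, edges, and faces of a simple polyhedron,

\[
V - E + F = 2, \qquad \text{e.g. for a cube: } 8 - 12 + 6 = 2,
\]

while counterexamples such as a polyhedron with a hole through it (Euler characteristic 0) forced successive refinements of what "polyhedron" was taken to mean.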
Lakatos proposed an account of mathematical knowledge based on Polya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.
Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).
See also
Outline of scientific method
Research transparency
Notes
Notes: Problem-solving via scientific method
Notes: Philosophical expressions of method
References
Sources
, also published by Dover, 1964. From the Waynflete Lectures, 1948. On the web. N.B.: the web version does not have the 3 addenda by Born, 1950, 1964, in which he notes that all knowledge is subjective. Born then proposes a solution in Appendix 3 (1964)
. (written in German, 1935, Entstehung und Entwickelung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollectiv) English translation by Thaddeus J. Trenn and Fred Bradley, 1979 Edited by Thaddeus J. Trenn and Robert K. Merton. Foreword by Robert K. Merton
English translation: Additional publication information is from the collection of first editions of the Library of Congress surveyed by .
. 1877, 1879. Reprinted with a foreword by Ernst Nagel, New York, 1958.
2nd edition 2007.
. Memoir of a researcher in the Avery–MacLeod–McCarty experiment.
, Third edition. From I. Bernard Cohen and Anne Whitman's 1999 translation.
. Translated to English by Karen Jelved, Andrew D. Jackson, and Ole Knudsen, (translators 1997).
Peirce, C.S. – see Charles Sanders Peirce bibliography.
Further reading
Bauer, Henry H., Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, Champaign, IL, 1992
Beveridge, William I.B., The Art of Scientific Investigation, Heinemann, Melbourne, Australia, 1950.
Bernstein, Richard J., Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis, University of Pennsylvania Press, Philadelphia, PA, 1983.
Brody, Baruch A. and Capaldi, Nicholas, Science: Men, Methods, Goals: A Reader: Methods of Physical Science , W.A. Benjamin, 1968
Brody, Baruch A. and Grandy, Richard E., Readings in the Philosophy of Science, 2nd edition, Prentice-Hall, Englewood Cliffs, NJ, 1989.
Burks, Arthur W., Chance, Cause, Reason: An Inquiry into the Nature of Scientific Evidence, University of Chicago Press, Chicago, IL, 1977.
Chalmers, Alan, What Is This Thing Called Science?. Queensland University Press and Open University Press, 1976.
.
Earman, John (ed.), Inference, Explanation, and Other Frustrations: Essays in the Philosophy of Science, University of California Press, Berkeley & Los Angeles, CA, 1992.
Fraassen, Bas C. van, The Scientific Image, Oxford University Press, Oxford, 1980.
.
Gadamer, Hans-Georg, Reason in the Age of Science, Frederick G. Lawrence (trans.), MIT Press, Cambridge, MA, 1981.
Giere, Ronald N. (ed.), Cognitive Models of Science, vol. 15 in 'Minnesota Studies in the Philosophy of Science', University of Minnesota Press, Minneapolis, MN, 1992.
Hacking, Ian, Representing and Intervening, Introductory Topics in the Philosophy of Natural Science, Cambridge University Press, Cambridge, 1983.
Heisenberg, Werner, Physics and Beyond, Encounters and Conversations, A.J. Pomerans (trans.), Harper and Row, New York, 1971, pp. 63–64.
Holton, Gerald, Thematic Origins of Scientific Thought: Kepler to Einstein, 1st edition 1973, revised edition, Harvard University Press, Cambridge, MA, 1988.
Karin Knorr Cetina,
Kuhn, Thomas S., The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, Chicago, IL, 1977.
Latour, Bruno, Science in Action, How to Follow Scientists and Engineers through Society, Harvard University Press, Cambridge, MA, 1987.
Losee, John, A Historical Introduction to the Philosophy of Science, Oxford University Press, Oxford, 1972. 2nd edition, 1980.
Maxwell, Nicholas, The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford, 1998. Paperback 2003.
Maxwell, Nicholas, Understanding Scientific Progress , Paragon House, St. Paul, Minnesota, 2017.
Misak, Cheryl J., Truth and the End of Inquiry, A Peircean Account of Truth, Oxford University Press, Oxford, 1991.
Oreskes, Naomi, "Masked Confusion: A trusted source of health information misleads the public by prioritizing rigor over reality", Scientific American, vol. 329, no. 4 (November 2023), pp. 90–91.
Piattelli-Palmarini, Massimo (ed.), Language and Learning, The Debate between Jean Piaget and Noam Chomsky, Harvard University Press, Cambridge, MA, 1980.
Popper, Karl R., Unended Quest, An Intellectual Autobiography, Open Court, La Salle, IL, 1982.
Putnam, Hilary, Renewing Philosophy, Harvard University Press, Cambridge, MA, 1992.
Rorty, Richard, Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ, 1979.
Salmon, Wesley C., Four Decades of Scientific Explanation, University of Minnesota Press, Minneapolis, MN, 1990.
Shimony, Abner, Search for a Naturalistic World View: Vol. 1, Scientific Method and Epistemology, Vol. 2, Natural Science and Metaphysics, Cambridge University Press, Cambridge, 1993.
Thagard, Paul, Conceptual Revolutions, Princeton University Press, Princeton, NJ, 1992.
Ziman, John (2000). Real Science: what it is, and what it means. Cambridge: Cambridge University Press.
External links
An Introduction to Science: Scientific Thinking and a scientific method by Steven D. Schafersman.
Introduction to the scientific method at the University of Rochester
The scientific method from a philosophical perspective
Theory-ladenness by Paul Newall at The Galilean Library
Lecture on Scientific Method by Greg Anderson (archived 28 April 2006)
Using the scientific method for designing science fair projects
Scientific Methods an online book by Richard D. Jarrard
Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures.
Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins (archived 21 January 2013).
"How Do We Know What Is True?" (animated video; 2:52)
Method
Philosophy of science
Empiricism | 0.763134 | 0.999776 | 0.762963 |
Clinical pharmacology | Clinical pharmacology is "that discipline that teaches, does research, frames policy, gives information and advice about the actions and proper uses of medicines in humans and implements that knowledge in clinical practice". Clinical pharmacology is inherently a translational discipline underpinned by the basic science of pharmacology, engaged in the experimental and observational study of the disposition and effects of drugs in humans, and committed to the translation of science into evidence-based therapeutics. It has a broad scope, from the discovery of new target molecules to the effects of drug usage in whole populations. The main aim of clinical pharmacology is to generate data for optimum use of drugs and the practice of 'evidence-based medicine'.
Clinical pharmacologists have medical and scientific training that enables them to evaluate evidence and produce new data through well-designed studies. Clinical pharmacologists must have access to enough patients for clinical care, teaching and education, and research. Their responsibilities to patients include, but are not limited to, detecting and analysing adverse drug effects and reactions, therapeutics, and toxicology including reproductive toxicology, perioperative drug management, and psychopharmacology.
Modern clinical pharmacologists are also trained in data analysis skills. Their approaches to analyse data can include modelling and simulation techniques (e.g. population analysis, non-linear mixed-effects modelling).
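As an illustrative sketch of such modelling and simulation (all parameter values below are assumptions for illustration only and do not describe any real drug), a one-compartment pharmacokinetic model with first-order absorption and elimination can be simulated in a few lines of Python:

```python
import math

# One-compartment oral model: C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
# All parameter values are hypothetical, chosen only to illustrate the calculation.
F, D = 0.8, 500.0      # bioavailability (fraction), dose (mg)
ka, ke = 1.2, 0.2      # absorption and elimination rate constants (1/h); ka must differ from ke
V = 40.0               # volume of distribution (L)

def concentration(t_hours):
    """Plasma concentration (mg/L) at time t for the assumed parameters."""
    return (F * D * ka) / (V * (ka - ke)) * (math.exp(-ke * t_hours) - math.exp(-ka * t_hours))

half_life = math.log(2) / ke   # elimination half-life (h)
clearance = ke * V             # clearance (L/h)

for t in (0, 1, 2, 4, 8, 12, 24):
    print(f"t = {t:2d} h  C = {concentration(t):6.2f} mg/L")
print(f"half-life ≈ {half_life:.1f} h, clearance ≈ {clearance:.1f} L/h")
```

Population (non-linear mixed-effects) approaches build on the same structural model but add between-patient variability in the parameters; that extension is beyond this sketch.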
Branches
Clinical pharmacology consists of multiple branches listed below:
Pharmacodynamics – what drugs do to the body and how. This includes not just the cellular and molecular aspects, but also more relevant clinical measurements. For example, not just the pharmacological actions of salbutamol, a beta2-adrenergic receptor agonist, but the respiratory peak flow rate of both healthy volunteers and patients.
Pharmacokinetics – what happens to the drug while in the body. This involves the body systems for handling the drug, usually divided into the following classification:
Absorption – the processes by which the drug moves into the bloodstream from the site of administration (e.g. the gut)
Distribution – the extent to which the drug enters and leaves different tissues of the body
Metabolism – the processes by which the drug is metabolized in the liver, i.e. transformed into molecules that are usually less pharmacologically active
Excretion – the processes by which the drug is eliminated from the body, which mostly happens in the liver and kidneys.
Rational Prescribing – using the right medication, in the right dose, using the right route and frequency of administration, and for the right duration of time.
Adverse drug effects – unwanted effects of a medicine that are typically not noticed by the individual (e.g. a reduction in the white cell count or a change in the serum uric acid concentration)
Adverse drug reactions – unwanted effects of the drug that the individual experiences (e.g. a sore throat because of a reduced white cell count or an attack of gout because of an increased serum uric acid concentration)
Toxicology – the discipline that deals with the adverse effects of chemicals
Drug interactions – the study of how drugs interact with each other. A drug may negatively or positively affect the effects of another drug; drugs can also interact with other agents, such as foods, alcohol, and devices.
Drug development – the processes of bringing a new medicine from its discovery to clinical use, usually culminating in some form of clinical trials and marketing authorization applications to country-specific drug regulators, such as the US FDA and the UK's MHRA.
Molecular pharmacology – the discipline of studying drug actions at the molecular level; it is a branch of pharmacology in general.
Pharmacogenomics – the study of the human genome in order to understand the ways in which genetic factors determine the actions of medicines.
History
Medicinal uses of plant and animal resources have been common since prehistoric times. Many countries, such as China, Egypt, and India, have written documentation of many traditional remedies. A few of these remedies are still regarded as helpful today, but most of them have been discarded because they were ineffective or potentially harmful.
For many years, therapeutic practices were based on Hippocratic humoral theory, popularized by the Greek physician Galen (129 – c. AD 216) and not on experimentation.
In around the 17th century, physicians began to apply more systematic methods to the study of traditional remedies, although they still lacked the means to test the hypotheses they had about how drugs worked.
By the late 18th century and early 19th century, methods of experimental physiology and pharmacology began to be developed by scientists such as François Magendie and his student Claude Bernard.
From the late 18th century to the early 20th century, advances were made in chemistry and physiology that laid the foundations needed to understand how drugs act at the tissue and organ levels. The advances that were made at this time gave manufacturers the ability to make and sell medicines that they claimed to be effective, but were in many cases worthless. There were no methods for evaluating such claims until rational therapeutic concepts were established in medicine, starting at about the end of the 19th century.
The development of receptor theory at the start of the 20th century and later developments led to better understanding of how medicines act and the development of many new medicines that are both safe and effective. Expansion of the scientific principles of pharmacology and clinical pharmacology continues today.
See also
Dormant therapy
References
External links
International Union of Basic and Clinical Pharmacology (IUPHAR)
European Association for Clinical Pharmacology and Therapeutics (EACPT)
Dutch Society on Clinical Pharmacology and Biopharmaceutics (NVKF&B)
American Society for Clinical Pharmacology and Therapeutics (ASCPT)
American College of Clinical Pharmacology (ACCP)
British Pharmacological Society (BPS)
Korean Society for Clinical Pharmacology and Therapeutics (KSCPT)
Japanese Society for Clinical Pharmacology and Therapeutics (JSCPT)
Pharmacology | 0.778467 | 0.980065 | 0.762948 |
Mathematical object | A mathematical object is an abstract concept arising in mathematics.
In the usual language of mathematics, an object is anything that has been (or could be) formally defined, and with which one may do deductive reasoning and mathematical proofs. Typically, a mathematical object can be a value that can be assigned to a variable, and therefore can be involved in formulas. Commonly encountered mathematical objects include numbers, sets, functions, expressions, geometric objects, transformations of other mathematical objects, and spaces. Mathematical objects can be very complex; for example, theorems, proofs, and even theories are considered as mathematical objects in proof theory.
In philosophy of mathematics
Nature of mathematical objects
In the philosophy of mathematics, the concept of "objects" touches on topics of existence, identity, and the nature of reality. In metaphysics, objects are often considered entities that possess properties and can stand in various relations to one another. Philosophers debate whether objects have an independent existence outside of human thought (realism), or if their existence is dependent on mental constructs or language (idealism and nominalism). Objects can range from the concrete, such as physical objects in the world, to the abstract, and it is in this latter category that mathematical objects usually lie. What constitutes an "object" is foundational to many areas of philosophy, from ontology (the study of being) to epistemology (the study of knowledge). In mathematics, objects are often seen as entities that exist independently of the physical world, raising questions about their ontological status. There are varying schools of thought which offer different perspectives on the matter, and many famous mathematicians and philosophers each have differing opinions on which is more correct.
Quine-Putnam indispensability
Quine-Putnam indispensability is an argument for the existence of mathematical objects based on their unreasonable effectiveness in the natural sciences. Every branch of science relies heavily on large and often vastly different areas of mathematics. From physics' use of Hilbert spaces in quantum mechanics and differential geometry in general relativity to biology's use of chaos theory and combinatorics (see mathematical biology), not only does mathematics help with predictions, it allows these areas to have an elegant language to express these ideas. Moreover, it is hard to imagine how areas like quantum mechanics and general relativity could have developed without their assistance from mathematics, and therefore, one could argue that mathematics is indispensable to these theories. It is because of this unreasonable effectiveness and indispensability of mathematics that philosophers Willard Quine and Hilary Putnam argue that we should believe the mathematical objects on which these theories depend actually exist, that is, we ought to have an ontological commitment to them. The argument is described by the following syllogism:
(Premise 1) We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories.
(Premise 2) Mathematical entities are indispensable to our best scientific theories.
(Conclusion) We ought to have ontological commitment to mathematical entities.
This argument resonates with a philosophy in applied mathematics called Naturalism (or sometimes Predicativism) which states that the only authoritative standards on existence are those of science.
Schools of thought
Platonism
Platonism asserts that mathematical objects are seen as real, abstract entities that exist independently of human thought, often in some Platonic realm. Just as physical objects like electrons and planets exist, so do numbers and sets. And just as statements about electrons and planets are true or false as these objects contain perfectly objective properties, so are statements about numbers and sets. Mathematicians discover these objects rather than invent them. (See also: Mathematical Platonism)
Some notable platonists include:
Plato: The ancient Greek philosopher who, though not a mathematician, laid the groundwork for Platonism by positing the existence of an abstract realm of perfect forms or ideas, which influenced later thinkers in mathematics.
Kurt Gödel: A 20th-century logician and mathematician, Gödel was a strong proponent of mathematical Platonism, and his work in model theory was a major influence on modern platonism.
Roger Penrose: A contemporary mathematical physicist, Penrose has argued for a Platonic view of mathematics, suggesting that mathematical truths exist in a realm of abstract reality that we discover.
Nominalism
Nominalism denies the independent existence of mathematical objects. Instead, it suggests that they are merely convenient fictions or shorthand for describing relationships and structures within our language and theories. Under this view, mathematical objects don't have an existence beyond the symbols and concepts we use.
Some notable nominalists include:
Nelson Goodman: A philosopher known for his work in the philosophy of science and nominalism. He argued against the existence of abstract objects, proposing instead that mathematical objects are merely a product of our linguistic and symbolic conventions.
Hartry Field: A contemporary philosopher who has developed the form of nominalism called "fictionalism," which argues that mathematical statements are useful fictions that don't correspond to any actual abstract objects.
Logicism
Logicism asserts that all mathematical truths can be reduced to logical truths, and all objects forming the subject matter of those branches of mathematics are logical objects. In other words, mathematics is fundamentally a branch of logic, and all mathematical concepts, theorems, and truths can be derived from purely logical principles and definitions. Logicism faced challenges, particularly with the Russellian axioms, the Multiplicative axiom (now called the Axiom of Choice) and his Axiom of Infinity, and later with the discovery of Gödel's incompleteness theorems, which showed that any sufficiently powerful formal system (like those used to express arithmetic) cannot be both complete and consistent. This meant that not all mathematical truths could be derived purely from a logical system, undermining the logicist program.
Some notable logicists include:
Gottlob Frege: Frege is often regarded as the founder of logicism. In his work, Grundgesetze der Arithmetik (Basic Laws of Arithmetic), Frege attempted to show that arithmetic could be derived from logical axioms. He developed a formal system that aimed to express all of arithmetic in terms of logic. Frege’s work laid the groundwork for much of modern logic and was highly influential, though it encountered difficulties, most notably Russell’s paradox, which revealed inconsistencies in Frege’s system.
Bertrand Russell: Russell, along with Alfred North Whitehead, further developed logicism in their monumental work Principia Mathematica. They attempted to derive all of mathematics from a set of logical axioms, using a type theory to avoid the paradoxes that Frege’s system encountered. Although Principia Mathematica was enormously influential, the effort to reduce all of mathematics to logic was ultimately seen as incomplete. However, it did advance the development of mathematical logic and analytic philosophy.
Formalism
Mathematical formalism treats objects as symbols within a formal system. The focus is on the manipulation of these symbols according to specified rules, rather than on the objects themselves. One common understanding of formalism takes mathematics as not a body of propositions representing an abstract piece of reality but much more akin to a game, bringing with it no more ontological commitment of objects or properties than playing ludo or chess. In this view, mathematics is about the consistency of formal systems rather than the discovery of pre-existing objects. Some philosophers consider logicism to be a type of formalism.
Some notable formalists include:
David Hilbert: A leading mathematician of the early 20th century, Hilbert is one of the most prominent advocates of formalism. He believed that mathematics is a system of formal rules and that its truth lies in the consistency of these rules rather than any connection to an abstract reality.
Hermann Weyl: German mathematician and philosopher who, while not strictly a formalist, contributed to formalist ideas, particularly in his work on the foundations of mathematics.
Constructivism
Mathematical constructivism asserts that it is necessary to find (or "construct") a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation. There are many forms of constructivism. These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of mathematicians Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as Constructive Zermelo–Fraenkel set theory and the study of topos theory.
Structuralism
Structuralism suggests that mathematical objects are defined by their place within a structure or system. The nature of a number, for example, is not tied to any particular thing, but to its role within the system of arithmetic. In a sense, the thesis is that mathematical objects (if there are such objects) simply have no intrinsic nature.
Some notable structuralists include:
Paul Benacerraf: A philosopher known for his work in the philosophy of mathematics, particularly his paper "What Numbers Could Not Be," which argues for a structuralist view of mathematical objects.
Stewart Shapiro: Another prominent philosopher who has developed and defended structuralism, especially in his book Philosophy of Mathematics: Structure and Ontology.
Objects versus mappings
Frege famously distinguished between functions and objects. According to his view, a function is a kind of ‘incomplete’ entity that maps arguments to values, and is denoted by an incomplete expression, whereas an object is a ‘complete’ entity and can be denoted by a singular term. Frege reduced properties and relations to functions and so these entities are not included among the objects. Some authors make use of Frege's notion of ‘object’ when discussing abstract objects. But though Frege's sense of ‘object’ is important, it is not the only way to use the term. Other philosophers include properties and relations among the abstract objects. And when the background context for discussing objects is type theory, properties and relations of higher type (e.g., properties of properties, and properties of relations) may all be considered ‘objects’. This latter use of ‘object’ is interchangeable with ‘entity’. It is this broader interpretation that mathematicians mean when they use the term 'object'.
See also
Abstract object
Exceptional object
Impossible object
List of mathematical objects
List of mathematical shapes
List of shapes
List of surfaces
List of two-dimensional geometric shapes
Mathematical structure
References
Cited sources
Further reading
Azzouni, J., 1994. Metaphysical Myths, Mathematical Practice. Cambridge University Press.
Burgess, John, and Rosen, Gideon, 1997. A Subject with No Object. Oxford Univ. Press.
Davis, Philip and Reuben Hersh, 1999 [1981]. The Mathematical Experience. Mariner Books: 156–62.
Gold, Bonnie, and Simons, Roger A., 2011. Proof and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America.
Hersh, Reuben, 1997. What is Mathematics, Really? Oxford University Press.
Sfard, A., 2000, "Symbolizing mathematical reality into being, Or how mathematical discourse and mathematical objects create each other," in Cobb, P., et al., Symbolizing and communicating in mathematics classrooms: Perspectives on discourse, tools and instructional design. Lawrence Erlbaum.
Stewart Shapiro, 2000. Thinking about mathematics: The philosophy of mathematics. Oxford University Press.
External links
Stanford Encyclopedia of Philosophy: "Abstract Objects"—by Gideon Rosen.
Wells, Charles. "Mathematical Objects".
AMOF: The Amazing Mathematical Object Factory
Mathematical Object Exhibit
Philosophical concepts
Category theory
Mathematical concepts
Platonism | 0.766248 | 0.995693 | 0.762948 |
Soil | Soil, also commonly referred to as earth, is a mixture of organic matter, minerals, gases, liquids, and organisms that together support the life of plants and soil organisms. Some scientific definitions distinguish dirt from soil by restricting the former term specifically to displaced soil.
Soil consists of a solid phase of minerals and organic matter (the soil matrix), as well as a porous phase that holds gases (the soil atmosphere) and water (the soil solution). Accordingly, soil is a three-state system of solids, liquids, and gases. Soil is a product of several factors: the influence of climate, relief (elevation, orientation, and slope of terrain), organisms, and the soil's parent materials (original minerals) interacting over time. It continually undergoes development by way of numerous physical, chemical and biological processes, which include weathering with associated erosion. Given its complexity and strong internal connectedness, soil ecologists regard soil as an ecosystem.
Most soils have a dry bulk density (density of soil taking into account voids when dry) between 1.1 and 1.6 g/cm3, though the soil particle density is much higher, in the range of 2.6 to 2.7 g/cm3. Little of the soil of planet Earth is older than the Pleistocene and none is older than the Cenozoic, although fossilized soils are preserved from as far back as the Archean.
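These two densities are related: total porosity (the fraction of soil volume occupied by pores) can be estimated as 1 − (bulk density ÷ particle density). A minimal sketch, using illustrative values from the ranges quoted above:

```python
def porosity(bulk_density, particle_density):
    """Estimate total porosity as the fraction of soil volume not occupied by solids."""
    return 1.0 - bulk_density / particle_density

# Illustrative values (g/cm3) within the ranges quoted above.
print(f"porosity ≈ {porosity(1.33, 2.65):.2f}")  # ≈ 0.50, i.e. about half the soil volume is pore space
```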
Collectively the Earth's body of soil is called the pedosphere. The pedosphere interfaces with the lithosphere, the hydrosphere, the atmosphere, and the biosphere. Soil has four important functions:
as a medium for plant growth
as a means of water storage, supply, and purification
as a modifier of Earth's atmosphere
as a habitat for organisms
All of these functions, in their turn, modify the soil and its properties.
Soil science has two basic branches of study: edaphology and pedology. Edaphology studies the influence of soils on living things. Pedology focuses on the formation, description (morphology), and classification of soils in their natural environment. In engineering terms, soil is included in the broader concept of regolith, which also includes other loose material that lies above the bedrock, as can be found on the Moon and other celestial objects.
Processes
Soil is a major component of the Earth's ecosystem. The world's ecosystems are impacted in far-reaching ways by the processes carried out in the soil, with effects ranging from ozone depletion and global warming to rainforest destruction and water pollution. With respect to Earth's carbon cycle, soil acts as an important carbon reservoir, and it is potentially one of the most reactive to human disturbance and climate change. As the planet warms, it has been predicted that soils will add carbon dioxide to the atmosphere due to increased biological activity at higher temperatures, a positive feedback (amplification). This prediction has, however, been questioned on consideration of more recent knowledge on soil carbon turnover.
Soil acts as an engineering medium, a habitat for soil organisms, a recycling system for nutrients and organic wastes, a regulator of water quality, a modifier of atmospheric composition, and a medium for plant growth, making it a critically important provider of ecosystem services. Since soil has a tremendous range of available niches and habitats, it contains a prominent part of the Earth's genetic diversity. A gram of soil can contain billions of organisms, belonging to thousands of species, mostly microbial and largely still unexplored. Soil has a mean prokaryotic density of roughly 10^8 organisms per gram, whereas the ocean has no more than 10^7 prokaryotic organisms per milliliter (gram) of seawater. Organic carbon held in soil is eventually returned to the atmosphere through the process of respiration carried out by heterotrophic organisms, but a substantial part is retained in the soil in the form of soil organic matter; tillage usually increases the rate of soil respiration, leading to the depletion of soil organic matter. Since plant roots need oxygen, aeration is an important characteristic of soil. This ventilation can be accomplished via networks of interconnected soil pores, which also absorb and hold rainwater making it readily available for uptake by plants. Since plants require a nearly continuous supply of water, but most regions receive sporadic rainfall, the water-holding capacity of soils is vital for plant survival.
Soils can effectively remove impurities, kill disease agents, and degrade contaminants, this latter property being called natural attenuation. Typically, soils maintain a net absorption of oxygen and methane and undergo a net release of carbon dioxide and nitrous oxide. Soils offer plants physical support, air, water, temperature moderation, nutrients, and protection from toxins. Soils provide readily available nutrients to plants and animals by converting dead organic matter into various nutrient forms.
Composition
A typical soil is about 50% solids (45% mineral and 5% organic matter), and 50% voids (or pores) of which half is occupied by water and half by gas. The percent soil mineral and organic content can be treated as a constant (in the short term), while the percent soil water and gas content is considered highly variable whereby a rise in one is simultaneously balanced by a reduction in the other. The pore space allows for the infiltration and movement of air and water, both of which are critical for life existing in soil. Compaction, a common problem with soils, reduces this space, preventing air and water from reaching plant roots and soil organisms.
Given sufficient time, an undifferentiated soil will evolve a soil profile that consists of two or more layers, referred to as soil horizons. These differ in one or more properties such as in their texture, structure, density, porosity, consistency, temperature, color, and reactivity. The horizons differ greatly in thickness and generally lack sharp boundaries; their development is dependent on the type of parent material, the processes that modify those parent materials, and the soil-forming factors that influence those processes. The biological influences on soil properties are strongest near the surface, though the geochemical influences on soil properties increase with depth. Mature soil profiles typically include three basic master horizons: A, B, and C. The solum normally includes the A and B horizons. The living component of the soil is largely confined to the solum, and is generally more prominent in the A horizon. It has been suggested that the pedon, a column of soil extending vertically from the surface to the underlying parent material and large enough to show the characteristics of all its horizons, could be subdivided in the humipedon (the living part, where most soil organisms are dwelling, corresponding to the humus form), the copedon (in intermediary position, where most weathering of minerals takes place) and the lithopedon (in contact with the subsoil).
The soil texture is determined by the relative proportions of the individual particles of sand, silt, and clay that make up the soil. The interaction of the individual mineral particles with organic matter, water, and gases via biotic and abiotic processes causes those particles to flocculate (stick together) to form aggregates or peds. Where these aggregates can be identified, a soil can be said to be developed, and can be described further in terms of color, porosity, consistency, reaction (acidity), etc.
Water is a critical agent in soil development due to its involvement in the dissolution, precipitation, erosion, transport, and deposition of the materials of which a soil is composed. The mixture of water and dissolved or suspended materials that occupy the soil pore space is called the soil solution. Since soil water is never pure water, but contains hundreds of dissolved organic and mineral substances, it may be more accurately called the soil solution. Water is central to the dissolution, precipitation and leaching of minerals from the soil profile. Finally, water affects the type of vegetation that grows in a soil, which in turn affects the development of the soil, a complex feedback which is exemplified in the dynamics of banded vegetation patterns in semi-arid regions.
Soils supply plants with nutrients, most of which are held in place by particles of clay and organic matter (colloids). The nutrients may be adsorbed on clay mineral surfaces, bound within clay minerals (absorbed), or bound within organic compounds as part of the living organisms or dead soil organic matter. These bound nutrients interact with soil water to buffer the soil solution composition (attenuate changes in the soil solution) as soils wet up or dry out, as plants take up nutrients, as salts are leached, or as acids or alkalis are added.
Plant nutrient availability is affected by soil pH, which is a measure of the hydrogen ion activity in the soil solution. Soil pH is a function of many soil forming factors, and is generally lower (more acidic) where weathering is more advanced.
Most plant nutrients, with the exception of nitrogen, originate from the minerals that make up the soil parent material. Some nitrogen originates from rain as dilute nitric acid and ammonia, but most of the nitrogen is available in soils as a result of nitrogen fixation by bacteria. Once in the soil-plant system, most nutrients are recycled through living organisms, plant and microbial residues (soil organic matter), mineral-bound forms, and the soil solution. Both living soil organisms (microbes, animals and plant roots) and soil organic matter are of critical importance to this recycling, and thereby to soil formation and soil fertility. Microbial soil enzymes may release nutrients from minerals or organic matter for use by plants and other microorganisms, sequester (incorporate) them into living cells, or cause their loss from the soil by volatilisation (loss to the atmosphere as gases) or leaching.
Formation
Soil is said to be formed when organic matter has accumulated and colloids are washed downward, leaving deposits of clay, humus, iron oxide, carbonate, and gypsum, producing a distinct layer called the B horizon. This is a somewhat arbitrary definition, as mixtures of sand, silt, clay and humus will support biological and agricultural activity before that time. These constituents are moved from one level to another by water and animal activity. As a result, layers (horizons) form in the soil profile. The alteration and movement of materials within a soil causes the formation of distinctive soil horizons. However, more recent definitions of soil embrace soils without any organic matter, such as the regoliths that formed on Mars and in analogous conditions in deserts on Earth.
An example of the development of a soil would begin with the weathering of lava flow bedrock, which would produce the purely mineral-based parent material from which the soil texture forms. Soil development would proceed most rapidly from bare rock of recent flows in a warm climate, under heavy and frequent rainfall. Under such conditions, plants (in a first stage nitrogen-fixing lichens and cyanobacteria then epilithic higher plants) become established very quickly on basaltic lava, even though there is very little organic material. Basaltic minerals commonly weather relatively quickly, according to the Goldich dissolution series. The plants are supported by the porous rock as it is filled with nutrient-bearing water that carries minerals dissolved from the rocks. Crevasses and pockets, local topography of the rocks, would hold fine materials and harbour plant roots. The developing plant roots are associated with mineral-weathering mycorrhizal fungi that assist in breaking up the porous lava, and by these means organic matter and a finer mineral soil accumulate with time. Such initial stages of soil development have been described on volcanoes, inselbergs, and glacial moraines.
How soil formation proceeds is influenced by at least five classic factors that are intertwined in the evolution of a soil: parent material, climate, topography (relief), organisms, and time. When reordered to climate, relief, organisms, parent material, and time, they form the acronym CROPT.
Physical properties
The physical properties of soils, in order of decreasing importance for ecosystem services such as crop production, are texture, structure, bulk density, porosity, consistency, temperature, colour and resistivity. Soil texture is determined by the relative proportion of the three kinds of soil mineral particles, called soil separates: sand, silt, and clay. At the next larger scale, soil structures called peds or more commonly soil aggregates are created from the soil separates when iron oxides, carbonates, clay, silica and humus, coat particles and cause them to adhere into larger, relatively stable secondary structures. Soil bulk density, when determined at standardized moisture conditions, is an estimate of soil compaction. Soil porosity consists of the void part of the soil volume and is occupied by gases or water. Soil consistency is the ability of soil materials to stick together. Soil temperature and colour are self-defining. Resistivity refers to the resistance to conduction of electric currents and affects the rate of corrosion of metal and concrete structures which are buried in soil. These properties vary through the depth of a soil profile, i.e. through soil horizons. Most of these properties determine the aeration of the soil and the ability of water to infiltrate and to be held within the soil.
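As a simplified illustration of how texture follows from the sand–silt–clay proportions, the sketch below encodes only a few corner classes of the USDA texture triangle with approximate thresholds; a real classification uses the full set of twelve classes and the complete triangle boundaries:

```python
def texture_class(sand, silt, clay):
    """Very simplified texture lookup for a few USDA corner classes; percentages must sum to ~100."""
    assert abs(sand + silt + clay - 100) < 1.0, "separates should sum to 100%"
    if sand >= 85 and (silt + 1.5 * clay) < 15:
        return "sand"
    if silt >= 80 and clay < 12:
        return "silt"
    if clay >= 40 and sand <= 45 and silt < 40:
        return "clay"
    return "intermediate class (consult the full USDA texture triangle)"

print(texture_class(90, 5, 5))    # sand
print(texture_class(20, 35, 45))  # clay
print(texture_class(40, 40, 20))  # intermediate
```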
Soil moisture
Soil water content can be measured as volume or weight. Soil moisture levels, in order of decreasing water content, are saturation, field capacity, wilting point, air dry, and oven dry. Field capacity describes a drained wet soil at the point water content reaches equilibrium with gravity. Irrigating soil above field capacity risks percolation losses. Wilting point describes the dry limit for growing plants. During the growing season, soil moisture is unaffected by functional group or species richness.
Available water capacity is the amount of water held in a soil profile available to plants. As water content drops, plants have to work against increasing forces of adhesion and sorptivity to withdraw water. Irrigation scheduling avoids moisture stress by replenishing depleted water before stress is induced.
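As a rough worked example (all values assumed for illustration only), plant-available water over the rooting depth can be estimated from the difference between the volumetric water contents at field capacity and at wilting point:

```python
def available_water_mm(theta_fc, theta_wp, rooting_depth_mm):
    """Plant-available water (mm) = (field capacity - wilting point) x rooting depth."""
    return (theta_fc - theta_wp) * rooting_depth_mm

# Assumed volumetric water contents (cm3 of water per cm3 of soil) and rooting depth.
print(f"available water ≈ {available_water_mm(0.30, 0.15, 600):.0f} mm")  # ≈ 90 mm over a 600 mm root zone
```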
Capillary action is responsible for moving groundwater from wet regions of the soil to dry areas. Subirrigation designs (e.g., wicking beds, sub-irrigated planters) rely on capillarity to supply water to plant roots. Capillary action can result in an evaporative concentration of salts, causing land degradation through salination.
Soil moisture measurement—measuring the water content of the soil, as can be expressed in terms of volume or weight—can be based on in situ probes (e.g., capacitance probes, neutron probes), or remote sensing methods. Soil moisture measurement is an important factor in determining changes in soil activity.
Soil gas
The atmosphere of soil, or soil gas, is very different from the atmosphere above. The consumption of oxygen by microbes and plant roots, and their release of carbon dioxide, decreases oxygen and increases carbon dioxide concentration. Atmospheric CO2 concentration is 0.04%, but in the soil pore space it may range from 10 to 100 times that level, thus potentially contributing to the inhibition of root respiration. Calcareous soils regulate CO2 concentration by carbonate buffering, contrary to acid soils in which all CO2 respired accumulates in the soil pore system. At extreme levels, CO2 is toxic. This suggests a possible negative feedback control of soil CO2 concentration through its inhibitory effects on root and microbial respiration (also called soil respiration). In addition, the soil voids are saturated with water vapour, at least until the point of maximal hygroscopicity, beyond which a vapour-pressure deficit occurs in the soil pore space. Adequate porosity is necessary, not just to allow the penetration of water, but also to allow gases to diffuse in and out. Movement of gases is by diffusion from high concentrations to lower, the diffusion coefficient decreasing with soil compaction. Oxygen from above atmosphere diffuses in the soil where it is consumed and levels of carbon dioxide in excess of above atmosphere diffuse out with other gases (including greenhouse gases) as well as water. Soil texture and structure strongly affect soil porosity and gas diffusion. It is the total pore space (porosity) of soil, not the pore size, and the degree of pore interconnection (or conversely pore sealing), together with water content, air turbulence and temperature, that determine the rate of diffusion of gases into and out of soil. Platy soil structure and soil compaction (low porosity) impede gas flow, and a deficiency of oxygen may encourage anaerobic bacteria to reduce (strip oxygen) from nitrate NO3 to the gases N2, N2O, and NO, which are then lost to the atmosphere, thereby depleting the soil of nitrogen, a detrimental process called denitrification. Aerated soil is also a net sink of methane (CH4) but a net producer of methane (a strong heat-absorbing greenhouse gas) when soils are depleted of oxygen and subject to elevated temperatures.
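The dependence of gas movement on porosity can be sketched with Fick's first law, flux = −Ds · dC/dz, where the effective diffusion coefficient Ds falls below its free-air value as air-filled porosity decreases. The Millington–Quirk relation is used here as one common approximation, and all numbers are illustrative only:

```python
def effective_diffusivity(d_free_air, air_filled_porosity, total_porosity):
    """Millington-Quirk approximation: Ds = D0 * eps**(10/3) / phi**2."""
    return d_free_air * air_filled_porosity ** (10.0 / 3.0) / total_porosity ** 2

def fick_flux(d_effective, c_high, c_low, path_length_m):
    """Fick's first law: diffusive flux from high to low concentration."""
    return d_effective * (c_high - c_low) / path_length_m

D0 = 1.8e-5  # approximate free-air diffusion coefficient of O2 (m2/s)
Ds = effective_diffusivity(D0, air_filled_porosity=0.25, total_porosity=0.50)
print(f"Ds ≈ {Ds:.2e} m2/s")  # much smaller than D0: compaction and wetting slow diffusion further
print(f"flux ≈ {fick_flux(Ds, c_high=8.0, c_low=2.0, path_length_m=0.1):.2e} mol m-2 s-1")
```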
Soil atmosphere is also the seat of emissions of volatiles other than carbon and nitrogen oxides from various soil organisms, e.g. roots, bacteria, fungi, animals. These volatiles are used as chemical cues, making soil atmosphere the seat of interaction networks playing a decisive role in the stability, dynamics and evolution of soil ecosystems. Biogenic soil volatile organic compounds are exchanged with the aboveground atmosphere, in which they are just 1–2 orders of magnitude lower than those from aboveground vegetation.
Humans can get some idea of the soil atmosphere through the well-known 'after-the-rain' scent, when infiltrating rainwater flushes out the whole soil atmosphere after a drought period, or when soil is excavated, a bulk property attributed in a reductionist manner to particular biochemical compounds such as petrichor or geosmin.
Solid phase (soil matrix)
Soil particles can be classified by their chemical composition (mineralogy) as well as their size. The particle size distribution of a soil, its texture, determines many of the properties of that soil, in particular hydraulic conductivity and water potential, but the mineralogy of those particles can strongly modify those properties. The mineralogy of the finest soil particles, clay, is especially important.
Soil biodiversity
Large numbers of microbes, animals, plants and fungi are living in soil. However, biodiversity in soil is much harder to study as most of this life is invisible, hence estimates about soil biodiversity have been unsatisfactory. A recent study suggested that soil is likely home to 59 ± 15% of the species on Earth. Enchytraeidae (worms) have the greatest percentage of species in soil (98.6%), followed by fungi (90%), plants (85.5%), and termites (Isoptera) (84.2%). Many other groups of animals have substantial fractions of species living in soil, e.g. about 30% of insects, and close to 50% of arachnids. While most vertebrates live above ground (ignoring aquatic species), many species are fossorial, that is, they live in soil, such as most blind snakes.
Chemistry
The chemistry of a soil determines its ability to supply available plant nutrients and affects its physical properties and the health of its living population. In addition, a soil's chemistry also determines its corrosivity, stability, and ability to absorb pollutants and to filter water. It is the surface chemistry of mineral and organic colloids that determines soil's chemical properties. A colloid is a small, insoluble particle ranging in size from 1 nanometer to 1 micrometer, thus small enough to remain suspended by Brownian motion in a fluid medium without settling. Most soils contain organic colloidal particles called humus as well as the inorganic colloidal particles of clays. The very high specific surface area of colloids and their net electrical charges give soil its ability to hold and release ions. Negatively charged sites on colloids attract and release cations in what is referred to as cation exchange. Cation-exchange capacity is the amount of exchangeable cations per unit weight of dry soil and is expressed in terms of milliequivalents of positively charged ions per 100 grams of soil (or centimoles of positive charge per kilogram of soil; cmolc/kg). Similarly, positively charged sites on colloids can attract and release anions in the soil, giving the soil anion exchange capacity.
Cation and anion exchange
The cation exchange, that takes place between colloids and soil water, buffers (moderates) soil pH, alters soil structure, and purifies percolating water by adsorbing cations of all types, both useful and harmful.
The negative or positive charges on colloid particles make them able to hold cations or anions, respectively, to their surfaces. The charges result from four sources.
Isomorphous substitution occurs in clay during its formation, when lower-valence cations substitute for higher-valence cations in the crystal structure. Substitutions in the outermost layers are more effective than for the innermost layers, as the electric charge strength drops off as the square of the distance. The net result is oxygen atoms with net negative charge and the ability to attract cations.
Edge-of-clay oxygen atoms are not in balance ionically as the tetrahedral and octahedral structures are incomplete.
Hydroxyls may substitute for oxygens of the silica layers, a process called hydroxylation. When the hydrogens of the clay hydroxyls are ionised into solution, they leave the oxygen with a negative charge (anionic clays).
Hydrogens of humus hydroxyl groups may also be ionised into solution, leaving, similarly to clay, an oxygen with a negative charge.
Cations held to the negatively charged colloids resist being washed downward by water and out of reach of plant roots, thereby preserving soil fertility in areas of moderate rainfall and low temperatures.
There is a hierarchy in the process of cation exchange on colloids, as cations differ in the strength of adsorption by the colloid and hence their ability to replace one another (ion exchange). If present in equal amounts in the soil water solution:
Al3+ replaces H+ replaces Ca2+ replaces Mg2+ replaces K+ same as NH4+ replaces Na+
If one cation is added in large amounts, it may replace the others by the sheer force of its numbers. This is called the law of mass action. This is largely what occurs with the addition of cationic fertilisers (potash, lime).
As the soil solution becomes more acidic (low pH, meaning an abundance of H+), the other cations more weakly bound to colloids are pushed into solution as hydrogen ions occupy exchange sites (protonation). A low pH may cause the hydrogen of hydroxyl groups to be pulled into solution, leaving charged sites on the colloid available to be occupied by other cations. This ionisation of hydroxy groups on the surface of soil colloids creates what is described as pH-dependent surface charges. Unlike permanent charges developed by isomorphous substitution, pH-dependent charges are variable and increase with increasing pH. Freed cations can be made available to plants but are also prone to be leached from the soil, possibly making the soil less fertile. Plants are able to excrete H+ into the soil through the synthesis of organic acids and by that means, change the pH of the soil near the root and push cations off the colloids, thus making those available to the plant.
Cation exchange capacity (CEC)
Cation exchange capacity is the soil's ability to remove cations from the soil water solution and sequester those to be exchanged later as the plant roots release hydrogen ions to the solution. CEC is the amount of exchangeable hydrogen cation (H+) that will combine with 100 grams dry weight of soil and whose measure is one milliequivalent per 100 grams of soil (1 meq/100 g). Hydrogen ions have a single charge and one-thousandth of a gram of hydrogen ions per 100 grams dry soil gives a measure of one milliequivalent of hydrogen ion. Calcium, with an atomic weight 40 times that of hydrogen and with a valence of two, has an equivalent weight of 40 ÷ 2 = 20; two-hundredths of a gram (0.020 g) of calcium per 100 grams of dry soil therefore gives one milliequivalent, and 0.4 g of exchangeable calcium per 100 grams of dry soil converts to 20 milliequivalents of hydrogen ion per 100 grams of dry soil, or 20 meq/100 g. The modern measure of CEC is expressed as centimoles of positive charge per kilogram (cmol/kg) of oven-dry soil.
Most of the soil's CEC occurs on clay and humus colloids, and the lack of those in hot, humid, wet climates (such as tropical rainforests), due to leaching and decomposition, respectively, explains the apparent sterility of tropical soils. Live plant roots also have some CEC, linked to their specific surface area.
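A small worked sketch of these units (values illustrative): 1 meq/100 g is numerically equal to 1 cmolc/kg, and the mass of a cation corresponding to one milliequivalent is its atomic weight divided by (valence × 1000) grams:

```python
def meq_to_cmolc_per_kg(meq_per_100g):
    """1 meq per 100 g of soil is numerically equal to 1 cmol of charge per kg."""
    return meq_per_100g

def grams_per_meq(atomic_weight, valence):
    """Mass of cation (g) carrying one milliequivalent of charge."""
    return atomic_weight / (valence * 1000.0)

# Calcium example, consistent with the worked figures in the text above.
g_ca_per_meq = grams_per_meq(40.0, 2)  # 0.020 g per meq
print(f"Ca2+: {g_ca_per_meq:.3f} g per meq; 20 meq/100 g ≈ {20 * g_ca_per_meq:.2f} g Ca per 100 g soil")
print(f"20 meq/100 g = {meq_to_cmolc_per_kg(20)} cmolc/kg")
```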
Anion exchange capacity (AEC)
Anion exchange capacity is the soil's ability to remove anions (such as nitrate, phosphate) from the soil water solution and sequester those for later exchange as the plant roots release carbonate anions to the soil water solution. Those colloids which have low CEC tend to have some AEC. Amorphous and sesquioxide clays have the highest AEC, followed by the iron oxides. Levels of AEC are much lower than for CEC, because of the generally higher rate of positively (versus negatively) charged surfaces on soil colloids, with the exception of variable-charge soils. Phosphates tend to be held at anion exchange sites.
Iron and aluminum hydroxide clays are able to exchange their hydroxide anions (OH−) for other anions. The order reflecting the strength of anion adhesion is as follows:
H2PO4− replaces SO42− replaces NO3− replaces Cl−
The amount of exchangeable anions is of a magnitude of tenths to a few milliequivalents per 100 g dry soil. As pH rises, there are relatively more hydroxyls, which will displace anions from the colloids and force them into solution and out of storage; hence AEC decreases with increasing pH (alkalinity).
Reactivity (pH)
Soil reactivity is expressed in terms of pH and is a measure of the acidity or alkalinity of the soil. More precisely, it is a measure of hydronium concentration in an aqueous solution and ranges in values from 0 to 14 (acidic to basic) but practically speaking for soils, pH ranges from 3.5 to 9.5, as pH values beyond those extremes are toxic to life forms.
At 25 °C an aqueous solution that has a pH of 3.5 has 10^−3.5 moles H3O+ (hydronium ions) per litre of solution (and also 10^−10.5 moles per litre OH−). A pH of 7, defined as neutral, has 10^−7 moles of hydronium ions per litre of solution and also 10^−7 moles of OH− per litre; since the two concentrations are equal, they are said to neutralise each other. A pH of 9.5 has 10^−9.5 moles hydronium ions per litre of solution (and also 10^−2.5 moles per litre OH−). A pH of 3.5 has one million times more hydronium ions per litre than a solution with pH of 9.5 (10^−3.5 ÷ 10^−9.5 = 10^6) and is more acidic.
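The arithmetic above can be made explicit with a small sketch; pH is treated here simply as the negative base-10 logarithm of hydronium concentration:

```python
def hydronium_concentration(ph):
    """Approximate H3O+ concentration (mol/L) from pH."""
    return 10.0 ** (-ph)

ratio = hydronium_concentration(3.5) / hydronium_concentration(9.5)
print(f"[H3O+] at pH 3.5: {hydronium_concentration(3.5):.1e} mol/L")
print(f"[H3O+] at pH 9.5: {hydronium_concentration(9.5):.1e} mol/L")
print(f"ratio: {ratio:.0e}")  # 1e+06, i.e. one million times more hydronium ions
```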
The effect of pH on a soil is to remove from the soil or to make available certain ions. Soils with high acidity tend to have toxic amounts of aluminium and manganese. As a result of a trade-off between toxicity and requirement most nutrients are better available to plants at moderate pH, although most minerals are more soluble in acid soils. Soil organisms are hindered by high acidity, and most agricultural crops do best with mineral soils of pH 6.5 and organic soils of pH 5.5. Given that at low pH toxic metals (e.g. cadmium, zinc, lead) are positively charged as cations and organic pollutants are in non-ionic form, thus both made more available to organisms, it has been suggested that plants, animals and microbes commonly living in acid soils are pre-adapted to every kind of pollution, whether of natural or human origin.
In high rainfall areas, soils tend to acidify as the basic cations are forced off the soil colloids by the mass action of hydronium ions from usual or unusual rain acidity against those attached to the colloids. High rainfall rates can then wash the nutrients out, leaving the soil inhabited only by those organisms which are particularly efficient to uptake nutrients in very acid conditions, like in tropical rainforests. Once the colloids are saturated with H3O+, the addition of any more hydronium ions or aluminum hydroxyl cations drives the pH even lower (more acidic) as the soil has been left with no buffering capacity. In areas of extreme rainfall and high temperatures, the clay and humus may be washed out, further reducing the buffering capacity of the soil. In low rainfall areas, unleached calcium pushes pH to 8.5 and with the addition of exchangeable sodium, soils may reach pH 10. Beyond a pH of 9, plant growth is reduced. High pH results in low micro-nutrient mobility, but water-soluble chelates of those nutrients can correct the deficit. Sodium can be reduced by the addition of gypsum (calcium sulphate) as calcium adheres to clay more tightly than does sodium causing sodium to be pushed into the soil water solution where it can be washed out by an abundance of water.
Base saturation percentage
There are acid-forming cations (e.g. hydronium, aluminium, iron) and there are base-forming cations (e.g. calcium, magnesium, sodium). The fraction of the negatively-charged soil colloid exchange sites (CEC) that are occupied by base-forming cations is called base saturation. If a soil has a CEC of 20 meq and 5 meq are aluminium and hydronium cations (acid-forming), the remainder of positions on the colloids are assumed occupied by base-forming cations, so that the base saturation is 15 ÷ 20 × 100% = 75% (the complement, 25%, is assumed to be acid-forming cations). Base saturation is almost in direct proportion to pH (it increases with increasing pH). It is of use in calculating the amount of lime needed to neutralise an acid soil (lime requirement). The amount of lime needed to neutralize a soil must take account of the amount of acid-forming ions on the colloids (exchangeable acidity), not just those in the soil water solution (free acidity). The addition of enough lime to neutralize the soil water solution will be insufficient to change the pH, as the acid-forming cations stored on the soil colloids will tend to restore the original pH condition as they are pushed off those colloids by the calcium of the added lime.
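The worked example above can be written out as a short sketch (numbers taken from the text; exchangeable acidity stands in for the acid-forming cations held on the colloids):

```python
def base_saturation_percent(cec_meq, exchangeable_acidity_meq):
    """Base saturation = fraction of CEC occupied by base-forming cations, in percent."""
    return (cec_meq - exchangeable_acidity_meq) / cec_meq * 100.0

cec = 20.0          # meq/100 g
acid_cations = 5.0  # meq/100 g of Al3+ and hydronium (exchangeable acidity)
print(f"base saturation = {base_saturation_percent(cec, acid_cations):.0f}%")  # 75%
```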
Buffering
The resistance of soil to change in pH, as a result of the addition of acid or basic material, is a measure of the buffering capacity of a soil and (for a particular soil type) increases as the CEC increases. Hence, pure sand has almost no buffering ability, while soils high in colloids (whether mineral or organic) have high buffering capacity. Buffering occurs by cation exchange and neutralisation. However, colloids are not the only regulators of soil pH. The role of carbonates should be underlined, too. More generally, according to pH levels, several buffer systems take precedence over each other, from the calcium carbonate buffer range to the iron buffer range.
The addition of a small amount of highly basic aqueous ammonia to a soil will cause the ammonium to displace hydronium ions from the colloids, and the end product is water and colloidally fixed ammonium, but little permanent change overall in soil pH.
The addition of a small amount of lime, Ca(OH)2, will displace hydronium ions from the soil colloids, causing the fixation of calcium to colloids and the evolution of CO2 and water, with little permanent change in soil pH.
The above are examples of the buffering of soil pH. The general principle is that an increase in a particular cation in the soil water solution will cause that cation to be fixed to colloids (buffered) and a decrease in solution of that cation will cause it to be withdrawn from the colloid and moved into solution (buffered). The degree of buffering is often related to the CEC of the soil; the greater the CEC, the greater the buffering capacity of the soil.
Redox
Soil chemical reactions involve some combination of proton and electron transfer. Oxidation occurs if there is a loss of electrons in the transfer process while reduction occurs if there is a gain of electrons. Reduction potential is measured in volts or millivolts. Soil microbial communities develop along electron transport chains, forming electrically conductive biofilms, and developing networks of bacterial nanowires.
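For reference, a measured redox potential Eh can be converted to the dimensionless electron activity scale pe used in geochemistry by the standard relation (general electrochemistry, not taken from the text above):

\[ \mathrm{pe} = \frac{F\,E_h}{2.303\,R\,T} \approx \frac{E_h\,[\mathrm{V}]}{0.0592} \quad \text{at } 25\,^{\circ}\mathrm{C} \]

where F is the Faraday constant, R the gas constant and T the absolute temperature.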
Redox is a factor in soil development, where the formation of redoximorphic color features provides critical information for soil interpretation. Understanding the redox gradient is important to managing carbon sequestration, bioremediation, wetland delineation, and soil-based microbial fuel cells.
Nutrients
Seventeen elements or nutrients are essential for plant growth and reproduction. They are carbon (C), hydrogen (H), oxygen (O), nitrogen (N), phosphorus (P), potassium (K), sulfur (S), calcium (Ca), magnesium (Mg), iron (Fe), boron (B), manganese (Mn), copper (Cu), zinc (Zn), molybdenum (Mo), nickel (Ni) and chlorine (Cl). Nutrients required for plants to complete their life cycle are considered essential nutrients. Nutrients that enhance the growth of plants but are not necessary to complete the plant's life cycle are considered non-essential. With the exception of carbon, hydrogen and oxygen, which are supplied by carbon dioxide and water, and nitrogen, provided through nitrogen fixation, the nutrients derive originally from the mineral component of the soil. The Law of the Minimum states that when the available form of a nutrient is not present in sufficient proportion in the soil solution, other nutrients cannot be taken up at an optimum rate by the plant. A particular nutrient ratio of the soil solution is thus mandatory for optimizing plant growth, a value which might differ from nutrient ratios calculated from plant composition.
Plant uptake of nutrients can only proceed when they are present in a plant-available form. In most situations, nutrients are absorbed in an ionic form from (or together with) soil water. Although minerals are the origin of most nutrients, and the bulk of most nutrient elements in the soil is held in crystalline form within primary and secondary minerals, they weather too slowly to support rapid plant growth. For example, the application of finely ground minerals, feldspar and apatite, to soil seldom provides the necessary amounts of potassium and phosphorus at a rate sufficient for good plant growth, as most of the nutrients remain bound in the crystals of those minerals.
The nutrients adsorbed onto the surfaces of clay colloids and soil organic matter provide a more accessible reservoir of many plant nutrients (e.g. K, Ca, Mg, P, Zn). As plants absorb the nutrients from the soil water, the soluble pool is replenished from the surface-bound pool. The decomposition of soil organic matter by microorganisms is another mechanism whereby the soluble pool of nutrients is replenished – this is important for the supply of plant-available N, S, P, and B from soil.
Gram for gram, the capacity of humus to hold nutrients and water is far greater than that of clay minerals, most of the soil cation exchange capacity arising from charged carboxylic groups on organic matter. However, despite the great capacity of humus to retain water once water-soaked, its high hydrophobicity decreases its wettability once dry. All in all, small amounts of humus may remarkably increase the soil's capacity to promote plant growth.
Soil organic matter
The organic material in soil is made up of organic compounds and includes plant, animal and microbial material, both living and dead. A typical soil has a biomass composition of 70% microorganisms, 22% macrofauna, and 8% roots. The living component of an acre of soil may include 900 lb of earthworms, 2400 lb of fungi, 1500 lb of bacteria, 133 lb of protozoa and 890 lb of arthropods and algae.
A few percent of the soil organic matter, with small residence time, consists of the microbial biomass and metabolites of bacteria, molds, and actinomycetes that work to break down the dead organic matter. Were it not for the action of these micro-organisms, the entire carbon dioxide part of the atmosphere would be sequestered as organic matter in the soil. However, at the same time soil microbes contribute to carbon sequestration in the topsoil through the formation of stable humus. With the aim of sequestering more carbon in the soil to alleviate the greenhouse effect, it would be more efficient in the long term to stimulate humification than to decrease litter decomposition.
The main part of soil organic matter is a complex assemblage of small organic molecules, collectively called humus or humic substances. The use of these terms, which do not rely on a clear chemical classification, has been considered obsolete. Other studies have shown that the classical notion of molecule is not convenient for humus, which has resisted most attempts made over two centuries to resolve it into unit components, but which is still chemically distinct from polysaccharides, lignins and proteins.
Most living things in soils, including plants, animals, bacteria, and fungi, are dependent on organic matter for nutrients and/or energy. Soils have organic compounds in varying degrees of decomposition, the rate of which is dependent on the temperature, soil moisture, and aeration. Bacteria and fungi feed on the raw organic matter, which are fed upon by protozoa, which in turn are fed upon by nematodes, annelids and arthropods, themselves able to consume and transform raw or humified organic matter. This has been called the soil food web, through which all organic matter is processed as in a digestive system. Organic matter holds soils open, allowing the infiltration of air and water, and may hold as much as twice its weight in water. Many soils, including desert and rocky-gravel soils, have little or no organic matter. Soils that are all organic matter, such as peat (histosols), are infertile. In its earliest stage of decomposition, the original organic material is often called raw organic matter. The final stage of decomposition is called humus.
In grassland, much of the organic matter added to the soil is from the deep, fibrous, grass root systems. By contrast, tree leaves falling on the forest floor are the principal source of soil organic matter in the forest. Another difference is the frequent occurrence in the grasslands of fires that destroy large amounts of aboveground material but stimulate even greater contributions from roots. Also, the much greater acidity under any forests inhibits the action of certain soil organisms that otherwise would mix much of the surface litter into the mineral soil. As a result, the soils under grasslands generally develop a thicker A horizon with a deeper distribution of organic matter than in comparable soils under forests, which characteristically store most of their organic matter in the forest floor (O horizon) and thin A horizon.
Humus
Humus refers to organic matter that has been decomposed by soil microflora and fauna to the point where it is resistant to further breakdown. Humus usually constitutes only five percent of the soil or less by volume, but it is an essential source of nutrients and adds important textural qualities crucial to soil health and plant growth. Humus also feeds arthropods, termites and earthworms which further improve the soil. The end product, humus, is suspended in colloidal form in the soil solution and forms a weak acid that can attack silicate minerals by chelating their iron and aluminum atoms. Humus has a high cation and anion exchange capacity that on a dry weight basis is many times greater than that of clay colloids. It also acts as a buffer, like clay, against changes in pH and soil moisture.
Humic acids and fulvic acids, which begin as raw organic matter, are important constituents of humus. After the death of plants, animals, and microbes, microbes begin to feed on the residues through their production of extra-cellular soil enzymes, resulting finally in the formation of humus. As the residues break down, only molecules made of aliphatic and aromatic hydrocarbons, assembled and stabilized by oxygen and hydrogen bonds, remain in the form of complex molecular assemblages collectively called humus. Humus is never pure in the soil, because it reacts with metals and clays to form complexes which further contribute to its stability and to soil structure. Although the structure of humus has in itself few nutrients (with the exception of constitutive metals such as calcium, iron and aluminum) it is able to attract and link, by weak bonds, cation and anion nutrients that can further be released into the soil solution in response to selective root uptake and changes in soil pH, a process of paramount importance for the maintenance of fertility in tropical soils.
Lignin is resistant to breakdown and accumulates within the soil. It also reacts with proteins, which further increases its resistance to decomposition, including enzymatic decomposition by microbes. Fats and waxes from plant matter have still more resistance to decomposition and persist in soils for thousands of years, hence their use as tracers of past vegetation in buried soil layers. Clay soils often have higher organic contents that persist longer than soils without clay, as the organic molecules adhere to and are stabilised by the clay. Proteins normally decompose readily, with the exception of scleroproteins, but when bound to clay particles they become more resistant to decomposition. As with other proteins, clay particles adsorb the enzymes exuded by microbes, decreasing enzyme activity while protecting extracellular enzymes from degradation. The addition of organic matter to clay soils can render that organic matter and any added nutrients inaccessible to plants and microbes for many years. A study showed increased soil fertility following the addition of mature compost to a clay soil. High soil tannin content can cause nitrogen to be sequestered as resistant tannin-protein complexes.
Humus formation is a process dependent on the amount of plant material added each year and the type of base soil. Both are affected by climate and the type of organisms present. Soils with humus can vary in nitrogen content but typically have 3 to 6 percent nitrogen. Raw organic matter, as a reserve of nitrogen and phosphorus, is a vital component affecting soil fertility. Humus also absorbs water, and expands and shrinks between dry and wet states to a greater extent than clay, increasing soil porosity. Humus is less stable than the soil's mineral constituents, as it is reduced by microbial decomposition, and over time its concentration diminishes without the addition of new organic matter. However, humus in its most stable forms may persist over centuries if not millennia. Charcoal is a source of highly stable humus, called black carbon, which has traditionally been used to improve the fertility of nutrient-poor tropical soils. This very ancient practice, as ascertained in the genesis of Amazonian dark earths, has been revived and has become popular under the name of biochar. It has been suggested that biochar could be used to sequester more carbon in the fight against the greenhouse effect.
Climatological influence
The production, accumulation and degradation of organic matter are greatly dependent on climate. For example, when a thawing event occurs, the flux of soil gases with atmospheric gases is significantly influenced. Temperature, soil moisture and topography are the major factors affecting the accumulation of organic matter in soils. Organic matter tends to accumulate under wet or cold conditions where decomposer activity is impeded by low temperature or excess moisture which results in anaerobic conditions. Conversely, excessive rain and the high temperatures of tropical climates enable rapid decomposition of organic matter and leaching of plant nutrients. Forest ecosystems on these soils rely on efficient recycling of nutrients and plant matter by the living plant and microbial biomass to maintain their productivity, a process which is disturbed by human activities. Excessive slope, in particular when cultivated for agriculture, may encourage the erosion of the top layer of soil which holds most of the raw organic material that would otherwise eventually become humus.
Plant residue
Cellulose and hemicellulose undergo fast decomposition by fungi and bacteria, with a half-life of 12–18 days in a temperate climate. Brown rot fungi can decompose the cellulose and hemicellulose, leaving the lignin and phenolic compounds behind. Starch, which is an energy storage system for plants, undergoes fast decomposition by bacteria and fungi. Lignin consists of polymers composed of 500 to 600 units with a highly branched, amorphous structure, linked to cellulose, hemicellulose and pectin in plant cell walls. Lignin undergoes very slow decomposition, mainly by white rot fungi and actinomycetes; its half-life under temperate conditions is about six months.
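If decomposition is approximated as first-order decay (an assumption made here for illustration; the text only quotes half-lives), the quoted half-lives translate into rate constants and residual fractions as:

\[ k = \frac{\ln 2}{t_{1/2}}, \qquad \frac{m(t)}{m_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}} \]

For cellulose with a half-life of about 15 days, only roughly (1/2)^4 ≈ 6% of the original material would remain after 60 days, whereas for lignin with a half-life of about six months, roughly half would still remain after that time.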
Horizons
A horizontal layer of the soil, whose physical features, composition and age are distinct from those above and beneath, is referred to as a soil horizon. The naming of a horizon is based on the type of material of which it is composed. Those materials reflect the duration of specific processes of soil formation. They are labelled using a shorthand notation of letters and numbers which describe the horizon in terms of its colour, size, texture, structure, consistency, root quantity, pH, voids, boundary characteristics and presence of nodules or concretions. No soil profile has all the major horizons. Some, called entisols, may have only one horizon or are currently considered as having no horizon, in particular incipient soils from unreclaimed mining waste deposits, moraines, volcanic cones, sand dunes or alluvial terraces. Upper soil horizons may be lacking in truncated soils following wind or water ablation, with concomitant downslope burying of soil horizons, a natural process aggravated by agricultural practices such as tillage. The growth of trees is another source of disturbance, creating a micro-scale heterogeneity which is still visible in soil horizons once trees have died. By passing from one horizon to another, from the top to the bottom of the soil profile, one goes back in time, with past events registered in soil horizons as in sediment layers. Sampling pollen, testate amoebae and plant remains in soil horizons may help to reveal environmental changes (e.g. climate change, land use change) which occurred in the course of soil formation. Soil horizons can be dated by several methods such as radiocarbon, using pieces of charcoal provided they are large enough to escape pedoturbation by earthworm activity and other mechanical disturbances. Fossil soil horizons from paleosols can be found within sedimentary rock sequences, allowing the study of past environments.
The exposure of parent material to favourable conditions produces mineral soils that are marginally suitable for plant growth, as is the case in eroded soils. The growth of vegetation results in the production of organic residues which fall on the ground as litter for plant aerial parts (leaf litter) or are directly produced belowground for subterranean plant organs (root litter), and then release dissolved organic matter. The remaining surficial organic layer, called the O horizon, produces a more active soil due to the effect of the organisms that live within it. Organisms colonise and break down organic materials, making available nutrients upon which other plants and animals can live. After sufficient time, humus moves downward and is deposited in a distinctive organic-mineral surface layer called the A horizon, in which organic matter is mixed with mineral matter through the activity of burrowing animals, a process called pedoturbation. This natural process does not go to completion in the presence of conditions detrimental to soil life such as strong acidity, cold climate or pollution, resulting in the accumulation of undecomposed organic matter within a single organic horizon overlying the mineral soil and in the juxtaposition of humified organic matter and mineral particles, without intimate mixing, in the underlying mineral horizons.
Classification
One of the first soil classification systems was developed by the Russian scientist Vasily Dokuchaev around 1880. It was modified a number of times by American and European researchers and was developed into the system commonly used until the 1960s. It was based on the idea that soils have a particular morphology based on the materials and factors that form them. In the 1960s, a different classification system began to emerge which focused on soil morphology instead of parent materials and soil-forming factors. Since then, it has undergone further modifications. The World Reference Base for Soil Resources aims to establish an international reference base for soil classification.
Uses
Soil is used in agriculture, where it serves as the anchor and primary nutrient base for plants. The types of soil and available moisture determine the species of plants that can be cultivated. Agricultural soil science was the primeval domain of soil knowledge, long before the advent of pedology in the 19th century. However, as demonstrated by aeroponics, aquaponics and hydroponics, soil material is not an absolute essential for agriculture, and soilless cropping systems have been claimed as the future of agriculture for an ever-growing human population.
Soil material is also a critical component in mining, construction and landscape development industries. Soil serves as a foundation for most construction projects. The movement of massive volumes of soil can be involved in surface mining, road building and dam construction. Earth sheltering is the architectural practice of using soil for external thermal mass against building walls. Many building materials are soil based. Loss of soil through urbanization is growing at a high rate in many areas and can be critical for the maintenance of subsistence agriculture.
Soil resources are critical to the environment, as well as to food and fibre production, producing 98.8% of food consumed by humans. Soil provides minerals and water to plants according to several processes involved in plant nutrition. Soil absorbs rainwater and releases it later, thus preventing floods and drought, flood regulation being one of the major ecosystem services provided by soil. Soil cleans water as it percolates through it. Soil is the habitat for many organisms: the major part of known and unknown biodiversity is in the soil, in the form of earthworms, woodlice, millipedes, centipedes, snails, slugs, mites, springtails, enchytraeids, nematodes, protists, bacteria, archaea, fungi and algae; and most organisms living above ground have part of their body (plants) or spend part of their life cycle (insects) below ground. Above-ground and below-ground biodiversities are tightly interconnected, making soil protection of paramount importance for any restoration or conservation plan.
The biological component of soil is an extremely important carbon sink since about 57% of the biotic content is carbon. Even in deserts, cyanobacteria, lichens and mosses form biological soil crusts which capture and sequester a significant amount of carbon by photosynthesis. Poor farming and grazing methods have degraded soils and released much of this sequestered carbon to the atmosphere. Restoring the world's soils could offset the effect of increases in greenhouse gas emissions and slow global warming, while improving crop yields and reducing water needs.
Waste management often has a soil component. Septic drain fields treat septic tank effluent using aerobic soil processes. Land application of waste water relies on soil biology to aerobically treat BOD. Alternatively, landfills use soil for daily cover, isolating waste deposits from the atmosphere and preventing unpleasant smells. Composting is now widely used to treat solid domestic waste and dried effluents of settling basins aerobically. Although compost is not soil, biological processes taking place during composting are similar to those occurring during decomposition and humification of soil organic matter.
Organic soils, especially peat, serve as a significant fuel and horticultural resource. Peat soils are also commonly used for agriculture in Nordic countries, because peatland sites, when drained, provide fertile soils for food production. However, wide areas of peat production, such as rain-fed sphagnum bogs, also called blanket bogs or raised bogs, are now protected because of their heritage value. As an example, the Flow Country, covering 4,000 square kilometres of rolling blanket bog in Scotland, is now a candidate for inclusion in the World Heritage List. Under present-day global warming, peat soils are thought to be involved in a self-reinforcing (positive feedback) process of increased emission of greenhouse gases (methane and carbon dioxide) and increased temperature, a contention which is still under debate when considered at field scale and taking stimulated plant growth into account.
Geophagy is the practice of eating soil-like substances. Both animals and humans occasionally consume soil for medicinal, recreational, or religious purposes. It has been shown that some monkeys consume soil, together with their preferred food (tree foliage and fruits), in order to alleviate tannin toxicity.
Soils filter and purify water and affect its chemistry. Rain water and pooled water from ponds, lakes and rivers percolate through the soil horizons and the upper rock strata, thus becoming groundwater. Pathogens (viruses) and pollutants, such as persistent organic pollutants (chlorinated pesticides, polychlorinated biphenyls), oils (hydrocarbons), heavy metals (lead, zinc, cadmium), and excess nutrients (nitrates, sulfates, phosphates) are filtered out by the soil. Soil organisms metabolise them or immobilise them in their biomass and necromass, thereby incorporating them into stable humus. The physical integrity of soil is also a prerequisite for avoiding landslides in rugged landscapes.
Degradation
Land degradation is a human-induced or natural process which impairs the capacity of land to function. Soil degradation involves acidification, contamination, desertification, erosion or salination.
Acidification
Soil acidification is beneficial in the case of alkaline soils, but it degrades land when it lowers crop productivity, soil biological activity and increases soil vulnerability to contamination and erosion. Soils are initially acid and remain such when their parent materials are low in basic cations (calcium, magnesium, potassium and sodium). On parent materials richer in weatherable minerals acidification occurs when basic cations are leached from the soil profile by rainfall or exported by the harvesting of forest or agricultural crops. Soil acidification is accelerated by the use of acid-forming nitrogenous fertilizers and by the effects of acid precipitation. Deforestation is another cause of soil acidification, mediated by increased leaching of soil nutrients in the absence of tree canopies.
Contamination
Soil contamination at low levels is often within a soil's capacity to treat and assimilate waste material. Soil biota can treat waste by transforming it, mainly through microbial enzymatic activity. Soil organic matter and soil minerals can adsorb the waste material and decrease its toxicity, although when in colloidal form they may transport the adsorbed contaminants to subsurface environments. Many waste treatment processes rely on this natural bioremediation capacity. Exceeding treatment capacity can damage soil biota and limit soil function. Derelict soils occur where industrial contamination or other development activity damages the soil to such a degree that the land cannot be used safely or productively. Remediation of derelict soil uses principles of geology, physics, chemistry and biology to degrade, attenuate, isolate or remove soil contaminants to restore soil functions and values. Techniques include leaching, air sparging, soil conditioners, phytoremediation, bioremediation and Monitored Natural Attenuation. An example of diffuse pollution with contaminants is copper accumulation in vineyards and orchards to which fungicides are repeatedly applied, even in organic farming.
Microfibres from synthetic textiles are another type of plastic soil contamination: 100% of agricultural soil samples from southwestern China were found to contain plastic particles, 92% of which were microfibres. Sources of microfibres likely included string or twine, as well as irrigation water in which clothes had been washed.
The application of biosolids from sewage sludge and compost can introduce microplastics to soils. This adds to the burden of microplastics from other sources (e.g. the atmosphere). Approximately half the sewage sludge in Europe and North America is applied to agricultural land. In Europe it has been estimated that for every million inhabitants 113 to 770 tonnes of microplastics are added to agricultural soils each year.
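For scale, the European estimate quoted above corresponds to a per-capita input of:

\[ \frac{113\text{–}770\ \text{tonnes yr}^{-1}}{10^{6}\ \text{inhabitants}} = 113\text{–}770\ \text{g per person per year} \]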
Desertification
Desertification, an environmental process of ecosystem degradation in arid and semi-arid regions, is often caused by badly adapted human activities such as overgrazing or excess harvesting of firewood. It is a common misconception that drought causes desertification. Droughts are common in arid and semiarid lands. Well-managed lands can recover from drought when the rains return. Soil management tools include maintaining soil nutrient and organic matter levels, reduced tillage and increased cover. These practices help to control erosion and maintain productivity during periods when moisture is available. Continued land abuse during droughts, however, increases land degradation. Increased population and livestock pressure on marginal lands accelerates desertification. It is now questioned whether present-day climate warming will favour or disfavour desertification, with contradictory reports about predicted rainfall trends associated with increased temperature, and strong discrepancies among regions, even in the same country.
Erosion
Erosion of soil is caused by water, wind, ice, and movement in response to gravity. More than one kind of erosion can occur simultaneously. Erosion is distinguished from weathering, since erosion also transports eroded soil away from its place of origin (soil in transit may be described as sediment). Erosion is an intrinsic natural process, but in many places it is greatly increased by human activity, especially unsuitable land use practices. These include agricultural activities which leave the soil bare during times of heavy rain or strong winds, overgrazing, deforestation, and improper construction activity. Improved management can limit erosion. Soil conservation techniques which are employed include changes of land use (such as replacing erosion-prone crops with grass or other soil-binding plants), changes to the timing or type of agricultural operations, terrace building, use of erosion-suppressing cover materials (including cover crops and other plants), limiting disturbance during construction, and avoiding construction during erosion-prone periods and in erosion-prone places such as steep slopes. Historically, one of the best examples of large-scale soil erosion due to unsuitable land-use practices is wind erosion (the so-called dust bowl) which ruined American and Canadian prairies during the 1930s, when immigrant farmers, encouraged by the federal government of both countries, settled and converted the original shortgrass prairie to agricultural crops and cattle ranching.
A serious and long-running water erosion problem occurs in China, on the middle reaches of the Yellow River and the upper reaches of the Yangtze River. From the Yellow River, over 1.6 billion tons of sediment flow each year into the ocean. The sediment originates primarily from water erosion (gully erosion) in the Loess Plateau region of northwest China.
Soil piping is a particular form of soil erosion that occurs below the soil surface. It causes levee and dam failure, as well as sink hole formation. Turbulent flow removes soil starting at the mouth of the seep flow and the subsoil erosion advances up-gradient. The term sand boil is used to describe the appearance of the discharging end of an active soil pipe.
Salination
Soil salination is the accumulation of free salts to such an extent that it leads to degradation of the agricultural value of soils and vegetation. Consequences include corrosion damage, reduced plant growth, erosion due to loss of plant cover and soil structure, and water quality problems due to sedimentation. Salination occurs due to a combination of natural and human-caused processes. Arid conditions favour salt accumulation. This is especially apparent when soil parent material is saline. Irrigation of arid lands is especially problematic. All irrigation water has some level of salinity. Irrigation, especially when it involves leakage from canals and overirrigation in the field, often raises the underlying water table. Rapid salination occurs when the land surface is within the capillary fringe of saline groundwater. Soil salinity control involves watertable control and flushing with higher levels of applied water in combination with tile drainage or another form of subsurface drainage.
Reclamation
Soils which contain high levels of particular clays with high swelling properties, such as smectites, are often very fertile. For example, the smectite-rich paddy soils of Thailand's Central Plains are among the most productive in the world. However, the overuse of mineral nitrogen fertilizers and pesticides in irrigated intensive rice production has endangered these soils, forcing farmers to implement integrated practices based on Cost Reduction Operating Principles.
Many farmers in tropical areas, however, struggle to retain organic matter and clay in the soils they work. In recent years, for example, productivity has declined and soil erosion has increased in the low-clay soils of northern Thailand, following the abandonment of shifting cultivation for a more permanent land use. Farmers initially responded by adding organic matter and clay from termite mound material, but this was unsustainable in the long term because of the growing scarcity of termite mounds. Scientists experimented with adding bentonite, one of the smectite family of clays, to the soil. In field trials, conducted by scientists from the International Water Management Institute (IWMI) in cooperation with Khon Kaen University and local farmers, this had the effect of helping retain water and nutrients. Supplementing the farmer's usual practice with a single application of bentonite resulted in an average yield increase of 73%. Other studies showed that applying bentonite to degraded sandy soils reduced the risk of crop failure during drought years.
In 2008, three years after the initial trials, IWMI scientists conducted a survey among 250 farmers in northeast Thailand, half of whom had applied bentonite to their fields. The average improvement for those using the clay addition was 18% higher than for non-clay users. Using the clay had enabled some farmers to switch to growing vegetables, which need more fertile soil. This helped to increase their income. The researchers estimated that 200 farmers in northeast Thailand and 400 in Cambodia had adopted the use of clays, and that a further 20,000 farmers were introduced to the new technique.
If the soil is too high in clay or salts (e.g. saline sodic soil), adding gypsum, washed river sand and organic matter (e.g. municipal solid waste) will balance the composition.
Adding organic matter, like ramial chipped wood or compost, to soil which is depleted in nutrients and too high in sand will boost its quality and improve production.
Special mention must be made of the use of charcoal, and more generally biochar to improve nutrient-poor tropical soils, a process based on the higher fertility of anthropogenic pre-Columbian Amazonian Dark Earths, also called Terra Preta de Índio, due to interesting physical and chemical properties of soil black carbon as a source of stable humus. However, the uncontrolled application of charred waste products of all kinds may endanger soil life and human health.
History of studies and research
The history of the study of soil is intimately tied to humans' urgent need to provide food for themselves and forage for their animals. Throughout history, civilizations have prospered or declined as a function of the availability and productivity of their soils.
Studies of soil fertility
The Greek historian Xenophon (450–355 BCE) was the first to expound upon the merits of green-manuring crops: 'But then whatever weeds are upon the ground, being turned into earth, enrich the soil as much as dung.'
Columella's Of husbandry, circa 60 CE, advocated the use of lime and that clover and alfalfa (green manure) should be turned under, and was used by 15 generations (450 years) under the Roman Empire until its collapse. From the fall of Rome to the French Revolution, knowledge of soil and agriculture was passed on from parent to child and as a result, crop yields were low. During the European Middle Ages, Yahya Ibn al-'Awwam's handbook, with its emphasis on irrigation, guided the people of North Africa, Spain and the Middle East; a translation of this work was finally carried to the southwest of the United States when under Spanish influence. Olivier de Serres, considered the father of French agronomy, was the first to suggest the abandonment of fallowing and its replacement by hay meadows within crop rotations. He also highlighted the importance of soil (the French terroir) in the management of vineyards. His famous book contributed to the rise of modern, sustainable agriculture and to the collapse of old agricultural practices such as soil amendment for crops by the lifting of forest litter and assarting, which ruined the soils of western Europe during the Middle Ages and even later on according to regions.
Experiments into what made plants grow first led to the idea that the ash left behind when plant matter was burned was the essential element, overlooking the role of nitrogen, which is not left on the ground after combustion; this belief prevailed until the 19th century. In about 1635, the Flemish chemist Jan Baptist van Helmont thought he had proved water to be the essential element from his famous five years' experiment with a willow tree grown with only the addition of rainwater. His conclusion came from the fact that the increase in the plant's weight had apparently been produced only by the addition of water, with no reduction in the soil's weight. John Woodward (d. 1728) experimented with various types of water ranging from clean to muddy and found muddy water the best, and so he concluded that earthy matter was the essential element. Others concluded it was humus in the soil that passed some essence to the growing plant. Still others held that the vital growth principle was something passed from dead plants or animals to the new plants. At the start of the 18th century, Jethro Tull demonstrated that it was beneficial to cultivate (stir) the soil, but his opinion that the stirring made the fine parts of soil available for plant absorption was erroneous.
As chemistry developed, it was applied to the investigation of soil fertility. The French chemist Antoine Lavoisier showed in about 1778 that plants and animals must combust oxygen internally to live. He was able to deduce that most of the weight of van Helmont's willow tree derived from air. It was the French agriculturalist Jean-Baptiste Boussingault who by means of experimentation obtained evidence showing that the main sources of carbon, hydrogen and oxygen for plants were air and water, while nitrogen was taken from soil. Justus von Liebig in his book Organic chemistry in its applications to agriculture and physiology (published 1840), asserted that the chemicals in plants must have come from the soil and air and that to maintain soil fertility, the used minerals must be replaced. Liebig nevertheless believed the nitrogen was supplied from the air. The enrichment of soil with guano by the Incas was rediscovered in 1802, by Alexander von Humboldt. This led to its mining and that of Chilean nitrate and to its application to soil in the United States and Europe after 1840.
The work of Liebig was a revolution for agriculture, and so other investigators started experimentation based on it. In England, John Bennet Lawes and Joseph Henry Gilbert worked at the Rothamsted Experimental Station, founded by the former, and showed that plants took nitrogen from the soil, and that salts needed to be in an available state to be absorbed by plants. Their investigations also produced superphosphate, made by the acid treatment of phosphate rock. This led to the invention and use of salts of potassium (K) and nitrogen (N) as fertilizers. Ammonia generated by the production of coke was recovered and used as fertiliser. Finally, the chemical basis of nutrients delivered to the soil in manure was understood, and in the mid-19th century chemical fertilisers were applied. However, the dynamic interaction of soil and its life forms was still not understood.
In 1856, J. Thomas Way discovered that ammonia contained in fertilisers was transformed into nitrates, and twenty years later Robert Warington proved that this transformation was done by living organisms. In 1890 Sergei Winogradsky announced he had found the bacteria responsible for this transformation.
It was known that certain legumes could take up nitrogen from the air and fix it to the soil but it took the development of bacteriology towards the end of the 19th century to lead to an understanding of the role played in nitrogen fixation by bacteria. The symbiosis of bacteria and leguminous roots, and the fixation of nitrogen by the bacteria, were simultaneously discovered by the German agronomist Hermann Hellriegel and the Dutch microbiologist Martinus Beijerinck.
Crop rotation, mechanisation, chemical and natural fertilisers led to a doubling of wheat yields in western Europe between 1800 and 1900.
Studies of soil formation
The scientists who studied the soil in connection with agricultural practices had considered it mainly as a static substrate. However, soil is the result of evolution from more ancient geological materials, under the action of biotic and abiotic processes. After studies of the improvement of the soil commenced, other researchers began to study soil genesis and as a result also soil types and classifications.
In 1860, while in Mississippi, Eugene W. Hilgard (1833–1916) studied the relationship between rock material, climate, vegetation, and the type of soils that were developed. He realised that the soils were dynamic, and considered the classification of soil types. His work was not continued. At about the same time, Friedrich Albert Fallou was describing soil profiles and relating soil characteristics to their formation as part of his professional work evaluating forest and farm land for the principality of Saxony. His 1857 book (First principles of soil science) established modern soil science. Contemporary with Fallou's work, and driven by the same need to accurately assess land for equitable taxation, Vasily Dokuchaev led a team of soil scientists in Russia who conducted an extensive survey of soils, observing that similar basic rocks, climate and vegetation types lead to similar soil layering and types, and established the concepts for soil classifications. Due to language barriers, the work of this team was not communicated to western Europe until 1914, through a publication in German by Konstantin Glinka, a member of the Russian team.
Curtis F. Marbut, influenced by the work of the Russian team, translated Glinka's publication into English, and, as he was placed in charge of the U.S. National Cooperative Soil Survey, applied it to a national soil classification system.
See also
References
Sources
Bibliography
Further reading
Soil-Net.com A free schools-age educational site teaching about soil and its importance.
Adams, J.A. 1986. Dirt. College Station, Texas: Texas A&M University Press
Certini, G., Scalenghe, R. 2006. Soils: Basic concepts and future challenges. Cambridge Univ Press, Cambridge.
Montgomery, David R., Dirt: The Erosion of Civilizations (U of California Press, 2007),
Faulkner, Edward H. Plowman's Folly (New York, Grosset & Dunlap, 1943).
LandIS Free Soilscapes Viewer Free interactive viewer for the Soils of England and Wales
Jenny, Hans. 1941. Factors of Soil Formation: A System of Quantitative Pedology
Logan, W.B. Dirt: The ecstatic skin of the earth (1995).
Mann, Charles C. September 2008. "Our good earth". National Geographic Magazine
External links
Photographs of sand boils.
Soil Survey Division Staff. 1999. Soil survey manual. Soil Conservation Service. U.S. Department of Agriculture Handbook 18.
Soil Survey Staff. 1975. Soil Taxonomy: A basic system of soil classification for making and interpreting soil surveys. USDA-SCS Agric. Handb. 436. United States Government Printing Office, Washington, DC.
Soils (Matching suitable forage species to soil type), Oregon State University
Janick, Jules. 2002. Soil notes, Purdue University
LandIS Soils Data for England and Wales a pay source for GIS data on the soils of England and Wales and soils data source; they charge a handling fee to researchers.
Land management
Horticulture
Granularity of materials
Natural materials
Natural resources | 0.763836 | 0.998811 | 0.762928 |
Chemical physics | Chemical physics is a branch of physics that studies chemical processes from a physical point of view. It focuses on understanding the physical properties and behavior of chemical systems, using principles from both physics and chemistry. This field investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics.
The United States Department of Education defines chemical physics as "A program that focuses on the scientific study of structural phenomena combining the disciplines of physical chemistry and atomic/molecular physics. Includes instruction in heterogeneous structures, alignment and surface phenomena, quantum theory, mathematical physics, statistical and classical mechanics, chemical kinetics, and laser physics."
Distinction between Chemical Physics and Physical Chemistry
While at the interface of physics and chemistry, chemical physics is distinct from physical chemistry in that it focuses on using physical theories (such as quantum mechanics, statistical mechanics, and molecular dynamics) to understand and explain chemical phenomena at the microscopic level. Physical chemistry, by contrast, deals with the physical properties and behavior of matter in chemical reactions, drawing on a broader range of methods such as thermodynamics, kinetics, and spectroscopy, and often linking macroscopic and microscopic chemical behavior. The distinction between the two fields remains blurred, as they share considerable common ground; scientists often practice in both fields during their research, since there is significant overlap in the topics and techniques used. Journals like PCCP (Physical Chemistry Chemical Physics) cover research in both areas, highlighting this overlap.
History
The term "chemical physics" in its modern sense was first used by the German scientist A. Eucken, who published "A Course in Chemical Physics" in 1930. Prior to this, in 1927, the publication "Electronic Chemistry" by V. N. Kondrat'ev, N. N. Semenov, and Iu. B. Khariton hinted at the meaning of "chemical physics" through its title. The Institute of Chemical Physics of the Academy of Sciences of the USSR was established in 1931. In the United States, "The Journal of Chemical Physics" has been published since 1933.
In 1964, the General Electric Foundation established the Irving Langmuir Award in Chemical Physics to honor outstanding achievements in the field of chemical physics. Named after the Nobel Laureate Irving Langmuir, the award recognizes significant contributions to understanding chemical phenomena through physics principles, impacting areas such as surface chemistry and quantum mechanics.
What chemical physicists do
Chemical physicists investigate the structure and dynamics of ions, free radicals, polymers, clusters, and molecules. Their research includes studying the quantum mechanical aspects of chemical reactions, solvation processes, and the energy flow within and between molecules, and nanomaterials such as quantum dots. Experiments in chemical physics typically involve using spectroscopic methods to understand hydrogen bonding, electron transfer, the formation and dissolution of chemical bonds, chemical reactions, and the formation of nanoparticles.
The research objectives in the theoretical aspect of chemical physics are to understand how chemical structures and reactions work at the quantum mechanical level. This field also aims to clarify how ions and radicals behave and react in the gas phase and to develop precise approximations that simplify the computation of the physics of chemical phenomena.
Chemical physicists are looking for answers to such questions as:
Can we experimentally test quantum mechanical predictions of the vibrations and rotations of simple molecules? Or even those of complex molecules (such as proteins)?
Can we develop more accurate methods for calculating the electronic structure and properties of molecules?
Can we understand chemical reactions from first principles?
Why do quantum dots start blinking (in a pattern suggesting fractal kinetics) after absorbing photons?
How do chemical reactions really take place?
What is the step-by-step process that occurs when an isolated molecule becomes solvated? Or when a whole ensemble of molecules becomes solvated?
Can we use the properties of negative ions to determine molecular structures, understand the dynamics of chemical reactions, or explain photodissociation?
Why does a stream of soft x-rays knock enough electrons out of the atoms in a xenon cluster to cause the cluster to explode?
Journals
The Journal of Chemical Physics
Journal of Physical Chemistry Letters
Journal of Physical Chemistry A
Journal of Physical Chemistry B
Journal of Physical Chemistry C
Physical Chemistry Chemical Physics
Chemical Physics Letters
Chemical Physics
ChemPhysChem
Molecular Physics (journal)
See also
Intermolecular force
Molecular dynamics
Quantum chemistry
Solid-state physics or Condensed matter physics
Surface science
Van der Waals molecule
References
Subfields of chemistry
Applied and interdisciplinary physics | 0.773536 | 0.986271 | 0.762916 |
Racemization | In chemistry, racemization is a conversion, by heat or by chemical reaction, of an optically active compound into a racemic (optically inactive) form. This creates a 1:1 molar ratio of enantiomers and is referred to as a racemic mixture (i.e. contain equal amount of (+) and (−) forms). Plus and minus forms are called Dextrorotation and levorotation. The D and L enantiomers are present in equal quantities, the resulting sample is described as a racemic mixture or a racemate. Racemization can proceed through a number of different mechanisms, and it has particular significance in pharmacology as different enantiomers may have different pharmaceutical effects.
Stereochemistry
Chiral molecules have two forms (at each point of asymmetry), which differ in their optical characteristics: the levorotatory form (the (−)-form) rotates the plane of polarization of a beam of light counter-clockwise, whereas the dextrorotatory form (the (+)-form) rotates it clockwise. The two forms, which are non-superposable when rotated in 3-dimensional space, are said to be enantiomers. This notation is not to be confused with the D and L naming of molecules, which refers to structural similarity to D-glyceraldehyde and L-glyceraldehyde. Also, (R)- and (S)- refer to the configuration of the molecule based on the Cahn–Ingold–Prelog priority rules of naming, rather than to the rotation of light. R/S notation is now the primary way of specifying configuration, as D and L notation is used mainly for sugars and amino acids.
Racemization occurs when one pure form of an enantiomer is converted into equal proportion of both enantiomers, forming a racemate. When there are both equal numbers of dextrorotating and levorotating molecules, the net optical rotation of a racemate is zero. Enantiomers should also be distinguished from diastereomers which are a type of stereoisomer that have different molecular structures around a stereocenter and are not mirror images.
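The composition of a partially racemized sample is commonly expressed as the enantiomeric excess (ee); the standard definition and its link to the observed optical rotation (general stereochemistry, not specific to this article) are:

\[ \mathrm{ee} = \frac{|[R] - [S]|}{[R] + [S]} \times 100\%, \qquad \alpha_{\text{obs}} = \frac{\mathrm{ee}}{100\%}\,\alpha_{\text{pure}} \]

For a racemate, [R] = [S], so the enantiomeric excess and hence the net optical rotation are zero, consistent with the statement above.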
Partial to complete racemization of stereochemistry in solution is the result of the SN1 mechanism. However, when complete inversion of the stereochemical configuration occurs in a substitution reaction, an SN2 mechanism is responsible.
Physical properties
In the solid state, racemic mixtures may have different physical properties from either of the pure enantiomers because of the differential intermolecular interactions (see Biological Significance section). The change from a pure enantiomer to a racemate can change its density, melting point, solubility, heat of fusion, refractive index, and its various spectra. Crystallization of a racemate can result in separate (+) and (−) forms, or a single racemic compound. However, in liquid and gaseous states, racemic mixtures will behave with physical properties that are identical, or near identical, to their pure enantiomers.
Biological significance
In general, most biochemical reactions are stereoselective, so only one stereoisomer will produce the intended product while the other simply does not participate or can cause side-effects. Of note, the L form of amino acids and the D form of sugars (primarily glucose) are usually the biologically reactive form. This is due to the fact that many biological molecules are chiral and thus the reactions between specific enantiomers produce pure stereoisomers. Also notable is the fact that all amino acid residues exist in the L form. However, bacteria produce D-amino acid residues that polymerize into short polypeptides which can be found in bacterial cell walls. These polypeptides are less digestible by peptidases and are synthesized by bacterial enzymes instead of mRNA translation which would normally produce L-amino acids.
The stereoselective nature of most biochemical reactions means that different enantiomers of a chemical may have different properties and effects on a person. Many psychotropic drugs show differing activity or efficacy between isomers: for example, amphetamine is often dispensed as racemic salts, while the more active dextroamphetamine is reserved for refractory cases or more severe indications; another example is methadone, of which one isomer has activity as an opioid agonist and the other as an NMDA antagonist.
Racemization of pharmaceutical drugs can occur in vivo. Thalidomide as the (R) enantiomer is effective against morning sickness, while the (S) enantiomer is teratogenic, causing birth defects when taken in the first trimester of pregnancy. If only one enantiomer is administered to a human subject, both forms may be found later in the blood serum. The drug is therefore not considered safe for use by women of child-bearing age, and while it has other uses, its use is tightly controlled. Thalidomide can be used to treat multiple myeloma.
Another commonly used drug is ibuprofen, which is anti-inflammatory as only one enantiomer, while the other is biologically inert. Likewise, in citalopram (Celexa), an antidepressant which inhibits serotonin reuptake, the (S) stereoisomer is much more active than the (R) enantiomer. The configurational stability of a drug is therefore an area of interest in pharmaceutical research. The production and analysis of enantiomers in the pharmaceutical industry is studied in the field of chiral organic synthesis.
Formation of racemic mixtures
Racemization can be achieved by simply mixing equal quantities of two pure enantiomers. Racemization can also occur in a chemical interconversion. For example, when (R)-3-phenyl-2-butanone is dissolved in aqueous ethanol that contains NaOH or HCl, a racemate is formed. The racemization occurs by way of an intermediate enol form in which the former stereocenter becomes planar and hence achiral. An incoming group can approach from either side of the plane, so there is an equal probability that protonation back to the chiral ketone will produce either an R or an S form, resulting in a racemate.
Racemization can occur through some of the following processes:
Substitution reactions that proceed through a free carbocation intermediate, such as unimolecular substitution reactions, lead to non-stereospecific addition of substituents which results in racemization.
Although unimolecular elimination reactions also proceed through a carbocation, they do not result in a chiral center. They result instead in a set of geometric isomers in which trans/cis (E/Z) forms are produced, rather than racemates.
In a unimolecular aliphatic electrophilic substitution reaction, if the carbanion is planar or if it cannot maintain a pyramidal structure, then racemization should occur, though not always.
In a free radical substitution reaction, if the formation of the free radical takes place at a chiral carbon, then racemization is almost always observed.
The rate of racemization (from L-forms to a mixture of L-forms and D-forms) has been used as a way of dating biological samples in tissues with slow rates of turnover, forensic samples, and fossils in geological deposits. This technique is known as amino acid dating.
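Amino acid dating rests on reversible first-order kinetics for the L ⇌ D interconversion; under common simplifying assumptions (equal forward and reverse rate constants and a starting point of pure L, assumptions made here for illustration), the working equation is:

\[ \ln\!\left(\frac{1 + \mathrm{D/L}}{1 - \mathrm{D/L}}\right) = 2kt \]

so a measured D/L ratio yields the age t once the rate constant k, which depends strongly on temperature, has been calibrated.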
Discovery of optical activity
In 1843, Louis Pasteur discovered optical activity in paratartaric, or racemic, acid found in grape wine. He was able to separate two enantiomer crystals that rotated polarized light in opposite directions.
See also
Dextrorotation and levorotation
Enantiomer
Racemic mixture
References
Chemical reactions
Stereochemistry
Protein structure
Post-translational modification | 0.777092 | 0.98172 | 0.762887 |
Deposition (phase transition) | Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation.
Applications
Examples
One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense on it, but since the temperature is already below the freezing point, the water vapour changes directly into a solid.
Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.
Industrial applications
There is an industrial coating process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various materials onto various surfaces.
Deposition releases energy and is an exothermic phase change.
See also
References
Jacobson, Mark Z., Fundamentals of Atmospheric Modeling, Cambridge University Press, 2nd ed., 2005, p. 525
Moore, John W., et al., Principles of Chemistry: The Molecular Science, Brooks Cole, 2009, p. 387
Whitten, Kenneth W., et al., Chemistry, Brooks-Cole, 9th ed., 2009, p. 7
Phase transitions | 0.768522 | 0.992657 | 0.762878 |
Geodynamics | Geodynamics is a subfield of geophysics dealing with dynamics of the Earth. It applies physics, chemistry and mathematics to the understanding of how mantle convection leads to plate tectonics and geologic phenomena such as seafloor spreading, mountain building, volcanoes, earthquakes, faulting. It also attempts to probe the internal activity by measuring magnetic fields, gravity, and seismic waves, as well as the mineralogy of rocks and their isotopic composition. Methods of geodynamics are also applied to exploration of other planets.
Overview
Geodynamics is generally concerned with processes that move materials throughout the Earth. In the Earth's interior, movement happens when rocks melt or deform and flow in response to a stress field. This deformation may be brittle, elastic, or plastic, depending on the magnitude of the stress and the material's physical properties, especially the stress relaxation time scale. Rocks are structurally and compositionally heterogeneous and are subjected to variable stresses, so it is common to see different types of deformation in close spatial and temporal proximity. When working with geological timescales and lengths, it is convenient to use the continuous medium approximation and equilibrium stress fields to consider the average response to average stress.
Experts in geodynamics commonly use data from geodetic GPS, InSAR, and seismology, along with numerical models, to study the evolution of the Earth's lithosphere, mantle and core.
Work performed by geodynamicists may include:
Modeling brittle and ductile deformation of geologic materials
Predicting patterns of continental accretion and breakup of continents and supercontinents
Observing surface deformation and relaxation due to ice sheets and post-glacial rebound, and drawing related inferences about the viscosity of the mantle
Finding and understanding the driving mechanisms behind plate tectonics.
Deformation of rocks
Rocks and other geological materials experience strain according to three distinct modes: elastic, plastic, and brittle, depending on the properties of the material and the magnitude of the stress field. Stress is defined as the average force per unit area exerted on each part of the rock. Pressure is the part of stress that changes the volume of a solid; shear stress changes the shape. If there is no shear, the fluid is in hydrostatic equilibrium. Since, over long periods, rocks readily deform under pressure, the Earth is in hydrostatic equilibrium to a good approximation. The pressure on rock depends only on the weight of the rock above, and this depends on gravity and the density of the rock. In a body like the Moon, the density is almost constant, so a pressure profile is readily calculated. In the Earth, the compression of rocks with depth is significant, and an equation of state is needed to calculate changes in density of rock even when it is of uniform composition.
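The constant-density case reduces to simple arithmetic; the sketch below is a minimal illustration, with the density and gravity values chosen only as placeholders (roughly basaltic rock under lunar gravity), not taken from any particular reference.

```python
def hydrostatic_pressure(depth_m, density_kg_m3, gravity_m_s2):
    """Pressure (in Pa) at a given depth in a constant-density, constant-gravity body.

    Implements the simple case described in the text, P(z) = rho * g * z,
    valid when compression of the rock with depth can be neglected.
    For the Earth, density varies with pressure, so dP/dz = rho(P) * g would
    have to be integrated together with an equation of state instead.
    """
    return density_kg_m3 * gravity_m_s2 * depth_m

# Placeholder values: ~3300 kg/m^3 rock under lunar gravity (~1.62 m/s^2).
print(hydrostatic_pressure(10_000, 3300, 1.62) / 1e6, "MPa")  # roughly 53 MPa at 10 km depth
```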
Elastic
Elastic deformation is always reversible, which means that if the stress field associated with elastic deformation is removed, the material will return to its previous state. Materials only behave elastically when the relative arrangement along the axis being considered of material components (e.g. atoms or crystals) remains unchanged. This means that the magnitude of the stress cannot exceed the yield strength of a material, and the time scale of the stress cannot approach the relaxation time of the material. If stress exceeds the yield strength of a material, bonds begin to break (and reform), which can lead to ductile or brittle deformation.
Ductile
Ductile or plastic deformation happens when the temperature of a system is high enough that a significant fraction of the material's microstates are unbound, meaning that a large fraction of the chemical bonds are in the process of being broken and reformed. During ductile deformation, this process of atomic rearrangement redistributes stress and strain towards equilibrium faster than they can accumulate: transport processes such as diffusion and advection, which rely on chemical bonds being broken and reformed, redistribute strain about as fast as it accumulates. Examples include bending of the lithosphere under volcanic islands or sedimentary basins, and bending at oceanic trenches.
Brittle
When strain localizes faster than these relaxation processes can redistribute it, brittle deformation occurs. The mechanism of brittle deformation involves a positive feedback between the accumulation or propagation of defects (especially in areas of high strain) and the localization of strain along these dislocations and fractures. In other words, any fracture, however small, tends to focus strain at its leading edge, which causes the fracture to extend.
In general, the mode of deformation is controlled not only by the amount of stress, but also by the distribution of strain and strain associated features. Whichever mode of deformation ultimately occurs is the result of a competition between processes that tend to localize strain, such as fracture propagation, and relaxational processes, such as annealing, that tend to delocalize strain.
Deformation structures
Structural geologists study the results of deformation, using observations of rock, especially the mode and geometry of deformation to reconstruct the stress field that affected the rock over time. Structural geology is an important complement to geodynamics because it provides the most direct source of data about the movements of the Earth. Different modes of deformation result in distinct geological structures, e.g. brittle fracture in rocks or ductile folding.
Thermodynamics
The physical characteristics of rocks that control the rate and mode of strain, such as yield strength or viscosity, depend on the thermodynamic state of the rock and its composition. The most important thermodynamic variables in this case are temperature and pressure. Both of these increase with depth, so to a first approximation the mode of deformation can be understood in terms of depth. Within the upper lithosphere, brittle deformation is common because under low pressure rocks have relatively low brittle strength, while at the same time low temperature reduces the likelihood of ductile flow. Below the brittle-ductile transition zone, ductile deformation becomes dominant. Elastic deformation happens when the time scale of stress is shorter than the relaxation time for the material. Seismic waves are a common example of this type of deformation. At temperatures high enough to melt rocks, the ductile shear strength approaches zero, which is why shear mode elastic deformation (S-waves) will not propagate through melts.
Forces
The main motive force behind stress in the Earth is provided by thermal energy from radioisotope decay, friction, and residual heat. Cooling at the surface and heat production within the Earth create a metastable thermal gradient from the hot core to the relatively cool lithosphere. This thermal energy is converted into mechanical energy by thermal expansion. Deeper and hotter rocks often have higher thermal expansion and lower density relative to overlying rocks. Conversely, rock that is cooled at the surface can become less buoyant than the rock below it. Eventually this can lead to a Rayleigh-Taylor instability, or interpenetration of rock on different sides of the buoyancy contrast.
Negative thermal buoyancy of the oceanic plates is the primary cause of subduction and plate tectonics, while positive thermal buoyancy may lead to mantle plumes, which could explain intraplate volcanism. The relative importance of heat production vs. heat loss for buoyant convection throughout the whole Earth remains uncertain and understanding the details of buoyant convection is a key focus of geodynamics.
Methods
Geodynamics is a broad field which combines observations from many different types of geological study into a broad picture of the dynamics of Earth. Close to the surface of the Earth, data includes field observations, geodesy, radiometric dating, petrology, mineralogy, drilling boreholes and remote sensing techniques. However, beyond a few kilometers depth, most of these kinds of observations become impractical. Geologists studying the geodynamics of the mantle and core must rely entirely on remote sensing, especially seismology, and experimentally recreating the conditions found in the Earth in high-pressure, high-temperature experiments (see also the Adams–Williamson equation).
Numerical modeling
Because of the complexity of geological systems, computer modeling is used to test theoretical predictions about geodynamics using data from these sources.
There are two main approaches to geodynamic numerical modeling:
Modelling to reproduce a specific observation: This approach aims to answer what causes a specific state of a particular system.
Modelling to produce basic fluid dynamics: This approach aims to answer how a specific system works in general.
Basic fluid dynamics modelling can further be subdivided into instantaneous studies, which aim to reproduce the instantaneous flow in a system due to a given buoyancy distribution, and time-dependent studies, which either aim to reproduce a possible evolution of a given initial condition over time or a statistical (quasi) steady-state of a given system.
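As a toy illustration of the time-dependent style of modelling, the sketch below steps a one-dimensional heat-conduction equation forward in time with an explicit finite-difference scheme. Real geodynamic codes couple such an energy equation to equations for buoyancy-driven (Stokes) flow and use far more sophisticated numerics; the grid, diffusivity, time step, and boundary temperatures here are illustrative assumptions, not values from the literature.

```python
import numpy as np

def evolve_temperature(T, kappa, dz, dt, steps):
    """Advance dT/dt = kappa * d2T/dz2 with an explicit finite-difference scheme.

    T     : 1-D array of temperatures (K) on a uniform grid
    kappa : thermal diffusivity (m^2/s)
    dz    : grid spacing (m); dt : time step (s); stable if kappa*dt/dz**2 <= 0.5
    The two end values of T are held fixed (Dirichlet boundary conditions).
    """
    T = T.copy()
    r = kappa * dt / dz**2
    for _ in range(steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# Illustrative setup: a 100 km column, 273 K at the surface, 1600 K at the base.
z = np.linspace(0.0, 100e3, 101)
T0 = np.full_like(z, 1600.0)
T0[0] = 273.0
T_later = evolve_temperature(T0, kappa=1e-6, dz=z[1] - z[0], dt=1e10, steps=1000)
```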
See also
Cytherodynamics
References
Bibliography
External links
Geological Survey of Canada - Geodynamics Program
Geodynamics Homepage - JPL/NASA
NASA Planetary geodynamics
Los Alamos National Laboratory–Geodynamics & National Security
Computational Infrastructure for Geodynamics
Geophysics
Geodesy
Plate tectonics | 0.784922 | 0.971912 | 0.762876 |
Expression quantitative trait loci | An expression quantitative trait locus (eQTL) is a type of quantitative trait locus (QTL), a genomic locus (region of DNA) that is associated with phenotypic variation for a specific, quantifiable trait. While the term QTL can refer to a wide range of phenotypic traits, the more specific eQTL refers to traits measured by gene expression, such as mRNA levels. Although named "expression QTLs", not all measures of gene expression can be used for eQTLs. For example, traits quantified by protein levels are instead referred to as protein QTLs (pQTLs).
Distant and local, trans- and cis-eQTLs, respectively
An expression quantitative trait is an amount of an mRNA transcript or a protein. These are usually the product of a single gene with a specific chromosomal location. This distinguishes expression quantitative traits from most complex traits, which are not the product of the expression of a single gene. Chromosomal loci that explain variance in expression traits are called eQTLs. eQTLs located near the gene-of-origin (gene which produces the transcript or protein) are referred to as local eQTLs or cis-eQTLs. By contrast, those located distant from their gene of origin, often on different chromosomes, are referred to as distant eQTLs or trans-eQTLs. The first genome-wide study of gene expression was carried out in yeast and published in 2002. The initial wave of eQTL studies employed microarrays to measure genome-wide gene expression; more recent studies have employed massively parallel RNA sequencing. Many expression QTL studies were performed in plants and animals, including humans, non-human primates and mice.
Some cis eQTLs are detected in many tissue types but the majority of trans eQTLs are tissue-dependent (dynamic). eQTLs may act in cis (locally) or trans (at a distance) to a gene. The abundance of a gene transcript is directly modified by polymorphism in regulatory elements. Consequently, transcript abundance might be considered as a quantitative trait that can be mapped with considerable power. These have been named expression QTLs (eQTLs). The combination of whole-genome genetic association studies and the measurement of global gene expression allows the systematic identification of eQTLs. By assaying gene expression and genetic variation simultaneously on a genome-wide basis in a large number of individuals, statistical genetic methods can be used to map the genetic factors that underpin individual differences in quantitative levels of expression of many thousands of transcripts. Studies have shown that single nucleotide polymorphisms (SNPs) reproducibly associated with complex disorders as well as certain pharmacologic phenotypes are found to be significantly enriched for eQTLs, relative to frequency-matched control SNPs. The integration of eQTLs with GWAS has led to development of the transcriptome-wide association study (TWAS) methodology.
Detecting eQTLs
Mapping eQTLs is done using standard QTL mapping methods that test the linkage between variation in expression and genetic polymorphisms. The only considerable difference is that eQTL studies can involve a million or more expression microtraits. Standard gene mapping software packages can be used, although it is often faster to use custom code such as QTL Reaper or the web-based eQTL mapping system GeneNetwork. GeneNetwork hosts many large eQTL mapping data sets and provides access to fast algorithms to map single loci and epistatic interactions. As is true in all QTL mapping studies, the final steps in defining DNA variants that cause variation in traits are usually difficult and require a second round of experimentation. This is especially the case for trans eQTLs that do not benefit from the strong prior probability that relevant variants are in the immediate vicinity of the parent gene. Statistical, graphical, and bioinformatic methods are used to evaluate positional candidate genes and entire systems of interactions. The development of single-cell technologies and parallel advances in statistical methods have made it possible to define even subtle changes in eQTLs as cell states change.
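A common building block of eQTL detection is a per-gene, per-variant linear model: the normalized expression of a gene is regressed on the genotype dosage at a nearby variant, and the slope is tested against zero. The sketch below shows that single test on simulated data; it is only a minimal illustration, since real analyses include covariates (ancestry, batch, hidden expression factors), scan very large numbers of variant–gene pairs, and correct for multiple testing.

```python
import numpy as np
from scipy import stats

def cis_eqtl_test(genotype_dosage, expression):
    """Single-variant, single-gene eQTL association test.

    genotype_dosage : allele counts (0/1/2) or imputed dosages, one per individual
    expression      : normalized expression values for one gene, same individuals
    Returns the regression slope (effect size) and its p-value from an
    ordinary least-squares fit of expression on genotype.
    """
    result = stats.linregress(genotype_dosage, expression)
    return result.slope, result.pvalue

# Toy example on simulated data: expression increases with alternate-allele count.
rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, size=200)
expr = 0.4 * dosage + rng.normal(0.0, 1.0, size=200)
beta, p = cis_eqtl_test(dosage, expr)
print(f"slope = {beta:.2f}, p = {p:.2e}")
```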
See also
Epigenome-wide association study
Quantitative trait locus (QTL)
Transcriptome-wide association study
References
Classical genetics
Statistical genetics
Quantitative trait loci | 0.780053 | 0.977977 | 0.762875 |
Fundamental thermodynamic relation | In thermodynamics, the fundamental thermodynamic relation are four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as a microscopic change in internal energy in terms of microscopic changes in entropy, and volume for a closed system in thermal equilibrium in the following way.
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume.
This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as

dH = T dS + V dP

in terms of the Helmholtz free energy F as

dF = −S dT − P dV

and in terms of the Gibbs free energy G as

dG = −S dT + V dP.
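These alternative forms follow from the first expression by Legendre transformation, using the standard definitions H = U + PV, F = U − TS, and G = U + PV − TS. As a short worked example, substituting dU = T dS − P dV into the differential of G gives the Gibbs form quoted above:

```latex
% Worked derivation of dG from dU = T dS - P dV, with G = U + PV - TS.
\begin{aligned}
dG &= dU + P\,dV + V\,dP - T\,dS - S\,dT \\
   &= (T\,dS - P\,dV) + P\,dV + V\,dP - T\,dS - S\,dT \\
   &= -S\,dT + V\,dP .
\end{aligned}
```

The enthalpy and Helmholtz forms follow from the same manipulation applied to H = U + PV and F = U − TS.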
The first and second laws of thermodynamics
The first law of thermodynamics states that:

dU = δQ − δW

where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.

According to the second law of thermodynamics we have for a reversible process:

dS = δQ/T

Hence:

δQ = T dS

By substituting this into the first law, we have:

dU = T dS − δW

Letting δW = P dV be reversible pressure-volume work done by the system on its surroundings, we have:

dU = T dS − P dV
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend on only the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:

dU = T dS − P dV + Σi μi dNi

The μi are the chemical potentials corresponding to particles of type i.
If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to

dU = T dS + Σi Xi dxi + Σj μj dNj

Here the Xi are the generalized forces corresponding to the external parameters xi. (The negative sign used with pressure is unusual and arises because pressure represents a compressive stress that tends to decrease volume. Other generalized forces tend to increase their conjugate displacements.)
Relationship to statistical mechanics
The fundamental thermodynamic relation and statistical mechanical principles can be derived from one another.
Derivation from statistical mechanical principles
The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.
However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy E is:

S = kB log Ω(E)

where kB is the Boltzmann constant and Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size δE.
Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:

dS = δQ/T

The fundamental assumption of statistical mechanics is that all the Ω(E) states at a particular energy are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:

1/(kB T) ≡ β ≡ d log Ω(E)/dE
This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:

X = −dE_r/dx

Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:

X = −⟨dE_r/dx⟩

To evaluate the average, we partition the energy eigenstates by counting how many of them have a value for dE_r/dx within a range between Y and Y + δY. Calling this number Ω_Y(E), we have:

Ω(E) = Σ_Y Ω_Y(E)

The average defining the generalized force can now be written:

X = −(1/Ω(E)) Σ_Y Y Ω_Y(E)
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dE_r/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

N_Y(E) = (Ω_Y(E)/δE) Y dx

such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is, of course, given by N_Y(E + δE). The difference

N_Y(E) − N_Y(E + δE)

is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both N_Y(E) and N_Y(E + δE), therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

(∂Ω/∂x)_E = −Σ_Y Y (∂Ω_Y/∂E)_x = (∂(ΩX)/∂E)_x

The logarithmic derivative of Ω with respect to x is thus given by:

(∂ log Ω/∂x)_E = β X + (∂X/∂E)_x

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:

(∂S/∂x)_E = X/T

Combining this with

(∂S/∂E)_x = 1/T

gives:

dS = (∂S/∂E)_x dE + (∂S/∂x)_E dx = dE/T + (X/T) dx

which we can write as:

dE = T dS − X dx
Derivation of statistical mechanical principles from the fundamental thermodynamic relation
It has been shown that the fundamental thermodynamic relation together with the following three postulates
is sufficient to build the theory of statistical mechanics without the equal a priori probability postulate.
For example, in order to derive the Boltzmann distribution, we assume the probability density of microstate i satisfies pi ∝ f(Ei, T) for some function f of the microstate energy and the temperature. The normalization factor (partition function) is therefore

Z = Σi f(Ei, T)
The entropy is therefore given by
If we change the temperature T by dT while keeping the volume V of the system constant, the change of entropy satisfies
where
Considering that
we have
From the fundamental thermodynamic relation, we have

dU = T dS − P dV

Since we kept the volume V constant when perturbing the temperature T, we have dV = 0. Combining the equations above, we have
Physics laws should be universal, i.e., the above equation must hold for arbitrary systems, and the only way for this to happen is
That is, f(Ei, T) ∝ exp(−Ei/(kB T)), which is the Boltzmann distribution.
It has been shown that the third postulate in the above formalism can be replaced by the following:
However, the mathematical derivation will be much more complicated.
References
External links
The Fundamental Thermodynamic Relation
Thermodynamics
Statistical mechanics
Thermodynamic equations | 0.775336 | 0.983927 | 0.762873 |
Electronegativity | Electronegativity, symbolized as χ, is the tendency for an atom of a given chemical element to attract shared electrons (or electron density) when forming a chemical bond. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, and the sign and magnitude of a bond's chemical polarity, which characterizes a bond along the continuous scale from covalent to ionic bonding. The loosely defined term electropositivity is the opposite of electronegativity: it characterizes an element's tendency to donate valence electrons.
On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number and location of other electrons in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result, the less positive charge they will experience—both because of their increased distance from the nucleus and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus).
The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811,
though the concept was known before that and was studied by many chemists including Avogadro.
In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements.
The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from 0.79 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units.
As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Even so, the electronegativity of an atom is strongly correlated with the first ionization energy. The electronegativity is slightly negatively correlated (for smaller electronegativity values) and rather strongly positively correlated (for most and larger electronegativity values) with the electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations.
Caesium is the least electronegative element (0.79); fluorine is the most (3.98).
Methods of calculation
Pauling electronegativity
Pauling first proposed the concept of electronegativity in 1932 to explain why the covalent bond between two different atoms (A–B) is stronger than the average of the A–A and the B–B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding.
The difference in electronegativity between atoms A and B is given by:

|χA − χB| = (eV)^(−1/2) √(Ed(A–B) − [Ed(A–A) + Ed(B–B)]/2)

where the dissociation energies, Ed, of the A–B, A–A and B–B bonds are expressed in electronvolts, the factor (eV)^(−1/2) being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H–Br, 3.79 eV; H–H, 4.52 eV; Br–Br, 2.00 eV).
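The hydrogen–bromine value quoted above can be checked directly from the definition; the short sketch below simply evaluates the formula with the dissociation energies given in the text.

```python
import math

def pauling_difference(e_ab, e_aa, e_bb):
    """Pauling electronegativity difference |chi_A - chi_B| from bond
    dissociation energies given in eV:
        |chi_A - chi_B| = sqrt( Ed(AB) - (Ed(AA) + Ed(BB)) / 2 )
    """
    return math.sqrt(e_ab - (e_aa + e_bb) / 2)

# H-Br example from the text: H-Br 3.79 eV, H-H 4.52 eV, Br-Br 2.00 eV.
print(round(pauling_difference(3.79, 4.52, 2.00), 2))  # 0.73
```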
As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br− ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point has been fixed (usually, for H or F).
To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used.
The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely:

Ed(A–B) = [Ed(A–A) + Ed(B–B)]/2 + (χA − χB)² eV
or sometimes, a more accurate fit
These are approximate equations but they hold with good accuracy. Pauling obtained the first equation by noting that a bond can be approximately represented as a quantum mechanical superposition of a covalent bond and two ionic bond-states. The covalent energy of a bond is, according to quantum mechanical calculations, approximately the geometric mean of the energies of the corresponding A–A and B–B covalent bonds, and there is additional energy that comes from ionic factors, i.e. the polar character of the bond.
The geometric mean is approximately equal to the arithmetic mean—which is applied in the first formula above—when the energies are of a similar value, e.g., except for the highly electropositive elements, where there is a larger difference of two dissociation energies; the geometric mean is more accurate and almost always gives positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is these semi-empirical formulas for bond energy that underlie the concept of Pauling electronegativity.
The formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of the polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data.
In more complex compounds, there is an additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can be only used for single, not for multiple bonds. The enthalpy of formation of a molecule containing only single bonds can subsequently be estimated based on an electronegativity table, and it depends on the constituents and the sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error on the order of 10% but can be used to get a rough qualitative idea and understanding of a molecule.
Mulliken electronegativity
Robert S. Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons:

χ = (Ei + Eea)/2
As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts,
and for energies in kilojoules per mole,
The Mulliken electronegativity can only be calculated for an element whose electron affinity is known. Measured values are available for 72 elements, while approximate values have been estimated or calculated for the remaining elements.
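On the absolute scale, the Mulliken value is simply the arithmetic mean of the ionization energy and the electron affinity. The sketch below evaluates it for chlorine using approximate literature values for Ei and Eea; the numbers are illustrative and not tied to any particular data compilation.

```python
def mulliken_electronegativity_ev(ionization_energy_ev, electron_affinity_ev):
    """Absolute Mulliken electronegativity in eV: the arithmetic mean of the
    first ionization energy and the electron affinity of the free atom."""
    return (ionization_energy_ev + electron_affinity_ev) / 2

# Chlorine, with approximate values Ei ~ 12.97 eV and Eea ~ 3.61 eV.
print(mulliken_electronegativity_ev(12.97, 3.61))  # about 8.29 eV
```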
The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the derivative of the electronic energy with respect to the number of electrons, i.e.,

μ(Mulliken) = −χ(Mulliken) = −(Ei + Eea)/2
Allred–Rochow electronegativity
A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: the higher the charge per unit area of atomic surface, the greater the tendency of that atom to attract electrons. The effective nuclear charge, Zeff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, rcov. When rcov is expressed in picometres,

χ = 3590 × Zeff/rcov² + 0.744
Sanderson electronegativity equalization
R.T. Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume. With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, s-electron energy, NMR spin-spin coupling constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. This behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics.
Allen electronegativity
Perhaps the simplest definition of electronegativity is that of Leland C. Allen, who has proposed that it is related to the average energy of the valence electrons in a free atom,

χ = (ns εs + np εp)/(ns + np)
where εs,p are the one-electron energies of s- and p-electrons in the free atom and ns,p are the number of s- and p-electrons in the valence shell. It is usual to apply a scaling factor, 1.75×10−3 for energies expressed in kilojoules per mole or 0.169 for energies measured in electronvolts, to give values that are numerically similar to Pauling electronegativities.
The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method.
On this scale, neon has the highest electronegativity of all elements, followed by fluorine, helium, and oxygen.
Correlation of electronegativity with other properties
The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties that might be affected by electronegativity. The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate the "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse.
Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself".
Trends in electronegativity
Periodic trends
In general, electronegativity increases on passing from left to right along a period and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available. This would lead one to believe that caesium fluoride is the compound whose bonding features the most ionic character.
There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity and Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state with a Pauling value of 1.87 instead of the +4 state.
Variation of electronegativity with oxidation number
In inorganic chemistry, it is common to consider a single value of electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is not an invariable atomic property and, in particular, increases with the oxidation state of the element.
Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data were available. However, for most elements, there are not enough different covalent compounds for which bond dissociation energies are known to make this approach feasible. This is particularly true of the transition elements, where quoted electronegativity values are usually, of necessity, averages over several different oxidation states and where trends in electronegativity are harder to see as a result.
The chemical effects of this increase in electronegativity can be seen both in the structures of oxides and halides and in the acidity of oxides and oxoacids. Hence CrO3 and Mn2O7 are acidic oxides with low melting points, while Cr2O3 is amphoteric and Mn2O3 is a completely basic oxide.
The effect can also be clearly seen in the dissociation constants pKa of the oxoacids of chlorine. The effect is much larger than could be explained by the negative charge being shared among a larger number of oxygen atoms, which would lead to a difference in pKa of log10(1/4) ≈ –0.6 between hypochlorous acid and perchloric acid. As the oxidation state of the central chlorine atom increases, more electron density is drawn from the oxygen atoms onto the chlorine, diminishing the partial negative charge of individual oxygen atoms. At the same time, the positive partial charge on the hydrogen increases with a higher oxidation state. This explains the observed increased acidity with an increasing oxidation state in the oxoacids of chlorine.
Electronegativity and hybridization scheme
The electronegativity of an atom changes depending on the hybridization of the orbital employed in bonding. Electrons in s orbitals are held more tightly than electrons in p orbitals. Hence, a bond to an atom that employs an spx hybrid orbital for bonding will be more heavily polarized to that atom when the hybrid orbital has more s character. That is, when electronegativities are compared for different hybridization schemes of a given element, the order χ(sp3) < χ(sp2) < χ(sp) holds (the trend should apply to non-integer hybridization indices as well). While this holds true in principle for any main-group element, values for the hybridization-specific electronegativity are most frequently cited for carbon. In organic chemistry, these electronegativities are frequently invoked to predict or rationalize bond polarities in organic compounds containing double and triple bonds to carbon.
Group electronegativity
In organic chemistry, electronegativity is associated more with different functional groups than with individual atoms. The terms group electronegativity and substituent electronegativity are used synonymously. However, it is common to distinguish between the inductive effect and the resonance effect, which might be described as σ- and π-electronegativities, respectively. There are a number of linear free-energy relationships that have been used to quantify these effects, of which the Hammett equation is the best known. Kabachnik Parameters are group electronegativities for use in organophosphorus chemistry.
Electropositivity
Electropositivity is a measure of an element's ability to donate electrons, and therefore form positive ions; thus, it is antipode to electronegativity.
Mainly, this is an attribute of metals, meaning that, in general, the greater the metallic character of an element the greater the electropositivity. Therefore, the alkali metals are the most electropositive of all. This is because they have a single electron in their outer shell and, as this is relatively far from the nucleus of the atom, it is easily lost; in other words, these metals have low ionization energies.
While electronegativity increases along periods in the periodic table, and decreases down groups, electropositivity decreases along periods (from left to right) and increases down groups. This means that elements in the upper right of the periodic table of elements (oxygen, sulfur, chlorine, etc.) will have the greatest electronegativity, and those in the lower-left (rubidium, caesium, and francium) the greatest electropositivity.
See also
Chemical polarity
Electron affinity
Electronegativities of the elements (data page)
Ionization energy
Metallic bonding
Miedema's model
Orbital hybridization
Oxidation state
Periodic table
References
Bibliography
External links
WebElements, lists values of electronegativities by a number of different methods of calculation
Video explaining electronegativity
Electronegativity Chart, a summary listing of the electronegativity of each element along with an interactive periodic table
Chemical properties
Chemical bonding
Dimensionless numbers of chemistry | 0.764189 | 0.998271 | 0.762868 |
Donabedian model | The Donabedian model is a conceptual model that provides a framework for examining health services and evaluating quality of health care. According to the model, information about quality of care can be drawn from three categories: "structure", "process", and "outcomes". Structure describes the context in which care is delivered, including hospital buildings, staff, financing, and equipment. Process denotes the transactions between patients and providers throughout the delivery of healthcare. Finally, outcomes refer to the effects of healthcare on the health status of patients and populations. Avedis Donabedian, a physician and health services researcher at the University of Michigan, developed the original model in 1966. While there are other quality of care frameworks, including the World Health Organization (WHO)-Recommended Quality of Care Framework and the Bamako Initiative, the Donabedian Model continues to be the dominant paradigm for assessing the quality of health care.
Dimensions of care
The model is most often represented by a chain of three boxes containing structure, process, and outcome connected by unidirectional arrows in that order. These boxes represent three types of information that may be collected in order to draw inferences about quality of care in a given system.
Structure
Structure includes all of the factors that affect the context in which care is delivered. This includes the physical facility, equipment, and human resources, as well as organizational characteristics such as staff training and payment methods. These factors control how providers and patients in a healthcare system act and are measures of the average quality of care within a facility or system. Structure is often easy to observe and measure and it may be the upstream cause of problems identified in process.
Process
Process is the sum of all actions that make up healthcare. These commonly include diagnosis, treatment, preventive care, and patient education but may be expanded to include actions taken by the patients or their families. Processes can be further classified as technical processes (how care is delivered) or interpersonal processes (the manner in which care is delivered). According to Donabedian, the measurement of process is nearly equivalent to the measurement of quality of care because process contains all acts of healthcare delivery. Information about process can be obtained from medical records, interviews with patients and practitioners, or direct observations of healthcare visits.
Outcome
Outcome contains all the effects of healthcare on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and health-related quality of life. Outcomes are sometimes seen as the most important indicators of quality because improving patient health status is the primary goal of healthcare. However, accurately measuring outcomes that can be attributed exclusively to healthcare is very difficult. Drawing connections between process and outcomes often requires large sample populations, adjustments by case mix, and long-term follow ups as outcomes may take considerable time to become observable.
Although it is widely recognized and applied in many health care related fields, the Donabedian Model was developed to assess quality of care in clinical practice. The model does not have an implicit definition of quality care so that it can be applied to problems of broad or narrow scope. Donabedian notes that each of the three domains has advantages and disadvantages that necessitate researchers to draw connections between them in order to create a chain of causation that is conceptually useful for understanding systems as well as designing experiments and interventions.
Applications
Donabedian developed his quality of care framework to be flexible enough for application in diverse healthcare settings and among various levels within a delivery system.
At its most basic level, the framework can be used to modify structures and processes within a healthcare delivery unit, such as a small group practice or ambulatory care center, to improve patient flow or information exchange. For instance, health administrators in a small physician practice may be interested in improving their treatment coordination process through enhanced communication of lab results from laboratorian to provider in an effort to streamline patient care. The process for information exchange, in this case the transfer of lab results to the attending physician, depends on the structure for receiving and interpreting results. The structure could involve an electronic health record (EHR) that a laboratorian fills out with lab results for use by the physician to complete a diagnosis. To improve this process, a healthcare administrator may look at the structure and decide to purchase an information technology (IT) solution of pop-up alerts for actionable lab results to incorporate into the EHR. The process could be modified through a change in standard protocol of determining how and when an alert is released and who is responsible for each step in the process. The outcomes to evaluate the efficacy of this quality improvement (QI) solution might include patient satisfaction, timeliness of diagnosis, or clinical outcomes.
In addition to examining quality within a healthcare delivery unit, the Donabedian model is applicable to the structure and process for treating certain diseases and conditions with the aim to improve the quality of chronic disease management. For example, systemic lupus erythematosus (SLE) is a condition with significant morbidity and mortality and substantial disparities in outcomes among rheumatic diseases. The propensity for SLE care to be fragmented and poorly coordinated, as well as evidence that healthcare system factors associated with improved SLE outcomes are modifiable, points to an opportunity for process improvement through changes in preventive care, monitoring, and effective self-care. A researcher may develop evidence within these areas to analyze the relationship between structure and process to outcomes in SLE care for the purposes of finding solutions to improve outcomes. An analysis of SLE care structure may reveal an association between access to care and financing to quality outcomes. An analysis of process may look at hospital and physician specialty in SLE care and how it relates to SLE mortality in hospitals, or the effect on outcomes by including additional QI indicators to the diagnosis and treatment of SLE. To assess these changes in structure and process, evidence garnered from changes in mortality, disease damage, and health-related quality of life would be used to validate structure-process changes.
Donabedian’s model can also be applied to a large health system to measure overall quality and align improvement work across a hospital, group practice or the large integrated health system to improve quality and outcomes for a population. In 2007, the US Institute for Healthcare Improvement proposed “whole system measures” that address structure, process, and outcomes of care. These indicators supply health care leaders with data to evaluate the organization’s performance in order to design strategic QI planning. The indicators are limited to 13 non-disease specific measures that provide system-level indications of quality, applicable to both inpatient and outpatient settings and across the continuum of care. In addition to informing the QI plan, these measures can be used to evaluate the quality of the system’s care over time, how it performs relative to stated strategic planning goals, and how it performs compared to similar organizations.
Criticisms and adaptations
While the Donabedian model continues to serve as a touchstone framework in health services research, potential limitations have been suggested by other researchers, and, in some cases, adaptations of the model have been proposed. The sequential progression from structure to process to outcome has been described by some as too linear of a framework, and consequently has a limited utility for recognizing how the three domains influence and interact with each other. The model has also been criticized for failing to incorporate antecedent characteristics (e.g. patient characteristics, environmental factors) which are important precursors to evaluating quality care. Coyle and Battles suggest that these factors are vital to fully understanding the true effectiveness of new strategies or modifications within the care process. According to Coyle and Battles, patient factors include genetics, socio-demographics, health habits, beliefs and attitudes, and preferences. Environmental factors include the patients' cultural, social, political, personal, and physical characteristics, as well as factors related to the health profession itself.
History
Avedis Donabedian first described the three elements of the Donabedian Model in his 1966 article, “Evaluating the Quality of Medical Care.” As a preface to his analysis of methodologies used in health services research, Donabedian identified the three dimensions that can be utilized to assess quality of care (structure, process, and outcome) that would later become the core divisions of the Donabedian Model. “Evaluating the Quality of Medical Care” became one of the most frequently cited public health-related articles of the 20th century, and the Donabedian Model gained widespread acceptance.
In 1980, Donabedian published The Definition of Quality and Approaches to its Assessment, vol. 1: Explorations in Quality Assessment and Monitoring, which provided a more in-depth description of the structure–process–outcome paradigm. In his book, Donabedian once again defines structure, process, and outcome, and clarifies that these categories should not be mistaken for attributes of quality, but rather they are the classifications for the types of information that can be obtained in order to infer whether the quality of care is poor, fair, or good. Furthermore, he states that in order to make inferences about quality, there needs to be an established relationship between the three categories and that this relationship between categories is a probability rather than a certainty.
References
Health care management | 0.772778 | 0.987172 | 0.762865 |
Social constructionism | Social constructionism is a term used in sociology, social ontology, and communication theory. The term can serve somewhat different functions in each field; however, the foundation of this theoretical framework suggests various facets of social reality—such as concepts, beliefs, norms, and values—are formed through continuous interactions and negotiations among society's members, rather than empirical observation of physical reality. The theory of social constructionism posits that much of what individuals perceive as 'reality' is actually the outcome of a dynamic process of construction influenced by social conventions and structures.
Unlike phenomena that are innately determined or biologically predetermined, these social constructs are collectively formulated, sustained, and shaped by the social contexts in which they exist. These constructs significantly impact both the behavior and perceptions of individuals, often being internalized based on cultural narratives, whether or not these are empirically verifiable. In this two-way process of reality construction, individuals not only interpret and assimilate information through their social relations but also contribute to shaping existing societal narratives.
Examples of social constructs range widely, encompassing the assigned value of money, conceptions of the self and self-identity, beauty standards, gender, language, race, ethnicity, social class, social hierarchy, nationality, religion, social norms, the modern calendar and other units of time, marriage, education, citizenship, stereotypes, femininity and masculinity, social institutions, and even the idea of 'social construct' itself. These constructs are not universal truths but are flexible entities that can vary dramatically across different cultures and societies. They arise from collaborative consensus and are shaped and maintained through collective human interactions, cultural practices, and shared beliefs. This articulates the view that people in society construct ideas or concepts that may not exist without the existence of people or language to validate those concepts, meaning that without a society these constructs would cease to exist.
Overview
A social construct or construction is the meaning, notion, or connotation placed on an object or event by a society, and adopted by that society with respect to how they view or deal with the object or event.
The social construction of target populations refers to the cultural characterizations or popular images of the persons or groups whose behavior and well-being are affected by public policy.
Social constructionism posits that the meanings of phenomena do not have an independent foundation outside the mental and linguistic representation that people develop about them throughout their history, and which becomes their shared reality. From a linguistic viewpoint, social constructionism centres meaning as an internal reference within language (words refer to words, definitions to other definitions) rather than to an external reality.
Origins
In the 16th century, Michel de Montaigne wrote that, "We need to interpret interpretations more than to interpret things." In 1886 or 1887, Friedrich Nietzsche put it similarly: "Facts do not exist, only interpretations." In his 1922 book Public Opinion, Walter Lippmann said, "The real environment is altogether too big, too complex, and too fleeting for direct acquaintance" between people and their environment. Each person constructs a pseudo-environment that is a subjective, biased, and necessarily abridged mental image of the world, and to a degree, everyone's pseudo-environment is a fiction. People "live in the same world, but they think and feel in different ones." Lippman's "environment" might be called "reality", and his "pseudo-environment" seems equivalent to what today is called "constructed reality".
Social constructionism has more recently been rooted in "symbolic interactionism" and "phenomenology". With Berger and Luckmann's The Social Construction of Reality published in 1966, this concept took hold. More than four decades later, much theory and research pledged itself to the basic tenet that people "make their social and cultural worlds at the same time these worlds make them." It is a viewpoint that uproots social processes "simultaneously playful and serious, by which reality is both revealed and concealed, created and destroyed by our activities." It provides an alternative to the "Western intellectual tradition" where the researcher "earnestly seeks certainty in a representation of reality by means of propositions."
In social constructionist terms, "taken-for-granted realities" are cultivated from "interactions between and among social agents"; furthermore, reality is not some objective truth "waiting to be uncovered through positivist scientific inquiry." Rather, there can be "multiple realities that compete for truth and legitimacy." Social constructionism understands the "fundamental role of language and communication" and this understanding has "contributed to the linguistic turn" and more recently the "turn to discourse theory". The majority of social constructionists abide by the belief that "language does not mirror reality; rather, it constitutes [creates] it."
A broad definition of social constructionism has its supporters and critics in the organizational sciences. A constructionist approach to various organizational and managerial phenomena appears to be increasingly common.
Andy Lock and Tom Strong trace some of the fundamental tenets of social constructionism back to the work of the 18th-century Italian political philosopher, rhetorician, historian, and jurist Giambattista Vico.
Berger and Luckmann give credit to Max Scheler as a large influence as he created the idea of sociology of knowledge which influenced social construction theory.
According to Lock and Strong, other influential thinkers whose work has affected the development of social constructionism are: Edmund Husserl, Alfred Schutz, Maurice Merleau-Ponty, Martin Heidegger, Hans-Georg Gadamer, Paul Ricoeur, Jürgen Habermas, Emmanuel Levinas, Mikhail Bakhtin, Valentin Volosinov, Lev Vygotsky, George Herbert Mead, Ludwig Wittgenstein, Gregory Bateson, Harold Garfinkel, Erving Goffman, Anthony Giddens, Michel Foucault, Ken Gergen, Mary Gergen, Rom Harre, and John Shotter.
Applications
Personal construct psychology
Since its appearance in the 1950s, personal construct psychology (PCP) has mainly developed as a constructivist theory of personality and a system of transforming individual meaning-making processes, largely in therapeutic contexts. It was based around the notion of persons as scientists who form and test theories about their worlds. Therefore, it represented one of the first attempts to appreciate the constructive nature of experience and the meaning persons give to their experience. Social constructionism (SC), on the other hand, mainly developed as a form of a critique, aimed to transform the oppressing effects of the social meaning-making processes. Over the years, it has grown into a cluster of different approaches, with no single SC position. However, different approaches under the generic term of SC are loosely linked by some shared assumptions about language, knowledge, and reality.
A usual way of thinking about the relationship between PCP and SC is treating them as two separate entities that are similar in some aspects, but also very different in others. This way of conceptualizing this relationship is a logical result of the circumstantial differences of their emergence. In subsequent analyses these differences between PCP and SC were framed around several points of tension, formulated as binary oppositions: personal/social; individualist/relational; agency/structure; constructivist/constructionist. Although some of the most important issues in contemporary psychology are elaborated in these contributions, the polarized positioning also sustained the idea of a separation between PCP and SC, paving the way for only limited opportunities for dialogue between them.
Reframing the relationship between PCP and SC may be of use in both the PCP and the SC communities. On one hand, it extends and enriches SC theory and points to benefits of applying the PCP "toolkit" in constructionist therapy and research. On the other hand, the reframing contributes to PCP theory and points to new ways of addressing social construction in therapeutic conversations.
Educational psychology
Like social constructionism, social constructivism states that people work together to construct artifacts. While social constructionism focuses on the artifacts that are created through the social interactions of a group, social constructivism focuses on an individual's learning that takes place because of his or her interactions in a group.
Social constructivism has been studied by many educational psychologists, who are concerned with its implications for teaching and learning. For more on the psychological dimensions of social constructivism, see the work of Lev Vygotsky, Ernst von Glasersfeld and A. Sullivan Palincsar.
Systemic therapy
Some of the systemic models that use social constructionism include narrative therapy and solution-focused therapy.
Poverty
Max Rose and Frank R. Baumgartner (2013), in Framing the Poor: Media Coverage and U.S. Poverty Policy, 1960-2008, examine how the media has framed the poor in the U.S. and how negative framing has caused a shift in government spending. Since 1960, government spending on social services such as welfare has declined. Evidence shows the media framing the poor more negatively since 1960, with increased use of words such as "lazy" and "fraud".
Crime
Potter and Kappeler (1996), in their introduction to Constructing Crime: Perspective on Making News And Social Problems wrote, "Public opinion and crime facts demonstrate no congruence. The reality of crime in the United States has been subverted to a constructed reality as ephemeral as swamp gas."
Criminology has long focused on why and how society defines criminal behavior and crime in general. Viewed through a social constructionist lens, there is evidence to support the idea that criminal acts are a social construct, where abnormal or deviant acts become crimes based on the views of society. Another explanation of crime as it relates to social constructionism is individual identity constructs that result in deviant behavior. If someone has constructed the identity of a "madman" or "criminal" for themselves based on a society's definition, it may force them to follow that label, resulting in criminal behavior.
History and development
Berger and Luckmann
Constructionism became prominent in the U.S. with Peter L. Berger and Thomas Luckmann's 1966 book, The Social Construction of Reality. Berger and Luckmann argue that all knowledge, including the most basic, taken-for-granted common-sense knowledge of everyday reality, is derived from and maintained by social interactions. In their model, people interact on the understanding that their perceptions of everyday life are shared with others, and this common knowledge of reality is in turn reinforced by these interactions. Since this common-sense knowledge is negotiated by people, human typifications, significations and institutions come to be presented as part of an objective reality, particularly for future generations who were not involved in the original process of negotiation. For example, as parents negotiate rules for their children to follow, those rules confront the children as externally produced "givens" that they cannot change. Berger and Luckmann's social constructionism has its roots in phenomenology. It links to Heidegger and Edmund Husserl through the teaching of Alfred Schutz, who was also Berger's PhD adviser.
Narrative turn
During the 1970s and 1980s, social constructionist theory underwent a transformation as constructionist sociologists engaged with the work of Michel Foucault and others as a narrative turn in the social sciences was worked out in practice. This particularly affected the emergent sociology of science and the growing field of science and technology studies. In particular, Karin Knorr-Cetina, Bruno Latour, Barry Barnes, Steve Woolgar, and others used social constructionism to relate what science has typically characterized as objective facts to the processes of social construction. Their goal was to show that human subjectivity imposes itself on the facts taken as objective, not solely the other way around. A particularly provocative title in this line of thought is Andrew Pickering's Constructing Quarks: A Sociological History of Particle Physics. At the same time, social constructionism shaped studies of technology, especially work on the social construction of technology (SCOT) by authors such as Wiebe Bijker, Trevor Pinch, and Maarten van Wesel. Despite its common perception as objective, mathematics is not immune to social constructionist accounts. Sociologists such as Sal Restivo and Randall Collins, mathematicians including Reuben Hersh and Philip J. Davis, and philosophers including Paul Ernest have published social constructionist treatments of mathematics.
Postmodernism
Within the social constructionist strand of postmodernism, the concept of socially constructed reality stresses the ongoing mass-building of worldviews by individuals in dialectical interaction with society at a given time. The numerous realities so formed comprise, according to this view, the imagined worlds of human social existence and activity. These worldviews are gradually crystallized by habit into institutions propped up by language conventions; given ongoing legitimacy by mythology, religion and philosophy; maintained by therapies and socialization; and subjectively internalized by upbringing and education. Together, these become part of the identity of social citizens.
In the book The Reality of Social Construction, the British sociologist Dave Elder-Vass places the development of social constructionism as one outcome of the legacy of postmodernism. He writes "Perhaps the most widespread and influential product of this process [coming to terms with the legacy of postmodernism] is social constructionism, which has been booming [within the domain of social theory] since the 1980s."
Criticisms
Critics argue that social constructionism rejects the influences of biology on behaviour and culture, or suggests that they are unimportant to achieving an understanding of human behaviour. Scientific estimates of nature versus nurture and of gene–environment interactions have almost always shown substantial influences of both genetic and social factors, often in an inseparable manner. Claims that genetics does not affect humans are seen as outdated by most contemporary scholars of human development.
Social constructionism has also been criticized for having an overly narrow focus on society and culture as a causal factor in human behavior, excluding the influence of innate biological tendencies. This criticism has been explored by psychologists such as Steven Pinker in The Blank Slate as well as by Asian studies scholar Edward Slingerland in What Science Offers the Humanities. John Tooby and Leda Cosmides used the term standard social science model to refer to social theories that they believe fail to take into account the evolved properties of the brain.
In 1996, to illustrate what he believed to be the intellectual weaknesses of social constructionism and postmodernism, physics professor Alan Sokal submitted an article to the academic journal Social Text deliberately written to be incomprehensible but including phrases and jargon typical of the articles published by the journal. The submission, which was published, was an experiment to see if the journal would "publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions." In 1999, Sokal, with coauthor Jean Bricmont, published the book Fashionable Nonsense, which criticized postmodernism and social constructionism.
Philosopher Paul Boghossian has also written against social constructionism. He follows Ian Hacking's argument that many adopt social constructionism because of its potentially liberating stance: if things are the way that they are only because of human social conventions, as opposed to being so naturally, then it should be possible to change them into how people would rather have them be. He then states that social constructionists argue that people should refrain from making absolute judgements about what is true and instead state that something is true in the light of this or that theory. Countering this, he states:
Woolgar and Pawluch argue that constructionists tend to "ontologically gerrymander" social conditions in and out of their analysis.
Alan Sokal also criticizes social constructionism for contradicting itself on the knowability of the existence of societies. The argument is that if there were no knowable objective reality, there would be no way of knowing whether societies exist and, if so, what their rules and other characteristics are. One example of the alleged contradiction is the claim that "phenomena must be measured by what is considered average in their respective cultures, not by an objective standard": some languages have no word for "average", so applying the concept of an average to such cultures contradicts social constructionism's own claim that cultures can only be measured by their own standards. Social constructionism is a diverse field with varying stances on these matters. Some social constructionists do acknowledge the existence of an objective reality but argue that human understanding and interpretation of that reality are socially constructed. Others might contend that while the term "average" may not exist in all languages, equivalent or analogous concepts might still be applied within those cultures, thereby not completely invalidating the principle of cultural relativity in measuring phenomena.
See also
References
Further reading
Books
Boghossian, P. Fear of Knowledge: Against Relativism and Constructivism. Oxford University Press, 2006. Online review: Fear of Knowledge: Against Relativism and Constructivism
Berger, P. L. and Luckmann, T., The Social Construction of Reality : A Treatise in the Sociology of Knowledge (Anchor, 1967; ).
Best, J. Images of Issues: Typifying Contemporary Social Problems, New York: Gruyter, 1989
Burr, V. Social Constructionism, 2nd ed. Routledge 2003.
Ellul, J. Propaganda: The Formation of Men's Attitudes. Trans. Konrad Kellen & Jean Lerner. New York: Knopf, 1965. New York: Random House/ Vintage 1973
Ernst, P., (1998), Social Constructivism as a Philosophy of Mathematics; Albany, New York: State University of New York Press
Gergen, K., An Invitation to Social Construction. Los Angeles: Sage, 2015 (3d edition, first 1999).
Glasersfeld, E. von, Radical Constructivism: A Way of Knowing and Learning. London: RoutledgeFalmer, 1995.
Hacking, I., The Social Construction of What? Cambridge: Harvard University Press, 1999;
Hibberd, F. J., Unfolding Social Constructionism. New York: Springer, 2005.
Kukla, A., Social Constructivism and the Philosophy of Science, London: Routledge, 2000.
Lawrence, T. B. and Phillips, N. Constructing Organizational Life: How Social-Symbolic Work Shapes Selves, Organizations, and Institutions. Oxford University Press, 2019.
Lowenthal, P., & Muth, R. Constructivism. In E. F. Provenzo, Jr. (Ed.), Encyclopedia of the social and cultural foundations of education (pp. 177–179). Thousand Oaks, CA: Sage, 2008.
McNamee, S. and Gergen, K. (Eds.). Therapy as Social Construction. London: Sage, 1992 .
McNamee, S. and Gergen, K. Relational Responsibility: Resources for Sustainable Dialogue. Thousand Oaks, California: Sage, 2005. .
Penman, R. Reconstructing communicating. Mahwah, NJ: Lawrence Erlbaum, 2000.
Poerksen, B. The Certainty of Uncertainty: Dialogues Introducing Constructivism. Exeter: Imprint-Academic, 2004.
Restivo, S. and Croissant, J., "Social Constructionism in Science and Technology Studies" (Handbook of Constructionist Research, ed. J.A. Holstein & J.F. Gubrium) Guilford, NY 2008, 213–229;
Schmidt, S. J., Histories and Discourses: Rewriting Constructivism. Exeter: Imprint-Academic, 2007.
Searle, J., The Construction of Social Reality. New York: Free Press, 1995; .
Shotter, J. Conversational realities: Constructing life through language. Thousand Oaks, CA: Sage, 1993.
Stewart, J., Zediker, K. E., & Witteborn, S. Together: Communicating interpersonally – A social construction approach (6th ed). Los Angeles, CA: Roxbury, 2005.
Weinberg, D. Contemporary Social Constructionism: Key Themes. Philadelphia, PA: Temple University Press, 2014.
Willard, C. A., Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy Chicago: University of Chicago Press, 1996; .
Wilson, D. S. (2005), "Evolutionary Social Constructivism". In J. Gottshcall and D. S. Wilson, (Eds.), The Literary Animal: Evolution and the Nature of Narrative. Evanston, IL, Northwestern University Press; . Full text
Articles
Drost, Alexander. "Borders. A Narrative Turn – Reflections on Concepts, Practices and their Communication", in: Olivier Mentz and Tracey McKay (eds.), Unity in Diversity. European Perspectives on Borders and Memories, Berlin 2017, pp. 14–33.
Mallon, R, "Naturalistic Approaches to Social Construction", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).
Shotter, J., & Gergen, K. J., Social construction: Knowledge, self, others, and continuing the conversation. In S. A. Deetz (Ed.), Communication Yearbook, 17 (pp. 3–33). Thousand Oaks, CA: Sage, 1994.
External links
Communication theory
Consensus reality
Human behavior
Human communication
Social concepts
Social epistemology
Sociology of knowledge
Sociological theories | 0.764934 | 0.997279 | 0.762852 |
Soil chemistry | Soil chemistry is the study of the chemical characteristics of soil. Soil chemistry is affected by mineral composition, organic matter and environmental factors. In the early 1870s a consulting chemist to the Royal Agricultural Society in England, named J. Thomas Way, performed many experiments on how soils exchange ions, and is considered the father of soil chemistry. Other scientists who contributed to this branch of ecology include Edmund Ruffin, and Linus Pauling.
History
Until the late 1960s, soil chemistry focused primarily on chemical reactions in the soil that contribute to pedogenesis or that affect plant growth. Since then, concerns have grown about environmental pollution, organic and inorganic soil contamination and potential ecological health and environmental health risks. Consequently, the emphasis in soil chemistry has shifted from pedology and agricultural soil science to an emphasis on environmental soil science.
Environmental soil chemistry
A knowledge of environmental soil chemistry is paramount to predicting the fate of contaminants, as well as the processes by which they are initially released into the soil. Once a chemical is exposed to the soil environment, myriad chemical reactions can occur that may increase or decrease contaminant toxicity. These reactions include adsorption/desorption, precipitation, polymerization, dissolution, hydrolysis, hydration, complexation and oxidation/reduction. These reactions are often disregarded by scientists and engineers involved with environmental remediation. Understanding these processes enables better prediction of the fate and toxicity of contaminants and provides the knowledge needed to develop scientifically sound, cost-effective remediation strategies.
Key concepts
Soil structure
Soil structure refers to the manner in which individual soil particles are grouped together to form clusters of particles called aggregates. It is determined by the processes of soil formation, the parent material, and the soil texture. Soil structure can be influenced by a wide variety of biota as well as by human management practices.
Formation of aggregates
Aggregates can form under varying conditions and differ from each other in soil horizon and structure.
Natural aggregation results in what are called peds, whereas artificially formed aggregates are called clods.
Clods are formed due to disturbance of the field by ploughing or digging.
Microbial activity also influences the formation of aggregates.
Types of soil structure
The classification of soil structural forms is based largely on shape.
Spheroidal structure: sphere-like or rounded in shape. All the axes are approximately of the same dimensions, with curved and irregular faces. These are found commonly in cultivated fields.
Crumb structure: small, porous aggregates resembling crumbs of bread
Granular structure: less porous and more durable than crumb-structure aggregates
Plate-like structure: aggregates arranged in relatively thin, horizontally aligned units, often along rooting zones; the thin units are termed laminar and the thicker units platy. Platy structures are usually found in surface soils and sometimes in the lower subsoils.
Block-like structure: particles that are arranged around a central point are enclosed by surfaces that may be either flat or somewhat rounded. These types are generally found in subsoil.
Sub angular blocky: corners are more rounded than the angular blocky aggregates
Prism-like structure: particles longer than they are wide, with the vertical axis greater than the horizontal axis. They are commonly found in the subsoil horizons of arid and semi-arid region soils.
Prismatic: more angular and hexagonal at the top of the aggregate
Columnar: particles that are rounded at the top of the aggregate
Minerals
The mineral components of the soil are derived from the parental rocks or regolith. Minerals make up about 90% of the total weight of the soil. Some important elements, found there in compound form, are oxygen, iron, silicon, aluminium, nitrogen, phosphorus, potassium, calcium, magnesium, carbon and hydrogen.
The distinction between primary minerals (inherited from the parent rock) and secondary minerals (formed subsequently by weathering) helps define which minerals are present in a soil and how they relate to the original rock composition.
Soil pores
The interactions of the soil's micropores and macropores are important to soil chemistry, as they allow for the provision of water and gaseous elements to the soil and the surrounding atmosphere. Macropores help transport molecules and substances in and out of the micropores. Micropores are contained within the aggregates themselves.
Soil water
Water is essential for organisms within the soil profile, and it partially fills up the macropores in an ideal soil.
Leaching occurs as percolating water carries dissolved ions deeper into the lower soil horizons, altering the chemistry and oxidation state of the horizons it passes through.
Water also moves from a higher water potential to a lower water potential; this can produce capillary action, driven by adhesion of water to soil surfaces and cohesion among water molecules, as well as downward drainage under gravity.
Air/Atmosphere
Soil air contains the same three main gases as the above-ground atmosphere: oxygen, carbon dioxide (CO2) and nitrogen, but not in the same proportions. In soil air, oxygen is roughly 20% and nitrogen roughly 79% by volume, while CO2 ranges from about 0.15% to 0.65%, well above the level found in the open atmosphere. CO2 increases with depth in the soil because of the decomposition of accumulated organic matter and the abundance of plant roots. The presence of oxygen in the soil is important because it helps break down insoluble rocky material into soluble minerals and drives the humification of organic matter. These gases facilitate chemical reactions carried out by microorganisms. Accumulation of soluble nutrients in the soil makes it more productive. If the soil is deficient in oxygen, microbial activity is slowed down or eliminated. Important factors controlling the soil atmosphere are temperature, atmospheric pressure, wind/aeration and rainfall.
Soil texture
Soil texture influences soil chemistry through the soil's ability to maintain its structure, the restriction of water flow and the composition of particles in the soil. Soil texture considers all particle types, and a soil texture triangle is a chart used to determine the textural class from the percentages of sand, silt and clay, which together total 100% for the soil profile. These soil separates differ not only in their sizes but also in their bearing on some of the important factors affecting plant growth, such as soil aeration, workability, and the movement and availability of water and nutrients.
Sand
Sand particles range in size from about 0.05 to 2 mm. Sand is the coarsest of the particle groups, with the largest particles and pore spaces, and it drains most easily. Sand particles become more involved in chemical reactions when coated with clay.
Silt
Silt particles range in size from about 0.002 to 0.05 mm. Silt pores are intermediate in size compared with the other particle groups. Silt has the texture consistency of flour. Silt particles allow water and air to pass readily, yet retain moisture for crop growth. Silty soil contains sufficient quantities of nutrients, both organic and inorganic.
Clay
Clay has the smallest particles (less than about 0.002 mm) of the particle groups. Clay also has the smallest pores, which give it a greater total porosity, and it does not drain well. Clay has a sticky texture when wet. Some kinds can shrink and swell with changing water content.
Loam
Loam is a soil texture class that combines sand, silt and clay. Loams are named for the dominant particles in the composition, e.g. sandy loam, clay loam, silt loam.
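The particle-size boundaries quoted above lend themselves to a simple classification rule. The sketch below is a minimal, illustrative Python example, not part of any standard soil-science library; the function name and the gravel cut-off at 2 mm are assumptions made for illustration, and the ranges are the approximate ones given in this section.

```python
# Minimal illustrative sketch: classify a soil separate by particle diameter,
# using the approximate size ranges quoted in the text above.
# Function and variable names are hypothetical, not from a standard library.

def classify_particle(diameter_mm: float) -> str:
    """Return the soil separate for a particle of the given diameter (mm)."""
    if diameter_mm >= 2.0:
        return "gravel (coarser than a soil separate)"
    elif diameter_mm >= 0.05:
        return "sand"   # about 0.05-2 mm
    elif diameter_mm >= 0.002:
        return "silt"   # about 0.002-0.05 mm
    else:
        return "clay"   # below about 0.002 mm

if __name__ == "__main__":
    for d in (1.0, 0.03, 0.001):
        print(f"{d} mm -> {classify_particle(d)}")
```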
Biota
Biota are organisms that, along with organic matter, help comprise the biological system of the soil. The vast majority of biological activity takes place near the soil surface, usually in the A horizon of a soil profile. Biota rely on inputs of organic matter in order to sustain themselves and increase population sizes. In return, they contribute nutrients to the soil, typically after it has been cycled in the soil trophic food web.
With the many different interactions that take place, biota can largely impact their environment physically, chemically, and biologically (Pavao-Zuckerman, 2008). A prominent factor that helps to provide some degree of stability with these interactions is biodiversity, a key component of all ecological communities. Biodiversity allows for a consistent flow of energy through trophic levels and strongly influences the structure of ecological communities in the soil.
Soil organisms
Types of living soil biota can be divided into categories of plants (flora), animals (fauna), and microorganisms. Plants play a role in soil chemistry by exchanging nutrients with microorganisms and absorbing nutrients, creating concentration gradients of cations and anions. In addition to this, the differences in water potential created by plants influence water movement in soil, which affects the form and transportation of various particles. Vegetative cover on the soil surface greatly reduces erosion, which in turn prevents compaction and helps to maintain aeration in the soil pore space, providing oxygen and carbon to the biota and cation exchange sites that depend on it (Peri et al., 2022).
Animals are essential to soil chemistry, as they regulate the cycling of nutrients and energy into different forms. This is primarily done through food webs. Some types of soil animals can be found below.
Detritivores
Examples include millipedes, woodlice, and dung beetles
Decomposers
Examples include fungi, earthworms, and bacteria
Protozoans
Examples include amoeba, euglena, and paramecium
Soil microbes play a major role in a multitude of biological and chemical activities that take place in soil. These microorganisms are said to make up around 1,000–10,000 kg of biomass per hectare in some soils (García-Sánchez, 2016). They are mostly recognized for their association with plants. The most well-known example of this is mycorrhizae, which exchange carbon for nitrogen with plant roots in a symbiotic relationship. Additionally, microbes are responsible for the majority of respiration that takes place in the soil, which has implications for the release of gases like methane and nitrous oxide from soil (giving it significance in discussion of climate change) (Frouz et al., 2020). Given the significance of the effects of microbes on their environment, the conservation and promotion of microbial life is often desired by many plant growers, conservationists, and ecologists.
Soil organic matter
Soil organic matter is the largest source of nutrients and energy in a soil. Its inputs strongly influence key soil factors such as types of biota, pH, and even soil order. Soil organic matter is often strategically applied by plant growers because of its ability to improve soil structure, supply nutrients, manage pH, increase water retention, and regulate soil temperature (which directly affects water dynamics and biota).
The chief elements found in humus, the product of organic matter decomposition in soil, are carbon, hydrogen, oxygen, sulphur and nitrogen. The important compounds found in humus include carbohydrates, phosphoric acid, some organic acids, resins and urea. Humus is a dynamic product and is constantly changing because of its oxidation, reduction and hydrolysis; hence it is rich in carbon and comparatively poor in nitrogen. This material can come from a variety of sources, but often derives from livestock manure and plant residues.
Though there are many other variables, such as texture, soils that lack sufficient organic matter content are susceptible to soil degradation and drying, as there is nothing supporting the soil structure. This often leads to a decline in soil fertility and an increase in erodibility.
Other associated concepts:
Anion and cation exchange capacity
Soil pH
Mineral formation and transformation processes and pedogenesis
Clay mineralogy
Sorption and precipitation reactions in soil
Chemistry of problem soils
C/N ratio
Erosion and soil degradation
Soil cycle
Many plant nutrients in soil undergo biogeochemical cycles throughout their environment. These cycles are influenced by water, gas exchange, biological activity, immobilization, and mineralization dynamics, but each element has its own course of flow (Deemy et al., 2022). For example, nitrogen moves from an isolated gaseous form to the compounds nitrate and nitrite as it moves through soil and becomes available to plants. In comparison, an element like phosphorus transfers in mineral form, as it is contained in rock material. The elements also vary greatly in mobility, solubility, and the rate at which they move through their natural cycles. Together, these cycles drive all of the processes of soil chemistry.
Elemental cycles
Carbon
Hydrogen
Oxygen
Nitrogen
Phosphorus
Potassium
Sulfur
Calcium
Magnesium
Iron
Boron
Manganese
Copper
Zinc
Nickel
Chlorine
Methods of investigation
New knowledge about the chemistry of soils often comes from studies in the laboratory, in which soil samples taken from undisturbed soil horizons in the field are used in experiments that include replicated treatments and controls. In many cases, the soil samples are air dried at ambient temperatures and sieved to a 2 mm size prior to storage for further study. Such drying and sieving markedly disrupt soil structure, microbial population diversity, and chemical properties related to pH, oxidation-reduction status, manganese oxidation state, and dissolved organic matter, among other properties. Renewed interest in recent decades has led many soil chemists to maintain soil samples in a field-moist condition, stored under aerobic conditions, before and during investigations.
Two approaches are frequently used in laboratory investigations in soil chemistry. The first is known as batch equilibration. The chemist adds a given volume of water or salt solution of known concentration of dissolved ions to a mass of soil (e.g., 25 mL of solution to 5 g of soil in a centrifuge tube or flask). The soil slurry then is shaken or swirled for a given amount of time (e.g., 15 minutes to many hours) to establish a steady state or equilibrium condition prior to filtering or centrifuging at high speed to separate sand grains, silt particles, and clay colloids from the equilibrated solution. The filtrate or centrifugate then is analyzed using one of several methods, including ion specific electrodes, atomic absorption spectrophotometry, inductively coupled plasma spectrometry, ion chromatography, and colorimetric methods. In each case, the analysis quantifies the concentration or activity of an ion or molecule in the solution phase, and by multiplying the measured concentration or activity (e.g., in mg ion/mL) by the solution-to-soil ratio (mL of extraction solution/g soil), the chemist obtains the result in mg ion/g soil. This result based on the mass of soil allows comparisons between different soils and treatments. A related approach uses a known volume of solution to leach (infiltrate) the extracting solution through a quantity of soil in small columns at a controlled rate to simulate how rain, snow meltwater, and irrigation water pass through soils in the field. The filtrate then is analyzed using the same methods as used in batch equilibrations.
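The unit conversion at the end of a batch equilibration is simple arithmetic, and a short sketch can make it concrete. The following minimal Python example uses hypothetical function and variable names (it is not from any analytical-chemistry package) and an invented concentration value; it multiplies the measured filtrate concentration by the solution-to-soil ratio to express the result per gram of soil, mirroring the 25 mL : 5 g example above.

```python
# Minimal sketch of the batch-equilibration arithmetic described above:
# (mg ion / mL solution) * (mL solution / g soil) = mg ion / g soil.
# Names and the example concentration are illustrative only.

def extractable_ion_per_gram(concentration_mg_per_ml: float,
                             solution_volume_ml: float,
                             soil_mass_g: float) -> float:
    """mg of ion extracted per gram of soil."""
    solution_to_soil_ratio = solution_volume_ml / soil_mass_g   # mL/g
    return concentration_mg_per_ml * solution_to_soil_ratio     # mg/g

if __name__ == "__main__":
    # e.g. 25 mL of extracting solution shaken with 5 g of soil,
    # with the filtrate analysed at an assumed 0.12 mg/mL of the ion of interest
    result = extractable_ion_per_gram(0.12, 25.0, 5.0)
    print(f"{result:.2f} mg ion per g soil")   # 0.60 mg/g
```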
Another approach to quantifying soil processes and phenomena uses in situ methods that do not disrupt the soil, as occurs when the soil is shaken or leached with an extracting solution. These methods usually use surface spectroscopic techniques, such as Fourier transform infrared spectroscopy, nuclear magnetic resonance, Mössbauer spectroscopy, and X-ray spectroscopy. These approaches aim to obtain information on the chemical nature of the mineralogy and chemistry of particle and colloid surfaces, and how ions and molecules are associated with such surfaces by adsorption, complexation, and precipitation.
These laboratory experiments and analyses have an advantage over field studies in that chemical mechanisms on how ions and molecules react in soils can be inferred from the data. One can draw conclusions or frame new hypotheses on similar reactions in different soils with diverse textures, organic matter contents, types of clay minerals and oxides, pH, and drainage condition. Laboratory studies have the disadvantage that they lose some of the realism and heterogeneity of undisturbed soil in the field, while gaining control and the power of extrapolation to unstudied soil. Mechanistic laboratory studies combined with more realistic, less controlled, observational field studies often yield accurate approximations of the behavior and chemistry of the soils that may be spatially heterogeneous and temporally variable. Another challenge faced by soil chemists is how microbial populations and enzyme activity in field soils may be changed when the soil is disturbed, both in the field and laboratory, particularly when soils samples are dried prior to laboratory studies and analysis.
References
Sonon, L.S., M.A. Chappell and V.P. Evangelou (2000) The History of Soil Chemistry. Url accessed on 2006-04-11.
External links | 0.784686 | 0.972168 | 0.762846 |
Idiosyncratic drug reaction | Idiosyncratic drug reactions, also known as type B reactions, are drug reactions that occur rarely and unpredictably amongst the population. This is not to be mistaken with idiopathic, which implies that the cause is not known. They frequently occur with exposure to new drugs, as they have not been fully tested and the full range of possible side-effects have not been discovered; they may also be listed as an adverse drug reaction with a drug, but are extremely rare. Some patients have multiple-drug intolerance. Patients who have multiple idiopathic effects that are nonspecific are more likely to have anxiety and depression. Idiosyncratic drug reactions appear to not be concentration dependent. A minimal amount of drug will cause an immune response, but it is suspected that at a low enough concentration, a drug will be less likely to initiate an immune response.
Mechanism
In adverse drug reactions involving overdoses, the toxic effect is simply an extension of the pharmacological effect (Type A adverse drug reactions). On the other hand, clinical symptoms of idiosyncratic drug reactions (Type B adverse drug reactions) are different from the pharmacological effect of the drug.
The proposed mechanism of most idiosyncratic drug reactions is immune-mediated toxicity. To create an immune response, a foreign molecule must be present that antibodies can bind to (i.e. the antigen), and cellular damage must exist. Very often, drugs will not be immunogenic because they are too small to induce an immune response. However, a drug can cause an immune response if it binds to a larger molecule. Some unaltered drugs, such as penicillin, will bind avidly to proteins. Others must be bioactivated into a toxic compound that will in turn bind to proteins. The second criterion of cellular damage can come either from a toxic drug/drug metabolite, or from an injury or infection.
These will sensitize the immune system to the drug and cause a response.
Idiosyncratic reactions fall conventionally under toxicology.
See also
Idiosyncrasy
References
External links
Medical terminology
Pharmacy
Clinical pharmacology | 0.780193 | 0.977755 | 0.762838 |
Prosthetic group | A prosthetic group is the non-amino acid component that is part of the structure of the heteroproteins or conjugated proteins, being tightly linked to the apoprotein.
It is not to be confused with a cosubstrate: a non-protein (non-amino acid) component that binds to the apoenzyme only transiently and by non-covalent interactions, in contrast to the tight binding of a prosthetic group.
This is a component of a conjugated protein that is required for the protein's biological activity. The prosthetic group may be organic (such as a vitamin, sugar, RNA, phosphate or lipid) or inorganic (such as a metal ion). Prosthetic groups are bound tightly to proteins and may even be attached through a covalent bond. They often play an important role in enzyme catalysis. A protein without its prosthetic group is called an apoprotein, while a protein combined with its prosthetic group is called a holoprotein. A non-covalently bound prosthetic group generally cannot be removed from the holoprotein without denaturing the protein. Thus, the term "prosthetic group" is a very general one and its main emphasis is on the tight character of its binding to the apoprotein. It defines a structural property, in contrast to the term "coenzyme", which defines a functional property.
Prosthetic groups are a subset of cofactors. Loosely bound metal ions and coenzymes are still cofactors, but are generally not called prosthetic groups. In enzymes, prosthetic groups are involved in the catalytic mechanism and required for activity. Other prosthetic groups have structural properties. This is the case for the sugar and lipid moieties in glycoproteins and lipoproteins or RNA in ribosomes. They can be very large, representing the major part of the protein in proteoglycans for instance.
The heme group in hemoglobin is a prosthetic group. Further examples of organic prosthetic groups are vitamin derivatives: thiamine pyrophosphate, pyridoxal-phosphate and biotin. Since prosthetic groups are often vitamins or made from vitamins, this is one of the reasons why vitamins are required in the human diet. Inorganic prosthetic groups are usually transition metal ions such as iron (in heme groups, for example in cytochrome c oxidase and hemoglobin), zinc (for example in carbonic anhydrase), copper (for example in complex IV of the respiratory chain) and molybdenum (for example in nitrate reductase).
List of prosthetic groups
The table below contains a list of some of the most common prosthetic groups.
References
External links
Cofactors PowerPoint lecture
Cofactors
Engineering mathematics | Mathematical engineering (or engineering mathematics) is a branch of applied mathematics, concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary education typically consists of mathematical methods and models courses.
See also
Industrial mathematics
Control theory, a mathematical discipline concerned with engineering
Further mathematics and additional mathematics, A-level mathematics courses with similar content
Mathematical methods in electronics, signal processing and radio engineering
References
Applied mathematics | 0.769312 | 0.991582 | 0.762836 |
Metatheory | A metatheory or meta-theory is a theory on a subject matter that is a theory in itself. Analyses or descriptions of an existing theory would be considered meta-theories. If the subject matter of a theoretical statement consists of one or multiple theories, it would also be called a meta-theory. For mathematics and mathematical logic, a metatheory is a mathematical theory about another mathematical theory. Meta-theoretical investigations are part of the philosophy of science. The topic of metascience is an attempt to use scientific knowledge to improve the practice of science itself.
The study of metatheory became widespread during the 20th century after its application to various topics, including scientific linguistics and its concept of metalanguage.
Examples of metatheories
Metascience
Metascience is the use of scientific method to study science itself. Metascience is an attempt to increase the quality of scientific research while reducing wasted activity; it uses research methods to study how research is done or can be improved. It has been described as "research on research", "the science of science", and "a bird's eye view of science". In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."
In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten well-known medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid". Meta-research during the ensuing decades found many methodological flaws, inefficiencies, and bad practices in the research of numerous scientific topics. Many scientific studies could not be reproduced, particularly those involving medicine and the so-called soft sciences. The term "replication crisis" was coined in the early 2010s as part of an increasing awareness of the problem.
Measures have been implemented to address the issues revealed by metascience. These measures include the pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methods and reporting. There are continuing efforts to reduce the misuse of statistics, to eliminate perverse incentives from academia, to improve the peer review process, to reduce bias in scientific literature, and to increase the overall quality and efficiency of the scientific process.
A major criticism of metatheory is that it is theory based on other theory.
Metamathematics
The concept was introduced into 20th-century philosophy as a result of the work of the German mathematician David Hilbert, who in 1905 published a proposal for a proof of the consistency and completeness of mathematics, creating the field of metamathematics. His hopes for the success of this proof were disappointed by the work of Kurt Gödel, who in 1931 used his incompleteness theorems to prove the goal of consistency and completeness to be unattainable. Nevertheless, his program of unsolved mathematical problems influenced mathematics for the rest of the 20th century.
A metatheorem is defined as: "a statement about theorems. It usually gives a criterion for getting a new theorem from an old one, either by changing its objects according to a rule" known as the duality principle, or by transferring it to another topic (e.g., from the theory of categories to the theory of groups) or to another context of the same topic (e.g., from linear transformations to matrices).
Metalogic
Metalogic is the study of the metatheory of logic. Whereas logic is the study of how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems. Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths. The basic objects of metalogical study are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the type of mathematical logic that is known as model theory, and the study of deductive systems is the type that is known as proof theory.
Metaphilosophy
Metaphilosophy is "the investigation of the nature of philosophy". Its subject matter includes the aims of philosophy, the boundaries of philosophy, and its methods. Thus, while philosophy characteristically inquires into the nature of being, the reality of objects, the possibility of knowledge, the nature of truth, and so on, metaphilosophy is the self-referential inquiry into the nature, purposes, and methods of the activity that makes these kinds of inquiries, by asking what philosophy itself is, what sorts of questions it should ask, how it might pose and answer them, and what it can achieve in doing so. It is considered by some to be a subject prior and preparatory to philosophy, while others see it as inherently a part of philosophy, and still others adopt some combination of these views.
Sociology of sociology
The sociology of sociology is a topic of sociology that combines social theories with analysis of the effect of socio-historical contexts in sociological intellectual production.
See also
Meta-emotion
Metacognition
Metahistory (concept)
Metaknowledge
Metalanguage
Metalearning
Metamemory
Meta-ontology
Philosophy of social science
References
External links
Meta-theoretical Issues (2003), Lyle Flint
Metaphilosophy | 0.776172 | 0.982801 | 0.762823 |
Homocysteine | Homocysteine or Hcy: is a non-proteinogenic α-amino acid. It is a homologue of the amino acid cysteine, differing by an additional methylene bridge (-CH2-). It is biosynthesized from methionine by the removal of its terminal Cε methyl group. In the body, homocysteine can be recycled into methionine or converted into cysteine with the aid of vitamin B6, B9, and B12.
A high level of homocysteine in the blood (hyperhomocysteinemia) is regarded as a marker of cardiovascular disease, likely working through atherogenesis, which can result in ischemic injury. Therefore, hyperhomocysteinemia is a possible risk factor for coronary artery disease. Coronary artery disease occurs when an atherosclerotic plaque blocks blood flow to the coronary arteries, which supply the heart with oxygenated blood.
Hyperhomocysteinemia has been correlated with the occurrence of blood clots, heart attacks, and strokes, although it is unclear whether hyperhomocysteinemia is an independent risk factor for these conditions. Hyperhomocysteinemia also has been associated with early-term spontaneous abortions and with neural tube defects.
Structure
Homocysteine exists at neutral pH values as a zwitterion.
Biosynthesis and biochemical roles
Homocysteine is biosynthesized naturally via a multi-step process. First, methionine receives an adenosine group from ATP, a reaction catalyzed by S-adenosyl-methionine synthetase, to give S-adenosyl methionine (SAM-e). SAM-e then transfers the methyl group to an acceptor molecule, (e.g., norepinephrine as an acceptor during epinephrine synthesis, DNA methyltransferase as an intermediate acceptor in the process of DNA methylation). The adenosine is then hydrolyzed to yield L-homocysteine. L-Homocysteine has two primary fates: conversion via tetrahydrofolate (THF) back into L-methionine or conversion to L-cysteine.
Biosynthesis of cysteine
Mammals biosynthesize the amino acid cysteine via homocysteine. Cystathionine β-synthase catalyses the condensation of homocysteine and serine to give cystathionine. This reaction uses pyridoxine (vitamin B6) as a cofactor. Cystathionine γ-lyase then converts this double amino acid to cysteine, ammonia, and α-ketobutyrate. Bacteria and plants rely on a different pathway to produce cysteine, relying on O-acetylserine.
Methionine salvage
Homocysteine can be recycled into methionine. This process uses N5-methyl tetrahydrofolate as the methyl donor and cobalamin (vitamin B12)-related enzymes. More detail on these enzymes can be found in the article for methionine synthase.
Other reactions of biochemical significance
Homocysteine can cyclize to give homocysteine thiolactone, a five-membered heterocycle. Because of this "self-looping" reaction, homocysteine-containing peptides tend to cleave themselves by reactions generating oxidative stress.
Homocysteine also acts as an allosteric antagonist at Dopamine D2 receptors.
It has been proposed that both homocysteine and its thiolactone may have played a significant role in the appearance of life on the early Earth.
Homocysteine levels
Homocysteine levels typically are higher in men than women, and increase with age.
Common levels in Western populations are 10 to 12 μmol/L, and levels of 20 μmol/L are found in populations with low B-vitamin intakes or in the elderly (e.g., Rotterdam, Framingham).
It is decreased with methyl folate trapping, where it is accompanied by decreased methylmalonic acid, increased folate, and a decrease in formiminoglutamic acid. This is the opposite of MTHFR C677T mutations, which result in an increase in homocysteine.
The ranges above are provided as examples only; test results always should be interpreted using the range provided by the laboratory that produced the result.
Elevated homocysteine
Abnormally high levels of homocysteine in the serum, above 15 μmol/L, constitute a medical condition called hyperhomocysteinemia. This has been claimed to be a significant risk factor for the development of a wide range of diseases, more than 100 in total, including thrombosis, neuropsychiatric illness (in particular dementia), and fractures. It has also been found to be associated with microalbuminuria, which is a strong indicator of the risk of future cardiovascular disease and renal dysfunction. Vitamin B12 deficiency, even when coupled with high serum folate levels, has been found to increase overall homocysteine concentrations as well.
Typically, hyperhomocysteinemia is managed with vitamin B6, vitamin B9, and vitamin B12 supplementation. However, supplementation with these vitamins does not appear to improve cardiovascular disease outcomes.
References
External links
Homocysteine MS Spectrum
Homocysteine at Lab Tests Online
Prof. David Spence on homocysteine levels, kidney damage, and cardiovascular disease, The Health Report, Radio National, 24 May 2010
Alpha-Amino acids
Sulfur amino acids
Thiols
Non-proteinogenic amino acids
Excitatory amino acids | 0.766478 | 0.995222 | 0.762816 |
Deprotonation | Deprotonation (or dehydronation) is the removal (transfer) of a proton (or hydron, or hydrogen cation), (H+) from a Brønsted–Lowry acid in an acid–base reaction. The species formed is the conjugate base of that acid. The complementary process, when a proton is added (transferred) to a Brønsted–Lowry base, is protonation (or hydronation). The species formed is the conjugate acid of that base.
A species that can either accept or donate a proton is referred to as amphiprotic. An example is the H2O (water) molecule, which can gain a proton to form the hydronium ion, H3O+, or lose a proton, leaving the hydroxide ion, OH−.
The relative ability of a molecule to give up a proton is measured by its pKa value. A low pKa value indicates that the compound is acidic and will easily give up its proton to a base. The pKa of a compound is determined by many aspects, but the most significant is the stability of the conjugate base. This is primarily determined by the ability (or inability) of the conjugate base to stabilize negative charge. One of the most important ways of assessing a conjugate base's ability to distribute negative charge is using resonance. Electron-withdrawing groups (which can stabilize the conjugate base by increasing charge distribution) or electron-donating groups (which destabilize it by decreasing charge distribution) present on a molecule also determine its pKa. The solvent used can also assist in the stabilization of the negative charge on the conjugate base.
Bases used to deprotonate depend on the pKa of the compound. When the compound is not particularly acidic and does not give up its proton easily, a base stronger than the commonly used hydroxides is required. Hydrides are one of the many types of powerful deprotonating agents; common examples are sodium hydride and potassium hydride. The hydride combines with the liberated proton from the other molecule to form hydrogen gas. This hydrogen is flammable and could ignite with the oxygen in the air, so the procedure should be done in an inert atmosphere (e.g., nitrogen).
Deprotonation can be an important step in a chemical reaction. Acid–base reactions typically occur faster than any other step which may determine the product of a reaction. The conjugate base is more electron-rich than the molecule which can alter the reactivity of the molecule. For example, deprotonation of an alcohol forms the negatively charged alkoxide, which is a much stronger nucleophile.
To determine whether or not a given base will be sufficient to deprotonate a specific acid, compare the conjugate base with the original base. A conjugate base is formed when the acid is deprotonated by the base. In the image above, hydroxide acts as a base to deprotonate the carboxylic acid, and the conjugate base is the carboxylate salt. In this case, hydroxide is a strong enough base to deprotonate the carboxylic acid because the conjugate base is more stable, its negative charge being delocalized over two electronegative atoms rather than one. Using pKa values, the carboxylic acid is approximately 4 and its conjugate acid, water, is 15.7. Because acids with higher pKa values are less likely to donate their protons, the equilibrium favors their formation, so the side of the equation with water is formed preferentially. If, for example, water instead of hydroxide were used to deprotonate the carboxylic acid, the equilibrium would not favor the formation of the carboxylate salt. This is because the conjugate acid, hydronium, has a pKa of −1.74, which is lower than that of the carboxylic acid; in this case, equilibrium favors the carboxylic acid.
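The pKa comparison in the preceding paragraph can be expressed as a one-line calculation: for a proton transfer HA + B− ⇌ A− + HB, the equilibrium constant is roughly 10 raised to the difference between the pKa of the product-side acid and that of the reactant acid. The minimal Python sketch below applies this to the pKa values cited above (carboxylic acid ≈ 4, water 15.7, hydronium −1.74); the function name is illustrative, not from any chemistry library.

```python
# Minimal sketch: for HA + B-  <=>  A- + HB,
# Keq = Ka(HA)/Ka(HB) = 10**(pKa(HB) - pKa(HA)),
# so equilibrium favours the side bearing the weaker (higher-pKa) acid.
# The pKa values are those quoted in the text; the function name is illustrative.

def proton_transfer_keq(pka_reactant_acid: float, pka_product_acid: float) -> float:
    return 10.0 ** (pka_product_acid - pka_reactant_acid)

if __name__ == "__main__":
    # carboxylic acid (pKa ~ 4) deprotonated by hydroxide (conjugate acid: water, pKa 15.7)
    print(f"with hydroxide: Keq ~ {proton_transfer_keq(4.0, 15.7):.1e}")   # >> 1, carboxylate favoured
    # carboxylic acid (pKa ~ 4) deprotonated by water (conjugate acid: hydronium, pKa -1.74)
    print(f"with water:     Keq ~ {proton_transfer_keq(4.0, -1.74):.1e}")  # << 1, carboxylic acid favoured
```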
References
Acid–base chemistry
Chemical reactions
Reaction mechanisms | 0.775892 | 0.983132 | 0.762804 |
Rearrangement reaction | In organic chemistry, a rearrangement reaction is a broad class of organic reactions where the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. Often a substituent moves from one atom to another atom in the same molecule, hence these reactions are usually intramolecular. In the example below, the substituent R moves from carbon atom 1 to carbon atom 2:
Intermolecular rearrangements also take place.
A rearrangement is not well represented by simple and discrete electron transfers (represented by curved arrows in organic chemistry texts). The actual mechanism of alkyl groups moving, as in the Wagner–Meerwein rearrangement, probably involves transfer of the moving alkyl group fluidly along a bond, not ionic bond-breaking and forming. In pericyclic reactions, explanation in terms of orbital interactions gives a better picture than simple discrete electron transfers. It is, nevertheless, possible to draw curved arrows for a sequence of discrete electron transfers that give the same result as a rearrangement reaction, although these are not necessarily realistic. In allylic rearrangement, the reaction is indeed ionic.
Three key rearrangement reactions are 1,2-rearrangements, pericyclic reactions and olefin metathesis.
1,2-rearrangements
A 1,2-rearrangement is an organic reaction where a substituent moves from one atom to another atom in a chemical compound. In a 1,2-shift the movement involves two adjacent atoms, but shifts over larger distances are possible. Skeletal isomerization is not normally encountered in the laboratory, but is the basis of large applications in oil refineries. In general, straight-chain alkanes are converted to branched isomers by heating in the presence of a catalyst. Examples include the isomerisation of n-butane to isobutane and of pentane to isopentane. Highly branched alkanes have favorable combustion characteristics for internal combustion engines.
Further examples are the Wagner–Meerwein rearrangement:
and the Beckmann rearrangement, which is relevant to the production of certain nylons:
Pericyclic reactions
A pericyclic reaction is a type of reaction with multiple carbon–carbon bond making and breaking wherein the transition state of the molecule has a cyclic geometry, and the reaction progresses in a concerted fashion. Examples are hydride shifts
and the Claisen rearrangement:
Olefin metathesis
Olefin metathesis is a formal exchange of the alkylidene fragments in two alkenes. It is a catalytic reaction with carbene, or more accurately, transition metal carbene complex intermediates.
In this example (ethenolysis), a pair of vinyl compounds forms a new symmetrical alkene with the expulsion of ethylene.
Other rearrangement reactions
1,3-rearrangements
1,3-rearrangements take place over 3 carbon atoms. Examples:
the Fries rearrangement
a 1,3-alkyl shift of verbenone to chrysanthenone
See also
Beckmann rearrangement
Curtius rearrangement
Hofmann rearrangement
Lossen rearrangement
Schmidt reaction
Tiemann rearrangement
Wolff rearrangement
Photochemical rearrangements
Thermal rearrangement of aromatic hydrocarbons
Mumm rearrangement
References | 0.776846 | 0.981917 | 0.762798 |
Isoelectric focusing | Isoelectric focusing (IEF), also known as electrofocusing, is a technique for separating different molecules by differences in their isoelectric point (pI). It is a type of zone electrophoresis usually performed on proteins in a gel that takes advantage of the fact that overall charge on the molecule of interest is a function of the pH of its surroundings.
Procedure
IEF involves adding an ampholyte solution into immobilized pH gradient (IPG) gels. IPGs are acrylamide gel matrices co-polymerized with the pH gradient, which results in gradients that are completely stable except at the most alkaline (>12) pH values. The immobilized pH gradient is obtained by the continuous change in the ratio of immobilines. An immobiline is a weak acid or base defined by its pK value.
A protein that is in a pH region below its isoelectric point (pI) will be positively charged and so will migrate toward the cathode (negatively charged electrode). As it migrates through a gradient of increasing pH, however, the protein's overall charge will decrease until the protein reaches the pH region that corresponds to its pI. At this point it has no net charge and so migration ceases (as there is no electrical attraction toward either electrode). As a result, the proteins become focused into sharp stationary bands with each protein positioned at a point in the pH gradient corresponding to its pI. The technique is capable of extremely high resolution with proteins differing by a single charge being fractionated into separate bands.
Molecules to be focused are distributed over a medium that has a pH gradient (usually created by aliphatic ampholytes). An electric current is passed through the medium, creating a "positive" anode and "negative" cathode end. Negatively charged molecules migrate through the pH gradient in the medium toward the "positive" end while positively charged molecules move toward the "negative" end. As a particle moves toward the pole opposite of its charge it moves through the changing pH gradient until it reaches a point in which the pH of that molecule's isoelectric point is reached. At this point the molecule no longer has a net electric charge (due to the protonation or deprotonation of the associated functional groups) and as such will not proceed any further within the gel. The gradient is established before adding the particles of interest by first subjecting a solution of small molecules such as polyampholytes with varying pI values to electrophoresis.
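The focusing principle, zero net charge at the pI, can be illustrated numerically. The sketch below is a minimal Python illustration that estimates net charge as a function of pH from Henderson–Hasselbalch terms for a set of ionizable groups, then locates the pH at which the charge crosses zero by bisection; the group counts and pKa values are invented, textbook-style numbers rather than data for any real protein.

```python
def net_charge(ph, basic_groups, acidic_groups):
    """Net protein charge at a given pH from Henderson-Hasselbalch terms.
    basic_groups and acidic_groups are lists of (count, pKa) pairs."""
    positive = sum(n / (1 + 10 ** (ph - pka)) for n, pka in basic_groups)
    negative = sum(-n / (1 + 10 ** (pka - ph)) for n, pka in acidic_groups)
    return positive + negative

def isoelectric_point(basic_groups, acidic_groups, lo=0.0, hi=14.0, tol=1e-4):
    """Bisection search for the pH at which the net charge is zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(mid, basic_groups, acidic_groups) > 0:
            lo = mid   # still positively charged, so the pI lies at a higher pH
        else:
            hi = mid
    return (lo + hi) / 2

# Invented example composition: (count, pKa) for N-terminus, Lys, Arg (basic)
# and C-terminus, Asp, Glu (acidic), using typical textbook pKa values.
basic = [(1, 9.0), (10, 10.5), (5, 12.5)]
acidic = [(1, 3.1), (8, 3.9), (7, 4.1)]
print(round(isoelectric_point(basic, acidic), 2))   # pI of this illustrative mixture
```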
The method is applied particularly often in the study of proteins, which separate based on their relative content of acidic and basic residues, whose value is represented by the pI. Proteins are introduced into an immobilized pH gradient gel composed of polyacrylamide, starch, or agarose where a pH gradient has been established. Gels with large pores are usually used in this process to eliminate any "sieving" effects, or artifacts in the pI caused by differing migration rates for proteins of differing sizes. Isoelectric focusing can resolve proteins that differ in pI value by as little as 0.01. Isoelectric focusing is the first step in two-dimensional gel electrophoresis, in which proteins are first separated by their pI value and then further separated by molecular weight through SDS-PAGE. Isoelectric focusing, on the other hand, is the only step in preparative native PAGE at constant pH.
Living cells
According to some opinions, living eukaryotic cells perform isoelectric focusing of proteins in their interior to overcome a limitation of the rate of metabolic reaction by diffusion of enzymes and their reactants, and to regulate the rate of particular biochemical processes. By concentrating the enzymes of particular metabolic pathways into distinct and small regions of its interior, the cell can increase the rate of particular biochemical pathways by several orders of magnitude. By modification of the isoelectric point (pI) of molecules of an enzyme by, e.g., phosphorylation or dephosphorylation, the cell can transfer molecules of the enzyme between different parts of its interior, to switch on or switch off particular biochemical processes.
Microfluidic chip based
Microchip based electrophoresis is a promising alternative to capillary electrophoresis since it has the potential to provide rapid protein analysis, straightforward integration with other microfluidic unit operations, whole channel detection, nitrocellulose films, smaller sample sizes and lower fabrication costs.
Multi-junction
The increased demand for faster and easy-to-use protein separation tools has accelerated the evolution of IEF towards in-solution separations. In this context, a multi-junction IEF system was developed to perform fast and gel-free IEF separations. The multi-junction IEF system utilizes a series of vessels with a capillary passing through each vessel. Part of the capillary in each vessel is replaced by a semipermeable membrane. The vessels contain buffer solutions with different pH values, so that a pH gradient is effectively established inside the capillary. The buffer solution in each vessel has an electrical contact with a voltage divider connected to a high-voltage power supply, which establishes an electrical field along the capillary. When a sample (a mixture of peptides or proteins) is injected in the capillary, the presence of the electrical field and the pH gradient separates these molecules according to their isoelectric points. The multi-junction IEF system has been used to separate tryptic peptide mixtures for two-dimensional proteomics and blood plasma proteins from Alzheimer's disease patients for biomarker discovery.
References
Electrophoresis
Industrial processes
Protein methods
Molecular biology techniques | 0.775597 | 0.983494 | 0.762795 |
Unique Ingredient Identifier | The Unique Ingredient Identifier (UNII) is an alphanumeric identifier linked to a substance's molecular structure or descriptive information and is generated by the Global Substance Registration System (GSRS) of the Food and Drug Administration (FDA). It classifies substances as chemical, protein, nucleic acid, polymer, structurally diverse, or mixture according to the standards outlined by the International Organization for Standardization in ISO 11238 and ISO DTS 19844. UNIIs are non-proprietary, unique, unambiguous, and free to generate and use. A UNII can be generated for substances at any level of complexity, being broad enough to include "any substance, from an atom to an organism."
The GSRS is used to generate permanent, unique identifiers for substances in regulated products, such as ingredients in drug and biological products. The GSRS uses molecular structure, protein and nucleic acid sequences and descriptive information to generate the UNII. The preferred means for defining a chemical substance is by its two-dimensional molecular structure since it is pertinent to a substance's identity and information regarding a substance's stereochemistry is readily available. Nucleic acids are defined by their sequences and by any modifications that may be present. In the case of proteins, only end-group modifications will be uniquely identified, along with any other modifications that are essential for activity. This is because of the inherently heterogeneous nature of proteins. Therefore, two different protein substances can share the same UNII and yet have no biosimilarity or therapeutic equivalence. Polymers are defined by their structural repeating units and physical properties such as molecular weight or properties related to molecular weight (e.g. viscosity). Structurally diverse materials are inherently heterogeneous preparations from natural materials such as plant extracts and vaccines.
The GSRS is a freely distributable software system provided through a collaboration between the FDA, the National Center for Advancing Translational Sciences (NCATS) and the European Medicines Agency (EMA). The GSRS was developed to implement the ISO 11238 standard which is one of the core ISO Identification of Medicinal Product (IDMP) standards. The GSRS Board which governs the GSRS includes experts from FDA, European Regulatory Agencies, and the United States Pharmacopoeia (USP).
References
External links
Substance Registration System | Unique Ingredient Identifier (UNII) - Food & Drug Administration
Pharmacological classification systems | 0.775187 | 0.98401 | 0.762791 |
Data model | A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
The corresponding professional activity is called generally data modeling or, more specifically, database design.
Data models are typically specified by a data expert, data specialist, data scientist, data librarian, or a data scholar.
A data modeling language and notation are often represented in graphical form as diagrams.
A data model can sometimes be referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
A data model explicitly determines the structure of data; conversely, structured data is data organized according to an explicit data model or data structure. Structured data is in contrast to unstructured data and semi-structured data.
Overview
The term data model can refer to two distinct but closely related concepts. Sometimes it refers to an abstract formalization of the objects and relationships found in a particular application domain: for example the customers, products, and orders found in a manufacturing organization. At other times it refers to the set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables. So the "data model" of a banking application may be defined using the entity–relationship "data model". This article uses the term in both senses.
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe the structure, manipulation, and integrity aspects of the data stored in data management systems such as relational databases. They may also describe data with a looser structure, such as word processing documents, email messages, pictures, digital audio, and video: XDM, for example, provides a data model for XML documents.
The role of data models
The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".
"Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces".
"Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance".
"Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25-70% of the cost of current systems".
"Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardized. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper".
The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.
A data model explicitly determines the structure of data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually, data models are specified in a data modeling language.[3]
Three perspectives
A data model instance may be one of three kinds according to ANSI in 1975:
Conceptual data model: describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
Logical data model: describes the semantics, as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object oriented classes, and XML tags, among other things.
Physical data model: describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.
The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of a conceptual data model. Such a design can be detailed into a logical data model. In later stages, this model may be translated into a physical data model. However, it is also possible to implement a conceptual model directly.
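A hedged illustration of the three perspectives in code follows; the entity names and columns are invented for the example. A conceptual model names the entity classes and their relationship, while the logical/physical levels express the same structures as tables, columns and DDL executed by a specific DBMS (SQLite here); physical details such as indexes or tablespaces would be refined further in a real physical model.

```python
import sqlite3
from dataclasses import dataclass

# Conceptual level: entity classes and a relationship ("a Customer places Orders"),
# expressed as plain classes with no storage concerns.
@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int   # relationship: each Order refers to one Customer
    total: float

# Logical/physical level: the same structures as tables and columns, realized here
# as DDL executed by SQLite; indexes, partitions and tablespaces would be added
# when the physical model is worked out for a production DBMS.
ddl = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
conn.execute('INSERT INTO "order" VALUES (10, 1, 99.5)')
print(conn.execute(
    'SELECT name, total FROM customer JOIN "order" USING (customer_id)').fetchall())
```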
History
One of the earliest pioneering works in modeling information systems was done by Young and Kent (1958), who argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". Their work was the first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. The next step in IS modeling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine-independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
In the 1960s data modeling gained more significance with the initiation of the management information system (MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generation database system, called Integrated Data Store (IDS), was designed by Charles Bachman at General Electric. Two famous database models, the network data model and the hierarchical data model, were proposed during this period of time". Towards the end of the 1960s, Edgar F. Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.
In the 1970s entity–relationship modeling emerged as a new type of conceptual data modeling, originally formalized in 1976 by Peter Chen. Entity–relationship models were being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. This technique can describe any ontology, i.e., an overview and classification of concepts and their relationships, for a certain area of interest.
In the 1970s G.M. Nijssen developed the "Natural Language Information Analysis Method" (NIAM), and in the 1980s, in cooperation with Terry Halpin, developed it further into Object–Role Modeling (ORM). However, it was Terry Halpin's 1989 PhD thesis that created the formal foundation on which Object–Role Modeling is based.
Bill Kent, in his 1978 book Data and Reality, compared a data model to a map of a territory, emphasizing that in the real world, "highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain". In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeler to create order out of chaos without excessively distorting the truth.
In the 1980s, according to Jan L. Harrington (2000), "the development of the object-oriented paradigm brought about a fundamental change in the way we look at data and the procedures that operate on data. Traditionally, data and procedures have been stored separately: the data and their relationship in a database, the procedures in an application program. Object orientation, however, combined an entity's procedure with its data."
During the early 1990s, three Dutch mathematicians Guido Bakema, Harm van der Lek, and JanPieter Zwart, continued the development on the work of G.M. Nijssen. They focused more on the communication part of the semantics. In 1997 they formalized the method Fully Communication Oriented Information Modeling FCO-IM.
Types
Database model
A database model is a specification describing how a database is structured and used.
Several such models have been suggested. Common models include:
Flat model
This may not strictly qualify as a data model. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.
Hierarchical model
The hierarchical model is similar to the network model except that links in the hierarchical model form a tree structure, while the network model allows arbitrary graphs.
Network model
This model organizes data using two fundamental constructs, called records and sets. Records contain fields, and sets define one-to-many relationships between records: one owner, many members. The network data model is an abstraction of the design concept used in the implementation of databases.
Relational model
The relational model is a database model based on first-order predicate logic. Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The power of the relational data model lies in its mathematical foundations and a simple user-level paradigm.
Object–relational model
Similar to a relational database model, but objects, classes, and inheritance are directly supported in database schemas and in the query language.
Object–role modeling
A method of data modeling that has been defined as "attribute free", and "fact-based". The result is a verifiably correct system, from which other common artifacts, such as ERD, UML, and semantic models may be derived. Associations between data objects are described during the database design procedure, such that normalization is an inevitable result of the process.
Star schema
The simplest style of data warehouse schema. The star schema consists of a few "fact tables" (possibly only one, justifying the name) referencing any number of "dimension tables". The star schema is considered an important special case of the snowflake schema.
Data structure diagram
A data structure diagram (DSD) is a diagram and data model used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. The basic graphic elements of DSDs are boxes, representing entities, and arrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities.
Data structure diagrams are an extension of the entity–relationship model (ER model). In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.
There are several styles for representing data structure diagrams, with the notable difference in the manner of defining cardinality. The choices are between arrow heads, inverted arrow heads (crow's feet), or numerical representation of the cardinality.
Entity–relationship model
An entity–relationship model (ERM), sometimes referred to as an entity–relationship diagram (ERD), could be used to represent an abstract conceptual data model (or semantic data model or physical data model) used in software engineering to represent structured data. There are several notations used for ERMs. Like DSD's, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as lines, with the relationship constraints as descriptions on the line. The E-R model, while robust, can become visually cumbersome when representing entities with several attributes.
Geographic data model
A data model in Geographic information systems is a mathematical construct for representing geographic objects or surfaces as data. For example,
the vector data model represents geography as points, lines, and polygons;
the raster data model represents geography as cell matrices that store numeric values;
and the Triangulated irregular network (TIN) data model represents geography as sets of contiguous, nonoverlapping triangles.
Generic data model
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. Generic data models are developed as an approach to solving some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements that are to be rendered more concretely, in order to make the differences less significant.
Semantic data model
A semantic data model in software engineering is a technique to define the meaning of data within the context of its interrelationships with other data. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. A semantic data model is sometimes called a conceptual data model.
The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques, that is, techniques to define the meaning of data within the context of its interrelationships with other data. The real world, in terms of resources, ideas, events, etc., is symbolically defined within physical data stores. A semantic data model is an abstraction that defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.
Topics
Data architecture
Data architecture is the design of data for use in defining the target state and the subsequent planning needed to hit the target state. It is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.
A data architecture describes the data structures used by a business and/or its applications. There are descriptions of data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
Essential to realizing the target state, Data architecture describes how data is processed, stored, and utilized in a given system. It provides criteria for data processing operations that make it possible to design data flows and also control the flow of data in the system.
Data modeling
Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques. Data modeling is a technique for defining business requirements for a database. It is sometimes called database modeling because a data model is eventually implemented in a database.
In practice, data models are developed and used roughly as follows. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the starting point for interface or database design.
Data properties
Some important properties of data for which requirements need to be met are:
definition-related properties
relevance: the usefulness of the data in the context of your business.
clarity: the availability of a clear and shared definition for the data.
consistency: the compatibility of the same type of data from different sources.
content-related properties
timeliness: the availability of data at the time required and how up-to-date that data is.
accuracy: how close to the truth the data is.
properties related to both definition and content
completeness: how much of the required data is available.
accessibility: where, how, and to whom the data is available or not available (e.g. security).
cost: the cost incurred in obtaining the data, and making it available for use.
Data organization
Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the physical data model, but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
While data analysis is a common term for data modeling, the activity actually has more in common with the ideas and methods of synthesis (inferring general concepts from particular instances) than it does with analysis (identifying component concepts from more general ones). (Presumably we call ourselves systems analysts because no one can say systems synthesists.) Data modeling strives to bring the data structures of interest together into a cohesive, inseparable whole by eliminating unnecessary data redundancies and by relating data structures with relationships.
A different approach is to use adaptive systems such as artificial neural networks that can autonomously create implicit models of data.
Data structure
A data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins from the choice of an abstract data type.
A data model describes the structure of the data within a given domain and, by implication, the underlying structure of that domain itself. This means that a data model in fact specifies a dedicated grammar for a dedicated artificial language for that domain. A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes. The model describes the organization of the data to some extent irrespective of how data might be represented in a computer system.
The entities represented by a data model can be the tangible entities, but models that include such concrete entity classes tend to change over time. Robust data models often identify abstractions of such entities. For example, a data model might include an entity class called "Person", representing all the people who interact with an organization. Such an abstract entity class is typically more appropriate than ones called "Vendor" or "Employee", which identify specific roles played by those people.
Data model theory
The term data model can have two meanings:
A data model theory, i.e. a formal description of how data may be structured and accessed.
A data model instance, i.e. applying a data model theory to create a practical data model instance for some particular application.
A data model theory has three main components:
The structural part: a collection of data structures which are used to create databases representing the entities or objects modeled by the database.
The integrity part: a collection of rules governing the constraints placed on these data structures to ensure structural integrity.
The manipulation part: a collection of operators which can be applied to the data structures, to update and query the data contained in the database.
For example, in the relational model, the structural part is based on a modified concept of the mathematical relation; the integrity part is expressed in first-order logic and the manipulation part is expressed using the relational algebra, tuple calculus and domain calculus.
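To make the manipulation part concrete, the sketch below implements two relational-algebra operators, selection and projection, over relations represented as lists of dictionaries. The relation and attribute names are invented for the example; a real DBMS would of course provide these operators through its query language.

```python
# A relation as a list of dicts: each dict is a tuple, keys are attribute names.
employees = [
    {"id": 1, "name": "Ada",   "dept": "ENG", "salary": 120},
    {"id": 2, "name": "Grace", "dept": "ENG", "salary": 130},
    {"id": 3, "name": "Alan",  "dept": "OPS", "salary": 110},
]

def select(relation, predicate):
    """Selection (sigma): keep only the tuples satisfying the predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    """Projection (pi): keep only the named attributes and drop duplicate tuples."""
    seen, result = set(), []
    for row in relation:
        projected = tuple((a, row[a]) for a in attributes)
        if projected not in seen:
            seen.add(projected)
            result.append(dict(projected))
    return result

# pi_{name,dept}( sigma_{salary > 115}( employees ) )
print(project(select(employees, lambda r: r["salary"] > 115), ["name", "dept"]))
```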
A data model instance is created by applying a data model theory. This is typically done to solve some business enterprise requirement. Business requirements are normally captured by a semantic logical data model. This is transformed into a physical data model instance from which is generated a physical database. For example, a data modeler may use a data modeling tool to create an entity–relationship model of the corporate data repository of some business enterprise. This model is transformed into a relational model, which in turn generates a relational database.
Patterns
Patterns are common data modeling structures that occur in many data models.
Related models
Data-flow diagram
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. It differs from the flowchart as it shows the data flow instead of the control flow of the program. A data-flow diagram can also be used for the visualization of data processing (structured design). Data-flow diagrams were invented by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data-flow graph" model of computation.
It is common practice to draw a context-level data-flow diagram first, which shows the interaction between the system and outside entities. The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts. This context-level data-flow diagram is then "exploded" to show more detail of the system being modeled.
Information model
An Information model is not a type of data model, but more or less an alternative model. Within the field of software engineering, both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
According to Lee (1999) an information model is a representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. It can provide a sharable, stable, and organized structure of information requirements for the domain context. More generally, the term information model is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases the concept is specialised to Facility Information Model, Building Information Model, Plant Information Model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity–relationship models or XML schemas.
Object model
An object model in computer science is a collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
In computing the term object model has a distinct second meaning of the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them. For example, the Java object model, the COM object model, or the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
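As a small illustration of an object model in the first sense, the following Python sketch uses the standard library's xml.dom.minidom to examine and manipulate a document through its object interface; the XML content is invented for the example.

```python
from xml.dom import minidom

# Parse a toy document; the object model exposes it as a tree of node objects.
doc = minidom.parseString("<page><title>Data model</title><body>draft</body></page>")

# Examine the document through its object interface.
title = doc.getElementsByTagName("title")[0]
print(title.firstChild.nodeValue)            # -> Data model

# Manipulate it the same way: replace the text node inside <body>.
body = doc.getElementsByTagName("body")[0]
body.firstChild.nodeValue = "published"
print(doc.toxml())
```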
Object–role modeling
Object–Role Modeling (ORM) is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Object–Role Modeling is a fact-oriented method for performing systems analysis at the conceptual level. The quality of a database application depends critically on its design. To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand.
The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented, etc.).
Unified Modeling Language models
The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. It is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including:
Conceptual things such as business processes and system functions
Concrete things such as programming language statements, database schemas, and
Reusable software components.
UML offers a mix of functional models, data models, and database models.
See also
Business process model
Core architecture data model
Common data model, any standardised data model
Data collection system
Data dictionary
Data Format Description Language (DFDL)
Distributional–relational database
JC3IEDM
Process model
References
Further reading
David C. Hay (1996). Data Model Patterns: Conventions of Thought. New York: Dorset House Publishers, Inc.
Len Silverston (2001). The Data Model Resource Book Volume 1/2. John Wiley & Sons.
Len Silverston & Paul Agnew (2008). The Data Model Resource Book: Universal Patterns for data Modeling Volume 3. John Wiley & Sons.
Matthew West (2011) Developing High Quality Data Models Morgan Kaufmann | 0.766158 | 0.995602 | 0.762788 |
Glutathione S-transferase | Glutathione S-transferases (GSTs), previously known as ligandins, are a family of eukaryotic and prokaryotic phase II metabolic isozymes best known for their ability to catalyze the conjugation of the reduced form of glutathione (GSH) to xenobiotic substrates for the purpose of detoxification. The GST family consists of three superfamilies: the cytosolic, mitochondrial, and microsomal—also known as MAPEG—proteins. Members of the GST superfamily are extremely diverse in amino acid sequence, and a large fraction of the sequences deposited in public databases are of unknown function. The Enzyme Function Initiative (EFI) is using GSTs as a model superfamily to identify new GST functions.
GSTs can constitute up to 10% of cytosolic protein in some mammalian organs. GSTs catalyse the conjugation of GSH—via a sulfhydryl group—to electrophilic centers on a wide variety of substrates in order to make the compounds more water-soluble. This activity detoxifies endogenous compounds such as peroxidised lipids and enables the breakdown of xenobiotics. GSTs may also bind toxins and function as transport proteins, which gave rise to the early term for GSTs, ligandin.
Classification
Protein sequence and structure are important additional classification criteria for the three superfamilies (cytosolic, mitochondrial, and MAPEG) of GSTs: while classes from the cytosolic superfamily of GSTs possess more than 40% sequence homology, those from other classes may have less than 25%. Cytosolic GSTs are divided into 13 classes based upon their structure: alpha, beta, delta, epsilon, zeta, theta, mu, nu, pi, sigma, tau, phi, and omega. Mitochondrial GSTs are in class kappa. The MAPEG superfamily of microsomal GSTs consists of subgroups designated I-IV, between which amino acid sequences share less than 20% identity. Human cytosolic GSTs belong to the alpha, zeta, theta, mu, pi, sigma, and omega classes, while six isozymes belonging to classes I, II, and IV of the MAPEG superfamily are known to exist.
Nomenclature
Standardized GST nomenclature first proposed in 1992 identifies the species to which the isozyme of interest belongs with a lower-case initial (e.g., "h" for human), which precedes the abbreviation GST. The isozyme class is subsequently identified with an upper-case letter (e.g., "A" for alpha), followed by an Arabic numeral representing the class subfamily (or subunit). Because both mitochondrial and cytosolic GSTs exist as dimers, and only heterodimers form between members of the same class, the second subfamily component of the enzyme dimer is denoted with a hyphen, followed by an additional Arabic numeral. Therefore, if a human glutathione S-transferase is a homodimer in the pi-class subfamily 1, its name will be written as "hGSTP1-1."
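The naming convention lends itself to a simple helper function. The sketch below is a hypothetical illustration that merely assembles a name from its parts according to the convention described above; it does not validate species codes or class letters against any official list.

```python
def gst_name(species: str, gst_class: str, subunit_1: int, subunit_2: int) -> str:
    """Assemble a GST isozyme name following the convention described above:
    lower-case species initial + 'GST' + upper-case class letter + subunit numbers."""
    return f"{species.lower()}GST{gst_class.upper()}{subunit_1}-{subunit_2}"

print(gst_name("h", "P", 1, 1))   # hGSTP1-1: the human pi-class 1-1 homodimer
print(gst_name("h", "A", 1, 2))   # hGSTA1-2: a dimer of alpha-class subunits 1 and 2
```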
The early nomenclature for GSTs referred to them as “Y” proteins, referring to their separation in the “Y” fraction (as opposed to the “X and Z” fractions) using Sephadex G75 chromatography. As GST subunits were identified, they were referred to as Ya, Yp, etc., with, if necessary, a number identifying the monomer isoform (e.g. Yb1). Litwack et al. proposed the term “Ligandin” to cover the proteins previously known as “Y” proteins.
In clinical chemistry and toxicology, the terms alpha GST, mu GST, and pi GST are most commonly used.
Structure
The glutathione binding site, or "G-site", is located in the thioredoxin-like domain of both cytosolic and mitochondrial GSTs. The region containing the greatest amount of variability between the assorted classes is that of helix α2, where one of three different amino acid residues interacts with the glycine residue of glutathione. Two subgroups of cytosolic GSTs have been characterized based upon their interaction with glutathione: the Y-GST group, which uses a tyrosine residue to activate glutathione, and the S/C-GST, which instead uses serine or cysteine residues.
"GST proteins are globular proteins with an N-terminal mixed helical and beta-strand domain and an all-helical C-terminal domain."
The porcine pi-class enzyme pGSTP1-1 was the first GST to have its structure determined, and it is representative of other members of the cytosolic GST superfamily, which contain a thioredoxin-like N-terminal domain as well as a C-terminal domain consisting of alpha helices.
Mammalian cytosolic GSTs are dimeric, with both subunits being from the same class of GSTs, although not necessarily identical. The monomers are approximately 25 kDa in size. They are active over a wide variety of substrates with considerable overlap. The GST enzymes of each class known to exist in Homo sapiens are catalogued in the UniProtKB/Swiss-Prot database.
Evolution
Environmental challenge by natural toxins helped to prepare Drosophila for DDT exposure by shaping the evolution of a Drosophila GST that metabolizes both.
Function
The activity of GSTs is dependent upon a steady supply of GSH from the synthetic enzymes gamma-glutamylcysteine synthetase and glutathione synthetase, as well as the action of specific transporters to remove conjugates of GSH from the cell. The primary role of GSTs is to detoxify xenobiotics by catalyzing the nucleophilic attack by GSH on electrophilic carbon, sulfur, or nitrogen atoms of said nonpolar xenobiotic substrates, thereby preventing their interaction with crucial cellular proteins and nucleic acids. Specifically, the function of GSTs in this role is twofold: to bind both the substrate at the enzyme's hydrophobic H-site and GSH at the adjacent, hydrophilic G-site, which together form the active site of the enzyme; and subsequently to activate the thiol group of GSH, enabling the nucleophilic attack upon the substrate. The glutathione molecule binds in a cleft between N- and C-terminal domains - the catalytically important residues are proposed to reside in the N-terminal domain. Both subunits of the GST dimer, whether hetero- or homodimeric in nature, contain a single nonsubstrate binding site, as well as a GSH-binding site. In heterodimeric GST complexes such as those formed by the cytosolic mu and alpha classes, however, the cleft between the two subunits is home to an additional high-affinity nonsubstrate xenobiotic binding site, which may account for the enzymes' ability to form heterodimers.
The compounds targeted in this manner by GSTs encompass a diverse range of environmental or otherwise exogenous toxins, including chemotherapeutic agents and other drugs, pesticides, herbicides, carcinogens, and variably-derived epoxides; indeed, GSTs are responsible for the conjugation of β1-8,9-epoxide, a reactive intermediate formed from aflatoxin B1, which is a crucial means of protection against the toxin in rodents. The detoxification reactions comprise the first four steps of mercapturic acid synthesis, with the conjugation to GSH serving to make the substrates more soluble and allowing them to be removed from the cell by transporters such as multidrug resistance-associated protein 1 (MRP1). After export, the conjugation products are converted into mercapturic acids and excreted via the urine or bile.
Most mammalian isoenzymes have affinity for the substrate 1-chloro-2,4-dinitrobenzene, and spectrophotometric assays utilising this substrate are commonly used to report GST activity. However, some endogenous compounds, e.g., bilirubin, can inhibit the activity of GSTs. In mammals, GST isoforms have cell specific distributions (for example, α-GST in hepatocytes and π-GST in the biliary tract of the human liver).
GSTs have a role in the bioactivation process of clopidogrel prodrug.
Role in cell signaling
Although best known for their ability to conjugate xenobiotics to GSH and thereby detoxify cellular environments, GSTs are also capable of binding nonsubstrate ligands, with important cell signaling implications. Several GST isozymes from various classes have been shown to inhibit the function of a kinase involved in the MAPK pathway that regulates cell proliferation and death, preventing the kinase from carrying out its role in facilitating the signaling cascade.
Cytosolic GSTP1-1, a well-characterized isozyme of the mammalian GST family, is expressed primarily in heart, lung, and brain tissues; in fact, it is the most common GST expressed outside the liver. Based on its overexpression in a majority of human tumor cell lines and prevalence in chemotherapeutic-resistant tumors, GSTP1-1 is thought to play a role in the development of cancer and its potential resistance to drug treatment. Further evidence for this comes from the knowledge that GSTP can selectively inhibit C-Jun phosphorylation by JNK, preventing apoptosis. During times of low cellular stress, a complex forms through direct protein–protein interactions between GSTP and the C-terminus of JNK, effectively preventing the action of JNK and thus its induction of the JNK pathway. Cellular oxidative stress causes the dissociation of the complex, oligomerization of GSTP, and induction of the JNK pathway, resulting in apoptosis. The connection between GSTP inhibition of the pro-apoptotic JNK pathway and the isozyme's overexpression in drug-resistant tumor cells may itself account for the tumor cells' ability to escape apoptosis mediated by drugs that are not substrates of GSTP.
Like GSTP, GSTM1 is involved in regulating apoptotic pathways through direct protein–protein interactions, although it acts on ASK1, which is upstream of JNK. The mechanism and result are similar to that of GSTP and JNK, in that GSTM1 sequesters ASK1 through complex formation and prevents its induction of the pro-apoptotic p38 and JNK portions of the MAPK signaling cascade. Like GSTP, GSTM1 interacts with its partner in the absence of oxidative stress, although ASK1 is also involved in heat shock response, which is likewise prevented during ASK1 sequestration. The fact that high levels of GST are associated with resistance to apoptosis induced by a range of substances, including chemotherapeutic agents, supports its putative role in MAPK signaling prevention.
Implications in cancer development
There is a growing body of evidence supporting the role of GST, particularly GSTP, in cancer development and chemotherapeutic resistance. The link between GSTP and cancer is most obvious in the overexpression of GSTP in many cancers, but it is also supported by the fact that the transformed phenotype of tumor cells is associated with aberrantly regulated kinase signaling pathways and cellular addiction to overexpressed proteins. That most anti-cancer drugs are poor substrates for GSTP indicates that the role of elevated GSTP in many tumor cell lines is not to detoxify the compounds, but must have another purpose; this hypothesis is also given credence by the common finding of GSTP overexpression in tumor cell lines that are not drug resistant.
Clinical significance
In addition to their roles in cancer development and chemotherapeutic drug resistance, GSTs are implicated in a variety of diseases by virtue of their involvement with GSH. Although the evidence is minimal for the influence of GST polymorphisms of the alpha, mu, pi, and theta classes on susceptibility to various types of cancer, numerous studies have implicated such genotypic variations in asthma, atherosclerosis, allergies, and other inflammatory diseases.
Because diabetes is a disease that involves oxidative damage, and GSH metabolism is dysfunctional in diabetic patients, GSTs may represent a potential target for diabetic drug treatment. In addition, insulin administration is known to result in increased GST gene expression through the PI3K/AKT/mTOR pathway and reduced intracellular oxidative stress, while glucagon decreases such gene expression.
Omega-class GST (GSTO) genes, in particular, are associated with neurological diseases such as Alzheimer's, Parkinson's, and amyotrophic lateral sclerosis; again, oxidative stress is believed to be the culprit, with decreased GSTO gene expression resulting in a lowered age of onset for the diseases.
Release of GSTs as an indication of organ damage
The high intracellular concentrations of GSTs, coupled with their cell-specific distribution, allow them to function as biomarkers for localising and monitoring injury to defined cell types. For example, hepatocytes contain high levels of alpha GST, and serum alpha GST has been found to be an indicator of hepatocyte injury in transplantation, toxicity and viral infections.
Similarly, in humans, renal proximal tubular cells contain high concentrations of alpha GST, while distal tubular cells contain pi GST. This specific distribution enables the measurement of urinary GSTs to be used to quantify and localise renal tubular injury in transplantation, nephrotoxicity and ischaemic injury.
In rodent pre-clinical studies, urinary and serum alpha GST have been shown to be sensitive and specific indicators of renal proximal tubular and hepatocyte necrosis respectively.
GST-tags and the GST pull-down assay
GST can be added to a protein of interest to purify it from solution in a process known as a pull-down assay. This is accomplished by inserting the GST DNA coding sequence next to that which codes for the protein of interest. Thus, after transcription and translation, the GST protein and the protein of interest will be expressed together as a fusion protein. Because the GST protein has a strong binding affinity for GSH, beads coated with the compound can be added to the protein mixture; as a result, the protein of interest attached to the GST will stick to the beads, isolating the protein from the rest of those in solution. The beads are recovered and washed with free GSH to detach the protein of interest from the beads, resulting in a purified protein. This technique can be used to elucidate direct protein–protein interactions. A drawback of this assay is that the protein of interest is attached to GST, altering its native state.
A GST-tag is often used to separate and purify proteins that contain the GST-fusion protein. The tag is 220 amino acids (roughly 26 kDa) in size, which, compared to tags such as the Myc-tag or the FLAG-tag, is quite large. It can be fused to either the N-terminus or C-terminus of a protein. In addition to functioning as a purification tag, GST acts as a chaperone for the attached protein, promoting its correct folding, as well as preventing it from becoming aggregated in inclusion bodies when expressed in bacteria. The GST tag can easily be removed following purification by addition of thrombin protease if a suitable cleavage site has been inserted between the GST-tag and the protein of interest (which is usually included in many commercially available sources of GST-tagged plasmids).
See also
Affinity chromatography
Bacterial glutathione transferase
Glutathione S-transferase Mu 1
Glutathione S-transferase, C-terminal domain
GSTP1
Maltose-binding protein
Protein tag
References
Low et al 2007
External links
Overview of Glutathione S-Transferases
Glutathione vs Vitamin C
Preparation of GST Fusion Proteins
How Does Glutathione Work
GST Gene Fusion System Handbook
Transferases | 0.774154 | 0.985311 | 0.762782 |
Heat of combustion | The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
energy/mole of fuel
energy/mass of fuel
energy/volume of the fuel
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like H2O are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation ΔH of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).
For a fuel of composition CcHhOoNn, the (higher) heat of combustion can usually be estimated to a good approximation (±3%) from a simple linear formula in c, h and o, though such a formula gives poor results for some compounds such as (gaseous) formaldehyde and carbon monoxide, and can be significantly off for compounds rich in oxygen and nitrogen, such as glycerine dinitrate.
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process:
CcHhOoNn (std.) + (c + h/4 − o/2) O2 (g) → c CO2 (g) + (h/2) H2O (l) + (n/2) N2 (g)
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO2 or SO3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water.
Ways of determination
Gross and net
Zwolinski and Wilhoit defined, in 1972, "gross" and "net" values for heats of combustion. In the gross definition the products are the most stable compounds, e.g. H2O(l), Br2(l), I2(s) and H2SO4(l). In the net definition the products are the gases produced when the compound is burned in an open flame, e.g. H2O(g), Br2(g), I2(g) and SO2(g). In both definitions the products for C, F, Cl and N are CO2(g), HF(g), Cl2(g) and N2(g), respectively.
Dulong's Formula
The heating value of a fuel can be calculated with the results of ultimate analysis of fuel. From analysis, percentages of the combustibles in the fuel (carbon, hydrogen, sulfur) are known. Since the heat of combustion of these elements is known, the heating value can be calculated using Dulong's Formula:
HHV [kJ/g] = 33.87 mC + 122.3 (mH − mO/8) + 9.4 mS
where mC, mH, mO, mN, and mS are the mass fractions of carbon, hydrogen, oxygen, nitrogen, and sulfur in the fuel on any (wet, dry or ash-free) basis, respectively.
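The following Python sketch applies Dulong's formula exactly as stated above; the function name and the sample coal composition are illustrative assumptions rather than values from this article, and mass fractions (not percentages) are assumed.

```python
def dulong_hhv(m_c, m_h, m_o, m_s=0.0):
    """Higher heating value in kJ/g (= MJ/kg) from mass fractions of C, H, O and S."""
    return 33.87 * m_c + 122.3 * (m_h - m_o / 8) + 9.4 * m_s

# Hypothetical dry coal: 75% C, 5% H, 8% O, 1% S by mass (illustrative values only).
hhv = dulong_hhv(0.75, 0.05, 0.08, 0.01)
print(f"HHV ≈ {hhv:.1f} kJ/g ≈ {hhv:.1f} MJ/kg")
```

For a typical bituminous coal this yields a value of roughly 30 MJ/kg, which is in the expected range.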
Higher heating value
The higher heating value (HHV; gross energy, upper heating value, gross calorific value GCV, or higher calorific value; HCV) indicates the upper limit of the available thermal energy produced by a complete combustion of fuel. It is measured as a unit of energy per unit mass or volume of substance. The HHV is determined by bringing all the products of combustion back to the original pre-combustion temperature, including condensing any vapor produced. Such measurements often use a standard temperature of 25 °C. This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is condensed to a liquid. The higher heating value takes into account the latent heat of vaporization of water in the combustion products, and is useful in calculating heating values for fuels where condensation of the reaction products is practical (e.g., in a gas-fired boiler used for space heat). In other words, HHV assumes all the water component is in liquid state at the end of combustion (in the products of combustion) and that heat delivered at temperatures below 150 °C can be put to use.
Lower heating value
The lower heating value (LHV; net calorific value; NCV, or lower calorific value; LCV) is another measure of available thermal energy produced by a combustion of fuel, measured as a unit of energy per unit mass or volume of substance. In contrast to the HHV, the LHV considers energy losses such as the energy used to vaporize water - although its exact definition is not uniformly agreed upon. One definition is simply to subtract the heat of vaporization of the water from the higher heating value. This treats any H2O formed as a vapor that is released as a waste. The energy required to vaporize the water is therefore lost.
LHV calculations assume that the water component of a combustion process is in vapor state at the end of combustion, as opposed to the higher heating value (HHV) (a.k.a. gross calorific value or gross CV) which assumes that all of the water in a combustion process is in a liquid state after a combustion process.
Another definition of the LHV is the amount of heat released when the products are cooled to 150 °C. This means that the latent heat of vaporization of water and other reaction products is not recovered. It is useful in comparing fuels where condensation of the combustion products is impractical, or where heat at a temperature below 150 °C cannot be put to use.
One definition of lower heating value, adopted by the American Petroleum Institute (API), uses a reference temperature of 60 °F (15.56 °C).
Another definition, used by Gas Processors Suppliers Association (GPSA) and originally used by API (data collected for API research project 44), is the enthalpy of all combustion products minus the enthalpy of the fuel at the reference temperature (API research project 44 used 25 °C. GPSA currently uses 60 °F), minus the enthalpy of the stoichiometric oxygen (O2) at the reference temperature, minus the heat of vaporization of the vapor content of the combustion products.
The definition in which the combustion products are all returned to the reference temperature is more easily calculated from the higher heating value than when using other definitions and will in fact give a slightly different answer.
Gross heating value
Gross heating value accounts for water in the exhaust leaving as vapor, as does LHV, but gross heating value also includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning.
Measuring heating values
The higher heating value is experimentally determined in a bomb calorimeter. The combustion of a stoichiometric mixture of fuel and oxidizer (e.g. two moles of hydrogen and one mole of oxygen) in a steel container at 25 °C is initiated by an ignition device and the reactions allowed to complete. When hydrogen and oxygen react during combustion, water vapor is produced. The vessel and its contents are then cooled to the original 25 °C and the higher heating value is determined as the heat released between identical initial and final temperatures.
When the lower heating value (LHV) is determined, cooling is stopped at 150 °C and the reaction heat is only partially recovered. The limit of 150 °C is based on acid gas dew-point.
Note: Higher heating value (HHV) is calculated with the product of water being in liquid form while lower heating value (LHV) is calculated with the product of water being in vapor form.
Relation between heating values
The difference between the two heating values depends on the chemical composition of the fuel. In the case of pure carbon or carbon monoxide, the two heating values are almost identical, the difference being the sensible heat content of carbon dioxide between 150 °C and 25 °C (sensible heat exchange causes a change of temperature, while latent heat is added or subtracted for phase transitions at constant temperature. Examples: heat of vaporization or heat of fusion). For hydrogen, the difference is much more significant as it includes the sensible heat of water vapor between 150 °C and 100 °C, the latent heat of condensation at 100 °C, and the sensible heat of the condensed water between 100 °C and 25 °C. In all, the higher heating value of hydrogen is 18.2% above its lower heating value (142MJ/kg vs. 120MJ/kg). For hydrocarbons, the difference depends on the hydrogen content of the fuel. For gasoline and diesel the higher heating value exceeds the lower heating value by about 10% and 7%, respectively, and for natural gas about 11%.
A common method of relating HHV to LHV is:
HHV = LHV + Hv × (nH2O,out / nfuel,in)
where Hv is the heat of vaporization of water, nH2O,out is the number of moles of water vaporized and nfuel,in is the number of moles of fuel combusted.
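As a hedged illustration of this relation, the sketch below applies it to methane; the heat of vaporization of water (about 44 kJ/mol near 25 °C) and methane's LHV (about 802 kJ/mol) are assumed reference values, not figures taken from this article.

```python
H_VAP_WATER = 44.0   # kJ/mol, heat of vaporization of water near 25 °C (assumed)

def hhv_from_lhv(lhv, n_h2o_out, n_fuel_in):
    """HHV = LHV + Hv * (moles of water formed / moles of fuel burned)."""
    return lhv + H_VAP_WATER * (n_h2o_out / n_fuel_in)

# CH4 + 2 O2 -> CO2 + 2 H2O: two moles of water per mole of methane.
lhv_methane = 802.0  # kJ/mol, approximate literature value (assumed)
print(f"HHV(CH4) ≈ {hhv_from_lhv(lhv_methane, 2, 1):.0f} kJ/mol")  # ≈ 890 kJ/mol
```

The roughly 11% gap between the two results matches the figure quoted above for natural gas.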
Most applications that burn fuel produce water vapor, which is unused and thus wastes its heat content. In such applications, the lower heating value must be used to give a 'benchmark' for the process.
However, for true energy calculations in some specific cases, the higher heating value is correct. This is particularly relevant for natural gas, whose high hydrogen content produces much water, when it is burned in condensing boilers and power plants with flue-gas condensation that condense the water vapor produced by combustion, recovering heat which would otherwise be wasted.
Usage of terms
Engine manufacturers typically rate their engines' fuel consumption by the lower heating value, since the exhaust is never condensed in the engine, and doing so allows them to publish more attractive numbers than are used in conventional power plant terms. The conventional power industry had used HHV (high heat value) exclusively for decades, even though virtually all of these plants did not condense exhaust either. American consumers should be aware that the corresponding fuel-consumption figure based on the higher heating value will be somewhat higher.
The difference between HHV and LHV definitions causes endless confusion when quoters do not bother to state the convention being used, since there is typically a 10% difference between the two methods for a power plant burning natural gas. For simply benchmarking part of a reaction the LHV may be appropriate, but HHV should be used for overall energy efficiency calculations, if only to avoid confusion, and in any case the value or convention should be clearly stated.
Accounting for moisture
Both HHV and LHV can be expressed in terms of AR (all moisture counted), MF and MAF (only water from combustion of hydrogen). AR, MF, and MAF are commonly used for indicating the heating values of coal:
AR (as received) indicates that the fuel heating value has been measured with all moisture- and ash-forming minerals present.
MF (moisture-free) or dry indicates that the fuel heating value has been measured after the fuel has been dried of all inherent moisture but still retaining its ash-forming minerals.
MAF (moisture- and ash-free) or DAF (dry and ash-free) indicates that the fuel heating value has been measured in the absence of inherent moisture- and ash-forming minerals.
Heat of combustion tables
Note
There is no difference between the lower and higher heating values for the combustion of carbon, carbon monoxide and sulfur since no water is formed during the combustion of those substances.
BTU/lb values are calculated from MJ/kg (1 MJ/kg = 430 BTU/lb).
Higher heating values of natural gases from various sources
The International Energy Agency reports the following typical higher heating values per Standard cubic metre of gas:
Algeria: 39.57MJ/Sm3
Bangladesh: 36.00MJ/Sm3
Canada: 39.00MJ/Sm3
China: 38.93MJ/Sm3
Indonesia: 40.60MJ/Sm3
Iran: 39.36MJ/Sm3
Netherlands: 33.32MJ/Sm3
Norway: 39.24MJ/Sm3
Pakistan: 34.90MJ/Sm3
Qatar: 41.40MJ/Sm3
Russia: 38.23MJ/Sm3
Saudi Arabia: 38.00MJ/Sm3
Turkmenistan: 37.89MJ/Sm3
United Kingdom: 39.71MJ/Sm3
United States: 38.42MJ/Sm3
Uzbekistan: 37.89MJ/Sm3
The lower heating value of natural gas is normally about 90% of its higher heating value. This table is in standard cubic metres (1 atm, 15 °C); to convert to values per normal cubic metre (1 atm, 0 °C), multiply the values above by 1.0549.
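A short sketch of these two rules of thumb, using a few figures from the table above; the 90% factor and the 1.0549 conversion are taken from the text, while the variable names and the selection of countries are illustrative.

```python
# Higher heating values in MJ/Sm3, copied from the table above.
hhv_sm3 = {"Netherlands": 33.32, "Russia": 38.23, "Qatar": 41.40}

for country, hhv in hhv_sm3.items():
    hhv_nm3 = hhv * 1.0549   # per normal cubic metre (1 atm, 0 °C)
    lhv_sm3 = hhv * 0.90     # rough LHV estimate (about 90% of HHV)
    print(f"{country}: HHV ≈ {hhv_nm3:.2f} MJ/Nm3, LHV ≈ {lhv_sm3:.2f} MJ/Sm3")
```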
See also
Adiabatic flame temperature
Cost of electricity by source
Electrical efficiency
Energy content of fuel
Energy conversion efficiency
Energy density
Energy value of coal
Exothermic reaction
Figure of merit
Fire
Food energy
Internal energy
ISO 15971
Mechanical efficiency
Thermal efficiency
Wobbe index: heat density
References
Further reading
External links
NIST Chemistry WebBook
Engineering thermodynamics
Combustion
Fuels
Thermodynamic properties
Nuclear physics
Thermochemistry | 0.76481 | 0.997323 | 0.762762 |
Object–relational impedance mismatch | Object–relational impedance mismatch is a set of difficulties going between data in relational data stores and data in domain-driven object models. Relational Database Management Systems (RDBMS) is the standard method for storing data in a dedicated database, while object-oriented (OO) programming is the default method for business-centric design in programming languages. The problem lies in neither relational databases nor OO programming, but in the conceptual difficulty mapping between the two logic models. Both logical models are differently implementable using database servers, programming languages, design patterns, or other technologies. Issues range from application to enterprise scale, whenever stored relational data is used in domain-driven object models, and vice versa. Object-oriented data stores can trade this problem for other implementation difficulties.
The term impedance mismatch comes from impedance matching in electrical engineering.
Mismatches
Mathematically, an object model is a directed graph in which objects reference one another. The relational model is based on tuples grouped into relations (tables) and manipulated with relational algebra; a tuple is a group of typed data fields forming a "row". Relational links are reversible (an INNER JOIN can follow a foreign key in either direction), so they form undirected graphs.
Object-oriented concepts
Encapsulation
Object encapsulation hides an object's internals: its properties are visible only through the interfaces it implements. Many ORMs, however, expose the properties publicly so they can be matched to database columns, which violates encapsulation; ORMs based on metaprogramming can avoid this.
Accessibility
"Private" versus "public" is need-based in relational. In OO it is absolutely class-based. This relativity versus absolutism of classifications and characteristics clashes.
Interface, class, inheritance and polymorphism
Objects must implement interfaces to expose their internals, whereas the relational model uses views to provide varying perspectives and constraints; it lacks OO concepts such as classes, inheritance and polymorphism.
Mapping to relational concepts
In order for an ORM to work properly, tables that are linked via Foreign Key/Primary Key relations need to be mapped to associations in object-oriented analysis.
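As a hedged illustration (not taken from any particular ORM), the sketch below shows the kind of mapping meant here: a foreign key column such as orders.customer_id referencing customers.id becomes an object reference on the OO side. All table, class and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class Customer:          # corresponds to a customers table with primary key id
    id: int
    name: str

@dataclass
class Order:             # corresponds to an orders table with a customer_id foreign key
    id: int
    customer: Customer   # the FK column becomes an object association

alice = Customer(id=1, name="Alice")
order = Order(id=10, customer=alice)
print(order.customer.name)   # object navigation takes the place of a JOIN
```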
Data type differences
The relational model prohibits by-reference values (e.g. pointers), while OO embraces them, and the scalar types of the two models differ, which impedes mapping.
SQL supports strings with declared maximum lengths (which are faster to process than unbounded strings) and collations, whereas OO languages handle collation only through sort routines and limit strings only by available memory. SQL usually ignores trailing whitespace when comparing values of type char, but OO string libraries do not. Finally, OO languages typically do not derive new types by placing constraints on primitive types.
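A minimal sketch of the last point, assuming an invented product table and an invented wrapper class: on the relational side the length rule is a declarative CHECK constraint, while on the OO side it has to be enforced imperatively, here in a constructor.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        code VARCHAR(8) NOT NULL CHECK (length(code) <= 8)  -- declarative rule
    )
""")

class ProductCode:
    """OO side: the same rule enforced imperatively in a constructor."""
    def __init__(self, value: str):
        if len(value) > 8:
            raise ValueError("product code longer than 8 characters")
        self.value = value

conn.execute("INSERT INTO product (code) VALUES (?)", (ProductCode("AB123").value,))
```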
Structural and integrity differences
Objects can be composed of other objects or specialize one another. In the relational model, relations are flat (unnested), and a relation — a set of tuples sharing the same header — has no direct counterpart in OO.
The relational model expresses integrity through declarative constraints on scalar types, attributes, relation variables, or the database as a whole, whereas OO protects object internals imperatively, typically by raising exceptions.
Manipulative differences
The relational model manipulates data with a small set of standardized operators, while OO manipulates it with imperative code written per class and per case. What declarative support OO offers is aimed at lists and hash tables, which are distinct from the sets used in the relational model.
Transactional differences
The unit of work in the relational model is the transaction, which can group arbitrary combinations of data manipulations and is far larger in scope than any OO class method. OO languages, by contrast, guarantee only individual assignments to primitive fields; they lack isolation and durability, so atomicity and consistency apply only to those primitive operations.
Solving impedance mismatch
Solving the mismatch starts with recognizing the differences between the two logical systems; the mismatch can then be minimized or compensated for.
Alternative architectures
NoSQL. The mismatch is not between OO and DBMSes in general; as its name indicates, the object–relational impedance mismatch arises specifically between OO and RDBMSes, so alternatives such as NoSQL or XML databases avoid it.
Functional-relational mapping. Functional programming is a popular alternative to object-oriented programming. Comprehensions in functional programming languages are isomorphic with relational queries. Some functional programming languages implement functional-relational mapping. The direct correspondence between comprehensions and queries avoids many of the problems of object-relational mapping.
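A minimal sketch of that correspondence, using Python's sqlite3 module and invented sample data: the list comprehension and the SQL query express the same selection.

```python
import sqlite3

people = [("Ada", 36), ("Grace", 45), ("Linus", 29)]

# Comprehension: names of people older than 30.
names = [name for (name, age) in people if age > 30]

# The corresponding relational query over the same data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)", people)
rows = [r[0] for r in conn.execute("SELECT name FROM person WHERE age > 30")]

assert set(names) == set(rows) == {"Ada", "Grace"}
```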
Minimization in OO
Object databases (OODBMSs) were designed to avoid the mismatch, but they have been less successful than relational databases; OO has proved a weak basis for database schemas. Future OO database research involves transactional memory.
Another solution is to layer the domain and framework logic. Here, OO code maps relational aspects at runtime rather than statically; such frameworks provide a tuple class (also called a row or entity) and a relation class (a.k.a. dataset).
Advantages
Straightforward framework support and automation of data transport, presentation, and validation
Smaller, faster code
Dynamic database schema
Namespace and semantic match
Expressive constraints
Avoids complex mapping
Disadvantages
No static typing. Typed accessors mitigate this.
Indirection performance cost
Ignores concepts like polymorphism
Compensation
Mixing levels of discourse within OO application code is problematic, and is mostly compensated for by framework support that automates data manipulation and presentation patterns at the level of the model. Two techniques are used: reflection and code generation. Reflection treats code as data, allowing automatic data transport, presentation, and integrity checking. Code generation turns schemas into classes and helper code. Both techniques produce anomalies between levels: generated classes end up with both domain properties (e.g. Name, Address, Phone) and framework properties (e.g. IsModified).
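A hypothetical sketch of such a generated class, mixing the domain properties named above with a framework bookkeeping flag; the class and its methods are invented for illustration.

```python
class CustomerRow:
    """Hypothetical generated class: domain properties plus framework bookkeeping."""
    def __init__(self, name, address, phone):
        # domain properties
        self.Name = name
        self.Address = address
        self.Phone = phone
        # framework property maintained by the persistence layer
        self.IsModified = False

    def set_phone(self, phone):
        self.Phone = phone
        self.IsModified = True   # bookkeeping leaks into domain-level code

row = CustomerRow("Alice", "12 Main St", "555-0100")
row.set_phone("555-0199")
print(row.IsModified)   # True
```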
Occurrence
Although object-relational impedance mismatches can occur with object-oriented programming in general, a particular area of difficulty is with object–relational mappers (ORMs). Since the ORM is often specified in terms of configuration, annotations, and restricted domain-specific languages, it lacks the flexibility of a full programming language to resolve the impedance mismatch.
Contention
True RDBMS model
Christopher J. Date argues that a truly relational DBMS overcomes the problem, since domains and classes are equivalent. In his view, mapping between relational and OO concepts is a mistake: relational tuples relate entities rather than represent them, and OO's proper role is limited to managing the fields within domains.
Constraints and illegal transactions
Domain objects and user interfaces have mismatched impedances. Productive UIs should prevent illegal transactions (database constraint violations) to help operators and other non-programmers manage the data. This requires knowledge about database attributes beyond name and type, which duplicates logic in the relational schemata.
Frameworks leverage referential integrity constraints and other schema information to standardize handling away from case-by-case code.
SQL-specific impedance and workarounds
SQL, which lacks user-defined domain types, impedes OO modelling and is lossy between the DBMS and the application (OO or not). Even so, many shops avoid NoSQL databases and vendor-specific alternative query languages, and DBMS vendors have likewise ignored proposals such as Business System 12 and Tutorial D.
Mainstream DBMSes such as Oracle and Microsoft SQL Server address the problem differently: OO code (Java and .NET, respectively) can extend them and be invoked from SQL as fluently as if it were built into the DBMS. Reusing library routines across multiple schemas is a supported modern paradigm.
OO code ends up in the backend because SQL is unlikely ever to gain the modern libraries and data structures today's programmers expect, even though the ISO SQL-99 committee wanted to add procedural features; it is reasonable to use those languages directly rather than to change SQL. This blurs the division of responsibility between "application programming" and "database administration", because implementing constraints and triggers now requires both DBA and OO skills.
This contention may be moot. RDBMSes are not intended for modelling, and SQL is only lossy when abused for that purpose: SQL is for querying, sorting, filtering, and storing large data sets. Moreover, putting OO code in the backend encourages bad architecture, since business logic should not live in the data tier.
Location of canonical copy of data
The relational view holds that the DBMS contains the authoritative copy of the data and that the OO program's objects are temporary copies (possibly outdated if the database is modified concurrently); the OO view holds that the objects are authoritative and the DBMS is merely a persistence mechanism.
Division of responsibility
New features change both code and schemas. The schema is the DBA's responsibility, and because DBAs are accountable for reliability, they tend to refuse modifications that programmers cannot justify. Staging databases help, but merely move the approval step to release time. DBAs prefer changes to be contained in code, where defects are less catastrophic.
More collaboration eases this tension: decisions to change the schema should follow from business needs, such as storing novel data or boosting the performance of the system.
Philosophical differences
Key philosophical differences exist:
Declarative vs. imperative interfaces The relational model treats data declaratively, while OO treats behavior imperatively; some compensate on the relational side with triggers and stored procedures.
Schema bound Relational rows are limited to their entity's schema. OO inheritance (tree-shaped or not) is similar, but OO objects can also acquire new attributes. A few newer dynamic database systems relax this restriction on the relational side.
Access rules Relational uses standardized operators, while OO classes have individual methods. OO is more expressive, but relational has math-like reasoning, integrity, and design consistency.
Relationship between nouns and verbs An OO class is a noun entity tightly associated with verb actions. This forms a concept. Relational disputes the naturalness or logicality of that tight association.
Object identity Two mutable objects with the same state are nonetheless distinct. The relational model ignores this kind of identity and must fabricate it with candidate keys, which is a poor practice unless the identifier exists in the real world. Identity is permanent in the relational model, but may be transient in OO (a brief illustration follows this list).
Normalization OO neglects relational normalization. However, objects interlinked via pointers are arguably a network database, which is arguably an extremely denormalized relational database.
Schema inheritance Relational schemas do not support OO's hierarchical inheritance; the relational model instead builds on the more powerful set theory. Non-tree (non-Java-style) OO inheritance schemes exist but are unpopular and harder to work with than relational algebra.
Structure vs. behaviour OO focuses on structure for the programmer's benefit (maintainability, extensibility, reusability, safety), while the relational model focuses on user-visible behaviour (efficiency, adaptability, fault-tolerance, liveness, logical integrity, etc.). This need not be inherent to the relational model, however: task-specific views commonly present information to subtasks, but IDEs ignore this and assume objects are what is used.
Set vs. graph relationships Relational follows set theory, but OO follows graph theory. While equivalent, access and management paradigms differ.
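As a brief illustration of the object-identity item above, the sketch below (with an invented Account class) shows that two objects with identical state remain distinct in an OO language, whereas relational tuples are distinguished only by their values.

```python
class Account:
    def __init__(self, number, balance):
        self.number = number
        self.balance = balance

a = Account("123", 100)
b = Account("123", 100)
print(a == b)   # False: by default, identity rather than state distinguishes objects
print(a is b)   # False: two separate objects despite identical state

# Relationally, two tuples with the same candidate-key value would be the same
# row; duplicate rows are not permitted in a relation.
```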
Partisans on each side therefore argue that the other technology should be abandoned. Some RDBMS DBAs even advocate procedural over OO programming, arguing that objects should not outlive the transaction that uses them; OO advocates retort that OODBMS technology should be developed to replace relational databases. Most programmers, however, take neither side and treat the object–relational impedance mismatch as merely a hurdle.
ORMs offer advantages in some situations; skeptics cite their drawbacks and note that they add little value when applied blindly.
See also
References
External links
The Object–Relational Impedance Mismatch – Agile Data Essay
The Vietnam of Computer Science – Examples of mismatch problems
Object-oriented programming
Object–relational mapping
Relational model | 0.773211 | 0.98648 | 0.762757 |
Asymptomatic | Asymptomatic (or clinically silent) is an adjective categorising the medical conditions (i.e., injuries or diseases) that patients carry but without experiencing their symptoms, despite an explicit diagnosis (e.g., a positive medical test).
Pre-symptomatic is the adjective categorising the time periods during which the medical conditions are asymptomatic.
Subclinical and paucisymptomatic are other adjectives categorising either the asymptomatic infections (i.e., subclinical infections), or the psychosomatic illnesses and mental disorders expressing a subset of symptoms but not the entire set an explicit medical diagnosis requires.
Examples
An example of an asymptomatic disease is cytomegalovirus (CMV) which is a member of the herpes virus family. "It is estimated that 1% of all newborns are infected with CMV, but the majority of infections are asymptomatic." (Knox, 1983; Kumar et al. 1984) In some diseases, the proportion of asymptomatic cases can be important. For example, in multiple sclerosis it is estimated that around 25% of the cases are asymptomatic, with these cases detected postmortem or just by coincidence (as incidental findings) while treating other diseases.
Importance
Knowing that a condition is asymptomatic is important because:
It may be contagious, and the contribution of asymptomatic and pre-symptomatic infections to the transmission level of a disease helps set the required control measures to keep it from spreading.
It is not required that a person undergo treatment. It does not cause later medical problems such as high blood pressure and hyperlipidaemia.
Be alert to possible problems: asymptomatic thiamine deficiency makes a person vulnerable to Wernicke–Korsakoff syndrome or beriberi following intravenous glucose.
For some conditions, treatment during the asymptomatic phase is vital. If one waits until symptoms develop, it is too late for survival or to prevent damage.
Mental health
Subclinical or subthreshold conditions are those for which the full diagnostic criteria are not met and have not been met in the past, although symptoms are present. This can mean that symptoms are not severe enough to merit a diagnosis, or that symptoms are severe but do not meet the criteria of a condition.
List
These are conditions for which there is a sufficient number of documented individuals that are asymptomatic that it is clinically noted. For a complete list of asymptomatic infections see subclinical infection.
Balanitis xerotica obliterans
Benign lymphoepithelial lesion
Cardiac shunt
Carotid artery dissection
Carotid bruit
Cavernous hemangioma
Chloromas (Myeloid sarcoma)
Cholera
Chronic myelogenous leukemia
Coeliac disease
Coronary artery disease
Coronavirus disease 2019
Cowpox
Diabetic retinopathy
Essential fructosuria
Flu or Influenza strains
Folliculosebaceous cystic hamartoma
Glioblastoma multiforme (occasionally)
Glucocorticoid remediable aldosteronism
Glucose-6-phosphate dehydrogenase deficiency
Hepatitis
Hereditary elliptocytosis
Herpes
Heterophoria
Human coronaviruses (common cold germs)
Hypertension (high blood pressure)
Histidinemia
HIV (AIDS)
HPV
Hyperaldosteronism
Hyperlipidaemia
Hyperprolinemia type I
Hypothyroidism
Hypoxia (some cases)
Idiopathic thrombocytopenic purpura
Iridodialysis (when small)
Lesch–Nyhan syndrome (female carriers)
Levo-Transposition of the great arteries
Measles
Meckel's diverticulum
Microvenular hemangioma
Mitral valve prolapse
Monkeypox
Monoclonal B-cell lymphocytosis
Myelolipoma
Nonalcoholic fatty liver disease
Optic disc pit
Osteoporosis
Pertussis (whooping cough)
Pes cavus
Poliomyelitis
Polyorchidism
Pre-eclampsia
Prehypertension
Protrusio acetabuli
Pulmonary contusion
Renal tubular acidosis
Rubella
Smallpox (eradicated since 1980)
Spermatocele
Sphenoid wing meningioma
Spider angioma
Splenic infarction (though not typically)
Subarachnoid hemorrhage
Tonsillolith
Tuberculosis
Type II diabetes
Typhus
Vaginal intraepithelial neoplasia
Varicella (chickenpox)
Wilson's disease
Millions of women have reported a lack of symptoms during pregnancy until the point of childbirth or the beginning of labor; they did not know they were pregnant. This phenomenon is known as cryptic pregnancy.
See also
Symptomatic
Subclinical infection
References
Medical terminology
Symptoms | 0.76719 | 0.994221 | 0.762756 |