4.09375
Classical liberalism stressed not only human rationality but also the importance of individual property rights, natural rights, the need for constitutional limitations on government, and, especially, freedom of the individual from any kind of external restraint. Classical liberalism drew upon the ideals of the Enlightenment and the doctrines of liberty supported in the American and French revolutions. The Enlightenment, also known as the Age of Reason, was characterized by a belief in the perfection of the natural order and a belief that natural laws should govern society. Logically it was reasoned that if the natural order produces perfection, then society should operate freely without interference from government. The writings of such men as Adam Smith, David Ricardo, Jeremy Bentham, and John Stuart Mill mark the height of such thinking. In Great Britain and the United States the classic liberal program, including the principles of representative government, the protection of civil liberties, and laissez-faire economics, had been more or less effected by the mid-19th century. The growth of industrial society, however, soon produced great inequalities in wealth and power, which led many persons, especially workers, to question the liberal creed. It was in reaction to the failure of liberalism to provide a good life for everyone that workers' movements and Marxism arose. Because liberalism is concerned with liberating the individual, however, its doctrines changed with the change in historical realities.
http://www.infoplease.com/encyclopedia/history/liberalism-classical-liberalism.html
4.09375
All matter can exhibit wave-like behaviour. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. Matter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. The concept that matter behaves like a wave is also referred to as the de Broglie hypothesis, having been proposed by Louis de Broglie in 1924. Matter waves are often referred to as de Broglie waves. Wave-like behaviour of matter was first experimentally demonstrated in the Davisson–Germer experiment using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules. The wave-like behaviour of matter is crucial to the modern theory of atomic structure and particle physics.
Historical context
At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell’s equations, while matter was thought to consist of localized particles (see history of wave and particle viewpoints). In 1900, this division was exposed to doubt when, investigating the theory of black-body thermal radiation, Max Planck proposed that light is emitted in discrete quanta of energy. It was thoroughly challenged in 1905. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed that light is also propagated and absorbed in quanta. Light quanta are now called photons. These quanta would have an energy given by the Planck–Einstein relation E = hν and a momentum p = E/c = hν/c = h/λ, where ν (lowercase Greek letter nu) and λ (lowercase Greek letter lambda) denote the frequency and wavelength of the light, c the speed of light, and h Planck’s constant. In the modern convention, frequency is symbolized by f, as is done in the rest of this article. Einstein’s postulate was confirmed experimentally by Robert Millikan and Arthur Compton over the next two decades.
The de Broglie hypothesis
De Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties. By rearranging the momentum equation stated in the above section, we find a relationship between the wavelength λ associated with an electron and its momentum p, through the Planck constant h: λ = h/p. The relationship is now known to hold for all types of matter: all matter exhibits properties of both particles and waves. "When I conceived the first basic ideas of wave mechanics in 1923–24, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905." — De Broglie
Experimental confirmation
Matter waves were first experimentally confirmed to occur in the Davisson–Germer experiment for electrons, and the de Broglie hypothesis has been confirmed for other elementary particles. Furthermore, neutral atoms and even molecules have been shown to be wave-like. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target.
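As a quick numerical illustration of λ = h/p (a hedged aside, not part of the original article), the following Python sketch estimates the de Broglie wavelength of electrons accelerated through about 54 volts, roughly the energy scale of the Davisson–Germer experiment; the rounded constants and the chosen voltage are assumptions for the example.

```python
import math

# Rounded physical constants in SI units (assumed for illustration)
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron rest mass, kg
q_e = 1.602e-19    # elementary charge, C

def electron_de_broglie_wavelength(accel_voltage):
    """Non-relativistic de Broglie wavelength lambda = h / p for an electron
    accelerated from rest through accel_voltage volts."""
    kinetic_energy = q_e * accel_voltage              # joules
    momentum = math.sqrt(2.0 * m_e * kinetic_energy)  # p = sqrt(2 m E)
    return h / momentum                               # metres

# Roughly the conditions of the Davisson-Germer experiment (about 54 V):
wavelength = electron_de_broglie_wavelength(54.0)
print(f"lambda ~ {wavelength * 1e9:.3f} nm")  # about 0.167 nm, comparable to atomic spacings in a nickel crystal
```

A wavelength of this size is why electron diffraction from a crystal lattice is observable at all: it is of the same order as the interatomic spacing.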
The angular dependence of the diffracted electron intensity was measured, and was determined to have the same diffraction pattern as that predicted by Bragg for x-rays. Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be exhibited only by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. When the de Broglie wavelength was inserted into the Bragg condition, the observed diffraction pattern was predicted, thereby experimentally confirming the de Broglie hypothesis for electrons. This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment showed the wave nature of matter, and completed the theory of wave–particle duality. For physicists this idea was important because it meant that not only could any particle exhibit wave characteristics, but that one could use wave equations to describe phenomena in matter if one used the de Broglie wavelength. Experiments with Fresnel diffraction and an atomic mirror for specular reflection of neutral atoms confirm the application of the de Broglie hypothesis to atoms, i.e. the existence of atomic waves which undergo diffraction, interference and allow quantum reflection by the tails of the attractive potential. Advances in laser cooling have allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the thermal de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. This effect has been used to demonstrate atomic holography, and it may allow the construction of an atom probe imaging system with nanometer resolution. The description of these phenomena is based on the wave properties of neutral atoms, confirming the de Broglie hypothesis. Recent experiments confirm the relations even for molecules and macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a de Broglie wavelength of 2.5 pm for the most probable C60 velocity. More recent experiments prove the quantum nature of molecules with a mass up to 6910 amu.
de Broglie relations
The de Broglie relations are λ = h/p and f = E/h, where h is Planck's constant. The equations can also be written as p = ħk and E = ħω, where ħ = h/2π is the reduced Planck constant, k the wavenumber and ω the angular frequency. Using the relativistic energy and momentum relations allows the equations to be written as λ = h/(γm₀v) and f = γm₀c²/h, where m₀ denotes the particle's rest mass, v its velocity, γ the Lorentz factor, and c the speed of light in a vacuum. See below for details of the derivation of the de Broglie relations. Group velocity (equal to the particle's speed) should not be confused with phase velocity (equal to the product of the particle's frequency and its wavelength). In the case of a non-dispersive medium, they happen to be equal, but otherwise they are not. Albert Einstein first explained the wave–particle duality of light in 1905. Louis de Broglie hypothesized that any particle should also exhibit such a duality. The velocity of a particle, he concluded, should always equal the group velocity of the corresponding wave. The magnitude of the group velocity is equal to the particle's speed.
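As another hedged numerical aside (not from the article), the sketch below checks the 2.5 pm figure quoted above for C60 and contrasts the group velocity v with the phase velocity c²/v for a fast electron; the molecular speed and the electron speed are assumed values chosen only for illustration.

```python
import math

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
amu = 1.661e-27  # atomic mass unit, kg
m_e = 9.109e-31  # electron rest mass, kg

# de Broglie wavelength of a C60 fullerene (about 720 amu) at an assumed
# most-probable beam speed of ~220 m/s.
m_c60 = 720.0 * amu
v_c60 = 220.0
lam = h / (m_c60 * v_c60)
print(f"C60: lambda ~ {lam * 1e12:.1f} pm")  # on the order of a few picometres

# Group velocity vs. phase velocity for an electron moving at half the speed of light.
v = 0.5 * c                                   # group velocity equals the particle speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
p = gamma * m_e * v                           # relativistic momentum
E = gamma * m_e * c ** 2                      # total energy
v_phase = E / p                               # = c**2 / v, exceeds c for any massive particle
print(f"electron at 0.5c: v_phase / c = {v_phase / c:.2f}")  # prints 2.00
```

The superluminal phase velocity printed here carries no energy or information, consistent with the discussion that follows.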
Both in relativistic and non-relativistic quantum physics, we can identify the group velocity of a particle's wave function with the particle velocity. Quantum mechanics has very accurately demonstrated this hypothesis, and the relation has been shown explicitly for particles as large as molecules. De Broglie deduced that if the duality equations already known for light were the same for any particle, then his hypothesis would hold. This means that the group velocity of the matter wave equals the particle velocity; for a free non-relativistic particle, v_g = dE/dp = d(p²/2m)/dp = p/m = v, where m is the mass of the particle and v its velocity. Also in special relativity we find that v_g = dE/dp = pc²/E = v, where v is the velocity of the particle regardless of wave behavior. By the de Broglie hypothesis, we see that the phase velocity is v_p = E/p. Using relativistic relations for energy and momentum, we have v_p = E/p = γm₀c²/(γm₀v) = c²/v = c/β, where E is the total energy of the particle (i.e. rest energy plus kinetic energy in the kinematic sense), p the momentum, γ the Lorentz factor, c the speed of light, and β the speed as a fraction of c. The variable v can either be taken to be the speed of the particle or the group velocity of the corresponding matter wave. Since the particle speed v < c for any particle that has mass (according to special relativity), the phase velocity of matter waves always exceeds c, i.e. v_p > c, and as we can see, it approaches c when the particle speed is in the relativistic range. The superluminal phase velocity does not violate special relativity, because phase propagation carries no energy. See the article on Dispersion (optics) for details. Using 4-vectors, the de Broglie relations form a single equation, P = ħK, which is frame-independent. Likewise, the relation between group/particle velocity and phase velocity is given in frame-independent form by v_p v_g = c².
Interpretations
The physical reality underlying de Broglie waves is a subject of ongoing debate. Some theories treat either the particle or the wave aspect as its fundamental nature, seeking to explain the other as an emergent property. Some, such as the hidden variable theory, treat the wave and the particle as distinct entities. Yet others propose some intermediate entity that is neither quite wave nor quite particle but only appears as such when we measure one or the other property. The Copenhagen interpretation states that the nature of the underlying reality is unknowable and beyond the bounds of scientific enquiry. Schrödinger's quantum mechanical waves are conceptually different from ordinary physical waves such as water or sound. Ordinary physical waves are characterized by undulating real-number 'displacements' of dimensioned physical variables at each point of ordinary physical space at each instant of time. Schrödinger's "waves" are characterized by the undulating value of a dimensionless complex number at each point of an abstract multi-dimensional space, for example of configuration space. "If one wishes to calculate the probabilities of excitation and ionization of atoms [M. Born, Zur Quantenmechanik der Stossvorgange, Z. f. Phys., 37 (1926), 863; [Quantenmechanik der Stossvorgange], ibid., 38 (1926), 803] then one must introduce the coordinates of the atomic electrons as variables on an equal footing with those of the colliding electron. The waves then propagate no longer in three-dimensional space but in multi-dimensional configuration space. From this one sees that the quantum mechanical waves are indeed something quite different from the light waves of the classical theory." At the same conference, Erwin Schrödinger reported likewise.
"Under [the name 'wave mechanics',] at present two theories are being carried on, which are indeed closely related but not identical. The first, which follows on directly from the famous doctoral thesis by L. de Broglie, concerns waves in three-dimensional space. Because of the strictly relativistic treatment that is adopted in this version from the outset, we shall refer to it as the four-dimensional wave mechanics. The other theory is more remote from Mr de Broglie's original ideas, insofar as it is based on a wave-like process in the space of position coordinates (q-space) of an arbitrary mechanical system. [Long footnote about manuscript not copied here.] We shall therefore call it the multi-dimensional wave mechanics. Of course this use of the q-space is to be seen only as a mathematical tool, as it is often applied also in the old mechanics; ultimately, in this version also, the process to be described is one in space and time. In truth, however, a complete unification of the two conceptions has not yet been achieved. Anything over and above the motion of a single electron could be treated so far only in the multi-dimensional version; also, this is the one that provides the mathematical solution to the problems posed by the Heisenberg-Born matrix mechanics." In 1955, Heisenberg reiterated this. "An important step forward was made by the work of Born [Z. Phys., 37: 863, 1926 and 38: 803, 1926] in the summer of 1926. In this work, the wave in configuration space was interpreted as a probability wave, in order to explain collision processes on Schrödinger's theory. This hypothesis contained two important new features in comparison with that of Bohr, Kramers and Slater. The first of these was the assertion that, in considering "probability waves", we are concerned with processes not in ordinary three-dimensional space, but in an abstract configuration space (a fact which is, unfortunately, sometimes overlooked even today); the second was the recognition that the probability wave is related to an individual process." It is mentioned above that the "displaced quantity" of the Schrödinger wave has values that are dimensionless complex numbers. One may ask what is the physical meaning of those numbers. According to Heisenberg, rather than being some ordinary physical quantity such as, for example, Maxwell's electric field intensity or mass density, the Schrödinger-wave packet's "displaced quantity" is probability amplitude. He wrote that instead of using the term 'wave packet', it is preferable to speak of a probability packet. The probability amplitude supports calculation of probability of location or momentum of discrete particles. Heisenberg recites Duane's account of particle diffraction by probabilistic quantal translation momentum transfer, which allows, for example in Young's two-slit experiment, each diffracted particle probabilistically to pass discretely through a particular slit. Thus one does not necessarily need to think of the matter wave, as it were, as 'composed of smeared matter'. These ideas may be expressed in ordinary language as follows. In the account of ordinary physical waves, a 'point' refers to a position in ordinary physical space at an instant of time, at which there is specified a 'displacement' of some physical quantity.
But in the account of quantum mechanics, a 'point' refers to a configuration of the system at an instant of time, every particle of the system being in a sense present in every 'point' of configuration space, each particle at such a 'point' being located possibly at a different position in ordinary physical space. There is no explicit definite indication that, at an instant, this particle is 'here' and that particle is 'there' in some separate 'location' in configuration space. This conceptual difference entails that, in contrast to de Broglie's pre-quantum mechanical wave description, the quantum mechanical probability packet description does not directly and explicitly express the Aristotelian idea, referred to by Newton, that causal efficacy propagates through ordinary space by contact, nor the Einsteinian idea that such propagation is no faster than light. In contrast, these ideas are so expressed in the classical wave account, through the Green's function, though it is inadequate for the observed quantal phenomena. The physical reasoning for this was first recognized by Einstein.
De Broglie's phase wave and periodic phenomenon
De Broglie's thesis started from the hypothesis, "that to each portion of energy with a proper mass m₀ one may associate a periodic phenomenon of the frequency ν₀, such that one finds: hν₀ = m₀c². The frequency ν₀ is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory." De Broglie followed his initial hypothesis of a periodic phenomenon, with frequency ν₀, associated with the energy packet. He used the special theory of relativity to find, in the frame of the observer of the electron energy packet that is moving with velocity v, that its frequency was apparently reduced to ν₀/γ, using the same notation as above. The quantity c²/v is the velocity of what de Broglie called the "phase wave". Its wavelength is λ = (c²/v)/(γν₀) = h/(γm₀v) = h/p and its frequency is ν = γν₀ = E/h. De Broglie reasoned that his hypothetical intrinsic particle periodic phenomenon is in phase with that phase wave. This was his basic matter wave conception. He noted, as above, that c²/v > c, and the phase wave does not transfer energy. While the concept of waves being associated with matter is correct, de Broglie did not leap directly to the final understanding of quantum mechanics with no missteps. There are conceptual problems with the approach that de Broglie took in his thesis that he was not able to resolve, despite trying a number of different fundamental hypotheses in different papers published while working on, and shortly after publishing, his thesis. These difficulties were resolved by Erwin Schrödinger, who developed the wave mechanics approach, starting from a somewhat different basic hypothesis.
See also: - Bohr model - Faraday wave - Kapitsa–Dirac effect - Matter wave clock - Schrödinger equation - Theoretical and experimental justification for the Schrödinger equation - Thermal de Broglie wavelength - De Broglie–Bohm theory
References: - Feynman, R.; QED: The Strange Theory of Light and Matter, Penguin 1990 Edition, page 84. - Einstein, A. (1917). Zur Quantentheorie der Strahlung, Physikalische Zeitschrift 18: 121–128. Translated in ter Haar, D. (1967). The Old Quantum Theory. Pergamon Press. pp. 167–183. LCCN 66029628. - J. P. McEvoy & Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. pp. 110–114. ISBN 1-84046-577-8. - Louis de Broglie "The Reinterpretation of Wave Mechanics" Foundations of Physics, Vol. 1 No.
1 (1970) - Mauro Dardo, Nobel Laureates and Twentieth-Century Physics, Cambridge University Press 2004, pp. 156–157 - R.B.Doak; R.E.Grisenti; S.Rehbein; G.Schmahl; J.P.Toennies; Ch. Wöll (1999). "Towards Realization of an Atomic de Broglie Microscope: Helium Atom Focusing Using Fresnel Zone Plates". Physical Review Letters 83 (21): 4229–4232. Bibcode:1999PhRvL..83.4229D. doi:10.1103/PhysRevLett.83.4229. - F. Shimizu (2000). "Specular Reflection of Very Slow Metastable Neon Atoms from a Solid Surface". Physical Review Letters 86 (6): 987–990. Bibcode:2001PhRvL..86..987S. doi:10.1103/PhysRevLett.86.987. PMID 11177991. - D. Kouznetsov; H. Oberst (2005). "Reflection of Waves from a Ridged Surface and the Zeno Effect". Optical Review 12 (5): 1605–1623. Bibcode:2005OptRv..12..363K. doi:10.1007/s10043-005-0363-9. - H.Friedrich; G.Jacoby; C.G.Meister (2002). "quantum reflection by Casimir–van der Waals potential tails". Physical Review A 65 (3): 032902. Bibcode:2002PhRvA..65c2902F. doi:10.1103/PhysRevA.65.032902. - Pierre Cladé; Changhyun Ryu; Anand Ramanathan; Kristian Helmerson; William D. Phillips (2008). "Observation of a 2D Bose Gas: From thermal to quasi-condensate to superfluid". arXiv:0805.3519. - Shimizu; J.Fujita (2002). "Reflection-Type Hologram for Atoms". Physical Review Letters 88 (12): 123201. Bibcode:2002PhRvL..88l3201S. doi:10.1103/PhysRevLett.88.123201. PMID 11909457. - D. Kouznetsov; H. Oberst; K. Shimizu; A. Neumann; Y. Kuznetsova; J.-F. Bisson; K. Ueda; S. R. J. Brueck (2006). "Ridged atomic mirrors and atomic nanoscope". Journal of Physics B 39 (7): 1605–1623. Bibcode:2006JPhB...39.1605K. doi:10.1088/0953-4075/39/7/005. - Arndt, M.; O. Nairz; J. Voss-Andreae; C. Keller; G. van der Zouw; A. Zeilinger (14 October 1999). "Wave-particle duality of C60". Nature 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170. - Gerlich, S.; S. Eibenberger; M. Tomandl; S. Nimmrichter; K. Hornberger; P. J. Fagan; J. Tüxen; M. Mayor & M. Arndt (5 April 2011). "Quantum interference of large organic molecules". Nature Communications 2 (263): 263–. Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521. PMID 21468015. - Resnick, R.; Eisberg, R. (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-87373-X. - Z.Y.Wang (2016). "Generalized momentum equation of quantum mechanics". Optical and Quantum Electronics 48 (2). doi:10.1007/s11082-015-0261-8. - Holden, Alan (1971). Stationary states. New York: Oxford University Press. ISBN 0-19-501497-9. - Williams, W.S.C. (2002). Introducing Special Relativity, Taylor & Francis, London, ISBN 0-415-27761-2, p. 192. - de Broglie, L. (1970). The reinterpretation of wave mechanics, Foundations of Physics 1(1): 5–15, p. 9. - Born, M., Heisenberg, W. (1928). Quantum mechanics, pp. 143–181 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, p. 166; this translation at p. 425 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8. - Schrödinger, E. (1928). Wave mechanics, pp. 
185–206 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, pp. 185–186; this translation at p. 447 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8. - Heisenberg, W. (1955). The development of the interpretation of the quantum theory, pp. 12–29, in Niels Bohr and the Development of Physics: Essays dedicated to Niels Bohr on the occasion of his seventieth birthday, edited by W. Pauli, with the assistance of L. Rosenfeld and V. Weisskopf, Pergamon Press, London, p. 13. - Heisenberg, W. (1927). Über den anschlaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198, translated by eds. Wheeler, J.A., Zurek, W.H. (1983), at pp. 62–84 of Quantum Theory and Measurement, Princeton University Press, Princeton NJ, p. 73. Also translated as 'The actual content of quantum theoretical kinematics and mechanics' here - Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart, F. C. Hoyt, University of Chicago Press, Chicago IL, pp. 77–78. - Fine, A. (1986). The Shaky Game: Einstein Realism and the Quantum Theory, University of Chicago, Chicago, ISBN 0-226-24946-8 - Howard, D. (1990). "Nicht sein kann was nicht sein darf", or the prehistory of the EPR, 1909–1935; Einstein's early worries about the quantum mechanics of composite systems, pp. 61–112 in Sixty-two Years of Uncertainty: Historical Philosophical and Physical Inquiries into the Foundations of Quantum Mechanics, edited by A.I. Miller, Plenum Press, New York, ISBN 978-1-4684-8773-2. - de Broglie, L. (1923). Waves and quanta, Nature 112: 540. - de Broglie, L. (1924). Thesis, p. 8 of Kracklauer's translation. - Medicus, H.A. (1974). Fifty years of matter waves, Physics Today 27(2): 38–45. - MacKinnon, E. (1976). De Broglie's thesis: a critical retrospective, Am. J. Phys. 44: 1047–1055. - Espinosa, J.M. (1982). Physical properties of de Broglie's phase waves, Am. J. Phys. 50: 357–362. - Brown, H.R., Martins, R.deA. (1984). De Broglie's relativistic phase waves and wave groups, Am. J. Phys. 52: 1130–1140. - Bacciagaluppi, G., Valentini, A. (2009). Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8, pp. 30–88. - Martins, Roberto de Andrade (2010). "Louis de Broglie's Struggle with the Wave-Particle Dualism, 1923-1925". Quantum History Project, Fritz Haber Institute of the Max Planck Society and the Max Planck Institute for the History of Science. Retrieved 2015-01-03. - L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis (Paris), 1924; L. de Broglie, Ann. Phys. (Paris) 3, 22 (1925). English translation by A.F. Kracklauer. And here. - Broglie, Louis de, The wave nature of the electron Nobel Lecture, 12, 1929 - Tipler, Paul A. and Ralph A. Llewellyn (2003). Modern Physics. 4th ed. New York; W. H. Freeman and Co. ISBN 0-7167-4345-0. pp. 203–4, 222–3, 236. - Zumdahl, Steven S. (2005). Chemical Principles (5th ed.). Boston: Houghton Mifflin. ISBN 0-618-37206-7. - An extensive review article "Optics and interferometry with atoms and molecules" appeared in July 2009: http://www.atomwave.org/rmparticle/RMPLAO.pdf. 
- "Scientific Papers Presented to Max Born on his retirement from the Tait Chair of Natural Philosophy in the University of Edinburgh", 1953 (Oliver and Boyd)
https://en.wikipedia.org/wiki/De_Broglie_Wavelength
4.125
The Modern Language Association Style Manual is one of the most common style guides and often the first one used by students. Students writing research papers in the arts and humanities, as well as in middle and high school, are often required to use MLA style. MLA provides guidance for formatting papers and citing references, ensuring that your paper is consistent and easy to read. The first page of an MLA paper has a double-spaced header in the top left corner. Type your name on the first line, with your professor's name on the following line. The third line is for the class title and the fourth line should have the date formatted with the day first, such as 10 August 2013. Each page should have your last name followed by the page number in the upper right hand corner, and your paper needs a title centered on the first page immediately after the header. MLA papers use 1-inch margins all around and an easy-to-read font that is distinct from italics, such as Times New Roman. Your paper must be typed and printed on 8.5-by-11-inch paper. Use italics for titles of books, magazines and other references. MLA papers do not use footnotes. Instead, use endnotes, and put the endnotes page immediately before your Works Cited page. On your endnotes page, type and center the word Endnotes. MLA style does not mandate an endnotes page. Instead endnotes are an option for providing further clarification and details. MLA papers require double-spacing. This makes it easier to edit them and ensures that your professor can add notes to your paper. Paragraphs should begin on a new line without any additional spacing, but must be indented one-half inch. On most computers, pressing the tab key will yield the proper indentation. If you insert a quotation that is more than four lines long, the quotation should be set off as a block, indented from the left margin, and is typically introduced with a colon. When you cite specific facts and figures in your paper or quote a source, use parenthetical citations. Put the author of the work first, followed by the page number. For example, if you're quoting page 9 of a book by John Smith, your citation would look like this: (Smith 9). List all of your sources in alphabetical order on a Works Cited page at the end of your paper, with Works Cited centered and typed at the top of the page. Cite references using a hanging indent, with all lines after the first line indented. The author's name, last name first, goes first. If you are citing a specific chapter or article title, put the title in quotation marks, followed by the italicized name of the source. For books, list the city of publication, publisher's name and year of publication. For articles, list the volume and issue number and date of publication. Conclude with the page numbers you used or, if citing an entire book, omit the page numbers. - Purdue Online Writing Lab: MLA Formatting and Style Guide - California State University at Los Angeles: MLA Format - The University of Wisconsin - Madison Writing Center: MLA Documentation Guide - MLA Handbook for Writers of Research Papers, 7th Edition; Modern Language Association
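As a rough illustration of the citation patterns described above (hypothetical helper functions, not part of any official MLA tooling), this short Python sketch assembles a parenthetical citation and a simple Works Cited entry for a book; the underscores stand in for italics.

```python
def parenthetical_citation(author_last, page):
    """In-text citation as described above: author's last name, then the page number."""
    return f"({author_last} {page})"

def works_cited_book(author_last, author_first, title, city, publisher, year):
    """A basic book entry in the style described above: author, italicized title
    (represented with underscores here), city, publisher, and year."""
    return f"{author_last}, {author_first}. _{title}_. {city}: {publisher}, {year}."

print(parenthetical_citation("Smith", 9))
# (Smith 9)
print(works_cited_book("Smith", "John", "A Sample Book", "New York", "Example Press", 2013))
# Smith, John. _A Sample Book_. New York: Example Press, 2013.
```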
http://classroom.synonym.com/mla-guidelines-students-1345.html
4.40625
The Cockroach Life Cycle and Behavior As with many animals, cockroach reproduction relies on eggs from a female and sperm from a male. Usually, the female releases pheromones to attract a male, and in some species, males fight over available females. But exactly what happens after the male deposits his sperm into the female varies from species to species. Most roaches are oviparous -- their young grow in eggs outside of the mother's body. In these species, the mother roach carries her eggs around in a sac called an ootheca, which is attached to her abdomen. The number of eggs in each ootheca varies from species to species. Many female roaches drop or hide their ootheca shortly before the eggs are ready to hatch. Others continue to carry the hatching eggs and care for their young after they are born. But regardless of how long the mother and her eggs stay together, the ootheca has to stay moist in order for the eggs to develop. Other roaches are ovoviviparous. Rather than growing in an ootheca outside of the mother's body, the roaches grow in an ootheca inside the mother's body. In a few species, the eggs grow inside the mother's uterus without being surrounded by an ootheca. The developing roaches inside feed on the eggs' yolks, just as they would if the eggs were outside the body. One species is viviparous -- its young develop in fluid in the mother's uterus the way most mammals do. Ovoviviparous and viviparous species give birth to live young. Whether mother roaches care for their young also varies from one species to another. Some mothers hide or bury their ootheca and never see their offspring. Others care for their offspring after birth, and scientists believe that some offspring have the ability to recognize their mothers. The number of young that one roach can bear also varies considerably. A German cockroach and her young can produce 300,000 more roaches in one year. An American cockroach and her young can produce a comparatively small 800 new roaches per year. Newly hatched roaches, known as nymphs, are usually white. Shortly after birth, they turn brown, and their exoskeletons harden. They begin to resemble small, wingless adult roaches. Nymphs molt several times as they become adults. The period between each molt is known as an instar. Each instar is progressively more like an adult cockroach. In some species, this process takes only a few weeks. In others, like the oriental cockroach, it takes between one and two years. The overall life span of cockroaches differs as well -- some live only a few months while others live for more than two years. Cockroaches generally prefer warm, humid, dark areas. In the wild, they are most common in tropical parts of the world. They are omnivores, and many species will eat virtually anything, including paper, clothing and dead bugs. A few live exclusively on wood, much like termites do. Although cockroaches are closely related to termites, they are not as social as termites are. Termite colonies have an organized social structure in which different members have different roles. Cockroaches do not have these types of roles, but they do tend to prefer living in groups. A study at the Free University of Brussels in Belgium revealed that groups of cockroaches make collective decisions about where to live. When one space was large enough for all of the cockroaches in the study, the cockroaches all stayed there. 
But when the large space was not available, the roaches divided themselves into equal groups to fit in the smallest number of other enclosures. Another study suggests that cockroaches have a collective intelligence made up of the decisions of individual roaches. European scientists developed a robot called InsBot that was capable of mimicking cockroach behavior. The researchers applied cockroach pheromones to the robot so real roaches would accept it. By taking advantage of roaches' tendencies to follow each other, InsBot was able to influence the behavior of entire groups, including convincing roaches to leave the shade and move into lighted areas. Scientists theorize that similar robots could be used to herd animals or to control cockroach populations. In addition to robotic intervention, there are several steps that people can take to reduce or eliminate cockroach populations. We'll look at these next.
http://animals.howstuffworks.com/insects/cockroach2.htm
4.09375
The importance of the free body diagram in determining forces and stresses is as follows:
• A free body diagram represents all the forces, in both magnitude and direction, acting on an object when it is isolated from the system. These forces include reaction forces, self-weight, tension in a string, and so on.
• It is straightforward to find the unknown forces and moments by applying the static equilibrium principles to a free body diagram.
• Stresses can then be determined from the calculated forces by using the stress–force equation.
• With the help of free body diagrams, engineers can use the mathematical equations relating the loads, the nature of the loads, and the geometry involved to find the stress at various points easily.
The following example gives a brief explanation of the engineering application of the free body diagram.
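As a hedged illustration of that point (an assumed example, not the textbook's worked problem), the Python sketch below isolates a simply supported beam carrying a point load, applies the equilibrium equations (sum of forces = 0, sum of moments = 0) to find the support reactions, and then applies the stress–force relation sigma = F / A with an assumed cross-sectional area.

```python
# Assumed numbers for illustration only (not from the textbook problem).
L = 4.0        # beam length between supports, m
a = 1.5        # distance of the point load from the left support, m
P = 10_000.0   # applied point load, N

# Free body diagram of the beam: the load P and two unknown reactions R_left, R_right.
# Sum of moments about the left support = 0  ->  R_right * L - P * a = 0
R_right = P * a / L
# Sum of vertical forces = 0  ->  R_left + R_right - P = 0
R_left = P - R_right
print(f"Reactions: R_left = {R_left:.0f} N, R_right = {R_right:.0f} N")

# Stress from the stress-force relation for a member carrying an axial force.
F = R_left      # suppose this reaction is carried by an axial support member (assumption)
A = 5e-4        # assumed cross-sectional area, m^2
sigma = F / A   # normal stress, Pa
print(f"Axial stress in the member: {sigma / 1e6:.1f} MPa")
```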
http://www.chegg.com/homework-help/fundamentals-of-machine-component-design-5th-edition-chapter-4-problem-76p-solution-9781118012895
4.03125
This interactive activity from NOVA challenges students' knowledge of igloo construction. The quiz format includes questions concerning where igloos were traditionally built, the best type of snow for building, and the shape on which these traditional Canadian Inuit structures were modeled. Detailed explanations provide further insight into how these ingenious snow shelters enabled entire families to survive the brutal Arctic winters. This interactive activity requires Adobe Flash Player.
http://knpb.pbslearningmedia.org/resource/ipy07.sci.engin.design.igloo101/igloo-101/
4.4375
A bell-shaped graph, or bell curve, displays the distribution of variability for a given data set. The most well-known example, the IQ graph, shows that the average intelligence of humans falls around a mean score of 100 and trails off in both directions around that center score. You can generate your own bell curve graphs by calculating a standard deviation and mean for any collected set of data. Gather your data of interest. For example, if you study economics, you may wish to collect the average annual income of citizens of a given state. To ensure your graph looks more bell-shaped, aim for a high population sample, such as forty or more individuals. Calculate your sample mean. The mean is an average of all of your samples. Therefore, add up your total data set and divide by the population sample size, n. Compute your standard deviation. To do this, subtract your mean from each individual datum. Then square the result. Add up all of these squared results and divide that sum by n - 1, which is your sample size minus one. Lastly, take the square root of this result. The standard deviation formula reads as follows: s = sqrt[ sum( (data - mean)^2 ) / (n - 1) ]. Plot your mean along the x-axis. Make increments from your mean spaced by a distance of one, two and three times your standard deviation. For example, if your mean is 100 and your standard deviation is 15, then you would have a marking for your mean at x = 100, another important marking around x = 115 and x = 75 (100 ± 15), another around x = 130 and x = 60 (100 ± 2(15)) and a final marking around x = 145 and x = 45 (100 ± 3(15)). Sketch the bell curve. The highest point will be at your mean. The y-value of your mean does not precisely matter, but as you smoothly descend left and right to your next incremental marking, you should reduce the height by about one-third. Once you pass your third standard deviation left and right of your mean, the graph should have a height of almost zero, tracing just above the x-axis as it continues in its respective direction. - A graphing calculator or spreadsheet can help produce means and standard deviations faster than doing all calculations by hand.
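As a minimal sketch of the procedure above (assuming NumPy and Matplotlib are available; the income figures are invented), the following Python code computes the sample mean and standard deviation and draws the corresponding bell curve, with dashed markings at one, two, and three standard deviations from the mean.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented sample data (e.g., annual incomes); replace with your own collected values.
data = np.array([42_000, 55_000, 61_000, 48_000, 73_000, 50_000,
                 66_000, 58_000, 45_000, 69_000, 52_000, 60_000], dtype=float)

mean = data.mean()
s = data.std(ddof=1)  # sample standard deviation: divides by n - 1

# Evaluate the normal (bell) curve over mean +/- 3 standard deviations.
x = np.linspace(mean - 3 * s, mean + 3 * s, 400)
y = np.exp(-0.5 * ((x - mean) / s) ** 2) / (s * np.sqrt(2 * np.pi))

plt.plot(x, y)
plt.axvline(mean, color="black")
for k in (1, 2, 3):  # mark mean +/- 1, 2, 3 standard deviations
    plt.axvline(mean + k * s, linestyle="--", color="gray")
    plt.axvline(mean - k * s, linestyle="--", color="gray")
plt.title(f"Bell curve: mean = {mean:,.0f}, s = {s:,.0f}")
plt.show()
```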
http://classroom.synonym.com/create-bell-curve-graph-2797.html
4.3125
Though it is often viewed both as the archetypal Anglo-Saxon literary work and as a cornerstone of modern literature, Beowulf has a peculiar history that complicates both its historical and its canonical position in English literature. By the time the story of Beowulf was composed by an unknown Anglo-Saxon poet around 700 a.d., much of its material had been in circulation in oral narrative for many years. The Anglo-Saxon and Scandinavian peoples had invaded the island of Britain and settled there several hundred years earlier, bringing with them several closely related Germanic languages that would evolve into Old English. Elements of the Beowulf story—including its setting and characters—date back to the period before the migration. The action of the poem takes place around 500 a.d. Many of the characters in the poem—the Swedish and Danish royal family members, for example—correspond to actual historical figures. Originally pagan warriors, the Anglo-Saxon and Scandinavian invaders experienced a large-scale conversion to Christianity at the end of the sixth century. Though still an old pagan story, Beowulf thus came to be told by a Christian poet. The Beowulf poet is often at pains to attribute Christian thoughts and motives to his characters, who frequently behave in distinctly un-Christian ways. The Beowulf that we read today is therefore probably quite unlike the Beowulf with which the first Anglo-Saxon audiences were familiar. The element of religious tension is quite common in Christian Anglo-Saxon writings (The Dream of the Rood, for example), but the combination of a pagan story with a Christian narrator is fairly unusual. The plot of the poem concerns Scandinavian culture, but much of the poem’s narrative intervention reveals that the poet’s culture was somewhat different from that of his ancestors, and that of his characters as well. The world that Beowulf depicts and the heroic code of honor that defines much of the story is a relic of pre–Anglo-Saxon culture. The story is set in Scandinavia, before the migration. Though it is a traditional story—part of a Germanic oral tradition—the poem as we have it is thought to be the work of a single poet. It was composed in England (not in Scandinavia) and is historical in its perspective, recording the values and culture of a bygone era. Many of those values, including the heroic code, were still operative to some degree when the poem was written. These values had evolved to some extent in the intervening centuries and were continuing to change. In the Scandinavian world of the story, tiny tribes of people rally around strong kings, who protect their people from danger—especially from confrontations with other tribes. The warrior culture that results from this early feudal arrangement is extremely important, both to the story and to our understanding of Saxon civilization. Strong kings demand bravery and loyalty from their warriors, whom they repay with treasures won in war. Mead-halls such as Heorot in Beowulf were places where warriors would gather in the presence of their lord to drink, boast, tell stories, and receive gifts. Although these mead-halls offered sanctuary, the early Middle Ages were a dangerous time, and the paranoid sense of foreboding and doom that runs throughout Beowulf evidences the constant fear of invasion that plagued Scandinavian society. Only a single manuscript of Beowulf survived the Anglo-Saxon era.
For many centuries, the manuscript was all but forgotten, and, in the 1700s, it was nearly destroyed in a fire. It was not until the nineteenth century that widespread interest in the document emerged among scholars and translators of Old English. For the first hundred years of Beowulf’s prominence, interest in the poem was primarily historical—the text was viewed as a source of information about the Anglo-Saxon era. It was not until 1936, when the Oxford scholar J. R. R. Tolkien (who later wrote The Hobbit and The Lord of the Rings, works heavily influenced by Beowulf) published a groundbreaking paper entitled “Beowulf: The Monsters and the Critics” that the manuscript gained recognition as a serious work of art. Beowulf is now widely taught and is often presented as the first important work of English literature, creating the impression that Beowulf is in some way the source of the English canon. But because it was not widely read until the 1800s and not widely regarded as an important artwork until the 1900s, Beowulf has had little direct impact on the development of English poetry. In fact, Chaucer, Shakespeare, Marlowe, Pope, Shelley, Keats, and most other important English writers before the 1930s had little or no knowledge of the epic. It was not until the mid-to-late twentieth century that Beowulf began to influence writers, and, since then, it has had a marked impact on the work of many important novelists and poets, including W. H. Auden, Geoffrey Hill, Ted Hughes, and Seamus Heaney, the 1995 recipient of the Nobel Prize in Literature, whose recent translation of the epic is the edition used for this SparkNote. Beowulf is often referred to as the first important work of literature in English, even though it was written in Old English, an ancient form of the language that slowly evolved into the English now spoken. Compared to modern English, Old English is heavily Germanic, with little influence from Latin or French. As English history developed, after the French Normans conquered the Anglo-Saxons in 1066, Old English was gradually broadened by offerings from those languages. Thus modern English is derived from a number of sources. As a result, its vocabulary is rich with synonyms. The word kingly, for instance, descends from the Anglo-Saxon word cyning, meaning “king,” while the synonym royal comes from a French word and the synonym regal from a Latin word. Fortunately, most students encountering Beowulf read it in a form translated into modern English. Still, a familiarity with the rudiments of Anglo-Saxon poetry enables a deeper understanding of the Beowulf text. Old English poetry is highly formal, but its form is quite unlike anything in modern English. Each line of Old English poetry is divided into two halves, separated by a caesura, or pause, and is often represented by a gap on the page, as the following example demonstrates: Setton him to heafdon hilde-randas. . . . Because Anglo-Saxon poetry existed in oral tradition long before it was written down, the verse form contains complicated rules for alliteration designed to help scops, or poets, remember the many thousands of lines they were required to know by heart. Each of the two halves of an Anglo-Saxon line contains two stressed syllables, and an alliterative pattern must be carried over across the caesura.
Any of the stressed syllables may alliterate except the last syllable; so the first and second syllables may alliterate with the third together, or the first and third may alliterate alone, or the second and third may alliterate alone. For instance: Lade ne letton. Leoht eastan com. Lade, letton, leoht, and eastan are the four stressed words. In addition to these rules, Old English poetry often features a distinctive set of rhetorical devices. The most common of these is the kenning, used throughout Beowulf. A kenning is a short metaphorical description of a thing used in place of the thing’s name; thus a ship might be called a “sea-rider,” or a king a “ring-giver.” Some translations employ kennings almost as frequently as they appear in the original. Others moderate the use of kennings in deference to a modern sensibility. But the Old English version of the epic is full of them, and they are perhaps the most important rhetorical device present in Old English poetry.
http://www.sparknotes.com/lit/beowulf/context.html
4
The discovery of a planet outside our solar system used to be so important that a big announcement from NASA or other professional planet-finders would usually bring news of a single planet, or perhaps a few. Not so anymore. The more we look, the more we find. Exoplanet discoveries are so plentiful these days that leading groups have started unveiling them by the dozen. That is just what scientists from NASA's Kepler mission did January 26, when the team announced the discovery of 26 newfound planets orbiting distant stars. Astronomers have now identified more than 700 exoplanets, all of them in the past two decades or so. Kepler illustrates how new technologies have improved our ability to discover faraway worlds. It is a space-based observatory that tracks the brightness of more than 150,000 stars near the constellation Cygnus. For those stars hosting planetary systems, and for those planetary systems whose orbital plane is aligned with Kepler's line of sight, the spacecraft registers a periodic dip in starlight when an orbiting planet passes across the star's face. Using this method, mission scientists have identified more than 2,300 planetary candidates awaiting follow-up observation and confirmation. (Some astrophysical phenomena, such as a pair of eclipsing binary stars in the background, can mimic a planetlike dimming of starlight.) The Kepler team confirmed most of the latest batch of planets—11 planetary systems containing up to five worlds apiece—by measuring transit timing variations, or orbital disturbances caused by the gravitational pull of planetary neighbors. The newfound Kepler worlds are depicted as green orbs in the graphic above, with the planets of the solar system in blue for comparison. Purple dots are possible additional planets that have not yet been validated. The orbital spacing of the planets is not to scale; all 26 of the Kepler planets orbit closer to their host stars than Venus, the second-innermost planet in the solar system, does to the sun. The exoplanets range from roughly 1.5 times the diameter of Earth (marked as "Sol d" here, as it would be under exoplanetary nomenclature) to approximately 1.3 times the diameter of Jupiter (Sol f).
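The transit signal described above can be made concrete with a small, illustrative calculation (the radius-ratio relation is a standard approximation, not something stated in the article): the fractional dip in starlight is roughly the square of the planet-to-star radius ratio.

```python
# Rough transit-depth estimates for a Sun-like star (illustrative, not from the article).
R_SUN_KM = 696_000.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    """Approximate fractional drop in brightness while the planet crosses the star's face."""
    return (planet_radius_km / star_radius_km) ** 2

for name, radius_km in [("Earth-size planet", R_EARTH_KM),
                        ("1.5 Earth radii", 1.5 * R_EARTH_KM),
                        ("Jupiter-size planet", R_JUPITER_KM)]:
    print(f"{name}: dip of about {transit_depth(radius_km) * 100:.4f}% of the star's light")
```

Dips this small, hundredths of a percent for Earth-size worlds, are why a space-based photometer such as Kepler, staring at many stars at once, is needed to detect them.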
http://www.scientificamerican.com/gallery/dozens-and-dozens-nasas-kepler-spies-packs-of-new-exoplanets/?shunter=1455118371677
4.0625
These lessons, by one of our most consistent FaithWriters' Challenge Champions, should not be missed. So we're making a permanent home for them here. A dialect is a pattern of speech that is found in a particular region. It is not a separate language, but it may differ from the standard language of the country in its vocabulary, its pronunciation, and its sentence structure. Some examples of dialect might be the “Jersey Shore” speech heard in the television show of that same name, a Cockney accent from London’s east end, or the “Yooper” dialect from my own home state of Michigan. If I make the definition a bit broader and include accents, the list of possibilities is virtually endless. Writers frequently have one or more of their characters speak in a dialect. This can often be a good thing, but there are also some pitfalls for writers to avoid. I’ll try to cover both of those in this lesson. When you write your character’s voice in a dialect, you are telling your reader several things about that character. Dialect can indicate a character’s position in life: her level of education, her age, her economic status, her geographical background—and if I add jargon (the vocabulary particular to a profession or some other distinct group), it might even indicate her occupation. So dialect can aid you in characterization—these are things that you do not have to tell the reader, saving you more words that you can then use to tell your story. Dialect is also a good tool for giving your characters unique voices. If you have two characters who can be similarly described—both, for example, are middle-aged men—then giving one of them a dialect or an accent will help your readers to keep track of who’s who. Additionally, well-written dialect can give your writing a unique rhythm, and it can be really fun to read. I recommend that you give writing in dialect a try if you’re looking for a way to stretch yourself or to make your piece stand out. However—there are a few warnings for those who write in a dialect. 1. Be sure that you get it right. If it’s not a dialect that is very familiar to you, spend some time listening to speakers of that dialect, and perhaps transcribing what you hear. If that’s not possible for reasons of time or geography, find something that’s written with that dialect and take note of how it is written. If you get it wrong, it will reflect on your writing, and someone who is more familiar with that dialect will call you on it. Although King James English isn’t exactly a dialect, I can use it to demonstrate this point. We’ve all heard people with only a nodding acquaintance with the language of that version of the Bible when they attempt what they think is “biblical” language. Thou is makething me laughest. Ye shouldeth not doeth that. It makes you cringe, doesn’t it? That’s the way a poorly-written or inauthentic dialect will sound to those who are familiar with its rhythms. 2. Be careful not to overdo the writing of non-standard phrases; your reader will get weary of the work they have to do to mentally translate the dialect. Take a look at this, written in a bad approximation of a southern dialect: Ah jes’ couldn’ belive mah eyes! Lawd, thet young’un were a sight, ‘n’ ah never knewed whut dun hit me. She wuz so purty it made me wanna slap mah muther, ‘n’ she wuz jist a-grinnin’ an’ a-laughin’ et me lak nobuddy’s bidness. That’s exhausting to read, isn’t it? 
I’d suggest that if you have a character who speaks in a dialect, you should pick a few words or linguistic quirks that are suggestive of that dialect--just enough to give your reader the idea of that character’s speech. 3. Finally, you should be very careful—very, very careful—that your rendering of dialect does not come across as a stereotype, exaggeration, or satire of any particular group’s speech patterns, and that nothing you write could be considered insulting to members of the group for whom that dialect is their native tongue. If you’re not sure, have someone from that group read it. They will tell you if it is accurate, and also if it is offensive. HOMEWORK: (Choose one or more of the following exercises) 1. Write a paragraph or two with some dialect or accent. 2. Link to a challenge entry you wrote with a dialect or accent, and tell whether you think you did it effectively. Also tell why you used the dialect—what did it bring to the story? 3. Tell about a book you’ve read that uses dialect effectively. 4. Ask a question or make a comment about the use of dialect. Finally, I'd like to encourage you to check out the Critique Circle. I know that Mike and Bea have enlisted the help of several seasoned writers and editors to stop by there frequently and to critique new additions. There's a new category there for "Challenge Entries"--a great place to put that entry that you loved, but didn't score well, for a more in-depth critique. Ha, I just discovered I can access FW on my e-reader! I have tried on my smartphone, and it shuts down my browser every time. I agree with all your points, Jan. I would add that if you incorporate terms from a particular region, along with the dialect, that it is very clear from the context what you are referring to. An example of that would be an eastern Canadian (specifically a Newfoundlander) saying something to the effect, "She has a tongue like a logan." Meaning she talks a lot, and often gossips. But the reference is to a boot called a logan, a tall, lace-up boot worn by a fisherman, with a very long tongue. A non-maritimer wouldn't know that -- I may be saying it incorrectly after these years. It's been a while since I heard Mrs. Meta -- so while it's colourful, there's no meaning if the reader is unacquainted with the dialect. But revelation must be subtle. There's nothing more annoying than a series of definitions injected into the text. Dialect must always flow naturally. Here's a link to a story I wrote with dialect. (I have forgotten how to link the title to the link.) http://www.faithwriters.com/wc-article- ... p?id=31783 I used dialect for two reasons. To show place and time. To give my characters life. Recently, I've read several books with heavy use of dialect. Cane River by Lalita Tademy. Wonderful flow which kept the reader 'in time' and empathetic to the characters. Another is The Birth House by Ami McKay. Takes place in maritime Canada just before WW1. I would consider each a brilliant example of bringing authenticity to the stories through the use of dialect and the vernacular without its being forced or awkward. "What remains of a story after it is finished? Another story..." Elie Wiesel Writing in dialect can be fun - but YES, it can definitely be overdone. If I'm spending so much time deciphering the words that I can't concentrate on the story, it's time to tone it down a bit. But, I'll admit, it's VERY hard to figure out where you are on that slippery slope! Here's a challenge entry I wrote with dialect Ol' Hairy Ears 'n Me.
I used the dialect for characterization - but also, if I recall, because there was a discussion here on the boards about dialect, and I went back through my challenge entries and realized, as of that time, I had never done it. Looking back on it now, I think I may have overdone it a bit - though I will say that I wrote this almost six years ago, and that's a good enough excuse for me! It definitely gave the piece a fun angle. Oh, I love regional dialects. Here is a link to "The Rev'rend Makes a Sick Call". It is a retelling of one of my favorite Bible stories which I always imagined in an Appalachian setting. As mentioned in some of the comments, I made a number of mistakes. Many FW members are more familiar with Appalachian dialect than I: This is a link to "The Cardinal Visits the Bishop", written in Scottish dialect, which lies somewhere between Scottish English and the Scots language. In this story, an Italian cardinal arrives in Scotland to find that his perfect Oxford English is not considered "proper English" by the Scottish bishop. I lived in Scotland for two years, and so have more of an ear for this dialect: I, like many Americans, love to hear Scottish people talk---even when we can't understand a thing they are saying. Ann, as always, your story was lovely, and it was no effort at all to read your dialect. In fact, even though it's not one that I've ever heard, I felt as if I could hear it as I was reading it, and that's exactly the desired effect. Thanks for the link! Jo, I don't think you overdid it at all. A great story that I actually remember reading first time around. Thanks for sharing it again! I wrote one just to play with the idea of dialect. The POV was voiced in a back hills, uneducated American dialect and he is telling about trying to communicate with a refined, educated relative in England. My challenge in this one was in writing the Englishman's dialect as repeated by the American (with much lost in the effort). I hope the "proper" English feel came through even though it was intentionally mutilated. The title is "The Problem With Englishmen" and the link is http://www.faithwriters.com/wc-article- ... p?id=27512. I figure there must have been some dialect overkill because it didn't place well at all. The thought of writing dialects freaks me out, but I sure enjoy reading stories by good writers who do it well. I'm reading The Adventures of Huckleberry Finn out loud to my kids right now, and I love reading Huck's voice, but Jim's is crazy hard to read out loud, and half of the time I don't know what he's saying. I don't really think that this counts as dialect, but I tried to write a story in the conversational tone of a precocious 11 year-old. My FaithWriters profile: RachelM FW member profile Rachel, you're right that this isn't really a dialect--but still, it makes an important point. However a writer imagines their characters, they need to be sure that their characters speak authentically. I've read lots of entries in which children speak far too wisely for their years (less often, far too young for their years). And I've read entries that featured teens, or doctors, or teachers, or truckers, or any number of other identifiers, in which their speech did not ring true. That's not the case with this story--you got it right! Here's an excerpt from one of my favorite classics, The Grapes of Wrath: Joad looked at (the cat), and his face was puzzled. "I know what's the matter," he cried. "That cat jus' made me figger what's wrong." 
Seems to me there's lots wrong," said Casy. "No, it's more'n jus' this place. Whyn't that cat jus' move in with some neighbors--with Rances. How come nobody ripped some lumber off this house? Ain't been nobody here for three-four months, an' nobody's stole no lumber. Nice planks on the barn shed, plenty good planks on the house, winda frames--an' nobody's took 'em. That ain't right. That's what was botherin' me, an' I couldn't catch hold of (the cat). "I don' know. Seems like maybe there ain't any neighbors..." I love this book. The story, the characters, and even the strong dialect depicting the poor lower class in all their humanity, as the author rides the line so dangerously close to being offensive. (Or maybe Steinbeck stepped over the line once or twice). Thanks for the lesson, Jan. Dialect is something I struggle with in my own writing and I have a question. Where can a writer find a really great resource for studying a Texas drawl? Theresa, I wish I knew the answer to that, but I don't. I think maybe you should take a nice vacation to Texas! My favorite example of powerful dialect writing is in "To Kill a Mockingbird," especially during the courtroom scenes when Mayella Ewing is speaking. When I was teaching, I team-taught for a few years with a male English teacher, and when we got to the courtroom chapters, he and I read those scenes to the class as if we were Mayella and Atticus. Could be my favorite classroom memory. Dialect is very hard! I don’t think I’ve ever written in it, but here are a few thoughts triggered by the lesson and others’ comments. 1. I have written in “King James English” once (in this piece) and had an interesting experience. I made sure all the pronouns and verb endings were correct. It’s really not that hard—all you have to understand is 1st, 2nd, 3rd person; singular and plural pronouns; and nominative, objective, and possessive case. But I over-thought things and decided that some readers would think some of the correct usages were incorrect, so I deliberately changed some correct ones to incorrect ones. As you can see from the comments, someone—I don’t remember who (cough cough Jan cough cough)—busted me. 2. As folks may remember, Twain wrote this at the beginning of Adventures of Huckleberry Finn: 3. Dialects can change quickly, often within 20 or 30 miles. When I was a forester in North and South Carolina, I loved to hear the differences over the areas I worked. Typically, I would be assigned to an 8 - 10 county area and the dialects would vary greatly. One town, spelled “Whiteville,” was pronounced “Whahdvul” by the locals. In that same area, “whatever” meant “what,” and if you really wanted to convey “whatever,” you had to say “whatever what.” Same with “when,” “whenever,” and “whenever when”; and other “-ever” word groups. In and around Charleston, some people say “case quarter” for “quarter” (the coin). I could go on and on, as could we all; but my point is how SMALL an area a dialect can be accurate for. "When the Round Table is broken every man must follow Galahad or Mordred; middle things are gone." C.S. Lewis “The chief purpose of life … is to increase according to our capacity our knowledge of God by all the means we have, and to be moved by it to praise and thanks. To do as we say in the Gloria in Excelsis ... We praise you, we call you holy, we worship you, we proclaim your glory, we thank you for the greatness of your splendor.” J.R.R. Tolkien The Adventures of Huckleberry Finn was another one that I loved reading aloud to my students. 
So much fun! (Sorry for dinging you on the King James English.)
http://www.faithwriters.com/Boards/phpBB2/viewtopic.php?f=67&t=38089
4.15625
A helix (pl: helices), from the Greek word έλικας/έλιξ, is a three-dimensional, twisted shape. Common objects formed like a helix are a spring, a screw, and a spiral staircase (though the last would be more correctly called helical). Helices are important in biology, as the DNA molecule is formed as two intertwined helices, and many proteins have helical substructures, known as alpha helices. Helices can be either right-handed or left-handed. With the line of sight being the helical axis, if clockwise movement of the helix corresponds to axial movement away from the observer, then it is a right-handed helix. If counter-clockwise movement corresponds to axial movement away from the observer, it is a left-handed helix. Handedness (or chirality) is a property of the helix, not of the perspective: a right-handed helix cannot be turned or flipped to look like a left-handed one unless it is viewed through a mirror, and vice versa. A double helix typically consists geometrically of two congruent helices with the same axis, differing by a translation along the axis, which may or may not be half-way. A conic helix may be defined as a spiral on a conic surface, with the distance to the apex an exponential function of the angle indicating direction from the axis. An example of a helix would be the Corkscrew roller coaster at Cedar Point amusement park. In mathematics, a simple helix can be defined by the parametrisation in Cartesian coordinates x(t) = cos(t), y(t) = sin(t), z(t) = t. In cylindrical coordinates (r, θ, h), the same helix is described by r(t) = 1, θ(t) = t, h(t) = t. Another way of mathematically constructing a helix is to plot the complex-valued exponential function e^(xi), whose argument is imaginary for real values of x (see Euler's formula). Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate either the x, y or z component. The length of a circular helix expressed in rectangular coordinates as (a·cos(t), a·sin(t), b·t), for t running from 0 to T, equals T·√(a² + b²), and its curvature is |a|/(a² + b²).
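To make the parametrisation and the closed-form arc-length and curvature formulas above concrete, here is a brief illustrative Python sketch (not part of the original article; it assumes NumPy is available, and the radius a, slope b and parameter range T are arbitrary example values). It samples the circular helix (a·cos(t), a·sin(t), b·t) and compares a numerical arc length against the closed-form value T·√(a² + b²):

    # Sample a right-handed circular helix and check its arc length and curvature
    # against the closed-form values T*sqrt(a^2 + b^2) and a/(a^2 + b^2).
    import numpy as np

    a, b = 2.0, 0.5            # radius and vertical slope of the helix (example values)
    T = 4 * np.pi              # parameter range: two full turns

    t = np.linspace(0.0, T, 100_000)
    points = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])

    # Numerical arc length: sum of distances between consecutive sample points.
    numeric_length = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    closed_form_length = T * np.hypot(a, b)

    print(numeric_length, closed_form_length)   # the two values agree closely
    print("curvature:", a / (a**2 + b**2))      # constant along the whole curve

Negating the third column of the sampled points would give the equivalent left-handed helix mentioned above.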
http://www.wikidoc.org/index.php/Helix
4.5
To graph linear inequalities, start by drawing the line in the same fashion as you would with a linear equation. A linear inequality has many solutions that can lie above or below ...

How to Graph Linear Inequalities
- A linear equation is an equation that makes a line when graphed. A linear inequality is the same type of expression with an inequality sign rather than an equals sign. For example, the general formula for a linear equation is y = mx + b, where m is the ...
- Demonstrates, step-by-step and with illustrations, how to graph linear (two-variable) inequalities such as 'y < 3x + 2'.
- Learn how to graph two-variable linear inequalities. Sal graphs the inequality y < 3x + 5. ... Solving and graphing linear inequalities in two variables ... Constraint solution sets of two-variable linear inequalities.
- This is a graph of a linear inequality: The inequality y ≤ x + 2. You can see the y = x + 2 line, and the shaded area is where y is less than or equal to x + 2 ...
- Fun math practice! Improve your skills with free problems in 'Graph a linear inequality in the coordinate plane' and thousands of other practice lessons.
- To understand how to graph the equations of linear inequalities such as y ≥ x + 1, make sure that you have a good understanding of how to graph the equation of a line.
- www.ask.com/youtube?q=Graphing Linear Inequalities&v=5h6YzRRxzO4 (Jan 9, 2011): This is a video lesson on Graphing Linear Inequalities.
- For questions 1 - 4, you will need paper and pencil to draw your graphs. If you have graph paper, please use your graph paper. You should be able to graph ...
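The snippets above all describe the same basic procedure: draw the boundary line, then shade the half-plane whose points satisfy the inequality. As a rough illustration of that procedure (not taken from any of the quoted lessons; it assumes NumPy and matplotlib are installed, and uses the inequality y ≤ x + 2 from the example above), a few lines of Python can produce the graph:

    # Graph the linear inequality y <= x + 2: plot the boundary line y = x + 2,
    # then shade every point on or below it.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-5, 5, 200)
    boundary = x + 2                                  # the boundary line y = x + 2

    fig, ax = plt.subplots()
    ax.plot(x, boundary, label="y = x + 2")           # solid line: boundary included (<=)
    ax.fill_between(x, -10, boundary, alpha=0.3,      # shaded solution region y <= x + 2
                    label="y <= x + 2")
    ax.set_xlim(-5, 5)
    ax.set_ylim(-10, 10)
    ax.legend()
    plt.show()

For a strict inequality such as y < 3x + 2, the usual convention in these lessons is a dashed boundary line (linestyle="--" in the plot call) to show that points on the line itself are not included.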
http://www.ask.com/web?qsrc=6&o=102341&oo=102341&l=dir&gc=1&q=Graphing+Linear+Inequalities
4
Attempts to create controlled nuclear fusion - the process that powers stars - have been a source of continuing controversy. Scientists have struggled for decades to effectively harness nuclear fusion in hot plasma for energy generation - potentially a cleaner alternative to the current nuclear-fission reactors - but have so far been unsuccessful at turning this into an economically viable process. Meanwhile, claims of cheap "bench-top" fusion by electrolysis of heavy water ("cold fusion") and by sonic bubble-formation in water (sonoluminescence) have been greeted with skepticism, and have not been successfully reproduced. In this week's Nature, Brian Naranjo and colleagues report a new kind of "bench-top" nuclear fusion, based on measurements that seem considerably more convincing than these previous claims. The publication was written by a UCLA team that includes Brian Naranjo, a graduate student in physics; James Gimzewski, professor of chemistry; and Seth Putterman, professor of physics. Gimzewski and Putterman are members of the California NanoSystems Institute at UCLA. The team initiates fusion of deuterium — heavy hydrogen, the fuel used in conventional plasma fusion research — using the strong electric field generated in a pyroelectric crystal. Such materials produce electric fields when heated, and the researchers concentrated this field at the tip of a tungsten needle connected to the crystal. In an atmosphere of deuterium gas, this generates positively charged deuteron ions and accelerates them to high energy in a beam. When this beam strikes a target of erbium deuteride, Naranjo and colleagues detect neutrons coming from the target with precisely the energy expected if they were generated by the nuclear fusion of two deuterium nuclei. The neutron emission is 400 times stronger than the usual background level. The researchers say that this method of producing nuclear fusion won't be useful for normal power generation, but it might find applications in the generation of neutron beams for research purposes, and perhaps as a propulsion mechanism for miniature spacecraft. Publication: The Journal Nature, April 28, 2005 "Observation of Nuclear Fusion Driven by a Pyroelectric Crystal" For more information about the project, visit rodan.physics.ucla.edu/pyrofusion Explore further: Seeing where energy goes may bring scientists closer to realizing nuclear fusion
http://phys.org/news/2005-05-ucla-nuclear-fusion-lab.html
4.09375
Arctic sea ice ecology and history The Arctic sea ice covers less area in the summer than in the winter. The multi-year (i.e. perennial) sea ice covers nearly all of the central deep basins. The Arctic sea ice and its related biota are unique, and the year-round persistence of the ice has allowed the development of ice endemic species, meaning species not found anywhere else. There are differing scientific opinions about how long perennial sea ice has existed in the Arctic. Estimates range from 700,000 to 4 million years. The specialized, sympagic (i.e. ice-associated) community within the sea ice is found in the tiny (mostly <1mm diameter) liquid filled network of pores and brine channels or at the ice-water interface. The organisms living within the sea ice are consequently small (<1mm), and dominated by bacteria, and unicellular plants and animals. Diatoms, a certain type of algae, are considered the most important primary producers inside the ice with more than 200 species occurring in Arctic sea ice. In addition, flagellates contribute substantially to biodiversity, but their species number is unknown. Protozoan and metazoan ice meiofauna, in particular turbellarians, nematodes, crustaceans and rotifers, can be abundant in all ice types year-round. In spring, larvae and juveniles of benthic animals (e.g. polychaetes and molluscs) migrate into coastal fast ice to feed on the ice algae for a few weeks. A partially endemic fauna, comprising mainly gammaridean amphipods, thrive at the underside of ice floes. Locally and seasonally occurring at several 100 individuals m-2, they are important mediators for particulate organic matter from the sea ice to the water column. Ice-associated and pelagic crustaceans are the major food sources for polar cod (Boreogadus saida) that occurs in close association with sea ice and acts as the major link from the ice-related food web to seals and whales. While previous studies of coastal and offshore sea ice provided a glimpse of the seasonal and regional abundances and the diversity of the ice-associated biota, biodiversity in these communities is virtually unknown for all groups, from bacteria to metazoans. Many taxa are likely still undiscovered due to the methodological problems in analyzing ice samples. The study of diversity of ice related environments is urgently required before they ultimately change with altering ice regimes and the likely loss of the multi-year ice cover. Dating Arctic ice Estimates of how long the Arctic Ocean has had perennial ice cover vary. Those estimates range from 700,000 years in the opinion of Worsley and Herman, to 4 million years in the opinion of Clark. Here is how Clark refuted the theory of Worsley and Herman: Recently, a few coccoliths have been reported from late Pliocene and Pleistocene central Arctic sediment (Worsley and Herman, 1980). Although this is interpreted to indicate episodic ice-free conditions for the central Arctic, the occurrence of ice-rafted debris with the sparse coccoliths is more easily interpreted to represent transportation of coccoliths from ice-free continental seas marginal to the central Arctic. The sediment record as well as theoretical considerations make strong argument against alternating ice-covered and ice-free....The probable Middle Cenozoic development of an ice cover, accompanied by Antarctic ice development and a late shift of the Gulf Stream to its present position, were important events that led to the development of modern climates. 
The record suggests that altering the present ice cover would have profound effects on future climates. More recently, Melnikov has noted that, "There is no common opinion on the age of the Arctic sea ice cover." Experts apparently agree that the age of the perennial ice cover exceeds 700,000 years but disagree about how much older it is. However, some research indicates that a sea area north of Greenland may have been open during the Eemian interglacial 120,000 years ago. Evidence of subpolar foraminifers (Turborotalita quinqueloba) indicate open water conditions in that area. This is in contrast to Holocene sediments that only show polar species. - Arctic amplification - Arctic Climate Impact Assessment - Arctic ecology - Arctic Ocean - Arctic sea ice decline - Arctic shrinkage - Climate of the Arctic - Bluhm, B., Gradinger R. (2008) "Regional Variability In Food Availability For Arctic Marine Mammals." Ecological Applications 18: S77–96 (link to free PDF) - Gradinger, R.R., K. Meiners, G.Plumley, Q. Zhang,and B.A. Bluhm (2005) "Abundance and composition of the sea-ice meiofauna in off-shore pack ice of the Beaufort Gyre in summer 2002 and 2003." Polar Biology 28: 171 – 181 - Melnikov I.A.; Kolosova E.G.; Welch H.E.; Zhitina L.S. (2002) "Sea ice biological communities and nutrient dynamics in the Canada Basin of the Arctic Ocean." Deep Sea Res 49: 1623–1649. - Christian Nozais, Michel Gosselin, Christine Michel, Guglielmo Tita (2001) "Abundance, biomass, composition and grazing impact of the sea-ice meiofauna in the North Water, northern Baffin Bay." Mar Ecol Progr Ser 217: 235–250 - Bluhm BA, Gradinger R, Piraino S. 2007. "First record of sympagic hydroids (Hydrozoa, Cnidaria) in Arctic coastal fast ice." Polar Biology 30: 1557–1563. - Horner, R. (1985) Sea Ice Biota. CRC Press. - Melnikov, I. (1997) The Arctic Sea Ice Ecosystem. Gordon and Breach Science Publishers. - Thomas, D., Dieckmann, G. (2003) Sea Ice. An Introduction to its Physics, Chemistry, Biology and Geology. Blackwell. - Butt, F. A.; H. Drange; A. Elverhoi; O. H. Ottera; A. Solheim (2002). "The Sensitivity of the North Atlantic Arctic Climate System to Isostatic Elevation Changes, Freshwater and Solar Forcings" (PDF) 21 (14-15). Quaternary Science Reviews: 1643–1660. OCLC 108566094. - Worsley, Thomas R.; Yvonne Herman (1980-10-17). "Episodic Ice-Free Arctic Ocean in Pliocene and Pleistocene Time: Calcareous Nannofossil Evidence". Science 210 (4467): 323–325. doi:10.1126/science.210.4467.323. PMID 17796050. - Clark, David L. (1982). "The Arctic Ocean and Post-Jurassic Paleoclimatology". Climate in Earth History: Studies in Geophysics. Washington D.C.: The National Academies Press. p. 133. ISBN 0-309-03329-2. - Melnokov, I. A. (1997). The Arctic Sea Ice Ecosystem (pdf). Google Book Search (CRC Press). p. 172. ISBN 2-919875-04-3. - Mikkelsen, Naja et al. "Radical past climatic changes in the Arctic Ocean and a geophysical signature of the Lomonosov Ridge north of Greenland" (2004).
https://en.wikipedia.org/wiki/Arctic_sea_ice_ecology_and_history
4.1875
Introduction

The element uranium is the heaviest atom found in nature, and is the only element with all its natural isotopes radioactive. Since an isotope differs from other isotopes of the same element in having a different relative atomic mass (r.a.m.), or atomic weight, it may be defined by the name of the element to which it belongs and the r.a.m. of the isotope. Thus uranium-238, which is the most common uranium isotope, has 92 protons (the atomic number of uranium is 92) and 146 neutrons in each atomic nucleus and therefore has a r.a.m. of 238. Uranium-235, the next most common uranium isotope, has three fewer neutrons in each nucleus. These are the two most common uranium isotopes. They are both unstable and therefore decay radioactively into other elements by emitting charged particles from the nuclei of their atoms. Instead of decaying into stable atoms, they decay into atoms which are themselves radioactive. These then decay into a different atom and the process is repeated until a stable atom is reached. A system where atoms decay through a series of elements in this way is called a radioactive series.

The natural radioactive series

There are three entirely separate radioactive series found in nature, the longest of which begins with the decay of uranium-238 and is known as the naturally radioactive uranium-238 series. The second naturally occurring radioactive series originates with the second most commonly occurring isotope of uranium, uranium-235, and this is called the uranium-235 series. The third series is the naturally radioactive thorium-232 series. These series arise because of the loss of either a beta particle or an alpha particle from an atom, a process which changes the charge on the nucleus of the decaying atom. When a beta particle is lost, it means that the charge on the nucleus (and therefore the atomic number of the atom) is increased by one. When an alpha particle is emitted the atomic number of the atom is reduced by two and its atomic weight by four. The newly-created atom then emits a particle to become an atom of a different element which then itself decays into yet another different atom. Most often, all atoms of a particular radioactive isotope of an element emit only alpha particles and no beta particles, or only beta particles and no alpha particles. Some members of the radioactive series have, however, some atoms which emit alpha particles and some which emit beta particles. (No single atom emits both alpha and beta particles.) In these cases, a branch occurs in the radioactive series, with some of the decaying atoms converted into another element. For example, in the uranium-235 series, actinium-227 decays either into francium-223 by loss of an alpha particle, or into thorium-227 by loss of a beta particle. Both of these then decay, the francium by loss of a beta particle, and the thorium by loss of an alpha particle, into radium-223. The radium-223 produced by either route is identical, and the branch in the series is thus closed. The radium then decays to produce the next member in the series, radon-219. The complete details of the three naturally occurring series are as in the diagrams, with the branchings which occur in the series shown. It will be noticed that all of the natural series terminate with isotopes of lead, i.e., lead-206, lead-207, and lead-208. It happens that all of these are stable isotopes of the element and no further radioactive emission therefore takes place.
The times taken (as measured by the half-life) for the various radionuclides in the radioactive series to decay differ widely. For example, the time taken for half an amount of uranium-238 to decay to thorium-234, which is the first step in the uranium-238 series, is 4.5 × 10⁹ (4,500 million) years. By contrast, the half-life of thorium-234 is only 24.5 days, and the half-life of the next member in the series, protactinium-234, a mere 1.14 minutes.

An artificial series

After the discovery of the three radioactive series of naturally found isotopes, many physicists searched in the hope of finding more series. In 1940 new elements were artificially made which had atomic numbers greater than 92, the atomic number of uranium. These were called transuranic elements. The transuranic elements neptunium (atomic number 93) and plutonium (atomic number 94) were the first to be produced and isolated. In 1945 others were discovered, including americium (atomic number 95). These three elements are radioactive in any of their isotopic forms, and since they are produced artificially, they are said to be artificially radioactive. Later it was realized that a fourth radioactive series does exist – the neptunium radioactive series – but it cannot be called a naturally radioactive series since it contains some transuranic elements. The neptunium radioactive series (so-called because neptunium-237 is the most stable radioactive isotope in the series) is shown in the bottom diagram. As with the naturally radioactive series, alpha particle and beta particle emission causes the decay of the radioactive isotopes. Branching again occurs with a bismuth isotope, in this case where some atoms of bismuth-213 decay by alpha particle emission to thallium-209 atoms, while others decay by beta particle emission to atoms of polonium-213. Thallium-209, by beta particle emission, and polonium-213 by alpha particle emission, both decay to identical atoms of lead-209, which decays to bismuth-209, a stable isotope, which forms the end of the neptunium radioactive series. As with the naturally radioactive series, a quantity of any isotope in the series will eventually decay to become a similar quantity of the stable isotope – in this case bismuth-209. Most of the natural radioactive isotopes take their places in a radioactive series. Only seven of the naturally found radioactive isotopes do not appear in one of the three naturally radioactive series. Forty-six isotopes do appear in these three series – all of which are isotopes of the elements with atomic numbers between 81 and 92. In contrast, most of the artificial radioactive isotopes do not belong to a radioactive series.
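The half-lives quoted above differ by many orders of magnitude, but the decay of any radionuclide follows the same law: the fraction remaining after a time t is (1/2)^(t / half-life). The short Python sketch below is an illustration added here (it is not part of the original entry) that applies this law to the three half-lives mentioned in the text:

    # Fraction of a radionuclide remaining after time t: N(t)/N0 = 0.5 ** (t / half_life).
    # Half-life values are the ones quoted in the text, converted to years.
    half_lives_years = {
        "uranium-238":      4.5e9,                      # 4.5 x 10^9 years
        "thorium-234":      24.5 / 365.25,              # 24.5 days
        "protactinium-234": 1.14 / (60 * 24 * 365.25),  # 1.14 minutes
    }

    def fraction_remaining(t_years, half_life_years):
        """Fraction of the original atoms that have not yet decayed after t_years."""
        return 0.5 ** (t_years / half_life_years)

    for nuclide, half_life in half_lives_years.items():
        print(f"{nuclide:>17}: {fraction_remaining(1.0, half_life):.3e} remaining after one year")

After one year, essentially all of the uranium-238 is still present, while the thorium-234 and protactinium-234 produced from it have almost entirely decayed on to the next members of the series; their presence in nature is maintained only because the series continually replenishes them.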
http://www.daviddarling.info/encyclopedia/R/radioactive_series.html
4.28125
Explore! There are resources out there that can inform you as you work with all learners in your classroom. As you find resources that you think should be added to this list, please send them to Linda Gamble at email@example.com and we will post them on this resource page. Image courtesy of [duron123] / FreeDigitalPhotos.net Consider the response of the children http://www.dailydot.com/lol/kids-react-cheerios-commercial-race/ Did you know that there was a controversy over a Cheerios ad? Watch this video and consider your own response. Why do you think it was controversial? What does this controversy say about children? How could you use this in your classroom? Choosing to Participate Poster Exhibit http://www.tolerance.org/choosing-to-participate A Smithsonian Institution Traveling Exhibit and online resource for teachers and administrators. This page includes resources and downloadable posters and other educator materials. FREE and useful to encourage dialogue. From their website: “As our world grows increasingly interconnected, it’s more important than ever to inspire people of all ages to create positive social change. When we stop to consider the consequences of our everyday choices—to discover how “little things are big”—we can make a huge difference.” Books for Classroom Use: Skipping Stones A listing of great books to use with children and youth about topics related to diversity.You may also find interesting information at Skipping Stones: An international multicultural magazine.http://www.skippingstones.org/ Film Short: “Immersion” http://learning.snagfilms.com/film/immersion This 12-minute film is thought-provoking and an important resource that can be used to help learners and to think about immersion strategies and the experience of ELLs. YouTube: Strategies for teaching ESL students in your classroom. This is a great introduction and is only 7 minutes long. http://www.youtube.com/watch?v=_Ub0NJ6UClI&feature=related YouTube: ESL Struggles and Strategies. Another good look at what it is like to be an English Language Learner. Just under 7 minutes long and begins by asking if English should be our “official” language. http://www.youtube.com/watch?v=-bWU238PymM&NR=1 YouTube: Scaffolding Language Skills. A short introduction to strategies you may already know but haven’t thought about in terms of working with English Language Learners. Just under 2 minutes long. http://www.youtube.com/watch?v=lmJoOjLQM3U&feature=related Families of the World: a website that has clips from a series of PBS documentaries. The documentaries provide insight into the world of families and children who live in different cultures. The clips are interesting to watch and a teacher might use this as a resource if a family has ties to a different part of the world or if a class is studying a specific country or region. http://www.familiesoftheworld.com/ August to June: the story of a different approach to schools and learning. A clip from a film that demonstrates there are different ways to educate. http://www.augusttojune.com/ A Refugee Camp: Doctors without Borders site which illustrates the life of a refugee leaving home through pictures and words. http://www.doctorswithoutborders.org/events/refugeecamp/guide/index.cfm Walking the “Only” Road: Psychological Tight Spaces by Dr. Shawn Arango Ricks, past-president of the Southern Organization for Human Services (SOHS). This is a very thought provoking piece about concepts such as what does it mean to live in a “post-racial” or “color-blind” America. 
Image courtesy of [suphakit73] / FreeDigitalPhotos.net Teaching Tolerance – A new text selection tool from Teaching Tolerance. According to their information it can help you select culturally responsive texts and meet the requirements of the Common Core. TED-Ed —A guide to creating “FLIPPED” lessons and online resources. http://ed.ted.com/ Five Things High School Students Should Know About Race Understanding Language –– This article provides a solid, thoughtful overview of the knowledge that students should learn about race. Extremely well written article from the Harvard Education Letter. http://www.hepg.org/hel/article/553#home Understanding Language: Language, Literacy and Learning in the Content Area – This website has resources to help teachers prepare to support ELL students meeting the Common Core State Standards. http://ell.stanford.edu/ GLSEN – Here you’ll find lesson plans, curricular tools, information on teacher training programs and more. http://glsen.org/educate/resources Teaching in Racially Diverse Classrooms – a tip-sheet from the Derek Bok Center for Teaching and Learning at Harvard University. Thought provoking! http://isites.harvard.edu/fs/html/icb.topic58474/TFTrace.html Multicultural Education Pavilion – many resources to use in teaching, learning, and professional development. http://www.edchange.org/multicultural/index.html Teaching Tolerance – Southern Poverty Law Center – tools to help teachers guide students as they learn to live in their diverse world. An outstanding resource! The Teaching Diverse Students Initiative – Southern Poverty Law Center – tools, case studies and learning resources to explore and consider diversity in the classroom. Project Implicit – An opportunity to sample your conscious and unconscious bias and learn more about yourself. everythingESL.net – ESL lesson plan samples, teaching tips, resources, ESL national news. http://www.everythingesl.net Beatrice Moore Luchin 2011 Maine workshops “How Do I Know What they Know and Understand? Instant Math Assessment Techniques” – April 29, 2010 “How Do I Know What They Know? Building Your Toolkit of Assessment Strategies” PowerPoint in pdf | Handouts from the workshop Colorin Colorado –a tremendous resource for working with Spanish speaking families but also useful for other ELL families http://www.colorincolorado.org/ ePals Global Community – A k-12 Social Learning Network. Lots of great opportunities to expand your students’ horizons. http://www.epals.com/ Discover Education – many free lesson plans including some specifically targeting diversity issues. http://www.discoveryeducation.com/teachers/ Understanding Language – Stanford University website about working with English Language Learners that includes professional papers and teaching resources that will be updated frequently. http://ell.stanford.edu/ Image courtesy of [stockimages] / FreeDigitalPhotos.net Southern Poverty Law Commonly Held Beliefs Survey http://www.tolerance.org/supplement/test-yourself-hidden-bias This is an important first step! Take this quiz and check in with yourself about your beliefs. Mass Customized Learning: One teacher’s vision to help you understand MCL. Imagine Learning – Archived Webinars about education with an emphasis on working with English Language Learners. High quality professional development at your desk! http://www.imaginelearning.com/webinars/#AcademicSuccessForELs Preparing All Teachers to Meet the Needs of English Language Learners – A comprehensive report from the Center for American Progress. 
http://www.americanprogress.org/issues/education/report/2012/04/30/11372/preparing-all-teachers-to-meet-the-needs-of-english-language-learners/ Choosing to Participate – Educator resources, including a self-paced educator workshop about the importance of choosing to be part of positive social change. This site also includes materials that would be useful in the classroom. http://www.choosingtoparticipate.org/ IRIS Resources – Resources, including case studies, that guide learning about inclusive environments http://iris.peabody.vanderbilt.edu/resources.html Educating English Language Learners: Building Teacher Capacity – A roundtable report that explores what teachers need to know to work with English Language Learners NCELA – The National Clearing House for English Language Acquisition with links to professional development materials http://www.ncela.us/professional-development Multicultural Resources – Maine DHHS Office of Multicultural Affairs Bridging Refugee Youth and Children’s Services (BRYCS) – Information about working with refugee families and youth. http://www.brycs.org/ Image courtesy of [twobee] / FreeDigitalPhotos.net 10 Ways Well-Meaning White Teachers Bring Racism into our Schools: An interesting and thought provoking article. http://everydayfeminism.com/2015/08/10-ways-well-meaning-white-teachers-bring-racism-into-our-schools/ KAHNAcademy: A resource for supplemental learning opportunities. Check this out and think about how it might support and encourage student learning. http://www.khanacademy.org/ Tell Me More: This NPR radio program focuses on issues of interest to all from very diverse perspectives. Of particular interest might be the twitter conference on Education discussed on the October 11th, 2012 program. http://www.npr.org/programs/tell-me-more/ Mom (and Dad’s) View of ADHD: a web page and blog about life with ADHD. http://adhdmomma.com/ Down’s Syndrome Blogs: parents blogging about down’s syndrome: “For parents of children with Down’s Syndrome (also known as Downs Syndrome, Down Syndrome, Trisomy 21, DS, and T21), the world can sometimes seem like a hostile and unsympathetic place. We are cast alternately as demons, for allowing perceived genetically abnormal people to survive, or as saints doing charitable work. We are neither. We are parents. The Color of Life: parenting advice for transracial adoptive families. This site underscores some of the issues and challenges that children and families confront and also includes resources that could be useful in the classroom. https://www.adoptivefamilies.com/category/transracial-adoption/ MamiVerse: a website of parenting advice for Latino families. This site includes good resources for teachers as well as parents. For example, there is an article about great Latino Children’s Books is posted on this website. http://mamiverse.com/ Mahogany Momma’s Black Parenting Blog: a website of parenting advice for African American families. This site includes good resources for teachers as well as parents. For example, there is an article about positive images for black girls posted on this website. http://www.blackparentingblog.com A Parent’s Guide to Raising Multiracial Children: an excerpt from the book Does Anybody Else Look Like Me: A Parent’s Guide to Raising Multiracial Children by Donna Jackson Nakazawa. http://donnajacksonnakazawa.com/does-anybody-else/ See Baby Discriminate: an article that reports the results of a study about young children’s perception of race. 
Fresh Air Fund – Work with children from New York City who are experiencing life outside the city. Literacy Volunteers – Volunteer as a tutor! Training is provided and you can make the difference in a student's life. Opportunities in Franklin County: http://lvfranklin-somerset.maineadulted.org/, the greater Augusta area: http://lva-augusta.org/ or the Waterville area: http://www.lvwaterville.com/
http://www2.umf.maine.edu/teachereducation/resources-for-pre-service-and-in-service-teachers/diversity-resources/
4
Changes in air temperature, not precipitation, drove the expansion and contraction of glaciers in Africa's Rwenzori Mountains at the height of the last ice age, according to a Dartmouth-led study. The results -- along with a recent Dartmouth-led study that found air temperature also likely influenced the fluctuating size of South America's Quelccaya Ice Cap over the past millennium -- support many scientists' suspicions that today's tropical glaciers are rapidly shrinking primarily because of a warming climate rather than declining snowfall or other factors. The two studies will help scientists to understand the natural variability of past climate and to predict tropical glaciers' response to future global warming. The most recent study, which marks the first time that scientists have used the beryllium-10 surface exposure dating method to chronicle the advance and retreat of Africa's glaciers, appears in the journal Geology. Africa's glaciers, which occur atop the world's highest tropical mountains, are among the most sensitive components of the world's frozen regions, but the climatic controls that influence their fluctuations are not fully understood. Dartmouth glacial geomorphologist Meredith Kelly and her team used the beryllium-10 method to determine the ages of quartz-rich boulders atop moraines in the Rwenzori Mountains on the border of Uganda and the Democratic Republic of Congo. These mountains have the most extensive glacial and moraine systems in Africa. Moraines are ridges of sediments that mark the past positions of glaciers. The results indicate that glaciers in equatorial East Africa advanced between 24,000 and 20,000 years ago at the coldest time of the world's last ice age. A comparison of the moraine ages with nearby climate records indicates that Rwenzori glaciers expanded contemporaneously with regionally dry, cold conditions and retreated when air temperature increased. The results suggest that, on millennial time scales, past fluctuations of Rwenzori glaciers were strongly influenced by air temperature. The study was funded by the National Geographic Society and the National Science Foundation.
https://www.sciencedaily.com/releases/2014/04/140416143309.htm
4.09375
Significance: Confederate currency—produced by the Confederate government and by individual states in the Confederacy—was critical to the South during the U.S. Civil War in its attempts to establish its own union. This currency was to be redeemed after the Confederacy's victory but became worthless after its defeat. It later became a collector's item, fetching prices from a few dollars to tens of thousands of dollars for the rarest denominations.

The Confederate government began to issue currency in April of 1861, the month the Civil War began. The main printing press for central government-issued currency was in Richmond, Virginia, but currency was also printed by states, local municipalities, and merchants. Paper money was printed as well as coins, and both included symbolic representations of the Old South, including images of historical figures, military technology, and slavery. Because it was philosophically opposed to federalism, the Confederate government was not able to tax its citizens sufficiently to prepare for the war effort. In addition, European markets were gaining access to alternative sources of cotton, such as India and Egypt. As a result, American cotton was selling for lower prices overseas, exacerbating the South's financial problems. Thus, Confederate currency was sure to experience high inflation should the South struggle in the war.

Counterfeiting of Confederate currency was common. Since Confederate currency was printed at a number of different venues and by different levels of government, Northern counterfeiters were easily able to buy Southern goods with replica money. The resulting increase in the amount of currency in circulation contributed to the high inflation that began to mount as the tide of the war turned in the North's favor.

Confederate money was relatively valuable when the Civil War began. The gold dollar was the standard of value at the time, and a Confederate dollar was worth as much as 95 cents against the gold dollar. Shortly after the Battle of Gettysburg (1863), as the likelihood of a Southern victory decreased, the value of a Confederate dollar dropped to roughly 33 cents against the gold dollar. Investors shied away from trading for currency that could become worthless if the South lost the war. Instead, they began to accumulate goods and services that would be redeemable regardless of the war's outcome. At the end of the war, the value of a Confederate dollar was about one penny against the gold dollar, and the currency ceased to be traded soon thereafter.
http://ebusinessinusa.com/2400-confederate-currency.html
4.03125
- Our Services - Events and Training November 15, 2010 In traditional mass walls, e.g. a wall of solid masonry or earth, the resistance to rain penetration was only one aspect of enclosure performance (Photograph 1). Heat flow was also controlled by the thermal storage capacity of the massive walls, not just by virtue of the materials' thermal conductivity like the specialized insulation layers commonly used in modern building assemblies. The sun's heat was absorbed, stored, and slowly released to the interior and exterior, effectively damping typical daily fluctuations and thus increasing comfort. Vapor and airflow were also controlled by the mass of the wall. It is little wonder that such walls were used for thousands of years. Built of only brick and mortar, the wall carried all structural loads as well as performing as an acceptable enclosure. The small unit size of the brick allowed for planning flexibility so that such walls could be used for most purposes. Because mass brick walls allow a considerable amount of heat to pass through, the exterior surface temperature remained elevated throughout the winter and thus freeze-thaw durability and interstitial condensation problems were avoided. Compared to the poor control of airflow through windows and doors, the walls seemed airtight to the occupants. If the wall was sheltered by topography, other buildings, and roof overhangs, the amount of rainwater reaching the surface was so little that the wall could control this water before it reached the inner surface and caused damage. The biggest drawback to such wall systems was the large amounts of material and labor needed to construct them and the poor thermal control. With the change from low-rise buildings with solid load-bearing walls to taller framed buildings, the dead weight and cost of traditional mass wall systems became prohibitive. Chicago's 16-storey Monadnock Building, constructed with 6-foot thick base walls between 1889 and 1891, pushed to the limit the load-bearing mass masonry wall (Photograph 2). Taller buildings with mass walls were practically impossible with the combination of high dead weight and low compressive strength. A large percentage of valuable ground floor area was lost to load-bearing walls and the resistance to seismic loads was poor. Today, poor control of rain penetration, heat, air, and vapor flow can be added to the list of drawbacks. Photograph 2: Monadnock Building (www.monadnockbuilding.com) The industrial revolution and the scientific knowledge and technical confidence it provided resulted in attempts to produce perfect barrier wall systems. These systems very often fail to be perfect barriers because of defects in design, construction, or materials although they may still perform as required. While a unit of sealed glazing will not fail to resist rain (unless the glass cracks) the joint between the glazing and the window frame may. Similarly, metal panel systems developed in the post-war period rarely failed, but the joints and interfaces did. These examples reinforce the importance of considering the wall as a three-dimensional assemblage including joints. In many manufactured curtain walls, a small amount of rain penetration will cause no harm and either goes unnoticed or a drainage system is incorporated to deal with these small failures. Corbusier is largely credited with popularizing the idea of separating the primary structural system from the enclosure system. 
Although the concept itself was well-developed by his time, the Domino house project made this approach desirable (Photograph 3). However, it is only in recent decades that the separation of the enclosure into layers and sub-systems for specific functions (support and control) has become more widely accepted and actually applied to building enclosures. Photograph 3: Le Corbusier's Domino House (www.usc.edu) The current best practice in building enclosure design emphasizes the use of drainage as a rain control strategy, and demands a well-defined rain control layer, air control layer, and unbroken thermal control layer. Building science research and field experience over the last two decades have demonstrated how powerful the drained approach to rain control can be. However, other changes have also occurred over this time, specifically the use of air barriers, and steadily increasing insulation requirements. The increase in airtightness and thermal control (insulation, white roofs, radiant barriers) reduced the energy flow across the enclosure available to dry this remaining moisture. Hence, the potential duration of wetting for materials in high-performance enclosures is increasing, and this can cause durability problems. Drainage does not remove all water that penetrates the cladding, as any rainwater absorbed by materials or clinging to surfaces can only be removed by evaporation. Similarly, air leakage condensation, which is now more likely in frequency, and severe in intensity because of higher levels of thermal insulation and higher cold-weather interior humidity levels (themselves the result of increased airtightness), cannot be dried as quickly as in the past. This lack of drying capacity, when combined with changing materials (masonry to gypsum sheathing, brick veneers to metal panels) and the substitution of traditional materials with often less durable modern ones, has increased the probability of moisture-related enclosure failures. What is needed is a re-evaluation of how we assemble enclosures, and improvements in ensuring continuity of the control functions. As we change the insulation levels, airtightness, and materials, we need to consider changes in how materials are assembled in enclosures. The steel-stud framed walls of the 80’s and 90’s cannot continue to be built in the same manner in the 2010’s and 2020’s. It is now clear that such walls did not provide continuous thermal control. Better rain control is a critical part of the needed change, as are increases in tightness, better control of thermal bridging, and a protection of moisture-sensitive materials from extreme temperatures and prolonged wetting. The “perfect wall” approach (Figure 1) described above provides all of these improvements. Once thought of as an ideal enclosure assembly that would rarely be built, it is becoming the new standard for durable, energy-efficient, high-performance enclosures. Figure 1: The Perfect Wall — see also BSI-001: The Perfect Wall
http://buildingscience.com/documents/insights/bsi-042-historical-development-building-enclosure?topic=resources/freeze-thaw-damage
4.09375
Microeconomics: A Brief History by Marc Davis

As early as the 18th century, economists were studying the decision-making processes of consumers, a principal concern of microeconomics. Swiss mathematician Nicholas Bernoulli (1695-1726) proposed an extensive theory of how consumers make their buying choices in what was perhaps the first written explanation of how this often mysterious and always complex process works. According to Bernoulli's theory, consumers make buying decisions based on the expected results of their purchases. Consumers are assumed to be rational thinkers who are able to forecast with reasonable accuracy the hopefully satisfactory consequences of what they buy. They select to purchase, among the choices available, the product or service they believe will provide maximum satisfaction or well-being.

For some 200 years beginning in the mid-1700s, the dominant economic theory was Adam Smith's laissez-faire (French for "leave alone" or "let do") approach to the economy, which advocated a government hands-off policy regarding free markets and the machinery of capitalism. The laissez-faire theory argues that an economy functions best when the "invisible hand" of self-interest is allowed to operate freely, without government intervention.

Smith and Marshall

Scottish-born Smith (1723-1790) wrote in his book, "Wealth of Nations," that if the government does not tamper with the economy, a nation's resources will be most efficiently used, free-market problems will correct themselves and a country's welfare and best interests will be served. (For further reading on Adam Smith see, Adam Smith: the Father of Economics.) Smith's views on the economy prevailed through two centuries, but in the late 19th and early 20th century, the ideas of Alfred Marshall (1842-1924), a London-born economist, had a major impact on economic thought. In Marshall's book, "Principles of Economics, Vol. 1," published in 1890, he proposed, as Bernoulli had two centuries earlier, the study of consumer decision making. Marshall proposed a new idea as well - the study of specific, individual markets and firms, as a means of understanding the dynamics of economics. Marshall also formulated the concepts of consumer utility, price elasticity of demand and the demand curve, all of which will be discussed in the following chapter.

At the time of Marshall's death, John Maynard Keynes (1883-1946), who would become the most influential economist of the 20th century starting in the 1930s, was already at work on his revolutionary ideas about government management of the economy. Born in Cambridge, England, Keynes' contributions to economic theory have guided the thinking and policy-making of central bankers and government economists for decades, both globally and in the U.S. (To learn more see, Can Keynesian Economics Reduce Boom-Bust Cycles?)

So much of U.S. monetary policy, the setting of key interest rates, government spending to stimulate the economy, support of private enterprise through various measures, tax policy and government borrowing through the issuance of Treasury bonds, bills and notes, have been influenced by the revolutionary ideas of Keynes, which he introduced in his books and essays. What all these concepts had in common was their advocacy of government management of the economy. Keynes advocated government intervention into free markets and into the general economy when market crises warranted, an unprecedented idea when proposed during the Great Depression.
(For more on this read, What Caused The Great Depression?) Government spending to stimulate an economy, a Keynesian idea, was used during the Depression to put unemployed people to work, thus providing cash to millions of consumers to buy the country's products and services. Most of Keynes' views were the exact opposites of Adam Smith's. An economy, for optimum functioning, must be managed by government, Keynes wrote. (For related reading, see The Federal Reserve.) Thus was born the modern science of macroeconomics – the big picture view of the economy – evolving in large part from what came to be called Keynesian economic theory. These are among the tools of microeconomics, and their principles, along with others, are still employed today by economists who specialize in this area. Keynes' policies, to varying degrees, have been, and continue to be, employed with generally successful results worldwide in almost all modern capitalist economies. If and when economic problems occur, many economists often attribute them to some misapplication or non-application of a Keynesian principle. While Keynesian economic theory was being applied in most of the world's major economies, the new concept of microeconomics, pioneered by Marshall, was also taking hold in economic circles. The study of smaller, more focused aspects of the economy, which previously were not given major importance, was fast becoming an integral part of the entire economic picture. (For further information on past economists, read How Influential Economists Changed Our History.) Microeconomics had practical appeal to economists because it sought to understand the most basic machinery of an economic system: consumer decision-making and spending patterns, and the decision-making processes of individual businesses. The study of consumer decision-making reveals how the price of products and services affects demand, how consumer satisfaction – although not precisely measurable – works in the decision-making process, and provides useful information to businesses selling products and services to these consumers. The decision-making processes of a business would include how much to make of a certain product and how to price these products to compete in the marketplace against other similar products. The same decision-making dynamic is true of any business that sells services rather than products. Although economics is a broad continuum of all the factors - both large and small - that make up an economy, microeconomics does not take into direct account what macroeconomics considers. Macroeconomics is concerned principally with government spending, personal income taxes, corporate taxes, capital gains taxes and other taxes; the key interest rates set by the Federal Reserve, the banking system and other economic factors such as consumer confidence, unemployment or gross national product, which may influence the entire economy. (For more on macroeconomics read, Macroeconomic Analysis.) Economics, like all sciences, is continually evolving, with new ideas being introduced regularly, and old ideas being refined, revised, and rethought. Some 200 years after Bernoulli's theory was first introduced, it was expanded upon by Hungarian John von Neumann (1903-1957), and Austrian Oskar Morgenstern (1920-1976). A more detailed and nuanced theory than Bernoulli's and Marshall's emerged from their collaboration, which they called utility theory. The theory was elaborated in their book, "Theory of Games and Economic Behavior," published in 1944. 
In the 1950s, Herbert A. Simon (1916-2001), a 1978 Nobel Memorial Prize-winner in economics, introduced a simpler theory of consumer behavior called "satisficing". The satisficing theory contends that when consumers find what they want, they then abandon the quest and decision-making processes, and buy the product or service which seems to them "good enough." (For more on the Nobel Memorial Prize, read Nobel Winners Are Economic Prizes.) And so the history of microeconomics continues to unfold, awaiting perhaps another Bernoulli, Adam Smith, Alfred Marshall, or John Maynard Keynes, to provide it with some new, revolutionary ideas.
http://www.investopedia.com/university/microeconomics/microeconomics1.asp
4.25
The floor of a legislature or chamber is the place where members sit and make speeches. When a person is speaking there formally, they are said to have the floor. The House of Commons and the House of Lords In the United Kingdom, the U.S. House of Representatives and the U.S. Senate all have "floors" with established procedures and protocols. Activity on the floor of a council or legislature, such as debate, may be contrasted with meetings and discussion which takes place in committee, for which there are often separate committee rooms. Some actions, such as the overturning of an executive veto, may only be taken on the floor. In the United Kingdom's House of Commons a rectangular configuration is used with the government ministers and their party sitting on the right of the presiding Speaker and the opposing parties sitting on the benches opposite. Members are not permitted to speak between the red lines on the floor which mark the boundaries of each side. These are traditionally two sword lengths apart to mitigate the possibility of physical conflict. If a member changes allegiance between the two sides, they are said to cross the floor. Only members and the essential officers of the house such as the clerks are permitted upon the floor while parliament is in session. The two important debating floors of the U.S. Federal government are in the House of Representatives and the Senate. The rules of procedure of both floors have evolved to change the balance of power and decision making between the floors and the committees. Both floors were publicly televised by 1986. The procedures for passing legislation are quite varied with differing degrees of party, committee and conference involvement. In general, during the late 20th century, the power of the floors increased and the number of amendments made on the floor increased significantly. The procedures used upon legislative floors are based upon standard works which include - Erskine May: Parliamentary Practice, which was written for the UK House of Commons - Jefferson's Manual, which was written for the US Senate and was incorporated into the rules for the US House of Representatives. On the other hand, the following work was initially based on the procedures used upon legislative floors: - Robert's Rules of Order, which was based upon the rules of the US House of Representatives and is intended for use by ordinary bodies and societies such as church meetings. - Floor of the United States House of Representatives - Plenary session - Floor leader - Recognition (parliamentary procedure) - assignment of the floor - Robert J. McKeever, Brief Introduction to US Politics - David M. Olson, The legislative process: a comparative approach, p. 350 - William McKay, Charles W. Johnson, Parliament and Congress: Representation and Scrutiny in the Twenty-first Century - Steven S. Smith, Call to order: floor politics in the House and Senate
https://en.wikipedia.org/wiki/Floor_(legislative)
4.1875
Students encounter the concept of scarcity in their daily tasks but have little comprehension as to its meaning or how to deal with the concept of scarcity. Scarcity is really about knowing that often life is 'This OR That' not 'This AND That'. This lesson plan for students in grades K-2 and 3-5 introduces the concept of scarcity by illustrating how time is finite and how life involves a series of choices. Specifically, this lesson teaches students about scarcity and choice: Scarcity means we all have to make choices and all choices involve "costs." Not only do you have to make a choice every minute of the day because of scarcity, but, when making a choice, you have to give up something. This cost is called opportunity cost. Opportunity cost is defined as the value of the next best thing you would have chosen. It is not the value of all things you could have chosen. Choice gives us 'benefits' and choice gives us 'costs'. To be asked to make a choice between 'this toy OR that toy' is difficult for students who want every toy. A goal in life for each of us is to look at our wants, determine our opportunities, and try to make the best choices by weighing the benefits and costs. The introduction to this lesson is a brief online story about a little girl’s visit to a pet store with her father. She considers several pets before choosing a “cute and cuddly” dog. Students are reminded that pet owners are responsible for keeping their pets safe, healthy and happy. A discussion of a pet owner's desire to provide the best for their pets leads to an exploration of people’s wants. The activities that follow challenge students to explore the wants of a pet owner and their desire to provide the best for their pet fish, and then the wants of a person. The students learn that the ability to discover their wants will help them establish priorities when they are faced with scarcity. During the evaluation process, students identify some of their personal wants. As a class, they discuss why some choices are the same and others are different. They take the discussion a step further exploring how their wants compare with those of siblings and adults in their lives. They discover that age, lifestyle, likes (tastes and preferences) and what one views as important (values) help to explain the differences. When individuals produce goods or services, they normally trade (exchange) most of them to obtain other more desired goods or services. In doing so, individuals are immediately confronted with the problem of scarcity - as consumers they have many different goods or services to choose from, but limited income (from their own production) available to obtain the goods and services. Scarcity dictates that consumers must choose which goods and services they wish to purchase. When consumers purchase one good or service, they are giving up the chance to purchase another. The best single alternative not chosen is their opportunity cost. Since a consumer choice always involves alternatives, every consumer choice has an opportunity cost. The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information. 
Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay. 17 out of 17 lessons from this publication relate to this EconEdLink lesson. This publication contains 16 stories that complement the K-2 Student Storybook. Specific to grades K-2 are a variety of activities, including making coins out of salt dough or cookie dough; a song that teaches students about opportunity cost and decisions; and a game in which students learn the importance of savings. 9 out of 18 lessons from this publication relate to this EconEdLink lesson. This interdisciplinary curriculum guide helps teachers introduce their students to economics using popular children's stories. 8 out of 29 lessons from this publication relate to this EconEdLink lesson.
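The definition used in this lesson, that opportunity cost is the value of the single next-best alternative given up rather than the value of everything not chosen, can be illustrated with a short sketch; the choices and numbers below are made up for illustration only.

```python
# Sketch of the opportunity-cost definition used in the lesson (made-up values):
# the cost of a choice is the value of the next best alternative, not of all alternatives.

def opportunity_cost(chosen, alternatives):
    """Value of the best alternative that was given up."""
    forgone = [value for name, value in alternatives if name != chosen]
    return max(forgone)

# A child values three ways to spend an hour (hypothetical numbers).
choices = [("play outside", 8), ("watch TV", 6), ("read a comic", 5)]

# Choosing to play outside: the opportunity cost is watching TV (6), not 6 + 5.
print(opportunity_cost("play outside", choices))  # -> 6
```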
http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=738
4
Discussion in 'Early Years' started by tinyears, Jan 22, 2008. We talk about 2D shapes being flat (hands together) and 3D shapes being fat (hands wide apart). I introduced 3d shapes by "blowing up" 2d shapes in my magic bag. See the link below for an explanation: Of course, the problem with the plastic 2d shapes that we all use at school for sorting etc, is that they are really very thin 3d shapes. I'm not sure how you get around that problem. We look at 3D shapes and call them '3D shapes' and then ask what the difference is between 3D and Flat (as others have said). Then we look for all the 2D shapes in 3D shapes eg cylinder = 2 circles + rectangle .. they could be 'open' so no circles!! We do 'Transformation of Shape' activities ie "Change this rectangular piece of card into a 3D cylinder" etc via problem solving activities. Through transformation (hands-on) the kids seem to get the idea. The more you practice, the more they get it!! This is one of my 'bugbears'. I don't believe we should use 'flats' to teach 2D shape at all. Flats ARE 3D shapes and it is totally flawed to use them to teach 2D shape and the name of 2D shapes. Why don't we just stick to what is correct in the first place? You can easily use resources which have 'drawn' 2D shapes on. You can explain easily that we can hold 3D shapes - that they are solid. You can talk about 'faces' and 'edges' at an early stage and use terms like the cylinder has two 'cicular' faces and so on. You can show that you can hold the paper or card that the 2D shapes are drawn on but that you cannot hold the 2D shapes themselves. You can show that flats are not flat at all because if you make a pile of them, they have height. It is the easiest thing in the world to teach 2D and 3D shape when they are taught together so that you can point out the differences in meaning and definition. It is just a mess when we try to use flats and other 3D shapes to teach the names and understanding of 2D shapes. But - to try this idea out of 'correctness', you have to be prepared to disregard some of the resources and guidance that exists already. However, don't I always encourage teachers to be discerning, thinking, challenging and brave? Go for it. 'cicular' - try "circular"! This is one of my bugbears too. I was told that I was "getting philosophical" when I pointed out that the circle I had drawn on a sheet of paper was 2D because it didn't exist on the other side. I think the teacher who said this browses this site. I'd like to reassure her that I have the greatest respect for her talent and subject knowledge... But I think I'm right on this. We can all do one thing right from the start by using the word face. Little ones accept that word with no problem, so we can use it fearlessly. I say 2D shapes are like the ones we draw but can't really hold in our hands and 3D are the solid ones we can hold. I wonder if you explore interactive white board resources for 2D and 3D shapes at secondary level you will find something that illustrates your points nicely? Sorry not to be more specific than this but I think there's probably something on mymaths etc that would do the job nicely. Also while you are at it, my other bugbear is talking about circles having one curved side. With an interactive white board resource you can show so nicely how if you draw a regular polygon and keep on increasing the number of sides you get closer and closer to a circle. 
I like to use the wonderful bit in Joyce's Portrait of an Artist as a Young Man in which eternity is described by a priest in terms of a bird taking a grain of sand and building a mountain...and then... ...but with older children, of course.. Mystery10, I wonder why you're so worried when you're clearly doing everything you can for your daughter. And I can't resist pointing out that you've used the word nicely three times in your post. Do you think we're all duffers? I'd love you all to read the book but know that time takes its toll. I was going to explain this but am currently so downhearted by the fact that the "superhighways" haven't increased general knowledge despite all the hype. Never mind "what is a 2D shape or what is a 3D shape". What is a "shape"? Let's start at the beginning, before irreconcilable contradictions have been embedded. Are we going with shape is the form of an object? I agree with debbie - this is a bugbear. But where do you stop?? So many concepts are (initially) incorrectly formed at this young age - some are refined and altered, others remain fixed until adulthood. Weight/mass? Most of my staff talk about the scales for weighing, but many of them are actually just pan-balances, and a ruler is really a rule.... The list is endless!!!! Anyway - I tell my children that 3D shapes are solid and that means you can pick them up or wrap your hands round them. Some children sometimes point out that you can do this with flat shapes - if they are intelligent enough to realise that, then I go on to explain that they actually are 3D. As a previous poster implied - in EYFS it's just as important to be matching, comparing, pointing out shapes (without necessarily naming), learning about corners, sides, faces, edges. I'm in pre-school though... I'm interested in this one as a maths teacher / reception parent, occasionally seeing things that might explain later problems. I tend to agree about the plastic "2D" shapes - and interactive whiteboards give a good kinaesthetic way of exploring genuinely 2D shapes. My daughter started using a ruler, and explained to me that you start drawing the line from 1 on the ruler. Might be sensible advice when drawing with one of those rulers that starts directly at 0, but probably feeds into the problem of measuring from 1 instead of 0. The word rhombus got replaced in her vocabulary by diamond, whilst at pre-school. (I don't mind her knowing the word diamond, but it did seem to replace rhombus - I suspect somebody must have said "no, it's a diamond", as two words for the same thing isn't usually a problem for her.) Cariadlet and I were chatting about 2d and 3d shapes the other day. We eventually decided that even when you draw a shape it is really 3d, it's just that the thickness is so tiny you can't see it with the naked eye. But there must be some height to a drawn shape - otherwise pencils wouldn't get shorter and shorter the more you use them. That led us onto deciding that the only truly 2d shapes must be imaginary ones that you picture in your head (we hadn't thought about the IWB). Mind you Cariadlet is 8 - I don't think I'd have that conversation in my Reception classroom. You can get a set of 3D shapes that are hollow and have one face missing so that you can stuff a piece of material in there and pull it out like a magician. They are also very useful in the sand and water tray as the children can contrast the cube that fills up with water versus the square that you can hold in your hand but will not fill up no matter how much you pour. 
I tell my Y2s that 2D shapes have height and width but no depth, but I'm not sure reception have the concept of height, width or depth to understand that. Really they just need lots of experience of handling 3D shapes and developing that understanding.
https://community.tes.com/threads/how-do-you-explain-the-difference-between-3d-and-2d-shapes-to-children.151254/
4.4375
Trigonometry/For Enthusiasts/Trigonometry Done Rigorously Introduction to Angles An angle is formed when two lines intersect; the point of intersection is called the vertex. We can think of an angle as the wedge-shaped space between the lines where they meet. Note that if both lines are extended through the meeting point, there are in fact four angles. The size of the angle is the degree of rotation between the lines. The more we must rotate one line to meet the other, the larger the angle is. Suppose you wish to measure the angle between two lines exactly so that you can tell a remote friend about it: draw a circle with its center located at the meeting of the two lines, making sure that the circle is small enough to cross both lines, but large enough for you to measure the distance along the circle's edge, the circumference, between the two cross points. Obviously this distance depends on the size of the circle, but as long as you tell your friend both the radius of the circle used, and the length along the circumference, then your friend will be able to reconstruct the angle exactly. Definition of an Angle An angle is determined by rotating a ray about its endpoint. The starting position of the ray is called the initial side of the angle. The ending position of the ray is called the terminal side. The endpoint of the ray is called its vertex. Positive angles are generated by counter-clockwise rotation. Negative angles are generated by clockwise rotation. Consequently an angle has four parts: its vertex, its initial side, its terminal side, and its rotation. An angle is said to be in standard position when it is drawn in a cartesian coordinate system in such a way that its vertex is at the origin and its initial side is the positive x-axis. Definition of a Triangle A triangle is a planar (flat) shape with three straight sides. An angle is formed between each two sides of a triangle, and a triangle has three angles, hence the name tri-angle. So a triangle has three straight sides and three angles. If you give me three lengths, I can only make a triangle from them if the greatest length is less than the sum of the other two. Three lengths that do not make the sides of a triangle are your height, the height of the nearest tree, and the distance from the top of the tree to the center of the sun. Angles are not affected by the length of lines: an angle is invariant under transformations of scale. An angle of particular significance is the right angle: the angle at each corner of a square or a rectangle. A rectangle can always be divided into two triangles by drawing a line from one corner of the rectangle to the opposite corner. It is also true that every right-angled triangle is half a rectangle. A rectangle has four sides; they are generally of two different lengths: two long sides and two short sides. (A rectangle with all sides equal is a square.) When we split the rectangle into two right-angled triangles, each triangle has a long side and a short side from the rectangle as well as a copy of the split line. 
So the area of a right-angled triangle is half the area of the rectangle from which it was split. Looking at a right-angled triangle, we can tell what the long and short sides of that rectangle were; they are the sides, the lines, that meet at a right angle. The area of the complete rectangle is the long side times the short side. The area of a right-angled triangle is therefore half as much. Right Triangles and Measurement It is possible to bisect any angle using only circles (which can be drawn with a compass) and straight lines by the following procedure: - Call the vertex of the angle O. Draw a circle centered at O. - Mark where the circle intersects each ray. Call these points A and B. - Draw circles centered at A and B with equal radii, but make sure that these radii are large enough to make the circles intersect at two points. One sure way to do this is to draw line segment AB and make the radius of the circles equal to the length of that line segment. On the diagram, circles A and B are shown as near-half portions of a circle. - Mark where these circles intersect, and connect these two points with a line. This line bisects the original angle. A proof that the line bisects the angle is found in Proposition 9 of Book 1 of the Elements. Given a right angle, we can use this process to split that right angle indefinitely to form any binary fraction of it (i.e., a fraction whose denominator is a power of two, such as 1/2, 1/4 or 3/8). Thus, we can measure any angle in terms of right angles. That is, a measurement system in which the size of the right angle is considered to be one. Introduction to Radian Measure Trigonometry is simplified if we choose the following strange angle as "one": the angle at the center of a circle subtended by an arc whose length equals the radius. This angle is called one radian. Understanding the three sides of a Radian To illustrate how the three sides of a radian relate to one another try the below thought experiment: - Assume you have a piece of string that is exactly the length of the radius of a circle. - Assume you have drawn a radian in the same circle. The radian has 3 points. One is the center of your circle and the other two are on the circumference of the circle where the sides of the radian intersect with the circle. - Attach one end of the string at one of the points where the radian intersects with the circumference of the circle. - Take the other end of the string and, starting at the point you chose in the previous step, trace the circumference of the circle towards the second point where the radian intersects with the circle until the string is pulled tight. - You will see that the end of the string travels past the second point. - This is because the string is now in a straight line. However, the radian has an arc for its third side, not a straight line. Even though a radian has three equal sides, the arc's curve causes the two points where the radian intersects with the circle to be closer together than they are to the third point of the triangle, which is at the center of our circle in this example. - Now, with the string still pulled tight, find the half way point of the string, then pull it onto the circumference of the circle while allowing the end of the string to move along the circle's circumference. The end of the string is now closer to the second point because the path of the string is closer to the path of the circle's circumference. - We can keep improving the fit of the string's path to the path of the circle's circumference by dividing each new section of the string in half and pulling it towards the circumference of the circle. 
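The halving procedure just described can be imitated numerically: replace the arc between the two points with 1, 2, 4, 8, ... equal chords and add up their lengths. The total creeps up towards the radius itself, which is exactly what makes this angle a natural unit. Below is a small sketch of that check; it is an addition, not part of the original wikibook.

```python
import math

# Numerical version of the string experiment above: approximate the arc
# subtended by an angle of 1 radian on a circle of radius r by a chain of
# equal chords, doubling the number of chords each time.
r = 1.0          # radius of the circle
theta = 1.0      # angle of one radian

for n in (1, 2, 4, 8, 16, 32):
    chord = 2 * r * math.sin(theta / (2 * n))   # length of each small chord
    print(n, n * chord)                         # total length approaches r * theta = 1.0
```

With one chord the total is about 0.959 (the straight string falling short of the arc); with 32 chords it is about 0.9999, closing in on the radius.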
Is a radian affected by the size of its circle? Does it matter what size circle is used to measure in radians? Perhaps using the radius of a large circle will produce a different angle than that produced by the radius of a small circle. The answer is no. Recall our radian and circle from the experiment in the subsection above. Draw another circle inside the first circle, with the same center, but with half the radius. You will see that you have created a new radian inside the smaller circle that shares the same angle as the radian in the larger circle. We know that the two sides of the radians emanating from the center of the circles are equal to the radius of their respective circle. We also know that the third side of the radian in the larger circle (the arc) is also equal to the larger circle's radius. But how do we know that the third side of the radian in the smaller circle (the arc that follows the circumference of the smaller circle) is equal to the radius of the smaller circle? To see why we do know that the third side of the smaller circle's radian is equal to its radius, we first connect the two points of each radian that intersect with the circle with each other. By doing so, you will have created two isosceles triangles (triangles with two equal sides and two equal angles). An isosceles triangle has two equal angles and two equal sides. If you know one angle of any isosceles triangle and the length of the two sides that make up that angle, then you can easily deduce the remaining characteristics of the isosceles triangle. For instance, if the two equal sides of an isosceles triangle intersect to form an angle that is equal to 40°, then you know the remaining angles must both equal 70°. Since we know that the equal sides of our two isosceles triangles make up our known angle, then we can deduce that both of our radians (when converted to isosceles triangles with straight lines) have identical second and third angles. We also know that triangles with identical angles, regardless of their size, will have the length of their sides in a constant ratio to each other. For instance, we can deduce that an isosceles triangle will have sides that measure 2 meters by 4 meters by 4 meters if we know that an isosceles triangle with identical angles measures 1 meter by 2 meters by 2 meters. Therefore, in our example, our isosceles triangle formed by the second smaller circle will have a third side exactly equal to half of the third side of the isosceles triangle created by the larger circle. The relationship between the size of the sides of two triangles that share identical angles is also found in the relationship between the radius and circumference of two circles that share the same center point - they will share the exact same ratio. In our example then, since the radius of our second smaller circle is exactly half of the radius of the larger circle, the arc between the two points where the sides of the smaller radian intersect its circle (which we have shown is one half of the distance between the two similar points on the larger circle) will share the same exact ratio. And there you have it - the size of the circle does not matter. Using Radians to Measure Angles Once we have an angle of one radian, we can chop it up into binary fractions as we did with the right angle to get a vast range of known angles with which to measure unknown angles. A protractor is a device which uses this technique to measure angles approximately. 
To measure an angle with a protractor: place the marked center of the protractor on the corner of the angle to be measured, align the right hand zero radian line with one line of the angle, and read off where the other line of the angle crosses the edge of the protractor. A protractor is often transparent with angle lines drawn on it to help you measure angles made with short lines: this is allowed because angles do not depend on the length of the lines from which they are made. If we agree to measure angles in radians, it would be useful to know the size of some easily defined angles. We could of course simply draw the angles and then measure them very accurately, though still approximately, with a protractor: however, we would then be doing physics, not mathematics. The ratio of the length of the circumference of a circle to its radius is defined as 2π, where π is an invariant independent of the size of the circle by the argument above. Hence if we were to move 2π radii around the circumference of a circle from a given point on the circumference of that circle, we would arrive back at the starting point. We have to conclude that the size of the angle made by one circuit around the circumference of a circle is 2π radians. Likewise a half circuit around a circle would be π radians. Imagine folding a circle in half along an axis of symmetry: the resulting crease will be a diameter, a straight line through the center of the circle. Hence a straight line has an angle of size π radians. Folding a half circle in half again produces a quarter circle, which must therefore have an angle of size π/2 radians. Is a quarter circle a right angle? To see that it is: draw a square whose corner points lie on the circumference of a circle. Draw the diagonal lines that connect opposing corners of the square; by symmetry they will pass through the center of the circle, producing 4 similar triangles. Each such triangle is isosceles, and has an angle of size 2π/4 = π/2 radians where the two equal length sides meet at the center of the circle. Thus the other two angles of each triangle must be equal and sum to π/2 radians, that is, each angle must be of size π/4 radians. Since we know that such a triangle is right angled, we must conclude that an angle of size π/2 radians is indeed a right angle. Summary and Extra Notes In summary: it is possible to make deductions about the sizes of angles in certain special conditions using geometrical arguments. However, in general, geometry alone is not powerful enough to determine the size of unknown angles for any arbitrary triangle. To solve such problems we will need the help of trigonometric functions. In principle, all angles and trigonometric functions are defined on the unit circle. The term unit in mathematics applies to a single measure of any length. We will later apply the principles gleaned from unit measures to larger (or smaller) scaled problems. All the functions we need can be derived from a triangle inscribed in the unit circle: it happens to be a right-angled triangle. The center point of the unit circle will be set on a Cartesian plane, with the circle's centre at the origin of the plane, the point (0,0). Thus our circle will be divided into four sections, or quadrants. Quadrants are always counted counter-clockwise, as is the default rotation of angular velocity (omega). 
Now we inscribe a triangle in the first quadrant (that is, where the x- and y-axes are assigned positive values) and let one leg of the angle be on the x-axis and the other be parallel to the y-axis. (Just look at the illustration for clarification). Now we let the hypotenuse (which is always 1, the radius of our unit circle) rotate counter-clockwise. You will notice that a new triangle is formed as we move into a new quadrant, not only because the sum of a triangle's angles cannot be beyond 180°, but also because there is no way on a two-dimensional plane to imagine otherwise.
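The relationships derived above (a full circuit is 2π radians, a straight line π, a right angle π/2) can be checked numerically, and rotating the hypotenuse of the unit circle gives the two legs of the inscribed triangle directly. A brief sketch follows; the function names are ours, not the wikibook's.

```python
import math

# Radian facts derived above: full turn = 2*pi, straight line = pi, right angle = pi/2.
print(2 * math.pi, math.pi, math.pi / 2)

def to_radians(degrees):
    # 360 degrees corresponds to one full turn of 2*pi radians.
    return degrees * 2 * math.pi / 360

# The point on the unit circle reached by rotating the hypotenuse through an
# angle t from the positive x-axis; the two legs of the inscribed triangle.
def unit_circle_point(t):
    return (math.cos(t), math.sin(t))

print(to_radians(90))                   # -> 1.5707..., i.e. pi/2, a right angle
print(unit_circle_point(math.pi / 4))   # -> (0.7071..., 0.7071...), first quadrant
```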
https://en.m.wikibooks.org/wiki/Trigonometry/For_Enthusiasts/Trigonometry_Done_Rigorously
4.03125
Found only in the southern part of Madagascar in the dry forest and bush, the ring-tailed lemur is a large, vocal primate with brownish-gray fur and a distinctive tail with alternating black and white rings. Male and female ring-tailed lemurs are similar physically. They are roughly the same size, measuring about 42.5 cm (1.4 ft.) from head to rump and weighing roughly 2.25 kg (5 lb.). Highly social creatures, ring-tailed lemurs live in groups averaging 17 members. Their society is female-dominant, and a group will often contain multiple breeding females. Females reproduce starting at 3 years of age, generally giving birth to one baby a year. When born, a ring-tailed lemur baby weighs less than 100 g (3 oz.). The newborn is carried on its mother’s chest for 1-2 weeks and then is carried on her back. At 2 weeks, the baby starts eating solid food and begins venturing out on its own. But the juvenile is not fully weaned until 5 months of age. Although they are capable climbers, ring-tailed lemurs spend a third of their time on the ground foraging for food. They range far to find leaves, flowers, bark, sap, and small invertebrates to eat. When the lemurs travel over ground, they keep their tails in the air to ensure everyone in the group is in sight and stays together. Aside from using visual cues, ring-tailed lemurs also communicate via scent and vocalizations. They mark their territory by scent. A male lemur will also engage in stink fights during mating season, wiping his tail with the scent glands on his wrists and waving it at another male while staring menacingly. Eventually one male will back down and run away. Vocally, ring-tailed lemurs have several different alarm calls that alert members to danger. They have several predators, including fossas (mammals related to the mongoose), Madagascar harrier-hawks, Madagascar buzzards, Madagascar ground boas, civets, and domestic cats and dogs. Ring-tailed lemurs are considered endangered by the IUCN Red List. The main threat to their population is habitat destruction. Much of their habitat is being converted to farmland or burned for the production of charcoal. However, the ring-tailed lemur is popular in zoos, and they do comparatively well in captivity and reproduce regularly. In captivity, ring-tailed lemurs can live for nearly 30 years, compared to up to 20 in the wild. What You Can Do to Help You can help ring-tailed lemurs by contributing to the Lemur Conservation Foundation through volunteer work or donations. The WWF also provides the opportunity to adopt a lemur. The money donated goes to help establish and manage parks and protected areas in Madagascar.
http://www.animalfactguide.com/animal-facts/ring-tailed-lemur/
4.1875
Kindergarten students do not have textbooks. They learn through units that teachers use to deliver instruction based on kindergarten learning objectives focused around a specific theme. These units form the year's curriculum. The time spent on each unit ranges from one week to a month. Some kindergarten themes are teacher-created; others are produced by publishers of kindergarten reading programs. Many teachers use a transportation theme because it allows children to learn about the many ways people travel. Kindergarten students do not learn social studies as a separate subject. Instead, it is embedded in the themes they study all year. In the transportation unit, students learn to classify different modes of transportation according to where they are used -- sky, sea or land. Teachers also emphasize the importance of vehicles in people's daily lives. Teachers focus on reading skills during the majority of the school day. Thematic units give them many opportunities to incorporate reading objectives, such as blending vowel-consonant sounds orally to make words. In a transportation unit, teachers help students sound out words such as "jet", "truck" and "bike." Teachers read stories and poems that use these words so children can hear them being used in the correct context. Kindergarten teachers help students build vocabulary and language skills through a transportation unit. During whole- and small-group discussions, students learn the names of various kinds of transportation and how to use these words in complete sentences. This type of activity improves their ability to communicate, too. Kindergarten students begin writing letters at the start of the year and then progress to words and sentences. In the transportation unit, the teacher may have students draw or color a picture of their favorite kind of transportation and write a sentence about it. This reinforces knowledge that language is written and read from left to right, while giving students the opportunity to practice correct letter formation. Kindergarten students take math as a separate subject, but thematic units incorporate math skills for reinforcement. Students can make trains from colorful, interlocking plastic cubes, and kindergarten teachers can use this lesson to reinforce mathematical concepts. Teachers can have students make their train with different colors to teach patterns, or they can let students have train races. For a race, two students take turns rolling a die. Each student moves his train according to the number he has thrown on the die. The student whose train reaches the finish line first wins.
http://classroom.synonym.com/objectives-transportation-units-kindergarten-3581.html
4.21875
Dylan came storming in the door after a busy day at school. He slammed his books down on the kitchen table. “What is the matter?” his Mom asked, sitting down at the table. “Well, I made this great geodesic dome. It is finished and doing great, but Mrs. Patterson wants me to investigate other shapes that you could use to make a dome. I don’t want to do it. I feel like my project is finished,” Dylan explained. “Maybe Mrs. Patterson just wanted to give you an added challenge.” “Maybe, but what other shapes can be used to form a dome? The triangle makes the most sense,” Dylan said. “Yes, but to figure this out, you need to know what other shapes tessellate,” Mom explained. “What does it mean to tessellate? And how can I figure that out?” Pay attention to this Concept and you will know how to answer these questions by the end of it. We can use translations and reflections to make patterns with geometric figures called tessellations. A tessellation is a pattern in which geometric figures repeat without any gaps between them. In other words, the repeated figures fit perfectly together. They form a pattern that can stretch in every direction on the coordinate plane. Take a look at the tessellations below. This tessellation could go on and on. We can create tessellations by moving a single geometric figure. We can perform transformations such as translations and rotations to move the figure so that the original and the new figure fit together. How do we know that a figure will tessellate? If the figure is the same on all sides and its angles can fit together around a point with no gaps, it will fit together when it is repeated. Figures that tessellate tend to be regular polygons. Regular polygons have straight sides that are all congruent. When we rotate or slide a regular polygon, the side of the original figure and the side of its translation will match. Not all geometric figures can tessellate, however. When we translate or rotate them, their sides do not fit together. Remember this rule and you will know whether a figure will tessellate or not! Think about whether or not there will be gaps in the pattern as you move a figure. To make a tessellation, as we have said, we can translate some figures and rotate others. Take a look at this situation. Create a tessellation by repeating the following figure. First, trace the figure on a piece of stiff paper and then cut it out. This will let you perform translations easily so you can see how best to repeat the figure to make a tessellation. This figure is exactly the same on all sides, so we do not need to rotate it to make the pieces fit together. Instead, let’s try translating it. Trace the figure. Then slide the cutout so that one edge of it lines up perfectly with one edge of the figure you drew. Trace the cutout again. Now line the cutout up with another side of the original figure and trace it. As you add figures to the pattern, the hexagons will start making themselves! Check to make sure that there are no gaps in your pattern. All of the edges should fit perfectly together. You should be able to go on sliding and tracing the hexagon forever in all directions. You have made a tessellation! Do the following figures tessellate? Why or why not? Solution: Yes, because it is a regular polygon with sides all the same length. Solution: No, because it is a circle and the sides are not line segments. Solution: Yes, because it is made up of two figures that tessellate. Now let's go back to the dilemma at the beginning of the Concept. First, let’s answer the question about tessellations. 
What does it mean to tessellate? To tessellate means that congruent figures are put together to create a pattern where there aren't any gaps or spaces in the pattern. Figures can be put side by side and/or upside down to create the pattern. The pattern is called a tessellation. How do you figure out which figures will tessellate and which ones won't? Figures that will tessellate are congruent figures. They have to be exactly the same length on all sides. They also have to be able to fit together. A circle will not tessellate because there aren't sides to fit together. A hexagon, on the other hand, will tessellate as long as the same hexagon is used to create the pattern. Tessellation: a pattern made by using different transformations of geometric figures. A figure will tessellate if it is a regular geometric figure and if the sides all fit together perfectly with no gaps. Here is one for you to try on your own. Draw a tessellation of equilateral triangles. In an equilateral triangle each angle is 60°. Therefore, six triangles will perfectly fit around each point. Directions: Will the following figures tessellate? - A regular pentagon - A regular octagon - A square - A rectangle - An equilateral triangle - A parallelogram - A circle - A cylinder - A cube - A cone - A sphere - A rectangular prism - A right triangle - A regular heptagon - A regular decagon
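A quick way to check answers for the flat shapes in the directions above, consistent with the equilateral-triangle example (six 60° angles meet at each point), is to test whether a regular polygon's interior angle divides 360° evenly. The short sketch below is an addition, not part of the original lesson.

```python
# A regular polygon tessellates the plane on its own exactly when copies of its
# interior angle fit around a point with no gap, i.e. the angle divides 360 evenly.

def interior_angle(sides):
    return (sides - 2) * 180 / sides

def tessellates(sides):
    return 360 % interior_angle(sides) == 0

for sides in (3, 4, 5, 6, 8, 10):
    print(sides, interior_angle(sides), tessellates(sides))
# Triangles (60), squares (90) and hexagons (120) work; pentagons (108),
# octagons (135) and decagons (144) leave gaps.
```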
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-8/r9/section/6.16/
4.1875
Japanese American Internment was the relocation and internment by the United States government in 1942 of about 110,000 Japanese Americans and Japanese who lived along the Pacific coast of the United States to camps called “War Relocation Camps,” in the wake of Imperial Japan's attack on Pearl Harbor. The internment of Japanese Americans was applied unequally throughout the United States. All who lived on the West Coast of the United States were interned, while in Hawaii, where the 150,000-plus Japanese Americans composed over one-third of the population, an estimated 1,200 to 1,800 were interned. Of those interned, 62% were American citizens. The internment was authorized by Executive Order 9066, issued February 19, 1942, which allowed local military commanders to designate “military areas” as “exclusion zones,” from which “any or all persons may be excluded.” This power was used to declare that all people of Japanese ancestry were excluded from the entire Pacific coast, including all of California and much of Oregon, Washington and Arizona, except for those in internment camps. Many internees lost irreplaceable personal property due to the restrictions on what could be taken into the camps. Some Japanese-American farmers were able to find families willing to tend their farms for the duration of their internment. In other cases Japanese-American farmers had to sell their property in a matter of days, usually at great financial loss. These losses were compounded by theft and destruction of items placed in governmental storage. A number of persons died or suffered for lack of medical care, and several were killed by sentries. Loyalty questions and segregation Some Japanese Americans did question the American government after finding themselves in internment camps. Several pro-Japan groups formed inside the camps, particularly at the Tule Lake location. When the government passed a law that made it possible for an internee to renounce American citizenship, 5,589 internees opted to do so; 5,461 of these were at Tule Lake. Of those who renounced their citizenship, 1,327 were repatriated to Japan. Many of these individuals would later face stigmatization in the Japanese-American community, after the war, for having made that choice, although even at the time they were not certain what their futures held were they to remain American, and remain interned. These renunciations of American citizenship have been highly controversial, for a number of reasons. Some apologists for internment have cited the renunciations as evidence that “disloyalty” or anti-Americanism was well represented among the interned peoples, thereby justifying the internment. Many historians have dismissed the latter argument, for its failure to consider that the small number of individuals in question were in the midst of persecution by their own government at the time of the “renunciation”: [T]he renunciations had little to do with “loyalty” or “disloyalty” to the United States, but were instead the result of a series of complex conditions and factors that were beyond the control of those involved. 
Prior to discarding citizenship, most or all of the renunciants had experienced the following misfortunes: forced removal from homes; loss of jobs; government and public assumption of disloyalty to the land of their birth based on race alone; and incarceration in a “segregation center” for “disloyal” ISSEI or NISEI… Minoru Kiyota, who was among those who renounced his citizenship and swiftly came to regret the decision, has stated that he wanted only “to express my fury toward the government of the United States” for his internment and for the mental and physical duress, as well as the intimidation, he was made to face. [M]y renunciation had been an expression of momentary emotional defiance in reaction to years of persecution suffered by myself and other Japanese Americans and, in particular, to the degrading interrogation by the FBI agent at Topaz and being terrorized by the guards and gangs at Tule Lake. Civil rights attorney Wayne M. Collins successfully challenged most of these renunciations as invalid, owing to the conditions of duress and intimidation under which the government obtained them. Many of the deportees were Issei (first generation Japanese immigrants) who often had difficulty with English and often did not understand the questions they were asked. Even among those Issei who had a clear understanding, Question 28 posed an awkward dilemma: Japanese immigrants were denied US citizenship at the time, so when asked to renounce their Japanese citizenship, answering “Yes” would have made them stateless persons. When the government circulated a questionnaire seeking army volunteers from among the internees, 6% of military-aged male respondents volunteered to serve in the U.S. Armed Forces. Most of those who refused tempered that refusal with statements of willingness to fight if their rights as American citizens were restored. 20,000 Japanese American men and many Japanese American women served in the U.S. Army during World War II. The famed 442nd Regimental Combat Team, which fought in Europe, was formed from those Japanese Americans who did agree to serve. This unit was the most highly decorated US military unit of its size and duration. Most notably, the 442nd was known for saving the 141st (or the “lost battalion”) from the Germans. The 1951 film Go For Broke! was a fairly accurate portrayal of the 442nd, and starred several of the RCT’s veterans. In 1980, President Jimmy Carter opened an investigation to determine whether the decision to put Japanese Americans into internment camps had been adequately justified by the government. He appointed the Commission on Wartime Relocation and Internment of Civilians to investigate the camps. The commission’s report, named “Personal Justice Denied,” found little evidence of Japanese disloyalty at the time and recommended the government pay reparations to the survivors. It recommended a payment of $20,000 to each individual internment camp survivor; these reparations were later authorized by legislation signed by President Ronald Reagan. Manzanar – Historical Resource Study/Special History Study (Epilogue) On the 34th anniversary of the issuance of Executive Order 9066, President Gerald R. Ford formally rescinded the presidential proclamation, stating “We know now what we should have known then: not only was evacuation wrong, but Japanese-Americans were and are loyal Americans.” On November 25, 1978, the first “Day of Remembrance” program was conducted at Camp Harmony, Washington, site of the former Puyallup Assembly Center. 
In late January 1979, the JACL National Redress Committee met with Hawaii Senators Daniel Inouye and Spark Matsunaga and California Congressmen Norman Mineta and Robert Matsui to discuss strategies for obtaining redress. A study commission was proposed. Finally, on July 31, 1980, President Jimmy Carter signed into law the Commission on Wartime Relocation and Internment of Civilians (CWRIC) Act. Between July 14 and December 9, 1981, the CWRIC held twenty days of hearings in nine cities during which more than 750 witnesses testified. In December 1982, the CWRIC released its report, Personal Justice Denied, concluding that Executive Order 9066 was “not justified by military necessity” and was the result of “race prejudice, war hysteria, and a failure of political leadership.” In June 1983, the CWRIC issued five recommendations for redress to Congress. First, it called for a joint congressional resolution acknowledging and apologizing for the wrongs initiated in 1942. Second, it recommended a presidential pardon for persons who had been convicted of violating the several statutes establishing and enforcing the evacuation and relocation program. Third, it urged Congress to direct various parts of the government to deal liberally with applicants for restitution of status and entitlements lost because of wartime prejudice and discrimination, such as the less than honorable discharges that were given to many Japanese American soldiers in the weeks after Pearl Harbor. Fourth, it recommended that Congress appropriate money to establish a special foundation to sponsor research and public educational activities “so that the causes and circumstances of this and similar events may be illuminated”. For the entire article: http://www.cr.nps.gov/history/online_books/manz/hrse.htm - Gila River Relocation Camp - Granada Relocation Center NHL Nomination – nps.gov - Haiku Internment Camp - Heart Mountain Relocation Center NHL Nomination – nps.gov - Honouliuli National Monument (U.S. National Park Service) - Jerome Relocation Camp - Kalaheo Internment Camp - Kilauea Detention Center - Manzanar National Historic Site (U.S. National Park Service) - Minidoka National Historic Site (U.S. National Park Service) - Poston War Relocation Center - Rohwer Relocation Camp - Sand Island Internment Camp - Topaz Central Utah Relocation Center – Site NHL Nomination - Tule Lake Unit (U.S. National Park Service) – nps.gov Day of Remembrance: Japanese-Americans show support for Muslims, Sikhs SAN JOSE — For older Japanese-Americans, the discrimination and attacks on Muslims and Sikhs are opening afresh an old wound that never healed. To show support for Arab-Americans, the South Bay’s Japanese-American community held a somber candlelight ceremony and procession at San Jose Buddhist Church on Sunday evening, linking diverse faiths through similar fears. The “Day of Remembrance” is an annual commemoration of Feb. 19, 1942, a day that changed the lives of Japanese-Americans forever. Citing concerns about wartime sabotage and espionage, President Franklin D. Roosevelt signed an order that led to the internment of more than 110,000 people of Japanese ancestry at 10 camps scattered across seven states. But the gathering evoked memories of more recent horrors, such as the murder of the six worshipers at a Sikh temple in Oak Creek, Wisc., and the burning of a mosque in Joplin, Mo. 
“We have common issues in terms of justice, equity and fair treatment under the Constitution,” said Congressman Mike Honda, D-San Jose, who was interned in Colorado as a child. There is no justification for racism or denial of civil liberties, not in 1942 and not in 2013, said Honda. He also urged the acceptance of Latinos, gays and lesbians and others suffering from discrimination.
http://propresobama.org/2013/02/18/executive-order-9066-japanese-americans-incarceration/
4.03125
The tricolored blackbird forms the largest colonies of any North American land bird, often with breeding groups of tens of thousands of individuals. In the 19th century, some colonies contained more than a million birds — enough to make one observer exclaim over flocks darkening the sky “for some distance by their masses,” not unlike passenger pigeons. But because a small number of colonies may contain most of the population, human impacts can have devastating results. Over the past 70 years, destruction of the tricolor’s marsh and grassland homes has reduced its populations to a small fraction of their former enormity. While its big breeding colonies make the species seem abundant to casual observers, the blackbird’s gregarious nesting behavior renders these colonies vulnerable to large-scale failures. In agricultural habitat the birds experience huge losses of reproductive effort to crop-harvesting; every year, thousands of nests in dairy silage fields — where grass is being fermented and preserved for fodder — are lost to mowing. In what little remains of California’s native emergent-marsh habitat, tricolors are vulnerable to high levels of predation. The species has been in decline ever since widespread land conversion took hold in California. The Center submitted state and federal listing petitions for the species in 2004, but continuing threats to tricolors were ignored for many years. California announced its refusal to protect the species, as did the U.S. Fish and Wildlife Service in 2006. But in 2014, when the bird's population reached the smallest number ever recorded, only 145,000 — and when comprehensive statewide surveys showed that an additional two-thirds of remaining tricolored blackbirds had been lost since 2008 — the Center again petitioned for an endangered listing under the California Endangered Species Act, on an emergency basis. And finally, in December 2015, the California Fish and Game Commission announced it was making the species a "candidate" for state protection — a definitive victory, since candidates for state protection enjoy actual safeguards until they receive a place on the state's endangered species list (unlike federal "candidate" species).
http://www.biologicaldiversity.org/species/birds/tricolored_blackbird/index.html
4
Cardiorespiratory fitness refers to the ability of the circulatory and respiratory systems to supply oxygen to skeletal muscles during sustained physical activity. Regular exercise makes these systems more efficient by enlarging the heart muscle, enabling more blood to be pumped with each stroke, and increasing the number of small arteries in trained skeletal muscles, which supply more blood to working muscles. Exercise improves the respiratory system by increasing the amount of oxygen that is inhaled and distributed to body tissue. There are many benefits of cardiorespiratory fitness. It can reduce the risk of heart disease, lung cancer, type 2 diabetes, stroke, and other diseases. Cardiorespiratory fitness helps improve lung and heart condition, and increases feelings of wellbeing. The American College of Sports Medicine recommends aerobic exercise 3–5 times per week for 30–60 minutes per session, at a moderate intensity, that maintains the heart rate between 65–85% of the maximum heart rate. The cardiovascular system is responsible for a vast set of adaptations in the body throughout exercise. It must immediately respond to changes in cardiac output, blood flow, and blood pressure. Cardiac output is defined as the product of heart rate and stroke volume, and represents the volume of blood being pumped by the heart each minute. Cardiac output increases during physical activity due to an increase in both the heart rate and stroke volume. At the beginning of exercise, the cardiovascular adaptations are very rapid: “Within a second after muscular contraction, there is a withdrawal of vagal outflow to the heart, which is followed by an increase in sympathetic stimulation of the heart. This results in an increase in cardiac output to ensure that blood flow to the muscle is matched to the metabolic needs”. Both heart rate and stroke volume vary directly with the intensity of the exercise performed and many improvements can be made through continuous training. Another important issue is the regulation of blood flow during exercise. Blood flow must increase in order to provide the working muscle with more oxygenated blood, which can be accomplished through neural and chemical regulation. Blood vessels are under sympathetic tone; therefore the release of noradrenaline and adrenaline will cause vasoconstriction of non-essential tissues such as the liver, intestines, and kidneys, and decrease neurotransmitter release to the active muscles, promoting vasodilatation. Also, chemical factors such as a decrease in oxygen concentration and an increase in carbon dioxide or lactic acid concentration in the blood promote vasodilatation to increase blood flow. As a result of increased vascular resistance, blood pressure rises throughout exercise and stimulates baroreceptors in the carotid arteries and aortic arch. “These pressure receptors are important since they regulate arterial blood pressure around an elevated systemic pressure during exercise”. Respiratory system adaptations Although all of the described adaptations in the body to maintain homeostatic balance during exercise are very important, the most essential factor is the involvement of the respiratory system. The respiratory system allows for the proper exchange and transport of gases to and from the lungs while being able to control the ventilation rate through neural and chemical impulses. In addition, the body is able to efficiently use the three energy systems which include the phosphagen system, the glycolytic system, and the oxidative system. 
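To make the quantities above concrete, the sketch below applies the cardiac-output definition (heart rate times stroke volume) and the ACSM intensity range quoted earlier. The numbers are illustrative only, not clinical guidance, and the 220-minus-age estimate of maximum heart rate is a common rule of thumb assumed here rather than something stated in the text.

```python
# Illustrative numbers only: cardiac output is heart rate x stroke volume,
# and the ACSM guideline above targets 65-85% of maximum heart rate.

def cardiac_output(heart_rate_bpm, stroke_volume_ml):
    """Litres of blood pumped per minute."""
    return heart_rate_bpm * stroke_volume_ml / 1000

def target_heart_rate_zone(age, low=0.65, high=0.85):
    """Uses the common 220-minus-age approximation of maximum heart rate."""
    hr_max = 220 - age
    return hr_max * low, hr_max * high

print(cardiac_output(70, 70))          # rest: roughly 4.9 L/min
print(cardiac_output(150, 100))        # exercise: roughly 15 L/min
print(target_heart_rate_zone(30))      # (123.5, 161.5) beats per minute
```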
In most cases, as the body is exposed to physical activity, the core temperature of the body tends to rise as heat gain becomes larger than the amount of heat lost. “The factors that contribute to heat gain during exercise include anything that stimulate metabolic rate, anything from the external environment that causes heat gain, and the ability of the body to dissipate heat under any given set of circumstances”. In response to an increase in core temperature, there are a variety of factors which adapt in order to help restore heat balance. The main physiological response to an increase in body temperature is mediated by the thermal regulatory center located in the hypothalamus of the brain which connects to thermal receptors and effectors. There are numerous thermal effectors including sweat glands, smooth muscles of blood vessels, some endocrine glands, and skeletal muscle. With an increase in the core temperature, the thermal regulatory center will stimulate the arterioles supplying blood to the skin to dilate along with the release of sweat on the skin surface to reduce temperature through evaporation. In addition to the involuntary regulation of temperature, the hypothalamus is able to communicate with the cerebral cortex to initiate voluntary control such as removing clothing or drinking cold water. With all regulations taken into account, the body is able to maintain core temperature within about two or three degrees Celsius during exercise. - Donatello, Rebeka J. (2005). Health, The Basics. San Francisco: Pearson Education, Inc. - Pollock, M.L.; Gaesser, G.A. (1998). "Acsm position stand: the recommended quantity and quality of exercise for developing and maintaining cardiorespiratory and muscular fitness, and flexibility in healthy adults". Medicine & Science in Sports & Exercise 30 (6): 975–991. doi:10.1097/00005768-199806000-00032. PMID 9624661. Retrieved 22 March 2012. - Brown, S.P.; Eason, J.M.; Miller, W.C. (2006). "Exercise Physiology: Basis of Human". Movement in Health and Disease: 75–247. - Howley, E.T., and Powers, S.K. (1990). Exercise Physiology: Theory and Application to Fitness and Performance. Dubuque, IA: Wm. C. Brown Publishers. pp. 131–267. - Shaver, L.G. (1981). Essentials of Exercise Physiology. minneapolis, MN: Burgess Publishing Company. pp. 1–132.
https://en.wikipedia.org/wiki/Cardiorespiratory_fitness
4.1875
Government of the Roman Republic - The Senate The Roman senate had much of the real power during the time of the republic. The senate was made up of 300 powerful Roman men (although it was increased to as many as 900 in the later years of the republic). Many Roman senators had held another high office before being appointed to the senate. In fact, once a Roman had served in a high office in the republic (consul, praetor, etc…), he was made a senator for life. Most members of the Roman Senate were patricians or members of wealthy landowning families. New senators were selected by other high ranking officials in the republic like consuls or tribunes. Senators were not elected by the people. They were more like what we would call the “good ole boys” club today – basically a group of noble or very wealthy men with lots of connections. The House of Lords in Britain would be similar also, although the House of Lords no longer has much real political power. The Roman Senate had plenty of power, especially during the republic. Other senators obtained office by being elected or appointed to another office. So, for example, the high priest of Rome, the pontifex maximus, automatically had a seat in the senate. These senators did not have an official vote in the senate, but they could participate in the debate and support one side or the other on a particular issue. Members of the Roman senate were not supposed to own or run businesses while they were in office. This regulation was often ignored and rarely enforced. Senators were also sometimes allowed special treatment (seating preference, etc.) at feasts, the circus, plays, or other important events or performances. Being a senator in ancient Rome was an honor, and with that honor came a lot of influence. High ranking officials (magistrates) in the government – consuls, dictators, praetors – could call a meeting of the senate for just about any purpose. During a senate meeting, magistrates could propose legislation. The senators would then debate the law and send it to the populous – the people in the assemblies – for a vote. Duties of the republican Roman senate included: • Public welfare (taking care of the people) • Overseeing Roman religious law • Debating and preparing legislation (laws) to be reviewed by the assembly – but – the senate, by itself, could not make law • Managing Rome’s affairs with other nations • Regulating the taxing and spending of money
http://project-history.blogspot.com/2008/02/government-of-roman-republic-senate.html
4.03125
Obsessive-compulsive disorder (OCD) is a psychiatric anxiety disorder most commonly characterized by a subject's obsessive, distressing, intrusive thoughts and related compulsions (tasks or "rituals") which attempt to neutralize the obsessions. To be diagnosed with obsessive-compulsive disorder, one must have the presence of obsessions, compulsions, or both, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM)-V diagnostic criteria. The manual to the diagnostic criteria from DSM-V (2013) describes these obsessions and compulsions: Obsessions are defined by: - Recurrent and persistent thoughts, impulses, or images that are experienced at some time during the disturbance, as intrusive and undesirable, and that cause marked anxiety or distress. - The thoughts, impulses, or images are not simply excessive worries about real-life problems. - The person attempts to ignore or suppress such thoughts, impulses, or images, or to neutralize them with some other thought or action (for instance, by performing a compulsion). - The person recognizes that the obsessional thoughts, impulses, or images are a product of his or her own mind, and are not based in reality. Compulsions are defined by: - Repetitive behaviors or mental acts that the person feels driven to perform in response to an obsession, or according to rules that must be applied rigidly. - The behaviors or mental acts are aimed at preventing or reducing distress or preventing some dreaded event or situation; however, these behaviors or mental acts either are not connected in a realistic way with what they are designed to neutralize or prevent or are clearly excessive. In addition to these criteria, at some point during the course of the disorder, the obsessions or compulsions must be time-consuming (taking up more than one hour per day), cause distress, or cause impairment in social, occupational, or school functioning. The symptoms are not attributable to any physiological effects of a substance or other medical condition. The disturbance is also not better explained by symptoms of another mental disorder. Community studies have estimated 1-year prevalence of OCD to be 1.2% in the US and 1.1%-1.8% internationally. Research indicates that females are affected at a slightly higher rate than males in adulthood, and males are more commonly affected than females in childhood. OCD usually begins in adolescence or early adulthood, but it may also manifest in childhood. Typically, the onset of symptoms is gradual, although acute onset has also been reported. The majority of untreated individuals experience a chronic waxing and waning course, while others can experience episodic or deteriorating courses. The phrase "obsessive-compulsive" has worked its way into the wider English lexicon, and is often used in an offhand manner to describe someone who is meticulous or absorbed in a cause (see "anal retentive"). It is also important to distinguish OCD from other types of anxiety, including the routine tension and stress that appear throughout life. Although these signs are often present in OCD, a person who shows signs of infatuation or fixation with a subject/object, or displays traits such as perfectionism, does not necessarily have OCD, a specific and well-defined condition.
http://www.med.upenn.edu/ctsa/ocd_symptoms.html?6
4.09375
One of the realities of life is how so much of the world runs by mathematical rules. As one of the tools of mathematics, linear systems have multiple uses in the real world. Life is full of situations when the output of a system doubles if the input doubles, and the output cuts in half if the input does the same. That's what a linear system is, and any linear system can be described with a linear equation. In the Kitchen If you've ever doubled a favorite recipe, you've applied a linear equation. If one cake equals 1/2 cup of butter, 2 cups of flour, 3/4 tsp. of baking powder, three eggs and 1 cup of sugar and milk, then two cakes equal 1 cup of butter, 4 cups of flour, 1 1/2 tsp. of baking powder, six eggs and 2 cups of sugar and milk. To get twice the output, you put in twice the input. You might not have known you were using a linear equation, but that's exactly what you did. Suppose a water district wants to know how much snowmelt runoff it can expect this year. The melt comes from a big valley, and every year the district measures the snowpack and the water supply. It gets 60 acre-feet from every 6 inches of snowpack. This year surveyors measure 6 feet and 4 inches of snow. The district put that in the linear expression (60 acre-feet/6 inches) * 76 inches. Water officials can expect 760 acre-feet of snowmelt from the water. Just for Fun It's springtime and Irene wants to fill her swimming pool. She doesn't want to stand there all day, but she doesn't want to waste water over the edge of the pool, either. She sees that it takes 25 minutes to raise the pool level by 4 inches. She needs to fill the pool to a depth of 4 feet; she has 44 more inches to go. She figures out her linear equation: 44 inches * (25 minutes/4 inches) is 275 minutes, so she knows she has four hours and 35 minutes more to wait. Ralph has also noticed that it's springtime. The grass has been growing. It grew 2 inches in two weeks. He doesn't like the grass to be taller than 2 1/2 inches, but he doesn't like to cut it shorter than 1 3/4 inches. How often does he need to cut the lawn? He just puts that calculation in his linear expression, where (14 days/2 inches) * 3/4 inch tells hims he needs to cut his lawn every 5 1/4 days. He just ignores the 1/4 and figures he'll cut the lawn every five days. It's not hard to see other similar situations. If you want to buy beer for the big party and you've got $60 in your pocket, a linear equation tells you how much you can afford. Whether you need to bring in enough wood for the fire to burn overnight, calculate your paycheck, figure out how much paint you need to redo the upstairs bedrooms or buy enough gas to make it to and from your Aunt Sylvia's, linear equations provide the answers. Linear systems are, literally, everywhere. Where They Aren't One of the paradoxes is that just about every linear system is also a nonlinear system. Thinking you can make one giant cake by quadrupling a recipe will probably not work. If there's a really heavy snowfall year and snow gets pushed up against the walls of the valley, the water company's estimate of available water will be off. After the pool is full and starts washing over the edge, the water won't get any deeper. So most linear systems have a "linear regime" --- a region over which the linear rules apply--- and a "nonlinear regime" --- where they don't. As long as you're in the linear regime, the linear equations hold true. 
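All of these scenarios follow the same proportional rule, output = rate * input. As a rough illustration (the function names and the numbers are simply lifted from the examples above, so this is a sketch rather than anything authoritative), the calculations could be written in Python as:

```python
# Each scenario is the same proportional rule: output = rate * input.

def snowmelt_runoff(snowpack_inches):
    # 60 acre-feet of runoff per 6 inches of snowpack
    return (60 / 6) * snowpack_inches

def pool_fill_minutes(inches_to_go):
    # 25 minutes to raise the water level 4 inches
    return (25 / 4) * inches_to_go

def days_between_mowings(growth_allowed_inches):
    # the grass grew 2 inches in 14 days
    return (14 / 2) * growth_allowed_inches

print(snowmelt_runoff(76))          # 760.0 acre-feet from 6 ft 4 in of snow
print(pool_fill_minutes(44))        # 275.0 minutes, i.e. 4 hours 35 minutes
print(days_between_mowings(0.75))   # 5.25 days between cuts
```

Doubling any input doubles the output, which is exactly what makes each relationship linear.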
http://classroom.synonym.com/real-life-functions-linear-equations-2608.html
4.09375
A carry-save adder is a type of digital adder, used in computer microarchitecture to compute the sum of three or more n-bit numbers in binary. It differs from other digital adders in that it outputs two numbers of the same dimensions as the inputs, one which is a sequence of partial sum bits and another which is a sequence of carry bits. Consider the sum: 12345678 + 87654322 = 100000000 Using basic arithmetic, we calculate right to left, "8+2=0, carry 1", "7+2+1=0, carry 1", "6+3+1=0, carry 1", and so on to the end of the sum. Although we know the last digit of the result at once, we cannot know the first digit until we have gone through every digit in the calculation, passing the carry from each digit to the one on its left. Thus adding two n-digit numbers has to take a time proportional to n, even if the machinery we are using would otherwise be capable of performing many calculations simultaneously. In electronic terms, using bits (binary digits), this means that even if we have n one-bit adders at our disposal, we still have to allow a time proportional to n to allow a possible carry to propagate from one end of the number to the other. Until we have done this, - We do not know the result of the addition. - We do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative). A carry look-ahead adder can reduce the delay. In principle the delay can be reduced so that it is proportional to logn, but for large numbers this is no longer the case, because even when carry look-ahead is implemented, the distances that signals have to travel on the chip increase in proportion to n, and propagation delays increase at the same rate. Once we get to the 512-bit to 2048-bit number sizes that are required in public-key cryptography, carry look-ahead is not of much help. The basic concept Here is an example of a binary sum: 10111010101011011111000000001101 + 11011110101011011011111011101111 Carry-save arithmetic works by abandoning the binary notation while still working to base 2. It computes the sum digit by digit, as 10111010101011011111000000001101 + 11011110101011011011111011101111 = 21122120202022022122111011102212 The notation is unconventional but the result is still unambiguous. Moreover, given n adders (here, n=32 full adders), the result can be calculated after propagating the inputs through a single adder, since each digit result does not depend on any of the others. If the adder is required to add two numbers and produce a result, carry-save addition is useless, since the result still has to be converted back into binary and this still means that carries have to propagate from right to left. But in large-integer arithmetic, addition is a very rare operation, and adders are mostly used to accumulate partial sums in a multiplication. Supposing that we have two bits of storage per digit, we can use a redundant binary representation, storing the values 0, 1, 2, or 3 in each digit position. It is therefore obvious that one more binary number can be added to our carry-save result without overflowing our storage capacity: but then what? The key to success is that at the moment of each partial addition we add three bits: - 0 or 1, from the number we are adding. - 0 if the digit in our store is 0 or 2, or 1 if it is 1 or 3. - 0 if the digit to its right is 0 or 1, or 1 if it is 2 or 3. 
To put it another way, we are taking a carry digit from the position on our right, and passing a carry digit to the left, just as in conventional addition; but the carry digit we pass to the left is the result of the previous calculation and not the current one. In each clock cycle, carries only have to move one step along, and not n steps as in conventional addition. Because signals don't have to move as far, the clock can tick much faster. There is still a need to convert the result to binary at the end of a calculation, which effectively just means letting the carries travel all the way through the number just as in a conventional adder. But if we have done 512 additions in the process of performing a 512-bit multiplication, the cost of that final conversion is effectively split across those 512 additions, so each addition bears 1/512 of the cost of that final "conventional" addition. At each stage of a carry-save addition, - We know the result of the addition at once. - We still do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative). This latter point is a drawback when using carry-save adders to implement modular multiplication (multiplication followed by division, keeping the remainder only). If we cannot know whether the intermediate result is greater or less than the modulus, how can we know whether to subtract the modulus? Montgomery multiplication, which depends on the rightmost digit of the result, is one solution; though rather like carry-save addition itself, it carries a fixed overhead, so that a sequence of Montgomery multiplications saves time but a single one does not. Fortunately exponentiation, which is effectively a sequence of multiplications, is the most common operation in public-key cryptography. The carry-save unit consists of n full adders, each of which computes a single sum and carry bit based solely on the corresponding bits of the three input numbers. Given the three n-bit numbers a, b, and c, it produces a partial sum ps and a shift-carry sc: ps_i = a_i ⊕ b_i ⊕ c_i and sc_i = (a_i ∧ b_i) ∨ (a_i ∧ c_i) ∨ (b_i ∧ c_i). The entire sum can then be computed by: - Shifting the carry sequence sc left by one place. - Appending a 0 to the front (most significant bit) of the partial sum sequence ps. - Using a ripple carry adder to add these two together and produce the resulting (n + 1)-bit value. - Earle, J. G. et al., U.S. Patent 3,340,388, "Latched Carry Save Adder Circuit for Multipliers", filed July 12, 1965 - Earle, J. (March 1965), "Latched Carry-Save Adder", IBM Technical Disclosure Bulletin 7 (10): 909–910 - John von Neumann, Collected Works.
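To make the carry-save stage described above concrete, here is a minimal Python sketch (my own illustration, not the hardware from the cited patents): it computes the partial-sum word and the shifted carry word for three inputs, then performs the single conventional addition needed to convert back to binary.

```python
def carry_save_add(a, b, c):
    """One carry-save stage: three inputs in, two outputs (ps, sc) out.

    ps holds the bitwise sum without carries (a XOR b XOR c);
    sc holds the carries (the bitwise majority of a, b, c), already
    shifted left one place so that ps + sc equals a + b + c.
    """
    ps = a ^ b ^ c
    sc = ((a & b) | (a & c) | (b & c)) << 1
    return ps, sc

def resolve(ps, sc):
    # The final conversion back to ordinary binary: one conventional add,
    # where carries must propagate all the way through the number.
    return ps + sc

a, b, c = 0b1011, 0b1101, 0b0110
ps, sc = carry_save_add(a, b, c)
assert resolve(ps, sc) == a + b + c   # 11 + 13 + 6 == 30
print(bin(ps), bin(sc), resolve(ps, sc))
```

Each bit of ps depends only on the corresponding bits of a, b and c, which is why all n positions can be computed in parallel; only the final resolve step lets carries ripple the full width of the number.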
https://en.wikipedia.org/wiki/Carry-save_adder
4.09375
Parents' Guides to Student Success The Parents’ Guides to Student Success were developed by teachers, parents and education experts in response to the Common Core State Standards that more than 45 states have adopted. Created for grades K-8 and high school English, language arts/literacy and mathematics, the guides provide clear, consistent expectations for what students should be learning at each grade in order to be prepared for college and career. The guides include: - Key items children should be learning in English language arts and mathematics in each grade, once Common Core Standards are fully implemented. - Activities that parents can do at home to support their child's learning. Methods for helping parents build stronger relationships with their child's teacher. Tips for planning for college and career (high school only). What PTAs Can Do PTAs can play a pivotal role in how the standards are put in place at the state and district levels. PTA leaders are encouraged to meet with their school, district and/or state administrators to discuss their plans to implement the standards and how their PTA can support that work. The goal is that PTAs and education administrators will collaborate on how to share the guides with all of the parents and caregivers in their states or communities, once the Common Core Standards are fully implemented.
http://www.pta.org/parents/content.cfm?ItemNumber=2583&navItemNumber=3363
4.15625
Once a new national government had been established under a new Constitution, attention naturally Grade Range: 4-12 Resource Type(s): Interactives & Media, Lessons & Activities Duration: 10 minutes Date Posted: 3/1/2012 Use short videos, mini-activities, and practice questions to explore the basic elements of the United States government in this segment of Preparing for the Oath: U.S. History and Civics for Citizenship. The ten questions included in this segment cover topics such as federalism, the Constitution, and checks and balances. This site was designed with the needs of recent immigrants in mind. It is written at a “low-intermediate” ESL level. United States History Standards (Grades 5-12) 3: The institutions and practices of government created during the Revolution and how they were revised between 1787 and 1815 to create the foundation of the American political system based on the U.S. Constitution and the Bill of Rights
https://historyexplorer.si.edu/resource/preparing-oath-government-basics
4
Classroom Demonstrations and Lessons Find information about formulations, what common household products can be used to represent pesticide formulations, and exercise sheets. Find some incompatibility activities as well. What you need and how to do the LD50 demonstration. This demonstration illustrates how exposure to pesticides can be significantly reduced by wearing Personal Protective Equipment (PPE). This demonstration introduces biomagnification – when organisms accumulate chemical residues from the organisms they are eating underneath them in the food chain. Learn how to read a label by answering questions to compare two Hot Shot Fogger labels to see how these products are similar and different. Learn how lures and traps work to monitor and/or control pests in this lure demonstration. Learn how the EPA used a risk cup to limit the amount of aggregate exposure from all the pesticides with a common mode of action. Check out these posters to use as handouts or to post in your classroom. We have a form online to request a high quality PDF of these posters which will be sent to your email.
http://extension.psu.edu/pests/pesticide-education/educators/ag-and-science-teachers/classroom-demonstrations-and-lessons
4.0625
The key concept here is density. Less dense objects will rise to the top of a more dense medium while more dense objects will sink to the bottom of a less dense medium. In the case of a hot air balloon floating in the sky, we are talking about hot air versus cold air here. The hot air in the balloon is less dense than the cold air medium that surrounds it, so the hot air in the balloon will lift it higher in the air. In the case of a boat on water, we are talking about air versus water. Since air is much less dense than water, the air enclosed within the hull of the boat will keep the entire boat floating on top of the water. In terms of forces, the more dense medium (cold air or water) exerts an upward force against the less dense object (the balloon's hot air or the boat's air-filled hull). If you drop a penny in water, however, it will instantly sink since copper (or metal in general) is much more dense than water. A boat floating in water and a hot air balloon suspended in the air are both examples of buoyancy. It was Archimedes who discovered the law of buoyancy, namely that a body immersed or suspended in a liquid or gas is buoyed up by a force equal to the weight of the liquid or gas displaced by the object. In the case of a boat, the force of buoyancy is equal to the weight of water displaced by the boat. As the boat enters the water it will sink into the water until enough water is displaced to equal the weight of the boat and its contents. Thus, as weight is added to a given boat, the water line will rise. A hot air balloon rises because hot air is less dense than cold air. This is because the molecules of air are driven farther apart from one another as the temperature increases. The air becomes less dense. The balloon displaces its entire volume. So the volume of the balloon is equal to the volume of the air it displaces. By Archimedes' law, the balloon is buoyed up by a force equal to the weight of the displaced (cold) air, which is greater than the weight of an equal volume of the lighter hot air contained in the balloon. Hence, the balloon rises.
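To put rough numbers on Archimedes' principle as described above, here is a small Python sketch; the box-shaped hull, the 500 kg boat and the 4 m² bottom are my own illustrative assumptions, not figures from the answers.

```python
RHO_WATER = 1000.0  # density of fresh water, kg per cubic metre
G = 9.81            # gravitational acceleration, m per second squared

def draft_of_flat_bottomed_boat(total_mass_kg, hull_area_m2):
    """Depth to which a box-shaped hull sinks before it floats.

    Floating equilibrium: weight of boat = weight of displaced water
      m * g = rho * (area * draft) * g   =>   draft = m / (rho * area)
    """
    return total_mass_kg / (RHO_WATER * hull_area_m2)

# A hypothetical 500 kg boat with a 4 m^2 flat bottom
print(draft_of_flat_bottomed_boat(500, 4.0))  # 0.125 m: it floats 12.5 cm deep

# A copper penny: density ~8960 kg/m^3 is greater than water's,
# so the displaced water can never weigh as much as the penny.
print(8960 > RHO_WATER)  # True -> it sinks
```

The boat floats because it only needs to displace a shallow layer of water to match its weight, while the solid penny can never displace water weighing as much as itself.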
http://www.enotes.com/homework-help/how-physics-boat-floating-water-same-hot-air-301727
4.375
In a familiar high-school chemistry demonstration, an instructor first uses electricity to split liquid water into its constituent gases, hydrogen and oxygen. Then, by combining the two gases and igniting them with a spark, the instructor changes the gases back into water with a loud pop. Scientists at the University of Illinois have discovered a new way to make water, and without the pop. Not only can they make water from unlikely starting materials, such as alcohols; their work could also lead to better catalysts and less expensive fuel cells. “We found that unconventional metal hydrides can be used for a chemical process called oxygen reduction, which is an essential part of the process of making water,” said Zachariah Heiden, a doctoral student and lead author of a paper accepted for publication in the Journal of the American Chemical Society. A water molecule (formally known as dihydrogen monoxide) is composed of two hydrogen atoms and one oxygen atom. But you can’t simply take two hydrogen atoms and stick them onto an oxygen atom. The actual reaction to make water is a bit more complicated: 2H2 + O2 = 2H2O + Energy. In English, the equation says: To produce two molecules of water (H2O), two molecules of diatomic hydrogen (H2) must be combined with one molecule of diatomic oxygen (O2). Energy will be released in the process. “This reaction (2H2 + O2 = 2H2O + Energy) has been known for two centuries, but until now no one has made it work in a homogeneous solution,” said Thomas Rauchfuss, a U. of I. professor of chemistry and the paper’s corresponding author. The well-known reaction also describes what happens inside a hydrogen fuel cell. In a typical fuel cell, the diatomic hydrogen gas enters one side of the cell, and diatomic oxygen gas enters the other side. The hydrogen molecules lose their electrons and become positively charged through a process called oxidation, while the oxygen molecules gain four electrons and become negatively charged through a process called reduction. The negatively charged oxygen ions combine with positively charged hydrogen ions to form water and release electrical energy. The “difficult side” of the fuel cell is the oxygen reduction reaction, not the hydrogen oxidation reaction, Rauchfuss said. “We found, however, that new catalysts for oxygen reduction could also lead to new chemical means for hydrogen oxidation.” Rauchfuss and Heiden recently investigated a relatively new generation of transfer hydrogenation catalysts for use as unconventional metal hydrides for oxygen reduction. In their JACS paper, the researchers focus exclusively on the oxidative reactivity of iridium-based transfer hydrogenation catalysts in a homogeneous, non-aqueous solution. They found the iridium complex effects both the oxidation of alcohols and the reduction of oxygen. “Most compounds react with either hydrogen or oxygen, but this catalyst reacts with both,” Heiden said. “It reacts with hydrogen to form a hydride, and then reacts with oxygen to make water; and it does this in a homogeneous, non-aqueous solvent.” The new catalysts could lead to eventual development of more efficient hydrogen fuel cells, substantially lowering their cost, Heiden said. Source: University of Illinois at Urbana-Champaign
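The "Energy" term in 2H2 + O2 = 2H2O can be made concrete with textbook thermochemistry. The sketch below is only an illustration: the 285.8 kJ/mol figure is the standard enthalpy of formation of liquid water, a standard reference value rather than something reported in the article, and the function names are my own.

```python
# Rough energy bookkeeping for 2 H2 + O2 -> 2 H2O (liquid water)
DELTA_H_PER_MOL_H2O = 285.8   # kJ released per mole of liquid water formed
MOLAR_MASS_H2 = 2.016         # g/mol
MOLAR_MASS_H2O = 18.015       # g/mol

def energy_from_hydrogen(grams_h2):
    """kJ released when the given mass of H2 reacts completely to liquid water."""
    mol_h2 = grams_h2 / MOLAR_MASS_H2
    mol_h2o = mol_h2              # 2 H2 -> 2 H2O, i.e. one water per hydrogen molecule
    return mol_h2o * DELTA_H_PER_MOL_H2O

def water_produced(grams_h2):
    """Grams of water produced from the given mass of H2."""
    return (grams_h2 / MOLAR_MASS_H2) * MOLAR_MASS_H2O

print(energy_from_hydrogen(1.0))  # ~141.8 kJ from one gram of hydrogen
print(water_produced(1.0))        # ~8.94 g of water
```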
http://phys.org/news/2007-10-scientists.html
4.03125
A redox titration is a type of titration based on a redox reaction between the analyte and titrant. Redox titration may involve the use of a redox indicator and/or a potentiometer. A common example of a redox titration is treating a solution of iodine with a reducing agent to produce iodide, using a starch indicator to help detect the endpoint. Iodine (I2) can be reduced to iodide (I−) by e.g. thiosulfate (S2O32−), and when all the iodine is spent the blue colour disappears. This is called an iodometric titration. Most often, the reduction of iodine to iodide is the last step in a series of reactions in which the initial reactions are used to convert an unknown amount of the solute (the substance being analyzed) to an equivalent amount of iodine, which may then be titrated. Sometimes halogens (or halogenoalkanes) other than iodine are used in the intermediate reactions because they can be prepared as more accurately measurable standard solutions and/or react more readily with the solute. The extra steps in iodometric titration may be worthwhile because the equivalence point, where the blue colour disappears, is more distinct than the endpoints of some other analytical or volumetric methods. The main redox titration types are:
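A worked example of the endpoint arithmetic may help. The sketch below assumes the usual iodometric stoichiometry, I2 + 2 S2O32− → 2 I− + S4O62−, and the molarity and volumes are invented purely for illustration.

```python
# Iodometric titration arithmetic: thiosulfate titrant vs. iodine analyte.
# Stoichiometry: I2 + 2 S2O3^2-  ->  2 I^- + S4O6^2-
# so moles of I2 = half the moles of thiosulfate delivered at the endpoint.

def iodine_concentration(titrant_molarity, titrant_volume_ml, sample_volume_ml):
    """Molar concentration of I2 in the sample at the starch endpoint."""
    mol_thiosulfate = titrant_molarity * titrant_volume_ml / 1000.0
    mol_iodine = mol_thiosulfate / 2.0
    return mol_iodine / (sample_volume_ml / 1000.0)

# Hypothetical run: 18.40 mL of 0.1000 M Na2S2O3 decolourises 25.00 mL of sample
print(iodine_concentration(0.1000, 18.40, 25.00))  # ~0.0368 M I2
```

In a full iodometric analysis, the moles of iodine found this way would then be related back to the original analyte through the stoichiometry of the intermediate reactions.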
https://en.wikipedia.org/wiki/Redox_titration
4.34375
Trail of Tears OverviewIn 1830, President Andrew Jackson signed the Indian Removal Act which led to the removal of nearly 46,000 Native Americans from their homes east of the Mississippi River to lands west of Missouri. Some tribes went peacefully knowing that they were no match for the US Army, however some tribes were tricked into signing treaties giving up their land while others were forced to march thousands of miles to their new homes which they were promised would never be taken by white settlers. Students will be able to identify the five main tribes that were moved to the Permanent Indian Frontier. Students will be able to identify and explain the reasons why so many Native Americans died en route to their new homes. Students will be able to describe how well the Native Americans were treated by the US Army during the time period. White settlers had long been moving onto land that had once been roamed by nomadic Indian tribes. Nomadic tribes moved with the seasons and the migration of the animals that they hunted, those animals were the life blood of the tribes. Animals that were hunted not only provided food, but also clothing, shelter and the tools for everyday life. When the prairie was an open space, the buffalo especially would roam in large groups which allowed the Indians to hunt in large parties and capture many animals at a time. As more and more white settlers came to America, they wanted to move onto the land to farm, which meant that the Indians could no longer roam there. This caused many problems and led to many conflicts between the white settlers and the Indians. By 1830 President Andrew Jackson believed that the Indians needed to all be moved west of the Mississippi River to lands that would be set aside for the Indians alone. There was however some major problems with this idea, the major one being that most of the Indians did not want to move. By the year 1842 the US Army had established a series of forts along what they called the Permanent Indian Frontier, stretching from Fort Snelling in Minnesota to Fort Jesup in Louisiana. These forts were created to keep white settlers off of Indian lands and toe keep peace between the many tribes that would be forced to live in close proximity with each other. Between the years of 1831-1842 nearly 46,000 Native Americans would be moved from their eastern homelands to lands in what is today Kansas, Oklahoma and Nebraska. During these years, some tribes moved on their own and without major problems, while still others had to be removed by the US Army using force. The most infamous of these involved the Cherokee tribe being marched from Georgia to present day Oklahoma. However many of these types of marches took place. The tragedies that befell the Indian tribes were atrocious; some were preventable while others happened whenever large groups of people came together in those days. For example, cholera was a common illness when large groups congregated together because of lack of knowledge about sanitation. However there were other atrocities, such as not enough food or blankets to keep the tribes from freezing to death. Some tribes were not allowed to bring any of their belonging with them which left them at a distinct disadvantage in their ability to care for themselves. This lesson is designed for students to study the emigration of the tribes during Indian Removal as well as to get some understanding of the hardships that the tribes faced. 
Materials you will need for this lesson are the associated map activity which shows the location of the Indian tribes before and after removal and the hazard cards which are used to determine who survives and who dies in the reenactment of the journey. Before you begin: Make copies of the hazard cards that will be handed out. (There are 24 cards with this lesson; however you can change that to whatever number you need.) Hand out the before-and-after relocation maps that show where tribes were marched to. Step 1: Introduce the topic of Indian Removal. Discuss why some tribes would voluntarily leave their homeland while others refused. Discuss the Five Civilized tribes and why they may have been called that. Also make sure that students know that there were many more tribes than just those 5. Have students look at the map and talk about why some tribes were given more land area than others. Step 2: Ask the students to make an educated guess of how many in the class would not have made it to the final destination. (The number of hazard cards is slightly off from the actual percentages that would have died en route.) Give each student a random hazard card, but ask them not to look at them yet. Discuss the techniques used by the US Army to get the tribes to march west; make sure to discuss how some tribes were not allowed to take any belongings with them. Step 3: There are 2 ways to do this activity: 1. As the teacher continues discussing or reading about the Indian Removal, the teacher can randomly call out a hazard card description and those students must either sit on the floor or put their heads on the desk. 2. If you have access to a gym or open area outside you could do this as a kinetic activity. Take students on a walk and talk to them about the tribes; as you walk, randomly call out a hazard card. (For example you might say something like "Cholera has struck the camp; if you have a cholera card, please sit down on the floor.") If the student has the hazard card then they have to sit down. Teachers will need to make sure that they call out all the hazard cards by the end except for the survivor cards; those should be the only students still standing at the end of the activity. After all the cards have been called, ask the students to look around; many of their classmates will either be sitting on the floor or with their heads down, and they represent the number of Indians that would not have survived the trip west. Ask students to think about that number on the grand scale of how many people were displaced from their homes. At the conclusion of the activity, have students answer the following questions: 1. Why were Native American tribes being moved west? What was happening to the land they left behind? 2. What types of things happened to the tribes as they marched west? 3. How were the tribes treated during this time frame? Students should be able to write at least 3-4 sentences on each of the 3 questions. Teachers are checking for understanding based on the discussions that took place as well as the culminating writing activity. Fort Scott was created as a part of the Permanent Indian Frontier; the soldiers stationed at the fort kept peace between the tribes that had been relocated to this region. Tribes from east of the Mississippi River who had been forcibly moved to this area were promised that this would be "permanent" Indian territory. Soldiers at Fort Scott formed a "border patrol" keeping white settlers and Indian tribes separated.
Prior to the establishment of Fort Scott, a military garrison had been present at Fort Wayne in the heart of Cherokee land. The Cherokee objected to a military presence at this location and Fort Scott was established in part to placate the Cherokee tribe. There are many excellent books, both fiction and informational text, that would go right along with this lesson.
http://www.nps.gov/fosc/learn/education/classrooms/totlesson.htm
4.25
Fever or Chills, Age 12 and OlderSkip to the navigation Fever is the body's normal and healthy reaction to infection and other illnesses, both minor and serious. It helps the body fight infection. Fever is a symptom, not a disease. In most cases, having a fever means you have a minor illness. When you have a fever, your other symptoms will help you determine how serious your illness is. Temperatures in this topic are oral temperatures. Oral temperatures are usually taken in older children and adults. Normal body temperature Most people have an average body temperature of about 98.6°F (37°C), measured orally (a thermometer is placed under the tongue). Your temperature may be as low as 97.4°F (36.3°C) in the morning or as high as 99.6°F (37.6°C) in the late afternoon. Your temperature may go up when you exercise, wear too many clothes, take a hot bath, or are exposed to hot weather. A fever is a high body temperature. A temperature of up to 102°F (38.9°C) can be helpful because it helps the body fight infection. Most healthy children and adults can tolerate a fever as high as 103°F (39.4°C) to 104°F (40°C) for short periods of time without problems. Children tend to have higher fevers than adults. The degree of fever may not show how serious the illness is. With a minor illness, such as a cold, you may have a temperature, while a very serious infection may cause little or no fever. It is important to look for and evaluate other symptoms along with the fever. If you are not able to measure your temperature with a thermometer, you need to look for other symptoms of illness. A fever without other symptoms that lasts 3 to 4 days, comes and goes, and gradually reduces over time is usually not a cause for concern. When you have a fever, you may feel tired, lack energy, and not eat as much as usual. High fevers are not comfortable, but they rarely cause serious problems. Oral temperature taken after smoking or drinking a hot fluid may give you a false high temperature reading. After drinking or eating cold foods or fluids, an oral temperature may be falsely low. For information on how to take an accurate temperature, see the topic Body Temperature. Causes of fever Travel outside your native country can expose you to other diseases. Fevers that begin after travel in other countries need to be evaluated by your doctor. Fever and respiratory symptoms are hard to evaluate during the flu season. A fever of 102°F (38.9°C) or higher for 3 to 4 days is common with the flu. For more information, see the topic Respiratory Problems, Age 12 and Older. Recurrent fevers are those that occur 3 or more times within 6 months and are at least 7 days apart. Each new viral infection may cause a fever. It may seem that a fever is ongoing, but if 48 hours pass between fevers, then the fever is recurring. If you have frequent or recurrent fevers, it may be a symptom of a more serious problem. Talk to your doctor about your fevers. Treating a fever In most cases, the illness that caused the fever will clear up in a few days. You usually can treat the fever at home if you are in good health and do not have any medical problems or significant symptoms with the fever. Make sure that you are taking enough foods and fluids and urinating in normal amounts. Low body temperature An abnormally low body temperature (hypothermia) can be serious, even life-threatening. Low body temperature may occur from cold exposure, shock, alcohol or drug use, or certain metabolic disorders, such as diabetes or hypothyroidism. 
A low body temperature may also be present with an infection, particularly in newborns, older adults, or people who are frail. An overwhelming infection, such as sepsis, may also cause an abnormally low body temperature. Check your symptoms to decide if and when you should see a doctor. Check Your Symptoms Many prescription and nonprescription medicines can trigger an allergic reaction and cause a fever. A few examples are: - Barbiturates, such as phenobarbital. - Aspirin, if you take too much. Sudden drooling and trouble swallowing can be signs of a serious problem called epiglottitis. This problem can happen at any age. The epiglottis is a flap of tissue at the back of the throat that you can't see when you look in the mouth. When you swallow, it closes to keep food and fluids out of the tube (trachea) that leads to the lungs. If the epiglottis becomes inflamed or infected, it can swell and quickly block the airway. This makes it very hard to breathe. The symptoms start suddenly. A person with epiglottitis is likely to seem very sick, have a fever, drool, and have trouble breathing, swallowing, and making sounds. In the case of a child, you may notice the child trying to sit up and lean forward with his or her jaw forward, because it's easier to breathe in this position. Call 911 Now Based on your answers, you need emergency care. Call 911 or other emergency services now. Seek Care Today Based on your answers, you may need care soon. The problem probably will not get better without medical care. - Call your doctor today to discuss the symptoms and arrange for care. - If you cannot reach your doctor or you don't have one, seek care today. - If it is evening, watch the symptoms and seek care in the morning. - If the symptoms get worse, seek care sooner. Seek Care Now Based on your answers, you may need care right away. The problem is likely to get worse without medical care. - Call your doctor now to discuss the symptoms and arrange for care. - If you cannot reach your doctor or you don't have one, seek care in the next hour. - You do not need to call an - You cannot travel safely either by driving yourself or by having someone else drive you. - You are in an area where heavy traffic or other problems may slow you down. Shock is a life-threatening condition that may quickly occur after a sudden illness or injury. Symptoms of shock (most of which will be present) include: - Passing out. - Feeling very dizzy or lightheaded, like you may pass out. - Feeling very weak or having trouble standing. - Not feeling alert or able to think clearly. You may be confused, restless, fearful, or unable to respond to questions. Pain in children under 3 years It can be hard to tell how much pain a baby or toddler is in. - Severe pain (8 to 10): The pain is so bad that the baby cannot sleep, cannot get comfortable, and cries constantly no matter what you do. The baby may kick, make fists, or grimace. - Moderate pain (5 to 7): The baby is very fussy, clings to you a lot, and may have trouble sleeping but responds when you try to comfort him or her. - Mild pain (1 to 4): The baby is a little fussy and clings to you a little but responds when you try to comfort him or her. Certain health conditions and medicines weaken the immune system's ability to fight off infection and illness. Some examples in adults are: - Diseases such as diabetes, cancer, heart disease, and HIV/AIDS. - Long-term alcohol and drug problems. - Steroid medicines, which may be used to treat a variety of conditions. 
- Chemotherapy and radiation therapy for cancer. - Other medicines used to treat autoimmune disease. - Medicines taken after organ transplant. - Not having a spleen. Symptoms of serious illness may include: - A severe headache. - A stiff neck. - Mental changes, such as feeling confused or much less alert. - Extreme fatigue (to the point where it's hard for you to function). - Shaking chills. If you're not sure if a fever is high, moderate, or mild, think about these issues: With a high fever: - You feel very hot. - It is likely one of the highest fevers you've ever had. High fevers are not that common, especially in adults. With a moderate fever: - You feel warm or hot. - You know you have a fever. With a mild fever: - You may feel a little warm. - You think you might have a fever, but you're not sure. Sudden tiny red or purple spots or sudden bruising may be early symptoms of a serious illness or bleeding problem. There are two types. Petechiae (say "puh-TEE-kee-eye"): - Are tiny, flat red or purple spots in the skin or the lining of the mouth. - Do not turn white when you press on them. - Range from the size of a pinpoint to the size of a small pea and do not itch or cause pain. - May spread over a large area of the body within a few hours. - Are different than tiny, flat red spots or birthmarks that are present all the time. Purpura (say "PURR-pyuh-ruh" or “PURR-puh-ruh”): - Is sudden, severe bruising that occurs for no clear reason. - May be in one area or all over. - Is different than the bruising that happens after you bump into something. Temperature varies a little depending on how you measure it. For adults and children age 12 and older, these are the ranges for high, moderate, and mild, according to how you took the temperature. Oral (by mouth) temperature - High: 104°F (40°C) and higher - Moderate: 100.4°F (38°C) to 103.9°F (39.9°C) - Mild: 100.3°F (37.9°C) and lower A forehead (temporal) scanner is usually 0.5°F (0.3°C) to 1°F (0.6°C) lower than an oral temperature. Ear or rectal temperature - High: 105°F (40.6°C) and higher - Moderate: 101.4°F (38.6°C) to 104.9°F (40.5°C) - Mild: 101.3°F (38.5°C) and lower Armpit (axillary) temperature - High: 103°F (39.5°C) and higher - Moderate: 99.4°F (37.4°C) to 102.9°F (39.4°C) - Mild: 99.3°F (37.3°C) and lower Severe trouble breathing means: - You cannot talk at all. - You have to work very hard to breathe. - You feel like you can't get enough air. - You do not feel alert or cannot think clearly. Moderate trouble breathing means: - It's hard to talk in full sentences. - It's hard to breathe with activity. Mild trouble breathing means: - You feel a little out of breath but can still talk. - It's becoming hard to breathe with activity. Symptoms of difficulty breathing can range from mild to severe. For example: - You may feel a little out of breath but still be able to talk (mild difficulty breathing), or you may be so out of breath that you cannot talk at all (severe difficulty breathing). - It may be getting hard to breathe with activity (mild difficulty breathing), or you may have to work very hard to breathe even when you’re at rest (severe difficulty breathing). Severe dehydration means: - Your mouth and eyes may be extremely dry. - You may pass little or no urine for 12 or more hours. - You may not feel alert or be able to think clearly. - You may be too weak or dizzy to stand. - You may pass out. Moderate dehydration means: - You may be a lot more thirsty than usual. - Your mouth and eyes may be drier than usual. 
- You may pass little or no urine for 8 or more hours. - You may feel dizzy when you stand or sit up. Mild dehydration means: - You may be more thirsty than usual. - You may pass less urine than usual. You can get dehydrated when you lose a lot of fluids because of problems like vomiting or fever. Symptoms of dehydration can range from mild to severe. For example: - You may feel tired and edgy (mild dehydration), or you may feel weak, not alert, and not able to think clearly (severe dehydration). - You may pass less urine than usual (mild dehydration), or you may not be passing urine at all (severe dehydration). Try Home Treatment You have answered all the questions. Based on your answers, you may be able to take care of this problem at home. - Try home treatment to relieve the symptoms. - Call your doctor if symptoms get worse or you have any concerns (for example, if symptoms are not getting better as you would expect). You may need care sooner. Many things can affect how your body responds to a symptom and what kind of care you may need. These include: - Your age. Babies and older adults tend to get sicker quicker. - Your overall health. If you have a condition such as diabetes, HIV, cancer, or heart disease, you may need to pay closer attention to certain symptoms and seek care sooner. - Medicines you take. Certain medicines, herbal remedies, and supplements can cause symptoms or make them worse. - Recent health events, such as surgery or injury. These kinds of events can cause symptoms afterwards or make them more serious. - Your health habits and lifestyle, such as eating and exercise habits, smoking, alcohol or drug use, sexual history, and travel. Fever can be a symptom of almost any type of infection. Symptoms of a more serious infection may include the following: - Skin infection: Pain, redness, or pus - Joint infection: Severe pain, redness, or warmth in or around a joint - Bladder infection: Burning when you urinate, and a frequent need to urinate without being able to pass much urine - Kidney infection: Pain in the flank, which is either side of the back just below the rib cage - Abdominal infection: Belly pain Make an Appointment Based on your answers, the problem may not improve without medical care. - Make an appointment to see your doctor in the next 1 to 2 weeks. - If appropriate, try home treatment while you are waiting for the appointment. - If symptoms get worse or you have any concerns, call your doctor. You may need care sooner. Severe trouble breathing means: - The child cannot eat or talk because he or she is breathing so hard. - The child's nostrils are flaring and the belly is moving in and out with every breath. - The child seems to be tiring out. - The child seems very sleepy or confused. Moderate trouble breathing means: - The child is breathing a lot faster than usual. - The child has to take breaks from eating or talking to breathe. - The nostrils flare or the belly moves in and out at times when the child breathes. Mild trouble breathing means: - The child is breathing a little faster than usual. - The child seems a little out of breath but can still eat or talk. Pain in adults and older children - Severe pain (8 to 10): The pain is so bad that you can't stand it for more than a few hours, can't sleep, and can't do anything else except focus on the pain. - Moderate pain (5 to 7): The pain is bad enough to disrupt your normal activities and your sleep, but you can tolerate it for hours or days. 
Moderate can also mean pain that comes and goes even if it's severe when it's there. - Mild pain (1 to 4): You notice the pain, but it is not bad enough to disrupt your sleep or activities. Shock is a life-threatening condition that may occur quickly after a sudden illness or injury. Symptoms of shock in a child may include: - Passing out. - Being very sleepy or hard to wake up. - Not responding when being touched or talked to. - Breathing much faster than usual. - Acting confused. The child may not know where he or she is. It's easy to become dehydrated when you have a fever. In the early stages, you may be able to correct mild to moderate dehydration with home treatment measures. It is important to control fluid losses and replace lost fluids. Adults and children age 12 and older If you become mildly to moderately dehydrated while working outside or exercising: - Stop your activity and rest. - Get out of direct sunlight and lie down in a cool spot, such as in the shade or an air-conditioned area. - Prop up your feet. - Take off any extra clothes. - Drink a rehydration drink, water, juice, or sports drink to replace fluids and minerals. Drink 2 qt (2 L) of cool liquids over the next 2 to 4 hours. You should drink at least 10 glasses of liquid a day to replace lost fluids. You can make an inexpensive rehydration drink at home. But do not give this homemade drink to children younger than 12. Measure all ingredients precisely. Small variations can make the drink less effective or even harmful. Mix the following: - 1 quart (1 L) purified water - ½ teaspoon (2.5 mL) salt - 6 teaspoons (30 mL) sugar Rest and take it easy for 24 hours, and continue to drink a lot of fluids. Although you will probably start feeling better within just a few hours, it may take as long as a day and a half to completely replace the fluid that you lost. Many people find that taking a lukewarm [80°F (27°C) to 90°F (32°C)] shower or bath makes them feel better when they have a fever. Do not try to take a shower if you are dizzy or unsteady on your feet. Increase the water temperature if you start to shiver. Shivering is a sign that your body is trying to raise its temperature. Do not use rubbing alcohol, ice, or cold water to cool your body. Dress lightly when you have a fever. This will help your body cool down. Wear light pajamas or a light undershirt. Do not wear very warm clothing or use heavy bed covers. Keep room temperature at 70°F (21°C) or lower. If you are not able to measure your temperature, you need to look for other symptoms of illness every hour while you have a fever and follow home treatment measures. |Try a nonprescription medicine to help treat your fever or pain:| Talk to your child's doctor before switching back and forth between doses of acetaminophen and ibuprofen. When you switch between two medicines, there is a chance your child will get too much medicine. |Be sure to follow these safety tips when you use a nonprescription medicine:| Be sure to check your temperature every 2 to 4 hours to make sure home treatment is working. Symptoms to watch for during home treatment Call your doctor if any of the following occur during home treatment: - Level of consciousness changes. - You have signs of dehydration and you are unable to drink enough to replace lost fluids. Signs of dehydration include being thirstier than usual and having darker urine than usual. - Other symptoms develop, such as pain in one area of the body, shortness of breath, or urinary symptoms. 
- Symptoms become more severe or frequent. The best way to prevent fevers is to reduce your exposure to infectious diseases. Hand-washing is the single most important prevention measure for people of all ages. Immunizations can reduce the risk for fever-related illnesses, such as the flu. Although no vaccine is 100% effective, most routine immunizations are effective for 85% to 95% of the people who receive them. For more information, see the topic Immunizations. Preparing For Your Appointment To prepare for your appointment, see the topic Making the Most of Your Appointment. You can help your doctor diagnose and treat your condition by being prepared to answer the following questions: - What is the history of your fever? - When did your fever start? - How often do you have a fever? - How long does your fever last? - Does your fever have a pattern? - Are you able to measure your temperature? How high is your fever? - Have you had any other health problems over the past 3 months? - Have you recently been exposed to anyone who has a fever? - Have you recently traveled outside the country or been exposed to immigrants or other nonnative people? - Have you had any insect bites in the past 6 weeks, including tick bites? - What home treatment measures have you tried? Did they help? - What nonprescription medicines have you taken? Did they help? Keep a fever chart of what your temperature was before and after home treatment. - Do you have any health risks? Primary Medical Reviewer William H. Blahd, Jr., MD, FACEP - Emergency Medicine Specialist Medical Reviewer H. Michael O'Connor, MD - Emergency Medicine Current as of: November 14, 2014 © 1995-2015 Healthwise, Incorporated.
http://www.uwhealth.org/health/topic/symptom/-fever-or-chills-age-12-and-older/fevr4.html
4.125
Bone is a hard substance that makes up the skeleton, which supports the body and provides protection for the organs. Bone is composed of minerals, mainly calcium and phosphate, which it stores and provides to the body as they are needed. Bone consists of three layers: the outside covering of the bone (periosteum); the hard middle (compact) bone; and the inner spongy (cancellous) bone. The covering of the bone contains nerves and blood vessels that feed the hard bone. Holes and channels run through the hard bone to supply oxygen and nutrients to the inner bone cells. The spongy bone contains bone marrow, which produces red and white blood cells and platelets. Normal bone is constantly dissolving and being absorbed into the body and then being rebuilt in a process called remodeling. This allows bones to react to changes in body weight and structure and to increase bone strength in areas of stress. eMedicineHealth Medical Reference from Healthwise
http://www.emedicinehealth.com/script/main/art.asp?articlekey=137597&ref=128776
4.15625
Jetstream - On-line School for Weather, NOAA - National Weather Service Activity takes about 1 class period.Learn more about Teaching Climate Literacy and Energy Awareness» See how this Activity supports the Next Generation Science Standards» Middle School: 1 Performance Expectation, 2 Disciplinary Core Ideas, 8 Cross Cutting Concepts, 6 Science and Engineering Practices About Teaching Climate Literacy Other materials addressing 2b Excellence in Environmental Education Guidelines Other materials addressing: C) Systems and connections. Other materials addressing: D) Flow of matter and energy. Notes From Our Reviewers The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials Teaching Tips | Science | Pedagogy | - Educators may wish to supplement this with background materials, see for example: http://www.srh.noaa.gov/jetstream/atmos/whatacycle_max.html. - Educators may also want each student to discuss their own pathway through the water cycle with the group to reinforce how complex the water cycle really is. - To connect to climate change introduce some "What if...?" scenarios in a post-activity discussion. e.g. "What if the temperature of the ocean sea surface increased? How might this change other elements of the cycle?" - Could use as is with elementary students; one could add complexity to it for middle school students. One concept to consider introducing is the energy gained or lost during evaporation or condensation, and students could leave or take a token at a station to represent the gain or loss of energy. Another concept to consider adding would be the flux of water molecules. About the Science - Activity gives students a visceral sense of where and how frequently water molecules move around in the water cycle. - As noted in its description, the activity is unrealistic as most water molecules are contained in the ocean. About half of the students are initially placed at the ocean station. - Comments from expert scientist: Creative way to engage students in a "game" to learn about the various interactions within the water cycle. Presents a thorough number of paths and parts of the water cycle, to illustrate water cycle complexity. The cards describe and define, in appropriate scientific terms, the process that takes place for the student (i.e. water molecule) to transition from one place in the cycle to the next. That's where the real learning can come in, in having the students learn about how those movements within the water system take place. About the Pedagogy - While the activity does not include much scientific background on the water cycle itself, it is a kinesthetic exercise that will give students a strong sense of what water molecules do within the water cycle, and the variety of pathways that a molecule can take. Technical Details/Ease of Use - The website includes printouts for both the station cards for each station in the water cycle and the water cycle worksheets for each student. These are in color but don't require a color printer. - Students must be mobile and the classroom space must be configured such that students can move around. Next Generation Science Standards See how this Activity supports: Performance Expectations: 1 MS-ESS2-4: Develop a model to describe the cycling of water through Earth's systems driven by energy from the sun and the force of gravity. 
Disciplinary Core Ideas: 2 MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land. MS-ESS2.C3:Global movements of water and its changes in form are propelled by sunlight and gravity. Cross Cutting Concepts: 8 MS-C4.2: Models can be used to represent systems and their interactions—such as inputs, processes and outputs—and energy, matter, and information flows within systems. MS-C5.1:Matter is conserved because atoms are conserved in physical and chemical processes. MS-C5.2: Within a natural or designed system, the transfer of energy drives the motion and/or cycling of matter. MS-C7.3:Stability might be disturbed either by sudden events or gradual changes that accumulate over time. MS-C1.2: Patterns in rates of change and other numerical relationships can provide information about natural and human designed systems MS-C2.2:Cause and effect relationships may be used to predict phenomena in natural or designed systems. MS-C3.1:Time, space, and energy phenomena can be observed at various scales using models to study systems that are too large or too small. MS-C3.3: Proportional relationships (e.g., speed as the ratio of distance traveled to time taken) among different types of quantities provide information about the magnitude of properties and processes. Science and Engineering Practices: 6 MS-P2.1:Evaluate limitations of a model for a proposed object or tool. MS-P2.4:Develop and/or revise a model to show the relationships among variables, including those that are not observable but predict observable phenomena. MS-P2.5:Develop and/or use a model to predict and/or describe phenomena. MS-P4.2:Use graphical displays (e.g., maps, charts, graphs, and/or tables) of large data sets to identify temporal and spatial relationships. MS-P5.4:Apply mathematical concepts and/or processes (e.g., ratio, rate, percent, basic operations, simple algebra) to scientific and engineering questions and problems. MS-P6.2:Construct an explanation using models or representations.
http://cleanet.org/resources/44660.html
4.125
A horseshoe orbit is a type of co-orbital motion of a small orbiting body relative to a larger orbiting body (such as Earth). The orbital period of the smaller body is very nearly the same as for the larger body, and its path appears to have a horseshoe shape in a rotating reference frame as viewed from the larger object. The loop is not closed but will drift forward or backward slightly each time, so that the point it circles will appear to move smoothly along the larger body's orbit over a long period of time. When the object approaches the larger body closely at either end of its trajectory, its apparent direction changes. Over an entire cycle the center traces the outline of a horseshoe, with the larger body between the 'horns'. Asteroids in horseshoe orbits with respect to Earth include 54509 YORP, 2002 AA29, 2010 SO16, 2015 SO2 and possibly 2001 GO2. A broader definition includes 3753 Cruithne, which can be said to be in a compound and/or transition orbit, or (85770) 1998 UP1 and 2003 YN107. Explanation of horseshoe orbital cycle The following explanation relates to an asteroid which is in such an orbit around the Sun, and is also affected by the Earth. The asteroid is in almost the same solar orbit as Earth. Both take approximately one year to orbit the Sun. It is also necessary to grasp two rules of orbit dynamics: - A body closer to the Sun completes an orbit more quickly than a body further away. - If a body accelerates along its orbit, its orbit moves outwards from the Sun. If it decelerates, the orbital radius decreases. The horseshoe orbit arises because the gravitational attraction of the Earth changes the shape of the elliptical orbit of the asteroid. The shape changes are very small but result in significant changes relative to the Earth. The horseshoe becomes apparent only when mapping the movement of the asteroid relative to both the Sun and the Earth. The asteroid always orbits the Sun in the same direction. However, it goes through a cycle of catching up with the Earth and falling behind, so that its movement relative to both the Sun and the Earth traces a shape like the outline of a horseshoe. Stages of the orbit Starting out at point A on the inner ring between L5 and Earth, the satellite is orbiting faster than the Earth. It's on its way toward passing between the Earth and the Sun. But Earth's gravity exerts an outward accelerating force, pulling the satellite into a higher orbit which (per Kepler's third law) decreases its angular speed. When the satellite gets to point B, it is traveling at the same speed as Earth. Earth's gravity is still accelerating the satellite along the orbital path, and continues to pull the satellite into a higher orbit. Eventually, at C, the satellite reaches a high enough, slow enough orbit and starts to lag behind Earth. It then spends the next century or more appearing to drift 'backwards' around the orbit when viewed relative to the Earth. Its orbit around the Sun still takes only slightly more than one Earth year. Eventually the satellite comes around to point D. Earth's gravity is now reducing the satellite's orbital velocity, causing it to fall into a lower orbit, which actually increases the angular speed of the satellite. This continues until the satellite's orbit is lower and faster than Earth's orbit. It begins moving out ahead of the earth. Over the next few centuries it completes its journey back to point A. A somewhat different, but equivalent, view of the situation may be noted by considering conservation of energy. 
It is a theorem of classical mechanics that a body moving in a time-independent potential field will have its total energy, E = T + V, conserved, where E is total energy, T is kinetic energy (always non-negative) and V is potential energy, which is negative. It is apparent then, since V = -GM/R near a gravitating body of mass M, that seen from a stationary frame, V will be increasing for the region behind M, and decreasing for the region in front of it. However, orbits with lower total energy have shorter periods, and so a body moving slowly on the forward side of a planet will lose energy, fall into a shorter-period orbit, and thus slowly move away, or be "repelled" from it. Bodies moving slowly on the trailing side of the planet will gain energy, rise to a higher, slower, orbit, and thereby fall behind, similarly repelled. Thus a small body can move back and forth between a leading and a trailing position, never approaching too close to the planet that dominates the region. - See also trojan (astronomy). Figure 1 above shows shorter orbits around the Lagrangian points L4 and L5 (e.g. the lines close to the blue triangles). These are called tadpole orbits and can be explained in a similar way, except that the asteroid's distance from the Earth does not oscillate as far as the L3 point on the other side of the Sun. As it moves closer to or farther from the Earth, the changing pull of Earth's gravitational field causes it to accelerate or decelerate, causing a change in its orbit known as libration. An example of a body in a tadpole orbit is Polydeuces, a small moon of Saturn which librates around the trailing L5 point relative to a larger moon, Dione. In relation to the orbit of Earth, the 300-meter-diameter asteroid 2010 TK7 is in a tadpole orbit around the leading L4 point.
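The energy argument can be made concrete with two standard two-body relations: a bound orbit's specific orbital energy is ε = -GM/(2a), and Kepler's third law gives its period as T = 2π√(a³/GM). The short sketch below is not part of the original article; it uses the Sun's standard gravitational parameter and an arbitrary, purely illustrative energy perturbation to show that a small gain in energy enlarges the semi-major axis and lengthens the period, which is why a body on the trailing side of the planet drifts backwards.

```python
# Minimal sketch of the energy/period relation used in the explanation above.
# GM_SUN and AU are standard constants; the size of the energy change is an
# arbitrary illustrative number, not a value for any real asteroid.
import math

GM_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def semi_major_axis(eps):
    """Semi-major axis from specific orbital energy eps = -GM/(2a)."""
    return -GM_SUN / (2.0 * eps)

def period(a):
    """Orbital period from Kepler's third law, in seconds."""
    return 2.0 * math.pi * math.sqrt(a**3 / GM_SUN)

# Start on a roughly Earth-like orbit (a = 1 AU).
a0 = 1.0 * AU
eps0 = -GM_SUN / (2.0 * a0)

# A tiny *gain* in energy (eps becomes less negative) raises the orbit,
# and the higher orbit has a longer period, so the body falls behind Earth.
eps1 = eps0 * (1.0 - 1e-4)
a1 = semi_major_axis(eps1)
T0, T1 = period(a0), period(a1)

print(f"a: {a0/AU:.6f} AU -> {a1/AU:.6f} AU")
print(f"T: {T0/86400:.3f} d -> {T1/86400:.3f} d (longer, so it falls behind)")
```

Reversing the sign of the perturbation gives the opposite result: losing energy shrinks the orbit and shortens the period, so the body pulls ahead, matching the leading-side behaviour described above.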
https://en.wikipedia.org/wiki/Horseshoe_orbit
4.09375
How to identify parallel lines, a line parallel to a plane, and two parallel planes. How to write an equation for the coordinate planes or any plane that is parallel to one. How to find the angle between planes, and how to determine if two planes are parallel or perpendicular. How to find a vector normal (perpendicular) to a plane given an equation for the plane. How to form sentences with parallel structure. How resistors in parallel affect current flow How capacitors in parallel affect current flow. How to plot complex numbers on the complex plane. How to take the converse of the parallel lines theorem. How to mark parallel lines, how to show lines are parallel, and how to compare skew and parallel lines. How to describe and label point, line, and plane. How to define coplanar and collinear. How to determine whether lines are parallel, perpendicular, or neither. How to prove two triangles are similar using a line parallel to a base. How to determine whether two lines in space are parallel or perpendicular. How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle. How to construct parallel lines using three different methods.
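Several of the topics listed above come down to the same few lines of vector arithmetic: the normal of a plane ax + by + cz = d is (a, b, c), the angle between two planes is the angle between their normals, and the planes are parallel or perpendicular exactly when those normals are. The sketch below is not taken from the lessons; the plane coefficients in it are made up for illustration.

```python
# Small sketch of plane geometry via normal vectors (illustrative coefficients only).
import math

def angle_between_planes(n1, n2):
    """Acute angle (in degrees) between planes with normal vectors n1 and n2."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm1 = math.sqrt(sum(a * a for a in n1))
    norm2 = math.sqrt(sum(b * b for b in n2))
    cos_theta = abs(dot) / (norm1 * norm2)       # abs() picks the acute angle
    return math.degrees(math.acos(min(1.0, cos_theta)))

def classify(n1, n2, tol=1e-9):
    """Return 'parallel', 'perpendicular', or 'neither' for two planes."""
    dot = sum(a * b for a, b in zip(n1, n2))
    cross = (n1[1]*n2[2] - n1[2]*n2[1],
             n1[2]*n2[0] - n1[0]*n2[2],
             n1[0]*n2[1] - n1[1]*n2[0])
    if all(abs(c) < tol for c in cross):
        return "parallel"        # normals are scalar multiples of each other
    if abs(dot) < tol:
        return "perpendicular"   # normals are orthogonal
    return "neither"

# Planes x + 2y + 2z = 3 and 2x + 4y + 4z = -1 share the same normal direction.
print(classify((1, 2, 2), (2, 4, 4)))                     # parallel
print(classify((1, 0, 0), (0, 1, 0)))                     # perpendicular
print(round(angle_between_planes((1, 1, 0), (1, 0, 0))))  # 45
```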
https://www.brightstorm.com/tag/parallel-planes/
4.0625
The physical structure of the Earth, starting from the center, consists of a solid inner core surrounded by a liquid outer core. The outer core is enveloped by a highly viscous layer called the mantle, whose physical properties are effectively those of a solid. The outermost layer is the crust, which is also solid and is approximately 6 km thick under the oceans and 50 km thick under the continents. Tectonic plates are contiguous segments of the crust that move independently of each other. The motion of the tectonic plates over billions of years has reshaped the surface of the Earth; the continents and oceans we see today formed from the breakup of a single large supercontinent called Pangaea. The tectonic plates continue to move, though their pace is so slow compared to the span of human existence that the geography of the Earth appears constant. With the help of modern technology such as satellite imagery, geologists can measure the present movement of tectonic plates and make very rough estimates of how the Earth will look in the future. It is estimated that in 30 to 50 million years Africa will move towards Europe, and the interaction of the two plates will lead to the formation of a mountain range like the Himalayas. North and South America are expected to move farther away, leading to an expansion of the Atlantic Ocean. Australia is expected to move towards Asia, eventually forming a joined landmass.
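As a rough sanity check on those timescales (this arithmetic is added here and is not part of the original answer; the speeds are typical published ranges for plate motion, not figures for any particular plate), a plate moving a few centimetres per year covers hundreds to thousands of kilometres in 30 to 50 million years:

```python
# Back-of-the-envelope distances for typical plate speeds over geologic time.
for speed_cm_per_yr in (2, 5, 10):
    for years in (30e6, 50e6):
        km = speed_cm_per_yr * years / 100 / 1000   # cm -> m -> km
        print(f"{speed_cm_per_yr} cm/yr for {years/1e6:.0f} Myr -> {km:,.0f} km")
```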
http://www.enotes.com/homework-help/explain-how-earth-may-appear-30-million-years-from-359620
4
By attaching short sequences of single-stranded DNA to nanoscale building blocks, researchers can design structures that can effectively build themselves. The building blocks that are meant to connect have complementary DNA sequences on their surfaces, ensuring only the correct pieces bind together as they jostle into one another while suspended in a test tube. The spheres that make up the crystal follow each other in slipstreams, making some patterns more likely to form. (Ian Jenkins) Now, a University of Pennsylvania team has made a discovery with implications for all such self-assembled structures. Earlier work assumed that the liquid medium in which these DNA-coated pieces float could be treated as a placid vacuum, but the Penn team has shown that fluid dynamics play a crucial role in the kind and quality of the structures that can be made in this way. As the DNA-coated pieces rearrange themselves and bind, they create slipstreams into which other pieces can flow. This phenomenon makes some patterns within the structures more likely to form than others. The research was conducted by professors Talid Sinno and John Crocker, alongside graduate students Ian Jenkins, Marie Casey and James McGinley, all of the Department of Chemical and Biomolecular Engineering in Penn’s School of Engineering and Applied Science. It was published in the Proceedings of the National Academy of Sciences. The Penn team’s discovery started with an unusual observation about one of their previous studies, which dealt with a reconfigurable crystalline structure the team had made using DNA-coated plastic spheres, each 400 nanometers wide. These structures initially assemble into floppy crystals with square-shaped patterns, but, in a process similar to heat-treating steel, their patterns can be coaxed into more stable, triangular configurations. Surprisingly, the structures they were making in the lab were better than the ones their computer simulations predicted would result. The simulated crystals were full of defects, places where the crystalline pattern of the spheres was disrupted, but the experimentally grown crystals were all perfectly aligned. While these perfect crystals were a positive sign that the technique could be scaled up to build different kinds of structures, the fact that their simulations were evidently flawed indicated a major hurdle. “What you see in an experiment,” Sinno said, “is usually a dirtier version of what you see in simulation. We need to understand why these simulation tools aren’t working if we’re going to build useful things with this technology, and this result was evidence that we don’t fully understand this system yet. It’s not just a simulation detail that was missing; there’s a fundamental physical mechanism that we’re not including.” By process of elimination, the missing physical mechanism turned out to be hydrodynamic effects, essentially, the interplay between the particles and the fluid in which they are suspended while growing. The simulation of a system’s hydrodynamics is critical when the fluid is flowing, such as how rocks are shaped by a rushing river, but has been considered irrelevant when the fluid is still, as it was in the researchers’ experiments. While the particles’ jostling perturbs the medium, the system remains in equilibrium, suggesting the overall effect is negligible. “The conventional wisdom,” Crocker said, “was that you don't need to consider hydrodynamic effects in these systems. 
Adding them to simulations is computationally expensive, and there are various kinds of proofs that these effects don't change the energy of the system. From there you can make a leap to saying, 'I don't need to worry about them at all.'" Particle systems like ones made by these self-assembling DNA-coated spheres typically rearrange themselves until they reach the lowest energy state. An unusual feature of the researchers' system is that there are thousands of final configurations — most containing defects — that are just as energetically favorable as the perfect one they produced in the experiment. "It's like you're in a room with a thousand doors," Crocker said. "Each of those doors takes you to a different structure, only one of which is the copper-gold pattern crystal we actually get. Without the hydrodynamics, the simulation is equally likely to send you through any one of those doors." The researchers' breakthrough came when they realized that while hydrodynamic effects would not make any one final configuration more energy-favorable than another, the different ways particles would need to rearrange themselves to get to those states were not all equally easy. Critically, it is easier for a particle to make a certain rearrangement if it's following in the wake of another particle making the same moves. "It's like slipstreaming," Crocker said. "The way the particles move together, it's like they're a school of fish." "How you go determines what you get," Sinno said. "There are certain paths that have a lot more slipstreaming than others, and the paths that have a lot correspond to the final configurations we observed in the experiment." The researchers believe that this finding will lay the foundation for future work with these DNA-coated building blocks, but the principle discovered in their study will likely hold up in other situations where microscopic particles are suspended in a liquid medium. "If slipstreaming is important here, it's likely to be important in other particle assemblies," Sinno said. "It's not just about these DNA-linked particles; it's about any system where you have particles at this size scale. To really understand what you get, you need to include the hydrodynamics." The research was supported by the National Science Foundation through its Chemical, Bioengineering, Environmental and Transport Systems Division. Evan Lerner | EurekAlert!
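To make the article's central point more concrete, the sketch below contrasts a free-draining Brownian-dynamics step, in which each suspended sphere responds only to the force acting on it, with a step whose particle mobilities are coupled through the fluid by the far-field Oseen tensor, so that pushing one sphere drags its neighbours along in its wake. This is a generic textbook construction, not the Penn group's code, and every numerical value in it is an arbitrary placeholder.

```python
# Toy illustration of "including hydrodynamics" in an overdamped colloid model.
# Not the study's method; parameters are arbitrary and units are nominal.
import numpy as np

kT = 1.0          # thermal energy
eta = 1.0         # solvent viscosity
a = 0.5           # particle radius
mu_self = 1.0 / (6.0 * np.pi * eta * a)   # Stokes self-mobility of a sphere

def free_draining_step(x, forces, dt, rng):
    """Each particle moves only under its own force plus its own noise."""
    drift = mu_self * forces * dt
    noise = rng.normal(scale=np.sqrt(2.0 * kT * mu_self * dt), size=x.shape)
    return x + drift + noise

def oseen_mobility(x):
    """Pairwise far-field (Oseen) mobility matrix coupling all particles."""
    n = len(x)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        M[3*i:3*i+3, 3*i:3*i+3] = mu_self * np.eye(3)
        for j in range(i + 1, n):
            r = x[i] - x[j]
            d = np.linalg.norm(r)
            rhat = r / d
            block = (np.eye(3) + np.outer(rhat, rhat)) / (8.0 * np.pi * eta * d)
            M[3*i:3*i+3, 3*j:3*j+3] = block
            M[3*j:3*j+3, 3*i:3*i+3] = block
    return M

def hydrodynamic_step(x, forces, dt):
    """Deterministic part of a coupled step: a force on one particle now moves
    the others too (the correlated thermal noise is omitted for brevity)."""
    M = oseen_mobility(x)
    drift = (M @ forces.ravel()).reshape(x.shape) * dt
    return x + drift

rng = np.random.default_rng(0)
x = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
f = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])   # push only particle 0
print(free_draining_step(x, f, 0.01, rng) - x)      # only particle 0 drifts (plus noise)
print(hydrodynamic_step(x, f, 0.01) - x)            # particle 1 is dragged along too
```

Even this crude pairwise coupling makes each step cost roughly the square of the particle number, and a consistent treatment also requires thermal noise correlated through the same mobility matrix, which is part of why hydrodynamic interactions are so often the first thing dropped from such simulations.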
http://www.innovations-report.com/html/reports/life-sciences/the-motion-of-the-medium-matters-for-self-assembling-particles-penn-research-shows.html
4.34375
The Germanic umlaut (more usually called i-umlaut or i-mutation) is a type of linguistic umlaut in which a back vowel changes to the associated front vowel (fronting) or a front vowel becomes closer to /i/ (raising) when the following syllable contains /i/, /iː/, or /j/. It took place separately in various Germanic languages starting around 450 or 500 Ad and affected all of the early languages except Gothic. An example of the resulting vowel alternation is the English plural foot ~ feet (from Germanic */fōts/, pl. */fōtiz/). Germanic umlaut, as covered in this article, does not include other historical vowel phenomena that operated in the history of the Germanic languages such as Germanic a-mutation and the various language-specific processes of u-mutation, as well as the earlier Indo-European ablaut (vowel gradation), which is observable in the declension of Germanic strong verbs such as sing/sang/sung. - 1 Description - 2 Morphological effects - 3 German orthography - 4 False ablaut in verbs - 5 West Germanic languages - 6 North Germanic languages - 7 See also - 8 References - 9 Bibliography Umlaut is a form of assimilation or vowel harmony, the process by which one speech sound is altered to make it more like another adjacent sound. If a word has two vowels with one far back in the mouth and the other far forward, more effort is required to pronounce the word than if the vowels were closer together; therefore, one possible linguistic development is for these two vowels to be drawn closer together. Germanic umlaut is a specific historical example of this process that took place in the unattested earliest stages of Old English and Old Norse and apparently later in Old High German, and some other old Germanic languages. The precise developments varied from one language to another, but the general trend was this: - Whenever a back vowel (/a/, /o/ or /u/, whether long or short) occurred in a syllable and the front vowel /i/ or the front glide /j/ occurred in the next, the vowel in the first syllable was fronted (usually to /æ/, /ø/, and /y/ respectively). Thus, for example, West Germanic *mūsiz "mice" shifted to proto-Old English *mȳsiz, which eventually developed to modern mice, while the singular form *mūs lacked a following /i/ and was unaffected, eventually becoming modern mouse. - When a low or mid-front vowel occurred in a syllable and the front vowel /i/ or the front glide /j/ occurred in the next, the vowel in the first syllable was raised. This happened less often in the Germanic languages, partly because of earlier vowel harmony in similar contexts. However, for example, proto-Old English /æ/ became /e/ in, for example, */bæddj-/ > /bedd/ 'bed'. The fronted variant caused by umlaut was originally allophonic (a variant sound automatically predictable from the context), but it later became phonemic (a separate sound in its own right) when the context was lost but the variant sound remained. 
The following examples show how, when final -i was lost, the variant sound -ȳ- became a new phoneme in Old English: |Loss of final -z||West Germanic||*mūs||*mūsi||*fōt||*fōti| |Germanic umlaut||Pre-Old English||*mūs||*mȳsi||*fōt||*fø̄ti| |Loss of i after a heavy syllable||Pre-Old English||mūs||mȳs||fōt||fø̄t| |Unrounding of ø̄ (> ē)||Most Old English dialects||mūs||mȳs||fōt||fēt| |Unrounding of ȳ (> ī)||Early Middle English||mūs||mīs||fōt||fēt| |Great Vowel Shift||Early Modern and Modern English||/maʊs/||/maɪs/||/fʊt/||/fiːt/| Although umlaut was not a grammatical process, umlauted vowels often serve to distinguish grammatical forms (and thus show similarities to ablaut when viewed synchronically), as can be seen in the English word man. In ancient Germanic, it and some other words had the plural suffix -iz, with the same vowel as the singular. As it contained an i, this suffix caused fronting of the vowel, and when the suffix later disappeared, the mutated vowel remained as the only plural marker: men. In English, such plurals are rare: man, woman, tooth, goose, foot, mouse, louse, brother (archaic or specialized plural in brethren), and cow (poetic and dialectal plural in kine). It also can be found in a few fossilized diminutive forms, such as kitten from cat and kernel from corn, and the feminine vixen from fox. Umlaut is conspicuous when it occurs in one of such a pair of forms, but there are many mutated words without an unmutated parallel form. Germanic actively derived causative weak verbs from ordinary strong verbs by applying a suffix, which later caused umlaut, to a past tense form. Some of these survived into modern English as doublets of verbs, including fell and set vs. fall (older past *fefall) and sit. Umlaut could occur in borrowings as well if stressed vowel was coloured by a subsequent front vowel, such as German Köln, "Cologne", from Latin Colonia, or Käse, "cheese", from Latin caseus. Parallel umlauts in some modern Germanic languages |*fallaną - *fallijaną||fallen - fällen||to fall - fell||vallen - vellen||falla - fälla||falla - fella| |*fōts - *fōtiz||Fuß - Füße||foot - feet||voet - voeten (no umlaut)||fot - fötter||fótur - føtur| |*aldaz - *alþizô - *alþistaz||alt - älter - am ältesten||old - elder - eldest||oud - ouder - oudst (no umlaut)||gammal - äldre - äldst (irregular)||gamal - eldri - elstur (irregular)| |*fullaz - *fullijaną||voll - füllen||full - fill||vol - vullen||full - fylla||fullur - fylla| |*langaz - *langīn/*langiþō||lang - Länge||long - length||lang - lengte||lång - längd||langur - longd| |*lūs - *lūsiz||Laus - Läuse||louse - lice||luis - luizen (no umlaut)||lus - löss||lús - lýs| German orthography is generally consistent in its representation of i-umlaut. The umlaut diacritic, consisting of two dots above the vowel, is used for the fronted vowels, making the historical process much more visible in the modern language than is the case in English: a>ä, o>ö, u>ü, au>äu. Sometimes a word has a vowel affected by i-umlaut, but the vowel is not marked with the umlaut diacritic. Usually, the word with an umlauted vowel comes from an original word without umlaut, but the two are not recognized as a pair because the meaning of the umlauted word has changed. The adjective fertig ("ready", "finished"; originally "ready to go") contains an umlaut mutation, but it is spelled with e rather than ä as its relationship to Fahrt (journey) has, for most speakers of the language, been lost from sight. 
Likewise, alt (old) has the comparative älter (older), but the noun from this is spelled Eltern (parents). Aufwand (effort) has the verb aufwenden (to spend, to dedicate) and the adjective aufwendig (requiring effort) though the 1996 spelling reform now permits the alternative spelling aufwändig (but not aufwänden). For denken, see below. On the other hand, some foreign words have umlaut diacritics that do not mark a vowel produced by the sound change of umlaut. Notable examples are Känguru from English kangaroo, and Büro from French bureau. In the latter case, the diacritic is a pure phonological marker, with no regard to etymology; in case of the kangaroo (identical in sound to *Kenguru), it somewhat etymologically marks the fact that the sound is written with an a in English. Similarly, Big Mac can be spelt Big Mäc in German, which even used to be the official spelling used by McDonald's in Germany. In borrowings from Latin and Greek, Latin ae, oe, or Greek ai, oi, are rendered in German as ä and ö respectively (Ägypten, "Egypt", or Ökonomie, "economy"). However, Latin/Greek y is written y in German instead of ü (Psychologie); y ended up being used entirely instead of ü in Scandinavia for native words as well. Für "for" is a special case; it is an umlauted form of vor "before", but other historical developments changed the expected ö into ü. In this case, the ü marks a genuine but irregular umlaut. Other special cases are fünf "five" (expected form *finf) and zwölf "twelve" (expected form *zwälf/zwelf), in which modern umlauted vowel arose from a different process:rounding an unrounded front vowel (possibly from the labial consonants w/f occurring on both sides). Orthography and design history The German phonological umlaut is present in the Old High German period and continues to develop in Middle High German. From the Middle High German, it was sometimes denoted in written German by adding an e to the affected vowel, either after the vowel or, in the small form, above it. This can still be seen in some names:Goethe, Goebbels, Staedtler. In blackletter handwriting, as used in German manuscripts of the later Middle Ages and also in many printed texts of the early modern period, the superscript ⟨e⟩ still had a form that would now be recognisable as an ⟨e⟩, but in manuscript writing, umlauted vowels could be indicated by two dots since the late medieval period. Unusual umlaut designs are sometimes also created for graphic design purposes, such as to fit an umlaut into tightly-spaced lines of text. It may include umlauts placed vertically or inside the body of the letter. False ablaut in verbs Two interesting examples of umlaut involve vowel distinctions in Germanic verbs and often are subsumed under the heading "ablaut" in descriptions of Germanic verbs, giving them the name false ablaut. The German word Rückumlaut ("reverse umlaut") is the slightly misleading term given to the vowel distinction between present and past tense forms of certain Germanic weak verbs. Examples in English are think/thought, bring/brought, tell/told, sell/sold. (These verbs have a dental -t or -d as a tense marker; therefore, they are weak and the vowel change cannot be conditioned by ablaut.) The presence of umlaut is possibly more obvious in German denken/dachte ("think/thought"), especially if it is remembered that in German the letters <ä> and <e> are usually phonetically equivalent. 
The Proto-Germanic verb would have been *þankijaną; the /j/ caused umlaut in all the forms that had the suffix; subsequently, the /j/ disappeared. The term "reverse umlaut" indicates that if, with traditional grammar, the infinitive and the present tense as the starting point, there is an illusion of a vowel shift towards the back of the mouth (so to speak, <ä>→<a>) in the past tense, but of course, the historical development was simply umlaut in the present tense forms. A variety of umlaut occurs in the second and third person singular forms of the present tense of some Germanic strong verbs. For example, German fangen ("to catch") has the present tense ich fange, du fängst, er fängt. The verb geben ("give") has the present tense ich gebe, du gibst, er gibt, but the shift e→i would not be a normal result of umlaut in German. There are, in fact, two distinct phenomena at play here; the first is indeed umlaut as it is best known, but the second is older and occurred already in Proto-Germanic itself. In both cases, a following i triggered a vowel change, but in Proto-Germanic, it affected only e. The effect on back vowels did not occur until hundreds of years later, after the Germanic languages had already begun to split up: *fą̄haną, *fą̄hidi with no umlaut of a, but *gebanan, *gibidi with umlaut of e. West Germanic languages Although umlaut operated the same way in all the West Germanic languages, the exact words in which it took place and the outcomes of the process differ between the languages. Of particular note is the loss of word-final -i after heavy syllables. In the more southern languages (Old High German, Old Dutch, Old Saxon), forms that lost -i often show no umlaut, but in the more northern languages (Old English, Old Frisian), the forms do. Compare Old English ġiest "guest", which shows umlaut, and Old High German gast, which does not, both from Proto-Germanic *gastiz. That may mean that there was dialectal variation in the timing and spread of the two changes, with final loss happening before umlaut in the south but after it in the north. On the other hand, umlaut may have still been partly allophonic, and the loss of the conditioning sound may have triggered an "un-umlauting" of the preceding vowel. Nevertheless, medial -ij- consistently triggers umlaut although its subsequent loss is universal in West Germanic except for Old Saxon and early Old High German. I-mutation in Old English I-mutation generally affected Old English vowels as follows in each of the main dialects. It led to the introduction into Old English of the new sounds /y(:)/, /ø(:)/ (which, in most varieties, soon turned into /e(:)/ and a sound written in Early West Saxon manuscripts as ie but whose phonetic value is debated. |original||i-mutated||examples and notes| |a||æ, e||æ, e > e||æ, e||bacan "to bake", bæcþ "(he/she) bakes". a > e particularly before nasal consonants: mann "person", menn "people"| |ā||ǣ||ǣ||ǣ||lār "teaching" (cf. "lore"), lǣran "to teach"| |æ||e||e||e||þæc "covering" (cf. "thatch"), þeccan "to cover"| |e||i||i||i||not clearly attested due to earlier Germanic e > i before i, j| |o||oe > e||oe > e||oe > e||Latin olium, Old English oele, ele. Early forms in oe, representing /ø/, later unrounded to e| |ō||oe > ē||oe > ē||oe > ē||fōt "foot", foet, fēt "feet". Early forms in oe, representing /ø/, later unrounded to ē| |u||y||y > e||y||murnan "to mourn", myrnþ "(he/she) mourns"| |ū||ȳ||ȳ > ē||ȳ||mūs "mouse", mȳs "mice"| |ea||ie > y||e||e||eald "old", ieldra, eldra "older" (cf. 
"elder")| |ēa||īe > ȳ||ē||ē||nēah "near" (cf. "nigh"), nīehst "nearest" (cf. "next")| |eo||io > eo||io > eo||io > eo||examples are rare due to earlier Germanic e > i before i, j. io became eo in most later varieties of Old English| |ēo||īo > ēo||īo > ēo||īo > ēo||examples are rare due to earlier Germanic e > i before i, j. īo became ēo in most later varieties of Old English| |io||ie > y||io, eo||io, eo||*fiohtan "to fight", fieht "(he/she) fights". io became eo in most later varieties of Old English, giving alternations like beornan "to burn", biernþ "(he/she) burns"| |īo||īe > ȳ||īo, ēo||īo, ēo||līoht "light", līehtan "illuminate". īo became ēo in most later varieties of Old English, giving alternations like sēoþan "to boil" (cf. "seethe"), sīeþþ "(he/she) boils"| I-mutation is particularly visible in the inflectional and derivational morphology of Old English since it affected so many of the Old English vowels. Of 16 basic vowels and diphthongs in Old English, only the four vowels ǣ, ē, i, ī were unaffected by i-mutation. Although i-mutation was originally triggered by an /i(:)/ or /j/ in the syllable following the affected vowel, by the time of the surviving Old English texts, the /i(:)/ or /j/ had generally changed (usually to /e/) or been lost entirely, with the result that i-mutation generally appears as a morphological process that affects a certain (seemingly arbitrary) set of forms. These are most common forms affected: - The plural, and genitive/dative singular, forms of consonant-declension nouns (Proto-Germanic (PGmc) *-iz), as compared to the nominative/accusative singular – e.g., fōt "foot", fēt "feet"; mūs "mouse", mȳs "mice". Many more words were affected by this change in Old English vs. modern English – e.g., bōc "book", bēc "books"; frēond "friend", frīend "friends". - The second and third person present singular indicative of strong verbs (Pre-Old-English (Pre-OE) *-ist, *-iþ), as compared to the infinitive and other present-tense forms – e.g. helpan "to help", helpe "(I) help", hilpst "(you sg.) help" (cf. archaic "thou helpest"), hilpþ "(he/she) helps" (cf. archaic "he helpeth"), helpaþ "(we/you pl./they) help". - The comparative form of some adjectives (Pre-OE *-ira < PGmc *-izǭ, Pre-OE *-ist < PGmc *-istaz), as compared to the base form – e.g. eald "old", ieldra "older", ieldest "oldest" (cf. "elder, eldest"). - Throughout the first class of weak verbs (original suffix -jan), as compared to the forms from which the verbs were derived – e.g. fōda "food", fēdan "to feed" < Pre-OE *fōdjan; lār "lore", lǣran "to teach"; feallan "to fall", fiellan "to fell". - In the abstract nouns in þ(u) (PGmc *-iþō) corresponding to certain adjectives – e.g., strang "strong", strengþ(u) "strength"; hāl "whole/hale", hǣlþ(u) "health"; fūl "foul", fȳlþ(u) "filth". - In female forms of several nouns with the suffix -enn (PGmc *-injō) – e.g., god "god", gydenn "goddess" (cf. German Gott, Göttin); fox "fox", fyxenn "vixen". - In i-stem abstract nouns derived from verbs (PGmc *-iz) – e.g. cyme "a coming", cuman "to come"; byre "a son (orig., a being born)", beran "to bear"; fiell "a falling", feallan "to fall"; bend "a bond", bindan "to bind". Note that in some cases the abstract noun has a different vowel than the corresponding verb, due to Proto-Indo-European ablaut. - The phonologically expected umlaut of /a/ is /æ/. However, in many cases /e/ appears. Most /a/ in Old English stem from earlier /æ/ because of a change called a-restoration. 
This change was blocked when /i/ or /j/ followed, leaving /æ/, which subsequently mutated to /e/. For example, in the case of talu "tale" vs. tellan "to tell", the forms at one point in the early history of Old English were *tælu and *tælljan, respectively. A-restoration converted *tælu to talu, but left *tælljan alone, and it subsequently evolved to tellan by i-mutation. The same process "should" have led to *becþ instead of bæcþ. That is, the early forms were *bæcan and *bæciþ. A-restoration converted *bæcan to bacan but left alone *bæciþ, which would normally have evolved by umlaut to *becþ. In this case, however, once a-restoration took effect, *bæciþ was modified to *baciþ by analogy with bacan, and then later umlauted to bæcþ. - A similar process resulted in the umlaut of /o/ sometimes appearing as /e/ and sometimes (usually, in fact) as /y/. In Old English, /o/ generally stems from a-mutation of original /u/. A-mutation of /u/ was blocked by a following /i/ or /j/, which later triggered umlaut of the /u/ to /y/, the reason for alternations between /o/ and /y/ being common. Umlaut of /o/ to /e/ occurs only when an original /u/ was modified to /o/ by analogy before umlaut took place. For example, dohtor comes from late Proto-Germanic *dohter, from earlier *duhter. The plural in Proto-Germanic was *duhtriz, with /u/ unaffected by a-mutation due to the following /i/. At some point prior to i-mutation, the form *duhtriz was modified to *dohtriz by analogy with the singular form, which then allowed it to be umlauted to a form that resulted in dehter. A few hundred years after i-umlaut began, another similar change called double umlaut occurred. It was triggered by an /i/ or /j/ in the third or fourth syllable of a word and mutated all previous vowels but worked only when the vowel directly preceding the /i/ or /j/ was /u/. This /u/ typically appears as e in Old English or is deleted: - hægtess "witch" < PGmc *hagatusjō (cf. Old High German hagazussa) - ǣmerge "embers" < Pre-OE *āmurja < PGmc *aimurjǭ (cf. Old High German eimurja) - ǣrende "errand" < PGmc *ǣrundijaz (cf. Old Saxon ārundi) - efstan "to hasten" < archaic œfestan < Pre-OE *ofustan - ȳmest "upmost" < PGmc *uhumistaz (cf. Gothic áuhumists) As shown by the examples, affected words typically had /u/ in the second syllable and /a/ in the first syllable. Tge /æ/ developed too late to break to ea or to trigger palatalization of a preceding velar. I-mutation in High German I-mutation is visible in Old High German (OHG), c. 800 AD, only on /a/, which was mutated to /e/. By then, it had already become partly phonologized, since some of the conditioning /i/ and /j/ sounds had been deleted or modified. The later history of German, however, shows that /o/ and /u/ were also affected; starting in Middle High German, the remaining conditioning environments disappear and /o/ and /u/ appear as /ø/ and /y/ in the appropriate environments. That has led to a controversy over when and how i-mutation appeared on these vowels. Some (for example, Herbert Penzl) have suggested that the vowels must have been modified without being indicated for lack of a lack of proper symbols and/or because the difference was still partly allophonic. Others (such as Joseph Voyles) have suggested that the i-mutation of /o/ and /u/ was entirely analogical and pointed to the lack of i-mutation of these vowels in certain places where it would be expected, in contrast to the consistent mutation of /a/. Perhaps[original research?] 
the answer is somewhere in between — i-mutation of /o/ and /u/ was indeed phonetic, occurring late in OHG, but later spread analogically to the environments where the conditioning had already disappeared by OHG (this is where failure of i-mutation is most likely). It must also be kept in mind that it is an issue of relative chronology: already early in the history of attested OHG, some umlauting factors are known to have disappeared (such as word-internal j after geminates and clusters), and depending on the age of OHG umlaut, that could explain some cases where expected umlaut is missing. In modern German, umlaut as a marker of the plural of nouns is a regular feature of the language, and although umlaut itself is no longer a productive force in German, new plurals of this type can be created by analogy. Likewise, umlaut marks the comparative of many adjectives and other kinds of derived forms. Because of the grammatical importance of such pairs, the German umlaut diacritic was developed, making the phenomenon very visible. The result in German is that the vowels written as <a>, <o>, and <u> become <ä>, <ö>, and <ü>, and the diphthong <au> becomes <äu>: Mann/Männer ("man/men"), lang/länger ("long/longer"), Fuß/Füße ("foot/feet"), Maus/Mäuse ("mouse/mice"), Haus/Häuser ("house/houses"). On the phonetic realisation of these, see German phonology. I-mutation in Old Saxon In Old Saxon, umlaut is much less apparent than in Old Norse. The only vowel that is regularly fronted before an /i/ or /j/ is short /a/: gast – gesti, slahan – slehis. It must have had a greater effect than the orthography shows since all later dialects have a regular umlaut of both long and short vowels. I-mutation in Dutch The situation in Old Dutch is similar to the situation found in Old Saxon and Old High German. Late Old Dutch saw a merger of /u/ and /o/, causing their umlauted results to merge as well, giving /ʏ/. The lengthening in open syllables in early Middle Dutch then lengthened and lowered this short /ʏ/ to long /øː/ (spelled eu) in some words. This is parallel to the lowering of /i/ in open syllables to /eː/, as in schip ("ship") – schepen ("ships"). Later developments in Middle Dutch show that long vowels and diphthongs were not affected by umlaut in the more western dialects, including those in western Brabant and Holland that were most influential for standard Dutch. Thus, for example, where modern German has fühlen /ˈfyːlən/ and English has feel /fiːl/ (from Proto-Germanic *fōlijaną), standard Dutch retains a back vowel in the stem in voelen /ˈvulə(n)/. Thus, only two of the original Germanic vowels were affected by umlaut at all in western/standard Dutch: /a/, which became /ɛ/, and /u/, which became /ʏ/ (spelled u). As a result of this relatively sparse occurrence of umlaut, standard Dutch does not use umlaut as a grammatical marker. An exception is the noun stad "city" which has the irregular umlauted plural steden. The more eastern dialects of Dutch, including eastern Brabantian and all of Limburgish have umlaut of long vowels, however. Consequently, these dialects also make grammatical use of umlaut to form plurals and diminutives, much as most other modern Germanic languages do. Compare vulen /vylə(n)/ and menneke "little man" from man. North Germanic languages I-mutation in Old Norse |This section does not cite any sources. (August 2010)| The situation in Old Norse is complicated as there are two forms of i-mutation. 
Of these two, only one is phonologized.[clarification needed] I-mutation in Old Norse is phonological: - In Proto-Norse, if the syllable was heavy and followed by vocalic i (*gastiʀ > gestr, but *staði > *stað) or, regardless of syllable weight, if followed by consonantal i (*skunja > skyn). The rule is not perfect, as some light syllables were still umlauted: *kuni > kyn, *komiʀ > kømr. - In Old Norse, the following syllable contains a remaining Proto-Norse i.[why?] For example, the root of the dative singular of u-stems are i-mutated as the desinence contains a Proto-Norse i, but the dative singular of a-stems is not, as their desinence stems from P-N ē. I-mutation is not phonological if the vowel of a long syllable is i-mutated by a syncopated i. I-mutation does not occur in short syllables. |a||e (ę)||fagr (fair) / fegrstr (fairest)| |au||ey||lauss (loose) / leysa (to loosen)| |á||æ||Áss / Æsir| |jú||ý||ljúga (to lie) / lýgr (lies)| |o||ø||koma (to come) / kømr (comes)| |ó||œ||róa (to row) / rœr (rows)| |u||y||upp (up) / yppa (to lift up)| |ú||ý||fúll (foul) / fýla (stink, foulness)| |ǫ||ø||sǫkk (sank) / søkkva (to sink)| - Cercignani, Fausto (1980). "Early "Umlaut" Phenomena in the Germanic Languages". Language 56 (1): 126–136. doi:10.2307/412645. - Cercignani, Fausto (1980). "Alleged Gothic Umlauts". Indogermanische Forschungen 85: 207–213. - Campbell, A. 1959. Old English Grammar. Oxford: Clarendon Press. §§624-27. - Hogg, Richard M., ‘Phonology and Morphology’, in The Cambridge History of the English Language, Volume 1: The Beginnings to 1066, ed. by Richard M. Hogg (Cambridge: Cambridge University Press, 1992), pp. 67–167 (p. 113). - Table adapted from Campbell, Historical Linguistics (2nd edition), 2004, p. 23. See also Malmkjær, The Linguistics Encyclopedia (2nd Edition), 2002, pp. 230-233. - Ringe 2006, pp. 274, 280 - Duden, Die deutsche Rechtschreibung, 21st edition, p. 133. - Isert, Jörg. "Fast Food: McDonald's schafft "Big Mäc" und "Fishmäc" ab" [Fast food: McDonald's abolishes "Big Mäc" and "Fishmäc"]. Welt Online (in German). Axel Springer AG. Retrieved 21 April 2012. - In medieval German manuscripts, other digraphs could also be written using superscripts: in bluome 'flower', for example, the ⟨o⟩ was frequently placed above the ⟨u⟩, although this letter survives now only in Czech. Compare also the development of the tilde as a superscript n. - Hardwig, Florian. "Unusual Umlauts (German)". Typojournal. Retrieved 15 July 2015. - Hardwig, Florian. "Jazz in Town". Fonts in Use. Retrieved 15 July 2015. - "Flickr collection: vertical umlauts". Flickr. Retrieved 15 July 2015. - Hardwig, Florian. "Compact umlaut". Fonts in Use. Retrieved 15 July 2015. - Campbell, A. 1959. Old English Grammar. Oxford: Clarendon Press. §§112, 190–204, 288. - Penzl, H. (1949). "Umlaut and Secondary Umlaut in Old High German". Language 25 (3): 223–240. JSTOR 410084. - Voyles, Joseph (1992). "On Old High German i-umlaut". In Rauch, Irmengard; Carr, Gerald F.; Kyes, Robert L. On Germanic linguistics: issues and methods. - Malmkjær, Kirsten (Ed.) (2002). The linguistics encyclopedia (2nd ed.). London: Routledge, Taylor & Francis Group. ISBN 0-415-22209-5. - Campbell, Lyle (2004). Historical Linguistics: An Introduction (2nd ed.). Edinburgh University Press. - Cercignani, Fausto, Early "Umlaut" Phenomena in the Germanic Languages, in «Language», 56/1, 1980, pp. 126–136. - Cercignani, Fausto, Alleged Gothic Umlauts, in «Indogermanische Forschungen», 85, 1980, pp. 207–213.
https://en.wikipedia.org/wiki/I-umlaut
4
Names of Korea |This article needs additional citations for verification. (January 2007)| There are various names of Korea in use today, derived from ancient kingdoms and dynasties. The modern English name Korea is an exonym derived from the Goryeo period and is used by both North Korea and South Korea in international contexts. In the Korean language, the two Koreas use different terms to refer to the nominally unified nation: Chosŏn (조선) in North Korea, and Hanguk (한국) in South Korea. - 1 History - 2 Current usage - 3 Sobriquets for Korea - 4 See also - 5 Notes The earliest records of Korean history are written in Chinese characters called hanja. Even after the invention of hangul, Koreans generally recorded native Korean names with hanja, by translation of meaning, transliteration of sound, or even combinations of the two. Furthermore, the pronunciations of the same character are somewhat different in Korean and the various Korean dialects, and have changed over time. For all these reasons, in addition to the sparse and sometimes contradictory written records, it is often difficult to determine the original meanings or pronunciations of ancient names. Until 108 BC, northern Korea and Manchuria were controlled by Gojoseon. In contemporaneous Chinese records, it was written as 朝鮮, which is pronounced in modern Korean as Joseon (조선). Go (古), meaning "ancient", distinguishes it from the later Joseon Dynasty. The name Joseon is also now still used by North Koreans and Koreans living in China to refer to the peninsula, and as the official Korean form of the name of Democratic People's Republic of Korea. The word is also used in many Eurasian languages to refer to Korea, such as Japanese, Vietnamese, and Chinese. Possibly the Chinese characters phonetically transcribed a native Korean name, perhaps pronounced something like "Jyusin". Some speculate that it also corresponds to Chinese references to 肅愼 (숙신, suksin), 稷愼 (직신, jiksin) and 息愼 (식신, siksin), although these latter names probably describe the ancestors of the Jurchen. Other scholars believe 朝鮮 was a translation of the native Korean Asadal (아사달), the capital of Gojoseon: asa being a hypothetical Altaic root word for "morning", and dal meaning "mountain", a common ending for Goguryeo place names. An early attempt to translate these characters into English gave rise to the expression "The Land of the Morning Calm" for Korea, which parallels the expression "The Land of the Rising Sun" for Japan. While the wording is fanciful, the essence of the translation is valid. Around the time of Gojoseon's fall, various chiefdoms in southern Korea grouped into confederacies, collectively called the Samhan (삼한, "Three Han"). Han is a native Korean root for "leader" or "great", as in maripgan ("king", archaic), hanabi ("grandfather", archaic), and Hanbat ("Great Field", archaic name for Daejeon). It may be related to the Mongol/Turkic title Khan. Han was transliterated in Chinese records as 韓 (한, han), 幹 (간, gan), 刊 (간, gan), 干 (간, gan), or 漢 (한, han), but is unrelated to the Chinese people and states also called Han, which is a different character pronounced with a different tone. (See: Transliteration into Chinese characters). The Han dynasty did have an influence on the three Han as Chinese characters were eventually used called Han-ja. Around the beginning of the Common Era, remnants of the fallen Gojoseon were re-united and expanded by the kingdom of Goguryeo, one of the Three Kingdoms of Korea. 
It, too, was a native Korean word, probably pronounced something like "Guri", transcribed with various hanja characters: 高句麗, 高勾麗, or 高駒麗 (고구려, Goguryeo), 高麗 (고려, Goryeo), 高離 (고리, Gori), or 句麗 (구려, Guryeo). The source native name is thought to be either *Guru ("walled city, castle, fortress"; attested in Chinese historical documents, but not in native Korean sources) or Gauri (가우리, "center"; cf. Middle Korean *gaβɔndɔy and Standard Modern Korean gaunde 가운데). The theory that Goguryeo referenced the founder's surname has been largely discredited (the royal surname changed from Hae to Go long after the state's founding). Revival of the names In the south, the Samhan resolved into the kingdoms of Baekje and Silla, constituting, with Goguryeo, the Three Kingdoms of Korea. In 668, Silla unified the three kingdoms, and reigned as Unified Silla until 935. The succeeding dynasty called itself Goryeo (고려, 高麗), in reference to Goguryeo. Through the Silk Road trade routes, Muslim merchants brought knowledge about Silla and Goryeo to India and the Middle East. Goryeo was transliterated into Italian as "Cauli", the name Marco Polo used when mentioning the country in his Travels, derived from the Chinese form Gāolí. From "Cauli" eventually came the English names "Corea" and the now standard "Korea" (see English usage below). In 1392, a new dynasty established by a military coup revived the name Joseon (조선, 朝鮮). The hanja were often translated into English as "morning calm/sun", and Korea's English nickname became "The Land of the Morning Calm"; however, this interpretation is not often used in the Korean language, and is more familiar to Koreans as a back-translation from English. This nickname was coined by Percival Lowell in his book, "Choson, the Land of the Morning Calm," published in 1885. In 1897, the nation was renamed Daehan Jeguk (대한제국, 大韓帝國, literally, "Great Han Empire", known in English as Korean Empire). Han had been selected in reference to Samhan (Mahan, Jinhan, Byeonhan), which was synonymous with Samkuk, Three Kingdoms of Korea (Goguryeo, Silla, Baekje), at that time. So, Daehan Jeguk (대한제국, 大韓帝國) means it is an empire that rules the area of Three Kingdoms of Korea. This name was used to emphasize independence of Korea, because an empire can't be a subordinate country. When the Korean Empire came under Japanese rule in 1910, the name reverted to Joseon (officially, the Japanese pronunciation Chōsen). During this period, many different groups outside of Korea fought for independence, the most notable being the Daehan Minguk Imsi Jeongbu (대한민국 임시정부, 大韓民國臨時政府), literally the "Provisional Government of the Great Han People's Nation", known in English as the Provisional Government of the Republic of Korea (民國 = 民 ‘people’ + 國 country/nation’ = ‘republic’ in East Asian capitalist societies). In 1948, the South adopted the provisional government's name of Daehan Minguk (대한민국, 大韓民國; see above), known in English as the Republic of Korea. Meanwhile, the North became the Chosŏn Minjujuŭi Inmin Konghwaguk (조선민주주의인민공화국, 朝鮮民主主義人民共和國) literally the "Chosŏn Democratic People Republic", known in English as the Democratic People's Republic of Korea. The name itself was adopted from the short-lived People's Republic of Korea (PRK) formed in Seoul after liberation and later added the word "democratic" to its title. Today, South Koreans use Hanguk to refer to just South Korea or Korea as a whole, Namhan (남한, 南韓; "South Han") for South Korea, and Bukhan (북한, 北韓; "North Han") for North Korea. 
South Korea less formally refers to North Korea as Ibuk (이북, 以北; "The North"). In addition the official name for the Republic of Korea in the Korean language is "Dae Han Minguk" (대한민국; "The Republic of Korea"). North Koreans use Chosŏn, Namjosŏn (남조선, 南朝鮮; "South Chosŏn"), and Bukchosŏn (북조선, 北朝鮮; "North Chosŏn") when referring to Korea, South Korea, and North Korea, respectively. The term Bukchosŏn, however, is rarely used in the north, although it may be found in the Song of General Kim Il-sung. In the tourist regions in North Korea and the official meetings between South Korea and North Korea, Namcheuk (남측, 南側) and Bukcheuk (북측, 北側), or "Southern Side" and "Northern Side", are used instead of Namhan and Bukhan. The Korean language is called Hangugeo (한국어, 韓國語) or Hangukmal (한국말) in the South and Chosŏnmal (조선말, 朝鮮말) or Chosŏnŏ (조선어, 朝鮮語) in the North. The Korean script is called hangeul (한글) in South Korea and Chosŏn'gŭl (조선글) in North Korea. The Korean Peninsula is called Hanbando (한반도, 韓半島) in the South and Chosŏn Pando (조선반도, 朝鮮半島) in the North. In Chinese-speaking areas such as mainland China, Hong Kong, Macau and Taiwan, different naming conventions on several terms have been practiced according to their political proximity to whichever Korean government although there is a growing trend for convergence. In the Chinese language, the Korean Peninsula is usually called Cháoxiǎn Bàndǎo (simplified Chinese: 朝鲜半岛; traditional Chinese: 朝鮮半島) and in rare cases called Hán Bàndǎo (simplified Chinese: 韩半岛; traditional Chinese: 韓半島). Ethnic Koreans are also called Cháoxiǎnzú (朝鲜族), instead of Dàhán mínzú (大韓民族). However, the term Hánguó ren (韩国人) may be used to specifically refer to South Koreans. Before establishing diplomatic relations with South Korea, the People's Republic of China tended to use the historic Korean name Cháoxiǎn (朝鲜 "Joseon"), by referring to South Korea as Nán Cháoxiǎn (南朝鲜 ("South Joseon"). Since diplomatic ties were restored, China has used the names that each of the two sides prefer, by referring to North Korea as Cháoxiǎn and to South Korea as Hánguó (韩国 "Hanguo"). The Korean language can be referred to as either Cháoxiǎnyǔ (朝鲜语) or Hányǔ (韩语). The Korean War is also referred as Cháoxiǎn Zhànzhēng (朝鲜战争) in official documents but it is also popular to use hánzhàn (韓战) colloquially. Taiwan, on the other hand, uses the South Korean names, referring to North Korean as Běihán (北韓 "North Han") and South Korean as Nánhán (南韓 "South Han"). The Republic of China previously maintained diplomatic relations with South Korea, but has never had relations with North Korea. As a result, in the past, Hánguó (韓國) had been used to refer to the whole Korea, and Taiwanese textbooks treated Korea as a unified nation (like mainland China). The Ministry of Foreign Affairs of the Republic of China under the Democratic Progressive Party Government considered North and South Koreas two separate countries. However, general usage in Taiwan is still to refer to North Korea as Běihán (北韓 "North Han[guk]") and South Korea as Nánhán (南韓 "South Han[guk]") while use of Cháoxiǎn (朝鮮) is generally limited to ancient Korea. The Korean language is usually referred to as Hányǔ (韓語). Similarly, general usage in Hong Kong and Macau has traditionally referred to North Korea as Bak Hon (北韓 "North Han") and South Korea as Nam Hon (南韓 "South Han"). 
Under the influence of official usage, which is itself influenced by the official usage of the People's Republic of China government, the mainland practice of naming the two Koreas differently has become more common. In the Chinese language used in Singapore and Malaysia, North Korea is usually called Cháoxiǎn (朝鲜 "Chosŏn") with Běi Cháoxiǎn (北朝鲜 "North Chosŏn") and Běihán (北韩 "North Han") less often used, while South Korea is usually called Hánguó (韩国 "Hanguk") with Nánhán (南韩 "South Han[guk]") and Nán Cháoxiǎn (南朝鲜 "South Chosŏn") less often used. The above usage pattern does not apply for Korea-derived words. For example, Korean ginseng is commonly called Gāolì shēn (高麗參). In Japan, the name preferred by each of the two sides for itself is used, so that North Korea is called Kita-Chōsen (北朝鮮; "North Chosŏn") and South Korea Kankoku (韓国 "Hanguk"). However, North Koreans claim the name Kita-Chōsen is derogatory, as it only refers to the northern part of Korean Peninsula, whereas the government claims the sovereignty over its whole territory. Pro-North people such as Chongryon use the name Kyōwakoku (共和国; "the Republic") instead, but the ambiguous name is not popular among others. In 1972 Chongryon campaigned to get the Japanese media to stop referring to North Korea as Kita-Chōsen. This effort was not successful, but as a compromise most media companies agreed to refer to the nation with its full official title at least once in every article, thus they used the lengthy Kita-Chōsen (Chōsen Minshu-shugi Jinmin Kyōwakoku) (北朝鮮(朝鮮民主主義人民共和国) "North Chosŏn (The People's Democratic Republic of Chosŏn)"). By January 2003, this policy started to be abandoned by most newspapers, starting with Tokyo Shimbun, which announced that it would no longer write out the full name, followed by Asahi, Mainichi, and Nikkei For Korea as a whole, Chōsen (朝鮮; "Joseon") is commonly used. The term Chōsen, which has a longer usage history, continues to be used to refer to the Korean peninsula, the Korean ethnic group, and the Korean language, which are use cases that won't cause confusion between Korea and North Korea. When referring to both North Korean and South Korean nationals, the transcription of phonetic English Korean (コリアン, Korian) may be used because a reference to a Chōsen national may be interpreted as a North Korean national instead. The Korean language is most frequently referred to in Japan as Kankokugo (韓国語) or Chōsengo (朝鮮語). While academia mostly prefers Chōsengo, Kankokugo became more and more common in non-academic fields, thanks to the economic and cultural presence of South Korea. The language is also referred to as various terms, such as "Kankokuchōsengo" (韓国朝鮮語), "Chōsen-Kankokugo" (朝鮮・韓国語), "Kankokugo (Chōsengo)" (韓国語(朝鮮語)), etc. Some people refer to the language as Koriago (コリア語; "Korean Language"). This term is not used in ordinary Japanese, but was selected as a compromise to placate both nations in a euphemistic process called kotobagari. Likewise, when NHK broadcasts a language instruction program for Korean, the language is referred to as hangurugo (ハングル語; "hangul language"); although it's technically incorrect since hangul itself is a writing system, not a language. Some argue that even Hangurugo is not completely neutral, since North Korea calls the letter Chosŏn'gŭl, not hangul. Urimaru (ウリマル), a direct transcription of uri mal (우리 말, "our language") is sometimes used by Korean residents in Japan, as well as by KBS World Radio. 
This term, however, may not be suitable for ethnic Japanese, whose "our language" is not necessarily Korean. Koreans who moved to Japan have usually maintained their distinctive cultural heritages (such as the Baekje-towns or Goguryeo-villages). Ethnic Korean residents of Japan have been collectively called Zainichi Chōsenjin (在日朝鮮人 "Joseon People in Japan"), regardless of nationality. However, for the same reason as above, the euphemism Zainichi Korian (在日コリアン; "Koreans in Japan") is increasingly used today. Zainichi (在日; "In Japan") itself is also often used colloquially. People with North Korean nationality are called Zainichi Chōsenjin, while those with South Korean nationality, sometimes including recent newcomers, are called Zainichi Kankokujin (在日韓国人 "Hanguk People in Japan"). Mongols have their own word for Korea: Солонгос (Solongos). In Mongolian, solongo means "rainbow." Another theory is that the name derives from the Solon tribe living in Manchuria, a tribe culturally and ethnically related to the Korean people. North and South Korea are, accordingly, Хойд Солонгос (Hoid Solongos) and Өмнөд Солонгос (Ömnöd Solongos). The name of either Silla or its capital Seora-beol was also widely used throughout Northeast Asia as the ethnonym for the people of Silla, appearing [...] as Solgo or Solho in the languages of the medieval Jurchens and their later descendants, the Manchus, respectively. In Vietnam, people call North Korea Triều Tiên (朝鮮; "Chosŏn") and South Korea Hàn Quốc (韓國; "Hanguk"). Prior to unification, North Vietnam used Bắc Triều Tiên (北朝鮮; Bukchosŏn) and Nam Triều Tiên (南朝鮮; Namjoseon) while South Vietnam used Bắc Hàn (北韓; Bukhan) and Nam Hàn (南韓; Namhan) for North and South Korea, respectively. After unification, the northern Vietnamese terminology persisted until the 1990s. When South Korea reestablished diplomatic relations with Vietnam in 1993, it requested that Vietnam use the name that it uses for itself, and Hàn Quốc gradually replaced Nam Triều Tiên in usage. In the Vietnamese language used in the United States, Bắc Hàn and Nam Hàn are most commonly used.
Outside East Asia
Both South and North Korea use the name "Korea" when referring to their countries in English. North Korea is sometimes referred to as "Korea DPR" (PRK) and South Korea is sometimes referred to as the "Korea Republic" (KOR), especially in international sporting competitions, such as FIFA football. As with other European languages, English historically had a variety of names for Korea derived from Marco Polo's rendering of Goryeo, "Cauli" (see Revival of the names above). These included Caule, Core, Cory, Caoli, and Corai as well as two spellings that survived into the 19th century, Corea and Korea. (The modern spelling, "Korea", first appeared in the late 17th century in the travel writings of the Dutch East India Company's Hendrick Hamel.) Despite the coexistence of the spellings "Corea" and "Korea" in 19th-century English publications, some Koreans believe that Japan, around the time of the Japanese occupation, intentionally standardised the spelling on "Korea", so that "Japan" would appear first alphabetically. Both major English-speaking governments of the time (i.e. the United States and the United Kingdom and its Empire) used both "Korea" and "Corea" until the early part of the colonial period. English-language publications in the 19th century generally used the spelling Corea, which was also used at the founding of the British embassy in Seoul in 1890. However, the U.S. 
minister and consul general to Korea, Horace Newton Allen, used "Korea" in his works published on the country. At the official Korean exhibit at the World's Columbian Exhibition in Chicago in 1893 a sign was posted by the Korean Commissioner saying of his country's name that "'Korea' and 'Corea' are both correct, but the former is preferred." This may have had something to do with Allen's influence, as he was heavily involved in the planning and participation of the Korean exhibit at Chicago. A shift can also be seen in Korea itself, where postage stamps issued in 1884 used the name "Corean Post" in English, but those from 1885 and thereafter used "Korea" or "Korean Post". By the first two decades of the 20th century, "Korea" began to be seen more frequently than "Corea" - a change that coincided with Japan's consolidation of its grip over the peninsula. Most evidence of a deliberate name change orchestrated by Japanese authorities is circumstantial, including a 1912 memoir by a Japanese colonial official that complained of the Koreans' tendency "to maintain they are an independent country by insisting on using a C to write their country's name." However, the spelling "Corea" was occasionally used even under full colonial rule and both it and "Korea" were largely eschewed in favor of the Japanese-derived "Chosen", which itself was derived from "Joseon". European languages use variations of the name "Korea" for both North and South Korea. In general, Celtic and Romance languages spell it "Corea" (or variations) since "c" represents the /k/ sound in most Romance and Celtic orthographies. However, Germanic and Slavic languages largely use variants of "Korea" since, in many of these languages, "c" represents other sounds such as /ts/. In languages using other alphabets such as Russian (Cyrillic), variations phonetically similar to "Korea" are also used for example the Russian name for Korea is Корея, romanization Koreya. Outside of Europe, most languages also use variants of "Korea", often adopted to local orthographies. "Korea" in the Jurchen Jin's national language (Jurchen) is "Sogo". Emigrants who moved to Russia and Central Asia call themselves Goryeoin or Koryo-saram (고려인; 高麗人; literally "person or people of Goryeo"), or корейцы in Russian. Many Goryeoin are living in the CIS, including an estimated 106,852 in Russia, 22,000 in Uzbekistan, 20,000 in Kyrgyzstan, 17,460 in Kazakhstan, 8,669 in Ukraine, 2,000 in Belarus, 350 in Moldova, 250 in Georgia, 100 in Azerbaijan, and 30 in Armenia. As of 2005, there are also 1.9 million ethnic Koreans living in China who hold Chinese citizenship and a further 560,000 Korean expatriates from both North and South living in China. South Korean expatriates living in the United States, around 1.7 million, will refer to themselves as Jaemi(-)gyopo (재미교포; 在美僑胞, or "temporary residents in America"), or sometimes simply "gyopo" for short. Sobriquets for Korea In traditional Korean culture, as well as in the cultural tradition of East Asia, the land of Korea has assumed a number of sobriquets over the centuries, including: - 계림 (鷄林) Gyerim, "Rooster Forest", in reference to an early name for Silla. - 군자지국 (君子之國) Gunjaji-guk, or "Land of Scholarly Gentlemen". - 금수강산 (錦繡江山) Geumsu gangsan, "Land of Embroidered (or Splendid) Rivers and Mountains". - 단국 (檀國) Danguk, "Country of Dangun". - 대동 (大東) Daedong, "Great East". - 동국 (東國) Dongguk, "Eastern Country". - 동방 (東邦) Dongbang, literally "an Eastern Country" referring to Korea. 
- 동방예의지국 (東方禮義之國, 東方禮儀之國) Dongbang yeuiji-guk, "Eastern Country of Courtesy". - 동야 (東野) Dongya, "Eastern Plains". - 동이 (東夷) Dong-yi, or "Eastern Foreigners". - 구이 (九夷) Gu-yi, "Nine-yi", refers to ancient tribes in the Korean peninsula. - 동토 (東土) Dongto, "Eastern Land". - 배달 (倍達) Baedal, an ancient reference to Korea. - 백의민족 (白衣民族) Baeguiminjok, "The white-clad people". - 삼천리 (三千里) Three-thousand Ri, a reference to the length traditionally attributed to the country from its northern to southern tips plus eastern to western tips. - 소중화 (小中華) Sojunghwa, "Small China" or "Little Sinocentrism" was used by the Joseon Court. It is nowadays considered degrading and is not used. - 아사달 (阿斯達) Asadal, apparently an Old Korean term for Joseon. - 청구 (靑丘) Cheonggu, or "Azure Hills". The color Azure is associated with the East. - 팔도강산 (八道江山) Paldo gangsan, "Rivers and Mountains of the Eight Provinces", referring to the traditional eight provinces of Korea. - 근화향 (槿花鄕) Geunhwahyang, "Country of Mugunghwa" refer to Silla Kingdom. - 근역 (槿域) Geunyeok, "Hibiscus Territory", or Land of Hibiscus - 삼한 (三韓) Samhan, or "Three Hans", refers to Samhan confederacy that ruled Southern Korea. - 해동 (海東) Haedong, "East of the Sea" (here being the West Sea separating from Korea). - 해동삼국 (海東三國) Haedong samguk, "Three Kingdoms East of the Sea" refers to Three Kingdoms of Korea - 해동성국 (海東盛國) Haedong seongguk, literally "Flourishing Eastern Sea Country", historically refers to Balhae Kingdom of North-South period. - 진국 (震國,振國) Jinguk, "Shock Country", old name of Balhae Kingdom. - 진역 (震域) Jinyeok, "Eastern Domain". - 진단 (震檀,震壇) Jindan, "Eastern Country of Dangun". - 진국 (辰國) Jinguk, "Country of Early Morning", refer to the Jin state of Gojoseon period. - Kyu Chull Kim (8 March 2012). Rootless: A Chronicle of My Life Journey. AuthorHouse. p. 128. ISBN 978-1-4685-5891-3. Retrieved 19 September 2013. - [땅이름] 태백산과 아사달 / 허재영 (Korean) - Korea原名Corea? 美國改的名. United Daily News website. 5 July 2008. Retrieved 28 March 2014. (Chinese) - Actually Republic is 共和國 공화국 (″Mutually peaceful country″) as can be seen in the names of China and North Korea but Taiwan and South Korea coined the latter 民國 민국 - The North Korean Revolution, 1945–1950 By Charles K. Armstrong - Shane Green, Treaty plan could end Korean War, The Age, November 6, 2003 - Tokyo Shimbun, December 31, 2002 - Asahi, Mainichi, and Nikkei - In the program, however, teachers avoid the name Hangurugo, by always saying this language. They would say, for instance, "In this language, Annyeong haseyo means 'Hello' ". - Barbara Demick. "Breaking the occupation spell: Some Koreans see putdown in letter change in name." Boston Globe. 18 September 2003. Retrieved 5 July 2008. - "Korea versus Corea". 14 May 2005. Retrieved 12 November 2013. - Korea from around 1913 using the spelling "Corean" - H N Allen, MD Korean Tales: Being a Collection of Stories Translated from the Korean Folk Lore. New York: G. P. Putnam's Sons, 1889. - "Korea in the White City: Korea at the World's Columbian Exhibition (1893)." Transactions of the Korea Branch of the Royal Asiatic Society 77 (2002), 27. - KSS-Korbase's Korean Stamp Issuance Schedules - Commonwealth of Independent States Report, 1996. - 재외동포현황 Current Status of Overseas Compatriots, South Korea: Ministry of Foreign Affairs and Trade, 2009, retrieved 2009-05-21 - "The Korean Ethnic Group", China.org.cn, 2005-06-21, retrieved 2009-02-06 - Huang, Chun-chieh (2014). Humanism in East Asian Confucian Contexts. Verlag. p. 54. ISBN 9783839415542. 
Retrieved 23 July 2015. - Ancient History of the Manchuria By Lee Mosol, MD, MPH
https://en.wikipedia.org/wiki/Names_of_Korea
4.46875
The Treaty of Guadalupe Hidalgo, which brought an official end to the Mexican-American War (1846-1848) was signed on February 2, 1848, at Guadalupe Hidalgo, a city north of the capital where the Mexican government had fled with the advance of U.S. forces. To explore the circumstances that led to this war with Mexico, visit, "Lincoln's Spot Resolutions." With the defeat of its army and the fall of the capital, Mexico City, in September 1847 the Mexican government surrendered to the United States and entered into negotiations to end the war. The peace talks were negotiated by Nicholas Trist, chief clerk of the State Department, who had accompanied General Winfield Scott as a diplomat and President Polk's representative. Trist and General Scott, after two previous unsuccessful attempts to negotiate a treaty with Santa Anna, determined that the only way to deal with Mexico was as a conquered enemy. Nicholas Trist negotiated with a special commission representing the collapsed government led by Don Bernardo Couto, Don Miguel Atristain, and Don Luis Gonzaga Cuevas of Mexico. In The Mexican War, author Otis Singletary states that President Polk had recalled Trist under the belief that negotiations would be carried out with a Mexican delegation in Washington. In the six weeks it took to deliver Polk's message, Trist had received word that the Mexican government had named its special commission to negotiate. Against the president's recall, Trist determined that Washington did not understand the situation in Mexico and negotiated the peace treaty in defiance of the president. In a December 4, 1847, letter to his wife, he wrote, "Knowing it to be the very last chance and impressed with the dreadful consequences to our country which cannot fail to attend the loss of that chance, I decided today at noon to attempt to make a treaty; the decision is altogether my own." In Defiant Peacemaker: Nicholas Trist in the Mexican War, author Wallace Ohrt described Trist as uncompromising in his belief that justice could be served only by Mexico's full surrender, including surrender of territory. Ignoring the president's recall command with the full knowledge that his defiance would cost him his career, Trist chose to adhere to his own principles and negotiate a treaty in violation of his instructions. His stand made him briefly a very controversial figure in the United States. Under the terms of the treaty negotiated by Trist, Mexico ceded to the United States Upper California and New Mexico. This was known as the Mexican Cession and included present-day Arizona and New Mexico and parts of Utah, Nevada, and Colorado (see Article V of the treaty). Mexico relinquished all claims to Texas and recognized the Rio Grande as the southern boundary with the United States (see Article V). The United States paid Mexico $15,000,000 to compensate for war-related damage to Mexican property (see Article XII of the treaty) and agreed to pay American citizens debts owed to them by the Mexican government (see Article XV). Other provisions included protection of property and civil rights of Mexican nationals living within the new boundaries of the United States (see Articles VIII and IX), the promise of the United States to police its boundaries (see Article XI), and compulsory arbitration of future disputes between the two countries (see Article XXI). Trist sent a copy to Washington by the fastest means available, forcing Polk to decide whether or not to repudiate the highly satisfactory handiwork of his discredited subordinate. 
Polk chose to forward the treaty to the Senate. When the Senate reluctantly ratified the treaty (by a vote of 34 to 14) on March 10, 1848, it deleted Article X guaranteeing the protection of Mexican land grants. Following the ratification, U.S. troops were removed from the Mexican capital. To carry the treaty into effect, commissioner Colonel John Weller and surveyor Andrew Grey were appointed by the United States government, and General Pedro Conde and Sr. Jose Illarregui were appointed by the Mexican government, to survey and set the boundary. A subsequent treaty of December 30, 1853, altered the border from the initial one by adding 47 more boundary markers to the original six. Of the 53 markers, the majority were rude piles of stones; a few were of durable character with proper inscriptions. Over time, markers were moved or destroyed, resulting in two subsequent conventions (1882 and 1889) between the two countries to more clearly define the boundaries. Photographers were brought in to document the location of the markers. These photographs are in Record Group 77, Records of the Office of the Chief of Engineers, in the National Archives. An example of one of these photographs, taken in the 1890s, is available online through the Archival Research Catalog (ARC) database, identifier: 519681.
National Archives and Records Administration, General Records of the United States, Record Group 11, ARC Identifier: 299809
http://www.roebuckclasses.com/201/regional/treatyguadalupehildalgo.htm
4.21875
Today’s blog post comes from National Archives social media intern Anna Fitzpatrick. Nine months before President Lincoln signed the Emancipation Proclamation, he signed a bill on April 16, 1862, that ended slavery in the District of Columbia. The act finally concluded many years of disagreements over ending ”the national shame” of slavery in the nation’s capital. The law provided for immediate emancipation, compensation to loyal Unionist masters of up to $300 for each freed slave, voluntary colonization of former slaves to colonies outside the United States, and payments of up to $100 to each person choosing emigration. Although this three-way approach of immediate emancipation, compensation, and colonization did not serve as a model for the future, it pointed toward slavery’s death. Emancipation was greeted with great joy by the District’s African American community. The white population of DC took advantage of the act’s promise of compensation. One month after the act was issued, Margaret Barber presented a claim to the Board of Commissioners for the Emancipation of Slaves in the District of Columbia, saying that she wanted to be compensated by the Federal Government, which had freed her 34 slaves. Margaret Barber estimated that her slaves were worth a total of $23,400. On June 16, 1862, slave trader Bernard Campbell examined 28 of Barber’s slaves to assess their value for the Commission. In the end, Barber received $9,351.30 in compensation for their emancipation. But five of the 34 did not await the Commission’s deliberations. ”[S]ince the United States troops came here,” said Barber, they had ”absented themselves and went off and are believed still to be in some of the Companies and in their service.” Although the final Emancipation Proclamation did not allow for compensation such as Margaret Barber received, this earlier act proved to be an important step towards the final emancipation of the slaves. Less than a year later, on New Year’s Day of 1863, President Lincoln signed the Emancipation Proclamation into effect, and two years later the 13th Amendment finished the process of freeing all the slaves. To learn more about the compensation of owners and the personal information you can find in the commission records at the National Archives, you can read “Slavery and Emancipation in the Nation’s Capital: Using Federal Records to Explore the Lives of African American Ancestors” by Damani Davis. The story of Margaret Barber’s claims for compensation and the District of Columbia Emancipation Act is based on the article ”Teaching with Online Primary Sources: Documents from the National Archives: The Demise of Slavery in the District of Columbia, April 16, 1862,” written by Michael Hussey. It’s also featured in The Meaning and Making of Emancipation, an eBook created by the National Archives. Be sure to stop by the National Archives for the special display of the original document from Sunday, December 30, to Tuesday, January 1. The commemoration will include extended viewing hours, inspirational music, a dramatic reading of the Emancipation Proclamation, and family activities and entertainment for all ages.
http://prologue.blogs.archives.gov/2012/12/26/emancipation-proclamation-freedom-in-washington-dc/
4.03125
M.Ed., Stanford University Winner of multiple teaching awards Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards. M.Ed., Stanford University Winner of multiple teaching awards Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards. Although we don't like to think about it that much, we, like everything else in this universe, that's made up of or not made up of energy, we're made out of molecules. Scientists realised this a long time ago, but still wanted to think we're special. So they decided that we have 'organic molecules' while things like dirt, air, they're made out of metal, those are inorganic and we're special. Oops! Turns out we're not. We still have to follow the same rules as all those inorganic molecules. But we're still kind of stuck with this nomenclature so that we still talk about organic Chemistry and inorganic Chemistry. And we also wind up with some weirdly arbitrary definitions where things like carbon dioxide gas, even though it has got carbon in it, it's declared just an inorganic molecule. Because it turned out that all those organic molecules, what's the basic thing behind them, is that they're made out of carbon. And it's because carbon can form these long chains or these rings that give the great diversity of the organic molecules. So you really need to know about organic molecules for the AP biology test. Both because, they will ask questions on it and because their properties underlie a lot of the basic properties or things that go on in Biology like proteins, specificity or membrane function. So I'm going to begin with the simplest of the organic molecules, the carbohydrates and continue then with the fats and lipids. Third, I'll go through the proteins, very important topic and finish off with the nucleic acids. As I go through these groups of organic molecules, I'll begin by mentioning some of the common examples to give you a context for what I'm talking about. Then I'll describe the monomers, the basic building blocks that can be joined together to form the larger molecules called polymers. Then I'll finish off by going through some of the major functions of each of the groups of organic molecules. So to begin with the carbohydrates, you can probably already hopefully mention off some common examples of carbohydrates. These are things like glucose, fructose, lactose, cellulose, starch. You may notice that a lot of them all end with the word 'ose'. That's one of the tips that you can use to fake you're way through some parts of the AP Biology exam. Because if you see something that ends with 'ose' then it's a carbohydrate. You don't even need to know what it is. If you see the word amylose, it's a carbohydrate. So let's look at the monomers, the basic building blocks that are used to build the rest of the carbohydrates. The monomers of carbohydrates are called monosaccharides, where mono means one, saccha means sugar. So this means one sugar, or simple sugars. The monosaccharides of the carbohydrates are typically groups of three to eight carbons joined together with a bunch of hydrogens, and OH groups, or hydroxyl groups they're called. So we can see here an example of ribose which is a five carbon sugar, and glucose which is a six carbon sugar. Now these five carbon or six carbon sugars, can very often not only be in these what are called straight chains, but they can manoeuvre and join to form ring structures. Let's take a look at what happens with glucose. 
So you can see glucose has a group of six carbons in a linear form or straight chain form, or forming this ring structure, the hexagon shape. So if you are looking at an AP-Bio multiple choice test question and you see this hexagon shape, or pentagon shape for a five carbon sugar, you know it's a monosaccharide, a carbohydrate. So that's a big clue, just look for these multiple hexagons formed in long chains. When you're trying to form a disaccharide, that's when you put two of them together. And here we see glucose plus fructose joining to form sucrose, a disaccharide. You know 'di' means two, di-sugar, two sugars put together. So glucose plus a fructose forms a sucrose. Now you'll notice to get them to join, we need to pull off a hydroxyl group, an OH from one, and a hydrogen from the other, and that forms water. And we're left with this oxygen here forming the bond between those two. Because we're removing water to do this, this is called dehydration synthesis. So this is putting things together, 'de' remove, hydro-water, dehydration synthesis. Can you guess what would happen if we were breaking it apart? You've got it, hydro means water. You may recall in other videos I've talked about 'lysis' meaning to split or break. Hydrolysis is if we ran this backwards, where we split this in two, consuming a water. All of us, you, me, everybody, every human, has the enzyme that can break this bond to do that hydrolysis. There are some other sugars: if instead of fructose we put a glucose together with a monosaccharide called galactose, we would form a molecule called lactose. You may have heard of that. Lactose is a sugar commonly found in milk. Again you need an enzyme to break that because while monosaccharides can be absorbed easily in your small intestine, disaccharides are too large to fit through the walls of the small intestine. So if you don't have that enzyme to break lactose, that lactose winds up in your large intestine. You may have heard of people who are lactose intolerant, which means they lack the enzyme, the lactase enzyme, that is needed to break up lactose. So, some poor guy wanted to just eat some ice cream and later on he winds up having issues because instead of him absorbing the lactose sugar, all the wee beasties that are living inside of his large intestine go 'uh yummy!' and they start going to town and he starts having issues. Now what if we put a whole bunch of them together, a whole bunch of molecules all joined together? You know from Math, poly means many, like polygons. Well a whole bunch of sugars put together is called a polysaccharide. Now, a couple of common polysaccharides include starch, which is made of a group of glucoses joined together. Well there is another one called cellulose, which again is made out of a bunch of glucoses joined together. The difference is exactly how the glucoses are joined together. Notice here how this CH2OH group is always above the plane? Here it alternates, left right. We can break down starch. So if I sit here... my body can break down the starches that make up things like apples or potatoes, because I've got an enzyme called amylase. The proper name for starch is amylose. But to break cellulose, I would need a different enzyme. And it turns out almost nothing on this planet has the enzyme to break apart cellulose. So if I sat here and tried to do this, instead of getting yummy nutrition, I'd get splinters in places I don't want splinters. Who does have it? 
It's a few bacteria and a few single celled creatures called protista. You may realise there are lots of things that eat cellulose, that eat wood. Well, what are those creatures? They are things like cows and termites. How do they get any nutrition from it? Inside their guts, they actually have colonies of these bacteria or these protista breaking down the cellulose for them and sharing the glucose that's coming out of that. Now, another example of a polysaccharide is a special polysaccharide called chitin, which is used only in fungi and in arthropods to make up their exoskeleton shells. And that leads into one of the major functions of carbohydrates. Carbohydrates are used for a lot of cellular structures. The cellulose I mentioned before forms the structural outer cell wall of plant cells, while chitin is used to make the cell walls of fungi. It's also used to make the shells, like I said, of arthropods, things like the lobsters or bugs. The other major function of the carbohydrates is energy storage, whether it is starch or the simpler sugars glucose or the nice sugary sweet sucrose that we love on our cereal in the morning. Carbohydrates aren't the only energy storage molecules, obviously fats are a big player in that. The lipids are a big group, ranging from the things that we may know, like the triglycerides, which are the fats we're all familiar with, and some of us are a little bit too familiar. To things like testosterone, which is a steroid hormone that has been screwing with your body since puberty. To the not so familiar, phospholipids. Now, as a group, unfortunately the fats and lipids don't have a common monomer like the carbohydrates do with the monosaccharides or the proteins do with the amino acids. So unfortunately, they are all grouped together not because of some common structure, but because of a common behavior. They're all the rejects. That is because they don't dissolve well in water. They're called hydrophobic and that's because they don't have the ability to do what's called hydrogen bonding. Let's take a look first at one of the two major groups, which are the triglycerides and phospholipids, and they have a common structure that we can see here. You see this three carbon molecule there, that's a molecule called glycerol. The triglycerides, you can hear the 'glyc', and the phospholipids share this carbon-carbon-carbon chain. Now in the triglycerides, attached to each of the carbons in the glycerol you'll have these long chains of carbons with hydrogens on them. These are called fatty acids. Whereas with the phospholipids, you'll see the wiggly fatty acids here. But on that third carbon, instead of having a fatty acid, you'll have a phosphate ion attached to it. Now, notice how each of these is kind of lined up and this little wiggly line there represents one of these carbon chains. Notice how this one is bent; that's because it's what's known as an unsaturated fatty acid, while this straight one is a saturated fatty acid. You may have seen food labels where they have to list the saturated versus unsaturated fats. And there are two kinds of unsaturated fats. There are trans fatty acids and cis fatty acids. All you need to know really on that is: see how this one is bent? That's a cis fatty acid, and that's good. The trans fatty acids, on the other hand, stay in a straight line like these guys, and they can easily make big stacks of triglycerides in your bloodstream, clogging it. 
The cis fatty acids, with their bend, can't form big clumps. So the cis fatty acids, those are good because they can help dissolve the big chunks of fat into smaller chunks of fat. A little bit of a health issue there. The other big group, or structural group, of the fats and lipids are the steroid molecules. They have lots of different things attached to them, but all the steroids share this 1, 2, 3, 4 ring structure. And whatever is attached to the outside of that makes it different from one to the next. In your body, all of the steroid hormones are made using the steroid core that you get from the cholesterol in your diet. So we always talk about cholesterol being bad. It's bad at high levels, but if you didn't have a certain level of cholesterol in your diet, you couldn't make testosterone and oestrogen and all the other steroid hormones that you need in order to maintain homeostasis and be healthy. So what are the functions of fats and lipids? Well they are pretty wide ranging. They range from, obviously, the energy storage of the triglycerides, the fats. But fat does more than just store energy. It also provides insulation. Both against heat loss and electrical insulation in your brain provided by a special fat called myelin. They also help protect and cushion against shock, whether it's the fat in your derrière or the pads of fat behind your eyes. So that when you're going jogging your eyeballs aren't bouncing around and popping. There are also of course those steroid hormones testosterone and oestrogen that I mentioned before, forming signalling compounds in your body. And then those phospholipids that I mentioned previously, they form the cell membrane. And it's their chemical behavior that gives the cell membrane a lot of its properties. The last fat that you may see mentioned on the AP exam would be the waxes. Again, because the waxes are hydrophobic, they prevent the movement of water across them. And that's why we have waxes in our ears to help prevent our eardrums from drying out, or plants will have a waxy cuticle. That's a word to remember, to get that extra little point there in the essay about leaf structure. The waxy cuticle prevents water loss at the surface of leaves. So while carbohydrates and fats are really good at storing energy and they form some important structural parts of the cell, it's the proteins that are the real workhorses of the cell. What are some proteins you may have heard of? Well there is keratin, the stuff that makes up fingernails or hair. There are enzymes like the lactase enzyme that we mentioned before, and amylase. There is myosin; it's one of the two major contractile proteins found in muscles. So these are some common proteins, or another one that you may see on the AP exam will be albumen, which is that egg white protein. So what are the basic building blocks or monomers of proteins? They are structures called amino acids. Let's take a look at one amino acid. All amino acids, and there are roughly 20 of them, share a common structure. They have a central carbon that's often called the alpha carbon. On one end, you'll have an amino group. And in some textbooks, depending on the pH that the solution is at, you may see 3 hydrogens instead of the two here. On the other end you'll have the carboxyl group, which again may have lost that hydrogen depending on the pH. Off the alpha carbon, you'll find a single hydrogen by itself. And then up here you'll find one of 20 different possible R-groups. 
I'm not going to have you take a look at all of them, you can easily find those in your textbook. But let's take a quick look at some of the various R-groups. And it's the R-groups that make each amino acid unique. Now I've often thought of amino acids kind of like train cars. There's box cars, flat cars, passenger cars, dinning cars. But all train cars share a common structure the wheel base, the axils and etcetera. And that's kind of like the amino carboxyl group with the alpha carbon and it's hydrogen. What makes each train car unique, is what on top whether it's a group of carbons, that's a passenger car or if it's a big flat platform with some tie downs that's a flat car. And it's these R-groups here that make each one unique. To join them together, much like how you join train cars together, what you do is you'll take an OH group off a carboxyl of one peptide, or amino acid. Peptide is an old name for amino acids. And you'll rip off a hydrogen from the amino of the next amino acid. And by doing that, we pull out a water, and now we've joined together our two amino acids. They call the bond that holds the two amino acids together, between the amino of one, and the carboxyl group of the other, they call that a peptide bond. Again, because the old name of the amino acids was peptides, that's also why an amino acid chain which is a group these all hook together, is sometimes called a polypeptide. Now, when scientists were first starting to study proteins, they run into a problem. Remember those R groups are extremely variable, and they have different chemistries. Some of them are negatively charged, some of them are positively charged. Some of them are non polar or hydrophobic R groups. And so they all start to form up into these really complicated tangled up masses. And initially when scientists were first studying this, they just couldn't make sense of it. It'd be kind of like if I handed one of the Lord of the Rings books to a kindergarten. So what I'm going to do is, I'm going to take you through the different levels of structure, because initially, the only thing that scientists could figure out was, the different amino acids in the chain that makes up the protein. They couldn't figure out anything beyond that. Just like the kindergarten over the Lord of the Rings book, all he could figure out was the sequence of letters. If you asked him what's it about? No idea. So let's take a quick look at a video from YouTube that takes us all through four layers of a structure of a protein. Here is that video I was talking about at YouTube. Let's make it bigger so it's easier to see. Now all these little balls here, those are amino acids. So let's go ahead and we'll put them together, and that's what a ribosome does. Is it builds a peptide bond between each one. Now let's pause it here. This sequence of amino acids and these are all just the abbreviations of the real amino acids names. If I went along and I rattled off each amino acid in sequence, that's what's called primary structure or one, the first level of structure. That's the simplest thing. That's again like that kindergarten who's saying, "The first letter is T, the next letter is H, then E then a space." Again ,it doesn't tell you what the protein can do but it does tell you some information. If we let it continue though, let's go ahead and start it up, the video again. You'll see that the R-groups of the amino acids start to interact with each other, and they start bending and whopping portions of it in space. 
Well let's pause it here. This is called the secondary structure. The secondary structure is: what's going on? What are the interactions? Well I see this part of the chain here is in parallel with that part of the chain over there, and again over here and here. And then this area here starts to spiral up. After some hard work, scientists started being able to figure out, "Okay, we know that all these areas here, all the R-groups, are all, say, negative and positive." They'll start curling towards each other and that may form a spiral. That's called the secondary or second level of structure. And that's kind of like if a third grader read Lord of the Rings. He might say, "Look, in this chapter they are fighting, oh it's cool, and then in this chapter Frodo's whining again." Again he doesn't really know about the big grand sweeping theme of the book that he's reading, but he can tell what's going on in small sections. Let's start that up again and we'll look at the third level or the tertiary structure of a protein. So again this is the alpha helix, that's called a beta pleated sheet, that parallel ripple effect. Now you can see some of these lines here represent the hydrogen bonds, and other forms of bonds, that are helping hold it in its shape. Let's pause it here, the tertiary structure. The tertiary or third level of structure is the 3D shape of the protein. That's kind of like: what the heck happened in that novel? If you read Fellowship of the Rings, then you know what happened. And that's something that a high school kid can do with a novel. They can get a great idea of what's going on in the book. This took scientists a lot of time to figure out. And it's only after investing years or decades of effort that they could figure out the 3D or tertiary shape of a protein. Nowadays however, with modern computers, this kind of work can be done pretty quickly. Instead of a year or a decade, it could take as short as a couple of months, or even faster. Now with a lot of proteins, that's it. You've figured out its three dimensional shape, and that tells you what kind of molecules it can fit to, how it interacts with other things. But some proteins are actually made out of more than one chain of amino acids. And again we can still see here's the chain. Now, some proteins are made out of multiple chains. And here we see those ones coming in. Let's pause it real quick before it goes to the end. And we can see the yellow chain has to fit onto this blue guy, the red one has to go in there. And that's, again if I go to the Lord of the Rings, that is not a single novel. It's actually a trilogy. And if you just read Fellowship of the Rings, and you thought, "That's it? You know, what the heck, they are just going off in different directions. What's up with that?" You need to know how the Fellowship of the Rings fits in amongst the other two books for you to really understand what's going on. So again, there are the four layers of structure within a protein. There is the primary structure, that's the sequence of amino acids from start to end of a chain. As you relax its tension, you'll find that some parts will spiral, other parts will bend, and that's the secondary level of structure. When you finally let go of the chain, it'll wrap itself into some complicated shape, and that's the tertiary structure. If that chain happens to be put together with other polypeptide, or amino acid, chains, that's what is called the quaternary structure. So now you know how proteins are put together. 
What are some of their functions? It's actually a lot easier to just ask what don't they do. Some proteins obviously make up those enzymes that I've mentioned before, lactase enzyme, amylase enzyme there is a very important enzyme in photosynthesis called RuBisCo enzyme or Ribulose or bisphosphate carboxylase. They make up structural things like I mentioned hair is made out of keratin. They form important components of the cytoskeleton. They form protein hormones, some of the signals that are used between your various cells. They form channels in the cell membrane to allow stuff to go in or out of the cell. They form antibodies to help your immune system. They form receptor proteins, also embedded in the membrane, to help your cells communicate one to the other. So that gives you a sense of the broad range of what proteins can do. So how do your cells know how to build those incredibly complex proteins? That's where nucleic acids come in. Now you may have heard of some of the nucleic acids such as DNA of course, but there is also RNA and ATP. The monomers of nucleic acids are molecules called nucleotides. Let's take a look. All nucleotides share a common structure. They have a phoshate group attached to a central five carbon sugar. And then they have some kind of nitrogenous or nitrogen carrying base over here. And that five carbon sugar can be deoxyribose in the case of DNA, or ribose in RNA. Now in another separate episode, I went far more in depth into the structure of DNA. And I recommend that you watch it if you're kind of unclear on that. But I want to make sure that we go over the basics of it. Now then nitrogenous base that I mentioned before comes in four varieties. With DNA, if you take a look at that, you'll see that the four kinds are thymine, cytosine. You notice each of those only has a single ring, whereas adenine and guanine they're double ring structures. Those are called purines these are called pyrimidines. With DNA and RNA, there is pretty much the same bases. The one difference is instead of thymine in RNA, they use a molecule called uracil. How do you remember that? Just think, 'You are correct'. Again in case you missed that, that means uracil is in RNA that's the correct answer. Now to join the nucleotides again, you do that dehydration synthesis process. And what will happen is, you'll wind up forming long chains. What you're doing is you're popping off an OH group from the corner of the sugar here, and you're joining it to a hydrogen that you rip off of the phosphate of the next nucleotide. So you'll start to form a long strand with the sugars and the phosphates forming the backbone of this and the nitrogenous bases sticking out like this. With RNA, that's it. RNA is generally single stranded molecule. But DNA, of course you've heard of the double helix. With DNA, you actually form a second strand. Notice how the pentagon is now pointing downwards, that's called anti-parallel. Mention that during an AP Biology essay on the structure of the DNA, and you got yourself another point. You'll see that between adenine and thymine, you'll see two hydrogen bonds. Whereas between guanine and cytosine, those dash lines are the three hydrogen bonds that form between them. And if you can remember, two hydrogen bonds between A and T, three hydrogen bonds between guanine and cytosine, you got yourself another point. So it's always A to T, G to C. Remember that and you're pretty good to go for DNA structure. Now what are the functions of the nucleic acids. 
Well everybody hopefully knows that DNA is the holder of your genetic information and RNA helps in the transmission of that genetic, or inheritance, information. But the one thing that a lot of people forget is that ATP, the energy currency of the cell, is also a nucleotide. How is it different? It just has three phosphates in it instead of the normal one in the other nucleotides. Here's a memory trick that'll help you learn these four different kinds of organic molecules. And it's a memory trick that you can also use to help learn any kind of categorical knowledge. It's taking advantage of a form of memory that you have, that you use all the time. It's the one that you use to remember, for example, where did you park your car? Very few of you will pull out your flashcards and cram to remember that. No. You just walk into the mall and an hour later, you walk out and there you are at your car. What you do is you visualise each of these different categories in a different location, and when you study them, turn and look in that area and visualize them being there. Then during a test, all you have to do is just turn and look. Now make sure that you're not memorizing your location on your partner's or tablemate's paper, because your teacher may not like that. So what you do is, look over here and think that's where those carbohydrates are, those energy storing and structural molecules made out of monosaccharides getting joined together into disaccharides, or even longer chains called polysaccharides. Next to them is that hydrophobic reject group of the organic molecules called the fats and lipids. Those include, remember, the triglycerides and phospholipids that use that glycerol and fatty acid stuff to make them up, which are involved in things like making fat or the phospholipids in the membrane. Or the steroid core fats that are used for things like hormones, and the waxes. Over here, we have the proteins. Now remember, the proteins are the ones with that incredibly complex structure, where you hook the individual monomers, called amino acids, together to form long chains. The simple structure of the chain is called the primary structure. As you let it begin to coil up a little bit, that's its secondary structure; its three dimensional shape is its tertiary structure. And if an actual protein is made out of multiple chains, how those multiple chains fit together is called its quaternary structure. And again those proteins, they form enzymes, they form membrane channels and hormones. They make all sorts of things in the cell. The last category way over here is the nucleic acids: DNA and RNA. And you recall of course, they are made of nucleotides joined together in strands, with DNA requiring two strands to form the double helix. You use these tricks and you'll be better able to put this stuff together. And remember, it's not how hard you study, it's how smart you study. You put your time into being efficient, and that allows you to spend the time doing the things you really want to do, like watching Desperate Housewives.
https://www.brightstorm.com/test-prep/ap-biology/ap-biology-videos/organic-molecules/
4.03125
- For the first definition, a monomial, also called power product, is a product of powers of variables with nonnegative integer exponents, or, in other words, a product of variables, possibly with repetitions. The constant 1 is a monomial, being equal to the empty product and to $x^0$ for any variable $x$. If only a single variable $x$ is considered, this means that a monomial is either 1 or a power $x^n$ of $x$, with $n$ a positive integer. If several variables are considered, say $x$, $y$, $z$, then each can be given an exponent, so that any monomial is of the form $x^a y^b z^c$ with $a$, $b$, $c$ non-negative integers (taking note that any exponent 0 makes the corresponding factor equal to 1).
- For the second definition, a monomial is a monomial in the first sense multiplied by a nonzero constant, called the coefficient of the monomial. A monomial in the first sense is also a monomial in the second sense, because the multiplication by 1 is allowed. For example, in this interpretation $-7x^5$ and $(3-4i)x^4yz^{13}$ are monomials (in the second example, the variables are $x$, $y$, $z$ and the coefficient is a complex number).
Since the word "monomial", as well as the word "polynomial", comes from the late Latin word "binomium" (binomial), by changing the prefix "bi" (two in Latin), a monomial should theoretically be called a "mononomial". "Monomial" is a syncope of "mononomial".
Comparison of the two definitions
With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication. Both uses of this notion can be found, and in many cases the distinction is simply ignored, see for instance examples for the first and second meaning. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often definitely needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring, or a monomial ordering of that basis. An argument in favor of the first meaning is also that no obvious other notion is available to designate these values (the term power product is in use, in particular when monomial is used with the first meaning, but it does not make the absence of constants clear either), while the notion term of a polynomial unambiguously coincides with the second meaning of monomial. The remainder of this article assumes the first meaning of "monomial".
The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, called the monomial basis - a fact of constant implicit use in mathematics.
The number of monomials of degree $d$ in $n$ variables is the number of multicombinations of $d$ elements chosen among the $n$ variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient $\left(\!\!\binom{n}{d}\!\!\right)$. This expression can also be given in the form of a binomial coefficient, as a polynomial expression in $d$, or using a rising factorial power of $d+1$:
$$\left(\!\!\binom{n}{d}\!\!\right) = \binom{n+d-1}{d} = \frac{(d+1)(d+2)\cdots(d+n-1)}{(n-1)!} = \frac{1}{(n-1)!}(d+1)^{\overline{n-1}}.$$
The latter forms are particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed $n$, the number of monomials of degree $d$ is a polynomial expression in $d$ of degree $n-1$ with leading coefficient $\frac{1}{(n-1)!}$. For example, the number of monomials in three variables ($n=3$) of degree $d$ is $\binom{d+2}{2} = \frac{(d+1)(d+2)}{2}$; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers. 
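As an illustrative aside, the count can be checked by direct enumeration: a power product of degree $d$ is just a multiset of $d$ variables. The short Python sketch below does this and compares with the closed form above; the variable names x1, x2, ... are arbitrary placeholders chosen for the example.

```python
from itertools import combinations_with_replacement
from math import comb

def count_monomials(n_vars: int, degree: int) -> int:
    """Count monomials (power products) of the given degree in n_vars variables
    by listing every multiset of variables of that size."""
    variables = [f"x{i}" for i in range(1, n_vars + 1)]
    return sum(1 for _ in combinations_with_replacement(variables, degree))

# Compare the enumeration with the closed form C(n + d - 1, d) from the text, for n = 3.
for d in range(5):
    assert count_monomials(3, d) == comb(3 + d - 1, d)
    print(d, comb(3 + d - 1, d))  # prints 1, 3, 6, 10, 15 -- the triangular numbers
```

For $n = 3$ and $d = 2$, for instance, the six monomials enumerated are $x_1^2$, $x_1x_2$, $x_1x_3$, $x_2^2$, $x_2x_3$, $x_3^2$, matching $\binom{4}{2} = 6$.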
The Hilbert series is a compact way to express the number of monomials of a given degree: the number of monomials of degree $d$ in $n$ variables is the coefficient of degree $d$ of the formal power series expansion of
$$\frac{1}{(1-t)^n}.$$
The number of monomials of degree at most $d$ in $n$ variables is $\binom{n+d}{n} = \binom{n+d}{d}$. This follows from the one to one correspondence between the monomials of degree $d$ in $n+1$ variables and the monomials of degree at most $d$ in $n$ variables, which consists in substituting by 1 the extra variable.
Notation for monomials is constantly required in fields like partial differential equations. If the variables being used form an indexed family like $x_1$, $x_2$, $x_3$, ..., then multi-index notation is helpful: if we write $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ we can define $x^{\alpha} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$ and save a great deal of space.
The degree of a monomial is defined as the sum of all the exponents of the variables, including the implicit exponents of 1 for the variables which appear without exponent; e.g., in the example of the previous section, the degree of $x^a y^b z^c$ is $a+b+c$. The degree of $xyz^2$ is 1+1+2=4. The degree of a nonzero constant is 0. For example, the degree of -7 is 0. The degree of a monomial is sometimes called order, mainly in the context of series. It is also called total degree when it is needed to distinguish it from the degree in one of the variables.
Monomial degree is fundamental to the theory of univariate and multivariate polynomials. Explicitly, it is used to define the degree of a polynomial and the notion of homogeneous polynomial, as well as for graded monomial orderings used in formulating and computing Gröbner bases. Implicitly, it is used in grouping the terms of a Taylor series in several variables.
In algebraic geometry the varieties defined by monomial equations $x^{\alpha} = 0$ for some set of $\alpha$ have special properties of homogeneity. This can be phrased in the language of algebraic groups, in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices). This area is studied under the name of torus embeddings.
- Monomial representation
- Monomial matrix
- Homogeneous polynomial
- Homogeneous function
- Multilinear form
- Log-log plot
- Power law
https://en.wikipedia.org/wiki/Monomial
4.28125
Bullying is when one or more people repeatedly attempt to hurt, intimidate, or torment another person. Bullying can be either physical or emotional. Bullying is most common among youths and young adults, but it can also occur in adulthood. Bullying is a common concern for school-aged youths. It has potentially serious consequences. Physical bullying includes any attempts to cause harm to another person. Emotional attacks include name-calling, teasing, threatening, or publicly humiliating the victim. Both physical and emotional attacks can be either direct or indirect. Direct bullying involves an actual confrontation between the bully and victim. Indirect attacks include the spread of rumors or attempts to humiliate the victim when he or she is not present. Cyberbullying, or bullying that occurs online or in social media forums, is a form of indirect bullying. People who are bullied are often physically smaller or perceived as weaker than the bully. This weakness can be real or imagined. Other people who are bullied are targeted for their differences. They may have a disability, or have developed differently from their immediate peers. They may have a different sexual orientation, be of a different socio-economic class, or possess traits that others are jealous of. Bullying can occur between peers or between adults and youths. It can occur between people of the same gender or people of different genders. A victim of bullying often does nothing to cause the attacks. Bullying occurs when one person, the attacker, has some sort of power over his or her victim, and acts on it. Bullies may be popular and powerful in their social circles. Sometimes they are isolated and not accepted by their peers. Children with mental health conditions, little parental involvement, violent tendencies, aggressive personalities, or issues at home are more likely to bully others. A person may bully another in order to: - increase his or her self-esteem - feel powerful - get his or her way - get respect from others - become more popular - make others laugh - fit in Victims of bullying may experience significant physical and emotional stress. Some effects of bullying include: - hurt feelings - depression or anxiety - low self-esteem - poor performance in school - physical pain (headaches or stomachaches) Ongoing abuse can lead to long-term stress or fear. Some victims of bullying end up taking matters into their own hands with violence. Bullying can also lead to suicide. The effects of bullying can last well into adulthood. For adults, bullying in the workplace can lead to frequent missed days and poor work performance. Employers can attempt to stop bullying with new policies, training, education, or by determining the root cause of the bullying. Victims of bullying often don’t report the abuse to teachers or parents out of fear of embarrassment or reprisals. They often feel isolated. They may feel that they will not be believed. They might also be afraid of backlash from the bullies or rejection from classmates. Educators can prevent bullying by talking openly about issues related to respect, by looking for signs of bullying, and by making sure students are aware they can come to educators with problems. Educators should intervene in and mediate bullying situations. Parents should discuss concerns with children—behavioral changes can be a sign of bullying (either being bullied or being a bully) and, if applicable, with school authorities. 
To stop bullying of yourself or another person, it usually helps to inform a trusted adult. Victims can learn to stand up for themselves. This may cause them to be targeted at first, because the bully will not expect this change in behavior. Victims should remain confident, tell their bullies to stop, and remove themselves from the situation. They should not use violence or reciprocate bullying. People who see others being bullied can help them by standing up for them and telling a trusted adult about the incident.
http://www.healthline.com/health/bullying
4
How to teach division
So your child has mastered addition, subtraction and multiplication. Well done – of the basic numeric operations, there is only division left to learn.
Division for kids aged 5-6
If your child is aged 5-6 years then the focus is on using concrete materials to learn the concept of equal shares. This is usually easily learnt as most parents have familiarised their children with the language of ‘sharing’ through play experiences with other children. Ideas to use at home are:
- Equal pouring – fill a jug with water and let your child fill smaller cups/glasses with the same amount of water.
- Ask your child, while wrapping presents, to cut sticky tape or ribbon so that there are two lengths the same.
- Drawing games – lots of legs is a great one that can be done with drawing or with toothpicks and play dough. Show them 20 toothpicks and tell them you need to share the legs evenly between the monsters. Talk about what happens when the monsters have two legs, when they have 3, when they have 10.
Division for kids aged 7-8
Children aged 7-8 years are recognising the division sign and understand that division "undoes" the effects of multiplication just as subtraction "undoes", or is the reverse of, addition. Your child may also understand division as repeated subtraction. Try these games to reinforce the concepts:
- Animal paddocks – give your child an A4 piece of paper which has been divided into different sized segments. Give your child plastic animals and ask them to place the animals into paddocks so that they each have the same amount of space. This is working on division as well as a lead into fractions.
- Dividing food is always a strong motivator. When cutting a birthday cake or slicing a pizza, have children count the number of people and tell you how many pieces you will need for everyone to have an equal share.
- Pegging clothes – explain that you need help with hanging the washing. Each piece of clothing takes 2 pegs and you have 20 pegs – let them guess how many items they will be able to hang and then have a go! Alter this depending on how well they know their multiplication facts.
Division for kids aged 9-10
Children aged 9-10 years are relating their division facts, for example they know 36 divided by 4 is the same as halving 36 and halving again. They recognise the signs for short and long division and can list multiples and factors. They know that when dividing there is often a remainder and can explain why. Try these learning games to help with division:
- Dice games – take three dice (you can use numbers written on cards if you don't have dice to roll) and roll two. These two are multiplied to become the total. Then roll the 3rd dice and divide the total by this number. Model the number sentence with concrete materials or draw it if needed. Discuss why there are remainders.
- Real life situations – children of this age are usually getting pocket money. Discuss real life situations that involve money and remainders, for example "share $7 between 3 people". Alternatively, try questions such as "fifty eggs are packed into half dozen lots (groups of 6). How many cartons would a farmer need?"
Division for kids aged 11-12
For children aged 11-12 division becomes complex. Children are now doing long division, dividing a number by two- and three-digit numbers. Children are also learning to write remainders as fractions and decimals. 
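For example, 50 ÷ 6 = 8 remainder 2, and that remainder can be written as the fraction 2/6 (which simplifies to 1/3) or as a decimal, so 50 ÷ 6 = 8 1/3, or about 8.33.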
Further, children now have to apply their knowledge of division to problem solving.
- Value for money – when going shopping, ask kids to determine which item is the best value for money. Questions such as "which is the better value: 4 toilet rolls for $2.95 or 6 toilet rolls for $3.95?" are suitable for kids this age. (Dividing the price by the number of rolls gives roughly 74c a roll against roughly 66c a roll, so the bigger pack is the better buy.)
- Dividing with place value – use problems such as "On the way to school 4 children found a $50 note. They handed it in to the principal. They will get a share of the $50 if no one claims it after a week." Then ask: how much would each child get? How much would each child get if $5 was found? How much would each child get if 50c was found?
- Division webs – this is an alternative to worksheets and algorithms. Children create web patterns using three- or four-digit numbers. They draw the web with the divisor in the middle and the numbers to be divided around the web. To make this more difficult, you write in the numbers around the web and they find the common divisor in the centre.
- Averages – determining the average, whether it be the weather average for the week or their favourite cricket player's batting average, is a great way to practise division. To find the average, add the values together and divide by how many values there are; for example, scores of 12, 7 and 11 total 30, and 30 ÷ 3 gives an average of 10.
Helping your child does not have to involve worksheets, flashcards or expensive maths programs. It only requires patience, enthusiasm and links to real-life experiences.
Find more teaching tricks to inspire learning:
- Teaching kids to tell time
- Teaching left vs. right
- Tips for teaching addition
- Tips for teaching subtraction
- Tips for teaching multiplication
- Tips for teaching division
- The importance of music lessons
- Mathematical milestones for pre-kinder children
- Mathematical milestones for 5-6 year old children
- Mathematical milestones for 7-8 year old children
- Mathematical milestones for 9-10 year old children
- Mathematical milestones for 11-12 year old children
Find more articles about learning games:
- Reading games for fun
- Host your own spelling bee
- Learning games with Kidspot's spelling scrambler
- Handwriting with printable mazes
- Handwriting fun with dot-to-dots
- Fun teaching ideas to learn left from right
- Addition facts and learning games
- Subtraction learning games
- Multiplication facts and learning ideas
- How to teach division
- What cooking will teach our kids
http://www.kidspot.com.au/schoolzone/Maths-&-science-Learning-games-How-to-teach-division+4253+316+article.htm
4
The Asteraceae (also known by the alternate name Compositae), with approximately 1,620 genera and more than 23,600 species, is the largest family of flowering plants (Stevens, 2001). The family is distributed worldwide except for Antarctica but is especially diverse in the tropical and subtropical regions of North America, the Andes, eastern Brazil, southern Africa, the Mediterranean region, central Asia, and southwestern China. The majority of Asteraceae species are herbaceous, yet an important component of the family consists of shrubs or even trees, occurring primarily in the tropical regions of North and South America, Africa and Madagascar and on isolated islands in the Atlantic and Pacific Oceans. Many species in the sunflower family are ruderal and especially abundant in disturbed areas, but a significant number of them, especially in mountainous tropical regions, are narrow endemics. Because of the relentless habitat transformation precipitated by human expansion in montane tropical regions, a number of these species are in danger of extinction. The family contains several species that are important sources of cooking oils, sweetening agents, and tea infusions. Members of several genera of the family are well known for their horticultural value, are popular in gardens across the world, and include zinnias, marigolds, dahlias, and chrysanthemums. The commercial sunflower genus Helianthus has been used as a model in the study of hybridization and its role in speciation (Rieseberg et al., 2003). See also the list of economically important Asteraceae.
http://www.eol.org/data_objects/10108486
4.09375
Temporal range: Late Cretaceous, 85.8–66 Ma
Image caption: Mounted skeleton of Parasaurolophus cyrtocristatus, Field Museum of Natural History
Hadrosaurids (Greek: ἁδρός, hadrós, "stout, thick"), or duck-billed dinosaurs, are members of the ornithischian family Hadrosauridae. The family, which includes ornithopods such as Edmontosaurus and Parasaurolophus, was a common group of herbivores in the Upper Cretaceous Period of what is now Asia, Europe and North America. Hadrosaurids are descendants of the Upper Jurassic/Lower Cretaceous iguanodontian dinosaurs and had a similar body layout. Hadrosaurids are divided into two principal subfamilies: the lambeosaurines (Lambeosaurinae), which had hollow cranial crests or tubes, and the saurolophines, identified as hadrosaurines in most pre-2010 works (Saurolophinae or Hadrosaurinae), which lacked hollow cranial crests (solid crests were present in some forms). Saurolophines tended to be bulkier than lambeosaurines. Lambeosaurines are divided into Aralosaurines, Lambeosaurines, Parasaurolophines and Tsintaosaurines, while Saurolophines include Saurolophus, Brachylophosaurines and Kritosaurines.
The hadrosaurs are known as the duck-billed dinosaurs due to the similarity of their heads to those of modern ducks. In some genera, including Edmontosaurus, the whole front of the skull was flat and broadened out to form a beak, which was ideal for clipping leaves and twigs from the forests of Asia, Europe and North America. However, the back of the mouth contained thousands of teeth suitable for grinding food before it was swallowed. This has been hypothesized to have been a crucial factor in the success of this group in the Cretaceous compared to the sauropods. In 2009, paleontologist Mark Purnell conducted a study into the chewing methods and diet of hadrosaurids from the Late Cretaceous period. By analyzing hundreds of microscopic scratches on the teeth of a fossilized Edmontosaurus jaw, the team determined that hadrosaurs had a unique way of eating unlike that of any creature living today. In contrast to the flexible lower jaw joint prevalent in today's mammals, hadrosaurs had a unique hinge between the upper jaws and the rest of the skull. The team found that the dinosaur's upper jaws pushed outwards and sideways while chewing, as the lower jaw slid against the upper teeth.
Cranial differences between subfamilies
The two major divisions of hadrosaurids are differentiated by their cranial ornamentation. While members of the Lambeosaurinae subfamily have hollow crests that differ depending on species, members of the Saurolophinae (Hadrosaurinae) subfamily have solid crests or none at all. Lambeosaurine crests had air chambers that may have produced a distinct sound and meant that their crests could have been used for both an audio and a visual display.
Hadrosaurids were the first dinosaur family to be identified in North America, the first traces being found in 1855-1856 with the discovery of fossil teeth. Joseph Leidy examined the teeth and erected the genera Trachodon and Thespesius (others included Troodon, Deinodon and Palaeoscincus). One species was named Trachodon mirabilis. Ultimately, Trachodon included all sorts of cerapod dinosaurs, including ceratopsids, and is now considered an invalid genus.
In 1858, the teeth were associated with Leidy's eponymous Hadrosaurus foulkii, which was named after the fossil hobbyist William Parker Foulke. More and more teeth were found, resulting in even more (now obsolete) genera. When a second duck-bill skeleton was unearthed, Edward Drinker Cope incorrectly named it Diclonius mirabilis in 1883 instead of Trachodon mirabilis. But Trachodon, together with other poorly typed genera, was used more widely and, when Cope's famous "Diclonius mirabilis" skeleton was mounted at the American Museum of Natural History, it was labeled as a "Trachodont dinosaur". The duck-billed dinosaur family was then named Trachodontidae. A very well preserved, complete hadrosaurid specimen, AMNH 5060 (Edmontosaurus annectens), was recovered in 1908 by the fossil collector Charles Hazelius Sternberg and his three sons in Converse County, Wyoming. Analyzed by Henry Osborn in 1912, it has come to be known as the "Trachodon mummy". This specimen's skin was almost completely preserved in the form of impressions. Lawrence Lambe erected the genus Edmontosaurus ("lizard from Edmonton") in 1917 from a find in the lower Edmonton Formation (now Horseshoe Canyon Formation), Alberta. Hadrosaurid systematics were addressed in a 1942 monograph by Richard Swann Lull and Nelda Wright. They proposed the genus Anatosaurus for several species of dubious genera. Cope's famous mount at the AMNH became Anatosaurus copei. In 1990, Anatosaurus was moved to Edmontosaurus. One former Anatosaurus species was distinct enough from Edmontosaurus to be placed in a separate genus, named Anatotitan, so in 1990 the AMNH mount was re-labelled Anatotitan copei. One of the most complete fossilized specimens was found in 1999 in the Hell Creek Formation of North Dakota and is now nicknamed "Dakota". The hadrosaur fossil is so well preserved that scientists have been able to calculate its muscle mass and learn that it was more muscular than previously thought, probably giving it the ability to outrun predators such as Tyrannosaurus rex. Dakota is more than fossilized bones; it is a fossilized mummy. It comes complete with skin (not merely skin impressions), ligaments, tendons and possibly some internal organs. It is being analyzed in the world's largest CT scanner, operated by the Boeing Co. The machine is usually used for detecting flaws in space shuttle engines and other large objects, but it had never before scanned an object as large as this. Researchers hope that the technology will help them learn more about the fossilized insides of the creature. They also found a gap of about a centimeter between each vertebra, indicating that there may have been a disk or other material between them; this would have allowed more flexibility and means the animal was actually longer than what is shown in a museum. Skin impressions have been found from the following hadrosaurs: Edmontosaurus annectens, Corythosaurus casuarius, Brachylophosaurus canadensis, Gryposaurus notabilis, Parasaurolophus walkeri, Lambeosaurus magnicristatus, Lambeosaurus lambei, Saurolophus osborni, Magnapaulia laticaudus and Saurolophus angustirostris. Paleontologists from the Instituto Nacional de Antropología e Historia (INAH, Mexico's federal National Institute of Anthropology and History) identified a find near the town of General Cepeda in the state of Coahuila, Mexico, as a hadrosaur. Fifty of its tail vertebrae were found intact among others of its fossilized bones at the site.
In September 2015, researchers from Alaska Fairbanks University and Florida University concluded that the remains of a duck-billed dinosaur found in the high arctic of Alaska is a new species of hadrosaur, provisionally named Ugrunaaluk kuukpikensis. It apparently survived in conditions much harsher than those of other dinosaurs. The family Hadrosauridae was first used by Edward Drinker Cope in 1869. Since its creation, a major division has been recognized in the group between the (generally crested) subfamily Lambeosaurinae and (generally crestless) subfamily Saurolophinae (or Hadrosaurinae). Phylogenetic analysis has increased the resolution of hadrosaurid relationships considerably (see Phylogeny below), leading to the widespread usage of tribes (a taxonomic unit below subfamily) to describe the finer relationships within each group of hadrosaurids. However, many hadrosaurid tribes commonly recognized in online sources have not yet been formally defined or seen wide use in the literature. Several were briefly mentioned under informal names, but not named as such, in the first edition of The Dinosauria. In this 1990 reference, "gryposaurs" included Aralosaurus, Gryposaurus, Hadrosaurus, and Kritosaurus; "brachylophosaurs" included Brachylophosaurus and Maiasaura; "saurolophs" included Lophorhothon, Prosaurolophus, and Saurolophus; and "edmontosaurs" included Edmontosaurus, and Shantungosaurus. Lambeosaurines have also been traditionally split into Parasaurolophini (Parasaurolophus) and Corythosaurini (Corythosaurus, Hypacrosaurus, and Lambeosaurus). Corythosaurini and Parasaurolophini as terms entered the formal literature in Evans and Reisz's 2007 redescription of Lambeosaurus magnicristatus. Corythosaurini is defined as all taxa more closely related Corythosaurus casuarius than to Parasaurolophus walkeri, and Parasaurolophini as all those taxa closer to P. walkeri than to C. casuarius. In this study, Charonosaurus and Parasaurolophus are parasaurolophins, and Corythosaurus, Hypacrosaurus, Lambeosaurus, Nipponosaurus, and Olorotitan are corythosaurins. In recent years Tsintaosaurini (Tsintaosaurus + Pararhabdodon) and Aralosaurini (Aralosaurus + Canardia) have also emerged. The use of the term Hadrosaurinae was questioned in a comprehensive study of hadrosaurid relationships by Albert Prieto-Márquez in 2010. Prieto-Márquez noted that, though the name Hadrosaurinae had been used for the clade of mostly crestless hadrosaurids by nearly all previous studies, its type species, Hadrosaurus foulkii, has almost always been excluded from the clade that bears its name, in violation of the rules for naming animals set out by the ICZN. Prieto-Márquez defined Hadrosaurinae as just the lineage containing H. foulkii, and used the name Saurolophinae instead for the traditional grouping. The following taxonomy includes dinosaurs currently referred to the Hadrosauridae and its subfamilies. Hadrosaurids that were accepted as valid, but not placed in a cladogram at the time of Prieto-Márquez's 2010 study, are included at the highest level to which they were placed (either then, or in their description if they postdate the papers used here). - Family Hadrosauridae Hadrosauridae was first defined as a clade, by Forster, in a 1997 abstract, as simply "Lambeosaurinae plus Hadrosaurinae and their most recent common ancestor". 
In 1998, Paul Sereno defined the clade Hadrosauridae as the most inclusive possible group containing Saurolophus (a well-known saurolophine) and Parasaurolophus (a well-known lambeosaurine), later emending the definition to include Hadrosaurus, the type genus of the family, which ICZN rules state must be included, despite its status as a nomen dubium. According to Horner et al. (2004), Sereno's definition would place a few other well-known hadrosaurs (such as Telmatosaurus and Bactrosaurus) outside the family, which led them to define the family to include Telmatosaurus by default. The following cladogram was recovered in a 2010 phylogenetic analysis by Prieto-Márquez. Hadrosauridae has not been subjected to as many phylogenetic analyses as other dinosaur groups, so other workers may find quite different phylogenies. Gates and Sampson (2007) published the following alternate cladogram of Saurolophinae (identified as "Hadrosaurinae" in the study) in their description of Gryposaurus monumentensis. The following cladogram is after the 2007 redescription of Lambeosaurus magnicristatus (Evans and Reisz, 2007). While studying the chewing methods of hadrosaurids in 2009, the paleontologists Vincent Williams, Paul Barrett, and Mark Purnell found that hadrosaurs likely grazed on horsetails and vegetation close to the ground, rather than browsing higher-growing leaves and twigs. This conclusion was based on the evenness of scratches on hadrosaur teeth, which suggested the hadrosaur used the same series of jaw motions over and over again. As a result, the study determined that the hadrosaur diet was probably made up of leaves and lacked the bulkier items, such as twigs or stems, that might have required a different chewing method and created different wear patterns. However, Purnell said these conclusions were less secure than the more conclusive evidence regarding the motion of teeth while chewing. The hypothesis that hadrosaurs were likely grazers rather than browsers appears to contradict findings of preserved stomach contents reported in earlier hadrosaur studies. The most recent such finding before the publication of the Purnell study came in 2008, when a team led by University of Colorado at Boulder graduate student Justin S. Tweet found a homogeneous accumulation of millimeter-scale leaf fragments in the gut region of a well-preserved, partially grown Brachylophosaurus. As a result of that finding, Tweet concluded in September 2008 that the animal was likely a browser, not a grazer. In response to such findings, Purnell said that preserved stomach contents are questionable because they do not necessarily represent the usual diet of the animal. The issue remains a subject of debate. Mallon et al. (2013) examined herbivore coexistence on the island continent of Laramidia during the Late Cretaceous. It was concluded that hadrosaurids could reach low-growing trees and shrubs that were out of the reach of ceratopsids, ankylosaurs, and other small herbivores. Hadrosaurids were capable of feeding up to 2 m above the ground when standing quadrupedally, and up to 5 m bipedally. Coprolites (fossilized droppings) of some Late Cretaceous hadrosaurs show that the animals sometimes deliberately ate rotting wood. Wood itself is not nutritious, but decomposing wood would have contained fungi, decomposed wood material and detritus-eating invertebrates, all of which would have been nutritious.
In the Dinosaur Park Formation
In a 2001 review of hadrosaur eggshell and hatchling material from the Dinosaur Park Formation, Darren H. Tanke and M. K. Brett-Surman concluded that hadrosaurs nested in both the ancient uplands and lowlands of the formation's depositional environment. The upland nesting grounds may have been preferred by the less common hadrosaurs, like Brachylophosaurus and Parasaurolophus. However, the authors were unable to determine what specific factors shaped nesting ground choice in the formation's hadrosaurs. They suggested that behavior, diet, soil condition, and competition between dinosaur species all potentially influenced where hadrosaurs nested. Sub-centimeter fragments of pebbly-textured hadrosaur eggshell have been reported from the Dinosaur Park Formation. This eggshell is similar to the hadrosaur eggshell of Devil's Coulee in southern Alberta as well as that of the Two Medicine and Judith River Formations in Montana, United States. While present, dinosaur eggshell is very rare in the Dinosaur Park Formation and is only found in two different microfossil sites. These sites are distinguished by large numbers of pisidiid clams and other less common shelled invertebrates, like unionid clams and snails. This association is not a coincidence, as the invertebrate shells would have slowly dissolved and released enough basic calcium carbonate to protect the eggshells from naturally occurring acids that otherwise would have dissolved them and prevented fossilization. In contrast with eggshell fossils, the remains of very young hadrosaurs are actually somewhat common. Darren Tanke has observed that an experienced collector could discover multiple juvenile hadrosaur specimens in a single day. The most common remains of young hadrosaurs in the Dinosaur Park Formation are dentaries, bones from limbs and feet, and vertebral centra. The material showed little or none of the abrasion that would have resulted from transport, meaning the fossils were buried near their point of origin. Bonebeds 23, 28, 47, and 50 are productive sources of young hadrosaur remains in the formation, especially bonebed 50. The bones of juvenile hadrosaurs and fossil eggshell fragments are not known to have been preserved in association with each other, despite both being present in the formation. The limbs of the juvenile hadrosaurs are anatomically and proportionally similar to those of adult animals. However, the joints often show "predepositional erosion or concave articular surfaces", which was probably due to the cartilaginous cap covering the ends of the bones. The pelvis of a young hadrosaur was similar to that of an older individual.
Daily activity patterns
Comparisons between the scleral rings of several hadrosaur genera (Corythosaurus, Prosaurolophus, and Saurolophus) and those of modern birds and reptiles suggest that they may have been cathemeral, active throughout the day at short intervals.
- Boyle, Alan (2009-06-29). "How dinosaurs chewed". MSNBC. Retrieved 2009-06-03.
- "Hadrosaur Forelimb Study". Palaeo-electronica.org. Retrieved 2013-07-23.
- Fassett, J., Zielinski, R.A., and Budahn, J.R. (2002). "Dinosaurs that did not die; evidence for Paleocene dinosaurs in the Ojo Alamo Sandstone, San Juan Basin, New Mexico". In: Koeberl, C., and MacLeod, K. (eds.). Catastrophic events and mass extinctions: impacts and beyond. Special Paper – Geological Society of America 356:307-336.
- (Reuters News) "Mummified dinosaur reveals surprises: scientists". 3 December 2007.
- Schmid, Randolph (2007-12-03). "Mummified Dinosaur May Have Outrun T Rex". Associated Press. Retrieved 2010-11-10.
- Bell, P. R. (2012). Farke, Andrew A., ed. "Standardized Terminology and Potential Taxonomic Utility for Hadrosaurid Skin Impressions: A Case Study for Saurolophus from Canada and Mongolia". PLoS ONE 7 (2): e31295. doi:10.1371/journal.pone.0031295. PMC 3272031. PMID 22319623.
- Cohen, Luc; et al. (2013-07-23). "Paleontologists discover dinosaur tail in northern Mexico". Reuters.
- "New Duck-billed Dinosaur Species Discovered in Alaska". Sci-News.com. Retrieved 2015-09-23.
- Weishampel, David B.; Horner, Jack R. (1990). "Hadrosauridae". In Weishampel, David B.; Dodson, Peter; Osmólska, Halszka. The Dinosauria (1st ed.). Berkeley: University of California Press. pp. 534–561. ISBN 0-520-06727-4.
- Glut, Donald F. (1997). Dinosaurs: The Encyclopedia. Jefferson, North Carolina: McFarland & Co. p. 69. ISBN 0-89950-917-7.
- Evans, David C.; Reisz, Robert R. (2007). "Anatomy and relationships of Lambeosaurus magnicristatus, a crested hadrosaurid dinosaur (Ornithischia) from the Dinosaur Park Formation, Alberta". Journal of Vertebrate Paleontology 27 (2): 373–393. doi:10.1671/0272-4634(2007)27[373:AAROLM]2.0.CO;2. ISSN 0272-4634.
- "PLOS ONE: Diversity, Relationships, and Biogeography of the Lambeosaurine Dinosaurs from the European Archipelago, with Description of the New Aralosaurin Canardia garonnensis".
- Prieto-Márquez, A. (2010). "Global phylogeny of Hadrosauridae (Dinosauria: Ornithopoda) using parsimony and Bayesian methods". Zoological Journal of the Linnean Society 159: 435–502.
- Gates, Terry A.; Sampson, Scott D. (2007). "A new species of Gryposaurus (Dinosauria: Hadrosauridae) from the late Campanian Kaiparowits Formation, southern Utah, USA". Zoological Journal of the Linnean Society 151 (2): 351–376. doi:10.1111/j.1096-3642.2007.00349.x.
- Williams, Vincent S.; Barrett, Paul M.; Purnell, Mark A. (2009). "Quantitative analysis of dental microwear in hadrosaurid dinosaurs, and the implications for hypotheses of jaw mechanics and feeding". Proceedings of the National Academy of Sciences 106 (27): 11194–11199. doi:10.1073/pnas.0812631106. PMC 2708679. PMID 19564603.
- Bryner, Jeanna (2009-06-29). "Study hints at what and how dinosaurs ate". LiveScience. Retrieved 2009-06-03.
- Tweet, Justin S.; Chin, Karen; Braman, Dennis R.; Murphy, Nate L. (2008). "Probable gut contents within a specimen of Brachylophosaurus canadensis (Dinosauria: Hadrosauridae) from the Upper Cretaceous Judith River Formation of Montana". PALAIOS 23 (9): 624–635. doi:10.2110/palo.2007.p07-044r.
- Lloyd, Robin (2008-09-25). "Plant-eating dinosaur spills his guts: Fossil suggests hadrosaur's last meal included lots of well-chewed leaves". MSNBC. Retrieved 2009-06-03.
- This information comes from the aforementioned Alan Boyle source from June 29, 2009. However, this specific information is not included in the body of the article, but rather in a response by Boyle to comments on the article. Since the comments were written by Boyle himself, and since they cite information he received specifically from Purnell, they are as legitimate a source of information as the article itself.
- Mallon, Jordan C.; Evans, David C.; Ryan, Michael J.; Anderson, Jason S. (2013). "Feeding height stratification among the herbivorous dinosaurs from the Dinosaur Park Formation (upper Campanian) of Alberta, Canada". BMC Ecology 13: 14. doi:10.1186/1472-6785-13-14. PMC 3637170. PMID 23557203.
- Chin, K. (September 2007). "The Paleobiological Implications of Herbivorous Dinosaur Coprolites from the Upper Cretaceous Two Medicine Formation of Montana: Why Eat Wood?". PALAIOS 22 (5): 554. doi:10.2110/palo.2006.p06-087r. Retrieved 2008-09-10.
- Tanke, D.H. and Brett-Surman, M.K. (2001). "Evidence of Hatchling and Nestling-Size Hadrosaurs (Reptilia: Ornithischia) from Dinosaur Provincial Park (Dinosaur Park Formation: Campanian), Alberta, Canada". pp. 206-218. In: Tanke, D.H. and Carpenter, K. (eds.). Mesozoic Vertebrate Life—New Research Inspired by the Paleontology of Philip J. Currie. Indiana University Press: Bloomington. xviii + 577 pp.
- Schmitz, L.; Motani, R. (2011). "Nocturnality in Dinosaurs Inferred from Scleral Ring and Orbit Morphology". Science 332 (6030): 705–708. doi:10.1126/science.1200043. PMID 21493820.
https://en.wikipedia.org/wiki/Hadrosaurinae
4.125
In the 19th century, Manifest Destiny was a widely held belief in the United States that American settlers were destined to expand throughout the continent. Historians have for the most part agreed that there are three basic themes to Manifest Destiny:
- The special virtues of the American people and their institutions
- America's mission to redeem and remake the west in the image of agrarian America
- An irresistible destiny to accomplish this essential duty
Historian Frederick Merk says this concept was born out of "a sense of mission to redeem the Old World by high example ... generated by the potentialities of a new earth for building a new heaven". Historians have emphasized that "Manifest Destiny" was a contested concept—Democrats endorsed the idea but many prominent Americans (such as Abraham Lincoln, Ulysses S. Grant, and most Whigs) rejected it. Historian Daniel Walker Howe writes, "American imperialism did not represent an American consensus; it provoked bitter dissent within the national polity.... Whigs saw America's moral mission as one of democratic example rather than one of conquest." Newspaper editor John O'Sullivan coined the term Manifest Destiny in 1845 to describe the essence of this mindset, which was more a rhetorical tone than a precise doctrine. It was used by Democrats in the 1840s to justify the war with Mexico, and it was also used to divide half of Oregon with the United Kingdom. But Manifest Destiny always limped along because of its internal limitations and the issue of slavery, says Merk. It never became a national priority. By 1843 John Quincy Adams, originally a major supporter of the concept underlying manifest destiny, had changed his mind and repudiated expansionism because it meant the expansion of slavery in Texas. Merk concluded:
- From the outset Manifest Destiny—vast in program, in its sense of continentalism—was slight in support. It lacked national, sectional, or party following commensurate with its magnitude. The reason was it did not reflect the national spirit. The thesis that it embodied nationalism, found in much historical writing, is backed by little real supporting evidence.
There was never a set of principles defining manifest destiny; it was always a general idea rather than a specific policy tied to a motto. Ill-defined but keenly felt, manifest destiny was an expression of conviction in the morality and value of expansionism that complemented other popular ideas of the era, including American exceptionalism and Romantic nationalism. Andrew Jackson, who spoke of "extending the area of freedom", typified the conflation of America's potential greatness, the nation's budding sense of Romantic self-identity, and its expansion. Yet Jackson would not be the only president to elaborate on the principles underlying manifest destiny. Owing in part to the lack of a definitive narrative outlining its rationale, proponents offered divergent or seemingly conflicting viewpoints. While many writers focused primarily upon American expansionism, be it into Mexico or across the Pacific, others saw the term as a call to example. Without an agreed-upon interpretation, much less an elaborated political philosophy, these conflicting views of America's destiny were never resolved.
This variety of possible meanings was summed up by Ernest Lee Tuveson, who writes: A vast complex of ideas, policies, and actions is comprehended under the phrase "Manifest Destiny". They are not, as we should expect, all compatible, nor do they come from any one source. Journalist John L. O'Sullivan, an influential advocate for Jacksonian democracy and a complex character described by Julian Hawthorne as "always full of grand and world-embracing schemes", wrote an article in 1839, which, while not using the term "manifest destiny", did predict a "divine destiny" for the United States based upon values such as equality, rights of conscience, and personal enfranchisement "to establish on earth the moral dignity and salvation of man". This destiny was not explicitly territorial, but O'Sullivan predicted that the United States would be one of a "Union of many Republics" sharing those values. Six years later, in 1845, O'Sullivan wrote another essay titled Annexation in the Democratic Review, in which he first used the phrase manifest destiny. In this article he urged the U.S. to annex the Republic of Texas, not only because Texas desired this, but because it was "our manifest destiny to overspread the continent allotted by Providence for the free development of our yearly multiplying millions". Overcoming Whig opposition, Democrats annexed Texas in 1845. O'Sullivan's first usage of the phrase "manifest destiny" attracted little attention. O'Sullivan's second use of the phrase became extremely influential. On December 27, 1845, in his newspaper the New York Morning News, O'Sullivan addressed the ongoing boundary dispute with Britain. O'Sullivan argued that the United States had the right to claim "the whole of Oregon": And that claim is by the right of our manifest destiny to overspread and to possess the whole of the continent which Providence has given us for the development of the great experiment of liberty and federated self-government entrusted to us. That is, O'Sullivan believed that Providence had given the United States a mission to spread republican democracy ("the great experiment of liberty"). Because Britain would not spread democracy, thought O'Sullivan, British claims to the territory should be overruled. O'Sullivan believed that manifest destiny was a moral ideal (a "higher law") that superseded other considerations. O'Sullivan's original conception of manifest destiny was not a call for territorial expansion by force. He believed that the expansion of the United States would happen without the direction of the U.S. government or the involvement of the military. After Americans emigrated to new regions, they would set up new democratic governments, and then seek admission to the United States, as Texas had done. In 1845, O'Sullivan predicted that California would follow this pattern next, and that Canada would eventually request annexation as well. He disapproved of the Mexican–American War in 1846, although he came to believe that the outcome would be beneficial to both countries. Ironically, O'Sullivan's term became popular only after it was criticized by Whig opponents of the Polk administration. Whigs denounced manifest destiny, arguing, "that the designers and supporters of schemes of conquest, to be carried on by this government, are engaged in treason to our Constitution and Declaration of Rights, giving aid and comfort to the enemies of republicanism, in that they are advocating and preaching the doctrine of the right of conquest". 
On January 3, 1846, Representative Robert Winthrop ridiculed the concept in Congress, saying "I suppose the right of a manifest destiny to spread will not be admitted to exist in any nation except the universal Yankee nation". Winthrop was the first in a long line of critics who suggested that advocates of manifest destiny were citing "Divine Providence" for justification of actions that were motivated by chauvinism and self-interest. Despite this criticism, expansionists embraced the phrase, which caught on so quickly that its origin was soon forgotten. Themes and influences Historian William E. Weeks has noted that three key themes were usually touched upon by advocates of manifest destiny: - the virtue of the American people and their institutions; - the mission to spread these institutions, thereby redeeming and remaking the world in the image of the United States; - the destiny under God to do this work. The origin of the first theme, later known as American Exceptionalism, was often traced to America's Puritan heritage, particularly John Winthrop's famous "City upon a Hill" sermon of 1630, in which he called for the establishment of a virtuous community that would be a shining example to the Old World. In his influential 1776 pamphlet Common Sense, Thomas Paine echoed this notion, arguing that the American Revolution provided an opportunity to create a new, better society: We have it in our power to begin the world over again. A situation, similar to the present, hath not happened since the days of Noah until now. The birthday of a new world is at hand... Many Americans agreed with Paine, and came to believe that the United States' virtue was a result of its special experiment in freedom and democracy. Thomas Jefferson, in a letter to James Monroe, wrote, "it is impossible not to look forward to distant times when our rapid multiplication will expand itself beyond those limits, and cover the whole northern, if not the southern continent." To Americans in the decades that followed their proclaimed freedom for mankind, embodied in the Declaration of Independence, could only be described as the inauguration of "a new time scale" because the world would look back and define history as events that took place before, and after, the Declaration of Independence. It followed that Americans owed to the world an obligation to expand and preserve these beliefs. The second theme's origination is less precise. A popular expression of America's mission was elaborated by President Abraham Lincoln's description in his December 1, 1862, message to Congress. He described the United States as "the last, best hope of Earth". The "mission" of the United States was further elaborated during Lincoln's Gettysburg Address, in which he interpreted the Civil War as a struggle to determine if any nation with democratic ideals could survive; this has been called by historian Robert Johannsen "the most enduring statement of America's Manifest Destiny and mission". The third theme can be viewed as a natural outgrowth of the belief that God had a direct influence in the foundation and further actions of the United States. Clinton Rossiter, a scholar, described this view as summing "that God, at the proper stage in the march of history, called forth certain hardy souls from the old and privilege-ridden nations ... and that in bestowing his grace He also bestowed a peculiar responsibility". 
Americans presupposed that they were not only divinely elected to maintain the North American continent, but also to "spread abroad the fundamental principles stated in the Bill of Rights". In many cases this meant neighboring colonial holdings and countries were seen as obstacles rather than the destiny God had provided the United States. - "Most Democrats were wholehearted supporters of expansion, whereas many Whigs (especially in the North) were opposed. Whigs welcomed most of the changes wrought by industrialization but advocated strong government policies that would guide growth and development within the country's existing boundaries; they feared (correctly) that expansion raised a contentious issue, the extension of slavery to the territories. On the other hand, many Democrats feared industrialization the Whigs welcomed... For many Democrats, the answer to the nation's social ills was to continue to follow Thomas Jefferson's vision of establishing agriculture in the new territories in order to counterbalance industrialization." Another possible influence is racial predominance, namely the idea that the American Anglo-Saxon race was "separate, innately superior" and "destined to bring good government, commercial prosperity and Christianity to the American continents and the world". This view also held that "inferior races were doomed to subordinate status or extinction." This was used to justify "the enslavement of the blacks and the expulsion and possible extermination of the Indians". With the Louisiana Purchase in 1803, which doubled the size of the United States, Thomas Jefferson set the stage for the continental expansion of the United States. Many began to see this as the beginning of a new providential mission: If the United States was successful as a "shining city upon a hill", people in other countries would seek to establish their own democratic republics. However, not all Americans or their political leaders believed that the United States was a divinely favored nation, or thought that it ought to expand. For example, many Whigs opposed territorial expansion based on the Democratic claim that the United States was destined to serve as a virtuous example to the rest of the world, and also had a divine obligation to spread its superordinate political system and a way of life throughout North American continent. Many in the Whig party "were fearful of spreading out too widely", and they "adhered to the concentration of national authority in a limited area". In July 1848, Alexander Stephens denounced President Polk's expansionist interpretation of America's future as "mendacious". In the mid‑19th century, expansionism, especially southward toward Cuba, also faced opposition from those Americans who were trying to abolish slavery. As more territory was added to the United States in the following decades, "extending the area of freedom" in the minds of southerners also meant extending the institution of slavery. That is why slavery became one of the central issues in the continental expansion of the United States before the Civil War. Before and during the Civil War both sides claimed that America's destiny were rightfully their own. Lincoln opposed anti-immigrant nativism, and the imperialism of manifest destiny as both unjust and unreasonable. 
He objected to the Mexican War and believed each of these disordered forms of patriotism threatened the inseparable moral and fraternal bonds of liberty and Union that he sought to perpetuate through a patriotic love of country guided by wisdom and critical self-awareness. Lincoln's "Eulogy to Henry Clay", June 6, 1852, provides the most cogent expression of his reflective patriotism. Era of continental expansion The phrase "manifest destiny" is most often associated with the territorial expansion of the United States from 1812 to 1860. This era, from the end of the War of 1812 to the beginning of the American Civil War, has been called the "age of manifest destiny". During this time, the United States expanded to the Pacific Ocean—"from sea to shining sea"—largely defining the borders of the contiguous United States as they are today. War of 1812 One of the causes of the War of 1812 may have been an American desire to annex or threaten to annex British Canada in order to stop the Indian raids into the Midwest, expel Britain from North America, and gain additional land. The American victories at the Battle of Lake Erie and the Battle of the Thames in 1813 ended the Indian raids and removed one of the reasons for annexation. The American failure to occupy any significant part of Canada prevented them from annexing it for the second reason, which was largely ended by the Era of Good Feelings, which ensued after the war between Britain and the United States. To end the War of 1812 John Quincy Adams, Henry Clay and Albert Gallatin (former Treasury Secretary and a leading expert on Indians) and the other American diplomats negotiated the Treaty of Ghent in 1814 with Britain. They rejected the British plan to set up an Indian state in U.S. territory south of the Great Lakes. They explained the American policy toward acquisition of Indian lands: - The United States, while intending never to acquire lands from the Indians otherwise than peaceably, and with their free consent, are fully determined, in that manner, progressively, and in proportion as their growing population may require, to reclaim from the state of nature, and to bring into cultivation every portion of the territory contained within their acknowledged boundaries. In thus providing for the support of millions of civilized beings, they will not violate any dictate of justice or of humanity; for they will not only give to the few thousand savages scattered over that territory an ample equivalent for any right they may surrender, but will always leave them the possession of lands more than they can cultivate, and more than adequate to their subsistence, comfort, and enjoyment, by cultivation. If this be a spirit of aggrandizement, the undersigned are prepared to admit, in that sense, its existence; but they must deny that it affords the slightest proof of an intention not to respect the boundaries between them and European nations, or of a desire to encroach upon the territories of Great Britain. . . . They will not suppose that that Government will avow, as the basis of their policy towards the United States a system of arresting their natural growth within their own territories, for the sake of preserving a perpetual desert for savages. The 19th-century belief that the United States would eventually encompass all of North America is known as "continentalism". An early proponent of this idea was John Quincy Adams, a leading figure in U.S. expansion between the Louisiana Purchase in 1803 and the Polk administration in the 1840s. 
In 1811, Adams wrote to his father: The whole continent of North America appears to be destined by Divine Providence to be peopled by one nation, speaking one language, professing one general system of religious and political principles, and accustomed to one general tenor of social usages and customs. For the common happiness of them all, for their peace and prosperity, I believe it is indispensable that they should be associated in one federal Union. Adams did much to further this idea. He orchestrated the Treaty of 1818, which established the United States–Canada border as far west as the Rocky Mountains, and provided for the joint occupation of the region known in American history as the Oregon Country and in British and Canadian history as the New Caledonia and Columbia Districts. He negotiated the Transcontinental Treaty in 1819, purchasing Florida from Spain and extending the U.S. border with Spanish Mexico all the way to the Pacific Ocean. And he formulated the Monroe Doctrine of 1823, which warned Europe that the Western Hemisphere was no longer open for European colonization. The Monroe Doctrine and manifest destiny were closely related ideas: historian Walter McDougall calls manifest destiny a corollary of the Monroe Doctrine, because while the Monroe Doctrine did not specify expansion, expansion was necessary in order to enforce the Doctrine. Concerns in the United States that European powers (especially Great Britain) were seeking to acquire colonies or greater influence in North America led to calls for expansion in order to prevent this. In his influential 1935 study of manifest destiny, Albert Weinberg wrote, "the expansionism of the [1830s] arose as a defensive effort to forestall the encroachment of Europe in North America." Manifest destiny played its most important role in, and was coined during the course of, the Oregon boundary dispute with Britain. The Anglo-American Convention of 1818 had provided for the joint occupation of the Oregon Country, and thousands of Americans migrated there in the 1840s over the Oregon Trail. The British rejected a proposal by President John Tyler to divide the region along the 49th parallel, and instead proposed a boundary line farther south along the Columbia River, which would have made most of what later became the state of Washington part of British North America. Advocates of manifest destiny protested and called for the annexation of the entire Oregon Country up to the Alaska line (54°40ʹ N). Presidential candidate James K. Polk used this popular outcry to his advantage, and the Democrats called for the annexation of "All Oregon" in the 1844 U.S. Presidential election. As president, however, Polk sought compromise and renewed the earlier offer to divide the territory in half along the 49th parallel, to the dismay of the most ardent advocates of manifest destiny. When the British refused the offer, American expansionists responded with slogans such as "The Whole of Oregon or None!" and "Fifty-Four Forty or Fight!", referring to the northern border of the region. (The latter slogan is often mistakenly described as having been a part of the 1844 presidential campaign.) 
When Polk moved to terminate the joint occupation agreement, the British finally agreed to divide the region along the 49th parallel in early 1846, keeping the lower Columbia basin as part of the United States, and the dispute was settled by the Oregon Treaty of 1846, which the administration was able to sell to Congress because the United States was about to begin the Mexican–American war, and the president and others argued it would be foolish to also fight the British Empire. Despite the earlier clamor for "All Oregon", the treaty was popular in the United States and was easily ratified by the Senate. The most fervent advocates of manifest destiny had not prevailed along the northern border because, according to Reginald Stuart, "the compass of manifest destiny pointed west and southwest, not north, despite the use of the term 'continentalism'." Mexico and Texas Manifest Destiny played an important role in the expansion of Texas and American relationship with Mexico. In 1836, the Republic of Texas declared independence from Mexico and, after the Texas Revolution, sought to join the United States as a new state. This was an idealized process of expansion that had been advocated from Jefferson to O'Sullivan: newly democratic and independent states would request entry into the United States, rather than the United States extending its government over people who did not want it. The annexation of Texas was controversial as it would add another slave state to the Union. Presidents Andrew Jackson and Martin Van Buren declined Texas's offer to join the United States in part because the slavery issue threatened to divide the Democratic Party. Before the election of 1844, Whig candidate Henry Clay and the presumed Democratic candidate, former President Van Buren, both declared themselves opposed to the annexation of Texas, each hoping to keep the troublesome topic from becoming a campaign issue. This unexpectedly led to Van Buren being dropped by the Democrats in favor of Polk, who favored annexation. Polk tied the Texas annexation question with the Oregon dispute, thus providing a sort of regional compromise on expansion. (Expansionists in the North were more inclined to promote the occupation of Oregon, while Southern expansionists focused primarily on the annexation of Texas.) Although elected by a very slim margin, Polk proceeded as if his victory had been a mandate for expansion. After the election of Polk, but before he took office, Congress approved the annexation of Texas. Polk moved to occupy a portion of Texas that had declared independence from Mexico in 1836, but was still claimed by Mexico. This paved the way for the outbreak of the Mexican–American War on April 24, 1846. With American successes on the battlefield, by the summer of 1847 there were calls for the annexation of "All Mexico", particularly among Eastern Democrats, who argued that bringing Mexico into the Union was the best way to ensure future peace in the region. This was a controversial proposition for two reasons. First, idealistic advocates of manifest destiny like John L. O'Sullivan had always maintained that the laws of the United States should not be imposed on people against their will. The annexation of "All Mexico" would be a violation of this principle. And secondly, the annexation of Mexico was controversial because it would mean extending U.S. citizenship to millions of Mexicans. Senator John C. 
Calhoun of South Carolina, who had approved of the annexation of Texas, was opposed to the annexation of Mexico, as well as to the "mission" aspect of manifest destiny, for racial reasons. He made these views clear in a speech to Congress on January 4, 1848: "We have never dreamt of incorporating into our Union any but the Caucasian race—the free white race. To incorporate Mexico, would be the very first instance of the kind, of incorporating an Indian race; for more than half of the Mexicans are Indians, and the other is composed chiefly of mixed tribes. I protest against such a union as that! Ours, sir, is the Government of a white race.... We are anxious to force free government on all; and I see that it has been urged ... that it is the mission of this country to spread civil and religious liberty over all the world, and especially over this continent. It is a great mistake." This debate brought to the forefront one of the contradictions of manifest destiny: on the one hand, while identitarian ideas inherent in manifest destiny suggested that Mexicans, as non-whites, would present a threat to white racial integrity and thus were not qualified to become Americans, the "mission" component of manifest destiny suggested that Mexicans would be improved (or "regenerated", as it was then described) by bringing them into American democracy. Identitarianism was used to promote manifest destiny, but, as in the case of Calhoun and the resistance to the "All Mexico" movement, identitarianism was also used to oppose manifest destiny. Conversely, proponents of the annexation of "All Mexico" regarded it as an anti-slavery measure. The controversy was eventually ended by the Mexican Cession, which added the territories of Alta California and Nuevo México to the United States, both more sparsely populated than the rest of Mexico. Like the All Oregon movement, the All Mexico movement quickly abated. Historian Frederick Merk, in Manifest Destiny and Mission in American History: A Reinterpretation (1963), argued that the failure of the "All Oregon" and "All Mexico" movements indicates that manifest destiny had not been as popular as historians have traditionally portrayed it to have been. Merk wrote that, while belief in the beneficent mission of democracy was central to American history, aggressive "continentalism" was an aberration supported by only a minority of Americans, all of them Democrats. Some Democrats were also opposed; the Democrats of Louisiana opposed annexation of Mexico, while those in Mississippi supported it. After the Mexican–American War ended in 1848, disagreements over the expansion of slavery made further annexation by conquest too divisive to be official government policy. Some, such as John Quitman, governor of Mississippi, offered what public support they could. In one memorable case, Quitman simply explained that the state of Mississippi had "lost" its state arsenal, which began showing up in the hands of filibusters. Yet these isolated cases only solidified opposition in the North, as many Northerners were increasingly opposed to what they believed to be efforts by Southern slave owners—and their friends in the North—to expand slavery through filibustering. On January 24, 1859, Sarah P. Remond delivered an impassioned speech at Warrington, England, arguing that the connection between filibustering and slave power was clear proof of "the mass of corruption that underlay the whole system of American government".
The Wilmot Proviso and the continued "Slave Power" narratives thereafter indicated the degree to which manifest destiny had become part of the sectional controversy. Without official government support, the most radical advocates of manifest destiny increasingly turned to military filibustering. The word filibuster originally came from the Dutch vrijbuiter and referred to buccaneers in the West Indies who preyed on Spanish commerce. While there had been some filibustering expeditions into Canada in the late 1830s, it was only by mid-century that filibuster became a definitive term. By then, declared the New-York Daily Times, "the fever of Fillibusterism is on our country. Her pulse beats like a hammer at the wrist, and there's a very high color on her face." Millard Fillmore's second annual message to Congress, submitted in December 1851, gave twice as much space to filibustering activities as to the brewing sectional conflict. The eagerness of the filibusters, and of the public to support them, had an international hue. Clay's son, diplomat to Portugal, reported that Lisbon had been stirred into a "frenzy" of excitement and was waiting on every dispatch. Although they were illegal, filibustering operations in the late 1840s and early 1850s were romanticized in the United States. The Democratic Party's national platform included a plank that specifically endorsed William Walker's filibustering in Nicaragua. Wealthy American expansionists financed dozens of expeditions, usually based out of New Orleans, New York, and San Francisco. The primary target of manifest destiny's filibusters was Latin America, but there were isolated incidents elsewhere. Mexico was a favorite target of organizations devoted to filibustering, like the Knights of the Golden Circle. William Walker got his start as a filibuster in an ill-advised attempt to separate the Mexican states of Sonora and Baja California. Narciso López, a near second in fame and success, spent his efforts trying to secure Cuba from the Spanish Empire. The United States had long been interested in acquiring Cuba from the declining Spanish Empire. As with Texas, Oregon, and California, American policy makers were concerned that Cuba would fall into British hands, which, according to the thinking of the Monroe Doctrine, would constitute a threat to the interests of the United States. Prompted by John L. O'Sullivan, in 1848 President Polk offered to buy Cuba from Spain for $100 million. Polk feared that filibustering would hurt his effort to buy the island, and so he informed the Spanish of an attempt by the Cuban filibuster Narciso López to seize Cuba by force and annex it to the United States, foiling the plot. Nevertheless, Spain declined to sell the island, which ended Polk's efforts to acquire Cuba. O'Sullivan, on the other hand, eventually landed in legal trouble. Filibustering continued to be a major concern for presidents after Polk. The Whig presidents Zachary Taylor and Millard Fillmore tried to suppress the expeditions. When the Democrats recaptured the White House in 1852 with the election of Franklin Pierce, a filibustering effort by John A. Quitman to acquire Cuba received the tentative support of the president. Pierce backed off, however, and instead renewed the offer to buy the island, this time for $130 million. When the public learned of the Ostend Manifesto of 1854, which argued that the United States could seize Cuba by force if Spain refused to sell, the effort to acquire the island was effectively killed.
The public now linked expansion with slavery; if manifest destiny had once enjoyed widespread popular approval, this was no longer true. Filibusters like William Walker continued to garner headlines in the late 1850s, but to little effect. Expansionism was among the various issues that played a role in the coming of the war. With the divisive question of the expansion of slavery, Northerners and Southerners, in effect, were coming to define manifest destiny in different ways, undermining nationalism as a unifying force. According to Frederick Merk, "The doctrine of Manifest Destiny, which in the 1840s had seemed Heaven-sent, proved to have been a bomb wrapped up in idealism." The Homestead Act of 1862 encouraged 600,000 families to settle the West by giving them land (usually 160 acres) almost free. They had to live on and improve the land for five years. Before the Civil War, Southern leaders opposed the Homestead Acts because they feared it would lead to more free states and free territories. After the mass resignation of Southern senators and representatives at the beginning of the war, Congress was subsequently able to pass the Homestead Act. Manifest destiny had serious consequences for Native Americans, since continental expansion implicitly meant the occupation and annexation of Native American land, sometimes to expand slavery. This ultimately led to the ethnic cleansing of several groups of native peoples via Indian removal. The United States continued the European practice of recognizing only limited land rights of indigenous peoples. In a policy formulated largely by Henry Knox, Secretary of War in the Washington Administration, the U.S. government sought to expand into the west through the purchase of Native American land in treaties. Only the Federal Government could purchase Indian lands and this was done through treaties with tribal leaders. Whether a tribe actually had a decision-making structure capable of making a treaty was a controversial issue. The national policy was for the Indians to join American society and become "civilized", which meant no more wars with neighboring tribes or raids on white settlers or travelers, and a shift from hunting to farming and ranching. Advocates of civilization programs believed that the process of settling native tribes would greatly reduce the amount of land needed by the Native Americans, making more land available for homesteading by white Americans. Thomas Jefferson believed that while American Indians were the intellectual equals of whites, they had to live like the whites or inevitably be pushed aside by them. Jefferson's belief, rooted in Enlightenment thinking, that whites and Native Americans would merge to create a single nation did not last his lifetime, and he began to believe that the natives should emigrate across the Mississippi River and maintain a separate society, an idea made possible by the Louisiana Purchase of 1803. In the age of manifest destiny, this idea, which came to be known as "Indian removal", gained ground. Humanitarian advocates of removal believed that American Indians would be better off moving away from whites. As historian Reginald Horsman argued in his influential study Race and Manifest Destiny, racial rhetoric increased during the era of manifest destiny. Americans increasingly believed that Native American ways of life would fade away as the United States expanded. 
As an example, this idea was reflected in the work of one of America's first great historians, Francis Parkman, whose landmark book The Conspiracy of Pontiac was published in 1851. Parkman wrote that after the British conquest of Canada in 1760, Indians were "destined to melt and vanish before the advancing waves of Anglo-American power, which now rolled westward unchecked and unopposed". Parkman emphasized that the collapse of Indian power in the late 18th century had been swift and was a past event. Beyond North America As the Civil War faded into history, the term manifest destiny experienced a brief revival. Protestant missionary Josiah Strong, in his 1885 best seller Our Country, argued that the future devolved upon America, since it had perfected the ideals of civil liberty and "a pure spiritual Christianity", and concluded, "My plea is not, Save America for America's sake, but, Save America for the world's sake." In the 1892 U.S. presidential election, the Republican Party platform proclaimed: "We reaffirm our approval of the Monroe doctrine and believe in the achievement of the manifest destiny of the Republic in its broadest sense." What was meant by "manifest destiny" in this context was not clearly defined, particularly since the Republicans lost the election. In the 1896 election, however, the Republicans recaptured the White House and held on to it for the next 16 years. During that time, manifest destiny was cited to promote overseas expansion. Whether or not this version of manifest destiny was consistent with the continental expansionism of the 1840s was debated at the time, and long afterwards. For example, when President William McKinley advocated annexation of the Republic of Hawaii in 1898, he said that "We need Hawaii as much and a good deal more than we did California. It is manifest destiny." On the other hand, former President Grover Cleveland, a Democrat who had blocked the annexation of Hawaii during his administration, wrote that McKinley's annexation of the territory was a "perversion of our national destiny". Historians continued that debate; some have interpreted American acquisition of other Pacific island groups in the 1890s as an extension of manifest destiny across the Pacific Ocean. Others have regarded it as the antithesis of manifest destiny and merely imperialism. Spanish–American War and the Philippines In 1898, the United States intervened in the Cuban insurrection and launched the Spanish–American War to force Spain out. According to the terms of the Treaty of Paris, Spain relinquished sovereignty over Cuba and ceded the Philippine Islands, Puerto Rico, and Guam to the United States. The terms of cession for the Philippines involved a payment of the sum of $20 million by the United States to Spain. The treaty was highly contentious and denounced by William Jennings Bryan, who tried to make it a central issue in the 1900 election. He was defeated in a landslide by McKinley. The Teller Amendment, passed unanimously by the U.S. Senate before the war and proclaiming Cuba "free and independent", forestalled annexation of the island. The Platt Amendment (1902), however, established Cuba as a virtual protectorate of the United States. The acquisition of Guam, Puerto Rico, and the Philippines after the war with Spain marked a new chapter in U.S. history. Traditionally, territories were acquired by the United States for the purpose of becoming new states on equal footing with already existing states. 
These islands, however, were acquired as colonies rather than prospective states. The process was validated by the Insular Cases. The Supreme Court ruled that full constitutional rights did not automatically extend to all areas under American control. Nevertheless, in 1917, Puerto Ricans were all made full American citizens via the Jones Act. It also provided for a popularly elected legislature and a bill of rights, and authorized the election of a Resident Commissioner who has a voice (but no vote) in Congress. According to Frederick Merk, these colonial acquisitions marked a break from the original intention of manifest destiny. Previously, "Manifest Destiny had contained a principle so fundamental that a Calhoun and an O'Sullivan could agree on it—that a people not capable of rising to statehood should never be annexed. That was the principle thrown overboard by the imperialism of 1899." Albert J. Beveridge maintained the contrary in his September 25, 1900, speech at the Auditorium in Chicago. He declared that the current desire for Cuba and the other acquired territories was identical to the views expressed by Washington, Jefferson and Marshall. Moreover, "the sovereignty of the Stars and Stripes can be nothing but a blessing to any people and to any land." The Philippines was eventually given its independence in 1946; Guam and Puerto Rico have special status to this day, but all their people have United States citizenship. Rudyard Kipling's poem "The White Man's Burden", which was subtitled "The United States and the Philippine Islands", was a famous expression of imperialist sentiments, which were common at the time. The nascent revolutionary government, desirous of independence, however, resisted the United States in the Philippine–American War in 1899. After the war began, William Jennings Bryan, an opponent of overseas expansion, wrote, "'Destiny' is not as manifest as it was a few weeks ago." The belief in an American mission to promote and defend democracy throughout the world, as expounded by Thomas Jefferson and his "Empire of Liberty" and Abraham Lincoln, was continued by Theodore Roosevelt and Woodrow Wilson. Under Harry Truman (and Douglas MacArthur) it was implemented in practice in the American rebuilding of Japan and Germany after World War II. George W. Bush in the 21st century applied it to the Middle East, in Afghanistan and Iraq. Tyner argues that in proclaiming a mission to combat terror, Bush was continuing a long tradition of prophetic presidential action to be the beacon of freedom in the spirit of Manifest Destiny. After the turn of the twentieth century, the phrase manifest destiny declined in usage, as territorial expansion ceased to be promoted as being a part of America's "destiny". Under President Theodore Roosevelt the role of the United States in the New World was defined, in the 1904 Roosevelt Corollary to the Monroe Doctrine, as being an "international police power" to secure American interests in the Western Hemisphere. Roosevelt's corollary contained an explicit rejection of territorial expansion. In the past, manifest destiny had been seen as necessary to enforce the Monroe Doctrine in the Western Hemisphere, but now expansionism had been replaced by interventionism as a means of upholding the doctrine. President Woodrow Wilson continued the policy of interventionism in the Americas, and attempted to redefine both manifest destiny and America's "mission" on a broader, worldwide scale. 
Wilson led the United States into World War I with the argument that "The world must be made safe for democracy." In his 1920 message to Congress after the war, Wilson stated: ... I think we all realize that the day has come when Democracy is being put upon its final test. The Old World is just now suffering from a wanton rejection of the principle of democracy and a substitution of the principle of autocracy as asserted in the name, but without the authority and sanction, of the multitude. This is the time of all others when Democracy should prove its purity and its spiritual power to prevail. It is surely the manifest destiny of the United States to lead in the attempt to make this spirit prevail. This was the only time a president had used the phrase "manifest destiny" in his annual address. Wilson's version of manifest destiny was a rejection of expansionism and an endorsement (in principle) of self-determination, emphasizing that the United States had a mission to be a world leader for the cause of democracy. This U.S. vision of itself as the leader of the "Free World" would grow stronger in the 20th century after World War II, although rarely would it be described as "manifest destiny", as Wilson had done. "Manifest Destiny" is sometimes used by critics of U.S. foreign policy to characterize interventions in the Middle East and elsewhere. In this usage, "manifest destiny" is interpreted as the underlying cause of what is denounced by some as "American imperialism". The positive phrasing is "nation building", and State Department official Karin Von Hippel notes that the U.S. has "been involved in nation-building and promoting democracy since the middle of the nineteenth century and 'Manifest Destiny'". The legacy is a complex one. The belief in an American mission to promote and defend democracy throughout the world, as expounded by Thomas Jefferson and his "Empire of Liberty", and by Abraham Lincoln, Woodrow Wilson and George W. Bush, continues to have an influence on American political ideology. Bush looked at the American success after 1945 in imposing democracy in Japan as a model. Under Douglas MacArthur, the Americans "were imbued with a sense of manifest destiny" says historian John Dower. Relationship with German Lebensraum ideology German geographer Friedrich Ratzel visited North America beginning in 1873 and saw the effects of American manifest destiny. Ratzel sympathized with the results of "manifest destiny", but he never used the term. Instead he relied on the Frontier Thesis of Frederick Jackson Turner. Ratzel promoted overseas colonies for Germany in Asia and Africa, but not an expansion into Slavic lands. Later German publicists misinterpreted Ratzel to argue for the right of the German race to expand within Europe; that notion was later incorporated into Nazi ideology, as Lebensraum. Harriet Wanklyn (1961) argues that Ratzel's theory was designed to advance science, and that politicians distorted it for political goals. Authors and literature - Thomas Hart Benton—Missouri senator, proponent of western expansion - Stephen A. Douglas—prominent spokesman of "Young America" - Horace Greeley—popularized the phrase "Go West, young man." 
- Duff Green—writer, politician, and prominent manifest destiny advocate - Frances Fuller Victor—prominent western historian and fiction writer who captured the spirit of western expansion - "The White Man's Burden"—an influential poem by Rudyard Kipling advocating colonization by the United States - Young America movement—a political and literary movement with connections to manifest destiny - Expansionism—for expansionist ideas in other countries - "John Gast, American Progress, 1872". Picturing U.S. History. City University of New York. External link in - Robert J. Miller (2006). Native America, Discovered And Conquered: Thomas Jefferson, Lewis & Clark, And Manifest Destiny. Greenwood. p. 120. - Merk 1963, p. 3 - Daniel Walker Howe, What Hath God Wrought: The Transformation of America 1815–1848, (2007) pp. 705–6 - "29. Manifest Destiny". American History. USHistory.org. External link in - Merk 1963, pp. 215–216 - Merk 1963, p. 215 - Ward 1962, pp. 136–137 - Hidalgo, Dennis R. (2003). "Manifest Destiny". encyclopedia.com taken from Dictionary of American History. Retrieved June 11, 2014. - Tuveson 1980, p. 91. - Merk 1963, p. 27 - O'Sullivan, John. "The Great Nation of Futurity". The United States Democratic Review Volume 0006 Issue 23 (November 1839). - O'Sullivan, John L., A Divine Destiny for America, 1845. - O'Sullivan, John L. (July–August 1845). "Annexation". United States Magazine and Democratic Review 17 (1): 5–11. Retrieved 2008-05-20. - See Julius Pratt, "The Origin Of 'Manifest Destiny'", American Historical Review, (1927) 32#4, pp. 795–98 in JSTOR. Linda S. Hudson has argued that it was coined by writer Jane McManus Storm; Greenburg, p. 20; Hudson 2001; O'Sullivan biographer Robert D. Sampson disputes Hudson's claim for a variety of reasons (See note 7 at Sampson 2003, pp. 244–245). - Adams 2008, p. 188. - Quoted in Thomas R. Hietala, Manifest design: American exceptionalism and Empire (2003) p. 255 - Robert W. Johannsen, "The Meaning of Manifest Destiny", in Johannsen 1997. - McCrisken, Trevor B., "Exceptionalism: Manifest Destiny" in Encyclopedia of American Foreign Policy (2002), Vol. 2, p. 68 - Weinberg 1935, p. 145; Johannsen 1997, p. 9. - Johannsen 1997, p. 10 - "Prospectus of the New Series", The American Whig Review Volume 7 Issue 1 (Jan 1848) p. 2 - Weeks 1996, p. 61. - Justin B. Litke, "Varieties of American Exceptionalism: Why John Winthrop Is No Imperialist", Journal of Church and State, 54 (Spring 2012), 197–213. - Ford 2010, pp. 315–319 - Somkin 1967, pp. 68–69 - Johannsen 1997, pp. 18–19. - Rossiter 1950, pp. 19–20 - John Mack Faragher et al. Out of Many: A History of the American People, (2nd ed. 1997) page 413 - Reginald Horsman. Race and Manifest Destiny. pp. 2, 6. - Witham, Larry (2007). A City Upon a Hill: How Sermons Changed the Course of American History. New York: Harper. - Merk 1963, p. 40 - Byrnes, Mark Eaton (2001). James K. Polk: A Biographical Companion. Santa Barbara, Calif: ABC-CLIO. p. 145. - Morrison, Michael A. (1997). Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War. Chapel Hill: University of North Carolina Press. - Mountjoy, Shane (2009). Manifest Destiny: Westward Expansion. New York: Chelsea House Publishers. - Joseph R. Fornieri (April–June 2010). "Lincoln's Reflective Patriotism". Perspectives on Political Science 39 (2): 108–117. doi:10.1080/10457091003685019. - Kurt Hanson; Robert L. Beisner. American Foreign Relations since 1600: A Guide to the Literature, Second Edition. ABC-CLIO. pp. 
313. ISBN 978-1-57607-080-2. - Stuart and Weeks call this period the "era of manifest destiny" and the "age of manifest destiny", respectively. - Nugent, pp. 74–79 - The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us experience for the attack of Halifax the next, and the final expulsion of England from the American continent.—To William Duane. vi, 75. Ford ed., ix, 366. (M., August 1812.) - Charles M. Gates (1940). "The West in American Diplomacy, 1812–1815". Mississippi Valley Historical Review 26 (4): 499–510. doi:10.2307/1896318. JSTOR 1896318. quote on p. 507. - Continental and Continentalism, sociologyindex.com. Archived May 9, 2015 at the Wayback Machine - Adams quoted in McDougall 1997, p. 78. - McDougall 1997, p. 74; Weinberg 1935, p. 109. - Treaty popular: Stuart 1988, p. 104; compass quote p. 84. - Merk 1963, pp. 144–147; Fuller 1936; Hietala 2003. - Calhoun, John C. (1848). "Conquest of Mexico". TeachingAmericanHistory.org. Retrieved 2007-10-19. - McDougall 1997, pp. 87–95. - Fuller 1936, pp. 119, 122, 162 and passim. - Billy H. Gilley (1979). "'Polk's War' and the Louisiana Press". Louisiana History 20: 5–23. JSTOR 4231864. - Robert A. Brent (1969). "Mississippi and the Mexican War". Journal of Mississippi History 31 (3): 202–14. - Ripley 1985 - "A Critical Day". The New York Times. March 4, 1854. - Crenshaw 1941 - Greene 2006, pp. 1–50[citation not found] - Crocker 2006, p. 150. - Weeks 1996, pp. 144–52. - Merk 1963, p. 214. - Lesli J. Favor (2005). "6. Settling the West". A Historical Atlas of America's Manifest Destiny. Rosen. - "Teaching With Documents:The Homestead Act of 1862". The U.S. National Archives and Records Administration. Retrieved 2012-06-29. - Robert E. Greenwood PhD (2007). Outsourcing Culture: How American Culture has Changed From "We the People" Into a One World Government. Outskirts Press. p. 97. - Rajiv Molhotra (2009). "American Exceptionalism and the Myth of the American Frontiers". In Rajani Kannepalli Kanth. The Challenge of Eurocentrism. Palgrave MacMillan. pp. 180, 184, 189, 199. - Paul Finkelman and Donald R. Kennon (2008). Congress and the Emergence of Sectionalism. Ohio University Press. pp. 15,141,254. - Ben Kiernan (2007). Blood and Soil: A World History of Genocide and Extermination from Sparta to Darfur. Yale University Press. pp. 328, 330. - Prucha 1995, p. 137, "I believe the Indian then to be in body and mind equal to the white man," (Jefferson letter to the Marquis de Chastellux, June 7, 1785). - American Indians. Thomas Jefferson's Monticello. Retrieved April 26, 2015. - Francis Parkman (1913) . The conspiracy of Pontiac and the Indian war after the conquest of Canada. p. 9. - Strong 1885, pp. 107–108 - Official Manual of the State of Missouri. Office of the Secretary of State of Missouri. 1895. p. 245. - Republican Party platform; context not clearly defined, Merk 1963, p. 241. - McKinley quoted in McDougall 1997, pp. 112–13; Merk 1963, p. 257. - Bailey, Thomas A. (1937). "Was the Presidential Election of 1900 a Mandate on Imperialism?". Mississippi Valley Historical Review 24 (1): 43–52. doi:10.2307/1891336. JSTOR 1891336. - Merk 1963, p. 257. - Beveridge 1908, p. 123 - Kipling, Rudyard. "The White Man's Burden". - Bryan 1899. - James A. Tyner (2005). Iraq, Terror, and the Philippines' Will to War. Rowman & Littlefield. p. 62. - "Safe for democracy"; 1920 message; Wilson's version of manifest destiny: Weinberg 1935, p. 471. - Karin Von Hippel (2000). 
Democracy by Force: U.S. Military Intervention in the Post-Cold War World. Cambridge University Press. p. 1. - Charles Philippe David and David Grondin (2006). Hegemony Or Empire?: The Redefinition of Us Power Under George W. Bush. Ashgate. pp. 129–30. - Stephanson 1996, pp. 112–29 examines the influence of manifest destiny in the 20th century, particularly as articulated by Woodrow Wilson. - Scott, Donald. "The Religious Origins of Manifest Destiny". National Humanities Center. Retrieved 2011-10-26. - John W. Dower (2000). Embracing Defeat: Japan in the Wake of World War II. W. W. Norton. p. 217. - Mattelart 1996, pp. 212–216. - Klinghoffer 2006, p. 86. - "A German Appraisal of the United States". The Atlantic Monthly. January 1895. pp. 124–128. Retrieved 2009-10-17. - Woodruff D. Smith (February 1980). "Friedrich Ratzel and the Origins of Lebensraum". German Studies Review 3 (1): 51–68. doi:10.2307/1429483. JSTOR 1429483. - Wanklyn 1961, pp. 36–40. - Adams, Sean Patrick (2008). The Early American Republic: A Documentary Reader. Wiley–Blackwell. ISBN 978-1-4051-6098-8. - Bryan, William Jennings (1899). Republic or Empire?. - Beveridge, Albert J. (1908). The Meaning of the Times and Other Speeches. Indianapolis: The Bobbs–Merrill Company. - Crenshaw, Ollinger (1941). "The Knights of the Golden Circle: The Career of George Bickley". The American Historical Review 1 (42): 23–50. - Crocker, H. W. (2006). Don't tread on me: a 400-year history of America at war, from Indian fighting to terrorist hunting. Crown Forum. ISBN 978-1-4000-5363-6. - Cheery, Conrad (1998). God's New Israel. The University of North Carolina Press. p. 424. ISBN 978-0-8078-4754-1. Retrieved 2012-08-02. - Greene, Laurence (2008). The Filibuster. New York: Kessinger Publishing, LLC. p. 384. ISBN 978-1-4366-9531-2. Retrieved 2012-08-02. - Fisher, Philip (1985). Hard facts: setting and form in the American novel. Oxford University Press. ISBN 978-0-19-503528-5. - Fuller, John Douglas Pitts (1936). The movement for the acquisition of all Mexico, 1846–1848. Johns Hopkins Press. - Greenberg, Amy S. (2005). Manifest manhood and the antebellum American empire. Cambridge University Press. ISBN 978-0-521-84096-5. - Hietala, Thomas R. (February 2003). Manifest Design: American Exceptionalism and Empire. Cornell University Press. ISBN 978-0-8014-8846-7. Previously published as Hietala, Thomas R. (1985). Manifest design: anxious aggrandizement in late Jacksonian America. Cornell University Press. ISBN 978-0-8014-1735-1. - Hudson, Linda S. (2001). Mistress of Manifest Destiny: a biography of Jane McManus Storm Cazneau, 1807–1878. Texas State Historical Association. ISBN 978-0-87611-179-6. - Johannsen, Robert Walter (1997). Manifest destiny and empire: American antebellum expansionism. Texas A&M University Press. ISBN 978-0-89096-756-0. - Klinghoffer, Arthur Jay (2006). The power of projections: how maps reflect global politics and history. Greenwood Publishing Group. ISBN 978-0-275-99135-7. - Ford, Paul L., ed. (2010). Works of Thomas Jefferson, IX. Cosmo Press Inc. ISBN 978-1-61640-210-5. - May, Robert E. (2004). Manifest Destiny's Underworld. The University of North Carolina Press. p. 448. ISBN 978-0-8078-5581-2. Retrieved 2012-08-02. - Mattelart, Armand (1996). The Invention of Communication. U of Minnesota Press. ISBN 978-0-8166-2697-7. - McDougall, Walter A. (1997). Promised land, crusader state: the American encounter with the world since 1776. Houghton Mifflin. ISBN 978-0-395-83085-7. - Merk, Frederick (1963). 
Manifest Destiny and Mission in American History. Harvard University Press. ISBN 978-0-674-54805-3. - Prucha, Francis Paul (1995). The great father: the United States government and the American Indians. U of Nebraska Press. ISBN 978-0-8032-8734-1. - Ripley, Peter C. (1985). The Black Abolitionist Papers. Chapel Hill, NC: University of North Carolina Press. p. 646. - Rossiter, Clinton (1950). "The American Mission". The American Scholar (The American Scholar) (20): 19–20. - Sampson, Robert (2003). John L. O'Sullivan and his times. Kent State University Press. ISBN 978-0-87338-745-3. - Stephanson, Anders (1996). Manifest destiny: American expansionism and the empire of right. Hill and Wang. ISBN 978-0-8090-1584-9. - Stuart, Reginald C. (1988). United States expansionism and British North America, 1775–1871. University of North Carolina Press. ISBN 978-0-8078-1767-4. - Somkin, Fred (1967). Unquiet Eagle: Memory and Desire in the Idea of American Freedom, 1815–1860. Ithaca, N.Y. - Strong, Josiah (1885). Our Country. Baker and Taylor Company. - Tuveson, Ernest Lee (1980). Redeemer nation: the idea of America's millennial role. University of Chicago Press. ISBN 978-0-226-81921-1. - Weeks, William Earl (1996). Building the continental empire: American expansion from the Revolution to the Civil War. Ivan R. Dee. ISBN 978-1-56663-135-8. - Ward, John William (1962). Andrew Jackson : Symbol for an Age: Symbol for an Age. Oxford University Press. ISBN 978-0-19-992320-5. - Weinberg, Albert Katz; Walter Hines Page School of International Relations (1935). Manifest destiny: a study of nationalist expansionism in American history. The Johns Hopkins Press. ISBN 0-404-14706-2. - Wanklyn, Harriet (1961). Friedrich Ratzel: A Biographical Memoir and Bibliography. - Dunning, Mike (2001). "Manifest Destiny and the Trans-Mississippi South: Natural Laws and the Extension of Slavery into Mexico.". Journal of Popular Culture 35 (2): 111–127. doi:10.1111/j.0022-3840.2001.00111.x. ISSN 0022-3840. Fulltext: Ebsco. - Pinheiro, John C (2003). "'Religion Without Restriction': Anti-catholicism, All Mexico, and the Treaty of Guadalupe Hidalgo". Journal of the Early Republic 23 (1): 69–96. doi:10.2307/3124986. ISSN 0275-1275. - Sampson, Robert D (2002). "The Pacifist-reform Roots of John L. O'Sullivan's Manifest Destiny". Mid-America 84 (1–3): 129–144. ISSN 0026-2927. - Brown, Charles Henry (January 1980). Agents of manifest destiny: the lives and times of the filibusters. University of North Carolina Press. ISBN 978-0-8078-1361-4. - Burns, Edward McNall (1957). The American idea of mission: concepts of national purpose and destiny. Rutgers University Press. - Fresonke, Kris (2003). West of Emerson: the design of manifest destiny. University of California Press. ISBN 978-0-520-23185-6. - Gould, Lewis L. (1980). The Presidency of William McKinley. Regents Press of Kansas. ISBN 978-0-7006-0206-3. - Graebner, Norman A. (1968). Manifest destiny. Bobbs–Merrill. ISBN 0-672-50986-5. - Heidler, David Stephen; Heidler, Jeanne T. (2003). Manifest destiny. Greenwood Press. ISBN 978-0-313-32308-9. - Hofstadter, Richard (1965). "Cuba, the Philippines, and Manifest Destiny". The paranoid style in American politics: and other essays. Knopf. - Horsman, Reginald (1981). Race and manifest destiny: The origins of American racial Anglo-Saxonism. Harvard University Press. ISBN 978-0-674-94805-1. - McDonough, Matthew Davitian. Manifestly Uncertain Destiny: The Debate over American Expansionism, 1803–1848. 
PhD dissertation, Kansas State University, 2011. - Merk, Frederick, and Lois Bannister Merk. Manifest Destiny and Mission in American History: A Reinterpretation. New York: Knopf, 1963. - May, Robert E. (2002). Manifest destiny's underworld: filibustering in antebellum America. University of North Carolina Press. ISBN 978-0-8078-2703-1. - Morrison, Michael A. (August 18, 1999). Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War. UNC Press Books. ISBN 978-0-8078-4796-1. - Sampson, Robert (2003). John L. O'Sullivan and his times. Kent State University Press. ISBN 978-0-87338-745-3. - Smith, Gene A. (2000). Thomas Ap Catesby Jones: commodore of Manifest Destiny. Naval Institute Press. ISBN 978-1-55750-848-5. |Wikiquote has quotations related to: Manifest destiny| - Manifest Destiny and the U.S.–Mexican War: Then and Now - President Polk's Inaugural Address - Gayle Olson-Raymer, "The Expansion of Empire", 15-page teaching guide for high school students, Zinn Education Project/Rethinking Schools
https://en.wikipedia.org/wiki/Manifest_Destiny
4.21875
Long landslides spotted on Saturn's moon Iapetus could help provide clues to similar movements of material on Earth. Scientists studying the icy satellite have determined that flash heating could cause falling ice to travel 10 to 15 times farther than previously expected on Iapetus. Extended landslides can be found on Mars and Earth, but are more likely to be composed of rock than ice. Despite the differences in materials, scientists believe there could be a link between the long-tumbling debris on all three bodies. "We think there's more likely a common mechanism for all of this, and we want to be able to explain all of the observations," lead scientist Kelsi Singer of Washington University told SPACE.com. Giant landslides stretching as far as 50 miles (80 kilometers) litter the surface of Iapetus. Singer and her team identified 30 such displacements by studying images taken by NASA's Cassini spacecraft. Composed almost completely of ice, Iapetus already stands out from other moons. While most bodies in the solar system have rocky mantles and metallic cores, with an icy layer on top, scientists think Iapetus is composed almost completely of frozen water. There are bits of rock and carbonaceous material that make half the moon appear darker than the other, but this seems to be only a surface feature. Ice on Iapetus is different from ice found on Earth. Because the moon's temperature can get as low as minus 300 degrees Fahrenheit (about minus 184 degrees Celsius), the moon's ice is very hard and very dry. "It's more like what we experience on Earth as rock, just because it's so cold," Singer said. Slow-moving ice creates a lot of friction, so when the ice falls from high places, scientists expected that it would behave much like rock on Earth does. Instead, they found that it traveled significantly farther than predicted. How far a landslide runs is usually related to how far it falls, Singer explained. Most of the time, debris of any type loses energy before traveling twice the distance it fell from. But on Iapetus, the pieces of ice move 20 to 30 times as far as their falling height. Flash heating could be providing that extra push. Faster and farther Flash heating occurs when material falls so fast that the heat doesn't have time to dissipate. Instead, it stays concentrated in small areas, reducing the friction between the sliding objects and allowing them to travel faster and farther than they would under normal conditions. "They're almost acting more like a fluid," Singer said. On Iapetus, falling material has a good chance of reaching great speeds because there are a number of great heights to fall from. The moon hosts a ring of mountains around its bulging equator that can tower as high as 12 miles (20 km), and the longest run-outs discovered are associated with the ridge and with impact-basin walls. Scientists think that the landslides are relatively recent, and could have been triggered by impacts in the last billion years or so. "You don't see a lot of small craters on the landslide material itself," Singer said, although the surrounding terrain boasts evidence of bombardment. Over time, landscapes tend to be dotted by falling rocks, so the less cratered a surface is, the younger it is thought to be. Resting on the ridges and walls, the material gradually becomes more unstable. Close impacts could set it off, but powerful, distant impacts reverberating through the ice could also send it tumbling. 
The research was published in the July 29 issue of the journal Nature Geoscience. Connecting ice and rock Differences in gravity, atmosphere and water content make landslides seen on Iapetus difficult to duplicate in the laboratory. But the fact that they happen on different types of worlds makes it less likely that the mechanism triggering the extended slide is dependent on things unique to either environment. "We have them on Iapetus, Earth and Mars," Singer said. "Theoretically, they should be very similar." Singer pointed out the implications for friction within fault lines, which produces earthquakes. As plates on Earth move, the rocks within a fault snag on each other, until forces drag them apart. But sometimes, the faults slip farther than scientists can explain based on their understanding of friction. If flash heating occurs within the faults, it could explain why the two opposing faces slide the way they do, and promote a better understanding of earthquakes. In such cases, flash heating would cause minerals to melt and reform, producing an unexpected material around the faults. Some such materials have been identified at the base of long landslides on Earth. "If something else is going on, like flash heating, or something making [the material] have a lower coefficient of friction, this would affect any models that use the coefficient of friction," Singer said.
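To make the run-out argument above concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common "Heim's ratio" approximation, in which the effective coefficient of friction of a sliding mass is roughly its drop height divided by its run-out length; the roughly 2x and 20-30x run-out figures come from the article, but treating them as friction coefficients is an illustrative assumption, not the authors' published model.

```python
# Illustrative sketch only: approximates the effective coefficient of friction of a
# landslide as drop height H divided by run-out length L (Heim's ratio). The run-out
# multiples are taken from the article; framing them as friction coefficients is an
# assumption made here for illustration.

def effective_friction(drop_height_km: float, runout_km: float) -> float:
    """Heim's-ratio estimate of effective sliding friction, mu ~ H / L."""
    return drop_height_km / runout_km

if __name__ == "__main__":
    h = 10.0  # hypothetical drop height in km, roughly the scale of Iapetus's equatorial ridge
    # Typical debris on Earth or Iapetus as originally expected: run-out about twice the drop height.
    print(f"ordinary debris: mu ~ {effective_friction(h, 2 * h):.2f}")   # ~0.50
    # Observed Iapetus landslides: run-out 20 to 30 times the drop height.
    print(f"Iapetus (20x):   mu ~ {effective_friction(h, 20 * h):.3f}")  # ~0.050
    print(f"Iapetus (30x):   mu ~ {effective_friction(h, 30 * h):.3f}")  # ~0.033
    # The order-of-magnitude drop in effective friction is what flash heating is invoked to explain.
```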
http://www.foxnews.com/tech/2012/07/31/50-mile-landslides-spotted-on-saturn-icy-moon.html?intcmp=related
4.125
Commutative Property of Addition Teacher Resources Find Commutative Property of Addition educational ideas and activities Showing 21 - 40 of 148 resources Solve for Unknowns Using the Commutative Property of Addition What is the commutative property and how do you use it? Find out how to solve for unknowns using this very special property of addition. Excellent visuals and real-world stories are used to define the commutative property in a way that... 4 mins 3rd - 4th Math CCSS: Designed Arithmetic Commutative Property of Addition 1 Worksheet Skills and drill practice may not thrill your class, but as they say, practice makes perfect. They solve 16 single-digit addition problems that require them to use the commutative property. Additional worksheets are available through this... 1st - 2nd Math How Do You Add and Subtract a Bunch of Numbers with Different Signs? So you have an expression of positive and negative numbers and you want to add and subtract them. Do you do the operations in the order they are written? Do you combine some? Do you move them around? Do you change their signs? The... 3 mins 6th - 12th Math Common Core State Standards 1st Grade Math Here is the complete set of the math practice standards and the Common Core standards for first grade. They cover operations & algebraic thinking, operations in base 10, measurement and data, and geometry. Illustrated nicely with fun... 1st Math CCSS: Designed
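Since the resources above all center on the same idea, a tiny illustrative sketch (Python, not taken from any of the listed lessons) shows both the property itself and the "solve for unknowns" use it enables: because a + b = b + a, a missing addend can be found by subtracting the known addend from the total, no matter which side of the sum it sits on. The numbers below are made up for the example.

```python
# Minimal illustration of the commutative property of addition and of using it
# to solve for an unknown addend. The values are invented for this example and
# do not come from the lesson plans listed above.

def missing_addend(total: int, known: int) -> int:
    """Solve known + x = total (equivalently x + known = total) for x."""
    return total - known

if __name__ == "__main__":
    a, b = 7, 5
    print(a + b == b + a)        # True: reordering addends never changes the sum
    # 4 + x = 9 and x + 4 = 9 are the same problem by commutativity:
    print(missing_addend(9, 4))  # 5
```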
http://www.lessonplanet.com/lesson-plans/commutative-property-of-addition/2
4.09375
Gothic architecture is a style of architecture that flourished during the high and late medieval period. It evolved from Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting into the 16th century, Gothic architecture was known during the period as Opus Francigenum ("French work") with the term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch, the ribbed vault and the flying buttress. Gothic architecture is most familiar as the architecture of many of the great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls, guild halls, universities and to a less prominent extent, private dwellings, such as dorms and rooms. It is in the great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully, its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride. A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures of architectural distinction while many of the larger churches are considered priceless works of art and are listed with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is largely a study of cathedrals and churches. A series of Gothic revivals began in mid-18th-century England, spread through 19th-century Europe and continued, largely for ecclesiastical and university structures, into the 20th century. - 1 The term "Gothic" - 2 Definition and scope - 3 Influences - 4 Architectural background - 5 Architectural development - 6 Characteristics of Gothic cathedrals and great churches - 7 Regional differences - 8 Other Gothic buildings - 9 Gothic survival and revival - 10 See also - 11 Notes - 12 References - 13 Further reading - 14 External links The term "Gothic" The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his Lives of the Artists to describe what is now considered the Gothic style, and in the introduction to the Lives he attributes various architectural features to "the Goths" whom he holds responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. At the time in which Vasari was writing, Italy had experienced a century of building in the Classical architectural vocabulary revived in the Renaissance and seen as evidence of a new Golden Age of learning and refinement. The Renaissance had then overtaken Europe, overturning a system of culture that, prior to the advent of printing, was almost entirely focused on the Church and was perceived, in retrospect, as a period of ignorance and superstition. Hence, François Rabelais, also of the 16th century, imagines an inscription over the door of his utopian Abbey of Thélème, "Here enter no hypocrites, bigots..." slipping in a slighting reference to "Gotz" and "Ostrogotz." In English 17th-century usage, "Goth" was an equivalent of "vandal", a savage despoiler with a Germanic heritage, and so came to be applied to the architectural styles of northern Europe from before the revival of classical types of architecture. 
According to a 19th-century correspondent in the London Journal Notes and Queries: There can be no doubt that the term 'Gothic' as applied to pointed styles of ecclesiastical architecture was used at first contemptuously, and in derision, by those who were ambitious to imitate and revive the Grecian orders of architecture, after the revival of classical literature. Authorities such as Christopher Wren lent their aid in deprecating the old medieval style, which they termed Gothic, as synonymous with everything that was barbarous and rude. On 21 July 1710, the Académie d'Architecture met in Paris, and among the subjects they discussed, the assembled company noted the new fashions of bowed and cusped arches on chimneypieces being employed "to finish the top of their openings. The Company disapproved of several of these new manners, which are defective and which belong for the most part to the Gothic." Definition and scope Gothic architecture is the architecture of the late medieval period, characterised by use of the pointed arch. Other features common to Gothic architecture are the rib vault, buttresses, including flying buttresses; large windows which are often grouped, or have tracery; rose windows, towers, spires and pinnacles; and ornate façades. As an architectural style, Gothic developed primarily in ecclesiastical architecture, and its principles and characteristic forms were applied to other types of buildings. Buildings of every type were constructed in the Gothic style, with evidence remaining of simple domestic buildings, elegant town houses, grand palaces, commercial premises, civic buildings, castles, city walls, bridges, village churches, abbey churches, abbey complexes and large cathedrals. The greatest number of surviving Gothic buildings are churches. These range from tiny chapels to large cathedrals, and although many have been extended and altered in different styles, a large number remain either substantially intact or sympathetically restored, demonstrating the form, character and decoration of Gothic architecture. The Gothic style is most particularly associated with the great cathedrals of Northern France, the Low Countries, England and Spain, with other fine examples occurring across Europe. At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms. The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Austria, Slovakia, Czech Republic and much of northern Italy (excluding Venice and Papal State) was nominally part of the Holy Roman Empire, but local rulers exercised considerable autonomy. France, Denmark, Poland, Hungary, Portugal, Scotland, Castile, Aragon, Navarre, Sicily and Cyprus were independent kingdoms, as was the Angevin Empire, whose Plantagenet kings ruled England and large domains in what was to become modern France. Norway came under the influence of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic League. Angevin kings brought the Gothic tradition from France to Southern Italy, while Lusignan kings introduced French Gothic architecture to Cyprus. Throughout Europe at this time there was a rapid growth in trade and an associated growth in towns. Germany and the Lowlands had large flourishing towns that grew in comparative peace, in trade and competition with each other, or united for mutual weal, as in the Hanseatic League. 
Civic building was of great importance to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic architecture for their kings, dukes and bishops, rather than grand town halls for their burghers. The Catholic Church prevailed across Europe at this time, influencing not only faith but also wealth and power. Bishops were appointed by the feudal lords (kings, dukes and other landowners) and they often ruled as virtual princes over large estates. The early Medieval periods had seen a rapid growth in monasticism, with several different orders being prevalent and spreading their influence widely. Foremost were the Benedictines whose great abbey churches vastly outnumbered any others in France and England. A part of their influence was that towns developed around them and they became centers of culture, learning and commerce. The Cluniac and Cistercian Orders were prevalent in France, the great monastery at Cluny having established a formula for a well planned monastic site which was then to influence all subsequent monastic building for many centuries. In the 13th century St. Francis of Assisi established the Franciscans, or so-called "Grey Friars", a mendicant order. The Dominicans, another mendicant order founded during the same period but by St. Dominic in Toulouse and Bologna, were particularly influential in the building of Italy's Gothic churches. From the 10th to the 13th century, Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland, Croatia, Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders do not define divisions of style. On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere, except where they have been carried by itinerant craftsmen, or the transfer of bishops. Regional differences that are apparent in the great abbey churches and cathedrals of the Romanesque period often become even more apparent in the Gothic. The local availability of materials affected both construction and style. In France, limestone was readily available in several grades, the very fine white limestone of Caen being favoured for sculptural decoration. England had coarse limestone and red sandstone as well as dark green Purbeck marble which was often used for architectural features. In Northern Germany, Netherlands, northern Poland, Denmark, and the Baltic countries local building stone was unavailable but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is called "Backsteingotik" in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was used for fortifications, but brick was preferred for other buildings. Because of the extensive and varied deposits of marble, many buildings were faced in marble, or were left with undecorated façade so that this might be achieved at a later date. The availability of timber also influenced the style of architecture, with timber buildings prevailing in Scandinavia. Availability of timber affected methods of roof construction across Europe. 
It is thought that the magnificent hammer-beam roofs of England were devised as a direct response to the lack of long straight seasoned timber by the end of the Medieval period, when forests had been decimated not only for the construction of vast roofs but also for ship building. Gothic architecture grew out of the previous architectural genre, Romanesque. For the most part, there was not a clean break, as there was to be later in Renaissance Florence with the revival of the Classical style by Filippo Brunelleschi in the early 15th century, and the sudden abandonment in Renaissance Italy of both the style and the structural characteristics of Gothic. By the 12th century, Romanesque architecture (termed Norman architecture in England because of its association with the Norman invasion), was established throughout Europe and provided the basic architectural forms and units that were to remain in evolution throughout the Medieval period. The important categories of building: the cathedral church, the parish church, the monastery, the castle, the palace, the great hall, the gatehouse, the civic building, had been established in the Romanesque period. Many architectural features that are associated with Gothic architecture had been developed and used by the architects of Romanesque buildings. These include ribbed vaults, buttresses, clustered columns, ambulatories, wheel windows, spires and richly carved door tympana. These were already features of ecclesiastical architecture before the development of the Gothic style, and all were to develop in increasingly elaborate ways. It was principally the widespread introduction of a single feature, the pointed arch, which was to bring about the change that separates Gothic from Romanesque. The technological change permitted a stylistic change which broke the tradition of massive masonry and solid walls penetrated by small openings, replacing it with a style where light appears to triumph over substance. With its use came the development of many other architectural devices, previously put to the test in scattered buildings and then called into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses, pinnacles and traceried windows which typify Gothic ecclesiastical architecture. But while pointed arch is so strongly associated with the Gothic style, it was first used in Western architecture in buildings that were in other ways clearly Romanesque, notably Durham Cathedral in the north of England, Monreale Cathedral and Cathedral of Cefalù in Sicily, Autun Cathedral in France. Possible Oriental influence The pointed arch, one of the defining attributes of Gothic, was earlier incorporated into Islamic architecture following the Islamic conquests of Roman Syria and the Sassanid Empire in the Seventh Century. The pointed arch and its precursors had been employed in Late Roman and Sassanian architecture; within the Roman context, evidenced in early church building in Syria and occasional secular structures, like the Roman Karamagara Bridge; in Sassanid architecture, in the parabolic and pointed arches employed in palace and sacred construction. Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily in 1090, the Crusades, beginning 1096, and the Islamic presence in Spain, may have influenced Medieval Europe's adoption of the pointed arch, although this hypothesis remains controversial. 
Certainly, in those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions with Islamic decorative forms, as seen, for example, in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral. Transition from Romanesque to Gothic architecture The characteristic forms that were to define Gothic architecture grew out of Romanesque architecture and developed at several different geographic locations, as the result of different influences and structural requirements. While barrel vaults and groin vaults are typical of Romanesque architecture, ribbed vaults were used in the naves of two Romanesque churches in Caen, Abbey of Saint-Étienne and Abbaye aux Dames in 1120. Another early example is the nave and apse area of the Cathedral of Cefalù in 1131. The ribbed vault over the north transept at Durham Cathedral in England, built from 1128 to 1133, is probably earlier still and was the first time pointed arches were used in a high vault. Other characteristics of early Gothic architecture, such as vertical shafts, clustered columns, compound piers, plate tracery and groups of narrow openings had evolved during the Romanesque period. The west front of Ely Cathedral exemplifies this development. Internally the three tiered arrangement of arcade, gallery and clerestory was established. Interiors had become lighter with the insertion of more and larger windows. The Basilica of Saint Denis is generally cited as the first truly Gothic building, however the distinction is best reserved for the choir, of which the ambulatory remains intact. Noyon Cathedral, also in France, saw the earliest completion of a rebuilding of an entire cathedral in the new style from 1150 to 1231. While using all those features that came to be known as Gothic, including pointed arches, flying buttresses and ribbed vaulting, the builders continued to employ many of the features and much of the character of Romanesque architecture including round-headed arch throughout the building, varying the shape to pointed where it was functionally practical to do so. At the Abbey Saint-Denis, Noyon Cathedral, Notre Dame de Paris and at the eastern end of Canterbury Cathedral in England, simple cylindrical columns predominate over the Gothic forms of clustered columns and shafted piers. Wells Cathedral in England, commenced at the eastern end in 1175, was the first building in which the designer broke free from Romanesque forms. The architect entirely dispensed with the round arch in favour of the pointed arch and with cylindrical columns in favour of piers composed of clusters of shafts which lead into the mouldings of the arches. The transepts and nave were continued by Adam Locke in the same style and completed in about 1230. The character of the building is entirely Gothic. Wells Cathedral is thus considered the first truly Gothic cathedral. The eastern end of the Basilica Church of Saint-Denis, built by Abbot Suger and completed in 1144, is often cited as the first truly Gothic building, as it draws together many of architectural forms which had evolved from Romanesque and typify the Gothic style. Suger, friend and confidant of the French Kings, Louis VI and Louis VII, decided in about 1137, to rebuild the great Church of Saint-Denis, attached to an abbey which was also a royal residence. He began with the West Front, reconstructing the original Carolingian façade with its single door. 
He designed the façade of Saint-Denis to be an echo of the Roman Arch of Constantine with its three-part division and three large portals to ease the problem of congestion. The rose window is the earliest-known example above the West portal in France. The façade combines both round arches and pointed arches of the Gothic style. At the completion of the west front in 1140, Abbot Suger moved on to the reconstruction of the eastern end, leaving the Carolingian nave in use. He designed a choir that would be suffused with light. To achieve his aims, his masons drew on the several new features which evolved or had been introduced to Romanesque architecture, the pointed arch, the ribbed vault, the ambulatory with radiating chapels, the clustered columns supporting ribs springing in different directions and the flying buttresses which enabled the insertion of large clerestory windows. The new structure was finished and dedicated on 11 June 1144, in the presence of the King. The choir and west front of the Abbey of Saint-Denis both became the prototypes for further building in the royal domain of northern France and in the Duchy of Normandy. Through the rule of the Angevin dynasty, the new style was introduced to England and spread throughout France, the Low Countries, Germany, Spain, northern Italy and Sicily. Characteristics of Gothic cathedrals and great churches While many secular buildings exist from the Late Middle Ages, it is in the buildings of cathedrals and great churches that Gothic architecture displays its pertinent structures and characteristics to the fullest advantage. A Gothic cathedral or abbey was, prior to the 20th century, generally the landmark building in its town, rising high above all the domestic structures and often surmounted by one or more towers and pinnacles and perhaps tall spires. These cathedrals were the skyscrapers of that day and would have been the largest buildings by far that Europeans would ever have seen. It is in the architecture of these Gothic churches that a unique combination of existing technologies established the emergence of a new building style. Those technologies were the ogival or pointed arch, the ribbed vault, and the buttress. The Gothic style, when applied to an ecclesiastical building, emphasizes verticality and light. This appearance was achieved by the development of certain architectural features, which together provided an engineering solution. The structural parts of the building ceased to be its solid walls, and became a stone skeleton comprising clustered columns, pointed ribbed vaults and flying buttresses. Most large Gothic churches and many smaller parish churches are of the Latin cross (or "cruciform") plan, with a long nave making the body of the church, a transverse arm called the transept and, beyond it, an extension which may be called the choir, chancel or presbytery. There are several regional variations on this plan. The nave is generally flanked on either side by aisles, usually single, but sometimes double. The nave is generally considerably taller than the aisles, having clerestory windows which light the central space. Gothic churches of the Germanic tradition, like St. Stephen of Vienna, often have nave and aisles of similar height and are called Hallenkirche. In the South of France there is often a single wide nave and no aisles, as at Sainte-Marie in Saint-Bertrand-de-Comminges. In some churches with double aisles, like Notre Dame, Paris, the transept does not project beyond the aisles. 
In English cathedrals transepts tend to project boldly and there may be two of them, as at Salisbury Cathedral, though this is not the case with lesser churches. The eastern arm shows considerable diversity. In England it is generally long and may have two distinct sections, both choir and presbytery. It is often square ended or has a projecting Lady Chapel, dedicated to the Virgin Mary. In France the eastern end is often polygonal and surrounded by a walkway called an ambulatory and sometimes a ring of chapels called a "chevet". While German churches are often similar to those of France, in Italy, the eastern projection beyond the transept is usually just a shallow apsidal chapel containing the sanctuary, as at Florence Cathedral. Structure: the pointed arch One of the defining characteristics of Gothic architecture is the pointed or ogival arch. Arches of a similar type were used in the Near East in pre-Islamic as well as Islamic architecture before they were structurally employed in medieval architecture. It is thought by some architectural historians that this was the inspiration for the use of the pointed arch in France, in otherwise Romanesque buildings, as at Autun Cathedral. Contrary to the diffusionist theory, it appears that there was simultaneously a structural evolution towards the pointed arch, for the purpose of vaulting spaces of irregular plan, or to bring transverse vaults to the same height as diagonal vaults. This latter occurs at Durham Cathedral in the nave aisles in 1093. Pointed arches also occur extensively in Romanesque decorative blind arcading, where semi-circular arches overlap each other in a simple decorative pattern, and the points are accidental to the design. The Gothic vault, unlike the semi-circular vault of Roman and Romanesque buildings, can be used to roof rectangular and irregularly shaped plans such as trapezoids. The other structural advantage is that the pointed arch channels the weight onto the bearing piers or columns at a steep angle. This enabled architects to raise vaults much higher than was possible in Romanesque architecture. While, structurally, use of the pointed arch gave a greater flexibility to architectural form, it also gave Gothic architecture a very different and more vertical visual character than Romanesque. In Gothic architecture the pointed arch is used in every location where a vaulted shape is called for, both structural and decorative. Gothic openings such as doorways, windows, arcades and galleries have pointed arches. Gothic vaulting above spaces both large and small is usually supported by richly moulded ribs. Rows of pointed arches upon delicate shafts form a typical wall decoration known as blind arcading. Niches with pointed arches and containing statuary are a major external feature. The pointed arch lent itself to elaborate intersecting shapes which developed within window spaces into complex Gothic tracery forming the structural support of the large windows that are characteristic of the style. A characteristic of Gothic church architecture is its height, both absolute and in proportion to its width, the verticality suggesting an aspiration to Heaven. A section of the main body of a Gothic church usually shows the nave as considerably taller than it is wide. In England the proportion is sometimes greater than 2:1, while the greatest proportional difference achieved is at Cologne Cathedral with a ratio of 3.6:1. The highest internal vault is at Beauvais Cathedral at 48 metres (157 ft). 
Externally, towers and spires are characteristic of Gothic churches both great and small, the number and positioning being one of the greatest variables in Gothic architecture. In Italy, the tower, if present, is almost always detached from the building, as at Florence Cathedral, and is often from an earlier structure. In France and Spain, two towers on the front are the norm. In England, Germany and Scandinavia this is often the arrangement, but an English cathedral may also be surmounted by an enormous tower at the crossing. Smaller churches usually have just one tower, but this may also be the case at larger buildings, such as Salisbury Cathedral or Ulm Minster, which has the tallest spire in the world, slightly exceeding that of Lincoln Cathedral, which at 160 metres (520 ft) was the tallest spire actually completed during the medieval period. The pointed arch lends itself to a suggestion of height. This appearance is characteristically further enhanced by both the architectural features and the decoration of the building. On the exterior, the verticality is emphasised in a major way by the towers and spires and in a lesser way by strongly projecting vertical buttresses, by narrow half-columns called attached shafts which often pass through several storeys of the building, by long narrow windows, vertical mouldings around doors and figurative sculpture which emphasises the vertical and is often attenuated. The roofline, gable ends, buttresses and other parts of the building are often terminated by small pinnacles, Milan Cathedral being an extreme example in the use of this form of decoration. On the interior of the building attached shafts often sweep unbroken from floor to ceiling and meet the ribs of the vault, like a tall tree spreading into branches. The verticals are generally repeated in the treatment of the windows and wall surfaces. In many Gothic churches, particularly in France, and in the Perpendicular period of English Gothic architecture, the treatment of vertical elements in gallery and window tracery creates a strongly unifying feature that counteracts the horizontal divisions of the interior structure. Expansive interior light has been a feature of Gothic cathedrals since the first structure was opened. The metaphysics of light in the Middle Ages led to clerical belief in its divinity and the importance of its display in holy settings. Much of this belief was based on the writings of Pseudo-Dionysius, a sixth-century mystic whose book, The Celestial Hierarchy, was popular among monks in France. Pseudo-Dionysius held that all light, even light reflected from metals or streamed through windows, was divine. To promote such faith, the abbot in charge of the Saint-Denis church on the north edge of Paris, the Abbot Suger, encouraged architects remodeling the building to make the interior as bright as possible. Ever since the remodeled Basilica of Saint-Denis opened in 1144, Gothic architecture has featured expansive windows, such as at Sainte Chapelle, York Minster and Gloucester Cathedral. The increase in window size between the Romanesque and Gothic periods is related to the use of the ribbed vault, and in particular, the pointed ribbed vault which channeled the weight to a supporting shaft with less outward thrust than a semicircular vault. Walls did not need to be so weighty. 
A further development was the flying buttress which arched externally from the springing of the vault across the roof of the aisle to a large buttress pier projecting well beyond the line of the external wall. These piers were often surmounted by a pinnacle or statue, further adding to the downward weight, and counteracting the outward thrust of the vault and buttress arch as well as stress from wind loading. The internal columns of the arcade with their attached shafts, the ribs of the vault and the flying buttresses, with their associated vertical buttresses jutting at right-angles to the building, created a stone skeleton. Between these parts, the walls and the infill of the vaults could be of lighter construction. Between the narrow buttresses, the walls could be opened up into large windows. Through the Gothic period, thanks to the versatility of the pointed arch, the structure of Gothic windows developed from simple openings to immensely rich and decorative sculptural designs. The windows were very often filled with stained glass which added a dimension of colour to the light within the building, as well as providing a medium for figurative and narrative art. The façade of a large church or cathedral, often referred to as the West Front, is generally designed to create a powerful impression on the approaching worshipper, demonstrating both the might of God and the might of the institution that it represents. One of the best known and most typical of such façades is that of Notre Dame de Paris. Central to the façade is the main portal, often flanked by additional doors. In the arch above the door, the tympanum, there is often a significant piece of sculpture, most frequently Christ in Majesty and Judgment Day. If there is a central doorjamb or a trumeau, then it frequently bears a statue of the Madonna and Child. There may be much other carving, often of figures in niches set into the mouldings around the portals, or in sculptural screens extending across the façade. Above the main portal there is generally a large window, like that at York Minster, or a group of windows such as those at Ripon Cathedral. In France there is generally a rose window like that at Reims Cathedral. Rose windows are also often found in the façades of churches of Spain and Italy, but are rarer elsewhere and are not found on the façades of any English Cathedrals. The gable is usually richly decorated with arcading or sculpture or, in the case of Italy, may be decorated, along with the rest of the façade, with polychrome marble and mosaic, as at Orvieto Cathedral. The West Front of a French cathedral and many English, Spanish and German cathedrals generally have two towers, which, particularly in France, express an enormous diversity of form and decoration. However, some German cathedrals have only one tower located in the middle of the façade (such as Freiburg Münster). Basic shapes of Gothic arches and stylistic character The way in which the pointed arch was drafted and utilised developed throughout the Gothic period. There were fairly clear stages of development, which did not, however, progress at the same rate, or in the same way in every country. Moreover, the names used to define various periods or styles within Gothic architecture differ from country to country. The simplest shape is the long opening with a pointed arch known in England as the lancet. Lancet openings are often grouped, usually as a cluster of three or five. Lancet openings may be very narrow and steeply pointed. 
Lancet arches are typically defined as two-centered arches whose radii are larger than the arch's span. Salisbury Cathedral is famous for the beauty and simplicity of its Lancet Gothic, known in England as the Early English Style. York Minster has a group of lancet windows each fifty feet high and still containing ancient glass. They are known as the Five Sisters. These simple undecorated grouped windows are found at Chartres and Laon Cathedrals and are used extensively in Italy. Many Gothic openings are based upon the equilateral form. In other words, when the arch is drafted, the radius is exactly the width of the opening and the centre of each arch coincides with the point from which the opposite arch springs. This makes the arch higher in relation to its width than a semi-circular arch which is exactly half as high as it is wide. The Equilateral Arch gives a wide opening of satisfying proportion useful for doorways, decorative arcades and large windows. The structural beauty of the Gothic arch means, however, that no set proportion had to be rigidly maintained. The Equilateral Arch was employed as a useful tool, not as a Principle of Design. This meant that narrower or wider arches were introduced into a building plan wherever necessity dictated. In the architecture of some Italian cities, notably Venice, semi-circular arches are interspersed with pointed ones. The Equilateral Arch lends itself to filling with tracery of simple equilateral, circular and semi-circular forms. The type of tracery that evolved to fill these spaces is known in England as Geometric Decorated Gothic and can be seen to splendid effect at many English and French Cathedrals, notably Lincoln and Notre Dame in Paris. Windows of complex design and of three or more lights or vertical sections are often designed by overlapping two or more equilateral arches. The Flamboyant Arch is one that is drafted from four points, the upper part of each main arc turning upwards into a smaller arc and meeting at a sharp, flame-like point. These arches create a rich and lively effect when used for window tracery and surface decoration. The form is structurally weak and has very rarely been used for large openings except when contained within a larger and more stable arch. It is not employed at all for vaulting. Some of the most beautiful and famous traceried windows of Europe employ this type of tracery. It can be seen at St Stephen's, Vienna, at Sainte Chapelle in Paris, and at the Cathedrals of Limoges and Rouen in France. In England the most famous examples are the West Window of York Minster with its design based on the Sacred Heart, the extraordinarily rich nine-light East Window at Carlisle Cathedral and the exquisite East window of Selby Abbey. Doorways surmounted by Flamboyant mouldings are very common in both ecclesiastical and domestic architecture in France. They are much rarer in England. A notable example is the doorway to the Chapter Room at Rochester Cathedral. The style was much used in England for wall arcading and niches. Prime examples are in the Lady Chapel at Ely, the Screen at Lincoln and externally on the façade of Exeter Cathedral. In German and Spanish Gothic architecture it often appears as openwork screens on the exterior of buildings. The style was used to rich and sometimes extraordinary effect in both these countries, notably on the famous pulpit in Vienna Cathedral. The depressed or four-centred arch is much wider than its height and gives the visual effect of having been flattened under pressure. 
Its structure is achieved by drafting two arcs which rise steeply from each springing point on a small radius and then turn into two arches with a wide radius and much lower springing point. This type of arch, when employed as a window opening, lends itself to very wide spaces, provided it is adequately supported by many narrow vertical shafts. These are often further braced by horizontal transoms. The overall effect produces a grid-like appearance of regular, delicate, rectangular forms with an emphasis on the perpendicular. It is also employed as a wall decoration in which arcade and window openings form part of the whole decorative surface. The style, known as Perpendicular, that evolved from this treatment is specific to England, although very similar to contemporary Spanish style in particular, and was employed to great effect through the 15th century and first half of the 16th as Renaissance styles were much slower to arrive in England than in Italy and France. It can be seen notably at the East End of Gloucester Cathedral where the East Window is said to be as large as a tennis court. There are three very famous royal chapels and one chapel-like Abbey which show the style at its most elaborate: King's College Chapel, Cambridge; St George's Chapel, Windsor; Henry VII's Chapel at Westminster Abbey; and Bath Abbey. However, very many simpler buildings, especially churches built during the wool boom in East Anglia, are fine examples of the style. Symbolism and ornamentation The Gothic cathedral represented the universe in microcosm and each architectural concept, including the loftiness and huge dimensions of the structure, was intended to convey a theological message: the great glory of God. The building becomes a microcosm in two ways. Firstly, the mathematical and geometrical nature of the construction is an image of the orderly universe, in which an underlying rationality and logic can be perceived. Secondly, the statues, sculptural decoration, stained glass and murals incorporate the essence of creation in depictions of the Labours of the Months and the Zodiac and sacred history from the Old and New Testaments and Lives of the Saints, as well as reference to the eternal in the Last Judgment and Coronation of the Virgin. Many churches were very richly decorated, both inside and out. Sculpture and architectural details were often bright with coloured paint of which traces remain at the Cathedral of Chartres. Wooden ceilings and panelling were usually brightly coloured. Sometimes the stone columns of the nave were painted, and the panels in decorative wall arcading contained narratives or figures of saints. These have rarely remained intact, but may be seen at the Chapterhouse of Westminster Abbey. Some important Gothic churches could be severely simple, such as the Basilica of Mary Magdalene in Saint-Maximin, Provence, where the local traditions of sober, massive Romanesque architecture were still strong. Wherever Gothic architecture is found, it is subject to local influences, and frequently the influence of itinerant stonemasons and artisans, carrying ideas between cities and sometimes between countries. Certain characteristics are typical of particular regions and often override the style itself, appearing in buildings hundreds of years apart. The distinctive characteristic of French cathedrals, and those in Germany and Belgium that were strongly influenced by them, is their height and their impression of verticality. 
Each French cathedral tends to be stylistically unified in appearance when compared with an English cathedral where there is great diversity in almost every building. They are compact, with slight or no projection of the transepts and subsidiary chapels. The west fronts are highly consistent, having three portals surmounted by a rose window, and two large towers. Sometimes there are additional towers on the transept ends. The east end is polygonal with ambulatory and sometimes a chevet of radiating chapels. In the south of France, many of the major churches are without transepts and some are without aisles. The distinctive characteristic of English cathedrals is their extreme length, and their internal emphasis upon the horizontal, which may be emphasised visually as much as or more than the vertical lines. Each English cathedral (with the exception of Salisbury) has an extraordinary degree of stylistic diversity, when compared with most French, German and Italian cathedrals. It is not unusual for every part of the building to have been built in a different century and in a different style, with no attempt at creating a stylistic unity. Unlike French cathedrals, English cathedrals sprawl across their sites, with double transepts projecting strongly and Lady Chapels tacked on at a later date, such as at Westminster Abbey. In the west front, the doors are not as significant as in France, the usual congregational entrance being through a side porch. The West window is very large and never a rose; rose windows are reserved for the transept gables. The west front may have two towers like a French Cathedral, or none. There is nearly always a tower at the crossing and it may be very large and surmounted by a spire. The distinctive English east end is square, but it may take a completely different form. Both internally and externally, the stonework is often richly decorated with carvings, particularly the capitals. Germany and Central Europe Romanesque architecture in Germany, Poland, the Czech Lands and Austria is characterised by its massive and modular nature. This is expressed in the Gothic architecture of Central Europe in the huge size of the towers and spires, often projected, but not always completed. The west front generally follows the French formula, but the towers are very much taller and, if complete, are surmounted by enormous openwork spires that are a regional feature. Because of the size of the towers, the section of the façade between them may appear narrow and compressed. The eastern end follows the French form. The distinctive character of the interior of German Gothic cathedrals is their breadth and openness. This is the case even when, as at Cologne, they have been modelled upon a French cathedral. German cathedrals, like the French, tend not to have strongly projecting transepts. There are also many hall churches (Hallenkirchen) without clerestory windows. In Catalonia and the territories under its influence (Northern Catalonia in France, the Balearic Islands, the Valencian Country and, among others, territories in the Italian islands), the Gothic style allowed the creation of very wide spaces with few ornaments; this is called the Catalan Gothic style (distinct from the Spanish or French styles). The most important examples of the Catalan Gothic style are the cathedrals of Girona, Barcelona, Perpignan and Palma (in Mallorca), the basilica of Santa Maria del Mar (in Barcelona), the basilica del Pi (in Barcelona), and the church of Santa Maria de l'Alba in Manresa. 
Spain and Portugal The distinctive characteristic of Gothic cathedrals of the Iberian Peninsula is their spatial complexity, with many areas of different shapes leading from each other. They are comparatively wide, and often have very tall arcades surmounted by low clerestories, giving a spacious appearance similar to that of the Hallenkirchen of Germany, as at the Church of the Batalha Monastery in Portugal. Many of the cathedrals are completely surrounded by chapels. Like English cathedrals, each is often stylistically diverse. This expresses itself both in the addition of chapels and in the application of decorative details drawn from different sources. Among the influences on both decoration and form are Islamic architecture and, towards the end of the period, Renaissance details combined with the Gothic in a distinctive manner. The West front, as at Leon Cathedral, typically resembles a French west front, but is wider in proportion to its height, often with greater diversity of detail and a combination of intricate ornament with broad plain surfaces. At Burgos Cathedral there are spires of German style. The roofline often has pierced parapets with comparatively few pinnacles. There are often towers and domes of a great variety of shapes and structural invention rising above the roof. The distinctive characteristic of Italian Gothic is the use of polychrome decoration, both externally as marble veneer on the brick façade and also internally where the arches are often made of alternating black and white segments, and where the columns may be painted red, the walls decorated with frescoes and the apse with mosaic. The plan is usually regular and symmetrical, and Italian cathedrals have few, widely spaced columns. The proportions are generally mathematically equilibrated, based on the square and the concept of "armonìa", and except in Venice where they loved flamboyant arches, the arches are almost always equilateral. Colours and moldings define the architectural units rather than blending them. Italian cathedral façades are often polychrome and may include mosaics in the lunettes over the doors. The façades have projecting open porches and ocular or wheel windows rather than roses, and do not usually have a tower. The crossing is usually surmounted by a dome. There is often a free-standing tower and baptistry. The eastern end usually has an apse of comparatively low projection. The windows are not as large as in northern Europe and, although stained glass windows are often found, the favourite narrative medium for the interior is the fresco. Other Gothic buildings - See also: Castle Synagogues were commonly built in the Gothic style in Europe during the Medieval period. A surviving example is the Old New Synagogue in Prague built in the 13th century. The Palais des Papes in Avignon is the finest complete large royal palace, alongside the Royal Palace of Olite, built during the 13th and 14th centuries for the kings of Navarre. Malbork Castle, built for the master of the Teutonic Order, is an example of Brick Gothic architecture. Partial survivals of former royal residences include the Doge's Palace of Venice, the Palau de la Generalitat in Barcelona, built in the 15th century for the kings of Aragon, and the famous Conciergerie, former palace of the kings of France, in Paris. Secular Gothic architecture can also be found in a number of public buildings such as town halls, universities, markets or hospitals. 
The Gdańsk, Wrocław and Stralsund town halls are remarkable examples of northern Brick Gothic built in the late 14th century. The Belfry of Bruges and Brussels Town Hall, built during the 15th century, are associated with the increasing wealth and power of the bourgeoisie in the late Middle Ages; by the 15th century, the merchants of the trading cities of Burgundy had acquired such wealth and influence that they could afford to express their power by funding lavishly decorated buildings of vast proportions. Such expressions of secular and economic power are also found in other late mediaeval commercial cities, including the Llotja de la Seda of Valencia, Spain, a purpose-built silk exchange dating from the 15th century, the partial remains of Westminster Hall in the Houses of Parliament in London, and the Palazzo Pubblico in Siena, Italy, a 13th-century town hall built to host the offices of the then prosperous republic of Siena. Other Italian cities such as Florence (Palazzo Vecchio), Mantua and Venice also host remarkable examples of secular public architecture. By the late Middle Ages university towns had grown in wealth and importance as well, and this was reflected in the buildings of some of Europe's ancient universities. Particularly remarkable examples still standing today include the Collegio di Spagna in the University of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the University of Prague in Bohemia; the Escuelas mayores of the University of Salamanca in Spain; the chapel of King's College, Cambridge; and the Collegium Maius of the Jagiellonian University in Kraków, Poland. In addition to monumental secular architecture, examples of the Gothic style in private buildings can be seen in surviving medieval portions of cities across Europe, above all the distinctive Venetian Gothic such as the Ca' d'Oro. The house of the wealthy early 15th-century merchant Jacques Coeur in Bourges is the classic Gothic bourgeois mansion, full of the asymmetry and complicated detail beloved of the Gothic Revival. Other cities with a concentration of secular Gothic include Bruges and Siena. Most surviving small secular buildings are relatively plain and straightforward; most windows are flat-topped with mullions, with pointed arches and vaulted ceilings often only found at a few focal points. The country-houses of the nobility were slow to abandon the appearance of being a castle, even in parts of Europe, like England, where defence had ceased to be a real concern. The living and working parts of many monastic buildings survive, for example at Mont Saint-Michel. Exceptional works of Gothic architecture can also be found on the islands of Sicily and Cyprus, in the walled cities of Nicosia and Famagusta. Also, the roofs of the Old Town Hall in Prague and Znojmo Town Hall Tower in the Czech Republic are excellent examples of late Gothic craftsmanship. Gothic survival and revival In 1663 at the Archbishop of Canterbury's residence, Lambeth Palace, a Gothic hammerbeam roof was built to replace that destroyed when the building was sacked during the English Civil War. Also in the late 17th century, some discreet Gothic details appeared on new construction at Oxford University and Cambridge University, notably on Tom Tower at Christ Church, Oxford, by Christopher Wren. It is not easy to decide whether these instances were Gothic survival or early appearances of Gothic revival. Ireland was a focus for Gothic architecture in the 17th and 18th centuries. 
Derry Cathedral (completed 1633), Sligo Cathedral (c. 1730), and Down Cathedral (1790-1818) are notable examples. The term "Planter's Gothic" has been applied to the most typical of these. In England in the mid-18th century, the Gothic style was more widely revived, first as a decorative, whimsical alternative to Rococo that is still conventionally termed 'Gothick', of which Horace Walpole's Twickenham villa "Strawberry Hill" is the familiar example. 19th- and 20th-century Gothic Revival In England, partly in response to a philosophy propounded by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during the second quarter of the 19th century, neo-Gothic began to be promoted by influential establishment figures as the preferred style for ecclesiastical, civic and institutional architecture. The appeal of this Gothic revival (which after 1837, in Britain, is sometimes termed Victorian Gothic) gradually widened to encompass "low church" as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as High Victorian Gothic. The Houses of Parliament in London, by Sir Charles Barry with interiors by a major exponent of the early Gothic Revival, Augustus Welby Pugin, are an example of the Gothic revival style from its earlier period in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From the second half of the 19th century onwards it became more common in Britain for neo-Gothic to be used in the design of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class housing schemes subsidised by philanthropy, though given the expense, less frequently than in the design of upper and middle-class housing. In France, simultaneously, the towering figure of the Gothic Revival was Eugène Viollet-le-Duc, who outdid historical Gothic constructions to create a Gothic as it ought to have been, notably at the fortified city of Carcassonne in the south of France and in some richly fortified keeps for industrial magnates. Viollet-le-Duc compiled and coordinated an Encyclopédie médiévale that was a rich repertory his contemporaries mined for architectural details. He effected vigorous restoration of crumbling detail of French cathedrals, including the Abbey of Saint-Denis and, famously, Notre Dame de Paris, many of whose most "Gothic" gargoyles are Viollet-le-Duc's. He taught a generation of reform-Gothic designers and showed how to apply Gothic style to modern structural materials, especially cast iron. In Germany, the great cathedral of Cologne and Ulm Minster, left unfinished for 600 years, were brought to completion, while in Italy, Florence Cathedral finally received its polychrome Gothic façade. New churches in the Gothic style were created all over the world, including Mexico, Argentina, Japan, Thailand, India, Australia, New Zealand, Hawaii and South Africa. As in Europe, the United States, Canada, Australia and New Zealand utilised Neo-Gothic for the building of universities, a fine example being the University of Sydney by Edmund Blacket. In Canada, the Canadian Parliament Buildings in Ottawa, designed by Thomas Fuller and Chilion Jones, with their huge centrally placed tower, are influenced by Flemish Gothic buildings. 
Although falling out of favour for domestic and civic use, Gothic for churches and universities continued into the 20th century with buildings such as Liverpool Cathedral, the Cathedral of Saint John the Divine, New York and São Paulo Cathedral, Brazil. The Gothic style was also applied to iron-framed city skyscrapers such as Cass Gilbert's Woolworth Building and Raymond Hood's Tribune Tower. Post-Modernism in the late 20th and early 21st centuries has seen some revival of Gothic forms in individual buildings, such as the Gare do Oriente in Lisbon, Portugal and a finishing of the Cathedral of Our Lady of Guadalupe in Mexico. About medieval Gothic in particular - Czech Gothic architecture - English Gothic architecture - French Gothic architecture - Italian Gothic architecture - List of Gothic architecture - Medieval architecture - Middle Ages in history - Polish Gothic architecture - Portuguese Gothic architecture - Renaissance of the 12th century - Spanish Gothic architecture - Gothic secular and domestic architecture About Gothic architecture more generally or in other senses - Architectural history - Architectural style - Architecture of cathedrals and great churches - Gothic Revival architecture - Carpenter Gothic - Collegiate Gothic in North America - Tented roof - Vasari, G. The Lives of the Artists. Translated with an introduction and notes by J.C. and P. Bondanella. Oxford: Oxford University Press (Oxford World’s Classics), 1991, pp. 117 & 527. ISBN 9780199537198 - Vasari, Giorgio. (1907) Vasari on technique: being the introduction to the three arts of design, architecture, sculpture and painting, prefixed to the Lives of the most excellent painters, sculptors and architects. G. Baldwin Brown Ed. Louisa S. Maclehose Trans. London: Dent, pp. b & 83. - "Gotz" is rendered as "Huns" in Thomas Urquhart's English translation. - Notes and Queries, No. 9. 29 December 1849 - Christopher Wren, 17th-century architect of St. Paul's Cathedral. - "pour terminer le haut de leurs ouvertures. La Compagnie a désapprové plusieurs de ces nouvelles manières, qui sont défectueuses et qui tiennent la plupart du gothique." Quoted in Fiske Kimball, The Creation of the Rococo, 1943, p 66. - "L'art Gothique", section: "L'architecture Gothique en Angleterre" by Ute Engel: L'Angleterre fut l'une des premieres régions à adopter, dans la deuxième moitié du XIIeme siècle, la nouvelle architecture gothique née en France. Les relations historiques entre les deux pays jouèrent un rôle prépondérant: en 1154, Henri II (1154–1189), de la dynastie Française des Plantagenêt, accéda au thrône d'Angleterre." (England was one of the first regions to adopt, during the first half of the 12th century, the new Gothic architecture born in France. Historic relationships between the two countries played a determining role: in 1154, Henry II (1154–1189) became the first of the Anjou Plantagenet kings to ascend to the throne of England). - Banister Fletcher, A History of Architecture on the Comparative Method. - John Harvey, The Gothic World - Alec Clifton-Taylor, The Cathedrals of England - Nikolaus Pevsner, An Outline of European Architecture. - Warren, John (1991). "Creswell's Use of the Theory of Dating by the Acuteness of the Pointed Arches in Early Muslim Architecture". Muqarnas (BRILL) 8: 59–65 (61–63). doi:10.2307/1523154. JSTOR 1523154. - Petersen, Andrew (2002-03-11). Dictionary of Islamic Architecture at pp. 295-296. Routledge. ISBN 978-0-203-20387-3. Retrieved 2013-03-16. 
- Scott, Robert A.: The Gothic enterprise: a guide to understanding the Medieval cathedral, Berkeley 2003, University of California Press, p. 113 ISBN 0-520-23177-5 - Cf. Bony (1983), especially p.17 - Le genie architectural des Normands a su s’adapter aux lieux en prenant ce qu’il y a de meilleur dans le savoir-faire des batisseurs arabes et byzantins”, Les Normands en Sicile, pp.14, 53-57. - Harvey, L. P. (1992). "Islamic Spain, 1250 to 1500". Chicago : University of Chicago Press. ISBN 0-226-31960-1; Boswell, John (1978). Royal Treasure: Muslim Communities Under the Crown of Aragon in the Fourteenth Century. Yale University Press. ISBN 0-300-02090-2. - Cannon, J. 2007. Cathedral: The Great English Cathedrals and the World that Made Them - Erwin Panofsky argued that Suger was inspired to create a physical representation of the Heavenly Jerusalem, although the extent to which Suger had any aims higher than aesthetic pleasure has been called into doubt by more recent art historians on the basis of Suger's own writings. - Wim Swaan, The Gothic Cathedral - While the engineering and construction of the dome of Florence Cathedral by Brunelleschi is often cited as one of the first works of the Renaissance, the octagonal plan, ribs and pointed silhouette were already determined in the 14th century. - *Warren, John (1991). "Creswell's Use of the Theory of Dating by the Acuteness of the Pointed Arches in Early Muslim Architecture". Muqarnas (BRILL) 8: 59–65. doi:10.2307/1523154. JSTOR 1523154. - "Architectural Importance". Durham World Heritage Site. Retrieved 2013-03-26. - The open-work spire was completed in 1890 to the original design. - Ching, Francis D.K. (2012). A Visual Dictionary of Architecture (2nd ed.). John Wiley & Sons, Inc. p. 6. ISBN 978-0-470-64885-8. - This does not happen in French or English Gothic and so to the British or French eye, to be a strange disregard for style. - The Zodiac comprises a sequence of twelve constellations which appear overhead in the Northern Hemisphere at fixed times of year. In a rural community with neither clock nor calendar, these signs in the heavens were crucial in knowing when crops were to be planted and certain rural activities performed. - Freiburg, Regensburg, Strasbourg, Vienna, Ulm, Cologne, Antwerp, Gdansk, Wroclaw. - Begun in 1443. "House of Jacques Cœur at Bourges (Begun 1443), aerial sketch". Liam’s Pictures from Old Books. Retrieved 29 September 2007. - -Bob Hunter "Londonderry Cathedtral". BBC. - Bony, Jean (1983). French Gothic Architecture of the Twelfth and Thirteenth Centuries. Berkeley: University of California Press. ISBN 0-520-02831-7. - Bumpus, T. Francis (1928). The Cathedrals and Churches of Belgium. T. Werner Laurie. - Clifton-Taylor, Alec (1967). The Cathedrals of England. Thames and Hudson. ISBN 0-500-18070-9. - Fletcher, Banister (2001). A History of Architecture on the Comparative method. Elsevier Science & Technology. ISBN 0-7506-2267-9. - Gardner, Helen; Fred S. Kleiner; Christin J. Mamiya (2004). Gardner's Art through the Ages. Thomson Wadsworth. ISBN 0-15-505090-7. - Harvey, John (1950). The Gothic World, 1100–1600. Batsford. - Harvey, John (1961). English Cathedrals. Batsford. - Huyghe, Rene (ed.) (1963). Larousse Encyclopedia of Byzantine and Medieval Art. Paul Hamlyn. - Icher, Francois (1998). Building the Great Cathedrals. Harry N. Abrams. ISBN 0-8109-4017-5. - Pevsner, Nikolaus (1964). An Outline of European Architecture. Pelican Books. ISBN 0-14-061613-6. - Summerson, John (1983). Pelican History of Art, ed. 
Architecture in Britain, 1530–1830. ISBN 0-14-056003-3. - Swaan, Wim (1988). The Gothic Cathedral. Omega Books. ISBN 090785348X. - Swaan, Wim. Art and Architecture of the Late Middle Ages. Omega Books. ISBN 0-907853-35-8. - Tatton-Brown, Tim; John Crook (2002). The English Cathedral. New Holland Publishers. ISBN 1-84330-120-2. - Fletcher, Banister; Cruickshank, Dan, Sir Banister Fletcher's a History of Architecture, Architectural Press, 20th edition, 1996 (first published 1896). ISBN 0-7506-2267-9. Cf. Part Two, Chapter 14. - von Simson, Otto Georg (1988). The Gothic cathedral: origins of Gothic architecture and the medieval concept of order. ISBN 0-691-09959-6. - Glaser, Stephanie, "The Gothic Cathedral and Medievalism," in: Falling into Medievalism, ed. Anne Lair and Richard Utz. Special Issue of UNIversitas: The University of Northern Iowa Journal of Research, Scholarship, and Creative Activity, 2.1 (2006). (on the Gothic revival of the 19th century and the depictions of Gothic cathedrals in the Arts) - Moore, Charles (1890). Development & Character of Gothic Architecture. Macmillan and Co. ISBN 1-4102-0763-3. - Tonazzi, Pascal (2007) Florilège de Notre-Dame de Paris (anthologie), Editions Arléa, Paris, ISBN 2-86959-795-9 - Wilson, Christopher (2005). The Gothic Cathedral - Architecture of the Great Church. Thames and Hudson. ISBN 978-0500276815. - Mapping Gothic France, a project by Columbia University and Vassar College with a database of images, 360° panoramas, texts, charts and historical maps - Gothic Architecture Encyclopædia Britannica - Holbeche Bloxam, Matthew (1841). Gothic Ecclesiastical Architecture, Elucidated by Question and Answer. Gutenberg.org, from Project Gutenberg - Brandon, Raphael; Brandon, Arthur (1849). An analysis of Gothick architecture: illustrated by a series of upwards of seven hundred examples of doorways, windows, etc., and accompanied with remarks on the several details of an ecclesiastical edifice., Archive.org, from Internet Archive
https://en.wikipedia.org/wiki/Gothic_style
4.09375
International Ladies Garment Workers Union The International Ladies Garment Workers Union was founded in 1900. The eleven Jewish men who founded the union represented seven local unions from East Coast cities with heavy Jewish immigrant populations. This all-male convention was made up exclusively of cloak makers and one skirt maker, highly skilled Old World tailors who had been trying to organize in a well-established industry for a couple of decades. White goods workers, including skilled corset makers, were not invited to the first meeting. Nor were they or the largely young immigrant Jewish workers in the newly developing shirtwaist industry recruited for the union in the early years of its existence. But these women workers still tried to organize. The shirtwaist was a woman’s garment with a mannish touch: a buttoned front. Charles Dana Gibson, an illustrious illustrator of the time, popularized this daring design by featuring his handsome Gibson girl wearing a shirtwaist. The introduction of the shirtwaist lent itself to a system of inside contracting where the work done by women was moved into factories and workshops, still under the control of a contractor but not within the household. As a result, women workers faced different kinds of control, regulation, and, ultimately, sexual harassment. However, the new system also provided larger work sites where numbers of women could gather together to talk about their grievances, among other things. Thus the possibility for unionizing increased. A handful of these women workers, goaded on by the intolerable sweatshop conditions in which they toiled, joined the shirtwaist makers’ Local 25 of the International Ladies Garment Workers Union. The struggling local had few members, fewer finances, and virtually no bargaining power until the historic Uprising of the 20,000 in 1909. This was partly due to the men’s insistence that only “skilled” workers could effectively organize, partly to the sex-segregated nature of the industry, which kept women in relatively less skilled jobs, and partly to the rapid turnover of women garment workers who moved from job to job in search of better wages. Nonetheless, small strikes and work protests by women pockmarked the first decade of the century. Most of them were quickly lost. Then came the 1909 uprising, itself preceded by a two-month strike at the soon-to-be-infamous Triangle Shirtwaist Company. The uprising was more than a “strike.” It was the revolt of a community of “greenhorn” teenagers against a common oppression. The uprising set off shock waves in multiple directions: in the labor movement, which discovered women could be warriors; in American society, which found out that young “girls”—immigrants, no less—out of the disputatious Jewish community could organize; in the suffragist movement, which saw in the plight of these women a good reason why women should have the right to vote; and among feminists, who recognized this massive upheaval as a protest against sexual harassment. This strike and subsequent ones in the apparel industry stemmed from long days, low wages, manipulations of pay, and the denial of work in the absence of sexual favors, distinctive aspects of the garment trades. The uprising had its Joan of Arc, a wisp of a “girl” arising out of nowhere, or so it seemed to the men who ran the union. Her name was Clara Lemlich [Shavelson]. She was not one of the scheduled speakers, although she had proved herself an outspoken activist and daring organizer in previous strikes. 
But she spoke the words that sparked the conflagration. The overflow meeting in the Great Hall at New York’s Cooper Union, the site of Abraham Lincoln’s historic speech on Union and Liberty, was to be addressed by Samuel Gompers, president of the American Federation of Labor; Benjamin Feigenbaum, later elected as a socialist to the New York State Assembly; Jacob Panken, later elected a judge; Bernard Weinstein, head of the United Hebrew Trades; Meyer London, a labor attorney and the first socialist to be elected to Congress from the Lower East Side; and Mary Dreier, a prominent progressive socialite who had been walking the picket line with the strikers and was head of the Women’s Trade Union League. When Jacob Panken was introduced, he was interrupted by a high-pitched voice from the audience. “I want to say a few words,” she said. From the audience came a clamor of voices, “Get up on the platform.” Chairperson Feigenbaum sensed the mood of the moment. He ruled that since this girl was a striker and had been beaten up on the picket line, she should be heard. Panken acquiesced. In what one press report called a “philippic in Yiddish,” Clara Lemlich concluded, “I offer that a general strike be declared now.” Although not everyone in the audience was conversant in Yiddish—there were many Italian immigrant workers in the garment industry—they all understood. Feigenbaum reached into the Jewish past to endow the moment with a touch of tradition. He called upon all those present to raise their hand and to “take the old Jewish oath. If I turn traitor to the cause I now pledge, may this hand wither from the arm I now raise.” The strike, directed against employer tyranny in the sweatshop, served many purposes, one of which was to draw the attention of suffragists to the plight of working women. Up to that time, those at the forefront of the fight for women’s right to vote came almost exclusively from the economic and educated elite in the United States. To these women, the conditions of the shirtwaist makers were evidence of what happens when women are denied a voice in the governance of their communities and country. The active resistance to economic exploitation of these young Jewish women indicated that they should, and could, add new legions to the ranks of the suffragists. Indeed, Jewish immigrants subsequently became outspoken supporters of suffrage, helping to pass the New York State law in 1917. As a consequence of the uprising, the crusades for the rights of working people as workers and of women as women and as citizens were coming together. The lasting meaning of the uprising was summarized by Samuel Gompers at the American Federation of Labor convention after the shirtwaist strike. It “brought to the consciousness of the nation,” he declaimed, “a recognition of certain features looming up on its social development. These are the extent to which women are taking up with industrial life, their consequent tendency to stand together in the struggle to protect their common interests as wage-earners, the readiness of people in all classes to approve of trade-union methods on behalf of working women, and the capacity of women as strikers to suffer, to do, and to dare in support of their rights.” Inspiring as the uprising was, its immediate consequence in terms of working conditions was limited. 
This was especially true in the case of the Triangle Shirtwaist Company, whose brutal mistreatment of its employees was the original cause célèbre that set off the uprising and which remained unorganized. The Jewish employers of Triangle had been cited several times for violation of the city’s fire safety code; the company paid the fine and then went about doing its business as usual. On March 25, 1911, a fire broke out in the Triangle factory. It claimed 146 lives, mainly Jewish women. These victims became the martyred dead in a cause that, in time, revolutionized labor conditions and labor relations in America. It led to more effective fire and safety regulations in New York State, and it inspired women like Rose Schneiderman of the Women’s Trade Union League to argue forcefully that legislation was less important than organization. The large numbers of women garment workers in the ILGWU shaped its labor philosophy, despite the conspicuous absence of women among the union’s top leadership. The ILGWU looked upon the union not only as a means to protect and promote the immediate interests of garment workers but also as part of a greater international movement to convert a dog-eat-dog economic system into a global cooperative commonwealth. Its leaders viewed the class struggle as a classroom where working men and women would learn about the whys and hows of improving their personal lives and remolding the social order. For members to pay dues was vital, but it was equally important for them to pay attention to their own development and to their role in the reshaping of society. Through education, working people would become their own messiah. Women, especially activists in Local 25, championed this mission. In 1916, spurred by Local 25, the union convention voted to establish an education department to be headed by Juliet S. Poyntz, a former history teacher at Barnard College. To supplement the teaching skills of this outsider, the union chose Fannia Cohn, for many years the only woman on the union’s general executive board, to apply her organizing skills as an insider to enroll members en masse in this novel grass-roots educational program. When Poyntz resigned in 1918 under pressure from the general executive board, Cohn, named as executive secretary of the education department, carried on under a male education director, and she generated one of the most remarkable worker education programs in America. Under Cohn’s guidance, the ILGWU instituted a Workers University in New York City’s Washington Irving High School where union members attended lectures by such distinguished college professors as Charles Beard, Harry Carmen, and Paul Brissenden. The U.S. Bureau of Labor Statistics noted in a 1920 report that “the first systematic scheme of education undertaken by organized [labor] in the United States was put in practice by the ILGWU.” It further reported that “up to the spring of 1919 eight hundred [members] had either completed one or more courses or were engaged in the study of various subjects.” Cohn greatly expanded the union’s educational offerings, setting up programs in Cleveland, Boston, and Philadelphia. While the university was the jewel in the diadem of the union’s educational work, there were, in addition, eight Unity Centers that offered basic courses in literacy. Union leaders were trained in public speaking and parliamentary procedure. There were also classes in health—how to stay well. As the union grew, many of these activities proliferated. 
Members took classes on college campuses during the summer, and there was a formal Officers Training Institute, as well as intensive pretraining for citizenship and extensive education in health care. Many of these, which became models for the entire American labor movement, derived from the early initiatives of Fannia Cohn. Women in Local 25, where the membership was more than 75 percent female in 1919, wanted the union to perform a social role that would create community and comradeship as well as loyalty to the union. So, for example, the ILGWU created vacation houses and developed a pioneer medical institution—the ILGWU Health Center. It was a unique and influential conception of unionization. By 1919, drawing on their confidence gained from classes and discussion groups, women in Local 25 began to question why they had not a single woman officer. The demand for union democracy took hold. But soon women’s issues were taken over by male insurgents, many of them Communist Party organizers. Trusted women leaders like Fannia Cohn and Pauline Newman were caught in the middle between a battle of the “lefts and rights.” The political infighting seriously weakened the union; women declined as members from 75 percent to a mere 39 percent by 1924, and male union leaders were reluctant to start new organizing drives to unionize women. In the 1930s, rejuvenated by the New Deal’s support of labor organizing, women once again came to dominate the membership of the ILGWU. And once again the issue of women’s leadership arose. Rose Pesotta was the only woman on the union’s executive board. When Miriam Speishandler of Local 22 was nominated as a delegate to the national convention, she took the opportunity to ask ILGWU president David Dubinsky why there were not more women on the executive board of a union that was 85 percent female. Pesotta’s visibility in California led to her election in 1934 as a vice president of the ILGWU, serving on the general executive board. Pesotta was conflicted about her ten years of service in that position. Sexism and a loss of personal independence continually troubled her, until she finally resigned from the position in 1942. The heady days of the 1930s also led to such unusual innovations as the musical revue Pins and Needles. Written by Harold Rome, the successful musical ran for an impressive 1,108 performances in 1937 to consistently enthusiastic audiences who appreciated its humor, its political message, and its sharp social commentary. The show’s cast were all union members who effectively propagandized the trade union movement through song and dance. More recently, as the ILGWU’s membership has shifted from Jewish and Italian women to Latino, African-American, and Asian women, one Jewish woman has served as the union’s legislative voice in the halls of Congress for almost half a century. Unlike the other Jewish women of prominence in the union, Evelyn Dubrow was not an immigrant. She grew up in New Jersey and was educated at the New York University School of Journalism. When American labor, through the Congress of Industrial Organizations, began to reach into mass manufacture in the 1930s, Dubrow served as education director for the New Jersey Textile Workers of America. As a writer, she worked as secretary of the New Jersey Newspaper Guild from 1943 to 1946. Subsequently she became national director of organization of the Americans for Democratic Action and a founder of the Consumer Federation of America. 
By her performance on the Hill, Dubrow has won recognition and admiration from those who know how the wheels of government run. In 1982 the Washington Business Review named her as one of D.C.'s top ten lobbyists, and in 1994 Washingtonian Magazine listed her as one of America's top 100 women. Although Evelyn Dubrow is distinguished for her political work, she was exceptional among the Jewish women in the union only in her official assignment to that mission. All of the women leaders mentioned were intensely political, as were many of the rank and file. For them it was never enough to have a union to ease and enrich the lives of those in the apparel industry. They dreamed of and worked for a movement that would someday transform the world into a place where the ideals of equality and justice for all would be a reality. Glenn, Susan A. Daughters of the Shtetl: Life and Labor in the Immigrant Generation (1990); Howe, Irving. World of Our Fathers: The Journey of East European Jews to America and the Life They Found and Made (1976); ILGWU. Pauline Newman (1986); Kessler-Harris, Alice. "Organizing the Unorganizable: Three Jewish Women and Their Union." Labor History 17 (Winter 1976): 5–23, and "Rose Schneiderman and the Limits of Women's Trade Unionism." In Labor Leaders in America, edited by Melvyn Dubofsky and Warren Van Tine (1987); Leeder, Elaine. The Gentle General: Rose Pesotta, Anarchist and Labor Organizer (1993); Levine, Louis. The Women's Garment Workers (1924); Orleck, Annelise. Common Sense and a Little Fire: Women and Working-Class Politics in the United States, 1900–1965 (1995); Pesotta, Rose. Bread upon the Waters (1945); Seidman, Joel. The Needle Trades (1942); Stein, Leon. Out of the Sweat Shop (1977), and The Triangle Fire (1962); Stolberg, Benjamin. Tailor's Progress (1944); Tyler, Gus. Look for the Union Label (1995). How to cite this page: The Editors. "International Ladies Garment Workers Union." Jewish Women: A Comprehensive Historical Encyclopedia. 1 March 2009. Jewish Women's Archive. (Viewed on February 6, 2016) <http://jwa.org/encyclopedia/article/international-ladies-garment-workers-union>.
http://jwa.org/encyclopedia/article/international-ladies-garment-workers-union
4.40625
What if you were given two points that a line passes through like (-1, 0) and (2, 2)? How could you find the slope of that line? After completing this Concept, you'll be able to find the slope of any line. Wheelchair ramps at building entrances must have a slope between and . If the entrance to a new office building is 28 inches off the ground, how long does the wheelchair ramp need to be? We come across many examples of slope in everyday life. For example, we see slope in the pitch of a roof, the grade or incline of a road, and the slant of a ladder leaning on a wall. In math, we use the word slope to define steepness in a particular way. To make it easier to remember, we often word it like this: slope = rise/run, that is, the distance moved vertically divided by the distance moved horizontally. In the picture above, the slope would be the ratio of the height of the hill to the horizontal length of the hill. In other words, it would be 3/4, or 0.75. If the car were driving to the right it would climb the hill - we say this is a positive slope. Any time you see the graph of a line that goes up as you move to the right, the slope is positive. If the car kept driving after it reached the top of the hill, it might go down the other side. If the car is driving to the right and descending, then we would say that the slope is negative. Here's where it gets tricky: If the car turned around instead and drove back down the left side of the hill, the slope of that side would still be positive. This is because the rise would be -3, but the run would be -4 (think of the x-axis - if you move from right to left you are moving in the negative direction). That means our slope ratio would be -3/-4, and the negatives cancel out to leave 0.75, the same slope as before. In other words, the slope of a line is the same no matter which direction you travel along it. Find the Slope of a Line A simple way to find a value for the slope of a line is to draw a right triangle whose hypotenuse runs along the line. Then we just need to measure the distances on the triangle that correspond to the rise (the vertical dimension) and the run (the horizontal dimension). Find the slopes for the three graphs shown. There are already right triangles drawn for each of the lines - in future problems you'll do this part yourself. Note that it is easiest to make triangles whose vertices are lattice points (i.e. points whose coordinates are all integers). a) The rise shown in this triangle is 4 units; the run is 2 units. The slope is 4/2 = 2. b) The rise shown in this triangle is 4 units, and the run is also 4 units. The slope is 4/4 = 1. c) The rise shown in this triangle is 2 units, and the run is 4 units. The slope is 2/4 = 1/2. Find the slope of the line that passes through the points (1, 2) and (4, 7). We already know how to graph a line if we're given two points: we simply plot the points and connect them with a line. Here's the graph: Since we already have coordinates for the vertices of our right triangle, we can quickly work out that the rise is 7 - 2 = 5 and the run is 4 - 1 = 3 (see diagram). So the slope is 5/3. If you look again at the calculations for the slope, you'll notice that the 7 and 2 are the y-coordinates of the two points and the 4 and 1 are the x-coordinates. This suggests a pattern we can follow to get a general formula for the slope between two points (x1, y1) and (x2, y2): Slope between (x1, y1) and (x2, y2): m = (y2 - y1)/(x2 - x1), or m = Δy/Δx. In the second equation the letter m denotes the slope (this is a mathematical convention you'll see often) and the Greek letter delta (Δ) means change. So another way to express slope is change in y divided by change in x. 
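Readers who like to check this kind of arithmetic with a few lines of code can use a sketch like the one below. It is not part of the original lesson; the function name slope and the extra sample points are just illustrations. It applies the formula m = (y2 - y1)/(x2 - x1) to the worked example above and confirms that swapping which point you call point 1 does not change the answer.

```python
def slope(p1, p2):
    """Slope of the line through points p1 and p2, each given as (x, y)."""
    x1, y1 = p1
    x2, y2 = p2
    rise = y2 - y1   # change in y (delta y)
    run = x2 - x1    # change in x (delta x)
    return rise / run

# The worked example from this lesson: the line through (1, 2) and (4, 7).
print(slope((1, 2), (4, 7)))   # 5/3, about 1.6667

# Swapping the points gives (-5)/(-3), which is the same slope.
print(slope((4, 7), (1, 2)))   # about 1.6667

# The opening question: the line through (-1, 0) and (2, 2).
print(slope((-1, 0), (2, 2)))  # 2/3, about 0.6667
```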
In the next section, you’ll see that it doesn’t matter which point you choose as point 1 and which you choose as point 2. Find the Slopes of Horizontal and Vertical lines Determine the slopes of the two lines on the graph below. There are 2 lines on the graph: and . Let’s pick 2 points on line —say, and —and use our equation for slope: If you think about it, this makes sense - if doesn’t change as increases then there is no slope, or rather, the slope is zero. You can see that this must be true for all horizontal lines. Horizontal lines ( = constant) all have a slope of 0. Now let’s consider line . If we pick the points and , our slope equation is . But dividing by zero isn’t allowed! In math we often say that a term which involves division by zero is undefined. (Technically, the answer can also be said to be infinitely large—or infinitely small, depending on the problem.) Vertical lines constant) all have an infinite (or undefined) slope. Watch this video for help with the Examples above. Find the slopes of the lines on the graph below. Look at the lines - they both slant down (or decrease) as we move from left to right. Both these lines have negative slope. The lines don’t pass through very many convenient lattice points, but by looking carefully you can see a few points that look to have integer coordinates. These points have been circled on the graph, and we’ll use them to determine the slope. We’ll also do our calculations twice, to show that we get the same slope whichever way we choose point 1 and point 2. For Line : You can see that whichever way round you pick the points, the answers are the same. Either way, Line has slope -0.364, and Line has slope -1.375. Use the slope formula to find the slope of the line that passes through each pair of points. - (-5, 7) and (0, 0) - (-3, -5) and (3, 11) - (3, -5) and (-2, 9) - (-5, 7) and (-5, 11) - (9, 9) and (-9, -9) - (3, 5) and (-2, 7) - (2.5, 3) and (8, 3.5) For each line in the graphs below, use the points indicated to determine the slope. - For each line in the graphs above, imagine another line with the same slope that passes through the point (1, 1), and name one more point on that line. Answers for Explore More Problems To view the Explore More answers, open this PDF file and look for section 4.6.
http://www.ck12.org/algebra/Slope/lesson/Slope---Intermediate/
4
Electrical System of the Heart What controls the timing of your heartbeat? Your heart's electrical system controls the timing of your heartbeat by regulating your: - Heart rate, which is the number of times your heart beats per minute. - Heart rhythm, which is the synchronized pumping action of your four heart chambers. Your heart's electrical system should maintain: - A steady heart rate of 60 to 100 beats per minute at rest. The heart's electrical system also increases this rate to meet your body's needs during physical activity and lowers it during sleep. - An orderly contraction of your atria and ventricles (this is called a sinus rhythm). See a picture of the heart and its electrical system. How does the heart's electrical system work? Your heart muscle is made of tiny cells. Your heart's electrical system controls the timing of your heartbeat by sending an electrical signal through these cells. Two different types of cells in your heart enable the electrical signal to control your heartbeat: - Conducting cells carry your heart's electrical signal. - Muscle cells enable your heart's chambers to contract, an action triggered by your heart's electrical signal. The electrical signal travels through the network of conducting cell "pathways," which stimulates your upper chambers (atria) and lower chambers (ventricles) to contract. The signal is able to travel along these pathways by means of a complex reaction that allows each cell to activate one next to it, stimulating it to "pass along" the electrical signal in an orderly manner. As cell after cell rapidly transmits the electrical charge, the entire heart contracts in one coordinated motion, creating a heartbeat. The electrical signal starts in a group of cells at the top of your heart called the sinoatrial (SA) node. The signal then travels down through your heart, triggering first your two atria and then your two ventricles. In a healthy heart, the signal travels very quickly through the heart, allowing the chambers to contract in a smooth, orderly fashion. The heartbeat happens as follows: - The SA node (called the pacemaker of the heart) sends out an electrical impulse. - The upper heart chambers (atria) contract. - The AV node sends an impulse into the ventricles. - The lower heart chambers (ventricles) contract or pump. - The SA node sends another signal to the atria to contract, which starts the cycle over again. This cycle of an electrical signal followed by a contraction is one heartbeat. SA node and atria When the SA node sends an electrical impulse, it triggers the following process: - The electrical signal travels from your SA node through muscle cells in your right and left atria. - The signal triggers the muscle cells that make your atria contract. - The atria contract, pumping blood into your left and right ventricles. AV node and ventricles After the electrical signal has caused your atria to contract and pump blood into your ventricles, the electrical signal arrives at a group of cells at the bottom of the right atrium called the atrioventricular node, or AV node. The AV node briefly slows down the electrical signal, giving the ventricles time to receive the blood from the atria. The electrical signal then moves on to trigger your ventricles. When the electrical signal leaves the AV node, it triggers the following process: - The signal travels down a bundle of conduction cells called the bundle of His, which divides the signal into two branches: one branch goes to the left ventricle, another to the right ventricle. 
- These two main branches divide further into a system of conducting fibers that spreads the signal through your left and right ventricles, causing the ventricles to contract. - When the ventricles contract, your right ventricle pumps blood to your lungs and the left ventricle pumps blood to the rest of your body. After your atria and ventricles contract, each part of the system electrically resets itself. How does the heart's electrical system regulate your heart rate? The cells of the SA node at the top of the heart are known as the pacemaker of the heart because the rate at which these cells send out electrical signals determines the rate at which the entire heart beats (heart rate). The normal heart rate at rest ranges between 60 and 100 beats per minute. Your heart rate can adjust higher or lower to meet your body's needs. What makes your heart rate speed up or slow down? Your brain and other parts of your body send signals to stimulate your heart to beat either at a faster or a slower rate. Although the way all of the chemical signals interact to affect your heart rate is complex, the net result is that these signals tell the SA node to fire charges at either a faster or slower pace, resulting in a faster or a slower heart rate. For example, during periods of exercise, when the body requires more oxygen to function, signals from your body cause your heart rate to increase significantly to deliver more blood (and therefore more oxygen) to the body. Your heart rate can increase beyond 100 beats per minute to meet your body's increased needs during physical exertion. Similarly, during periods of rest or sleep, when the body needs less oxygen, the heart rate decreases. Some athletes actually may have normal heart rates well below 60 because their hearts are very efficient and don't need to beat as fast. Changes in your heart rate, therefore, are a normal part of your heart's effort to meet the needs of your body. How does your body control your heart rate? Your body controls your heart by: - The sympathetic and parasympathetic nervous systems, which have nerve endings in the heart. - Hormones, such as epinephrine and norepinephrine (catecholamines), which circulate in the bloodstream. Sympathetic and parasympathetic nervous systems The sympathetic and parasympathetic nervous systems are opposing forces that affect your heart rate. Both systems are made up of very tiny nerves that travel from the brain or spinal cord to your heart. The sympathetic nervous system is triggered during stress or a need for increased cardiac output and sends signals to your heart to increase its rate. The parasympathetic system is active during periods of rest and sends signals to your heart to decrease its rate. During stress or a need for increased cardiac output, the adrenal glands release a hormone called norepinephrine into the bloodstream at the same time that the sympathetic nervous system is also triggered to increase your heart rate. This hormone causes the heart to beat faster, and unlike the sympathetic nervous system that sends an instantaneous and short-lived signal, norepinephrine released into the bloodstream increases the heart rate for several minutes or more. |Primary Medical Reviewer||Rakesh K. Pai, MD, FACC - Cardiology, Electrophysiology| |Specialist Medical Reviewer||George Philippides, MD - Cardiology| |Last Revised||March 7, 2012| Last Revised: March 7, 2012 Author: Healthwise Staff To learn more visit Healthwise.org © 1995-2013 Healthwise, Incorporated. 
http://www.cheshire-med.com/health_wellness/health_encyclopedia/te7147abc
4.09375
Dark matter is a hypothetical substance that is believed by most astronomers to account for around five-sixths of the matter in the universe. Although it has not been directly observed, its existence and properties are inferred from its various gravitational effects: on the motions of visible matter; via gravitational lensing; its influence on the universe's large-scale structure; and its effects in the cosmic microwave background. Dark matter is transparent to electromagnetic radiation (light, cosmic rays, etc.) and/or is so dense and small that it fails to absorb or emit enough radiation to appear via imaging technology.

The standard model of cosmology indicates that the total mass–energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter constitutes 84.5%[note 1] of total mass, while dark energy plus dark matter constitute 95.1% of total mass–energy content. The dark matter hypothesis plays a central role in state-of-the-art modeling of cosmic structure formation and galaxy formation and evolution, and in explanations of the anisotropies observed in the cosmic microwave background (CMB). All these lines of evidence suggest that galaxies, clusters of galaxies and the universe as a whole contain far more matter than that which is observable via electromagnetic signals.

Although the existence of dark matter is generally accepted by most of the astronomical community, a minority of astronomers argue for various modifications of the standard laws of general relativity, such as MOND and TeVeS, that attempt to account for the observations without invoking additional matter. Many experiments to detect proposed dark matter particles through non-gravitational means are under way.

The first to suggest using stellar velocities to infer the presence of dark matter was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found that the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous. In 1933, Swiss astrophysicist Fritz Zwicky, who studied galactic clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma cluster and obtained evidence of unseen mass that he called dunkle Materie ("dark matter"). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated that the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, thus mass must be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. This was the first formal inference about the existence of dark matter.
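Zwicky's virial argument can be sketched numerically. Dropping geometric factors of order a few, the virial theorem relates a cluster's mass to the velocity dispersion σ of its galaxies and its radius R through M ~ σ²R/G. The numbers below are illustrative, Coma-like values chosen only for this sketch, not Zwicky's actual inputs.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # metres in one megaparsec

def virial_mass(sigma_m_s, radius_m):
    """Order-of-magnitude virial mass estimate, M ~ sigma^2 * R / G."""
    return sigma_m_s**2 * radius_m / G

# Assumed, Coma-like inputs: 1000 km/s velocity dispersion, 1 Mpc radius
mass_kg = virial_mass(1.0e6, 1.0 * MPC)
print(f"dynamical mass ~ {mass_kg / M_SUN:.1e} solar masses")  # roughly 2e14
```

Comparing a dynamical estimate of this kind with the mass implied by the cluster's starlight is what produced the large mass-to-light discrepancy described above.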
Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant,; the same calculation today shows a smaller fraction, using greater values for luminous mass. However, Zwicky did correctly infer that the bulk of the matter was dark. The first robust indications that the mass to light ratio was anything other than unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula, which suggested that the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral and not to missing matter. Vera Rubin and Kent Ford in the 1960s–1970s were the first to postulate "dark matter" based upon robust evidence, using galaxy rotation curves. Rubin worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. This result was independently confirmed in 1978. An influential paper presented Rubin's results in 1980. Rubin found that most galaxies must contain about six times as much dark as visible mass; thus, by around 1980 the apparent need for dark matter was widely recognized as a major unsolved problem in astronomy. A stream of independent observations in the 1980s indicated its presence, including gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics. Cosmic microwave background radiation In cosmology, the CMB is explained as relic radiation which has travelled freely since the era of recombination, around 375,000 years after the Big Bang. The CMB's anisotropies are explained as the result of small primordial density fluctuations, and subsequentacoustic oscillations in the photon-baryon plasma whose restoring force is gravity. The NASA Cosmic Background Explorer (COBE) found the CMB spectrum to be a very precise blackbody spectrum with a temperature of 2.726 K. In 1992, COBE detected CMB fluctuations (anisotropies) at a level of about one part in 105. In the following decade, CMB anisotropies were investigated by ground-based and balloon experiments. Their primary goal was to measure the angular scale of the first acoustic peak of the anisotropies' power spectrum, for which COBE had insufficient resolution. During the 1990s, the first peak was measured with increasing sensitivity, and in 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree, showing that the Universe is close to flat. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the correct theory. Ground-based interferometers provided fluctuation measurements with higher accuracy, including the Very Small Array, the Degree Angular Scale Interferometer (DASI) and the Cosmic Background Imager (CBI). DASI first detected the CMB polarization, and CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. 
COBE's successor, the Wilkinson Microwave Anisotropy Probe (WMAP) provided the most detailed measurements of (large-scale) anisotropies in the CMB in 2003 - 2010. ESA's Planck spacecraft returned more detailed results in 2013-2015. WMAP's measurements played the key role in establishing the Standard Model of Cosmology, namely the Lambda-CDM model, which posits a dark energy-dominated flat universe, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale invariant process. Its basic properties are determined by six adjustable parameters: dark matter density, baryon (atom) density, the universe's age (or equivalently, the Hubble constant), the initial fluctuation amplitude and their scale dependence. Much of the evidence comes from the motions of galaxies. Many of these appear to be fairly uniform, so by the virial theorem, the total kinetic energy should be half the galaxies' total gravitational binding energy. Observationally, the total kinetic energy is much greater. In particular, assuming the gravitational mass is due to only visible matter, stars far from the center of galaxies have much higher velocities than predicted by the virial theorem. Galactic rotation curves, which illustrate the velocity of rotation versus the distance from the galactic center, show the "excess" velocity. Dark matter is the most straightforward way of accounting for this discrepancy. The distribution of dark matter in galaxies required to explain the motion of the observed matter suggests the presence of a roughly spherically symmetric, centrally concentrated halo of dark matter with the visible matter concentrated in a central disc. Low surface brightness dwarf galaxies are important sources of information for studying dark matter. They have an uncommonly low ratio of visible to dark matter, and have few bright stars at the center that would otherwise impair observations of the rotation curve of outlying stars. Gravitational lensing observations of galaxy clusters allow direct estimates of the gravitational mass based on its effect on light coming from background galaxies, since large collections of matter (dark or otherwise) gravitationally deflect light. In clusters such as Abell 1689, lensing observations confirm the presence of considerably more mass than is indicated by the clusters' light. In the Bullet Cluster, lensing observations show that much of the lensing mass is separated from the X-ray-emitting baryonic mass. In July 2012, lensing observations were used to identify a "filament" of dark matter between two clusters of galaxies, as cosmological simulations predicted. Galaxy rotation curves A galaxy rotation curve is a plot of the orbital velocities (i.e., the speeds) of visible stars or gas in that galaxy versus their radial distance from that galaxy's center. The rotational/orbital speeds of galaxies/stars does not decline with distance, unlike other orbital systems such as stars/planets and planets/moons that also have most of their mass at the centre. In the latter cases, this reflects the mass distributions within those systems. The mass observations for galaxies based on the light that they emit are far too low to explain the velocity observations. The dark matter hypothesis supplies the missing mass, resolving the anomaly. 
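The rotation-curve argument is essentially Newtonian: for a star on a circular orbit, v²/r = GM(<r)/r², so the mass enclosed within radius r is M(<r) = v²r/G. If the measured speed stays roughly flat with radius, the implied enclosed mass keeps growing linearly even where the starlight has faded away. A minimal sketch, using an assumed flat speed of 220 km/s (a Milky-Way-like value chosen for illustration):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # metres in one kiloparsec

def enclosed_mass(v_m_s, r_m):
    """Mass needed inside radius r to hold a circular orbit of speed v."""
    return v_m_s**2 * r_m / G

for r_kpc in (10, 20, 50):
    m = enclosed_mass(220e3, r_kpc * KPC)
    print(f"r = {r_kpc:2d} kpc -> M(<r) ~ {m / M_SUN:.1e} solar masses")
# The implied mass grows in proportion to r, while the visible light does not.
```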
A universal rotation curve can be expressed as the sum of an exponential distribution of visible matter that tapers to zero with distance from the center, and a spherical dark matter halo with a flat core of radius r0 and density ρ0 = 4.5 × 10−2(r0/kpc)−2/3 M☉pc−3. Low-surface-brightness (LSB) galaxies have a much larger visible mass deficit than others. This property simplifies the disentanglement of the dark and visible matter contributions to the rotation curves. Rotation curves for some elliptical galaxies do display low velocities for outlying stars (tracked for example by the motion of embedded planetary nebulae). A dark-matter compliant hypothesis proposes that some stars may have been torn by tidal forces from disk-galaxy mergers from their original galaxies during the first close passage and put on outgoing trajectories, explaining the low velocities of the remaining stars even in the presence of a halo. Velocity dispersions of galaxies Diffuse interstellar gas measurements of galactic edges indicate missing ordinary matter beyond the visible boundary, but that galaxies are virialized (i.e., gravitationally bound and orbiting each other with velocities that correspond to predicted orbital velocities of general relativity) up to ten times their visible radii. This has the effect of pushing up the dark matter as a fraction of the total matter from 50% as measured by Rubin to the now accepted value of nearly 95%. Dark matter seems to be a small component or absent in some places. Globular clusters show little evidence of dark matter, except that their orbital interactions with galaxies do support galactic dark matter. Star velocity profiles seemed to indicate a concentration of dark matter in the disk of the Milky Way. It now appears, however, that the high concentration of baryonic matter in the disk (especially in the interstellar medium) can account for this motion. Galaxy mass and light profiles appear to not match. The typical model for dark matter galaxies is a smooth, spherical distribution in virialized halos. This avoids small-scale (stellar) dynamical effects. A 2006 study explained the warp in the Milky Way's disk by the interaction of the Large and Small Magellanic Clouds and the 20-fold increase in predicted mass from dark matter. In 2005, astronomers claimed to have discovered a galaxy made almost entirely of dark matter, 50 million light years away in the Virgo Cluster, which was named VIRGOHI21. Unusually, VIRGOHI21 does not appear to contain visible stars: it was discovered with radio frequency observations of hydrogen. Based on rotation profiles, the scientists estimate that this object contains approximately 1000 times more dark matter than hydrogen and has a mass of about 1/10 that of the Milky Way. The Milky Way is estimated to have roughly 10 times as much dark matter as ordinary matter. Models of the Big Bang and structure formation suggested that such dark galaxies should be very common, but VIRGOHI21 was the first to be detected. Galaxy clusters and gravitational lensing Galactic clusters also lack sufficient luminous matter to explain the measured orbital velocities of galaxies within them. Galaxy cluster masses have been estimated in three independent ways: - Radial velocity scatter of the galaxies within clusters - X-rays emitted by hot gas. Gas temperature and density can be estimated from the X-ray energy and flux; assuming pressure and gravity balance determines the cluster's mass profile. 
Chandra X-ray Observatory experiments use this technique to independently determine cluster mass. These observations generally indicate that baryonic mass is approximately 12–15 percent, in reasonable agreement with the Planck spacecraft cosmic average of 15.5–16 percent. - Gravitational lensing (usually on more distant galaxies) predicts masses without relying on observations of dynamics (e.g., velocity). Multiple Hubble projects used this method to measure cluster masses. Generally these methods find missing luminous matter. Gravity acts as a lens to bend the light from a more distant source (such as a quasar) around a massive object (such as a cluster of galaxies) lying between the source and the observer in accordance with general relativity. Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around a few distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters. Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, astrophysicists can characterize the mean distribution of dark matter. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Galactic cluster Abell 2029 comprises thousands of galaxies enveloped in a cloud of hot gas and dark matter equivalent to more than M☉. At the center of this cluster is an enormous elliptical galaxy likely formed from many smaller galaxies. 1014 The most direct observational evidence comes from the Bullet Cluster. In most regions dark and visible matter are found together, due to their gravitational attraction. In the Bullet Cluster however, the two matter types split apart. This was apparently caused by a collision between two smaller clusters. Electromagnetic interactions among passing gas particles would then have caused the luminous matter to slow and settle near the point of impact. Because dark matter does not interact electromagnetically, it did not slow and continued past the center. X-ray observations show that much of the luminous matter (in the form of 107–108 Kelvin gas or plasma) is concentrated in the cluster's center. Weak gravitational lensing observations show that much of the missing mass would reside outside the central region. Unlike galactic rotation curves, this evidence is independent of the details of Newtonian gravity, directly supporting dark matter. Dark matter's observed behavior constrains whether and how much it scatters off other dark matter particles, quantified as its self-interaction cross section. If dark matter has no pressure, it can be described as a perfect fluid that has no damping. The distribution of mass in galaxy clusters has been used to argue both for and against the significance of self-interaction. An ongoing survey using the Subaru telescope uses weak lensing to analyze background light, bent by dark matter, to determine how the shape of the lens (how dark matter is distributed in the foreground). The survey studies galaxies more than a billion light-years distant, across an area greater than a thousand square degrees (about one fortieth of the entire sky). 
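The X-ray method described above assumes the hot intracluster gas is in hydrostatic equilibrium, which for an ideal gas gives M(<r) = -(k_B T r)/(G μ m_p) × (d ln ρ/d ln r + d ln T/d ln r). The sketch below uses assumed, illustrative inputs (an isothermal 5×10⁷ K cluster whose gas density falls as r⁻²); real analyses fit the measured temperature and density profiles instead.

```python
K_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # m^3 kg^-1 s^-2
M_P = 1.673e-27    # proton mass, kg
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # metres in one megaparsec

def hydrostatic_mass(T_K, r_m, dln_rho_dln_r, dln_T_dln_r=0.0, mu=0.6):
    """Cluster mass inside r from hydrostatic equilibrium of the hot gas."""
    return -(K_B * T_K * r_m) / (G * mu * M_P) * (dln_rho_dln_r + dln_T_dln_r)

# Assumed inputs: T = 5e7 K, r = 1 Mpc, gas density falling as r^-2, isothermal
m = hydrostatic_mass(5e7, 1.0 * MPC, dln_rho_dln_r=-2.0)
print(f"M(<1 Mpc) ~ {m / M_SUN:.1e} solar masses")  # roughly 3e14 with these inputs
```

Masses of this size, set against the 12–15 percent baryon fraction quoted above, are the kind of mismatch the X-ray analyses report.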
Cosmic microwave background Angular CMB fluctuations provide evidence for dark matter. The typical angular scales of CMB oscillations, measured as the power spectrum of the CMB anisotropies, reveal the different effects of baryonic and dark matter. Ordinary matter interacts strongly via radiation whereas dark matter particles (WIMPs) do not; both affect the oscillations by way of their gravity, so the two forms of matter have different effects. The spectrum shows a large first peak and smaller successive peaks. The first peak tells mostly about the density of baryonic matter, while the third peak relates mostly to the density of dark matter, measuring the density of matter and the density of atoms.[clarification needed] Sky surveys and baryon acoustic oscillations The early universe's acoustic oscillations affected visible matter by way of Baryon Acoustic Oscillation (BAO) clustering, in a way that can be measured with sky surveys such as the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. These measurements are consistent CMB metrics derived from the WMAP spacecraft and further constrain the Lambda CDM model and dark matter. Note that CMB and BAO data adopt different distance scales. Type Ia supernova distance measurements Type Ia supernovae can be used as "standard candles" to measure extragalactic distances. Extensive data sets of these supernovae can be used to constrain cosmological models. They constrain the dark energy density ΩΛ = ~0.713 for a flat, Lambda CDM universe and the parameter for a quintessence model. The results are roughly consistent with those derived from the WMAP observations and further constrain the Lambda CDM model and (indirectly) dark matter. In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data. Structure formation refers to the serial transformations of the universe following the Big Bang. Prior to structure formation, e.g., Friedmann cosmology solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Observations suggest that structure formation proceeds hierarchically, with the smallest structures collapsing first, followed by galaxies and then galaxy clusters. As the structures collapse in the evolving universe, they begin to "light up" as baryonic matter heats up through gravitational contraction and approaches hydrostatic pressure balance. CMB anisotropy measurements fix models in which most matter is dark. Dark matter also close gaps in models of large-scale structure. The dark matter hypothesis corresponds with statistical surveys of the visible structure and precisely to CMB predictions. Initially, baryonic matter's post-Big Bang temperature and pressure were too high to collapse and form smaller structures, such as stars, via the Jeans instability. The gravity from dark matter increase the compaction force, allowing the creation of these structures. 
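The Jeans instability mentioned just above sets the smallest scale on which a pressure-supported gas cloud can collapse: λ_J = c_s √(π/(Gρ)), with c_s the sound speed. The sketch below uses assumed, illustrative values (warm atomic hydrogen at 10⁴ K and 1 particle per cm³); the point is only that higher pressure or lower density pushes the collapse scale up, which is why the extra gravity supplied by dark matter helps smaller structures form earlier.

```python
import math

K_B = 1.381e-23   # J/K
G = 6.674e-11     # m^3 kg^-1 s^-2
M_H = 1.673e-27   # hydrogen mass, kg
PC = 3.086e16     # metres in one parsec

def jeans_length(T_K, n_per_cm3, mu=1.0, gamma=5.0 / 3.0):
    """Jeans length lambda_J = c_s * sqrt(pi / (G * rho)) for an ideal gas."""
    rho = n_per_cm3 * 1e6 * mu * M_H                     # mass density, kg/m^3
    c_s = math.sqrt(gamma * K_B * T_K / (mu * M_H))      # adiabatic sound speed
    return c_s * math.sqrt(math.pi / (G * rho))

# Assumed cloud: T = 1e4 K, n = 1 cm^-3
print(f"Jeans length ~ {jeans_length(1e4, 1.0) / PC:.0f} pc")  # ~2000 pc
```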
Computer simulations of billions of dark matter particles confirmed that the "cold" dark matter model of structure formation is consistent with the structures observed through galaxy surveys, such as the Sloan Digital Sky Survey and 2dF Galaxy Redshift Survey, as well as observations of the Lyman-alpha forest. Tensions separate observations and simulations. Observations have turned up 90-99% fewer small galaxies than permitted by dark matter-based predictions. In addition, simulations predict dark matter distributions with a dense cusp near galactic centers, but the observed halos are smoother than predicted. ||This section includes a list of references, related reading or external links, but the sources of this section remain unclear because it lacks inline citations. (August 2015)| |Unsolved problem in physics: The composition of dark matter remains uncertain. Possibilities include dense baryonic (interacts with electromagnetic force) matter and non-baryonic matter (interacts with its surroundings only through gravity). Baryonic vs nonbaryonic matter Baryonic matter is made of baryons (protons and neutrons), that make up stars and planets. It also encompasses less common black holes, neutron stars, faint old white dwarfs and brown dwarfs, collectively known as massive compact halo objects or MACHOs. Baryonic dark matter (other than MACHOs) must be made of thus far unknown non-luminous elementary particles. Candidates include weakly interacting massive particles (WIMPs), including neutralinos, axions and sterile neutrinos. Candidates for nonbaryonic dark matter are hypothetical particles such as axions or supersymmetric particles; neutrinos can only supply a small fraction of dark matter, due to limits derived from large-scale structure and high-redshift galaxies. Unlike baryonic matter, nonbaryonic matter did not contribute to the formation of the elements in the early universe ("Big Bang nucleosynthesis") and so its presence is revealed only via its gravitational effects. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos ("indirect detection"). Multiple lines of evidence suggest the majority of dark matter is not made of baryons: - Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars. - The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements and that baryonic matter accounts for around 4–5 percent of the universe's critical density, leaving 95-6% unaccounted for. In contrast, large-scale structure and other observations indicates that the total matter density is about 30% of the critical density. - Large astronomical searches for gravitational microlensing in the Milky Way found only a small contingent of the missing matter in dark, compact, conventional objects (MACHOs, etc.); the examined range of object sizes is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates. - Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background observed by WMAP and Planck shows that around five-sixths of the total matter is in a form that interacts significantly with ordinary matter or photons only through gravitational effects. 
- Data from galaxy rotation curves, gravitational lensing, structure formation, the fraction of baryons in clusters and cluster abundance combined with independent evidence for baryon density, indicate that 85–90% of dark matter is non-baryonic (does not interact with the electromagnetic force). Dark matter can be divided into cold, warm and hot categories. These categories refer to velocity rather than temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to expansion – this is an important distance called the "free streaming length" (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for structure formation. The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy). Cold, warm and hot dark matter's FSLs are much smaller, similar and much larger, respectively. Cold dark matter leads to a "bottom-up" formation of structure while hot dark matter would result in a "top-down" formation scenario; he latter is excluded by high-redshift galaxy observations. These categories also correspond according to fluctuation spectrum effects and interval following the Big Bang at which each type became non-relativistic. Davis et al. wrote in 1985: Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond et al. 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles. Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size and in the radiation-dominated era (photons and neutrinos), with a photon temperature 2.7 million K. Standard physical cosmology gives the particle horizon size as 2 ct[clarification needed] in the radiation-dominated era, thus 2 light-years. A region of this size would ultimately expand to 2 million light years (absent structure formation). The actual FSL is roughly 5x the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years or 3 Mpc today, around the size containing an average large galaxy. 
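The temperature quoted just above can be converted to an energy scale with E ~ k_B T, which is what sets the "warm" mass scale used next. A quick check; the only input beyond the quoted temperatures is the standard value of the Boltzmann constant.

```python
K_B_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def thermal_energy_ev(T_kelvin):
    """Characteristic thermal energy k_B * T, expressed in electron-volts."""
    return K_B_EV * T_kelvin

print(f"{thermal_energy_ev(2.7e6):.0f} eV")   # ~233 eV, the few-hundred-eV scale used below
print(f"{thermal_energy_ev(2.726):.2e} eV")   # ~2.3e-4 eV for today's 2.726 K CMB
```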
The 2.7 million K photon temperature gives a typical photon energy of 250 electron-volts, thereby setting a typical mass scale for "warm" dark matter: particles much more massive than this, such as GeV – TeV mass WIMPs, would become non-relativistic much earlier than 1 year after the Big Bang and thus have FSL's much smaller than a proto-galaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few eV, have FSL's much larger than a proto-galaxy, thus qualifying them as hot. Cold dark matter Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem to be capable of supporting galaxy or galaxy cluster formation, and most particle candidates slowed early. The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions. Studies of Big Bang nucleosynthesis and gravitational lensing convinced most cosmologists that MACHOs cannot make up more than a small fraction of dark matter. According to A. Peter: "... the only really plausible dark-matter candidates are new particles." The DAMA/NaI experiment and its successor DAMA/LIBRA claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results. Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model that explain the small neutrino mass through the seesaw mechanism. Warm dark matter Warm dark matter refers to particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with less small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies; some researchers consider this to be a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass ~ 300 eV to 3000 eV. No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that does not interact through the weak force (unlike other neutrinos). Some modified gravity theories, such as scalar-tensor-vector gravity, require warm dark matter to make their equations work. Hot dark matter Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies. They were discovered independently, long before the hunt for dark matter: they were postulated in 1930, and detected in 1956. Neutrinos' mass is less than 10-6 that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force only works over a small distance, thus a neutrino triggers a weak force event only if it hits a nucleus head-on). This makes them 'weakly interacting light particles' (WILPs), as opposed to WIMPs. The three known flavors of neutrinos are the electron, muon and tau. Their masses are slightly different. 
Neutrinos oscillate among the flavors as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (or for any of the three individually). For example, if the average neutrino mass were over 50 eV/c2 (less than 10-5 of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c2. Thus, observed neutrinos cannot explain dark matter. Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies that the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together. If dark matter is made up of WIMPs, then millions, possibly billions, of WIMPs must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs are popular search candidates, the Axion Dark Matter eXperiment (ADMX) searches for axions. Another candidate is heavy hidden sector particles that only interact with ordinary matter via gravity. These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of WIMP annihilations. Direct detection experiments operate deep underground to reduce the interference from cosmic rays. Detectors include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, Ontario, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the Particle and Astrophysical Xenon Detector. These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors operating at temperatures below 100mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques distinguish background particles (that scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO. The DAMA/NaI, DAMA/LIBRA experiments detected an annual modulation in the event rate that they claim is due to dark matter. (As the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount). This claim is so far unconfirmed and unreconciled with negative results of other experiments. A low pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun is travelling (roughly towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC. In 2009, CDMS researchers reported two possible WIMP candidate events. 
They estimate that the probability that these events are due to background (neutrons or misidentified beta or gamma events) is 23%, and conclude "this analysis cannot be interpreted as significant evidence for WIMP interactions, but we cannot reject either event as signal." In 2011, researchers using the CRESST detectors presented evidence of 67 collisions occurring in detector crystals from subatomic particles. They calculated the probability that all were caused by known sources of interference/contamination was 1 in 10-5. Indirect detection experiments search for the products of WIMP annihilation/decay. If WIMPs are Majorana particles (their own antiparticle) then two WIMPs could annihilate to produce gamma rays or Standard Model particle-antiparticle pairs. If the WIMP is unstable, WIMPs could decay into standard model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions. The detection of such a signal is not conclusive evidence, as the sources of gamma ray production are not fully understood. A few of the WIMPs passing through the Sun or Earth may scatter off atoms and lose energy. Thus WIMPs may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. WIMP annihilation from the Milky Way Galaxy as a whole may also be detected in the form of various annihilation products. The Galactic Center is a particularly good place to look because the density of dark matter may be higher there. The EGRET gamma ray telescope observed more gamma rays than expected from the Milky Way, but scientists concluded that this was most likely due to incorrect estimation of the telescope's sensitivity. The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In April 2012, an analysis of previously available data from its Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation. In 2013 results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays that could be due to dark matter annihilation. An alternative approach to the detection of WIMPs in nature is to produce them in the laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect WIMPs produced in collisions of the LHC proton beams. Because a WIMP has negligible interaction with matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. These experiments could show that WIMPs can be created, but a direct detection experiment must still show that they exist in sufficient numbers to account for dark matter. Mass in extra dimensions In some multidimensional theories, the force of gravity is the only force with effect across all dimensions. This explains the relative weakness of gravity compared to the other forces of nature that cannot cross into extra dimensions. In that case, dark matter could exist in a “Hidden Valley” in other dimensions that only interact with the matter in our dimensions through gravity. 
That dark matter could potentially aggregate in the same way as ordinary matter, forming other-dimensional galaxies. Dark matter could consist of primordial defects ("birth defects") in the topology of quantum fields, which would contain energy and therefore gravitate. This possibility may be investigated by the use of an orbital network of atomic clocks that would register the passage of topological defects by changes to clock synchronization. The Global Positioning System may be able to operate as such a network. Some theories modify the laws of gravity. The earliest was Mordehai Milgrom's Modified Newtonian Dynamics (MOND) in 1983, which adjusts Newton's laws to increase gravitational field strength where gravitational acceleration becomes tiny (such as near the rim of a galaxy). It had some success explaining rotational velocity curves of elliptical and dwarf elliptical galaxies, but not galaxy cluster gravitational lensing. MOND was not relativistic: it was an adjustment of the Newtonian account. Attempts were made to bring MOND into conformity with general relativity; this spawned competing MOND-based hypotheses—including TeVeS, MOG or STV gravity and the phenomenological covariant approach. In 2007, Moffat proposed a modified gravity hypothesis based on nonsymmetric gravitational theory (NGT) that claims to account for the behavior of colliding galaxies. This model requires the presence of non-relativistic neutrinos or other cold dark matter, to work. Another proposal uses a gravitational backreaction from a theory that explains gravitational force between objects as an action, a reaction and then a back-reaction. Thus, an object A affects an object B, and the object B then re-affects object A, and so on: creating a feedback loop that strengthens gravity. In 2008, another group proposed "dark fluid", a modification of large-scale gravity. It hypothesized that attractive gravitational effects are instead a side-effect of dark energy. Dark fluid combines dark matter and dark energy in a single energy field that produces different effects at different scales. This treatment is a simplification of a previous fluid-like model called the generalized Chaplygin gas model in which the whole of spacetime is a compressible gas. Dark fluid can be compared to an atmospheric system. Atmospheric pressure causes air to expand and air regions can collapse to form clouds. In the same way, the dark fluid might generally disperse, while collecting around galaxies. Applying relativity to fractal, non-differentiable spacetime, Nottale suggests that potential energy may arise due to the fractality of spacetime, which would account for the missing mass-energy observed at cosmological scales. Mention of dark matter is made in some video games and other works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the hypothesized properties of dark matter in physics and cosmology. - Since dark energy, by convention, does not count as "matter", this is 26.8/(4.9 + 26.8)=0.845 - "Hubble Finds Dark Matter Ring in Galaxy Cluster". - Trimble, V. (1987). "Existence and nature of dark matter in the universe". Annual Review of Astronomy and Astrophysics 25: 425–472. Bibcode:1987ARA&A..25..425T. doi:10.1146/annurev.aa.25.090187.002233. - Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; (Planck Collaboration); et al. (22 March 2013). "Planck 2013 results. I. Overview of products and scientific results – Table 9". 
Astronomy and Astrophysics 1303: 5062. arXiv:1303.5062. Bibcode:2014A&A...571A...1P. doi:10.1051/0004-6361/201321529. - Francis, Matthew (22 March 2013). "First Planck results: the Universe is still weird and interesting". Arstechnica. - "Planck captures portrait of the young Universe, revealing earliest light". University of Cambridge. 21 March 2013. Retrieved 21 March 2013. - Sean Carroll, Ph.D., Cal Tech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2 page 46, Accessed Oct. 7, 2013, "...dark matter: An invisible, essentially collisionless component of matter that makes up about 25 percent of the energy density of the universe... it's a different kind of particle... something not yet observed in the laboratory..." - Ferris, Timothy. "Dark Matter". Retrieved 2015-06-10. - Jarosik, N.; et al. (2011). "Seven-Year Wilson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results". Astrophysical Journal Supplement 192 (2): 14. arXiv:1001.4744. Bibcode:2011ApJS..192...14J. doi:10.1088/0067-0049/192/2/14. - Siegfried, T. (5 July 1999). "Hidden Space Dimensions May Permit Parallel Universes, Explain Cosmic Mysteries". The Dallas Morning News. - Copi, C. J.; Schramm, D. N.; Turner, M. S. (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science 267 (5195): 192–199. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624. - Kroupa, P.; et al. (2010). "Local-Group tests of dark-matter Concordance Cosmology: Towards a new paradigm for structure formation". Astronomy and Astrophysics 523: 32–54. arXiv:1006.1647. Bibcode:2010A&A...523A..32K. doi:10.1051/0004-6361/201014892. - Angus, G. (2013). "Cosmological simulations in MOND: the cluster scale halo mass function with light sterile neutrinos". Monthly Notices of the Royal Astronomical Society 436: 202–211. arXiv:1309.6094. Bibcode:2013MNRAS.436..202A. doi:10.1093/mnras/stt1564. - Bertone, G.; Hooper, D.; Silk, J. (2005). "Particle dark matter: Evidence, candidates and constraints". Physics Reports 405 (5–6): 279–390. arXiv:hep-ph/0404175. Bibcode:2005PhR...405..279B. doi:10.1016/j.physrep.2004.08.031. - Kapteyn, Jacobus Cornelius (1922). "First attempt at a theory of the arrangement and motion of the sidereal system". Astrophysical Journal 55: 302–327. Bibcode:1922ApJ....55..302K. doi:10.1086/142670. It is incidentally suggested that when the theory is perfected it may be possible to determine the amount of dark matter from its gravitational effect.(emphasis in original) - Rosenberg, Leslie J (30 June 2014). Status of the Axion Dark-Matter Experiment (ADMX) (PDF). 10th PATRAS Workshop on Axions, WIMPs and WISPs. p. 2. - Oort, J.H. (1932) “The force exerted by the stellar system in the direction perpendicular to the galactic plane and some related problems,” Bulletin of the Astronomical Institutes of the Netherlands, 6 : 249-287. - "The Hidden Lives of Galaxies: Hidden Mass". Imagine the Universe!. NASA/GSFC. - Kuijken, K.; Gilmore, G. (July 1989). "The Mass Distribution in the Galactic Disc - Part III - the Local Volume Mass Density" (PDF). Monthly Notices of the Royal Astronomical Society 239 (2): 651–664. Bibcode:1989MNRAS.239..651K. doi:10.1093/mnras/239.2.651. - Zwicky, F. (1933). "Die Rotverschiebung von extragalaktischen Nebeln". Helvetica Physica Acta 6: 110–127. Bibcode:1933AcHPh...6..110Z. - Zwicky, F. (1937). "On the Masses of Nebulae and of Clusters of Nebulae". 
https://en.wikipedia.org/wiki/Dark_matter
4.0625
The Roche limit (pronounced /ʁoʃ/ in IPA, similar to the sound of rosh), sometimes referred to as the Roche radius, is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body's tidal forces exceeding the first body's gravitational self-attraction. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit material tends to coalesce. The term is named after Édouard Roche, the French astronomer who first calculated this theoretical limit in 1848.
- 1 Explanation
- 2 Roche limits for selected examples
- 3 Determining the Roche limit
- 4 See also
- 5 References
- 6 Sources
- 7 External links
Typically, the Roche limit applies to a satellite's disintegrating due to tidal forces induced by its primary, the body about which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit.
Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit, Saturn's E-Ring and Phoebe ring being notable exceptions. They could either be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.
Roche limits for selected examples
The table below shows the mean density and the equatorial radius for selected objects in the Solar System.
|Primary||Density (kg/m3)||Radius (m)|
The equations for the Roche limits relate the minimum sustainable orbital radius to the ratio of the two objects' densities and the radius of the primary body. Hence, using the data above, the Roche limits for these objects can be calculated. This has been done twice for each, assuming the extremes of the rigid and fluid body cases. The average density of comets is taken to be around 500 kg/m³.
The table below gives the Roche limits expressed in kilometres and in primary radii. The mean radius of the orbit can be compared with the Roche limits. For convenience, the table lists the mean radius of the orbit for each, excluding the comets, whose orbits are extremely variable and eccentric.
|Body||Satellite||Roche limit (rigid)||Roche limit (fluid)||Mean orbital radius (km)|
|Distance (km)||R||Distance (km)||R|
These bodies are all well outside their Roche limits, by a factor of about 21 for the Moon (relative to its fluid-body Roche limit) as part of the Earth–Moon system, up to factors of thousands for Earth and Jupiter; the rigid- and fluid-body limits used in these tables can be reproduced with the short calculation sketched below. But how close are the Solar System's other moons to their Roche limits?
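The rigid- and fluid-body limits quoted in the tables above follow directly from the two formulas derived later in this article, $d = R_M (2\rho_M/\rho_m)^{1/3}$ for a rigid satellite and $d \approx 2.44\,R_M (\rho_M/\rho_m)^{1/3}$ for a fluid one. The short Python sketch below reproduces the Earth–Moon case; it is illustrative only, and the mean density and radius values in it are approximate figures supplied here rather than taken from the article's tables.

```python
# Minimal sketch: rigid- and fluid-body Roche limits from the formulas quoted in this article,
#   d_rigid = R * (2 * rho_M / rho_m) ** (1/3)   (about 1.26 R (rho_M/rho_m)^(1/3))
#   d_fluid ~ 2.44 * R * (rho_M / rho_m) ** (1/3)
# The density and radius values below are approximate, for illustration only.

def roche_limit(primary_radius_m, primary_density, satellite_density, fluid=False):
    """Return the Roche limit in metres for a satellite held together only by its own gravity."""
    factor = 2.44 if fluid else 2.0 ** (1.0 / 3.0)
    return factor * primary_radius_m * (primary_density / satellite_density) ** (1.0 / 3.0)

if __name__ == "__main__":
    earth_radius = 6.371e6        # mean radius, m (approximate)
    earth_density = 5513.0        # mean density, kg/m^3 (approximate)
    moon_density = 3346.0         # mean density, kg/m^3 (approximate)

    d_rigid = roche_limit(earth_radius, earth_density, moon_density)
    d_fluid = roche_limit(earth_radius, earth_density, moon_density, fluid=True)
    print(f"Earth-Moon rigid Roche limit: {d_rigid/1e3:,.0f} km")
    print(f"Earth-Moon fluid Roche limit: {d_fluid/1e3:,.0f} km")
    # The Moon's mean orbital radius (~384,400 km) lies far outside both limits.
```

With these inputs the fluid limit comes out near 18,000 km, consistent with the factor of about 21 quoted above for the Moon's mean orbital radius.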
The table below gives each satellite's closest approach in its orbit divided by its own Roche limit. Again, both rigid and fluid body calculations are given. Note that Pan, Cordelia and Naiad, in particular, may be quite close to their actual break-up points. In practice, the densities of most of the inner satellites of giant planets are not known. In these cases, shown in italics, likely values have been assumed, but their actual Roche limit can vary from the value shown.
|Primary||Satellite||Orbital Radius / Roche limit|
Determining the Roche limit
The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms, leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory.
The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify calculations.
The Roche limit for a rigid spherical satellite is the distance, $d$, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object:
$d = R_M \left( 2\,\frac{\rho_M}{\rho_m} \right)^{1/3} \approx 1.26\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$
where $R_M$ is the radius of the primary, $\rho_M$ is its density, and $\rho_m$ is the density of the satellite. This does not depend on the size of the objects, but on the ratio of densities. This is the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also be pulled away from, rather than toward, the satellite. Note that this is an approximate result, as inertia force and rigid structure are ignored in its derivation.
Derivation of the formula
In order to determine the Roche limit, we consider a small test mass $u$ on the surface of the satellite closest to the primary. There are two forces on this mass: the gravitational pull towards the satellite and the gravitational pull towards the primary. We assume that the satellite is in free fall around the primary and that the tidal force is the only relevant term of the gravitational attraction of the primary. This assumption is a simplification, as free fall only truly applies to the planetary center, but it will suffice for this derivation.
The gravitational pull $F_G$ on the mass $u$ towards the satellite with mass $m$ and radius $r$ can be expressed according to Newton's law of gravitation:
$F_G = \frac{G m u}{r^2}$
The tidal force $F_T$ on the mass $u$ towards the primary with radius $R_M$ and mass $M$, at a distance $d$ between the centers of the two bodies, can be expressed approximately as
$F_T = \frac{2 G M u\, r}{d^3}$
To obtain this approximation, find the difference in the primary's gravitational pull on the center of the satellite and on the edge of the satellite closest to the primary:
$F_T = \frac{G M u}{(d - r)^2} - \frac{G M u}{d^2} = G M u\,\frac{d^2 - (d - r)^2}{d^2 (d - r)^2} = G M u\,\frac{2 d r - r^2}{d^2 (d - r)^2}$
In the approximation where $r \ll d$, the $r^2$ term in the numerator and every term containing $r$ in the denominator can be neglected, which gives us:
$F_T \approx \frac{2 G M u\, r}{d^3}$
The Roche limit is reached when the gravitational force and the tidal force balance each other out:
$\frac{G m u}{r^2} = \frac{2 G M u\, r}{d^3}$
which gives the Roche limit, $d$, as
$d = r \left( 2\,\frac{M}{m} \right)^{1/3}$
However, we do not really want the radius of the satellite to appear in the expression for the limit, so we re-write this in terms of densities. For a sphere the mass $M$ can be written as
$M = \frac{4}{3}\pi \rho_M R_M^3$, where $R_M$ is the radius of the primary,
and likewise
$m = \frac{4}{3}\pi \rho_m r^3$, where $r$ is the radius of the satellite.
Substituting for the masses in the equation for the Roche limit, and cancelling out $\frac{4}{3}\pi$, gives
$d = r \left( \frac{2 \rho_M R_M^3}{\rho_m r^3} \right)^{1/3}$
which can be simplified to the Roche limit:
$d = R_M \left( 2\,\frac{\rho_M}{\rho_m} \right)^{1/3} \approx 1.26\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$
A more accurate formula
Because a satellite this close to its primary is likely to be in synchronous rotation, the centrifugal force felt by the test mass in the frame rotating with the satellite must also be taken into account; it points away from the primary and it gets added to $F_T$. Doing the force-balance calculation with this extra term yields this result for the Roche limit:
$d = r \left( 3\,\frac{M}{m} \right)^{1/3}$ .......... (1)
or:
$d = R_M \left( 3\,\frac{\rho_M}{\rho_m} \right)^{1/3} \approx 1.44\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$ .......... (2)
Use $m = \frac{4}{3}\pi \rho_m r^3$ (where $r$ is the radius of the satellite) to replace $m$ in formula (1), and we have a third formula:
$d = \left( \frac{9 M}{4 \pi \rho_m} \right)^{1/3}$ .......... (3)
Thus, it is enough to observe the mass of the star (or planet) and to estimate the density of the planet (or satellite) in order to obtain the Roche limit of that planet (or satellite) in the stellar (or planetary) system.
Roche limit, Hill sphere and radius of the planet
Consider a planet with density $\rho_m$ and radius $r$, orbiting a star of mass $M$ at a distance $R$, and place the planet exactly at its Roche limit, so that $R = r \left( 3 M / m \right)^{1/3}$ by formula (1). The Hill sphere of the planet, which extends to about the L1 (or L2) Lagrangian point, then has radius
$r_{\mathrm{Hill}} = R \left( \frac{m}{3 M} \right)^{1/3}$ .......... (4)
see Hill sphere, or Roche lobe. Substituting formula (1) into formula (4) gives $r_{\mathrm{Hill}} = r$: the surface of the planet coincides with its Roche lobe, i.e. the planet exactly fills its Roche lobe. Such a body cannot capture any additional loose material and, any closer in, begins to lose its own. This is the physical meaning of the Roche limit, the Roche lobe and the Hill sphere. Formulas (1) and (4) are the same relation read in the two directions, a perfect mathematical symmetry, and this is the astronomical significance of the Roche limit and the Hill sphere.
A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid. The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit:
$d \approx 2.44\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$
However, a better approximation that takes into account the primary's oblateness and the satellite's mass is:
$d \approx 2.423\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3} \left( \frac{\left(1 + \frac{m}{3M}\right) + \frac{c}{3 R_M}\left(1 + \frac{m}{M}\right)}{1 - c / R_M} \right)^{1/3}$
where $c / R_M$ is the oblateness of the primary. The numerical factor is calculated with the aid of a computer.
The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior.
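The fluid formula just quoted also covers the comet case mentioned above. The following sketch, again purely illustrative, applies it to a loosely bound comet near Jupiter; the Jovian radius and mean density are approximate values supplied here, while the 500 kg/m³ comet density is the figure assumed earlier in the article.

```python
# Minimal sketch: fluid-body Roche limit for a loosely bound comet near Jupiter,
# using d ~ 2.44 * R * (rho_primary / rho_satellite) ** (1/3) as quoted above.
# Jupiter's radius and mean density below are approximate, for illustration only.

JUPITER_RADIUS_M = 7.1492e7   # equatorial radius, ~71,492 km (approximate)
JUPITER_DENSITY = 1326.0      # mean density, kg/m^3 (approximate)
COMET_DENSITY = 500.0         # average comet density assumed earlier in the article

def fluid_roche_limit(primary_radius_m, primary_density, satellite_density):
    """Fluid-body Roche limit in metres, using Roche's approximate 2.44 coefficient."""
    return 2.44 * primary_radius_m * (primary_density / satellite_density) ** (1.0 / 3.0)

d = fluid_roche_limit(JUPITER_RADIUS_M, JUPITER_DENSITY, COMET_DENSITY)
print(f"Fluid Roche limit for a comet near Jupiter: {d/1e3:,.0f} km "
      f"({d/JUPITER_RADIUS_M:.1f} Jupiter radii)")
```

With these numbers the limit comes out to roughly 240,000 km, a few Jupiter radii, so a comet whose closest approach dips well inside that distance, as Shoemaker–Levy 9's did in 1992, is liable to be torn into fragments.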
Derivation of the formula
As the fluid satellite case is more delicate than the rigid one, the satellite is described with some simplifying assumptions. First, assume the object consists of incompressible fluid that has constant density and volume that do not depend on external or internal forces. Second, assume the satellite moves in a circular orbit and it remains in synchronous rotation. This means that the angular speed $\omega$ at which it rotates around its center of mass is the same as the angular speed at which it moves around the overall system barycenter.
The angular speed is given by Kepler's third law:
$\omega^2 = G\,\frac{M + m}{d^3}$
When $M$ is very much bigger than $m$, this will be close to
$\omega^2 \approx G\,\frac{M}{d^3}$
The synchronous rotation implies that the liquid does not move and the problem can be regarded as a static one. Therefore, the viscosity and friction of the liquid in this model do not play a role, since these quantities would play a role only for a moving fluid.
Given these assumptions, the following forces should be taken into account:
- The force of gravitation due to the main body;
- the centrifugal force in the rotary reference system; and
- the self-gravitation field of the satellite.
Since all of these forces are conservative, they can be expressed by means of a potential. Moreover, the surface of the satellite is an equipotential one. Otherwise, the differences of potential would give rise to forces and movement of some parts of the liquid at the surface, which contradicts the static model assumption. Given the distance $d$ from the main body, our problem is to determine the form of the surface that satisfies the equipotential condition.
As the orbit has been assumed circular, the total gravitational attraction towards the main body and the orbital centrifugal force cancel. That leaves two forces: the tidal force and the rotational centrifugal force. The tidal force depends on the position with respect to the center of mass, already considered in the rigid model. For small bodies, the distance of the liquid particles from the center of the body is small in relation to the distance $d$ to the main body. Thus the tidal force can be linearized, resulting in the same formula for $F_T$ as given above. While this force in the rigid model depends only on the radius $r$ of the satellite, in the fluid case we need to consider all the points on the surface, and the tidal force depends on the distance $\Delta d$ from the center of mass to a given particle, projected on the line joining the satellite and the main body. We call $\Delta d$ the radial distance. Since the tidal force is linear in $\Delta d$, the related potential $V_T$ is proportional to the square of that variable; likewise, the centrifugal force has an associated potential $V_C$ determined by the rotational angular velocity $\omega$.
We want to determine the shape of the satellite for which the sum of the self-gravitation potential and $V_T + V_C$ is constant on the surface of the body. In general, such a problem is very difficult to solve, but in this particular case, it can be solved by a skillful guess due to the square dependence of the tidal potential on the radial distance $\Delta d$. To a first approximation, we can ignore the centrifugal potential $V_C$ and consider only the tidal potential $V_T$. Since the potential $V_T$ changes only in one direction, i.e. the direction toward the main body, the satellite can be expected to take an axially symmetric form. More precisely, we may assume that it takes the form of a solid of revolution. The self-potential on the surface of such a solid of revolution can only depend on the radial distance to the center of mass.
Indeed, the intersection of the satellite and a plane perpendicular to the line joining the bodies is a disc whose boundary, by our assumptions, is a circle of constant potential. Should the difference between the self-gravitation potential $V_S$ and $V_T$ be constant, both potentials must depend in the same way on $\Delta d$. In other words, the self-potential has to be proportional to the square of $\Delta d$. Then it can be shown that the equipotential solution is an ellipsoid of revolution. Given a constant density and volume, the self-potential of such a body depends only on the eccentricity $\varepsilon$ of the ellipsoid; it can be written in terms of a dimensionless function $f(\varepsilon)$ and the constant self-potential on the intersection of the circular edge of the body with the central symmetry plane given by the equation $\Delta d = 0$. The dimensionless function $f$ is to be determined from the accurate solution for the potential of the ellipsoid and, surprisingly enough, does not depend on the volume of the satellite.
Although the explicit form of the function $f$ looks complicated, it is clear that we may and do choose the value of $\varepsilon$ so that the potential $V_T$ is equal to $V_S$ plus a constant independent of the variable $\Delta d$. By inspection, this occurs when the coefficient of $\Delta d^2$ in the tidal potential equals the corresponding coefficient in the self-potential. The resulting equation can be solved numerically. The graph indicates that there are two solutions, and thus the smaller one represents the stable equilibrium form (the ellipsoid with the smaller eccentricity). This solution determines the eccentricity of the tidal ellipsoid as a function of the distance to the main body. The derivative of the function $f$ has a zero where the maximal eccentricity is attained. This corresponds to the Roche limit.
More precisely, the Roche limit is determined by the fact that the function $f$, which can be regarded as a nonlinear measure of the force squeezing the ellipsoid towards a spherical shape, is bounded, so that there is an eccentricity at which this contracting force becomes maximal. Since the tidal force increases when the satellite approaches the main body, it is clear that there is a critical distance at which the ellipsoid is torn apart.
The maximal eccentricity can be calculated numerically as the zero of the derivative $f'$. One obtains $\varepsilon \approx 0.86$, which corresponds to the ratio of the ellipsoid axes 1:1.95. Inserting this into the formula for the function $f$, one can determine the minimal distance at which the ellipsoid exists. This is the Roche limit,
$d \approx 2.423\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$
Surprisingly, including the centrifugal potential makes remarkably little difference, though the object becomes a Roche ellipsoid, a general triaxial ellipsoid with all axes having different lengths. The potential becomes a much more complicated function of the axis lengths, requiring elliptic functions. However, the solution proceeds much as in the tidal-only case, and we find
$d \approx 2.455\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$
The ratios of polar to orbit-direction to primary-direction axes are 1:1.06:2.07.
See also
- Roche lobe
- Chandrasekhar limit
- Hill sphere
- Spaghettification (a rather extreme tidal distortion)
- Black hole
- Triton (moon) (Neptune's satellite)
- Comet Shoemaker–Levy 9
References
- Eric W. Weisstein (2007). "Eric Weisstein's World of Physics – Roche Limit". scienceworld.wolfram.com. Retrieved September 5, 2007.
- NASA. "What is the Roche limit?". NASA – JPL. Retrieved September 5, 2007.
- see calculation in Frank H. Shu, The Physical Universe: an Introduction to Astronomy, p. 431, University Science Books (1982), ISBN 0-935702-05-9.
- "Roche Limit: Why Do Comets Break Up?".
- Gu; et al. "The effect of tidal inflation instability on the mass and dynamical evolution of extrasolar planets with ultrashort periods".
Astrophysical Journal. Retrieved May 1, 2003. - International Planetarium Society Conference, Astronaut Memorial Planetarium & Observatory, Cocoa, Florida Rob Landis 10–16 July 1994 archive 21/12/1996 - Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné" (The figure of a fluid mass subjected to the attraction of a distant point), part 1, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 1 (1849) 243–262. 2.44 is mentioned on page 258. (French) - Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné", part 2, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 1 (1850) 333–348. (French) - Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné", part 3, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 2 (1851) 21–32. (French) - George Howard Darwin, "On the figure and stability of a liquid satellite", Scientific Papers, Volume 3 (1910) 436–524. - James Hopwood Jeans, Problems of cosmogony and stellar dynamics, Chapter III: Ellipsoidal configurations of equilibrium, 1919. - S. Chandrasekhar, Ellipsoidal figures of equilibrium (New Haven: Yale University Press, 1969), Chapter 8: The Roche ellipsoids (189–240). - S. Chandrasekhar, "The equilibrium and the stability of the Roche ellipsoids", Astrophysical Journal 138 (1963) 1182–1213.
https://en.wikipedia.org/wiki/Roche_limit
4.25
A minimum wage is the lowest remuneration that employers may legally pay to workers. Equivalently, it is the price floor below which workers may not sell their labor. Although minimum wage laws are in effect in many jurisdictions, differences of opinion exist about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage say it increases the standard of living of workers, reduces poverty, reduces inequality, boosts morale and forces businesses to be more efficient. In contrast, opponents of the minimum wage say it increases poverty, increases unemployment (particularly among unskilled or inexperienced workers) and is damaging to businesses. - 1 History - 2 Minimum wage laws - 3 Economics models - 4 Empirical studies - 5 Debate over consequences - 6 Surveys of economists - 7 Alternatives - 8 US movement - 9 See also - 10 Notes - 11 Further reading - 12 External links Modern minimum wage laws trace their origin to the Ordinance of Labourers (1349), which was a decree by King Edward III that set a maximum wage for laborers in medieval England. King Edward III, who was a wealthy landowner, was dependent, like his lords, on serfs to work the land. In the autumn of 1348, the Black Plague reached England and decimated the population. The severe shortage of labor caused wages to soar and encouraged King Edward III to set a wage ceiling. Subsequent amendments to the ordinance, such as the Statute of Labourers (1351), increased the penalties for paying a wage above the set rates. While the laws governing wages initially set a ceiling on compensation, they were eventually used to set a living wage. An amendment to the Statute of Labourers in 1389 effectively fixed wages to the price of food. As time passed, the Justice of the Peace, who was charged with setting the maximum wage, also began to set formal minimum wages. The practice was eventually formalized with the passage of the Act Fixing a Minimum Wage in 1604 by King James I for workers in the textile industry. By the early 19th century, the Statutes of Labourers was repealed as increasingly capitalistic England embraced laissez-faire policies which disfavored regulations of wages (whether upper or lower limits). The subsequent 19th century saw significant labor unrest affect many industrial nations. As trade unions were decriminalized during the century, attempts to control wages through collective agreement were made. However, this meant that a uniform minimum wage was not possible. In Principles of Political Economy in 1848, John Stuart Mill argued that because of the collective action problems that workers faced in organisation, it was a justified departure from laissez faire policies (or freedom of contract) to regulate people's wages and hours by law. It was not until the 1890s that modern legislative attempts to regulate minimum wages were seen in New Zealand and Australia. The movement for a minimum wage was initially focused on stopping sweatshop labor and controlling the proliferation of sweatshops in manufacturing industries. The sweatshops employed large numbers of women and young workers, paying them what were considered to be substandard wages. The sweatshop owners were thought to have unfair bargaining power over their employees, and a minimum wage was proposed as a means to make them pay fairly. Over time, the focus changed to helping people, especially families, become more self-sufficient. 
Minimum wage laws The first national minimum wage law was enacted by the government of New Zealand in 1894, followed by Australia in 1896 and the United Kingdom in 1909. In the United States, statutory minimum wages were first introduced nationally in 1938, and reintroduced and expanded in the United Kingdom in 1998. There is now legislation or binding collective bargaining regarding minimum wage in more than 90 percent of all countries. In the European Union, 21 member states currently have national minimum wages. In July 2014 Germany began legislating to introduce a federally-mandated minimum wage which would come into effect on 1 January 2015. Many countries, such as Sweden, Finland, Denmark, Switzerland, Austria, and Italy have no minimum wage laws, but rely on employer groups and trade unions to set minimum earnings through collective bargaining. Minimum wage rates vary greatly across many different jurisdictions, not only in setting a particular amount of money – e.g. US$7.25 per hour ($14,500 per year) under certain states' laws (or $2.13 for employees who receive tips, known as the tipped minimum wage), $9.47 in the US state of Washington, and £6.50 (for those aged 21+) in the United Kingdom – but also in terms of which pay period (e.g. Russia and China set monthly minimums) or the scope of coverage. Some jurisdictions allow employers to count tips given to their workers as credit towards the minimum wage levels. India was one of the first developing countries to introduce minimum wage policy. It also has one of the most complicated systems with more than 1200 minimum wage rates. Informal minimum wages Customs and extra-legal pressures from governments or labor unions can produce a de facto minimum wage. So can international public opinion, by pressuring multinational companies to pay Third World workers wages usually found in more industrialized countries. The latter situation in Southeast Asia and Latin America was publicized in the 2000s, but it existed with companies in West Africa in the middle of the twentieth century. Setting minimum wage Among the indicators that might be used to establish an initial minimum wage rate are ones that minimize the loss of jobs while preserving international competitiveness. Among these are general economic conditions as measured by real and nominal gross domestic product; inflation; labor supply and demand; wage levels, distribution and differentials; employment terms; productivity growth; labor costs; business operating costs; the number and trend of bankruptcies; economic freedom rankings; standards of living and the prevailing average wage rate. In the business sector, concerns include the expected increased cost of doing business, threats to profitability, rising levels of unemployment (and subsequent higher government expenditure on welfare benefits raising tax rates), and the possible knock-on effects to the wages of more experienced workers who might already be earning the new statutory minimum wage, or slightly more. Among workers and their representatives, political consideration weigh in as labor leaders seek to win support by demanding the highest possible rate. Other concerns include purchasing power, inflation indexing and standardized working hours. In the United States, the minimum wage promulgated by the Fair Labor Standards Act of 1938 was intentionally set at a high, national level to render low-technology, low-wage factories in the South obsolete. 
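As a small aside on the figures quoted above, the hourly and annual numbers are linked by an assumption about paid hours per year; the snippet below makes that assumption explicit (the 40-hour, 50-week year is an illustrative convention, not something specified in the article).

```python
# Quick check of the annualization behind the figures quoted above:
# $7.25/hour corresponds to $14,500/year only under roughly 2,000 paid hours
# (40 hours/week for 50 weeks). The hours assumption here is illustrative.

def annual_from_hourly(hourly_rate, hours_per_week=40, weeks_per_year=50):
    return hourly_rate * hours_per_week * weeks_per_year

print(annual_from_hourly(7.25))           # 14500.0
print(annual_from_hourly(7.25, 40, 52))   # 15080.0 with a full 52-week year
```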
According to the Economic Policy Institute, the minimum wage in the United States would have been $18.28 in 2013 if the minimum wage had kept pace with labor productivity. To adjust for increased rates of worker productivity in the United States, raising the minimum wage to $22 (or more) an hour has been proposed.
Supply and demand
An analysis of supply and demand of the type shown in many mainstream economics textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage laws should cause unemployment. This is because a greater number of people are willing to work at the higher wage while a smaller number of jobs will be available at the higher wage. Companies can be more selective in those whom they employ, and thus the least skilled and least experienced will typically be excluded. An imposition or increase of a minimum wage will generally only affect employment in the low-skill labor market, as the equilibrium wage is already at or below the minimum wage, whereas in higher skill labor markets the equilibrium wage is too high for a change in minimum wage to affect employment. According to the supply and demand model shown in many textbooks on economics, increasing the minimum wage decreases the employment of minimum-wage workers. One such textbook says:
If a higher minimum wage increases the wage rates of unskilled workers above the level that would be established by market forces, the quantity of unskilled workers employed will fall. The minimum wage will price the services of the least productive (and therefore lowest-wage) workers out of the market. …The direct results of minimum wage legislation are clearly mixed. Some workers, most likely those whose previous wages were closest to the minimum, will enjoy higher wages. This is known as the "ripple effect": when the minimum wage is increased, the wages of other workers tend to rise as well, owing to the need to preserve pay relativities. Others, particularly those with the lowest prelegislation wage rates, will be unable to find work. They will be pushed into the ranks of the unemployed or out of the labor force. Some argue, however, that by increasing the federal minimum wage the economy will be adversely affected, because small businesses cannot keep up with the need to subsequently increase all workers' wages.
The textbook illustrates the point with a supply and demand diagram. In the diagram it is assumed that workers are willing to labor for more hours if paid a higher wage. Economists graph this relationship with the wage on the vertical axis and the quantity (hours) of labor supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of labor curve is upward sloping, and is shown as a line moving up and to the right. A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer hours an employer will demand of an employee. This is because, as the wage rate rises, it becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them for fewer hours). The demand for labor curve is therefore shown as a line moving down and to the right. Combining the demand and supply curves for labor allows us to examine the effect of the minimum wage. We will start by assuming that the supply and demand curves for labor will not change as a result of raising the minimum wage. This assumption has been questioned. A stylized numerical version of this textbook model is sketched below.
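To make the textbook reasoning above concrete, the sketch below works through a linear supply and demand example with a binding wage floor. The curves and numbers are invented for illustration; they are not drawn from any study cited in this article.

```python
# Stylized sketch of the textbook model: linear labor supply and demand with a wage floor.
# The coefficients below are invented for illustration; they are not from the article.

def labor_supplied(wage):
    return 100.0 * wage - 200.0        # upward-sloping supply (hours offered)

def labor_demanded(wage):
    return 1000.0 - 60.0 * wage        # downward-sloping demand (hours sought)

# Equilibrium: 100 w - 200 = 1000 - 60 w  ->  w* = 7.5, L* = 550
w_eq = 1200.0 / 160.0
l_eq = labor_demanded(w_eq)

# A binding wage floor above w*: employment is set by demand, and the gap
# between hours offered and hours hired is the model's predicted surplus.
w_floor = 9.0
employment = min(labor_demanded(w_floor), labor_supplied(w_floor))
surplus = labor_supplied(w_floor) - labor_demanded(w_floor)

print(f"equilibrium wage {w_eq:.2f}, employment {l_eq:.0f}")
print(f"with a floor of {w_floor:.2f}: employment {employment:.0f}, unemployed surplus {surplus:.0f}")
```

In this toy model employment falls from 550 to 460 hours when the floor is set at 9, while 240 hours of labor are offered but not hired, which is the "surplus" the text describes; the monopsony and two-sector criticisms that follow are precisely about whether real low-wage labor markets behave like this.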
If no minimum wage is in place, workers and employers will continue to adjust the quantity of labor supplied according to price until the quantity of labor demanded is equal to the quantity of labor supplied, reaching equilibrium price, where the supply and demand curves intersect. Minimum wage behaves as a classical price floor on labor. Standard theory says that, if set above the equilibrium price, more labor will be willing to be provided by workers than will be demanded by employers, creating a surplus of labor, i.e. unemployment. In other words, the simplest and most basic economics says this about commodities like labor (and wheat, for example): Artificially raising the price of the commodity tends to cause the supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity. When there is a wheat surplus, the government buys it. Since the government does not hire surplus labor, the labor surplus takes the form of unemployment, which tends to be higher with minimum wage laws than without them. So the basic theory says that raising the minimum wage helps workers whose wages are raised, and hurts people who are not hired (or lose their jobs) because companies cut back on employment. But proponents of the minimum wage hold that the situation is much more complicated than the basic theory can account for. One complicating factor is possible monopsony in the labor market, whereby the individual employer has some market power in determining wages paid. Thus it is at least theoretically possible that the minimum wage may boost employment. Though single employer market power is unlikely to exist in most labor markets in the sense of the traditional 'company town,' asymmetric information, imperfect mobility, and the personal element of the labor transaction give some degree of wage-setting power to most firms. Criticism of the neoclassical model The argument that a minimum wage decreases employment is based on a simple supply and demand model of the labor market. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent. Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifiable theory, and consequently empirical evidence hardly exists for that model. Graham White argues, partially on the basis of Sraffianism, that the policy of increased labor market flexibility, including the reduction of minimum wages, does not have an "intellectually coherent" argument in economic theory. Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the standard textbook model for the minimum wage is ambiguous, and that the standard theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector market, where "the self-employed, service workers, and farm workers are typically excluded from minimum-wage coverage... [and with] one sector with minimum-wage coverage and the other without it [and possible mobility between the two]," is the basis for better analysis. Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the predictions derived from the textbook model definitely do not carry over to the two-sector case. 
Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook model simply cannot be relied on." An alternate view of the labor market has low-wage labor markets characterized as monopsonistic competition wherein buyers (employers) have significantly more market power than do sellers (workers). This monopsony could be a result of intentional collusion between employers, or naturalistic factors such as segmented markets, search costs, information costs, imperfect mobility and the personal element of labor markets. In such a case a simple supply and demand graph would not yield the quantity of labor clearing and the wage rate. This is because while the upward sloping aggregate labor supply would remain unchanged, instead of using the upward labor supply curve shown in a supply and demand diagram, monopsonistic employers would use a steeper upward sloping curve corresponding to marginal expenditures to yield the intersection with the supply curve resulting in a wage rate lower than would be the case under competition. Also, the amount of labor sold would also be lower than the competitive optimal allocation. Such a case is a type of market failure and results in workers being paid less than their marginal value. Under the monopsonistic assumption, an appropriately set minimum wage could increase both wages and employment, with the optimal level being equal to the marginal product of labor. This view emphasizes the role of minimum wages as a market regulation policy akin to antitrust policies, as opposed to an illusory "free lunch" for low-wage workers. Another reason minimum wage may not affect employment in certain industries is that the demand for the product the employees produce is highly inelastic. For example, if management is forced to increase wages, management can pass on the increase in wage to consumers in the form of higher prices. Since demand for the product is highly inelastic, consumers continue to buy the product at the higher price and so the manager is not forced to lay off workers. Economist Paul Krugman argues this explanation neglects to explain why the firm was not charging this higher price absent the minimum wage. Three other possible reasons minimum wages do not affect employment were suggested by Alan Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage may "render moot" the potential problem of recruiting workers at a higher wage than current workers; and minimum wage workers might represent such a small proportion of a business's cost that the increase is too small to matter. He admits that he does not know if these are correct, but argues that "the list demonstrates that one can accept the new empirical findings and still be a card-carrying economist." Economists disagree as to the measurable impact of minimum wages in the 'real world'. This disagreement usually takes the form of competing empirical tests of the elasticities of supply and demand in labor markets and the degree to which markets differ from the efficiency that models of perfect competition predict. 
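The monopsony argument sketched a few paragraphs above can likewise be made concrete. The example below uses an invented linear labor-supply curve and a constant marginal revenue product; under those assumptions a wage floor set between the monopsony wage and the competitive wage raises both pay and employment, which is the theoretical possibility the text describes.

```python
# Stylized monopsony example (invented numbers, not from the article).
# Labor supply: w = 5 + 0.01 * L   (the wage needed to attract L workers)
# Each worker's marginal revenue product is a constant v = 15.
# A single wage-setting employer equates marginal labor cost with v,
# which yields lower employment and a lower wage than the competitive outcome.

A, B = 5.0, 0.01     # supply-curve intercept and slope
V = 15.0             # marginal revenue product per worker

# Monopsony: total labor cost = (A + B*L)*L, so marginal cost = A + 2*B*L = V
L_monopsony = (V - A) / (2 * B)          # 500 workers
w_monopsony = A + B * L_monopsony        # wage 10

# Competitive benchmark: wage = marginal revenue product
L_competitive = (V - A) / B              # 1000 workers
w_competitive = V                        # wage 15

# A minimum wage set between 10 and 15 flattens the employer's marginal cost
# at the floor, so hiring expands until supply at that wage is exhausted.
w_min = 12.0
L_with_floor = min((w_min - A) / B, L_competitive)   # 700 workers

print(f"monopsony: wage {w_monopsony}, employment {L_monopsony:.0f}")
print(f"with a {w_min} minimum wage: employment {L_with_floor:.0f}")
```

Here a floor of 12 raises both the wage (from 10) and employment (from 500 to 700 workers); whether real low-wage markets involve enough employer wage-setting power for this effect to dominate is exactly what the empirical studies below dispute.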
Economists have done empirical studies on different aspects of the minimum wage, including: - Employment effects, the most frequently studied aspect - Effects on the distribution of wages and earnings among low-paid and higher-paid workers - Effects on the distribution of incomes among low-income and higher-income families - Effects on the skills of workers through job training and the deferring of work to acquire education - Effects on prices and profits - Effects on on-the-job training Until the mid-1990s, a general consensus existed among economists, both conservative and liberal, that the minimum wage reduced employment, especially among younger and low-skill workers. In addition to the basic supply-demand intuition, there were a number of empirical studies that supported this view. For example, Gramlich (1976) found that many of the benefits went to higher income families, and in particular that teenagers were made worse off by the unemployment associated with the minimum wage. Brown et al. (1983) noted that time series studies to that point had found that for a 10 percent increase in the minimum wage, there was a decrease in teenage employment of 1–3 percent. However, the studies found wider variation, from 0 to over 3 percent, in their estimates for the effect on teenage unemployment (teenagers without a job and looking for one). In contrast to the simple supply and demand diagram, it was commonly found that teenagers withdrew from the labor force in response to the minimum wage, which produced the possibility of equal reductions in the supply as well as the demand for labor at a higher minimum wage and hence no impact on the unemployment rate. Using a variety of specifications of the employment and unemployment equations (using ordinary least squares vs. generalized least squares regression procedures, and linear vs. logarithmic specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent decrease in teenage employment, and no change in the teenage unemployment rate. The study also found a small, but statistically significant, increase in unemployment for adults aged 20–24. Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum wage was declining, because it had not increased since 1981. She found that a 10% increase in the minimum wage decreased the absolute teenage employment by 0.6%, with no effect on the teen or young adult unemployment rates. Some research suggests that the unemployment effects of small minimum wage increases are dominated by other factors. In Florida, where voters approved an increase in 2004, a follow-up comprehensive study after the increase confirmed a strong economy with increased employment above previous years in Florida and better than in the U.S. as a whole. When it comes to on-the-job training, some believe the increase in wages is taken out of training expenses. A 2001 empirical study found that there is "no evidence that minimum wages reduce training, and little evidence that they tend to increase training." Some empirical studies have tried to ascertain the benefits of a minimum wage beyond employment effects. In an analysis of Census data, Joseph Sabia and Robert Nielson found no statistically significant evidence that minimum wage increases helped reduce financial, housing, health, or food insecurity. 
This study was undertaken by the Employment Policies Institute, a think tank funded by the food, beverage and hospitality industries. In 2012, Michael Reich published an economic analysis that suggested that a proposed minimum wage hike in San Diego might stimulate the city's economy by about $190 million. The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ... High minimum wages, however, particularly in rigid labour markets, do appear to hit employment. France has the rich world’s highest wage floor, at more than 60% of the median for adults and a far bigger fraction of the typical wage for the young. This helps explain why France also has shockingly high rates of youth unemployment: 26% for 15- to 24-year-olds." Most studies on the effects of minimum wages have been conducted in high-income economies. A study on minimum wages increases in China shows that "minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers". Card and Krueger In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8% increase) while the adjacent state of Pennsylvania remained at $4.25. David Card and Alan Krueger gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an attempt to see what effect this increase had on employment within New Jersey. Basic economic theory would have implied that relative employment should have decreased in New Jersey. Card and Krueger surveyed employers before the April 1992 New Jersey increase, and again in November–December 1992, asking managers for data on the full-time equivalent staff level of their restaurants both times. Based on data from the employers' responses, the authors concluded that the increase in the minimum wage slightly increased employment in the New Jersey restaurants. One possible explanation that the current minimum wage laws may not affect unemployment in the United States is that the minimum wage is set close to the equilibrium point for low and unskilled workers. Thus, according to this explanation, in the absence of the minimum wage law unskilled workers would be paid approximately the same amount and an increase above this equilibrium point could likely bring about increased unemployment for the low and unskilled workers. Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement: The New Economics of the Minimum Wage. They argued that the negative employment effects of minimum wage laws are minimal if not non-existent. For example, they look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum wage, and the 1990–91 increases in the federal minimum wage. In addition to their own findings, they reanalyzed earlier studies with updated data, generally finding that the older results of a negative employment effect did not hold up in the larger datasets. 
Research subsequent to Card and Krueger's work In subsequent research, David Neumark and William Wascher attempted to verify Card and Krueger's results by using administrative payroll records from a sample of large fast food restaurant chains in order to verify employment. They found that the minimum wage increases were followed by decreases in employment. On the other hand, an assessment of data collected and analyzed by Neumark and Wascher did not initially contradict the Card and Krueger results, but in a later edited version they found a four percent decrease in employment, and reported that "the estimated disemployment effects in the payroll data are often statistically significant at the 5- or 10- percent level although there are some estimators and subsamples that yield insignificant—although almost always negative" employment effects. However, this paper's conclusions were rebutted in a 2000 paper by Card and Krueger. A 2011 paper has reconciled the difference between Card and Krueger's survey data and Neumark and Wascher's payroll-based data. The paper shows that both datasets evidence conditional employment effects that are positive for small restaurants, but are negative for large fast-food restaurants. In 1996 and 1997, the federal minimum wage was increased from $4.25 to $5.15, thereby increasing the minimum wage by $0.90 in Pennsylvania but by just $0.10 in New Jersey; this allowed for an examination of the effects of minimum wage increases in the same area, subsequent to the 1992 change studied by Card and Krueger. A study by Hoffman and Trace found the result anticipated by traditional theory: a detrimental effect on employment. Further application of the methodology used by Card and Krueger by other researchers yielded results similar to their original findings, across additional data sets. A 2010 study by three economists (Arindrajit Dube of the University of Massachusetts Amherst, T. William Lester of the University of North Carolina at Chapel Hill, and Michael Reich of the University of California, Berkeley), compared adjacent counties in different states where the minimum wage had been raised in one of the states. They analyzed employment trends for several categories of low-wage workers from 1990 to 2006 and found that increases in minimum wages had no negative effects on low-wage employment and successfully increased the income of workers in food services and retail employment, as well as the narrower category of workers in restaurants. However, a 2011 study by Baskaya and Rubinstein of Brown University found that at the federal level, "a rise in minimum wage have [sic] an instantaneous impact on wage rates and a corresponding negative impact on employment", stating, "Minimum wage increases boost teenage wage rates and reduce teenage employment." Another 2011 study by Sen, Rybczynski, and Van De Waal found that "a 10% increase in the minimum wage is significantly correlated with a 3−5% drop in teen employment." A 2012 study by Sabia, Hansen, and Burkhauser found that "minimum wage increases can have substantial adverse labor demand effects for low-skilled individuals", with the largest effects on those aged 16 to 24. A 2013 study by Meer and West concluded that "the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments ... most pronounced for younger workers and in industries with a higher proportion of low-wage workers." 
This study by Meer and West was later critiqued for its trends of assumption in the context of narrowly defined low-wage groups. The authors replied to the critiques and released additional data which addressed the criticism of their methodology, but did not resolve the issue of whether their data showed a causal relationship. Another 2013 study by Suzana Laporšek of the University of Primorska, on youth unemployment in Europe claimed there was "a negative, statistically significant impact of minimum wage on youth employment." A 2013 study by labor economists Tony Fang and Carl Lin which studied minimum wages and employment in China, found that "minimum wage changes have significant adverse effects on employment in the Eastern and Central regions of China, and result in disemployment for females, young adults, and low-skilled workers". Several researchers have conducted statistical meta-analyses of the employment effects of the minimum wage. In 1995, Card and Krueger analyzed 14 earlier time-series studies on minimum wages and concluded that there was clear evidence of publication bias (in favor of studies that found a statistically significant negative employment effect). They point out that later studies, which had more data and lower standard errors, did not show the expected increase in t-statistic (almost all the studies had a t-statistic of about two, just above the level of statistical significance at the .05 level). Though a serious methodological indictment, opponents of the minimum wage largely ignored this issue; as Thomas C. Leonard noted, "The silence is fairly deafening." In 2005, T.D. Stanley showed that Card and Krueger's results could signify either publication bias or the absence of a minimum wage effect. However, using a different methodology, Stanley concludes that there is evidence of publication bias and that correction of this bias shows no relationship between the minimum wage and unemployment. In 2008, Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on dis-employment effects and concluded that Card and Krueger's initial claim of publication bias is still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains." Consistent with the results from Doucouliagos and Stanley, and Card and Krueger, Baskaya and Rubinstein's 2011 study, which analyzed 24 papers on the minimum wage, found "mild positive, yet statistically insignificant association between the change in the employment of teenagers" at state minimum wage levels. However, when minimum wage is set at the federal level, they found "notable wage impacts and large corresponding disemployment effects". Debate over consequences Minimum wage laws affect workers in most low-paid fields of employment and have usually been judged against the criterion of reducing poverty. Minimum wage laws receive less support from economists than from the general public. Despite decades of experience and economic research, debates about the costs and benefits of minimum wages continue today. Various groups have great ideological, political, financial, and emotional investments in issues surrounding minimum wage laws. For example, agencies that administer the laws have a vested interest in showing that "their" laws do not create unemployment, as do labor unions whose members' finances are protected by minimum wage laws. 
On the other side of the issue, low-wage employers such as restaurants finance the Employment Policies Institute, which has released numerous studies opposing the minimum wage. The presence of these powerful groups and factors means that the debate on the issue is not always based on dispassionate analysis. Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the other variables that affect employment. The following table summarizes the arguments made by those for and against minimum wage laws: Supporters of the minimum wage claim it has these effects: Opponents of the minimum wage claim it has these effects: A widely circulated argument that the minimum wage was ineffective at reducing poverty was provided by George Stigler in 1949: - Employment may fall more than in proportion to the wage increase, thereby reducing overall earnings; - As uncovered sectors of the economy absorb workers released from the covered sectors, the decrease in wages in the uncovered sectors may exceed the increase in wages in the covered ones; - The impact of the minimum wage on family income distribution may be negative unless the fewer but better jobs are allocated to members of needy families rather than to, for example, teenagers from families not in poverty; - Forbidding employers to pay less than a legal minimum is equivalent to forbidding workers to sell their labor for less than the minimum wage. The legal restriction that employers cannot pay less than a legislated wage is equivalent to the legal restriction that workers cannot work at all in the protected sector unless they can find employers willing to hire them at that wage. In 2006, the International Labour Organization (ILO) argued that the minimum wage could not be directly linked to unemployment in countries that have suffered job losses. In April 2010, the Organisation for Economic Co-operation and Development (OECD) released a report arguing that countries could alleviate teen unemployment by "lowering the cost of employing low-skilled youth" through a sub-minimum training wage. A study of U.S. states showed that businesses' annual and average payrolls grow faster and employment grew at a faster rate in states with a minimum wage. The study showed a correlation, but did not claim to prove causation. Although strongly opposed by both the business community and the Conservative Party when introduced in 1999, the Conservatives reversed their opposition in 2000. Accounts differ as to the effects of the minimum wage. The Centre for Economic Performance found no discernible impact on employment levels from the wage increases, while the Low Pay Commission found that employers had reduced their rate of hiring and employee hours employed, and found ways to cause current workers to be more productive (especially service companies). The Institute for the Study of Labor found prices in the minimum wage sector rose significantly faster than prices in non-minimum wage sectors, in the four years following the implementation of the minimum wage. Neither trade unions nor employer organizations contest the minimum wage, although the latter had especially done so heavily until 1999. In 2014, supporters of minimum wage cited a study that found that job creation within the United States is faster in states that raised their minimum wages. In 2014, supporters of minimum wage cited news organizations who reported the state with the highest minimum-wage garnered more job creation than the rest of the United States. 
In 2014, in Seattle, Washington, liberal and progressive business owners who had supported the city's new $15 minimum wage said they might hold off on expanding their businesses and thus creating new jobs, due to the uncertain timescale of the wage increase implementation. However, subsequently at least two of the business owners quoted did expand. The dollar value of the minimum wage loses purchasing power over time due to inflation. Minimum wage laws, for instance proposals to index the minimum wage to average wages, have the potential to keep the dollar value of the minimum wage relevant and predictable. With regard to the economic effects of introducing minimum wage legislation in Germany in January 2015, recent developments have shown that the feared increase in unemployment has not materialized, however, in some economic sectors and regions of the country, it came to a decline in job opportunities particularly for temporary and part-time workers, and some low-wage jobs have disappeared entirely. Because of this overall positive development, the Deutsche Bundesbank revised its opinion, and ascertained that “the impact of the introduction of the minimum wage on the total volume of work appears to be very limited in the present business cycle”. Surveys of economists According to a 1978 article in the American Economic Review, 90% of the economists surveyed agreed that the minimum wage increases unemployment among low-skilled workers. By 1992 the survey found 79% of economists in agreement with that statement, and by 2000, 45.6% were in full agreement with the statement and 27.9% agreed with provisos (73.5% total). The authors of the 2000 study also reweighted data from a 1990 sample to show that at that time 62.4% of academic economists agreed with the statement above, while 19.5% agreed with provisos and 17.5% disagreed. They state that the reduction on consensus on this question is "likely" due to the Card and Krueger research and subsequent debate. A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic Association (AEA). Whaples found that 46.8% respondents wanted the minimum wage eliminated, 37.7% supported an increase, 14.3% wanted it kept at the current level, and 1.3% wanted it decreased. Another survey in 2007 conducted by the University of New Hampshire Survey Center found that 73% of labor economists surveyed in the United States believed 150% of the then-current minimum wage would result in employment losses and 68% believed a mandated minimum wage would cause an increase in hiring of workers with greater skills. 31% felt that no hiring changes would result. Surveys of labor economists have found a sharp split on the minimum wage. Fuchs et al. (1998) polled labor economists at the top 40 research universities in the United States on a variety of questions in the summer of 1996. Their 65 respondents were nearly evenly divided when asked if the minimum wage should be increased. They argued that the different policy views were not related to views on whether raising the minimum wage would reduce teen employment (the median economist said there would be a reduction of 1%), but on value differences such as income redistribution. Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys, "the average level of support for the minimum wage is somewhat higher among labor economists than among AEA members." 
In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum wage who had signed the "Raise the Minimum Wage" statement published by the Economic Policy Institute. 95 of the 605 signatories responded. They found that a majority signed on the grounds that it transferred income from employers to workers, or equalized bargaining power between them in the labor market. In addition, a majority considered disemployment to be a moderate potential drawback to the increase they supported. In 2013, a diverse group of 37 economics professors was surveyed on their view of the minimum wage's impact on employment. 34% of respondents agreed with the statement, "Raising the federal minimum wage to $9 per hour would make it noticeably harder for low-skilled workers to find employment." 32% disagreed and the remaining respondents were uncertain or had no opinion on the question. 47% agreed with the statement, "The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy", while 11% disagreed. Economists and other political commentators[who?] have proposed alternatives to the minimum wage.[which?] They argue that these alternatives may address the issue of poverty better than a minimum wage, as it would benefit a broader population of low wage earners, not cause any unemployment, and distribute the costs widely rather than concentrating it on employers of low wage workers. A basic income (or negative income tax) is a system of social security that periodically provides each citizen with a sum of money that is sufficient to live on frugally. It is argued that recipients of the basic income would have considerably more bargaining power when negotiating a wage with an employer as there would be no risk of destitution for not taking the employment. As a result, the jobseeker could spend more time looking for a more appropriate or satisfying job, or they could wait until a higher-paying job appeared. Alternately, they could spend more time increasing their skills in university, which would make them more suitable for higher-paying jobs, as well as provide numerous other benefits. Experiments on Basic Income and NIT in Canada and the USA show that people spent more time studying while the program was running. Proponents argue that a basic income that is based on a broad tax base would be more economically efficient, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. Guaranteed minimum income A guaranteed minimum income is another proposed system of social welfare provision. It is similar to a basic income or negative income tax system, except that it is normally conditional and subject to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a willingness to perform community services. Refundable tax credit A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a household to below zero, and result in a net payment to the taxpayer beyond their own payments into the tax system. Examples of refundable tax credits include the earned income tax credit and the additional child tax credit in the U.S., and working tax credits and child tax credits in the UK. 
Such a system is slightly different from a negative income tax, in that the refundable tax credit is usually only paid to households that have earned at least some income. This policy is more targeted against poverty than the minimum wage, because it avoids subsidizing low-income workers who are supported by high-income households (for example, teenagers still living with their parents). In the United States, earned income tax credit rates, also known as EITC or EIC, vary by state − some are refundable while other states do not allow a refundable tax credit. The federal EITC program has been expanded by a number of presidents including Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bill Clinton. In 1986, President Reagan described the EITC as "the best anti poverty, the best pro-family, the best job creation measure to come out of Congress." The ability of the earned income tax credit to deliver larger monetary benefits to the poor workers than an increase in the minimum wage and at a lower cost to society was documented in a 2007 report by the Congressional Budget Office. Italy, Sweden, Norway, Finland, and Denmark are examples of developed nations where there is no minimum wage that is required by legislation. Such nations, particularly the Nordics, have very high union participation rates. Instead, minimum wage standards in different sectors are set by collective bargaining. In January 2014, seven Nobel economists--Kenneth Arrow, Peter Diamond, Eric Maskin, Thomas Schelling, Robert Solow, Michael Spence, and Joseph Stiglitz—and 600 other economists wrote a letter to the US Congress and the US President urging that, by 2016, the US government should raise the minimum wage to $10.10. They endorsed the Minimum Wage Fairness Act which was introduced by US Senator Tom Harkin in 2013. Democratic presidential candidate Bernie Sanders has introduced a bill to the Senate that would raise the minimum wage to $15. - "The Advantage Of The Minimum Wage | Robert Nielsen". Robertnielsen21.wordpress.com. Retrieved 2013-03-30. - "Minimum Wages. by David Neumark and William L. Wascher". - "The Young and the Jobless". The Wall Street Journal. 2009-10-03. Retrieved 2014-01-11. - Black, John (September 18, 2003). Oxford Dictionary of Economics. Oxford University Press. p. 300. ISBN 978-0-19-860767-0. - Mihm, Stephen (5 September 2013). "How the Black Death Spawned the Minimum Wage". Bloomberg View. Retrieved 17 April 2014. - Thorpe, Vanessa (29 March 2014). "Black death was not spread by rat fleas, say researchers". theguardian.com. Retrieved 29 March 2014. - Starr, Gerald (1993). Minimum wage fixing : an international review of practices and problems (2nd impression (with corrections) ed.). Geneva: International Labour Office. p. 1. ISBN 9789221025115. - Nordlund, Willis J. (1997). The quest for a living wage : the history of the federal minimum wage program. Westport, Conn.: Greenwood Press. p. xv. ISBN 9780313264122. - Neumark, David; William L. Wascher (2008). Minimum Wages. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-14102-4. - "OECD Statistics (GDP, unemployment, income, population, labour, education, trade, finance, prices...)". Stats.oecd.org. Retrieved 2013-03-29. - Grossman, Jonathan. "Fair Labor Standards Act of 1938: Maximum Struggle for a Minimum Wage". Department of Labor. Retrieved 17 April 2014. - Stone, Jon (1 October 2010). "History of the UK's minimum wage". Total Politics. Retrieved 17 April 2014. - Williams, Walter E. (June 2009). 
"The Best Anti-Poverty Program We Have?". Regulation 32 (2): 62. - "ILO 2006: Minimum wages policy (PDF)" (PDF). Ilo.org. Retrieved March 1, 2012. - Eurostat (2006): Minimum Wages 2006 - Variations from 82 to 1503 euro gross per month (PDF) - "Germany may become 22nd EU state with federal minimum wage". Germany News.Net. Retrieved 7 July 2014. - Ehrenberg, Ronald G. Labor Markets and Integrating National Economies, Brookings Institution Press (1994), p. 41 - Alderman, Liz; Greenhouse, Steven (October 27, 2014). "Fast Food in Denmark Serves Something Atypical: Living Wages". New York Times. Retrieved October 27, 2014. - "Minimum Wage". Washington State Dept. of Labor & Industries. Retrieved 2015-01-18. - "British Government website". Retrieved 2013-09-05. - "Most Asked Questions about Minimum Wages in India". PayCheck.in. 2013-02-22. Retrieved 2013-03-29. - Sowell, Thomas (2004). "Minimum Wage Laws". Basic Economics: A Citizen's Guide to the Economy. New York: Basic Books. pp. 163–9. ISBN 978-0-465-08145-5. - Provisional Minimum Wage Commission: Preliminary Views on a Bask of Indicators, Other Relevant Considerations and Impact Assessment, Provisional Minimum Wage Commission, Hong Kong Special Administrative Region Government, - Setting the Initial Statutory Minimum Wage Rate, submission to government by the Hong Kong General Chamber of Commerce. - Li, Joseph, "Minimum wage legislation for all sectors," China Daily October 16, 2008 , "Hong Kong sets Minimum Wage – What one Singaporean thinks," Speaker's Corner, SG Forums, November 5, 2010 - Berstein, David E., & Leonard, Thomas C., Excluding Unfit Workers: Social Control Versus Social Justice in the Age of Economic Reform, Law and Contemporary Problems, Vol. 72, No. 3, 2009 - Editorial Board (February 9, 2014). "The Case for a Higher Minimum Wage". New York Times. Retrieved February 9, 2014. - Chumley, Cheryl K. (March 18, 2013). "Take it to the bank: Sen. Elizabeth Warren wants to raise minimum wage to $22 per hour". Washington Times. Retrieved January 22, 2014. - Wing, Nick (March 18, 2013). "Elizabeth Warren: Minimum Wage Would Be $22 An Hour If It Had Kept Up With Productivity". Huffington Post. Retrieved January 22, 2014. - Hart-Landsberg, Ph.D., Martin (December 19, 2013). "$22.62/HR: The Minimum Wage If It Had Risen Like The Incomes Of The 1%". thesocietypages.org. Retrieved January 22, 2014. - Rmusemore (December 3, 2013). "Stop Complaining Republicans, the Minimum Wage Should be $22.62 an Hour". policususa.com. Retrieved January 22, 2014. - McConnell, C. R.; Brue, S. L. (1999). Economics (14th ed.). Irwin-McGraw Hill. p. 594. - Gwartney, J. D.; Stroup, R. L.; Sobel, R. S.; Macpherson, D. A. (2003). Economics: Private and Public Choice (10th ed.). Thomson South-Western. p. 97. - Mankiw, N. Gregory (2011). Principles of Macroeconomics (6th ed.). South-Western Pub. p. 311. - Card, David; Krueger, Alan B. (1995). Myth and Measurement: The New Economics of the Minimum Wage. Princeton University Press. pp. 1; 6–7. - Formby, J. P.; Bishop, J. A.; Kim, H. (2010). "The Redistributive Effects and Cost-Effectiveness of Increasing the Federal Minimum Wage". Public Finance Review 38 (5): 585–618. doi:10.1177/1091142110373481. - Belman, Dale L.; Wolfson, Paul (2010). "The Effect of Legislated Minimum Wage Increases on Employment and Hours: A Dynamic Analysis". Labour 24 (1): 1–25. doi:10.1111/j.1467-9914.2010.00468.x. - Gwartney, James David; Stroup, Richard L.; Studenmund, A. H. (1987). Economics: Private and Public Choice. 
New York: Harcourt Brace Jovanovich. pp. 559–62. ISBN 978-0-15-518880-8. - e.g. DE Card and AB Krueger, Myth and Measurement: The New Economics of the Minimum Wage (1995) and S Machin and A Manning, ‘Minimum wages and economic outcomes in Europe’ (1997) 41 European Economic Review 733 - Rittenberg, Timothy Tregarthen, Libby (1999). Economics (2nd ed.). New York: Worth Publishers. p. 290. ISBN 9781572594180. Retrieved 21 June 2014. - Ehrenberg, R. and Smith, R. "Modern labor economics: theory and public policy", HarperCollins, 1994, 5th ed.[page needed] - By Jim Stanford, Debate: Boost the wage, help the worker, National Post, February 22, 2011 - Boal, William M.; Ransom, Michael R (March 1997). "Monopsony in the Labor Market". Journal of Economic Literature 35 (1): 86–112. JSTOR 2729694. - Garegnani, P. (July 1970). "Heterogeneous Capital, the Production Function and the Theory of Distribution". The Review of Economic Studies 37 (3): 407–36. doi:10.2307/2296729. JSTOR 2296729. - Vienneau, Robert L. (2005). "On Labour Demand and Equilibria of the Firm". The Manchester School 73 (5): 612–9. doi:10.1111/j.1467-9957.2005.00467.x. - Opocher, A.; Steedman, I. (2009). "Input price-input quantity relations and the numeraire". Cambridge Journal of Economics 33 (5): 937–48. doi:10.1093/cje/bep005. - Anyadike-Danes, Michael; Godley, Wynne (1989). "Real Wages and Employment: A Sceptical View of Some Recent Empirical Work". The Manchester School 57 (2): 172–87. doi:10.1111/j.1467-9957.1989.tb00809.x. - White, Graham (November 2001). "The Poverty of Conventional Economic Wisdom and the Search for Alternative Economic and Social Policies". The Drawing Board: an Australian Review of Public Affairs 2 (2): 67–87. - Fields, Gary S. (1994). "The Unemployment Effects of Minimum Wages". International Journal of Manpower 15 (2): 74–81. doi:10.1108/01437729410059323. - Manning, Alan (2003). Monopsony in motion: Imperfect Competition in Labor Markets. Princeton, NJ: Princeton University Press. ISBN 0-691-11312-2.[page needed] - Gillespie, Andrew (2007). Foundations of Economics. Oxford University Press. p. 240. - Krugman, Paul (2013). Economics. Worth Publishers. p. 385. - Blinder, Alan S. (May 23, 1996). "The $5.15 Question". The New York Times. p. A29. - Schmitt, John (February 2013). "Why Does the Minimum Wage Have No Discernible Effect on Employment?" (PDF). Center for Economic and Policy Research. Retrieved December 5, 2013. Lay summary – The Washington Post (February 14, 2013). - Gramlich, Edward M.; Flanagan, Robert J.; Wachter, Michael L. (1976). "Impact of Minimum Wages on Other Wages, Employment, and Family Incomes". Brookings Papers on Economic Activity 1976 (2): 409–61. doi:10.2307/2534380. - Brown, Charles; Gilroy, Curtis; Kohen, Andrew (Winter 1983). "Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment". The Journal of Human Resources 18 (1): 3–31. doi:10.2307/145654. JSTOR 145654. - Wellington, Alison J. (Winter 1991). "Effects of the Minimum Wage on the Employment Status of Youths: An Update". The Journal of Human Resources 26 (1): 27–46. doi:10.2307/145715. JSTOR 145715. - Fox, Liana (October 24, 2006). "Minimum wage trends: Understanding past and contemporary research". Economic Policy Institute. Retrieved December 6, 2013. - "The Florida Minimum Wage: Good for Workers, Good for the Economy" (PDF). Retrieved 3 November 2013. - Acemoglu, Daron; Pischke, Jörn-Steffen (November 2001). "Minimum Wages and On-the-Job Training" (PDF). 
Institute for the Study of Labor. SSRN 288292. Retrieved December 6, 2013. Also published as Acemoglu, Daron; Pischke, Jörn-Steffen (2003). "Minimum Wages and On-the-job Training". In Polachek, Solomon W. Worker Well-Being and Public Policy. Research in Labor Economics 22. pp. 159–202. doi:10.1016/S0147-9121(03)22005-7. ISBN 978-0-76231-026-5. - Sabia, Joseph J.; Nielsen, Robert B. (April 2012). Can Raising the Minimum Wage Reduce Poverty and Hardship?. Employment Policies Institute.[page needed] - Michael Reich. "Increasing the Minimum Wage in San Jose: Benefits and Costs- White Paper" (PDF). Retrieved 2013-03-29. - The Economist-The Logical Floor-December 2013 - Fang, Tony; Lin, Carl (2015-11-27). "Minimum wages and employment in China". IZA Journal of Labor Policy 4 (1): 22. doi:10.1186/s40173-015-0050-9. ISSN 2193-9004. - Card, David; Krueger, Alan B. (September 1994). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania". The American Economic Review 84 (4): 772–93. JSTOR 2118030. - ISBN 0-691-04823-1[full citation needed][page needed] - Card; Krueger (2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply". American Economic Review 90 (5): 1397–1420. doi:10.1257/aer.90.5.1397. - Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties". Review of Economics and Statistics 92 (4): 945–964. doi:10.1162/REST_a_00039. Retrieved 10 March 2014. - Schmitt, John (January 1, 1996). "The Minimum Wage and Job Loss". Economic Policy Institute. Retrieved December 7, 2013. - Neumark, David; Wascher, William (December 2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Comment". The American Economic Review 90 (5): 1362–96. doi:10.1257/aer.90.5.1362. JSTOR 2677855. - http://www.davidson.edu/academic/economics/foley/eco324_s06/Neumark_Wascher%20AER%20(2000).pdf[full citation needed][dead link] - Card and Krueger (2000) "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply" American Economic Review, Volume 90 No. 5. pg 1397-1420 - Ropponen, Olli (2011). "Reconciling the evidence of Card and Krueger (1994) and Neumark and Wascher (2000)". Journal of Applied Econometrics 26 (6): 1051–7. doi:10.1002/jae.1258. - Hoffman, Saul D; Trace, Diane M (2009). "NJ and PA Once Again: What Happened to Employment when the PA–NJ Minimum Wage Differential Disappeared?". Eastern Economic Journal 35 (1): 115–28. doi:10.1057/eej.2008.1. - Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties" (PDF). The Review of Economics and Statistics 92 (4): 945–64. doi:10.1162/REST_a_00039. - FOLBRE, NANCY (November 1, 2010). "Along the Minimum-Wage Battle Front". New York Times. Retrieved 4 December 2013. - "Using Federal Minimum Wages to Identify the Impact of Minimum Wages on Employment and Earnings Across the U.S. States" (PDF). 1 Oct 2011. - "Teen employment, poverty, and the minimum wage: Evidence from Canada". 1 Jan 2011. - "Are the Effects of Minimum Wage Increases Always Small? New Evidence from a Case Study of New York State". 2 Apr 2012. - Meer, Jonathan; West, Jeremy (2013). "Effects of the Minimum Wage on Employment Dynamics". NBER Working Paper No. 19262. 
- "Minimum wage effects on youth employment in the European Union". 14 Sep 2013. - "Minimum Wages and Employment in China". 14 Dec 2013. - Card, David; Krueger, Alan B. (May 1995). "Time-Series Minimum-Wage Studies: A Meta-analysis". The American Economic Review 85 (2): 238–43. JSTOR 2117925. - Leonard, T. C. (2000). "The Very Idea of Applying Economics: The Modern Minimum-Wage Controversy and Its Antecedents". History of Political Economy 32: 117. doi:10.1215/00182702-32-Suppl_1-117. - Stanley, T. D. (2005). "Beyond Publication Bias". Journal of Economic Surveys 19 (3): 309. doi:10.1111/j.0950-0804.2005.00250.x. - Doucouliagos, Hristos; Stanley, T. D. (2009). "Publication Selection Bias in Minimum-Wage Research? A Meta-Regression Analysis". British Journal of Industrial Relations 47 (2): 406–28. doi:10.1111/j.1467-8543.2009.00723.x. - Eatwell, John, Ed.; Murray Milgate; Peter Newman (1987). The New Palgrave: A Dictionary of Economics. London: The Macmillan Press Limited. pp. 476–478. ISBN 0-333-37235-2. - Bernstein, Harry (September 15, 1992). "Troubling Facts on Employment". Los Angeles Times. p. D3. Retrieved December 6, 2013. - Engquist, Erik (May 2006). "Health bill fight nears showdown". Crain's New York Business 22 (20): 1. - Stilwell, Victoria (March 8, 2014). "Highest Minimum-Wage State Washington Beats U.S. in Job Creation". Bloomberg. - "Real Value of the Minimum Wage". Epi.org. Retrieved 2013-03-29. - Freeman, Richard B. (1994). "Minimum Wages – Again!". International Journal of Manpower 15 (2): 8–25. doi:10.1108/01437729410059305. - Bernard Semmel, Imperialism and Social Reform: English Social-Imperial Thought 1895–1914 (London: Allen and Unwin, 1960), p. 63. - "ITIF Report Shows Self-service Technology a New Force in Economic Life". The Information Technology & Innovation Foundation. April 14, 2010. Retrieved October 5, 2011. - Alesina, Alberto F.; Zeira, Joseph (2006). "Technology and Labor Regulations". SSRN Electronic Journal. doi:10.2139/ssrn.936346. - "Minimum Wages in canada : theory, evidence and policy". Hrsdc.gc.ca. March 7, 2008. Retrieved October 5, 2011. - Kallem, Andrew (2004). "Youth Crime and the Minimum Wage". SSRN Electronic Journal. doi:10.2139/ssrn.545382. - "Crime and work: What we can learn from the low-wage labor market | Economic Policy Institute". Epi.org. July 1, 2000. Retrieved October 5, 2011. - Kosteas, Vasilios D. "Minimum Wage." Encyclopedia of World Poverty. Ed. M. Odekon.Thousand Oaks, CA: Sage Publications, Inc., 2006. 719-21. SAGE knowledge. Web. - Abbott, Lewis F. Statutory Minimum Wage Controls: A Critical Review of their Effects on Labour Markets, Employment, and Incomes. ISR Publications, Manchester UK, 2nd. edn. 2000. ISBN 978-0-906321-22-5. [page needed] - Llewellyn H. Rockwell Jr. (October 28, 2005). "Wal-Mart Warms to the State - Mises Institute". Mises.org. Retrieved October 5, 2011. - Tupy, Marian L. Minimum Interference, National Review Online, May 14, 2004 - "The Wages of Politics". Wall Street Journal. November 11, 2006. Retrieved December 6, 2013. - Messmore, Ryan. "Increasing the Mandated Minimum Wage: Who Pays the Price?". Heritage.org. Retrieved October 5, 2011. - Art Carden. "Why Wal-Mart Matters - Mises Institute". Mises.org. Retrieved October 5, 2011. - "Will have only negative effects on the distribution of economic justice. Minimum-wage legislation, by its very nature, benefits some at the expense of the least experienced, least productive, and poorest workers." (Cato) - Williams, Walter (1989). 
South Africa's War Against Capitalism. New York: Praeger. ISBN 0-275-93179-X. - A blunt instrument, The Economist, October 26, 2006 (English) - Partridge, M. D.; Partridge, J. S. (1999). "Do minimum wage hikes reduce employment? State-level evidence from the low-wage retail sector". Journal of Labor Research 20 (3): 393. doi:10.1007/s12122-999-1007-9. - "The Effects of a Minimum-Wage Increase on Employment and Family Income". February 18, 2014. Retrieved July 26, 2014. - Scarpetta, Stephano, Anne Sonnet and Thomas Manfredi,Rising Youth Unemployment During The Crisis: How To Prevent Negative Long-Term Consequences on a Generation?, April 14, 2010 (read-only PDF) - Fiscal Policy Institute, "States with Minimum Wages Above the Federal Level have had Faster Small Business and Retail Job Growth," March 30, 2006. - "National Minimum Wage". politics.co.uk. Archived from the original on December 1, 2007. Retrieved December 29, 2007. - Metcalf, David (April 2007). "Why Has the British National Minimum Wage Had Little or No Impact on Employment?". - Low Pay Commission (2005). National Minimum Wage - Low Pay Commission Report 2005 - Wadsworth, Jonathan (September 2009). "Did the National Minimum Wage Affect UK Prices" (PDF). - Rugaber, Christopher S. (July 19, 2014). "States with higher minimum wage gain more jobs". USA Today. - Lobosco, Katie (May 14, 2014). "Washington state defies minimum wage logic". CNN. - Meyerson, Harold (May 21, 2014). "Harold Meyerson: A higher minimum wage may actually boost job creation". The Washington Post. - Minimum Wage Limbo Keeps Small Business Owners Up At Night, kuow.org, May 22, 2014 - Seattle Magazine, March 23, 2015 - KOMO News, July 31,2015 - C. Eisenring (Dec 2015). Gefährliche Mindestlohn-Euphorie (in German). Neue Zürcher Zeitung. Retrieved 30 December 2015. - R. Janssen (Sept 2015). The German Minimum Wage Is Not A Job Killer. Social Europe. Retrieved 30 December 2015. - Kearl, J. R.; Pope, Clayne L.; Whiting, Gordon C.; Wimmer, Larry T. (May 1979). "A Confusion of Economists?". The American Economic Review 69 (2): 28–37. JSTOR 1801612. - Alston, Richard M.; Kearl, J. R.; Vaughan, Michael B. (May 1992). "Is There a Consensus Among Economists in 1990s?". The American Economic Review 82 (2): 203–9. JSTOR 2117401. - survey by Dan Fuller and Doris Geide-Stevenson using a sample of 308 economists surveyed by the American Economic Association - Hall, Robert Ernest. Economics: Principles and Applications. Centage Learning. ISBN 1111798206. - Fuller, Dan; Geide-Stevenson, Doris (2003). "Consensus Among Economists: Revisited". Journal of Economic Education 34 (4): 369–87. doi:10.1080/00220480309595230. - Whaples, Robert (2006). "Do Economists Agree on Anything? Yes!". The Economists' Voice 3 (9): 1–6. doi:10.2202/1553-3832.1156. - http://epionline.org/studies/epi_minimumwage_07-2007.pdf[full citation needed] - Fuchs, Victor R.; Krueger, Alan B.; Poterba, James M. (September 1998). "Economists' Views about Parameters, Values, and Policies: Survey Results in Labor and Public Economics". Journal of Economic Literature 36 (3): 1387–425. JSTOR 2564804. - Klein, Daniel; Dompe, Stewart (January 2007). "Reasons for Supporting the Minimum Wage: Asking Signatories of the 'Raise the Minimum Wage' Statement". Economics in Practice 4 (1): 125–67. - "Minimum Wage". IGM Forum. February 26, 2013. Retrieved December 6, 2013. - "Suggestion: Raise welfare children in institutions". Star-News. Jan 28, 1972. Retrieved November 19, 2013. - David Scharfenberg (April 28, 2014). 
"What The Research Says In The Minimum Wage Debate". WBUR. - "50 State Resources Map on State EITCs". The Hatcher Group. Retrieved June 16, 2010. - "New Research Findings on the Effects of the Earned Income Tax Credit". Center on Budget and Policy Priorities. Retrieved June 30, 2010. - Furman, Jason (April 10, 2006). "Tax Reform and Poverty". Center on Budget and Policy Priorities. Retrieved December 7, 2013. - "Response to a Request by Senator Grassley About the Effects of Increasing the Federal Minimum Wage Versus Expanding the Earned Income Tax Credit" (PDF). Congressional Budget Office. January 9, 2007. Retrieved July 25, 2008. - Olson, Parmy (9/01/2009). The Best Minimum Wages In Europe. Forbes. Retrieved 21 February 2014. - "Labor Criticizes". Lewiston Morning Tribune. Associated Press. March 2, 1933. pp. 1, 6. - 75 economists back minimum wage hike CNN Money, January 14, 2014 - Over 600 Economists Sign Letter In Support of $10.10 Minimum Wage Economist Statement on the Federal Minimum Wage, Economic Policy Institute - "Sanders Introduces Bill for $15-an-Hour Minimum Wage". Sen. Bernie Sanders. Retrieved 2015-09-15. - The rapid success of Fight for $15: 'This is a trend that cannot be stopped' S. Greenhouse, The Guardian, US-News, 24 Jul 2015 - Burkhauser, R. V. (2014). Why minimum wage increases are a poor way to help the working poor (No. 86). IZA Policy Paper, Institute for the Study of Labor (IZA). |Wikiquote has quotations related to: Minimum wage| |Wikimedia Commons has media related to Minimum wage.| - Minimum wage at DMOZ - Resource Guide on Minimum Wages from the International Labour Organization (a UN agency) - Minimum Wage Rates in All States of India from Paycheck India - The National Minimum Wage (U.K.) from official UK government website - Find It! By Topic: Wages: Minimum Wage U.S. Department of Labor - Characteristics of Minimum Wage Workers: 2009 U.S. Department of Labor, Bureau of Labor Statistics - History of Changes to the Minimum Wage Law U.S. Department of Labor, Wage and Hour Division - The Effects of a Minimum-wage Increase on Employment and Family Income Congressional Budget Office - Inflation and the Real Minimum Wage: A Fact Sheet Congressional Research Service - Minimum Wages in Central and Eastern Europe Database Central Europe - Prices and Wages - research guide at the University of Missouri libraries - Increasing national minimum wage - from official Aaron and Partners site - Issues about Minimum Wage from the AFL-CIO (U.S. labor federation favoring the minimum wage) - Issue Guide on the Minimum Wage from the Economic Policy Institute - A $15 U.S. Minimum Wage: How the Fast-Food Industry Could Adjust Without Shedding Jobs from the Political Economy Research Institute, January 2015. - Reporting the Minimum Wage from The Cato Institute (U.S. libertarian organization opposed to the minimum wage) - The Economic Effects of Minimum Wages from Show-Me Institute (U.S. libertarian organization opposed to the minimum wage) - Economics in One Lesson: The Lesson Applied, Chapter 19: Minimum Wage Laws by Henry Hazlitt
https://en.wikipedia.org/wiki/Minimum_wage
4.0625
Making Measurements in Science
The standard system of measurement all scientists use is the metric system.

Measuring Mass
Mass is a measurement of the amount of matter in an object. Instrument: balance (triple beam, double pan, or electronic). Base unit: gram (g). Other common units: milligram (mg), kilogram (kg). Which is larger than a gram? Which is smaller?

Measuring Distance
Distance is a measurement of how far apart objects are from each other. Instrument: meter stick. Base unit: meter (m). Other common units: millimeter (mm), centimeter (cm), kilometer (km). Which of the units are larger than a meter? Which are smaller?

Measuring Liquid Volume
Volume is a measurement of the amount of space an object or matter occupies. Instrument: graduated cylinder or beaker. Which instrument will give you a more accurate measurement? Base unit: liter (L). Other common unit: milliliter (mL). Is this smaller or larger than a liter?

Reading the Meniscus
When measuring liquid volume, you must remember to read the meniscus properly. The meniscus is the curve of a liquid when placed in a cylinder. Always make your measurement at the middle of the meniscus. What is the measurement in the graduated cylinder?

Measuring Solid Volume
How do you measure the volume of an object that has a regular shape, such as a block of wood? Instrument: meter stick. Formula: Volume = l x w x h. Base unit: cubic meter (m^3). Other common unit: cubic centimeter (cm^3). Why is the unit cubed?

Measuring Solid Volume: Water Displacement Method
How do you measure the volume of an object that has an irregular shape, such as a rock? There are actually two methods, both involving water. Instrument: graduated cylinder or overflow can. Base unit: cubic meter (m^3). Other common unit: cubic centimeter (cm^3). Which do you think is more accurate?

Measuring Weight
Weight is a measurement of the force of gravity acting upon an object. Instrument: spring scale. Base unit: newton (N).

Measuring Temperature
Temperature is the measurement of the energy of the molecules in an object. The higher the temperature, the faster the molecules are moving. Instrument: thermometer. Base unit: degrees Celsius (°C). Other unit: kelvin (K).
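As a quick illustration of the volume measurements described above, here is a small Python sketch covering the l x w x h formula for regular solids and the water-displacement method for irregular ones. The sample measurements are invented, not taken from the slides; the 1 mL = 1 cm^3 equivalence used for the displacement method is standard.

```python
# Sketch of the two solid-volume methods described above, with invented
# sample measurements (not values from the slides).

def volume_rectangular(length_cm, width_cm, height_cm):
    """Regular solid: Volume = l x w x h, in cubic centimeters (cm^3)."""
    return length_cm * width_cm * height_cm

def volume_by_displacement(level_before_ml, level_after_ml):
    """Irregular solid: the rise of the water level in a graduated
    cylinder equals the object's volume (1 mL = 1 cm^3)."""
    return level_after_ml - level_before_ml

block_volume = volume_rectangular(4.0, 3.0, 2.0)     # e.g. a block of wood
rock_volume = volume_by_displacement(50.0, 62.5)     # e.g. a rock

print(f"Block volume: {block_volume:.1f} cm^3")
print(f"Rock volume:  {rock_volume:.1f} cm^3")

# A couple of the metric conversions the slides ask about:
grams = 1500.0
print(f"{grams:.0f} g = {grams / 1000:.1f} kg = {grams * 1000:.0f} mg")
```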
http://www.slideshare.net/mrzolli/making-measurements-in-science
4.1875
5 Written questions

5 Matching questions
Terms: base-isolated building, reverse fault, strike-slip fault
- a. rocks on either side of the fault slip past each other sideways, with little up or down motion
- b. stress force that pulls on the crust, stretching rock so that it becomes thinner in the middle
- c. number assigned by geologists based on the earthquake's size
- d. same as a reverse fault except the blocks move in the opposite direction
- e. building designed to reduce the amount of energy that reaches the building during an earthquake

5 Multiple choice questions
- the rock on a normal fault that lies below
- the shaking and trembling that results from the movement of rock beneath the Earth's surface
- a type of seismic wave that compresses and expands the ground
- a type of seismic wave that moves the ground up and down or side to side
- stress force that squeezes rock until it folds or breaks

5 True/False questions
- plateau → A type of seismic wave that compresses and expands the ground.
- stress → A type of seismic wave that moves the ground up and down or side to side.
- normal fault → same as a reverse fault except the blocks move in the opposite direction
- mercalli scale → developed to rate earthquakes according to the level of damage at a given place.
- SYNCLINE → a fold in rock that bends downward to form a valley
https://quizlet.com/6711830/test
4.25
Using Tangent Lines to Approximate Function Values

"Approximation" is what we do when we can't or don't want to find an exact value. We're going to approximate actual function values using tangent lines.

We pointed out earlier that if we zoom in far enough on a smooth function, it looks like a line. For example, take the function f(x) = x^2 and zoom in around x = 1. If we zoom in enough near x = 1, the function f looks like a line.

If we graph the function and its tangent line at 1, we'll see that as we zoom in around x = 1, the function f looks like its tangent line. If we zoom back out a little bit, the function doesn't look quite so much like a line. However, the function and its tangent line are still "close together." This means, for example, that the y-value on the tangent line at x = 1.1 is "close" to the y-value on the function f(x) = x^2 when x = 1.1.

We found earlier that the tangent line to f(x) = x^2 at 1 has the equation y = 2x – 1. If we don't feel like calculating the actual value f(1.1), we can instead plug 1.1 into the tangent line equation and see what comes out: 2(1.1) – 1 = 2.2 – 1 = 1.2. This is a good approximation of f(1.1). If we then go and calculate the exact value of the function, we find f(1.1) = 1.21. This means our approximation was only 0.01 off.

Why bother? Approximation is supposed to make life easier, so why should we go to all that work of finding the equation of a line and finding the y-value of the line when x = 1.1 instead of calculating f(1.1) and being done with it? In that example, we could calculate f(1.1) exactly, but we can't do that for every function. Try doing this with a function like e^x or ln(x). Without a calculator, evaluating those functions for most values of x will get pretty hairy.
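To see the same idea in code, here is a short Python sketch that builds the tangent line at a point and uses it to estimate nearby function values. It reuses the f(x) = x^2 example at x = 1 from the text; the derivative is estimated numerically with a central difference rather than taken from a formula, so treat this as an illustration rather than part of the original lesson.

```python
import math

# Tangent-line (linear) approximation: near x = a,
# f(x) is approximately f(a) + f'(a) * (x - a).

def tangent_line_estimate(f, a, x, h=1e-6):
    """Estimate f(x) using the tangent line to f at a.
    The slope f'(a) is approximated with a central difference."""
    slope = (f(a + h) - f(a - h)) / (2 * h)
    return f(a) + slope * (x - a)

def f(x):
    return x ** 2

approx = tangent_line_estimate(f, a=1.0, x=1.1)   # tangent line y = 2x - 1 gives 1.2
exact = f(1.1)                                    # 1.21
print(f"approximation: {approx:.4f}, exact: {exact:.4f}, error: {exact - approx:.4f}")

# The same trick for a function that is harder to evaluate by hand:
approx_ln = tangent_line_estimate(math.log, a=1.0, x=1.1)   # tangent at 1 is y = x - 1
print(f"ln(1.1) is roughly {approx_ln:.4f} (exact: {math.log(1.1):.4f})")
```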
http://www.shmoop.com/derivatives/tangent-line-approximating-functions.html
4
Twelve Minor Prophets

The Minor Prophets or Twelve Prophets (Aramaic: תרי עשר, Trei Asar, "The Twelve"), occasionally Book of the Twelve, is the last book of the Nevi'im, the second main division of the Jewish Tanakh. The collection is broken up to form twelve individual books in the Christian Old Testament, one for each of the prophets. The terms "minor prophets" and "twelve prophets" can also refer to the twelve traditional authors of these works. The term "Minor" relates to the length of each book (ranging from a single chapter to fourteen); even the longest is short compared to the three major prophets, Isaiah, Ezekiel and Jeremiah. It is not known when these short works were collected and transferred to a single scroll, but the first extra-biblical evidence we have for the Twelve as a collection is c. 190 BCE in the writings of Jesus ben Sirach, and evidence from the Dead Sea Scrolls suggests that the modern order was established by 150 BCE. It is believed that initially the first six were collected, and later the second six were added; the two groups seem to complement each other, with Hosea through Micah raising the question of iniquity, and Nahum through Malachi proposing resolutions. In the Hebrew Old Testament, these works were counted as one book.

The works are commonly studied together, and are consistently ordered in Jewish, Protestant and Catholic Bibles as:
- Hosea (Osee)
- Joel
- Amos
- Obadiah (Abdias)
- Jonah (Jonas)
- Micah (Micheas)
- Nahum
- Habakkuk (Habacuc)
- Zephaniah (Sophonias)
- Haggai (Aggeus)
- Zechariah (Zacharias)
- Malachi (Malachias)

Many, though not all, modern scholars agree that the editing process which produced the Book of the Twelve reached its final form in Jerusalem during the Achaemenid period (538–332 BCE), although there is disagreement over whether this was early or late. Scholars usually assume that there exists an original core of prophetic tradition behind each book which can be attributed to the figure after whom it is named. The noteworthy exception is the Book of Jonah, an anonymous work containing no prophetic oracles, probably composed in the Hellenistic period (332–167 BCE).

In general, each book includes three types of material:
- Autobiographical material in the first person, some of which may go back to the prophet in question;
- Biographical materials about the prophet in the third person – which incidentally demonstrate that the collection and editing of the books was completed by persons other than the prophets themselves;
- Oracles or speeches by the prophets, usually in poetic form, and drawing on a wide variety of genres, including covenant lawsuit, oracles against the nations, judgment oracles, messenger speeches, songs, hymns, narrative, lament, law, proverb, symbolic gesture, prayer, wisdom saying, and vision.

The comparison of different ancient manuscripts indicates that the order of the individual books was originally fluid. The arrangement found in current Bibles is roughly chronological. First come those prophets dated to the early Assyrian period: Hosea, Amos, Obadiah, Jonah, and Micah; Joel is undated, but it was possibly placed before Amos because parts of a verse near the end of Joel (3.16 [4.16 in Hebrew]) and one near the beginning of Amos (1.2) are identical. Both Joel and Amos (4.9 and 7.1–3) also contain a description of a plague of locusts. These are followed by prophets that are set in the later Assyrian period: Nahum, Habakkuk, and Zephaniah. Last come those set in the Persian period: Haggai, Zechariah, and Malachi.
However, it is important to note that chronology was not the only consideration: "It seems that an emphatic focus on Jerusalem and Judah was [also] a main concern." For example, Obadiah is generally understood as reflecting the destruction of Jerusalem in 586 BCE and would therefore fit later in a purely chronological sequence.

In the Roman Catholic Church, the twelve minor prophets are read in the Lectionary during the fourth and fifth weeks of November, which are the last two weeks of the liturgical year. They are collectively commemorated in the Calendar of saints of the Armenian Apostolic Church on July 31.
https://en.wikipedia.org/wiki/Twelve_Minor_Prophets
4.1875
Like their human hosts, bacteria need iron to survive and they must obtain that iron from the environment. While humans obtain iron primarily through the food they eat, bacteria have evolved complex and diverse mechanisms to allow them access to iron. A Syracuse University research team led by Robert Doyle, assistant professor of chemistry in The College of Arts and Sciences, discovered that some bacteria are equipped with a gene that enables them to harvest iron from their environment or human host in a unique and energy efficient manner. Doyle's discovery could provide researchers with new ways to target such diseases as tuberculosis. The research will be published in the August issue (volume 190, issue 16) of the prestigious Journal of Bacteriology, published by the American Society for Microbiology. "Iron is the single most important micronutrient bacteria need to survive," Doyle says. "Understanding how these bacteria thrive within us is a critical element of learning how to defeat them." Doyle's research group studied Streptomyces coelicolor, a Gram-positive bacteria that is closely related to the bacteria that causes tuberculosis. Streptomyces is abundant in soil and in decaying vegetation, but does not affect humans. The TB bacteria and Streptomyces are both part of a family of bacteria called Actinomycetes. These bacteria have a unique defense mechanism that enables them to produce chemicals to destroy their enemies. Some of these chemicals are used to make antibiotics and other drugs. Actinomycetes need lots of iron to wage chemical warfare on its enemies; however, iron is not easily accessible in the environments in which the bacteria live— e.g. human or soil. Some iron available in the soil is bonded to citrate, making a compound called iron-citrate. Citrate is a substance that cells can use as a source of energy. Doyle and his research team wondered if the compound iron-citrate could be a source of iron for the bacteria. In a series of experiments that took place over more than two years, the researchers observed that Streptomyces could ingest iron-citrate, metabolize the iron, and use the citrate as a free source of energy. Other experiments demonstrated that the bacteria ignored citrate when it was not bonded to iron; likewise, the bacteria ignored citrate when it was bonded to other metals, such as magnesium, nickel, and cobalt. The next task was to uncover the mechanism that triggered the bacteria to ingest iron-citrate. Computer modeling predicted that a single Streptomyces gene enabled the bacteria to identify and ingest iron-citrate. The researchers isolated the gene and added it to E. coli bacteria (which is not an Actinomycete bacteria). They found that the mutant E. coli bacteria could also ingest iron-citrate. Without the gene, E. coli could not gain access to the iron. "It's amazing that the bacteria could learn to extract iron from their environment in this way," Doyle says. "We went into these experiments with no idea that this mechanism existed. But then, bacteria have to be creative to survive in some very hostile environments; and they've had maybe 3.5 billion years to figure it out." The Streptomyces gene enables the bacteria to passively diffuse iron-citrate across the cell membrane, which means that the bacteria do not expend additional energy to ingest the iron. Once in the cell, the bacteria metabolize the iron and, as an added bonus, use the citrate as an energy source. 
Doyle's team is the first to identify this mechanism in a bacteria belonging to the Actinomycete family. The team plans further experiments to confirm that the gene performs the same signaling function in tuberculosis bacteria. If so, the mechanism could potentially be exploited in the fight against tuberculosis. "TB bacteria have access to an abundant supply of iron-citrate flowing through the lungs in the blood," Doyle says. "Finding a way to sneak iron from humans at no energy cost to the bacteria is as good as it gets. Our discovery may enable others to figure out a way to limit TB's access to iron-citrate, making the bacteria more vulnerable to drug treatment." Source: Syracuse University
http://phys.org/news/2008-07-scientists-bacteria-iron-human-hosts.html
4.3125
Students will learn about the conservation of angular momentum and how to apply it in both conceptual questions and problem-solving situations. The angular momentum of a spinning object can be found in two equivalent ways. Just like linear momentum, one way is to multiply the moment of inertia, the rotational analog of mass, by the angular velocity: L = Iω. The other way is simply to multiply the linear momentum by the radius: L = pr. Just as with linear momentum, the torque required to change the angular momentum L in a time t (τ = ΔL/Δt) is the rotational analog of the force required to change the linear momentum p in a time t (F = Δp/Δt). Torques produce a change in angular momentum with time. The same principles that hold for linear momentum conservation can be applied here with angular momentum conservation. The direction of L is given by the right-hand rule: wrap the fingers of your right hand in the direction the object is spinning, and your thumb points in the direction of the vector. - Angular momentum cannot change unless an outside torque is applied to the object. - Recall that angular momentum is a vector quantity; thus the direction a spinning object is pointing cannot change without an applied torque. Practice problems (a worked sketch of the star problem follows below): - You have two coins; one is a standard U.S. quarter, and the other is a coin of equal mass and size, but with a hole cut out of the center. - A star is rotating with a period of 10.0 days. It collapses with no loss in mass to a white dwarf with a radius of of its original radius. - What is its initial angular velocity? - What is its angular velocity after collapse? - A merry-go-round consists of a uniform solid disc of and a radius of . A single person stands on the edge when it is coasting at revolutions per sec. How fast would the device be rotating after the person has walked toward the center? (The moments of inertia of compound objects add.) - Answers: a. Coin with the hole; b. Coin with the hole
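A minimal Python sketch of the conservation-of-angular-momentum calculation behind the star-collapse exercise above. Because the numerical collapse factor did not survive in the text, the radius ratio used here (1/100) is an assumed, illustrative value, not the textbook's:

```python
import math

# With no external torque, angular momentum is conserved: I1 * w1 = I2 * w2.
# For a uniform sphere I = (2/5) M R^2, so the mass and the 2/5 factor cancel
# and w2 = w1 * (R1 / R2)^2.

period_initial_days = 10.0      # rotation period before collapse (from the exercise)
radius_ratio = 1.0 / 100.0      # final/initial radius (ASSUMED, illustrative only)

period_initial_s = period_initial_days * 24 * 3600
omega_initial = 2 * math.pi / period_initial_s      # initial angular velocity, rad/s

omega_final = omega_initial / radius_ratio ** 2     # conservation of angular momentum
period_final_s = 2 * math.pi / omega_final

print(f"initial angular velocity: {omega_initial:.2e} rad/s")
print(f"final angular velocity:   {omega_final:.2e} rad/s")
print(f"final rotation period:    {period_final_s:.1f} s")
```

The same two steps (compute the initial Iω, then solve for the new ω) handle the merry-go-round question as well, except that the moments of inertia of the disc and of the person must be added together before and after the person walks inward.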
http://www.ck12.org/physics/Angular-Momentum/lesson/Angular-Momentum/
4.3125
An ocean current is a continuous, directed movement of seawater generated by forces acting upon the water, such as breaking waves, wind, the Coriolis effect, cabbeling, and temperature and salinity differences, while tides are caused by the gravitational pull of the Sun and Moon. Depth contours, shoreline configurations, and interactions with other currents influence a current's direction and strength. Ocean currents flow for great distances, and together, create the global conveyor belt which plays a dominant role in determining the climate of many of the Earth’s regions. More specifically, ocean currents influence the temperature of the regions through which they travel. For example, warm currents traveling along more temperate coasts increase the temperature of the area by warming the sea breezes that blow over them. Perhaps the most striking example is the Gulf Stream, which makes northwest Europe much more temperate than any other region at the same latitude. Another example is Lima, Peru, where the climate is cooler (sub-tropical) than the tropical latitudes in which the area is located, due to the effect of the Humboldt Current. Surface oceanic currents are sometimes wind-driven and develop their typical clockwise spirals in the northern hemisphere and counterclockwise rotation in the southern hemisphere because of imposed wind stresses. In wind-driven currents, the Ekman spiral effect results in the currents flowing at an angle to the driving winds. The areas of surface ocean currents move somewhat with the seasons; this is most notable in equatorial currents. Ocean basins generally have a non-symmetric surface current, in that the eastern equatorward-flowing branch is broad and diffuse whereas the western poleward-flowing branch is very narrow. These western boundary currents (of which the Gulf Stream is an example) are a consequence of the rotation of the Earth. Deep ocean currents are driven by density and temperature gradients. Thermohaline circulation is also known as the ocean's conveyor belt (which refers to deep-ocean, density-driven ocean basin currents). These currents, called submarine rivers, flow under the surface of the ocean and are hidden from immediate detection. Where significant vertical movement of ocean currents is observed, this is known as upwelling and downwelling. Deep ocean currents are currently being researched using a fleet of underwater robots called Argo. The South Equatorial Currents of the Atlantic and Pacific straddle the equator. Though the Coriolis effect is weak near the equator (and absent at the equator), water moving in the currents on either side of the equator is deflected slightly poleward and replaced by deeper water. Thus, equatorial upwelling occurs in these westward-flowing equatorial surface currents. Upwelling is an important process because this water from within and below the pycnocline is often rich in the nutrients needed by marine organisms for growth. By contrast, generally poor conditions for growth prevail in most of the open tropical ocean because strong layering isolates deep, nutrient-rich water from the sunlit ocean surface. Surface currents make up only 8% of all water in the ocean, are generally restricted to the upper 400 m (1,300 ft) of ocean water, and are separated from lower regions by varying temperatures and salinity, which affect the density of the water, which in turn defines each oceanic region.
Because the movement of deep water in ocean basins is caused by density driven forces and gravity, deep waters sink into deep ocean basins at high latitudes where the temperatures are cold enough to cause the density to increase. Ocean currents are measured in sverdrup (sv), where 1 sv is equivalent to a volume flow rate of 1,000,000 m3 (35,000,000 cu ft) per second. Horizontal and vertical currents also exist below the pycnocline in the ocean's deeper waters. The movement of water due to differences in density as a function of water temperature and salinity is called thermohaline circulation. Ripple marks in sediments, scour lines, and the erosion of rocky outcrops on deep-ocean floors are evidence that relatively strong, localized bottom currents exist. Some of these currents may move as rapidly as 60 centimeters (24 inches) per second. These currents are strongly influenced by bottom topography, since dense, bottom water must forcefully flow around seafloor projections. Thus, they are sometimes called contour currents. Bottom currents generally move equator-ward at or near the western boundaries of ocean basins (below the western boundary surface currents). The deep-water masses are not capable of moving water at speeds comparable to that of wind-driven surface currents. Water in some of these currents may move only 1 to 2 meters per day. Even at that slow speed, the Coriolis effect modifies their pattern of flow. Downwelling of deep water in polar regions Antarctic Bottom Water is the most distinctive of the deep-water masses. It is characterized by a salinity of 34.65‰, a temperature of -0.5 °C (30 °F), and a density of 1.0279 grams per cubic centimeter. This water is noted for its extreme density (the densest in the world ocean), for the great amount of it produced near Antarctic coasts, and for its ability to migrate north along the seafloor. Most Antarctic Bottom Water forms near the Antarctic coast south of South America during winter. Salt is concentrated in pockets between crystals of pure water and then squeezed out of the freezing mass to form a frigid brine. Between 20 million and 50 million cubic meters of this brine form every second. The water's great density causes it to sink toward the continental shelf, where it mixes with nearly equal parts of water from the southern Antarctic Circumpolar Current. The mixture settles along the edge of Antarctica's continental shelf, descends along the slope, and spreads along the deep-sea bed, creeping north in slow sheets. Antarctic Bottom Water flows many times as slowly as the water in surface currents: in the Pacific it may take a thousand years to reach the equator. Antarctic Bottom Water also flows into the Atlantic Ocean basin, where it flows north at a faster rate than in the Pacific. Antarctic Bottom Water has been identified as high as 40° N on the Atlantic floor. A small amount of dense bottom water also forms in the northern polar ocean. Although, the topography of the Arctic Ocean basin prevents most of the bottom water from escaping, with the exception of deep channels formed in the submarine ridges between Scotland, Iceland, and Greenland. These channels allow the cold, dense water formed in the Arctic to flow into the North Atlantic to form North Atlantic Deep Water. North Atlantic Deep Water forms when the relatively warm and salty North Atlantic Ocean cools as cold winds from northern Canada sweep over it. Exposed to the chilled air, water at the latitude of Iceland releases heat, cools from 10 °C to 2 °C, and sinks. 
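To make the sverdrup unit mentioned above concrete, here is a small Python sketch that converts a transport figure into cubic metres per second; the 30 Sv input is a hypothetical, illustrative value rather than a number taken from this article:

```python
SV_IN_M3_PER_S = 1_000_000   # 1 sverdrup (Sv) = 10^6 cubic metres per second

def sv_to_m3_per_s(transport_sv: float) -> float:
    """Convert an ocean-current volume transport from sverdrups to m^3/s."""
    return transport_sv * SV_IN_M3_PER_S

example_sv = 30.0            # hypothetical transport for a boundary current
flow = sv_to_m3_per_s(example_sv)

print(f"{example_sv} Sv = {flow:.1e} m^3/s")
print(f"seconds to move one cubic kilometre: {1e9 / flow:.0f}")   # ~33 s at 30 Sv
```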
Gulf Stream water that sinks in the north is replaced by warm water flowing clockwise along the U.S. east coast in the North Atlantic gyre. Knowledge of surface ocean currents is essential in reducing costs of shipping, since traveling with them reduces fuel costs. In the wind powered sailing-ship era, knowledge was even more essential. A good example of this is the Agulhas Current, which long prevented Portuguese sailors from reaching India. In recent times, around-the-world sailing competitors make good use of surface currents to build and maintain speed. Ocean currents are also very important in the dispersal of many life forms. An example is the life-cycle of the European Eel. Ocean currents are important in the study of marine debris, and vice versa. These currents also affect temperatures throughout the world. For example, the ocean current that brings warm water up the north Atlantic to northwest Europe also cumulatively and slowly blocks ice from forming along the seashores, which would also block ships from entering and exiting inland waterways and seaports, hence ocean currents play a decisive role in influencing the climates of regions through which they flow. Cold ocean water currents flowing from polar and sub-polar regions bring in a lot of plankton that are crucial to the continued survival of several key sea creature species in marine ecosystems. Since plankton are the food of fish, abundant fish populations often live where these currents prevail. Ocean currents can also be used for marine power generation, with areas off of Japan, Florida and Hawaii being considered for test projects. OSCAR: Near-realtime global ocean surface current data set The OSCAR Near-realtime global ocean surface currents website from which users can create customized graphics and download the data. A section of the website provides validation studies in the form of graphics comparing OSCAR data with moored buoys and global drifters. OSCAR data is used extensively in climate studies. maps and descriptions or annotations of climatic anomalies have been published in the monthly Climate Diagnostic Bulletin since 2001 and are routinely used to monitor ENSO and to test weather prediction models. OSCAR currents are routinely used to evaluate the surface currents in Global Circulation Models (GCMs), for example in NCEP Global Ocean Data Assimilation System (GODAS) and European Centre for Medium-Range Weather Forecasts (ECMWF). - Deep ocean water - Thermohaline circulation - Fish migration - List of ocean circulation models - Oceanic gyres - Physical oceanography - Marine current power - Latitude of the Gulf Stream and the Gulf Stream north wall index - Hansen, B.; Østerhus, S; Quadfasel, D; Turrell, W (2004). "Already the day after tomorrow?". Science 305 (5686): 953–954. doi:10.1126/science.1100085. PMID 15310882. - Kerr, Richard A. (2004). "A slowing cog in the North Atlantic ocean's climate machine". Science 304 (5669): 371–372. doi:10.1126/science.304.5669.371a. PMID 15087513. - Munday, Phillip L.; Jones, Geoffrey P.; Pratchett, Morgan S.; Williams, Ashley J. (2008). "Climate change and the future for coral reef fishes". Fish and Fisheries 9 (3): 261–285. doi:10.1111/j.1467-2979.2008.00281.x. - Rahmstorf, S. (2003). "Thermohaline circulation: The current climate". Nature 421 (6924): 699–699. doi:10.1038/421699a. PMID 12610602. - Roemmich, D. (2007). "Physical oceanography: Super spin in the southern seas". Nature 449 (7158): 34–35. doi:10.1038/449034a. PMID 17805284. 
- NOAA Ocean Surface Current Analyses - Realtime (OSCAR) Near-realtime Global Ocean Surface Currents derived from satellite altimeter and scatterometer data. - RSMAS Ocean Surface Currents - Coastal Ocean Current Monitoring Program - Ocean Motion and Surface Currents - Data Visualizer from OceanMotion.org - Changes in Ocean Circulation - Cluster of Excellence "Future Ocean", Kiel
https://en.wikipedia.org/wiki/Current_(ocean)
4
Marfan syndrome is a genetic disorder that affects the body’s connective tissue. Connective tissue holds all the body’s cells, organs and tissue together. Marfan syndrome is a condition in which your body's connective tissue is abnormal. Connective tissue helps support all parts of your body. It also helps control how ... WebMD's guide to Marfan syndrome, an inherited disease that affects the heart. Marfan syndrome is a life-threatening genetic disorder, and an early, accurate diagnosis is essential, not only for people with Marfan syndrome, but also for those ... Marfan syndrome is an inherited disorder that affects connective tissue — the fibers that support and anchor your organs and other structures in your body. Read about Marfan syndrome, a hereditary condition affecting connective tissue. Read about Marfan syndrome facts, treatment, symptoms, prognosis, life expectacny ... Marfan Syndrome. October 2015. Questions and Answers about Marfan Syndrome. This publication answers general questions about Marfan syndrome. It describes the ... Marfan syndrome (also called Marfan's syndrome) is a genetic disorder of connective tissue. It has a variable clinical presentation, ranging from mild to severe ... Marfan syndrome is a disorder that affects the connective tissue in many parts of the body. Connective tissue provides strength and flexibility to ... This is an easy-to-read public information piece. Marfan syndrome is a disorder that affects connective tissue.
http://search.lycos.com/web/?q=marfan_syndrome
4
The Mexican-American War began on April 25, 1846 and ended February 2, 1848. In the U.S. the war is termed the Mexican–American War, also known as the Mexican .... In the sparsely settled interior of northern Mexico, the end of Spanish ... that this territory, which it... War's End ... The Treaty of Guadalupe Hidalgo ended the U.S.-Mexican War. .... the Native Americans in the ceded territories, who in fact were Mexican citizens, ... Find out more about the history of Mexican-American War, including videos, interesting articles, ... Did You Know? ... Santa Anna convinced Polk that, if allowed to return to Mexico, he would end the war on terms favorable to the ... on February 2, 1848, ended the Mexican-American War in favor of the United ... 1846, asked Congress to declare war on Mexico, which it did two days later. ... which brought an official end to the Mexican-American War (1846-1848) was ... Trist determined that Washington did not understand the situation in Mexico ... Between 1846 and 1848, a war fought between two North American nations, the United States and Mexico, did what most wars do- it began with a ... incur many battles, be fought mostly on land, and result in an end to the Texas/Mexico border Instead, Mexico's neighbor to the north had captured the country. How and why did the United States defeat Mexico in the Mexican-American War? To the victors The Mexican-American War From 1846-1848. ... William Huddle's 1886 depiction of the end of the. Texas Revolution shows Mexican General Santa Anna. Sep 11, 2015 ... Mexican-American War, also called Mexican War, Spanish Guerra de ... Ultimately, the House did not act on Lincoln's resolutions, and Polk ...
http://www.ask.com/web?q=When+Did+the+Mexican+American+War+End%3F&o=2603&l=dir&qsrc=3139&gc=1
4
Rhabdomyolysis is a condition that may occur when skeletal muscle tissue is damaged by injury (rhabdomyo = skeletal muscle + lysis = rapid breakdown). There are three types of muscle in the body, including: - skeletal muscles that move the body; - cardiac muscle located in the heart; and - smooth muscle that lines blood vessels, the gastrointestinal tract, bronchi in the lung, and the bladder and uterus. This type of muscle is not under conscious control. Rhabdomyolysis occurs when there is damage to the skeletal muscle. The injured muscle cell leaks myoglobin (a protein) into the blood stream. Myoglobin can be directly toxic to kidney cells, and it can impair and clog the filtration system of the kidney. Both mechanisms can lead to kidney failure (the major complication of rhabdomyolysis). Significant muscle injury can cause fluid and electrolyte shifts from the bloodstream into the damaged muscle cells, and in the other direction (from the damaged muscle cells into the bloodstream). As a result, dehydration may occur. Elevated levels of potassium in the bloodstream (hyperkalemia) may be associated with heart rhythm disturbances and sudden cardiac death due to ventricular tachycardia and ventricular fibrillation. Complications of rhabdomyolysis also include disseminated intravascular coagulation, a condition that occurs when small blood clots begin forming in the body's blood vessels. These clots consume all the clotting factors and platelets in the body, and bleeding begins to occur spontaneously. When muscles are damaged, especially due to a crush injury, swelling within the muscle can occur, causing compartment syndrome. If this occurs in an area where the muscle is bound by fascia (a tough fibrous tissue membrane), the pressure inside the muscle compartment can increase to the point at which blood supply to the muscle is compromised and muscle cells begin to die. Rhabdomyolysis was first appreciated as a significant complication from crush and blast injuries sustained in a volcano eruption in Italy in 1908. Victims of blast injuries during the First and Second World Wars helped researchers further understand the relationship between massive muscle damage and kidney failure.
http://www.emedicinehealth.com/rhabdomyolysis/article_em.htm
4
Eastern long-beaked echidna The eastern long-beaked echidna (Zaglossus bartoni), also known as Barton's long-beaked echidna, is one of three species from the genus Zaglossus to occur in New Guinea. It is found mainly in Papua New Guinea at elevations between 2,000 and 3,000 metres (6,600 and 9,800 ft). The eastern long-beaked echidna can be distinguished from other members of the genus by the number of claws on the fore and hind feet: it has five claws on its fore feet and four on its hind feet. Its weight varies from 5 to 10 kilograms (11 to 22 lb); its body length ranges from 60 to 100 centimetres (24 to 39 in); it has no tail. It has dense black fur. It rolls into a spiny ball for defense. All long-beaked echidnas were classified as a single species until 1998, when Flannery published an article identifying several new species and subspecies. Three species were then recognized based on various attributes such as body size, skull morphology, and the number of toes on the front and back feet. There are four recognized subspecies of Zaglossus bartoni: - Z. bartoni bartoni - Z. bartoni clunius - Z. bartoni smeenki - Z. bartoni diamondi The population of each subspecies is geographically isolated. The subspecies are distinguished primarily by differences in body size. Eastern long-beaked echidnas are mainly insect eaters, or insectivores. The long snout proves essential for the echidna's survival because of its ability to get in between hard-to-reach places and scavenge for smaller insect organisms such as larvae and ticks. Along with this snout, they have a specific evolutionary adaptation in their tongues for snatching up various earthworms, which are their main food source. Zaglossus bartoni habitats range from tropical hill forests to sub-alpine forests, upland grasslands and scrub. The species has been found in locations up to an elevation of around 4,150 m. Today it is rare to find Zaglossus bartoni at sea level. Ecology and Behavior Humans are the main factor in diminishing populations of eastern long-beaked echidnas. Locals in areas surrounding regions that these organisms inhabit often prey upon them for food. Feral dogs are known to occasionally consume this species. These mammals dig burrows, providing some protection from predation. The eastern long-beaked echidna is a member of the order Monotremata. Although monotremes have some of the same mammal features such as hair and mammary glands, they do not give birth to live young; they lay eggs. Like birds and reptiles, monotremes have a single opening, the cloaca. The cloaca allows for the passage of urine and feces, the transmission of sperm, and the laying of eggs. Little is actually known about the breeding behaviors of this animal, due to the difficulty of finding and tracking specimens. Unfortunately, the way the spines on the echidna lie makes it difficult to attach tracking devices, in addition to the difficulty in finding the animals themselves, as they are mainly nocturnal. - Groves, C.P. (2005). "Order Monotremata". In Wilson, D.E.; Reeder, D.M. Mammal Species of the World: A Taxonomic and Geographic Reference (3rd ed.). Johns Hopkins University Press. p. 1. ISBN 978-0-8018-8221-0. OCLC 62265494.
- Leary, T., Seri, L., Flannery, T., Wright, D., Hamilton, S., Helgen, K., Singadan, R., Menzies, J., Allison, A., James, R., Aplin, K., Salas, L. & Dickman, C. (2008). Zaglossus bartoni. In: IUCN 2008. IUCN Red List of Threatened Species. Retrieved 28 December 2008. Database entry includes justification for why this species is listed as critically endangered. - Flannery, T. F.; Groves, C. P. (Jan 1998). "A revision of the genus Zaglossus (Monotremata, Tachyglossidae), with description of new species and subspecies". 'Mammalia' 6 (3): 367–396. doi:10.1515/mamm.19188.8.131.527. - Wilson, Don E. "Zaglossus bartoni". Integrated Taxonomic Information System. Retrieved 25 October 2013. - "Zaglossus bartoni (Eastern Long-beaked Echidna)". The IUCN Red List of Threatened Species. 2014. Retrieved 29 July 2014. - "Monotreme". Columbia Electronic Encyclopedia, 6th Edition. EBSCOhost. 2013. ISBN 9780787650155. - Opiang, Muse (April 2009). "Home Ranges, Movement, and Den Use in Long-Beaked Echidnas, Zaglossus Bartoni, From Papua New Guinea". Journal of Mammalogy (American Society of Mammalogists) 9 (2): 340–346. doi:10.1644/08-MAMM-A-108.1. - Flannery, T.F. and Groves, C.P. 1998. A revision of the genus Zaglossus (Monotremata, Tachyglossidae), with description of new species and subspecies. Mammalia, 62(3): 367–396 |Wikispecies has information related to: Zaglossus| - EDGE of Existence (Zaglossus spp.) – Saving the World's most Evolutionarily Distinct and Globally Endangered (EDGE) species
https://en.wikipedia.org/wiki/Eastern_Long-beaked_Echidna
4.1875
West Nile Virus and West Nile Encephalitis (WNE) West Nile Virus Facts - West Nile virus is transmitted to humans by mosquito bites and may cause encephalitis (West Nile encephalitis or WNE) in a few patients. - West Nile virus usually occurs in birds but can be transmitted by a mosquito vector to humans. - Symptoms of West Nile viral infections may range from no symptoms to fever, chills, muscle aches, headaches, and sensitivity to light; severe infections may cause additional symptoms associated with meningitis, encephalitis, coma, seizures, and infrequently, death. - West Nile virus infections are diagnosed by the patient's physical exam and by immunological tests. - Treatment for West Nile virus infections is mainly supportive and is aimed at reducing symptoms; severe infections often require hospital treatment. - Risk factors for West Nile virus infections include exposure to infected mosquitoes, being age 50 and older, and having any medical problem that reduces the immune response. - In general, the prognosis of most West Nile viral infections is very good; however, severe infections have a more guarded prognosis because of potential neurological damage. - Currently, there is no vaccine available to prevent West Nile virus infections in humans; however, preventing mosquito bites by several methods (wearing long-sleeved shirts, long pants, using mosquito repellent, and eliminating areas that are good breeding grounds for mosquitoes) helps prevent infections. West Nile Virus Overview West Nile virus is a Flaviviridae virus transmitted to humans by mosquito bites. Virus symptoms range from none to severe: encephalitis (inflammation of the brain) or meningitis (inflammation of the lining of the brain and spinal cord). The disease the virus causes is termed West Nile encephalitis (WNE). WNE currently is endemic in Asia, Africa, and the Middle East. Since 1999, the disease has been detected in many states in the U.S. The disease is now considered to be endemic in the U.S.; by 2013, 39,567 individuals had been diagnosed with the disease. From 2013 to 2015, about 2,000 new West Nile infections per year were detected across 47 U.S. states. West Nile virus was discovered in 1937 in the West Nile district of Uganda. Although wild birds are the preferred hosts for the virus and are likely the hosts that spread the disease from country to country, West Nile virus can infect other mammals such as horses and dogs, for example. The virus is transferred from animals or birds to humans by mosquitoes. Since the virus was first detected in the United States in 1999, there has been an outbreak of West Nile virus in the U.S. every year (for example, outbreaks have occurred in California, Arizona, Illinois, Massachusetts, Oregon, Pennsylvania, Wisconsin, and Texas); the virus has been detected in 47 U.S. states and in Canada.
http://www.emedicinehealth.com/west_nile_virus/article_em.htm
4.25
As of right now, the Common Core Standards objective numbers are written for grades 3 and up. *With prompting and support, ask and answer questions about key details in a text. *Identify the front cover, back cover, and title page of a book. *Name the author and illustrator of a text and define the role of each in presenting the ideas or information in a text. *Actively engage in group reading activities with purpose and understanding. *Use a combination of drawing, dictating, and writing to compose opinion pieces in which they tell a reader the topic or the name of the book they are writing about and state an opinion or preference about the topic or book (e.g., My favorite book is . . .). *Confirm understanding of a text read aloud or information presented orally or through other media by asking and answering questions about key details and requesting clarification if something is not understood. *Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects). *Count to answer “how many?” questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1–20, count out that many objects. *Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.1 *Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Life cycle- the process of moving from one stage of life to another (egg-caterpillar (larva)-pupa-butterfly) Cocoon- a shell formed around a moth larva for protection during the pupal stage Chrysalis- a shell formed around a butterfly larva for protection during the pupal stage Metamorphosis- a change of physical form such as a caterpillar to a butterfly Egg- a protective hard shell from which a baby caterpillar hatches Larva-a caterpillar that hatches from an egg Pupa- the stage of a caterpillars life where it builds a protective covering (chrysalis/butterfly or cocoon/moth) around itself so that it can turn into an adult Butterfly- an insect that flies around in the daytime with brightly colored wings and a hollow tongue for sucking nectar from plants The teacher will: 1. Read the story, The Very Hungry Caterpillar by Eric Carle a. after reading (can be later on in the day or next day as day 2 of story) lead discussion about * which days had an even or an odd number of food, *which day did he eat the most food, the least food *if you added Monday and Tuesdays food together would it be greater than (more), less than (less) or equal to Wednesdays food? ***put different combinations together b. if you decide to do the food pyramid chart, discuss the categories and which foods are good choices and which foods are once in a while choices. Have children come up to put lamintated cut outs of the food the caterpillar ate in the proper spots 2. Discuss/review the vocabulary words before, during and after the story a. write words on board so children can notice spelling patterns and use vocab in writing 3. Use the poly vision board or poster to discuss and show pictures of the life cycle of a butterfly 4. Give the instructions and model how to complete the life cycle worksheet 5. Give the instructions and model how to complete the writing paper and illustration on which part of the story was the student's favorite part and why. 
The students will: 1. Listen to the teacher read The Very Hungry Caterpillar by Eric Carle 2a. Tell which day had an even or an odd number of food, which day had the most and the least amount of food and add two days together and answer whether those days food totals are greater than, less than or equal to another day (laminated cut outs of the food can be used to solve addition problems and for students who need to manipulate items to see <, > and =) 2b. Place laminated cut outs of the food that the caterpillar ate in the proper spots on the food pyramid chart. 3. Join in the discussion of the vocabulary words (ask questions if need be, repeat word, make prediction of what word means) 4. Use the poly vision board to see pictures of the life cycle of a butterfly. 5. Complete the life cycle worksheet 6. Complete the writing and illustration paper on which part of the story and why was their favorite. (I liked it when the caterpillar ......./ The best part was .....) 7. Share their writing with the class if they would like to. Why do you think Eric Carle chose to write a book about a caterpillar? Which day did the caterpillar eat the most food? (Saturday) Which day did the caterpillar eat the least amount of food? (Monday) Which amount of food is an even number? (2, 4) Which amount of food is an odd number? (1, 3, 5) Which foods that the caterpillar ate are good choices on the food pyramid? (apple, pear, plum, strawberries, oranges, cheese, pickle, sausage, salami, watermelon) Which foods that the caterpillar ate are choices that we should only have every once in awhile? (lollipop, cupcake/cake, pie) When we are making choices on what food we want to eat, why should you choose these choices (point to laminated "good choice foods") over these choices (point to laminated "once in a while foods")? How many parts are there in a butterfly life cycle? (4) What is it called when when a caterpillar moves from one stage of its life to another? (life cycle) What are the four parts of the butterfly life cycle? (egg, larva, pupa, butterfly) Which part comes first? (egg) Which part is last? (butterfly) What is the protective covering around the caterpillar called? (chrysalis) What is it called when a caterpillar turns into a butterfly? (metamorphosis) Which part of the story was your favorite part? Why do you like that part? One 30-40 minute session for reading, discussing life cycle and completing life cycle sheet. One 20-30 minute session for writing and illustrating about favorite part. One 20 minute session for math questions and/or enrichment chart. The Very Hungry Caterpillar book by Eric Carle. caterpillar puppet, a piece of fabric for the chrysalis, butterfly puppet and laminated pictures of what it eats poly vision board and pen dry erase markers (just in case poly vision board pen isn't working properly) Butterfly life cycle poster Life cycle worksheet and construction paper pencil, crayons, scissors, glue |W:||I will introduce the story of The Very Hungry Caterpillar by telling the children they will be learning about the life cycle of a butterfly. I will help them understand the new vocabulary words by showing them pictures of the life cycle of a butterfly and the words that go along with each stage. 
I will explain that they will be responsible for completing a life cycle paper that looks like the one we are completing as a class.| |H:||I will hold the students attention during reading the story by using a caterpillar puppet, cut outs of the food he eats, fabric to wrap him in a chrysalis and a butterfly puppet to represent the life cycle of a butterfly.| |E:||I will motivate the students to further understand the life cycle by showing pictures of different stages of a butterfly cycle and, after breaking into groups, having the random reporter tell us which stage the butterfly in the picture is in.| |R:||I will motivate the students to reflect, revisit, revise and rethink the vocabulary words by reviewing the stages and the vocabulary words first and then explaining the instructions on how to complete the life cycle worksheet by themselves at their desks.| |E:||I will evaluate the students understanding of the butterfly life cycle by giving them a worksheet that asks them to match the pictures of each stage with the vocabulary words.| |T:||To meet the needs of all of my students, I will use strategies such as Think-Pair-Share, random reporter and have the children partner up to find the answers to questions posed during reading. I will gear my level of questioning based on the developmental level of each child (ask easier questions to those who struggle, harder questions to those who need more challenge) and use manipulatives to find the answers if need be.| |O:||I will organize the objectives that the children have learned to further their prior knowledge for later concepts. As part of reading unit on trees and plants, I could have the children compare the life cycle of a butterfly to the life cycle of a tree or plant. That discussion could lead to a discussion on the food that trees and plants give us which could tie in with the food pyramid and making healthy eating choices.| The teacher will use the poly vision board to: 1. show a video of a butterfly life cycle 2. show pictures of each stage so the students can use the pen to label each stage 3. show pictures of different butterflies. 4. show a food pyramid that the students can interact with and decide in which section would each piece of food the caterpillar eats belong 1. After discussing the differences between a moth and a butterfly, the teacher will show pictures of each and the children will decide whether the picture is a moth or a butterfly. 2. After viewing different types of butterflies on the poly vision board, each child will pick the one they liked best, print the picture and attach it to their writing on why that butterfly is their favorite. 3. Create a class graph (which includes tally marks, numbers and words) to show which butterfly is the favorite. 4. Based on the food that the caterpillar eats in the story, create a food pyramid to show which food choices are healthy and which are not. (use laminated cut outs of food for children to place in proper spots on food pyramid) The Very Hungry Caterpillar by Eric Carle 1. Tell whether the food on any given day within the story is an even number or an odd number? Even yes/no Odd yes/no 2. Add two days of food from the story together? Yes/No 3. Tell which number of food is greater than, less than or equal to another number of food? Greater than Yes/No Less than Yes/No Equal to Yes/No 4. Put the pictures of a butterfly life cycle in sequential order? Yes/No 5. Match the words (egg, larva, pupa, butterfly) to the correct pictures in the butterfly life cycle? 6. 
Write and illustrate a sentence (or two) about the part of the story that they liked best? Yes/No ***Questions 1,2 and 3 can be assessed either one on one or using individual white boards or chalkboards in a whole group setting. ***Questions 4, 5 and 6 can be assessed with the student's work. Other resources I could use with this lesson are: 1. Eric Carle library (The Very Grouchy Ladybug, The Very Quiet Cricket, etc...) 2. Interview with Eric Carle on Reading Rockets Intervention site: http://www.readingrockets.org/books/interviews/carle/
http://www.pdesas.org/module/content/resources/19862/view.ashx
4.3125
On January 16, 1920 the United States embarked on one of its greatest social experiments—the effort to prohibit within its borders the manufacture and sale of alcoholic beverages. A year earlier, the 18th Amendment had been ratified by the states, setting the process in motion; the federal government had followed with enabling legislation, defining alcoholic drinks, establishing an enforcement procedure, and setting penalties for violators. The drive to prohibit the consumption of intoxicating beverages was not an American innovation. Most societies from antiquity shared a common desire to maintain stability and believed that drunkenness led too often to signs of alcoholism, impoverishment and the disintegration of families. Movements for temperance developed in many western countries, particularly in northern Europe. Public attitudes toward drinking were often much more accepting in the Mediterranean European countries. The First Reform Era in the pre-Civil War United States brought a host of social concerns to public attention. Beginning with an outburst of religious enthusiasm, the movement concentrated most notably on the abolition of slavery, but also on the punitive treatment of the mentally ill, the wretched conditions of prisoners and the growing toll taken by Demon Rum. By the 1830s, thousands of temperance societies, with hundreds of thousands of members, had been formed in the United States. Massachusetts, in 1838, crafted a law requiring the purchase of hard liquor to be made in large quantities; this measure was designed to make it more difficult for the laboring class to afford strong drink. A more far-reaching law was enacted by Maine in 1846, becoming the first to opt for statewide prohibition. Other towns and localities voted to become “dry," as did a dozen other states. In succeeding years, most of those laws were either voided by court action or repealed. The stresses and privations of the Civil War later wiped out most of the few remaining gains made by the temperance movement. Following the war, relaxed standards of behavior and the growth of the liquor industry brought a massive increase in drunkenness and revived the social reformers. The political parties were timid; both the Republican and Democratic parties declined to nail prohibition planks onto their platforms. This omission provoked the inception of the Prohibition Party in 1869. That organization, the Woman`s Christian Temperance Union (1874) and lesser-known groups turned prohibition into a political issue. A sharpening of differences in American society gave added momentum to alcohol reform efforts. By the 1890s, a wide gulf separated urban and rural dwellers, as evidenced in differing positions on many economic issues of the day. Rural elements in the West and South viewed the rapidly expanding cities with alarm. The urban centers were the home of easily available alcohol and host of other vices. Immigration of this era was largely from southern and eastern Europe where prohibition movements had made little headway. Further, many of the recently arrived city dwellers were Roman Catholics, making them all the more suspect in the eyes of old line Christian evangelicals. Suspicion of city life reached its height during the era of the muckrakers, whose writings detailed the corruption and depravity of urban America. New organizations, like the Anti-Saloon League (1893), began on the local level to induce towns, cities, and counties to go dry. 
In 1913, they launched a national drive for a constitutional amendment prohibiting the manufacture and sale of alcoholic beverages. This effort, however, failed to garner the necessary support in the House of Representatives. Despite that national failure, state legislatures came increasingly under the control of prohibition supporters. During World War I, prohibition advocates buttressed their cause through the Food and Fuel Control Act (1917), which contained a section prohibiting manufacture of distilled liquor, beer, and wine. Support was given to this measure by non-prohibitionists who were convinced that grain production should be devoted to food, not drink, during wartime. Moreover, the 1917 Reed Amendment to the Webb-Kenyon Act made it unlawful to use the mails to send liquor advertisements to persons in dry territory. In December 1917, Congress began the Constitutional amendment process by passing a resolution that would make the entire country dry. Many states did not wait for ratification and 31 adopted statewide laws supporting prohibition. In the end, however, prohibition was a manifest failure. Bootlegging, defined as the unlawful manufacture, sale, and transportation of alcoholic beverages without registration or payment of taxes, became widespread and a staple of organized crime. Home stills sprouted up both in isolated places and the bathtubs of posh homes. Illegal drinking establishments, dubbed "speakeasies," sprang up in many parts of the country, especially large cities. Concealment of alcohol on one`s person became an artform. Methods from hollow canes to hollow books were used. Enforcement of prohibition was an extremely difficult, costly, and often violent proposition for law enforcement from the local to federal level. In 1932, the Republican and Democratic party platforms called for repeal of prohibition, subject to the will of the people. The Congress passed a resolution proposing repeal in 1933, and it was promptly ratified by three-fourths of the states before year’s end. The 21st Amendment remains as the only amendment repealing a previously adopted one. ---- Selected Quotes ---- Quotes regarding Prohibition. By Al Capone I make my money by supplying a public demand. If I break the law, my customers, who number hundreds of the best people in Chicago, are as guilty as I am. Everybody calls me a racketeer. I call myself a businessman. Comment in 1925 By Eleanor Roosevelt Little by little it dawned upon me that this law was not making people drink any less, but it was making hypocrites and law breakers of a great number of people. Her newspaper column, "My Day," July 14, 1939 By Rutherford B. Hayes Personally I do not resort to force — not even the force of law — to advance moral reforms. I prefer education, argument, persuasion, and above all the influence of example — of fashion. Until these resources are exhausted I would not think of force. Regarding a suggested Prohibition amendment, in his diary, 1883 - - - Books You May Like Include: ---- Baltimore Beer A Satisfying History of Charm City Brewing by Rob Kasper. Dark Tide: The Great Boston Molasses Flood of 1919 by Stephen Puleo. Hoosier Beer Tapping into Indiana Brewing Histor by Bob Ostrander and Derrick Morris. Last Call: The Rise and Fall of Prohibition by Daniel Okrent. Lost German Chicago by Joseph C. Heinen, Susan Barton Heinen. Only Yesterday by Frederick L. Allen. The Jazz Age: The 20s by Time-Life Books.
http://www.u-s-history.com/pages/h1085.html
4
James Bradley was an English astronomer most famous for his discovery of the aberration of starlight. The finding was an important piece of evidence supporting Copernicus's theory that the Earth moved around the sun, and also provided an alternative way to estimate the velocity of light. Born in Sherborne, England, Bradley was the nephew of clergyman and amateur astronomer James Pound. His uncle trained him in astronomy from an early age and Bradley formally studied at Oxford University, from which he received a Bachelors degree in 1714, and a Masters in 1717. Fearing an inability to support himself financially as an astronomer, Bradley became a member of the clergy and was given a living at Bridstow. However, due to his scientific efforts and friendship with Edmund Halley, Bradley was elected a fellow of the Royal Society in 1718. An offer of professorship at Oxford followed in 1721, and twenty-eight year old Bradley quickly gave up his living at Bridstow in order to teach astronomy at the prestigious school. A driving force in Bradley's career was his desire to measure the parallax of the stars, an apparent change in their positions that mirrored the change in the Earth's position in its orbit around the sun. Utilizing the observatory of his friend Samuel Molyneux, Bradley systematically studied the star Gamma Draconis and, though he did not successfully observe parallax, he made an important discovery while attempting to do so. Bradley found that Gamma Draconis did indeed shift in its location, but in the opposite direction from what was expected. He then deduced that the observed stellar variation in position was brought about by the aberration of light, a result of the finite speed of light and the forward movement of the Earth in its orbit. Bradley announced his discovery to the Royal Society in 1728. The aberration of stellar light was of particular interest to the organization's members because it provided some proof for the extremely controversial heliocentric theory. The findings were also significant in that they provided another technique for calculating the speed of light. By analyzing measurements of stellar aberration angle and applying that data to the orbital speed of the Earth, Bradley was able to arrive at the remarkably accurate estimate of 183,000 miles (295,000 kilometers) per second. Another important scientific contribution made by Bradley was the discovery of the nutation, or oscillation, of the Earth's axis. Bradley first noticed the fluctuation when he was carrying out his studies on parallax at Molyneux's observatory. However, since he believed that nutation was caused by the moon's gravitational pull, he decided to observe a full cycle of the motion of the moon's nodes, approximately 18.6 years, before announcing any findings. Completing his research in 1747, Bradley's discovery was finally made public in 1748 and was honored with the Copley Medal of the Royal Society that same year. When Edmund Halley died in 1742, Bradley was named his successor as Astronomer Royal at Greenwich Observatory. He held the influential position for the rest of his life, greatly improving upon the condition of the observatory and the instruments it contained. Bradley also continued to study the stars and composed extremely accurate star charts, though the bulk of his observations would be published posthumously. 
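As a rough illustration of the aberration argument described above, the following Python sketch recovers a speed-of-light estimate from the aberration angle and the Earth's orbital speed. The input values (about 20.5 arcseconds and about 29.8 km/s) are modern figures assumed here for illustration; Bradley's own inputs differed slightly, which is why his published estimate of roughly 183,000 miles per second is a little lower than what this calculation returns.

```python
import math

# Aberration of starlight: the apparent displacement angle alpha satisfies
# tan(alpha) ~= v_earth / c, so the speed of light is c ~= v_earth / tan(alpha).

aberration_arcsec = 20.5    # assumed aberration constant, arcseconds (modern value)
v_earth_km_s = 29.8         # assumed mean orbital speed of the Earth, km/s (modern value)

alpha_rad = math.radians(aberration_arcsec / 3600.0)
c_km_s = v_earth_km_s / math.tan(alpha_rad)

print(f"estimated speed of light: {c_km_s:,.0f} km/s")           # roughly 300,000 km/s
print(f"                          {c_km_s / 1.609344:,.0f} mi/s")
```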
He died on July 13, 1762, never realizing his hope of detecting the parallactic motion of the stars, but profoundly affecting the field of astronomy in his attempts to observe the elusive phenomenon.
http://micro.magnet.fsu.edu/optics/timeline/people/bradley.html
4.1875
Timeline of the American Civil Rights Movement The Civil Rights Movement in the United States began to gain prominence in the late 1940s. In 1948 President Truman signed the Executive Order 9981, which declared there would be equal treatment and opportunity for all persons regardless of race or color in the armed services. This was the first step in creating a nation filled with equality. Throughout the passing years, there were many events that were milestones in the Civil rights movement. Below are some of the most well known events that helped shaped history. 1954 – Brown vs. Board of Education - Summary of Brown Vs. the Board of Education - This event is one of the most significant trials in US history. - Segregation of White and Black Children - This supreme court case ended segregation in the classroom - Brown Vs. the Board of Education Historic Site - Learn about where the injustice behind this court case took place. - Archive of Brown Vs. the Board of Education - Take a walk through history with information on the court case, oral arguments both for an against Brown Vs. the Board of Education, and an image gallery that focuses on the Civil rights movement. 1955 – Montgomery Bus Boycott - Story of the Montgomery Bus Boycott - Learn the historic story of a town full of civilians who banded together to make a stand in the Civil rights movement. - Montgomery Bus Boycott - Articles, historical timelines and biographies of important people who made the Montgomery Bus Boycott a critical piece of US history. - Montgomery Alabama and the Bus Boycott - Learn about Alabama's shining moments in the Civil rights movement, as well as in American history. - Rosa Parks - One of the most famous people to come out of the Civil rights movement, Rosa Parks was a key factor in the Montgomery Bus Boycott. - Martin Luther King Jr. - The face of the Civil rights movement, Martin Luther King Jr. helped to lead the Montgomery Bus Boycott. 1957 – Desegregation at Little Rock - Segregation Showdown at Little Rock - Follow the archives through the breakdown of segregation in Little Rock, Arkansas. - Little Rock Nine - In Little Rock, Arkansas students attempted to attend an all white high school. Read the documentation of what happened following this event. - Stand Up for Your Rights - Read the story of what happened to the 9 students who attempted to attend a high school that was still racially segregated. - Little Rock Central High School - The protest of black students entering this Arkansas school got so bad, President Eisenhower was forced to send in federal protection. 1960 – Sit-in Campaign - Sit-in Campaign - The basis of sit-in campaigns resulted from students "sitting" at lunch counters until they were acknowledged and served food. - Nashville, TN Sit-in Campaigns - African Americans would sit and wait at the lunch counters in a very polite, non-violent manner. If police arrested them for not leaving, a new group of African Americans would take their place. 1961 – Freedom Rides - Civil Rights Movements and Freedom Rides - Learn how American's tested the commitment to Civil rights through this unique strategy. - Freedom Riders - The Congress on Racial Equality organized these techniques by placing black and white volunteers next to each other on buses and other forms of public transportation. - Freedom Rides - See how the freedom riders played a part in the Civil rights movement timeline. 
1962 – Mississippi Riot - Mississippi Riot - Learn how the state of Mississippi rallied against a federal court's decision to allow one black man to attend an all white school. - James H. Meredith - This man was a crucial figure in the American Civil rights movement. By having a federal court approve his case to attend an all white school in Mississippi, riots broke out and in turn paved the way for equality in the US. - University of Mississippi Riot - Learn about the violence and death that ensued from the protest of a black man attending a white school. 1963 – Birmingham - Birmingham, Alabama – In one of the most turbulent cities during the Civil rights movement, this organization explores all of the different activities that made this city a hub of change during this time period. - Birmingham Demonstrations - Read about the efforts Martin Luther King Jr. and citizens hoping for change took to ensure equality for all. - Birmingham Civil Rights District - A historical look at all of the events that took place in Birmingham during the Civil rights movement. 1963 – March on Washington - March on Washington - With an estimated 250,000 people in attendance, this was truly a landmark event for the Civil rights movement. - March on Washington for Jobs and Freedom - Both black and white people gathered together to witness Martin Luther King Jr. give his historical "I Have a Dream" speech. - "I Have a Dream" - Read the words, written and spoke by Martin Luther King Jr., which united a nation. 1964 – Freedom Summer - In the summer of l964, forty-one Freedom Schools opened in the churches, on the back porches, and under the trees of Mississippi. - Mississippi Freedom Summer (Summer Project) Events 1965 – Selma - Selma Marches - What was to be a peaceful march turned into a violent display of hate against the Civil Rights movement. - Bloody Sunday - The demonstration march from Selma to Montgomery was nicknamed "Bloody Sunday" due to the brutality and violence troops used against the peaceful demonstrators. - March 7th Selma, Alabama - Over 600 people partook in the March from Selma, Alabama. Photographs from Civil Rights Movements - The March on Washington - A collection of photographs from that monumental day in 1963. - Civil Rights Movement in Florida - Images from buses, stores and theatres that demonstrate the progress being made in the Civil rights movement. - Powerful Days in Black and White - Kodak shows the struggles during the Civil rights movement in these photographs. - Black and White Photos - A wonderful collection of black and white photos from the Civil rights movement. The Civil rights movement is a timeline of events that shaped American history and the world we live in today.
https://www.gettysburgflag.com/timeline-american-civil-rights
4
Indigenous rights are those rights that exist in recognition of the specific condition of the indigenous peoples. This includes not only the most basic human rights of physical survival and integrity, but also the preservation of their land, language, religion, and other elements of cultural heritage that are a part of their existence as a people. This can be used as an expression for advocacy of social organizations or form a part of the national law in establishing the relation between a government and the right of self-determination among the indigenous people living within its borders, or in international law as a protection against violation by actions of governments or groups of private interests. Definition and historical background Indigenous rights belong to those who, being indigenous peoples, are defined as the original people of a land that has been conquered and colonized by outsiders. Exactly who is a part of the indigenous peoples is disputed, but can broadly be understood in relation to colonialism. When we speak of indigenous peoples we speak of those pre-colonial societies that face a specific threat from this phenomenon of occupation, and the relation that these societies have with the colonial powers. The exact definition of who the indigenous people are, and the consequent set of rightsholders, varies. A definition that is too inclusive is considered as problematic as one that is too narrow. In the context of the modern indigenous people of European colonial powers, the recognition of indigenous rights can be traced to at least the period of the Renaissance. Along with the justification of colonialism with a higher purpose for both the colonists and colonized, some voices expressed concern over the way indigenous peoples were treated and the effect it had on their societies. The issue of indigenous rights is also associated with other levels of human struggle. Due to the close relationship between indigenous peoples' cultural and economic situations and their environmental settings, indigenous rights issues are linked with concerns over environmental change and sustainable development. According to scientists and organizations like the Rainforest Foundation, the struggle of indigenous peoples is essential to reducing carbon emissions and to addressing the threats to both cultural and biological diversity in general. The rights, claims and even identity of indigenous peoples are apprehended, acknowledged and observed quite differently from government to government. Various organizations exist with charters to in one way or another promote (or at least acknowledge) indigenous aspirations, and indigenous societies have often banded together to form bodies which jointly seek to further their communal interests. There are several non-governmental civil society movements, networks, indigenous and non-indigenous organizations, such as the International Indian Treaty Council, Indigenous World Association, the International Land Coalition, ECOTERRA Intl., Indigenous Environmental Network, Earth Peoples, Global Forest Coalition, Amnesty International, Indigenous Peoples Council on Biocolonialism, Friends of Peoples Close to Nature, Indigenous Peoples Issues and Resources, Minority Rights Group International, Survival International and Cultural Survival, whose founding missions include the protection of indigenous rights, including land rights. 
These organizations, networks and groups underline that the problems indigenous peoples are facing are the lack of recognition that they are entitled to live the way they choose, and the lack of the right to their lands and territories. Their mission is to protect the rights of indigenous peoples without states imposing their ideas of "development". These groups say that each indigenous culture is distinct, rich in religious belief systems, ways of life, subsistence practices and arts, and that the root of the problem is interference with their way of living through states' disrespect for their rights, as well as the invasion of traditional lands by multinational corporations and small businesses for the exploitation of natural resources. Indigenous peoples and their interests are represented in the United Nations primarily through the mechanisms of the Working Group on Indigenous Populations (WGIP). In April 2000 the United Nations Commission on Human Rights adopted a resolution to establish the United Nations Permanent Forum on Indigenous Issues (PFII) as an advisory body to the Economic and Social Council with a mandate to review indigenous issues. In late December 2004, the United Nations General Assembly proclaimed 2005–2014 to be the Second International Decade of the World's Indigenous People. The main goal of the new decade will be to strengthen international cooperation around resolving the problems faced by indigenous peoples in areas such as culture, education, health, human rights, the environment, and social and economic development. In September 2007, after a process of preparations, discussions and negotiations stretching back to 1982, the General Assembly adopted the Declaration on the Rights of Indigenous Peoples. The non-binding declaration outlines the individual and collective rights of indigenous peoples, as well as their rights to identity, culture, language, employment, health, education and other issues. Four nations with significant indigenous populations voted against the declaration: the United States, Canada, New Zealand and Australia. All four have since changed their vote in favour. Eleven nations abstained: Azerbaijan, Bangladesh, Bhutan, Burundi, Colombia, Georgia, Kenya, Nigeria, Russia, Samoa and Ukraine. Thirty-four nations did not vote, while the remaining 143 nations voted for it. ILO 169 is a convention of the International Labour Organisation. Once ratified by a state, it is meant to work as a law protecting tribal people's rights. There are twenty-two 
countries that have ratified Convention 169 since its adoption in 1989: Argentina, Bolivia, Brazil, Central African Republic, Chile, Colombia, Costa Rica, Denmark, Dominica, Ecuador, Fiji, Guatemala, Honduras, México, Nepal, Netherlands, Nicaragua, Norway, Paraguay, Peru, Spain and Venezuela. The law recognizes land ownership; equality and freedom; and autonomy for decisions affecting indigenous peoples. Organization of American States Since 1997, the nations of the Organization of American States have been discussing draft versions of a proposed American Declaration on the Rights of Indigenous Peoples. - Lindholt, Lone (2005). Human Rights in Development Yearbook 2003: Human Rights and Local/living Law. Martinus Nijhoff Publishers. ISBN 90-04-13876-5. - Gray, Andrew (2003). Indigenous Rights and Development: Self-Determination in an Amazonian Community. Berghahn Books. ISBN 1-57181-837-5. - Keal, Paul (2003). European Conquest and the Rights of Indigenous Peoples: The Moral Backwardness of International Society. Cambridge University Press. ISBN 0-521-82471-0. - Kuppe, Rene (2005). Law & Anthropology: "Indigenous Peoples, Constitutional States And Treaties Of Other Constructive Arrangements Between Indigenous Peoples And States". Brill Academic Publishers. ISBN 90-04-14244-4. - Anaya, S. James (2004). Indigenous Peoples in International Law. Oxford University Press. ISBN 0-19-517350-3. - Stevens, Stanley (1997). Conservation through cultural survival: indigenous peoples and protected areas. Island Press. ISBN 1-55963-449-9. - United Nations, State of The World's Indigenous Peoples – UNPFII report, First Issue, 2009 - Earth Peoples - Survival International website – About Us - International Indian Treaty Council website - UNPO – ILO 169: 20 years later - Survival International – ILO 169 - Jones, Peris: When the lights go out. Struggles over hydroelectric power and indigenous rights in Nepal, NIBR International Blog 11.03.10 - Website of the Proposed American Declaration on the Rights of Indigenous Peoples - The Rights of Indigenous Peoples: Study Guide – University of Minnesota - Researching Indigenous People's Rights Under International Law – Steven C. Perkins - Indigenous Rights – International Encyclopedia of the Social Sciences, 2nd Edition - United Nations Declaration on the Rights of Indigenous Peoples - ILO Convention 169 (full text) - Current international law on indigenous peoples - State of The World's Indigenous Peoples – UN report, First Issue, 2009 - "Genocide", Norman Lewis, February 1969 - Article that led to the foundation of several prominent indigenous rights organizations
https://en.wikipedia.org/wiki/Indigenous_rights
4.125
Astronomers have long known that light bouncing off man-made reflectors on the lunar surface is fainter than expected, and mysteriously dims even more whenever the moon is full. Now they think moon dust and solar heating may be the dirty culprit, according to a new report. The evidence is right here on Earth, researchers said. Only a fraction of the light a team beamed at the moon from a telescope in New Mexico bounces off old reflectors on the lunar surface and returns to the observatory. "Near full moon, the strength of the returning light decreases by a factor of ten," said Tom Murphy, an associate professor of physics at the University of California, San Diego, and the study's lead author. "Something happens on the surface of the moon to destroy the performance of the reflectors at full moon." Measuring the moon Murphy leads an effort to precisely measure the distance from Earth to the moon by timing the pulses of laser light that reflect off targets left on the lunar surface 40 years ago by Apollo astronauts. Earth's atmosphere scatters the outgoing beam, spreading it over a distance of approximately 1.24 miles (2 km) on the surface of the moon. The scientists aim the light at polished blocks of glass called corner cube prisms, each of which is about 1 1/2 inches (3.8 cm) in diameter. Most of the laser light misses its target, which is roughly equivalent to the size of a suitcase. Furthermore, the reflectors also diffract returning light so that it spreads over 9.3 miles (15 km) when it reaches Earth again. So the researchers have always expected to recapture only a small portion of the reflected photons, or particles of light, that actually bounce back. On average, their instruments detect just one-tenth of the light they expect to get back, and when the moon is full, the results are oddly ten times worse. Moon dust and heat Murphy believes that the cubes are heating unevenly at full moon, and that the discrepancy is likely caused by dust. "Dust is dark," Murphy said. "It absorbs solar light and would warm the cube prism on the front face." Ideally, for optimum performance, the entire cube must be the same temperature. "It doesn't take much, just a few degrees, to significantly affect performance," Murphy said. NASA engineers went to great lengths to minimize temperature differences across the prisms, which rest in arrays tilted toward Earth. Individual prisms sit in recessed pockets so that they are shielded from direct light when the sun is low on the moon's horizon. But, when the full face of the moon appears illuminated from Earth, the sun is directly above the arrays. "At full moon, the sun is coming straight down the pipe into these recessed pockets," Murphy explained. The reflective properties of the prisms, which are clear glass, derive from the shape of their polished facets. Uneven heating of the prisms, which could occur with absorption by a dust coating, would bend the shape of the light pulses they reflect, interfering with the accuracy of measurements. Light travels faster through warmer glass, and although all paths through the cube prisms are the same length, photons that strike the edge of the reflector will stay near the surface. Meanwhile, those that strike the center will pass deeper into the cube before hitting a reflective surface. If the surface of the prism is warmer than the deeper parts, light that strikes the edges of the prism will re-emerge sooner than light that strikes the center, distorting the shape of the reflected laser pulses. 
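The beam-spreading figures quoted above are enough, on their own, to explain why only a trickle of photons ever makes it back, even before dust or heating enters the picture. The following rough sketch (Python) estimates the purely geometric round-trip fraction; the reflector-array area and the telescope aperture are illustrative assumptions, since the article does not give those values.

```python
import math

# Figures quoted in the article
beam_diameter_on_moon_m = 2_000       # outgoing beam spread on the lunar surface (~2 km)
return_diameter_on_earth_m = 15_000   # returning beam spread back at Earth (~15 km)

# Illustrative assumptions (not stated in the article):
reflector_array_area_m2 = 0.1         # a suitcase-sized corner-cube array
telescope_diameter_m = 3.5            # aperture of the receiving telescope

def disc_area(diameter_m: float) -> float:
    """Area of a circular footprint with the given diameter."""
    return math.pi * (diameter_m / 2.0) ** 2

# Fraction of outgoing photons that land on the reflector array,
# then fraction of reflected photons that land back on the telescope.
hit_reflector = reflector_array_area_m2 / disc_area(beam_diameter_on_moon_m)
hit_telescope = disc_area(telescope_diameter_m) / disc_area(return_diameter_on_earth_m)

round_trip = hit_reflector * hit_telescope
print(f"fraction hitting reflector: {hit_reflector:.1e}")
print(f"fraction hitting telescope: {hit_telescope:.1e}")
print(f"geometric round-trip fraction: {round_trip:.1e}")   # on the order of 1e-15
```

Even this idealized estimate, which ignores atmospheric and optical losses, comes out around one part in a thousand trillion, which is why an additional factor-of-ten loss at full moon stands out so clearly.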
Lunar dust dilemma But finding the source of the problematic dust could be more difficult, Murphy said. The moon has no atmosphere and no wind, but electrostatic forces can move dust around. A constant rain of micrometeorites might also puff dust onto the moon's surface. Larger impacts that eject material from the surface across a greater distance could also contribute to the buildup. Murphy recently returned from a trip to Italy, where a chamber built to simulate lunar conditions may help sort through the possible explanations. "We think we have a thermal problem at full moon, plus optical loss at all phases of the moon," Murphy said. Accumulated dust on the front surface of the reflectors could account for both observations. If sunlight-heated dust is really to blame, the researchers should notice the effect vanish during a lunar eclipse. In other words, light should bounce back while the moon passes through Earth's shadow, then dim again as sunlight hits the arrays once more. "Measurements during an eclipse (there are just a few) look fine," Murphy said. "When you remove the solar flux, the reflectors recover quickly, on a time scale of about half an hour." The researchers' findings will be published in an upcoming issue of the journal Icarus. Previously, the McDonald Observatory, a research unit of The University of Texas at Austin located in the Davis Mountains of West Texas, ran similar experiments at full moon between 1973 and 1976. But, between 1979 and 1984, they had "a bite taken out of their data" during full moons, Murphy said. "Ours is deeper." This could signify that the problem may be getting worse. So far, bad weather has prevented the project from operating during a lunar eclipse. The next opportunity for the researchers will be on the night of Dec. 21, 2010.
http://www.space.com/8270-mystery-faint-moonlight-finally-solved.html
4.15625
Cross slope or camber is a geometric feature of pavement surfaces: the transverse slope with respect to the horizon. It is a very important safety factor. Cross slope provides a drainage gradient so that water will run off the surface to a drainage system such as a street gutter or ditch. Inadequate cross slope will contribute to aquaplaning. On straight sections of normal two-lane roads, the pavement cross section is usually highest in the center and drains to both sides. In horizontal curves, the cross slope is banked into superelevation to reduce steering effort and lateral force required to go around the curve. All water drains to the inside of the curve. If the cross slope magnitude oscillates within 1–25 metres (3–82 ft), the body and payload of high (heavy) vehicles will experience high roll vibration. Cross slope is usually expressed as a percentage: cross slope = (vertical rise / horizontal run) × 100%. Cross slope is the angle in the vertical plane from a horizontal line to a line on the surface, measured perpendicular to the center line. Typical values range from 2 percent for straight segments to 10 percent for sharp superelevated curves. It may also be expressed as a fraction of an inch in rise over a one-foot run (e.g. 1/4 inch per foot).
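As a quick numerical illustration of the percentage form of the definition, the short Python sketch below converts a rise-over-run measurement into a cross slope percentage. The helper function and the sample values are only a sketch and are not drawn from any roadway design standard.

```python
def cross_slope_percent(rise: float, run: float) -> float:
    """Cross slope expressed as a percentage of the horizontal run."""
    return rise / run * 100.0

# 1/4 inch of rise over a one-foot (12 inch) run, as in the example above
print(f"{cross_slope_percent(0.25, 12.0):.2f} %")   # ~2.08 %, typical for straight segments

# A sharply superelevated curve banked at roughly 10 percent
print(f"{cross_slope_percent(1.2, 12.0):.2f} %")    # 10.00 %
```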
https://en.wikipedia.org/wiki/Cross_slope
4.1875
In cell biology, the nucleus (pl. nuclei; from Latin nucleus or nuculeus, meaning kernel) is a membrane-enclosed organelle found in eukaryotic cells. The Cell Nucleus. The nucleus is a highly specialized organelle that serves as the information processing and administrative center of the cell. Biology4Kids.com! This tutorial introduces the cell nucleus. Other sections include plants, animal systems, invertebrates, vertebrates, and microorganisms. The cell nucleus is the command center of our cells. It contains our chromosomes and genetic information needed for the reproduction of life. The eukaryotic cell nucleus. Visible in this diagram are the ribosome-studded double membranes of the nuclear envelope, the DNA (as chromatin), and the nucleolus. A cell nucleus (plural: cell nuclei) is the part of the cell which contains the genetic code, the DNA. The nucleus is small and round, and it works as the cell's ... Definition: The nucleus is a membrane bound structure that contains the cell's hereditary information and controls the cell's growth and reproduction. Cell Nucleus and Nuclear Envelope. The nucleus of a eukaryotic cell contains the DNA, the genetic material of the cell. The DNA contains the information necessary for ... This site provides an educational resource focusing on the structures, functions, and dynamics of the interphase cell nucleus. The interphase nucleus is the place in ... Cell Nucleus: Structure and Functions. The nucleus is a spherical-shaped organelle present in every eukaryotic cell. It is the control center of eukaryotic cells ...
https://www.search.com/reference/Cell_nucleus
4.5
Flaps are devices used to alter the lift characteristics of a wing and are mounted on the trailing edges of the wings of a fixed-wing aircraft to reduce the speed at which the aircraft can be safely flown and to increase the angle of descent for landing. They do this by lowering the stall speed and increasing the drag. Flaps shorten takeoff and landing distances. Extending flaps increases the camber or curvature of the wing, raising the maximum lift coefficient — the lift a wing can generate. This allows the aircraft to generate the same amount of lift at a lower speed, reducing the stalling speed of the aircraft, or the minimum speed at which the aircraft will maintain flight. Extending flaps increases drag, which can be beneficial during approach and landing, because it slows the aircraft. On some aircraft, a useful side effect of flap deployment is a decrease in aircraft pitch angle which lowers the nose thereby improving the pilot's view of the runway over the nose of the aircraft during landing. However, the flaps may also cause pitch-up depending on the type of flap and the location of the wing. There are many different types of flaps used, with the specific choice depending on the size, speed and complexity of the aircraft on which they are to be used, as well as the era in which the aircraft was designed. Plain flaps, slotted flaps, and Fowler flaps are the most common. Krueger flaps are positioned on the leading edge of the wings and are used on many jet airliners. The Fowler, Fairey-Youngman and Gouge types of flap increase the wing area in addition to changing the camber. The larger lifting surface reduces wing loading and allows the aircraft to generate the required lift at a lower speed and reduces stalling speed. The general airplane lift equation, L = ½ ρ V² S CL, demonstrates these relationships: - L is the amount of lift produced, - ρ (rho) is the air density, - V is the true airspeed of the airplane, or the velocity of the airplane relative to the air, - S is the wing area and - CL is the lift coefficient, which is determined by the shape of the airfoil used and the angle at which the wing meets the air (or angle of attack). Here, it can be seen that increasing the area (S) and lift coefficient (CL) allow a similar amount of lift to be generated at a lower airspeed (V). Extending the flaps also increases the drag coefficient of the aircraft. Therefore, for any given weight and airspeed, flaps increase the drag force. Flaps increase the drag coefficient of an aircraft due to higher induced drag caused by the distorted spanwise lift distribution on the wing with flaps extended. Some flaps increase the wing area and, for any given speed, this also increases the parasitic drag component of total drag. Flaps during takeoff Depending on the aircraft type, flaps may be partially extended for takeoff. When used during takeoff, flaps trade runway distance for climb rate—using flaps reduces ground roll and the climb rate. The amount of flap used on takeoff is specific to each type of aircraft, and the manufacturer will suggest limits and may indicate the reduction in climb rate to be expected. The Cessna 172S Pilot Operating Handbook generally recommends 10° of flaps on takeoff, especially when the ground is rough or soft. Flaps during landing Flaps may be fully extended for landing to give the aircraft a lower stall speed so the approach to landing can be flown more slowly, which also allows the aircraft to land in a shorter distance. 
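To make the role of the lift equation concrete, the sketch below (Python) solves L = ½ ρ V² S CL for the stall speed and compares a flaps-retracted case with a flaps-extended case. The weight, wing area and lift coefficients are illustrative assumptions, not figures for any particular aircraft.

```python
import math

def stall_speed(weight_n: float, rho: float, wing_area_m2: float, cl_max: float) -> float:
    """Solve L = 0.5 * rho * V**2 * S * CL for V, with lift equal to weight."""
    return math.sqrt(2.0 * weight_n / (rho * wing_area_m2 * cl_max))

# Illustrative, light-aircraft-sized numbers (assumed, not from any handbook)
weight_n = 11_000.0      # aircraft weight in newtons
rho = 1.225              # sea-level air density, kg/m^3
wing_area_m2 = 16.2      # wing area

clean = stall_speed(weight_n, rho, wing_area_m2, cl_max=1.5)   # flaps retracted
flaps = stall_speed(weight_n, rho, wing_area_m2, cl_max=2.1)   # flaps extended: higher CL(max)

print(f"stall speed, flaps retracted: {clean:.1f} m/s")
print(f"stall speed, flaps extended:  {flaps:.1f} m/s")
# The higher maximum lift coefficient with flaps extended gives a noticeably
# lower stall speed, which is the effect described in the text.
```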
The higher lift and drag associated with fully extended flaps allows a steeper and slower approach to the landing site, but imposes handling difficulties in aircraft with very low wing loading (the ratio of the aircraft's weight to its wing area). Winds across the line of flight, known as crosswinds, cause the windward side of the aircraft to generate more lift and drag, causing the aircraft to roll, yaw and pitch off its intended flight path, and as a result many light aircraft land with reduced flap settings in crosswinds. Furthermore, once the aircraft is on the ground, the flaps may decrease the effectiveness of the brakes since the wing is still generating lift and preventing the entire weight of the aircraft from resting on the tires, thus increasing stopping distance, particularly in wet or icy conditions. Usually, the pilot will raise the flaps as soon as possible to prevent this from occurring. Some gliders not only use flaps when landing, but also in flight to optimize the camber of the wing for the chosen speed. When thermalling, flaps may be partially extended to reduce the stalling speed so that the glider can be flown more slowly and thereby reduce the rate of sink, which lets the glider use the rising air of the thermal more efficiently, and to turn in a smaller circle to make best use of the core of the thermal. At higher speeds a negative flap setting is used to reduce the nose-down pitching moment. This reduces the balancing load required on the horizontal stabilizer, which in turn reduces the trim drag associated with keeping the glider in longitudinal trim. Negative flap may also be used during the initial stage of an aerotow launch and at the end of the landing run in order to maintain better control by the ailerons. Like gliders, some fighters such as the Nakajima Ki-43 also use special flaps to improve maneuverability during air combat, allowing the fighter to create more lift at a given speed, allowing for much tighter turns. The flaps used for this must be designed specifically to handle the greater stresses and most flaps have a maximum speed at which they can be deployed. Control line model aircraft built for precision aerobatics competition usually have a type of maneuvering flap system that moves them in an opposing direction to the elevators, to assist in tightening the radius of a maneuver. Flap track fairings Fairings streamline the airflow over the flap support mechanisms to help reduce cruise drag - the smaller the fairing the lower the drag. Thrust gates, or gaps, in the trailing edge flaps may be required to minimise interference between the engine flow and deployed flaps. In the absence of an in-board aileron, which provides a gap in many flap installations, a modified flap section may be needed. The thrust gate on the Boeing 757 was provided by a single-slotted flap in between the inboard and outboard double-slotted flaps. The A320, A330, A340 and A380 have no in-board aileron. No thrust gate is required in the continuous, single-slotted flap. Interference in the go-around case while the flaps are still fully deployed can cause increased drag which must not compromise the climb gradient. - Plain flap: the rear portion of the airfoil rotates downwards on a simple hinge mounted at the front of the flap. The Royal Aircraft Factory and National Physical Laboratory in the United Kingdom tested flaps in 1913 and 1914, but these were never installed in an actual aircraft. 
In 1916, the Fairey Aviation Company made a number of improvements to a Sopwith Baby they were rebuilding, including their Patent Camber Changing Gear, making the Fairey Hamble Baby, as they renamed it, the first aircraft to fly with flaps. These were full span plain flaps which incorporated ailerons, making it also the first instance of flaperons. Fairey were not alone, however, as Breguet soon incorporated automatic flaps into the lower wing of their Breguet 14 reconnaissance/bomber in 1917. Due to the greater efficiency of other flap types, the plain flap is normally only used where simplicity is required. - Split flap: the rear portion of the lower surface of the airfoil hinges downwards from the leading edge of the flap, while the upper surface stays immobile. Like the plain flap, this can cause large changes in longitudinal trim, pitching the nose either down or up, and tends to produce more drag than lift. At full deflection, a split flap acts much like a spoiler, producing lots of drag and little or no lift. It was invented by Orville Wright and James M. H. Jacobs in 1920, but only became common in the 1930s and was then quickly superseded. The Douglas DC-3 & C-47 used a split flap. - Slotted flap: a gap between the flap and the wing forces high pressure air from below the wing over the flap helping the airflow remain attached to the flap, increasing lift compared to a split flap. Additionally, lift across the entire chord of the primary airfoil is greatly increased as the velocity of air leaving its trailing edge is raised, from the typical non-flap 80% of freestream, to that of the higher-speed, lower-pressure air flowing around the leading edge of the slotted flap. Any flap that allows air to pass between the wing and the flap is considered a slotted flap. The slotted flap was a result of research at Handley-Page, a variant of the slot that dates from the 1920s, but wasn't widely used until much later. Some flaps use multiple slots to further boost the effect. - Fowler flap: split flap that slides backward flat, before hinging downward, thereby increasing first chord, then camber. The flap may form part of the upper surface of the wing, like a plain flap, or it may not, like a split flap, but it must slide rearward before lowering. It may provide some slot effect, but this is not a defining feature of the type. Invented by Harlan D. Fowler in 1924, and tested by Fred Weick at NACA in 1932. They were first used on the Martin 146 prototype in 1935, and in production on the 1937 Lockheed Electra, and are still in widespread use on modern aircraft, often with multiple slots. - Junkers flap: a slotted plain flap where the flap is fixed below the trailing edge of the wing, rotating about its forward edge, and usually forming the "inboard" hinged section (closer to the root) of the Junkers Doppelflügel, or "double-wing" style of wing trailing edge control surfaces (including the outboard-mounted ailerons), which hung just below and behind the wing's fixed trailing edge. When not in use, it has more drag than other types, but is more effective at creating additional lift than a plain or split flap, while retaining their mechanical simplicity. Invented by Otto Mader at Junkers in the late 1920s, they were historically most often seen on both the Ju 52/3m airliner/cargo plane, and the Ju 87 Stuka dive bomber, though the same wing control surface can also be found on many modern ultralights. 
- Gouge flap: a type of split flap that slides backward along curved tracks that force the trailing edge downward, increasing chord and camber without affecting trim or requiring any additional mechanisms. It was invented by Arthur Gouge for Short Brothers in 1936 and used on the Short Empire and Sunderland flying boats, which used the very thick Shorts A.D.5 airfoil. Short Brothers may have been the only company to use this type. - Fairey-Youngman flap: drops down (becoming a Junkers Flap) before sliding aft and then rotating up or down. Fairey was one of the few exponents of this design, which was used on the Fairey Firefly and Fairey Barracuda. When in the extended position, it could be angled up (to a negative angle of incidence) so that the aircraft could be dived vertically without needing excessive trim changes. - Zap Flap or commonly, but incorrectly, Zapp Flap: Invented by Edward F. Zaparka while he was with Berliner/Joyce and tested on a General Aircraft Corporation Aristocrat in 1932 and on other types periodically thereafter, but it saw little use on production aircraft other than on the Northrop P-61 Black Widow. The leading edge of the flap is mounted on a track, while a point at mid chord on the flap is connected via an arm to a pivot just above the track. When the flap's leading edge moves aft along the track, the triangle formed by the track, the shaft and the surface of the flap (fixed at the pivot) gets narrower and deeper, forcing the flap down. - Krueger flap: hinged flap, which folds out from under the wing's leading edge while not forming a part of the leading edge of the wing when retracted. This increases the camber and thickness of the wing, which in turn increases lift and drag. This is not the same as a leading edge droop flap, as that is formed from the entire leading edge. Invented by Werner Krüger in 1943 and evaluated in Goettingen, Krueger flaps are found on many modern swept wing airliners. - Gurney flap: A small fixed perpendicular tab of between 1 and 2% of the wing chord, mounted on the high pressure side of the trailing edge of an airfoil. It was named for racing car driver Dan Gurney who rediscovered it in 1971, and has since been used on some helicopters such as the Sikorsky S-76B to correct control problems without having to resort to a major redesign. It boosts the efficiency of even basic theoretical airfoils (made up of a triangle and a circle overlapped) to the equivalent of a conventional airfoil. The principle was discovered in the 1930s, but was rarely used and was then forgotten. Late marks of the Supermarine Spitfire used a bead on the trailing edge of the elevators, which functioned in a similar manner. - Leading edge droop: entire leading edge of the wing rotating downward, effectively increasing camber, but slightly reducing chord. Most commonly found on fighters with very thin wings unsuited to other leading edge high lift devices. - Blown flaps: also known as Boundary Layer Control Systems, are systems that blow engine air or exhaust over the flaps to increase lift beyond that attainable with mechanical flaps. Types include the original (internally blown flap) which blows compressed air from the engine over the top of the flap, the externally blown flap, which blows engine exhaust over the upper and lower surfaces of the flap, and upper surface blowing which blows engine exhaust over the top of the wing and flap. 
While testing was done in Britain and Germany before the Second World War, and flight trials started, blown flaps did not appear on a production aircraft until the 1957 Lockheed T2V SeaStar. Upper Surface Blowing was used on the Boeing YC-14 in 1976. - Flexible flap or FlexFoil: modern interpretation of wing warping; internal mechanical actuators bend a lattice that changes the airfoil shape. It may have a flexible gap seal at the transition between fixed and flexible airfoils. - Controls that look like flaps, but are not: - Handley Page leading edge slats/slots may be confused with flaps, but are mounted on the top of the wings' leading edge and while they may be either fixed or retractable, when deployed they provide a slot or gap under the slat to force air against the top of the wing, which is absent on a Krueger flap. They offer excellent lift and enhance controllability at low speeds. Other types of flaps may be equipped with one or more slots to increase their effectiveness, a typical setup on many modern airliners. These are known as slotted flaps as described above. Frederick Handley Page experimented with fore and aft slot designs in the 20s and 30s. - Spoilers may also be confused with flaps, but are intended to create drag and reduce lift by "spoiling" the airflow over the wing. A spoiler is much larger than a Gurney flap, and can be retracted. Spoilers are usually installed mid chord on the upper surface of the wing, but may also be installed on the lower surface of the wing as well. - Air brakes are used on high performance combat aircraft to increase drag, allowing the aircraft to decelerate rapidly. They may be installed either on the wings or fuselage and differ from flaps and spoilers in that they are not intended to reduce lift and are built strongly enough to be deployed at much higher speeds. - Ailerons are similar to flaps (and work the same way), but are intended to provide lateral control, rather than to change the lifting characteristics of both wings together, and so operate differentially - when an aileron on one wing increases the lift, the opposite aileron does not, and will often work to decrease lift. Some aircraft use flaperons, which combine both the functionality of flaps and ailerons in a single control, working together to increase lift, but to slightly different degrees so the aircraft will roll toward the side generating the least lift. Flaperons were used by the Fairey Aviation Company as early as 1916, but didn't become common until after World War II. Split flap on a World War II Avro Lancaster bomber. Fully extended double slotted Fowler flaps before landing on a Boeing 737. Triple-slotted trailing-edge flaps and leading edge Krueger flaps fully extended on a Boeing 747 for landing. - Air brake (aeronautics) - Aircraft flight control system - Circulation control wing - High-lift device - Leading-edge slats - Perkins, Courtland; Hage, Robert (1949). Airplane performance, stability and control, Chapter 2, John Wiley and Sons. ISBN 0-471-68046-X. - Cessna Aircraft Company. Cessna Model 172S Nav III. Revision 3 - 12, 2006, p. 4-19 to 4-47. 
- Windrow, 1965, p.4 - http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19960052267.pdf p.39 - http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19960052267.pdf p.40,54 - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.602.7484&rep=rep1&type=pdf p.7 - Gunston, Bill, The Cambridge Aerospace Dictionary Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.452 - Taylor, 1974, pp.8-9 - Toelle, Alan (2003). Windsock Datafile Special, Breguet 14. Hertfordshire, Great Britain: Albatros Productions. ISBN 1-902207-61-0. - Gunston, Bill, The Cambridge Aerospace Dictionary Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.584 - Gunston, Bill, The Cambridge Aerospace Dictionary Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.569 - Smith, Apollo M. O. (1975). "High-Lift Aerodynamics" (PDF). Journal of Aircraft 12 (6): 518–523. doi:10.2514/3.59830. ISSN 0021-8669. Retrieved 12 July 2011. - Gunston, Bill, The Cambridge Aerospace Dictionary, Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.249-250 - Flight 1942 - National Aeronautics and Space Administration. Wind and Beyond: A Documentary Journey Into the History of Aerodynamics. - Gunston, Bill, The Cambridge Aerospace Dictionary, Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.331 - Gunston, Bill, The Cambridge Aerospace Dictionary, Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.270 - C.M. Poulsen, ed. (27 July 1933). ""The Aircraft Engineer - flight engineering section" Supplement to Flight". Flight Magazine. pp. 754a–d. - NASA on High-Lift Systems - Virginia Tech – Aerospace & Ocean Engineering - Gunston, Bill, The Cambridge Aerospace Dictionary Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.335 - from German wiki page on Krüger flaps @ http://wikipedia.qwika.com/de2en/Kr%C3%BCgerklappe (accessed 18 October 2011) - Gunston, Bill, The Cambridge Aerospace Dictionary Cambridge, Cambridge University Press 2004, ISBN 978-0-521-84140-5/ISBN 0-521-84140-2 p.191 - http://naca.central.cranfield.ac.uk/reports/arc/cp/0209.pdf page 1 accessdate=11 Jan 2016 - American Military Training Aircraft' E.R. Johnson and Lloyd S. Jones, McFarland & Co. Inc. Publishers, Jefferson, North Carolina - "Shape-shifting flap takes flight". Retrieved 19 November 2014. - Clancy, L.J. (1975). "6". Aerodynamics. London: Pitman Publishing Limited. ISBN 0-273-01120-0. - Taylor, H.A. (1974). Fairey Aircraft since 1915. London: Putnam. ISBN 0-370-00065-X. - Windrow, Martin C. and René J. Francillon. The Nakajima Ki-43 Hayabusa. Leatherhead, Surrey, UK: Profile Publications, 1965.
https://en.wikipedia.org/wiki/Flaps_(aircraft)
4.3125
Think of a number, square it and subtract your starting number. Is the number you’re left with odd or even? How do the images help to explain this? In each of the pictures the invitation is for you to: Count what you see. Identify how you think the pattern would continue. How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement? Take a counter and surround it by a ring of other counters that MUST touch two others. How many are needed? These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like? Watch this film carefully. Can you find a general rule for explaining when the dot will be this same distance from the What would be the smallest number of moves needed to move a Knight from a chess set from one corner to the opposite corner of a 99 by 99 square board? Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"? Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here. While we were sorting some papers we found 3 strange sheets which seemed to come from small books but there were page numbers at the foot of each page. Did the pages come from the same book? Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs. Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? Delight your friends with this cunning trick! Can you explain how Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes. Find out what a "fault-free" rectangle is and try to make some of Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need? If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable. Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers? How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes? Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread? Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions. How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...? Three circles have a maximum of six intersections with each other. What is the maximum number of intersections that a hundred circles In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square? This challenge focuses on finding the sum and difference of pairs of two-digit numbers. Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens? Can you dissect an equilateral triangle into 6 smaller ones? 
What number of smaller equilateral triangles is it NOT possible to dissect a larger equilateral triangle into? Compare the numbers of particular tiles in one or all of these three designs, inspired by the floor tiles of a church in Find a route from the outside to the inside of this square, stepping on as many tiles as possible. Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153? Strike it Out game for an adult and child. Can you stop your partner from being able to go? This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning. A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all? Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game. Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the. . . . What would you get if you continued this sequence of fraction sums? 1/2 + 2/1 = 2/3 + 3/2 = 3/4 + 4/3 = The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of One block is needed to make an up-and-down staircase, with one step up and one step down. How many blocks would be needed to build an up-and-down staircase with 5 steps up and 5 steps down? Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see? How many different journeys could you make if you were going to visit four stations in this network? How about if there were five stations? Can you predict the number of journeys for seven stations? What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen. Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning? We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes? In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest? An investigation that gives you the opportunity to make and justify Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total? What happens when you round these three-digit numbers to the nearest 100? Explore the effect of reflecting in two intersecting mirror lines. Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
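One of the prompts above asks what happens when the fraction sums 1/2 + 2/1, 2/3 + 3/2, 3/4 + 4/3, … are continued. The short Python sketch below, using exact arithmetic from the standard fractions module, makes the pattern visible; the closing comment states the algebraic identity behind it.

```python
from fractions import Fraction

# Terms of the sequence 1/2 + 2/1, 2/3 + 3/2, 3/4 + 4/3, ...
for n in range(1, 8):
    total = Fraction(n, n + 1) + Fraction(n + 1, n)
    print(f"{n}/{n + 1} + {n + 1}/{n} = {total} = {float(total):.4f}")

# Each sum equals 2 + 1/(n*(n+1)), so the values shrink toward 2
# but never reach it.
```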
http://nrich.maths.org/public/leg.php?code=72&cl=2&cldcmpid=502
4.375
Turboprop engines are a type of aircraft powerplant that use a gas turbine to drive a propeller. The gas turbine is designed specifically for this application, with almost all of its output being used to drive the propeller. The engine's exhaust gases contain little energy compared to a jet engine and play a minor role in the propulsion of the aircraft. The propeller is coupled to the turbine through a reduction gear that converts the high RPM, low torque output to low RPM, high torque. The propeller itself is normally a constant speed (variable pitch) type similar to that used with larger reciprocating aircraft engines. Turboprop engines are generally used on small subsonic aircraft, but some aircraft outfitted with turboprops have cruising speeds in excess of 500 kt (926 km/h, 575 mph). Large military and civil aircraft, such as the Lockheed L-188 Electra and the Tupolev Tu-95, have also used turboprop power. In its simplest form a turboprop consists of an intake, compressor, combustor, turbine, and a propelling nozzle. Air is drawn into the intake and compressed by the compressor. Fuel is then added to the compressed air in the combustor, where the fuel-air mixture then combusts. The hot combustion gases expand through the turbine. Some of the power generated by the turbine is used to drive the compressor. The rest is transmitted through the reduction gearing to the propeller. Further expansion of the gases occurs in the propelling nozzle, where the gases exhaust to atmospheric pressure. The propelling nozzle provides a relatively small proportion of the thrust generated by a turboprop. Turboprops are very efficient at modest flight speeds (below 450 mph) because the jet velocity of the propeller (and exhaust) is relatively low. Due to the high price of turboprop engines they are mostly used where high-performance short-takeoff and landing (STOL) capability and efficiency at modest flight speeds are required. The most common application of turboprop engines in civilian aviation is in small commuter aircraft, where their greater reliability than reciprocating engines offsets their higher initial cost. Much of the jet thrust in a turboprop is sacrificed in favor of shaft power, which is obtained by extracting additional power (up to that necessary to drive the compressor) from turbine expansion. While the power turbine may be integral with the gas generator section, many turboprops today feature a free power turbine on a separate coaxial shaft. This enables the propeller to rotate freely, independent of compressor speed. Owing to the additional expansion in the turbine system, the residual energy in the exhaust jet is low. Consequently, the exhaust jet produces (typically) less than 10% of the total thrust. Propellers are not efficient when the tips reach or exceed supersonic speeds. For this reason, a reduction gearbox is placed in the drive line between the power turbine and the propeller to allow the turbine to operate at its most efficient speed while the propeller operates at its most efficient speed. The gearbox is part of the engine and contains the parts necessary to operate a constant speed propeller. This differs from the turboshaft engines used in helicopters, where the gearbox is remote from the engine. Residual thrust on a turboshaft is avoided by further expansion in the turbine system and/or truncating and turning the exhaust through 180 degrees, to produce two opposing jets. 
Apart from the above, there is very little difference between a turboprop and a turboshaft. While most modern turbojet and turbofan engines use axial-flow compressors, turboprop engines usually contain at least one stage of centrifugal compression. Centrifugal compressors have the advantage of being simple and lightweight, at the expense of a streamlined shape. Propellers lose efficiency as aircraft speed increases, so turboprops are normally not used on high-speed aircraft. However, propfan engines, which are very similar to turboprop engines, can cruise at flight speeds approaching Mach 0.75. To increase the efficiency of the propellers, a mechanism can be used to alter the pitch, thus adjusting the pitch to the airspeed. A variable pitch propeller, also called a controllable pitch propeller, can also be used to generate negative thrust while decelerating on the runway. Additionally, in the event of an engine outage, the pitch can be adjusted so that the blades align with the airflow (called feathering), thus minimizing the drag of the non-functioning propeller. Some commercial aircraft with turboprop engines include the Bombardier Dash 8, ATR 42, ATR 72, BAe Jetstream 31, Embraer EMB 120 Brasilia, the Fairchild Swearingen Metroliner, Saab 340 and 2000, Xian MA60, Xian MA600, and Xian MA700. The world's first turboprop was the Jendrassik Cs-1, designed by the Hungarian mechanical engineer György Jendrassik. It was produced and tested in the Ganz factory in Budapest between 1939 and 1942. It was planned to be fitted to the Varga RMI-1 X/H twin-engined reconnaissance bomber designed by László Varga in 1940, but the program was cancelled. Jendrassik had also designed a small-scale 75 kW turboprop in 1937. However, Jendrassik's achievement did not go unnoticed. After WW2, György Jendrassik moved to London. Building on a similar principle, the first British turboprop engine was the Rolls-Royce RB.50 Trent, a converted Derwent II fitted with reduction gear and a Rotol 7-ft, 11-in five-bladed propeller. Two Trents were fitted to Gloster Meteor EE227 — the sole "Trent-Meteor" — which thus became the world's first turboprop powered aircraft, albeit a test-bed not intended for production. It first flew on 20th September 1945. From their experience with the Trent, Rolls-Royce developed the Dart, which became one of the most reliable turboprop engines ever built. Dart production continued for more than fifty years. The Dart-powered Vickers Viscount was the first turboprop aircraft of any kind to go into production and sold in large numbers. It was also the first four-engined turboprop. Its first flight was on 16th July 1948. The world's first single engined turboprop aircraft was the Armstrong Siddeley Mamba-powered Boulton Paul Balliol, which first flew on 24th March 1948. While the Soviet Union had the technology to create a jet-powered strategic bomber comparable to Boeing's B-52 Stratofortress, they instead produced the Tupolev Tu-95, powered with four Kuznetsov NK-12 turboprops, mated to eight contra-rotating propellers (two per nacelle) with supersonic tip speeds to achieve maximum cruise speeds in excess of 575 mph, faster than many of the first jet aircraft and comparable to jet cruising speeds for most missions. The Bear would serve as their most successful long-range combat and surveillance aircraft and symbol of Soviet power projection throughout the end of the 20th century. 
The USA would incorporate contra-rotating turboprop engines, such as the ill-fated Allison T40, into a series of experimental aircraft during the 1950s, but none would be adopted into service. The first American turboprop engine was the General Electric XT31, first used in the experimental Consolidated Vultee XP-81. The XP-81 first flew in December 1945, the first aircraft to use a combination of turboprop and turbojet power. America skipped over turboprop airliners in favor of the Boeing 707, but the technology of the unsuccessful Lockheed Electra was used in both the long-lived P-3 Orion as well as the classic C-130 Hercules, one of the most successful military aircraft ever in terms of length of production. One of the most popular turboprop engines is the Pratt & Whitney Canada PT6 engine. The first turbine powered, shaft driven helicopter was the Bell XH-13F, a version of the Bell 47 powered by Continental XT-51-T-3 (Turbomeca Artouste) engine. (Wikipedia)
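As a rough illustration of the statement earlier that the exhaust jet typically contributes less than 10% of a turboprop's total thrust, the sketch below (Python) estimates the cruise thrust split from shaft power, propeller efficiency and residual jet thrust. All numbers are assumed, order-of-magnitude values, not data for any particular engine.

```python
def thrust_split(shaft_power_kw: float, prop_efficiency: float,
                 residual_jet_thrust_n: float, tas_ms: float):
    """Approximate cruise thrust from the propeller vs. the residual exhaust jet.

    Propeller thrust is estimated as delivered power divided by true airspeed.
    """
    prop_thrust_n = shaft_power_kw * 1000.0 * prop_efficiency / tas_ms
    return prop_thrust_n, residual_jet_thrust_n

# Assumed figures for a roughly 1000 kW class engine at cruise
prop_n, jet_n = thrust_split(shaft_power_kw=1000.0, prop_efficiency=0.85,
                             residual_jet_thrust_n=600.0, tas_ms=130.0)
total_n = prop_n + jet_n

print(f"propeller thrust: {prop_n:.0f} N ({prop_n / total_n:.0%})")
print(f"exhaust jet:      {jet_n:.0f} N ({jet_n / total_n:.0%})")   # under 10%, as stated
```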
https://www.tititudorancea.com/z/turboprop.htm
4.03125
At the simplest level, obesity is caused by consuming more calories than you burn. Obesity, however, is a complex condition caused by more than simply eating too much and moving too little. The environment you live in and your community's social norms surrounding food, eating, and lifestyle strongly influence what, when, and how much you eat. Similarly, your environment affects whether, where, and how you are able to be physically active. Diet and Lifestyle Changes in American dietary habits and lifestyle have contributed to today's high prevalence of obesity. Those changes include: - More adults in the workforce, combined with long work hours and commutes, have led to fewer meals prepared at home. - More Americans eat more meals in restaurants, which often serve oversized portions of calorie-dense foods. - Portion sizes of packaged foods, such as snacks and soft drinks, have gotten larger over the years. - Children spend more hours watching television, using computers, or playing electronic games and less time engaging in active play and recreation. - Adults have gotten more sedentary as fewer perform physical labor on the job. The way communities, workplaces, and schools are structured in much of the United States has contributed to the country's high rate of obesity. Some of the changes seen in the past few decades include: - Food (especially junk food) is now sold in places such as gas stations and office supply stores that historically did not sell food. The end result is that food is available almost constantly. - Food products and restaurants are marketed intensively on television, radio, online, and elsewhere. - Many communities have no safe routes for walking or bicycling, or safe places to play outdoors. - Most jobs present few opportunities for physical activity. - Many schools provide little or no recess periods or gym classes. - Poor neighborhoods are often "food deserts," with no purveyors of fresh, healthy foods. - There are many television shows dedicated to food, restaurants, and cooking that show no regard for the health consequences of the food being featured. Stress contributes to obesity in a few ways: - People who are stressed tend to make bad food choices and to eat too much. - Stress causes the release of stress hormones including cortisol, which triggers the release of triglycerides (fatty acids) from storage and relocates them to fat cells deep in the abdomen. Cortisol also increases appetite. Some people have a genetic predisposition to being overweight or obese. However, in most cases, those people do not become obese unless they also have an energy imbalance — meaning they consume more calories than they burn. A genetic tendency toward obesity often becomes apparent only when a person's or group's lifestyle or environment changes significantly. Genetic syndromes such as Prader-Willi, Alstrom, Bardet-Biedl, Cohen, Börjeson-Forssman-Lehman, Frohlich, and others can also lead to obesity. Such syndromes are rare, however, and they typically include other abnormalities besides obesity. 
A variety of medical conditions are associated with being overweight and obese, including: - Cushing's syndrome (a rare syndrome that results from excess production of cortisol by the adrenal glands) - Eating disorders, especially binge eating disorder, bulimia nervosa, and night eating disorder - Growth hormone deficiency - Hypogonadism (low testosterone) - Hypothyroidism (underactive thyroid) - Insulinoma (a tumor of the pancreas that secretes insulin) - Polycystic ovarian syndrome In some cases it's not clear whether obesity causes the medical condition, or whether the condition causes obesity. Drugs That Contribute to Obesity Certain drugs have been shown to encourage weight gain — often by increasing appetite — and contribute to obesity. These drugs include: - Diabetes drugs, including insulin, thiazolidinediones (Actos and Avandia), and sulphonylureas (glimepiride, glipizide, and glyburide) - Drugs for high blood pressure, including thiazide diuretics, loop diuretics, calcium channel blockers, beta blockers, and alpha-adrenergic blockers - Antihistamines (used for allergies), particularly cyproheptadine - Steroids, including corticosteroids and birth control pills - Psychotherapeutic medications, including lithium, antipsychotics, and antidepressants - Anticonvulsant drugs (used for epilepsy and some other conditions), such as sodium valproate and carbamazepine In some cases, other drugs can be substituted for those that encourage weight gain, or a lower dose can be used. However, don't stop taking prescribed medications on your own. Discuss your options with your doctor, and make a decision together about what's best for you. If you must take a medicine that increases your appetite, behavioral measures such as learning to count calories and eat slowly can help to limit weight gain. Last Updated: 4/30/2015
http://www.everydayhealth.com/obesity/causes-and-risk-factors/
4.0625
Truman was the first president to embrace containment and use it as a policy. He funded the Greek and Turkish governments to rebuild after WWII because he did not want communist influence to infiltrate and overcome weak countries. Similarly, Truman initiated the Marshall Plan and supported NATO in efforts to have financial and military links tying Western nations together. Johnson adhered closely to containment during the Vietnam War. Nixon, who replaced Johnson in 1969, referred to his foreign policy as détente, or a relaxation of tension. Although it continued to aim at restraining the Soviet Union, détente was based on political realism, or thinking in terms of national interest, as opposed to crusades against communism or for democracy.
When the U.N. intervened in the Korean crisis, war broke out and the United States was given much of the military responsibility. While the war was supposed to be a policing action between North and South Korea, MacArthur crossed the 38th parallel to attack, which signaled an act of war to the communists. Truman tried to keep this a policing mission, but he faced opposition from Republicans who wanted rollback. Vietnam was another policing action that tested containment. Johnson did not give full power to one general, and instead divided command among three. Still, the conflict turned into a war, and containment ultimately failed as the communists took over South Vietnam in 1975. Nixon used his foreign policy experience to achieve détente with the Soviets, an easing of tensions and a focus on diplomacy. Ronald Reagan did not accept coexistence, and wanted the West to win at all costs. One strategy besides all-out war was the Strategic Defense Initiative. He wanted to outspend the Soviets by creating shields that would protect the United States in the skies, knowing the Soviets would compete to make a duplicate, though they could not afford it. Soviet spending on its creation would contribute to the financial downfall of the USSR. Another way Reagan combated the Soviet Union was by having the CIA overthrow Third World governments hostile to American interests. He sent the CIA to Latin America, Africa, and the Middle East.
Rollback is the strategy of forcing change in the major policies of a state, usually by replacing its ruling regime. It contrasts with containment, which means preventing the expansion of that state; and with détente, which means a working relationship with that state. Most of the discussions of rollback in the scholarly literature deal with United States foreign policy toward Communist countries during the Cold War. The rollback strategy was tried, and failed, in Korea in 1950, and in Cuba in 1961.
National Security Council Report 68 (NSC-68) was a 58-page top secret policy paper issued by the United States National Security Council on April 14, 1950, during the presidency of Harry S. Truman. It was one of the most significant statements of American policy in the Cold War. NSC-68 largely shaped U.S. foreign policy in the Cold War for the next 20 years, and involved a decision to make containment of Communist expansion a high priority. 
The strategy outlined in NSC-68 achieved ultimate victory, according to this view, with the collapse of the Soviet power and the emergence of a "new world order" centered on American liberal-capitalist values. Truman officially signed NSC-68 on September 30, 1950. (1904 – 2005) an American adviser, diplomat, political scientist and historian, best known as "the father of containment" and as a key figure in the emergence of the Cold War. He was a core member of the group of foreign policy elders known as "The Wise Men." In the late 1940s, his writings inspired the Truman Doctrine and the U.S. foreign policy of "containing" the Soviet Union, thrusting him into a lifelong role as a leading authority on the Cold War. His "Long Telegram" from Moscow in 1946 and the subsequent 1947 article "The Sources of Soviet Conduct" argued that the Soviet regime was inherently expansionist and that its influence had to be "contained" in areas of vital strategic importance to the United States G. F. Kennan had been stationed at the U.S. Embassy in Moscow as minister-counselor since 1944. Although highly critical of the Soviet system, the mood within the U.S. State Department was friendship towards the Soviets, since they were an important ally in the war against Nazi Germany. In February 1946, the United States Treasury asked the U.S. Embassy in Moscow why the Soviets were not supporting the newly created World Bank and the International Monetary Fund. In reply, Kennan wrote the Long Telegram outlining his opinions and views of the Soviets; it arrived in Washington on February 22, 1946. Among its most-remembered parts was that while Soviet power was impervious to the logic of reason, it was highly sensitive to the logic of force. A United States policy using numerous strategies to prevent the spread of communism abroad. A component of the Cold War, this policy was a response to a series of moves by the Soviet Union to enlarge communist influence in Eastern Europe, China, Korea, and Vietnam. It represented a middle-ground position between détente and rollback. Containment was a United States policy using numerous strategies to prevent the spread of communism abroad. This policy was a response to a series of moves by the Soviet Union to enlarge communist influence in Eastern Europe, China, Korea, and Vietnam. The basis of the doctrine was articulated in a 1946 cable by U.S. diplomat George F. Kennan, and the term is a translation of the French cordon sanitaire, used to describe Western policy toward the Soviet Union in the 1920s. Containment is associated most strongly with the policies of U.S. President Harry Truman (1945–53), including the establishment of the North Atlantic Treaty Organization (NATO), a mutual defense pact. Further, President Lyndon Johnson (1963–69) cited containment as a justification for his policies in Vietnam, while President Richard Nixon (1969–74), working with his top adviser Henry Kissinger, rejected containment in favor of friendly relations with the Soviet Union and China. This détente, or relaxation of tensions, involved expanded trade and cultural contacts. Central programs under containment, including NATO and nuclear deterrence, remained in effect even after the end of the war. Following the 1917 communist revolution in Russia, there were calls by Western leaders to isolate the Bolshevik government, which seemed intent on promoting worldwide revolution. 
In March 1919, French Premier Georges Clemenceau called for a cordon sanitaire, or ring of non-communist states, to isolate the Soviet Union. Translating this phrase, U.S. President Woodrow Wilson called for a "quarantine. " Both phrases compare communism to a contagious disease. Nonetheless, during World War II, the U.S. and the Soviet Union found themselves allied in opposition to the Axis powers. Key State Department personnel grew increasingly frustrated with and suspicious of the Soviets as the war drew to a close. Averell Harriman, U.S. ambassador in Moscow, once a "confirmed optimist" regarding U.S.-Soviet relations, was disillusioned by what he saw as the Soviet betrayal of the 1944 Warsaw Uprising as well as by violations of the February 1945 Yalta Agreement concerning Poland. Harriman would later have significant influence in forming Truman's views on the Soviet Union. In February 1946, the U.S. State Department asked George F. Kennan, then at the U.S. Embassy in Moscow, why the Russians opposed the creation of the World Bank and the International Monetary Fund. He responded with a wide-ranging analysis of Russian policy now called the "Long Telegram" . According to Kennan: The Soviets perceived themselves to be in a state of perpetual war with capitalism; The Soviets would use controllable Marxists in the capitalist world as allies; Soviet aggression was not aligned with the views of the Russian people or with economic reality, but with historic Russian xenophobia and paranoia; The Soviet government's structure prevented objective or accurate pictures of internal and external reality. Clark Clifford and George Elsey produced a report elaborating on the Long Telegram and proposing concrete policy recommendations based on its analysis. This report, which recommended "restraining and confining" Soviet influence, was presented to Truman on September 24, 1946. In March 1947, President Truman, a Democrat, asked the Republican controlled Congress to appropriate $400 million in aid to the Greek and Turkish governments, then fighting Communist subversion. Truman pledged to, "support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures. " This pledge became known as the Truman Doctrine. Portraying the issue as a mighty clash between "totalitarian regimes" and "free peoples", the speech marks the onset of the Cold War and the adoption of containment as official U.S. policy. Truman followed up his speech with a series of measures to contain Soviet influence in Europe, including the Marshall Plan, or European Recovery Program, and NATO, a military alliance between the U.S. and Western European nations created in 1949. Because containment required detailed information about Communist moves, the government relied increasingly on the Central Intelligence Agency (CIA). Established by the National Security Act of 1947, the CIA conducted espionage in foreign lands, some of it visible, more of it secret. Truman approved a classified statement of containment policy called NSC 20/4 in November 1948, the first comprehensive statement of security policy ever created by the United States. The Soviet Union first nuclear test in 1949 prompted the National Security Council to formulate a revised security doctrine. Completed in April 1950, it became known as NSC 68. It concluded that a massive military buildup was necessary to the deal with the Soviet threat. Many Republicans, including John Foster Dulles, concluded that Truman had been too timid. 
In 1952, Dulles called for rollback and the eventual "liberation" of eastern Europe. Dulles was named Secretary of State by incoming President Dwight Eisenhower, but Eisenhower's decision not to intervene during the Hungarian Uprising of 1956 made containment a bipartisan doctrine. President Eisenhower relied on clandestine CIA actions to undermine hostile governments and used economic and military foreign aid to strengthen governments supporting the American position in the Cold War. Senator Barry Goldwater, the Republican candidate for president in 1964, challenged containment and asked, "Why not victory? " President Johnson, the Democratic nominee, answered that rollback risked nuclear war. Goldwater lost to Johnson in the general election by a wide margin. Johnson adhered closely to containment during the Vietnam War. Nixon, who replaced Johnson in 1969, referred to his foreign policy as détente, or a relaxation of tension. Although it continued to aim at restraining the Soviet Union, it was based on political realism, or thinking in terms of national interest, as opposed to crusades against communism or for democracy. Source: Boundless. “Containment in Foreign Policy.” Boundless U.S. History. Boundless, 21 Jul. 2015. Retrieved 10 Feb. 2016 from https://www.boundless.com/u-s-history/textbooks/boundless-u-s-history-textbook/politics-and-culture-of-abundance-1943-1960-28/policy-of-containment-217/containment-in-foreign-policy-1204-9252/
https://www.boundless.com/u-s-history/textbooks/boundless-u-s-history-textbook/politics-and-culture-of-abundance-1943-1960-28/policy-of-containment-217/containment-in-foreign-policy-1204-9252/
4
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata — deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.) including all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go 7 miles (11 km) deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale using the law of superposition, for where tectonic forces have uplifted one ridge newly subject to erosion and weathering in folding and faulting the strata, they have also created a nearby trough or structural basin region that lies at a relative lower elevation that can accumulate additional deposits. By comparing overall formations, geologic structures and local strata, calibrated by those layers which are widespread, a nearly complete geologic record has been constructed since the 17th century. Discordant strata example Correcting for discordancies can be done in a number of ways and utilizing a number of technologies or field research results from studies in other disciplines. In this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiology and paleobiological knowledge. Fossils can be used to recognize rock layers of the same or different geologic ages, thereby coordinating locally occurring geologic stages to the overall geologic timeline. The pictures of the fossils of monocellular algae in this USGS figure were taken with a scanning electron microscope and have been magnified 250 times. In the U.S. state of South Carolina three marker species of fossil algae are found in a core of rock whereas in Virginia only two of the three species are found in the Eocene Series of rock layers spanning three stages and the geologic ages from 37.2–55.8 Ma. 
Comparing the record about the discordance in the record to the full rock column shows the non-occurrence of the missing species and that portion of the local rock record, from the early part of the middle Eocene is missing there. This is one form of discordancy and the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and latter part of the middle Eocene) in both South Carolina and Virginia, and thereby "calibrate" the local rock column into its proper place in the overall geologic record. |Segments of rock (strata) in chronostratigraphy||Time spans in geochronology||Notes to |Eonothem||Eon||4 total, half a billion years or more| |Erathem||Era||10 defined, several hundred million years| |System||Period||22 defined, tens to ~one hundred million years| |Series||Epoch||34 defined, tens of millions of years| |Stage||Age||99 defined, millions of years| |Chronozone||Chron||subdivision of an age, not used by the ICS timescale| Lithology vs paleontology Consequently, as the picture of the overall rock record emerged, and discontinuities and similarities in one place were cross-correlated to those in others, it became useful to subdivide the overall geologic record into a series of component sub-sections representing different sized groups of layers within known geologic time, from the shortest time span stage to the largest thickest strata eonothem and time spans eon. Concurrent work in other natural science fields required a time continuum be defined, and earth scientists decided to coordinate the system of rock layers and their identification criteria with that of the geologic time scale. This gives the pairing between the physical layers of the left column and the time units of the center column in the table at right. Well stratified and fully exposed Dinosaur Park formations (in Dinosaur Provincial Park, Alberta, Canada) and like formations that extend for over a thousand miles exposing eons of rock history through numerous wind and water exposed strata layers— which in the Colorado Plateau are miles thick. New Orleans after Hurricane Katrina: Unlithified sediment layers laid down in historic times. This cut was an attempt to find bedrock near a residential street near the lower breach of the London Avenue Canal after restoring the levees which has been plowed/excavated clear by the Army Corp of Engineers, showing a nascent stratigraphy in the large deposits of silt deposited by flooding in recent earth history. Three eras of deposition and two discordancies are visible in this highway cut in the Netherlands. Note the color and slight angular change between the lower red bed layering and the middle strata. The upper strata are tilted yet again relative to the bottom layerings well demonstrating the cycles this land formation went through as part of the sea floor. An ancient rockfall which protected the rock records beneath its impact site from further large scale erosion. Taken along Burr Trail, Grand Staircase-Escalante National Monument, Utah, USA. Sediment core, taken with a gravity corer by the research vessel POLARSTERN in the South Atlantic; light/dark-coloured changes are due to climatic variation of the Quaternary; basis age of the core is about 1 million years. 
https://en.wikipedia.org/wiki/Geologic_record
4.03125
But there is more to the story. The Bell Labs patent wasn't issued until 1960. And the race to build the first working laser--a ruby crystal that emitted pulses of light at 0.69 microns--was won that same year by Theodore Maiman of Hughes Aircraft Co. Meanwhile, a graduate student at Columbia University named Gordon Gould had scribbled down ideas for a laser in 1957. Gould had his notebook, in which he coined the term laser, notarized. But he didn't apply for patents because he believed he had to build a working model first. By the time he did file in 1959, the Bell Labs application was already being considered by the Patent Office. Only after 20 years of bitter litigation did Gould win several key patents, beginning in 1977. But whatever the official date, the work of these four laser pioneers sparked a technological revolution. In just four decades, the laser has become so commonplace that few people even realize that laser, now used as a noun, had its beginning as an acronym for "light amplification by stimulated emission of radiation."
In their paper, Townes and Schawlow presented the idea of arranging mirrors at each end of a cavity containing a gas or substance that could be excited to emit light. The mirrors would bounce the light back and forth so that all the photons would be moving in one direction. The size of the mirrors and the cavity could be adjusted to produce one frequency of light. Theodore Maiman was one reader of the article who decided to see if he could test the idea by building a working laser. He selected a crystal of ruby and coated each end with a silver mirror. One mirror was thinner so that some of the light could escape as a beam. The ruby was surrounded by a flash tube to provide the energy to stimulate the atoms in the crystal. The entire assembly was encased in a polished aluminum tube. It worked.
When the team at Bell Labs heard of Maiman's success, they dispatched Amnon Yariv, one of their colleagues who was vacationing in San Diego, to rush to Maiman's lab in Malibu. He returned with the bad (good) news. The laser now existed as more than the theory proposed by Albert Einstein in 1917. Chagrined but not defeated, the researchers at Bell Labs soon bested Hughes with a laser that ran continuously, rather than in pulses (they replaced the flash lamp with an arc lamp).
The contentious genesis of the laser did almost nothing to slow the rapid pace of its development and commercialization. Patent piled on patent, Bell Labs and others churned out a steady stream of innovations that continues unabated today. The importance did not pass unnoticed. In fewer than six years from the publication of the Schawlow-Townes paper, the 1964 Nobel Prize in Physics was shared by Townes and by Nicolay Gennadiyevich Basov and Aleksandr Mikhailovich Prokhorov of the Lebedev Institute in Moscow for their early work in masers ("microwave amplification by stimulated emission of radiation") and the subsequent development of the optical maser, or laser. What the 1964 Nobel committee failed to note was the vast potential for practical applications of the laser, emphasizing instead its opening of "new possibilities for studying the interaction of radiation and matter." That, of course, was very true. A recent example is the 1997 Nobel in Physics to Steven Chu of Stanford University for his use of laser light to trap and cool atoms to nearly absolute zero. 
But today, lasers--from semiconductor devices as tiny as grains of sand to experimental giants the size of buildings--are used in hundreds of applications, from cutting and welding metal to repairing damage to delicate tissue of the eyes. They are at the heart of many scientific instruments and are guiding surveyors and sighting weapons. With their light guided through threads of glass, they have revolutionized communications. Lasers scan bar codes at the supermarket and record sound on compact disks. It's already getting hard to imagine what life was like without them. So, happy 40th--give or take a few months. OPTICAL MASERS, Arthur L. Schawlow, Scientific American, June 1961 ADVANCES IN OPTICAL MASERS, Arthur L. Schawlow, Scientific American, July 1963 THE PRESSURE OF LASER LIGHT, Arthur Ashkin, Scientific American, February 1972 LASER TRAPPING OF NEUTRAL PARTICLES, Steven Chu, Scientific American, February 1992.
http://www.scientificamerican.com/article/the-laser-at-about-40/
4.75
The parts of speech explains the ways words can be used in various contexts. Every word in the English language functions as at least one part of speech; many words can serve, at different times, as two or more parts of speech, depending on the context. 1Understand that nouns and their 'partners' are the following: - A noun is a word or phrase that names a person, place, thing, or idea. (Fred, New York, table, beauty, execution ). A noun may be used as the subject of a verb, the object of a verb, an identifying noun, the object of a preposition, or an appositive (an explanatory phrase coupled with a subject or object ). - An adjective is a word or combination of words that modifies a noun or pronoun. (blue-green, central, half-baked, temporary ). E.g. in use, "blue-green handbag" (modifies the noun 'handbag'). - A pronoun is a word that substitutes for a noun and refers to a person, place, thing, idea, or act that was mentioned previously or that can be inferred from the context of the sentence (he, she, it, that ). A pronoun can also come before a noun, as a form of modification, e.g., "HER book", "her" being a possessive pronoun to indicate who the book (noun) belongs to. - A preposition is a word or phrase that shows the relationship of a noun to another element in the sentence (at, by, in, to, from, with ), e.g., "I put the money in my purse" - "in" is a preposition. 2Think of a preposition as anything that a caterpillar can do to an apple. It can go in, on, under, around, through, below, or into the apple, for example. Those italicized words are all prepositions. They begin prepositional phrases, which always include a noun as the object of the preposition. 3Know that a verb or its modifiers are: - A verb is a word or phrase that expresses action and links. Verbs can be transitive, requiring an object (object = "her", in "I met her" ), or intransitive, requiring only a subject ("The sun rises"). Some verbs, like feel , are both transitive ("Feel the fabric") and intransitive ("I feel cold", in which cold is an adjective and not an object). - An adverb is a word that modifies a verb, an adjective, or another adverb (slowly, obstinately, much ). E.g. in use, "I ran slowly" (modifying the verb 'ran'). 4Learn that connection words or exclamation words are parts of speech also. - A conjunction is a word that connects other words, phrases, or sentences (and, but, or, because ). E.g. in use, "I like cats, but I don't like dogs" ("but", here, is a coordinating conjunction), "I went outside, although it was raining" ("although" is used as a subordinating conjunction). - An interjection is a word, phrase, or sound used as an exclamation and capable of standing by itself (oh, Lord, damn, my goodness ). Also known as an "ejaculation", e.g., "Balderdash!". It is not grammatically related to the rest of the sentence. Questions and Answers Give us 3 minutes of knowledge! - Don't rely on these definitions alone. You need to put the meanings and examples in a way in which you understand. In other languages: Thanks to all authors for creating a page that has been read 83,627 times.
http://www.wikihow.com/Identify-Parts-of-Speech
4
Gender, Sex, and Slavery In this activity students read about slavery's effect on women from the perspectives of an enslaved woman and a plantation mistress. Then students create a dialogue between the two women. Students will analyze two descriptions of the lives of enslaved women. Students will describe how slavery affected women differently than men. Students will create a dialogue between a slave and a slaveowner. Step 1: Divide students into small groups. Have students read Sections V and VI ("Trials of Girlhood" and "Jealous Mistress") from Harriet Jacobs' Incidents in the Life of a Slave Girl. Ask students to free write their general impressions, including any aspects of the reading that surprised them. Step 2: Have students read aloud and discuss the brief excerpts from Harriet Jacobs and Mary Boykin Chestnut ("A Plantation Mistress Decries 'A Monstrous System'"). In discussion, students should address the following questions: Does Jacobs, a former slave, see female slaves as victims or perpetrators? How does Chestnut, a plantation mistress, see female slaves? Step 3: Ask each student to imagine and write a dialogue between Harriet Jacobs and Mary Chestnut. Step 4: Based on their written dialogues and a careful study of the readings, ask students to assess gender roles and moral and sexual attitudes under slavery. In their groups, students should address the following: How did sexual or moral attitudes differ for whites and blacks during slavery? Jacobs and Chestnut believe that oppression differs for women and men under slavery? Do you agree? Why do you think the authors wrote these passages? Who do you think were the audiences for these writings? As students listen to their group members, they should make a list of common points of view and areas of difference between Jacobs and Chestnut. Then ask groups to reconvene and as an entire class, discuss the differences and similarities between the two women's views. The institution of slavery permeated every aspect of American society before the Civil War, and it impacted the lives of women regardless of race or economic status. Defenders of slavery justified the institution on the grounds that there were innate racial differences between blacks and whites. These racial prejudices also helped define gender identities during the antebellum era. Slavery attached different sexual behaviors and traits to white and black women, which were then used as the basis for separate roles for white and black women in both the private and public spheres. Laws and social practices held that white women were fragile, moral, and sexually innocent, while black women were viewed as laborers and over-sexed beings. In the intimate setting of the household, such inequalities ensured that relations between white and black women were fraught with distrust, jealousy, and rage. Plantation mistresses were often well aware that the men in their lives took advantage of black slave women sexually; enslaved women could not escape white men's sexual attentions and rape was common. Many white slave mistresses used their "higher" racial standing to take out their frustrations about their confined role in society and their menfolk's infidelities, on black women. Black women had to cope with the unpredictable actions of both the master and mistress. 
Creator | American Social History Project/Center for Media and Learning Rights | Copyright American Social History Project/Center for Media and Learning This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Item Type | Teaching Activity Cite This document | American Social History Project/Center for Media and Learning, “Gender, Sex, and Slavery,” HERB: Resources for Teachers, accessed February 6, 2016, http://herb.ashp.cuny.edu/items/show/1377.
http://herb.ashp.cuny.edu/items/show/1377
4.28125
fault, in geology, fracture in the earth's crust in which the rock on one side of the fracture has measurable movement in relation to the rock on the other side. Faults on other planets and satellites of the solar system also have been recognized. Evidence of faults are found either at the surface (fault surface) or underground (fault plane). Faults are most evident in outcrops of sedimentary formations where they conspicuously offset previously continuous strata. Movement along a fault plane may be vertical, horizontal, or oblique in direction, or it may consist in the rotation of one or both of the fault blocks, with most movements associated with mountain building and plate tectonics. The two classes of faults include the dip-slip (up and down movement), which is further divided into normal and thrust (reverse) faults; and strike-slip (movement parallel to the fault plane). The San Andreas fault of California is of this type. In dip-slip faults the term "hanging wall" is used for the side that lies vertically above the other, called the "footwall." A fault in which the hanging wall moves down and the footwall is stationary is called a normal fault. Normal faults are formed by tensional, or pull-apart, forces. A fault in which the hanging wall is the upthrown side is called a thrust fault because the hanging wall appears to have been pushed up over the footwall. Such faults are formed by compressional forces that push rock together and are by far the most common of the dip-slip faults. All types of faults have been recognized on the ocean floor: normal faults occur in the rift valleys associated with mid ocean ridges spreading at slow rates; strike-slip faults appear between the offset portions of mid-ocean ridges; and thrust faults occur at subducting plate boundaries. Active faults, though they may not move for decades, can move many feet in a matter of seconds, producing an earthquake. The largest earthquakes occur along thrust faults. Some faults creep from a half inch to as much as 4 in. (1 to 10 cm) per year. Fault movements are measured using laser and other devices. Faults create interpretation problems for geologists by altering the relations of strata (see stratification), such as making the same rock layer offset in two vertical cross sections of a formation or making layers disappear altogether. Faults are often seen on the surface as topographical features, including offset streams, linear lakes, and fault scarps.
http://www.factmonster.com/encyclopedia/science/fault.html
4.03125
Air quality index An air quality index (AQI) is a number used by government agencies to communicate to the public how polluted the air currently is or how polluted it is forecast to become. As the AQI increases, an increasingly large percentage of the population is likely to experience increasingly severe adverse health effects. Different countries have their own air quality indices, corresponding to different national air quality standards. Some of these are the Air Quality Health Index (Canada), the Air Pollution Index (Malaysia), and the Pollutant Standards Index (Singapore). - 1 Definition and usage - 2 Indices by location - 2.1 Canada - 2.2 Hong Kong - 2.3 Mainland China - 2.4 India - 2.5 Mexico - 2.6 Singapore - 2.7 South Korea - 2.8 United Kingdom - 2.9 Europe - 2.10 United States - 3 See also - 4 References - 5 External links Definition and usage Computation of the AQI requires an air pollutant concentration over a specified averaging period, obtained from an air monitor or model. Taken together, concentration and time represent the dose of the air pollutant. Health effects corresponding to a given dose are established by epidemiological research. Air pollutants vary in potency, and the function used to convert from air pollutant concentration to AQI varies by pollutant. Air quality index values are typically grouped into ranges. Each range is assigned a descriptor, a color code, and a standardized public health advisory. The AQI can increase due to an increase of air emissions (for example, during rush hour traffic or when there is an upwind forest fire) or from a lack of dilution of air pollutants. Stagnant air, often caused by an anticyclone, temperature inversion, or low wind speeds lets air pollution remain in a local area, leading to high concentrations of pollutants, chemical reactions between air contaminants and hazy conditions. On a day when the AQI is predicted to be elevated due to fine particle pollution, an agency or public health organization might: - advise sensitive groups, such as the elderly, children, and those with respiratory or cardiovascular problems to avoid outdoor exertion. - declare an "action day" to encourage voluntary measures to reduce air emissions, such as using public transportation. - recommend the use of masks to keep fine particles from entering the lungs During a period of very poor air quality, such as an air pollution episode, when the AQI indicates that acute exposure may cause significant harm to the public health, agencies may invoke emergency plans that allow them to order major emitters (such as coal burning industries) to curtail emissions until the hazardous conditions abate. Most air contaminants do not have an associated AQI. Many countries monitor ground-level ozone, particulates, sulfur dioxide, carbon monoxide and nitrogen dioxide, and calculate air quality indices for these pollutants. The definition of the AQI in a particular nation reflects the discourse surrounding the development of national air quality standards in that nation. A website allowing government agencies anywhere in the world to submit their real-time air monitoring data for display using a common definition of the air quality index has recently become available. Indices by location Air quality in Canada has been reported for many years with provincial Air Quality Indices (AQIs). Significantly, AQI values reflect air quality management objectives, which are based on the lowest achievable emissions rate, and not exclusively concern for human health. 
The Air Quality Health Index or (AQHI) is a scale designed to help understand the impact of air quality on health. It is a health protection tool used to make decisions to reduce short-term exposure to air pollution by adjusting activity levels during increased levels of air pollution. The Air Quality Health Index also provides advice on how to improve air quality by proposing behavioural change to reduce the environmental footprint. This index pays particular attention to people who are sensitive to air pollution. It provides them with advice on how to protect their health during air quality levels associated with low, moderate, high and very high health risks. The Air Quality Health Index provides a number from 1 to 10+ to indicate the level of health risk associated with local air quality. On occasion, when the amount of air pollution is abnormally high, the number may exceed 10. The AQHI provides a local air quality current value as well as a local air quality maximums forecast for today, tonight, and tomorrow, and provides associated health advice. |Risk:||Low (1–3)||Moderate (4–6)||High (7–10)||Very high (above 10)| |Health Risk||Air Quality Health Index||Health Messages| |At Risk population||*General Population| |Low||1–3||Enjoy your usual outdoor activities.||Ideal air quality for outdoor activities| |Moderate||4–6||Consider reducing or rescheduling strenuous activities outdoors if you are experiencing symptoms.||No need to modify your usual outdoor activities unless you experience symptoms such as coughing and throat irritation.| |High||7–10||Reduce or reschedule strenuous activities outdoors. Children and the elderly should also take it easy.||Consider reducing or rescheduling strenuous activities outdoors if you experience symptoms such as coughing and throat irritation.| |Very high||Above 10||Avoid strenuous activities outdoors. Children and the elderly should also avoid outdoor physical exertion.||Reduce or reschedule strenuous activities outdoors, especially if you experience symptoms such as coughing and throat irritation.| On the 30th December 2013 Hong Kong replaced the Air Pollution Index with a new index called the Air Quality Health Index. This index is on a scale of 1 to 10+ and considers four air pollutants: ozone; nitrogen dioxide; sulphur dioxide and particulate matter (including PM10 and PM2.5). For any given hour the AQHI is calculated from the sum of the percentage excess risk of daily hospital admissions attributable to the 3-hour moving average concentrations of these four pollutants. The AQHIs are grouped into five AQHI health risk categories with health advice provided: |Health risk category||AQHI| Each of the health risk categories has advice with it. At the low and moderate levels the public are advised that they can continue normal activities. For the high category, children, the elderly and people with heart or respiratory illnesses are advising to reduce outdoor physical exertion. Above this (very high or serious) the general public are also advised to reduce or avoid outdoor physical exertion. China's Ministry of Environmental Protection (MEP) is responsible for measuring the level of air pollution in China. As of 1 January 2013, MEP monitors daily pollution level in 163 of its major cities. 
The API level is based on the level of 6 atmospheric pollutants, namely sulfur dioxide (SO2), nitrogen dioxide (NO2), suspended particulates smaller than 10 μm in aerodynamic diameter (PM10), suspended particulates smaller than 2.5 μm in aerodynamic diameter (PM2.5), carbon monoxide (CO), and ozone (O3) measured at the monitoring stations throughout each city. An individual score (IAQI) is assigned to the level of each pollutant and the final AQI is the highest of those 6 scores. The pollutants can be measured quite differently. PM2.5、PM10 concentration are measured as average per 24h. SO2, NO2, O3, CO are measured as average per hour. The final API value is calculated per hour according to a formula published by the MEP. The scale for each pollutant is non-linear, as is the final AQI score. Thus an AQI of 100 does not mean twice the pollution of AQI at 50, nor does it mean twice as harmful. While an AQI of 50 from day 1 to 182 and AQI of 100 from day 183 to 365 does provide an annual average of 75, it does not mean the pollution is acceptable even if the benchmark of 100 is deemed safe. This is because the benchmark is a 24-hour target. The annual average must match against the annual target. It is entirely possible to have safe air every day of the year but still fail the annual pollution benchmark. AQI and Health Implications (HJ 663-2012) |0–50||Excellent||No health implications.| |51–100||Good||Few hypersensitive individuals should reduce outdoor exercise.| |101–150||Lightly Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.| |151–200||Moderately Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.| |201–300||Heavily Polluted||Healthy people will be noticeably affected. People with breathing or heart problems will experience reduced endurance in activities. These individuals and elders should remain indoors and restrict activities.| |300+||Severely Polluted||Healthy people will experience reduced endurance in activities. There may be strong irritations and symptoms and may trigger other illnesses. Elders and the sick should remain indoors and avoid exercise. Healthy individuals should avoid out door activities.| The Minister for Environment, Forests & Climate Change Shri Prakash Javadekar launched The National Air Quality Index (AQI) in New Delhi on 17 September 2014 under the Swachh Bharat Abhiyan. It is outlined as ‘One Number- One Colour-One Description’ for the common man to judge the air quality within his vicinity. The index constitutes part of the Government’s mission to introduce the culture of cleanliness. Institutional and infrastructural measures are being undertaken in order to ensure that the mandate of cleanliness is fulfilled across the country and the Ministry of Environment, Forests & Climate Change proposed to discuss the issues concerned regarding quality of air with the Ministry of Human Resource Development in order to include this issue as part of the sensitisation programme in the course curriculum. While the earlier measuring index was limited to three indicators, the current measurement index had been made quite comprehensive by the addition of five additional parameters. Under the current measurement of air quality there are 8 parameters . 
The initiatives undertaken by the Ministry recently aimed at balancing environment and conservation and development as air pollution has been a matter of environmental and health concerns, particularly in urban areas. The Central Pollution Control Board along with State Pollution Control Boards has been operating National Air Monitoring Program (NAMP) covering 240 cities of the country. In addition, continuous monitoring systems that provide data on near real-time basis are also installed in a few cities. They provide information on air quality in public domain in simple linguistic terms that is easily understood by a common person. Air Quality Index (AQI) is one such tool for effective dissemination of air quality information to people. As such an Expert Group comprising medical professionals, air quality experts, academia, advocacy groups, and SPCBs was constituted and a technical study was awarded to IIT Kanpur. IIT Kanpur and the Expert Group recommended an AQI scheme in 2014. There are six AQI categories, namely Good, Satisfactory, Moderately polluted, Poor, Very Poor, and Severe. The proposed AQI will consider eight pollutants (PM10, PM2.5, NO2, SO2, CO, O3, NH3, and Pb) for which short-term (up to 24-hourly averaging period) National Ambient Air Quality Standards are prescribed. Based on the measured ambient concentrations, corresponding standards and likely health impact, a sub-index is calculated for each of these pollutants. The worst sub-index reflects overall AQI. Associated likely health impacts for different AQI categories and pollutants have been also been suggested, with primary inputs from the medical expert members of the group. The AQI values and corresponding ambient concentrations (health breakpoints) as well as associated likely health impacts for the identified eight pollutants are as follows: |AQI Category (Range)||PM10 (24hr)||PM2.5 (24hr)||NO2 (24hr)||O3 (8hr)||CO (8hr)||SO2 (24hr)||NH3 (24hr)||Pb (24hr)| |Moderately polluted (101-200)||101-250||61-90||81-180||101-168||2.1-10||81-380||401-800||1.1-2.0| |Very poor (301-400)||351-430||121-250||281-400||209-748||17-34||801-1600||1200-1800||3.1-3.5| |AQI||Associated Health Impacts| |Good (0-50)||Minimal impact| |Satisfactory (51-100)||May cause minor breathing discomfort to sensitive people.| |Moderately polluted (101–200)||May cause breathing discomfort to people with lung disease such as asthma, and discomfort to people with heart disease, children and older adults.| |Poor (201-300)||May cause breathing discomfort to people on prolonged exposure, and discomfort to people with heart disease.| |Very poor (301-400)||May cause respiratory illness to the people on prolonged exposure. Effect may be more pronounced in people with lung and heart diseases.| |Severe (401-500)||May cause respiratory impact even on healthy people, and serious health impacts on people with lung/heart disease. The health impacts may be experienced even during light physical activity.| The air quality in Mexico City is reported in IMECAs. The IMECA is calculated using the measurements of average times of the chemicals ozone (O3), sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO) and particles smaller than 10 micrometers (PM10). Singapore uses the Pollutant Standards Index to report on its air quality, with details of the calculation similar but not identical to that used in Malaysia and Hong Kong The PSI chart below is grouped by index values and descriptors, according to the National Environment Agency. 
|PSI||Descriptor||General Health Effects| |51–100||Moderate||Few or none for the general population| |101–200||Unhealthy||Mild aggravation of symptoms among susceptible persons i.e. those with underlying conditions such as chronic heart or lung ailments; transient symptoms of irritation e.g. eye irritation, sneezing or coughing in some of the healthy population.| |201–300||Very Unhealthy||Moderate aggravation of symptoms and decreased tolerance in persons with heart or lung disease; more widespread symptoms of transient irritation in the healthy population.| |301–400||Hazardous||Early onset of certain diseases in addition to significant aggravation of symptoms in susceptible persons; and decreased exercise tolerance in healthy persons.| |Above 400||Hazardous||PSI levels above 400 may be life-threatening to ill and elderly persons. Healthy people may experience adverse symptoms that affect normal activity.| The Ministry of Environment of South Korea uses the Comprehensive Air-quality Index (CAI) to describe the ambient air quality based on the health risks of air pollution. The index aims to help the public easily understand the air quality and protect people's health. The CAI is on a scale from 0 to 500, which is divided into six categories. The higher the CAI value, the greater the level of air pollution. Of values of the five air pollutants, the highest is the CAI value. The index also has associated health effects and a colour representation of the categories as shown below. |0–50||Good||A level that will not impact patients suffering from diseases related to air pollution.| |51–100||Moderate||A level that may have a meager impact on patients in case of chronic exposure.| |101–150||Unhealthy for sensitive groups||A level that may have harmful impacts on patients and members of sensitive groups.| |151–250||Unhealthy||A level that may have harmful impacts on patients and members of sensitive groups (children, aged or weak people), and also cause the general public unpleasant feelings.| |251–500||Very unhealthy||A level that may have a serious impact on patients and members of sensitive groups in case of acute exposure.| The N Seoul Tower on Namsan Mountain in central Seoul, South Korea, is illuminated in blue, from sunset to 23:00 and 22:00 in winter, on days where the air quality in Seoul is 45 or less. During the spring of 2012, the Tower was lit up for 52 days, which is four days more than in 2011. The most commonly used air quality index in the UK is the Daily Air Quality Index recommended by the Committee on Medical Effects of Air Pollutants (COMEAP). This index has ten points, which are further grouped into 4 bands: low, moderate, high and very high. Each of the bands comes with advice for at-risk groups and the general population. |Air pollution banding||Value||Health messages for At-risk individuals||Health messages for General population| |Low||1–3||Enjoy your usual outdoor activities.||Enjoy your usual outdoor activities.| |Moderate||4–6||Adults and children with lung problems, and adults with heart problems, who experience symptoms, should consider reducing strenuous physical activity, particularly outdoors.||Enjoy your usual outdoor activities.| |High||7–9||Adults and children with lung problems, and adults with heart problems, should reduce strenuous physical exertion, particularly outdoors, and particularly if they experience symptoms. People with asthma may find they need to use their reliever inhaler more often. 
Older people should also reduce physical exertion.||Anyone experiencing discomfort such as sore eyes, cough or sore throat should consider reducing activity, particularly outdoors.| |Very High||10||Adults and children with lung problems, adults with heart problems, and older people, should avoid strenuous physical activity. People with asthma may find they need to use their reliever inhaler more often.||Reduce physical exertion, particularly outdoors, especially if you experience symptoms such as cough or sore throat.| The index is based on the concentrations of 5 pollutants. The index is calculated from the concentrations of the following pollutants: Ozone, Nitrogen Dioxide, Sulphur Dioxide, PM2.5 (particles with an aerodynamic diameter less than 2.5 μm) and PM10. The breakpoints between index values are defined for each pollutant separately and the overall index is defined as the maximum value of the index. Different averaging periods are used for different pollutants. |Index||Ozone, Running 8 hourly mean (μg/m3)||Nitrogen Dioxide, Hourly mean (μg/m3)||Sulphur Dioxide, 15 minute mean (μg/m3)||PM2.5 Particles, 24 hour mean (μg/m3)||PM10 Particles, 24 hour mean (μg/m3)| |10||≥ 241||≥ 601||≥ 1065||≥ 71||≥ 101| To present the air quality situation in European cities in a comparable and easily understandable way, all detailed measurements are transformed into a single relative figure: the Common Air Quality Index (or CAQI) Three different indices have been developed by Citeair to enable the comparison of three different time scale:. - An hourly index, which describes the air quality today, based on hourly values and updated every hours, - A daily index, which stands for the general air quality situation of yesterday, based on daily values and updated once a day, - An annual index, which represents the city's general air quality conditions throughout the year and compare to European air quality norms. This index is based on the pollutants year average compare to annual limit values, and updated once a year. However, the proposed indices and the supporting common web site www.airqualitynow.eu are designed to give a dynamic picture of the air quality situation in each city but not for compliance checking. The hourly and daily common indices These indices have 5 levels using a scale from 0 (very low) to > 100 (very high), it is a relative measure of the amount of air pollution. They are based on 3 pollutants of major concern in Europe: PM10, NO2, O3 and will be able to take into account to 3 additional pollutants (CO, PM2.5 and SO2) where data are also available. The calculation of the index is based on a review of a number of existing air quality indices, and it reflects EU alert threshold levels or daily limit values as much as possible. In order to make cities more comparable, independent of the nature of their monitoring network two situations are defined: - Background, representing the general situation of the given agglomeration (based on urban background monitoring sites), - Roadside, being representative of city streets with a lot of traffic, (based on roadside monitoring stations) The indices values are updated hourly (for those cities that supply hourly data) and yesterdays daily indices are presented. Common air quality index legend: The common annual air quality index The common annual air quality index provides a general overview of the air quality situation in a given city all the year through and regarding to the European norms. 
It is also calculated both for background and traffic conditions, but its principle of calculation is different from the hourly and daily indices. It is presented as a distance to a target index, the target being derived from the EU directives (annual air quality standards and objectives):
- If the index is higher than 1: for one or more pollutants the limit values are not met.
- If the index is below 1: on average the limit values are met.
The annual index is aimed at better taking into account long-term exposure to air pollution, based on the distance to the target set by the EU annual norms, those norms being linked most of the time to recommendations and health protection levels set by the World Health Organisation.
The United States Environmental Protection Agency (EPA) has developed an Air Quality Index that is used to report air quality. This AQI is divided into six categories indicating increasing levels of health concern. An AQI value over 300 represents hazardous air quality, and a value of 50 or below represents good air quality. The AQI is based on the five "criteria" pollutants regulated under the Clean Air Act: ground-level ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. The EPA has established National Ambient Air Quality Standards (NAAQS) for each of these pollutants in order to protect public health. An AQI value of 100 generally corresponds to the level of the NAAQS for the pollutant. The U.S. Clean Air Act (1990) requires EPA to review its National Ambient Air Quality Standards every five years to reflect evolving health effects information. The Air Quality Index is adjusted periodically to reflect these changes.
Computing the AQI
The air quality index is a piecewise linear function of the pollutant concentration. At the boundary between AQI categories, there is a discontinuous jump of one AQI unit. To convert from concentration to AQI this equation is used:
I = (Ihigh − Ilow) / (Chigh − Clow) × (C − Clow) + Ilow
- I = the (Air Quality) index,
- C = the pollutant concentration,
- Clow = the concentration breakpoint that is ≤ C,
- Chigh = the concentration breakpoint that is ≥ C,
- Ilow = the index breakpoint corresponding to Clow,
- Ihigh = the index breakpoint corresponding to Chigh.
|O3 (ppb)||O3 (ppb)||PM2.5 (µg/m3)||PM10 (µg/m3)||CO (ppm)||SO2 (ppb)||NO2 (ppb)||AQI||AQI|
|Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Ilow - Ihigh||Category|
|0-54 (8-hr)||-||0.0-12.0 (24-hr)||0-54 (24-hr)||0.0-4.4 (8-hr)||0-35 (1-hr)||0-53 (1-hr)||0-50||Good|
|55-70 (8-hr)||-||12.1-35.4 (24-hr)||55-154 (24-hr)||4.5-9.4 (8-hr)||36-75 (1-hr)||54-100 (1-hr)||51-100||Moderate|
|71-85 (8-hr)||125-164 (1-hr)||35.5-55.4 (24-hr)||155-254 (24-hr)||9.5-12.4 (8-hr)||76-185 (1-hr)||101-360 (1-hr)||101-150||Unhealthy for Sensitive Groups|
|86-105 (8-hr)||165-204 (1-hr)||55.5-150.4 (24-hr)||255-354 (24-hr)||12.5-15.4 (8-hr)||186-304 (1-hr)||361-649 (1-hr)||151-200||Unhealthy|
|106-200 (8-hr)||205-404 (1-hr)||150.5-250.4 (24-hr)||355-424 (24-hr)||15.5-30.4 (8-hr)||305-604 (24-hr)||650-1249 (1-hr)||201-300||Very Unhealthy|
|-||405-504 (1-hr)||250.5-350.4 (24-hr)||425-504 (24-hr)||30.5-40.4 (8-hr)||605-804 (24-hr)||1250-1649 (1-hr)||301-400||Hazardous|
|-||505-604 (1-hr)||350.5-500.4 (24-hr)||505-604 (24-hr)||40.5-50.4 (8-hr)||805-1004 (24-hr)||1650-2049 (1-hr)||401-500||Hazardous|
Suppose a monitor records a 24-hour average fine particle (PM2.5) concentration of 12.0 micrograms per cubic meter. With Clow = 0.0, Chigh = 12.0, Ilow = 0 and Ihigh = 50, the equation above gives an AQI of (50 − 0) / (12.0 − 0.0) × (12.0 − 0.0) + 0 = 50, corresponding to air quality at the top of the "Good" range. 
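To make the breakpoint lookup and interpolation concrete, here is a minimal Python sketch of the calculation described above. The PM2.5 breakpoints are transcribed from the table; the function names, the rounding of the final index, and the handling of out-of-range values are our own illustrative assumptions rather than EPA's AirNow implementation, which also applies specific truncation rules to measured concentrations before the lookup.

```python
# Minimal sketch of the piecewise-linear AQI computation described above.
# Breakpoints are the PM2.5 (24-hr average) column of the table; names and
# rounding behaviour are illustrative assumptions, not EPA's AirNow code.

PM25_BREAKPOINTS = [
    # (Clow, Chigh, Ilow, Ihigh)
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for Sensitive Groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very Unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def sub_index(concentration, breakpoints):
    """Linearly interpolate the AQI inside the breakpoint interval that
    contains the (already truncated) concentration."""
    for c_low, c_high, i_low, i_high in breakpoints:
        if c_low <= concentration <= c_high:
            return round((i_high - i_low) / (c_high - c_low)
                         * (concentration - c_low) + i_low)
    raise ValueError("concentration is outside the published breakpoint table")

def overall_aqi(sub_indices):
    """The reported AQI is the largest ('dominant') per-pollutant sub-index."""
    return max(sub_indices)

if __name__ == "__main__":
    print(sub_index(12.0, PM25_BREAKPOINTS))   # worked example above -> 50
    print(sub_index(35.9, PM25_BREAKPOINTS))   # -> 102, Unhealthy for Sensitive Groups
    print(overall_aqi([sub_index(12.0, PM25_BREAKPOINTS), 75]))  # dominant value wins -> 75
```

Running the sketch on the worked example (a 24-hour PM2.5 average of 12.0 µg/m3) returns 50, the top of the "Good" band; the overall reported AQI is simply the largest of the per-pollutant sub-indices, as discussed in the following paragraph.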
To convert an air pollutant concentration to an AQI, EPA has developed a calculator.
If multiple pollutants are measured at a monitoring site, then the largest or "dominant" AQI value is reported for the location. The ozone AQI between 100 and 300 is computed by selecting the larger of the AQI calculated with a 1-hour ozone value and the AQI computed with the 8-hour ozone value.
8-hour ozone averages do not define AQI values greater than 300; AQI values of 301 or greater are calculated with 1-hour ozone concentrations. 1-hour SO2 values do not define AQI values greater than 200; AQI values of 201 or greater are calculated with 24-hour SO2 concentrations.
Real-time monitoring data from continuous monitors are typically available as 1-hour averages. However, computation of the AQI for some pollutants requires averaging over multiple hours of data. (For example, calculation of the ozone AQI requires an 8-hour average and calculation of the PM2.5 AQI requires a 24-hour average.) To accurately reflect the current air quality, the multi-hour average used for the AQI computation should be centered on the current time, but as concentrations of future hours are unknown and are difficult to estimate accurately, EPA uses surrogate concentrations to estimate these multi-hour averages. For reporting the PM2.5 AQI, this surrogate concentration is called the NowCast. The NowCast is a particular type of weighted average constructed from the most recent 12 hours of PM2.5 data. EPA estimates 8-hour average ozone values in real time using the most recent 1-hour ozone average and the historical relationship between 1-hour maximum and 8-hour maximum values developed for each ozone monitoring site.
Public Availability of the AQI
Real-time monitoring data and forecasts of air quality that are color-coded in terms of the air quality index are available from EPA's AirNow web site. Historical air monitoring data, including AQI charts and maps, are available at EPA's AirData website.
History of the AQI
The AQI made its debut in 1968, when the National Air Pollution Control Administration undertook an initiative to develop an air quality index and to apply the methodology to Metropolitan Statistical Areas. The impetus was to draw public attention to the issue of air pollution and indirectly push responsible local public officials to take action to control sources of pollution and enhance air quality within their jurisdictions.
Jack Fensterstock, the head of the National Inventory of Air Pollution Emissions and Control Branch, was tasked with leading the development of the methodology and compiling the air quality and emissions data necessary to test and calibrate the resultant indices.
The initial iteration of the air quality index used standardized ambient pollutant concentrations to yield individual pollutant indices. These indices were then weighted and summed to form a single total air quality index. The overall methodology could use concentrations taken from ambient monitoring data or predicted by means of a diffusion model. The concentrations were converted into a standard statistical distribution with a preset mean and standard deviation. The resultant individual pollutant indices were assumed to be equally weighted, although weights other than unity could be used. Likewise, the index could incorporate any number of pollutants, although it was only used to combine SOx, CO, and TSP because data for other pollutants were not available.
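The description above is only qualitative; the actual scaling constants and weights used in 1968 are in the Fensterstock et al. paper cited in the references. Purely as a rough sketch of the combined-index idea (standardize each pollutant concentration against a preset mean and standard deviation, weight the standardized values, and sum them), one could write something like the following, with every numeric constant invented for illustration.

```python
# Rough sketch of the 1968-style combined index described above: standardize
# each pollutant concentration against a preset mean and standard deviation,
# apply a weight (equal weights here), and sum the results into one total
# index. All numeric constants below are INVENTED for illustration; the
# original scaling constants are in the Fensterstock et al. paper cited below.

PRESET = {
    # pollutant: (preset mean, preset standard deviation, weight)
    "SOx": (0.02, 0.01, 1.0),    # ppm, illustrative
    "CO":  (5.0,  2.0,  1.0),    # ppm, illustrative
    "TSP": (100.0, 40.0, 1.0),   # µg/m3, illustrative
}

def combined_index(concentrations: dict) -> float:
    """Weighted sum of standardized pollutant concentrations."""
    total = 0.0
    for pollutant, value in concentrations.items():
        mean, std, weight = PRESET[pollutant]
        total += weight * (value - mean) / std   # standardized sub-index
    return total

# Example: a city with moderately elevated SOx and TSP readings.
print(combined_index({"SOx": 0.035, "CO": 6.0, "TSP": 160.0}))  # 3.5
```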
While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application. - "International Air Quality". Retrieved 20 August 2015. - National Weather Service Corporate Image Web Team. "NOAA's National Weather Service/Environmental Protection Agency - United States Air Quality Forecast Guidance". Retrieved 20 August 2015. - "Step 2 - Dose-Response Assessment". Retrieved 20 August 2015. - Myanmar government (2007). "Haze". Archived from the original on 27 January 2007. Retrieved 2007-02-11. - "Air Quality Index - American Lung Association". American Lung Association. Retrieved 20 August 2015. - "Spare the Air - Summer Spare the Air". Retrieved 20 August 2015. - "FAQ: Use of masks and availability of masks". Retrieved 20 August 2015. - "Air Quality Index (AQI) - A Guide to Air Quality and Your Health". US EPA. 9 December 2011. Retrieved 8 August 2012. - Jay Timmons (13 August 2014). "The EPA's Latest Threat to Economic Growth". WSJ. Retrieved 20 August 2015. - "World Air Quality Index". Retrieved 20 August 2015. - "Environment Canada - Air - AQHI categories and explanations". Ec.gc.ca. 2008-04-16. Retrieved 2011-11-11. - Hsu, Angel. "China’s new Air Quality Index: How does it measure up?". Retrieved 8 February 2014. - "Air Quality Health Index". Government of the Hong Kong Special Administrative Region. Retrieved 9 February 2014. - "Focus on urban air quality daily". Archived from the original on 2004-10-25. - "People's Republic of China Ministry of Environmental Protection Standard: Technical Regulation on Ambient Air Quality Index (Chinese PDF)" (PDF). - Rama Lakshmi (17 October 2014). "India launches its own Air Quality Index. Can its numbers be trusted?". Washington Post. Retrieved 20 August 2015. - "National Air Quality Index (AQI) launched by the Environment Minister AQI is a huge initiative under ‘Swachh Bharat’". Retrieved 20 August 2015. - "India launches index to measure air quality". timesofindia-economictimes. Retrieved 20 August 2015. - "::: Central Pollution Control Board :::". Retrieved 20 August 2015. - "MEWR - Key Environment Statistics - Clean Air". App.mewr.gov.sg. 2011-06-08. Retrieved 2011-11-11. - ."National Environment Agency - Calculation of PSI" (PDF). Retrieved 2012-06-15. - "National Environment Agency". App2.nea.gov.sg. Retrieved 2011-11-11. - "What's CAI". Air Korea. Retrieved 25 October 2015. - "Improved Air Quality Reflected in N Seoul Tower". Chosun Ilbo. 18 May 2012. Retrieved 29 July 2012. - COMEAP. "Review of the UK Air Quality Index". COMEAP website. - "Daily Air Quality Index". Air UK Website. Defra. - Garcia, Javier; Colosio, Joëlle (2002). Air-quality indices : elaboration, uses and international comparisons. Presses des MINES. ISBN 2-911762-36-3. - "Indices definition". Air quality. Retrieved 9 August 2012. - David Mintz (February 2009). Technical Assistance Document for the Reporting of Daily Air Quality – the Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. EPA-454/B-09-001. Retrieved 9 August 2012. 
- Revised Air Quality Standards For Particle Pollution And Updates To The Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. 2013.
- "AQI Calculator: Concentration to AQI". Retrieved 9 August 2012.
- "AirNow API Documentation". Retrieved 20 August 2015.
- "How are your ozone maps calculated?". Retrieved 20 August 2015.
- "AirNow". Retrieved 9 August 2012.
- "AirData - US Environmental Protection Agency". Retrieved 20 August 2015.
- J.C. Fensterstock et al., "The Development and Utilization of an Air Quality Index," Paper No. 69-73, presented at the 62nd Annual Meeting of the Air Pollution Control Administration, June 1969.
- CAQI in Europe - AirqualityNow website
- CAI at Airkorea.or.kr - website of South Korea Environmental Management Corp.
- AQI at airnow.gov - cross-agency U.S. Government site
- New Mexico Air Quality and API data - Example of how New Mexico Environment Department publishes their Air Quality and API data.
- AQI at Meteorological Service of Canada
- The UK Air Quality Archive
- API at JAS (Malaysian Department of Environment)
- API at Hong Kong - Environmental Protection Department of the Government of the Hong Kong Special Administrative Region
- San Francisco Bay Area Spare-the-Air
- AQI explanation
- Malaysia Air Pollution Index
- AQI in Thailand provinces and in Bangkok
- Unofficial PM25 AQI in Hanoi, Vietnam
https://en.wikipedia.org/wiki/Air_Quality_Index
4.09375
At the center of our solar system is an enormous nuclear generator. The Earth revolves around this massive body at an average distance of 93 million miles (149.6 million kilometers). It's a star we call the sun.
The sun provides us with the energy necessary for life. But could scientists create a miniaturized version here on Earth? It's not just possible -- it's already been done.
If you think of a star as a nuclear fusion machine, mankind has duplicated the nature of stars on Earth. But this revelation has qualifiers. The examples of fusion here on Earth are on a small scale and last for just a few seconds at most.
To understand how scientists can make a star, it's necessary to learn what stars are made of and how fusion works.
The sun is about 75 percent hydrogen and 24 percent helium. Heavier elements make up the final percent of the sun's mass. The core of the sun is intensely hot -- temperatures are greater than 15 million kelvins (nearly 27 million degrees Fahrenheit or just under 15 million degrees Celsius). At these temperatures, the hydrogen atoms absorb so much energy that they fuse together.
This isn't a trivial matter. The nucleus of a hydrogen atom is a single proton, and fusing two protons together requires enough energy to overcome the electromagnetic force. That's because protons are positively charged, and like charges repel each other, much as the like poles of two magnets do. But if you have enough energy to overcome this repulsion, you can fuse the two nuclei into one.
What you're left with after this initial fusion is deuterium, an isotope of hydrogen. It's an atom with one proton and one neutron. (In the process, one of the two protons converts into a neutron, giving off a positron and a neutrino.) Fusing deuterium with hydrogen creates helium-3. Fusing two helium-3 atoms together creates helium-4 and two hydrogen atoms. If you break all that down, it essentially means that four hydrogen atoms fuse to create a single helium-4 atom.
Here's where energy comes into play. A helium-4 atom has less mass than four hydrogen atoms collectively. So where does that extra mass go? It's converted into energy. And as Einstein's famous equation tells us, energy is equal to the mass of an object times the speed of light squared. That means the mass of even the tiniest particle is equivalent to an enormous amount of energy.
So how can scientists create a star? Creating enough energy to overcome the electromagnetic force isn't easy, but the United States managed to do it on Nov. 1, 1952. That's when Ivy Mike, the world's first hydrogen bomb, detonated on Elugelab Island.
The bomb had two stages. The first stage was a fission bomb. Fission is the process of splitting a nucleus; it's the type of bomb the United States used on Nagasaki and Hiroshima to end World War II. The fission stage of Ivy Mike was necessary to create the massive amount of energy required to overcome the electromagnetic repulsion of hydrogen nuclei and fuse them into helium. Heat from the initial explosion transferred through the lead casing of the bomb to a flask containing liquid deuterium. A plutonium rod inside the flask acted as the ignition for the fusion reaction. The resulting explosion was 10.4 megatons in size. It completely obliterated the island, leaving behind a crater 164 feet deep (nearly 50 meters) and 1.2 miles (1.9 kilometers) across [source: Brookings Institution]. For a brief moment, man had harnessed the power of the stars and turned it into a weapon of immense destructive force. The thermonuclear age had begun.
Laboratories around the world are now trying to find a way to harness fusion as an energy source.
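To put a rough number on that mass-to-energy conversion, the short calculation below compares the mass of four hydrogen atoms with that of one helium-4 atom and converts the difference using Einstein's relation (expressed through the standard 931.494 MeV per atomic mass unit). The atomic masses are standard reference values; the little script itself is only an illustration and is not drawn from this article's sources.

```python
# Back-of-the-envelope check of the fusion energy balance described above:
# four hydrogen atoms are slightly heavier than one helium-4 atom, and the
# missing mass is released as energy (E = mc^2).

MASS_H   = 1.007825   # atomic mass of hydrogen-1, in atomic mass units (u)
MASS_HE4 = 4.002602   # atomic mass of helium-4, in u
U_TO_MEV = 931.494    # energy equivalent of 1 u in MeV, i.e. c^2 in these units

mass_in     = 4 * MASS_H            # four hydrogen atoms going in
mass_defect = mass_in - MASS_HE4    # the mass that "disappears"

print(f"mass defect: {mass_defect:.6f} u "
      f"({100 * mass_defect / mass_in:.2f}% of the starting mass)")
print(f"energy released per helium-4 nucleus: {mass_defect * U_TO_MEV:.1f} MeV")
# Roughly 0.7 percent of the mass is converted, about 26-27 MeV per helium
# nucleus formed: tiny per reaction, enormous per kilogram of hydrogen.
```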
If researchers can find a way to create sustainable, controllable fusion reactions, they could use fusion to provide massive amounts of power for millions of years. There's no shortage of fuel -- hydrogen is plentiful, and the oceans contain large amounts of deuterium. But getting to the point where we can harness fusion for power is going to take years of research and billions of dollars in resources. The amount of power required to initiate fusion, coupled with the intense heat created by the event, makes it difficult to build a facility capable of containing a reaction. Some scientists are looking at massive lasers as a way to ignite a fusion event. Others are exploring options with plasma -- the fourth state of matter. But no one has unlocked the secret just yet.
So, we can create a star on Earth -- at least for a short time. But it remains to be seen whether we can sustain such a creation and harness its astounding energy. Learn more about stars and energy by following the links and sources below.
- Brookings Institution. "The 'Mike' test, November 1, 1952." 2010. (May 20, 2010) http://www.brookings.edu/projects/archive/nucweapons/mike.aspx
- Cox, Brian. "Can we make a star on Earth?" BBC Horizons. February 2009. (May 19, 2010) http://www.bbc.co.uk/programmes/b00hr6bk
- Cox, Brian. "How to build a star on Earth." BBC News. Feb. 16, 2009. (May 18, 2010) http://news.bbc.co.uk/2/hi/sci/tech/7891787.stm
- Gray, Richard. "Scientists plan to ignite tiny man-made star." Telegraph. Dec. 27, 2008. (May 18, 2010) http://www.telegraph.co.uk/science/science-news/3981697/Scientists-plan-to-ignite-tiny-man-made-star.html
- Los Alamos National Labs. "Helium." Dec. 15, 2003. (May 18, 2010) http://periodic.lanl.gov/elements/2.html
- Los Alamos National Labs. "Hydrogen." Dec. 15, 2003. (May 18, 2010) http://periodic.lanl.gov/elements/1.html
- NASA. "Sun." World Book at NASA. (May 18, 2010) http://www.nasa.gov/worldbook/sun_worldbook.html
- The Astrophysics Spectator. "Hydrogen Fusion." Oct. 6, 2004. (May 19, 2010) http://www.astrophysicsspectator.com/topics/stars/FusionHydrogen.html
http://science.howstuffworks.com/create-star-on-earth.htm/printable