Viral replication is the term used by virologists to describe the formation of biological viruses during the infection process in the target host cells. Viruses must first get into the cell before viral replication can occur. From the perspective of the virus, the purpose of viral replication is to allow production and survival of its kind. By generating abundant copies of its genome and packaging these copies into virions, the virus is able to continue infecting new hosts. Replication strategies vary greatly among viruses and depend on the type of genome they carry. Most DNA viruses assemble in the nucleus, while most RNA viruses develop solely in the cytoplasm.

Baltimore Classification System

Viruses are classified into seven groups according to their genome type, each of which contains its own families of viruses, which in turn have differing replication strategies. David Baltimore, a Nobel Prize-winning biologist, devised the Baltimore Classification System to classify different viruses based on their unique replication strategies. There are seven different replication strategies under this system (Baltimore Classes I, II, III, IV, V, VI, VII). The seven classes of viruses are described here briefly and in generalities.

Class 1: Double-stranded DNA viruses

This type of virus usually must enter the host nucleus before it is able to replicate. Some of these viruses require host cell polymerases to replicate their genome, while others, such as adenoviruses or herpesviruses, encode their own replication factors. In either case, however, replication of the viral genome is highly dependent on a cellular state permissive to DNA replication and, thus, on the cell cycle. The virus may force the cell to undergo cell division, which may lead to transformation of the cell and, ultimately, cancer. An example of a family within this classification is the Adenoviridae. There is only one well-studied example of a class 1 family that does not replicate within the nucleus: the Poxvirus family, which comprises highly pathogenic viruses that infect vertebrates, including the smallpox virus.

Class 2: Single-stranded DNA viruses

Viruses that fall under this category are not as well-studied, but are still highly relevant to vertebrates. Two examples are the Circoviridae and Parvoviridae. They replicate within the nucleus and form a double-stranded DNA intermediate during replication. A human circovirus called TTV is included within this classification; it is found in almost all humans and infects nearly every major organ asymptomatically.

Class 3: Double-stranded RNA viruses

Like most viruses with RNA genomes, double-stranded RNA viruses do not rely on host polymerases for replication to the extent that viruses with DNA genomes do. Double-stranded RNA viruses are not as well-studied as other classes. This class includes two major families, the Reoviridae and Birnaviridae. Replication is monocistronic and the genomes are individually segmented, meaning that each gene codes for only one protein, unlike other viruses, which exhibit more complex translation.

Classes 4 & 5: Single-stranded RNA viruses

These viruses come in two types, but both replicate primarily in the cytoplasm, and their replication is not as dependent on the cell cycle as that of DNA viruses. Alongside the double-stranded DNA viruses, this is one of the most-studied groups of viruses.
Class 4: Single-stranded RNA viruses - Positive-sense

The positive-sense RNA viruses, and indeed all genes defined as positive-sense, can be directly accessed by host ribosomes to immediately form proteins. These can be divided into two groups, both of which replicate in the cytoplasm:

- Viruses with polycistronic mRNA, where the genomic RNA forms the mRNA and is translated into a polyprotein product that is subsequently cleaved to form the mature proteins. The virus can thus produce several proteins from the same strand of RNA, reducing the size of its genome.
- Viruses with complex transcription, for which subgenomic mRNAs, ribosomal frameshifting, and proteolytic processing of polyproteins may be used. All of these are different mechanisms for producing proteins from the same strand of RNA.

Class 5: Single-stranded RNA viruses - Negative-sense

The negative-sense RNA viruses, and indeed all genes defined as negative-sense, cannot be directly accessed by host ribosomes to immediately form proteins. Instead, they must be transcribed by viral polymerases into the "readable" complementary positive-sense form. These can also be divided into two groups:

- Viruses containing nonsegmented genomes, for which the first step in replication is transcription from the negative-stranded genome by the viral RNA-dependent RNA polymerase to yield monocistronic mRNAs that code for the various viral proteins. A positive-sense genome copy that serves as template for production of the negative-strand genome is then produced. Replication is within the cytoplasm.
- Viruses with segmented genomes, for which replication occurs in the nucleus and for which the viral RNA-dependent RNA polymerase produces monocistronic mRNAs from each genome segment.

The largest difference between the two is the location of replication.

Class 6: Positive-sense single-stranded RNA viruses that replicate through a DNA intermediate

A well-studied family in this class is the retroviruses. One defining feature is the use of reverse transcriptase to convert the positive-sense RNA into DNA. Instead of using the RNA directly as a template for proteins, these viruses use reverse transcriptase to create DNA templates, which are spliced into the host genome by the enzyme integrase. Replication can then commence with the help of the host cell's polymerases.

Class 7: Double-stranded DNA viruses that replicate through a single-stranded RNA intermediate

This small group of viruses, exemplified by the Hepatitis B virus, has a double-stranded, gapped genome that is subsequently filled in to form a covalently closed circle (cccDNA) that serves as a template for production of viral mRNAs and a subgenomic RNA. The pregenomic RNA serves as template for the viral reverse transcriptase and for production of the DNA genome.
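For quick reference, the seven classes above can be collected into a small lookup table. The following Python sketch is merely an illustrative summary of this article's taxonomy; the data structure and names are my own, not part of the Baltimore system, and the example families are only those named in the text:

```python
# Baltimore classes as described in the article above.
# Each entry: class number -> (genome type, example families named in the text).
BALTIMORE_CLASSES = {
    1: ("double-stranded DNA", ["Adenoviridae", "Poxviruses"]),
    2: ("single-stranded DNA", ["Circoviridae", "Parvoviridae"]),
    3: ("double-stranded RNA", ["Reoviridae", "Birnaviridae"]),
    4: ("positive-sense single-stranded RNA", []),
    5: ("negative-sense single-stranded RNA", []),
    6: ("positive-sense ssRNA with a DNA intermediate", ["retroviruses"]),
    7: ("double-stranded DNA with an RNA intermediate", ["Hepatitis B virus"]),
}

for number, (genome, families) in BALTIMORE_CLASSES.items():
    examples = ", ".join(families) if families else "no family named in this article"
    print(f"Class {number}: {genome} (examples: {examples})")
```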
http://en.wikipedia.org/wiki/Viral_replication
What is Numeracy/QL/QR?

The first known use of the term numerate appeared in the 1959 UK Crowther Report, when the authors used the term to "coin a word to represent the mirror image of literacy" (Crowther Report 1959: 269). More recently, the 1982 Cockcroft Report elaborated on the meaning of the term: "We would wish the word 'numerate' to imply the possession of two attributes. The first of these is an 'at-homeness' with numbers and an ability to make use of mathematical skills which enables an individual to cope with the practical mathematical demands of his everyday life. The second is an ability to have some appreciation and understanding of information which is presented in mathematical terms, for instance in graphs, charts or tables or by reference to percentage increase or decrease. Taken together, these imply that a numerate person should be expected to be able to appreciate and understand some of the ways in which mathematics can be used as a means of communication."

Numeracy is also frequently used interchangeably with such terms as Quantitative Literacy (QL) or Quantitative Reasoning (QR).[1] According to the Association of American Colleges and Universities (AAC&U), these terms refer to a "'habit of mind,' competency, and comfort in working with numerical data. Individuals with strong QL skills possess the ability to reason and solve quantitative problems from a wide array of authentic contexts and everyday life situations. They understand and can create sophisticated arguments supported by quantitative evidence and they can clearly communicate those arguments in a variety of formats (using words, tables, graphs, mathematical equations, etc., as appropriate)." Indeed, Bernard Madison (2003: 3) defines QL as "the ability to understand and use numbers and data in everyday life," and Lynn Arthur Steen (2004: 4) describes it as "a practical, robust habit of mind anchored in data, nourished by computers, and employed in every aspect of an alert, informed life." A collection of different views on QL is provided here. Some of the key skills that make up QL/QR include reading graphical displays, modeling real-world phenomena, solving practical problems through the use of data, justifying conclusions, and critiquing research designs (Johnson and Kaplan N.d.). The Quantitative Literacy Rubric[2] of the Association of American Colleges and Universities (AAC&U) highlights some of the essential skills associated with Quantitative Literacy.

The Importance of Quantitative Reasoning

Quantitative Reasoning (QR)/Quantitative Literacy (QL) skills are essential for social justice: "Without quantitative understanding . . . laypersons may be relatively powerless compared with a small number of individuals with specialized knowledge. . . . Informed political decision-making, retirement planning, active parenting, and the vast majority of choices we make in our personal, occupational, and civic lives can be better served by improved quantitative [reasoning skills]" (Wiest et al. 2007: 47, 53). Indeed, "the scientifically and mathematically illiterate are outsiders in a society in which effective participation in public dialogue presumes a grasp of basic science and mathematics" (Carnevale and Desrochers 2003: 29). Poor quantitative skills have serious social and economic consequences, ranging from faulty decisions and medical mistakes by patients and healthcare professionals (see, e.g., Ancker and Abramson 2012; Cavanaugh et al. 2008; Master et al. 2010; Nelson et al.
2008; Williams et al. 1995) to a judicial system that is fraught with errors (Schneps and Colmez 2013), and everything in between. Paulos (2001: 6) argues that "innumerate people characteristically have a strong tendency to personalize--to be misled by their own experiences, or by the media's focus on individuals and drama." He also points to a belief in pseudoscience as one consequence of innumeracy. While research has shown that many students lack the quantitative skills needed for personal and professional success, this disadvantage is particularly acute among minority students. As Rivera-Batiz (1992: 313) has noted, "low quantitative literacy appears to be critical in explaining the lower probability of employment of young Black Americans relative to Whites."

Just as poor quantitative skills inhibit success, strong ones have an empowering effect. Murnane, Willett, and Levy (1995) found that basic cognitive skills, including the ability to follow directions, manipulate fractions and decimals, and interpret line graphs, have become increasingly important predictors of wages due to rising demands in the labor market. Indeed, quantitative reasoning skills, including the ability to analyze, present, and communicate about data, are critical for success in today's technologically oriented and data-driven world. "Work roles in fields as diverse as personnel, city planning, marketing, and welfare administration require the ability to use research by others intelligently, to conduct simple research, and to collaborate with professional researchers" (Markham 1991: 464). It is not surprising that numeracy has been closely linked to economic performance (Robinson 1998).

Numeracy Outside the United States

Numeracy is increasingly being seen as an essential Adult Basic Education (ABE) skill, and quantitative reasoning is increasingly being promoted by various governments and national organizations in countries such as Canada, New Zealand, and the United Kingdom. There is also a rich body of literature on these initiatives in other countries, some of which is referenced on our link to resources on national initiatives to promote numeracy.

Videos about Quantitative Reasoning

John Allen Paulos speaks about probability and religion.

A panel discussion on quantitative literacy from the Quantway™ and Statway™ 2011 Summer Institute, hosted by Rebecca Hartzler, senior associate of the Carnegie Foundation, and introduced by Jane Muhich, director of Quantway™ and productive persistence at the Carnegie Foundation. Panelists include Bernie Madison, University of Arkansas; Eric Gaze, Bowdoin College; and Caren Diefenderfer, Hollins University.

Bernie Madison speaks about the importance of Quantitative Reasoning.

Videos about Numbers

Powers of Ten "takes us on an adventure in magnitudes. Starting at a picnic by the lakeside in Chicago, this famous film transports us to the outer edges of the universe. Every ten seconds we view the starting point from ten times farther out until our own galaxy is visible only as a speck of light among many others. Returning to Earth with breathtaking speed, we move inward, into the hand of the sleeping picnicker, with ten times more magnification every ten seconds. Our journey ends inside a proton of a carbon atom within a DNA molecule in a white blood cell. POWERS OF TEN © 1977 EAMES OFFICE LLC (Available at www.eamesoffice.com)."

"This is an animation created by The Danse. It gives the viewer an idea of exactly how much a trillion dollars is."
"CNN asks a Temple University mathematics professor [John Allen Paulos] how much $1 trillion actually is. The story notes Senate Republican Leader Mitch McConnell is correct when he says that if you spent $1 million per day starting in the year 0, you still would not have spent $1 trillion by 2009." . 1It should be noted that some researchers and educators argue for a distinction among various terms including Quantitative Literacy (QL), Quantitative Reasoning (QR), and Statistical Literacy. For example, Powell and Leveson (2002) defined quantitative literacy (QL) as "a basic familiarity with numbers, arithmetic and graphs. As in English Literacy, it involves an understanding of the basic rules (grammar) of the language, in the case of mathematics, and an ability of manipulate numbers." In contrast, they define quantitative reasoning (QR) as "the application of logic to problems and the ability to understand the real world meaning of numbers and mathematical statements." They further argue, "In our opinion, the concepts of quantitative literacy (QL) and quantitative reasoning (QR) are end-members of a continuous spectrum of quantitative concepts." Statistical literacy, on the other hand, may be defined as "understanding the basic language of statistics (e.g., knowing what statistical terms and symbols mean and being able to read statistical graphs), and understanding some fundamental ideas of statistics" (Aliaga et al. 2010: 14). 2Reprinted with permission from Assessing Outcomes and Improving Achievement: Tips and tools for Using Rubrics, edited by Terrel L. Rhodes. Copyright 2010 by the Association of American Colleges and Universities. Aliaga, Martha, George Cobb, Carolyn Cuff, Joan Garfield (Chair), Rob Gould, Robin Lock, Tom Moore, Allan Rossman, Bob Stephenson, Jessica Utts, Paul Velleman, and Jeff Witmer. 2010. Guidelines for Assessment and Instruction in Statistics Education: College Report. Alexandria, VA: American Statistical Association. Ancker, Jessica S. and Erika Abramson. 2012. "Doctors and Quantitative Literacy." Paper presented at the Annual meeting of the National Numeracy Network (NNN). NY, NY. Association of American Colleges and Universities. 2010. Quantitative Literacy VALUE Rubric. Washington, DC: Association of American Colleges and Universities. Carnevale, Anthony P., and Donna M. Desrochers." 2003. "The Democratization of Mathematics." In Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, edited by Bernard L. Madison and Lynn Arthur Steen. Princeton, NJ: National Council on Education and the Disciplines. Pp. 21-31. Cavanaugh, K., M.M. Huizinga, K.A. Wallston, T. Gebretsadik, A. Shintani, D. Davis, R.P. Gregory, L. Fuchs, R. Malone, A. Cherrington, M. Pignone, D.A. DeWalt, T.A. Elasy, and R.L. Rothman. 2008. "Association of Numeracy and Diabetes Control." Annals of Internal Medicine 148(10): 737-46. Cockcroft Report. 1982. Crown copyright material is reproduced with the permission of the Controller of HMSO and the Queen's Printer for Scotland. Pg. 11. Crowther Report. 1959. Crown copyright material is reproduced with the permission of the Controller of HMSO and the Queen's Printer for Scotland. Pg. 269. Johnson, Yvette Nicole and Jennifer J. Kaplan. N.d. "Assessing the Quantitative Literacy of Students at a Large Public Research University." Michigan State University. Madison, Bernard L. 2003. "The Many Faces of Quantitative Literacy." In Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, edited by Bernard L. 
Madison and Lynn Arthur Steen. Princeton, NJ: National Council on Education and the Disciplines, Pp 3-6. Markham, William T. 1991. ''Research Methods in the Introductory Course: To Be or Not to Be?'' Teaching Sociology 19(4): 464-71. Master, V.A., T.V. Johnson, A. Abbasi, S.S. Ehrlich, R.S. Kleris, S. Abbasi, A. Prater, A. Owen-Smith, and M. Goodman. 2010. "Poorly Numerate Patients in an Inner City Hospital Misunderstood the American Urological Association Symptom Score." Urology 75(1): 148-152. Murnane, Richard J., John B. Willett, and Frank Levy. 1995. "The Growing Importance of Cognitive Skills in Wage Determination." The Review of Economics and Statistics 77(2): 251-266. Nelson, Wendy, Valerie F. Reyna, Angela Fagerlin, Isaac Lipkus, and Ellen Peters. 2008. "Clinical Implications of Numeracy: Theory and Practice." Annals of Behavioral Medicine 35(3): 261-274. Paulos, John Allen. 2001. Innumeracy: Mathematical Illiteracy and Its Consequences. New York: Hill and Wang. Powell, Wayne and David Leveson. 2002. "The Unique Role of Introductory Geology Courses in Teaching Quantitative Reasoning." Available URL: http://academic.brooklyn.cuny.edu/quant/powell.htm Rivera-Batiz, Francisco L. 1992. "Quantitative Literacy and the Likelihood of Employment among Young Adults in the United States." Journal of Human Resources27(2): 313-328. Robinson, Peter. 1998. "Literacy, Numeracy and Economic Performance." New Political Economy 3(1): 143-149. Steen, Lynn Arthur. 2004. ''Everything I Needed to Know about Averages I Learned in College.'' Peer Review 6(4): 4-8. Wiest, Lynda R., Heidi J. Higgins, and Janet Hart Frost. 2007. "Quantitative Literacy for Social Justice." Equity & Excellence in Education 40: 47-55. Williams, M.V., R.M. Parker, D.W. Baker, N.S. Parikh, K. Pitkin, W.C. Coates, and J.R. Nurss. 1995. "Inadequate Functional Health Literacy among Patients at Two Public Hospitals." Journal of the American Medical Association 274(21): 1677-82. The Numeracy Infusion Course for Higher Education Support for this project has been provided by the National Science Foundation's (NSF) Transforming Undergraduate Education in Science, Technology, Engineering and Mathematics (STEM) (TUES) award #1121844. Any opinions, findings, and conclusions or recommendations expressed in this web site of those of the authors and do not necessarily represent the views of the National Science Foundation. Please email Esther Wilder (email@example.com) if you have any questions about this project.
http://www.nagt.org/NICHE/index.html
Algal Blooms in Fresh Water

Aquatic ecologists are concerned with blooms (very high cell densities) of algae in reservoirs, lakes, and streams because their occurrence can have ecological, aesthetic, and human health impacts. In waterbodies used for water supply, algal blooms can cause physical problems (e.g., clogging screens) or can cause taste and odor problems in waters used for drinking. Blooms involving toxin-producing species can pose serious threats to animals and humans.

Algae in Aquatic Ecosystems

The term "algae" is generally used to refer to a wide variety of different and dissimilar photosynthetic organisms, generally microscopic. Depending on the species, algae can inhabit fresh or salt water. In modern taxonomic systems, algae are usually assigned to one of six divisions (equivalent to phyla; see the accompanying box on the five kingdoms). The misnamed blue-green algae are often grouped with algae because of the photosynthetic pigments contained within their cells. However, these organisms are actually photosynthetic bacteria assigned to the group cyanobacteria. Fresh-water algae, also called phytoplankton, vary in shape and color, and are found in a large range of habitats, such as ponds, lakes, reservoirs, and streams. They are a natural and essential part of the ecosystem. In these habitats, the phytoplankton are the base of the aquatic food chain. Small fresh-water crustaceans and other small animals consume the phytoplankton and in turn are consumed by larger animals.

Bloom Occurrences and Impact

Under certain conditions, several species of true algae as well as the cyanobacteria are capable of causing various nuisance effects in fresh water, such as excessive accumulations of foams and scums and discoloration of the water. When the numbers of algae in a lake or a river increase explosively, an algal "bloom" is the result. Lakes, ponds, and slow-moving rivers are most susceptible to blooms. Algal blooms are natural occurrences, and may occur with regularity (e.g., every summer), depending on weather and water conditions. The likelihood of a bloom depends on local conditions and characteristics of the particular body of water. Blooms generally occur where there are high levels of nutrients, together with warm, sunny, calm conditions. However, human activity often can trigger or accelerate algal blooms. Natural sources of nutrients such as phosphorus or nitrogen compounds can be supplemented by a variety of human activities. For example, in rural areas, agricultural runoff from fields can wash fertilizers into the water. In urban areas, nutrient sources can include treated wastewaters from septic systems and sewage treatment plants, and urban stormwater runoff that carries nonpoint-source pollutants such as lawn fertilizers.

An algal bloom contributes to the natural "aging" process of a lake, and in some lakes can provide important benefits by boosting primary productivity. But in other cases, recurrent or severe blooms can cause dissolved oxygen depletion as the large numbers of dead algae decay. In highly eutrophic (enriched) lakes, algal blooms may lead to anoxia and fish kills during the summer. In terms of human values, the odors and unattractive appearance of algal blooms can detract from the recreational value of reservoirs, lakes, and streams. Repeated blooms may cause property values of lakeside or riverside tracts to decline. Some algae produce toxic chemicals that pose a threat to fish, other aquatic organisms, wild and domestic animals, and humans.
The toxins are released into the water when the algae die and decay. The most common and visible nuisance algae in fresh water, and the species that are often toxic, are the cyanobacteria. A cyanobacterial bloom will form on the surface and can accumulate downwind, forming a thick scum that sometimes resembles paint floating on the water. Because these mats are blown close to shore, humans and wild and domestic animals can come into contact with the unsightly material.

Blooms of toxic species of algae and cyanobacteria can flood the water environment with the biotoxins they produce. When toxic, blooms can cause human illnesses such as gastroenteritis (if the toxin is ingested) and lung irritation (if the toxin becomes aerosolized and hence airborne). Other cyanobacterial toxins are less drastic, causing skin irritation in people who swim through an algal bloom. Toxicity can sometimes cause severe illness and death in animals that consume the biotoxin-containing water. Cyanobacterial toxins are known to affect bean photosynthesis when they are present in irrigation water. The toxins also can modify zooplankton communities, reduce growth of trout, and interfere with development of fish and amphibians. In some cases, toxins can be bioconcentrated by fresh-water clams.

Microcystins, of which about fifty variants are known, comprise the most common group of cyanobacterial toxins. Among these toxins are ones that, if ingested in sufficient quantity, can harm the liver (hepatotoxins) or nervous system (neurotoxins). Microcystins can persist in water because they are stable in both hot and cold water. Even boiling the water, which makes the water safe from harmful bacteria, will not destroy microcystins. As a result of this threat, the Canadian government implemented a recommended water-quality guideline of 1.5 μg per liter of microcystin-LR (the most common hepatotoxin), and other countries will likely follow suit. In Canada as well as the United States, there are few reports of injury and no reports of human deaths resulting from microcystins in drinking water, in large part because surface-water sources of drinking water (e.g., reservoirs, lakes, and rivers) must undergo filtration and chlorination at water utilities prior to being distributed to customers. (Cyanobacterial toxins can be removed from water only by activated charcoal filters and chlorination.)

Repeated episodes of algal blooms can be an indication that a river or lake is being contaminated, or that other aspects of a lake's ecology are out of balance. While cyanobacterial blooms receive the most public and scientific attention, the excessive growth of other algae and other aquatic plants also can cause significant degradation of a lake or pond, particularly in waters receiving sewage or agricultural runoff. Aquatic biologists and other water-quality specialists often are called on to identify the causes and recommend management steps to reduce or control the problem. However, preventing a problem is always better than trying to fix it after it happens. Controlling agricultural, urban, and stormwater runoff; properly maintaining septic systems; and properly managing residential applications of fertilizers are probably the most effective measures that can be taken to help prevent human-induced fresh-water algal blooms.

SEE ALSO: Algal Blooms, Harmful; Algal Blooms in the Ocean; Ecology, Fresh-Water; Nutrients in Lakes and Streams; Plankton; Pollution Sources: Point and Nonpoint; Wastewater Treatment and Management.
Brian D. Hoyle, K. Lee Lerner, and Elliot Richmond

THE FIVE KINGDOMS

Scientists use a system called taxonomy to organize all the biological organisms in the world. Organisms are put into various classification groups according to the distinguishing properties they share. These groups are (from highest to lowest) kingdom, phylum, class, order, family, genus, and species. Although there are several different kingdom classifications in use, it is now generally accepted that all biological organisms can initially be placed into one of five kingdoms: monera, protists, fungi, plants, and animals.

Prokaryotes (cells that have no distinct nuclei)

- Monera: Includes aquatic bacteria and blue-green algae, more properly called cyanobacteria. Monerans, though microscopic, are the most dominant organisms on Earth. They have existed for about 3.5 billion years.

Eukaryotes (cells that have distinct nuclei)

- Protista: Includes plant-like and animal-like primitive organisms, such as algae and protozoa. Organisms are generally unicellular.
- Fungi: Includes a large group of parasitic and saprophytic species. Some are parasitic on animals, including humans (ringworm, or athlete's foot). Others are parasitic on plants and include rusts and mildews. Fungi are, along with the bacteria, important decomposers of dead organic matter.
- Plants: Make their food by the process of photosynthesis.
- Animals: Ingest their food and digest it internally in specialized body cavities.

Algal blooms can cover a large area. In 1991, a bloom affected an estimated 1,000-kilometer stretch of the Barwon and Darling Rivers in New South Wales, Australia.
http://www.waterencyclopedia.com/A-Bi/Algal-Blooms-in-Fresh-Water.html
United States Congress

The United States Congress is the legislative branch of the United States federal government. The structure and responsibilities of Congress are defined in Article One of the United States Constitution. The United States Congress is bicameral, meaning that it has two houses: the Senate and the House of Representatives.

The Senate currently has 100 seats, one-third being renewed every two years; two members are elected from each U.S. state by popular vote to serve six-year terms. Each state has equal representation in the Senate because the states are each equal members of the federal union.

The House of Representatives currently has 435 seats for voting Members. Additionally, there are non-voting delegates from the District of Columbia, American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands. Members are directly elected by first-past-the-post voting to serve two-year terms from Congressional districts. Only the non-voting delegate from Puerto Rico (known as the Resident Commissioner) is elected to a four-year term. States with very small populations, smaller than the population of a whole Congressional district elsewhere, are still guaranteed one whole seat. Seats are apportioned according to the population of each state, but the total number is fixed by statute at 435 (Public Law 62-5).

The first Congress under the current Constitution started its term in Federal Hall in New York City on March 4, 1789, and its first action was to declare that the new Constitution of the United States was in effect. The United States Capitol building in Washington, D.C. hosted its first session of Congress on November 17, 1800. Proceedings of the United States Congress were televised for the first time on January 3, 1947. Proceedings of the general Congress are now regularly broadcast on C-SPAN, as are newsworthy meetings of committees and subcommittees.

Specific powers held by the Congress

The powers of the Congress are set forth in Article 1 (particularly Article 1, Section 8) of the United States Constitution. The powers originally delegated to the Congress by the original version of the Constitution were supplemented by the post-Civil War amendments to the Constitution (Amendments 13, 14, and 15, each of which authorizes the Congress to enforce its provisions by appropriate legislation), and by the 16th Amendment, which authorizes an income tax.

Each house of Congress has the power to introduce legislation on any subject dealing with the powers of Congress, except for legislation dealing with gathering revenue (generally through taxes), which must originate in the House of Representatives (specifically the U.S. House Committee on Ways and Means). The large states may thus appear to have more influence over the public purse than the small states. In practice, however, each house can vote against legislation passed by the other house. The Senate may disapprove a House revenue bill, or any bill, for that matter, or add amendments that change its nature. In that event, a conference committee made up of members from both houses must work out a compromise acceptable to both sides before the bill becomes the law of the land. The broad powers of the whole Congress are spelled out in Article I of the Constitution. A few of these powers are now outdated, but they remain in effect.
The Tenth Amendment sets definite limits on congressional authority, by providing that powers not delegated to the national government are reserved to the states or to the people. In addition, the Constitution specifically forbids certain acts by Congress.

The Congress also has sole jurisdiction over impeachment of federal officials. The House has the sole right to bring the charges of misconduct that would be considered at an impeachment trial, and the Senate has the sole power to try impeachment cases (as in the impeachment trial of Bill Clinton) and to find officials guilty or not guilty. A guilty verdict requires a two-thirds majority and results in the removal of the federal official from public office. The Senate has further oversight powers over the executive branch. For those, see United States Senate.

Officers of the Congress

The Constitution provides that the vice president shall be President of the Senate. The vice president has no vote, except in the case of a tie. The Senate chooses a President pro tempore to preside when the vice president is absent. The House of Representatives chooses its own presiding officer, the Speaker of the House. The speaker and the president pro tempore are always members of the political party with the largest representation in each house, that is, the majority party. At the beginning of each new Congress, members of the political parties select floor leaders and other officials to manage the flow of proposed legislation. These officials, along with the presiding officers and committee chairpersons, exercise strong influence over the making of laws.

The committee process

One of the major characteristics of the Congress is the dominant role that Congressional committees play in its proceedings. Committees have assumed their present-day importance by evolution, not by constitutional design, since the Constitution makes no provision for their establishment. In 1885, when Woodrow Wilson wrote Congressional Government, there were only 60-odd legislative committees and subcommittees; by the 1990s there were some 300. There are so many subcommittees that Morris Udall of Arizona could joke that he could address any Democrat whose name he had forgotten, "Good morning, Mr. Chairman," and half the time be right. (Frozen Republic, 191)

At present the Senate has 17 full-fledged standing (or permanent) committees; the House of Representatives has 19 standing committees. Each specializes in specific areas of legislation: foreign affairs, defense, banking, agriculture, commerce, appropriations, etc. Almost every bill introduced in either house is referred to a committee for study and recommendation. The committee may approve, revise, kill, or ignore any measure referred to it. It is nearly impossible for a bill to reach the House or Senate floor without first winning committee approval. In the House, a petition to release a bill from a committee to the floor requires the signatures of 218 members; in the Senate, a majority of all members is required. In practice, such discharge motions only rarely receive the required support.

The majority party in each house controls the committee process. Committee chairpersons are selected by a caucus of party members or specially designated groups of members. Minority parties are proportionally represented on the committees according to their strength in each house. Bills are introduced by a variety of methods.
Some are drawn up by standing committees; some by special committees created to deal with specific legislative issues; and some may be suggested by the president or other executive officers. Citizens and organizations outside the Congress may suggest legislation to members, and individual members themselves may initiate bills. After introduction, bills are sent to designated committees that, in most cases, schedule a series of public hearings to permit presentation of views by persons who support or oppose the legislation. The hearing process, which can last several weeks or months, theoretically opens the legislative process to public participation.

One virtue of the committee system is that it permits members of Congress and their staffs to amass a considerable degree of expertise in various legislative fields. In the early days of the republic, when the population was small and the duties of the federal government were narrowly defined, such expertise was not as important. Each representative was a generalist and dealt knowledgeably with all fields of interest. The complexity of national life today calls for special knowledge, which means that elected representatives often acquire expertise in one or two areas of public policy.

When a committee has acted favorably on a bill, the proposed legislation is then sent to the floor for open debate. In the Senate, the rules permit virtually unlimited debate. In the House, because of the large number of members, the Rules Committee usually sets limits. When debate is ended, members vote either to approve the bill, defeat it, table it (which means setting it aside and is tantamount to defeat), or return it to committee. A bill passed by one house is sent to the other for action. If the bill is amended by the second house, a conference committee composed of members of both houses attempts to reconcile the differences.

Conference committees are not supposed to add anything that was not supported by either house, or delete anything that was supported by one house, but in practice conference committees make substantial changes to legislation. According to Citizens Against Government Waste, conference committees even add pork to legislation: for the 2005 budget, conference committees added 3,407 pork-barrel appropriations, up from 47 in 1994.

Once passed by both houses, the bill is sent to the president, for constitutionally the president must act on a bill for it to become law. The president has the option of signing the bill, at which point it becomes national law, or vetoing it. A bill vetoed by the president must be reapproved by a two-thirds vote of both houses to become law; this is called overriding a veto. The president may also refuse either to sign or veto a bill. In that case, the bill becomes law without his signature 10 days after it reaches him (not counting Sundays). The single exception to this rule is when Congress adjourns after sending a bill to the president and before the 10-day period has expired; his refusal to take any action then negates the bill, a process known as the "pocket veto."

Congressional powers of investigation

One of the most important nonlegislative functions of the Congress is the power to investigate. This power is usually delegated to committees, either to the standing committees, to special committees set up for a specific purpose, or to joint committees composed of members of both houses.
Investigations are conducted to gather information on the need for future legislation, to test the effectiveness of laws already passed, to inquire into the qualifications and performance of members and officials of the other branches, and, on rare occasions, to lay the groundwork for impeachment proceedings. Frequently, committees call on outside experts to assist in conducting investigative hearings and to make detailed studies of issues.

There are important corollaries to the investigative power. One is the power to publicize investigations and their results. Most committee hearings are open to the public and are widely reported in the mass media. Congressional investigations thus represent one important tool available to lawmakers to inform the citizenry and arouse public interest in national issues. Congressional committees also have the power to compel testimony from unwilling witnesses, to cite for contempt of Congress witnesses who refuse to testify, and to cite for perjury those who give false testimony.

Informal practices of Congress

In contrast to European parliamentary systems, the selection and behavior of U.S. legislators have little to do with central party discipline. Each of the major American political parties is a coalition of local and state organizations that join together as a national party: the Republicans and the Democrats. Thus, traditionally, members of Congress owe their positions to their districtwide or statewide electorate, not to the national party leadership nor to their congressional colleagues. As a result, the legislative behavior of representatives and senators tends to be individualistic and idiosyncratic, reflecting the great variety of electorates represented and the freedom that comes from having built a loyal personal constituency.

Congress is thus a collegial and not a hierarchical body. Power does not flow from the top down, as in a corporation, but in practically every direction. There is comparatively minimal centralized authority, since the power to punish or reward is slight. Congressional policies are made by shifting coalitions that may vary from issue to issue. Sometimes, where there are conflicting pressures, from the White House and from important interest groups, legislators will use the rules of procedure to delay a decision so as to avoid alienating an influential sector. A matter may be postponed on the grounds that the relevant committee held insufficient public hearings. Or Congress may direct an agency to prepare a detailed report before an issue is considered. Or a measure may be put aside by either house, thus effectively defeating it without rendering a judgment on its substance.

There are informal or unwritten norms of behavior that often determine the assignments and influence of a particular member. "Insiders," representatives and senators who concentrate on their legislative duties, may be more powerful within the halls of Congress than "outsiders," who gain recognition by speaking out on national issues. Members are expected to show courtesy toward their colleagues and to avoid personal attacks, no matter how unpalatable their opponents' policies may be, though in recent years this norm has been called into question. Still, members daily refer to one another as the "Gentlewoman from Tennessee" or the "distinguished Senator from Michigan," reflecting a traditionalist etiquette found in few other domains of American life. Members usually specialize in a few policy areas rather than claim expertise in the whole range of legislative concerns.
Those who conform to these informal rules are more likely to be appointed to prestigious committees, or at least to committees that affect the interests of a significant portion of their constituents.

The traditional independence of members of Congress has both positive and negative aspects. One benefit is that a system that allows legislators to vote their consciences or their constituents' wishes is inherently more democratic than one that does not. The independence of Congressmen and Senators also allows much greater diversity of opinion than would exist if Congressmen had to obey their leaders. Although there are only two parties represented in Congress, America's Congress in some ways functions like a multi-party system. The problem with independence is that there is less accountability for voters than there would be if Congressmen took responsibility for their party's actions.

When in the majority, congressional leaders in both houses and both parties use a technique that is sometimes called "catch and release." In "catch and release," if a piece of pending legislation is unpopular in a member's district or state, that member of Congress will be allowed to vote against the law if his or her vote will not affect the outcome. If the vote will be close, Congressmen will be "reeled in" and required to vote for the party's legislation. Because of catch and release, it is possible for Congressmen to hide their true political stances from their constituents until an extremely close vote comes up. As an example, in 2002, several members of Congress who had never voted for free trade in the past voted to authorize presidential Trade Promotion Authority, formerly known as "Fast Track." Apparently, they had always supported free trade, but had been able to conceal it from their anti-trade constituents. On the 2003 Prescription Drug Benefit, 13 Republicans voted affirmatively in an extremely close 6:00 AM initial vote, only to vote against the conference bill when it returned a few weeks later, thereby being able to tell their constituents whatever they needed to tell them.

Congressional freedom of action also allows Congressmen and Senators to hold out on certain bills in order to pull down pork for their districts. Often a reluctant vote has to be won over by pet projects or jobs for allies. In the Senate, small-state Senators are more likely to hold out than large-state Senators are. The practice of districts choosing their own Congressmen also results in members of Congress being the best fundraisers and best campaigners, not necessarily the best qualified. The US Congress has fewer women in it than legislatures in other countries, as well as many more lawyers. The percentage of lawyers in Congress fluctuates around 45 percent; by contrast, in the Canadian House of Commons, the British House of Commons, and the Bundestag, approximately 15 percent of members have law degrees.

Lobbying has been called the fourth branch of the American government. Many observers of Congress consider lobbying to be a corrupting practice, but others appreciate the fact that lobbyists provide information. Lobbyists also help write complicated legislation. Lobbyists must be registered in a central database, and only sometimes do they actually work in lobbies. Virtually every group - from corporations to foreign governments to states to grass-roots organizations - employs lobbyists. As of 1987, there were 23,000 registered lobbyists, a sixty-fold increase from 1961.
(The Power Game, Hedrick Smith, 29-31) Many lobbyists are former Congressmen and Senators. Former Congressmen are advantaged because they retain special access to the Capitol, office buildings, and even the Congressional gym.

Elections

Elections for members of both houses of Congress are invariably held in November of every even-numbered year, on that month's first Tuesday following its first Monday (that is to say, on the Tuesday that falls between the second and eighth days, inclusive), a day known as Election Day. In the case of the House of Representatives, these elections occur in every state, and in every district of the states that are divided into Congressional districts. Occasionally a special election is held within a state, or district of a state, that has an unscheduled vacancy in its corresponding seat. In the case of the Senate, however, since terms of office last six years and each state has two, it follows mathematically that Senate elections can occur in a given state no more often than twice for every three Congressional-election years. In fact, no state has elections for both its senators in the same year (with possible exceptions in cases of unscheduled vacancies); every state elects one senator two years after the other, and then next elects a senator after four additional years. (One additional possible wrinkle remains: rarely, a state may divide itself into two Senate districts, with a Senate election occurring every sixth year in each district, and never in both districts in one year.) Replacements for vacant Senate seats are usually appointed by state governors, rather than chosen by special election. Before the passage of the Seventeenth Amendment to the United States Constitution, providing for direct elections, Senators were chosen by state legislatures.

Seats by party (108th Congress, 2003-2005)

Each state's delegation in Congress consists of two Senators and a number of Representatives depending on an apportionment among the states, based every ten years on their respective populations in the U.S. Census. Non-state territories have a Delegate each in the House, and many present states had such delegates when they were organized territories prior to statehood. See also: United States Congressional Apportionment. The sum of Senators and Representatives determines that state's number of Electors in the U.S. Electoral College. Based on the 2000 Census, members of the U.S. House of Representatives represent 646,952 persons, on average. The following states' Congressional delegations include the number of Representatives indicated; the articles linked in many cases list not only the current Congressional delegation but also former Senators and Representatives; when applicable, Delegates of the former organized territory that had the same extent are included.

The United States territories are not members of the federal union. They have no Senators, but each has one delegate in the House of Representatives. The delegates can speak in debates, but can only vote in committees.

List of United States Congresses by Session

For a detailed list of congressional members or information on particular congressional sessions, click on a session's link from the List of United States Congresses.
http://askfactmaster.com/United_States_Congress
For the first time in history, a change will be made to the atomic weights of some elements listed on the Table of Standard Atomic Weights of the chemical elements found in the inside covers of chemistry textbooks worldwide. The International Union of Pure and Applied Chemistry's (IUPAC) Commission on Isotopic Abundances and Atomic Weights is publishing a new table that will express the atomic weights of ten elements as intervals, rather than as single standard values. The new table is the result of cooperative research supported by the U.S. Geological Survey, IUPAC, and other contributing Commission members and institutions.

Standard atomic weights commonly are thought of as constants of nature, despite the fact that the atomic weights of many common chemical elements show variations as a result of physical, chemical, and biological processes. "For more than a century and a half, many were taught to use standard atomic weights — a single value — found on the inside cover of chemistry textbooks and on the periodic table of the elements," said Ty Coplen, director of the USGS Reston Stable Isotope Laboratory. "Though this change offers significant benefits in the understanding of chemistry, one can imagine the challenge now to educators and students who will have to select a single value out of an interval when doing chemistry calculations."

The standard atomic weights for hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, and thallium previously were expressed as central values with uncertainties that reflected natural atomic-weight variations. The weights of these elements now will be expressed as intervals to more accurately convey this variation in atomic weight. For example, boron is commonly known to have a standard atomic weight of 10.811. However, its actual atomic weight can be anywhere between 10.806 and 10.821, depending on where the element is found.

The atomic weight of an element depends upon how many stable isotopes it has and the relative amount of each stable isotope. Isotopes are atoms of the same element that have different masses. Variations in atomic weight occur when an element has two or more naturally occurring stable isotopes that vary in abundance. Modern analytical techniques can measure the atomic weight of many elements precisely, and these small variations in an element's atomic weight are important in research and industry. For example, precise measurements of the abundances of isotopes of carbon can be used to determine the purity and source of food products, such as vanilla and honey. Isotopic measurements of nitrogen, chlorine, and other elements are used for tracing pollutants in streams and groundwater. In sports doping investigations, performance-enhancing testosterone can be identified in the human body because the atomic weight of carbon in natural human testosterone is higher than that in pharmaceutical testosterone.

Elements with only one stable isotope do not exhibit variations in their atomic weights. For example, the standard atomic weights for fluorine, aluminum, sodium, and gold are constant, and their values are known to better than six decimal places.

The USGS has a long history of research in determining atomic weights of the chemical elements. As far back as 1882, Frank W. Clarke, chief chemist of the USGS, prepared a table of atomic weights. The year 2011 has been designated as the International Year of Chemistry.
The IYC is an official United Nations International Year, proclaimed at the UN as a result of the initiative of IUPAC and UNESCO. IUPAC will feature the change in the standard atomic weights table as part of associated IYC activities. This fundamental change in the presentation of the atomic weights is based upon work between 1985 and 2010 supported by IUPAC, the USGS, and other contributing Commission members and institutions. IUPAC oversees the evaluation and dissemination of atomic-weight values. Fundamental research underlying the changes in the atomic weight presentation for selected elements is compiled in the report “Compilation of minimum and maximum isotope ratios of selected elements in naturally occurring terrestrial materials and reagents.” An abbreviated version of this report is published in the IUPAC journal Pure and Applied Chemistry, Vol. 74, No. 10, pp. 1987–2017 (2002). (doi:10.1351/pac200274101987). An overview of the standard atomic weights through the 20th century is also available.
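To make the link between isotopic abundance and atomic weight concrete, here is a minimal Python sketch. It is not USGS or IUPAC code: the isotope masses are well-established values, but the abundance fractions below are illustrative approximations chosen to reproduce the interval quoted above.

```python
# Atomic weight as the abundance-weighted mean of stable-isotope masses.
# Isotope masses (unified atomic mass units) are standard values; the
# abundance fractions are approximate and for illustration only.

B10_MASS = 10.0129  # mass of boron-10
B11_MASS = 11.0093  # mass of boron-11

def atomic_weight(fraction_b10: float) -> float:
    """Weighted mean mass for a sample with the given boron-10 fraction."""
    return fraction_b10 * B10_MASS + (1.0 - fraction_b10) * B11_MASS

# A "typical" sample with about 19.9% boron-10 reproduces the familiar value:
print(round(atomic_weight(0.199), 3))  # ~10.811

# Natural samples vary in isotopic composition, which is why the new IUPAC
# table expresses the atomic weight as an interval rather than one number:
print(round(atomic_weight(0.204), 3))  # boron-10-rich samples, near 10.806
print(round(atomic_weight(0.189), 3))  # boron-10-poor samples, near 10.821
```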
http://www.usgs.gov/newsroom/article_pf.asp?ID=2661
Learners are capable of learning at high levels when their strengths are celebrated and their needs are addressed. All learners, children and adults alike, need multiple opportunities to learn new knowledge and skills, using diverse approaches and given enough time. The curriculum identifies what students need to know (content and concepts) and be able to do (skills and processes). A standards-based curriculum incorporates the knowledge and skills that are articulated in state standards and their associated core curriculum statements, objectives, or grade-level expectations. (Different states use different terms for the statements that are most specific, often designated for particular grade levels.) Teachers need to know what they are responsible for teaching and assessing in the classroom. Students need opportunities to learn in relation to standards before they are assessed on those standards in high-stakes tests. Click on the links below for Middletown Curriculum and Standards:
http://www.middletowncityschools.org/Academics/StandardsBasedLearning/SBLLearners.aspx
To hammer home the theory you've just learnt, let's look at a simple problem:

Given the digits 0 through 9 and the operators +, -, * and /, find a sequence that will represent a given target number. The operators will be applied sequentially from left to right as you read.

So, given the target number 23, the sequence 6+5*4/2+1 would be one possible solution. If 75.5 is the chosen number then 5/2+9*7-5 would be a possible solution. Please make sure you understand the problem before moving on. I know it's a little contrived but I've used it because it's very simple.

Encoding the Chromosome

First we need to encode a possible solution as a string of bits... a chromosome. So how do we do this? Well, first we need to represent all the different characters available to the solution... that is 0 through 9 and +, -, * and /. This will represent a gene. Each chromosome will be made up of several genes. Four bits are required to represent the range of characters used:

0000: 0    0001: 1    0010: 2    0011: 3
0100: 4    0101: 5    0110: 6    0111: 7
1000: 8    1001: 9    1010: +    1011: -
1100: *    1101: /

The above shows all the different genes required to encode the problem as described. The possible genes 1110 and 1111 will remain unused and will be ignored by the algorithm if encountered.

So now you can see that the solution mentioned above for 23, '6+5*4/2+1', would be represented by nine genes like so:

0110 1010 0101 1100 0100 1101 0010 1010 0001
6    +    5    *    4    /    2    +    1

These genes are all strung together to form the chromosome:

011010100101110001001101001010100001

A Quick Word about Decoding

Because the algorithm deals with random arrangements of bits it is often going to come across a string of bits like this:

0010001010101110101101110010

Decoded, these bits represent:

0010 0010 1010 1110 1011 0111 0010
2    2    +    n/a  -    7    2

Which is meaningless in the context of this problem! Therefore, when decoding, the algorithm will just ignore any genes which don't conform to the expected pattern of: number -> operator -> number -> operator... and so on. With this in mind, the above 'nonsense' chromosome is read (and tested) as:

2 + 7

Assigning a Fitness Score

This can be the most difficult part of the algorithm to figure out. It really depends on what problem you are trying to solve, but the general idea is to give a higher fitness score the closer a chromosome comes to solving the problem. With regards to the simple project I'm describing here, a fitness score can be assigned that's inversely proportional to the difference between the target and the value a decoded chromosome represents. If we assume the target number for the remainder of the tutorial is 42, the chromosome mentioned above (which decodes to 23) has a fitness score of 1/(42-23), or 1/19.

As it stands, if a solution is found, a divide-by-zero error would occur, as the fitness would be 1/(42-42). This is not a problem, however, as we have found what we were looking for... a solution. Therefore a test can be made for this occurrence and the algorithm halted accordingly.

First, please read this tutorial again. If you now feel you understand enough to solve this problem, I would recommend trying to code the genetic algorithm yourself. There is no better way of learning. If, however, you are still confused, I have already prepared some simple code which you can find here. Please tinker around with the mutation rate, crossover rate, size of chromosome, etc. to get a feel for how each parameter affects the algorithm. Hopefully the code should be documented well enough for you to follow what is going on! If not, please email me and I'll try to improve the commenting.

Note: The code given will parse a chromosome bit string into the values we have discussed and it will attempt to find a solution which uses all the valid symbols it has found. Therefore, if the target is 42, + 6 * 7 / 2 would not give a positive result even though the first four symbols ("+ 6 * 7") do give a valid solution.
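To tie the encoding, decoding, and fitness ideas together, here is a minimal Python sketch of the scheme described above. It is not the tutorial's downloadable code; the function names and structure are just one illustrative way to implement it:

```python
# Decode a chromosome bit string and score it against a target value,
# following the encoding scheme described in the tutorial above.

GENE_LENGTH = 4
SYMBOLS = "0123456789+-*/"  # gene values 0-13; 1110 and 1111 are unused

def decode(chromosome: str) -> list[str]:
    """Split the bit string into 4-bit genes, keeping only symbols that
    fit the expected number -> operator -> number -> ... pattern."""
    tokens = []
    expecting_number = True
    for i in range(0, len(chromosome) - GENE_LENGTH + 1, GENE_LENGTH):
        value = int(chromosome[i:i + GENE_LENGTH], 2)
        if value >= len(SYMBOLS):        # 1110 or 1111: ignore
            continue
        symbol = SYMBOLS[value]
        if expecting_number == symbol.isdigit():
            tokens.append(symbol)
            expecting_number = not expecting_number
    if tokens and not tokens[-1].isdigit():
        tokens.pop()                     # drop a trailing operator
    return tokens

def evaluate(tokens: list[str]) -> float:
    """Apply the operators strictly left to right, as the problem specifies."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    result = float(tokens[0])
    for op, num in zip(tokens[1::2], tokens[2::2]):
        if op == "/" and float(num) == 0:
            continue                     # skip division by zero
        result = ops[op](result, float(num))
    return result

def fitness(chromosome: str, target: float) -> float:
    """Inverse distance to the target; infinity signals an exact solution."""
    tokens = decode(chromosome)
    if not tokens:
        return 0.0
    value = evaluate(tokens)
    if value == target:
        return float("inf")              # solution found; halt the search
    return 1.0 / abs(target - value)

# The 'nonsense' chromosome from the text decodes to 2 + 7 = 9:
print(decode("0010001010101110101101110010"))         # ['2', '+', '7']
print(fitness("0010001010101110101101110010", 42.0))  # 1/33, about 0.0303
```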
Therefore if the target is 42, + 6 * 7 / 2 would not give a positive result even though the first four symbols ("+ 6 * 7") do give a valid solution. (Delphi code submitted by Asbjørn can be found here and Java code submitted by Tim Roberts can be found here.)

I hope this tutorial has helped you get to grips with the basics of genetic algorithms. Please note that I have only covered the very basics here. If you have found genetic algorithms interesting then there is much more for you to learn. There are different selection techniques to use, different crossover and mutation operators to try, and more esoteric stuff like fitness sharing and speciation to fool around with. All or some of these techniques will improve the performance of your genetic algorithms considerably.

Stuff to Try

If you have succeeded in coding a genetic algorithm to solve the problem given in the tutorial, try having a go at the following more difficult problem: given an area that has a number of non-overlapping disks scattered about its surface as shown in Screenshot 1, use a genetic algorithm to find the disk of largest radius which may be placed amongst these disks without overlapping any of them. See Screenshot 2. As you may have already gathered, I've already written some code that solves this problem, so if you get stuck you can find it here (but you will have a go yourself first eh? ;0)). For those of you without compilers, you can get the executable file here.
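The decode-and-score scheme described in the tutorial is compact enough to sketch directly. The following Python sketch is mine, not the tutorial's downloadable code: the function names are my own, and since the tutorial doesn't say how a division by zero inside an expression should be handled, skipping that operator is an assumption.

```python
# Minimal sketch of the decoding and fitness scheme described above.
# Gene table from the tutorial: 0000-1001 -> digits 0-9,
# 1010 '+', 1011 '-', 1100 '*', 1101 '/'; 1110/1111 are ignored.
GENES = {format(i, "04b"): str(i) for i in range(10)}
GENES.update({"1010": "+", "1011": "-", "1100": "*", "1101": "/"})

def decode(chromosome: str) -> list[str]:
    """Split into 4-bit genes, skip invalid genes, and enforce the
    number -> operator -> number -> ... pattern."""
    symbols, want_number = [], True
    for i in range(0, len(chromosome) - 3, 4):
        symbol = GENES.get(chromosome[i:i + 4])
        if symbol is None:
            continue  # 1110 / 1111: ignored, as in the tutorial
        if want_number == symbol.isdigit():
            symbols.append(symbol)
            want_number = not want_number
    if symbols and not symbols[-1].isdigit():
        symbols.pop()  # a trailing operator has nothing to act on
    return symbols

def evaluate(symbols: list[str]) -> float:
    """Apply operators strictly left to right (no precedence)."""
    value = float(symbols[0])
    for op, num in zip(symbols[1::2], symbols[2::2]):
        if op == "+":   value += float(num)
        elif op == "-": value -= float(num)
        elif op == "*": value *= float(num)
        else:           # my assumption: skip a divide-by-zero operator
            value = value / float(num) if float(num) else value
    return value

def fitness(chromosome: str, target: float) -> float | None:
    """Inversely proportional to the distance from the target;
    None signals an exact solution (avoiding the divide-by-zero)."""
    symbols = decode(chromosome)
    if not symbols:
        return 0.0
    diff = abs(target - evaluate(symbols))
    return None if diff == 0 else 1.0 / diff

# The 'nonsense' chromosome from the tutorial decodes to 2 + 7:
print(decode("0010001010101110101101110010"))       # ['2', '+', '7']
print(fitness("0010001010101110101101110010", 42))  # 1/33
```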
http://www.ai-junkie.com/ga/intro/gat3.html
4.0625
If you've ever known anyone with some type of disability, whether it was hearing loss, blindness, or another sensory disability, you may have noticed the way the body compensates. For example, a person who can't see may have better hearing than those who can both see and hear. According to a study in the Journal of Neuroscience, blindness may improve a person's ability to understand and process tactile information. Dr. Daniel Goldreich of McMaster University led the research team, which specifically evaluated whether a person with a sensory disability would be able to process the tactile sense faster. One of the challenges in performing such a study is the speed with which the brain registers sensations such as touch or sight: in most individuals this happens within a fraction of a second. According to the study results, the team was able to confirm that the body compensates for blindness with an increased sense of touch. To test this, the team studied 89 people with sight and 57 people with blindness of some type; the blind group had members with varying levels of sight. One area of the study revolved around the concept of masking, in which the body may miss or misunderstand a sensation when it arrives back to back with another one. Participants were asked to detect a tap on their index finger and discern the intensity of the tap. If a longer tap immediately followed a small tap, the first sensation masked the second one more often in the individuals with sight than in the individuals who had vision impairment. The people who performed best throughout the study were the 22 participants who had complete vision loss since birth; their tactile sense even surpassed that of those who had lost their eyesight later in life. As a result of the study, Goldreich's team hypothesizes that multiple senses can delay the brain's ability to process back-to-back sensations.
http://www.paulingexhibit.org/tag/vision-loss
4.46875
A chemical reaction is a process that transforms one set of chemical substances to another. The substances that take part in chemical reactions are known as reactants and the substances produced by the reaction are known as products. The study of chemical reactions is part of the field of science called chemistry. Chemical reactions can result in molecules attaching to each other to form larger molecules, molecules breaking apart to form two or more smaller molecules, or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds, and some types of reaction may involve the production of electrically charged end products. Reactions can occur in various environments: solids, liquids, gases, or combinations of these. As exemplified in the adjacent figure, the participating reactants typically must surmount a threshold energy or activation energy to initiate the reaction, and intermediate products may exist briefly before the final output products are formed. In the figure, the activation energy on the left is the amount of energy needed to surmount the activation threshold, and the energy output on the right indicates the overall change in energy when the reaction is complete. In the figure's exemplified reaction, the final energy level is below the level of the initial reactants, and the reaction therefore releases energy in some form such as heat, light, or electricity. Chemical reactions can be either spontaneous, requiring no input of energy, or non-spontaneous, requiring the input of some form of energy. Classically, chemical reactions are transformations that involve the movement of electrons during the forming and breaking of chemical bonds. A more general concept of chemical reactions would include nuclear reactions and elementary particle reactions.

Energy changes in reactions

In terms of the energy changes that take place during chemical reactions, a reaction may be either exothermic or endothermic, terms which were first coined by the French chemist Marcellin Berthelot (1827−1907). The meaning of those terms and the difference between them are discussed below and illustrated in the adjacent diagram of the energy profiles for exothermic and endothermic reactions.

Exothermic chemical reactions release energy. The released energy may be in the form of heat, light, electricity, sound or shock waves... either singly or in combinations. A few examples of exothermic reactions are:
- Mixing of acids and alkalis (releases heat)
- Combustion of fuels (releases heat and light)

Endothermic chemical reactions absorb energy. The energy absorbed may be in various forms just as is the case with exothermic reactions. A few examples of endothermic reactions are:
- Dissolving ammonium nitrate (NH4NO3) in water (absorbs heat and cools the surroundings)
- Electrolysis of water to form hydrogen and oxygen gases (absorbs electricity)
- Photosynthesis of carbon dioxide plus water in the presence of chlorophyll and sunlight to form carbohydrates and oxygen (absorbs light)

The common kinds of classical chemical reactions include:
• Isomerization, in which a chemical compound undergoes a structural rearrangement without any change in its net atomic composition.
• Direct combination or synthesis, in which two or more chemical elements or compounds unite to form a more complex product: N2 + 3 H2 ⇒ 2 NH3
• Chemical decomposition, in which a compound is decomposed into elements or smaller compounds: 2 H2O ⇒ 2 H2 + O2
• Single displacement or substitution, characterized by an element being displaced out of a compound by a more reactive element: 2 Na(s) + 2 HCl(aq) ⇒ 2 NaCl(aq) + H2(g)
• Metathesis or double displacement, in which two compounds exchange ions or bonds to form different compounds: NaCl(aq) + AgNO3(aq) ⇒ NaNO3(aq) + AgCl(s)
• Acid-base reactions, broadly characterized as reactions between an acid and a base, can have different definitions depending on the acid-base concept employed. Some of the most common are:
• Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
• Brønsted-Lowry definition: Acids are proton (H+) donors; bases are proton acceptors. Includes the Arrhenius definition.
• Lewis definition: Acids are electron-pair acceptors; bases are electron-pair donors. Includes the Brønsted-Lowry definition.
• Redox reactions, in which changes in the oxidation numbers of atoms in the involved species occur. Those reactions can often be interpreted as transfers of electrons between different molecular sites or species. An example of a redox reaction is: 2 S2O32−(aq) + I2(aq) ⇒ S4O62−(aq) + 2 I−(aq), in which iodine (I2) is reduced to the iodide anion (I−) and the thiosulfate anion (S2O32−) is oxidized to the tetrathionate anion (S4O62−).
• Combustion, a kind of redox reaction in which any combustible substance combines with an oxidizing element, usually oxygen, to generate heat and form oxidized products, as exemplified in the combustion of methane: CH4 + 2 O2 ⇒ CO2 + 2 H2O
• Disproportionation, with one reactant forming two distinct products varying in oxidation state, as in this example: 2 Sn2+ ⇒ Sn + Sn4+
• Organic reactions encompass a very wide assortment of reactions involving organic compounds, which are chemical compounds having carbon as the main element in their molecular structure. The reactions in which an organic compound may take part are largely defined by its functional groups.
Note: In the chemical equations above, (aq) indicates an aqueous solution, (g) indicates a gas and (s) indicates a solid. Superscripts with a positive sign (+) indicate a cation and superscripts with a negative sign (−) indicate an anion.
http://www.eoearth.org/article/Chemical_reaction?topic=74180
4.34375
By the early 1550s, it was apparent that a negotiated settlement was necessary. In 1555 the Peace of Augsburg was signed. The settlement, which represented a victory for the princes, granted recognition to both Lutheranism and Roman Catholicism in Germany, and each ruler gained the right to decide the religion to be practiced within his state. Subjects not of this faith could move to another state with their property, and disputes between the religions were to be settled in court. The Protestant Reformation strengthened the long-standing trend toward particularism in Germany. German leaders, whether Protestant or Catholic, became yet more powerful at the expense of the central governing institution, the empire. Protestant leaders gained by receiving lands that formerly belonged to the Roman Catholic Church, although not to as great an extent as, for example, would occur in England. Each prince also became the head of the established church within his territory. Catholic leaders benefited because the Roman Catholic Church, in order to help them withstand Protestantism, gave them greater access to church resources within their territories. Germany was also less united than before because Germans were no longer of one faith, a situation officially recognized by the Peace of Augsburg. The agreement did not bring sectarian peace, however, because the religious question in Germany had not yet been settled fully.

Source: U.S. Library of Congress
http://countrystudies.us/germany/14.htm
4.1875
Greenhouse gases are chemical compounds that contribute to the greenhouse effect. When in the atmosphere, a greenhouse gas allows sunlight (solar radiation) to enter the atmosphere, where it warms the Earth's surface and is reradiated back into the atmosphere as longer-wave energy (heat). Greenhouse gases absorb this heat and 'trap' it in the lower atmosphere. The rapid increase in atmospheric concentrations of the three main human-made greenhouse gases – carbon dioxide, methane, and nitrous oxide – is clear from the data sets for these gases over the last 420,000 years. Since around the time of the Industrial Revolution in Western countries, concentrations of carbon dioxide, methane, and nitrous oxide have all risen dramatically. Fossil fuel combustion, increasingly intensive agriculture, and an expanding global human population have been the primary causes of these rapid changes. Methane concentrations have seen the biggest relative increase in the last 200 years, with concentrations more than doubling. However, the rate of methane increase now appears to be lessening, and it is concentrations of the human-made greenhouse gases carbon dioxide and nitrous oxide that are likely to increase most in the next 100 years. Fossil fuel burning is the primary anthropogenic source of carbon dioxide, with cement and lime production also being important. Ruminant livestock and rice cultivation are the leading human activities contributing to methane emissions, while agriculture is the primary source of human-made nitrous oxide emissions. In climate science, the relative climate-forcing strength of different greenhouse gases is described relative to that of carbon dioxide. Methane is much more effective than carbon dioxide at absorbing infrared radiation (heat) and is thus a more powerful greenhouse gas, yet its lifetime in the atmosphere is only about 10 years, compared to about 100 years for a molecule of carbon dioxide. As such, the climate-forcing strength of a kilogram of methane on a 100-year time horizon – its Global Warming Potential (GWP) – is 23. That is, every kilogram of methane in the atmosphere has the equivalent global warming potential of 23 kilograms of carbon dioxide. The GWP of nitrous oxide is 296. Our impact on the global climate since the Industrial Revolution has been complex. Though emissions of greenhouse gases like carbon dioxide and methane have had a net warming effect, emissions of sulphate aerosols have had a net cooling effect. The overall effect is net global warming, but the complex interaction of these positive and negative influences makes predicting future warming difficult. The problem is exacerbated by our poor level of understanding of exactly how some factors, like land-use albedo (the reflectance of the land), operate and interact. Another very important greenhouse gas is water vapor and, though human activities are not primarily responsible for its concentration in the atmosphere, an indirect increase through elevated surface temperatures may lead to one of the most important positive feedbacks to global warming in the 21st century.
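Because GWP expresses each gas relative to carbon dioxide, converting an emissions inventory into CO2-equivalents is just a weighted sum. A minimal Python sketch using the 100-year GWP figures quoted above; the function name and the example tonnages are illustrative, not from the source:

```python
# 100-year Global Warming Potentials as quoted in the text
# (later IPCC assessments revise these numbers).
GWP_100 = {"co2": 1, "ch4": 23, "n2o": 296}

def co2_equivalent(emissions_kg: dict[str, float]) -> float:
    """Weighted sum: kg of each gas times its GWP -> kg CO2e."""
    return sum(kg * GWP_100[gas] for gas, kg in emissions_kg.items())

# Hypothetical inventory: 1 t CO2, 50 kg methane, 5 kg nitrous oxide.
print(co2_equivalent({"co2": 1000, "ch4": 50, "n2o": 5}))  # 3630.0 kg CO2e
```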
http://communities.earthportal.org/EPCommunity/articles/view/134885/?topic=13048
4.28125
The elevation of a geographic location is its height above a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic system, vertical datum). Elevation, or geometric height, is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface. Less commonly, elevation is measured using the center of the Earth as the reference point. Due to the equatorial bulge, there is debate as to which of the summits of Mt. Everest or Chimborazo is at the higher elevation, as the Chimborazo summit is further from the Earth's center while the Mt. Everest summit is higher above mean sea level.

Maps and GIS

A topographical map is the main type of map used to depict elevation, often through use of contour lines. In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS. The elevation of a mountain or hill usually refers to its summit. A valley's elevation is usually taken from its lowest point, though it may be measured across the whole valley.

Global 1-kilometer map

This map is derived from GTOPO30 data that describes the elevation of Earth's terrain at intervals of 30 arcseconds (approximately 1 km). It uses color and shading instead of contour lines to indicate elevation.

Hypsography

Hypsography is the study of the distribution of elevations on the surface of the Earth, although the term is sometimes also applied to other rocky planets such as Mars or Venus. The term originates from the Greek word ὕψος "hypsos" meaning height. Most often it is used only in reference to the elevation of land, but a complete description of Earth's solid surface requires a description of the seafloor as well. Related to the term hypsometry, the measurement of these elevations of a planet's solid surface is taken relative to a mean datum, except for Earth, where elevations are taken relative to sea level.

See also
- List of European cities by elevation
- List of highest mountains
- List of highest towns by country
- Normaal Amsterdams Peil
- Physical geography
- Summit (topography)
- Table of the highest major summits of North America
- Topographic isolation
- Topographic map
- Topographic prominence
- Vertical pressure variation

External links
- U.S. National Geodetic Survey website
- United States Geological Survey website
- Geographical Survey Institute
- Downloadable ETOPO2 Raw Data Database (2 minute grid)
- Downloadable ETOPO5 Raw Data Database (5 minute grid)
- Find the elevation of any place
http://en.wikipedia.org/wiki/Elevations
4.3125
During the opening months of World War II, almost 120,000 Japanese Americans, two-thirds of them citizens of the United States, were forced out of their homes and into detention camps established by the U.S. government. Many would spend the next three years living under armed guard, behind barbed wire. This exhibit explores this period when racial prejudice and fear upset the delicate balance between the rights of the citizen and the power of the state. It tells the story of Japanese Americans who suffered a great injustice at the hands of the government, and who have struggled ever since to ensure the rights of all citizens guaranteed by the U.S. Constitution. The first large groups of Asian immigrants reaching Hawaii, a U.S. territory, and the United States in the late 19th century faced racial prejudice. The Japanese attack on Pearl Harbor on December 7, 1941 stunned the United States, and became a catalyst for challenging the loyalty of all Japanese people living in the U.S. By the end of 1942, more than 120,000 men, women, and children of Japanese ancestry had been uprooted from their homes; their final destinations would be one of 10 camps. Japanese internees struggled with the dehumanizing effects of being imprisoned, working to create as normal a life as possible behind the barbed wire. Some 25,000 Japanese Americans served in U.S. military units during World War II. The valor of these Americans, many of whom had family and friends living behind barbed wire, was extraordinary. By 1946, Japanese Americans were released from the internment camps, but the injustice of the war years was not forgotten.
http://amhistory.si.edu/perfectunion/non-flash/overview.html
4.0625
- Objects with similar properties and methods are grouped together to form a Class. Thus a Class represents a set of individual objects; in short, we can say a class is a template or a blueprint from which we can create objects. Characteristics of an object are represented in a class as Properties. The actions that can be performed by objects become functions of the class and are referred to as Methods. For example: consider a Class of Cars, under which Santro Xing, Alto and WagonR represent individual Objects. In this context each Car Object will have its own Model, Year of Manufacture, Colour, Top Speed, Engine Power etc., which form the Properties of the Car class, and the associated actions, i.e., object functions like Start, Move and Stop, form the Methods of the Car class. A minimal sketch of this example in code is shown below.
- No memory is allocated when a class is created. Memory is allocated only when an object is created, i.e., when an instance of a class is created.
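Here is that sketch in Python; the attribute and method names follow the Car example above, and this is an illustration rather than code from the original page:

```python
class Car:
    """A Class is the blueprint; each Car created from it is an Object."""

    def __init__(self, model, year, colour, top_speed):
        # Properties: the characteristics of each individual object.
        self.model = model
        self.year = year
        self.colour = colour
        self.top_speed = top_speed
        self.moving = False

    # Methods: the actions an object can perform.
    def start(self):
        self.moving = True
        return f"{self.model} started"

    def stop(self):
        self.moving = False
        return f"{self.model} stopped"

# Defining the Car class allocates nothing for the objects themselves;
# memory for properties is allocated only when instances are created:
santro = Car("Santro Xing", 2004, "silver", 150)
alto = Car("Alto", 2006, "red", 140)
print(santro.start())  # Santro Xing started
```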
http://www.sukesh-marla.com/2010_10_25_archive.html
4.25
Definition
By Mayo Clinic staff

Hypothermia is a medical emergency that occurs when your body loses heat faster than it can produce heat, causing a dangerously low body temperature. Normal body temperature is around 98.6 F (37 C). Hypothermia (hi-po-THUR-me-uh) occurs as your body temperature falls below 95 F (35 C). When your body temperature drops, your heart, nervous system and other organs can't work correctly. Left untreated, hypothermia can eventually lead to complete failure of your heart and respiratory system and to death. Hypothermia is most often caused by exposure to cold weather or immersion in a cold body of water. Primary treatments for hypothermia are methods to warm the body back to a normal temperature.
http://www.mayoclinic.com/health/hypothermia/DS00333
4.46875
Compact Objects - What are they? When astronomers refer to “compact objects,” they are generally referring to objects significantly more dense than a star or a planet. For example, white dwarfs or neutron stars are extremely dense stars that have collapsed, no longer able to produce a sufficient amount of pressure within to prevent their outer layers from falling into their centers. Under extreme conditions, these collapses can trigger the formation of a black hole – a region of space in which gravity is so strong that even light cannot escape. Black Holes – Very Massive Black holes with masses comparable to that of the Sun are scattered through the Milky Way and neighboring galaxies. Scientists also have found strong evidence of massive black holes – a million or more times more massive than the Sun – in the centers of many galaxies. In fact, one of these supermassive black holes sits at the heart of our very own Milky Way. Detecting Black Holes Compact objects are difficult to observe directly. Fortunately any ordinary matter falling toward them – or disappearing entirely into a black hole – tends to heat up and radiate in the process. Sometimes great streams, or “jets,” of matter and energy surge into space at velocities nearly equal to the speed of light. As matter falls onto a compact object, energy is often released in the form of X-rays and gamma-rays. Likewise, rotating neutron stars can produce copious amounts of high-energy radiation but because Earth's atmosphere absorbs most of this kind of radiation, observations must be made from space. Measurements have revealed that these phenomena are responsible for the highest energies yet detected in the universe.
http://kipac.stanford.edu/kipac/compact_objects_what
4.03125
Dates When Postal Zones and ZIP Codes Started

POSTAL ZONES - You may have noticed that many addresses during the period between 1943 and 1963 had a one- or two-digit number following the city name. These numbers were postal zones. It may surprise you to learn that postal zones were instituted in 1943, during WWII. They were necessary because many postal clerks had gone into the service and the new, inexperienced postal clerks were having trouble sorting the mail. The zone system was put in place to make things easier.

ZIP CODES - By 1963, most first-class mail in the United States was generated by a small number of large-volume mailers, so the Post Office Department devised a plan to speed handling and delivery of letter mail. By this time most businesses had automated mailing systems that could easily handle the 5 digits that would allow mailings to bypass as many as six mail-handling steps. ZIP codes went into effect on July 1, 1963. ZIP stood for Zone Improvement Plan.
http://www.oldstuffonly.com/zip_code_date.asp
4.0625
Key Unit Questions 1. How are elements similar and different from one another? 2. What are the properties of each element studied? 3. How do elements react with other elements to form compounds? View our Project Overview Slide Show (.pps) Since the study of Matter is a required unit of study in 5th-8th grades in California and many other states, this unit would be a very valuable resource for 5th-8th grade Science. It can also be simplified for use in 3rd grade. Other grades would likely find it very valuable wherever it would fit into their curriculum. Replication - How Other Teachers Can Easily Adapt This Unit Teachers have permission to use all original work in this unit as long as it is not for profit and it is for educational use. You may use the entire unit or you may modify those sections
http://www.sjteach.org/matter.html
4.15625
The definition of pressure as taught in school is force per area. Written as a formula: p = F/A. It's measured in newtons per square metre (N/m²), more usually called 'pascals' (Pa). A more detailed description of pressure, its uses and its units is found after this first illustrating example. 65 kg on a surface of 2 cm² (eg, high-heel shoes) will result in a pressure of 3 250 000 Pa (beneath the high heels, if the person is standing on the surface of planet Earth, with g rounded to 10 m/s²). A four-ton elephant, on the other hand, standing on one foot will cause a pressure of only 250 000 Pa under that foot. As an exercise, try to calculate the area of the aforementioned foot (with the same rounded g, the answer is 0.16 m²).

The easy definition above can be seen as a descriptive mechanistic definition of pressure. There are, however, more ways to interpret the phenomenon of pressure, especially when the objects exerting the pressure cannot be weighed, or have their area measured, in the conventional way. Gases and liquids, for example, are not as easy to weigh. Furthermore, they can be compressed and they can flow, facts which change some aspects of the interpretation of pressure.

Pressure and Thermodynamics

In thermodynamics, pressure can be seen as a potential, especially when The Gas State Equation is used to calculate work and heat. Thus, if one has two compartments containing gases with different pressures, such as a full scuba-diving compressed-air cylinder and an empty one, then one can propel a turbine and generate energy by letting the gas stream out of the full cylinder into the empty one. When both compartments have the same pressure, then no more energy can be gained, as no gas would flow. This is because there is no potential difference between the compartments. Pressure can also be defined as the transmission of the momentum of every particle stuff is made of; in other words, the force of every single atom that bumps into a surface. The overall pressure is given by the sum over all pressures of all individual particles. This individual pressure is again the quotient of force and area: the force is given by the particle's velocity and mass, while the area is dictated by the particle's size. From this relationship, people can calculate the effective size of gas molecules.

Dynamic and Other Forms of Pressure

Forces can also increase and decrease with time. A phenomenon which varies with time is known as a 'dynamic'. There is, therefore, also a dynamic dimension to pressure: 'dynamic pressure'. This is crucial for the mechanisms involved in flying, but dynamic pressure is also involved in calculating football banana-kicks, among many other things. There are more abstract forms of pressure, such as the pressure of a solvent upon a semi-permeable membrane: a phenomenon called osmosis. Sometimes one may find a pressure measurement given as 'head pressure', in metres. Head pressure is the pressure exerted by a column of a fluid with a certain height. For example, a 30 m head pressure for water would be equivalent to the pressure on the bottom of a 30 m high water column. To calculate the pressure, one must know the density (ρ) of the fluid and the gravitational acceleration (g). The pressure (P) is then directly proportional to the height (h):

P = ρ·g·h

The pressure is then given in pascals (Pa). For water, 1 metre of head pressure is roughly equivalent to 9800 Pa. Humans can suck water up a straw to a head of at most about 7 m, though the figure achieved is usually a lot less.
The formula above is also useful to calculate pressures at certain depths, for instance when scuba diving: a 10 m depth corresponds roughly to 1 atm. It is also worth noting that there is no such thing as an absolute negative pressure, just as with absolute temperatures. A 'negative' pressure can only represent a pressure difference (most commonly against atmospheric pressure). Those pressures are sometimes termed gauge pressure, ie, the pressure indicated. This is in contrast to 'absolute pressure', which refers to all pressures together - atmospheric pressure plus gauge pressure.

Since pressure is dictated by two parameters, namely force and area, all one must do to measure the pressure is to keep one of them constant and measure the other one. In 99% of cases the area is kept constant and the force is measured. Normally, this is done by measuring the deformation of a membrane or of a coil. In some cases this is done by measuring the height of a fluid column, like mercury (hence the torr/mmHg unit below) or water. The gadgets used to measure pressure are called barometers. Some are called manometers, which are used to measure gas pressure. They work according to the following principle: the pressure of a gas pushes a piston, a membrane or a liquid against a coil or another gas with a known pressure. The mechanical deformation of those materials is visualized electronically, with a needle or on a scale behind the coil, and converted into the appropriate units of pressure. As with most measuring apparatus, most barometers must be recalibrated every now and then, to make sure what they are indicating is correct.

For those who want some proof that physicists are human, the proof is in the idiocy of all the different units which they use... - RP Feynman

Units are one of the biggest problems in science. There are many allegedly clever ways to relate pressure with a unit that any average Joe can understand. However, a 'newton per square metre' can get Americans confused. In America the unit psi, which is 'pounds per square inch', is used instead. This is why car tyre pressures are often measured in psi. One psi corresponds approximately to 6894.757 Pa. Many people also use the unit atm, which stands for 'atmosphere', because it's something they apparently relate to. One atm is 101,325 Pa. There are even more units used for inexplicable reasons. In any case here goes a smart conversion table:

Smart Unit Conversion Table for Pressure
| |Pa|bar|atm|torr|psi|
|1 Pa|1|0.00001|9.8692·10⁻⁶|0.0075|1.4504·10⁻⁴|
|1 psi|6894.757|0.068948|0.068046|51.7151|1|

There are even more obscure units like 'foot of H2O' (1 Pa ≈ 0.00034 ftH2O) or 'pound-force per square foot' (1 Pa ≈ 0.02088 lbf/ft²), but these are used only by very special freaks in very special contexts.
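The formulas p = F/A and P = ρ·g·h, together with the conversion factors above, are easy to check numerically. A minimal Python sketch; it uses the standard g = 9.81 m/s² (the article's worked examples round g to 10), and the function names are mine:

```python
# Constants: gravitational acceleration and the density of water.
G = 9.81          # m/s^2
RHO_WATER = 1000  # kg/m^3
PA_PER = {"pa": 1.0, "bar": 1e5, "atm": 101325.0,
          "torr": 133.322, "psi": 6894.757}

def pressure(force_n: float, area_m2: float) -> float:
    """p = F / A, in pascals."""
    return force_n / area_m2

def head_pressure(height_m: float, rho: float = RHO_WATER) -> float:
    """P = rho * g * h, in pascals (gauge, i.e. above atmospheric)."""
    return rho * G * height_m

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Route any conversion through pascals."""
    return value * PA_PER[from_unit] / PA_PER[to_unit]

# High-heel example: 65 kg on 2 cm^2 gives about 3.19 MPa
# (the article rounds g to 10 m/s^2, giving its 3 250 000 Pa).
print(pressure(65 * G, 2e-4))
# 10 m of water is roughly 1 atm, as stated for scuba diving:
print(convert(head_pressure(10), "pa", "atm"))  # ~0.97
# A 2 atm car tyre expressed in psi:
print(convert(2, "atm", "psi"))  # ~29.4
```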
Pressure in Everyday Life (Some Figures)
- 10⁻²⁰ atm - The pressure in space-vacuum
- 10⁻¹⁶ atm - The lowest pressure ever achieved by a man-made gizmo
- 10⁻⁶ atm - Ordinary vacuum pumps
- 10⁻² atm - The pressure in a common light bulb
- 0.5 - 1.5 atm - Atmospheric pressure
- 1.5 - 2.4 atm - Car tyres
- 3 - 7 atm - Flatus
- 4 - 12 atm - Bicycle tyres
- 10 atm - The pressure inside the cylinder cavity in a car's engine
- 100 - 500 atm - Compressed gas cylinders
- 500 atm - The impact pressure of a karate fist punch
- 1000 atm - The pressure at the bottom of the Mariana Trench
- 7000 atm - Water compressors
- 10⁶ atm - The pressure at the centre of the earth, and also the highest pressure ever achieved by a man-made machine (diamond anvil)
- 10¹¹ atm - The pressure at the centre of the sun - enough to ignite fusion reactions
- Approx. 10²⁹ atm - The pressure at the centre of a neutron star.
http://h2g2.com/dna/h2g2/alabaster/A835409/
4
The incredibly fertile plains of the Nile river encouraged settlement by Neolithic communities. From these communities arose (circa 5000 BCE) the villages and towns that would form the regional districts of Egyptian history. These districts were later called "nomes" by the Greeks. The nomes were united broadly in culture, but each was ruled separately by what amounted to a tribal chieftain. Each nome also seemed to have its own tutelary god, for whom the tribal chieftain was considered the sacral king. The basis of Egyptian religion and government, and the almost total lack of distinction between them, was already laid. Sometime before 3000 BCE the nomes around the Delta region of the Nile, which entered into the Mediterranean, were united into what was called the Red Land. Similarly, the nomes south of the Delta were united into the White Land. Over the course of a few generations, kings from the south established control over the north. Egypt was united as the Two Lands around 2700 BCE, with the capital at Memphis. The Egyptians also referred to their land as "Kemet," meaning "Black Land," after the color of the fertile silt of the Nile River. Within several generations of the unification of the Red and White Lands, Egypt became a highly centralized state, united by a god-king and an imperial bureaucracy. The King was in essence a god on earth, and the chief mediator between humanity and the higher gods of the heavens. This was much like the archaic nome chieftains and their tutelary demons, but on a much grander scale. The Kings ordered construction of pyramids, specialized burial chambers from whence it was believed their souls would ascend to the heavens and reside with the gods. At this point in history, eternal life was considered the province of the King only. Beneath the king were his priests, the nobles of the court, the local notables in the nomes, and the scribes and other staff of the bureaucracy. A small part-time army was retained, but Egypt's vast deserts helped defend the country from outsiders. The remaining 80% of the population were serfs. They spent three months of the year farming the Nile. The rest of the year they were conscripted by the State for various building projects. The Biblical account of foreign slaves constructing pyramids is not supported by history or archaeology. Egypt traded with some of her neighbors and occasionally went to war.

First Intermediate Period

The God-Kings of the Old Kingdom had built so lavishly on their burial chambers that it actually placed a severe strain on the economy. Furthermore, the nobles and the priests were wresting power and money away from the central court. The central government at Memphis collapsed circa 2180 BCE. The local notables waged civil war for the throne. In the chaos that resulted, Asiatic nomads infiltrated the country. Everywhere there was chaos, followed by famine and disease. The illusion of an all-powerful God-King ruling timelessly over Egypt was shattered. People no longer felt that the King alone was entitled to eternal life or the protection of the gods. "The Democratization of the Afterlife" began in this phase, in which all those who submitted to such venerable deities as Aset (Isis) and Wesir (Osiris) could be granted eternal life. This was one of the most profound moments in the evolution of Religious Thought, exerting a strong influence on Paganism ever since, and providing fertile ground for the later Christian religion to follow.
The chaos gradually subsided and a new line of Kings emerged around 1991 BCE. It-Towy became the political capital, and Thebes the chief religious city. The powers of the nobles were curtailed by the central court, and a new middle class of skilled labor and traders emerged to replace them. However, about this time the King came increasingly to share power with his deputy, called the vizier, who would later become a force in politics in his own right. On the foreign front, Egypt re-initiated trading relations and military campaigns to regain the international status it had lost in the previous anarchy. It was during this time frame that Egypt made contact with Minos, the proto-Hellenic civilization. On the domestic front, the new kings resumed the construction of pyramids and other burial chambers, despite the fact that their economic burden had contributed to the social collapse of the previous era.

Second Intermediate Period

By 1786 BCE, Egypt had again fallen into chaos. A series of weak kings came to the throne. During this time frame, the vizier, the king's deputy, may have been the actual power behind the throne. Rival pretenders to the throne established separate dynasties. With the country weakened, a group of Asiatic invaders called the Hyksos gradually infiltrated the country. They took over the North. The South was not conquered but had to pay tribute. Later Egyptians would say the Hyksos were cruel, malicious tyrants, though the archaeological records do not bear this out. In any event, the Egyptian psyche was so disturbed by this conquest that they grew increasingly xenophobic. Rulers from the religious center of Thebes in the south organized a revolt and expelled the Hyksos back to West Asia. The Theban princes established themselves as the new rulers of Egypt around 1570 BCE. The mood of Egypt had changed. A once generally peaceful people were altered by the Hyksos invasion. They grew increasingly warlike and xenophobic, and desired an overseas empire to defend themselves and increase their status. The Thebans used the military lessons learned fighting the Hyksos to establish Egypt's first professional army and conquered parts of West Asia. The military became the second most important institution in the New Kingdom. The most important institution was the clergy. The Thebans attributed their success to their chief god Amun. The Egyptians, already a deeply religious people, became all the more so after the Hyksos were repelled. Amun, the Theban god of winds, was associated with Re, the solar god of the Old Kingdom. This new "Amen-Re" was regarded as an all-powerful creator and warrior deity, the patron of the New Kingdom. The Theban priests of Amen-Re became the powers behind the throne as they confirmed the right of the king to rule. The priesthood also came to exercise considerable influence over the New Kingdom's burgeoning economy. In an effort to wrest political and religious authority from the priesthood and return it to the Monarchy, a King by the name of Amenhotep revolted. Amenhotep denied the existence of Amen-Re, and indeed all other gods. According to him, the only god was Aten, the sun disk, and he was the high priest of this one true god. Amenhotep renamed himself Akhenaten ("servant of Aten"). He closed all the temples and constructed a new capital around his solar monotheism. Not only the priests but the people at large were scandalized. Once he died, the Theban priests again took control.
The cults of the old gods were restored and it would be centuries before monotheism was again inflicted upon the population. Bolstered by a strong military and a feeling of religious patriotism, the Egyptians of the New Kingdom successfully repelled an invasion of the Sea Peoples. The Sea Peoples were a mysterious race of marauders who had managed to destabilize other parts of the Mediterranean. The New Kingdom finally abandoned the practice of pyramid building. Instead they buried kings in rock-cut tombs. It was in such a tomb that Tutankhamun was famously discovered.

Third Intermediate Period

Egypt had reached a height of power in the New Kingdom that it would never recover. Around 1089 BCE the Theban princes took full control of the south of the country, leaving the monarchy with the north. The country was divided, and the monarchy of the north was weak. Kings from Libya and then from Nubia took over parts of Egypt. Foreign-ruled Egypt came into conflict with the new power in the ancient world, Assyria. The Assyrians destroyed Memphis and placed puppet rulers in parts of Egypt. The Egyptians gradually regained control of their country from all foreign influences, but their international power and overseas empire were ruined. The Kings could only retain control by hiring large numbers of mercenaries, who were increasingly Greek. At this time Assyria fell to Babylon, which in turn fell to Persia. Persia then subjected Egypt to foreign control again. Xerxes was considered a cruel occupier, and when the Persian Empire was overthrown the Egyptians welcomed their new liberator, though he was also foreign. Alexander the Great defeated the Persian Empire. So the legend goes, the Egyptians greeted him with open arms after the oracle of Amun declared him a god on earth and the rightful King of Egypt. Alexander founded Alexandria on the Delta to become the new administrative capital, and it quickly became the largest port city in the eastern Mediterranean. Alexandria would become a melting pot of Egyptian, Greco-Macedonian, Jewish and other foreign influences. Alexander died and leadership of Egypt passed to his governor, Ptolemy. Ptolemy ruled as a Pharaoh. Ptolemy and his successors tried to "modernize" the economic and political administration of Egypt to increase output. This seems to have worked, and the new-found wealth was poured into massive building projects and other affairs of state. The majority of Egyptians did not benefit. The reforms and new economy favored the new regime and its new Greco-Macedonian administrative class. Some Egyptians did move up the social hierarchy by becoming "Greek" in educational terms, but the majority of the populace did not seem able or inclined to trade their culture for that of the occupiers. Despite the Ptolemaic regime taking a relatively innocuous approach to ruling its Egyptian subjects, there were frequent uprisings by the natives.

Roman and Byzantine Period

The growing shadow of Rome from the Mediterranean coincided with the successive degeneration of the Ptolemaic regime. Under weak rulers, Ptolemaic Egypt watched as Rome devoured the other Hellenistic kingdoms. Only one Ptolemy was wily enough to meet the Romans at their game, and that was Cleopatra (the VII). After wooing Julius Caesar and using him for internal Egyptian politics, Cleopatra turned to Marc Antony when Caesar was assassinated. After a failed bid for military supremacy of the Roman world, Cleopatra and Antony killed themselves.
Augustus positioned himself as Pharaoh and had an equestrian appointed to rule Egypt as a direct imperial territory. The main Roman interest in Egypt was the grain of the fertile Nile. However, the general economic power of Alexandria, the second largest city in the Empire, was not ignored. The Romans retained the Ptolemaic administration but did introduce Roman legal reforms. The Romans allowed the remains of the Egyptian priesthood to operate so long as they supported the imperial cult. The Egyptians for their part seem to have been largely apathetic to the Romans, but a few Egyptians (no doubt largely Hellenized by education) did become Senators. The main event of the Roman rule was the introduction of Christianity. Introduced by Hellenized Jews in Alexandria, it spread quickly to the rest of the population. The cult of Mary and Jesus was facilitated by its iconographic resemblance to the cult of Isis and Horus, and perhaps to the extent that early Christianity was anti-Roman it was taken up enthusiastically by the lower urban classes. Coptic was developed as a literary and religious language at this point. As the Western Empire degraded, the grain supply shifted to the new Eastern capital of Constantinople. Egypt would become a province of the East, one involved in the various religious controversies of the Byzantines. Egypt was later conquered by Islam, and its culture completely taken over by the new faith. Not until French and British troops occupied the country in the nineteenth and twentieth centuries would Egypt and its past be significantly reintroduced into the Western consciousness.
http://www.unrv.com/provinces/brief-history-of-egypt.php
4
History Of Pan American World Airways

Pan American World Airways was one of several carriers that were created and flourished as a result of the Air Mail Act of 1925 (Kelly Act). The Air Mail Act of 1925 was the first major piece of legislation created by Congress that would affect the aviation industry. The Act authorized the awarding of government mail contracts to private carriers and established rates for transporting mail. This Act inspired aircraft owners and investors to start up air carrier services providing airmail service, as it was very profitable.

Pan American World Airways was one of several air carriers that grew out of the Kelly Act. Pan American Airways had procured a lucrative airmail contract from the United States Postal Service in 1927. The contract was to deliver mail between the United States and Cuba: the Key West, Florida–Havana mail route. The airmail service to Cuba proved to be a very profitable route for Pan Am. Its owner, Juan Terry Trippe, had such success that by 1930 he had expanded services between the United States, Mexico and Latin America. Although Pan Am was providing some passenger service, it was not the bulk of its business. During this period, air carriers in general did not concentrate their energies on passenger service. As airmail contracts were much more profitable per air-mile and aircraft were limited in gross weight (roughly around 3,500 pounds), it was much more cost-effective for air carriers to provide air service for cargo than for passengers.

However, with the Airmail Act of 1930, air carriers were forced to contend with passenger carriage. The Airmail Act of 1930 changed how airmail contracts were to be awarded to air carriers. The act in essence forced air carriers to purchase larger aircraft, which in turn placed a carrier in a position to bid on postal contracts, increasing the likelihood of being awarded airmail contracts. To remain competitive, air carriers now had to fill space on the aircraft with passengers. This act also created a frenzy within the industry: for the first time in history, air carriers were being swallowed up by more solvent carriers, in an effort to increase aircraft inventories and to acquire air routes to which those carriers had already received rights. By the mid-1930s, Pan Am had taken over several smaller air carriers, in an effort to strengthen its markets and to gain access to new air passenger and airmail markets.

On May 20, 1939, Pan Am launched the first U.S. passenger service to Europe, using the Boeing 314 Yankee Clipper, the "flying boat". With the explosion of the Zeppelin Hindenburg in 1937, the flying boat replaced the need for such airships as the Hindenburg. However, the flying boat soon became obsolete. With the United States entering World War II, Pan Am began providing military transport of US troops into Europe, Africa and Asia. This gave Pan Am insight into, and an edge over other carriers in, that part of the world. By the end of the war, Pan Am had established passenger and cargo routes throughout the continents of Africa, Europe and Asia. By the mid-1970s, Pan Am had changed its name to Pan American World Airways, acquired several air carriers such as American Overseas Airlines and National Airlines, increased its passenger and cargo services to include such routes as New York to London, and become one of the world's largest air carriers providing passenger and air cargo service.
By this time Pan Am's aircraft fleet included the Boeing 727 and 737, Douglas DC-10, Airbus A300 and A310, and Lockheed L-1011. By the mid-1980s, US air carriers' profits and ability to remain competitive began to weaken, brought on by a worldwide airline recession, airline airfare wars and fuel costs. Pan Am was beginning to lose ground. Of the US airline giants, Delta Air Lines, United and American Airlines appeared to maintain their leading positions; Delta had achieved major carrier status after deregulation. Pan Am's position in foreign markets placed it in still harder times than other air carriers, due to the turmoil in world politics. Pan Am aircraft were now being used as a conduit by third-world countries in an effort to change world politics. An example of this was the bombing of Pan Am flight 103 over Lockerbie, Scotland in 1988, in which all 259 people aboard were killed. It was believed that Iran was involved: in London, an anonymous caller phoned the Associated Press to claim that Pan Am flight 103 had been attacked in retaliation for a US Navy warship shooting down an Iranian Airbus the previous July, in which 290 people were killed. The downing of flight 103, and the trend for terrorists to attack US air carriers, significantly hurt Pan Am.

In addition, Pan Am's aircraft fleet was getting old, and it could not afford to purchase new aircraft. Keeping its planes flying was costing Pan Am more than it could afford: it was paying more than the industry as a whole in maintenance costs, spending over $800 an hour on maintenance for every hour an aircraft was in flight carrying passengers, whereas Delta Air Lines was paying just under $400 an hour. With the airline price wars, Pan Am did not have a chance. In 1991, Pan American World Airways filed for bankruptcy and closed its doors. In early 1997, Pan American Airways reopened its doors and began providing services out of Miami, Florida, but once again had to shut its doors in 1998, unable to pay its creditors.
http://avstop.com/history/historyofairlines/panam.htm
4.34375
Residential wood heating patterns differ depending on the season and the weather conditions. The levels of wood smoke pollution vary accordingly. In some neighbourhoods the heating patterns during the winter months cause an increase in wood smoke production. Together with weather and local topography, this affects the air quality in these neighbourhoods. In winter the air can become loaded with the products of incomplete combustion, such as particulate matter (PM), volatile organic compounds (VOCs), carbon monoxide (CO) and nitrogen oxides (NOx). The severity of the resulting winter smog depends on the degree of atmospheric dispersion: the lower the level of atmospheric dispersion, the higher the level of winter smog. Atmospheric dispersion is mainly determined by wind speed and mixing height.

1. Wind speed pushes and disperses the pollutants horizontally. No wind means stagnant air and allows levels of pollutants to build up in the air (smog).

2. The mixing height refers to the maximum height the pollutants can reach if dispersed vertically. In normal situations, the mixing height is enough to disperse the pollutants high into the atmosphere: the pollutants are carried up by the layer of warm rising air to the colder air higher up (see diagram). In the case of a temperature inversion, the pollutants are trapped at ground level, where they cause the most harm. This inversion occurs, for example, ahead of a warm front or in a broad surface ridge. Cold air becomes trapped under a layer of warm air that acts as a lid. The pollutants in the cooler layer cannot be dispersed and stay concentrated at ground level (see diagram).

Topography also plays an important role in the concentration of pollutant levels. The physical "walls" of a valley, for example, restrict air movement in the valley. High levels of pollutants in the air cannot be sufficiently dispersed, and communities located in the valley will be covered in smog. A community located on an open plain will not have this dispersion problem. Smog also arises in places with stretches of rolling terrain, where cold air can get trapped in the terrain's many pockets and cause temperature inversions.
http://www.ec.gc.ca/Air/default.asp?lang=En&n=AFF4D58F-1
4.3125
New study is part of a broader effort to understand the early years of the universe, after the big bang. For much of the universe's first billion years, the searing brightness born of the big bang faded to black. This dark age represents the least-understood chapter in the history of the cosmos scientists have compiled. On Friday, researchers report they have glimpsed – via computer simulations – the birth of the first small, stable clumps of gas that would have served as seeds for the first generation of stars. Within 10,000 years, the scientists say, these seeds would blossom into blazing orbs at least 100 times more massive than the sun. The simulation is part of a broad effort to fill the dark-age gap. Astronomers worldwide are pushing ground-based optical telescopes to their limits, building vast radiotelescope arrays and looking to a new generation of space- and ground-based telescopes to probe this crucial period. The nuclear furnaces in the first stars would have formed the first atoms of carbon, silicon, oxygen, and other heavy elements, researchers hold. These elements would become incorporated into later generations of stars, which in turn would add their contributions to the chemical inventory. Over time, clusters of stars would form galaxies whose combined radiation would eventually shift the cosmos from opaque to transparent. The heavier elements the stars forged and launched into the cosmos would form basic organic and inorganic molecules, and become the raw material for planets. "We have a good understanding of what the universe looked like shortly after it originated about 14 billion years ago. We also have a good idea of what the universe looks like now," says Lars Hernquist, a Harvard University astrophysicist and member of the team. "But there's a significant gap in our understanding of how the universe made this transition from what it looked like after the big bang to how it appears to us today." Until new tools can peer more deeply into that gap, simulations remain the only vehicles for exploring the transition. In a young, dark universe
http://m.csmonitor.com/USA/2008/0802/p01s01-usgn.html
4.03125
Each year, many Americans are stung by bees, wasps, hornets, yellow jackets, and fire ants. These insects, members of the order Hymenoptera, inject venom into their victims when they sting. Some people develop severe allergic reactions when their immune systems react to the venom. An allergic reaction to an insect sting follows this process: after a first sting, an allergic person's body produces an allergic substance called IgE antibody, which reacts with the insect venom; if the person is stung again, the venom interacts with this IgE antibody, triggering the release of histamine and other chemicals that produce the allergic symptoms. Reactions to stings commonly last only a few hours. Redness and swelling may develop at the site of the sting; pain and itching are also common. Occasionally, these reactions will grow larger and last as long as two weeks. This is called a large local reaction. Depending on its severity, this type of reaction may also require medical treatment. For a very small number of people allergic to insect venoms, such stings may be life-threatening. Severe allergic reactions to insect stings can involve many organ systems of the body and may develop rapidly after the insect stings. These symptoms can include itching and hives over much of the body, swelling in the throat or tongue, difficulty in breathing, dizziness, severe headache, stomach cramps, nausea, or diarrhea. In severe cases, a rapid fall in blood pressure may result in shock and loss of consciousness. All of these symptoms indicate a type of serious allergic reaction called anaphylaxis. Anaphylaxis is a medical emergency, and may be fatal if the sting victim does not obtain immediate medical treatment. To avoid stinging insects, it's important to learn what they look like and where they live. Most stings are from five types of insects: yellow jackets, honeybees, fire ants, paper wasps, and hornets. Yellow jackets are black and have yellow markings (see Figure 1). The queens measure about 3/4 inch long; the males and workers are about 1/2 inch long. These insects are found from arctic to tropical regions, but are less common in the southwestern United States. Yellow jackets' nests are made of a papier-mâché material they produce by chewing up rotted wood, dead stems and leaves, or paper and cardboard. Their nests are usually underground, but can sometimes be found inside the walls of frame buildings, in cracks in masonry or in woodpiles. They become more aggressive over the summer and are commonly found around foods and sweets. They are easily agitated. Honeybees are about 1/2 inch long, and have a rounded hairy body with dark brown coloring and yellow markings (see Figure 2). They have a barbed stinger that they commonly leave in a victim upon stinging; this is an excellent way to identify the stinging insect. After stinging they die; they are fairly nonaggressive and will only sting when provoked. A common setting for a sting is bare feet in a lawn of clover. However, Africanized honeybees, or so-called "killer bees," which are found in the southwestern United States and South and Central America, are more aggressive and may sting in swarms. Domesticated honeybees live in man-made hives, while feral honeybees live in colonies, building nests in hollow trees or in the cavities of buildings. Hives are constructed of beeswax and consist of repeating parallel vertical combs, or "honeycomb." Africanized honeybees are less selective about nesting sites. Common sites include holes in exteriors of homes, between fence posts, in old tires or holes in the ground, or any other partially protected site.
Paper wasps' slender, elongated bodies are 1/2 inch to one inch long and are black, brown, or red, usually with yellow markings (see Figure 3). Their nests are also made of a paper-like material that they produce, and are composed of a circular comb of cells that open downward. The nests are often located under eaves, behind shutters, or in shrubs or woodpiles. Hornets are black or brown with white, orange or yellow markings and are usually larger than yellow jackets, typically 3/4 inch to one inch long (see Figure 4). Their nests are gray or brown and football shaped, and are made of a paper material similar to that of yellow jackets' nests. Hornets' nests are usually found high above ground on branches of trees, in shrubbery, or on gables; one type nests in tree hollows. Fire ants are reddish brown and approximately 1/8 inch in length (see Figure 5). They build their colonies in the ground, with prominent mounds (see Figure 6). These fire ant beds are often found along the borders of sidewalks, driveways and along roadsides. Eliminating these colonies is difficult, since most of the nest is underground, and numerous colonies can be found in the same area. Fire ants are typically found in Southern states (South Carolina to Florida and Texas). Stay out of the "territory" of the stinging insects' nests to avoid encountering large numbers of them. Since all of these social insects will sting if their homes are disturbed, it is important to destroy hives and nests around your home. The insect-allergic person should not perform or be near this potentially dangerous activity; a trained exterminator should be employed. Inspect the home and yard weekly, especially in spring and summer, to detect new hives or nests. Paper wasp nests may be eliminated by spraying with a contact insecticide on a cool night. Fire ant nests are best treated with approved baits, but can also be treated with an approved drench insecticide. Individuals treating nests should wear appropriate protective clothing. Remain calm and quiet when you encounter any flying stinging insects and move slowly without flailing your arms. Since the smell of food attracts insects, be careful when cooking, eating, or drinking sweet drinks like soda or juice outdoors. Keep food covered until eaten, especially soda and juice cans. Insects are attracted to trash containers; keep these areas clean, cover garbage, and use natural insecticide sprays. Insects are also drawn to bright colors and fragrances. Because honeybees gather nectar from clover and other ground plants, wear closed-toe shoes outdoors and avoid going barefoot. Swimming pools and flower gardens are particularly high-risk areas. Avoid loose-fitting garments that can trap insects between material and skin. Gardeners should take additional precautions: accidentally disturbing a nest will irritate the insects, inciting them to sting. Watch out for nests in trees, shrubs, woodpiles, under the eaves of the house, and in other protected places. Use hedge clippers, power mowers, and tractors with caution. Severely insect-allergic people should not participate in outdoor activities alone, because if they are stung, they may require assistance in receiving prompt emergency treatment. Among Hymenoptera species, the honeybee commonly leaves a stinger (with venom sac attached) in the skin of its victim. Since it takes several minutes for the venom sac to inject all of the venom, immediate removal of the stinger and sac (within 30 seconds) will limit the amount of venom injected.
A quick scrape of the fingernail removes the stinger and sac. Avoid squeezing the sac, since this forces more venom through the stinger and into the skin. Hornets, wasps, and yellow jackets do not usually leave their stingers in their victims. These insects should be brushed from the victim's skin promptly with deliberate movements to prevent additional stings. The person should then quietly and immediately leave the area. Fire ants should also be carefully brushed off to prevent repeated stings. If you/your child begin experiencing any of the serious allergic symptoms described previously, have someone take you/your child to an emergency room immediately. Insect sting reactions can be serious and require immediate medical treatment. If you/your child have had reactions before, you/your child should carry an "on the spot" short-term treatment for severe allergic reactions: a self-injectable epinephrine shot. The single- or double-dose syringe is pre-filled with epinephrine, which reduces allergic reactions. Use the injector according to the instructions, and carry it with you/your child whenever you/your child will be outdoors during insect season. Replace the device before the labeled expiration date. You/your child should become proficient in self-administering epinephrine in the event that you/your child are stung while alone and develop a sudden anaphylactic reaction. Remember that injectable epinephrine is emergency, rescue medication only. If you/your child use the epinephrine injection, you/your child must still have someone take you/your child to an emergency room immediately. Additional medical treatments may be necessary for the management of some insect sting reactions. After taking a careful history, we will recommend testing to determine whether the person has an allergy and which type of stinging insect caused the reaction. Skin or blood (RAST) testing for insect allergy is used to detect the presence of significant amounts of IgE antibody in the patient. Those who are severely allergic to the venom of stinging insects should receive venom immunotherapy, a highly effective vaccination program that actually prevents future allergic sting reactions in 97 percent of treated patients. It is the closest thing to a "cure" and is highly recommended. The indication should be discussed carefully with the allergist; children and adults are treated differently. During immunotherapy, the dose of venom extract is gradually increased every few weeks over a period of three to five years. This helps the patient's immune system become more and more resistant to future insect stings. Those who have severe allergies may also want to wear a special identification tag in the form of a bracelet or necklace that identifies the wearer as having a severe insect sting allergy. These tags can also supply other important information about the patient's medical condition. Adapted from the American Academy of Allergy Asthma and Immunology's Tips to Remember. Reviewed by: Allergy Section. Date: April 2004
http://www.chop.edu/service/allergy/allergy-and-asthma-information/stinging-insect-allergy.html
4.34375
Another important ingredient of music is tone. The pitch of a tone is the listener’s evaluation of the frequency (the number of vibrations per second, usually given in hertz) and is perceived as how high or low a note sounds. The higher the frequency, the higher the pitch; the lower the frequency, the lower the pitch. Pythagoras discovered that the frequency (pitch) of a vibrating string is inversely proportional to its length: doubling the length of a vibrating string lowers the pitch by one octave. Standards of exact pitch have changed many times over the centuries. The United States and Great Britain eventually adopted A = 440 hertz as the standard pitch in the 20th century, and that is what all piano tuners use as a basis for tuning. How loud a tone sounds is the listener’s perception of the amplitude. The larger the amplitude, the louder the tone; the smaller the amplitude, the softer the tone. There is a set of markings that tells the performer how loud or how soft to play a part. The loudest is fff, which means to play very, very loud. The markings go down in eight steps: fff, ff, f, mf, mp, p, pp, and ppp, ranging from very, very loud to very, very soft. These marks are called dynamics. Knowing that the frequency of Middle A is 440 Hz, what is the frequency of the A below Middle A?
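Since each step down an octave halves the frequency, the closing question can be checked with a few lines of code. The following Python sketch is ours, not part of the original lesson; the only musical facts it assumes are the A = 440 Hz standard and the octave-halving rule described above.

```python
# Frequencies of the note A in neighboring octaves, assuming
# A = 440 Hz and that each octave up doubles the frequency
# (each octave down halves it).

def a_frequency(octaves_from_middle_a: int) -> float:
    """Frequency of the A a given number of octaves above (positive)
    or below (negative) the 440 Hz standard."""
    return 440.0 * (2.0 ** octaves_from_middle_a)

for n in range(-2, 3):
    print(f"A {n:+d} octave(s) away: {a_frequency(n):g} Hz")

# The A below Middle A is one octave down: 440 / 2 = 220 Hz.
```

Running the sketch answers the closing question: the A below Middle A vibrates at 220 Hz.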
http://library.thinkquest.org/4116/Music/tone.htm
4
Blackfoot after 1500 Throughout the 1500s and 1600s AD, the Blackfoot continued to live in the same way they had lived before 1500. But the lives of Blackfoot people changed a lot in about 1730 AD, when they got horses from other North American tribes. Once they had horses, they could hunt buffalo and get their food more easily than from farming or gathering. They also got guns in trade about the same time. Also, white settlers were pushing the Sioux further west, and the Sioux were crowding out the Cree, the Crow, and the Blackfoot. Soon, like the Cree and the Crow, the Blackfoot abandoned their land near the Great Lakes and traveled west to the Great Plains to hunt buffalo full-time. By 1800, the Blackfoot nation controlled a lot of north-western North America (the modern provinces of Saskatchewan and Alberta in Canada, and the modern state of Montana in the United States). This was a lot of land, and the Blackfoot nation was powerful and successful. In this period, Blackfoot people were nomads. In the summer, they followed the buffalo and hunted them for most of their food. They traveled in small bands of just a few families. If people weren't getting along, they just changed their band. In the long, cold winter (almost half the year), people settled down in winter camps and didn't move again until spring. The Blackfoot were always fighting wars to defend their own land or to get more of somebody else's land. They fought often with the Cree and the Sioux to their east and the Crow to their south. These wars, combined with frequent epidemics of smallpox beginning in 1780, killed many people by the late 1800s. In the summer, the whole Blackfoot nation got together for the Sun Dance ceremony, which brought them together as a people. Then in the fall there were big buffalo hunts to get enough meat to last, dried or made into pemmican, for the winter. Because the Blackfoot were so far away from where the Spanish, English, and French invaders were, they were able to keep on living their normal lives, hunting the buffalo, until the 1880s AD. But as with the Sioux, the horses ate the food that the buffalo needed in the long cold northern winters, and the more horses the Blackfoot had, the fewer buffalo survived. By 1881, European settlers and the United States and Canadian armies worked together to deliberately kill most of the remaining buffalo in order to force the Blackfoot people onto reservations. The United States army forced the Blackfoot people who were in Montana to move on to a reservation. The Canadian army forced the Blackfoot people who were in Canada to move on to reservations in southern Alberta. Many people died during the late 1800s and early 1900s of diseases like measles and smallpox that they caught from the Europeans. They struggled to figure out how to live without the buffalo. Eventually most people turned to either farming or ranching (raising cattle), and there started to be more Blackfoot people again.
http://www.historyforkids.org/learn/northamerica/after1500/history/blackfoot.htm
4.09375
Transitioning Example: Within and Between Paragraphs In the example that follows you can see how transitioning within and between paragraphs works in a multi-paragraph example. Ignore the numbers and the underlining and bolding for now; you’ll use them in a minute when we discuss the paragraphs. Children learn (1.a) gross motor skills through their active play. Gross motor skills involve the ability to maintain balance, to run significant lengths, and to jump over specific hurdles. Children tend to fine tune these skills on the playground, where many tools challenge their (2.a) skills. Children (2.b) also acquire (1.b) fine motor (2.c) skills through playing. Fine motor skills develop through using specific hand-eye coordination abilities. By cutting a paper with a pair of scissors or coloring inside the fine lines of a drawing, children develop the ability to manipulate their environment. (1.c) With enhanced gross and fine motor skills, children become more comfortable with the socialization process. By skillfully running, jumping, and then scaling an obstacle, children learn to compete with their peers in active play. (3.a) Similarly, children's fine motor skill development enables them to perform tasks that are considered necessary to existing within a group. (3.b) For example, fine motor skill activities, like putting puzzles together, improve children's critical reasoning and thinking skills, which, in the future, will help them get along with other people. It also helps children gain the confidence needed to help them become a leader in a group rather than just being a part of one. Now that you’ve read through the passage, let’s consider the various transitions within the passage. The numbers here refer to the marked passage above. For example, (1) refers to 1.a (“gross motor skills”); 1.b (“fine motor skills”); and 1.c (“with enhanced gross and fine motor skills”) above. (1) Number one has three parts to it (1.a; 1.b; 1.c). We’ve underlined all the parts that go with number one (notice that some of the words are bolded as well; we’ll talk more about those in a moment). In 1.c we can see how the writer uses key phrases (gross and fine motor skills) to link the third paragraph to the previous two paragraphs. (2) Number two has three parts to it as well (2.a; 2.b; 2.c), although in this case the writer is using two strategies that work together to make a subtle transition. The parts here are all bolded. Notice that some of them are underlined as well. The writer uses the key word “skills” in the last sentence of the first paragraph and then reintroduces it in the first sentence of the second paragraph. However, in this case the writer felt that the key word “skills” may not make a strong enough link between ideas for the reader, since the word skills is used often, so the writer included the transition “also” to show that paragraph one and paragraph two contained a related discussion. (3) Number three consists of two parts (3.a; 3.b). Both 3.a and 3.b are examples of transitions used within paragraphs. Again, you can see that these words contain meaning. By using the word “similarly” the writer wants to emphasize or explore a similarity between two ideas. When the writer uses “for example,” she wants to indicate to the reader that she is providing an example of the previous sentence.
http://www.uhv.edu/ac/research/write/paragraphtransitioningexample.aspx
4.375
transform plate boundary At the third type of plate boundary, the transform variety, two plates slide parallel to one another in opposite directions. These areas are often associated with high seismicity, as stresses that build up in the sliding crustal slabs are released at intervals to generate earthquakes. The San Andreas Fault in California is an example of this type of boundary, which is also known as a fault or... The Earth’s plates, which move horizontally with respect to one another at a rate of a few centimetres per year, form three basic types of boundaries: convergent, divergent, and side-slipping. Japan and the Aleutian Islands are located on convergent boundaries where the Pacific Plate is moving beneath the adjacent continental plates—a process known as subduction. The San Andreas Fault...
http://www.britannica.com/EBchecked/topic/602606/transform-plate-boundary
4.03125
Stopping atoms in their tracks is not the only way to get them to show their wavelike nature. Another way is to throw them at a grating with slits so small and tightly spaced that each atom wave passes through two slits at once and is thus split in two. The split waves can then be recombined to produce an interference pattern--alternating bands of intensity in which the matter waves either cancel each other or reinforce each other, just as interfering light waves do. MIT physicist David Pritchard first measured such atomic interference in 1988. Last February Pritchard and his colleagues reported another first: using the silicon nitride grating shown here, whose slits are just a few hundred-millionths of an inch apart, they managed to separate the split atom waves enough to do separate experiments on them. (The closer the spacing of the slits, the more the waves diverge after they pass through the grating.) The researchers passed one of the waves through a gas or an electric field while leaving the other alone. By observing the effect on the interference pattern--which is extremely sensitive to any tampering with one of the component waves--Pritchard and his team made fundamental measurements that were not possible before. They measured the susceptibility of sodium atoms to electric fields and the degree to which sodium atom waves are refracted--bent and attenuated--as they pass through another gas and the atoms in that gas attract them. Physicists armed with optical interferometers have been able to make similar measurements on light waves for the last century or so--but light waves are 10,000 times longer than atom waves, which means they can be diffracted with much coarser gratings than the one in Pritchard’s atom interferometer. Pritchard has managed to send entire sodium molecules through his device, and in principle, even something as large as a living bacterium could hurtle through it in wave form. But quantum mechanical trade-offs mean that such a large chunk of matter would take thousands of years to pass through the grating. For the moment at least, physicists must be content with finally being able to exploit the wave nature of atoms.
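The 10,000-fold gap between light waves and atom waves can be estimated from the de Broglie relation λ = h/(mv). The Python sketch below is a rough order-of-magnitude check, not a description of Pritchard's apparatus; the 1,000 m/s beam speed is an assumed round number typical of a thermal sodium beam, not a figure from the article.

```python
# Order-of-magnitude comparison of an atom wave and a light wave,
# using the de Broglie relation lambda = h / (m * v).

H = 6.626e-34               # Planck's constant, J*s
M_SODIUM = 23 * 1.66e-27    # mass of a sodium atom, kg
V_BEAM = 1000.0             # assumed beam speed, m/s (round number)

atom_wavelength = H / (M_SODIUM * V_BEAM)   # roughly 1.7e-11 m
light_wavelength = 500e-9                   # visible light, ~500 nm

print(f"atom wave:  {atom_wavelength:.2e} m")
print(f"light wave: {light_wavelength:.2e} m")
print(f"light/atom ratio: {light_wavelength / atom_wavelength:.0f}")
```

With these round numbers the ratio comes out in the tens of thousands, consistent in order of magnitude with the article's statement that light waves are some 10,000 times longer than atom waves.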
http://discovermagazine.com/1996/jan/interferingatoms676
4.25
We know that youngsters learning a second language have many oral language strengths needed for early literacy development. They have an acuity for sounds, for example, and they know that sounds carry meaning in the shape of words. They develop a core vocabulary of ‘here and now’ words to satisfy their immediate communication needs quite quickly. And they seem to make the connection from sound to letter recognition quite easily. These are all important early literacy concepts and skills that are necessary for children to learn to ‘decode’. In the long run, though, these children need to ‘grow’ a bigger vocabulary so that they will enjoy success in the later school years, when reading comprehension becomes dependent on word knowledge that is taken from textbooks. We want to build a better foundation. We think this work can get an early start – in kindergarten. The dual language book project is supported by the following ideas:
- We want to involve the family and especially the parents in telling their children family stories that are interesting and that will expand or ‘stretch’ their mother tongue vocabulary.
- We want to link this vocabulary to an object that has family and cultural relevance—a family ‘treasure’.
- We want the child to bring the object to class, where we can support the story telling in English in small group work. We will write the stories in English and the first language of each child.
- We want to target ‘next words to know’ – and purposefully challenge the children to learn lots of new words related to the Family Treasures project.
- We want to encourage word play, through recycling activities and games that will lead to deep understanding of word meanings.
- We want to link the children’s stories to good children’s literature on the same theme that can be explored for meaning and personal connection.
- Most of all, we want to create a learning environment for curiosity, wonder, imagination, respect for and interest in diversity; and fun!
Welcome to our Family Treasures dual language book project!
http://www.duallanguageproject.com/
4.09375
How to Find the Derivative of a Line The derivative is just a fancy calculus term for a simple idea that you probably know from algebra — slope. Slope is the fancy algebra term for steepness. And steepness is the fancy word for . . . No! Steepness is the ordinary word you’ve known since you were a kid, as in, Hey, this road sure is steep. Everything you study in differential calculus all relates back to the simple idea of steepness. Here’s a little vocabulary for you: differential calculus is the branch of calculus concerning finding derivatives; and the process of finding derivatives is called differentiation. Notice that the first and third terms are similar but don’t look like the term derivative. The link between derivative and the other two words is based on the formal definition of the derivative, which is based on the difference quotient. Now you can go and impress your friends with this little etymological nugget. Don’t be among the legions of people who mix up the slopes of horizontal and vertical lines. How steep is a flat, horizontal road? Not steep at all, of course. Zero steepness. So, a horizontal line has a slope of zero. What’s it like to drive up a vertical road? You can’t do it. And you can’t get the slope of a vertical line — it doesn’t exist, or, as mathematicians say, it’s undefined. To find points on the line y = 2x + 3 (shown in the figure below), just plug numbers into x and calculate y: plug 1 into x and y equals 5, which gives you the point located at (1, 5); plug 4 into x and y equals 11, giving you the point (4, 11); and so on. You should remember that slope = rise/run. The rise is the distance you go up (the vertical part of a stair step), and the run is the distance you go across (the horizontal part of a step). Now, take any two points on the line — say, (1, 5) and (6, 15) — and figure the rise and the run. You rise up 10 from (1, 5) to (6, 15) because 15 – 5 = 10. And you run across 5 from (1, 5) to (6, 15) because 6 – 1 = 5. Next, you divide to get the slope: slope = rise/run = 10/5 = 2. You can just plug in the points (1, 5) and (6, 15): slope = (15 – 5)/(6 – 1) = 10/5 = 2.
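The rise-over-run arithmetic above translates directly into code. This is a minimal sketch of our own (the helper name slope is not from the text); any two distinct points on the line give the same answer.

```python
# Slope as rise over run, computed from two points on a line.

def slope(p1, p2):
    """Return the slope between points (x1, y1) and (x2, y2);
    vertical lines have no defined slope."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((1, 5), (6, 15)))  # (15 - 5) / (6 - 1) = 2.0, as in the text
print(slope((1, 5), (4, 11)))  # same line y = 2x + 3, same slope: 2.0
print(slope((0, 3), (7, 3)))   # a horizontal line has slope 0.0
```

Note how the code mirrors the two special cases discussed earlier: a horizontal line returns zero, and a vertical line has no slope at all.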
http://www.dummies.com/how-to/content/how-to-find-the-derivative-of-a-line.navId-403861.html
4.21875
Ancient Greek/Basic Verbs Greek verbs are simultaneously incredibly complicated and remarkably simple, as many verbs follow common ending patterns, or inflections, but there is a vast number of these endings. Unlike English verbs, which normally have at most five forms (sing, sang, sung, singing, sings), a single Greek verb can have hundreds of forms. However, by breaking Greek verbs down into their respective components, each verb can quickly and easily be identified. This means every verb gives out a lot of useful information about the rest of the sentence. For instance, the English verb form are singing could take a variety of subjects (you are singing, you all are singing, we are singing, they are singing), but a Greek verb includes the subject within its ending. The most important marker on a verb (and usually the easiest to spot) is its personal ending. A finite verb will alter its ending depending upon its subject's person (first, second, or third person) and number (singular or plural). This is similar to the way verbs are formed in English: for example, almost any verb in the present tense adds an -s in the third person singular (the he/she/it form): I work, but she works. Here is how the present of a simple verb conjugates, or changes its personal ending:
- Singular: 1. λύω (I release); 2. λύεις (you release); 3. λύει (he/she/it releases)
- Plural: 1. λύομεν (we release); 2. λύετε (you release); 3. λύουσι(ν) (they release)
Notice that the stem, λυ-, does not change. Additionally, the ν at the end of the third person plural form is usually inserted at the end of a sentence and also before another word that begins with a vowel. This ν makes the ending of a Greek form easier to distinguish, so that words do not elide (this added ν is known as the ν-movable in some grammars). If the verb is not at the end of a sentence or before a word that begins with a vowel, the ending is just -ουσι. Verbs also change according to their time frame. Since most narrative occurs in the past, these verb forms are critical to know. There is a slightly different set of endings used by verbs in the past, and, in Classical Greek, the past time frame is denoted by adding a past temporal augment, commonly as an ἐ-, to the beginning of the verb.
- Singular: 1. ἔλυον (I was releasing); 2. ἔλυες (you were releasing); 3. ἔλυε(ν) (he/she/it was releasing)
- Plural: 1. ἐλύομεν (we were releasing); 2. ἐλύετε (you were releasing); 3. ἔλυον (they were releasing)
This is known as the imperfect form. Again, notice how it is composed of the same stem as in the present (λυ-), but includes a past temporal augment. Note also that the accent moves back one syllable. The future tense takes the same endings as the present tense. However, it differs from a present verb by the addition of a sigma to the present stem, to which the present endings are then added as normal:
- Singular: 1. λύσω (I shall release); 2. λύσεις (you will release); 3. λύσει (he/she/it will release)
- Plural: 1. λύσομεν (we shall release); 2. λύσετε (you will release); 3. λύσουσι(ν) (they will release)
Like most languages, Ancient Greek has irregular verbs, which don't follow the same pattern. There are a number of irregular verbs that appear often in Ancient Greek texts, and they must be known along with the regular verbs. Here follows the present tense of the verb to be:
- Singular: 1. εἰμί (I am); 2. εἶ (you are); 3. ἐστί(ν) (he/she/it is)
- Plural: 1. ἐσμέν (we are); 2. ἐστέ (you are); 3. εἰσί(ν) (they are)
As you can see, like Ancient Greek, even the English forms of to be are far from predictable!
http://en.m.wikibooks.org/wiki/Ancient_Greek/Basic_Verbs
4.09375
Fun Classroom Activities The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying. 1. Video Sharing Bring in a video that reminds you of a setting from "The Last Full Measure". Be ready to present your video and explain your reasoning to the class. Create a monologue for any character in "The Last Full Measure" and imagine what the character sees, says, and feels. Draw a scene from "The Last Full Measure" and explain what it means to you. Create a limerick about just one of your characters in "The...
http://www.bookrags.com/lessonplan/the-last-full-measure/funactivities.html
4.09375
Book Description: Connect students in grades 4 and up with science using Learning about Cells. In this 48-page resource, students learn what cells are, the parts of cells, how cells live and reproduce, and how to use a microscope to view them. It establishes a dialogue with students to encourage their interest and participation in creative and straightforward activities. The book also includes a vocabulary list and a unit test. This book supports National Science Education Standards.
http://www.campusbooks.com/books/childrens-books/science-nature-how-it-works/biology/9781580373210_Debbie-Routh_Learning-About-Cells-Grades-4-8.html
4.4375
The Santa Fe Trail was crucial to the Battle of Glorieta Pass. This commercial route from Independence, Missouri to Santa Fe, New Mexico, received official sanction for legal use in 1821, when Mexico won its independence from Spain. It immediately became the principal trade and travel route between the United States and Chihuahua, the northern province of Mexico. In 1862, Confederate general Henry Sibley planned to follow the Santa Fe Trail north from Texas, capture Fort Union in New Mexico Territory, and then march up the trail to invade Colorado. The First Colorado Volunteers traveled down the Santa Fe Trail to Fort Union, and then followed it west to Glorieta Pass, a gap in the Sangre de Cristo mountains. 1. Study the territories and states as they existed in 1862. How does this map differ from a modern map of the United States? 2. Locate the Santa Fe Trail. Name the states or territories shown on this map through which the trail passed on the way to Santa Fe. 3. Why did the Confederacy want to win control of New Mexico Territory? 4. What Indian tribes may have had an interest in the outcome of the war between the Union and the Confederacy in this region?
http://www.cr.nps.gov/nR/twhp/wwwlps/lessons/91glorieta/91locate1.htm
4.09375
Hundreds of millions of years ago, a group of dinosaurs dominated the earth, and most of them were gigantic. Since their extinction, no animals have reached such jumbo sizes.
[Image: A T-rex at the Natural History Museum. A new study claims that the biology of dinosaurs was skewed towards big species, with many more mammoth examples than among today's animals. (Picture from: http://www.dailymail.co.uk/)]
Experts have found an answer to why dinosaurs grew into far more gigantic animals than the vertebrates of the modern era. According to them, dinosaurs were not only the largest animals ever to roam the earth; they also counted many more large-bodied species than all other vertebrates of their time.
[Image: Frequency distribution of body size for eight groups: (a) extinct dinosaurs; (b) extant birds; (c) extant reptiles; (d) extant amphibians; (e) extant fish; (f) extant mammals; (g) extinct pterosaurs; (h) Cenozoic mammals.]
Their findings explain how life on earth, for as long as there were dinosaurs, looked very different: the proportion of large and small animals in ancient times was very different from the present. "In ancient times, there were few small animals," said David Hone of Queen Mary University of London, England, on December 27, 2012. He and his colleague, Eoin Gorman, compared the femurs of 329 different species of dinosaurs from the fossil record. In palaeontology, analysis of the length and weight of the femur is a valid method for estimating the body mass of a dinosaur. The results of this analysis brought Hone and Gorman to the conclusion that dinosaurs had a body-size distribution pattern opposite to that of other vertebrate species. Modern mammals, for example, tend to have fewer large species than small ones; there are far fewer species of elephant than of mouse. But the fossil record indicates the opposite trend for the age when dinosaurs were still alive. Hone said the tendency to have more large-bodied species appeared to develop quite early in the evolution of dinosaurs, about 225 million years ago in the Late Triassic period. "Young dinosaurs occupied different ecological niches than their parents, so they did not compete for the same food," he said. Young dinosaurs tended to eat plants or smaller animals. *** [DAILY MAIL | MAHARDIKA SATRIA HADI | KORAN TEMPO 4101]
http://trussty-jasmine.blogspot.com/2013/01/why-dinos-have-giant-size.html
4.3125
Additional Activity #1
Description: Students throw and catch a beanbag and hop around while naming healthy foods.
Objective: Students will identify a variety of healthy foods.
Materials: Beanbag or Koosh ball
- Ask the students to stand in a circle.
- Toss the beanbag (using an underhand throw) to a student. This student should then throw the beanbag to a new student.
- Each student should catch the beanbag once and then sit down so it is clear who still needs a turn.
- Ask the students why it is important to eat lots of different kinds, or a variety, of healthy foods (because each healthy food does something very different and very special for our bodies). Give them some specific examples (e.g. oranges help us fight off colds and low-fat yogurt keeps our bones strong).
- Now add the final element. Have the students play again, but this time the student who receives the beanbag should catch it, state one healthy food, and then throw the beanbag to a different student.
- If a student names a "slow" food, challenge her or him to think of a healthier choice.
- See how long they can keep the beanbag moving and how many healthy foods they can name.
Although all foods can fit into a healthy eating plan in moderation, it is important to reinforce that healthier foods give the body more energy to play and grow. "Junk foods" (processed foods high in fat and added sugar) contain a significant amount of calories but add very little nutrition to kids' diets. "Go" foods refer to nutritious foods which give the body the energy to go and grow. "Slow" foods refer to foods high in fat and added sugar which can slow the body down.
Healthy ("Go") Foods and Drinks: whole grain bread; baked tortilla chips with salsa; air-popped popcorn (without butter); low-sugar granola bars; whole wheat pizza; low-fat trail mix; peanut butter crackers; 100% fruit juice; skim or low-fat milk; natural fruit smoothies
Less Healthy ("Slow") Foods and Drinks: cookies (Oreos, etc.); candy (Skittles, etc.); high-sugar juice (Kool-Aid, etc.)
http://nyrrf.org/ycr/eat/activity/gradek/ka1.asp
4
MODERN ART AND IDEAS ONE: 1882–1900 SETTING THE SCENE 1. World’s Fair The 1889 Exposition Universelle, or World’s Fair, took place in Paris and showcased new innovations, recent geographical and scientific discoveries, and works of art. World’s Fairs, or Expos, as they are often called today, still take place and are hosted by various countries. Research the Paris World’s Fair of 1889 to learn about the themes, events, and inventions that were seen there. Has there ever been a World’s Fair in your country? Where and when did it take place? What were the important ideas that were represented there? Research modern World’s Fairs that have taken place in countries across the globe. Create your own mini world’s fair in the classroom. As a class, come up with a list of themes or ideas your fair should represent (e.g., technology, innovation, environment, politics). Form small groups. Each group should create a presentation based on one of the themes or ideas. (Individuals may work independently if preferred.) Include photographs, drawings, or replicas of important inventions already in existence (or drawings or models of your own inventions) that you would like to include in your fair. Many artists whose works are described as Post-Impressionist took their inspiration from the unique surroundings in which they lived and worked. They worked for extended periods in a place and therefore became closely associated with a specific geographical location. For example, Paul Cézanne is closely associated with Aix-en-Provence, France; Paul Gauguin, with Tahiti; Vincent van Gogh, with Arles and Saint-Rémy, France; and Henri de Toulouse-Lautrec, with Paris. Research these artists, focusing on their relationship to where they lived and worked. Compare and contrast art by artists associated with a city and by those who worked in the countryside. Consider how your environment influences how you think, work, live, and play. Compare your own experiences with what you would imagine life to be like for someone inhabiting a very different kind of environment.
http://www.oxfordartonline.com/public/page/lessons/Unit1
4
A Distant Solar System (Artist's Concept) This artist's concept depicts a distant hypothetical solar system, similar in age to our own. Looking inward from the system's outer fringes, a ring of dusty debris can be seen, and within it, planets circling a star the size of our Sun. This debris is all that remains of the planet-forming disk from which the planets evolved. Planets are formed when dusty material in a large disk surrounding a young star clumps together. Leftover material is eventually blown out by solar wind or pushed out by gravitational interactions with planets. Billions of years later, only an outer disk of debris remains. These outer debris disks are too faint to be imaged by visible-light telescopes. They are washed out by the glare of the Sun. However, NASA's Spitzer Space Telescope can detect their heat, or excess thermal emission, in infrared light. This allows astronomers to study the aftermath of planet building in distant solar systems like our own. For animation of this artist's concept, see PIA07097.
http://www.jpl.nasa.gov/spaceimages/details.php?id=PIA07096
4.03125
What researchers discovered is an internal biological clock, a clock that sometimes acts against the sleep-wake cycle by keeping us alert when we should be feeling tired. Sleep researchers Mary Carskadon, now at Brown University, and Bill Dement at Stanford had seen this biological clock in action when they tested a group of 10-12 year olds at Stanford. Dement, who pioneered sleep research at Stanford, wrote about these experiments: "After centuries of assuming the longer we are awake, the sleepier we will become and the more we will tend to fall asleep, we were confronted by the surprising result that after 12 hours of being awake, the subjects were less sleepy than they had been earlier in the same day, and at the 10 o'clock test, after more than 14 hours of wakefulness had elapsed ...they were even less sleepy." The researchers found that the biological clock opposed the sleep-wakefulness cycle at certain points of the day and at certain ages. It kept people awake when they were very tired. Just before puberty, that internal clock helped teens stay alert at night when they should have been falling asleep. The researchers called this a "phase-delay." The biological clock or circadian rhythms (from the Latin words "circa" and "dies," or "around day") of smaller children don't show the same delays. Nothing is opposing their need to sleep in the evening. Until the age of 10, many children wake up fresh and energetic to start the day. In contrast, the biological clock of pre-teens shifts forward, creating a "forbidden" zone for sleep around 9 or 10 p.m. It is propping them up just as they should be feeling sleepy. Later on, in middle-age, the clock appears to shift back, making it hard for parents to stay awake just when their teens are at their most alert. Carskadon discovered other important patterns in adolescent sleep. By studying alertness, she determined that teens, far from needing less sleep, actually needed as much or more sleep than they had gotten as children -- nine and a quarter hours. Most teenagers weren't getting nearly enough -- an hour and a half less sleep than they needed to be alert. And the drowsiness wasn't only in the early morning. Teens had a kind of sleep trough in the mid-afternoon and then perked up at night, even though they hadn't had a nap. Carskadon is now exploring the effect of light in setting adolescent sleep patterns, for darkness seems to trigger the release of melatonin, often called the "sleep" hormone. Measuring melatonin also helps researchers define the different circadian rhythms of children, teens, and adults. A great concern of sleep researchers is that teens are so sleep-deprived. There are literally millions of adolescents who feel despondent, get poor marks, or are too tired to join high-school teams all because they are getting too little sleep. Sleep, Learning, and Memory The other area of sleep research relevant to teenagers, their parents, and teachers is the effect of sleep on learning and memory. In experiments done at Harvard Medical School and Trent University in Canada, students go through a battery of tests and then sleep various lengths of time to determine how sleep affects learning. What these tests show is that the brain consolidates and practices what is learned during the day after the students (or adults, for that matter) go to sleep. Parents always intuitively knew that sleep helped learning, but few knew that learning actually continues to take place while a person is asleep. 
That means sleep after a lesson is learned is as important as getting a good night's rest before a test or exam. This research is done by giving students a series of tests. The students are trained, for instance, to catch a ball attached by a string to a cone-like cup. As they repeat the skill during the test day, they are able do it faster and more accurately. Let's say they go from catching a ball 50 percent to 70 percent of the time over a period of half an hour. The students who get a good night's sleep improve when they are retested. On a retest three days after they have a good night's sleep, they might catch a ball 85 percent of the time. The other students who got less than six hours sleep either do not improve or actually fall behind. Some of the tests are more demanding. They are called cognitive procedural tasks and they mimic what a student might learn in physics or math, or in certain sports. They present the student with something new to be learned or require an ability to conceptualize, to form a picture of the task in their minds. The brain consolidates learning during two particular phases of sleep. According to Dr. Robert Stickgold of Harvard University Medical School, who conducted a series of tests involving visual tasks, the brain seems to need lots of slow-wave sleep and a good chunk of another kind of sleep, Rapid Eye Movement, or REM. Dr. Stickgold hypothesizes that the reason the brain needs these particular kinds of sleep is that certain brain chemicals plummet during the first part of the night, and information flows out of the hippocampus (the memory region) and into the cortex. He thinks the brain then distributes the new information into appropriate networks and categories. Inside the brain, proteins strengthen the connections between nerve cells consolidating the new skills learned the day before. Then later, during REM, the brain re-enacts the lessons from the previous day and solidifies the newly-made connections through the memory banks. What these studies show is that learning a new task, whether it is sports or music, will be greatly helped by getting a good night's sleep and that students' ability to remember things, be it a lesson on geometry or the causes of the Second World War, is mediated by sleep. The proposition that sleep aids the learning process is accepted by many researchers. In a review of the Harvard studies, the late Chris Gilpin described the research as "the most believable data ever collected that a specific memory function is associated with sleep." However, a recent study published in the November 2001 issue of the journal Science challenges that conclusion. After conducting a literature review, Jerome M. Siegel of the UCLA Department of Psychiatry and Brain Research and the Center for Sleep Research, judged the evidence of a link between REM sleep and learning to be "weak and contradictory." He pointed to inconsistent results from human and animal studies, and argued that studies of humans who do not experience REM sleep (due to brain injuries or pharmacological reasons) do not show memory problems. Siegel concludes, however, that although he does not believe that the existing literature points to a link between REM sleep and memory consolidation, "just as nutritional status, ambient temperature, level of stress, blood oxygenation, and other variables clearly affect the ability to learn, adequate sleep is vital for optimal performance in learning tasks." 
Learning Good Sleep Habits Putting good sleep habits into practice is particularly difficult for teenagers. Not only do their own circadian rhythms fight against going to sleep early, but many teens don't have any control over the time they wake up. Teens can do something to try to bring their internal body clock forward. Sleep experts say dimming the lights at night and getting lots of daylight in the morning can help. Having a routine bedtime of 10 p.m., sleeping in a cool environment and turning off music, the Internet, and televisions would help to reset the body clock. And though sleeping in is a good thing, trying to get up after only an extra hour or two is a lot better than "binge-sleeping" on the weekends. If a student is used to getting up at 6:30 a.m., they shouldn't sleep until noon on the weekend. That simply confuses their bodies. And lots of sports helps, too -- better earlier in the day than late. Sleep research not only points out the importance of sleep to teenagers, but explodes some of the myths around sleep: principally the idea that people need less and less sleep as they grow up. There are many factors in the lives of adolescents that elude their control. Sleep is one area where the lessons are clear and the benefits of following them are quickly apparent.
http://www.azleisd.net/education/components/scrapbook/default.php?sectiondetailid=20071
4.09375
Prerequisites: Definition of Derivative, Derivative Function, Chain Rule Goal: To visualize the chain rule. The blue curve represents the graph of f(x)=sin(k x), where k is a constant chosen by the vertical slider. The green line represents the tangent line at the point chosen by the horizontal slider. The red curve represents the derivative function f'(x) of f(x).
- What is the amplitude and frequency of f(x)=sin(1.0 x)?
- What is the derivative of f(x) = sin(1.0 x)? What is the amplitude and frequency of f'(x)?
- What is the amplitude and frequency of g(x)=sin(2.0 x)?
- What is the amplitude and frequency of g'(x)? Conjecture the derivative g'(x) of g(x) = sin(2.0 x).
- Without changing anything, predict what you will see in the graphs of the derivatives of sin(3.0 x), sin(4.0 x), and sin(0.2 x). Now adjust the function to check your predictions. How are the function and its derivative related? What can you conclude about the frequency and amplitude of the function and its derivative?
- Let f(x) = sin(n x). Using what you know, and what you observed from the graph, what can you say about f'(x)?
- Let g(x) = k x so that f(x) = sin(g(x)).
- What is g'(x)?
- What is f'(x)?
- Now let F(x) = f(g(x)). Make a conjecture about the value of F'(x).
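Away from the applet, the conjecture the worksheet builds toward (that f(x) = sin(kx) has derivative f'(x) = k cos(kx)) can be checked numerically. The Python sketch below is our own addition; the step size and sample points are arbitrary choices, not part of the applet.

```python
# Numerical check that d/dx sin(k x) = k cos(k x), using a
# central difference quotient as the approximate derivative.

import math

def central_diff(f, x, h=1e-6):
    """Approximate f'(x) with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

for k in (1.0, 2.0, 3.0, 0.2):
    f = lambda x, k=k: math.sin(k * x)
    for x in (0.0, 0.5, 1.3):
        numeric = central_diff(f, x)
        exact = k * math.cos(k * x)       # the chain-rule prediction
        assert abs(numeric - exact) < 1e-5
print("chain rule confirmed at all sample points")
```

The assertion passing for several values of k is exactly the pattern the slider experiments are meant to reveal: the derivative keeps the frequency k but has amplitude k.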
http://www.plu.edu/math/math-teaching-tools/151/java/SinX.html
4.0625
Teaching astronomy and space videos The resources, built around a series of Teachers TV programmes, aim to support the teaching of astronomy and space to 11-16 year olds. Produced with generous funding from the Science and Technology Facilities Council, on behalf of the Institute of Physics and Teachers TV, they are now available to watch through a number of websites, including www.schoolsworld.tv/series/teaching-astronomy-and-space. Within the programmes there are sections to use with students, where astronomers talk about their work in an inspiring and engaging way, as well as guidance and advice on setting up and managing practical activities with students. The activities are supported by full teaching notes. The different sections of the programmes are available to download separately below. Astronomy and space videos
- Models of the Solar System
- Earth, Sun and Moon: Explore the science behind our solar system, and how astronomers are exploring its boundaries
- Saturn and the Scale of the Solar System: Includes stunning images of Saturn and its moons taken from the Cassini spacecraft
- Asteroids and Comets: The risks and dangers of an asteroid collision on Earth
- The Sun: A solar physicist reveals what she knows about the Sun and the latest solar missions
- The Life Cycle of Stars: Explains how we believe stars are born, live and die and the different ends to different sized stars
- The Electromagnetic Spectrum: Explains how astronomers use radiation from across the electromagnetic spectrum to reveal the secrets of our universe
- An introduction to SuperWASP, one of the most successful exoplanet-finding instruments in the world
- How Big is the Universe?: Explains how astronomers have learnt to measure the distance to the stars
- The Expanding Universe and the Big Bang: Evidence for the Big Bang and the expanding universe
- The Seasons demo 1
- The Seasons demo 2
- Phases of the Moon
- Solar Eclipses
- Cooking up a Comet
- Elliptical Orbits
- The Earth’s Atmosphere: Why is the Sky Blue?
- Invisible Wavelengths
- Colour and Temperature of Stars
- The Life Cycle of Stars: The Hertzsprung-Russell Diagram
These videos and their teaching notes, as well as additional teaching resources and web links, are all available on a DVD from the education department. Email firstname.lastname@example.org to request a copy.
http://www.iop.org/resources/videos/education/classroom/astronomy/page_51897.html
4.25
[Image: Galaxy clusters (white spots) are shown on a map of the cosmic microwave background, or CMB. The clusters appear to move, on average, in one direction (toward the purple spot). Credit: NASA, WMAP, Kashlinsky et al.]
Scientists have a mystery of cosmic proportions on their hands. Recently astronomers noticed something strange. It seems that millions of stars are racing at high speeds toward a single spot in the sky. Huge collections of stars, gas and dust are called galaxies. Some galaxies congregate into groups of hundreds or thousands, called galaxy clusters. These clusters can be observed by the X-rays they give off. Scientists are excited about the racing clusters because the cause of their movement can’t be explained by any known means. The discovery came about when scientists studied a group of 700 racing clusters. These clusters were carefully mapped in the early 1990s using data collected by an orbiting telescope. The telescope recorded X-rays created by electrons located in the hot core of a galaxy cluster. The researchers then looked at the same 700 clusters on a map of what’s called the cosmic microwave background, or CMB. The CMB is radiation, a form of energy, leftover from the Big Bang. Scientists believe that the Big Bang marks the beginning of the universe, billions of years ago. The CMB provides a picture of how the early universe looked soon after the Big Bang. By comparing information from the CMB to the map of galaxy clusters, scientists could measure the movement of the clusters. This is possible because a cluster’s movement causes a change in how bright the CMB appears. As a galaxy cluster moves across the sky, the electrons from its hot core interact with radiation from the CMB. This interaction creates a change in the radiation’s frequency, or how often an event occurs in a certain amount of time. Scientists can then measure the frequencies to detect movement. As a galaxy cluster moves toward Earth, the radiation frequency goes up. As a cluster moves away from Earth, the frequency goes down. This shift in the frequencies creates an effect similar to the Doppler effect. The Doppler effect is commonly used to measure the speed of moving objects, such as cars. Scientists can use this method to measure the speed and direction of moving galaxies by looking at changes in the radiation frequencies. What the scientists found surprised them. Though the frequency shifts were small, the clusters were moving across the sky at a high speed — about 1,000 kilometers per second. Even more surprising, the clusters were all moving in the same direction toward a single point in the sky. Researchers don’t know what’s pulling this matter across the sky, but they are calling the source “dark flow.” Whatever it is, scientists say the source likely lies outside the visible universe. That means it can’t be detected by ordinary means, such as telescopes. One thing is certain. Dark flow has shown that we don’t understand everything we see in the universe and that there are still discoveries to be made.
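As a back-of-the-envelope check on how subtle these shifts are, for speeds much smaller than light the fractional frequency shift is roughly v/c. The Python snippet below simply applies that ratio to the article's figure of 1,000 kilometers per second; this is a simplification for illustration, not the method the researchers used on the CMB data.

```python
# Approximate fractional Doppler shift, delta_f / f = v / c,
# for a cluster moving at about 1,000 km/s.

C = 299_792_458.0   # speed of light, m/s
v = 1_000_000.0     # cluster speed from the article, m/s

print(f"fractional shift: {v / C:.5f}")  # about 0.0033, or 0.3 percent
```

A shift of a third of a percent is tiny, which is why the researchers needed the precisely mapped CMB as a backdrop to detect the clusters' motion at all.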
http://www.sciencenewsforkids.org/2008/11/galaxies-on-the-go-2/
4.25
- There will need to be enough dice at this center for each student to have their own set.
- Students will first need to predict what number will come up most often using two dice (from 2 to 12).
- They need to make a simple recording sheet where they can tally the results of 100 throws.
- Next they will perform the experiment by tossing the dice, adding them up, and tallying their results.
- Afterwards they should make a conclusion explaining the results. You may want to have questions prepared to help students with their theories.
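For teachers who want to preview the likely outcome, here is a small simulation of the experiment; it is a sketch of our own (the function name and text bar chart are arbitrary), not part of the center materials. It rolls two dice 100 times and tallies the sums. Seven usually wins, because six of the 36 equally likely pairs add to 7.

```python
# Simulate the center activity: 100 throws of two dice, tallied by sum.

import random
from collections import Counter

def run_experiment(throws=100):
    tally = Counter()
    for _ in range(throws):
        tally[random.randint(1, 6) + random.randint(1, 6)] += 1
    return tally

tally = run_experiment()
for total in range(2, 13):
    print(f"{total:2d}: {'#' * tally[total]}")
print("most common sum:", tally.most_common(1)[0][0])
```

With only 100 throws the winner is occasionally 6 or 8 rather than 7, which itself makes a good discussion point when students compare their tallies.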
http://www.innovativeclassroom.com/Teaching-Toolbox/Center-Focus/index.php?id=120
4.4375
The Corn Laws were trade laws designed to protect cereal producers in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports between 1815 and 1846. More simply, to ensure that British landowners reaped all the financial profits from farming, the corn laws (which imposed steep import duties) made it too expensive for anyone to import grain from other countries, even when the people of Great Britain and Ireland needed the food (as in times of famine). The laws were introduced by the Importation Act 1815 (55 Geo. 3 c. 26) and repealed by the Importation Act 1846 (9 & 10 Vict. c. 22). These laws are often considered examples of British mercantilism. The economic issue, in essence, was food prices; the price of grain was central to the price of the most important food staple, bread, and the working man spent much of his wages on bread. The political issue was a dispute between landowners (a long-established class, who were heavily represented in Parliament) and the new class of manufacturers and industrialists (who were not): the former desired to maximise their profits from agriculture, by keeping the price at which they could sell their grain high; the latter wished to maximise their profits from manufacture, by reducing the wages they paid to their factory workers—the difficulty being that men could not work in the factories if a factory wage was not enough to feed them and their families; hence, in practice, high grain prices kept factory wages high also. In 1813, a House of Commons Committee recommended excluding foreign-grown corn until the price of domestically grown corn increased to 80 shillings (£4) (2010 equivalent: £202.25) per quarter (1 quarter = 480 lb / 218.8 kg). The political economist Thomas Malthus believed this to be a fair price, and that it would be dangerous for Britain to rely on imported corn because lower prices would reduce labourers' wages, and manufacturers would lose out due to the decrease of purchasing power of landlords and farmers. Nevertheless, 80 shillings a quarter was so high a price that domestic grain never attained it between 1815 and 1848. David Ricardo, however, believed in free trade, so that Britain could use its capital and population to its comparative advantage. With the advent of peace in 1814, corn prices decreased, and the Tory government of Lord Liverpool passed the 1815 Corn Law to keep bread prices high. This resulted in serious rioting in London. In 1820 the Merchants' Petition, written by Thomas Tooke, was presented to the House of Commons, demanding free trade and an end to protective tariffs. The Prime Minister, Lord Liverpool, who (falsely) claimed to be in favour of free trade, blocked the Petition; he argued, speciously, that complicated restrictions made it difficult to repeal protectionist laws. He added, though, that he believed Britain's economic dominance grew in spite of, not because of, the protectionist system. In 1821 the President of the Board of Trade, William Huskisson, composed a Commons Committee report which recommended a return to the "practically free" trade of the pre-1815 years. The Importation Act 1822 decreed that corn could be imported when the price of domestically harvested corn rose to 80 shillings per quarter, but prohibited imports when the price fell to 70 shillings per quarter. From the passing of this Act until 1828, the corn price never rose to 80 shillings.
In 1827 the landlords rejected Huskisson's proposals for a sliding scale, and during the next year Huskisson and the new Prime Minister, the Duke of Wellington, devised a new sliding scale for the Importation of Corn Act 1828: when domestic corn was 52 shillings per quarter or less, the duty would be 34 shillings and 8 pence; when the price rose to 73 shillings, the duty fell to 1 shilling. The Whig governments in power for most of the years 1830-41 decided not to repeal the Corn Laws. However, the Liberal Whig MP Charles Pelham Villiers proposed motions for repeal in the House of Commons annually from 1837 to 1845. In 1842 the majority against repeal was 303; by 1845 it had fallen to 132. Robert Peel first voted in favour in 1846; he had spoken in favour of repeal in 1845, yet still voted against it that year. In 1853, when Villiers was made a privy counsellor, The Times stated, "it was Mr Charles Villiers who practically originated the Free Trade movement". In 1838 Villiers spoke to a meeting of 5,000 "working class men" in Manchester, proclaiming that the presence of so many of them demonstrated that he had their support. In 1840 the Committee on Import Duties, directed by Villiers, published a blue book examining the effects of the Corn Laws. Tens of thousands of copies were printed in pamphlet form by the Anti-Corn Law League; the report was quoted in the major newspapers, reprinted in America, and published in an abridged form by The Spectator. In 1841 Sir Robert Peel became Conservative Prime Minister, and Richard Cobden, a major proponent of free trade, was elected for the first time. Peel had studied the works of Adam Smith, David Hume and David Ricardo, and proclaimed in 1839: "I have read all that has been written by the gravest authorities on political economy on the subject of rent, wages, taxes, tithes". However, he voted against repeal every year from 1837 to 1845. In 1842, in response to the blue book published by Villiers's 1840 Committee on Import Duties, Peel made a concession by modifying the sliding scale, reducing the maximum duty to 20 shillings when the price fell to 51 shillings or less. Peel's acolyte Monckton Milnes MP said of Villiers at the time of this concession in 1842 that he was "the solitary Robinson Crusoe sitting on the rock of Corn Law repeal". The landlords claimed that manufacturers like Cobden wanted cheap food so they could reduce wages and thus maximise their profits, an opinion shared by the socialist Chartists. Karl Marx said: "The campaign for the abolition of the Corn Laws had begun and the workers' help was needed. The advocates of repeal therefore promised, not only a Big Loaf (which was to be doubled in size) but also the passing of the Ten Hours Bill" (i.e. to reduce working hours). The Anti-Corn Law League, founded in 1838, agitated peacefully for repeal. It funded writers like William Cooke Taylor to travel the manufacturing regions of northern England to research its cause. Taylor published a number of books as an Anti-Corn Law propagandist, most notably The Natural History of Society (1841), Notes of a tour in the manufacturing districts of Lancashire (1842), and Factories and the Factory System (1844).
Cobden and the rest of the Anti-Corn Law League believed that cheap food meant greater real wages, and Cobden praised a speech by a working man who said: "When provisions are high, the people have so much to pay for them that they have little or nothing left to buy clothes with; and when they have little to buy clothes with, there are few clothes sold; and when there are few clothes sold, there are too many to sell, they are very cheap; and when they are very cheap, there cannot be much paid for making them: and that, consequently, the manufacturing working man's wages are reduced, the mills are shut up, business is ruined, and general distress is spread through the country. But when, as now, the working man has the said 25s. left in his pocket, he buys more clothing with it (ay, and other articles of comfort too), and that increases the demand for them, and the greater the demand...makes them rise in price, and the rising price enables the working man to get higher wages and the masters better profits. This, therefore, is the way I prove that high provisions make lower wages, and cheap provisions make higher wages."

Continued opposition to repeal

In 1844, the agitation subsided as the harvests were fruitful. The situation changed in late 1845 with poor harvests and the Great Famine in Ireland; Britain experienced scarcity and Ireland starvation. Peel argued in Cabinet that tariffs on grain should be rescinded by Order in Council until Parliament assembled to repeal the Corn Laws. His colleagues resisted this. Soon afterwards the Whig leader Lord John Russell declared in favour of repeal. On 4 December 1845 an announcement appeared in The Times that the government had decided to recall Parliament in January 1846 to repeal the Corn Laws. Lord Stanley resigned from the Cabinet in protest. The next day Peel resigned as Prime Minister because he did not believe he could implement his policy, and the Queen sent for Russell to form a government. Russell offered Cobden the post of Vice-President of the Board of Trade, but he refused, preferring to remain an advocate of free trade outside the government. By 20 December Russell had proved unable to form a ministry, and so Peel remained Prime Minister. After Parliament was recalled, the Central Agricultural Protection Society (CAPS) began a campaign of resistance. In the rural counties the CAPS practically supplanted the local Conservative associations, and in many areas the independent freeholding farmers resisted most fiercely. On 27 January 1846, Peel gave a three-hour speech saying that the Corn Laws would be abolished on 1 February 1849 after three years of gradual reductions of the tariff, leaving only a 1-shilling duty per quarter. Benjamin Disraeli and Lord George Bentinck emerged as the most forceful opponents of repeal in the Parliamentary debates, arguing that repeal would weaken landowners socially and politically and therefore destroy the "territorial constitution" of Britain by empowering commercial interests. On the third reading of Peel's Bill of Repeal (Importation Act 1846) on 15 May, MPs voted 327 to 229 (a majority of 98) to repeal the Corn Laws. On 25 June the Duke of Wellington persuaded the House of Lords to pass it. On that same night Peel's Irish Coercion Bill was defeated in the Commons by 292 to 219 by "a combination of Whigs, Radicals, and Tory protectionists".
On 29 June Peel resigned as Prime Minister, and in his resignation speech he attributed the success of repeal to Cobden: "In reference to our proposing these measures, I have no wish to rob any person of the credit which is justly due to him for them. But I may say that neither the gentlemen sitting on the benches opposite, nor myself, nor the gentlemen sitting round me—I say that neither of us are the parties who are strictly entitled to the merit. There has been a combination of parties, and that combination of parties together with the influence of the Government, has led to the ultimate success of the measures. But, Sir, there is a name which ought to be associated with the success of these measures: it is not the name of the noble Lord, the member for London, neither is it my name. Sir, the name which ought to be, and which will be associated with the success of these measures is the name of a man who, acting, I believe, from pure and disinterested motives, has advocated their cause with untiring energy, and by appeals to reason, expressed by an eloquence, the more to be admired because it was unaffected and unadorned—the name which ought to be and will be associated with the success of these measures is the name of Richard Cobden. Without scruple, Sir, I attribute the success of these measures to him."

As a result, the Conservative Party split, and the Whigs formed a government with Russell as Prime Minister. Those Conservatives who remained loyal to Peel became known as the Peelites and included the Earl of Aberdeen and William Ewart Gladstone. In 1859 the Peelites merged with the Whigs and the Radicals to form the Liberal Party. Disraeli became overall Conservative leader in 1868, although, as Prime Minister, he did not attempt to reintroduce protectionism. Scholars have advanced several explanations to resolve the puzzle of why Peel made the seemingly irrational decision to sacrifice his government in order to repeal the Corn Laws, a policy he had long opposed. Lusztig (1994) argues that his actions were sensible when considered in the context of his concern for preserving aristocratic government and a limited franchise in the face of threats from popular unrest. Peel was concerned primarily with preserving the institutions of government, and he considered reform an occasional necessary evil to preclude the possibility of much more radical or tumultuous actions. He acted to check the expansion of democracy by ameliorating conditions which could provoke democratic agitation. He also took care to ensure that the concessions would represent no threat to the British constitution.

Effects of repeal

The price of corn during the two decades after 1850 averaged 52 shillings. Due to the development of cheaper shipping (both sail and steam), faster and thus cheaper transport by rail and steamboat, and the modernisation of agricultural machinery, the prairie farms of North America were able to export vast quantities of cheap corn, as were peasant farms in the Russian Empire, with simpler methods but cheaper labour. Every corn-growing country except Britain and Belgium raised tariffs in reaction. In 1877 the price of British-grown corn averaged 56 shillings and 9 pence a quarter, and for the rest of the nineteenth century it never came within 10 shillings of that figure. In 1878 the price fell to 46 shillings and 5 pence. By 1885 corn-growing land had declined by a million acres (4,000 km², or 28½%), and in 1886 the corn price fell to 31 shillings a quarter.
Britain's dependence on imported grain during the 1830s was 2%; during the 1860s it was 24%; during the 1880s it was 45% (for corn, 65%). The 1881 census showed a decline of 92,250 agricultural labourers since 1871, with an increase of 53,496 urban labourers. Many of these had previously been farm workers who migrated to the cities to find employment, despite agricultural labourers' wages being higher than those elsewhere in Europe. Although proficient farmers on good lands did well, farmers with mediocre skills or marginal lands were at a disadvantage. Many relocated to the cities, and unprecedented numbers emigrated. Many emigrants were small, undercapitalized grain farmers who were squeezed out by low prices and by their inability to increase production or adapt to the more complex challenge of raising livestock. Similar patterns developed in Ireland, where cereal production was labour-intensive. The reduction of grain prices reduced the demand for agricultural labour in Ireland and reduced the output of barley, oats, and wheat. These changes occurred at the same time that emigration was reducing the labour supply and increasing wage rates to levels too great for arable farmers to sustain.

Notes
- In this usage and throughout this article, "corn" has the original meaning of any grain that requires grinding ("corning" or querning) as part of its processing, particularly wheat. In Britain, unlike in North America, the term "corn" retains its historical meaning of grindable "grain" (the kernel), and usually implies the primary grain crop of a country, which in Britain was wheat or oats, rather than maize as in the Americas or rice as in much of China or India, though British usage would still classify maize but not rice as a variety of corn.
- According to David Cody, they: "... were designed to protect English landholders by encouraging the export and limiting the import of corn when prices fell below a fixed point. They were eventually abolished in the face of militant agitation by the Anti-Corn Law League, formed in Manchester in 1839, which maintained that the laws, which amounted to a subsidy, increased industrial costs. After a lengthy campaign, opponents of the law finally got their way in 1846 - a significant triumph which was indicative of the new political power of the English middle class."
- Woodward, p. 61.
- Hirst, p. 15.
- Hirst, p. 16.
- Schonhardt-Bailey, p. 9.
- Schonhardt-Bailey, p. 10.
- Semmel, p. 143.
- Marx, Chapter VIII, p. 6.
- The Gentleman's Magazine, 1850, pp. 94–96.
- Bright and Thorold Rogers, p. 129.
- Hirst, p. 33.
- Morley, p. 344.
- Coleman, p. 134.
- Hirst, p. 35.
- Coleman, pp. 135–136.
- Schonhardt-Bailey, p. 239.
- Morley, p. 388.
- Lusztig, Michael (1994). "Solving Peel's puzzle: Repeal of the Corn Laws and institutional preservation". Comparative Politics 27 (1): 393–408. JSTOR 422226.
- Woodward, Age of Reform, p. 124.
- Ensor, pp. 115–116.
- Ensor, p. 116.
- Ensor, p. 117.
- Vugt, William E. van (1988). "Running from ruin?: the emigration of British farmers to the U.S.A. in the wake of the repeal of the Corn Laws". Economic History Review 41 (3): 411–428. doi:10.1111/j.1468-0289.1988.tb00473.x.
- O'Rourke, Kevin (1994). "The repeal of the corn laws and Irish emigration". Explorations in Economic History 31 (1): 120–138. doi:10.1006/exeh.1994.1005.

Further reading
- Blake, R. (1998) Disraeli, Rev. ed., London: Prion, ISBN 1-85375-275-4
- Cody, D.
(1987) Corn Laws, The Victorian Web: literature, history and culture in the age of Victoria, webpage accessed 16 September 2007
- Coleman, B. (1996) "1841–1846", in: Seldon, A. (ed.), How Tory Governments Fall. The Tory Party in Power since 1783, London: Fontana, ISBN 0-00-686366-3
- Ensor, R.C.K. (1936) England, 1870–1914, The Oxford History of England 14, Oxford: Clarendon Press, ISBN 0-19-821705-6
- Hilton, Boyd (2008) A Mad, Bad, and Dangerous People?: England 1783–1846, New Oxford History of England, Oxford University Press, ISBN 0-19-921891-9
- Hirst, F. W. (1925) From Adam Smith to Philip Snowden. A history of free trade in Great Britain, London: T. Fisher Unwin
- Morley, J. (1905) The Life of Richard Cobden, 12th ed., London: T. Fisher Unwin, 985 p., republished by London: Routledge/Thoemmes (1995), ISBN 0-415-12742-4
- Schonhardt-Bailey, C. (2006) From the Corn Laws to Free Trade: interests, ideas, and institutions in historical perspective, Cambridge, Mass.; London: The MIT Press, ISBN 0-262-19543-7; quantitative studies of the politics involved
- Semmel, B. (2004) The Rise of Free Trade Imperialism: classical political economy, the empire of free trade and imperialism, 1750–1850, Cambridge University Press, ISBN 0-521-54815-2
- Woodward, E.L., Sir (1962) The Age of Reform, 1815–1870, The Oxford History of England 13, 2nd ed., Oxford: Clarendon Press, ISBN 0-19-821711-0

Primary and contemporary sources
- Bright, J. and Thorold Rogers, J.E. (eds.) (1908) Speeches on Questions of Public Policy by Richard Cobden, M.P., Vol. 1, London: T. Fisher Unwin, republished as Cobden, R. (1995), London: Routledge/Thoemmes, ISBN 0-415-12742-4
- Marx, K. (1970) Capital: a critique of political economy; Vol. 3: the process of capitalist production as a whole, Engels, F. (ed.), London: Lawrence & Wishart, ISBN 0-85315-028-1
- Taylor, W.C. (1841) Natural History of Society, D. Appleton & Co., New York
- Taylor, W.C. (1842) Notes of a tour in the manufacturing districts of Lancashire: in a series of letters, London: Duncan & Malcolm
- Taylor, W.C. (1844) Factories and the Factory System, Jeremiah How, London
http://en.wikipedia.org/wiki/Corn_Laws
4.125
Polishing Preposition Skills through Poetry and Publication
Grades: 6–8
Lesson Plan Type: Standard Lesson
Estimated Time: Four 50-minute sessions

Through the text Behind the Mask, students have the opportunity to deepen and refine their understanding of prepositions, including some of the more confusing standard usage guidelines, while enjoying the vivid pictures of Ruth Heller. After reading Behind the Mask, students discuss the book, focusing on the use of prepositions in the text. Taking those experiences as readers, students continue to engage with prepositions by composing prepositional poems modeled on the text of Behind the Mask. To conclude the project, students create study guides that demonstrate their more advanced understanding of prepositions.

Multigenre Mapper: Students can use this online tool to create multigenre, multimodal texts, including three types of writing and a drawing.
Flip Book: This online tool is designed to allow users to type and illustrate tabbed flip books up to ten pages long.
Prepositions handout: This handout includes a handy list of prepositions.

"Grammar worksheets and grammar textbooks have their place and their purposes, but their limitations are serious," cautions Brock Haussamen in his chapter "Discovering Grammar" from Grammar Alive! A Guide for Teachers (16). As an alternative, he suggests that "we should teach grammar from authentic texts as much as possible. You can use the literature students are reading . . . to demonstrate any grammar lesson. You can also use the students' own writing to illustrate points of grammar - to illustrate not just errors but effective grammar as well" (17). Constance Weaver similarly advocates learning grammatical structures and sentence patterns by imitating quality literature in Teaching Grammar in Context (189). While her argument applies to all grade levels, she notes that "by the middle school level . . . many students should benefit from imitating literary sentences that feature [more advanced] constructions" such as the special standards for preposition use in the book featured in this lesson, Behind the Mask.

Haussamen, Brock, et al. 2003. Grammar Alive! A Guide for Teachers. Urbana, IL: NCTE.
Weaver, Constance. 1996. Teaching Grammar in Context. Portsmouth, NH: Heinemann.
http://www.readwritethink.org/classroom-resources/lesson-plans/polishing-preposition-skills-through-1100.html
4.375
To explain the rules for multiplication of signed numbers, we recall that multiplication of whole numbers may be thought of as shortened addition. Two types of multiplication problems must be examined; the first type involves numbers with unlike signs, and the second involves numbers with like signs. Consider the example 3(-4), in which the multiplicand is negative. This means we are to add -4 three times; that is, 3(-4) is equal to (-4) + (-4) + (-4), which is equal to -12. For example, if we have three 4-dollar debts, we owe 12 dollars in all. When the multiplier is negative, as in -3(7), we are to take away 7 three times. Thus, -3(7) is equal to -(7) - (7) - (7), which is equal to -21. For example, if 7 shells were expended in one firing, 7 the next, and 7 the next, there would be a loss of 21 shells in all. Thus, the rule is as follows: The product of two numbers with unlike signs is negative. The law of signs for unlike signs is sometimes stated as follows: Minus times plus is minus; plus times minus is minus. Thus a problem such as 3(-4) can be reduced to the following two steps: 1. Multiply the signs and write down the sign of the answer before working with the numbers themselves. 2. Multiply the numbers as if they were unsigned numbers. Using the suggested procedure, the sign of the answer for 3(-4) is found to be minus. The product of 3 and 4 is 12, and the final answer is -12. When there are more than two numbers to be multiplied, the signs are taken in pairs until the final sign is determined. When both factors are positive, as in 4(5), the sign of the product is positive. We are to add +5 four times, as follows: 4(5) = 5 + 5 + 5 + 5 = 20. When both factors are negative, as in -4(-5), the sign of the product is positive. We are to take away -5 four times. Remember that taking away a negative 5 is the same as adding a positive 5. For example, suppose someone owes a man 20 dollars and pays him back (or diminishes the debt) 5 dollars at a time. He takes away a debt of 20 dollars by giving him four positive 5-dollar bills, or a total of 20 positive dollars in all. The rule developed by the foregoing example is as follows: The product of two numbers with like signs is positive. Knowing that the product of two positive numbers or two negative numbers is positive, we can conclude that the product of any even number of negative numbers is positive. Similarly, the product of any odd number of negative numbers is negative. The laws of signs may be combined as follows: Minus times plus is minus; plus times minus is minus; minus times minus is plus; plus times plus is plus. Use of this combined rule may be illustrated as follows: 4(-2)(-5)(6)(-3) = -720. Taking the signs in pairs, the understood plus on the 4 times the minus on the 2 produces a minus. This minus times the minus on the 5 produces a plus. This plus times the understood plus on the 6 produces a plus. This plus times the minus on the 3 produces a minus, so we know that the final answer is negative. The product of the numbers, disregarding their signs, is 720; therefore, the final answer is -720. Practice problems. Multiply as indicated: 1. 5(-8) = ? Because division is the inverse of multiplication, we can quickly develop the rules for division of signed numbers by comparison with the corresponding multiplication rules, as in the following examples:
1. Division involving two numbers with unlike signs is related to multiplication with unlike signs, as follows: since 3(-4) = -12, it follows that -12 ÷ 3 = -4. Thus, the rule for division with unlike signs is: The quotient of two numbers with unlike signs is negative. 2. Division involving two numbers with like signs is related to multiplication with like signs, as follows: since 3(-4) = -12, it follows that -12 ÷ (-4) = 3. Thus the rule for division with like signs is: The quotient of two numbers with like signs is positive. The following examples show the application of the rules for dividing signed numbers: 12 ÷ 3 = 4; -12 ÷ (-3) = 4; 12 ÷ (-3) = -4; -12 ÷ 3 = -4. Practice problems. Multiply and divide as indicated:
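The pairwise law of signs lends itself to a mechanical check. Below is a minimal sketch (ours, not part of the original course text; Java is assumed purely for illustration) that counts the negative factors of the worked example above, predicts the sign of the product, and then computes it:

```java
// Predict the sign of a product by counting minus signs, then verify.
public class SignOfProduct {
    public static void main(String[] args) {
        int[] factors = {4, -2, -5, 6, -3}; // the worked example above

        int product = 1;
        int negatives = 0; // number of negative factors seen so far
        for (int f : factors) {
            product *= f;
            if (f < 0) negatives++;
        }

        // An even count of minus signs pairs off to plus; an odd count
        // leaves one unpaired minus, so the product is negative.
        String predictedSign = (negatives % 2 == 0) ? "+" : "-";
        System.out.println("Predicted sign: " + predictedSign); // "-"
        System.out.println("Product: " + product);              // -720
    }
}
```

Because division obeys the same law of signs, the same parity check predicts the sign of a quotient as well.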
http://www.tpub.com/math1/4b.htm
4.03125
The Indus River today flows mainly through the state of Pakistan, though its actual source is in Western Tibet. With a length of 1,800 miles (2,900 kilometers), the Indus River is one of the longest rivers in the world. It discharges into the Arabian Sea at Karachi. The name Indus is the basis for the Roman name given to the Indian subcontinent, India, and is also the basis for the name given to modern India's largest religion, Hinduism. In Sanskrit the river is known as the Sindhu, which through Persian and Greek came down to the Romans as India.

A centre of ancient civilisation

The river valley is famed as the cradle of the enigmatic Indus Valley Civilisation. It is thought to have lasted from 4000 to 2500 BC, contemporary with Mesopotamia and ancient Egypt. Major sites in the valley include the cities of Mohenjo-daro and Harappa. These urban centres were planned, integrated townships set in a grid pattern, with drainage and other civic attributes associated with sophisticated urban civilisations. Excavations began only in the 1920s, and initial theories suggested that the Indus Valley culture was largely dependent on riverine trade, and thus confined to the course of the Indus itself. Subsequent excavations in Pakistan, Afghanistan and Western India have extended its coverage to a vast tract of land extending from the fringes of Afghanistan to just north of Bombay (now known as Mumbai). As the script of the Indus civilisation has yet to be deciphered, and as only a small fraction of possible sites have been excavated, many facts remain uncertain. Early theories suggested the Indus Valley civilisation was destroyed by nomadic Aryan invaders sometime about 1500 BC. This is conjecture based on a belief that Aryan influences (and thus Hinduism) were largely pastoral, whereas the Indus Valley was largely urban.

The waters of the Indus are home to a variety of unique species, including the highly endangered Indus dolphin. Large dams across the river have affected its fragile ecosystem, and the pressures of human population and industrialisation have taken their toll. Like the other major river systems of the subcontinent, the Indus, crucial to the irrigation of the Punjab and Sindh provinces of Pakistan, is heavily used, polluted and bedevilled by water disputes.
http://www.kew.org/plant-cultures/themes/places_indus_valley.html
4.0625
We have provided a variety of resources to support and extend each chapter in the Level 5 Math Central textbook.
Chapter 1: Whole Numbers and Decimals
Chapter 2: Multiplication of Whole Numbers
Chapter 3: Division of Whole Numbers
Chapter 4: Collecting, Organizing, and Using Data
Chapter 5: Measurement and Geometry
Chapter 6: Multiplication of Decimals
Chapter 7: Division of Decimals
Chapter 8: Geometry
Chapter 9: Fractions and Mixed Numbers
Chapter 10: Addition and Subtraction of Fractions
Chapter 11: Multiplication and Division of Fractions
Chapter 12: Ratio, Percent, and Probability
Chapter 13: Area and Volume
http://www.eduplace.com/math/mathcentral/grade5/index.html
4
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2006 June 30

Explanation: Some 60 million light-years away in the southerly constellation Corvus, two large galaxies have collided. But stars in the two galaxies - NGC 4038 and NGC 4039 - don't collide in the course of the ponderous event, which lasts a billion years or so. Instead, their large clouds of molecular gas and dust do, triggering furious episodes of star formation. Spanning about 500 thousand light-years, this stunning view reveals new star clusters and matter flung far from the scene of the accident by gravitational tidal forces. Of course, the visual appearance of the far-flung arcing structures gives the galaxy pair its popular name - The Antennae. Recorded in this deep image of the region, at the tip of the upper arc, is a tidal dwarf galaxy, NGC 4038S, formed in the cosmic debris.
http://apod.nasa.gov/apod/ap060630.html
4
From the ancient "camera obscura" to the modern day digital camera, taking pictures has fascinated people. Children, in particular, love the idea of capturing images on film. To their young sensibilities, there is something magical about the photographic process and the re-creation of their world with a camera and film. This module about Photography utilizes Internet resources to introduce the principles, tools, and techniques of photography to young children in Grades 2-5. Photography is interdisciplinary. It includes the following content areas:
- Science: light; color
- Art: principles of design and composition; viewing and appreciation of photographs
- Social Studies: the development of the camera and the photographic process; photographs as historical documents; photojournalism
- Language Arts: sequencing; photo essays; criticism and evaluation; creative writing
- Technology: Internet search techniques; Web navigation; computerized photo manipulation; the future of photography
The Web is replete with information about photography. The stops on the Teacher Resource Tour take you to many excellent Web sites that can help you implement a cross-curriculum unit on photography with your class. The last stop on the tour is from the Yahoo index. It provides you with an enormous number of additional photography-related web sites to use in your teaching. Several of the stops on the Photography Tour are "interactive." Children should be encouraged to try the various simulations that illustrate photographic principles. There is, however, a wide range of reading levels represented, and younger students may need assistance. In the Teacher's Resources section, you will find projects that can be used with your students. Many of them have directions and lesson plans located at additional Web sites.
- To understand the characteristics and properties of light
- To learn how a camera, film, and darkroom work
- To learn about the history of photography
- To learn how to operate a camera and take photographs
- To understand the elements of composition in photography
- To relate photography to the study of history, art, science and current events
http://www.field-guides.com/cross/photo/index.htm
4.09375
Diagnosing a black hole flare
May 7th, 2012, in Space & Earth / Astronomy

An optical-IR image showing a galaxy that suddenly brightened when the supermassive black hole at its center shredded and absorbed a star that wandered too close. Credit: NASA; Gezari, Rest, and Chornock

(Phys.org) -- Black holes can come in a wide range of masses. Some, with only about one solar mass, result from the supernova death of a massive star, while those at the center of galaxies (called supermassive black holes) have millions or even billions of solar masses. Supermassive black holes are relatively famous because they are responsible for the powerful jets and other dramatic phenomena seen in some galaxies. The center of our Milky Way galaxy contains a modest-sized supermassive black hole, with about four million solar masses, and (fortunately for us) it is inactive - it lacks the extreme phenomena seen elsewhere. Black holes are so dense that nothing, not even light, can escape from their gravitational clutches. Still, black holes can be detected because matter that falls into them heats up and emits bright radiation. A short-lived flare, for example, can result when a body (perhaps a cloud of gas or a star) wanders too close to a black hole and is eaten. Astronomers are particularly interested in measuring the way the brightness of the flare increases, versus how it declines, because the shape of the rising emission holds clues to the actual infall process. Observing such events is difficult, though, because the flaring activity may last only a few months; by the time it is spotted in the sky, the most diagnostic phases of flare activity may have passed. Moreover, flares from smaller supermassive black holes (like the one in the center of the Milky Way) may be correspondingly weaker. Pan-STARRS (Panoramic Survey Telescope & Rapid Response System) is a telescope with a small mirror (1.8 meters) but a very large field of view and large digital cameras (1.4 billion pixels) developed especially to look for transient events. It can observe the entire available sky several times a month. In May of 2010 it spotted what appeared to be a flare from a previously inactive, Milky-Way-sized supermassive black hole in a galaxy about two billion light-years away. A team including CfA astronomers Ryan Chornock, Edo Berger, Peter Challis, Gautham Narayan, Ryan Foley, George Marion, Laura Chomiuk, Alicia Soderberg, Bob Kirshner, and Chris Stubbs then led an aggressive follow-up campaign of observations to see what was going on. The team reports on their discovery in this week's Nature. They began observing the flare about 40 days after it went off and about 40 days before it peaked, providing excellent data over most of the event. Detailed modeling of the light led the team to conclude that the black hole is less massive than previously thought, only about two million solar masses, and that the object it devoured was probably an evolved star (about 5 billion years old) whose mass was about 0.2 solar masses. These new results provide a particularly impressive, detailed view of what goes on in these exotic cosmic flares, and offer support for the overall model of these flaring events.

Provided by Harvard-Smithsonian Center for Astrophysics
http://phys.org/print255599106.html
4.21875
Anemia and Iron Deficiency
Anemia is defined as a qualitative or quantitative deficiency of hemoglobin. Hemoglobin is an iron-rich protein that carries oxygen from the lungs to the rest of the body's tissues. Iron is needed to create hemoglobin and to bind oxygen to hemoglobin for transport throughout the body. When anemia is present, the blood does not carry enough oxygen to the rest of the body. Anemia goes undetected in many people, and its symptoms can be minor and vague. Most commonly, people with anemia report a general feeling of weakness or fatigue during exercise, malaise and sometimes poor concentration. People with more severe anemia often report dyspnea (shortness of breath) upon exertion. Very severe anemia prompts the body to compensate by increasing cardiac output, leading to palpitations and sweatiness, and eventually to heart failure.
http://www.rockwellmed.com/therapeutic-anemia-iron-deficiency.htm
4.125
Learners listen to the lyrics of the song "Hero" by Mariah Carey. They define what a hero is and reflect on a hero's character traits besides courage.
One 20-minute lesson
The Learner will:
This character education mini-lesson is not intended to be a service learning lesson or to meet the K-12 Service-Learning Standards for Quality Practice. The character education units will be most effective when taught in conjunction with a student-designed service project that provides a real-world setting in which students can develop and practice good character and leadership skills. For ideas and suggestions for organizing service events go to www.generationon.org.
Tell the learners you are going to play a song, and that as they listen they are to think about how the lyrics relate to what they have learned and discussed in the last two lessons about courage. Play the song "Hero" by Mariah Carey (see Materials and Bibliographical References).
Audio of the song "Hero" by Mariah Carey: http://www.lyrics.com/lyrics/mariah-carey/hero.html
Lyrics to the Mariah Carey song "Hero": http://www.lyrics.com/lyrics/mariah-carey/hero.html
Mariah Multimedia. Audio and video of "Hero" by Mariah Carey: http://www.mariahmm.info/modules.php?name=Downloads&d_op=viewdownload&cid=22
YouTube video of "Hero" by Mariah Carey: http://www.youtube.com/watch?v=PWlS8Oerx8o
Lesson Developed By: Betsy Flikkema
All rights reserved. Permission is granted to freely use this information for nonprofit (noncommercial), educational purposes only. Copyright must be acknowledged on all copies.
http://learningtogive.org/lessons/unit492/lesson3.html
4.125
In this section you will be introduced to the concept of arrays in the Java programming language. You will learn how arrays in Java help the programmer organize data of the same type into an easily manageable format. Program data is stored in variables, which occupy memory locations assigned at random. When we need to store data of the same type in contiguous memory allocations, we use data structures such as arrays. To meet this need, Java provides arrays as objects that abstract the array data structure. A Java array enables the user to store values of the same type in contiguous memory allocations. Arrays are always of fixed length; once created, their size cannot be altered. Every array type implicitly extends java.lang.Object, so an array is an instance of Object. Advantages of Java Array: Disadvantages of Java Array:
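To make the points above concrete, here is a minimal sketch (ours, not part of the tutorial) showing declaration, traversal, the fixed length, and the fact that an array is an instance of Object:

```java
// Declare, fill, and inspect a fixed-length Java array.
public class ArrayIntro {
    public static void main(String[] args) {
        int[] scores = new int[5]; // contiguous block of five ints, length fixed at creation

        for (int i = 0; i < scores.length; i++) {
            scores[i] = i * 10; // valid indices run from 0 to length - 1
        }

        System.out.println(scores.length);            // 5, cannot be changed later
        System.out.println(scores instanceof Object); // true: arrays extend Object
    }
}
```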
http://www.roseindia.net/java/beginners/arrayexamples/introduction_to_java_arrays.shtml
4.5
Students will compare and contrast cuisine from different cultures.
- Cultural Cuisine Questions Sheet
- Chart paper
- Picture books that describe foods from different cultures
Explain to the students that America is made up of many different cultures. Immigration has played an important role in making America so diverse. Think about how people from other countries preserve their culture through the food that they eat. You will be researching foods from other countries and discovering their similarities and differences.
Read aloud a picture book that describes foods from different cultures. Begin a concept web to discuss words that relate to culture and cuisine. Divide students into groups. Assign or allow students to choose an ethnic group to research. Distribute Cultural Cuisine Questions sheet. Allow groups to answer questions using online resources. Have students use their data to create a menu and decide on sample foods to share with the class. Students should bring in items that reflect the culture that has been assigned. Provide a small area to display their sample foods and cultural items. Items could be travel brochures, post cards, pictures, etc. Groups will present their display and discuss the culture and cuisine of the ethnic group assigned. Students may use their Cultural Cuisine Questions sheet as a guide. Teachers may use a standard rubric to assess each group.
http://libertyslegacy.com/lesson-extensions/item/38-cultural-cuisine
4.21875
Resource Information
If You Give a Mouse a Cookie
Source: EcEdWeb | Type: Lesson
A little mouse shows up at a young man's house. The young man gives the mouse a cookie and starts a chain of events. Learn about unlimited wants, and goods and services.
- Economics 1: Scarcity
- Economics 3: Allocation of Goods and Services
This is one of those lessons that takes few materials and not much time, contains a big fun factor, and teaches some major concepts. The primary students get a kick out of the fact that the mouse really does a good job of going through one good and/or service after another. However, when the book is introduced to older students, they quickly comprehend the concept of cause and effect. The grasp of this concept is invaluable when students start making connections concerning historical events, current events, and economic consequences. (I used this lesson with sixth graders when they had completed an assignment in much less time than anticipated.) This lesson encompasses several different areas in one, from the economic concepts presented to the literature standards met, along with ideas that can be applied to every subject, like cause and effect. The lesson is set up well and requires little prep time, which makes it a good lesson to have on hand for the times you need a little bit of filler to flesh out a lesson or assignment. Everyone needs to learn the concept of "cause and effect." This lesson provides a good background to help reinforce this concept in kids' minds.
http://www.econedreviews.org/lesson.php?id=703
4.46875
The three most basic units in electricity are voltage (V), current (I) and resistance (r). Voltage is measured in volts, current is measured in amps and resistance is measured in ohms. If we compare an electrical system to a water system, it would shake out like this: the voltage is equivalent to the water pressure, the resistance is equivalent to the pipe size and the current is equivalent to the rate at which the water is flowing. The relationship between these three can be stated as follows: I = V/r (current in amps is equal to voltage divided by resistance). So, using our water analogy, let's say you had a power nozzle on your hose. If you open the faucet more, you will increase the pressure in the hose and the water will flow faster. In electrical terms, increasing the pressure is like increasing voltage, and the resulting increase in flow is an increase in current.
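As a quick worked example: a 12-volt supply across a 6-ohm resistance gives I = 12/6 = 2 amps. The same calculation in code (a minimal sketch of ours, not part of the original glossary entry; Java is assumed purely for illustration):

```java
// Apply Ohm's law, I = V / r.
public class OhmsLaw {
    public static void main(String[] args) {
        double volts = 12.0;        // electrical "pressure"
        double ohms = 6.0;          // resistance ("pipe size")
        double amps = volts / ohms; // rate of flow
        System.out.println("Current: " + amps + " A"); // 2.0 A
    }
}
```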
http://www.rcuniverse.com/community/glossary.cfm?letter=V&pref1set=all&cirkus=0&sort=1&key=
4.4375
Independence from Spain came suddenly for most of Latin America. Between 1810 and 1825, most of Spain's former colonies had declared and won independence and had divided up into republics. Sentiment had been growing in the colonies for some time, dating back to the American Revolution. Although Spanish forces efficiently quashed most early rebellions, the idea of independence had taken root in the minds of the people of Latin America and continued to grow. Napoleon's invasion of Spain (1807-1808) provided the spark the rebels needed. Napoleon, seeking to expand his empire, attacked and defeated Spain, and put his elder brother Joseph on the Spanish throne. This act provided a perfect excuse for secession, and by the time Spain had gotten rid of Joseph in 1813, most of its colonies had declared themselves independent. Spain fought valiantly to hold on to its rich colonies. Although the independence movements took place at about the same time, the regions were not united, and each area had its own leaders and history.

Independence in Mexico

Independence in Mexico was sparked by Father Miguel Hidalgo, a priest living and working in the small town of Dolores. He and a small group of conspirators started the rebellion by ringing the church bells on the morning of September 16, 1810, an act that became known as the "Cry of Dolores." His ragtag army made it partway to the capital before being driven back, and Hidalgo himself was captured and executed in July of 1811. With its leader gone, the Mexican independence movement almost failed, but command was assumed by José María Morelos, another priest and a talented field marshal. Morelos won a series of impressive victories against Spanish forces before being captured and executed in December 1815. The rebellion continued, and two new leaders came to prominence: Vicente Guerrero and Guadalupe Victoria, both of whom commanded large armies in the south and south-central parts of Mexico. The Spanish sent out a young officer, Agustín de Iturbide, at the head of a large army to quash the rebellion once and for all in 1820. Iturbide, however, was distressed over political developments in Spain and switched sides. With the defection of its largest army, Spanish rule in Mexico was essentially over, and Spain formally recognized Mexico's independence on August 24, 1821.

Independence in Northern South America

The independence struggle in northern Latin America began in 1806, when the Venezuelan Francisco de Miranda first attempted to liberate his homeland with British help. This attempt failed, but Miranda returned in 1810 to head up the First Venezuelan Republic with Simón Bolívar and others. Bolívar fought the Spanish in Venezuela, Ecuador and Colombia for several years, decisively beating them several times. By 1822, those countries were free, and Bolívar set his sights on Peru, the last and mightiest Spanish holdout on the continent. Along with his close friend and subordinate Antonio José de Sucre, Bolívar won two important victories in 1824: at Junín on August 6, and at Ayacucho on December 9. Their forces routed, the Spanish signed a peace agreement shortly after the battle of Ayacucho.

Independence in Southern South America

Argentina drew up its own government on May 25, 1810, in response to Napoleon's capture of Spain, although it would not formally declare independence until 1816. Although rebel Argentine forces fought several small battles with Spanish forces, most of their efforts went towards fighting larger Spanish garrisons in Peru and Bolivia.
The fight for Argentine independence was led by José de San Martín, an Argentine native who had been trained as a military officer in Spain. In 1817, he crossed the Andes into Chile, where Bernardo O'Higgins and his rebel army had been fighting the Spanish to a draw since 1810. Joining forces, the Chileans and Argentines soundly defeated the Spanish at the Battle of Maipú (near Santiago, Chile) on April 5, 1818, effectively ending Spanish control over the southern part of South America.

Independence in the Caribbean

Although Spain lost all of its colonies on the mainland by 1825, it retained control over Cuba and Puerto Rico, having already lost control of Hispaniola due to slave uprisings in Haiti. In Cuba, Spanish forces put down several major rebellions, including one led by Carlos Manuel de Cespedes that lasted from 1868 to 1878. Another major attempt at independence took place in 1895, when ragtag forces including the Cuban poet and patriot José Martí were defeated at the Battle of Dos Ríos. The revolution was still simmering in 1898 when the United States and Spain fought the Spanish-American War. After the war, Cuba became a US protectorate and was granted independence in 1902. In Puerto Rico, nationalist forces staged occasional uprisings, including a notable one in 1868. None were successful, however, and Puerto Rico was not freed from Spanish rule until 1898, as a result of the Spanish-American War. The island became a protectorate of the United States, and it has been so ever since.

Harvey, Robert. Liberators: Latin America's Struggle for Independence. Woodstock: The Overlook Press, 2000.
Lynch, John. The Spanish American Revolutions 1808–1826. New York: W. W. Norton & Company, 1986.
Lynch, John. Simon Bolivar: A Life. New Haven and London: Yale University Press, 2006.
Scheina, Robert L. Latin America's Wars, Volume 1: The Age of the Caudillo 1791–1899. Washington, D.C.: Brassey's Inc., 2003.
Shumway, Nicolas. The Invention of Argentina. Berkeley: The University of California Press, 1991.
Villalpando, José Manuel. Miguel Hidalgo. Mexico City: Editorial Planeta, 2002.
http://latinamericanhistory.about.com/od/latinamericaindependence/a/independence.htm
4.09375
THE GEOMORPHOLOGY OF AFRICA

During the Primary period of the Earth (some four billion years ago), the primitive crust that would later become Africa was essentially made up of eruptive or magmatic rocks (granite, syenite, gabbro...), formed deep in the earth's crust and later exposed by erosion, and of effusive rocks brought to the surface by volcanic activity. An immense mountain chain, called the Saharides, emerged. A billion years of erosion scraped away this first crust, leaving some platforms that eventually formed the Nile cataracts.

In the course of the Secondary period (some 200 million years ago), the vast continent called Gondwana began to break apart, and its different parts (Africa, Antarctica, Australia, America, Madagascar and the Indian subcontinent) moved away from each other, becoming individual units. This phenomenon was accompanied by several events:
- The inversion of the magnetic poles, which recurred numerous times during the course of the geological eras.
- The presence of a new ice cap. The remains of ice caps are easily recognisable given the specific sediments that they engender. Since the continental masses (tectonic plates) have moved, glacial sediments can be found today on continents that no longer have a polar location.
- Volcanic eruptions and repeated sea invasions. Africa fissured, drifted, and reached humid equatorial latitudes.

During the course of the following millions of years, the marine transgressions on the edges of the African shield left sediments of clay, sand or shell. These form the soil of the Sahara and of the Nile Valley. During the Primary period, a levelling action led to subsidence of the north-west of the future African continent, followed by the partial invasion of the continent by water. Thick layers of sandstone were then deposited by huge, unstable channels. Around 420 million years ago (in the middle of the Primary period), the sea covered the north of Africa, depositing graptolite-bearing schists that later became the reservoir rocks of Saharan oil.

At the end of the Secondary period, Africa migrated towards the north, reached equatorial latitudes, and was covered by conifers; the fossilized wood found in the desert today is what remains of them. Great rivers transported sand and clay, depositing them in lakes and swamps. These materials, once solidified, formed the Nubian sandstone, the reservoir rock of the current ground water. The Mediterranean, the domain of the sea-goddess Tethys, only opened up around 95 million years ago, and its depth increased during the Palaeogene, some 65 million years ago. During the Tertiary (from 65 million years ago), an immense complex of lakes and rivers shaped the landscape. On the edges of the great plateaux, water run-off carved large valleys, creating large tabular mountains called gours. The Sahara and the Nile Valley are both rare witnesses of the first formation of the Earth.
http://nubie-international.fr/accueil.php?a=page122000&lang=en
4.34375
This program generates word codes showing the patterns of repeating letters in words. It is intended for use in breaking simple substitution ciphers, such as a monoalphabetic cipher.
About Word Patterns
Let's look at what the parts of a pattern mean. The first number, before the dash, is the length of the word. After that, there is a number for each letter in the word. Every time a particular letter appears, the corresponding number is the same: both 'A's in Abba are '1' and both 'B's are '2', so the pattern for Abba is 4-1221. If there are more than nine different letters in a word, then 'A', 'B', 'C', etc. are used after '9'. The numbers are given out in order: '1' is given to the first letter, then '2' to the second letter, unless the second letter is the same as the first, in which case '2' is given to the third letter, and so on. This means that the number after the dash will always be '1'. Word patterns are used to help crack substitution ciphers. Details on the techniques for using them can be obtained from a cryptographic reference.
Using the Pattern Generator
You can use the program to calculate a pattern from a word, and (usually) to find the words that fit a given pattern. To calculate a word pattern, simply type 'wordpat' and enter the word when prompted. All non-alphabetic characters in the word will be ignored. Going the other way round is somewhat more difficult. The idea is that the program generates a huge list of word patterns, and the words that have these patterns. To do this, it needs a word list, a text file containing many words. These are available on the Internet, and often included in Unix installations. The words in the list should be separated by spaces and/or new-line characters. No line should be more than 255 characters long (including new-line characters). All non-alphabetic characters will be ignored; case does not matter; duplicate words will be recognised, and only one copy included in the output. To generate the list of word patterns, type 'wordpat wordlist1 wordlist2 ...' where wordlistX is the file name of each word list you have. This may take some time, depending on the length of your word lists and the speed of your computer. Note that the SORT program is used while doing this, so it must be in your path. When the program has completed, it will have generated several .WL files, named according to the length of the words they contain: 3.WL has all the three-letter words, and so on. Inside each file, the patterns appear in sorted order, and on the lines following each pattern are all the words which fit that pattern, also in sorted order. Patterns which no words fit are not included in the .WL files. So, if you have a pattern and you want to find the words that match it, open the appropriate .WL file and locate the pattern; if it is present, the matching words are immediately beneath it. If the pattern is not present, the word list did not contain a word with that pattern.
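The pattern rule described above is straightforward to express in code. Here is a minimal sketch (ours, not the original wordpat program, and written in Java purely for illustration) of the pattern computation for a single word:

```java
import java.util.HashMap;
import java.util.Map;

// Compute a word-pattern code: the word's length, a dash, then one code
// character per letter, with codes handed out in order of first
// appearance ('1'-'9', then 'A', 'B', ...).
public class WordPattern {
    static String pattern(String word) {
        String w = word.toUpperCase().replaceAll("[^A-Z]", ""); // ignore non-letters
        Map<Character, Character> codes = new HashMap<>();
        char nextCode = '1';
        StringBuilder sb = new StringBuilder(w.length() + "-");
        for (char c : w.toCharArray()) {
            if (!codes.containsKey(c)) { // first appearance gets the next code
                codes.put(c, nextCode);
                nextCode = (nextCode == '9') ? 'A' : (char) (nextCode + 1);
            }
            sb.append(codes.get(c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(pattern("Abba")); // prints 4-1221
    }
}
```

The full program would additionally read the word lists, group the words by pattern, and write them out sorted into the .WL files.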
http://pajhome.org.uk/crypt/wordpat.html
4.09375
On 12 May 1846 the United States declared war on Mexico in a dispute over the boundary between Mexico and the state of Texas, a former Mexican province whose independence and subsequent annexation to the United States Mexico did not recognize. In the Mexican War, which ended with the Treaty of Guadalupe-Hidalgo on 2 February 1848, combined American arms won for the United States the nation’s most decisive victory before the Civil War. The Navy played a major role in securing that victory. By blockading Mexico’s port cities, the Navy strangled Mexico’s maritime trade and prevented its forces from threatening U.S. operations from the sea. The Navy also performed an essential service in transporting men and materiel for the Army. The Navy directed the landing of General Winfield Scott’s troops at Veracruz and participated in the bombardment of that city. By establishing and maintaining sea control, the Navy enabled the Army to seize and garrison enemy territory. In addition to projecting power against Mexico itself, U.S. naval forces, assisted by a relatively small number of soldiers, seized California for the United States in what was principally a land campaign. In July 1846 Commodore Robert Stockton took over command of the Pacific Squadron and continued the conquest of California begun by Commodore John D. Sloat. In August, Stockton captured Los Angeles, but the following month Mexican Californians expelled the small party of Americans left to garrison the town. In January 1847 Stockton led a force of some six hundred men, with six field pieces, overland to retake Los Angeles. On 8 January he encountered resistance from an organized force of about two hundred armed men. After his men had struggled to push the heavy artillery through the soft bottom and quicksand of a ford of the San Gabriel River, Stockton relied on the field pieces, whose fire he directed personally, to drive off the enemy. The next day, the enemy blocked the way to Los Angeles once more, and once more Stockton used his artillery to disperse the opposition. After these engagements, known as the Battles of San Gabriel and the Mesa, the way to Los Angeles lay open. The Americans reoccupied the town the next day. The Mexican War experience of Thomas Southwick, carpenter of U.S. frigate Congress, illustrates the crucial roles that the technical skills of essential but unsung warrant officers played in securing victory. Southwick went ashore in California with Commodore Stockton’s force of sailors and marines, attached to an artillery company commanded by navy Lieutenant Richard L. Tilghman. In the Battles of San Gabriel and the Mesa, Southwick had two of the field guns under his charge. Furthermore, the zealous carpenter was instrumental in the general equipping of the artillery. Subsequently, he had charge of a piece of artillery at the capture of Guaymas, a Mexican port in the Gulf of California; and in the attack on Mazatlán, a Mexican coastal city near the mouth of the Gulf of California, he landed with the attacking party, again in charge of a piece of artillery. During the seven-month occupation of Mazatlán, Southwick served ashore where he aided in the engineering and construction of fortifications and the fabrication of gun carriages and in addition had charge of one of the forts. 
Securing for the United States not only Texas but also New Mexico Territory and California, the Mexican War left the nation with two sea coasts to defend, propelled the United States into Pacific affairs, and provided impetus for the Navy’s expansion. The war also left a body of tactical experience on which officers in the Union and Confederate navies would draw during the Civil War.
http://www.navalhistory.org/2011/05/12/the-mexican-war-12-may-1846
4.375
Information and activities are from KRP's The Ultimate Holiday Activity Guide from the NIE Institute. Since 1986, the United States has observed the birth of civil rights leader Martin Luther King Jr. as a legal public holiday, always celebrated on the third Monday in January. This day is set aside each year to honor King, the powerful black minister from Atlanta who was the main force behind the civil rights movement of the 1950s and 1960s and who received the Nobel Peace Prize (1964) for leading non-violent civil rights demonstrations. Despite his belief in peaceful demonstrations, King himself was often the target of violence, and violence ended his life at the age of 39, when an assassin shot and killed him while he was supporting a strike by black garbage workers in Memphis, Tenn., in 1968.
1. Even though slavery was officially abolished in 1865, Martin Luther King Jr. talked often about his desire for freedom for African-Americans. Ask students to discuss what they think King meant by freedom. Then have them cut out words and pictures from the newspaper that illustrate freedom to use on a poster.
2. Martin Luther King Jr. was a hero to many people, both when he was alive and after his death. Have students look through the newspaper for a present-day hero. Then have them make a list of the character traits that make that person a positive influence. Conclude by having them find a person featured in the newspaper who would not be a good role model. Allow them to discuss their thoughts.
3. Civil rights, such as the right to free speech, are the freedoms a person has because he or she is a member of a civilized society. Ask students to imagine what it would be like to lose their civil rights. What freedoms would they have to give up? Now, ask students to look through the newspaper for a story about someone who is denied his or her civil rights. Have them discuss their thoughts in small groups.
4. Provide students with copies of Martin Luther King Jr.'s famous "I Have a Dream" speech along with examples of news stories and editorials from the newspaper (see the link below for the speech). Ask each student to assume the role of reporter and pretend they were present when King gave the speech. Conclude the activity by having them write either a newspaper story about the speech or an editorial expressing opinions about what was said.
Click on the following link to read or download Martin Luther King Jr.'s "I Have a Dream" speech: http://legacy.grandforksherald.com/pdfs/I%20HAVE%20A%20DREAM%20PRINTABLE.pdf
http://nierocks.areavoices.com/tag/civil-rights/
4.25
What Australopithecus afarensis might have looked like. © John Sibbick / Natural History Museum. Australopithecus afarensis lived between about 3.8 and 3.0 million years ago in eastern Africa. Thanks to the discovery of a relatively complete fossil skeleton, known as Lucy, A. afarensis is one of the best-known early hominin (human-like) species. Lucy transformed our thinking about how early hominins walked. The partial skeleton of Lucy, Australopithecus afarensis. A. afarensis is known from many fossil finds in Tanzania, Kenya and Ethiopia, but Lucy is particularly important because she is the most complete and well-preserved afarensis fossil ever found. When she was unearthed in 1974, around 40% of her full skeleton was recovered, making her the most complete skeleton of an early human relative known at the time. This relative completeness helped scientists begin to understand how early human-like species walked on 2 legs (bipedally). However, the honour of being the most complete ancient hominin skeleton now goes to a 4.4-million-year-old Ardipithecus ramidus fossil known as Ardi, revealed in 2009. A. afarensis was once thought to be the earliest human relative to habitually walk upright, but there is now some evidence to suggest earlier species, including A. ramidus, also walked bipedally. The brain size of A. afarensis was ape-like, and there is no evidence so far of tool-making. A. afarensis was evidently similar to living apes in terms of diet, aspects of biology, growth and development. Males of the species were much larger than females, showing high sexual dimorphism. The habitat of A. afarensis was probably a mix of woodland, where they foraged for food on the ground and in trees, along with more open areas where they would have walked upright. Evidence from their teeth suggests that this hominin ate soft fruits and leaves but was also adapted to eat harder, more brittle foods too. A. afarensis fossils are providing us with vital clues as to what hominin life was like after upright walking emerged and before the use of tools transformed human evolution.
http://www.nhm.ac.uk/print-version/?p=/nature-online/life/human-origins/early-human-family/australopithecus-afarensis/index.html
4.03125
Meteor showers are associated with the orbits of comets. As comets travel along their trajectories, they shed part of their substance, icy dust blown away by the pressure of sunlight. As time passes, the tracks of comets become dirty spaces, littered with bits of dust moving in the same orbit as the comet. Every year in August the Earth intersects the orbit of Comet Swift-Tuttle, sweeping up debris. The tiny particles plunge into the Earth's atmosphere at 40 miles per second, where they are heated by friction and vaporize in streaks of light. Not "falling stars" or "shooting stars," but grains of icy rock making spectacular swan dives into the air. Comets take their names from their discoverers. The parent comet of the Perseids was first observed in 1862 by Lewis Swift, a farmer and amateur astronomer of Marathon, New York. Swift nearly missed his moment for glory. When he observed the blur of light in the constellation Camelopardalis, he thought it too bright to be a comet that had not been observed by someone else. He wrote it off as the already reported Comet Schmidt. Three nights later he realized his mistake and reported his observation, simultaneously with Horace Tuttle of the Harvard Observatory. Had he spoken up at once, Swift might have had the comet all to himself. By protocol, the two men share the honor of discovery. When I wrote about this in Honey From Stone, in 1987, Comet Swift-Tuttle was five years overdue for its predicted return, which added a note of mystery and anticipation to my account. The comet was subsequently recovered in 1992, by the Japanese astronomer Tsuruhiko Kiuchi. The orbit has now been greatly refined. We'll next see the comet in 2126. But wait! If we intersect the comet's path every August, what if the Earth and Swift-Tuttle arrive at the fatal spot at the same time? The comet is bigger than the object that wiped out the dinosaurs 65 million years ago. A collision would be of apocalyptic consequence. Calculations suggest this is highly unlikely, at least for the next few thousand years (and obviously it hasn't happened for countless orbits in the past). But farmer Swift has his name attached to a potentially devastating object. He went on to discover a total of thirteen comets, although none as bright or as famous as the Perseid progenitor. For his discoveries he was awarded a gold medal by the Imperial Academy of Sciences in Vienna, and in the 1880s he was appointed to the directorship of the Warner Observatory in Rochester, New York. From the citizens of that city he received a 16-inch refractor telescope costing $11,000, with which he discovered more than a thousand nebulae, among them hundreds of distant galaxies. In the Roman Catholic prayers of the Feast of Saint Lawrence, we hear again and again the martyr's purported words to the emperor Decius, who had promised the saint a night of pain: "Night has no darkness for me, all things become visible in the light." The words might aptly be applied to the ex-farmer of Marathon.
http://blog.sciencemusings.com/2010/08/race-goes-to-swift.html
4.1875
Given that there are thousands of asteroids and probably a hundred thousand million comets, these small bodies must be considered essential components of the solar system. Certainly objects closely similar to the small bodies that remain today were involved in the agglomeration of the larger planets and satellites some 4.5 billion years ago, and much of the importance of the small bodies today derives from the clues that they may contain about the processes that took place in the early solar system. This importance is magnified when we realize that asteroid-like parent bodies are the only solar system objects (other than Earth and the Moon) of which we have samples for detailed laboratory studies. Although our understanding of small bodies is relatively limited, we know enough to realize that geologically these objects are best studied separately from the larger bodies, such as Earth and the Moon. For one thing, gravity is so much smaller on these bodies that it is difficult to extrapolate our experiences with surface processes on larger objects with any great confidence. For another, many of the small objects are irregular and call for mapping and geodetic techniques quite distinct from those commonly used for the larger (usually almost spherical) planets and satellites. 7.1. What is a Small Body? It is not easy (nor is it necessary) to give a rigorous definition of a small body. Certainly implicit in the term is that the object has a low surface gravity and small escape velocity. Rather arbitrarily, we can take the largest small body to be the size of the biggest asteroid, Ceres, which has a diameter of some 1000 km. Most small bodies are considerably smaller; the two satellites of Mars, Phobos (21 km) and Deimos (12 km), are more representative. For an object the size of Phobos, surface gravity is only about 1 cm sec⁻², and the escape velocity is some 10 m sec⁻¹. Weak gravity has several important implications. Since such bodies cannot have atmospheres, their regoliths are immune to weathering processes involving the presence of an atmosphere. On the other hand, they are directly exposed to the whole spectrum of meteoroidal impacts, cosmic rays, solar radiation, and the solar wind. Low gravity also makes it impossible for the body to achieve or retain a spherical shape during its history, and many small bodies tend to be irregular in shape. Additionally, low gravity affects the development of the surface under meteoroidal bombardment. Craters probably tend to remain deeper, ejecta become more dispersed, and the proportion of strongly shocked material retained is smaller than on larger bodies. Furthermore, the chances that an asteroid-like small body will suffer a catastrophic, or nearly catastrophic, impact during its history are non-negligible. The study of meteorites has provided incontrovertible proof that some small parent bodies underwent differentiation (Dodd, 1981). In addition, there is strong evidence of subsurface aqueous processes in some parent bodies (Kerridge and Bunch, 1979) and of surface eruptions of lavas on others (Drake, 1979). The realization of the importance of short-lived nuclides such as 26Al as possible heat sources early in the solar system's history has made it quite plausible that some small bodies should have had early histories of melting and other internal activity (Sonett and Reynolds, 1979). Thus, whereas some small bodies (comet nuclei?) 
may have had dull evolutionary histories and may rightly be regarded as primitive, others have probably experienced histories almost as complex and certainly as interesting as some larger objects. The solar system's small bodies can be divided conveniently into three broad categories: (1) rocky objects (asteroids and some small satellites), (2) icy objects (mostly small satellites, but perhaps including such objects as Chiron), and (3) comet nuclei. The inventory of known small bodies includes thousands of asteroids in the main belt, as well as about 60 Amor, Apollo, and Aten objects. Only about 35 asteroids are larger than 200 km across, although physical measurements have been made of objects as small as 200 meters (Gehrels, 1979). None has yet been studied by spacecraft. The inventory also includes the small satellites of Mars and of the outer planets. Phobos and Deimos, the two tiny satellites of Mars, are the only very small bodies that have been investigated sufficiently by spacecraft (Mariner 9 and Viking) to permit meaningful discussions of surface geologic processes (Veverka and Thomas, 1979). Jupiter has at least a dozen small satellites. Except for a few low-resolution images of Amalthea obtained by Voyager, we know almost nothing about the geology of these bodies. There are also at least 70 known Trojan asteroids near the libration points of Jupiter's orbit, and speculations exist that some of Jupiter's outer satellites may be related to them (Degewij and van Houten, 1979). Recent Earth-based and Voyager observations have greatly expanded our list of Saturn's small satellites, and at least in the case of Mimas and Enceladus, the Voyager data are adequate to support geologic investigation. Beyond Saturn, most of the satellites of Uranus, Neptune's Nereid, and Pluto's Charon probably fall within our definition of small bodies. However, it will be at least 1986 before any spacecraft data on any of these objects are available. It is worthwhile to stress that the above list is almost certainly incomplete and that new small bodies will continue to be discovered. In addition, there are indications that small, so far undetected satellites are associated with the rings of Uranus and perhaps those of Saturn and Jupiter as well. Comets are the most abundant small bodies in the solar system: one estimate is that some 10¹¹ exist in the Oort cloud at the fringes of the solar system (Wilkening, 1982). From the geologic point of view, it is only the nuclei of comets that are of interest and not the comas and tails that develop when the nucleus approaches close enough to the Sun for its surface ices to vaporize. Most comet nuclei are believed to be bodies of rock and ice less than 10 km across, but very little direct information about them exists. None has been studied by spacecraft yet. They could be the parent bodies of some volatile-rich meteorites, and there may be an evolutionary connection between them and some asteroids. For example, it has been suggested that some Apollo asteroids are the remnants of extinct short-period comets (Shoemaker and Helin, 1977; Kresak, 1979). In summary, three facts about small bodies must be kept in mind: (1) their vast number, (2) their great diversity, and (3) our lack of knowledge concerning them. The next two decades of solar system exploration should remedy our current lack of information about small bodies. We cannot gain a true understanding of the solar system's evolution by ignoring them. 
They are of interest not only in their own right, but as the solar system's most abundant projectiles, they have influenced, in some cases probably dramatically, the evolution of the surfaces of the larger planets and satellites. 7.3. Why Study Small Bodies? At least four major reasons for studying small bodies in the geologic context can be given; they are developed in the subsections that follow. It could also be argued that another important reason for studying small bodies is that their geologic record may extend further back in time than that preserved on the surfaces of the larger bodies. Also, many small bodies (including satellites) probably are collisional fragments of large bodies and in some instances could provide accessible information on the differentiation of large parent bodies. 7.3.1. Effects of Small Bodies on Larger Objects Surfaces in the solar system continue to be modified by impacts, and there is abundant evidence that during the first half billion years of the solar system's existence, the surfaces of planets and satellites were influenced dramatically by collisions with small bodies. From the geologic point of view, we are interested in the time history of the flux and population (size and composition) of the impacting objects at different distances from the Sun. The early fluxes appear to have had a profound influence on the evolution of the crusts of larger bodies, and subsequent fluxes are important in determining relative chronologies of different surface units (chapter 3). The actual nature of the impacting bodies (whether volatile-rich or volatile-poor) may have played a role in determining the evolution of some atmospheres and perhaps even of subsequent weathering processes. For instance, it has been proposed that a significant fraction of some gases in the atmospheres of the terrestrial planets were brought in by comets. Some of the important questions to be addressed are: In the above, the term "flux" should be understood to mean not only total flux of bodies of all sizes (or masses) but also information about the relative fluxes of bodies of various sizes (or masses). A vigorous program of searching for Apollo, Aten, and Amor asteroids, as well as for comets, can answer the first of these questions. The second and third questions are more difficult, but considerable progress is being made in addressing some aspects of them by theoretical calculations. A closely related issue involves the orbital evolution of the various classes of impacting objects (origin, lifetime, and eventual fate). For example, how do objects end up in Apollo orbits? How long do they stay? What happens to them? 7.3.2. Unique Surface Features and Processes Not surprisingly, there are processes that are important on small bodies but impossible to predict from an extrapolation of our terrestrial or lunar experience. In fact, it is sometimes even difficult to predict a priori what form a well-known process will take in the small-body environment. For example, a decade ago, there was a legitimate discussion about whether or not there would be recognizable craters on bodies as small as the satellites of Mars. A more serious debate developed about whether appreciable regoliths would form on such small objects. Although we have now learned the answers to such rudimentary questions, we cannot pretend to fully understand the process of cratering and regolith formation on small bodies (Cintala et al., 1978; Housen and Wilkening, 1982). 
For example, we have no convincing explanation for the grossly different appearance of the surfaces of Phobos and Deimos. Why is it that the surface of the smaller Deimos appears to have retained considerably more regolith than that of the larger Phobos? Our very limited experience in exploring small bodies has already confirmed that unique and unexpected surface features and processes come into play. No one anticipated the existence of grooves on Phobos, yet this type of feature may well be a common one on many small bodies (Thomas and Veverka, 1979). There is every reason to expect that additional, important surface features and processes will be discovered as our exploration of small bodies proceeds, especially in the cases of small icy satellites and the nuclei of comets. 7.3.3. Small Bodies as Natural Laboratories Due to their great diversity in size and composition, small bodies provide ideal testing grounds for studying various processes, especially those involving cratering. In principle, one can find small bodies of similar surface gravity but drastically different surface composition (rock versus ice), or bodies of similar composition but very different surface gravity, to test the importance of such variables on crater morphology, ejecta patterns, etc. Much could be learned by comparing surface features and regolith characteristics on three small asteroids of similar surface gravity but of different composition (carbonaceous, stony, or metallic). As a next step, one could investigate the effects of rotation rate on regolith characteristics by comparing two asteroids that are identical in all bulk characteristics except their spin rates. Full exploitation of such possibilities would require an aggressive program of future solar system exploration. 7.3.4. Evolution and Interrelationship There is ample evidence that some small bodies have had complicated evolutionary histories that involved processes of high interest to planetary geologists. The meteorite record proves that some parent bodies experienced internal differentiation, aqueous metamorphism, and even the eruption of lava onto their surfaces (Dodd, 1981). In many cases, very mature and very complex regoliths were developed (Housen and Wilkening, 1982). Understanding the geologic evolution of such interesting bodies is not only worthy in its own right, but would improve our understanding of the possible interrelationships among small bodies and between the small bodies and larger planets. First, there are questions of the following type to be considered: what styles of eruption and what types of volcanic constructs would one expect on a body as small as Vesta? Or, what kinds of structure control the local emission of gases from a comet nucleus? Second, there are the interrelationship questions; for example, is it geologically reasonable that a comet nucleus can evolve into something like an Apollo asteroid or that some volatile-rich carbonaceous chondrites could come from comets? Unfortunately, in many cases we still lack key observational data to address such important questions meaningfully. The small bodies of the solar system are of great intrinsic geologic interest that goes beyond their original role as building blocks of planets and their subsequent role as projectiles. They are characterized by vast numbers and by their diversity. So far, their geologic study has been hampered by a lack of first-hand information of the sort that can be obtained only by direct spacecraft exploration. 
Even after Viking and Voyager, our inventory of small objects about which enough is known to carry out detailed geological investigations is very meager. It is restricted to a few icy satellites of Saturn and to the two rocky moons of Mars. We have yet to carry out a geologic reconnaissance of an asteroid or a comet nucleus. Although our accumulated knowledge may be adequate to guess what asteroid surfaces may be like in a general way, we really know next to nothing about comet nuclei. Thus, a first-order requirement for progress in our understanding of small bodies is the exploration of at least one asteroid and one comet nucleus during the coming decade. Some important questions, however, can be addressed only by studying a variety of objects. In the meantime, it is important to continue the ongoing active programs of Earth-based observations of small bodies as well as related laboratory and theoretical investigations. It is especially crucial to continue monitoring the neighborhood of Earth's orbit for small comets and asteroids, since there is no other way of obtaining adequate statistics on the population of such objects. In terms of data analysis and interpretation, there are enough unresolved questions concerning the small satellites of Mars and of the outer planets to justify a healthy program of analysis of Viking and Voyager data in these areas. For example, the Viking IRTM * measurements of Phobos and Deimos must be fully correlated with imaging data to gain information on regolith characteristics. We must also develop techniques for mapping irregular satellites and making accurate measurements of their topography and volume. We should make a special effort to apply the many lessons we have learned from comparative planetology during the past two decades to considerations of surface and near-surface processes on small bodies. Such extrapolations from our experience with larger bodies will have to be done judiciously, but the effort should prove beneficial to our general understanding of the solar system. *Infrared Thermal Mapper.
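The surface gravity and escape velocity figures quoted in section 7.1 for a Phobos-sized object follow directly from Newtonian gravity. Here is a minimal sketch of that arithmetic, assuming a uniform sphere; the 2.0 g/cm³ bulk density is an illustrative assumption, not a value taken from the text:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity_and_escape(radius_m, density_kg_m3):
    """Surface gravity (m/s^2) and escape velocity (m/s) of a uniform sphere."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    g = G * mass / radius_m**2                     # g = GM / r^2
    v_esc = math.sqrt(2.0 * G * mass / radius_m)   # v = sqrt(2GM / r)
    return g, v_esc

# Phobos-sized body: ~21 km across, with an assumed density of 2.0 g/cm^3
g, v_esc = surface_gravity_and_escape(radius_m=10_500, density_kg_m3=2000)
print(f"surface gravity ~ {g * 100:.2f} cm/s^2, escape velocity ~ {v_esc:.0f} m/s")
```

With these inputs the sketch gives roughly 0.6 cm sec⁻² and 11 m sec⁻¹, consistent with the order-of-magnitude figures quoted above.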
http://history.nasa.gov/SP-467/ch7.htm
4.03125
What are Plutonium Alloys? Plutonium is a transuranic radioactive chemical element with the chemical symbol Pu and atomic number 94. It is an actinide metal of silvery-gray appearance that tarnishes when exposed to air, forming a dull coating when oxidized. The element normally exhibits six allotropes and four oxidation states. It reacts with carbon, halogens, nitrogen and silicon. When exposed to moist air, it forms oxides and hydrides that expand the sample up to 70% in volume, which in turn flake off as a powder that can spontaneously ignite. It is also radioactive and can accumulate in the bones. These properties make the improper handling of plutonium dangerous. Plutonium is the heaviest primordial element by virtue of its most stable isotope, plutonium-244, whose half-life of about 80 million years is just long enough for the element to be found in trace quantities in nature. Plutonium is also a byproduct of nuclear fission in reactors: some of the neutrons released by the fission process convert uranium-238 nuclei into plutonium. The most important isotope of plutonium is plutonium-239, with a half-life of 24,100 years. Plutonium-239 is the isotope most useful for nuclear weapons. Plutonium-239 and -241 are fissile, meaning the nuclei of their atoms can split when bombarded by thermal neutrons, releasing energy, gamma radiation and more neutrons. These neutrons can sustain a nuclear chain reaction, leading to applications in nuclear weapons and nuclear reactors. Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, although yellow and olive green are also reported. At room temperature plutonium is in its α (alpha) form. This, the most common structural form of the element (allotrope), is about as hard and brittle as grey cast iron unless it is alloyed with other metals to make it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point (640 °C) and an unusually high boiling point (3,327 °C). Plutonium was first synthesized in 1940 by a team led by Glenn T. Seaborg and Edwin McMillan at the University of California, Berkeley laboratory by bombarding uranium-238 with deuterons. Trace amounts of plutonium were subsequently discovered in nature. Producing plutonium in useful quantities for the first time was a major part of the Manhattan Project during World War II, which developed the first atomic bombs. The first nuclear test, “Trinity” (July 1945), and the second atomic bomb used to destroy a city (Nagasaki, Japan, in August 1945), “Fat Man”, both had cores of plutonium-239. Human radiation experiments studying plutonium were conducted without informed consent, and a number of criticality accidents, some lethal, occurred during and after the war. Plutonium-238 has a half-life of 88 years and emits alpha particles. It is a heat source in radioisotope thermoelectric generators, which are used to power some spacecraft. Plutonium-240 has a high rate of spontaneous fission, raising the neutron flux of any sample it is in. The presence of plutonium-240 limits a sample’s usability for weapons or reactor fuel, and determines its grade. Plutonium isotopes are expensive and inconvenient to separate, so particular isotopes are usually manufactured in specialized reactors. Plutonium-gallium alloy (Pu-Ga) is an alloy of plutonium and gallium, used in nuclear weapon pits – the component of a nuclear weapon where the fission chain reaction is started. 
Metallic plutonium has several different solid allotropes; gallium stabilizes the ductile δ (delta) phase at room temperature, making the metal workable. The preferred alloy is 3.0–3.5 mol.% (0.8–1.0 wt.%) gallium. More modern pits are produced by casting. The δ–α phase change is nevertheless useful during the operation of a nuclear weapon, since the collapse to the denser α phase under compression aids the implosion. Gallium tends to segregate in plutonium, causing “coring” – gallium-rich centers of grains and gallium-poor grain boundaries. The time to achieve homogenization of gallium increases with increasing grain size of the alloy and decreases with increasing temperature. Two other stabilizing elements were considered, silicon and aluminium; however, only aluminium produced satisfactory alloys. There are several plutonium–gallium intermetallic compounds: PuGa, Pu3Ga, and Pu6Ga. Addition of 7.5 wt.% of plutonium-238, which has a significantly faster decay rate, to the alloy increases the aging damage rate by 16 times, assisting with plutonium aging research. The Blue Gene supercomputer aided with simulations of plutonium aging processes. The presence of gallium in plutonium signifies its origin from weapon plants or decommissioned nuclear weapons. For reprocessing of surplus warhead pits into MOX fuel, the majority of the gallium has to be removed, as its high content could interfere with the fuel rod cladding (gallium attacks zirconium) and with the migration of fission products in the fuel pellets. In the ARIES process, the pits are converted to oxide by converting the material to plutonium hydride, then optionally to nitride, and then to oxide. Gallium is then mostly removed from the solid oxide mixture by heating it at 1100 °C in a 94% argon, 6% hydrogen atmosphere, reducing the gallium content from 1% to 200 ppm. Further dilution of the plutonium oxide during MOX fuel manufacture brings the gallium content to levels considered negligible. Electrorefining is another way to separate gallium and plutonium. For weapons use, plutonium pit parts have to be coated with a layer of another metal; pits have been coated with nickel by exposing the plutonium parts to nickel tetracarbonyl gas, which reacts with the plutonium surface and deposits a thin layer of nickel. Plutonium alloys can be produced by adding a metal to molten plutonium.
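The half-lives quoted above (88 years for plutonium-238, 24,100 years for plutonium-239) translate into remaining fractions through the standard decay law N(t) = N₀ · 2^(−t/T½). Below is a minimal sketch of that arithmetic; the isotopes and half-lives come from the text, while the time spans are arbitrary illustrations:

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radioactive sample remaining after t years: N/N0 = 2^(-t / T_half)."""
    return 2.0 ** (-t_years / half_life_years)

# Pu-238 (T_half = 88 y) decays noticeably within a few human lifetimes...
print(f"Pu-238 after  88 y: {remaining_fraction(88, 88):.2%}")    # 50.00%
print(f"Pu-238 after 264 y: {remaining_fraction(264, 88):.2%}")   # 12.50%
# ...while Pu-239 (T_half = 24,100 y) barely changes on the same timescale.
print(f"Pu-239 after 100 y: {remaining_fraction(100, 24_100):.2%}")
```

This contrast is why plutonium-238 serves as a compact heat source, and why deliberately adding it to an alloy, as described above, accelerates aging studies.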
http://metallurgyfordummies.com/plutonium-alloys/
4.1875
One of the first steps to becoming a reader is developing positive reading behaviors. Even before children can "read", they should be involved with books and print in a positive way. Children who have developed positive reading behaviors choose to read. They enjoy pretend reading, sharing ideas, and asking questions about stories. - Read to your child on a daily basis. You may want to establish a nightly routine of a bedtime story. - Talk with your child about stories you have read together. - Allow your child to "read" familiar stories to you. Accept his/her version of the story. - Get a public library card for your child. - Allow your child to select the story he/she would like to hear, even if you have already read it 100 times. - Provide a special place for your child to keep his/her personal books and library books. This special place will send the message that books are important. - Select different types of books and a wide variety of reading materials for your child to choose from (e.g., magazines, newspapers, nursery rhymes, fairy tales, recipes). - Point out print in the environment (e.g., signs, cereal boxes, restaurants). - Give books as gifts. Select high quality books with detailed illustrations. If you are not sure, ask Ms. Cooke or a salesperson at the bookstore. - Be a model. Let your child see you reading. Remember, he/she wants to grow up to be just like you!
http://www.wsfcs.k12.nc.us/Page/36677
4.0625
What child's imagination has not been captivated by the near-magical transformation that caterpillars undergo to become butterflies? That transformation is the result of an ancient hybridisation between an insect and a worm-like animal, according to zoologist Donald Williamson, and now he says there is enough genetic information to test the theory. Unfortunately for Williamson, now retired from the University of Liverpool, UK, the early returns are not encouraging. Many insect groups, such as butterflies, bees and wasps, have larval stages that look nothing like the adults. Most biologists believe these evolved gradually, perhaps because natural selection favoured juvenile stages that differed from the adults and thus would compete less with them. Williamson offers a different explanation. At some point hundreds of millions of years ago a larva-less insect - something like a grasshopper or cockroach, say - hybridised with a velvet worm. Also known as Onychophora, velvet worms are worm-like invertebrates ...
http://www.newscientist.com/article/mg20327234.900-did-two-species-mix-to-make-butterflies.html
4
Between 1900 and 1905, the Wright brothers designed and built three unpowered gliders and three powered aircraft. As they designed each aircraft, how did they know how big to make the wings? The Wright brothers operated a bicycle shop in Dayton, Ohio, and had a good working knowledge of math and science. They knew about Newton's laws of motion and about aerodynamic forces. They knew that they needed to generate enough lift to overcome the weight of their aircraft. They had written to the Smithsonian when they began their enterprise in 1899 and received technical papers describing the aeronautical theories of the day. There were mathematical equations which could be used to predict the amount of lift that an object would generate. The lift equation is shown on this slide. The amount of lift generated by an object depends on a number of factors: the density of the air, the velocity between the object and the air, the surface area over which the air flows, the shape of the body, and the body's inclination to the flow, also called the angle of attack. By the time the Wrights began their studies, it had been determined that lift depends on the square of the velocity and varies linearly with the surface area of the object. Early aerodynamicists characterized the dependence on the properties of the air by a pressure coefficient called Smeaton's coefficient which represented the pressure force (drag) on a one foot square flat plate moving at one mile per hour through the air. They believed that any object moving through the air converted some portion of the pressure force into lift, and they represented that portion by a lift coefficient. The resulting equation is given as: L = k * V^2 * A * cl where L is the lift, k is the Smeaton coefficient, V is the velocity, A is the wing area, and cl is the lift coefficient. This equation is slightly different from the modern lift equation used today. The modern equation uses the dynamic pressure of the moving air for the pressure dependence, while this equation uses the Smeaton coefficient. Modern lift coefficients relate the lift force on the object to the force generated by the dynamic pressure times the area, while the 1900's lift coefficients relate the lift force to the drag of a flat plate of equal area. The 1900's equation assumes that you know the perpendicular pressure force on a moving flat plate (Smeaton coefficient). Because of measuring inaccuracies at the time, there were many quoted values for the coefficient ranging from .0027 to .005. Lilienthal had used the .005 value in the design and testing of his wings. When the Wrights began to design their 1900 aircraft, they used values for the lift coefficient based on the work by Lilienthal, so they too used the .005 value. During the flight experiments of 1900 and 1901, the brothers measured the performance of their aircraft. Neither aircraft performed as well as predicted by the lift equation. The 1900 aircraft had been designed to lift itself (100 pounds) plus a pilot (150 pounds) when flown as a kite in a 15 mile per hour wind at 5 degrees angle of attack. But in flight, it could barely lift itself in a 15 mile per hour wind at a much higher angle of attack. So the brothers began to doubt the .005 value for the Smeaton coefficient and they determined that a value of .0033 more closely approximated their data. The modern accepted value is .00326. The brothers also began to doubt the accuracy of Lilienthal's lift coefficients. So in the fall of 1901, they decided to determine their own values for the lift coefficient using a wind tunnel. The brothers built a clever balance to directly measure the ratio of the lift of their models to the drag of an equivalent flat plate. 
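To get a feel for the 1900s-era lift equation discussed above, the short sketch below evaluates L = k * V^2 * A * cl at the 15 mile per hour design point with both Smeaton coefficients (.005 and .0033). The wing area and lift coefficient are illustrative assumptions, not the Wrights' actual design numbers:

```python
def lift_1900s(k, v_mph, area_sqft, cl):
    """1900s-era lift equation: L = k * V^2 * A * cl, with L in pounds.
    k is the Smeaton coefficient (pressure on a 1 ft^2 plate at 1 mph)."""
    return k * v_mph**2 * area_sqft * cl

V_MPH = 15.0   # design wind speed from the text
AREA = 165.0   # wing area in ft^2 -- assumed for illustration
CL = 0.5       # lift coefficient near 5 degrees -- assumed for illustration

for k in (0.005, 0.0033):
    print(f"k = {k}: predicted lift = {lift_1900s(k, V_MPH, AREA, CL):.0f} lb")
```

Holding the wing and flow conditions fixed, dropping k from .005 to .0033 cuts the predicted lift by about a third, which is consistent with why gliders designed around Lilienthal's value lifted far less than expected.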
We have developed an interactive wind tunnel simulator so that you can duplicate their wind tunnel results. In the course of testing many airfoil models, the brothers discovered the importance of the wing's aspect ratio on the lift coefficient. They determined that the Lilienthal data was correct for the wing geometry that he had used, but that the data could not be applied to a wing with a very different geometry. Lilienthal's wings had a rather short span and an elliptical planform, while the brothers used a long, thin, rectangular planform. The brothers tested over fifty different models to determine how lift and drag are affected by various design parameters, and they used this data to design their 1902 aircraft, using the lift equation shown on the slide with their own lift coefficients. You can view a short movie of "Orville and Wilbur Wright" discussing the lift force and how it affected the flight of their aircraft. The movie file can be saved to your computer and viewed as a Podcast on your podcast player.
http://wright.grc.nasa.gov/airplane/liftold.html
4.25
Word-formation – the process of forming words by combining root and affixal morphemes according to certain patterns specific for the language (affixation, composition), or without any outward means of word formation (conversion, semantic derivation). Word formation is a branch of the science of language which studies the patterns on which a language forms new lexical items (new unities, new words). It is a process of forming words by combining root & affixal morphemes according to certain patterns specific for the language, or without any outward means. There are 2 major groups of word formation: 1) Words formed as grammatical syntagmas, combinations of full linguistic signs (types: compounding, prefixation, suffixation, conversion, and back derivation). 2) Words which are not grammatical syntagmas, i.e. which are not made up of full linguistic signs. Ex.: expressive symbolism, blending, clipping, rhyme & some others. Common for both groups is that a new word is based on a synchronic relationship between morphemes. Different types of word formation: Compounding (composition) is the joining together of 2 or more stems: 1) without a connecting element; 2) with a vowel or consonant as a linking element; 3) with a preposition or conjunction as a linking element: down-and-out (destitute, in a desperate situation). Compounds can be classified according to their structure: - consisting of simple stems - compounds where at least one stem is a derived one - where one stem is clipped - where one of the elements is also a compound. There are compound nouns, adjectives, and verbs. - There are also the so-called reduplicative compounds. Prefixes are particles that can be prefixed to full words, but they have no independent existence of their own. Native prefixes have developed out of independent words; there is a small number of them. Prefixes of foreign origin have come into the language ready-made. Some scholars hold that the system of English word formation was entirely upset by the Norman Conquest: the Normans paved the way for the non-Germanic trend the language has taken since that time. From French, English borrowed many words with suffixes & prefixes; they became assimilated in the language & started to be used in word building. This led to an enormous cutting down of traditional word formation out of native material. Some old prefixes disappeared forever (they were too weak phonetically). Nowadays English has no prefixed equivalents for some German prefixes. There are a lot of borrowed prefixes in English. A suffix is a derivative final element which is or was productive in forming new words. It has semantic value, but doesn't occur as an independent speech unit. The contact of English with foreign languages has led to the adoption of countless foreign words, which started to be used in word building → we have many hybrid types of derivatives. A hybrid is a word whose elements are of etymologically different origin. 1) A foreign word is combined with a native affix: clearness, faithless, faithful. 2) Foreign affixes are added to native words. As for the first three, they never became productive in English; -able was assimilated in English very early and has become productive in many words. 
Semi-suffixes are elements which stand midway between full words & suffixes: a Godlike creature. There are 6 ways of suffixing in English: 1) Derivation by native suffixes without changes in stress, vowels, or consonants. 2) Derivation by a borrowed suffix without changes in stress, vowels, or consonants. 3) Derivation by imported suffixes, which involves changes in stress, vowels, or consonants. 4) The suffix is added to a Latin stem which is closely related to an English word: science – scientist. 5) The suffix is added to a Latin stem which has no English equivalent: lingua – lingual. 6) Words borrowed separately but built on the same patterns of word building: candidate – candidacy, president – presidency. This is called correlative derivation. Conversion: a certain stem is used for the formation of a categorically different word without a derivative element being added. Bag – to bag. Back – to back. Bottle – to bottle. This specific pattern is very productive in English. The most popular types are noun → verb or verb → noun: to take off – a take off. Conversion can be total or partial. Partial: the then president ('of that time') – an adverb is used as an adjective, only in this particular context. Total: work – to work.
http://www.ranez.ru/article/id/379/
4.5625
Fugitive Slave Act of 1793. Ambiguities present in previous legislation led the U.S. Congress to pass the Fugitive Slave Act of 1793. Slave hunters were allowed to capture an escapee in any territory or state and were required only to confirm orally before a state or federal judge that the person was a runaway. The captive was not entitled to a trial by jury, and the judge's decision was final. A person hiding an escaped slave could be fined $500 – an expensive penalty in those days. The law was opposed in many Northern states; several reacted by enacting legislation to protect free black Americans and fugitive slaves. These "personal liberty laws" compelled a slave catcher to furnish corroborative proof that his captive was a fugitive and frequently accorded the accused the rights to trial by jury and appeal. Laws in some states made it easier to extradite a runaway if his or her slave status were confirmed. As it turned out, in general the Fugitive Slave Act was inconsistently enforced and provoked ill feeling between northern and southern states. In Prigg v. Pennsylvania (1842), the United States Supreme Court determined that "personal liberty laws" were unconstitutional because they interfered with the Fugitive Slave Act. The Court held that while states were not compelled to enforce the federal law, they could not override it with other enactments. Prigg induced numerous Northern states to amend their laws, which specified that law enforcement officials and jurists refrain from doing anything about runaway slaves. The only alternative left to slave catchers was to kidnap runaways or drag them before federal judges who were not held to state statutes.
http://www.u-s-history.com/pages/h480.html
4.125
10 Steps to Stop and Prevent Bullying. Whether you are a parent, an educator, or a concerned friend of the family, there are ten steps you can take to stop and prevent bullying: - Pay attention. There are many warning signs that may point to a bullying problem, such as unexplained injuries, lost or destroyed personal items, changes in eating habits, and avoidance of school or other social situations. However, not every student will exhibit warning signs, and some may go to great lengths to hide the problem. This is where paying attention is most valuable. Engage students on a daily basis and ask open-ended questions that encourage conversation. - Don’t ignore it. Never assume that a situation is harmless teasing. Different students have different levels of coping; what may be considered teasing to one may be humiliating and devastating to another. Whenever a student feels threatened in any way, take it seriously, and assure the student that you are there for them and will help. - When you see something – do something. Intervene as soon as you even think there may be a problem between students. Don’t brush it off as “kids are just being kids. They’ll get over it.” Some never do, and it affects them for a lifetime. All questionable behavior should be addressed immediately to keep a situation from escalating. Summon other adults if you deem the situation may get out of hand. Be sure to always refer to your school’s anti-bullying policy. - Remain calm. When you intervene, refuse to argue with either student. Model the respectful behavior you expect from the students. First make sure everyone is safe and that no one needs immediate medical attention. Reassure the students involved, as well as the bystanders. Explain to them what needs to happen next – bystanders go on to their expected destination while the students involved should be taken separately to a safe place. - Deal with students individually. Don’t attempt to sort out the facts while everyone is present, don’t allow the students involved to talk with one another, and don’t ask bystanders to tell what they saw in front of others. Instead, talk with the individuals involved – including bystanders – on a one-on-one basis. This way, everyone will be able to tell their side of the story without worrying about what others may think or say. - Don’t make the students involved apologize and/or shake hands on the spot. Label the behavior as bullying. Explain that you take this type of behavior very seriously and that you plan to get to the bottom of it before you determine what should be done next and any resulting consequences based on your school’s anti-bullying policy. This empowers the bullied child – and the bystanders – to feel that someone will finally listen to their concerns and be fair about outcomes. - Hold bystanders accountable. Bystanders provide bullies an audience, and often actually encourage bullying. Explain that this type of behavior is wrong, will not be tolerated, and that they also have a right and a responsibility to stop bullying. Identify yourself as a caring adult that they can always approach if they are being bullied and/or see or suspect bullying. - Listen and don’t pre-judge. It is very possible that the person you suspect to be the bully may actually be a bullied student retaliating or a “bully’s” cry for help. It may also be the result of an undiagnosed medical, emotional or psychological issue. Rather than make any assumptions, listen to each child with an open mind. 
- Get appropriate professional help. Be careful not to give any advice beyond your level of expertise. Rather than make any assumptions, if you deem there are any underlying and/or unsolved issues, refer the student to a nurse, counselor, school psychologist, social worker, or other appropriate professional. - Become trained to handle bullying situations. If you work with students in any capacity, it is important to learn the proper ways to address bullying. Visit www.nea.org/bullyfree for information and resources. You can also take the pledge to stop bullying, as well as learn how to create a Bully Free program in your school and/or community. An additional, yet very important, step is to take at least one child to see the Bully movie, and then use it as an opportunity to begin an on-going conversation about bullying. Requests for additional screenings can be made at www.thebullyproject. ADDITIONAL RESOURCES ON BULLYING NEA's “Bully Free: It Starts With Me” campaign provides information and resources to assist schools, parents and community leaders in addressing issues of violence and bullying in schools, including: - Findings from the National Education Association's Nationwide Study of Bullying This first-of-its-kind, large-scale research study by NEA and Johns Hopkins University examines different school staff members' perspectives on bullying and bullying prevention efforts. - Stand up for bullied students Take the “Bully Free: It Starts With Me” pledge and sign up to receive a free poster and window sticker as well as periodic email messages and information about NEA's Bully Free: It Starts With Me campaign. - Start a Bully Free Campaign Learn the 10 steps to start a Bully Free campaign at your school or within your community organization. - Review specific tips and techniques for various school staff members: - Bus Drivers and Bullying Prevention (PDF) Practical tips about what bus drivers can do to prevent or intervene in bullying situations. - Clerical Services/Administrative ESPs Bullying Prevention (PDF) Administrative staff hear bullying reports from students as well as parents, and are in a good position to intervene. - Food Services ESPs and Bullying Prevention (PDF) School cafeterias are a common location for bullying and an area ripe for prevention and intervention, where staff can curb bullying and promote a positive school climate. - Paraeducators and Student-to-Student Bullying (PDF) How paraeducators (teaching assistants, teacher aides, paraprofessionals, paras), who are often more likely than teachers to witness bullying, can intervene and deal with bullying situations. - Download Bully Free Public Service Announcements Four PSAs are designed to help raise awareness about bullying in communities across the nation while providing information on where to find helpful tips and resources. - Download A Guide to the film BULLY As the film offers insight into the lives of bullied, ridiculed children, the accompanying guide tells the personal stories of those bullied, provides essential background information about bullying, including testimony and research findings from experts who have studied the effects of bullying on children, parents, and communities, and suggests various discussion strategies that will help facilitate honest, open dialogue about the film with groups of students and adults alike. - The Stop Bullying Speak Up Comic Challenge! 
These FREE comic-based activities give students a creative and engaging way to share their strategies for speaking up and putting a stop to bullying. - Learn how to prepare for and respond to a crisis This step-by-step resource created by educators for educators can make it easier for school district administrators and principals to keep schools safe while providing information to schools in the midst of a crisis, e.g., a school shooting, to help students and staff return to learning as quickly as possible. - Additional Resources, Research, and Tools Provides a plethora of additional information, including: - Download Bully Free Public Service Announcements - An Educators Guide to Facebook - After Suicide: A Toolkit for Schools - Alternatives to Zero Tolerance Policies - Bullying, Harassment and Hazing: State School Health Policy Database
http://www.nea.org/home/51629.htm
4.125
Observations of 1 Ceres, the largest known asteroid, have revealed that the object may be a "mini planet," and may contain large amounts of pure water ice beneath its surface. The observations by NASA's Hubble Space Telescope also show that Ceres shares characteristics of the rocky, terrestrial planets like Earth. Ceres' shape is almost round like Earth's, suggesting that the asteroid may have a "differentiated interior," with a rocky inner core and a thin, dusty outer crust. "Ceres is an embryonic planet," said Lucy A. McFadden of the Department of Astronomy at the University of Maryland, College Park and a member of the team that made the observations. "Gravitational perturbations from Jupiter billions of years ago prevented Ceres from accreting more material to become a full-fledged planet." The finding will appear Sept. 8 in a letter to the journal Nature. The paper is led by Peter C. Thomas of the Center for Radiophysics and Space Research at Cornell University in Ithaca, N.Y., and also includes project leader Joel William Parker of the Department of Space Studies at Southwest Research Institute in Boulder, Colo. Ceres is approximately 580 miles (930 kilometers) across, about the size of Texas. It resides with tens of thousands of other asteroids in the main asteroid belt. Located between Mars and Jupiter, the asteroid belt probably represents primitive pieces of the solar system that never managed to accumulate into a genuine planet. Ceres comprises 25 percent of the asteroid belt's total mass. However, Pluto, our solar system's smallest planet, is 14 times more massive than Ceres. The astronomers used Hubble's Advanced Camera for Surveys to study Ceres for nine hours, the time it takes the asteroid to complete a rotation. Hubble snapped 267 images of Ceres. From those snapshots, the astronomers determined that the asteroid has a nearly round body. The diameter at its equator is greater than at its poles. Computer models show that a nearly round object like Ceres has a differentiated interior, with denser material at the core and lighter minerals near the surface. All terrestrial planets have differentiated interiors. Asteroids much smaller than Ceres have not been found to have such interiors. The astronomers suspect that water ice may be buried under the asteroid's crust because the density of Ceres is less than that of the Earth's crust, and because the surface bears spectral evidence of water-bearing minerals. They estimate that if Ceres were composed of 25 percent water, it may have more water than all the fresh water on Earth. Ceres' water, unlike Earth's, would be in the form of water ice and located in the mantle, which wraps around the asteroid's solid core. Besides being the largest asteroid, Ceres also was the first asteroid to be discovered. Sicilian astronomer Father Giuseppe Piazzi spotted the object in 1801. Piazzi was looking for suspected planets in a large gap between the orbits of Mars and Jupiter. As more such objects were found in the same region, they became known as "asteroids" or "minor planets." NASA Headquarters, Washington; Goddard Space Flight Center, Greenbelt, Md.; Space Telescope Science Institute, Baltimore.
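The claim that a 25-percent-water Ceres could hold more water than all the fresh water on Earth is easy to sanity-check. A rough sketch follows; the two mass figures are not from the press release and are only illustrative estimates:

```python
CERES_MASS_KG = 9.4e20         # assumed estimate of Ceres' total mass
EARTH_FRESH_WATER_KG = 3.5e19  # assumed estimate of Earth's fresh water

# The 25% water fraction is the one cited in the article.
ceres_water_kg = 0.25 * CERES_MASS_KG
ratio = ceres_water_kg / EARTH_FRESH_WATER_KG
print(f"Ceres water: {ceres_water_kg:.2e} kg (~{ratio:.0f}x Earth's fresh water)")
```

Under these assumptions the asteroid's ice would outweigh Earth's fresh water roughly sevenfold, in line with the astronomers' estimate.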
http://www.hubblesite.org/newscenter/archive/releases/solar-system/2005/27/text/
4.03125
Loss of Arctic ice could lead to new hybrid species. The loss of Arctic sea ice could lead to species such as polar bears and some types of seal and whale being lost through hybridization. Sea ice in the Arctic is expected to disappear altogether in the summer months by the end of this century. And, writing in Nature, researchers say that this could mean the extinction of some rare marine mammals and the loss of many adaptive gene combinations. More than 20 marine mammal species are believed to be at risk of hybridization, and several Arctic hybrids have already been identified through DNA testing. For example, hunters shot a white bear with brown patches in 2006 that was later confirmed to be a polar bear-grizzly bear hybrid. Marine mammalogist Brendan Kelly of the National Oceanic and Atmospheric Administration’s marine mammal lab says that genes developed over millennia in isolated populations have given many Arctic marine animals sets of fine-tuned adaptations, helping them uniquely thrive in the harsh environment. The authors acknowledge that hybridization can actually be a good thing, especially for the first generation. But in later generations, the process begins to have more negative effects. Genes related to traits that once allowed the animal to thrive in a specific habitat can become diluted, leaving the animal less well adapted to its environment. In the case of creatures such as the rare North Pacific right whale, of which fewer than 200 individuals are believed to be left, interbreeding with the much more numerous bowhead whales could mean extinction.
http://www.tgdaily.com/sustainability-features/53077-loss-of-arctic-ice-could-lead-to-new-hybrid-species
4.5
The following activity can be used to review concepts through questions, vocabulary, or required memorization such as multiplication tables. It is an excellent way to achieve class participation with students such as beginning level English Language Learners or students who may be shy to speak during a class discussion. 1. Create a few example flashcards that indicate to students the concepts or vocabulary to be reviewed. 2. Direct each student to create several flash cards of the target concepts. Students can use index cards, or simply use small sheets of notebook paper or printer paper. 3. Have each student come to the projector and place their flashcard questions, one at a time, under the Document camera, visualiser or digital presenter. 4. Direct the rest of the class to answer orally, either by choral response, or by selecting individual students to respond. 5. Choose some of the flashcards to project under the Document camera, visualiser or digital presenter. Have students copy these questions onto paper. Assign as homework. 6. After sufficient reviewing, select flash cards to include on an assessment.
http://www.documentcameraexperts.com/Canada/LessonPlan.aspx?Plan=18
4.03125
Short Term (Working) Memory: Short term memory is also roughly synonymous with "working memory" or conscious thought. As I am typing this, the information is flowing into and out of my working memory. When you are talking or answering questions on an exam, the information must be brought into working memory for you to manipulate, and your words and answers come out of working memory. As you try to understand the words on this page, your ability to understand these concepts depends on your working memory (and to some extent your long term memory). Despite its central importance in the memory process, short term memory is in many respects the "bottle-neck" of the memory process. It is very limited in terms of both its capacity (amount of information it can hold) and its duration (length of time it can hold information). Let us start by considering the duration of memories in short term memory. The duration of short term memory is 15–30 seconds (usually closer to 15 than to 30 seconds). Imagine that you are asked to remember a telephone number that is new to you. You could probably keep it in your memory for more than 30 seconds, but only by saying it over and over again "in your head." This is called "rote rehearsal" or "maintenance rehearsal." Rote rehearsal or maintenance rehearsal can help you to keep information in short term memory for more than 30 seconds. BUT, if anything happens to interrupt your rote rehearsal, the information will be lost unless you have already succeeded in moving the information into long term memory. You can probably think of real life examples where this has happened: you were trying to keep a telephone number in your head, but someone interrupted your thoughts, and you lost the number forever. Many students are in the habit of studying by reading things over and over again, a form of rote rehearsal. However, if the exam is more than a few minutes away, and if you are anticipating any interruptions prior to the exam, this is probably NOT a very good strategy. If you get interrupted, and the information has not made its way to long term memory, it will be lost. If you want to remember information for an exam some time in the future, or so you can use it in real life, the information has to make it into long term memory, and it has to get there in such a way that you can get it back out when you need it. Look again at the drawing of the Three-Box Model of Memory. Notice that there is only a dotted line from working memory to long term memory when rote memorization is used. That is because rote memorization is NOT very efficient in moving information to long term memory. (Elaborative rehearsal is a better strategy that we will cover later.) Important point: Rote memorization can help to keep information in working memory, but it is NOT an efficient way to move information to long term memory. Let us move on to a consideration of the capacity of short term memory. If I gave you a series of letters to remember (e.g., twpbdrt), your ability to remember the series correctly would probably depend on the number of letters in the series. Most people can remember a series of letters correctly if there are only 3 or 5 letters in the series. About half the people asked to remember a seven-letter series have difficulty. Relatively few people can remember series consisting of 9 or 11 letters correctly. This finding, that the limit on short term memory is around 7 items, is one of the most consistent findings in all of psychology. 
George Miller, a very famous cognitive psychologist, coined the phrase "the magical number seven, plus or minus two," to describe the capacity of working memory. (You can read this classic paper at http://psychclassics.yorku.ca/Miller/.) Point: The capacity of short term (working) memory is seven, plus or minus two. Are you feeling skeptical at this point? I hope so! I have told you two things that are inconsistent. First I told you that we essentially "think" out of our working memory, and that I am writing this out of my working memory. Second, I told you that the capacity of working memory is seven plus or minus two items. If the limit on working memory were literally 7 letters, I could not write my own name, which is Barbara Brown, much less form a complete sentence. It is true that short term memory has a capacity of 7 somethings, but the limit is not necessarily 7 letters. To understand the real limits of short term (working) memory, try the following. Read the string of 15 letters below, then close your eyes and try to repeat them back in the correct order.
BGI TAE LTE GDO HTE
Most people cannot do this. There are 15 letters, and this exceeds the capacity of working memory by quite a lot. However, if we rearrange the letters, this same task can seem easy. Try it again!
THE EAT DOG BIG LET
Even though there are still 15 letters, most people will get all 15 this time. Why were the letters easier to remember this time? When the letters are arranged to form words, they are easier to remember. Each word is one unit of meaning. With the first set of 15 letters, each letter was the unit of meaning, so there were 15 units to remember. With the second set of 15 letters, each word was a unit of meaning, so there were only 5 units to remember. If the words are arranged in order to form a sentence, they are even easier to remember, as the sentence itself becomes a unit of meaning:
LET THE BIG DOG EAT
(This sentence is supposed to be meaningful to Georgia fans, although I am told that I misspelled "dawg.") The capacity of short term memory is approximately 7 "chunks." A CHUNK is a meaningful unit of information. We can increase the absolute capacity of short term memory by combining bits of information into meaningful units, or chunks. This process is called "chunking." Instructors try to help you chunk information by focusing on the ways that the pieces of information to be learned relate to one another. Students working on their own often try to memorize long lists of terms without understanding how these pieces of information relate to one another; they quickly overwhelm the capacity of short term memory, and their attempts at learning are ineffective. Whether or not your instructor helps you, always try to understand how the pieces of information you are trying to learn relate to one another and to other information you already know. Important point: To make your learning more effective, practice "chunking" and try not to work with more than seven chunks at a time.
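The arithmetic of chunking can be made concrete with a short sketch. This example is not from the page above: the function and variable names are invented for illustration, and only the 15-letter example and the 7-chunk span come from the text.

```python
# Toy illustration of chunking: the same 15 letters measured as
# letter-units versus word-units versus one sentence-unit.

def count_chunks(items):
    """Each element of `items` is treated as one meaningful unit."""
    return len(items)

letters = list("BGITAELTEGDOHTE")            # 15 separate units
words = ["THE", "EAT", "DOG", "BIG", "LET"]  # 5 units of meaning
sentence = ["LET THE BIG DOG EAT"]           # 1 unit of meaning

WORKING_MEMORY_SPAN = 7  # Miller's "magical number seven"

for label, units in [("letters", letters), ("words", words), ("sentence", sentence)]:
    n = count_chunks(units)
    verdict = "fits" if n <= WORKING_MEMORY_SPAN else "exceeds"
    print(f"{label}: {n} chunks -> {verdict} the ~7-chunk span")
```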
http://facstaff.gpc.edu/~bbrown/psyc1101/memory/stm.htm
4.03125
Centrifugation - Definition, Glossary, Details - Oilgae
The instrument used for centrifugation is called a centrifuge. A centrifuge is a piece of equipment, generally driven by a motor, that puts an object in rotation around a fixed axis, applying a force perpendicular to the axis. The centrifuge works using the sedimentation principle, in which centripetal acceleration causes substances of greater and lesser density (usually present in a solution, in small-scale applications) to separate. Centrifugation can be defined as the separation of molecules by size or density using the centrifugal forces generated by a spinning rotor, or as the process of using a centrifuge to separate joined particles suspended in a liquid.
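The glossary entry gives no formula, but the sedimentation principle is commonly quantified as relative centrifugal force (RCF): the centripetal acceleration expressed in multiples of g. A minimal sketch, assuming the standard approximation RCF ≈ 1.118 × 10⁻⁵ × r × N² (r in centimetres, N in revolutions per minute); the example rotor values are invented:

```python
# Relative centrifugal force (RCF) from rotor speed and radius.
# Standard approximation: RCF ~= 1.118e-5 * radius_cm * rpm**2,
# i.e. the centripetal acceleration in multiples of g.

def relative_centrifugal_force(radius_cm: float, rpm: float) -> float:
    return 1.118e-5 * radius_cm * rpm ** 2

# Example: a benchtop microcentrifuge rotor, 7 cm radius, at 13,000 rpm.
print(f"{relative_centrifugal_force(7.0, 13_000):,.0f} x g")  # ~13,200 x g
```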
http://www.oilgae.com/ref/glos/centrifugation.html
4.15625
Dor beetle (Geotrupes stercorarius)
Size: length 15 – 20 mm
The dor beetle, for many people, is the archetypal ‘beetle’. It has the classic beetle shape and colouration, being oval in shape with a shiny domed body, with a blue sheen. The thorax is smooth but the wing cases are grooved longitudinally. The legs are also shiny black and strong with noticeable spikes. The head is small and the antennae short with fan-like tips. The dor beetle, more popularly known as the dung beetle, belongs to a group of beetles known as scarabs. These beetles were regarded as sacred by the ancient Egyptians. The man who has come to be most closely associated with the study of the dor beetle was the French entomologist Jean-Henri Fabre (1823 – 1915). Fabre was an amateur with no formal scientific training, but he was fascinated by insects and their behaviour. His books were readable and his enthusiastic style of writing made them very popular. He is regarded to this day as one of the most influential people in the study of insects. The dor beetle is found throughout most of Europe, Asia and Africa. Similar species occur over most of the temperate and tropical countries. Dor beetles are found wherever there are grazing herbivores. In temperate latitudes, they can be found in pastures and meadows, open woodland grazed by sheep or cattle, and they occasionally turn up in parks and gardens. Anyone who has ever examined a cowpat will probably have noticed a patchwork of holes beneath it. These holes are made by the adult beetles burrowing down into the soil, taking with them particles of dung as food supplies for their larvae. Animal dung is rich in nutrients, and dung beetles play an important role in returning much of this material to the soil. Their larvae, in eating much of this plant matter, make it more accessible to bacteria and other soil-living organisms which, in turn, allow the soil to maintain its fertility. In other parts of the world, relatives of the dor beetle make small balls of animal dung, which they roll away to suitable nest sites constructed underground. Adults are strong fliers and can travel some distance in search of suitable food supplies for their young. Sometimes they turn up in parks and gardens and this is where people are most likely to find them on the ground. The adults are often infested with small copper-coloured mites, a phenomenon that has led to the beetles’ folk-name of ‘lousy watchman’. The biggest threat to these beetles is the use of ivermectin-based worming treatments for grazing animals. This chemical is used to treat intestinal worms in the animals’ gut but persists in their dung when it is deposited on the ground. The ivermectin effectively kills the developing beetle larvae long after it has passed through the larger animal. Although the dor beetle is not thought to be threatened as a species, it is linked to another animal which is the subject of several conservation projects. Dor beetles and their cousins, the chafers, are the main prey of greater horseshoe bats, a species that has been declining in numbers for many years. The cause of this is believed to be the use of the ivermectin-based wormers used in the intensive rearing of cattle. A reduction in the available insect food has affected greater horseshoe bat populations across the UK, and the bats are now limited to parts of the West Country and South Wales. Conservation projects to restore the fortunes of this mammal include limiting the use of ivermectin treatments and re-creating habitats suitable for the bats.
As with so many endangered species, the solution to one problem frequently benefits other associated animals.
- Larvae: stage in an animal’s lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
http://www.arkive.org/dor-beetle/geotrupes-stercorarius/factsheet
4.3125
In many data analyses in social science, it is desirable to compute a coefficient of association. Coefficients of association are quantitative measures of the amount of relationship between two variables. Ultimately, most techniques can be reduced to a coefficient of association and expressed as the amount of relationship between the variables in the analysis. For instance, with a t test, the correlation between group membership and score can be computed from the t value. There are many types of coefficients of association. They express the mathematical association in different ways, usually based on assumptions about the data. The most common coefficient of association you will encounter is the Pearson product-moment correlation coefficient (symbolized as the italicized r), and it is the only coefficient of association that can safely be referred to as simply the "correlation coefficient". It is common enough so that if no other information is provided, it is reasonable to assume that is what is meant. Let's return to our data on IQ and achievement in the previous assignment, only this time, disregard the class groups. Just assume we have IQ and achievement scores on thirty people. IQ has been shown to be a predictor of achievement; that is, IQ and achievement are correlated. Another way of stating the relationship is to say that high IQ scores are matched with high achievement scores and low IQ scores are matched with low achievement scores. Given that a person has a high IQ, I would reasonably expect high achievement. Given a low IQ, I would expect low achievement. (Please bear in mind that these variables are chosen for demonstration purposes only, and I do not want to get into discussions of whether the relationship between IQ and achievement is useful or meaningful. That is a matter for another class.) So, the Pearson product-moment correlation coefficient is simply a way of stating such a relationship and the degree or "strength" of that relationship. The coefficient ranges in value from -1 to +1. A value of 0 represents no relationship, and values of -1 and +1 indicate perfect linear relationships. If each dot represents a single person, and that person's IQ is plotted on the X axis, and their achievement score is plotted on the Y axis, we can make a scatterplot of the values which allows us to visualize the degree of relationship or correlation between the two variables. The graphic below gives an approximation of how variables X and Y are related at various values of r. The r value for a set of paired scores can be calculated from deviation scores as r = SPxy / √(SSx × SSy), where SPxy is the sum of products of the paired deviation scores and SSx and SSy are the sums of squared deviations for X and Y. There is another method of calculating r which helps in understanding what the measure actually is. Review the ideas in the earlier lessons of what a z score is. Any set of scores can be transformed into an equivalent set of z scores. The variable will then have a mean of 0 and a standard deviation of 1. The z scores above the mean are positive, and z scores below the mean are negative. The r value for the correlation between the scores is then simply the sum of the products of the z scores for each pair divided by the total number of pairs minus 1: r = Σ(zX × zY) / (n - 1). This method of computation helps to show why the r value signifies what it does. Consider several cases of pairs of scores on X and Y. Now, when thinking of how the numerator of the sum above is computed, consider only the signs of the scores and the signs of their products. If a person's score on X is substantially below the mean, then their z score is large and negative.
If they are also below the mean on Y, their z score for Y is also large and negative. The product of these two z scores is then large and positive. The product is also obviously large and positive if a person scores substantially above the mean on both X and Y. So, the more alike the z scores on X and Y are, the more positive the product sum in the equation becomes. Note that the more consistently people score in opposite directions on the two measures (negative z scores on X paired with positive z scores on Y), the more negative the product sum becomes. This system sometimes helps to give insight into how the correlation coefficient works. The r value is then an average of the products between z scores (using n-1 instead of n to correct for population bias). When the signs of the z scores are random throughout the group, there is roughly equal probability of having a positive ZZ product or a negative ZZ product. You should be able to see how this would tend to lead to a sum close to zero.
Interpretation of r
One interpretation of r is that the square of the value represents the proportion of variance in one variable which is accounted for by the other variable. The square of the correlation coefficient is called the coefficient of determination. It is easy for most people to interpret quantities when they are on a linear scale, but this squared relationship is not linear, and that should be kept in mind when interpreting correlation coefficients as "large", "small", etc. Note the graph below, which shows the proportion of variance accounted for at different levels of r. Note that not even half of the variance is accounted for until r reaches .71, and that values below .30 account for less than 10% of the variance. Note also how rapidly the proportion of variance accounted for increases between .80 and .90, as compared to between .30 and .40. Note that r = .50 is only 25% of the variance. Be careful not to interpret r in a linear way like it is a percentage or proportion. It is the square which has that quality. That is, don't fall into the trap of thinking of r = .60 as "better than half", because it clearly is not (it is 36%). There are some obvious caveats in correlation and regression. One has been pointed out by Teri in the last lesson. In order for r to have the various properties needed for its use in other statistical techniques, and in fact, to be interpreted in terms of proportions of variance accounted for, it is assumed that the relationship between the variables is linear. If the relationship between the variables is curvilinear as shown in the figure below, r will be an incorrect estimate of the relationship. Notice that although the relationship between the curvilinear variables is actually better than with the linear, the r value is likely to be less for the curvilinear case because the assumption is not met. This problem can be addressed with something called nonlinear regression, which is a topic for advanced statistics. However, it should be obvious that one can transform the y variable (such as with log or square functions) to make the relation linear, and then a normal linear regression can be run on the transformed scores. This is essentially how nonlinear regression works. Another assumption is called homoscedasticity (HOMO-SEE-DAS-STI-CITY or HOMO-SKEE-DAS-STI-CITY). This is the assumption that the variance of one variable is the same across all levels of the other. The figure below shows a violation of the homoscedasticity assumption.
These data are heteroscedastic (HETERO-SKEE-DASTIC). Note that Y is much better predicted at lower levels of X than at higher levels of X. A related assumption is one of bivariate normality. This assumption is sometimes difficult to understand (and it crops up in even more complicated forms in multivariate statistics), and difficult to test or demonstrate. Essentially, bivariate normality means that for every possible value of one variable, the values of the other variable are normally distributed. You may be able to visualize this by looking at the figure below with thousands of observations (this problem is complicated enough to approach the limits of my artistic ability). Think of the normal curves as being frequency or density at their corresponding values of X or Y. That is, visualize them as perpendicular to the page. Regression and correlation are very sensitive to these assumptions. The values for this type of analysis should not be over-interpreted. That is, quantitative predictions should be tempered by the validity of these assumptions. It should be intuitive from the explanation of the correlation coefficient that a significant correlation allows some degree of prediction of Y if we know X. In fact, when we are dealing with z scores, the math for this prediction equation is very simple. The predicted z for the Y score (z'y) is z'y = r(zx). When the r value is used in this way, it is called a standardized regression coefficient, and the symbol used to represent it is often a lower-case Greek beta (β), so the standardized regression equation for regression of y on x is written as z'y = β(zx). When we are not working with z scores, but we are attempting to predict Y raw scores from X raw scores, the equation requires a quantity called the unstandardized regression coefficient. This is usually symbolized as B1, and allows for the following prediction equation for raw scores: Y' = B0 + B1(X). The unstandardized regression coefficient (B1) can be computed from the r value and the standard deviations of the two sets of scores: B1 = r(sy / sx). The B0 is the intercept for the regression line, and it can be computed by subtracting the product of B1 and the mean of the x scores from the mean of the y scores: B0 = MY - B1(MX). Now, suppose we are attempting to predict Y (achievement) from X (IQ). Assume we have IQ and Achievement scores for a group of 10 people. Suppose I want to develop a regression equation to make the best prediction of a person's Achievement if I am given their IQ score. I would proceed as follows: First compute r. Now, it is a simple matter to compute B1. B1 = SPxy / SSx = 420 / 512.5 = 0.82 Now compute B0. B0 = MY - B1(MX) = 94.8 - 0.82(99.5) = 13.2 The regression equation for predicting Achievement from IQ is then Y' = B0 + B1(X), or ACHIEVEMENT SCORE = 13.2 + 0.82 (IQ).
Error of Prediction
Given an r value between the two variables, what kind of error in my predicted achievement score should be expected? This is a complicated problem, but an oversimplified way of dealing with it can be stated which is not too far off for anything other than extreme values. The standard error of the estimate can be thought of roughly as the standard deviation of the expected distribution of true Y values around a predicted Y value. The problem is that this distribution changes as you move across the X distribution, so the standard error is not exactly correct for any particular prediction. However, it does give a reasonable estimate of the confidence interval around predicted scores.
For standardized (z) scores, the standard error of the estimate is sest = √(1 - r²). For raw scores, it is sest = sy√(1 - r²). For example, given a predicted Y score of 87 and a standard error of estimate of 5.0, we could speculate that our person's true score is somewhere between 87 - 2(5) and 87 + 2(5), for roughly 95% confidence. Again, this is an oversimplification, and the procedures for making precise confidence intervals are best left for another time.
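The pieces of this lesson (r as an average product of z scores, the coefficients B1 and B0, and the simplified standard error of the estimate) fit together in a short sketch. The formulas follow the lesson above, but the ten IQ/achievement pairs are invented for illustration, so the resulting numbers will not match the worked example:

```python
# Pearson r, regression coefficients, and the (oversimplified) standard
# error of the estimate, following the formulas in the lesson above.
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(x, y):
    """r as the average product of paired z scores, using n - 1."""
    mx, my, sx, sy = mean(x), mean(y), sd(x), sd(y)
    return sum(((xi - mx) / sx) * ((yi - my) / sy)
               for xi, yi in zip(x, y)) / (len(x) - 1)

def regression(x, y):
    """Unstandardized coefficients: B1 = r * (sy / sx), B0 = My - B1 * Mx."""
    r = pearson_r(x, y)
    b1 = r * sd(y) / sd(x)
    b0 = mean(y) - b1 * mean(x)
    return b0, b1, r

# Hypothetical IQ and achievement scores for ten people.
iq =  [95, 102, 88, 110, 99, 105, 92, 120, 85, 101]
ach = [90, 100, 85, 108, 96, 101, 88, 115, 82, 99]

b0, b1, r = regression(iq, ach)
se_est = sd(ach) * math.sqrt(1 - r ** 2)  # simplified raw-score standard error
print(f"r = {r:.3f}, ACH' = {b0:.1f} + {b1:.2f} * IQ, SE(est) ~= {se_est:.2f}")
```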
http://jamesstacks.com/stat/pearson.htm
4.03125
NWT Literacy Council | Languages of the Land
AN OVERVIEW OF ABORIGINAL LANGUAGE STRATEGIES
It must be understood that there is a difference between a project and a strategy. A project is a single activity, with a beginning and an end. A strategy is an ongoing series of activities that may include one or more projects. In the Northwest Territories, language communities have primarily been involved in projects, while the strategies for language retention and revitalization have been developed and implemented by government agencies, including school boards. This approach is changing significantly at the present time, as the territorial government grants more authority and resources to the respective Aboriginal language communities to develop and implement their own language strategies. This section of the manual presents a few examples of successful Aboriginal language strategies from different parts of the world, including Nunavut, and an overview of language strategies and projects in the Northwest Territories. One example is the Maori "language nest" program, which was orally based, centred on listening and speaking rather than reading and writing. The program was based on the understanding that children learn languages more effectively when they are young and that parents must be speaking the language at home to reinforce its use. In conjunction with these language nests, the Maori eventually implemented Maori language programs in the local schools. It was essential for language preservation that pre-school "graduates" of the language nests could continue to learn and practice their language at home and within the school system.
http://www.nwt.literacy.ca/resources/aborig/land/page23.htm
4.375
The island of Spitsbergen in the high Arctic is home to a seductively simple food web. The food web includes just four vertebrates--three herbivores (reindeer, rock ptarmigans and sibling voles) and one secondary consumer (Arctic foxes). The simplicity of the Spitsbergen food web is precisely what makes it so enticing to scientists who study population dynamics. The effects extreme weather events have on vertebrate populations, for example, can be measured with relative ease. Brage Hansen from the Norwegian University of Science and Technology and colleagues designed a study to measure the effect extreme weather events have on the vertebrates of Spitsbergen. They observed that during the winter in Spitsbergen, warmer temperatures mean that rain storms (instead of snow storms) are more frequent than they had been in the past. And that rain causes problems for the resident vertebrates. When it rains during the winter in Spitsbergen, the rainwater freezes and thaws repeatedly. Crusty layers of ice form on the ground and over any exposed vegetation. Rainwater also seeps down through any existing snowpack and into the ground, where it freezes. The result is a thick layer of ice that entombs vegetation. The herbivores--the reindeer, rock ptarmigans and voles--cannot get to the vegetation, and they begin to starve. Hansen and his team noted that rainy, icy winters result in herbivore population crashes. About a year later, Arctic fox populations follow suit. Initially, reindeer carcasses become more common, making food more plentiful for the Arctic foxes. But in subsequent years, there are fewer reindeer (as well as fewer ptarmigans and voles) and less food for the foxes. As a result, the fox population, too, crashes. The study is the first of its kind to show the effects of climate on the populations of an entire community. The results raise concern for the stability of such communities in the future, when extreme weather events are expected to become even more common. Hansen, B. et al. (2013). Climate Events Synchronize the Dynamics of a Resident Vertebrate Community in the High Arctic. Science, 339(6117), 313-315. DOI: 10.1126/science.1226766
http://animals.about.com/b/2013/01/19/how-four-vertebrates-cope-with-a-warming-arctic.htm
4.28125
On this day, the United States buttresses its control over the Gadsden Purchase with the establishment of Fort Buchanan. Named for recently elected President James Buchanan, Fort Buchanan was located on the Sonoita River in present-day southern Arizona. The U.S. acquired the bulk of the southwestern corner of the nation from Mexico in 1848 as victor's spoils after the Mexican War. However, congressional leaders, eager to begin construction of a southern railroad, wished to push the border farther to the south. The government directed the American minister to Mexico, James Gadsden, to negotiate the purchase of an additional 29,000 square miles. Despite having been badly beaten in war only five years earlier and forced to cede huge tracts of land to the victorious Americans, the Mexican ruler Santa Anna was eager to do business with the U.S. Having only recently regained power, Santa Anna was in danger of losing office unless he could quickly find funds to replenish his nearly bankrupt nation. Gadsden and Santa Anna agreed that the narrow strip of southwestern desert land was worth $10 million. When the treaty was signed on December 30, 1853, it became the last addition of territory (aside from the purchase of Alaska in 1867) to the continental United States. The purchase completed the modern-day boundaries of the American West. The government established Fort Buchanan to protect emigrants traveling through the new territory from the Apache Indians, who were strongly resisting Anglo incursions. However, the government was never able to fulfill its original purpose for buying the land and establishing the fort: a southern transcontinental railroad. With the outbreak of the Civil War four years later, northern politicians abandoned the idea of a southern line in favor of a northern route that eventually became the Union Pacific line.
http://www.history.com/this-day-in-history/us-establishes-fort-buchanan?catId=11
4.03125
Comets are made up of ice and dust and move through the solar system in orbits around the Sun, which they have been following for billions of years. When a comet's orbit brings it closer to the Sun, its ices begin to vaporize, releasing gas and dust that form the comet's tail, or trail. Comets do not come close to our planet; in fact, they remain millions of miles away. They appear to move very slowly across the sky, which is why we can see a comet for several weeks or even months.
http://academic.emporia.edu/abersusa/students/johnson/comets~1.htm
4.21875
The Domesday Book is one of the earliest surviving public records. It was commissioned in 1085. Domesday is a highly detailed survey and valuation of all the land held by the King and his chief tenants, along with all the resources that went with the land in late eleventh century England. The information in this book is still important to legal affairs, real estate transactions, historians, and genealogists. Even today, the book can be used in court for property disputes. A recent survey revealed that fewer than 1% of England's population have actually been to see the original Domesday Book in The National Archives' museum. Starting this week, images of the entire book are available online for all to see. This is a true milestone document. In 1066 William, Duke of Normandy, defeated the Anglo-Saxon King, Harold II, at the Battle of Hastings and became King of England. In 1085 England was again threatened with invasion, this time from Denmark. William had to pay for the mercenary army he hired to defend his kingdom. To do this, he needed to know what financial and military resources were available to him. At Christmas 1085 William commissioned a survey to discover the resources and taxable values of all the boroughs and manors in England. He wanted to discover who owned what, how much it was worth and how much was owed to him as King in tax, rents, and military service. A reassessment of the tax known as the geld took place at about the same time as Domesday and still survives for the southwestern part of England. Domesday is much more than just a tax record. It also records which manors belonged to which estates and gives the identities of the King's tenants-in-chief who owed him military service in the form of knights to fight in his army. The King was essentially interested in tracing, recording, and recovering his royal rights and revenues, which he wished to maximize. It was also in the interests of his chief barons to co-operate in the survey since it provided a permanent record of the tenurial gains they had made since 1066. The nickname "Domesday" may refer to the Biblical Day of Judgment, or "doomsday" when Christ will return to judge the living and the dead. Just as there will be no appeal on that day against his decisions, so the Domesday Book has the final word - there is no appeal beyond it as evidence of legal title to land. For many centuries Domesday was regarded as the authoritative register regarding rightful possession and was used mainly for that purpose. It was called Domesday by 1180. Before that, it was known as the Winchester Roll or King's Roll, and sometimes as the Book of the Treasury. If you are successful at tracing English ancestry back to the eleventh century, it is likely that you will find your ancestors listed in this book. To be sure, it only lists the landed gentry. However, those are also the families that can sometimes be traced nearly one thousand years. Every person alive on earth today has millions of ancestors from the eleventh century, ignoring "pedigree collapse." If you have English ancestry, you probably have at least a few ancestors of that time listed in the Domesday Book. Your challenge is to find them and to document your lineage! The U.K. National Archives are making online searches free, but downloads of data will cost £3.50 (approx $6.50 US). That price will purchase a copy of an original page, featuring a place name and the tenants and property of that place. It also includes a translation of the entry into modern English. 
For more information or to view the Domesday Book online, go to http://www.nationalarchives.gov.uk/domesday.
http://blog.eogn.com/eastmans_online_genealogy/2006/08/domesday_book_o.html
4.03125
When a fertilized egg has implanted in the uterus, the group of cells that will become a baby is called an embryo. A developing, fertilized egg is known by several names within the first 2 weeks after conception, including zygote, morula (day 4), and blastocyst (day 5). The embryonic period lasts until about 8 weeks after conception (about 10 weeks from the last menstrual period). During this time, the embryo is forming major body structures, such as the head, spine, and internal organs. This is the time when most birth defects develop. After this point, the growing baby is called a fetus.
http://www.emedicinehealth.com/script/main/art.asp?articlekey=133545&ref=129342
4.03125
On May 10, 1869, a golden spike was driven at Promontory, Utah, signaling the completion of the first transcontinental railroad in the United States. The transcontinental railroad had long been a dream for people living in the American West. While there was a large network of railways built in the Eastern part of the country in the 1830s and ’40s, there were few in the West and none that connected with Eastern lines. The Western population boom following the California Gold Rush of 1849 made the need for a transcontinental line apparent. There was, however, the problem of finding a route through the unforgiving terrain of the West, particularly the Sierra Nevada mountains. In 1860, the railroad engineer Theodore Judah uncovered a path through the Sierras using the infamous Donner Pass, where in 1846, a group of pioneers, known as the Donner Party, became trapped and were forced to resort to cannibalism to survive. Judah formed the Central Pacific Railroad and went to Washington for financing. In 1862, Congress passed the Pacific Railroad Act, which provided land and financing to the Central Pacific and a second company, the Union Pacific, to construct a Western line that would connect with the existing Eastern lines. Central Pacific began construction in Sacramento and moved east, while Union Pacific began in Council Bluffs, Iowa (bordering Omaha), and moved west. The two companies met in Promontory to complete the line. The completion of the transcontinental railroad made the American West easily accessible, creating a boom in trade, business and population. The railroad also had a psychological effect of bringing the country together, a sentiment expressed by the May 11 New York Times: “The long-looked-for moment has arrived. The construction of the Pacific Railroad is un fait accompli. The inhabitants of the Atlantic seaboard and the dwellers on the Pacific slopes are henceforth emphatically one people.” Connect to Today: Today, Amtrak operates in 46 states. It runs more than 21,000 miles of routes on tracks shared with freight trains and another 363 miles on tracks shared with hundreds of commuter trains in the Northeast Corridor between Washington and Boston. Federal and state governments are debating how to continue to finance the nation’s network of rail lines, many of which have low ridership. In his 2011 State of the Union address, President Obama pledged $53 billion toward the construction of high-speed rail, but Congressional Republicans greatly reduced the amount of funding. Also, Republican governors in three states declined federal financing for railroads, fearing that the states would be financially responsible for unprofitable lines. An April 2011 Times editorial said, “The agreement between Congress and the White House to virtually eliminate money for high-speed rail is harebrained. France, China, Brazil, even Russia, understand that high-speed rail is central to future development. Not Washington.” The editorial argued for financing to improve the profitable Amtrak Northeast Corridor, and the construction of a line between San Francisco and San Diego. Why do you think that high-speed rail is such a contentious topic in the United States? What do you think should be done to increase ridership throughout the country? Do you agree with the editorial that high-speed rail is central to future development? Why or why not? In your opinion, how should the U.S. government approach the financing of railways?
http://learning.blogs.nytimes.com/2012/05/10/may-10-1869-first-transcontinental-railroad-completed/?src=twrhp
4
attorney general
attorney general, the chief law officer of a state or nation and the legal adviser to the chief executive. The office is common in almost every country in which the legal system of England has taken root. The office of attorney general dates from the European Middle Ages, but it did not assume its modern form before the 16th century. Initially, king’s attorneys were appointed only for particular business or for particular cases or courts, but by the 15th century an attorney general for the crown was a regular appointee. In time, he acquired the right to appoint deputies and became a figure of great influence as the medieval system broke down and new courts and political institutions evolved. Today the British attorney general and his assistant, the solicitor general, represent the crown in the courts and are legal advisers to the sovereign and the sovereign’s ministers. The attorney general is a member of the government but not of the cabinet. He is consulted on the drafting of all government bills, advises government departments on matters of law, and has a wide range of court-related duties. By virtue of his position as a law officer of the crown, the attorney general, who continues to practice as a barrister with the crown as his only client, is recognized by the bar as the leader of the legal profession. He has control of the office of public prosecutions, which gives advice on and often conducts criminal prosecutions. Certain offenses can be prosecuted only with the consent of either the attorney general or the director of public prosecutions. The attorney general also has the right to stay criminal proceedings in the superior courts. The office of attorney general of the United States was created by the Judiciary Act of 1789, which divided the country into districts and set up courts in each one, along with attorneys with the responsibility for civil and criminal actions in their districts. The attorney general, a member of the cabinet, is appointed by the president and is head of the Department of Justice. As its head, the attorney general has complete control over the law business of the government, all its other law officers being subordinate to him, though other departments have lawyers on their staffs who are not under his specific direction. As head of the Department of Justice, the attorney general must necessarily devote much of his time to administration. He also acts as the legal adviser of the president and of the heads of other cabinet departments with respect to government business. Every U.S. state has an elected attorney general with duties similar to those of the federal attorney general. He is usually elected by the voters at the same time and for the same term as the governor. See also prosecutor.
http://www.britannica.com/EBchecked/topic/42316/attorney-general
4.1875
After the famous battle between the French and the English on the Plains of Abraham at Quebec in 1759, and the capitulation of Montreal by the French in September 1760, the French regime was replaced by an English one. This period is now called the "Conquest." The Conquest led to an entirely new regime. New France became a British colony, just like Nova Scotia and New Brunswick. The British took control over all the land and established their way of government. On October 7, 1763, the first constitution for the "Province of Quebec," the new name for the colony, was introduced by Royal Proclamation. It set up political institutions modelled on British tradition. As had been the case under French law, the Governor of Quebec represented the King, but under English law he had a more significant administrative role, as he also replaced the intendant. Almost 11 years later, the Royal Proclamation was replaced by the Quebec Act. This new constitution effectively increased the size of Quebec by adding Labrador, the Magdalen Islands, the Great Lakes Region, and the Ohio Valley. The new act repealed the Royal Proclamation of 1763 and instituted a more realistic policy for dealing with the Canadians. It reinstituted French civil law, gave official recognition to the French language and the Catholic religion, and allowed for the participation of Canadians of French origin in the civil administration of the colony. The 1774 constitution also created a deeply felt upheaval in America. It enabled the Canadians to join the Empire, but raised the ire of the colonies to the south. These colonies, 13 in all, had gone through considerable development, and they broke their ties with England following the Declaration of Independence of 1776. This independence was recognized seven years later, in 1783, by the Treaty of Paris, which recognized the 13 colonies as the United States of America. The American Declaration of Independence had the effect of bringing to Canada a large number of Loyalists -- Americans who chose to flee the United States to remain loyal to the King and the Empire. They came to the northern British colonies: Nova Scotia, New Brunswick and Quebec. Several of them settled along the upper St. Lawrence and along the banks of Lake Ontario and Lake Erie. With the arrival of such a large number of Loyalists, the Quebec Act quickly became difficult to enforce, as the Loyalists called for the British system of parliament and for British civil law. The British government answered the grievances of the Loyalists by proposing a compromise between their desires and those of the Canadians: the Constitutional Act of 1791. Passed by the Parliament in London, the Constitutional Act did not abolish the Quebec Act but introduced some amendments. The new act divided the Canadian territory into two colonies, a mostly French-speaking Lower Canada, and a mostly English-speaking Upper Canada. To the existing offices of governor and legislative council was added a house of assembly, which, jointly with the Legislative Council, held the power to pass laws for the peace, good order and healthy administration of the colonies.
The constitutional text remained silent, however, on the subject of the status of the languages. In 1792, a special order rounded out the act by establishing an executive council, whose members were appointed by the King. This executive institution was answerable not to the elected members, but to the governor, and the governor was answerable only to the imperial government. The new constitution did not offer any solutions for resolving conflicts that could arise between the House of Assembly and the Executive Council. Therefore, the Act of 1791 brought the parliamentary system to Lower Canada, but it clearly did not bring democracy. Freedom of religion was upheld, but the Act also provided for establishing the Anglican Church. At the time, the population of Lower Canada was 160,000, of which 20,000 were English-speaking. It was divided into four administrative districts: Gaspé, Québec, Trois-Rivières and Montréal. The territory was also divided into 25 counties. Developments in Lower Canada Prior to the Uprisings of 1837-1838 The first election campaign, with 50 seats at stake, was held in 1792. There were no structured political parties or party leaders. The campaign resulted in the election of 34 French-speaking and 16 English-speaking members. The House of Assembly of Lower Canada officially opened in December of 1793 at the Bishop's residence in Québec City. The first debate concerned the selection of a Speaker or President of the Assembly. Jean-Antoine Panet was elected on December 18. The language issue immediately haunted Assembly debates, and as a result the members were divided into two blocs. Although the French language had no legal status in Canada, official documents had been published in both languages since the Conquest. After a long and noisy debate, the Assembly passed a law decreeing that both languages were official. Nevertheless, London disagreed and imposed English as the only official language of Lower Canada. French was admitted only as a translation language. This first Parliament adopted only four laws of significance, covering the judiciary, the militia, finance and highways. Two parties began to take shape: the Tory Party, which brought together English-speaking members, and the Canadian Party, whose members were French-speaking and were in the majority in the House of Assembly. The bills introduced in the Assembly by the Canadian Party were strongly attacked by the Tories and, in most cases, were blocked by the Legislative Council. In 1805, the British business class founded The Quebec Mercury, a political paper that gave voice to their business, national and political ambitions. To show their opposition to the English, the Canadians founded a paper called Le Canadien in 1806. There were now two clearly defined classes in this new society: English merchants and Canadians. Over the years, the tension increased between these two groups as each defended its own interests. Some powerful Canadian spokespeople were already coming to the fore, notably Pierre-Stanislas Bédard, François Blanchet and Louis-Joseph Papineau. The Canadian Party won the election of 1808 and immediately voted to expel two English members. Furious, Governor James Henry Craig prorogued the House and called a new election. He also had the presses of Le Canadien seized. The Canadian Party again won the election. In addition to its internal struggles, Lower Canada was soon under threat from outside forces. Motivated by expansionist fervor, the United States declared war on Great Britain in 1812. 
The Americans fielded a large army of 12,000 soldiers, though poorly trained and under inept leadership. The war ended in 1814 after England sent 14,000 well-trained soldiers under good leadership to America. One outcome of this conflict was that it enabled both English- and French-speaking citizens of British North America to discover that they could co-operate in defending their common interests. The subsequent easing of tension between the two groups was short-lived, however, as both wanted to impose their own social structures. The Tories pushed for a society cast in the British mould, characterized by political power in the hands of the aristocracy, intense trade, unconditional attachment to royalty and the British Empire, and a culture pervaded by Protestant reform. In contrast, the Canadian Party preached a society based on local sovereignty, with power exercised on behalf of the working classes by the middle classes, and supported by agriculture, domestic trade, the Custom of Paris, Catholicism and local markets. Both coalitions, sometimes with no regard for their own interests, fell back to stubborn, intransigent positions, which effectively stymied Lower Canada's development and led to armed conflict. Several events contributed to the rise of nationalism, which found its outlet in the insurrection of 1837. Apart from the numerous conflicts that pitted the two groups against each other, a major issue worsened the situation, namely the question of subsidies. Subsidies were the amounts of money that the Assembly granted to the governor and the Executive Council to balance the budget. In 1818, the Assembly approved the subsidies requested by the Governor, but demanded that numerous abuses be rectified, such as pensions for deceased individuals, paying people to do nothing, salaries for non-residents, and fictitious salaries. Nothing was done. The following year, Governor Charles Gordon Lennox, Duke of Richmond, submitted a request containing the same abuses. The Assembly voted on the budget section by section, refusing to allocate funds for abusive expenditures. The Legislative Council blocked the effort. The abuses multiplied year after year, to the benefit of a group of individuals under the Governor's wing. In 1827, 87,000 people signed a petition denouncing the abuses perpetrated by this so-called "château clique." While the subsidy crisis was brewing, another major problem arose, this time concerning the sharing of customs duties between Upper and Lower Canada. Due to its geographical position, Upper Canada had no seaport and was thus entirely at Lower Canada's mercy, as customs duties were the main source of revenue for the colony. In 1797, it was determined that the lower province -- Lower Canada -- had to remit a share of the customs duties collected according to its regulations and in proportion to the quantity of goods entering at Côteau du Lac. The issue was again raised in 1817, at which time it was agreed that one-fifth of the customs duties collected by Lower Canada were to be remitted to Upper Canada. However, the crisis that developed in 1819 meant that the necessary calculations were not made, and Upper Canada found itself deprived of its share. The feeling was growing that the existence of two completely separate colonies was inadequate. The British in Montreal felt that uniting the English forces of both Canadas was their only hope for becoming leaders of a majority, which would allow them to develop unhindered the business opportunities along the St. Lawrence Valley.
They used the pretext of the administrative crisis to demonstrate the inadequacy of the 1791 constitution. In 1822, the English merchants managed to make a secret presentation to London advocating unification of the two Canadas. The plan called for each "section" to be represented by a maximum of 60 members in a new, single legislature. The merchants figured they could get about 20 members elected in Lower Canada, and therefore the 200,000 English people in both Canadas would be represented by 80 members, compared with 40 members for the 300,000 French Canadians. At the heart of the matter was who would truly hold power and be able to impose their law. When introducing the bill in London, the English merchants asked for a quick vote to avoid a counterproductive flood of protest. However, the opposition refused to co-operate. The bill was withdrawn, but it was not completely dead. In September, news of the manoeuvre reached the colonies. Meetings were called immediately and petitions began to circulate. Ethnic tensions mounted. Some 60,000 signatures were collected, and two delegates, pro-French journalist John Neilson and Speaker of the House Louis-Joseph Papineau, were selected to go to London to present the petitions and fight against union. The British ministers heard their statements, as well as one from Governor Dalhousie, who had returned to England for a short stay. They decided to reject the bill, and assured the two men that it would not be studied again during the 1823 session. Moreover, it would never be considered again without the interested parties having had an opportunity to express their views. Nonetheless, Governor Dalhousie did not give up and continued to believe that union of the two Canadas was absolutely necessary to the interests of British colonization. Upon his return to Canada, Dalhousie feared the worst for the 1824 session. An administrative scandal then came to light that seemed to justify the Assembly's claims regarding the administration of public funds. An inquiry showed that the Receiver General John Caldwell, who administered the public funds, was guilty of misappropriation. Some 100,000 pounds sterling had been used for speculative transactions and had been lost. Furthermore, Papineau was becoming an increasingly formidable opponent to Governor Dalhousie's plans. Dalhousie called an election in 1827 in the hope of getting rid of this bothersome opponent. Election results were disastrous for the English party, and Papineau was re-elected Speaker of the Assembly. It was too much for Dalhousie, who refused to approve the choice and immediately prorogued the legislature. The protest movement intensified. A delegation of three members -- John Neilson, Denis-Benjamin Viger and Augustin Cuvillier -- was mandated to go to London to present a petition containing 87,000 signatures and a series of resolutions that dealt with much more than the issue of subsidies. The work they did led to the creation of a special committee of the British House of Commons, responsible for studying and reporting on the Canadian question. Overall, the grievances of the delegation from Lower Canada were recognized as well founded in the ensuing report. In 1828, Governor Dalhousie was replaced by Sir James Kempt. After the report from the House of Commons, the political climate improved throughout the colony. The new governor took advantage of the general lull created by the expectation of corrective measures from London.
In 1830, a new governor, Lord Matthew Aylmer, landed in Quebec with new instructions. Meanwhile, the House of Assembly wanted to settle the question of subsidies, and made control of all the colony's revenues and expenses a point of principle. Governor Aylmer, however, informed the House that the next subsidy act would have to respect the requirements of the Crown. Once again the situation was deadlocked. Any hopes raised by the actions of 1827, the report of the special committee of the British Commons, and the statements of the English ministers were dashed. In February 1834, exasperated Assembly members passed ninety-two resolutions that summarized their requests and grievances, and sent them to the government in London. The resolutions denounced all the injustices that the Assembly had noted, similar to the memorandum that the Canadian Party -- now the Patriot Party -- had submitted in 1828. This time, however, the tone was different, and the proposed solutions were of such an uncompromising nature that they rattled the faith of those who had put their trust in the wisdom of the British parliamentary system. Meanwhile, London had fallen into a position where it could not intervene quickly. A domestic political crisis had caught the full attention of the imperial Parliament: in the space of 11 months, there had been four different ministers responsible for the colonies. Despite this overriding concern, a commission of inquiry was created to study the Canadian situation. During this time, the House of Assembly of Lower Canada decided it would not approve any more budgets so long as London did not accept its demands. The official response from England arrived three years later, in May 1837. The British Parliament was in possession of the report from the investigative commission, which rejected the theses of the Patriot Party and recommended that the moderate reform begun in 1828 be continued. Armed with this report, the imperial government felt justified in imposing its views on the radicals in Lower Canada. As a follow-up, the British Parliament adopted the Russell resolutions, which placed an estoppel against the demands from the Lower Canada House of Assembly. The Russell resolutions also authorized the colonial government to do without the consent of the Assembly in the use of public funds, upheld the requirement for a civil list (to cover administrative expenses), confirmed the privileges of the British American Land Company, and raised the threat of unifying the two Canadas if they continued to get in each other's way. By forcing Papineau and his followers to choose between submission and revolt, these resolutions only served to increase the momentum for rebellion. Discontent peaked in Lower Canada in the spring of 1837. Despite the repeated requests of the Patriot Party, London still refused to reconstitute the Legislative Council as an elected body or to make the Executive Council answerable to the House of Assembly. Protest meetings, soon to be prohibited by Governor Gosford, were held everywhere. Rebellion finally broke out in the fall. "Patriots," often poorly organized, took up arms against the English army at St. Denis, St. Charles and St. Eustache. The crackdown was swift: villages were burned, members of the public were attacked, and women and children were put out of their homes just as winter was setting in. Several Patriots who had taken refuge in the United States were eager to take up the struggle again. 
In February 1838, under the direction of Robert Nelson, they proclaimed the Republic of Lower Canada and invited American volunteers to join them. The American president did not co-operate, however, and threatened to imprison anyone who compromised his government's neutrality. In November, the Patriots attacked English troops at Lacolle and Odeltown, but the operation was a fiasco. The second crackdown was worse than the first. More villages were pillaged and burned. Almost 1000 people were arrested, twice as many as in 1837. Of these, 108 were put on trial, about 60 were deported, and 12 were hanged in the Pied-du-Courant prison in Montreal. The Catholic clergy was not inactive during this time of revolt. Monsignor Lartigue, the Bishop of Montreal, spoke out against the insubordination of the Patriots and warned the faithful that those who promoted revolt and disobedience might well find themselves refused the sacraments. The Bishop even went so far as to publish a notice that defended the established powers. The Bishop of Québec adopted the same attitude. After the armed uprisings, the administrator John Colborne dissolved the House of Assembly and appointed a special council to administer Lower Canada until 1841. England was becoming worried during this time, for riots were also breaking out in Upper Canada and discontent was again on the rise in its Gulf colonies. It appointed John George Lambton, Lord Durham, a radical Whig, as Governor General and High Commissioner to British North America. Lord Durham arrived in Quebec on May 27, 1838, to conduct an investigation. In 1839, having spent six months on the new continent, Durham presented his report to the English government. It dealt mainly with various tactics he felt would restore peace: ensure the existence of a majority of loyal English people, anglicize the French Canadians who, in his opinion, had no chance of survival in an Anglo-Saxon America, and establish ministerial responsibility. To Durham, harmony could be re-established only by strengthening the influence of the people. The imperial government immediately rejected ministerial responsibility, as it entailed broadening colonial freedoms. To put the Canadians in a state of political subordination, London introduced the Act of Union in 1840. Its purpose was to reunite the two Canadas under a single parliament and make English the only official language. Henceforth, according to the will of London, one was to speak only of United Canada. The English government felt that a colonial assembly dominated by British elements would guarantee that ties to imperialism would be strengthened and that British investors would be reassured. The Act of Union was by and large based on the ideas about assimilation put forward by Durham, who saw in the conflict a confrontation of two races and, in Francophone society, an atrophied cultural group that hobbled Canada's expansion. The Act of Union was passed by the Parliament in London on July 23, 1840, and came into force on February 10, 1841. It introduced numerous reforms. The two Canadas were to become one United Canada, with one government. This United Canada was to keep the institutions established by the Constitutional Act of 1791: a governor who was answerable to the British Parliament, an executive council appointed by the Crown, a legislative council of 24 members, appointed for life, and a house of assembly of 84 members, half to be elected by Canada East and the other half by Canada West.
Officially, Canada East and Canada West simply replaced the names Lower Canada and Upper Canada. In practice, however, the former names did not die quickly. The implementation of political union, which unified the economy as well, greatly pleased the Canadian business class. However, it only made the French Canadians angry, for several clauses of the constitution humiliated them. For example, Canada East, which had a larger population than Canada West, was allotted the same number of elected representatives -- a breach of the principle of democracy. The civil list was raised to 75,000 pounds per year, and elected members no longer had any control over it. Also, section 41 of the Act of Union decreed that English was to be the only official language of the country. This was the first time that England had prohibited French in a constitutional text. The objective pursued by England in the Act of Union was clear: hammer together a British-style parliamentary system with an artificial majority, while waiting for immigration to run its course and give the British a real majority. Such a system would in all likelihood adopt policies favourable to British colonization. So it was that French Canadians began their existence as a minority. The measures of 1841 created deep wounds. In the Québec City region, petitions called for the abolition of the Act. Some people suggested withdrawal from political life. The reaction was so intense that, in 1848, London had to recognize and accept the use of French. At that time, the great French-Canadian champion was Sir Louis-Hippolyte La Fontaine. During the rebellion, he had developed his political philosophy around the notion that political parties must be based on "opinions" instead of "origins." He felt that social peace and prosperity would happen of their own accord once racial distinctions were rooted out of public administration and institutions were given freedom. As a pragmatic politician who strongly denounced the discriminatory elements of the Union regime, he invited fellow French Canadians to get involved in political life. Without being aware of it, therefore, Sir Louis-Hippolyte La Fontaine was urging his compatriots to take the road that was to lead to Confederation.
http://www.collectionscanada.gc.ca/confederation/023001-2200-e.html
4
BACKGROUND: Researchers at the Scripps Institution of Oceanography have discovered bacteria in mud from the Bahamas with the potential to help fight cancer. Now that the bacteria's genome has been successfully sequenced, that information is being used by a pharmaceutical company to treat bone marrow cancer patients. ABOUT THE BACTERIA: The bacterium known as Salinispora tropica is related to the Streptomyces genus, a land-based group of bacteria considered to be among the foremost antibiotic-producing organisms. Salinispora was first discovered in 1991 in shallow ocean sediment off the Bahamas, and it took several years to sequence its genome successfully, revealing that this mud-dwelling bacterium produces natural antibiotics and anti-cancer products. Researchers found that 10% of the bacterium's genome is dedicated to producing molecules for antibiotics and anti-cancer agents, compared to only 6% to 8% of most organisms' genomes. The decoding opens the door to a broad range of possibilities for isolating and adapting potent molecules the marine organism naturally employs for chemical defense, scavenging for nutrients, and communication in its ocean environment. One compound, salinosporamide A, is currently in human clinical trials for treating multiple myeloma, a cancer of plasma cells in bone marrow, as well as for treating solid tumors. SEQUENCING ABCS: Genome sequencing is figuring out the order of DNA nucleotides, or bases, in a genome: the building blocks that make up an organism's DNA. The entire genome can't be sequenced at once because DNA sequencing methods can only handle short stretches of DNA at a time. So scientists break the DNA into small pieces, sequence those, and then reassemble the pieces into the proper order to sequence the entire genome. There are two ways of doing this. The 'clone-by-clone' approach involves breaking the genome into chunks, called clones, each about 150,000 base pairs long, then using genome mapping techniques to figure out where each belongs in the genome. Next, scientists cut the clones into smaller, overlapping pieces of about 500 base pairs each, sequence those pieces, and use the overlaps to reconstruct the sequence of the entire clone. An alternative strategy, called the 'whole-genome shotgun method', involves breaking the genome into small pieces, sequencing them, and then reassembling the pieces into the full genome. The clone-by-clone approach is more reliable, but slow and time-consuming. The shotgun method is faster, but it can be extremely difficult to accurately put together so many tiny pieces of sequence all at once. Neither of these approaches proved sufficient to completely solve the Salinispora tropica genomic puzzle, however. Instead, information about the natural chemistry of the organism helped close the sequencing gap. The American Geophysical Union contributed to the information contained in the TV portion of this report.
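To make the shotgun idea concrete, here is a toy sketch in Python of overlap-based reassembly: reads are merged greedily at their longest overlap. Real assemblers must cope with sequencing errors, repeats, and millions of reads, so everything below, including the example fragments, is illustrative only.

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        merged = reads[i] + reads[j][n:]  # glue reads at their overlap
        reads = [r for k, r in enumerate(reads) if k not in (i, j)]
        reads.append(merged)
    return reads[0]

# Made-up fragments of a made-up sequence, as a shotgun run might yield:
fragments = ["ATGGCGTGCA", "GCGTGCAATT", "CAATTGCCGA"]
print(greedy_assemble(fragments))  # -> ATGGCGTGCAATTGCCGA
```

Run on the three made-up fragments, the function recovers a single 18-base sequence, which is exactly the "use the overlaps to reconstruct the sequence" step described above.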
http://www.aip.org/dbis/stories/2007/17127.html
4.21875
The possibility of life on Mars has held human interest for hundreds of years and has recently become an obsession for NASA. A number of atmospheric probes and surface craft have been sent to Mars to assess the planet’s habitability. The ultimate goal is to send future missions to Mars to directly look for evidence of life, both past and contemporary. In the midst of all the excitement and anticipation, it’s easy to forget that there have already been missions to Mars specifically designed to detect life. Over thirty years ago, in 1976, NASA sent the Viking 1 and 2 spacecraft to Mars. Two landers made it to the surface. These robotic systems harbored four life-detection experiments.
The Viking Biology Experiments
The Gas Exchange Experiment: Martian soil samples were incubated with a nutrient broth. A gas chromatograph monitored headspace samples for the generation of gases like oxygen, carbon dioxide, or methane. Gas production in the soil would indicate biological activity.
The Labeled Release Experiment: Martian soil samples were incubated with a nutrient broth. Some of the nutrients in the cocktail were labeled with carbon-14. If organisms were present in the soil, they would consume the labeled nutrients and generate radioactive gas. Detection of radioactivity in the headspace would indicate the presence of life.
The Pyrolytic Release Experiment: Martian soil samples were exposed to light, water, carbon-14-labeled carbon dioxide and carbon-14-labeled carbon monoxide. If photosynthetic life was present, the radioactive gases would become incorporated into the soil.
The Gas Chromatograph-Mass Spectrometer Experiment: This instrument was designed to detect and identify organic compounds (both from life and meteoritic infall) in the Martian soil. If life were present, organic materials would be abundant in the Martian soil.
Results of the Viking Biology Experiments
The Gas Exchange Experiment: Gas evolution from the soil was observed.
The Labeled Release Experiment: Radioactive gas was produced after the soil was incubated with radiolabeled nutrient broth.
The Pyrolytic Release Experiment: The results of this experiment were initially interpreted as evidence for extremely low levels of microbes in the soil. These results were later reinterpreted as a null result.
The Gas Chromatograph-Mass Spectrometer Experiment: No organic compounds were detected in the soil, not even at a trace level.
Interpretation of the Viking Biology Experiments
Even though the Gas Exchange and Labeled Release experiments gave positive results, the failure to detect organics in the soil was troubling. It is difficult to conceive of life on the Martian surface without organic compounds in the soil. It appears that a highly oxidizing chemical species in the Martian soil was likely responsible for the release of gases after incubation with nutrients, many of them organic compounds. The oxidizing compounds in the Martian soil would rapidly break down any organic material, generating gases like oxygen and carbon dioxide as the by-products. The highly oxidizing nature of the Martian soil and the intense exposure of the Martian surface to UV radiation explain why no organics exist in the Martian soil, not even organic materials from meteorite infall. UV radiation, like chemical oxidants, readily destroys organic materials. The Viking landers looked for life on Mars and failed to detect it.
Revisiting the Interpretation
The interpretation of the Viking results is still discussed by astrobiologists. 
In fact, during the fall of 2006 a team of scientists published a paper questioning the design of the Gas Chromatograph-Mass Spectrometer Experiment. They argued that the experimental setup was fundamentally unable to detect low levels of organics in the Martian soil. They also maintained that if the organic materials were too refractory, the sample preparation procedure for the Gas Chromatograph-Mass Spectrometer Experiment would fail to release them from the soil, leaving the organics unavailable for detection and analysis. They also raised concerns about oxidation, and hence destruction, of organics during sample preparation. In short, these astrobiologists claimed that it was premature to discount the null results for the Gas Chromatograph-Mass Spectrometer experiments aboard the Viking landers. If organics are indeed present on the Martian surface, it means that the results of the Gas Exchange and Labeled Release experiments very well may be taken as an indication of life on Mars and, at minimum, could motivate future missions to Mars to look for life.
Not So Fast
A recent paper, however, discounts the criticisms leveled against the Gas Chromatograph-Mass Spectrometer Experiment. Klaus Biemann, a world-renowned expert in mass spectrometry, demonstrated that the detection limit of the Viking Gas Chromatograph-Mass Spectrometer Experiment was 1-2 ppb (parts per billion). In fact, when on the surface of Mars, the Gas Chromatograph-Mass Spectrometer successfully detected and identified trace levels of organic contaminants introduced into the system while on Earth. Biemann also showed that the sample preparation procedure would not destroy organics and could readily detect refractory organic materials. The bottom line: the null results of the Gas Chromatograph-Mass Spectrometer are valid. There are no organics, and hence no life, on the surface of Mars. For a more detailed discussion of life on Mars, see the book I wrote with Hugh Ross, Origins of Life: Biblical and Evolutionary Models Face Off.
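As a rough sense of scale for that 1-2 ppb figure, the sketch below computes the smallest organic mass such an instrument could register in a soil sample. The 1-gram sample mass is an assumed round number for illustration, not the actual Viking protocol.

```python
# Back-of-the-envelope meaning of a 1-2 ppb detection limit.
sample_mass_g = 1.0        # hypothetical soil sample mass (assumption)
detection_limit_ppb = 1.0  # lower end of the quoted 1-2 ppb

# 1 ppb = 1 part in 1e9 by mass
min_detectable_g = sample_mass_g * detection_limit_ppb / 1e9
print(f"Smallest detectable organic mass: {min_detectable_g:.1e} g "
      f"({min_detectable_g * 1e9:.1f} ng)")
# -> 1.0e-09 g (1.0 ng): even a nanogram of organics per gram of soil
#    would have registered, which is why the null result carries weight.
```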
http://www.reasons.org/articles/viking-invasion-of-mars-thwarted
4.15625
Microwaves are radio waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm). Apparatus and techniques may be described qualitatively as "microwave" when the wavelengths of signals are roughly the same as the dimensions of the equipment, so that lumped-element circuit theory is inaccurate. As a consequence, practical microwave technique tends to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Instead, distributed circuit elements and transmission-line theory are more useful methods for design and analysis. Open-wire and coaxial transmission lines give way to waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant lines. Effects of reflection, polarization, scattering, diffraction, and atmospheric absorption usually associated with visible light are of practical significance in the study of microwave propagation. The same equations of electromagnetic theory apply at all frequencies. The prefix "micro-" in "microwave" is not meant to suggest a wavelength in the micrometer range. It indicates that microwaves are "small" compared to waves used in typical radio broadcasting, in that they have shorter wavelengths. The boundaries between far infrared light, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study. Microwave technology is extensively used for point-to-point telecommunications (i.e., non-broadcast uses). Microwaves are especially suitable for this use since they are more easily focused into narrow beams than longer radio waves; their comparatively higher frequencies allow broad bandwidth and high data rates; and they permit smaller antennas, because antenna size is inversely proportional to transmitted frequency (the higher the frequency, the smaller the antenna). Microwaves are the principal means by which data, TV, and telephone communications are transmitted between ground stations and to and from satellites. Microwaves are also employed in microwave ovens and in radar technology. Beginning at about 20 GHz, microwave transmission through air decreases, owing to absorption by water at lower frequencies and by oxygen at higher frequencies. A spectral band structure causes fluctuations in this behavior. Above 300 GHz, the absorption of microwave electromagnetic radiation by Earth's atmosphere is so great that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges. 
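The band edges quoted above follow directly from the relation λ = c/f. A quick check in Python:

```python
# Verify the stated wavelength/frequency boundaries via lambda = c / f.
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    return C / freq_hz * 100.0  # metres -> centimetres

for label, f in [("300 MHz", 300e6), ("1 GHz", 1e9),
                 ("3 GHz", 3e9), ("30 GHz", 30e9),
                 ("100 GHz", 100e9), ("300 GHz", 300e9)]:
    print(f"{label:>8} -> {wavelength_cm(f):8.2f} cm")
# 300 MHz -> ~100 cm (1 m); 1 GHz -> ~30 cm; 3-30 GHz -> 10-1 cm (SHF);
# 100 GHz -> ~0.3 cm (3 mm); 300 GHz -> ~0.1 cm (1 mm)
```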
|Name|Wavelength|Frequency (Hz)|Photon Energy (eV)|
|Gamma ray|less than 0.02 nm|more than 15 EHz|more than 62.1 keV|
|X-Ray|0.01 nm – 10 nm|30 EHz – 30 PHz|124 keV – 124 eV|
|Ultraviolet|10 nm – 400 nm|30 PHz – 750 THz|124 eV – 3 eV|
|Visible|390 nm – 750 nm|770 THz – 400 THz|3.2 eV – 1.7 eV|
|Infrared|750 nm – 1 mm|400 THz – 300 GHz|1.7 eV – 1.24 meV|
|Microwave|1 mm – 1 meter|300 GHz – 300 MHz|1.24 meV – 1.24 µeV|
|Radio|1 mm – 100,000 km|300 GHz – 3 Hz|1.24 meV – 12.4 feV|
Microwave sources
High-power microwave sources use specialized vacuum tubes to generate microwaves. These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density-modulated mode, rather than the current-modulated mode. This means that they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons. Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes. Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. A maser is a device similar to a laser, except that it amplifies the lower-frequency, longer-wavelength microwave and radio-frequency emissions rather than visible light. Before the advent of fiber-optic transmission, most long-distance telephone calls were carried via networks of microwave radio relay links run by carriers such as AT&T Long Lines. Starting in the early 1950s, frequency-division multiplexing was used to send up to 5,400 telephone channels on each microwave radio channel, with as many as ten radio channels combined into one antenna for the hop to the next site, up to 70 km away. Wireless protocols, such as Bluetooth and the IEEE 802.11 wireless LAN specifications, also use microwaves in the 2.4 GHz ISM band, although 802.11a uses ISM band and U-NII frequencies in the 5 GHz range. Licensed long-range (up to about 25 km) Wireless Internet Access services have been used for almost a decade in many countries in the 3.5–4.0 GHz range. The FCC recently carved out spectrum for carriers that wish to offer services in this range in the U.S., with emphasis on 3.65 GHz. Dozens of service providers across the country are securing or have already received licenses from the FCC to operate in this band. The WiMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity. Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access), are based on standards such as IEEE 802.16, designed to operate between 2 and 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges. Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency. 
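The photon-energy column in the spectrum table above is just Planck's relation E = hf expressed in electronvolts. A short check of the microwave row:

```python
# Verify the photon energies quoted for the microwave band edges.
H_EV = 4.135667696e-15  # Planck constant in eV*s

def photon_energy_ev(freq_hz: float) -> float:
    return H_EV * freq_hz

for label, f in [("300 MHz (microwave low edge)", 300e6),
                 ("300 GHz (microwave high edge)", 300e9)]:
    print(f"{label}: {photon_energy_ev(f):.3e} eV")
# 300 MHz -> ~1.24e-06 eV (1.24 ueV); 300 GHz -> ~1.24e-03 eV (1.24 meV),
# matching the 1.24 ueV - 1.24 meV range in the table.
```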
Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.9 GHz in the Americas and 1.8 GHz elsewhere. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS. Microwave radio is used in broadcasting and telecommunication transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz, while many GHz can be used above 300 MHz. Typically, microwaves are used in television news to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL). Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or Ku band for direct-broadcast satellite. Military communications run primarily over X- or Ku-band links, with Ka band being used for Milstar. Radar uses microwave radiation to detect the range, speed, and other characteristics of remote objects. Development of radar was accelerated during World War II due to its great military utility. Now radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement.
Radio astronomy
Most radio astronomy uses microwaves. Usually the naturally occurring microwave radiation is observed, but active radar experiments have also been done with objects in the solar system, such as determining the distance to the Moon or mapping the invisible surface of Venus through cloud cover. Global Navigation Satellite Systems (GNSS), including the Chinese Beidou, the American Global Positioning System (GPS) and the Russian GLONASS, broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz.
Heating and power application
A microwave oven passes (non-ionizing) microwave radiation (at a frequency near 2.45 GHz) through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following development of inexpensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven. Microwave heating is used in industrial processes for drying and curing products. Microwave frequencies typically ranging from 110 to 140 GHz are used in stellarators and more notably in tokamak experimental fusion reactors to help heat the fuel into a plasma state. The upcoming ITER thermonuclear reactor is expected to use frequencies ranging from 110 to 170 GHz and will employ Electron Cyclotron Resonance Heating (ECRH). Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. 
NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves. Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of 130 °F (54 °C) at a depth of 1/64th of an inch (0.4 mm). The United States Air Force and Marines are currently using this type of active denial system. Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz), in conjunction with magnetic fields of about 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry, as in microwave-enhanced electrochemistry.
Microwave frequency bands
The microwave spectrum is usually defined as electromagnetic energy ranging from approximately 1 GHz to 100 GHz in frequency, but older usage includes lower frequencies. Most common applications are within the 1 to 40 GHz range. One set of microwave frequency band designations, by the Radio Society of Great Britain (RSGB), is tabulated below:
|Letter Designation|Frequency range|Wavelength range|Typical uses|
|L band|1 to 2 GHz|15 cm to 30 cm|military telemetry, GPS, mobile phones (GSM), amateur radio|
|S band|2 to 4 GHz|7.5 cm to 15 cm|weather radar, surface ship radar, and some communications satellites (microwave ovens, microwave devices/communications, radio astronomy, mobile phones, wireless LAN, Bluetooth, ZigBee, GPS, amateur radio)|
|C band|4 to 8 GHz|3.75 cm to 7.5 cm|long-distance radio telecommunications|
|X band|8 to 12 GHz|25 mm to 37.5 mm|satellite communications, radar, terrestrial broadband, space communications, amateur radio|
|Ku band|12 to 18 GHz|16.7 mm to 25 mm|satellite communications|
|K band|18 to 26.5 GHz|11.3 mm to 16.7 mm|radar, satellite communications, astronomical observations|
|Ka band|26.5 to 40 GHz|7.5 mm to 11.3 mm|satellite communications|
|Q band|33 to 50 GHz|6.0 mm to 9.0 mm|satellite communications, terrestrial microwave communications, radio astronomy, automotive radar|
|U band|40 to 60 GHz|5.0 mm to 7.5 mm||
|V band|50 to 75 GHz|4.0 mm to 6.0 mm|millimeter wave radar research and other kinds of scientific research|
|E band|60 to 90 GHz|3.3 mm to 5 mm|EHF transmissions|
|W band|75 to 110 GHz|2.7 mm to 4.0 mm|satellite communications, millimeter-wave radar research, military radar targeting and tracking applications, and some non-military applications|
|F band|90 to 140 GHz|2.1 mm to 3.3 mm|EHF transmissions: radio astronomy, microwave devices/communications, wireless LAN, most modern radars, communications satellites, satellite television broadcasting, DBS, amateur radio|
|D band|110 to 170 GHz|1.8 mm to 2.7 mm|EHF transmissions: radio astronomy, high-frequency microwave radio relay, microwave remote sensing, amateur radio, directed-energy weapon, millimeter wave scanner|
When radars were first developed at K band during World War II, it was not realized that there was a nearby absorption band (due to water vapor and oxygen in the atmosphere). 
To avoid this problem, the original K band was split into a lower band, Ku, and an upper band, Ka.
Microwave frequency measurement
Microwave frequency can be measured by either electronic or mechanical techniques. Frequency counters or high-frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low-frequency generator, a harmonic generator and a mixer. Accuracy of the measurement is limited by the accuracy and stability of the reference source. Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency. In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires; the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot, so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. Precision of this method is limited by the determination of the nodal locations.
Health effects
Microwaves do not contain sufficient energy to chemically change substances by ionization, and so are an example of nonionizing radiation. The word "radiation" refers to energy radiating from a source and not to radioactivity. It has not been shown conclusively that microwaves (or other nonionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect. This is separate from the risks associated with very-high-intensity exposure, which can cause heating and burns like any heat source, and is not a unique property of microwaves specifically. During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. This microwave auditory effect was thought to be caused by the microwaves inducing an electric current in the hearing centers of the brain. Research by NASA in the 1970s has shown this to be caused by thermal expansion in parts of the inner ear. When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye (in the same way that heat turns egg whites white and opaque). The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content.
History and research
The existence of radio waves was predicted by James Clerk Maxwell in 1864 from his equations. 
In 1888, Heinrich Hertz was the first to demonstrate the existence of radio waves by building a spark-gap radio transmitter that produced 450 MHz microwaves, in the UHF region. The equipment he used was primitive, including a horse trough, a wrought-iron point spark gap, and Leyden jars. He also built the first parabolic antenna, using a zinc gutter sheet. In 1894 Indian radio pioneer Jagdish Chandra Bose publicly demonstrated radio control of a bell using millimeter wavelengths, and conducted research into the propagation of microwaves. Perhaps the first documented, formal use of the term microwave occurred in 1931:
- "When trials with wavelengths as low as 18 cm were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1
In 1943, the Hungarian engineer Zoltán Bay sent ultra-short radio waves to the Moon; the reflected signal worked as radar and could be used to measure distance as well as to study the Moon. Perhaps the first use of the word microwave in an astronomical context occurred in 1946 in an article "Microwave Radiation from the Sun and Moon" by Robert Dicke and Robert Beringer. The term also appeared in the New York Times in 1951. In the history of electromagnetic theory, significant work specifically in the area of microwaves and their applications was carried out by researchers including:
|Work carried out by|Area of work|
|Barkhausen and Kurz|Positive grid oscillators|
|Hull|Smooth-bore magnetron|
|Varian Brothers|Velocity-modulated electron beam → klystron tube|
|Randall and Boot|Cavity magnetron|
See also
- Block upconverter (BUC)
- Cosmic microwave background radiation
- Electron cyclotron resonance
- International Microwave Power Institute
- Low-noise block converter (LNB)
- Microwave transmission
- Microwave chemistry
- Microwave auditory effect
- Microwave cavity
- Microwave radio relay
- Orthomode transducer (OMT)
- Plasma-enhanced chemical vapour deposition
- Rain fade
- RF switch matrix
- Thing (listening device)
- Tropospheric scatter
http://www.digplanet.com/wiki/Microwave
4.09375
Activity 6: Seismology
SHAKING AND QUAKING, MONITORING EARTHQUAKES
Geologists monitor earthquakes to better understand how the crust is affected by movements and to determine how plates are moving. The following activity explains this study, seismology, and how to monitor crustal movement by having students create their own seismograph.
Instructional Method: Activity
Goal: Explain how scientists monitor earthquakes.
Objectives: Students will be able to:
Activity time: 30 minutes
Seismic waves are energy waves produced when friction along a fault causes vibrations in the crust. As energy waves pass through the crust, they can be monitored by sensitive devices called seismographs. When evaluating collected data, various scales are used to describe the magnitude of fault movement, inferred from the intensity of the waves. The magnitude of seismic waves is evaluated according to the Richter scale, a logarithmic measure of the size of the earthquake: each whole-number step corresponds to a tenfold increase in recorded wave amplitude. Seismographs record zigzag traces showing ground movement beneath the instrument. Sensitive seismographs, which greatly magnify these ground motions, can detect strong earthquakes from sources anywhere in the world. Monitoring earthquakes all over the world allows scientists to determine where each earthquake originated. In addition to recording ground motion, seismographs also record the date and time the seismograph felt the waves. Using multiple recordings, scientists are able to calculate where the earthquake originated. The origin of an earthquake is called its hypocenter. This is the point beneath Earth's surface where fault rupture (movement) occurred. The point on the surface that is directly above the hypocenter is called the epicenter. The following activity explains how to create your own seismograph. It may not record small earthquakes, but it can be used to show how it is done in reality. To learn more about seismic waves, look at "Surfing Rock Waves".
How do geologists know earthquakes are happening even if they cannot feel them? Why is it important to monitor earthquakes? Can anyone predict earthquakes? Have you ever felt an earthquake? Did you know how far away it was? How did geologists determine its location, epicenter and hypocenter?
Included National Parks and other sites: Aniakchak National Monument and Preserve
Utah Science Core: 5th Grade Standard 2 Objective 1,2,3
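The "multiple recordings" step can be sketched in a few lines of Python. P waves outrun S waves, so the S-minus-P arrival lag at each station implies a distance; circles drawn with those radii around three stations intersect at the epicenter. The wave speeds below are typical crustal values assumed purely for illustration:

```python
V_P = 6.0   # P-wave speed, km/s (assumed typical crustal value)
V_S = 3.5   # S-wave speed, km/s (assumed typical crustal value)

def epicenter_distance_km(s_minus_p_lag_s: float) -> float:
    """Distance implied by the S-P arrival-time lag at one station."""
    # Both waves cover the same distance d, so d/V_S - d/V_P = lag
    return s_minus_p_lag_s * (V_P * V_S) / (V_P - V_S)

# Hypothetical lags recorded at three stations:
for station, lag in [("Station A", 10.0), ("Station B", 25.0),
                     ("Station C", 40.0)]:
    print(f"{station}: S-P lag {lag:4.1f} s -> "
          f"~{epicenter_distance_km(lag):6.1f} km away")
# Drawing a circle of each radius around its station, the single point
# where all three circles intersect is the epicenter.
```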
http://www.nps.gov/brca/forteachers/plateact6.htm
4.4375
You may wish to view our Digital Story about the Puritans by Michael Ray as an introduction to this section. No group has played a more pivotal role in shaping American values than the New England Puritans. The seventeenth-century Puritans contributed to our country's sense of mission, its work ethic, and its moral sensibility. Today, eight million Americans can trace their ancestry to the fifteen to twenty thousand Puritans who migrated to New England between 1629 and 1640. Few people, however, have been as frequently subjected to caricature and ridicule. The journalist H.L. Mencken defined Puritanism as "the haunting fear that someone, somewhere, might be happy." And particularly during the 1920s, the Puritans came to symbolize every cultural characteristic that "modern" Americans despised. The Puritans were often dismissed as drably-clothed religious zealots who were hostile to the arts and were eager to impose their rigid "Puritanical" morality on the world around them. This stereotypical view is almost wholly incorrect. Contrary to much popular thinking, the Puritans were not sexual prudes. Although they strongly condemned sexual relations outside of marriage--levying fines or even whipping those who fornicated, committed adultery or sodomy, or bore children outside of wedlock--they attached a high value to the marital tie. Nor did Puritans abstain from alcohol; even though they objected to drunkenness, they did not regard alcohol as sinful in itself. They were not opposed to artistic beauty; although they were suspicious of the theater and the visual arts, the Puritans valued poetry. Indeed, John Milton (1608-1674), one of England's greatest poets, was a Puritan. Even the association of the Puritans with drab colors is wrong. They especially liked the colors red and blue.
http://www.digitalhistory.uh.edu/learning_history/puritans/puritans_menu.cfm
4.09375
Partners PrEP Study
Safe and effective approaches to preventing new HIV infections are urgently needed. An estimated 7,400 people a day are being infected with HIV, according to UNAIDS. More than 60 million people have been infected with HIV since the pandemic began. AIDS resulting from HIV infection is the leading cause of death in sub-Saharan Africa, and the fourth leading cause of death globally. Traditional prevention methods, including abstinence, being faithful to one sexual partner, and using condoms (the ABC's of HIV prevention), are well known; however, not everyone is able to use these methods all of the time. Women are especially vulnerable. The majority of HIV infections in Africa occur among women. For many women, the current prevention methods are inadequate, since they often do not have the social or economic power to refuse sex or negotiate condom use. A vaccine against HIV is most likely more than 10 years away. Thus, new prevention strategies must be found. There is scientific evidence that antiretroviral (anti-HIV) medications may be able to play an important role in reducing HIV risk. Two antiretroviral medications - tenofovir disoproxil fumarate ("tenofovir," also known by its brand name, Viread) and combination tenofovir/emtricitabine (also known by its brand name, Truvada) - taken as daily preventive therapy, might substantially reduce the risk of HIV infection. This approach is known as pre-exposure prophylaxis, or PrEP:
- Pre - before
- Exposure - coming into contact with HIV
- Prophylaxis - taking medication to prevent becoming HIV infected
Several lines of evidence suggest that PrEP might work:
- The likelihood of transmitting HIV from mother to child can be reduced by half or more with anti-HIV medications taken during pregnancy, delivery, and after birth.
- Anti-HIV medications can decrease the risk of an adult getting HIV after an accidental exposure to HIV - for example, a health care worker accidentally stuck by a needle.
- Animal studies have shown that PrEP, using tenofovir or the combination emtricitabine/tenofovir, substantially protects monkeys exposed repeatedly to an HIV-like virus.
Although these lines of evidence are very encouraging, we do not know for sure that PrEP works to prevent HIV infection in humans. That is why PrEP needs to be studied. ICRC is working with collaborators in Kenya and Uganda to study HIV discordant couples - that is, couples in which one partner has HIV and the other does not. The partner who does not have HIV will take the anti-HIV medication daily. The objective of the study is to see whether having this medication in the bloodstream prevents the HIV-uninfected partner from getting HIV. The study is currently enrolling participants. A total of 4,700 HIV discordant couples will be enrolled at 9 clinical sites in Uganda and Kenya. The study is a randomized, double-blind, placebo-controlled clinical study. The participants who are not HIV infected will be randomly divided into groups by a computer. All participants will take medication every day. Those from one group will take tenofovir, a second group will take the combination of emtricitabine and tenofovir, and the third group will take a placebo. For more information:
- External Q&A
- Key Messages
- Press Release (UW, July 13, 2011)
- Press Release (UW, July 11, 2012)
Study Photos (Right-click the link and choose "Save Link As..." 
to download the photo) - Photo 1 (Title: FTC/TDF pill; Credit: Partners PrEP Study Team) - Photo 2 (Title: Partners PrEP Study site in Thika, Kenya; Credit: Jared Baeten, University of Washington) - Baeten JM, Donnell D, Ndase P, Mugo NR, Campbell JD, Wangisi J, Tappero JW, Bukusi EA, Cohen CR, Katabira E, Ronald A, Tumwesigye E, Were E, Fife KH, Kiarie J, Farquhar C, John-Stewart G, Kakia A, Odoyo J, Mucunguzi A, Nakku-Joloba E, Twesigye R, Ngure K, Apaka C, Tamooh H, Gabona F, Mujugira A, Panteleeff D, Thomas KK, Kidoguchi L, Krows M, Revall J, Morrison S, Haugen H, Emmanuel-Ogier M, Ondrejcek L, Coombs RW, Frenkel L, Hendrix C, Bumpus NN, Bangsberg D, Haberer JE, Stevens WS, Lingappa JR, Celum C for the Partners PrEP Study Team. Antiretroviral prophylaxis for HIV-1 prevention among heterosexual men and women. N Engl J Med. Published online 11 July 2012. - Mujugira A, Baeten JM, Donnell D, Ndase P, Mugo NR, Barnes L, Campbell J, Wangisi J, Tappero J, Bukusi E, Cohen CR, Katabira E, Ronald A, Tumwesigye E, Were E, Fife K, Kiare J, Farquhar C, John-Stewart G, Kidoguchi L, Panteleeff D, Krows M, Shah H, Revall J, Morrison S, Ondrejcek L, Ingram C, Coombs RW, Lingappa JR, Becker S, Ridzon R, Celum C for the Partners PrEP Study Team. Characteristics of HIV-1 serodiscordant couples enrolled in a clinical trial of antiretroviral pre-exposure prophylaxis for HIV-1 prevention: The Partners PrEP Study. PLoS ONE. 2011;6(10): e25828. PMCID: 3187805.
http://depts.washington.edu/uwicrc/research/studies/PrEP.html
4.09375
Frederick Douglass was a man who continually reinvented himself and would, in time, create the modern American civil rights movement and reshape American politics. The son of a slave woman and an unknown white man, “Frederick Augustus Washington Bailey” was born in February of 1818 on Maryland’s eastern shore. He spent his early years with his grandparents and with an aunt, seeing his mother only four or five times before her death when he was seven. While growing up, he witnessed the degradations of slavery, seeing firsthand brutal whippings and spending much time cold and hungry. At the age of eight he was sent to Baltimore to live with a ship carpenter named Hugh Auld. It was there he learned to read and first heard the words abolition and abolitionists. “Going to live at Baltimore,” Douglass would later recall, “laid the foundation, and opened the gateway, to all my subsequent prosperity.” Douglass enjoyed seven relatively comfortable years in Baltimore before being sent back to the country, where he was hired out to a farm run by a notoriously brutal “slavebreaker” named Edward Covey. And the treatment he received was indeed brutal. Whipped daily and barely fed, Douglass was “broken in body, soul, and spirit.” These events were to propel him to become an activist against slavery. On January 1, 1836, he resolved that he would be free by the end of the year. He planned an escape. But early in April he was jailed after his plan was discovered. Two years later, while living in Baltimore and working at a shipyard, Douglass would finally realize his dream: he fled the city on September 3, 1838. Traveling by train, then steamboat, then train, he arrived in New York City the following day. Several weeks later he had settled in New Bedford, Massachusetts, living with his newlywed bride (whom he met in Baltimore and married in New York) under his new name, Frederick Douglass. Douglass continued to educate himself and was an avid reader. In New Bedford, he attended Abolitionists’ meetings and subscribed to William Lloyd Garrison’s weekly journal, the Liberator. After meeting Garrison in 1841, Douglass was mentioned in the Liberator and a few days later gave a speech at the Massachusetts Anti-Slavery Society’s annual convention in Nantucket. It was reported that, “Flinty hearts were pierced, and cold ones melted by his eloquence.” Douglass became a lecturer for the Society for three years and his career as a speaker was launched. Douglass was also an author and publisher. In 1845, despite fears that the information might endanger his freedom, he published his autobiography, Narrative of the Life of Frederick Douglass, an American Slave, Written By Himself. Three years later, after a speaking tour of England, Ireland, and Scotland, Douglass published the first issue of the North Star, a four-page weekly, out of Rochester, New York. During the Civil War, he conferred with Abraham Lincoln and helped the Union Army recruit northern blacks to fight in the conflict. Later he would go on to serve as U.S. minister to Haiti. During his long life, he fought for the rights not only of African Americans, but of women and other oppressed minorities. Through his writing, speaking and political activities, he helped establish the modern American civil rights movement. He had an enduring vision of America achieving justice and equal rights for all its citizens. 
But first and foremost, he had a continually evolving vision of himself as someone who, despite his early years as a slave, deserved the freedom, dignity and respect he fought so diligently to obtain for others.
http://www.tutufoundationusa.org/2013/02/frederick-douglass-father-of-americas-civil-rights-movement/
4.1875
Out and About in the Ocean Community
Preparing to find out: Sample activities
Collect/download photographs of the ocean and/or work and recreational opportunities within it. Students share these and talk about features and activities they recognise. Identify things that are found only in the ocean as opposed to other areas.
• What do we mean by the ocean?
• Why is it important?
• What does it support?
• What plants, animals and other organisms live in the ocean?
• What activities might be undertaken while in and on the waters of the ocean?
• What can be seen in the photographs that cannot be found in other familiar places?
Make charts of student responses and use these to develop focus questions for the unit. With the class, prepare a class chart of things students know about the ocean. Students could draw pictures and explain them while the teacher scribes. Prepare a list of questions students want to investigate. Ask students to offer possible answers to these. Place incomplete statements on cards and place them in a box:
• The ocean is…
• Oceans are more than…
• Oceans produce food and shelter for…
• We depend on oceans for…
• We snorkel and dive in…
• Ocean plants…
• Ocean animals…
• Other types of living things…
• We should protect our oceans…
Students can sit in groups and take turns to select a card. Read these and ask students to discuss the statement in groups. Students report back some of the information gained. (Adapted from Hill, S. Games that work – Cooperative Games and Activities for the Primary School Classroom, Eleanor Curtin, 1992).
http://www.mesa.edu.au/seaweek2010/out_about03.asp
4.15625
A hyperlink is a word, phrase, or image that you can click on to jump to a new document or a new section within the current document. Hyperlinks are found in nearly all Web pages, allowing users to click their way from page to page. Text hyperlinks are often blue and underlined, but don't have to be. When you move the cursor over a hyperlink, whether it is text or an image, the arrow should change to a small hand pointing at the link. When you click it, a new page or place in the current page will open. Hyperlinks, often referred to as just "links," are common in Web pages, but can be found in other hypertext documents. These include certain encyclopedias, glossaries, dictionaries, and other references that use hyperlinks. The links act the same way as they do on the Web, allowing the user to jump from page to page. Basically, hyperlinks allow people to browse information at hyperspeed. Tech Factor: 4/10
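Under the hood, a text hyperlink is simply an HTML anchor tag whose href attribute names the destination. As an illustrative sketch, the short Python program below uses only the standard library to pull the links out of a snippet of HTML; the example page content is made up:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # anchor tags carry hyperlinks
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<p>See the <a href="https://example.com/glossary">glossary</a> '
        'or jump to <a href="#section2">section 2</a> of this page.</p>')

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['https://example.com/glossary', '#section2']
```

Note the two kinds of destination in the output: a full URL jumps to a new document, while the "#section2" fragment jumps to a place within the current page, exactly as described above.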
http://www.techterms.com/print/hyperlink
4.125
Children will discover how animals use vibrations in different ways to make sounds and communicate with each other.
Estimated Time and Age Level
Preparation: 30 minutes to prepare stations
Activity: One 30-minute session
Station Label for each of the six Animal Sound-Off Stations.
2 to 4 pairs of narrow-necked bottles (try different sizes, shapes and materials)
2 to 4 empty, clean cans with plastic lids
2 to 4 pencils
2 to 4 aluminum pie tins
2 to 4 small combs
2 to 4 stiff playing cards
2 to 4 thick 4-inch rubber bands
A variety of other objects that will make sounds by rubbing, shaking, tapping, etc. (rulers, jars enclosing rattling objects, etc.)
Paper and pencils for each team to use for taking notes
Copy the Station Labels, cut them out, and tape one near each Sound-Off Station. Set up all or some of the following six Sound-Off Stations: howler monkey (bottles, pitcher of water); damselfish (pencils, can drums); cicada (pie tins); grasshopper (combs, playing cards); spider (4-inch rubber bands); and an "Invent-Your-Own-Animal-Sound" station with the other objects you've collected. Create as many teams as you have stations. Explain that at each Sound-Off Station, teams will be using different homemade instruments to mimic the way an animal or insect communicates. After practicing at each station, children will try to communicate a message in the "language" of one of the animals represented. Without demonstrating each sound, suggest that youngsters look for a single characteristic that connects these "instruments." (Though they may come up with the basic idea, you may have to supply the key word: that these instruments depend upon vibration--moving back and forth in a rhythm to produce sound.) Challenge each team to identify that trait by moving from station to station, taking notes as they go. After teams have rotated through three stations, ask them to stop and discuss what they've observed thus far. If no team points to the presence of vibrations, give them a clue. Have youngsters close their eyes and place their hands flat on the floor. Then drop a heavy book. How did they know the book had fallen? (They heard it, but they also felt vibrations through the floor. Note: this might not work if your floor rests on concrete.) Now challenge students to give meaning to the sounds and vibrations they have learned to produce by creating three distinct messages using the "Sound-Off" device at the station where they've ended up: a warning of danger, a mating call, and a "Back off or else!" call. (Or: let them try to communicate another message of their own design.) Give them time to experiment and then ask teams to demonstrate their calls to the entire group. Other students (with their backs turned) can try to guess the meaning of each call. Ask teams to demonstrate to the entire group the animal sounds they devised at the "Invent-Your-Own-Animal-Sound" station. Judge according to the originality and accuracy of their efforts. Have the members of the other teams try to guess the animals being imitated and/or the messages being sent. To imitate how human vocal cords vibrate, have youngsters cut a strip of paper about an inch wide, folding it as shown. They should then cut out a notch in the folded end, hold it up to their mouth, and blow. Ask them to experiment with papers of different thickness. This activity has been copied, with permission, from the National Science Foundation server to ours, to allow faster access from our Web site. 
We encourage you to explore the original site.
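The key idea of the activity above, vibration at a steady rate producing a pitch, can also be made concrete in software. The small illustrative Python sketch below synthesizes a pure 440 Hz tone (440 vibrations per second, the musical note A4) into a WAV file using only the standard library; the file name is arbitrary:

```python
import math
import struct
import wave

RATE = 44100      # samples per second
FREQ = 440.0      # vibrations per second (the musical note A4)
SECONDS = 2.0

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(RATE)
    for i in range(int(RATE * SECONDS)):
        # One full back-and-forth of this sine wave = one vibration
        sample = math.sin(2 * math.pi * FREQ * i / RATE)
        w.writeframes(struct.pack("<h", int(sample * 32767)))
```

Doubling FREQ raises the pitch an octave, just as a shorter rubber band or a smaller bottle vibrates faster and sounds higher.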
http://www.reachoutmichigan.org/funexperiments/quick/webswires/calls.html
4.5625
The Sun’s energy output rises and falls in a regular cycle, with peaks every 11 to 12 years. Data since the 1950s show that the difference in output from each peak to valley is about 0.1%, which has little effect on Earth’s temperature. The last solar cycle ended with a minimum in 2008–09, when sunspot activity dropped to its lowest level since the 1910s. The Sun also goes through longer, more irregular periods of greater or lesser activity. These include the Maunder Minimum, when sunspots nearly vanished from 1645 to 1715. Parts of the Northern Hemisphere were significantly cooler during this time, which occurred within a longer period called the Little Ice Age. Other factors were also involved in the cooling, including powerful volcanic eruptions. Over the last several years, the Sun has been building toward the next peak in its 11-year cycle, which typically boosts sunspot activity and space-weather events. However, the peak may have occurred already, judging from monthly plots of sunspot activity published by NOAA. Some data now indicate that the Sun may be entering a longer period of relatively low activity. If this were to extend over several decades, creating what’s known as a “grand minimum,” then the usual peaks in the Sun’s 11-year cycle could weaken or even disappear. The amount of solar energy reaching Earth—the total solar irradiance (TSI)—could drop to the levels seen in a typical 11-year minimum, or perhaps further, and remain there for decades. Because many solar variables have been measured only for a few decades, and because the Sun includes both predictable and naturally chaotic behavior, it is not possible to pin down the likelihood of a grand minimum. Its effect on Earth’s temperature would depend on exactly how much the TSI dropped and for how long. The TSI has only been measured directly for about 30 years, but proxy data from lake sediments, ice cores, tree rings, and other sources suggest that the total solar energy reaching Earth dropped by as little as 0.1% during the Maunder Minimum. So could a lengthy drop in solar output be enough to counteract human-caused climate change? Recent studies at NCAR and elsewhere have estimated that the total global cooling effect to be expected from reduced TSI during a grand minimum such as Maunder might be in the range of 0.1° to 0.3° Celsius (0.18° to 0.54° Fahrenheit). This compares to an expected warming effect of 3.0°C (5.4°F) or more by 2100 due to greenhouse gas emissions. In other words, even a grand solar minimum might only be enough to offset one decade of global warming. Moreover, since greenhouse gases linger in the atmosphere, the impacts of those added gases would continue after the end of any grand minimum. During the solar cycle, the Sun's ultraviolet (UV) output varies much more than TSI does, including sharp sudden increases associated with solar storms. UV variations affect Earth’s upper atmosphere and may also influence weather and climate, particularly by way of the stratosphere, through their effects on ozone and related processes. Could a weaker Sun avert global warming? (NCAR & UCAR Currents) Learn more about the Sun (NCAR)
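The "one decade" comparison is simple arithmetic, sketched below. The rounded figures come from the text; the 90-year horizon standing in for "by 2100" is an assumption:

```python
projected_warming_c = 3.0       # expected warming by 2100 (from text)
years = 90.0                    # roughly now to 2100 (assumed)
warming_per_decade = projected_warming_c / years * 10.0

grand_min_cooling = (0.1, 0.3)  # estimated grand-minimum effect, deg C (from text)

print(f"Greenhouse warming rate: ~{warming_per_decade:.2f} C per decade")
for cooling in grand_min_cooling:
    print(f"A {cooling:.1f} C grand-minimum dip offsets "
          f"~{cooling / warming_per_decade:.1f} decades of warming")
# ~0.33 C per decade, so a 0.1-0.3 C dip cancels only about 0.3-0.9 of
# a single decade of projected warming.
```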
http://www2.ucar.edu/climate/faq/isnt-sun-in-quiet-period-wouldnt-grand-minimum-cool-earth-down