4.28125
This page shows how variable interpolation works in Perl. Interpolation, meaning "introducing or inserting something", is the name given to replacing a variable with the value of that variable. In Perl, any string that is built with double quotes (or something meaning double quotes) will be interpolated. That is, any variable or escape sequence that appears within the string will be replaced with its value. Here is a small example:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $apples = 4;
    print "I have $apples apples\n";

This would have the following output:

    I have 4 apples

In Perl, when you print an array inside double quotes, the array elements are printed separated by spaces. The following program provides a small example:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @friends = ('Margaret', 'Richard', 'Carolyn', 'Rohan', 'Cathy', 'Yukiko');
    print "Friends: @friends\n";

This program produces the following output:

    Friends: Margaret Richard Carolyn Rohan Cathy Yukiko

This is very helpful when debugging. The function qq() works just like double quotes, but makes it easy to put double quotes in your string:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $apples = 4;
    print qq(I "have" $apples apples\n);

This would produce:

    I "have" 4 apples

Here documents work in exactly the same way. If the end token of the here document (e.g. <<"EOT") is surrounded by double quotes, then variables in the here document will be interpolated:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $apples = 4;
    my $oranges = 7;
    my $pears = 3;

    print <<"EOT";
    My fruit list:
    $apples apples
    $oranges oranges
    $pears pears
    EOT

The output of this program is:

    My fruit list:
    4 apples
    7 oranges
    3 pears

Single-quoted strings do not interpolate variables or escape sequences such as \n (\' and \\ are the exceptions). Single quotes are helpful when you want to include a $ or a % or any other special character in your output:

    #!/usr/bin/perl
    use strict;
    use warnings;

    print 'Those strawberries cost $2.50';

This would produce the output you want:

    Those strawberries cost $2.50

The function q() works the same as single quotes, except it makes it easier to include a single quote in your data:

    #!/usr/bin/perl
    use strict;
    use warnings;

    print q(Bob's strawberries cost $2.50);

This would produce:

    Bob's strawberries cost $2.50

In the same way, variables in here documents whose end token is surrounded with single quotes are not interpolated:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $apples = 4;
    my $oranges = 7;
    my $pears = 3;

    print <<'EOT';
    My fruit list:
    $apples apples
    $oranges oranges
    $pears pears
    EOT

This would produce:

    My fruit list:
    $apples apples
    $oranges oranges
    $pears pears

When defining a hash, the key of the hash is a string. For example:

    my %pets = (
        'bob'    => 'cat',
        'tamba'  => 'dog',
        'ceasar' => 'horse',
    );

The key (on the left) does not have to be quoted unless it contains spaces or characters that may be interpolated. For example, the above hash could be written as:

    my %pets = (
        bob    => 'cat',
        tamba  => 'dog',
        ceasar => 'horse',
    );

But neither of the following keys is valid unquoted:

    my %pets = (
        a dog    => 'tamba',
        dollars$ => '40',
    );

If you are constructing or printing a string that contains no variables, use single quotes or q(). This makes your code easier to read if there are dollar signs or other special characters in your text. If you are printing a lot of data, use a here document; it will make your code easier to read.
If you need to print a variable amongst some text, use double quotes or qq(). This is much tidier than repeated uses of the '.' concatenation operator. Sometimes you will find that you do need to use either backslashes or concatenation operators. For example, if you want to print a dollar sign and then an amount in a variable, you could use either:

    my $amount = 40.00;
    print 'The amount is $' . $amount;

or:

    my $amount = 40.00;
    print "The amount is \$$amount";

The first example is probably the better one to use, as $$ is the Perl variable for the process id.

See also:

    perldoc perlop (the sections "Comma Operator", "Quote and Quote-like Operators", and "Gory details of parsing quoted constructs")
    perldoc -q "quote.*strings"
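As a final worked example, here is a minimal runnable sketch (the amount is an invented example value) putting the two dollar-sign techniques side by side, along with the $$ special variable that makes the escaping necessary:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $amount = 40.00;

    # Concatenation: the dollar sign sits inside a single-quoted string,
    # so it needs no escaping.
    print 'The amount is $' . $amount . "\n";   # The amount is $40

    # Interpolation: the backslash makes the dollar sign print literally
    # instead of starting a variable name.
    print "The amount is \$$amount\n";          # The amount is $40

    # Unescaped, $$ interpolates the id of the running process:
    print "This process id is $$\n";

Note that 40.00 prints as 40: interpolating a number drops trailing zeros, so use something like printf("The amount is \$%.2f\n", $amount) if you need two decimal places.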
http://perlmeme.org/howtos/using_perl/interpolation.html
4.03125
World War I, known to the generation that lived through it as the Great War, shaped the history of the twentieth century. In the Allied countries, the conflict began in August 1914 with a rush of patriotism and the expectation that the fighting would be brief. Victory, it was believed, was but a few weeks or months away. It would all be over by Christmas. This optimistic forecast, along with much else, was soon shattered by terrible events. Instead of being short and glorious, the war turned out to be long and unremitting. In France, the conflict quickly degenerated into trench warfare, a savage form of combat that tested the resolve of the rival armies and produced casualties on a hitherto unimaginable scale. As death and destruction piled upon death and destruction, there seemed to be no way out, until, finally, the Allies were able to gain the advantage and force a surrender.
http://www.tidespoint.com/books/letters_mayolind.shtml
4
Comparing the DNA of different organisms can show how closely related they are. Since for each species the DNA information is organized into a characteristic number of chromosomes, the number of chromosomes is a reasonable indicator of the relatedness of two similar species. Sometimes the DNA information on a chromosome is reorganized. Chromosomes can sometimes fuse with each other or can exchange chromosome "arms". When this happens, DNA information is not always lost, but it can become mixed up. This sort of rearrangement may not cause problems for the individual who carries the change, as long as all the DNA is still present. A mule is the product of two different species (a horse and a donkey) mating with each other. The fact that these two different types of animals can mate and produce viable offspring tells scientists that horses and donkeys are closely related. However, mules are always sterile. Why is this? Horses and donkeys have different chromosome numbers (horses have 64 chromosomes and donkeys have 62, so the mule inherits 63). The fact that horses and donkeys have different chromosome numbers tells scientists that these two are different species. For the mule, having parents with different chromosome numbers isn't a problem during ordinary growth. During mitotic cell division, each of the chromosomes copies itself and then distributes these two copies to the two daughter cells. In contrast, when the mule is producing sperm or egg cells during meiosis, each pair of chromosomes (one from Mom and one from Dad) needs to pair up with its partner. Since the mule's chromosomes cannot all form homologous pairs (its parents had different chromosome numbers), meiosis is disrupted and viable sperm and eggs are not formed.

Using chromosomes to classify plant species

Variations in chromosome number are even more common in the plant kingdom. In plants, chromosome number is an important indicator for determining relationships between plant species. Scientists at the Utah Museum of Natural History recently used studies of chromosome number to show that a Utah fern was not the same species as a very similar fern found in other states. Studies of the Utah Jones Cloak Fern (originally thought to be a Notholaena species) showed that the cells of this plant have 27 chromosomes, while species of Notholaena found in other states have 30 chromosomes or multiples of 30. When combined with the results of other studies, the difference in chromosome number helped to prove that the Utah species actually belonged in the genus Argyrochosma, a very distant relative of Notholaena. This sort of information is important because it helps conservation biologists understand the distribution of each different species of plant. With this sort of information, scientists are better able to decide which plants are rare and require protection by means such as the endangered species list.
http://learn.genetics.utah.edu/archive/conservation/tools/chromoanalysis.html
4.03125
A Boolean array in computer programming is a sequence of values that can only hold the values of true or false. By definition, a Boolean can only be true or false and is unable to hold any other intermediary value. An array is a sequence of data types that occupy numerical positions in a linear memory space. While the actual implementation of a Boolean array is often left up to the compiler or computer language libraries, it is most efficiently done by using bits instead of complete bytes or words. There are several uses for a Boolean array, including keeping track of property flags and aligning settings for physical hardware interfaces.

The idea of a Boolean array stems from original methods that were used to store information on computers where there was very little available memory. The first implementation of a Boolean array took the form of a bit array. This used larger data types such as bytes or long integers to hold information by setting the bits of the data type to true or false. In this way, a single byte that is eight bits long could hold eight different true or false values, saving space and allowing for efficient bitwise operations. As the size of computer memory increased, the need to use bit arrays declined.

While using bits does offer the possibility for bit shifting and using logical operators that allow incredibly fast processing, it also requires custom code to handle these types of operations. Using a standard array structure to hold a sequence of bytes is a simpler solution, but it takes much more memory during program execution. This can be seen when creating an array of 32 Boolean values. With a bit array, the data will only occupy four bytes of memory, but a Boolean type array might occupy anywhere from 32 to 128 bytes, depending on the system implementation.

Some computer programming languages do actually implement a bit array when a Boolean array type is used, although this is not common. A Boolean array has the advantage of being very easy to read when viewing source code. Comparisons and assignments are presented clearly, whereas with a bit array the bitwise operators "and", "or" and "not" must be used, potentially creating confusing code.

Despite the ease of use, one feature that cannot be used with a Boolean array is a bitmask. A bitmask is a single byte or larger data type that contains a sequence of true and false values relating to multiple conditions. In a single operation, multiple bits can be checked for their true or false states, all at once. With an integer-based array of Boolean values, the same operation would need to be performed with a loop.
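To make that closing comparison concrete, here is a minimal sketch in Perl (the flag positions are invented for the example) contrasting an array of Boolean values with a single-integer bitmask:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # An array of Boolean values: one element per flag, easy to read.
    my @flags = (0) x 32;
    $flags[3] = 1;
    $flags[7] = 1;

    # Checking several flags means one test per element (or a loop):
    print "array: flags 3 and 7 set\n" if $flags[3] && $flags[7];

    # The same 32 flags packed into one integer, i.e. a bitmask.
    my $mask = 0;
    $mask |= (1 << 3);    # set flag 3
    $mask |= (1 << 7);    # set flag 7

    # A single bitwise AND tests both flags at once:
    my $wanted = (1 << 3) | (1 << 7);
    print "mask: flags 3 and 7 set\n" if ($mask & $wanted) == $wanted;

    # Clearing a flag is also a single operation:
    $mask &= ~(1 << 7);

The bitmask tests both conditions in one bitwise operation, while the array version needs one comparison per flag; that is the readability-versus-compactness trade-off described above.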
http://www.wisegeek.com/what-is-a-boolean-array.htm
4.28125
This interactive activity for grades 8-12 features eight models that explore atomic arrangements for gases, solids, and liquids. Highlight an atom and view its trajectory to see how the motion differs in each of the three primary phases. As the lesson progresses, students observe and manipulate differences in attractions among atoms in each state and experiment with adding energy to produce state changes. More advanced students can explore models of latent heat and evaporative cooling. See Related Materials for a Teacher's Guide developed specifically to accompany this activity. This item is part of the Concord Consortium, a nonprofit research and development organization dedicated to transforming education through technology. The Concord Consortium develops deeply digital learning innovations for science, mathematics, and engineering.

- 6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
- 6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances.
- 6-8: 4D/M3ab. Atoms and molecules are perpetually in motion. Increased temperature means greater average energy of motion, so most substances expand when heated.
- 6-8: 4D/M3cd. In solids, the atoms or molecules are closely locked in position and can only vibrate. In liquids, they have higher energy, are more loosely connected, and can slide past one another; some molecules may get enough energy to escape into a gas. In gases, the atoms or molecules have still more energy and are free of one another except during occasional collisions.

4E. Energy Transformations
- 6-8: 4E/M3. Thermal energy is transferred through a material by the collisions of atoms within the material. Over time, the thermal energy tends to spread out through a material and from one material to another if they are in contact. Thermal energy can also be transferred by means of currents in air, water, or other fluids. In addition, some thermal energy in all materials is transformed into light energy and radiated into the environment by electromagnetic waves; that light energy can be transformed back into thermal energy when the electromagnetic waves strike another material. As a result, a material tends to cool down unless some other form of energy is converted to thermal energy in the material.
- 9-12: 4E/H9. Many forms of energy can be considered to be either kinetic energy, which is the energy of motion, or potential energy, which depends on the separation between mutually attracting or repelling objects.

11. Common Themes
- 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
- 6-8: 11B/M4. Simulations are often useful in modeling events and processes.

Common Core State Standards for Mathematics Alignments

Standards for Mathematical Practice (K-12)
- MP.4 Model with mathematics.

Define, evaluate, and compare functions. (8)
- 8.F.2 Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions).

Use functions to model relationships between quantities. (8)
- 8.F.5 Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.

High School — Functions (9-12)

Interpreting Functions (9-12)
- F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
http://www.compadre.org/portal/items/detail.cfm?ID=11191
4.3125
Curriculum & Resources: Individual and Community Resilience

Great Resources for Teaching from the October 2010 YES! Education Connection Newsletter
Read the newsletter: Go Green! Go Simple! Preparing your students for an uncertain world

What makes teenage brains unique? What happens when people from all walks of life play an alternate reality game to create a better future? Here are two classroom resources that will inspire your students to explore individual and community resilience.

Inside the Teenage Brain

Teenagers can be a mystery. One minute, they're sweet, earnest, and on task; the next, snarly, evasive, and bouncing off the walls. Frontline's series "Inside the Teenage Brain" explores scientific research and explanations for teenage behavior. Neuroscientists say the brain is like a house that is built in the early years, and the rest of childhood and the teenage years is spent getting the furniture into the house and into the right place. Extensive changes in brain development—referred to as pruning and strengthening—during puberty occur simultaneously with raging hormones. Sleep, mood swings, risky behavior, neuroresearch, public policy, and parenting tips are deftly discussed in this fascinating and helpful program. No matter the research, the experts on the show say that the biggest difference in a teen's life is the quality time he or she spends with parents or other adults. Episodes include: Teenagers' Inexplicable Behavior, The Wiring of the Adolescent Brain, Mood Swings, You Just Don't Understand, From Zzzzs to A's, and Are There Lessons for Parents?

To enter the Frontline series and the recesses of the teen brain, click here: Inside the Teenage Brain
EXPLORE: Anatomy of a Teen Brain

NY TIMES LESSON PLAN: What Were They Thinking? Exploring Teen Brain Development
In this lesson, your students will review recent scientific research on the teenage brain, including the Frontline series, "Inside the Teenage Brain," and hold a mini-symposium to discuss its implications for topics related to teens' freedom and accountability. Your students will note differences between adult and teen brains; what methods neuroscientists use to research those differences; and how that research is applied to parenting (think curfews and hanging out with friends) and public policy, such as teenage driver laws.

- Discovering the Beauty of Teenagers: Photographer John Hasyn held a photo workshop with Inuit youth from Nunavut. His experience overcame his fear of teenagers and changed his perspective forever.
- Project Happiness: 7 Doors Project: Explore the idea of real and lasting happiness for teenagers.
- This is Your Brain on Bliss: After 2,000 years of practice, Buddhist monks know that one secret to happiness is simply to put your mind to it.

World Without Oil

In May 2007, people from all walks of life began to play a "what if" game. What if an oil crisis started? What would happen? How would the lives of ordinary people change? To play the game, people visualized what would happen if an oil crisis hit the U.S. As the game unfolded and the crisis was in full swing, people told their stories of how the oil shortage affected their lives and what they were doing to cope. As World Without Oil continued, over 1900 people not only created an immensely complex disaster, but they also visualized realistic and achievable solutions via their own personal blog posts, videos, and voicemails.

Though the game is officially over, your students can still play and learn. World Without Oil's 11 stand-alone lessons and grassroots simulation will engage students with questions about energy use, sustainability, the role energy plays in our economy, culture, worldview and history, and the threat of peak oil.

LESSON 1: Oil Crisis: Get into the Game
A global oil crisis has begun. Oil usage worldwide has increased to the point where the oil supply can meet only 95% of demand. Begin the inquiry into the effects of less oil in our lives.
EXPLORE: Lesson One: Oil Crisis

LESSON 3: Life is Starting to Change
Widespread changes are starting. Goods and services that depended on cheap oil are failing.
EXPLORE: Lesson Three: Life is Starting to Change

LESSON 6: Food Without Oil
The impact of oil on our food supply is one of the most serious aspects of the oil crisis. Shortages are forcing many people to look for locally grown food.

To download all 11 World Without Oil lessons, in addition to a student guide on lessons, click here: http://worldwithoutoil.org/metateachers.htm

World Without Oil is an alternate reality game created to call attention to and spark dialogue about petroleum dependency. It also aims to inspire individuals to take steps toward living less oil-dependent, more resilient lives. World Without Oil was presented in 2007 by Independent Television Service (ITVS) with funding by the Corporation for Public Broadcasting. It continues through lesson plans for middle and high school teachers. To explore more learning resources, visit the official website: World Without Oil

The above resources accompany the October 2010 YES! Education Connection Newsletter.
http://www.yesmagazine.org/for-teachers/curriculum/curriculum-resources-world-without-oil?icl=yesemail_ednews_oct10&ica=tnBrain
4.15625
Jan. 4, 2010 Spectacular satellite images suggest that Mars was warm enough to sustain lakes three billion years ago, a period that was previously thought to be too cold and arid to sustain water on the surface, according to research published in the journal Geology. The research, by a team from Imperial College London and University College London (UCL), suggests that during the Hesperian Epoch, approximately 3 billion years ago, Mars had lakes made of melted ice, each around 20km wide, along parts of the equator. Earlier research had suggested that Mars had a warm and wet early history but that between 4 billion and 3.8 billion years ago, before the Hesperian Epoch, the planet lost most of its atmosphere and became cold and dry. In the new study, the researchers analysed detailed images from NASA's Mars Reconnaissance Orbiter, which is currently circling the red planet, and concluded that there were later episodes where Mars experienced warm and wet periods. The researchers say that there may have been increased volcanic activity, meteorite impacts or shifts in Mars' orbit during this period to warm Mars' atmosphere enough to melt the ice. This would have created gases that thickened the atmosphere for a temporary period, trapping more sunlight and making it warm enough for liquid water to be sustained. Lead author of the study, Dr Nicholas Warner, from the Department of Earth Science and Engineering at Imperial College London, says: "Most of the research on Mars has focussed on its early history and the recent past. Scientists had largely overlooked the Hesperian Epoch as it was thought that Mars was then a frozen wasteland. Excitingly, our study now shows that this middle period in Mars' history was much more dynamic than we previously thought." The researchers used the images from the Mars Reconnaissance Orbiter to analyse several flat-floored depressions located above Ares Vallis, which is a giant gorge that runs 2,000 km across the equator of Mars. Scientists have previously been unable to explain how these depressions formed, but believed that the depressions may have been created by a process known as sublimation, where ice changes directly from its solid state into a gas without becoming liquid water. The loss of ice would have created cavities between the soil particles, which would have caused the ground to collapse into a depression. In the new study, the researchers analysed the depressions and discovered a series of small sinuous channels that connected them together. The researchers say these channels could only be formed by running water, and not by ice turning directly into gas. The scientists were able to lend further weight to their conclusions by comparing the Mars images to images of thermokarst landscapes that are found on Earth today, in places such as Siberia and Alaska. Thermokarst landscapes are areas where permafrost is melting, creating lakes that are interconnected by the same type of drainage channels found on Mars. The team believe the melting ice would have created lakes and that a rise in water levels may have caused some of the lakes to burst their banks, which enabled water to carve a pathway through the frozen ground from the higher lakes and drain into the lower lying lakes, creating permanent channels between them. Professor Jan-Peter Muller, Mullard Space Science Laboratory, Department of Space Climate Physics at University College London, was responsible for mapping the 3D shape of the surface of Mars. 
He adds: "We can now model the 3D shape of Mars' surface down to sub-metre resolution, at least as good as any commercial satellite orbiting the Earth. This allows us to test our hypotheses in a much more rigorous manner than ever before." The researchers determined the age of the lakes by counting crater impacts, a method originally developed by NASA scientists to determine the age of geological features on the moon. More craters around a geological feature indicate that an area is older than a region with fewer meteorite impacts. In the study, the scientists counted more than 35,000 crater impacts in the region around the lakes, and determined that the lakes formed approximately three billion years ago. The scientists are unsure how long the warm and wet periods lasted during the Hesperian epoch or how long the lakes sustained liquid water in them. The researchers say their study may have implications for astrobiologists who are looking for evidence of life on Mars. The team say these lake beds indicate regions on the planet where it could have been warm and wet, potentially creating habitats that may have once been suitable for microbial life. The team say these areas may be good targets for future robotic missions. The next step will see the team extend their survey to other areas along the equator of Mars so that they can ascertain how widespread these lakes were during the Hesperian Epoch. The team will focus their surveys on a region at the mouth of Ares Vallis called Chryse Planitia, where preliminary surveys of satellite images have suggested that this area may have also supported lakes. The study was a collaboration between the Department of Earth Science and Engineering at Imperial College London and the Department of Space Climate Physics at UCL. The project was funded by the Science and Technology Facilities Council, the Royal Society and the Leverhulme Trust.

Journal reference: Sanjeev Gupta, Nicholas Warner, Jung-Rack Kim, Shih-Yuan Lin, Jan Peter Muller. Hesperian equatorial thermokarst lakes in Ares Vallis as evidence for transient warm conditions on Mars. Geology, January 2010; Vol. 38, pp. 71-74.
http://www.sciencedaily.com/releases/2010/01/100104092452.htm
4
Celiac disease (nontropical sprue, gluten enteropathy, celiac sprue) is a hereditary intolerance to gluten (a protein found in wheat, barley, and oats) that causes characteristic changes in the lining of the small intestine, resulting in malabsorption. Celiac disease is a hereditary disorder that usually affects people of northern European heritage. Celiac disease may affect 1 out of 300 people in Europe, especially in Ireland and Italy, and perhaps 1 out of 250 people in some parts of the United States, yet it is extremely rare in Africa, Japan, and China. About 10 to 20% of close relatives of people with celiac disease are also affected. The disease affects about twice as many women as men. In this disease, gluten―a protein found in wheat and, to a lesser extent, barley, rye, and oats―stimulates the immune system to produce certain antibodies. These antibodies damage the inner lining of the small intestine, resulting in flattening of the villi. The resulting smooth surface leads to malabsorption of nutrients. However, the small intestine's normal brushlike surface and function are restored when the person stops eating foods containing gluten. Some people develop symptoms as children. Others do not develop symptoms until adulthood. The severity of symptoms depends on how much of the small intestine is affected. Most affected adults have weakness and loss of appetite. Diarrhea, often with oily or greasy-appearing stool, is common. Some people are undernourished, have mild weight loss, and occasionally have mouth sores and an inflamed tongue. However, some people do not have digestive symptoms at all. About 10% of people with celiac disease develop a painful, itchy skin rash with small blisters—a disease called dermatitis herpetiformis (see Blistering Diseases: Dermatitis Herpetiformis). In children, symptoms can begin in infancy or early childhood after cereals (most of which contain gluten) are introduced. Some children experience only mild upset stomach, whereas others develop painful abdominal bloating and have light-colored, unusually foul-smelling, bulky stools (steatorrhea). Children typically fail to grow at a normal rate and appear weak, pale, and listless. The nutritional deficiencies resulting from malabsorption in celiac disease can cause additional symptoms, which tend to be more prominent in children. Some children develop growth abnormalities, such as short stature. Anemia, causing fatigue and weakness, develops as a result of iron deficiency. Low protein levels in the blood can lead to fluid retention and tissue swelling (edema). Malabsorption of vitamin B12 can lead to nerve damage, causing a pins-and-needles sensation in the arms and legs. Poor calcium absorption results in abnormal bone growth, a higher risk of broken bones, and painful bones and joints. Lack of calcium can also cause tooth discoloration and greater susceptibility to painful tooth decay. Girls with celiac disease may not have menstrual periods because of a low production of hormones, such as estrogen. Doctors suspect the diagnosis when a person has the previously mentioned symptoms. Measurement of the level of specific antibodies produced when a person with celiac disease consumes gluten is a helpful test. To help confirm the diagnosis, doctors remove a sample of tissue from the person's lining of the small intestine and examine it under a microscope (biopsy). 
The diagnosis is confirmed if the biopsy shows the intestinal villi are flattened and if the lining of the small intestine subsequently improves after the person stops eating foods containing gluten. Once the diagnosis is made, doctors do blood tests to look for deficiencies of certain vitamins (such as folate [folic acid]) and minerals (such as iron and calcium). Without diagnosis and treatment, celiac disease is ultimately fatal in 10 to 30% of people. Currently, such outcomes are rare, and most people do well if they avoid gluten. Celiac disease does increase the risk of certain cancers of the digestive tract. The most common cancer is lymphoma of the intestine. Such lymphomas affect about 6 to 8% of people who have had celiac disease for a long time (typically more than 20 to 40 years). Strictly adhering to a gluten-free diet significantly decreases the risk of cancer. People with celiac disease must exclude all gluten from their diet, because eating even small amounts may cause symptoms. The response to a gluten-free diet is usually rapid. Once gluten is avoided, the brushlike surface of the small intestine and its absorptive function return to normal. Gluten is so widely used in food products that people with celiac disease need detailed lists of foods to be avoided and expert advice from a dietitian. Gluten is found, for example, in commercial soups, sauces, ice cream, and hot dogs. Doctors give most people with celiac disease supplements to replace vitamins (such as folate) and minerals (such as iron). Some people continue to have symptoms even when gluten is avoided. In such people, either the diagnosis is incorrect or the disease has progressed to a condition called refractory celiac disease. In refractory celiac disease, treatment with corticosteroids, such as prednisone, may help. In rare cases, if there is no response to either gluten withdrawal or drug treatment, intravenous feeding is needed. Sometimes children are seriously ill when first diagnosed and need a period of intravenous feeding before starting a gluten-free diet. Last full review/revision January 2013 by Atenodoro R. Ruiz, Jr., MD
http://www.merckmanuals.com/home/digestive_disorders/malabsorption/celiac_disease.html
4.09375
New research results are consistent with a controversial theory that an extraterrestrial body – such as a comet – impacted the Earth approximately 12,900 years ago, possibly contributing to the significant climatic and ecological changes that date to that time period. The new paper in the Proceedings of the National Academy of Sciences (PNAS) includes significant findings about the nature of so-called "microspherules" that were found at a number of prehistoric sites, based on research done at North Carolina State. Specifically, the team behind the original 2007 study found hundreds to thousands of these microspherules in each kilogram of dirt they sampled at the Younger Dryas Boundary (YDB) layer from several sites. The YDB marks the period when the Earth's climate reverted to conditions similar to the ice age and populations of prehistoric animals, such as mammoths, appear to have dropped off precipitously. It also marks the period when the Clovis culture in North America seems to have experienced a significant population decline or some significant cultural modification. Samples were also taken from layers above and below the YDB. Microspherules were found in much greater numbers in the dirt samples taken from the YDB, as compared to the samples from the other layers. These microspherules have a variety of natural and artificial sources, including impact events, volcanoes and industrial pollution. Most types of microspherules are easily distinguished from one another. However, in 2009, another team of researchers published a paper calling the 2007 findings into question. Those researchers had examined two of the sites cited in the 2007 paper – the Blackwater Draw site in New Mexico and the Topper site in South Carolina – as well as five others, and reported that they were unable to find increased numbers of the relevant microspherules in the YDB at all but one site – and even that site was questionable. Now the new PNAS paper finds that the 2009 study relied on flawed protocols. Perhaps more importantly, the researchers behind the new study have re-examined the Blackwater Draw and Topper sites – as well as a third site in Maryland common to the 2009 study – and were able to find microspherules in amounts consistent with the 2007 hypothesis at each site. "Our study replicates only a small subset of the research reported in 2007 and within those narrow limits, our results are consistent with theirs. Much research remains to be done to prove or disprove the hypothesis," says Dr. Malcolm LeCompte of Elizabeth City State University, who is lead author of the new PNAS paper. LeCompte brought some of these microspherules to the Analytical Instrumentation Facility (AIF) at NC State, which provides both analytical instrumentation and expert staff to help researchers analyze and characterize materials and material structures at the micro and nanoscale. "They wanted to know what's in these spherules and where they came from," says Charles Mooney, the scanning electron microscope (SEM) lab manager at AIF. "We analyzed the microspherules with an SEM, which allowed us to obtain high-resolution images of the microspherules. We also collected x-rays generated by electron beam-sample interactions to tell us what elements were in each sample," Mooney explains. "This told us that the microspherules were largely made up of iron, aluminum, silicon, and occasionally titanium, with one spherule containing significant amounts of rare earths, such as cerium." Dr.
Dale Batchelor, director of operations at AIF, also sliced open some of the microspherules using an analytical instrument composed of both a focused ion beam (FIB) and an SEM to examine their interior structure and composition. Interestingly, some of the microspherules were partially hollow, but exhibited internal crystal structures when cross sectioned with the FIB. "To our knowledge this is the first instance of the FIB technique being used to cross section YDB microspherules – in effect exploratory surgery on the microscale," Batchelor says. "The FIB is the scalpel and the SEM is the eye." Most of the microspherules were made up of elements in proportions similar to the composition of the Earth's crust and not, as some had proposed, meteorite material. In addition, the surface characteristics of the microspherules indicate that they were heated to a molten temperature and then cooled rapidly. "This is consistent with the theory of an impact event, but falls short of proof positive," says LeCompte.

More information: doi: 10.1073/pnas.1208603109
Journal reference: Proceedings of the National Academy of Sciences
The Daily Galaxy via North Carolina State University
Image credit: University of California Santa Barbara (UCSB)
http://www.dailygalaxy.com/my_weblog/2012/10/theia-impact-hypothesis-of-moons-creation-crucial-evidence-discovered-.html
4.03125
Yale-New Haven Teachers Institute

Harriet J. Bauman

This unit's focus is the contributions of the Mexican-Americans to American culture. It is designed for either eight or sixteen weeks. It can be used in a high school Spanish II, III, or IV class, alone, or in conjunction with a United States History class, an American Literature class, a Humanities class, or an art history or music history class. A unit such as this is a necessary addition to the Foreign Language curriculum of the New Haven Public Schools. An important facet of the curriculum is the study of the foreign culture. Unfortunately, in our curriculum, all aspects of Spanish and Hispanic culture are studied, except their influence on the United States. Therefore, this unit begins to fill the gap. An historical perspective is maintained throughout the unit. The events and people, which should be familiar to the students from their study of American History, are the basis for explaining the strong Hispanic influence in a major area of the United States: the Southwestern states. A general study of Mexico and Mexican History, undertaken before beginning this unit, will make the suggested activities much more meaningful to the students. An excellent source is Muchas Facetas de México by Jane Burnett (see Reading List for Teachers). It gives a good overview of Mexico today. There are many short chapters, each containing concise information in Spanish. For more detailed information on Mexican History, Mexican-Americans: Sons of the Southwest by Ruth S. Lamb is an excellent choice. Mexico's history from prehistoric times to the present, including the Southwestern states, is presented in such a manner as to explain the Mexican-American today. The information is enormous, and it is better for the teacher to use it in a synopsis for the students. (see Reading List for Teachers)

Another important pre-unit activity is to have students, working in groups, write letters to the Mexican Embassy in New York and to the Chambers of Commerce of individual states: Texas, Arizona, New Mexico, California, Colorado, Montana, Nevada, Louisiana, and Florida. They are to ask for information and pictures that will help them learn about the Spanish and Hispanic contributions (see Resource List). These activities accomplished, the stage is set for the main event: a study of Mexican-Americans and their influence on the United States.

Objective 1
To encourage the learning of Spanish for communication and understanding of the Hispanic heritage.

Strategies
Since the main subject in which this unit is to be taught is Spanish, a major emphasis of the unit is to continue to build the four main skills of language learning: speaking, listening, reading, and writing.
- a. Students will learn about the folklore of the Southwest by reading some Mexican legends in Spanish. Both Leyendas Latinoamericanas by Genevieve Barlow and Leyendas Mexicanas by Genevieve Barlow and William Stivers contain many of the most popular Mexican legends. They are written simply enough for the students so that they can read them easily. Questions for each legend are included in both books. It is wise to select the legends which will emphasize a particular cultural point, such as "La china poblana" in Leyendas Mexicanas, which tells the story of a young Asian girl captured by pirates and brought to Mexico. The outfit she was wearing was very different from that of the Mexican women. Instead of protesting her fate, the girl made a new life in Mexico.
In her honor, the Mexican women wear a festival dress called "la china poblana". A picture of this outfit can be found in Muchas Facetas de México on page 9.
- b. The students can represent the legends they read by illustrating them or acting in skits based on them.
- c. They can write their own legends in Spanish about a particular cultural point. For example, here is an original legend about the origin of the piñata:

Once upon a time there was a little boy named Miguel whose family was so poor that there were no toys for him to play with. The family had so little money they could barely pay for food or rent. Miguel wanted something to play with very badly. He prayed every day for a toy. One day, an angel appeared and said, "I will grant your wish if you do something special and dedicate it to me." The little boy thought and thought. Finally he had an idea. "I will make a beautiful bird, and the angel will be pleased." He found a clay pot that his mother no longer needed, and some wire that his father was going to throw away. He used the pot as the body of the bird and fashioned the rest of the bird with the wire. He covered it with strips of brightly colored paper making it look like a beautiful quetzal bird, which resembled the one who lived in the tree next to his house. When he was finished, he hung the piñata on a branch of the tree. The next day, the angel reappeared, thanked Miguel, and told him he would find a surprise inside the piñata. Miguel jumped and jumped, but he couldn't reach it. He found a stick and swung at the piñata. On the third swing, he hit it and broke it. Toys and candies spilled out all around him! Gleefully he played with the toys and ate the candy. A few years later when Miguel was older, he remembered the angel's visit, and decided to do the same thing for other poor boys and girls. He opened a shop in the Zócalo and made piñatas in many colors and in many different forms. Now, one can purchase a piñata from Miguel for all occasions. (H.J. Bauman, 1984)

- d. Students can demonstrate a Mexican recipe or the making of a craft such as an ojo de Dios, which is colorful yarn wrapped around two sticks in the shape of a cross, used for good luck. For more information about Mexican crafts, A Treasury of Mexican Folkways by Frances Toor is very useful. (see Reading List for Teachers)

Objective 2
To trace the Spanish, Mexican, and Indian influences existing in the Southwest.

Strategies
There are many topics of interest to students: art, architecture, crafts, food, music, dance, religion, costumes, holidays, monuments, and famous people, historical and contemporary. Projects should be varied in format as well as in form. That is to say, students could work by themselves, in small groups, or in large groups. Their projects could be drawings, reports, montages (a grouping of pictures), collages (a grouping of many items which vary in form, texture, and color), three-dimensional models, and dioramas.
- a. A study of place names (states and cities or towns with Spanish or Mexican names) has a twofold purpose: (1) as a part of language study, the students will translate these names and decide why the settlers chose them; and (2) the students will explore why these areas were settled by the Spanish or Mexicans, and how these first settlements have or have not continued to be Spanish or Mexican in character.
- The Spanish or Mexican settlers gave names to their surroundings which reflected their cultures. Some of the place names come from a description of their area.
Buena Vista, for example, means beautiful view. Other places were named in honor of a particular saint, or because they were discovered on that saint's day. Florida was so named because it was discovered on el Día de las flores pascuas, or Easter Sunday. Still others were named for a famous person like Ponce de León or Hidalgo. Sometimes the settlers gave a religious idea as a name, such as Trinidad, which means Trinity. Lastly, the Spanish or Mexican place names are the same as in Spain, such as Granada, or as in Mexico, like Zapata. (A list of suggested place names is included at the end of the unit.)
- b. Spanish architecture is an extremely interesting study. The students can trace the structure of the buildings back to their Spanish sources. Several of the articles from Américas magazine (see Reading List for Students) are extremely well documented. Of particular interest are the following articles: "Proud, lonely Churches" by Pál Kelemen (volume 28 number 2, February, 1976), which is concerned with Mexican churches and their Spanish influences and contains many fine photographs of these churches; "When Florida Was Spanish" by Guillermo de Zéndegui, which explains how the Spanish built their fortifications using the Indians' method of making the fort circular with the palisades in a spiral; "The Missions of San Antonio" by Herb Taylor, Jr. (volume 24 number 10, October, 1972), in which the author concentrates on the missions, temporary structures to the Spanish, which were to teach the Indians about farming and also to speak Spanish, as well as to convert them to Christianity; "St. Augustine, U.S.A. 1565" by Guillermo de Zéndegui (volume 25 number 1, January, 1973), which gives an excellent account of the founding of St. Augustine, Florida; and, as a contrast to Spanish architecture, "Space and life Style: A Maya Answer" by Linda Schele (volume 25 number 5, May, 1973), which shows the construction of Palenque in southern Mexico. The Indians were exceptional builders. One wonders what would have been constructed if the Spanish had not destroyed most of the Indians' buildings, and had borrowed their techniques as they did from the Moors in Spain.
- c. Hispanic crafts or folk arts is another topic rich in tradition and historical or religious significance. "The Significance of Folk Arts" by Guillermo de Zéndegui (Américas, volume 25 numbers 11-12, November-December, 1973) explains the purpose of folk art and how it manifests itself. This is very helpful for the students. Understanding a people's way of life includes appreciating its crafts.
- Raul Calvimontes' "Folk Arts Through the Ages" (Américas volume 25 numbers 11-12, November-December, 1973) gives an overview of Hispanic crafts in the Americas. There are excellent photographs of many of the folk arts. The most important sections for our purposes are "The Colony" (pages S-10-S-13), where he explains how the Spanish trained the Indians to create Spanish art in the New World, and how the Indians imbued these works of art with their own perspective. For example, in the churches and palaces where they were to sculpt altars, facades, ceilings, etc., they sculpted ears of corn, American plants and flowers, and American animals. For statues of Christ or the Virgin, the faces were Indian.
- The other section of Calvimontes' article which is important for us is "The Panorama Today" (pages S-15-S-18). Here he talks about the Mexican handicrafts in detail.
He does not tell about the Hispanic crafts in the Southwestern United States, but there are some pictures of these crafts.
- d. The customs and holidays of the Mexican-Americans can be studied as a contemporary phenomenon: how they are celebrated today, with what historical background, and how they have been altered, if at all, by their English-speaking neighbors. The best source for this information is A Treasury of Mexican Folkways by Frances Toor. She explains most Mexican holidays and customs in an easy-to-read style. Another source, which would explain how these celebrations occur today, is information provided by the states in the Southwest (see Resources).
- e. An in-depth study of Mexican food and its counterpart in the Southwestern states, and the customs surrounding them, is another avenue of exploration. Making menus and actual dishes will show the students that every Spanish-speaking area has its own kind of cuisine unique to that area, yet containing a common thread, the Spanish influence. A good source for Mexican food is any Diana Kennedy cookbook. Also, the Time-Life series of cookbooks has one volume dedicated to the cooking of the Southwestern United States. The American Council on the Teaching of Foreign Languages has a booklet containing easy Mexican recipes for classroom use.
- ____1. The students can study the folklore surrounding Mexican food, especially the gods and goddesses and holidays. For information use Frances Toor's A Treasury of Mexican Folkways, and Ruth S. Lamb's Mexican-Americans: Sons of the Southwest.
- ____2. Another intriguing activity is to discover the influence of Spanish cuisine on Mexican cuisine: what foods the Spanish brought with them; what foods the Mexicans had; and what foods the Spanish borrowed from the Mexicans, and vice versa. The two books mentioned in the paragraph above contain some of this information.
- ____3. Using a cookbook on Southwestern cooking, the students can compare and contrast "Tex-Mex" cuisine and authentic Mexican food.
- ____4. The students can prepare a meal with as much authenticity as possible, using real masa, for example (masa is corn meal flour). Ingredients can be found in some grocery stores like Stop and Shop, or in an Hispanic bodega (grocery store).
- f. An interesting vocabulary study is that of the costume, equipment, and life-style of the American cowboy in the Southwest. Many Spanish words are used commonly, without the realization that these words are not English. For example:
- el rodeo - like a fiesta campera in Spain, where men test themselves in various contests with animals; the Mexicans as well as the cowboys twirl the lariat, ride bucking broncos, and wrestle bulls.
- el corral - like a backyard in Spain or Mexico, but used to fence in horses or cattle in the Southwest.
- Vamoose - vamos - let's go
- la reata - the rope the cowboys call a lariat.
- los chaparros - chaps, or a leather shield for the cowboys' legs.
- el padre - priest ("Father")
- el vaquero - cowboy
- el bronco - a wild horse
- Twirling the lariat (la reata) and riding a bucking bronco are originally Mexican customs, which are still practiced today at rodeos in the Southwest.
- g. The art of the Southwestern states can be studied for its Mexican or Spanish sources. Guadalupe González-Hontoria de Alvarez Romero's "Aztec Featherwork" (Américas volume 25 number 1, January, 1973), as well as C. Bruce Hunter's A Guide to Ancient Maya Ruins and Alma M. Reed's The Ancient Past of Mexico, all discuss Pre-Columbian art of Mexico.
The techniques of these ancient peoples have been handed down through the generations, and continue to appear in the weaving and pottery of today.
- Estelle Caloia Roberts' "Los Cuatro: Mexico's Majestic Artists" (Américas volume 29 numbers 6-7, June-July, 1977) treats the four major artists of this century: Diego Rivera, José Clemente Orozco, David Alfaro Siqueiros, and Rufino Tamayo. She explains the political background for these artists' works, as well as their techniques. There are many excellent photographs of the most compelling paintings. This is a good source for the students, as she also points out the Pre-Columbian influences on these contemporary artists. Many of the Hispanic painters of the Southwest were influenced by these artists.
- Lastly, Irwin and Emily Whitaker's "Contemporary Mexican Pottery: An Ever-Changing Art Form" (Américas volume 26 number 8, August, 1974) shows the influence of Pre-Columbian civilization on the pottery of today. This is a well-documented article with terrific photographs.
- h. Famous monuments can be researched for an oral presentation to the class. The students can make three-dimensional models of the monuments or draw them. The emphasis is on their historical importance, such as the Alamo in Texas, the missions along the Camino Real in California, etc. Most American History textbooks have the names of specific monuments other than those already mentioned.
- i. Famous people can be divided into two categories: historical or contemporary. (A partial list of famous people is included at the end of the unit.) Some of the famous people in history are already familiar to the students: Santa Ana, Coronado, Cortés, Pizarro, etc. Others are not as well-known, but are equally important. An article of note is Francisco Terán's "The Conquistadors' Ladies" (Américas volume 28 number 2, February, 1976). This article discusses the help Indian women gave to the Spanish conquistadors. This new information helps to shed new light on the Conquest.
- ____1. Students can draw portraits of these people, make time lines of their lives, or illustrate an important event.
- ____2. Students could write newspaper articles about important events in the famous people's lives.
- For the famous people of today, the students have to rely on current magazines and newspapers for source material, for example on Cisneros, the mayor of San Antonio, Texas.
- ____1. The students can perform skits based on these people's lives.
- ____2. Students can also write newspaper articles on these people, or as if they were a famous Hispanic of the Southwest today.
- j. Music of the Southwestern states has been greatly influenced by Mexican and Spanish music. Some contemporary musicians of Mexico are listed, along with their contributions, in "Mexico: A Story of Three Cultures" (Américas volume 30 number 3, March, 1978). Traditional Mexican music is discussed in Muchas Facetas de México by Jane Burnett.
- ____1. Students can learn some songs and sing them for their classmates.
- ____2. Students can also trace the Spanish or Mexican roots of the music, and compare and contrast the different versions of songs.
- ____3. For today's students, an examination of contemporary music (for example rock and roll or country and western) would show them that what we listen to and like in Connecticut is not necessarily the same all over the country.
- ____4. They could also compare and contrast the music of the Southwest with salsa (a mixture of Caribbean, African, Spanish, and Indian rhythms which is very popular in the Puerto Rican communities).
- k. Mexican and Spanish dances are a colorful reminder of the Hispanic heritage in the Southwestern states. The students can learn some traditional dances and perform them for others. Two good sources are Frances Toor's A Treasury of Mexican Folkways and Edith Johnston's Regional Dances of Mexico. Johnston's book is accompanied by a tape containing the music. Her book illustrates the costumes and the steps necessary for performing the dances (see Reading List for Teachers).

Objective 3
To use students' knowledge of early Mexican and American History to form opinions about life today in the Southwest.

Strategies
To fulfill this objective, the students must have a well-developed knowledge of the history of the Southwest from pre-historic times, through the growth of the Spanish colonies, to the Spanish-American War in 1898, when the Spanish finally ceded the last of their territories to the United States. It wasn't until the Twentieth Century, through the revolutions in Central America and South America, that the Spanish were finally expelled from this hemisphere.
- a. Students can study the Toltec, Mayan, and Aztec cultures of Mexico: their religion, their gods and goddesses, their customs, crafts, sports and games, legends, and food. Frances Toor's A Treasury of Mexican Folkways is highly recommended for this topic.
- b. The students can make maps of the Spanish territories of the New World, and of the Spanish-named cities and states of the United States today. These maps can be illustrated with monuments or symbols of important sites.
- c. A time line of important events in the history of the American Southwest is another student project. It can be elaborate or simple. It should start with the first settlers of pre-historic times, and end in 1984. This can be an exciting activity for the whole class, working in groups, each concentrating on a different era.

Objective 4
To explore the life of Mexican-Americans today in established communities of the Southwest; to determine to what extent their lives are still influenced by their ancestry, and if the contact with the Anglo culture has changed their way of life.

Strategies
Using current periodicals and books such as Bless Me, Ultima by Rudolfo Anaya, the students will realize that the life of Mexican-Americans today is intensely tied to that of their ancestors. They will study the chicano political movements with insight gained from their work. That is to say, they will know that the Mexican-Americans are not immigrants to the United States. They are original residents, just like the American Indians.
- a. Students can debate current issues.
- b. The students can write an analysis of Chicano politics.
- c. The students can write a report or develop a skit about a contemporary Mexican-American.

There is still a necessity for a background study of the contributions of Cubans, Dominicans (from Santo Domingo), and Puerto Ricans to the culture of the United States.

Famous people (a partial list):

Antonio de Mendoza
Fray Marcos de Niza
Juan de Tolosa
Felipe de Neve
José de Gálvez
Teodoro de Croix
Diego de Vargas
José María Tornel
David G. Burnet
General Mora y Villamil
Octaviano A. Larrazolo
Alvar Nuñez Cabeza de Vaca
Francisco Vásquez de Coronado
Don Juan de Oñate
Gaspar de Portolá
Juan Bautista de Anza
Antonio María de Martínez
Stephen F. Austin
Austin, General Manuel Mier y Terán, Father Miguel Muldoon, Santa Ana (Anna), James K. Polk, General Winfield Scott, Manuel de la Peña y Peña, Ezequiel de Baca, Colonel José Francisco Chávez, Francisco I. Madero, Ponce de León (inlet), Sangre de Cristo (Mts.), Puerto de Luna, Mesa de Maya, San Juan River, Rancho Santa Fe, San Juan Capistrano, Santa Cruz Island, San Miguel Island, Santa Rosa Island, Santa Catalina Island, San Nicolas Island, San Clemente Island, Corona del Mar, San Luis Obispo *Spanish names for these states Write for information and pictures showing the Spanish or Mexican cultural heritage. Ask for specific cities, too. - 1. Yale-New Haven Teachers Institute - 53 Wall Street - New Haven, Connecticut 06520 - All of the Américas magazine articles cited in the unit are on file in the Institute Office. - 2. Mexican Consulate - New York, New York - Write for information and pictures of Old Mexico and its territories which are now in the United States. - 3. State Chamber of Commerce or Department of Tourism - Austin, Texas - Denver, Colorado - Phoenix, Arizona - Baton Rouge, Louisiana - Santa Fe, New Mexico - Carson City, Nevada - Helena, Montana - Tallahassee, Florida - Sacramento, California Christmas customs in all Spanish-speaking countries and New Mexico. A good, concise source of information. Burma, John H. Mexican-Americans in the United States: A Reader (Cambridge, Massachusetts: Schenkman Publishing Company, Inc.), 1970. One of many sociological studies. Of particular interest is the article entitled “The Family” by Arthur J. Rubel, which is taken from Across the Tracks (Austin: University of Texas Press, 1966), Chapter 3. It concerns a fictional town, Mexiquito, which is supposed to typify Mexican-American family values. Some good insights. Burnett, Jane. Muchas Facetas de México (Lincolnwood, Ill.: National Textbook Company). A good textbook for upper-level Spanish courses. It contains much information about Mexico today. Hunter, C. Bruce. A Guide to Ancient Maya Ruins (Norman: University of Oklahoma Press), 1974. Rather technical information for the non-art major, but it contains good photographs of monuments, statues, etc. Johnston, Edith. Regional Dances of Mexico. (Lincolnwood, Illinois: National Textbook Company). An excellent resource! A tape of music for the dances can be purchased separately. The book contains instructions for doing the dances, some background information as to the purpose of the dances, and ideas about costumes. Kennedy, Diana. The Cuisines of Mexico (New York: Harper and Row Publishers), 1972. This cookbook is full of authentic recipes as well as information about the different foods. It also contains a list of addresses where one may purchase authentic ingredients. Lamb, Ruth S. Mexican-Americans: Sons of the Southwest (Claremont, California: Ocelot Press), 1970. An excellent source of information. Detailed historical information from pre-historic times, the Toltecs, Mayas, and Aztecs, through the conquistadors, the Spanish colonies, the annexation of Texas, to the present. It becomes a sociological treatise when it discusses the Mexican-Americans today. Moore, Joan W. Mexican Americans, Ethnic Groups in American Life Series (Englewood Cliffs, New Jersey: Prentice Hall, Inc.), 1970. A dry sociological study. Good information about the history of Mexican-Americans, and the family. Reed, Alma M. The Ancient Past of Mexico (New York: Crown Publishers, Inc.), 1966. This book discusses all the monumental cities in Mexico.
There are good illustrations of monuments and some gods. Toor, Frances. A Treasury of Mexican Folkways (New York: Crown Publishers, Inc.), 1964. An excellent source! This book contains information about Mexican worship, ceremonies, customs, holidays, music, dance, songs and translations of myths. There are also many useful illustrations. A collection of legends from Latin America. The five Mexican legends are interesting and appeal to students. Barlow, Genevieve and William Stivers. Leyendas Mexicanas (Lincolnwood, Illinois: National Textbook Company). A good selection of Mexican legends. Some are religious in nature, while others explain natural phenomena. Boggs, Stanley H. “Pre-Maya Costumes and Coiffures” Américas (Washington, D.C.: General Secretariat of the Organization of American States), Volume 25 Number 2, February, 1973, pp. 19-24. Interesting information and good photographs. Brady, Agnes M. and Domingo Ricart. Dos Aventureros: De Soto y Coronado (Lincolnwood, Illinois: National Textbook Company). The story of two Spanish conquistadores related in an exciting manner. Burnett, Jane. Muchas Facetas de México (Lincolnwood, Ill.: National Textbook Company). A good book to read about Mexico. Calvimontes, Raul. “Folk Arts Through the Ages” Américas (Washington, D.C.: General Secretariat of the Organization of American States), Volume 25 Numbers 11-12, November-December, 1973, pp. S-8-S-24. A good general introduction to the subject of folk art. Casellas, Roberto. “Confederate Colonists in Mexico” Américas (Washington, D.C.: General Secretariat of the Organization of American States), Volume 27 Number 9, September, 1975, pp. 8-15. An interesting sidelight of the Civil War in the United States. Colina, Rafael de la. “Reevaluating the Discovery” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 26 No. 10, October, 1974, pp. 2-4. A discussion of the rights and wrongs committed during the conquest of the New World. Cuéllar, Elizabeth Snoddy. “Mexico’s Many Costumes” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 28 No. 5, May, 1976, pp. 5-13. Some excellent photographs of Mexican costumes. Gannon, Francis X. “Latin America and Europe Today” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 29 No. 10, October, 1977, p. 29. Discusses the interaction between the countries of Europe and those of Latin America. González-Hontoria de Alvarez Romero, Guadalupe. “Aztec Featherwork” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 No. 1, January, 1973, pp. 13-18. Good photographs of an interesting folk art. Hancock de Sandoval, Judith. “The Painted Ceiling of Tupátero” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 Nos. 6-7, June-July, 1973, pp. 2-11. An interesting article about the decorations of a Catholic church. Hogan, Brother Lawrence. “A Latin-American Chapel in Potomac” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 No. 1, January, 1973, pp. 25-28. The churches the Spanish built in their colonies are shown at their best in this chapel. Holmgren, Virginia C. “Beasts from the New World” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 Nos. 11-12, November-December, 1973, pp. 7-15. An interesting article depicting the animals in the New World. Joya, Joseph John. “Hispanic America and U.S. Independence” Américas (Washington, D.C.: General Secretariat of the O.A.S.) Vol. 28 Nos. 6-7, June-July, 1976, pp. 5-12.
This article details how the Revolutionary War affected other Americans. Kelemen, Pál. “Proud, Lonely Churches” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 28 No. 2, February, 1976, pp. 2-11. An interesting description of Spanish churches in Mexico. “Mexico: A Story of Three Cultures” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 30 No. 3, March, 1978, pp. 17-40. An excellent treatment of background information on Mexico. Peredo, Miguel Guzmán. “Exploring the Sacred Well” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 26 No. 8, August, 1974, pp. 17-23. A study of underwater archeology in Mexico. Pietri, Arturo Uslar. “The World the Europeans Called New” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 28 No. 4, April, 1976, pp. 9-16. An interesting description of the Americas. Piñeiro, Armando Alonso. “When Argentina Conquered California” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 29 Nos. 6-7, June-July, 1977, pp. 34-37. In the nineteenth century, the Argentinians wanted territory in the Pacific. On their way, they captured Monterey in California. Profiles from Mexican History (Lincolnwood, Illinois: National Textbook Company). Individual books with the biographies of: Lázaro Cárdenas, Porfirio Díaz, Hidalgo, Juárez, Francisco I. Madero, Moctezuma, Morelos, Francisco Villa, Zapata. Reber, James Q., photographer. “Mexican Portfolio: Landscape, Art, People” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 27 No. 5, May, 1975. A great collection of photographs! Roberts, Estelle Caloia. “Los Cuatro: Mexico’s Majestic Artists” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 29 Nos. 6-7, June-July, 1977, pp. 38-45. Mexico’s four greatest artists and their works are presented with many photographs. Schele, Linda. “Space and Life Style: A Maya Answer” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 No. 5, May, 1973, pp. 33-39. An interesting approach to Mayan architecture. Soper, Cherrie L. and Montserrat Blanch de Alcolea. “Extremadura: Cradle of Conquistadors” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 29 No. 10, October, 1977, pp. 34-40. An interesting account of the origins of the Spanish explorers. Taylor, Herb, Jr. “The Missions of San Antonio” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 24 No. 10, October, 1972, pp. 28-31. An intriguing look at the Spanish missions. Terán, Francisco. “The Conquistadors’ Ladies” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 28 No. 2, February, 1976, pp. 12-18. An account of the aid given to the Conquistadors by Indian women. Vásquez de Acuña, Isidoro. “The Kingdom of the XV Islands” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 25 No. 1, January, 1973, pp. 2-6. And “An Enduring Heritage” Américas Vol. 29 No. 10, October, 1977, pp. 30-33. Details of the Spanish Conquest. Whitaker, Irwin and Emily. “Contemporary Mexican Pottery” Américas (Washington, D.C.: General Secretariat of the O.A.S.), Vol. 26 No. 8, August, 1974, pp. S-1-S-16. Good photographs of Mexican pottery.
http://www.yale.edu/ynhti/curriculum/units/1984/3/84.03.01.x.html
Algebra II End-of-Course Exam Content Standards—Core: Operations on Numbers and Expressions (Priority: 15%) Successful students will be able to perform operations with rational, real, and complex numbers, using both numeric and algebraic expressions, including expressions involving exponents and roots. There are a variety of types of test items including some that cut across the objectives in this standard and require students to make connections and, where appropriate, solve contextual problems. O1. Real numbers a. Convert between and among radical and exponential forms of numerical expressions. - Convert between expressions involving rational exponents and those involving roots and integral powers. b. Simplify and perform operations on numerical expressions containing radicals. - Convert radicals to alternate forms and use the understanding of this conversion to perform calculations with numbers in radical form. c. Apply the laws of exponents to numerical expressions with rational and negative exponents to order and rewrite them in alternative forms. O2. Complex numbers a. Represent complex numbers in the form a + bi, where a and b are real; simplify powers of pure imaginary numbers. - Every real number, a, is a complex number because it can be expressed as a + 0i. - Represent the square root of a negative number in the form bi, where b is real; simplify powers of pure imaginary numbers. Example: i^5 = i^4 · i = i b. Perform operations on the set of complex numbers. O3. Algebraic expressions a. Convert between and among radical and exponential forms of algebraic expressions. b. Simplify and perform operations on radical algebraic expressions. c. Apply the laws of exponents to algebraic expressions, including those involving rational and negative exponents, to order and rewrite them in alternative forms. Example: a^4 · a^3 = a^(4+3) = a^7; a^4 / a^3 = a^(4-3) = a; (a^4)^3 = a^(4·3) = a^12 Example: (a^3 · b^5)^2 = a^6 · b^10 d. Perform operations on polynomial expressions. - Limit to at most multiplication of a binomial by a trinomial. - For division limit the divisor to a linear or factorable quadratic polynomial. - Division may be performed using factoring. e. Perform operations on rational expressions, including complex fractions. - These expressions should be limited to linear and factorable quadratic expressions. - Complex fractions should be limited to simple fractions in numerators and denominators. f. Identify or write equivalent algebraic expressions in one or more variables to extract information. Example: The expression, C + 0.07C, represents the cost of an item plus sales tax, while 1.07C is an equivalent expression that can be used to simplify calculations of the total cost.
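To make these standards concrete, here is a minimal sketch in Python that checks the exponent laws and complex-number facts above numerically. The sample values (a = 5.0 and the two complex factors) are illustrative assumptions, not taken from the standards document.

import math

# O1a/O1c: radical and exponential forms, rational and negative exponents.
assert math.isclose(8 ** (1/3), 2)     # cube root of 8 written as 8^(1/3)
assert 2 ** -3 == 1 / 2 ** 3           # a negative exponent means a reciprocal

# O3c: laws of exponents, checked numerically for a = 5.0.
a = 5.0
assert a**4 * a**3 == a**(4 + 3)       # product law
assert a**4 / a**3 == a**(4 - 3)       # quotient law
assert (a**4) ** 3 == a**(4 * 3)       # power law

# O2: complex numbers a + bi (Python writes i as 1j); powers of i cycle
# with period 4, so i^5 = i.
assert 1j ** 5 == 1j
print((3 + 2j) * (1 - 4j))             # operations on complex numbers: (11-10j)

All the assertions pass, and the final line prints (11-10j), the product of the two sample complex numbers.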
http://www.utdanacenter.org/k12mathbenchmarks/alg2eoc/core_operations.php
In this example, a rational function with one vertical asymptote at a value of x greater than zero is graphed. Review of graphs of rational functions: A rational function can be written as the ratio of two functions, f(x) and g(x). Rational functions are usually written in this form: y = f(x)/g(x) Since division by zero is undefined, the denominator g(x) of a rational function written in the form shown above cannot equal zero. In fact, the graphs of rational functions have a characteristic shape around the values of x where g(x) equals zero. The graphs of rational functions have vertical asymptotes at the values of x where g(x) equals zero. A vertical boundary line marks the place where the graph of the rational function approaches, but never reaches, this value of x. In this set of Math Tutorials we analyze the rational functions, identify the vertical asymptotes, and then graph the function. You should use a graphing calculator to graph the rational function and identify the asymptotes; a short symbolic sketch also follows the tutorial list below. Learn More About Math Tutorials The library of Math Tutorials is a comprehensive collection of worked-out solutions to common math problems. This overcomes a common limitation of most textbooks: the handful of worked-out examples for a given concept. We provide the full array of examples and solutions, allowing students to identify patterns among the solutions, in order to aid concept retention. We also have quizzes for many of these topics. Our current inventory of Math Examples includes: - Math Tutorial: Examples Using Algebra Tiles - Math Tutorial: Solving Equations in One Variable - Math Tutorial: Solving Equations with Fractions - Math Tutorial: Solving Equations with Percents - Math Tutorial: Slope formula - Math Tutorial: Midpoint formula - Math Tutorial: Distance formula - Math Tutorial: Graphing linear functions, given m and b - Math Tutorial: Graphing absolute value functions - Math Tutorial: Graphing linear inequalities - Math Tutorial: Using the Point-Slope form - Math Tutorial: Finding the equation of a line given two points - Math Tutorial: Graphing parallel and perpendicular lines - Math Tutorial: Solving quadratics graphically - Math Tutorial: Solving quadratics by completing the square - Math Tutorial: Factoring Quadratics - Math Tutorial: Polynomial Expansion - Math Tutorial: Solving quadratics using the quadratic formula - Math Tutorial: Using FOIL - Math Tutorial: Graphs of Exponential Functions - Math Tutorial: Laws of Exponents - Math Tutorial: Graphs of Logarithmic Functions - Math Tutorial: Graphs of Rational Functions - Math Tutorial: Combining Rational Expressions - Math Tutorial: Graphs of Conic Sections
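As promised above, here is a minimal sketch in Python (using the sympy library) that locates a vertical asymptote symbolically. The sample function y = 1/(x - 2) is an invented stand-in, since the tutorial's own function is not reproduced in this text.

import sympy as sp

x = sp.symbols('x')
f = sp.Integer(1)      # numerator f(x), assumed for illustration
g = x - 2              # denominator g(x), assumed for illustration

# Vertical asymptotes occur where g(x) = 0 (provided f does not also vanish there).
asymptotes = [r for r in sp.solve(sp.Eq(g, 0), x) if f.subs(x, r) != 0]
print("vertical asymptote(s) at x =", asymptotes)    # -> [2]

# Sampling near x = 2 shows the characteristic shape: y blows up on each side.
for xv in [1.9, 1.99, 2.01, 2.1]:
    print(xv, float((f / g).subs(x, xv)))

The sampled values change sign and grow without bound as x approaches 2 from either side, which is exactly the behavior the vertical boundary line marks on the graph.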
http://media4math.com/Examples/GraphingRationalFunctions/GraphingRationalFunctions8.html
Seneca Falls Declaration (1848) One of the reform movements that arose during the "freedom's ferment" of the early nineteenth century was a drive for greater rights for women, especially in the political area. Women were heavily involved in many of the reform movements of this time, but they discovered that while they did much of the drudge work, with few exceptions (such as Dorothea Dix) they could not take leadership roles or lobby openly for their goals. Politically, women were to be neither seen nor heard. The drudgery of daily housework and its deadening impact on the mind also struck some women as unfair. The convention at Seneca Falls, New York, in July 1848, was organized by Lucretia Mott and Elizabeth Cady Stanton, two Quakers whose concern for women's rights was aroused when Mott, as a woman, was denied a seat at an international antislavery meeting in London. The Seneca Falls meeting attracted 240 sympathizers, including forty men, among them the famed former slave and abolitionist leader, Frederick Douglass. The delegates adopted a statement, deliberately modeled on the Declaration of Independence, as well as a series of resolutions calling for women's suffrage and the reform of marital and property laws that kept women in an inferior status. Very little in the way of progress came from the Seneca Falls Declaration, although it would serve for the next seventy years as the goal for which the suffrage movement strove. Women's suffrage and nearly all of the other reforms of this era were swallowed up by the single issue of slavery and its abolition, and women did not receive the right to vote until the adoption of the Nineteenth Amendment to the Constitution in 1920. For further reading: Ellen C. DuBois, Feminism and Suffrage (1978); Eleanor Flexner, Century of Struggle (rev. ed. 1975); and Lois W. Banner, Elizabeth Cady Stanton (1980). Seneca Falls Declaration When, in the course of human events, it becomes necessary for one portion of the family of man to assume among the people of the earth a position different from that which they have hitherto occupied, but one to which the laws of nature and of nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes that impel them to such a course. We hold these truths to be self-evident: that all men and women are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness; that to secure these rights governments are instituted, deriving their just powers from the consent of the governed. Whenever any form of government becomes destructive of these ends, it is the right of those who suffer from it to refuse allegiance to it, and to insist upon the institution of a new government, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness. Prudence, indeed, will dictate that governments long established should not be changed for light and transient causes; and accordingly all experience hath shown that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they were accustomed. But when a long train of abuses and usurpations, pursuing invariably the same object, evinces a design to reduce them under absolute despotism, it is their duty to throw off such government, and to provide new guards for their future security.
Such has been the patient sufferance of the women under this government, and such is now the necessity which constrains them to demand the equal station to which they are entitled. The history of mankind is a history of repeated injuries and usurpations on the part of man toward woman, having in direct object the establishment of an absolute tyranny over her. To prove this, let facts be submitted to a candid world. He has never permitted her to exercise her inalienable right to the elective franchise. He has compelled her to submit to laws, in the formation of which she had no voice. He has withheld from her rights which are given to the most ignorant and degraded men--both natives and foreigners. Having deprived her of this first right of a citizen, the elective franchise, thereby leaving her without representation in the halls of legislation, he has oppressed her on all sides. He has made her, if married, in the eye of the law, civilly dead. He has taken from her all right in property, even to the wages she earns. He has made her, morally, an irresponsible being, as she can commit many crimes with impunity, provided they be done in the presence of her husband. In the covenant of marriage, she is compelled to promise obedience to her husband, he becoming to all intents and purposes, her master--the law giving him power to deprive her of her liberty, and to administer chastisement. He has so framed the laws of divorce, as to what shall be the proper causes, and in case of separation, to whom the guardianship of the children shall be given, as to be wholly regardless of the happiness of women--the law, in all cases, going upon a false supposition of the supremacy of man, and giving all power into his hands. After depriving her of all rights as a married woman, if single, and the owner of property, he has taxed her to support a government which recognizes her only when her property can be made profitable to it. He has monopolized nearly all the profitable employments, and from those she is permitted to follow, she receives but a scanty remuneration. He closes against her all the avenues to wealth and distinction which he considers most honorable to himself. As a teacher of theology, medicine, or law, she is not known. He has denied her the facilities for obtaining a thorough education, all colleges being closed against her. He allows her in Church, as well as State, but a subordinate position, claiming Apostolic authority for her exclusion from the ministry, and, with some exceptions, from any public participation in the affairs of the Church. He has created a false public sentiment by giving to the world a different code of morals for men and women, by which moral delinquencies which exclude women from society, are not only tolerated, but deemed of little account in man. He has usurped the prerogative of Jehovah himself, claiming it as his right to assign for her a sphere of action, when that belongs to her conscience and to her God. He has endeavored, in every way that he could, to destroy her confidence in her own powers, to lessen her self-respect, and to make her willing to lead a dependent and abject life. 
Now, in view of this entire disfranchisement of one-half the people of this country, their social and religious degradation--in view of the unjust laws above mentioned, and because women do feel themselves aggrieved, oppressed, and fraudulently deprived of their most sacred rights, we insist that they have immediate admission to all the rights and privileges which belong to them as citizens of the United States. In entering upon the great work before us, we anticipate no small amount of misconception, misrepresentation, and ridicule; but we shall use every instrumentality within our power to effect our object. We shall employ agents, circulate tracts, petition the State and National legislatures, and endeavor to enlist the pulpit and the press in our behalf. We hope this Convention will be followed by a series of Conventions embracing every part of the country. Source: E.C. Stanton, S.B. Anthony and M.J. Gage, eds., History of Woman Suffrage, vol. 1 (1887), 70.
http://usa.usembassy.de/etexts/democrac/17.htm
Even though many students have mastered basic listening and speaking skills, some students are much more effective in their oral communication than others. And those who are more effective communicators experience more success in school and in other areas of their lives. The skills that can make the difference between minimal and effective communication can be taught, practiced, and improved. The method used for assessing oral communication skills depends on the purpose of the assessment. A method that is appropriate for giving feedback to students who are learning a new skill is not appropriate for evaluating students at the end of a course. However, any assessment method should adhere to the measurement principles of reliability, validity, and fairness. The instrument must be accurate and consistent, it must represent the abilities we wish to measure, and it must operate in the same way with a wide range of students. The concerns of measurement, as they relate to oral communication, are highlighted below. Detailed discussions of speaking and listening assessment may be found in Powers (1984), Rubin and Mead (1984), and Stiggins (1981). HOW ARE ORAL COMMUNICATION AND LISTENING DEFINED? Defining the domain of knowledge, skills, or attitudes to be measured is at the core of any assessment. Most people define oral communication narrowly, focusing on speaking and listening skills separately. Traditionally, when people describe speaking skills, they do so in a context of public speaking. Recently, however, definitions of speaking have been expanded (Brown 1981). One trend has been to focus on communication activities that reflect a variety of settings: one-to-many, small group, one-to-one, and mass media. Another approach has been to focus on using communication to achieve specific purposes: to inform, to persuade, and to solve problems. A third trend has been to focus on basic competencies needed for everyday life -- for example, giving directions, asking for information, or providing basic information in an emergency situation. The latter approach has been taken in the Speech Communication Association's guidelines for elementary and secondary students. Many of these broader views stress that oral communication is an interactive process in which an individual alternately takes the roles of speaker and listener, and which includes both verbal and nonverbal components. Listening, like reading comprehension, is usually defined as a receptive skill comprising both a physical process and an interpretive, analytical process. (See Lundsteen 1979 for a discussion of listening.) However, this definition is often expanded to include critical listening skills (higher-order skills such as analysis and synthesis) and nonverbal listening (comprehending the meaning of tone of voice, facial expressions, gestures, and other nonverbal cues.) The expanded definition of listening also emphasizes the relationship between listening and speaking. HOW ARE SPEAKING SKILLS ASSESSED? Two methods are used for assessing speaking skills. In the observational approach, the student's behavior is observed and assessed unobtrusively. In the structured approach, the student is asked to perform one or more specific oral communication tasks. His or her performance on the task is then evaluated. The task can be administered in a one-on-one setting -- with the test administrator and one student -- or in a group or class setting. 
In either setting, students should feel that they are communicating meaningful content to a real audience. Tasks should focus on topics that all students can easily talk about, or, if they do not include such a focus, students should be given an opportunity to collect information on the topic. Both observational and structured approaches use a variety of rating systems. A holistic rating captures a general impression of the student's performance. A primary trait score assesses the student's ability to achieve a specific communication purpose -- for example, to persuade the listener to adopt a certain point of view. Analytic scales capture the student's performance on various aspects of communication, such as delivery, organization, content, and language. Rating systems may describe varying degrees of competence along a scale or may indicate the presence or absence of a characteristic. A major aspect of any rating system is rater objectivity: Is the rater applying the scoring criteria accurately and consistently to all students across time? The reliability of raters should be established during their training and checked during administration or scoring of the assessment. If ratings are made on the spot, two raters will be required for some administrations. If ratings are recorded for later scoring, double scoring will be needed. (A small worked example of checking agreement between two raters appears at the end of this section.) HOW ARE LISTENING SKILLS ASSESSED? Listening tests typically resemble reading comprehension tests except that the student listens to a passage instead of reading it. The student then answers multiple-choice questions that address various levels of literal and inferential comprehension. Important elements in all listening tests are (1) the listening stimuli, (2) the questions, and (3) the test environment. The listening stimuli should represent typical oral language, and not consist simply of the oral reading of passages designed as written material. The material should model the language that students might typically be expected to hear in the classroom, in various media, or in conversations. Since listening performance is strongly influenced by motivation and memory, the passages should be interesting and relatively short. To ensure fairness, topics should be grounded in experience common to all students, irrespective of sex and geographic, socioeconomic, or racial/ethnic background. In regard to questions, multiple-choice items should focus on the most important aspects of the passage -- not trivial details -- and should measure skills from a particular domain. Answers designated as correct should be derived from the passage, without reliance on the student's prior knowledge or experience. Questions and response choices should meet accepted psychometric standards for multiple-choice questions. An alternative to the multiple-choice test is a performance test that requires students to select a picture or actually perform a task based on oral instruction. For example, students might hear a description of several geometric figures and choose pictures that match the description, or they might be given a map and instructed to trace a route that is described orally. The testing environment for listening assessment should be free of external distractions. If stimuli are presented from a tape, the sound quality should be excellent. If stimuli are presented by a test administrator, the material should be presented clearly, with appropriate volume and rate of speaking.
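The digest does not prescribe a particular statistic for the rater-objectivity check described above, so the following is only a minimal sketch in Python: two hypothetical raters double-score the same ten speeches on a 1-4 holistic scale, and Cohen's kappa corrects their raw agreement for agreement expected by chance. All scores are invented for illustration.

from collections import Counter

rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]   # hypothetical holistic ratings
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability both raters assign category k, summed over k.
pa, pb = Counter(rater_a), Counter(rater_b)
expected = sum(pa[k] / n * pb[k] / n for k in pa)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")

For these invented scores the raters agree on 8 of 10 speeches (0.80), but kappa is about 0.71 once chance agreement is removed, which is why a chance-corrected index is preferred when establishing rater reliability.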
HOW SHOULD ASSESSMENT INSTRUMENTS BE SELECTED OR DESIGNED? Identifying an appropriate instrument depends upon the purpose for assessment and the availability of existing instruments. If the purpose is to assess a specific set of skills -- for instance, diagnosing strengths and weaknesses or assessing mastery of an objective -- the test should match those skills. If appropriate tests are not available, it makes sense to design an assessment instrument to reflect specific needs. If the purpose is to assess communication broadly, as in evaluating a new program or assessing district goals, the test should measure progress over time and, if possible, describe that progress in terms of external norms, such as national or state norms. In this case, it is useful to seek out a pertinent test that has undergone careful development, validation, and norming, even if it does not exactly match the local program. Several reviews of oral communication tests are available (Rubin and Mead 1984). The Speech Communication Association has compiled a set of RESOURCES FOR ASSESSMENT IN COMMUNICATION, which includes standards for effective oral communication programs, criteria for evaluating instruments, procedures for assessing speaking and listening, an annotated bibliography, and a list of consultants. The abilities to listen critically and to express oneself clearly and effectively contribute to a student's success in school and later in life. Teachers concerned with developing the speaking and listening communication skills of their students need methods for assessing their students' progress. These techniques range from observation and questioning to standardized testing. However, even the most informal methods should embrace the measurement principles of reliability, validity, and fairness. The methods used should be appropriate to the purpose of the assessment and make use of the best instruments and procedures available. FOR MORE INFORMATION Brown, Kenneth L. TEACHING, SPEAKING AND LISTENING SKILLS IN THE ELEMENTARY AND SECONDARY SCHOOL. Boston, MA: Massachusetts Department of Education, 1981. ED 234 440. Lundsteen, Sara W. LISTENING: ITS IMPACT ON READING AND THE OTHER LANGUAGE ARTS. Revised ed. Urbana, IL: National Council of Teachers of English and the ERIC Clearinghouse on Reading and Communication Skills, 1979. ED 169 537. Powers, Donald E. CONSIDERATIONS FOR DEVELOPING MEASURES OF SPEAKING AND LISTENING. New York: College Entrance Examination Board, 1984. Rubin, Don L., and Mead, Nancy A. LARGE SCALE ASSESSMENT OF ORAL COMMUNICATION SKILLS: KINDERGARTEN THROUGH GRADE 12. Annandale, VA: Speech Communication Association and the ERIC Clearinghouse on Reading and Communication Skills, 1984. ED 245 293. Speech Communication Association. RESOURCES FOR ASSESSMENT IN COMMUNICATION. Annandale, VA: Speech Communication Association, 1984. SCA Guidelines: ESSENTIAL SPEAKING AND LISTENING SKILLS FOR ELEMENTARY SCHOOL STUDENTS (6th GRADE LEVEL). Annandale, VA: Speech Communication Association. (Pamphlet, 1984). SCA Guidelines: SPEAKING AND LISTENING COMPETENCIES FOR HIGH SCHOOL GRADUATES. Annandale, VA: Speech Communication Association. (Pamphlet, 1984). Stiggins, Richard J., ed. PERSPECTIVES ON THE ASSESSMENT OF SPEAKING AND LISTENING SKILLS FOR THE 1980s. Portland, OR: Clearinghouse for Applied Performance Testing, Northwest Regional Educational Laboratory, 1981. ED 210 748.
http://www.ericdigests.org/pre-923/speaking.htm
A new study conducted by the National Institutes of Health (NIH) found certain brain functions are enhanced in teens who are fluent in more than one language, particularly those functions that enable teens to determine the relevance and irrelevance of noises around them. About 1 in 5 children nationwide speak a language other than English at home. Children who grow up learning to speak 2 languages tend to learn English words and grammar more slowly than those who speak only English. But studies have found that bilingual children tend to be better than monolingual children at multitasking. They are also better at focusing their attention—for example, homing in on a voice in a noisy school cafeteria. The researchers studied 48 incoming first-year high school students, 23 of whom were proficient in both Spanish and English. The researchers played the speech syllable “da” to the teens, using electrodes to record the intensity of their auditory brainstem response. Bilinguals showed a larger response than monolinguals. When the sound was played with a background of babble, monolingual teens had a less intense response than when it was played alone. In contrast, bilinguals showed virtually identical responses with and without the background babble. In another experiment, the teens were given a selective attention test in which they were asked to click a mouse when a 1, but not a 2, was seen or heard. The test involved 500 trials of 1 or 2 seconds each over a period of 20 minutes. The bilingual teens outperformed the monolingual teens on this test. The researchers concluded that these findings suggest the bilingual experience may help improve selective attention by enhancing the auditory brainstem response. Bilingual students showed a natural ability to determine which sounds were important, and then focus on relevant sounds while discounting the irrelevant. Read more: National Institutes of Health
http://thefeministwire.com/2012/05/new-study-shows-benefits-of-bilingualism-in-teens/
How People Hear Hearing is a series of events in which the ear converts sound waves into electrical signals and causes nerve impulses to be sent to the brain, where they are interpreted as sound. The ear has three main parts: the outer ear, the middle ear, and the inner ear. Sound waves enter through the outer ear and reach the middle ear, where they cause the eardrum to vibrate. The vibrations are transmitted through three tiny bones in the middle ear called the ossicles. These three bones are named the malleus, incus, and stapes (and are also known as the hammer, anvil, and stirrup). The eardrum and ossicles amplify the vibrations and carry them to the inner ear. The stirrup transmits the amplified vibrations through the oval window and into the fluid that fills the inner ear. The vibrations move through fluid in the snail-shaped hearing part of the inner ear (cochlea) that contains the hair cells. The fluid in the cochlea moves the top portion of the hair cells, called the hair bundle, which initiates the changes that lead to the production of the nerve impulses. These nerve impulses are carried to the brain, where they are interpreted as sound. Different sounds move the population of hair cells in different ways, thus allowing the brain to distinguish among various sounds, such as different vowel and consonant sounds. Two Types of Hearing Loss 1. Conductive, due to the disruption of the transmission of sound through the outer and/or middle ear. Otitis media (chronic and acute) is the main cause of conductive hearing loss for children. 2. Sensorineural, due to sensory or nerve damage in the inner ear, auditory nerve, or auditory cortex of the brain. Noise -- explosions, rock music, heavy machinery -- is the main cause of sensorineural hearing loss. When both types occur together, the loss is called mixed, and it may arise from an infection or from surgery. People who were born deaf generally do not identify themselves as people with hearing loss; in other words, they never actually lost their hearing after they were born. A cochlear implant is a small, complex electronic device that can help to provide a sense of sound to a person who is profoundly deaf. The implant is surgically placed under the skin behind the ear. An implant has four basic parts: (1) a microphone, which picks up sound from the environment; (2) a speech processor, which selects and arranges sounds picked up by the microphone; (3) a transmitter and receiver/stimulator, which receives signals from the speech processor and converts them into electric impulses; and (4) electrodes, which collect the impulses from the stimulator and send them to the brain. An implant does not restore or create normal hearing. Instead, under the appropriate conditions, it can give a deaf person a useful auditory understanding of the environment and help her or him to understand speech. A cochlear implant is very different from a hearing aid. Hearing aids amplify sound. Cochlear implants compensate for damaged or non-working parts of the inner ear. Source: National Institute on Deafness and Other Communication Disorders, United States http://www.nidcd.nih.gov/ Keywords: Hearing, Hearing Loss, Cochlear Implant Annual Deaf Event: May is Better Hearing and Speech Month
http://www.folda.net/hearing/index.html
- Write sports articles while applying formulas, percentages, probability, and statistics - Create and then convert chances of bad weather from percent to decimal on the weather page - Combine literacy with math to write creative news stories that transform math concepts into characters and events (see samples page for good examples) - Develop a weather page noting the different types of clouds, storms, and other phenomena - Create a newspaper that focuses just on recent science discoveries and technologies - Write scientist biographies - Students become field reporters, writing articles about significant experiments at the science fair - Teach students the 5 Ws used in newspaper writing, after which students write their own articles employing the same style - Learn what makes effective biographies (relevant information vs trivial details) and then write one for the obituaries section - Create a monthly class newspaper to communicate with parents & students - Book reviews in a variety of genres - Author studies - Chapter summaries (which can be effective assessment) - Create a newspaper depicting one story (e.g. Romeo & Juliet could have feature stories about the major events, a wedding announcement, crossword puzzle with character names, obituary for those who die, etc.) - "facebook" page on literary characters or authors SOCIAL STUDIES - Identify states, regions, cities, and capitals on the U.S. map found on the weather page - Create an entire paper around specific topics or time periods, such as scientific discoveries of the 20th century, culture of the roaring 20s, technology available and used in ancient Egypt, Greece, etc. - Civics classes use two groups, each with its own paper, either from a conservative or liberal point of view (excellent hands-on activity to understand media bias) to write articles - Write front-page stories, based on national and/or local current events - Use the "facebook" template to create a page for leaders in current events. Libya, Egypt, etc. would make great pages. - Use the "facebook" template to create a page on famous persons in history, which could substitute for a biography paper. - Foreign language students write newspaper articles in their 2nd language; beginners should use the 100 series; advanced students use 300
http://www.buildanewspaper.com/COOL_SCHOOL_IDEAS.html
July 21, 2004 Using ESA’s Integral and XMM-Newton observatories, an international team of astronomers has found more evidence that massive black holes are surrounded by a doughnut-shaped gas cloud, called a torus. Depending on our line of sight, the torus can block the view of the black hole in the centre. The team looked ‘edge on’ into this doughnut to see features never before revealed with such clarity. Black holes are objects so compact and with gravity so strong that not even light can escape from them. Scientists think that ‘supermassive’ black holes are located in the cores of most galaxies, including our Milky Way galaxy. They can contain the mass of thousands of millions of suns, confined within a region no larger than our Solar System. They appear to be surrounded by a hot, thin disc of accreting gas and, farther out, the thick doughnut-shaped torus. Depending on the inclination of the torus, it can hide the black hole and the hot accretion disc from the line of sight. Galaxies in which a torus blocks the light from the central accretion disc are called ‘Seyfert 2’ types and are usually faint to optical telescopes. Another theory, however, is that these galaxies appear rather faint because the central black hole is not actively accreting gas and the disc surrounding it is therefore faint. An international team of astronomers led by Dr Volker Beckmann, Goddard Space Flight Center (Greenbelt, USA), has studied one of the nearest objects of this type, a spiral galaxy called NGC 4388, located 65 million light years away in the constellation Virgo. Since NGC 4388 is relatively close, and therefore unusually bright for its class, it is easier to study. Astronomers often study black holes that are aligned face-on, thus avoiding the enshrouding torus. However, Beckmann's group took the path less trodden and studied the central black hole by peering through the torus. With XMM-Newton and Integral, they could detect some of the X-rays and gamma rays, emitted by the accretion disc, which partially penetrate the torus. "By peering right into the torus, we see the black hole phenomenon in a whole new light, or lack of light, as the case may be here," Beckmann said. Beckmann's group saw how different processes around a black hole produce light at different wavelengths. For example, some of the gamma rays produced close to the black hole get absorbed by iron atoms in the torus and are re-emitted at a lower energy. This in fact is how the scientists knew they were seeing ‘reprocessed’ light farther out. Also, because of the line of sight towards NGC 4388, they knew this iron was from a torus on the same plane as the accretion disc, and not from gas clouds ‘above’ or ‘below’ the accretion disc. This new view through the haze has provided valuable insight into the relationship between the black hole, its accretion disc and the doughnut, and supports the torus model in several ways. Gas in the accretion disc close to the black hole reaches high speeds and temperatures (over 100 million degrees, hotter than the Sun) as it races toward the void. The gas radiates predominantly at high energies, in the X-ray wavelengths. According to Beckmann, this light is able to escape the black hole because it is still outside of its border, but ultimately collides with matter in the torus. Some of it is absorbed; some of it is reflected at different wavelengths, like sunlight penetrating a cloud; and the very energetic gamma rays pierce through.
"This torus is not as dense as a real doughnut or a true German Krapfen, but it is far hotter - up to a thousand degrees - and loaded with many more calories," Beckmann said. The new observations also pinpoint the origin of the high-energy emission from NGC 4388. While the lower-energy X-rays seen by XMM-Newton appear to come from a diffuse emission, far away from the black hole, the higher-energy X-rays detected by Integral are directly related to the black hole activity. The team could infer the doughnut’s structure and its distance from the black hole by virtue of light that was either reflected or completely absorbed. The torus itself appears to be several hundred light years from the black hole, although the observation could not gauge its diameter, from inside to outside. The result marks the clearest observation of an obscured black hole in X-ray and gamma-ray `colours’, a span of energy nearly a million times wider than the window of visible light, from red to violet. Multi-wavelength studies are increasingly important to understanding black holes, as already demonstrated earlier this year. In May 2004, the European project known as the Astrophysical Virtual Observatory, in which ESA plays a major role, found 30 supermassive black holes that had previously escaped detection behind masking dust clouds. This result will appear on The Astrophysical Journal. Besides Volker Beckmann, the author list includes Neil Gehrels, Pascal Favre, Roland Walter, Thierry Courvoisier, Pierre-Olivier Petrucci and Julien Malzac. For more information about the Astrophysical Virtual Observatory programme and how it has allowed European scientists to discover a number of previously hidden black holes, see: More about Integral The International Gamma Ray Astrophysics Laboratory (Integral) is the first space observatory that can simultaneously observe celestial objects in gamma rays, X-rays and visible light. Integral was launched on a Russian Proton rocket on 17 October 2002 into a highly elliptical orbit around Earth. Its principal targets include regions of the galaxy where chemical elements are being produced and compact objects, such as black holes. More information on Integral can be found at: More about XMM-Newton XMM-Newton can detect more X-ray sources than any previous observatory and is helping to solve many cosmic mysteries of the violent Universe, from black holes to the formation of galaxies. It was launched on 10 December 1999, using an Ariane-5 rocket from French Guiana. It is expected to return data for a decade. XMM-Newton’s high-tech design uses over 170 wafer-thin cylindrical mirrors spread over three telescopes. Its orbit takes it almost a third of the way to the Moon, so that astronomers can enjoy long, uninterrupted views of celestial objects. More information on XMM-Newton can be found at: Other social bookmarking and sharing tools: Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
http://www.sciencedaily.com/releases/2004/07/040721085717.htm
Grade 4 - grammar Help your child build his grammar skills with a punctuation worksheet, where he'll practice using commas in a list. Adjectives turn ordinary sentences into exciting adventures. Dress up the sentences in this worksheet and transform your writing from plain to fancy! In this worksheet, kids can learn the meaning of many words with suffixes, and get to see firsthand how adding a suffix changes a word's definition, just like that! Got questions about quotation marks? No problem! This worksheet will help your child build his grammar and punctuation skills. Here's an ode to the Negative Nancy in all of us! Your child will help choose a negative adjective to describe some bad musical performers, a great way to build vocabulary. Learn your adverbs quickly, efficiently and happily. English grammar can be tough, but this introductory worksheet will be a big help. What is a preposition? Possibly the most confusing part of speech in English. But this worksheet will be a great introduction. Help your student learn all about writing dialogue and using quotation marks correctly with this introductory worksheet. Have fun with grammar practice and decode these scrambled adjectives, all starting with the letter "A"! This is a great way to flex vocabulary and spelling skills. Help Henry the Hiker add punctuation to his journal entry! Read through the entry and add the correct punctuation marks where needed.
http://www.education.com/collection/dlzkis/grade-4-grammar/
At the same time that scientists were beginning to differentiate and name rock units, the quarrymen working deep in the mines underneath Wren’s Nest would have developed their own naming system for the rocks they encountered. Working by candlelight, the miners based their names on basic features and the look of the rock. Experienced miners would have been able to tell the grade of rock they were mining from one look or perhaps even by feel alone. Murchison lists some of the words used by the miners and quarrymen in his Silurian System: - “Strong hanging stone” - “Top sink” - “Half-yard measure” - “Strong grey measure” - “The flints” - “Silks measures” - “bavin” and “rotch” We can only begin to guess at the meaning of some of these names, but it is not unreasonable to think that “Half-yard measure” would have been a band of limestone 18 inches thick. “Pricking” is described as a way-board of shale (a blasting layer), so it would perhaps have been drilled (pricked) with numerous holes which could be filled with black powder to blast layers of rock away from the walls of the mine. We do know that “ballstone” referred to a dome-shaped mass of pure limestone which would have disrupted the bedding of the rock around it, and could sometimes have been very large, up to 6 metres high by 20 metres wide. We now refer to them as bioherms and we know that they represent the remains of small reefs similar to those found in tropical lagoons today; here the reef-building organisms such as corals and stromatoporoids formed massive calcareous structures which would later be turned into limestone. Today the science of lithostratigraphy deals with the naming of rock layers, and formal names are assigned to these rock units, such as the Upper Quarried Limestone Member.
http://geologymatters.org.uk/2012/05/17/the-language-of-the-quarryman/
Canadian Rocky Mountains The Canadian Rocky Mountains are located along the Alberta/British Columbia provincial border and supply vital source waters to the Saskatchewan River Basin. The mountain snowpack and glaciers serve as a storage facility for water – trapping it as snow in the fall and winter and releasing water throughout the spring and summer melt. Research is needed to study the effects of climate change on cold regions hydrology and model the impacts these changes could have on the Saskatchewan River Basin. The University of Saskatchewan Centre for Hydrology has been conducting research in the Canadian Rockies since 2004 and has a well-established research program in Marmot Creek Research Basin in the Kananaskis Valley. In addition, the Centre for Hydrology has been gathering data on glacier melt and retreat at Peyto Glacier in Banff National Park. The recently awarded Canada Foundation for Innovation Grant for the Canadian Rockies Hydrological Observatory is allowing a substantial expansion of this research, with focal high-altitude observational areas including Marmot Creek and Burstall Pass in Kananaskis Country and Peyto Glacier and the Bow Summit area in Banff National Park. The Global Institute for Water Security is investing in the Canadian Rockies Hydrological Observatory research at Marmot Creek and Peyto Glacier, as well as establishing new research sites in the Canadian Rockies to gain a better understanding of the influence of wetlands and beaver activity on river flows and carbon storage. Modelling includes development of the Cold Regions Hydrological Model Platform (CRHM) to test and develop hydrological process descriptions that are particularly suited to Canada’s snowy mountains and prairie. CRHM is being used to simulate the hydrological impact of climate change, pine beetle, forest harvesting and fire, wetland drainage and agricultural management practices, and is being evaluated for operational flood prediction (a simplified melt-model sketch appears at the end of this section). Marmot Creek Research Basin is located in the Rocky Mountain front ranges in Kananaskis Country, Alberta. The basin is a tributary to the Kananaskis and Bow Rivers and covers an area of approximately 10 km², at an elevation of 1600 to 2800 m. It is instrumented with twelve permanent meteorological stations at elevations from 1450 to 2500 m, covering a variety of surface cover types and slope orientations. Marmot Creek was used as a research basin by the Government of Canada from 1962 to 1986 to study the hydrological effects of forest management. In 2004, the U of S Centre for Hydrology, Environment Canada and the Biogeoscience Institute at the University of Calgary re-established a monitoring and research program. The Global Institute for Water Security is supporting continuation of this program and associated hydrological model development. Current scientific focus: - Mountain snow processes, hydrochemistry, groundwater and hydrological modelling (including climate change sensitivity analysis and hydro-climatic trends). - Impact of forest cover change on mountain hydrology. - Mountain hydrological model development and testing. Peyto Glacier is located near the continental divide in Banff National Park, Alberta. The research area is approximately 24 km², is located at 2100 to 3150 m elevation and is mostly covered by Peyto Glacier. The glacier-fed Peyto Creek flows into the Mistaya and North Saskatchewan Rivers. The glacier has undergone considerable negative net mass balance, downwasting and terminal retreat over the past 50 years.
A glacier mass balance program was established at Peyto by the Government of Canada in 1966 and has been operated by Environment Canada and now by Natural Resources Canada. The site remains a focal point for a wide range of glaciological and hydrological research and reports to the World Glacier Monitoring Service. Current Scientific Focus - Primarily glacier, hydrology and climate studies. - Single meteorological station within basin adjacent to Peyto Glacier and three stations located on the glacier surface representing different elevation zones. Research group members: John Pomeroy, Howard Wheater, Warren Helgason Beaver dams impede upstream flows and flood adjacent riparian areas. Ponds created by beaver dams function as efficient sediment traps, filling with sediment and organic materials, which can remain in the pond area even after dams are abandoned. Once abandoned, dams degrade and the water table recedes, leaving behind a “beaver meadow”, which becomes buried by more peat. Through research funded by the Global Institute for Water Security, scientists hope to determine how present and historical beaver ponds affect modern-day peat characteristics and hydrology. This will help the institute model how different climate futures will affect these important source areas for the Saskatchewan River Basin. Current Scientific Focus - Use of remote and ground mapping technology to quantify the spatial extent and distribution of mountain peatlands, specifically those with current or historical beaver activity. - Beaver paleopond groundwater mapping, measurement and modelling. - Mapping of watershed-scale soil properties. - Measurement of the biochemical properties of peat and characterization of nutrient dynamics and organic matter quality. - Environmental manipulation of peat cores to determine how decomposition affects nutrient dynamics, greenhouse gas emissions and carbon sequestration/release.
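As promised above, here is a minimal sketch of the snow-as-storage idea behind this modelling: a classical temperature-index (degree-day) melt scheme in Python. This is a textbook simplification offered for illustration only, not CRHM code; the melt factor and the temperature series are invented.

def degree_day_melt(swe_mm, daily_temps_c, melt_factor=3.0, base_temp_c=0.0):
    """Step a snowpack's snow-water equivalent (mm) through daily melt.

    Daily melt = melt_factor (mm per degree C per day) times degrees above
    base_temp_c, capped by the snow remaining. Returns the daily melt series.
    """
    melt_series = []
    for t in daily_temps_c:
        potential = max(0.0, melt_factor * (t - base_temp_c))
        melt = min(potential, swe_mm)   # cannot melt more snow than exists
        swe_mm -= melt
        melt_series.append(melt)
    return melt_series

# Example: a 120 mm SWE snowpack during a hypothetical spring warm spell.
print(degree_day_melt(120.0, [-2.0, 1.0, 4.0, 8.0, 10.0, 12.0]))
# -> [0.0, 3.0, 12.0, 24.0, 30.0, 36.0]

The cap on available snow is what makes the snowpack behave like the finite seasonal reservoir described above: cold days release nothing, warm days release progressively more water until the store is exhausted.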
http://www.usask.ca/water/Research%20Sites/Canadian%20Rocky%20Mountains.php
4.15625
AS OUR ancestors moved north out of Africa and onto the doorstep of the rest of the world, they came across their long-lost cousins: the Neanderthals. As the popular story goes, the brutish hominins were simply no match for cultured, intelligent Homo sapiens and quickly went extinct. Maybe, but it's also possible that Neanderthals were simply unlucky and disappeared by chance, mathematicians propose.

We know that humans and Neanderthals got pretty cosy during their time together in the Middle East, 45,000 years ago. Between 1 and 4 per cent of the DNA of modern non-Africans is of Neanderthal origin, implying their ancestors must have interbred before humans moved into Europe (New Scientist, 15 May 2010, p 8).

The popular theory has it that humans soon displaced Neanderthals thanks to their superior skills and adaptations. But mathematicians Armando Neves at the Federal University of Minas Gerais in Belo Horizonte, Brazil, and Maurizio Serva at the University of L'Aquila, Italy, now say that the extinction of Neanderthals may have been down to a genetic lottery.

When two populations interbreed, one of them can go extinct simply due to the random mixing of their genes through sexual reproduction. To find out if this could have wiped out Neanderthals, Neves and Serva modelled the populations that met in the Middle East. Using very few assumptions, they estimated the rate of interbreeding that would lead to the observed share of Neanderthal DNA. (A small simulation sketch of this drift process appears after the comments below.)

Their results suggest that the 1 to 4 per cent genetic mix could have come about with one interbreeding every 10 to 80 generations. The time taken to reach this mix would depend on the size of the populations. But regardless of populations, Neves and Serva's model shows that low rates of interbreeding could theoretically have led to the extinction of Neanderthals through a genetic lottery (arxiv.org/abs/1103.4621).

"The observed low fraction of Neanderthal DNA could easily have arisen quite naturally even if Neanderthals weren't inferior," says Neves.

A strong point of the analysis, says anthropologist Luke Premo of Washington State University in Pullman, is that it makes few assumptions about unknown factors, including the relative sizes of the African and Neanderthal populations at the time. Nevertheless, says Premo, the evidence for some kind of superiority of the African group is still strong. "Humans were expanding while Neanderthals were fairly restricted to a portion of Eurasia," he says. "Given their larger population and expansion, it appears that humans were bound to win out."

When this article was first posted, it gave the wrong university affiliation for Luke Premo.

Have your say

What Does "neanderthal Origin" Mean?
Thu Apr 07 19:22:20 BST 2011 by Philip Smith

"Between 1 and 4 per cent of the DNA of modern non-Africans is of Neanderthal origin", according to the article.
It seems that we share at least 95% of our DNA with chimps, although of course much of that DNA has much older origins. So how does one decide that a particular piece of DNA originated with Neanderthal man?

What Does "neanderthal Origin" Mean?
Thu Apr 07 20:27:37 BST 2011 by Eric Kvaalen

The Neanderthal genome has been sequenced, as explained in the article referenced. You could probably say that we share all our DNA with Neandertals, but there are mutations so we can tell the difference. The conclusion about 1 to 4% is that we have gotten that much of our DNA directly from the Neandertals, not through a common ancestor.

Thu Apr 07 20:34:16 BST 2011 by Eric Kvaalen

This article doesn't explain why the Neandertals would have gone extinct. As the article by the researchers explains, they assume that the total of Neandertals plus "Africans" (us) is constant over time. So eventually one of the two species drifts into extinction even if (as assumed) they have equal fitness.

Fri Apr 08 09:59:39 BST 2011 by Adrian

So Neanderthals got cozy and interbred with humans, did they! Well, as Neanderthals are firmly embedded in the genus Homo, it's safe to say they are also humans. The genus goes back at least 2.5 million years to habilis, who was the first Homo (no aspersions cast as to his sexual preferences). So while all Homos may be in some sense equal, clearly some are more equal than others.

Fri Apr 08 11:58:32 BST 2011 by jarvischina

Even today, Africans (without 4% Neanderthal genes) have greater fecundity when starving versus Europeans (with 4% Neanderthal genes), who will stop menstruating when food is scarce. Africans continue to breed under extended stress conditions such as drought. Surely the potentially lower fecundity of today's Europeans and our common Neanderthal ancestor is one possible reason for Neanderthals failing to compete with these new and fertile migrant Homos out of Africa? These differences in survival may still be expressed in today's populations!

Sat Apr 09 17:07:22 BST 2011 by marcos anthony toledo

You're assuming you have definite proof of how much Neanderthal DNA we have, and assuming you know what the extent of the Neanderthal range was. It could have extended all the way to east Asia, and we have to include the many pandemics that have swept the human race down the ages that skew the evidence and give false results.
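To make the "genetic lottery" argument above concrete, here is a minimal neutral-drift simulation. It is a sketch under assumed parameters (population size and starting share are invented), not the model Neves and Serva actually published: it simply shows that when a merged population of fixed size reproduces with no fitness differences, one lineage eventually drifts to extinction by chance alone.

import random

# Wright-Fisher-style neutral drift (illustrative assumptions throughout).
random.seed(1)
N = 500         # combined population size (assumed)
fraction = 0.5  # starting Neanderthal share of the gene pool (assumed)

generations = 0
while 0.0 < fraction < 1.0:
    # each of the N offspring draws its ancestry at random from the pool
    count = sum(random.random() < fraction for _ in range(N))
    fraction = count / N
    generations += 1

print(f"one lineage fixed after {generations} generations "
      f"(Neanderthal share ended at {fraction:.0%})")

On average such fixation takes on the order of N generations, so smaller merged populations lose one lineage sooner; which lineage survives is pure chance.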
http://www.newscientist.com/article/mg21028073.400-neanderthals-bad-luck-and-its-part-in-their-downfall.html
4.15625
Because much of the world’s surface water is far from concentrations of human settlements, not all of it is readily usable.
- It is estimated that the freshwater available for human consumption varies between 12,500 km3 and 14,000 km3 each year (Hinrichsen et al., 1998; Jackson et al., 2001).
- Many countries in Africa, the Middle East, western Asia, and some eastern European countries have lower than average quantities of freshwater resources available to their populations.
- Due to rapid population growth, the potential water availability for the earth’s population decreased from 12,900 m3 per capita per year in 1970 to 9,000 m3 in 1990, and to less than 7,000 m3 in 2000 (Clarke, 1991; Jackson et al., 2001; Shiklomanov, 1999).
- In densely populated parts of Asia, Africa and central and southern Europe, current per capita water availability is between 1,200 m3 and 5,000 m3 per year (Shiklomanov, 1999).
- The global availability of freshwater is projected to drop to 5,100 m3 per capita per year by 2025. This amount would be enough to meet individual human needs if it were distributed equally among the world’s population (Shiklomanov, 1999).
- It is estimated that 3 billion people will be in the water scarcity category of 1,700 m3 per capita per year by 2025 (UNEP, 2002).
- The uneven distribution of freshwater resources creates major problems of access and availability. For example: Asia and the Middle East are estimated to have 60% of the world’s population (3,674,000,000 people in 2000), but only 36% of its river runoff - much of which is confined to the short monsoon season (Graphic Maps, 2001; Shiklomanov, 1999). South America, by contrast, has an estimated 6% of the global population (342,000,000 people in 2000) and 26% of its runoff (Graphic Maps, 2001; Shiklomanov, 1999). These examples do not take into account groundwater abstraction.
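The per-capita figures in the list above are simple division: total renewable resource over population. The sketch below reproduces the downward trend; the runoff and population numbers are assumed round figures for illustration, not values taken from the sources cited above.

# Per-capita water availability = resource / population (illustrative only).
RUNOFF_M3 = 42_700e9  # assume ~42,700 km3/yr of global river runoff, in m3

populations = {1970: 3.7e9, 1990: 5.3e9, 2000: 6.1e9, 2025: 8.0e9}
for year, pop in populations.items():
    print(year, f"{RUNOFF_M3 / pop:,.0f} m3 per person per year")

# Output falls from roughly 11,500 (1970) to roughly 5,300 (2025) m3 per
# person: the same order and trend as the figures quoted above, driven
# entirely by population growth in the denominator.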
http://www.grida.no/publications/vg/water2/page/3218.aspx
4.21875
It is well documented that dark matter makes up the majority of the mass in our universe. The big problem comes when trying to prove dark matter really is out there. It is dark, and therefore cannot be seen. Dark matter may come in many shapes and sizes (from the massive black hole, to the tiny neutrino), but regardless of size, no light is emitted and therefore it cannot be observed directly. Astronomers have many tricks up their sleeves and are now able to indirectly observe massive black holes (by observing the gravitational, or lensing, effect on light passing by). Now, large-scale structures have been observed by analyzing how light from distant galaxies changes as it passes through the cosmic web of dark matter hundreds of millions of light years across…

Dark matter is believed to hold over 80% of the Universe’s total mass, leaving the remaining 20% for the “normal” matter we know, understand and observe. Although we can observe billions of stars throughout space, this is only the tip of the iceberg for the total cosmic mass. Using the influence of gravity on space-time as a tool, astronomers have observed halos of distant stars and galaxies, as their light is bent around invisible, but massive objects (such as black holes) between us and the distant light sources. Gravitational lensing has most famously been observed in the Hubble Space Telescope (HST) images where arcs of light from young and distant galaxies are warped around older galaxies in the foreground.

This technique now has a use when indirectly observing the large-scale structure of dark matter intertwining its way between galaxies and clusters. Astronomers from the University of British Columbia (UBC) in Canada have observed the largest structure ever seen: a web of dark matter stretching 270 million light years across, or 2000 times the size of the Milky Way. If we could see the web in the night sky, it would be eight times the area of the Moon’s disk.

This impressive observation was made possible by using dark matter gravity to signal its presence. Like the HST gravitational lensing, a similar method is employed. Called “weak gravitational lensing”, the method takes a portion of the sky and plots the distortion of the observed light from each distant galaxy. The results are then mapped to build a picture of the dark matter structure between us and the galaxies.

The team uses the Canada-France-Hawaii Telescope (CFHT) for the observations and their technique has been developed over the last few years. The CFHT is a non-profit project that runs a 3.6 meter telescope on top of Mauna Kea in Hawaii.

Understanding the structure of dark matter as it stretches across the cosmos is essential for us to understand how the Universe was formed, how dark matter influences stars and galaxies, and will help us determine how the Universe will develop in the future.

“This new knowledge is crucial for us to understand the history and evolution of the cosmos [...] Such a tool will also enable us to glimpse a little more of the nature of dark matter.” – Ludovic Van Waerbeke, Assistant Professor, Department of Physics and Astronomy, UBC

Source: UBC Press Release
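The core idea of weak lensing, that no single galaxy shows the signal but averaging many does, is easy to demonstrate. Below is a toy sketch with assumed numbers (shape noise of 0.3, shear of 0.01); it is a statistical illustration only, not the CFHT pipeline.

import numpy as np

# Each galaxy's observed ellipticity = intrinsic shape + tiny lensing shear.
rng = np.random.default_rng(42)

true_shear = 0.01                          # assumed lensing signal
intrinsic = rng.normal(0.0, 0.3, 100_000)  # per-galaxy shape noise (assumed)
observed = intrinsic + true_shear

print(f"one galaxy:      {observed[0]:+.3f}")     # dominated by noise
print(f"mean of 100,000: {observed.mean():+.4f}") # recovers ~ +0.01

Mapping such averages patch by patch across the sky is what lets a survey trace where the dark matter web lies.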
http://www.universetoday.com/12939/record-breaking-dark-matter-web-structure-observed-spanning-270-million-light-years/
4.03125
- An investigation that gives you the opportunity to make and justify
- When newspaper pages get separated at home we have to try to sort them out and get things in the correct order. How many ways can we arrange these pages so that the numbering may be different?
- Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
- You cannot choose a selection of ice cream flavours that includes totally what someone has already chosen. Have a go and find all the different ways in which seven children can have ice cream.
- Ana and Ross looked in a trunk in the attic. They found old cloaks and gowns, hats and masks. How many possible costumes could they make?
- Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
- How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six...?
- What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
- The challenge here is to find as many routes as you can for a fence to go so that this town is divided up into two halves, each with 8
- Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
- If we had 16 light bars which digital numbers could we make? How will you know you've found them all?
- Can you help the children find the two triangles which have the lengths of two sides numerically equal to their areas?
- How many ways can you find of tiling the square patio, using square tiles of different sizes?
- These rectangles have been torn. How many squares did each one have inside it before it was ripped?
- Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
- What is the smallest number of tiles needed to tile this patio? Can you investigate patios of different sizes?
- If you have three circular objects, you could arrange them so that they are separate, touching, overlapping or inside each other. Can you investigate all the different possibilities?
- Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
- Nina must cook some pasta for 15 minutes but she only has a 7-minute sand-timer and an 11-minute sand-timer. How can she use these timers to measure exactly 15 minutes?
- This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
- Can you fill in this table square? The numbers 2-12 were used to generate it with just one number used twice.
- When you throw two regular, six-faced dice you have more chance of getting one particular result than any other. What result would that be? Why is this?
- Using the statements, can you work out how many of each type of rabbit there are in these pens?
- What could the half time scores have been in these Olympic hockey matches?
- Cut differently-sized square corners from a square piece of paper to make boxes without lids. Do they all have the same volume?
- Can you draw a square in which the perimeter is numerically equal to the area?
- Stuart's watch loses two minutes every hour. Adam's watch gains one minute every hour. Use the information to work out what time (the real time) they arrived at the airport.
- A merchant brings four bars of gold to a jeweller. How can the jeweller use the scales just twice to identify the lighter, fake bar?
- Investigate the different ways you could split up these rooms so that you have double the number.
- How many different shaped boxes can you design for 36 sweets in one layer? Can you arrange the sweets so that no sweets of the same colour are next to each other in any direction?
- How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?

These activities focus on finding all possible solutions so working in a systematic way will ensure none are left out.

- Can you make square numbers by adding two prime numbers together?
- Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes?
- This challenge is to design different step arrangements, which must go along a distance of 6 on the steps and must end up at 6 high.
- Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
- Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1-9 may be used once and once only.
- Can you rearrange the biscuits on the plates so that the three biscuits on each plate are all different and there is no plate with two biscuits the same as two biscuits on another plate?
- The planet of Vuvv has seven moons. Can you work out how long it is between each super-eclipse?
- Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
- On my calculator I divided one whole number by another whole number and got the answer 3.125. If the numbers are both under 50, what are they?
- This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
- What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates.
- A thoughtful shepherd used bales of straw to protect the area around his lambs. Explore how you can arrange the bales.
- Place eight queens on a chessboard (an 8 by 8 grid) so that none can capture any of the others.
- Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
- Investigate all the different squares you can make on this 5 by 5 grid by making your starting side go from the bottom left hand point. Can you find out the areas of all these squares?
- In the planet system of Octa the planets are arranged in the shape of an octahedron. How many different routes could be taken to get from Planet A to Planet Zargon?
- There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
- Place the numbers 1 to 8 in the circles so that no consecutive numbers are joined by a line.
http://nrich.maths.org/public/leg.php?code=-99&cl=2&cldcmpid=6106
4.21875
Solving Multi-Step Linear Equations

In this lesson, we are going to look at a few worked examples while putting emphasis on the key steps in solving multi-step equations. You will have the opportunity to practice on your own by trying some problems and comparing your answers to the solutions provided. If you just want to practice and skip the lesson itself, go ahead, and click the button below.

To solve multi-step equations, you will still need the techniques you learned in solving one-step and two-step equations. This type of equation requires additional steps in order to solve for the value of the unknown variable. Usually the variable involved is x, but that is not always the case. It could be any letter, such as m, n, h or z.

The main goal in solving multi-step equations is to keep the unknown variable on one side of the equal symbol while keeping the constant or pure number on the opposite side. More importantly, there is no rule about where to keep the variable. It all depends on your preference. The "standard" way is to have it on the left side, but there are cases when it is convenient to leave it on the right side of the equation.

Finally, since we are dealing with equations, we need to keep in mind that whatever we do on one side must be applied to the other side to keep everything balanced. For instance, adding 5 on the left should force you to add 5 on the right side. To get rid of numbers in the process of solving equations, ALWAYS remember the idea of opposite operations, because they are used to cancel or move around numbers.

Key steps to remember:
1) Eliminate parentheses by applying the Distributive Property.
2) Simplify both sides of the equation by Combining Like Terms. In other words, combine similar variables and constants together.
3) Decide where you want to keep the variable; that helps you decide where to keep the constants (the opposite side from where the variable is located).
4) Cancel out numbers by applying opposite operations: addition and subtraction are opposite operations, as are multiplication and division.

Now it's time to take a look at some examples!

Example 1: Solve the multi-step equation

This is a typical problem in multi-step equations where there are variables on both sides. Notice that there are no parentheses in this equation and no like terms to combine on either side. Clearly, our first step is to decide where to keep or isolate the unknown variable x. Since 7x is "larger" than 2x, we might as well keep it on the left side. This means we have to get rid of the 2x on the right side. To do that, we need to subtract both sides by 2x, because the opposite of +2x is -2x.

After simplifying by subtracting 2x from both sides, we have...

It's nice to see just the variable x on the left side. This implies that we have to move all the constants to the right side, and the +3 on the left must be removed. The opposite of +3 is -3; therefore, we will subtract both sides by 3.

After subtracting both sides by 3, we get...

The last step is to isolate the variable x by itself on the left side of the equation. Since +5 is multiplying x, its opposite operation is to divide by +5. So, we are going to divide both sides by 5, and then we are done!
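The intermediate equations in the worked example appear as images on the original page, so the right-hand side is not visible here. The sketch below mirrors the same three moves on an assumed equation, 7x + 3 = 2x + 18, chosen only so the steps match the text (subtract 2x, subtract 3, divide by 5); SymPy confirms the result.

from sympy import symbols, Eq, solve

x = symbols('x')

# Assumed illustrative equation -- the page's actual numbers are not shown.
equation = Eq(7*x + 3, 2*x + 18)

# Step 1: subtract 2x from both sides  ->  5x + 3 = 18
# Step 2: subtract 3 from both sides   ->  5x = 15
# Step 3: divide both sides by 5       ->  x = 3
print(solve(equation, x))   # [3]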
http://chilimath.com/algebra/intermediate/linear_equations/revisited/linear_multistep1.html
4.03125
RMS and power in single and three phase AC circuits

Power in AC circuits, the use of RMS quantities and 3 phase AC – including answers to these questions:
- What are RMS values?
- How can you work out the power developed in an AC circuit?
- How can you get 680 V dc from a 240 V ac supply just by rectifying?
- When do you need three phases and why do you need four wires?

This page provides answers to these questions. This is a resource page from Physclips. It is a subsidiary page to the main AC circuits page; there are separate pages on RC filters, integrators, LC oscillations, and motors and generators.

Power and RMS values

The power p converted in a resistor (i.e. the rate of conversion of electrical energy to heat) is

    p = vi = v²/R = i²R.

We use lower case p(t) because this is the expression for the instantaneous power at time t. Usually, we are interested in the mean power delivered, which is normally written P. P is the total energy converted in one cycle, divided by the period T of the cycle, so:

    P = (1/T) ∫ p dt = (1/T) ∫ Vm Im sin²(ωt) dt
      = (Vm Im/T) ∫ ½(1 − cos 2ωt) dt.

In the last line, we have used a standard trigonometrical identity: cos(2A) = 1 − 2 sin²A. Now the sinusoidal term averages to zero over any number of complete cycles, so the integral is simple and

    P = Vm Im/2 = (Vm/√2)(Im/√2) = V I.

This last set of equations is useful because they are exactly those normally used for a resistor in DC electricity. However, one must remember that P is the average power, and V = Vm/√2 and I = Im/√2. Looking at the integral above, and dividing by R, we see that I is equal to the square root of the mean value of i², so I is called the root-mean-square or RMS value. Similarly, V = Vm/√2 ≈ 0.71 Vm is the RMS value of the voltage.

When talking of AC, RMS values are so commonly used that, unless otherwise stated, you may assume that RMS values are intended*. For instance, normal domestic AC in Australia is 240 Volts AC with frequency 50 Hz. The RMS voltage is 240 volts, so the peak value Vm = V√2 = 340 volts. So the active wire goes from +340 volts to −340 volts and back again 50 times per second. (This is the answer to the teaser question at the top of the page: rectification of the 240 V mains can give both +340 Vdc and −340 Vdc.)

* An exception: manufacturers and sellers of HiFi equipment sometimes use peak values rather than RMS values, which makes the equipment seem more powerful than it is.

Power in a resistor. In a resistor R, the peak power (achieved instantaneously 100 times per second for 50 Hz AC) is Vm²/R = Im²R. As discussed above, the voltage, current and so the power pass through zero volts 100 times per second, so the average power is less than this. The average is exactly as shown above: P = Vm²/2R = V²/R.

Power in inductors and capacitors. In ideal inductors and capacitors, a sinusoidal current produces voltages that are respectively 90° ahead of and behind the phase of the current. So if i = Im sin(ωt), the voltages across the inductor and capacitor are Vm cos(ωt) and −Vm cos(ωt) respectively. Now the integral of cos × sin over a whole number of cycles is zero. Consequently, ideal inductors and capacitors do not, on average, take power from the circuit.

Three phase AC

Single phase AC has the advantage that it only requires 2 wires. Its disadvantage is seen in the graph at the top of this page: twice every cycle V goes to zero. If you connect a phototransistor circuit to an oscilloscope, you will see that fluorescent lights turn off 100 times per second (or 120, if you are on 60 Hz supply). What if you need a more even supply of electricity?
One can store energy in capacitors, of course, but with high power circuits this would require big, expensive capacitors. What to do?

A generator may have more than one coil. If it has three coils, mounted at relative angles of 120°, then it will produce three sinusoidal emfs with relative phases of 120°, as shown in the upper figure at right. The power delivered to a resistive load by each of these is proportional to V². The sum of the three V² terms is a constant. We saw above that the average of V² is half the peak value, so this constant is 1.5 times the peak value of V² for any one circuit, as is shown in the lower figure at right.

Do you need four wires? In principle, no. The sum of the three V terms is zero so, provided that the loads on each phase are identical, the currents drawn from the three lines add to zero. In practice, the current in the neutral wire is usually not quite zero. Further, it should be the same gauge as the other wires because, if one of the loads were to fail and form an open circuit, the neutral would carry a current similar to that in the remaining two loads.

[Figure: The voltage (top) and square of the voltage (bottom) in the three active lines of 3 phase supply.]
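A quick numerical check of the two central claims above, RMS = peak/√2 and the constancy of the three-phase power sum, is sketched below; the load resistance is an assumed value used only for illustration.

import numpy as np

Vm, R, f = 340.0, 100.0, 50.0         # peak volts; assumed ohms; Hz
t = np.linspace(0.0, 1.0/f, 100_001)  # one full cycle
v1 = Vm * np.sin(2*np.pi*f*t)

# RMS voltage: square, mean, root -- matches Vm/sqrt(2), about 240 V
print(np.sqrt(np.mean(v1**2)), Vm/np.sqrt(2))

# Mean power in R matches Vm^2/(2R)
print(np.mean(v1**2 / R), Vm**2/(2*R))

# Three phases 120 degrees apart: the sum of the squared voltages is a
# constant 1.5*Vm^2, so the delivered power never dips to zero.
v2 = Vm * np.sin(2*np.pi*f*t - 2*np.pi/3)
v3 = Vm * np.sin(2*np.pi*f*t + 2*np.pi/3)
s = v1**2 + v2**2 + v3**2
print(s.min(), s.max(), 1.5*Vm**2)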
http://www.animations.physics.unsw.edu.au/jw/power.html
4.3125
Cell references can indicate particular cells or cell ranges in columns and rows. Cell references identify individual cells in a worksheet. They tell Excel where to look for values to use in a formula.

Excel uses a reference style called A1, which refers to columns with letters and to rows with numbers. The letters and numbers are called row and column headings. A cell is referred to by its column letter followed by its row number: for example, A1 is the cell in column A, row 1, and C3 is the cell in column C, row 3.

In this lesson you'll see why Excel can automatically update the results of formulas that use cell references, and how cell references work when you copy formulas. In the practice session at the end of the lesson you'll have a chance to try out what you've learned. What happens if the value in a cell changes after a total is calculated?
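The behaviour described here, a formula recomputing from whatever its referenced cells currently hold, can be sketched outside Excel with a toy model. This is an illustration of the idea only, not how Excel is actually implemented.

# A toy "worksheet": cell names map to values, and a formula reads the
# cells it references each time it is evaluated.
cells = {"A1": 10, "A2": 32}

def total(sheet):
    # plays the role of the formula =A1+A2
    return sheet["A1"] + sheet["A2"]

print(total(cells))   # 42
cells["A1"] = 100     # change a referenced value...
print(total(cells))   # 132 -- ...and the formula's result updates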
http://office.microsoft.com/en-us/excel-help/use-cell-references-RZ010074593.aspx?section=9
4.03125
EXTREME WEATHER: Greenhouse gas emissions from human activity may be predisposing the climate to produce more extreme weather events, such as Hurricane Katrina. Image: Jeff Schmaltz, MODIS Rapid Response Team, NASA/GSFC

In the United States, 2011 was a year when weather seemed to ping-pong between extremes. A historic drought struck Texas while floods devastated communities along the Missouri and Mississippi rivers and Hurricane Irene swamped the East Coast. Swarms of tornadoes rolled through the center of the country, and record-setting wildfires blazed in the Southwest.

But while 2011's litany of extreme weather was notable, it was "not unique" -- at least not in recent experience, according to a new analysis published yesterday in the journal Nature Climate Change. The world has experienced an "exceptional number of unprecedented extreme weather events" for the past decade, say co-authors Dim Coumou and Stefan Rahmstorf, researchers at the Potsdam Institute for Climate Impact Research in Germany, who surveyed recent research linking climate change to shifts in the frequency and intensity of extreme weather.

"The evidence is strong that anthropogenic, unprecedented heat and rainfall extremes are here -- and are causing intense human suffering," the two write.

Recent studies suggest that the number of warm nights increased significantly between 1951 and 2003, and that twice as many record hot days as record cold days are occurring in the United States and Australia.

More footprints appear in weather patterns

The number of record hot months observed at different points around the globe is three times as high as it would be in a climate that was not changing, the scientists note. Notable heat waves of the past decade include those that struck Europe in summer 2003 (the warmest summer recorded there in at least 500 years), Greece in summer 2007 and central Russia in 2010 (when July temperatures broke the previous record by 2.5 degrees Celsius).

"Several recent studies indicate that many, possibly most of these heat waves would not have occurred without global warming," the new analysis concludes.

But it's not just heat waves that appear to be altered by climate change. Previous research has linked some recent extreme rainfall events to climate change -- including the precipitation that doused England and Wales in autumn 2000, during the wettest fall season there since recordkeeping began in 1766. The floods damaged more than 10,000 properties and caused losses estimated at £1.3 billion.

When extreme weather hits, journalists and the public often ask whether climate change has caused a particular event -- and are told that scientists cannot say a single weather event was caused by climate change. "This is often misunderstood by the public to mean that the event is not linked to global warming, even though that may be the case -- we just can't be certain," Coumou and Rahmstorf write. "If a loaded dice rolls a six, we cannot say that this particular outcome was due to the manipulation -- the question is ill-posed."

The more correct analogy is that loaded dice will roll more sixes than if they weren't loaded; thus, man appears to be raising the odds for certain types of extreme weather to turn up as the climate warms.

March continues a roll call of extreme U.S. weather

"Attribution is not a 'yes or no' issue as the media might prefer," Coumou and Rahmstorf say. "It is an issue of probability.
It is very likely that several of the unprecedented extremes of the past decade would not have occurred without [human-caused] global warming." The study echoes findings of the Intergovernmental Panel on Climate Change, which released its own report last year examining the effect of climate change on weather extremes. That report found evidence that climate change is increasing the frequency of drought and heat waves and the intensity of rainstorms, warning that such shifts will require the world's governments to change how they cope with natural disasters (Greenwire, Nov. 18, 2011).
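The loaded-dice analogy is easy to make concrete. The sketch below uses an assumed loading (a six on one roll in four instead of one in six); no single six can be pinned on the loading, but the shift in frequency over many rolls is unmistakable, which is the article's point about extremes.

import random

random.seed(0)
ROLLS = 100_000

# fair die: P(six) = 1/6; "loaded" die: P(six) = 1/4 (assumed loading)
fair = sum(random.randint(1, 6) == 6 for _ in range(ROLLS))
loaded = sum(random.random() < 0.25 for _ in range(ROLLS))

print(f"fair:   {fair / ROLLS:.3f} sixes per roll")    # ~0.167
print(f"loaded: {loaded / ROLLS:.3f} sixes per roll")  # ~0.250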
http://www.scientificamerican.com/article.cfm?id=human-pollution-tipping-scaled-toward-more-weather-extremes
4.03125
Water Treatment: How Can We Make Our Water Safe to Drink?
Module written by Susan E. Kegley, Doug

Session 1: How can we purify our water?
Dissolved substances in natural waters and their sources; concentration units; bar diagrams.
Exploration 1A: The Storyline
Exploration 1B: What substances do you typically find in natural waters?
Exploration 1C: How do dissolved substances get into a water supply?
Exploration 1D: How can we obtain a quantitative profile of the ionic constituents in a water supply?

Session 2: How do you determine how much of a constituent is in a water supply?
Analytical methods for fluoride (ion-selective electrode); water hardness (EDTA titration); iron (colorimetric method); total alkalinity (acid-base titration); total dissolved solids (conductimetric or gravimetric).
Exploration 2A: The Storyline
Exploration 2B: How do you determine how much fluoride is in a water supply?
Exploration 2C: How do you determine how much calcium and magnesium is in a water supply?
Exploration 2D: How do you determine how much iron is in a water supply?
Exploration 2E: How do you determine the total alkalinity in a water supply?
Exploration 2F: How do you determine the concentration of Total Dissolved Solids in a water supply?

Session 3: Why do substances dissolve in water?
Polarity; solubility of organic and inorganic constituents in water; thermodynamics of the dissolution process.
Exploration 3A: The Storyline
Exploration 3B: What do we mean by the word "dissolve"?
Exploration 3C: What characteristics of a substance affect its solubility in water?

Session 4: How can we best describe the extent of a chemical reaction?
Introduction to equilibrium; dynamic nature of equilibrium; equilibrium expressions and calculations; the reaction quotient, Q; solubility product constant, Ksp; and the relationship of the equilibrium constant to free energy.
Exploration 4A: The Storyline
Exploration 4B: What is equilibrium?
Exploration 4C: What does a chemical system at equilibrium look like at the microscopic level?
Exploration 4D: How can equilibrium reactions be described mathematically?
Exploration 4E: How can we use the equilibrium expression to predict equilibrium concentrations?
Exploration 4F: How can you tell whether a reaction has reached equilibrium?
Exploration 4G: How is free energy related to the extent of a reaction?

Session 5: How can we remove contaminants from a water supply?
Le Chatelier's principle and the effects of concentration, temperature, and pressure on the position of an equilibrium; common ion effect (illustrated in the sketch at the end of this outline).
Exploration 5A: The Storyline
Exploration 5B: How can we drive an equilibrium reaction to one side?
Exploration 5C: Which precipitating reagent will remove the most contaminant?
Exploration 5D: How much precipitating reagent is required for effective water treatment?

Session 6: What procedures can you design to remove contaminants from a water supply?
Applying the principles of equilibrium and precipitation to remove contaminants from a water sample.
Exploration 6A: The Storyline
Exploration 6B: How can excess fluoride be removed from a water supply?
Exploration 6C: How can excess water hardness be removed from a water supply?
Exploration 6D: How can excess iron be removed from a water supply?

Session 7: What are acids and bases?
Definitions of acids and bases and their relative strengths; the pH scale; calculations involving strong and weak acids and bases; relationship of acid strength to structure and composition of the acid; thermodynamics of acid-base reactions.
Exploration 7A: The Storyline
Exploration 7B: How are acid and base strengths correlated to the extent of the acid-base reaction?
Exploration 7C: How can we best quantify acid and base concentrations?
Exploration 7D: How do you determine equilibrium concentrations of strong and weak acids and bases?
Exploration 7E: How is acid strength related to the structure and composition of the acid?

Session 8: What is the role of acids and bases in water treatment?
Effects of pH on solubility; neutralization reactions; analytical method for measuring pH.
Exploration 8A: The Storyline
Exploration 8B: How can we selectively remove ions from solution?
Exploration 8C: How do you measure the pH of a water supply?
Exploration 8D: How do you change the pH of a water supply?

Session 9: What are your results?
Project 1: Poster Presentation
Project 2: Scientific Report
Project 3: Community Education

Copyright © 2004 by the trustees of Beloit College and the Regents of the University of California. This Module has been developed under the direction of the ChemLinks Coalition, headed by Beloit College, and the ModularChem Consortium, headed by the University of California at Berkeley. This material is based upon work supported by the National Science Foundation grants No. DUE-9455918 and DUE-9455924. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, Beloit College, or the Regents of the University of California. Published through exclusive license with W. W. Norton.

Water Treatment: How Can We Make Our Water Safe to Drink? ISBN 0-393-92434-3
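To illustrate the kind of calculation Sessions 4 and 5 build toward, here is a sketch of the solubility-product and common-ion arithmetic behind fluoride removal. The Ksp value is an assumed textbook figure (about 3.9e-11 for CaF2 at 25 °C), not a number taken from the module itself.

# CaF2 <-> Ca2+ + 2 F-,  Ksp = [Ca2+][F-]^2   (assumed Ksp below)
Ksp = 3.9e-11

# Pure water: [Ca2+] = s and [F-] = 2s, so Ksp = 4*s**3
s = (Ksp / 4) ** (1/3)
print(f"molar solubility in pure water: {s:.2e} M")   # ~2.1e-4 M

# Common-ion effect (Session 5): with 0.010 M Ca2+ already present,
# Ksp ~ 0.010 * [F-]**2, so far less fluoride stays dissolved.
F = (Ksp / 0.010) ** 0.5
print(f"fluoride left in solution:      {F:.2e} M")   # ~6.2e-5 M

The drop in dissolved fluoride when excess calcium is added is exactly the Le Chatelier shift the module uses for water treatment.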
http://chemistry.beloit.edu/Water/index.html
4.34375
How Salmonella Bacteria Spread

Some Salmonella bacteria are fast-replicating, quick-moving and armed with a needle-like complex that can penetrate cells in the human gut. The new findings may help explain how Salmonella can spread so efficiently.

Salmonella are the most frequently reported cause of food poisoning in the United States. Researchers at NIH's National Institute of Allergy and Infectious Diseases (NIAID) decided to take a closer look at how the bacteria might spread from cell to cell or from person to person.

Using high-powered microscopes, the scientists found that a subset of Salmonella have long whip-like "tails" that let them move freely within infected cells. These bacteria multiply more quickly than other Salmonella. They also have a "needle complex" that helps them pierce cells and inject their proteins. The bacteria seem especially well-suited for invading other cells.

Cells containing these Salmonella were quickly pushed out of a simple layer of lab-grown cells, which led to the release of bacteria. A similar process occurred in certain tissues of infected mice. The shedding cells set off an inflammatory cascade. These findings may help explain the inflammation seen in Salmonella infections.

"Unfortunately, far too many people have experienced the debilitating effects of Salmonella, which causes disease via largely unexplained processes," says NIAID Director Dr. Anthony S. Fauci. "This elegant study provides new insight into the origins of that inflammatory disease process."
http://newsinhealth.nih.gov/issue/Nov2010/capsule1
4.09375
The American Republic To 1877
Growth and Expansion

In the mid-1700s, inventors in Great Britain created machinery to perform some of the work involved in clothmaking. These innovations led to changes not only in industry but also in the way people lived. The British tried to keep their new industrial technology a secret. However, one man, Samuel Slater, memorized the design of some of the machines and then emigrated to the United States. In Rhode Island he took over the management of a cotton mill and duplicated many of the new designs. As the Industrial Revolution spread through the United States, new technology contributed to the growth of cities and agriculture.

During the 1800s Americans moved west of the Appalachian Mountains in increasing numbers. To support the movement of people and goods, roads and canals were built connecting the East with the Ohio River valley.

President James Monroe won the election of 1816 by an overwhelming majority. During his first administration a spirit of unity existed throughout the country. Before long, however, this feeling of unity dissolved into regional differences.

Abroad, colonies in Central and South America rebelled against Spanish control. In 1822 Spain asked France, Austria, Russia, and Prussia for help in its fight against revolutionary forces in South America. The possibility of increased European involvement in North America led President Monroe to issue a statement declaring that the United States would oppose any new European colonies in North and South America. His statement later became known as the Monroe Doctrine.
http://glencoe.mcgraw-hill.com/sites/0078609836/student_view0/unit4/chapter10/chapter_overviews.html
4.125
A sputum culture is a test to detect and identify bacteria or fungi (plural of fungus) that are infecting the lungs or breathing passages. Sputum is a thick fluid produced in the lungs and in the airways leading to the lungs. A sample of sputum is placed in a container with substances that promote the growth of bacteria or fungi.

If no bacteria or fungi grow, the culture is negative. If organisms that can cause infection grow, the culture is positive. The type of bacterium or fungus will be identified with a microscope or by chemical tests.

If bacteria or fungi that can cause infection grow in the culture, other tests may be done to determine which antibiotic will be most effective in treating the infection. This is called susceptibility or sensitivity testing.

This test is done on a sample of sputum that is usually collected by coughing. People who can't cough deeply enough to produce a sample can breathe in a mist solution to help them cough.

By: Healthwise Staff. Last Revised: November 1, 2012.
Medical Review: Adam Husney, MD - Family Medicine; Robert L. Cowie, MB, FCP(SA), MD, MSc, MFOM - Pulmonology
http://www.sutterhealth.org/health/healthinfo/?A=C&type=info&hwid=hw5693&section=hw5696
4.03125
General Musicianship For GCSE Music
Price: £12.99 (Excluding VAT at 20%)
Age Group: 14 - 16
Order Number: 11008

A series of 15 worksheets designed to introduce and to enhance pupils' musical vocabulary and historical awareness. Pupils are required to look up words in a music dictionary, to answer questions on written pieces of music, to build charts, to write short descriptions and to complete crossword and word-pairing exercises. Almost all questions can be answered away from class, thus allowing teachers to set specific, non-compositional exercises for homework.
http://www.classroom-resources.co.uk/acatalog/Online_Catalogue_General_Musicianship_For_GCSE_Music_1584.html
4.125
How to Find the Centroid of a Triangle

The three medians of a triangle intersect at its centroid. The centroid is the triangle's balance point, or center of gravity. (In other words, if you made the triangle out of cardboard and put its centroid on your finger, it would balance.)

On each median, the distance from the vertex to the centroid is twice as long as the distance from the centroid to the midpoint of the side opposite the vertex. That means that the centroid sits exactly 1/3 of the way along the median from the midpoint of the side to the vertex of the triangle. Take a look at the following figure.

If you're from Missouri (the Show-Me State), you might want to actually see how a triangle balances on its centroid. Cut a triangle of any shape out of a fairly stiff piece of cardboard. Carefully find the midpoints of two of the sides, and then draw the two medians to those midpoints. The centroid is where these medians cross. (You can draw in the third median if you like, but you don't need it to find the centroid.) Now place the triangle on something with a small, flat top, such as an unsharpened pencil; it will balance if the centroid sits right on the center of the pencil's tip.
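In coordinates, the centroid is simply the average of the three vertices, which also makes the 2:1 median split easy to verify. A minimal sketch (the vertex values are an arbitrary example):

# The centroid is the average of the three vertex coordinates.
def centroid(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

A, B, C = (0, 0), (6, 0), (0, 9)
G = centroid(A, B, C)
print(G)   # (2.0, 3.0)

# Check the 2:1 ratio on one median: M is the midpoint of side BC.
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # (3.0, 4.5)
dist = lambda p, q: ((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5
print(dist(A, G) / dist(G, M))   # 2.0 -- vertex-to-centroid is twice centroid-to-midpoint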
http://www.dummies.com/how-to/content/how-to-find-the-centroid-of-a-triangle.html
4
This section provides a short description of all the major characters in the book. This can be printed out as a study guide for students, used as a "key" for leading a class discussion, or you can jump to the quiz/homework section to find worksheets that incorporate these descriptions into a variety of question formats.

St. Anselm of Canterbury - This character is one of the most important philosophers of the medieval period, after St. Augustine and St. Thomas Aquinas. He produces many of the most important arguments in favor of natural theology, which is the branch of theology that attempts to establish religious truths through reason.

God - While this character does not speak in the book, Anselm refers to Him on nearly every page.

Gaunilo of Marmoutier - This character is a monk from Marmoutier who criticizes Anselm's ontological argument.

Jesus Christ - This character is...
http://www.bookrags.com/lessonplan/st-anselm/characters.html
4.1875
Introduction to Exponents
This video gives a quick review of exponents. It begins with a review from pre-algebra and then proceeds to explain exponents. The video includes some humor as a teacher and student are seen at the bottom of the screen and the math is written on a large white screen above them. Run time 06:41.

An Introduction to Exponents
Professor Edward Burger introduces exponents in this video from Thinkwell's online Algebra series. The video uses lecture format and a whiteboard to aid in the explanations. Run time 01:37.

Algebra: An Introduction to Exponents
In this very short clip, the instructor, Ed Burger, uses humor to introduce exponents. The instructor is in the corner of the screen while writing equations. (01:51) Clear and easy to understand.

Introduction to Exponents
This video goes through the vocabulary of exponents and how it is said. It shows the difference between 5 x 2 and 5 raised to the second power, with many different examples, and then shows how to write an exponent problem as a word statement. The video also discusses a number raised to the zero power, introduced by looking at a pattern of numbers: any number, except 0, raised to the zero power is equal to one (a two-line demonstration appears at the end of this listing). Then they go to more complicated problems where two numb

Radiology Lab 0: Introduction to 2D and 3D Imaging
This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.

Concept maps: an introduction
Using concept maps can help students make connections among subject areas. This article explains how teachers can use concept maps effectively and provides links to tools for creating them online.

Blogging: an introduction
Weblogs, or "blogs" for short, have many uses in education, as tools for publication, research, administration, and more.

An introduction to teacher research
Every day, teachers develop lesson plans, evaluate student work, and share outcomes with students, parents, and administrators. Teacher research is simply a more intentional and systematic version of what good teachers already do.

Introduction to musical instruments
This module features extensive recordings of 18 musical instruments for first steps in musical education. Children (and adults) can use this quiz to learn to identify musical instruments from their sound. Two multiple choice quizzes with different kinds of question are included. Musical instruments have been chosen primarily from Europe, but also from non-European cultures such as the Middle East, India, Japan and Australia. Instruments are illustrated by full-length high-quality MP3 recordings.

Introduction to Antibiotic Pharmacology
The module contains the following levels: Bactericidal versus Bacteriostatic; Spectrum of Activity; Gram Positive versus Gram Negative; Mechanism of Antibacterial Action; Empirical versus Rational Therapy; Drug Resistance, Combined Antibiotic Therapy and Superinfection; Introduction to Penicillin Classification; Cephalosporins and Other Antibacterials; Antimicrobials for Various Disease States; Complete Introductory Assessment of Antimicrobial Agents; The 50S versus the 30S Ribosomal Unit; Macrolides vs Am

An Italian Introduction to Microsoft Word
With this series of questions, the student's level of preparation will be assessed. (A short quiz in Italian.)
"A Less Reliable Form of Birth Control": Miriam Allen deFord Describes Her Introduction to Contracep Despite major cultural, legal, and medical impediments the use of birth control, including abortion, by American women was widespread at the turn of the century. In their quest to control unwanted pregnancies, American women could be surprisingly resourceful in the methods they used. In this audio excerpt from a 1974 interview with historian Sherna Gluck, Miriam Allen deFord described methods of birth control in vogue in the 1910s, including spermicides, douches, the Dutch pessary (an early diap Calibrated Peer Review: Introduction - Why Study Geology? In this activity, students read an article entitled "Why Study Geology?", then write an essay addressing points listed in the Writing Prompt. After this, students are introduced to the process of Calibrated Peer Review and evaluate their papers. On this Starting Point page, users can access information about the exercise's learning goals, context for use, teaching notes and tips, teaching materials, assessment ideas, references and topics covered. Introduction to Scatter Plots Brief description about scatter plots and lines of best fit. Various types of scatter plots are briefly described. The video goes over how to analyze data or recognize patters using examples. Some multiple choice problems are presented with answers given. Introduction to Frequency Tables A brief introduction to the use of Frequency Tables. Definitions of statistics, data, and frequency table are discussed. An Introduction to BioQUEST's 3Ps A text chapter that introduces and explores some of the key issues in the 3Ps (Problem-posing, Problem-solving, and Persuading Peers) philosophy behind the activities of the BioQUEST Curriculum Consortium and the materials included in The BioQUEST Library. Multimedia Training Videos: Introduction to Graphics in Macromedia Flash Multimedia Training Videos is a series of free learning videos to show anyone interested in learning packages like Macromedia Flash. This video is an introduction to graphics in Flash. Multimedia Training Videos: Introduction to Action Scripts for Games in Macromedia Flash Multimedia Training Videos is a series of free learning videos to show anyone interested in learning packages like Macromedia Flash. This video is an introduction to action scripts for games in Flash. Multimedia Training Videos: Introduction to Action Scripts in Macromedia Flash Multimedia Training Videos is a series of free learning videos to show anyone interested in learning packages like Macromedia Flash. This video is an introduction to action scripts in Flash. Multimedia Training Videos: Introduction to Macromedia Flash Multimedia Training Videos is a series of free learning videos to show anyone interested in learning packages like Macromedia Flash. This video is an introduction to Flash.
http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Introduction%20to%20d&start=100&end=120
4.21875
An action potential is the change in the relative charge across the membrane of some cells found within animals and plants. Cells that undergo action potentials are called excitable cells and include nerve cells, muscle cells, and cells of the endocrine system — or hormone-producing system. Action potential generation is the process that causes the change in the charge across the membrane. There are four basic stages involved in action potential generation.

When an action potential is not occurring, the cell is said to be in the resting phase. Within a nerve cell, the relative charge within the cell is about -70 millivolts (mV) at this stage. The resting potential of the cell is maintained by charged ions found within and around the cell. In nerve cells, positively charged potassium ions are found within the cell, while positively charged sodium ions and negatively charged chloride ions are found outside the cell.

The levels of the ions found within and outside the cell are controlled by ion gates and pumps. During the resting potential stage for nerve cells, potassium ions are pumped into the cell and sodium ions are pumped out. The concentration of these ions relative to each other is what causes the charge across the membrane. At this time, the cell is said to be polarized.

For an action potential to be triggered, the charge within the cell has to be reversed. When a stimulus is applied to the cell, it can cause a depolarization to occur. Action potential generation needs a stimulus that meets or exceeds a certain threshold level. If the threshold is not met, the action potential will not be generated. The size of the action potential is the same whether the threshold is reached or exceeded, so action potential generation is called an "all or none event."

During depolarization, the ion channels open so that sodium ions rush into the cell. This causes the cell to undergo a reversal in charge; in nerve cells, the membrane potential peaks at about +40 mV. This takes place in less than two milliseconds, or two one-thousandths of a second. As soon as the action potential has been generated, repolarization of that area of the cell membrane begins.

Once an action potential has been triggered within one of these types of cells, it is said to be self-propagating. This means that the action potential in one area of the membrane stimulates the cells to begin the process of action potential generation in the adjacent section of the membrane. As a result, the action potential moves along the length of the cell.
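The "all or none" behaviour described above can be sketched in a few lines. The numbers are assumed typical textbook values (resting -70 mV, a firing threshold near -55 mV, a peak near +40 mV), and the response function is a deliberate simplification, not a biophysical model.

RESTING = -70.0     # mV, resting membrane potential
THRESHOLD = -55.0   # mV, assumed typical firing threshold
PEAK = 40.0         # mV, peak of the action potential

def respond(stimulus_mv):
    """Peak membrane potential reached for a stimulus that
    depolarizes the cell by stimulus_mv millivolts."""
    if RESTING + stimulus_mv >= THRESHOLD:
        return PEAK                  # full spike, same size every time
    return RESTING + stimulus_mv     # sub-threshold: no spike

for stim in (5, 14, 15, 40):
    print(stim, "->", respond(stim))
# 5 and 14 mV stay sub-threshold; 15 and 40 mV both give the same
# +40 mV spike -- the size does not grow with a stronger stimulus.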
http://www.wisegeek.com/what-is-action-potential-generation.htm
4.21875
A legislation amendment is a change to a law created by a governing body. This change can happen during the discussion and writing process, after the law/bill has first been submitted, or after final approval. Such amendments, like the legislation as a whole, require a majority vote in order to be enacted. Each country has a different system of scrutinizing and amending legislation.

New laws created by a politician, group or even individuals are known as pieces of legislation. These are typically voted upon by an elected chamber. More dictatorial regimes tend not to allow legislation amendments at all. Some legislation is passed with little scrutiny or change, but other legislation is altered considerably during the process.

American legislation is amended in the Senate and the House of Representatives, at the committee level or on the voting floor during discussions. It is then approved or rejected by the sitting president of the day. In Britain, the House of Commons creates the law, discusses it and makes amendments, then passes it to the House of Lords. The Lords can either accept the law or suggest changes and send it back to the Commons for revision. Once both houses pass the law, it is up to the reigning monarch to pass it into law; no monarch has rejected a piece of legislation since 1708.

Amendments tend to be proposed by one or more members of an elected chamber during committee scrutiny or discussions. Legislation amendment takes three forms: the altering, adding or removal of words and clauses within the bill. Each proposed change requires the majority of people present on the committee or in the chamber to approve it.

The Civil Rights Act of 1964 is an example of a much-changed piece of legislation. The original bill was proposed by President John F. Kennedy in 1963, before his assassination. The bill was strengthened during the House of Representatives' committee stage to protect women as well as racial minorities and to strengthen the measures the Attorney General could take to prevent abuses of power against peaceful protestors. In 1964, the new president, Lyndon Johnson, saw the bill move on into the Senate and to a greater level of opposition from Southern senators. Major amendments and a watering down of the whole bill were required in order to break a filibuster designed to send the legislation into limbo. It is an example of a legislation amendment process that cut up a bill, but then saved it.

The Patriot Act of 2001, enacted by George W. Bush a month after the 9/11 terrorist attacks, is an example of a complicated bill that had little legislation amendment during the legislative process. Removals and amendments came later during judicial proceedings, showing that legal challenges as well as legislative procedures can change a piece of legislation. Judge Marrero, in 2007, removed the Federal Bureau of Investigation's (FBI) right to issue National Security Letters (NSLs) in order to obtain personal information. In the same year, another judge removed the 'sneak and peek' powers contained in the Patriot Act.
http://www.wisegeek.com/what-is-a-legislation-amendment.htm
4.53125
The Gutenberg Press

In 1436 Johannes Gutenberg, a German goldsmith, began designing a machine capable of producing pages of text at an incredible speed—a product that he hoped would offset losses from a failed attempt to sell metal mirrors. By 1440 Gutenberg had established the basics of his printing press, including the use of a mobile, reusable set of type, and within ten years he had constructed a working prototype of the press. In 1454 Gutenberg put his press to commercial use, producing thousands of indulgences for the Church. The following year he printed his famous 42-line Bible, the first book ever printed on a moveable type printing press.

Gutenberg's press was the combined effort of several discoveries and inventions. The printing press was built around the traditional screw press, a precursor to today's drill press, with an added matrix on which individually-cast letters and symbols could be arranged to form the desired text. This moveable type design allowed pages of text to be quickly assembled from a pre-cast selection of letters and symbols rather than laboriously carved from a block of wood as in the block printing method. Gutenberg also created a unique oil-based ink which transferred from his metal type to the printing substrate much more effectively than the water-based inks that other printers of the era used.

In order to print a page, Gutenberg would arrange the necessary letters on the matrix and coat them in his ink. The matrix was then mounted on the contact end of the modified screw press and lowered until it struck the paper underneath. The process, while labor intensive, allowed Gutenberg to print pages at a much greater rate than printers using the block printing method or those doing manuscript work.

Johannes Gutenberg's moveable type press marked the beginning of the Printing Revolution, a colossal moment in the history of information and learning. With access to printing presses, scientists, philosophers, politicians, and religious officials could replicate their ideas quickly and make them available to large audiences.
http://osulibrary.oregonstate.edu/specialcollections/omeka/exhibits/show/mcdonald/incunabula/gutenberg
4.03125
From The Encyclopedia of Mormonism Author: Bachman, Danel Author: Esplin, Ronald K. Plural marriage was the nineteenth-century LDS practice of a man marrying more than one wife. Popularly known as polygamy, it was actually polygyny. Although polygamy had been practiced for much of history in many parts of the world, to do so in "enlightened" America in the nineteenth century was viewed by most as incomprehensible and unacceptable, making it the Church's most controversial and least understood practice. Though the principle was lived for a relatively brief period, it had profound impact on LDS self-definition, helping to establish the Latter-day Saints as a "people apart." The practice also caused many nonmembers to distance themselves from the Church and see Latter-day Saints more negatively than would otherwise have been the case. Rumors of plural marriage among the members of the Church in the 1830s and 1840s led to persecution, and the public announcement of the practice after August 29, 1852, in Utah gave enemies a potent weapon to fan public hostility against the Church. Although Latter-day Saints believed that their religiously-based practice of plural marriage was protected by the U.S. Constitution, opponents used it to delay Utah statehood until 1896. Ever harsher antipolygamy legislation stripped Latter-day Saints of their rights as citizens, disincorporated the Church, and permitted the seizure of Church property before the manifesto of 1890 announced the discontinuance of the practice. Plural marriage challenged those within the Church, too. Spiritual descendants of the Puritans and sexually conservative, early participants in plural marriage first wrestled with the prospect and then embraced the principle only after receiving personal spiritual confirmation that they should do so. In 1843, one year before his death, the Prophet Joseph Smith dictated a lengthy revelation on the doctrine of marriage for eternity (D&C 132; see Marriage: Eternal Marriage). This revelation also taught that under certain conditions a man might be authorized to have more than one wife. Though the revelation was first committed to writing on July 12, 1843, considerable evidence suggests that the principle of plural marriage was revealed to Joseph Smith more than a decade before in connection with his study of the Bible (see Joseph Smith Translation of the Bible (JST)), probably in early 1831. Passages indicating that revered Patriarchs and prophets of old were polygamists raised questions that prompted the Prophet to inquire of the Lord about marriage in general and about plurality of wives in particular. He then learned that when the Lord commanded it, as he had with the Patriarchs anciently, a man could have more than one living wife at a time and not be condemned for adultery. He also understood that the Church would one day be required to live the law ( D&C 132:1-4, 28-40). Evidence for the practice of plural marriage during the 1830s is scant. Only a few knew about the still unwritten revelation, and perhaps the only known plural marriage was that between Joseph Smith and Fanny Alger. Nonetheless there were rumors, harbingers of challenges to come. In April 1839, Joseph Smith emerged from six months' imprisonment in Liberty Jail with a sense of urgency about completing his mission (see History of the Church: c 1831-1844, Ohio, Missouri, and Nauvoo Periods). 
Since receiving the sealing key from Elijah in the Kirtland Temple (D&C 110:13-16) in April 1836, the Prophet had labored to prepare the Saints for additional teachings and ordinances, including plural marriage. Joseph Smith realized that the introduction of plural marriage would inevitably invite severe criticism. After the Kirtland experience, he knew the tension it would create in his own family; even though Emma, with faith in his prophetic calling, accepted the revelation as being from God and not of his own doing, she could not reconcile herself to the practice. Beyond that, it had the potential to divide the Church and increase hostilities from outside. Still, he felt obligated to move ahead. "The object with me is to obey & teach others to obey God in just what he tells us to do," he taught several months before his death. "It mattereth not whether the principle is popular or unpopular. I will always maintain a true principle even if I Stand alone in it" (TPJS, p. 332). Although certain that God would require it of him and of the Church, Joseph Smith would not have introduced it when he did except for the conviction that God required it then. Several close confidants later said that he proceeded with plural marriage in Nauvoo only after both internal struggle and divine warning. Lorenzo Snow later remembered vividly a conversation in 1843 in which the Prophet described the battle he waged "in overcoming the repugnance of his feelings" regarding plural marriage. He knew the voice of God-he knew the commandment of the Almighty to him was to go forward-to set the example, and establish Celestial plural marriage. He knew that he had not only his own prejudices and pre-possessions to combat and to overcome, but those of the whole Christian world ; but God had given the commandment [The Biography and Family Record of Lorenzo Snow, pp. 69-70 (Salt Lake City, 1884)]. Even so, Snow and other confidants agreed that Joseph Smith proceeded in Nauvoo only after an angel declared that he must or his calling would be given to another (Bachman, pp. 74-75). After this, Joseph Smith told Brigham Young that he was determined to press ahead though it would cost him his life, for "it is the work of God, and He has revealed this principle, and it is not my business to control or dictate it" (Brigham Young Discourse, Oct. 8, 1866, Church Archives). Nor did others enter into plural marriage blindly or simply because Joseph Smith had spoken, despite biblical precedents. Personal accounts document that most who entered plural marriage in Nauvoo faced a crisis of faith that was resolved only by personal spiritual witness. Those who participated generally did so only after they had obtained reassurance and saw it as religious duty. Even those closest to Joseph Smith were challenged by the revelation. After first learning of plural marriage, Brigham Young said he felt to envy the corpse in a funeral cortege and "could hardly get over it for a long time" (JD 3:266). The Prophet's brother Hyrum Smith stubbornly resisted the very possibility until circumstances forced him to go to the Lord for understanding. Both later taught the principle to others. Emma Smith vacillated, one day railing in opposition against it and the next giving her consent for Joseph to be sealed to another wife (see comments by Orson Pratt, JD 13:194). Teaching new marriage and family arrangements where the principles could not be openly discussed compounded the problems. 
Those authorized to teach the doctrine stressed the strict covenants, obligations and responsibilities associated with it-the antithesis of license. But those who heard only rumors, or who chose to distort and abuse the teaching, often envisioned and sometimes practiced something quite different. One such was John C. Bennett, mayor of Nauvoo and adviser to Joseph Smith, who twisted the teaching to his own advantage. Capitalizing on rumors and lack of understanding among general Church membership, he taught a doctrine of "spiritual wifery." He and associates sought to have illicit sexual relationships with women by telling them that they were married "spiritually," even if they had never been married formally, and that the Prophet approved the arrangement. The Bennett scandal resulted in his excommunication and the disaffection of several others. Bennett then toured the country speaking against the Latter-day Saints and published a bitter anti-Mormon exposé charging the Saints with licentiousness. The Bennett scandal elicited several public statements aimed at arming the Saints against the abuses. Two years later enemies and dissenters, some of whom had been associated with Bennett, published the Nauvoo Expositor, to expose, among other things, plural marriage, thus setting in motion events leading to Joseph Smith's death (see Martyrdom of Joseph and Hyrum Smith). Far from involving license, however, plural marriage was a carefully regulated and ordered system. Order, mutual agreements, regulation, and covenants were central to the practice. As Elder Parley P. Pratt wrote in 1845, These holy and sacred ordinances have nothing to do with whoredoms, unlawful connections, confusion or crime; but the very reverse. They have laws, limits, and bounds of the strictest kind, and none but the pure in heart, the strictly virtuous, or those who repent and become such, are worthy to partake of them. And [a] dreadful weight of condemnation await those who pervert, or abuse them [The Prophet, May 24, 1845; cf. D&C 132:7]. The Book of Mormon makes clear that, though the Lord will command men through his prophets to live the law of plural marriage at special times for his purposes, monogamy is the general standard (Jacob 2:28-30); unauthorized polygamy was and is viewed as adultery. Another safeguard was that authorized plural marriages could be performed only through the sealing power controlled by the presiding authority of the Church (D&C 132:19). Once the Saints left Nauvoo, plural marriage was openly practiced. In winter quarters, for example, discussion of the principle was an "open secret" and plural families were acknowledged. As early as 1847, visitors to Utah commented on the practice. Still, few new plural marriages were authorized in Utah before the completion of the Endowment house in Salt Lake City in 1855. With the Saints firmly established in the Great Basin, Brigham Young announced the practice publicly and published the revelation on eternal marriage. Under his direction, on Sunday, August 29, 1852, Elder Orson Pratt publicly discussed and defended the practice of plural marriage in the Church. After examining the biblical precedents (Abraham, Jacob, David, and others), Elder Pratt argued that the Church, as heir of the keys required anciently for plural marriages to be sanctioned by God, was required to perform such marriages as part of the restoration. He offered reasons for the practice and discussed several possible benefits (see JD 1:53-66), a precedent followed later by others. 
But such discussions were after the fact and not the justification. Latter-day Saints practiced plural marriage because they believed God commanded them to do so. Generally plural marriage involved only two wives and seldom more than three; larger families like those of Brigham Young or Heber C. Kimball were exceptions. Sometimes the wives simply shared homes, each with her own bedroom, or lived in a "duplex" arrangement, each with a mirror-image half of the house. In other cases, husbands established separate homes for their wives, sometimes in separate towns. Although circumstances and the mechanics of family life varied, in general the living style was simply an adaptation of the nineteenth-century American family. Polygamous marriages were similar to national norms in fertility and divorce rates as well. Wives of one husband often developed strong bonds of sisterly love; however, strong antipathies could also arise between wives. Faced with a national antipolygamy campaign, LDS women startled their eastern sisters, who equated polygamy with oppression of women, by publicly demonstrating in favor of their right to live plural marriage as a religious principle. Judging from the preaching, women were at least as willing to enter plural marriage as men. Instead of public admonitions urging women to enter plural marriage, one finds many urging worthy men to "do their duty" and undertake to care for a plural wife and additional children. Though some were reluctant to accept such responsibility, many responded and sought another wife. It was not unheard of for a wife to take the lead and insist that her husband take another wife; yet, in other cases, a first marriage dissolved over the husband's insistence on marrying again. As with families generally, some plural families worked better than others. Anecdotal evidence and the healthy children that emerged from many plural households witness that some worked very well. But some plural wives disliked the arrangement. The most common complaint of second and third wives resulted from a husband displaying too little sensitivity to the needs of plural families or not treating them equally. Not infrequently, wives complained that husbands spent too little time with them. But where husbands conscientiously divided their time evenly and wives developed deep love and respect for each other, children grew up as members of large, well-adjusted extended families. Plural marriage helped mold the Church's attitude toward divorce in pioneer Utah. Though Brigham Young disliked divorce and discouraged it, when women sought divorce he generally granted it. He felt that a woman trapped in an unworkable relationship with no alternatives deserved a chance to improve her life. But when a husband sought relief from his familial responsibilities, President Young consistently counseled him to do his duty and not seek divorce from any wife willing to put up with him. Contrary to the caricatures of a hostile world press, plural marriage did not result in offspring of diminished capacity. Normal men and women came from plural households, and their descendants are prominent throughout the Intermountain West. Some observers feel that the added responsibility that fell early upon some children in such households contributed to their exceptional record of achievement. Plural marriage also aided many wives. The flexibility of plural households contributed to the large number of accomplished LDS women who were pioneers in medicine, politics and other public careers.
In fact, plural marriage made it possible for wives to have professional careers that would not otherwise have been available to them. The exact percentage of Latter-day Saints who participated in the practice is not known, but studies suggest a maximum of 20 to 25 percent of LDS adults were members of polygamous households. At its height, plural marriage probably involved only a third of the women reaching marriageable age, though among Church leadership plural marriage was the norm for a time. Public opposition to polygamy led to the first law against the practice in 1862, and, by the 1880s, laws were increasingly punitive. The Church contested the constitutionality of those laws, but the Supreme Court sustained the legislation (see Reynolds v. United States), leading to a harsh and effective federal antipolygamy campaign known by the Latter-day Saints as "the Raid." Wives and husbands went on the "underground" and hundreds were arrested and sentenced to jail terms in Utah and several federal prisons. This campaign severely affected the families involved, and the related attack on Church organization and properties greatly inhibited its ability to function (see History of the Church: c 1878-1898, Late Pioneer Utah Period). Following a vision showing him that continuing plural marriage endangered the temples and the mission of the Church, not just statehood, President Wilford Woodruff issued the Manifesto in October 1890, announcing an official end to new plural marriages and facilitating an eventual peaceful resolution of the conflict. Earlier polygamous families continued to exist well into the twentieth century, causing further political problems for the Church, and new plural marriages did not entirely cease in 1890. After having lived the principle at some sacrifice for half a century, many devout Latter-day Saints found ending plural marriage a challenge almost as complex as was its beginning in the 1840s. Some new plural marriages were contracted in the 1890s in LDS settlements in Canada and northern Mexico, and a few elsewhere. With national attention again focused on the practice in the early 1900s during the House hearings on Representative-elect B. H. Roberts and Senate hearings on Senator-elect Reed Smoot (see Smoot Hearings), President Joseph F. Smith issued his "Second Manifesto" in 1904. Since that time, it has been uniform Church policy to excommunicate any member either practicing or openly advocating the practice of polygamy. Those who do so today, principally members of fundamentalist groups, do so outside the Church. Bachman, Danel W. "A Study of the Mormon Practice of Plural Marriage before the Death of Joseph Smith." M.A. thesis, Purdue University, 1975. Bashore, Melvin L. "Life Behind Bars: Mormon Cohabs of the 1880s." Utah Historical Quarterly 47 (Winter 1979): 22-41. Bennion, Lowell ("Ben"). "The Incidence of Mormon Polygamy in 1880: 'Dixie' versus Davis Stake." Journal of Mormon History 11 (1984): 27-42. Bitton, Davis. "Mormon Polygamy: A Review Article." Journal of Mormon History 4 (1977): 101-118. Embry, Jessie L. Mormon Polygamous Families: Life in the Principle. Salt Lake City, 1987. Foster, Lawrence. Religion and Sexuality: The Shakers, the Mormons, and the Oneida Community. Oxford, 1981. James, Kimberly Jensen. "Between Two Fires: Women on the 'Underground' of Mormon Polygamy." Journal of Mormon History 8 (1981): 49-61. Van Wagoner, Richard S. Mormon Polygamy: A History. Salt Lake City, 1986. Whittaker, David J. "Early Mormon Polygamy Defenses." Journal of Mormon History 11 (1984): 43-63.
http://eom.byu.edu/index.php/Plural_Marriage
4.6875
Take a rectangular prism. Choose a point P along a vertical rod through the center of the base. Draw lines from P to the four vertices of the prism base so that P is the apex of a rectangular pyramid. Question: How should the point P be selected so that the volume of the "base pyramid" equals the volume of a pyramid drawn using one of the sides as the base? Label the figure and draw PM, the altitude from P to the lateral side (i.e. the line segment perpendicular to the plane of the side). Let k be the distance along the rod from the base to P (i.e., the altitude of the pyramid). The volume of a pyramid is one third the area of the base times the altitude. Write equations for the volumes of the two pyramids to compare. What is the altitude of the pyramid with a lateral side of the prism as its base? Does this altitude length depend on the location of point P on the rod? Why? Question: Does it matter which of the sides is selected? That is, does the location of P depend on which of the four lateral sides is selected? Obviously, if the top of the prism is selected for the second base, the location of P to give equal volumes to the two pyramids is the midpoint of the rod.
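A worked check of the volume equations (an added sketch in LaTeX, assuming base dimensions $a \times b$, prism height $h$, and $P$ at height $k$ on the rod; these symbols are not part of the original problem): the altitude from $P$ to a lateral side of dimensions $a \times h$ is the fixed distance $b/2$ from the rod to the plane of that side, no matter where $P$ sits on the rod, so

$$V_{\text{base}} = \frac{1}{3}(ab)k, \qquad V_{\text{side}} = \frac{1}{3}(ah)\frac{b}{2}, \qquad \frac{1}{3}(ab)k = \frac{1}{3}(ah)\frac{b}{2} \;\Longrightarrow\; k = \frac{h}{2}.$$

The factor $b/2$ cancels against the $b$ in the base area, so the same $k = h/2$ results whichever of the four lateral sides is chosen, consistent with the midpoint answer for the top face.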
http://jwilson.coe.uga.edu/EMT725/Prism.Pyrm/Prism.Pyrm.html
4.03125
Washington in New York City Early in 1776, Charles Lee had begun fortifying the city of New York. He did not believe the city could be held against a protracted British assault. Located on the southern tip of Manhattan Island, the city was vulnerable to British seapower. Despite this strategic weakness, New York was too important economically and politically to be surrendered without a fight. As a thriving port with excellent harbors, it would give the British an invaluable base for military operations. It provided access to the Hudson River and the chain of waterways that acted as a highway and invasion route to and from Canada. British control of the Hudson River Valley would threaten overland communication between New England and the rest of the United States. Congress was determined to defend the city. Once Boston was secure, Washington began moving his army to New York. American Preparations for Defense: Washington was convinced that New York, a place of "infinite importance," could be kept from the enemy. He threw himself into the work of organizing its defenses. He eventually assembled a force that on paper numbered around 30,000, though two-thirds of these were militia and many who were on the rolls were not actually present and ready for service. The Americans continued the construction of fortifications on Manhattan. Washington ordered more fortifications to be built on Long Island, around the village of Brooklyn, which lay just across the East River from New York. The main American position was on Brooklyn Heights, which rose 100 feet above the surrounding ground. A number of gun emplacements were built along the North and East Rivers, to prevent British ships from passing up them and separating the Americans on the islands from each other or from the mainland. The British government demonstrated its commitment to victory with the force it deployed to America in 1776. It was far larger than anything seen in the French and Indian War. The British would not commit such an armada overseas again until the world wars of the twentieth century. Washington garrisoned Manhattan and Long Island and then awaited the British attack. In doing so, he left his army open to complete destruction. He had divided his force between two islands in the face of a numerically superior enemy who had command of the sea. The British navy soon demonstrated that it could cut Washington's forces off at will by sailing some ships up and down the North River, past the ineffectual American batteries. Despite this, Washington and his army stayed in place. They remained confident. Their only experience of war had been the siege of Boston, where the British had impaled themselves on Bunker Hill. Amateurs at war, they thought the enemy would act according to expectation. Howe Makes His Plans: On July 2, General Howe landed a force of 9,300 men on Staten Island. This became his base of operations. Howe's intentions in the New York campaign were initially what the Americans feared. He wanted New York as a launching point for operations up the Hudson to cooperate with forces from Canada. He also intended to use it as a convenient place from which to launch attacks on New England. In a letter to Lord Germain, Howe expressed his desire for a decisive battle that would destroy the American army and crush the spirit of resistance in the colonies. The arrival of his brother Lord Howe brought a diplomatic interlude. The Howes had been appointed peace commissioners.
Unfortunately, all they could offer the Americans was pardon for a prompt submission to royal authority. Abortive communications with the rebels went nowhere. In the meantime, General Howe's plan of campaign was shifting. When the last of his troops arrived, the campaigning season was well advanced. He was impressed by the number of American defenders in New York and their fortifications. Perhaps he was influenced by his brother, who hoped a display of British military superiority would bring the Americans to their senses without much bloodshed. When it came time to fight, Howe moved at a leisurely pace and passed up opportunities to destroy Washington's army, preferring to wage a war of maneuver aimed at capturing territory with a minimum of casualties.
http://www.netplaces.com/american-revolution/times-that-try-mens-souls/washington-in-new-york-city.htm
4.03125
The earliest inhabitants of Cameroon were probably the Bakas (Pygmies). They still inhabit the forests of the south and east provinces. Bantu speakers originating in the Cameroonian highlands were among the first groups to move out before other invaders. During the late 1700s and early 1800s, the Fulani, a pastoral Islamic people of the western Sahel, conquered most of what is now northern Cameroon, subjugating or displacing its largely non-Muslim inhabitants. Arrival of the Europeans: Although the Portuguese arrived on Cameroon's coast in the 1500s, malaria prevented significant European settlement and conquest of the interior until the late 1870s, when large supplies of the malaria suppressant quinine became available. The early European presence in Cameroon was primarily devoted to coastal trade and the acquisition of slaves. The northern part of Cameroon was an important part of the Muslim slave trade network. The slave trade was largely suppressed by the mid-19th century. Christian missions established a presence in the late 19th century and continue to play a role in Cameroonian life. From German Colony to League of Nations Mandates: Beginning in 1884, all of present-day Cameroon and parts of several of its neighbors became the German colony of Kamerun, with a capital first at Buea and later at Yaounde. After World War I, this colony was partitioned between Britain and France under a June 28, 1919 League of Nations mandate. France gained the larger geographical share, transferred outlying regions to neighboring French colonies, and ruled the rest from Yaounde. Britain's territory--a strip bordering Nigeria from the sea to Lake Chad, with an equal population--was ruled from Lagos. Struggle for Independence: In 1955, the outlawed Union of the Peoples of Cameroon (UPC), based largely among the Bamileke and Bassa ethnic groups, began an armed struggle for independence in French Cameroon. This rebellion continued, with diminishing intensity, even after independence. Estimates of deaths from this conflict vary from tens of thousands to hundreds of thousands. Becoming a Republic: French Cameroon achieved independence in 1960 as the Republic of Cameroon. The following year the largely Muslim northern two-thirds of British Cameroon voted to join Nigeria; the largely Christian southern third voted to join with the Republic of Cameroon to form the Federal Republic of Cameroon. The formerly French and British regions each maintained substantial autonomy. A One-Party State: Ahmadou Ahidjo, a French-educated Fulani, was chosen President of the federation in 1961. Ahidjo, relying on a pervasive internal security apparatus, outlawed all political parties but his own in 1966. He successfully suppressed the UPC rebellion, capturing the last important rebel leader in 1970. In 1972, a new constitution replaced the federation with a unitary state. The Road to Multi-Party Democracy: Ahidjo resigned as President in 1982 and was constitutionally succeeded by his Prime Minister, Paul Biya, a career official from the Bulu-Beti ethnic group. Ahidjo later regretted his choice of successor, but his supporters failed to overthrow Biya in a 1984 coup attempt. Biya won single-candidate elections in 1984 and 1988 and flawed multiparty elections in 1992 and 1997. His Cameroon People's Democratic Movement (CPDM) party holds a sizeable majority in the legislature following 2002 elections--149 deputies out of a total of 180. (Text from Public Domain material, US Department of State Background Notes.)
http://africanhistory.about.com/od/cameroon/p/CameroonHist.htm
4.03125
nose; nasal hairs; septum; adenoids; taste; smell; nostril; sneeze; germs; What to know about noses Your nose sits in the middle of your face. If you try to look at the end of it, you go cross-eyed. The only time you get to see it is in the mirror, and even then you can't really see it well from the side. However you probably already know some things about noses. - We mostly breathe through our noses. - We smell things through our noses. - Our sense of smell helps us to recognise tastes. - The two holes in your nose are called nostrils. - The end of your nose can be wiggled around with your finger. - Some people's noses can wiggle around by themselves! Here are some other things you might not know: - Between the nostrils there is a wall of very thin bone and cartilage, called the septum. - A nose bleed can occur when blood vessels in the septum break. This can be caused by colds, dry air, exercise, pollen, bumping your nose, or picking your nose. (If you get a lot of nose bleeds, see a doctor - especially if they won't stop easily.) - Behind your nose is a space called the nasal cavity. Did you know that a sneeze can travel at up to 160 kms an hour! Just think how fast and how far germs can travel from one sneeze! Grab a tissue quickly, before you spread the germs around! - You breathe air in through your nostrils. As the air comes in it is filtered by lots of little hairs just inside the nostrils, to remove any dust. They are called nasal hairs. - The inside of the nose is a bit wet and slippery at all times. This is so that the air can have moisture added to it before it goes down into your lungs. Your nose also warms the air that you breathe in. - The slippery stuff in your nose is called mucus [say mew-kus] - actually, lots of kids may call mucus 'snot'. - The warm moist air that you breathe in carries oxygen down into your lungs, and is then breathed back out through the nose (or mouth), carrying carbon dioxide that your body is getting rid of. If you breathe in something that irritates the little hairs, they might make you shoot it straight back out again. We call this ……… sneezing. (I knew that you would know that!) - At the back of the nose are lumps of tissue, called the adenoids, which are very much like the tonsils. They help fight any infection when germs get in. For some people, the nose is very important indeed. Some people have very sensitive noses, and can be found doing very unusual jobs. There are people who are employed to use their ‘noses' in the winemaking and perfume industries. Some people's noses can recognise thousands of different perfumes, herbs, spices and flowers. Other people are employed to sniff out fumes and odours around factories to see how much pollution is being caused. Of course, dogs have an even better sense of smell, and their noses are used in all sorts of ways to help humans. There are rescue dogs, drug-sniffing dogs and tracking dogs. These dogs all have a very keen sense of smell. All these noses have to be trained to do their very important jobs! In the story of Pinocchio, the wooden boy had a nose which grew longer every time he told a lie. Wouldn't it be great if that happened in real life? Cleopatra, the famous Egyptian queen, was known for her great beauty AND her big nose! Cyrano De Bergerac, a character in the book and film of that name, was famous for the size of his nose. Tycho Brahe, a famous astronomer from Denmark, had an unusual nose – the end of it was made of gold. He lost the real end of his nose in a sword fight.
There are some famous actors and actresses who have also been known as ‘The Nose', but we are too kind to mention their names. You shouldn't call attention to someone's appearance and embarrass them, should you? We know that it's the character of a person that's important, not how they look. If you have a cold, try to stay away from others so that they don't catch it from you. Cover your nose when you sneeze, so that the germs don't fly everywhere. If you don't have a tissue, use your hand, but then wash your hand before you touch anything else. It's a good idea to use tissues to blow your nose, and then put them in a bin or flush them down the toilet. If you stick them up your sleeve or drop them anywhere, germs can spread when other people have to pick them up. What kids say about noses: My dad always said, "Keep your nose clean!" when I was going out. He meant – stay out of trouble. We have a few sayings that are about noses. How many do you know? "Don't be nosey!" Matt "Keep your nose out of it." Brittany "She's always got her nose in a book." Rachelle "He's always poking his nose into other people's business." Simon "The journalist had a nose for a good story." Caitlin Can you think of any more? We've provided this information to help you to understand important things about staying healthy and happy. However, if you feel sick or unhappy, it is important to tell your mum or dad, a teacher or another grown-up.
http://www.cyh.sa.gov.au/HealthTopics/HealthTopicDetailsKids.aspx?p=335&np=152&id=1686
4
"ZONATION AND DISTRIBUTION PART II" nutrients:-organic molecules that help organisms grow Once your map is complete, it will become apparent that most of the marine life is concentrated nearshore. Although some very large animals live in the oceanic zone, their numbers are small in comparison to the size of it. Ask the students to describe the trend they see. This trend is due mainly to the abundance of light and nutrients in the coastal zone. It is much shallower than the oceanic zone, sunlight penetrates all the way to the ocean floor. This is important because phytoplankton, which are at the bottom of almost all food chains, require this light for photosynthesis. In photosynthesis, sunlight is captured by the plant and the energy particles in sunlight (photons) are used to drive a chemical reaction that produces sugar. This sugar is the food for the plant. Why don't we see an abundance of life in the upper layers of the oceanic zone also? Because light is not the only important factor. Phytoplankton also require inorganic and organic nutrients like iron and nitrogen. These compounds are most abundant nearshore, because of runoff from the land. Without these nutrients, phytoplankton do not grow as well. As we said above, phytoplankton are at the bottom of the food chain. This means that herbivores eat them, other animals eat the herbivores, and so on. The more phytoplankton that are available, the more herbivores there will be to eat them, and the more herbivores there are, the more carnivores to eat them. Ultimately, if there are lots of phytoplankton, there will be lots of organisms else too. Because phytoplankton are so important, scientists have lots of ways of measuring their abundance. One fairly new method uses images of the ocean taken from satellites in space. Satellites take a picture of the ocean and send it back to earth electronically, like a television signal. These satellites have sensors in them that can detect the specific pigments that phytoplankton use for photosynthesis (the most common pigment is chlorophyll). They then convert the concentration of these pigments into a color. When a scientist looks at a satellite image of the ocean, he or she can tell how much phytoplankton is in the water by the color. Red usually means a lot of phytoplankton and blue/purple means very few phytoplankton. These colors are of course not the actual color of the phytoplankton, but instead are a "false color" used to relay information.
http://www.usc.edu/org/seagrant/Education/IELessons/Unit3/Lesson4/U3L4VB.html
4.21875
Science Fair Project Encyclopedia The German engineer Maximilian Schuler discovered the principle known as Schuler tuning, which is fundamental to the operation of a gyrocompass or inertial guidance system that will be operated near the surface of the earth. Schuler's cousin Hermann Anschütz-Kaempfe founded a firm near Kiel, Germany, to manufacture navigational devices using gyroscopes in 1905, and Schuler joined the firm in 1906. For many years they struggled with the problem of maintaining a vertical reference as a craft moved around on the surface of the earth. In 1923 Schuler published his discovery that if the gyrocompass was tuned to have an 84-minute period of oscillation (the Schuler period) then it would resist errors due to sideways acceleration of the ship or aircraft in which it was installed. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
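The 84-minute figure is the period of a pendulum whose length equals the Earth's radius. A quick check in LaTeX, using standard values $R \approx 6.371\times 10^{6}$ m and $g \approx 9.81$ m/s² (values supplied here for illustration, not from the article):

$$T = 2\pi\sqrt{\frac{R}{g}} = 2\pi\sqrt{\frac{6.371\times 10^{6}\ \text{m}}{9.81\ \text{m/s}^2}} \approx 5060\ \text{s} \approx 84.4\ \text{minutes}.$$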
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Max_Schuler
4.21875
In order to see the stars better, astronomers use a variety of telescopes. The first kind was a refracting ‘scope that gave Galileo his shocking discovery of moons orbiting Jupiter. A refracting telescope works with two lenses that are held steady at the correct relative distances. Often a solid tube is used to keep the lenses in place (not to keep the light inside!); however, an open web of trusses is just as effective. The larger of the two lenses, located at the end away from the viewer and pointed toward the sky, is the objective lens; the smaller, next to the viewer, is the eyepiece lens. The two lenses work together to gather and focus the light. Light radiates out from its source in a straight line until it hits an object that stops or bends it. When that object is transparent (one that lets light pass through it), the light is bent—or refracted—in transit. If the object is flat like a window, the refracted light is displaced slightly but resumes its path on a new plane parallel to its original path. The path of the exiting light can be modified with a curved transparent object, called a lens, that will direct and concentrate the light at a point. The distance between that point and the objective is the focal length of the objective. The smaller eyepiece lens is located beyond the focal point of the objective at the distance where it will concentrate and restraighten the original light. That is its focal point. The eyepiece lens will also magnify the image that was brought to focus by the objective. The amount of magnification is related to the focal length of the eyepiece. That is, magnification equals the objective focal length divided by the eyepiece focal length: If the objective focal length is 100 inches and the eyepiece is one inch, the magnification is 100x. The bigger the lenses the more light the ‘scope collects. The biggest refractor objective, at the Yerkes telescope, has a diameter of 40 inches; the whole ‘scope is housed in a 63½ foot tube. Keeping a 63-foot ‘scope rigid is only one problem for refractors. It’s also hard to make a large piece of glass without any air bubbles. When light hits a bubble, it bounces slightly and therefore is distorted. Another problem is that some wavelengths of light won’t go through even the purest glass. And then, because different colors have different focal lengths, a second lens has to be added to bring the colors to the same focal point. Largely because of the problem of the blurred colors in his ‘scope, Isaac Newton came up with another solution to the question of how to capture and concentrate the light from a star.
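Written as an equation (LaTeX added here for clarity; $f_{\text{obj}}$ and $f_{\text{eye}}$ are our own symbols for the two focal lengths):

$$M = \frac{f_{\text{obj}}}{f_{\text{eye}}}, \qquad \text{e.g. } M = \frac{100\ \text{in}}{1\ \text{in}} = 100\times.$$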
http://www.bpastro.org/index.php?page=refracting-telescopes
4.25
According to our best models for the formation of the Solar System, comets and meteorites bombarded the planets in their early days. This bombardment brought water to Earth, Mars, and the other terrestrial worlds. But Mercury is an airless body and is much closer to the Sun, so its daytime surface temperature reaches 426°C. That's hot enough to make sure that any exposed surface water would have evaporated long ago, along with other volatile substances. However, a new set of measurements from the MESSENGER probe has revealed the probable presence of water ice in shadowed craters near Mercury's poles. Using laser and radar reflection along with measurements of neutron emissions, MESSENGER scientists found patches of reflective material that alternate with much darker regions than the average planet surface. This data suggests the presence of both ice and complex organic molecules, both of them probably left over from when the Solar System was young. The presence of water on Mercury had been suspected since as early as 1992, due to Earth-based radar reflection experiments. These measurements were ambiguous enough that more direct observation was desirable. Enter MESSENGER: the MErcury Surface Space ENvironment, GEochemistry, and Ranging probe, which has been orbiting Mercury since 2011. MESSENGER carries a variety of instruments for measuring surface features and magnetic fields, along with more ordinary cameras for imaging. Among those devices are radar and laser range-finders, which (in addition to registering distance) measure the reflective properties of the surface. It's also got a neutron detector. Neutrons are emitted by radioactive materials on Mercury, including those that are made radioactive by cosmic rays. Measuring the neutron spectrum (numbers and energy) helps determine the chemical composition of surface materials, particularly the hydrogen content. The results from both the reflection and neutron analyses were consistent: several craters in Mercury's polar regions provide sufficient shadow for stable water ice. The large craters named Prokofiev and Kandinsky were both found to contain significant radar-bright (RB) patches, indicating highly reflective materials. (Craters on Mercury are commonly named for famous artists, authors, composers, and the like. As a fan of both Prokofiev and Kandinsky, I approve.) The size of the reflective patches matched the total proportion of each crater that lies in permanent shadow. Unlike Earth, Mercury has almost no axial tilt, so it doesn't experience seasons. This leaves many deep craters in the polar regions untouched by sunlight and means that if they're shadowed now, they will generally remain that way. The radar reflection study found that some of Mercury's ice is in the form of frozen ponds. Other ice—still detectable through scattered light and neutron emission—is covered in dark, highly nonreflective material up to 20cm deep. The researchers determined this dark layer contains far less hydrogen than should be present if it were a water-saturated material. Complex organic—meaning carbon-containing—molecules are both dark in color and relatively common components of asteroids, meteorites, and comets. The clear discovery of water ice on Mercury meshes nicely with similar finds on the planetoid/asteroid Vesta and the Moon. 
The abundance of both water and organic materials is consistent with models of the early Solar System, in which bombardment by comets and meteorites deposited both types of molecules onto the terrestrial planets and moons.
http://arstechnica.com/science/2012/11/craters-on-mercury-are-oases-for-water-ice-organic-molecules/
4.125
1. The nature of the learning process: This process is active, volitional, and internally mediated. It is a process of discovering and constructing meaning from information and experience filtered through the learners’ unique perceptions, thoughts, and feelings (McCombs & Whisler, 1997). Goals of the learning process: “The learner seeks to create meaningful, coherent representations of knowledge regardless of the quantity and quality of data available” (McCombs & Whisler, 1997, p. 5). Learning and improving; clear, specific, reasonable, moderately challenging learning tasks. The construction of knowledge: This learning principle reflects “concerns with how individuals build up certain elements of their cognitive or emotional apparatus” (Phillips, 1997). Patterns and connections. Higher order thinking: This means to find patterns by comparing, contrasting, classifying, and generalizing information. The goal is to form conclusions based on evidence (Eggen, 1996). Motivational influences on learning: These reflect the “importance of learner beliefs, values, interests, goals, expectations for success, and emotional states of mind in producing either positive or negative motivations to learn” (McCombs & Whisler, 1997, p. 75). Intrinsic motivation to learn: This is “the natural tendency to seek out and conquer challenges as we pursue personal interests and exercise capabilities” (Deci & Ryan, as cited in Woolfolk, 2001, p. 368). Enjoyment of learning. Characteristics of motivation-enhancing learning include curiosity, creativity, and higher order thinking, which are stimulated by relevant, authentic learning tasks of optimal difficulty and novelty for each student (Schurr, 1994, p. 64). Tasks that activate students’ curiosity; strategies and activities that develop and challenge students’ creativity. Developmental constraints and opportunities: Individuals progress through stages of physical, intellectual, emotional, and social development that are a function of unique genetic and environmental factors (McCombs & Whisler, 1997). Social and cultural diversity: Learning is facilitated by social interactions and communication with others in flexible, diverse (in age, culture, family background, etc.), and adaptive instructional settings (McCombs & Whisler, 1997). Civil involvement with others. Social acceptance, self-esteem, and learning: Learning and self-esteem are heightened when individuals are in respectful and caring relationships with others who see their potential, genuinely appreciate their unique talents, and accept them as individuals (McCombs & Whisler, 1997). Individual differences in learning: Students have different capabilities and preferences for learning modes and strategies (McCombs & Whisler, 1997): their immediate environment, their own emotionality, their sociological preferences, their physiological characteristics, and their processing inclination. These consist of personal beliefs, thoughts, and understandings that result from prior learning and interpretations and become the individual's basis for constructing reality and interpreting life experiences (McCombs & Whisler, 1997). “A student-centered curriculum teaches each learner to select and sequence his own activities and materials (individualization), arranges for students to center on and teach each other (interaction); and interweaves all symbolized and symbolizing subjects so that the student can effectively synthesize knowledge structures in his own mind (integration)” (Moffett & Wagner, 1992, p. 24). In-depth content knowledge. References: Eggen, P. D., & Kauchak, D. P. (1996). Strategies for teachers: Teaching content and thinking skills. Needham Heights, MA: Allyn & Bacon. McCombs, B. L., & Whisler, J. S. (1997). The learner-centered classroom and school. San Francisco: Jossey-Bass. Moffett, J., & Wagner, B. J. (1992). Student-centered language arts, K-12. Portsmouth, NH: Boynton/Cook Publishers. National Center for Research on Teacher Learning. Constructivist classrooms, problem-based learning, and the construction of understanding and meaning. Phillips, D. C. (1997). How, why, what, when, and where: Perspectives on constructivism and education. Issues in Education: Contributions from Educational Psychology, 3, 151-194. Schurr, S. L. (1994). Dynamite in the classroom: A how-to handbook for teachers. Columbus, OH: NMSA. Woolfolk, A. (2001). Educational psychology (8th ed.). Needham Heights, MA: Allyn & Bacon.
http://www.intime.uni.edu/model/center_of_learning_files/checklist.html
4.15625
Compounding Functions and Graphing Functions of Functions. We know that functions map numbers to other numbers, so what happens when you have a function of a function? Welcome to functions within functions, the realm of composite functions! Recall that functions are like a black box; they map numbers to other numbers. If y is a function of x, then we write it as y=f(x). And for this function, we have an input, x, and an output, y. So x is our independent variable, and y is our dependent variable. Our input will be anywhere within the domain of the function, and our output will be anywhere within the range of the function. So perhaps it's not too much of a stretch to know that you can combine functions into a big function. In math, this is known as a composition of functions. Here you start with x, and you use it as input to a function, y=f(x). And you're going to put that as input into a second function, g. So if we have a function y=f(x), and we want to plug it into z=g(y), we can end up with z=g(f(x)). This is a composite function. When you're looking at composite functions, there are two main points to keep in mind. First, you need to evaluate the function from the inside out. You need to figure out what f(x) is before you figure out what g is. Say we have the function f(x)=3x, and we have another function g(x)= 4 + x. I'm going to find z when x=2. We're going to find f(x) when x=2 for f(2)= 3 * 2, which is 6. Saying g(f(2)) is like saying g(6). We do the same thing and say g(6) = 4 + 6. Well, that's 10, so z is just 10. The second thing to keep in mind is that g(f(x)) does not equal f(g(x)). There are some cases where it can, but in general, it does not. So if we use f(x)=3x and g(x)=x + 4, then let's look at the case where x=0. Then g(f(0)), where f(0) is 0 * 3 - well that's just zero, so I'm looking at g(0). I plug zero in for x here, and it's just 4. Now, if I look at f(g(0)), that's like saying f(4), and that gives me 12. f(g(0))=12, and g(f(0))=4. Those are not the same. So, g(f(x)) does not equal f(g(x)). Domain and Range of Composite Functions What happens to the domain and range of a composite function? Well, if we have the function g(x), we have some domain and some range for g(x). Separately we've got a domain for f(x) and a range for f(x). If I write f(g(x)), then the output of g(x), which is the range, has to be somewhere in the domain of f(x). Otherwise, we could get a number here that f(x) really doesn't know what to do with. What does all this really mean? Consider the function f(x)=sin(x). The domain of sin(x) is going to be all of x, and the range is going to be between -1 and 1. Now let's look at the function g(x) equals the absolute value of x, or g(x)=abs(x). Again the domain is all of x, and the range is everything greater than or equal to 0. If I take those two - here's my range of sin(x) - what happens to g(f(x))? So g is the absolute value, so I'll have abs(sin(x)). What's the domain and range of that composite function? If I'm graphing g(f(x)), I'm graphing the absolute value of sin(x), so the graph looks like this. I have a range here that goes from 0 to 1 and a domain that covers all of x. Well, this makes sense. What if I look at f(g(x)), so the function is going to be sine of the absolute value of x, sin(abs(x)).
For the absolute value of x, you can take anything as input, so the domain is going to be all values of x, and the range of abs(x) is going to be zero and up, so zero or any positive number. Now, sine can take anything, so the range of abs(x) is within the domain of sin(x), but what happens to the output? What is the range of this composite function? Let's graph it - is that unexpected? Now the range is in between -1 and 1, which just so happens to be the range of f(x). To recap, we know functions map numbers to other numbers, like y=f(x). The domain and range tell us the possible values for the input and output of a function. Composite functions take the output of one function and use it as input for another function, and we write this f(g(x)). We're going to evaluate f(g(x)) from the inside out, so we're going to evaluate g(x) before we evaluate f(x). And we also know that f(g(x)) does not equal g(f(x)).
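A quick numerical check of these points, as a short Python sketch (added here; the definitions mirror the lesson's f and g, and the names are our own):

import math

def f(x):
    # f(x) = 3x, as in the lesson
    return 3 * x

def g(x):
    # g(x) = x + 4, as in the lesson
    return x + 4

# Evaluate from the inside out: g(f(2)) = g(6) = 10.
print(g(f(2)))            # 10
# Order matters: f(g(0)) = f(4) = 12, but g(f(0)) = g(0) = 4.
print(f(g(0)), g(f(0)))   # 12 4
# abs(sin(x)) is never negative, while sin(abs(x)) can be:
print(abs(math.sin(4.0))) # ~0.757
print(math.sin(abs(4.0))) # ~-0.757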
http://education-portal.com/academy/lesson/compounding-functions-and-graphing-functions-of-functions.html
4
In calculus, an antiderivative or primitive function of a given real valued function f is a function F whose derivative is equal to f, i.e. F ' = f. The process of finding antiderivatives is antidifferentiation (or indefinite integration). For example: F(x) = x³ / 3 is an antiderivative of f(x) = x². As the derivative of a constant is zero, x² will have an infinite number of antiderivatives, such as (x³ / 3) + 0 and (x³ / 3) + 7 and (x³ / 3) - 36; thus the antiderivative family of x² is collectively referred to by F(x) = (x³ / 3) + C, where C is any constant. Essentially, related antiderivatives are vertical translations of each other, each graph's location depending upon the value of C. Every continuous function f has an antiderivative, and one antiderivative F is given by the integral of f with variable upper boundary: F(x) = ∫_a^x f(t) dt. There are also some non-continuous functions which have an antiderivative, for example f(x) = 2x sin (1/x) - cos(1/x) with f(0) = 0 is not continuous at x = 0 but has the antiderivative F(x) = x² sin(1/x) with F(0) = 0. There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations); exp(x²) is one such example. Techniques of integration Finding antiderivatives is considerably harder than finding derivatives. We have various methods at our disposal: - the linearity of integration allows us to break complicated integrals into simpler ones, - integration by substitution, often combined with trigonometric identities - integration by parts to integrate products of functions, - the inverse chain rule method, a special case of integration by substitution - the method of partial fractions in integration allows us to integrate all rational functions (fractions of two polynomials), - the natural logarithm integral condition, - the Risch algorithm, - integrals can also be looked up in a table of integrals. - When integrating multiple times, we can use certain additional techniques, see for instance double integrals and polar co-ordinates, the Jacobian and Stokes' theorem. - If a function has no elementary antiderivative (for instance, exp(x²)), an area integral can be approximated using numerical integration.
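As a worked illustration of antidifferentiation in software (an added sketch using the SymPy library, assumed to be installed; this is not part of the original article):

from sympy import symbols, sin, integrate

x = symbols('x')

# The antiderivative family of x**2 is x**3/3 + C; SymPy omits the constant C.
print(integrate(x**2, x))   # x**3/3

# Check the non-continuous example: d/dx [x**2 * sin(1/x)] = 2*x*sin(1/x) - cos(1/x).
F = x**2 * sin(1/x)
print(F.diff(x))            # 2*x*sin(1/x) - cos(1/x)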
http://www.encyclopedia4u.com/a/antiderivative.html
4
From Wikipedia, the free encyclopedia A visual evoked potential (VEP) is an evoked potential caused by sensory stimulation of a subject's visual field and is observed using electroencephalography. Commonly used visual stimuli are flashing lights, or checkerboards on a video screen that flicker from black-on-white to white-on-black (inverting contrast). Visual evoked potentials are very useful in detecting blindness in patients that cannot communicate, such as babies or non-human animals. If repeated stimulation of the visual field causes no changes in EEG potentials, then the subject's brain is probably not receiving any signals from his/her eyes. Other applications include the diagnosis of optic neuritis, which causes the signal to be delayed. Such a delay is also a classic finding in multiple sclerosis. Visual evoked potentials are furthermore used in the investigation of basic functions of visual perception. The term "visual evoked potential" is used interchangeably with "visually evoked potential". It usually refers to responses recorded from the occipital cortex. Sometimes, the term "visual evoked cortical potential" (VECP) is used to distinguish the VEP from retinal or subcortical potentials. The multifocal VEP is used to record separate responses for individual visual field locations. This article is based on an article from Wikipedia, the free encyclopedia and is available under the terms of GNU Free Documentation License. In the Wikipedia there is a list with all authors of this article available.
http://www.lumrix.net/health/Visual_evoked_potential.html
4.09375
Introduction to Division Part 2 Introduction to Decimals, Part I Introduction to Interest Introduction to Present Value - Khan Academy Introduction to Balance Sheets Introduction to Compound Interest Introduction to Newton's Three Laws - Lesson 1 An Introduction to Primates An Introduction to Humanism An Introduction to Christianity as a World Religion Introduction to Sikhism Introduction to Sikhism Introduction to Buddhism An Introduction to Adjectives Introduction to Order of Operations Part 1 Introduction to Order of Operations Part 2 Introduction to Order of Operations Introduction to the Dissection of a Virtual Frog An Introduction to Voicethreads Introduction to Using Voice Thread This is part one of a two-part video. This video starts with a review of place value, then proceeds to putting a decimal into a word statement, then writes the decimal as a fraction. Many examples are done. Then it moves on to whole numbers and decimals, writing them out, and showing the fraction equivalent. A few more examples are done. Video is good quality and good for all students as a review or initial learning of the topic. Explains that interest is rent for money. Simple versus compound interest described. Production is basic, using computer software and a calculator. However the explanation is clear and concise. Looks at present value as being a choice between money now and its value later. This production is demonstrated with computer software and a calculator. The explanation is clear and understandable. Author: Salman Khan In this segment, the instructor, Sal Khan of Khan Academy, uses a home purchase to illustrate assets, liabilities, and owner's equity. The instructor uses computer software for demonstration. The explanation is clear and understandable. This is an introduction to compound interest. The instructor uses computer software for instruction. The explanation is clear and understandable. This is a Khan Academy video. This NASA video segment explores how Newton's Laws of Motion apply to the development and operation of airplanes. Viewers watch an instructor at NASA's National Test Pilot School as he describes and then demonstrates why seatbelts are an important force on pilots; what it means to pull 2, 4 and even 6 g in a jet; and how the thrust of a jet engine causes an aircraft to move forward. Formulas are presented onscreen along with the calculations discussed in the segment. This video covers 41 species of primates ranging from the 300-gram pygmy slow loris to the 200-kilogram gorilla. The animals are organized by taxonomic categories. Run time 03:53. This video is from a lecture in Canada in 2008. It is a brief introduction to Humanism (a non-believing, human-centered outlook) and its varied aspects, including Atheism, Freethought, Agnosticism, Transhumanism, and Skepticism. Humanists believe that they can be good without God. Recorded in left audio channel only. (7:57) A recorded speech from Dr. David Perrin, of St. Jerome's University in Waterloo, Ontario, who explains what Christianity is, its history and its beginnings. It is only recorded on one channel, so it comes out your left speaker only. As well, there is a biography of Dr. Perrin superimposed over his image from 1:37 to 2:37 on the video. This video is a good overview of Sikhism, explained in an articulate, cogent way. Most of its major tenets are described.
The only problem with the video is that it was recorded on the left channel only, so on headphones it sounds out of your left side only. (9:40) An animated introduction to core Sikh beliefs. The music is taken from a track produced by the late Singh Kaur and Amar Singh. Singh Kaur was a talented vocal artist and a Caucasian western convert to Sikhism. Her music has been compared to the Celtic sounds of Enya. Sadly, Singh Kaur passed away a few years ago (may Waheguru bless her soul in eternal bliss). This video introduces Buddhism and talks about its roots. There is historical background and reference to its establishment in China. An adjective is a describing word. This video gives a brief explanation of adjectives along with sentences, illustrations, and examples that include adjectives. Run time 0:53. This video shows the four parts of the order of operations. The first part is to perform the operations within the parentheses. Some examples are shown. The second part is to simplify any numbers that have exponents. Examples are once again shown. The third part is to perform multiplication and division in order, working from left to right. Many examples are shown. The fourth part is to perform addition and subtraction in order, working from left to right. Examples are shown. This is a continuation of part 1. The problem they are working on was introduced in part 1, and they continue to work through all the rules of the order of operations with further examples. The video is good quality and suitable for all students, as a review or for initial learning of the concept. This video starts off with a black screen because the narrator uses it as a 'chalkboard'. In an easy, conversational tone, the instructor offers an introduction to the order of operations. This short, 46-second clip demonstrates the initial three cuts made to the skin of the body cavity to reveal the frog's muscles. This is a small clip from the Digital Frog virtual dissection program. This video shows what a VoiceThread is and how to use it. This tutorial introduces you to the website "Voice Thread." Voice Thread allows you to create an online discussion with your students on a topic, picture, or video. You will learn how to create an account, upload a thread, and make a comment on it.
http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Statistics%20-%20an%20intuitive%20introduction%20:%20normal%20distributi&start=2660&end=2680
4.3125
Stephen J. Mraz, Senior Editor, firstname.lastname@example.org Control surfaces on aircraft — the moving elevators, flaps, and ailerons on the trailing edges of the wings and tail — have long been used by pilots to control a plane's pitch, roll, low-speed lift, and climb or descent rates. But could they be replaced with moving blades of air? That's the question researchers at several U.K. universities tried to answer when they designed and built the Demon UAV. If so, it would let aerospace engineers eliminate a host of moving parts subject to wear, icing, and failure. It could also lead to stealthier planes and drones because fluidic controls don't leave gaps in the surface of the aircraft, nor do they move to present a larger or changing cross section to radars the way movable ailerons and elevators do. The major goal of the research project was to see if fluidic controls, those based on the Coanda Effect, could exert control forces on an aircraft. The Coanda Effect, discovered in the 1930s, states that a fluid or gas will hug a convex surface if directed tangent to its surface, and this flow can entrain surrounding gas or fluid to follow that surface as well. This means that air blown along a wing will follow the wing's contours. So air blown along the top surface of a wing ends up increasing lift, which is perpendicular to the surface of the wing. And air blown along the bottom of the wing also increases lift, but in the downward direction. On the Demon, researchers used what they dubbed fluidic controls — devices that could blow air along either the top or bottom surface of the wing's trailing edge — to replace ailerons and elevators on a delta-winged drone. This let them manipulate lift and drag over the wing and both steer and control the aircraft. They used a pair of similar devices to exert the same Coanda Effect on a four-sided engine nozzle. It gives ±90° of vectored pitch control with a nozzle that doesn't move. The U.K. team based their new drone on a previously successful UAV, the Eclipse, but had to increase its size by 15% and add several new features. For example, to ensure there was enough pressurized air to send through the fluidic controls — especially to the thrust-vectoring nozzle during landings, when engine settings are low — the team added an auxiliary power unit mounted in the nose. It consists of a small turbine, fueled by the same aviation gas the main jet engine burns, which turns an air compressor. As the weight of the aircraft climbed by at least 12 lb, the team also exchanged the Olympus jet engine and its 51 lb of thrust for a Titan engine and 88 lb of thrust. To keep expenses down, landing gear was fashioned out of model aircraft wheels and tires, along with brakes, shock absorbers, and cables from mountain bikes. The Demon's airframe was constructed of carbon-fiber composites, keeping the aircraft light and responsive to small control forces. For more structural support and to provide a skin, the team applied biaxial woven fabric, which easily drapes to make complex curved surfaces without wrinkles. Unidirectional tape was added in areas that needed additional stiffness. All fabrics and tapes were 0.25 mm thick, a good compromise between layup times and the ability to adjust thicknesses. The final structure relies solely on adhesives — there are no mechanical fasteners. The floor of the payload bay was built from an aluminum alloy. It provides a place for the nose landing gear, APU, and batteries to be mounted.
The trailing edges of the Demon's wings were modified to carry both conventional and fluidic ailerons and elevators. Each surface uses fluidic controls with two submillimeter slots parallel to the trailing edge — one above the trailing edge and one below. Radio-controlled hardware lets researchers switch between the two during flight tests. The APU provides air for the wing's fluidic controls at 0.5 bar gauge and 80°C. The thrust-vectoring controls use air pressurized to 4 bar gauge and 250°C bled from the main engine. But this air first passes through a bypass shroud, cooling the 800°C engine air down to 250°C to protect the aircraft's surface. Higher-pressure air is used for thrust vectoring because the researchers discovered that blades of air, or fluidic controls, should have twice the velocity of the primary airflow being manipulated. How well the fluidic controls work is still being evaluated, and further tests are planned.
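As a rough worked illustration of that twice-the-velocity rule of thumb (the landing airspeed below is an assumed, illustrative figure, not taken from the article): if the free-stream flow over the wing during a low-speed approach were about $V_\infty \approx 40\ \mathrm{m/s}$, the control jets would need roughly

$$V_{\mathrm{jet}} \gtrsim 2\,V_\infty \approx 80\ \mathrm{m/s},$$

which is consistent with the wing slots needing a dedicated APU air supply, and with the much faster engine exhaust requiring the hotter, higher-pressure 4-bar bleed air for vectoring.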
http://machinedesign.com/news/could-future-aircraft-fly-without-traditional-control-surfaces-such-ailerons-flaps-and-elevator
4.4375
An ice core is a core sample that is typically removed from an ice sheet, most commonly from the polar ice caps of Antarctica and Greenland or from high mountain glaciers elsewhere. As the ice forms from the incremental build-up of annual layers of snow, lower layers are older than upper ones, and an ice core contains ice formed over a range of years. The properties of the ice and the recrystallized inclusions within the ice can then be used to reconstruct a climatic record over the age range of the core, normally through isotopic analysis. This enables the reconstruction of local temperature records and the history of atmospheric composition. Ice cores contain an abundance of information about climate. Inclusions in the snow of each year remain in the ice, such as wind-blown dust, ash, bubbles of atmospheric gas and radioactive substances. The variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers. These include (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires. The length of the record depends on the depth of the ice core and varies from a few years up to 800 kyr (800,000 years) for the EPICA core. The time resolution (i.e. the shortest time period which can be accurately distinguished) depends on the amount of annual snowfall, and decreases with depth as the ice compacts under the weight of layers accumulating on top of it. Upper layers of ice in a core correspond to a single year or sometimes a single season. Deeper into the ice the layers thin and annual layers become indistinguishable. An ice core from the right site can be used to reconstruct an uninterrupted and detailed climate record extending over hundreds of thousands of years, providing information on a wide variety of aspects of climate at each point in time. It is the simultaneity of these properties recorded in the ice that makes ice cores such a powerful tool in paleoclimate research. Structure of ice sheets and cores Ice sheets are formed from snow that survives the summer, so the temperature at such locations usually does not warm much above freezing. In many locations in Antarctica the air temperature is always well below the freezing point of water. If summer temperatures do get above freezing, any ice core record will be severely degraded or completely useless, since meltwater will percolate into the snow. The surface layer is snow in various forms, with air gaps between snowflakes. As snow continues to accumulate, the buried snow is compressed and forms firn, a grainy material with a texture similar to granulated sugar. Air gaps remain, and some circulation of air continues. As snow accumulates above, the firn continues to densify, and at some point the pores close off and the air is trapped. Because the air continues to circulate until then, the ice age and the age of the gas enclosed are not the same, and may differ by hundreds of years. The gas age–ice age difference is as great as 7 kyr in glacial ice from Vostok. Under increasing pressure, at some depth the firn is compressed into ice. This depth ranges from a few tens of meters to typically 100 m for Antarctic cores. Below this level material is frozen in the ice. Ice may appear clear or blue. Layers can be visually distinguished in firn and in ice to significant depths.
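A back-of-the-envelope sketch of why the gas age–ice age difference can be so large (the Vostok accumulation rate below is an approximate literature value, assumed here for illustration): the trapped gas is younger than the surrounding ice by roughly the time it takes snow to be buried to the pore close-off depth,

$$\Delta\mathrm{age} \approx \frac{z_{\mathrm{close\text{-}off}}}{\dot{a}} \approx \frac{100\ \mathrm{m\ (ice\ equivalent)}}{0.02\ \mathrm{m/yr}} \approx 5000\ \mathrm{yr},$$

which is the same order of magnitude as the 7 kyr difference quoted above for glacial ice at Vostok.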
In a location on the summit of an ice sheet where there is little flow, accumulated snow tends to move down and away from the summit, creating layers with minimal disturbance. In a location where underlying ice is flowing, deeper layers may have increasingly different characteristics and distortion. Drill cores near bedrock are often challenging to analyze due to distorted flow patterns and composition that is likely to include materials from the underlying surface. Characteristics of firn The layer of porous firn on Antarctic ice sheets is 50–150 m deep. It is much less deep on glaciers. Air in the atmosphere and in the firn is slowly exchanged by molecular diffusion through pore spaces, because gases move toward regions of lower concentration. Thermal diffusion causes isotope fractionation in firn when there is rapid temperature variation, creating isotope differences which are captured in bubbles when ice is created at the base of the firn. There is gas movement due to diffusion in firn, but not convection except very near the surface. Below the firn is a zone in which seasonal layers alternately have open and closed porosity. These layers are sealed with respect to diffusion. Gas ages increase rapidly with depth in these layers. Various gases are fractionated while bubbles are trapped where firn is converted to ice. A core is collected by separating it from the surrounding material. For material which is sufficiently soft, coring may be done with a hollow tube. Deep core drilling into hard ice, and perhaps underlying bedrock, involves using a hollow drill which actively cuts a cylindrical pathway downward around the core. When a drill is used, the cutting apparatus is on the bottom end of a drill barrel, the tube which surrounds the core as the drill cuts downward around the edge of the cylindrical core. The length of the drill barrel determines the maximum length of a core sample (6 m at GISP2 and Vostok). Collection of a long core record thus requires many cycles of lowering a drill/barrel assembly, cutting a core 4–6 m in length, raising the assembly to the surface, emptying the core barrel, and preparing the drill/barrel for another run. Because deep ice is under pressure and can deform, for cores deeper than about 300 m the hole will tend to close if there is nothing to supply back pressure, so the hole is filled with a fluid to keep it from closing. The fluid, or mixture of fluids, must simultaneously satisfy criteria for density, low viscosity, frost resistance, workplace safety and environmental compliance. The fluid must also satisfy other criteria, for example those stemming from the analytical methods employed on the ice core. A number of different fluids and fluid combinations have been tried in the past. Since GISP2 (1990–1993) the US Polar Program has utilized a single-component fluid system, n-butyl acetate, but the toxicology, flammability, aggressive solvent nature, and long-term liabilities of n-butyl acetate raise serious questions about its continued application. The European community, including the Russian program, has concentrated on the use of a two-component drilling fluid consisting of a low-density hydrocarbon base (brown kerosene was used at Vostok) boosted to the density of ice by the addition of a halogenated-hydrocarbon densifier. Many of the proven densifier products are now considered too toxic, or are no longer available due to efforts to enforce the Montreal Protocol on ozone-depleting substances. In April 1998, filtered lamp oil was used as a drilling fluid on the Devon Ice Cap.
In the Devon core it was observed that below about 150 m the stratigraphy was obscured by microfractures. Core processing Modern practice is to ensure that cores remain uncontaminated, since they are analysed for trace quantities of chemicals and isotopes. They are sealed in plastic bags after drilling and analysed in clean rooms. The core is carefully extruded from the barrel; often facilities are designed to accommodate the entire length of the core on a horizontal surface. Drilling fluid is cleaned off before the core is cut into 1–2 m sections. Various measurements may be taken during preliminary core processing. Current practices to avoid contamination of ice include: - Keeping ice well below the freezing point. - At Greenland and Antarctic sites, maintaining temperature by having storage and work areas under the snow/ice surface. - At GISP2, cores were never allowed to rise above -15 °C, partly to prevent microcracks from forming and allowing present-day air to contaminate the fossil air trapped in the ice fabric, and partly to inhibit recrystallization of the ice structure. - Wearing special clean suits over cold weather clothing. - Mittens or gloves. - Filtered respirators. - Plastic bags, often polyethylene, around ice cores. Some drill barrels include a liner. - Proper cleaning of tools and laboratory equipment. - Use of a laminar-flow bench to isolate the core from room particulates. For shipping, cores are packed in Styrofoam boxes protected by shock-absorbing bubble-wrap. Due to the many types of analysis done on core samples, sections of the core are scheduled for specific uses. After the core is ready for further analysis, each section is cut as required for tests. Some testing is done on site, other studies are done later, and a significant fraction of each core segment is reserved for archival storage for future needs. Projects have used different core-processing strategies. Some projects have only done studies of physical properties in the field, while others have done significantly more study in the field. These differences are reflected in the core processing facilities. Ice relaxation Deep ice is under great pressure. When brought to the surface, there is a drastic change in pressure. Due to the internal pressure and varying composition, particularly bubbles, cores are sometimes very brittle and can break or shatter during handling. At Dome C, the first 1000 m were brittle ice. Siple Dome encountered brittle ice from 400 to 1000 m. It has been found that allowing ice cores to rest for some time (sometimes for a year) makes them much less brittle. Decompression causes significant volume expansion (called relaxation) due to microcracking and the exsolving of enclathratized gases. Relaxation may last for months. During this time, ice cores are stored below -10 °C to prevent cracking due to expansion at higher temperatures. At drilling sites, a relaxation area is often built within existing ice at a depth which allows ice core storage at temperatures below -20 °C. It has been observed that the internal structure of ice undergoes distinct changes during relaxation. Changes include much more pronounced cloudy bands and much higher density of "white patches" and bubbles. Several techniques have been examined. Cores obtained by hot water drilling at Siple Dome in 1997–1998 underwent appreciably more relaxation than cores obtained with the PICO electro-mechanical drill.
In addition, the fact that cores were allowed to remain at the surface at elevated temperature for several days likely promoted the onset of rapid relaxation. Ice core data Many materials can appear in an ice core. Layers can be measured in several ways to identify changes in composition. Small meteorites may be embedded in the ice. Volcanic eruptions leave identifiable ash layers. Dust in the core can be linked to increased desert area or wind speed. Isotopic analysis of the ice in the core can be linked to temperature and global sea level variations. Analysis of the air contained in bubbles in the ice can reveal the palaeocomposition of the atmosphere, in particular CO2 variations. There are great problems relating the dating of the included bubbles to the dating of the ice, since the bubbles only slowly "close off" after the ice has been deposited. Nonetheless, recent work has tended to show that during deglaciations CO2 increases lag temperature increases by 600 ± 400 years. Beryllium-10 concentrations are linked to cosmic ray intensity, which can be a proxy for solar strength. There may be an association between atmospheric nitrates in ice and solar activity. However, it was recently discovered that sunlight triggers chemical changes within the top levels of firn which significantly alter the pore air composition, raising levels of formaldehyde and NOx. Although the remaining levels of nitrates may indeed be indicators of solar activity, there is ongoing investigation of the resulting and related effects upon ice core data. Core contamination Some contamination has been detected in ice cores. The levels of lead on the outside of ice cores are much higher than on the inside. In ice from the Vostok core (Antarctica), the outer portion of the cores has up to 3 and 2 orders of magnitude higher bacterial density and dissolved organic carbon, respectively, than the inner portion of the cores, as a result of drilling and handling. Paleoatmospheric sampling As porous snow consolidates into ice, the air within it is trapped in bubbles in the ice. This process continuously preserves samples of the atmosphere. In order to retrieve these natural samples the ice is ground at low temperatures, allowing the trapped air to escape. It is then condensed for analysis by gas chromatography or mass spectrometry, revealing gas concentrations and their isotopic composition respectively. Apart from the intrinsic importance of knowing relative gas concentrations (e.g. to estimate the extent of greenhouse warming), their isotopic composition can provide information on the sources of gases. For example, CO2 from fossil-fuel or biomass burning is relatively depleted in 13C. See Friedli et al., 1986. Dating the air with respect to the ice it is trapped in is problematic. The consolidation of snow to ice necessary to trap the air takes place at depth (the 'trapping depth') once the pressure of overlying snow is great enough. Since air can freely diffuse from the overlying atmosphere throughout the upper unconsolidated layer (the 'firn'), trapped air is younger than the ice surrounding it. Trapping depth varies with climatic conditions, so the air-ice age difference could vary between 2500 and 6000 years (Barnola et al., 1991). However, air from the overlying atmosphere may not mix uniformly throughout the firn (Battle et al., 1996) as earlier assumed, meaning estimates of the air-ice age difference could be less than imagined.
Either way, this age difference is a critical uncertainty in dating ice-core air samples. In addition, gas movement differs for various gases; for example, larger molecules may stop diffusing at a shallower depth than smaller ones, so the ages of gases at a certain depth may differ. Some gases also have characteristics which affect their inclusion, such as helium, which is not trapped because it is soluble in ice. In Law Dome ice cores, the trapping depth at DE08 was found to be 72 m, where the age of the ice is 40±1 years; at DE08-2 it was 72 m depth and 40 years; and at DSS it was 66 m depth and 68 years. Paleoatmospheric firn studies At the South Pole, the firn-ice transition depth is at 122 m, with a CO2 age of about 100 years. Gases involved in ozone depletion, CFCs, chlorocarbons, and bromocarbons, were measured in firn, and levels were almost zero at around 1880 except for CH3Br, which is known to have natural sources. A similar study of Greenland firn found that CFCs vanished at a depth of 69 m (CO2 age of 1929). Analysis of the Upper Fremont Glacier ice core showed large levels of chlorine-36 that definitely correspond to the production of that isotope during atmospheric testing of nuclear weapons. This result is interesting because the signal exists despite being on a glacier and undergoing the effects of thawing, refreezing, and associated meltwater percolation. 36Cl has also been detected in the Dye-3 ice core (Greenland), and in firn at Vostok. Studies of gases in firn often involve estimates of changes in gases due to physical processes such as diffusion. However, it has been noted that there also are populations of bacteria in surface snow and firn at the South Pole, although this study has been challenged. It had previously been pointed out that anomalies in some trace gases may be explained as due to accumulation of in-situ metabolic trace gas byproducts. Dating cores Shallow cores, or the upper parts of cores in high-accumulation areas, can be dated exactly by counting individual layers, each representing a year. These layers may be visible, related to the nature of the ice; or they may be chemical, related to differential transport in different seasons; or they may be isotopic, reflecting the annual temperature signal (for example, snow from colder periods has less of the heavier isotopes of H and O). Deeper into the core the layers thin out due to ice flow and high pressure, and eventually individual years cannot be distinguished. It may be possible to identify events such as the radioisotope layers from atmospheric nuclear weapons testing in the upper levels, and ash layers corresponding to known volcanic eruptions. Volcanic eruptions may be detected by visible ash layers, acidic chemistry, or electrical resistance change. Some composition changes are detected by high-resolution scans of electrical resistance. Lower down, ages are reconstructed by modeling accumulation rate variations and ice flow. Dating is a difficult task. Five different dating methods have been used for Vostok cores, with differences such as 300 years at 100 m depth, 600 yr at 200 m, 7000 yr at 400 m, 5000 yr at 800 m, 6000 yr at 1600 m, and 5000 yr at 1934 m. The use of different dating methods makes comparison and interpretation difficult. Matching peaks by visual examination of the Moulton and Vostok ice cores suggests a time difference of about 10,000 years, but proper interpretation requires knowing the reasons for the differences.
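For reference, the isotopic annual signal mentioned above is conventionally reported in delta notation (a standard definition, not spelled out in this article):

$$\delta^{18}\mathrm{O} = \left(\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{VSMOW}}} - 1\right) \times 1000\ \text{‰}$$

Snow deposited in colder periods carries less of the heavy isotope, so more negative δ18O values mark colder layers; counting the resulting annual oscillations down a core is what permits the exact layer dating described above.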
Ice core storage and transport Ice cores are typically stored and transported in refrigerated ISO container systems. Due to the high value and the temperature-sensitive nature of the ice core samples, container systems with primary and back-up refrigeration units and generator sets are often used. Known in the industry as a Redundant Container System, the refrigeration unit and generator set automatically switch to their back-ups in the case of a loss of performance or power, protecting the samples in transit. Ice core sites Ice cores have been taken from many locations around the world. Major efforts have taken place on Greenland and Antarctica. Sites on Greenland are more susceptible to snow melt than those in Antarctica. In the Antarctic, areas around the Antarctic Peninsula and seas to the west have been found to be affected by ENSO effects. Both of these characteristics have been used to study such variations over long spans of time. The first to winter on the inland ice were J.P. Koch and Alfred Wegener, in a hut they built on the ice in Northeast Greenland. Inside the hut they drilled to a depth of 25 m with an auger similar to an oversized corkscrew. Station Eismitte Eismitte means Ice-Center in German. The Greenland campsite was located 402 kilometers (250 mi) from the coast at an estimated altitude of 3,000 meters (9,843 feet). As a member of the Alfred Wegener Expedition to Eismitte in central Greenland from July 1930 to August 1931, Ernst Sorge hand-dug a 15 m deep pit adjacent to his beneath-the-surface snow cave. Sorge was the first to systematically and quantitatively study the near-surface snow/firn strata from inside his pit. His research validated the feasibility of measuring the preserved annual snow accumulation cycles, like measuring frozen precipitation in a rain gauge. Camp VI During 1950-1951 members of Expeditions Polaires Francaises (EPF) led by Paul Emile Victor reported boring two holes to depths of 126 and 150 m on the central Greenland inland ice at Camp VI and Station Central (Centrale). Camp VI is in the western part of Greenland on the EPF-EGIG line at an elevation of 1598 masl. Station Centrale The Station Centrale was not far from Station Eismitte. Centrale is on a line between Milcent (70°18’N 45°35’W, 2410 masl) and Crête (71°7’N 37°19’W), at about (70°43'N 41°26'W), whereas Eismitte is at (71°10’N 39°56’W, ~3000 masl). Site 2 In 1956, before the International Geophysical Year (IGY) of 1957-58, a 10 cm diameter core was recovered to 305 m using a rotary mechanical drill (US). A second 10 cm diameter core was recovered in 1957 by the same drill rig to 411 m. A commercially modified, mechanical-rotary Failing-1500 rock-coring rig was used, fitted with special ice cutting bits. Camp Century Three cores were attempted at Camp Century in 1961, 1962, and again in 1963. The third hole was started in 1963 and reached 264 m. The 1963 hole was re-entered using the thermal drill (US) in 1964 and extended to 535 m. In mid-1965 the thermal drill was replaced with an electro-mechanical drill, 9.1 cm diameter, that reached the base of the ice sheet in July 1966 at 1387 m. The Camp Century, Greenland (77°10’N 61°08’W, 1885 masl) ice core (cored from 1963–1966) is 1390 m deep and contains climatic oscillations with periods of 120, 940, and 13,000 years. Another core was drilled at Camp Century in 1977 using a Shallow (Dane) drill type, 7.6 cm diameter, to 100 m.
North Site At the North Site (75°46’N 42°27’W, 2870 masl) drilling began in 1972 using a SIPRE (US) drill type, 7.6 cm diameter, to 25 m. The North Site was 500 km north of the EGIG line. At a depth of 6–7 m, diffusion had obliterated some of the seasonal cycles. North Central The first core at North Central (74°37’N 39°36’W) was drilled in 1972 using a Shallow (Dane) drill type, 7.6 cm diameter, to 100 m. Crête At Crête in central Greenland (71°7’N 37°19’W) drilling began in 1972 on the first core using a SIPRE (US) drill type, 7.6 cm diameter, to 15 m. The Crête core was drilled in central Greenland (1974) and reached a depth of 404.64 meters, extending back only about fifteen centuries. Annual cycle counting showed that the oldest layer was deposited in 534 AD. The Crête 1984 ice cores consist of 8 short cores drilled in the 1984-85 field season as part of the post-GISP campaigns. Glaciological investigations were carried out in the field at eight core sites (A-H). Milcent "The first core drilled at Station Milcent in central Greenland covers the past 780 years." The Milcent core was drilled at 70.3°N, 44.6°W, 2410 masl. The Milcent core (398 m) was 12.4 cm in diameter, drilled using a Thermal (US) drill type in 1973. Dye 2 Drilling with a Shallow (Swiss) drill type at Dye 2 (66°23’N 46°11’W, 2338 masl) began in 1973. The core was 7.6 cm in diameter, to a depth of 50 m. A second core, 10.2 cm in diameter, was drilled to 101 m in 1974. An additional core at Dye 2 was drilled in 1977 using a Shallow (US) drill type, 7.6 cm diameter, to 84 m. Summit Camp The camp is located approximately 360 km from the east coast and 500 km from the west coast of Greenland at (Saattut, Uummannaq), and 200 km NNE of the historical ice sheet camp Eismitte. The closest town is Ittoqqortoormiit, 460 km ESE of the station. The station, however, is not part of Sermersooq municipality but falls within the bounds of the Northeast Greenland National Park. An initial core at Summit (71°17’N 37°56’W, 3212 masl) using a Shallow (Swiss) drill type was 7.6 cm in diameter, for 31 m, in 1974. Summit Camp, also Summit Station, is a year-round research station on the apex of the Greenland Ice Sheet. Its coordinates are variable, since the ice is moving. The coordinates provided here (72°34’45”N 38°27’26”W, 3212 masl) are as of 2006. South Dome The first core at South Dome (63°33’N 44°36’W, 2850 masl) used a Shallow (Swiss) drill type for a 7.6 cm diameter core, to 80 m, in 1975. Hans Tausen (or Hans Tavsen) The first GISP core drilled at Hans Tausen Iskappe (82°30’N 38°20’W, 1270 masl) was in 1975 using a Shallow (Swiss) drill type, 7.6 cm diameter core, to 60 m. The second core at Hans Tausen was drilled in 1976 using a Shallow (Dane) drill type, 7.6 cm diameter, to 50 m. The drilling team reported that the drill was stuck in the drill hole and lost. The Hans Tausen ice cap in Peary Land was drilled again with a new deep drill, to 325 m. The ice core contained distinct melt layers all the way to bedrock, indicating that Hans Tausen contains no ice from the last glaciation; i.e., the world's northernmost ice cap melted away during the post-glacial climatic optimum and was rebuilt when the climate got colder some 4000 years ago. Camp III The first core at Camp III (69°43’N 50°8’W) was drilled in 1977 using a Shallow (Swiss) drill type, 7.6 cm, to 49 m. The last core at Camp III was drilled in 1978 using a Shallow (Swiss) drill type, 7.6 cm diameter, to 80 m depth.
Renland The Renland ice core from East Greenland apparently covers a full glacial cycle from the Holocene into the previous Eemian interglacial. It was drilled in 1985 to a length of 325 m. From the delta-profile, the Renland ice cap in the Scoresbysund Fiord has always been separated from the inland ice, yet all the delta-leaps revealed in the Camp Century 1963 core recurred in the Renland ice core. The GRIP and GISP cores, each about 3000 m long, were drilled by European and US teams respectively on the summit of Greenland. Their usable record stretches back more than 100,000 years into the last interglacial. They agree (in the climatic history recovered) to a few metres above bedrock. However, the lowest portion of these cores cannot be interpreted, probably due to disturbed flow close to the bedrock. There is evidence the GISP2 cores contain an increasing structural disturbance which casts suspicion on features lasting centuries or more in the bottom 10% of the ice sheet. The more recent NorthGRIP ice core provides an undisturbed record to approx. 123,000 years before present. The results indicate that Holocene climate has been remarkably stable and have confirmed the occurrence of rapid climatic variation during the last ice age. The NGRIP drilling site is near the center of Greenland (2917 m elevation, ice thickness 3085 m). Drilling began in 1999 and was completed at bedrock in 2003. The NGRIP site was chosen to extract a long and undisturbed record stretching into the last glacial. NGRIP covers 5 kyr of the Eemian, and shows that temperatures then were roughly as stable as the pre-industrial Holocene temperatures were. The North Greenland Eemian Ice Drilling (NEEM) site is located at 77°27’N 51°3.6’W. Drilling started in June 2009. The ice at NEEM was expected to be 2545 m thick. On July 26, 2010, drilling reached bedrock at 2537.36 m. For a list of ice cores, visit the IceReader web site. Plateau Station Plateau Station is an inactive American research and Queen Maud Land traverse support base on the central Antarctic Plateau. The base was in continuous use until January 29, 1969. Ice core samples were taken, but with mixed success. Byrd Station Marie Byrd Land formerly hosted the Operation Deep Freeze base Byrd Station (NBY), beginning in 1957, in the hinterland of Bakutis Coast. Byrd Station was the only major base in the interior of West Antarctica. In 1968, the first ice core to fully penetrate the Antarctic Ice Sheet was drilled here. Dolleman Island The British Antarctic Survey (BAS) used Dolleman Island as an ice core drilling site in 1976, 1986 and 1993. Berkner Island In the 1994/1995 field season the British Antarctic Survey, Alfred Wegener Institute and the Forschungsstelle für Physikalische Glaziologie of the University of Münster cooperated in a project drilling ice cores on the North and South Domes of Berkner Island. Cape Roberts Project Between 1997 and 1999 the international Cape Roberts Project (CRP) recovered up to 1000 m long drill cores in the Ross Sea, Antarctica, to reconstruct the glaciation history of Antarctica. International Trans-Antarctic Scientific Expedition (ITASE) The International Trans-Antarctic Scientific Expedition (ITASE) was created in 1990 with the purpose of studying climate change through research conducted in Antarctica. A 1990 meeting held in Grenoble, France, served as a site of discussion regarding efforts to study the surface and subsurface record of Antarctica's ice cores.
Lake Vida The lake gained widespread recognition in December 2002 when a research team, led by the University of Illinois at Chicago's Peter Doran, announced the discovery of 2,800-year-old halophile microbes (primarily filamentous cyanobacteria) preserved in ice layer core samples drilled in 1996. Vostok As of 2003, the longest core drilled was at Vostok station. It reached back 420,000 years and revealed 4 past glacial cycles. Drilling stopped just above Lake Vostok. The Vostok core was not drilled at a summit, hence ice from deeper down has flowed from upslope; this slightly complicates dating and interpretation. Vostok core data are available. EPICA/Dome C and Kohnen Station The European Project for Ice Coring in Antarctica (EPICA) first drilled a core near Dome C (560 km from Vostok) at an altitude of 3,233 m. The ice thickness is 3,309 ± 22 m and the core was drilled to 3,190 m. It is the longest ice core on record, with ice sampled to an age of 800 kyr BP (Before Present). Present-day annual average air temperature is -54.5 °C and snow accumulation 25 mm/y. Information about the core was first published in Nature on June 10, 2004. The core revealed 8 previous glacial cycles. A second core was subsequently drilled at Kohnen Station in 2006. Although the major events of the last glacial period are present in all four of the Vostok, EPICA, NGRIP, and GRIP cores, some variations with depth (both shallower and deeper) occur between the Antarctic and Greenland cores. Dome F Two deep ice cores were drilled near the Dome F summit (altitude 3,810 m). The first drilling started in August 1995, reached a depth of 2503 m in December 1996 and covers a period back to 320,000 years. The second drilling started in 2003, was carried out during four subsequent austral summers from 2003/2004 until 2006/2007, and by then a depth of 3,035.22 m was reached. This core greatly extends the climatic record of the first core and, according to a first, preliminary dating, reaches back 720,000 years. WAIS Divide The West Antarctic Ice Sheet Divide (WAIS Divide) Ice Core Drilling Project began drilling over the 2005 and 2006 seasons, drilling ice cores up to a depth of 300 m for the purposes of gas collection, other chemical applications, and to test the site for use with the Deep Ice Sheet Coring (DISC) Drill. Sampling with the DISC Drill will begin over the 2007 season, and researchers and scientists expect that these new ice cores will provide data to establish a greenhouse gas record back over 40,000 years. TALDICE The TAlos Dome Ice CorE Project is a new 1620 m deep ice core drilled at Talos Dome that provides a paleoclimate record covering at least the last 250,000 years. The TALDICE coring site (159°11'E 72°49'S; 2315 m a.s.l.; annual mean temperature -41°C) is located near the dome summit and is characterised by an annual snow accumulation rate of 80 mm water equivalent. Non-polar cores The non-polar ice caps, such as those found on mountain tops, were traditionally ignored as serious places to drill ice cores because it was generally believed the ice would not be more than a few thousand years old. However, since the 1970s, ice has been found that is older, with clear chronological dating and climate signals going as far back as the beginning of the most recent ice age.
Although polar cores have the clearest and longest chronological record, four times or more as long, ice cores from tropical regions offer data and insights not available from polar cores and have been very influential in advancing understanding of the planet's climate history and mechanisms. Mountain ice cores have been retrieved in the Andes in South America, on Mount Kilimanjaro in Africa, in Tibet, at various locations in the Himalayas, and in Alaska, Russia and elsewhere. Mountain ice cores are logistically very difficult to obtain. The drilling equipment must be carried by hand, organized as a mountaineering expedition with multiple stage camps, to altitudes upwards of 20,000 feet (helicopters are not safe), and the multi-ton ice cores must then be transported back down the mountain, all requiring mountaineering skills and equipment, logistics, and working at low oxygen in extreme environments in remote developing countries. Scientists may stay at high altitude on the ice caps for 20 to 50 days, setting altitude endurance records that even professional climbers do not attain. American scientist Lonnie Thompson has been pioneering this area since the 1970s, developing lightweight drilling equipment that can be carried by porters, solar-powered electricity, and a team of mountaineering scientists. The ice core drilled in the Guliya ice cap in western China in the 1990s reaches back to 760,000 years before the present — farther back than any other core at the time, though the EPICA core in Antarctica equalled that extreme in 2003. Because glaciers are retreating rapidly worldwide, some important glaciers are now no longer scientifically viable for taking cores, and many more glacier sites will continue to be lost; the "Snows of Kilimanjaro" (Hemingway), for example, could be gone by 2015. Upper Fremont Glacier Ice core samples were taken from Upper Fremont Glacier in 1990-1991. These ice cores were analyzed for climatic changes as well as alterations of atmospheric chemicals. In 1998 an unbroken ice core sample of 164 m was taken from the glacier, and subsequent analysis of the ice showed an abrupt change in the ratio of the oxygen isotopes oxygen-18 to oxygen-16, in conjunction with the end of the Little Ice Age, a period of cooler global temperatures between the years 1550 and 1850. A linkage was established with a similar ice core study on the Quelccaya Ice Cap in Peru, which demonstrated the same changes in the oxygen isotope ratio during the same period. Nevado Sajama Quelccaya Ice Cap Mount Kilimanjaro ice fields These cores provide a ~11.7 ka record of Holocene climate and environmental variability, including three periods of abrupt climate change at ~8.3, ~5.2 and ~4 ka. These three periods correlate with similar events in the Greenland GRIP and GISP2 cores. East Rongbuk Glacier See also - Core drill - Core sample (in general, from ocean floor, rocks and ice) - Greenland ice cores - Ice core brittle zone - Jean Robert Petit - Scientific drilling - WAIS Divide Ice Core Drilling Project References - ^ Bender M, Sowers T, Brook E (August 1997). "Gases in ice cores". Proc. Natl. Acad. Sci. U.S.A. 94 (16): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743. - ^ Kaspers, Karsten Adriaan. "Chemical and physical analyses of firn and firn air: from Dronning Maud Land, Antarctica; 2004-10-04". DAREnet. Retrieved October 14, 2005. - ^ "The Composition of Air in the Firn of Ice Sheets and the Reconstruction of Anthropogenic Changes in Atmospheric Chemistry". Retrieved October 14, 2005.
- ^ "http://www.ssec.wisc.edu/icds/reports/Drill_Fluid.pdf" (PDF). Retrieved October 14, 2005. - ^ "http://pubs.usgs.gov/prof/p1386j/history/history-lores.pdf" (PDF). Retrieved October 14, 2005. - ^ Journal of Geophysical Research (Oceans and Atmospheres) Special Issue [Full Text]. Retrieved October 14, 2005. - ^ "Physical Properties Research on the GISP2 Ice Core". Retrieved October 14, 2005. - ^ Svensson, A., S. W. Nielsen, S. Kipfstuhl, S. J. Johnsen, J. P. Steffensen, M. Bigler, U. Ruth, and R. Röthlisberger (2005). "Visual stratigraphy of the North Greenland Ice Core Project (NorthGRIP) ice core during the last glacial period". J. Geophys. Res. 110 (D02108): D02108. Bibcode:2005JGRD..11002108S. doi:10.1029/2004JD005134. - ^ A.J. Gow and D.A. Meese. "The Physical and Structural Properties of the Siple Dome Ice Cores". WAISCORES. Retrieved October 14, 2005. - ^ "Purdue study rethinks atmospheric chemistry from ground up". Archived from the original on December 28, 2005. Retrieved October 14, 2005. - "Summit_ACS.html". Retrieved October 14, 2005. - ^ Amy Ng and Clair Patterson (1981). "Natural concentrations of lead in ancient Arctic and Antarctic ice". Geochimica et Cosmochimica Acta 45 (11): 2109–21. Bibcode:1981GeCoA..45.2109N. doi:10.1016/0016-7037(81)90064-8. - ^ "Glacial ice cores: a model system for developing extraterrestrial decontamination protocols". Publications of Brent Christner. Archived from the original on March 7, 2005. Retrieved May 23, 2005. - ^ Michael Bender, Todd Sowersdagger, and Edward Brook (1997). "Gases in ice cores". Proc. Natl. Acad. Sci. USA 94 (August): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743. Bender, M.; Sowers, T; Brook, E (1997). "Gases in ice cores". Proceedings of the National Academy of Sciences 94 (16): 8343–9. Bibcode:1997PNAS...94.8343B. doi:10.1073/pnas.94.16.8343. PMC 33751. PMID 11607743. More than one of - ^ "TRENDS: ATMOSPHERIC CARBON DIOXIDE". Retrieved October 14, 2005. - ^ "CMDL Annual Report 23: 5.6. MEASUREMENT OF AIR FROM SOUTH POLE FIRN". Retrieved October 14, 2005. - ^ "Climate Prediction Center — Expert Assessments". Retrieved October 14, 2005. - ^ M.M. Reddy, D.L. Naftz, P.F. Schuster. "FUTURE WORK". ICE-CORE EVIDENCE OF RAPID CLIMATE SHIFT DURING THE TERMINATION OF THE LITTLE ICE AGE. Archived from the original on September 13, 2005. Retrieved October 14, 2005. - ^ "Thermonuclear 36Cl". Archived from the original on May 23, 2005. Retrieved October 14, 2005. - ^ Delmas RJ, J Beer, HA Synal, et al (2004). "Bomb-test 36Cl measurements in Vostok snow (Antarctica) and the use of 36Cl as a dating tool for deep ice cores". Tellus B 36 (5): 492. Bibcode:2004TellB..56..492D. doi:10.1111/j.1600-0889.2004.00109.x. - ^ Carpenter EJ, Lin S, Capone DG (October 2000). "Bacterial Activity in South Pole Snow". Appl. Environ. Microbiol. 66 (10): 4514–7. doi:10.1128/AEM.66.10.4514-4517.2000. PMC 92333. PMID 11010907. - ^ Warren SG, Hudson SR (October 2003). "Bacterial Activity in South Pole Snow Is Questionable". Appl. Environ. Microbiol. 69 (10): 6340–1; author reply 6341. doi:10.1128/AEM.69.10.6340-6341.2003. PMC 201231. PMID 14532104. - ^ Sowers, T. (2003). "Evidence for in-situ metabolic activity in ice sheets based on anomalous trace gas records from the Vostok and other ice cores". EGS - AGU - EUG Joint Assembly: 1994. Bibcode:2003EAEJA.....1994S. - ^ "NOAA Paleoclimatology Program — Vostok Ice Core Timescales". Retrieved October 14, 2005. - ^ "Polar Paleo-Climate Interests". 
Retrieved October 14, 2005. - ^ White J, Steig E. "Siple Dome Highlights: Stable isotopes". WAISCORES. Retrieved October 14, 2005. - ^ "GISP2 and GRIP Records Prior to 110 kyr BP". Archived from the original on September 9, 2005. Retrieved October 14, 2005. - ^ Gow AJ, Meese DA, Alley RB, Fitzpatrick JJ, Anandakrishnan S, Woods GA, Elder BC (1997). "Physical and structural properties of the Greenland Ice Sheet Project 2 ice core: A review". J. Geophys. Res. 102 (C12): 26559–76. Bibcode:1997JGR...10226559G. doi:10.1029/97JC00165. - ^ Whitehouse, David (14 October 2005). "Breaking through Greenland's ice cap". BBC. - ^ "NOAA Paleoclimatology Program — Vostok Ice Core". Retrieved October 14, 2005. - ^ Bowen, Mark (2005). Thin Ice. Henry Holt Company. ISBN 0-8050-6443-5. - British Antarctic Survey, "The ice man cometh - ice cores reveal past climates". - Fischer H, Wahlen M, Smith J, Mastroianni D, Deck B (1999-03-12). "Ice Core Records of Atmospheric CO2 Around the Last Three Glacial Terminations". Science 283 (5408): 1712–4. Bibcode:1999Sci...283.1712F. doi:10.1126/science.283.5408.1712. PMID 10073931. Retrieved 2010-06-20. - Dansgaard W. Frozen Annals: Greenland Ice Sheet Research. Odder, Denmark: Narayana Press. p. 124. ISBN 87-990078-0-0. - Langway CC Jr. (Jan 2008). "The History of Early Polar Ice Cores". Cold Regions Science and Technology 52 (2): 101. doi:10.1016/j.coldregions.2008.01.001. - Wegener K (Sep 1955). "Die Temperatur im grönländischen Inlandeis" [The temperature in the Greenland inland ice]. Pure Appl Geophys. 32 (1): 102–6. Bibcode:1955GeoPA..32..102W. doi:10.1007/BF01993599. - Rose LE. "The Greenland Ice Cores". Kronos 12 (1): 55–68. - "Crete Ice Core". - Oeschger H, Beer J, Andree M (Aug 1987). "10Be and 14C in the Earth system". Phil Trans R Soc Lond A 323 (1569): 45–56. Bibcode:1987RSPTA.323...45O. doi:10.1098/rsta.1987.0071. JSTOR 38000. - "NOAA Paleoclimatology World Data Centers Dye 3 Ice Core". - Hansson M, Holmén K (Nov 2001). "High latitude biospheric activity during the Last Glacial Cycle revealed by ammonium variations in Greenland ice cores". Geophys Res Lett. 28 (22): 4239–42. Bibcode:2001GeoRL..28.4239H. doi:10.1029/2000GL012317. - National Science Foundation press release for Doran et al. (2003). - "Deep ice tells long climate story". BBC News. September 4, 2006. Retrieved May 4, 2010. - Peplow, Mark (25 January 2006). "Ice core shows its age". Nature. doi:10.1038/news060123-3. "part of the European Project for Ice Coring in Antarctica (EPICA) ... Both cores were ... Dome C ... the Kohnen core ..." - "Deciphering the ice". CNN. 12 September 2001. Archived from the original on 13 June 2008. Retrieved 8 July 2010. - Thompson LG, Mosley-Thompson EM, Henderson KA (2000). "Ice-core palaeoclimate records in tropical South America since the Last Glacial Maximum". J Quaternary Sci. 15 (4): 377–94. Bibcode:2000JQS....15..377T. doi:10.1002/1099-1417(200005)15:4<377::AID-JQS542>3.0.CO;2-L. - Thompson LG, Mosley-Thompson EM, Davis ME, Henderson KA, Brecher HH, Zagorodnov VS, Mashlotta TA, Lin PN, Mikhalenko VN, Hardy DR, Beer J (2002). "Kilimanjaro ice core records: evidence of Holocene climate change in tropical Africa". Science 298 (5593): 589–93. Bibcode:2002Sci...298..589T. doi:10.1126/science.1073198. PMID 12386332. - Ming J, Cachier H, Xiao C, et al. (2008). Atmos. Chem. Phys. 8 (5): 1343–52.
- "The Chemistry of Ice Cores" literature review: http://www.tonderai.co.uk/earth/ice_cores.php - Barnola J, Pimienta P, Raynaud D, Korotkevich Y (1991). "CO2-climate relationship as deduced from the Vostok ice core: a reexamination based on new measurements and on a reevaluation of the air dating". Tellus Series B: Chemical and Physical Meteorology 43 (2): 83–90. Bibcode:1991TellB..43...83B. doi:10.1034/j.1600-0889.1991.t01-1-00002.x. - Battle M, Bender M, Sowers T, et al (1996). "Atmospheric gas concentrations over the past century measured in air from firn at the South Pole". Nature 383 (6597): 231–5. Bibcode:1996Natur.383..231B. doi:10.1038/383231a0. - Friedli H, Lotscher H, Oeschger H, et al (1986). "Ice core record of the 13C/12C ratio of atmospheric CO2 in the past two centuries". Nature 324 (6094): 237–8. Bibcode:1986Natur.324..237F. doi:10.1038/324237a0. - Andersen KK, Azuma N, Barnola JM, et al. (September 2004). "High-resolution record of Northern Hemisphere climate extending into the last interglacial period" (PDF). Nature 431 (7005): 147–51. Bibcode:2004Natur.431..147A. doi:10.1038/nature02805. PMID 15356621. External links - Ice Core Gateway - National Ice Core Laboratory - Facility for storing, curating, and studying ice cores recovered from the polar regions. - Ice-core evidence of rapid climate shift during the termination of the Little Ice Age - Upper Fremont Glacier study - Byrd Polar Research Center - Ice Core Paleoclimatology Research Group - National Ice Core Laboratory - Science Management Office - West Antarctic Ice Sheet Divide Ice Core Project - PNAS Collection of Articles on Rapid Climate Change - Map of some worldwide ice core drilling locations - Map of some ice core drilling locations in Antarctica - Alley RB (February 2000). "Ice-core evidence of abrupt climate changes". Proc. Natl. Acad. Sci. U.S.A. 97 (4): 1331–4. Bibcode:2000PNAS...97.1331A. doi:10.1073/pnas.97.4.1331. PMC 34297. PMID 10677460. - August 2010: "Ice Cores: A Window into Climate History", interview with Eric Wolff, British Antarctic Survey, from Allianz Knowledge - September 2006: BBC: "Core reveals carbon dioxide levels are highest for 800,000 years" - June 2004: "Ice cores unlock climate secrets" from the BBC - June 2004: "Frozen time" from Nature - June 2004: "New Ice Core Record Will Help Understanding of Ice Ages, Global Warming" from NASA - September 2003: "Oldest ever ice core promises climate revelations" from New Scientist
http://en.wikipedia.org/wiki/Ice_core
4.0625
WORDS in English are made from symbols and symbol combinations. Unlike many other languages, some of these symbols and symbol combinations produce different sounds in different words. This multi-sound attribute can make it difficult to know what sound is being made in a word and how a particular word should be pronounced. That difficulty is evident when one compares the sounds made by the consonants “c”, “g”, “s” and “x” in captain, citizen, six, sure, treasure, gun, giraffe, box, exit, xylophone and X-ray. Other consonants that make more than one sound are “n”, “q”, “y” and “l”, while some consonants sometimes make no sound at all, i.e. they are silent consonants. Examples: honest, know, receipt, listen, bomb, calm, iron, island, indict. Sounding the vowels The problem becomes greater when the different sounds made by the vowels are also taken into consideration. · The eight “a” sounds: cat — ape — want — saw — ask — about — air — orange. · The five “e” sounds: egg — eat — eight — chateau — pretty. · The four “i” sounds: sit — side — radio — onion. · The eight “o” sounds: hot — goat — son — two — woman — corn — women — colonel. · The seven “u” sounds: hut — unit — rude — put — busy — bury — buy. · The four semi-vowel “y” sounds: young — pony — sky — gymnast. Speakers must also know the different sounds made by symbol combinations, i.e. the blends, digraphs and multi-symbol combinations, if they are to be able to pronounce words accurately. Imagine the problems a person would encounter pronouncing some of the common “ch” words like chimney, Christmas, architect and choir if it was not known that “ch” can make different sounds. by Keith Wright, the author and creator of the 4S Approach To Literacy and Language (4S).
http://www.teo-education.com/teo/?cat=33
4.15625
The American Congregational tradition has taken on different meanings over time, but it has always rested on one fundamental principle: that God's voice is most clearly heard when ordinary individual Christians join together under mutual covenant. Congregationalism originated in sixteenth-century England, within the Calvinist wing of the Protestant Reformation. The commitment of these Puritan believers to simple worship in local "gathered" assemblies put them both politically and theologically at odds with England's hierarchical, state-sponsored Anglican Church and, in the face of persecution, led to their departure for North America in the early seventeenth century. Leaving was complicated. The Pilgrims who first arrived in Plymouth, Massachusetts, in 1620 were a small group of radical Separatists who had fled England for the Netherlands in 1608. Their settlement was small, economically beleaguered, and did not prosper in the long term. A much larger group of Puritans came to Massachusetts Bay in the 1630s and 1640s. Like their predecessors in Plymouth, they had also insisted on local church government, unadorned worship, and covenants between "visible saints". But they strongly resisted the logic of separation from the Anglican Church, believing that they could purify it from within. Under Archbishop William Laud, however, prospects for change grew dim and, also in the face of growing economic duress, thousands of nonseparating Puritans departed for North America. Despite the change of geographic scene, they did not abandon their goal of reforming the English church. New England was to be a "city on a hill", a thoroughly Christian commonwealth and a godly example to all the world. New England's Puritans were not the dour, witch-hunting killjoys that have often populated American myth and legend. They were in many ways typical Elizabethan English men and women who enjoyed good ale and good company, and who also held their religious beliefs with deep personal conviction. Early on they flourished in New England, buoyed by the conviction that they were God's chosen people, with a central role in the unfolding of divine history. Indeed, when smallpox epidemics decimated the local Native American population, Puritan settlers accepted the tragedy as an affirmation of God's providential care for their fledgling communities. Contrary to the popular notion that these settlements were theocratic – that is, ruled by the clergy – the original Puritans set aside separate realms of activity for church and state, though insisting that the two always worked cooperatively. Following the model set by Calvin's Geneva, the Massachusetts General Court enforced uniformity of belief and the obligations of church membership on all the colony's inhabitants, regardless of whether or not they personally held to Puritan doctrines. Religious dissent was, in effect, illegal. The other popular conception, that Puritan New England was an early experiment in democracy, is not strictly true either. Though all church members were automatically voting members of their congregations and of the larger commonwealth, the privilege did not extend to women or to religious dissenters, who were still required to pay taxes for church support. Put simply, New England Puritans were not interested in providing religious liberty for all; their primary goal was to establish and maintain close-knit covenanted communities of believers.
Over the course of the seventeenth and eighteenth centuries, Puritan leaders also clarified the meaning of congregational government. The original immigrants had been careful to distinguish themselves from other English Protestants who followed a more presbyterian form of church government – that is, who believed that independent congregations needed some form of institutional oversight by groups of ruling elders, or presbyteries. Once in North America, however, congregationalists, especially in newer settlements like Connecticut, began to discover the need for more institutional structure. The Cambridge Platform of 1648 was a major step in this direction, affirming the Westminster Confession as the standard of belief for all New England churches, clarifying leadership roles in individual churches, and establishing a rationale for meetings of synods, or representatives from each local church body. Over time, churches in Connecticut tended to be less leery of cooperative forms than their coreligionists in Massachusetts; Connecticut's Saybrook Platform of 1708 set up governing bodies, referred to as consociations and associations, with the power to make legally binding decisions for all of the individual churches within their geographic oversight. By the mid-eighteenth century, questions of church government were increasingly overshadowed by much knottier issues of piety and zeal. In order to succeed, the congregational way required high levels of personal commitment, and in most of the original Puritan churches potential members had had to testify to a religious conversion experience in order to join. Already in 1662, however, Puritan leaders had formulated a "Half-Way Covenant", allowing parents who had not experienced conversion to baptize their children in their local churches. Not surprisingly perhaps, this innovation caused as many problems as it solved. The transatlantic religious revival known as the Great Awakening was both bane and blessing in New England. During the 1740s, under the fiery preaching of itinerant evangelists like George Whitefield, Gilbert Tennent, and James Davenport, thousands of laypeople experienced dramatic conversions – and became increasingly critical of the spiritual laxity of the established Congregational clergy. All across New England, Congregational churches split into factions, the New Lights supporting the revival and the Old Lights wary of its emotional excesses. Yet revival enthusiasm also generated a variety of intellectually sophisticated responses, particularly the penetratingly analytical yet warmly pastoral writings of Congregational minister Jonathan Edwards. Pastor at Northampton, Massachusetts, during the height of the Great Awakening, Edwards produced a defense of "religious affections" that remains a classic melding of "head" and "heart" in American Protestant thought. American independence presented Congregationalists with obstacles as well as opportunities. By the late 1700s, the New England clergy, sometimes referred to as the Standing Order, had become thoroughly used to the privileges of social leadership and tax-supported church budgets. Constitutionally-mandated separation of church and state, a process not complete until Massachusetts changed its laws in 1833, meant that all churches would stand on equal footing and compete for financial support through the voluntary gifts of their membership. 
Churches with the smallest investments in money and property – at that time primarily Methodists and Baptists – found the transition easiest to negotiate, and they expanded rapidly into the western frontier. Congregationalists, already two centuries old and loosely organized, proceeded more slowly. In 1801 they signed a Plan of Union with the Presbyterian church, which was designed to pool the resources of both denominations as they moved westward. In the early nineteenth century Congregationalists also overcame some of their organizational reluctance and sponsored an impressive array of voluntary societies, including some of the earliest on behalf of foreign missions. The American Board of Commissioners for Foreign Missions (1810), the American Home Missionary Society (1826), the American Education Society, and other similar outreach groups were open to participation by all evangelical Protestants, but spearheaded primarily by Congregationalists. The American Missionary Association, formed in 1846, joined the denomination's antislavery zeal with its commitments to education and evangelism, and in the post-Civil War years established many schools across the South for newly-freed slaves. But for all these successes, Congregationalists also endured disunity. The nineteenth century opened with a series of bruising theological controversies over the divinity of Christ that created a split between Trinitarian and Unitarian churches, and eventuated in the formation of the Unitarian Association in 1825. The Dedham Decision of 1820, a court case which awarded ownership of a Congregational church to the Unitarian-leaning members of the local parish, dealt a further blow to the established Standing Order. But by then, most Congregational churches were far from comfortable with orthodox Calvinist theology; though a strong minority still affirmed the conservative stance of the Burial Hill Declaration (1865), an increasing number were influenced by new strains of liberal theological thought. Many of the nineteenth century's most innovative and influential theologians were Congregationalists. During the antebellum period, the heirs of Jonathan Edwards, led by New Divinity theologians Samuel Hopkins, Joseph Bellamy, Nathaniel Emmons, and Nathaniel William Taylor worked through labyrinthine questions of human freedom and divine sovereignty. In mid-century New Haven pastor Horace Bushnell laid the groundwork for the development of liberal thought, emphasizing the poetic, intuitive nature of religious truth, and the immanence of God in human experience. Bushnell's insistence that God lived within the most minute human interactions powered Protestant investment in Sunday schools and devotional literature for the home; it also legitimated broader concerns for justice in the Social Gospel movement. During the late-nineteenth century, many Congregationalists, most notably pastor and writer Washington Gladden, led national efforts to establish the "kingdom of God on earth" by campaigning for the rights of labor unions, and aid to the urban poor. Other Congregational theologians, led by the faculty of Andover Seminary in Massachusetts, followed Bushnell along more controversial paths. Their so-called New Theology rejected the formal categories of Calvinist thought, emphasizing instead a more optimistic, ethical creed centering on Christ's role as a moral exemplar, affirming human efforts to bring about a just and peaceful social order. 
By the early twentieth century, however, these views were no longer considered radical, as liberal theology dominated the curriculum of most Congregational seminaries and spread rapidly into church pulpits across the country. Early-twentieth-century Congregationalists both merged and divided. With the formation of the National Council of Congregational Churches in 1871, previously independent churches finally came together under a permanent denominational structure. Almost immediately, however, Congregational leaders began to look for ways to overcome institutional barriers that separated Christian believers. That same year the National Council issued a "Declaration on the Unity of the Church", decrying the divided state of American Protestantism and calling for new ecumenical conversations among church leaders. These finally bore fruit in the 1931 merger of Congregational churches with the Christian Connection, a group formed in the early nineteenth century by believers who, following the first-century pattern, rejected all denominational labels. In 1957 the General Council of Congregational and Christian Churches merged with the Evangelical and Reformed Church, a denomination created by another ecumenical venture, to form the United Church of Christ. Not all Congregationalists followed this route, however. The Conservative Congregational Christian Conference (CCCC), formed in 1948, brought together evangelical churches that had opted against joining the United Church of Christ because of theological disagreements. The National Association of Congregational Christian Churches (NACCC) provided a home for congregations and individuals who opposed the 1957 merger for polity reasons. To that end, the NACCC created a "referendum council", through which individual churches reserved the right to modify any act by a national body. In many ways, historical Congregationalism stands at the heart of the American Protestant tradition. The creative tension between individual experience and social witness has been deeply characteristic of the grassroots piety that has undergirded religion in the United States. Congregational wariness toward institutional structures has also been, for good or ill, a prevailing feature of American church life, though it has also allowed room for theological innovation and creative responses to social evils. In American culture, Congregationalists have been among the first to articulate a working relationship between church and state, to promote an educated, engaged citizenship, and to carry Christian mission – in all of the forms this might take – to the wider world.
http://www.congregationallibrary.org/resources/historical-overview
4.125
LESSON FIVE: Recognizing Advertising Techniques
Students study techniques used in advertising.
Standard R3C: Students will use details from the text(s) to summarize the author's ideas, make predictions, make inferences, evaluate the accuracy of the information, analyze propaganda techniques, and analyze two or more nonfiction texts.
Materials: sources of literature; paper and pencil; handouts provided.
Words to Know: propaganda techniques.
Warm-up: Students choose one of the following articles: “Fast Food Linked to Child Obesity” or “California Bans Some Junk Food in Schools.” Students write a one-paragraph summary and share their opinions regarding the information.
1. Students tell how they think advertisers lure prospective consumers to buy their particular product. Discuss ways that advertisers entice consumers to purchase snack products. After presenting the questions below about advertisers and snack food, have students predict the type of questions they think will be studied. Do you think schools have the right to ban the sale of junk food? Do you think fast-food chains are responsible for teen obesity? Do you think magazines should alter photographs for publication?
2. Using the handout “Recognizing Propaganda Techniques,” present a brief lesson on analyzing propaganda techniques. Review the types of propaganda and how they are used. Students give examples of each type from advertisements of products they have heard or seen. Ensure that the format used to share opinions requires students to justify their opinions.
http://dese.mo.gov/divimprove/curriculum/ModelCurriculum/readnonfict/lesson_five.htm
4.09375
In the last lecture, we zoomed in on the weathering/sedimentation part of the rock cycle. Today, we will consider what happens when stuff goes down the tubes, gets metamorphosed and ultimately gets remelted and comes up as igneous rocks once again. Metamorphism means to change form. In the geologist's sense it refers to changes in rocks (protoliths) in the solid state (i.e. not by melting the rock wholesale). One can view metamorphism as similar to cooking. Ingredients are mixed and placed at a different temperature (and/or pressure) and changes occur. What was a batch of goo turns into a lovely cake! Change occurs in order to maintain equilibrium conditions with new states of heat, pressure, or fluids. Thus, major changes in any of these three environmental variables can result in metamorphism. Because of the great conveyor belt of plate tectonics, rocks can encounter a variety of environmental conditions riding around in pressure-temperature-fluid space. As they go, they may metamorphose and may have a tale to tell of where they've been. The fingerprints of metamorphism are growth of new minerals stable at the new PTF conditions and changes in texture reflecting the state of stress. There are several sources of the thermal energy that drives metamorphism. Obviously, when an igneous body (at some 1200 degrees C) intrudes into unsuspecting host rock, the contact zone heats up considerably. This causes a baking of the neighboring rock called contact metamorphism. The temperature in the Earth goes up with depth. Near the surface in ordinary crust, the temperature rises some 30 degrees per kilometer. This is partly due to radioactive decay within the crust, but also comes from the fact that the Earth is still very hot from its initial formation (recall that the core is largely molten iron!). So if rocks are taken from the surface "down the tubes" either by burial or on a lithospheric nose dive, they will heat up and metamorphose. Pressure changes with depth in the Earth as a result of the increased weight of the overlying rock. Increased pressure drives minerals to form more compact phases, driving, for example, coal to change into diamonds, and clay to rubies. Pressure also changes the state of stress. Crystals will grow or deform by cracking or flowing in response to the change in stress and show either preferential alignment, or evidence of squashing, that reflects in some way the stress regime of the new environment. Deformation near fault zones, for example, results in cracking and grinding of rocks and is referred to as cataclastic metamorphism, while deformation deep in the crust occurs more plastically over a wide area, producing regional metamorphism. Changes in the chemistry of the fluids in the pore spaces of rocks also induce change. In its lowest temperature/pressure form, this change is called diagenesis, including the weathering discussed in the last lecture. Diagenesis is not usually considered part of metamorphism, although the distinction is a pretty subtle one if you ask me. One common cause of changes in the fluid chemistry is the proximity of something hot - take volcanic activity for example. The heat induces convection currents in the surrounding fluids. The heated water reacts locally with the hot rock and carries a load of dissolved matter (rich in metals and sulfur) from the region of high heat into a region of cooler rock (or water). Here, the fluids tend to "dump their load". 
This form of metamorphism is known as hydrothermal alteration and is the way most metallic ore bodies formed. When the super-charged fluids come out of brand-new ocean crust and hit 2°C water, they drop their load and form what is known as a black smoker. Lots of animals (if you can call them that) live off the stuff. Increasing pressure causes a reduction in volume of the rock, by first compaction and then recrystallization of minerals to denser forms. Compaction produces a more closely packed arrangement of grains. This is exemplified by the transformation of clay to slate. Recrystallization involves growth of new minerals. The bulk chemistry need not change (unless fluids are involved). If the pressure is higher in one direction than the others, minerals will tend to align themselves such that the fabric forms perpendicular to this axis. A planar fabric is known as a foliation, while alignment of crystals causes a lineation. Micas (which are platey) will grow with their plates along the foliation. Increasing metamorphism results in different types of foliation. In its journey around the tectonic merry-go-round, clay will turn first to shale during diagenesis, then proceed to slate. Growth of micas then will contribute a peach-fuzz sheen characteristic of phyllite. As metamorphism becomes more intense, schist forms. Schists have large mica flakes in them. schist (thin section) Under the most intense metamorphism, minerals segregate into bands of light and dark minerals, characteristic of gneiss. If the rock easily splits along smooth parallel surfaces, it has what is known as fracture cleavage. Not all rocks are foliated. For example, contact metamorphism does not generally produce foliated rocks. Also, parent rocks (protoliths) that tend to grow minerals that are not platey or elongated will produce metamorphic rocks that have no foliation or lineation. These include quartz and calcite. So sandstones make massive quartzites and limestones make marble, neither of which are strongly foliated. Here is an example of what marble looks like. Porphyroblasts are larger crystals in a finer-grained matrix. The metamorphic rock will bear the name of the dominant porphyroblasts, e.g. garnet schist. Changing conditions result in phase transformation from one mineral to another. Minerals coalesce or change crystal structure. The particular minerals that form are characteristic of the pressure/temperature conditions. Particularly useful for determining PT conditions are the following metamorphic index minerals. Low metamorphic grade (low temperatures and pressures, about 200 degrees C): slates and phyllites are characterized by index minerals such as chlorite. Intermediate metamorphic grade: rocks such as schist often have garnet and staurolite. High metamorphic grade (about 800 degrees C, verging on melting): rocks such as gneiss and migmatite have the high-temperature, high-pressure phase sillimanite. Andalusite, kyanite and sillimanite all have the same composition (Al2SiO5) but are stable at different PT conditions (like graphite and diamond). Therefore the presence of one particular form documents the PT conditions. A more accurate idea of PT conditions can be gotten by considering a whole suite of minerals. Determining the PT history of a sequence of rocks describes the journey of that particular crustal package up and down the tectonic elevator. Obviously, the composition of the protolith plays a strong role in which minerals will grow. 
Thus basalts, granites and carbonate rocks each develop into different metamorphic mineral assemblages, leading ultimately to amphibolite, gneiss or marble respectively. Hydrothermal metamorphism occurs near mid-ocean ridges, driven by the heat of the volcanic activity there. Intrusion of igneous rocks drives contact metamorphism anywhere it occurs. Both of these sorts are metamorphism with high temperatures and low pressures. Faults associated with plate boundaries create cataclastic metamorphism in the shallow crust. Here is an example of cataclastic metamorphism. Cataclasis grades into totally pulverized minerals that are streaked out in bands characteristic of mylonites. Finally, burial of sediments in a sedimentary basin takes the rocks down the PT road characteristic of the crust, the so-called geotherm. They respond to this by developing the characteristic mineral phases of burial metamorphism. The minerals are a guide to just how deep and hot the sediments got. When a partial melt forms, it rises and collects in a magma chamber (see Figure 3.2 in your book). In the magma chamber, the melt continues to crystallize, thus changing its chemistry. This is a process known as magmatic differentiation. As magmas cool, different minerals will crystallize out of the melt. By studying the crystallization of melts in the laboratory, this process is fairly well understood. If these minerals settle out of the melt to the floor of the magma chamber (see Figure 4.11 in your book), the chemistry of the remaining melt changes from a more mafic to a more felsic melt; thus, if fractional crystallization is taken to the extreme, granite can be gotten from what was originally a basaltic melt. The magma chamber may erupt from time to time. If the melt doesn't make it to the surface, it forms an intrusive rock (see Figure 4.16 in book). Intrusive bodies can be big balloon shapes (plutons), sub-horizontal slabs (sills) or sub-vertical walls (dikes). If it does make it, it becomes an extrusive rock. Extrusives can flow out over the ground (lava flows) or be blasted into the air to form ash falls and pyroclastics. Igneous rocks have a two-dimensional classification scheme based on chemistry, grain size and texture. The key to chemical classification in igneous rocks is the amount of silica (SiO2) in the magma. (Of course, people who study this make a much bigger deal out of it!) If magmas don't have much silica, their minerals are dominated by magnesium and iron (Fe) - hence the term MAFIC (MA- from the magnesium and FIC from the Fe), or even ULTRAMAFIC for the really silica-poor varieties. Silica-rich magmas have a mineral named feldspar in them (see book) and are called FELSIC as a result. You will also see the words "acidic" and "basic" used for felsic and mafic respectively, and you should be aware that this has nothing to do with pH! One can often tell about how much silica is in a rock just by its color. The more silica, the lighter the color. The main control of grain size is how fast the rock cooled from the molten state. Slow cooling allows bigger crystals to form, and fast cooling makes smaller crystals and even glass (no crystals). So the second dimension of igneous rock classification is whether the rock was formed by cooling on the surface as an extrusive rock, or in the crust as an intrusive rock. Magma can either be erupted (extruded) as ash to make pyroclastic rock or as lava to make volcanic rocks.
SiO2 (wt. %): <45 | 45-52 | 52-57 | 57-63 | 63-68 | >68
Compositional or chemical equivalent: ultrabasic | basic | basic to intermediate | intermediate | intermediate to acidic or silicic | acidic or silicic
Magma type: ultramafic | mafic | mafic to intermediate | intermediate | intermediate to felsic | felsic
Extrusive rock name: komatiite | basalt | basaltic andesite | andesite | dacite | rhyolite
Intrusive rock name: peridotite | gabbro | diorite | diorite or quartz diorite | granodiorite | granite
(Mafic mineral content and the Ca/Na or Ca/K ratio decrease from the ultramafic end of this spectrum to the felsic end.)
Igneous textures are classified by the presence or absence of crystals, the size of the crystals, and the size and density of vesicles (holes). Check out this page for a nice summary of igneous textures. Pyroclastic rocks are classified by grain size, from BOMBS (>64mm) to ash (<2mm). Lapilli are pea-like grains often in a finer matrix. Here is a nice picture I found to illustrate the classification scheme of pyroclastics. Volcanic rocks are mainly classified by the amount of silica. There are four main categories with increasing silica: basalt, andesite, dacite and rhyolite. Intrusive rocks cool slower and have coarser grain sizes than their extrusive counterparts. The big four of intrusive rocks are, with increasing silica: gabbro, diorite, granodiorite, and granite.
http://topex.ucsd.edu/es10/lectures/lecture16/lecture16.html
4
The balance sheet is one of four financial statements. It shows the financial position of a company as of the date issued. It lists a company's assets (e.g. cash, inventory, etc.), its liabilities (e.g. debt, accounts payable, etc.) and shareholders' equity. Unlike the other financial statements, it is accurate only at one moment in time, not over a period of time. The balance sheet is the core of the financial statements. All other statements either feed into or are derived from the balance sheet. The income statement shows how the company's assets were used to generate revenue and income. The statement of cash flows shows how the cash balance changed over time and accounts for changes in various assets and liabilities. The statement of shareholders' equity shows how the equity portion of the balance sheet changed since the last one. Many analysts come to the balance sheet first to gauge the health of the company. It is often listed first on the quarterly or annual reports. The basic equation of accounting is reflected in the balance sheet:
Assets = Liabilities + Equity
If you look at a balance sheet, you'll note that the total assets always equal the total of liabilities and equity. This reflects what the company owns (assets) and how what it owns came about, through the funding given it by liabilities (borrowings) and equity. (A short code sketch after this entry's related links works through the arithmetic.) The balance sheet is either laid out in a side-to-side manner, with the assets on the left and liabilities and equity on the right, or in a vertical manner, with assets listed first, then liabilities, then equity. Assets are listed in order of liquidity, starting with current assets (those which can be converted into cash within one full reporting cycle, usually one year), which in turn begin with cash, the most liquid of assets. As one moves down through the list, one comes across less liquid assets, such as:
- accounts receivable, which must be collected from customers before they are cash, and
- inventory, which must be converted into goods and/or sold before it becomes cash.
Liabilities are listed in order of when they come due, starting with those due within an accounting period (usually one year), such as accounts payable and the portion of long term debt due within that period. Long term liabilities include borrowings from banks, bonds issued, and similar obligations. Finally, shareholder equity is given. This includes:
- retained earnings (earnings not paid out as dividends or used to repurchase shares),
- stock at par value (the stated value of stock, such as $0.02 per share),
- additional paid in capital (what was paid to the company for its shares in excess of par value), and
- treasury stock (stock repurchased by the company on the open market, a negative number).
Things to remember
- Read the footnotes, as many, if not all, of the line items in the balance sheet are expanded upon with more detail there.
- Not all debt a company may be liable for will show up on the balance sheet. Always remember Enron!
- Different industries have different balance sheets, financial institutions being the most prominent example. Banks, for instance, show the deposits from their customers as a liability (which it is, the bank owes that money to the customers) and loans issued as assets. Both of these are debt obligations running in opposite directions, and belong in different portions of the balance sheet. 
- Book value is a synonym for equity and is the "net worth" of the company (what it has left after all liabilities are paid from all assets -- go back to the equation above and solve for equity). However, if there is a lot of goodwill as part of the assets, well, you can't spend goodwill, so it's an "intangible" asset. Tangible book value removes goodwill and other intangible assets (such as intellectual property like patents) from the assets before subtracting out liabilities and is a stricter (more conservative) look at the net worth of the company. Related Fool Articles - Foolish Fundamentals: The Balance Sheet - Understanding a Bank's Balance Sheet - How a bank's balance sheet differs from that of typical companies - Accounts payable - Accounts receivable - Cash flow statement - Income statement - Statement of shareholders' equity
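As promised above, here is a minimal Perl sketch of the accounting identity and the tangible-book-value calculation described in this entry. All figures are invented for illustration.
#!/usr/bin/perl
use strict;
use warnings;

# Invented example figures, in dollars.
my %balance = (
    assets      => 500_000,
    liabilities => 300_000,
    goodwill    => 50_000,
    intangibles => 20_000,
);

# Solve the accounting identity Assets = Liabilities + Equity for equity.
my $equity = $balance{assets} - $balance{liabilities};

# Tangible book value strips goodwill and other intangibles first.
my $tangible = ($balance{assets} - $balance{goodwill} - $balance{intangibles})
             - $balance{liabilities};

print "Book value (equity): \$$equity\n";     # $200000
print "Tangible book value: \$$tangible\n";   # $130000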
http://wiki.fool.com/wiki/index.php?title=Balance_sheet&oldid=20876
4.03125
Fun, Facts, and Trivia
The Dirksen Center wants to help teachers teach better by giving them the opportunity to use technology to create, customize, and share online learning activities in their classrooms. The Center wants to help students learn more by bringing educational resources together in one place that provide new ways to learn about Congress interactively.
Knowing About Congress
Knowing about Congress could be considered an effective lobbying tool. Find out how much you already know, or learn as you go, using the online flashcards that you can flip through, print in a variety of formats with custom fonts and font sizes, or download to a Palm Pilot or Windows CE device. Find Knowing About Congress at: http://www.congressforkids.net/games/senate/2_senate.htm
Congressional “Brain” Power
1. Congress took advantage of one of its implied powers when, in the _____ _____ Act of 1973, it tried to regulate when the President could send U.S. troops into combat on foreign soil.
2. The last clause of Article I, Section 8 gives Congress its _____.
3. True or False: The elastic clause is used to justify wide expansion of government authority.
Student Web Activity
Congressional powers are used to conduct investigations and for legislative oversight. The history of Congressional oversight dates back to the 1792 investigation of the government’s handling of the Indian Wars. Teachers, have your students conduct further research to learn about other cases of Congressional oversight investigations. You could have them create an annotated timeline of these events using a poster board or presentation software. Along with the date, suggest that they write a brief summary of the background and highlights of the investigation. It would be really cool if they included pictures or illustrations to make their timeline more visually appealing. Your students will find these Web sites helpful:
Answers to January’s issue of Fun, Facts, and Trivia are linked here: http://www.webcommunicator.org/funfactstrivia0103ans.htm
Do you have or know of an online activity you would like The Dirksen Congressional Center to feature on its new Web site for students -- Congress for Kids? The Center is currently seeking online activities that provide new ways to learn about Congress and the workings of the federal government interactively. If you have questions or suggestions for online activities, contact Cindy Koeppel.
http://www.webcommunicator.org/funfactstrivia0203ans.htm
4.28125
Notes on the learning environment
The Learning Environment
- The way in which we set up the learning environment gives strong messages to children about the sorts of experiences that will be encouraged and valued there.
- It can also communicate to children ideas about the learning and behaviours that will not be valued, or even allowed. For example, if resources are at child level, clearly labelled and accessible, children are likely to feel confident in exploring independently and finding out for themselves. If resources are difficult to reach or children are prevented from accessing the things they need spontaneously to develop their own thinking and ideas, the opposite is true.
- Independent learners are confident learners.
- We provide a rich and challenging learning environment that encourages all children to develop independence. The photographs in the gallery area of this website help to show how this is done. All aspects of learning are reflected and provided for throughout, inside and out.
- Through initiating their own experiences, children become far more deeply engaged in their play. They will persevere to find out what it is they really want to know or be.
- For children to be able to respond fully to this type of environment, time must be spent enabling them to find their way around, to locate and return tools and resources and to experiment with them until they are confident. The adult's role is to support the child through this process, helping them to negotiate use of the environment with others.
- Children are encouraged to select and combine resources from different areas so that they become creative in their use and begin to make vital connections to support their learning.
- Adults need to show that they trust children to be able to do challenging things for themselves; to try, fail and try again as they test out skills, ideas, thoughts and feelings in play.
- The use of the learning environment is carefully balanced. Children will be supported in developing ideas and learning from each other, and will also have opportunities to choose to engage in more focused adult led activities.
- Observation of children's play is a vital part of the adult's role and enables them to plan for next steps in learning.
- Above all, the learning environment is set up to enable children to engage in play for its own sake and for the sheer pleasure it can bring.
http://earlyyearsmaths.e2bn.org/resources_110.html
4
Water in the human body plays an essential role by carrying carbohydrates and protein through the blood and eliminating excess salt, minerals, and other substances. Adequate hydration also keeps the body cool when temperatures rise and during physical activity. Water in the human body prevents constipation and keeps skin soft and supple. The lungs and mouth need water to function properly, while the joints use water as lubrication. Every cell in the body relies on water to dissolve chemicals, minerals, and nutrients to make them usable. If the blood lacks sufficient water, it might not flow freely and carry enough oxygen to organs and tissue. The skin might become dry and cracked when water intake falls below recommended levels. Water represents about 70 percent of the total weight of the brain. The blood is about 80 percent water, while the water content of the lungs is about 90 percent. Liquid is also used by fat, muscles, and bones to support optimal health. Water in the human body flushes bacteria from the bladder and might prevent the formation of kidney stones. It also carries waste from the body through feces. Each day, water in the human body is lost through urine, perspiration, and respiration. It must be replaced daily because the body cannot store water for later use. The amount of water excreted depends upon a person’s level of activity, the outside temperature, individual metabolism, and the amount of liquid consumed in food and beverages. Very active people, and those who live in hot climates, typically need more water because they often produce more sweat. Children’s bodies contain higher water content than adults’ and might become dehydrated more quickly. The elderly might need to increase water intake because kidney function changes with age. An older adult might lose up to 2.1 quarts (2 liters) of water a day through normal bodily functions. It is estimated the elderly obtain half that amount each day through food. Dehydration can become a serious health risk, causing kidney failure. Symptoms include dark yellow urine, headache, and lack of energy. The lips and skin might become dry, along with a dry mouth. By the time a person feels thirsty, dehydration might already exist, which might hinder concentration and the ability to perform mental or physical tasks. Most diets provide about half the necessary water in the human body. Nutritionists usually recommend drinking six to eight glasses of water a day to maintain health. These levels might be obtained from soups, fruit, teas, and other foods. Patients who use medications that increase urination might need to increase their daily water intake. People suffering from fever, vomiting, or diarrhea might also quickly lose vital fluids.
http://www.wisegeek.com/what-is-the-role-of-water-in-the-human-body.htm
4.0625
URL stands for Uniform Resource Locator. It is a common form of URI (Uniform Resource Identifier) that describes a path to some resource, often over the Web. The following will give a bit of a technical description of the format of common URLs, mostly in the context of links. A URL is made up of a number of components that describe how to access the destination resource. Any of these components may be omitted and a default will be assumed. The components, if specified, must be written in the following order: The scheme indicates the protocol (the type of computer communication language) that will be used to make the request for the destination resource. On the Web, this is usually HTTP, the HyperText Transfer Protocol, indicated by http:. For a secure (encrypted) HTTP connection, https: may be used. ftp: is another common scheme on the Web. If the scheme is omitted, it assumes the scheme that was used to reach the current document (usually HTTP on the Web). If present, this identifier indicates that “authority” information will follow. Authority information includes one or more of the following components: User information may be used for logins or other identification purposes. It is often written username:password@, although providing password information in this manner could result in the password being leaked and thus isn't recommended. If user information is omitted, no such information will be given. In this case, if the server requires user information for the request, it may ask the user for the information before continuing. The host identifier is the IP address of the destination resource or a domain name that represents its IP address. If the host identifier is omitted, the URL is assumed to be from the same host/domain. The communication port is specified with a colon and the port number to use. If omitted, a port number may be assumed based on the scheme used (:80 is the default in HTTP). This is the rest of the path to the resource (for example, /TR/html401/). This is a UNIX-style path, which may be an absolute path (beginning with /), or otherwise — if and only if the authority information is omitted — it may be a relative path from the base established by the current document. The default base is the location of the current document, but the document may manually specify a different base, such as with the HTML base element. If the path is omitted, the root (/) is assumed. A query string is used to send special parameters to the resource, such as simple form data. A query string begins with a question mark (?) and then contains a series of parameters separated by a delimiter. The delimiter is usually an ampersand (&) character, but depending on the website it may be a semicolon (;) or some other character. The parameters themselves may be simple keywords or a key-value pair. The key-value pairs include a parameter name, followed by an equal sign (=), followed by the parameter's value. For historical reasons, a plus (+) in a parameter value is the same as a space. The purpose and usage of these parameters completely depends on the destination resource — the browser merely sends the query string as it is written in the URL. If the query string is omitted, no additional query information will be included in the request. There may also be a fragment identifier at the end of a URI. A fragment identifier refers to a specific section of the resource, such as a certain paragraph on a webpage. 
It consists of a hash symbol (#) followed by the name of the fragment in the destination resource. In HTML, fragment names are specified using the name attribute. In XHTML, they are only specified using the id attribute. If omitted, no particular fragment will be referenced. In order to prevent confusion of the different parts of a URI and the special characters that separate them, parts of the URI are “encoded”, meaning special characters that shouldn't hold special meanings are replaced with something else that can later be “decoded” back to the original characters. Different sections may require different levels of encoding. For example, an ampersand (&) holds no special meaning in the resource path, but it does in a parameter value. Generally, special characters are encoded using a hexadecimal representation. They begin with a percent sign (%), followed by a two-digit hexadecimal number representing the character. For example, an encoded ampersand looks like %26 and an encoded space looks like %20. Note that while mailto:email@example.com is a URI, it is not a URL because it does not describe a path. Whether a URI may also be a URL depends on the URI scheme: http: and ftp: both describe path information, while mailto: and data: do not. Special attention should be drawn to the ampersand (&) character in SGML-based languages like HTML, XML, and consequently XHTML. While it commonly has a special purpose in URIs (separating parameters), it also has a special purpose in common SGML and XML languages (character references). An ampersand appearing in a URI may accidentally be interpreted as the start of a character reference — thus causing part of the URI to be converted to a different character — or else cause a validation error. Therefore, it is important to change all occurrences of & in the URI to the character reference &amp;. This should be done after the basic URI encoding. For example, the href attribute might look like this: href="http://www.google.com/search?num=20&amp;hl=en&amp;q=m%26m%27s". Notice the ampersand (%26) in the q value is already encoded and doesn't require a character reference. In HTML-based languages, if there is a base element present in the head section of the current document, a relative URL will be relative to the specified base instead of the path used to reach the current document.
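To make the percent-encoding rules concrete, here is a small Perl sketch of encoding and decoding. It is a simplified illustration, not a full RFC 3986 implementation; the function names are our own, and production code would normally reach for a library such as URI::Escape.
#!/usr/bin/perl
use strict;
use warnings;

# Percent-encode everything except the "unreserved" characters
# (letters, digits, hyphen, period, underscore, tilde).
sub url_encode {
    my ($text) = @_;
    $text =~ s/([^A-Za-z0-9\-._~])/sprintf("%%%02X", ord($1))/ge;
    return $text;
}

# Decode %XX sequences back to their original characters.
sub url_decode {
    my ($text) = @_;
    $text =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge;
    return $text;
}

my $value   = q(m&m's);
my $encoded = url_encode($value);
print "q=$encoded\n";              # q=m%26m%27s
print url_decode($encoded), "\n";  # m&m's
This reproduces the q parameter from the href example above: the ampersand and apostrophe become %26 and %27, so they no longer clash with the characters that separate URL components.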
http://www.webdevout.net/articles/urls
4.03125
Flamingos often stand on one leg, the other tucked beneath the body. The reason for this behavior is not fully understood. Recent research indicates that standing on one leg may allow the birds to conserve more body heat, given that they spend a significant amount of time wading in cold water. However, the behavior also takes place in warm water. As well as standing in the water, flamingos may stamp their webbed feet in the mud to stir up food from the bottom. Flamingos are very social birds; they live in colonies whose population can number in the thousands. These large colonies are believed to serve three purposes for the flamingos: avoiding predators, maximizing food intake, and using scarce suitable nesting sites more efficiently. Before breeding, flamingo colonies split into breeding groups of between about 15 and 50 birds. Both males and females in these groups perform synchronized ritual displays. The members of a group stand together and display to each other by stretching their necks upwards, then uttering calls while head-flagging, and then flapping their wings. The displays do not seem to be directed towards an individual but instead occur randomly. These displays stimulate "synchronous nesting" (see below) and help pair up those birds who do not already have mates. Flamingos form strong pair bonds of one male and one female, although in larger colonies flamingos sometimes change mates, presumably because there are more mates to choose from. Flamingo pairs establish and defend nesting territories. They locate a suitable spot on the mudflat to build a nest (the spot is usually chosen by the female). It is during nest building that copulation usually occurs. Nest building is sometimes interrupted by another flamingo pair trying to commandeer the nesting site for their own use. Flamingos aggressively defend their nesting sites. Both the male and the female contribute to building the nest, and to defending the nest and egg. After the chicks hatch, the only parental expense is feeding. Both the male and the female feed their chicks with a kind of crop milk, produced in glands lining the whole of the upper digestive tract (not just the crop). Production is stimulated by a hormone called prolactin. The milk contains fat, protein, and red and white blood cells. (Pigeons and doves—Columbidae—also produce a crop milk (just in the glands lining the crop), which contains less fat and more protein than flamingo crop milk.) For the first six days after the chicks hatch, the adults and chicks stay in the nesting sites. At around seven to twelve days old, the chicks begin to move out of their nests and explore their surroundings. When they are two weeks old, the chicks congregate in groups, called "microcrèches", and their parents leave them alone. After a while, the microcrèches merge into "crèches" containing thousands of chicks. Chicks that do not stay in their crèches are vulnerable to predators.
Photo caption (translated from Turkish): flamingos lined up in rows in the Tuzla wetland.
Photo: efser unsal (efi); taken 2013-01-10; exposure f/14.0, 1/1250 seconds.
http://www.trekearth.com/gallery/Middle_East/Turkey/Aegean/Mugla/tuzla/photo1403721.htm
4.15625
In electronics, a voltage divider (also known as a potential divider) is a linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division refers to the partitioning of a voltage among the components of the divider. An example of a voltage divider consists of two resistors in series or a potentiometer. It is commonly used to create a reference voltage, or to get a low voltage signal proportional to the voltage to be measured, and may also be used as a signal attenuator at low frequencies. For direct current and relatively low frequencies, a voltage divider may be sufficiently accurate if made only of resistors; where frequency response over a wide range is required (such as in an oscilloscope probe), the voltage divider may have capacitive elements added to allow compensation for load capacitance. In electric power transmission, a capacitive voltage divider is used for measurement of high voltage. A voltage divider referenced to ground is created by connecting two electrical impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2 and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors. Applying Ohm's Law, the relationship between the input voltage, Vin, and the output voltage, Vout, can be found:
Vout = Vin · Z2 / (Z1 + Z2)
The transfer function (also known as the divider's voltage ratio) of this circuit is simply:
H = Vout / Vin = Z2 / (Z1 + Z2)
A resistive divider is the case where both impedances, Z1 and Z2, are purely resistive (Figure 2). Substituting Z1 = R1 and Z2 = R2 into the previous expression gives:
Vout = Vin · R2 / (R1 + R2)
If R1 = R2 then Vout = Vin / 2. If Vout = 6 V and Vin = 9 V (both commonly used voltages), then R2 / (R1 + R2) = 6/9 = 2/3, and by solving using algebra, R2 must be twice the value of R1. To solve for R1:
R1 = R2 · (Vin - Vout) / Vout
To solve for R2:
R2 = R1 · Vout / (Vin - Vout)
Any ratio Vout/Vin between 0 and 1 is possible. That is, using resistors alone it is not possible to either invert the voltage or increase Vout above Vin. (A short code sketch after the links below reproduces this arithmetic.)
Low-pass RC filter
Consider a divider consisting of a resistor and capacitor as shown in Figure 3. Comparing with the general case, we see Z1 = R and Z2 is the impedance of the capacitor, given by
Z2 = 1 / (jωC)
where j is the imaginary unit and ω = 2πf is the angular frequency. This divider will then have the voltage ratio:
Vout / Vin = 1 / (1 + jωRC)
The product τ (tau) = RC is called the time constant of the circuit. The ratio then depends on frequency, in this case decreasing as frequency increases. This circuit is, in fact, a basic (first-order) lowpass filter. The ratio contains an imaginary number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is:
|Vout / Vin| = 1 / sqrt(1 + (ωRC)^2)
Inductive dividers split AC input according to inductance; with the output taken across L2,
Vout = Vin · L2 / (L1 + L2)
The above equation is for non-interacting inductors; mutual inductance will alter the results. Inductive dividers split DC input according to the resistance of the elements as for the resistive divider above. Capacitive dividers do not pass DC input. For an AC input, with the output taken across C2, a simple capacitive equation is:
Vout = Vin · C1 / (C1 + C2)
Any leakage current in the capacitive elements requires use of the generalized expression with two impedances. By selection of parallel R and C elements in the proper proportions, the same division ratio can be maintained over a useful range of frequencies. This is the principle applied in compensated oscilloscope probes to increase measurement bandwidth. The voltage output of a voltage divider is not fixed but varies according to the load. 
To obtain a reasonably stable output voltage, the output current should be a small fraction of the input current. The drawback of this is that most of the input current is wasted as heat in the divider. Voltage dividers are used for adjusting the level of a signal, for bias of active devices in amplifiers, and for measurement of voltages. A Wheatstone bridge and a multimeter both include voltage dividers. A potentiometer is used as a variable voltage divider in the volume control of a radio.
- Voltage divider or potentiometer calculations
- Voltage divider tutorial video in HD
- Online calculator to choose the values by series E24, E96
- Online voltage divider calculator: chooses the best pair from a given series and also gives the color code
- Voltage divider theory
- RC low-pass filter example and voltage divider using Thévenin's theorem
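As noted above, here is a minimal Perl sketch of the two formulas from this entry: the resistive divider ratio and the magnitude response of the RC low-pass divider. The component values are arbitrary illustrations.
#!/usr/bin/perl
use strict;
use warnings;

# Resistive divider: Vout = Vin * R2 / (R1 + R2)
sub divider_vout {
    my ($vin, $r1, $r2) = @_;
    return $vin * $r2 / ($r1 + $r2);
}

# Magnitude of the RC low-pass divider at frequency f (Hz):
# |Vout/Vin| = 1 / sqrt(1 + (2*pi*f*R*C)^2)
sub rc_lowpass_gain {
    my ($f, $r, $c) = @_;
    my $pi = 4 * atan2(1, 1);
    my $w  = 2 * $pi * $f;
    return 1 / sqrt(1 + ($w * $r * $c) ** 2);
}

# The 9 V / 6 V example above: R2 twice R1 gives Vout = 6 V.
printf "Vout = %.2f V\n", divider_vout(9, 10_000, 20_000);
printf "|H(1 kHz)| = %.3f\n", rc_lowpass_gain(1_000, 1_000, 100e-9);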
http://en.wikipedia.org/wiki/Voltage_divider
4.03125
National Geographic News
Add another name to the tide of Canadian rock stars. Paleontologists announced today they've unearthed the world's oldest intact shark fossil: a 409-million-year-old specimen of a small, primitive species known as Doliodus problematicus, from a site in New Brunswick, Canada. The fossil, which measures 23 centimeters (9 inches) long from its snout to upper trunk, includes the fish's braincase, scales, calcified cartilage, large fin spines, and a battery of scissor-like teeth preserved in the upper and lower jaw. Researchers estimate the species only grew 50 to 75 centimeters (20 to 30 inches) long, about the size of a large lake trout. Randall Miller, a paleontologist at the New Brunswick Museum in Saint John, led the field expedition that found the fossil. He said he had hoped merely to collect a few more teeth of Doliodus, a species known to science for over a century. Only later did he learn that his team had found the first complete specimen of the ancient shark, and the world's oldest intact shark fossil. Miller collaborated with Richard Cloutier, a paleontologist at the University of Quebec at Rimouski, and Susan Turner, a world expert on fossil shark teeth at the Queensland Museum in Australia, to describe the fossil. Their paper appears tomorrow in the science journal Nature. The New Brunswick fossil predates other fossil sharks from Antarctica and South Africa, previously known as the world's oldest, by 15 million years. Paleontologists say the fossil will help shed light on the early evolution of primitive sharks and other vertebrate, or backboned, fish from the Devonian Period, the era between 418 million and 360 million years ago often referred to as the age of fish, when most life on Earth was confined to the world's oceans. "Quite a few paleontologists or ichthyologists [fish biologists] have been trying to imagine what the oldest shark looked like. This [fossil] is giving us the information," said Cloutier. Miller and Turner speculate the shark may have resembled an angel shark, a ray-like bottom-dweller found in most temperate and tropical oceans. Intact fossils of sharks, a boneless fish, are exceptionally rare. Many ancient shark species are known only by fossil teeth or skin scales. Until the recent find, Doliodus problematicus (Latin for "a problematic deceiver") was among them. Researchers discovered the articulated New Brunswick fossil in situ, meaning all its parts, such as the braincase, jaws, teeth, and pectoral fins, were found attached in their correct anatomical position. The find will help paleontologists make sense of other isolated, smaller ancient fossil shark specimens.
http://news.nationalgeographic.com/news/2003/10/1001_031001_sharkfossil.html
4.09375
Arts & Humanities Students learn about the Navajo code talkers and use a Navajo code talkers' dictionary to create and decode secret messages. learn about the Navajo code talkers and their contributions to World War II. Students use a Navajo code talkers' dictionary to create and decode messages. Native American, Navajo, Navajo code talkers, World War II, dictionary, decode, messages Read to students background information about the Navajo code talkers from library sources or from The Navajo Code Talkers. Discuss the code talkers' contributions. Divide the class into small groups. Distribute printouts from one of the dictionaries. (Note: There is a slight difference between the two dictionaries. Choose the same one to distribute to each group.) Give students an example of how the code might work. (For example, boy in Navajo code might be "shush ne-ahs-jah tsah-as-zih." Shush is the Navajo word for "bear"; ne-ahs-jah is the Navajo word for "owl"; and tsah-as-zih is the Navajo word for "yucca." If you take the first letter of each translated word, those letters spell boy.) Tell students to work together to create messages using the dictionary. Then tell groups to exchange papers to decode one another's messages. Encourage creativity! Observe students' participation and ability to work in cooperative groups. Lesson Plan Source
http://www.educationworld.com/a_lesson/00-2/lp2213.shtml
4.125
Seated Man with Chinese Servant Sixth-plate daguerreotype, c. 1855 This unidentified man and his Chinese servant were photographed during the early years of the California Gold Rush. During the thirty-year period before the passage of the Chinese Exclusion Act of 1882, more than one hundred thousand Chinese immigrated to the United States. For many, “Gold Mountain”—the Chinese name for California—presented an extraordinary opportunity. Jobs were more plentiful in the West than at home, and some achieved a degree of economic independence. Yet most Chinese people endured tremendous hardship and discrimination in their new home. In response to the public outcry regarding “yellow peril,” the Chinese were denied basic civil rights, forced into segregated areas, and ultimately, in 1882, refused entry into the United States. Before then, the Chinese—recruited to work on the transcontinental railroad’s construction or in the mining industry—fulfilled the demand for inexpensive labor.
http://www.npg.si.edu/exhibit/frontier/pop-ups/04-06.html
4.0625
Japanese Culture Workshop: Sample Topics
Here are some examples of workshops Motoko offers on Japanese culture. It is recommended that each teacher choose one topic for her class ahead of time. Each session starts with a brief discussion on geography, climate, and housing in Japan. Then she moves on to talk about the specific cultural topic, tells a folktale and does an art activity related to the theme. Each workshop requires 45 minutes to an hour.
1. Boys' Day and Girls' Day (Grades 1-4) Discussion of how children are celebrated in Japan. Motoko will tell The Princess Who Loved Bugs, a 12th-century Japanese tale of a girl's courage and creativity. Then students will make small paper carp kites for Boys' Day. She will leave instructions for the Girls' Day art activity with the teachers.
2. Sumo Wrestling (Grades 1-4) Discussion on sports in Japan. Special focus on sumo as a traditional sport that emphasizes respect and self-control. The session includes a sumo story, and a paper sumo game that every child loves!
3. Oni Monster in Japanese Folklore (Grades K-4) The oni, an ogre, is a familiar figure in Japanese folktales and legends. The session includes an oni story, and an explanation of Setsubun, a Japanese children's ritual for driving away the monster. Each student makes a paper oni mask.
4. Japanese New Year (Grades K-3) New Year's Day is the most important holiday for Japanese people. Motoko will talk about food, clothing, and customs, and tell a folktale that explains the origin of the Asian zodiac system, which uses the names of 12 animals for indicating the year. Students will make the Lucky Smile Game, a traditional Japanese version of Pin the Tail on the Donkey.
5. Japanese Writing System (Grades 2-5) In order to write, children in Japan learn two sets of phonetic alphabets (hiragana and katakana) as well as thousands of Chinese characters. Motoko teaches students to sing a Japanese ABC song, demonstrates writing, and makes everyone a nametag in Japanese. Discussion also includes school life in Japan.
6. Origami Storigami (K-5) --- New!! Origami, the ancient Japanese art of paper-folding, teaches children to focus, develop their dexterity, and increase their understanding of geometry. This workshop includes traditional and contemporary stories related to origami, and age-appropriate hands-on activities.
http://motoko.folktales.net/Pages/SampleTopic.html
4.375
Primary level: Black History Month
Author: Sarah White
Grade Level: K-5
Subject/Content: Social Studies
Summary of Lesson: Students will learn about African American leaders in United States history.
Focus Question: How have African Americans impacted our society?
Database(s): Kids InfoBits and Junior Reference Collection
- Access images in Kids InfoBits and put them into a slide show if desired (pictures may also be shown directly from the website). Access images by selecting History and Social Studies, Ethnic Groups, African Americans and Images from the main menu of the database.
- Set up a classroom TV or Smartboard for student viewing of images.
- Print bios to accompany pictures from the Junior Reference Collection database.
Steps/Activities by student(s):
Early Elementary Option
- Each day during Black History Month share one image and summary from the database during circle or story time.
- With each day's lesson discuss with students the importance of the person addressed and their impact on our culture.
- Use the image entitled “An African American Family on Smith's Plantation at Beaufort in 1862” to begin the lessons and use the questions below to assess students' prior knowledge.
- How were Blacks treated at this time?
- How did the Civil War change this?
- How is it different today?
- What events helped change life for Black Americans?
Upper Elementary Option
- Assign student images to research in pairs using the Gale Junior Resource Center database.
- Students will create a poster about their assigned person using the research.
- Students will present their research and poster to the class.
- Students will create a hallway display using their completed posters.
Outcome: The school community will be informed of the significance and contributions of Black Americans in U.S. history.
Related Activities: If school announcements take place, students may share their presentations with the entire school. As an alternative, upper elementary students may present their posters to younger classes.
Standard Date: November 18, 2006
- What are the benefits of diversity in the United States?
- What dispositions or traits of character are important to the preservation and improvement of American democracy?
- How can Americans participate in their government?
- At Level 1, the student is able to: Identify significant Black Americans.
- At Level 2, the student is able to: Explain the contributions of these people.
- At Level 3, the student is able to: Explain how Black Americans have benefited our society.
Computer Literacy and Usage Standards 9-12:
- The student will develop skills using a variety of computer resources to increase productivity, support creativity, conduct and evaluate research and improve communications.
ISTE NETS for Students
- Use telecommunications and online resources (e.g., e-mail, online discussions, Web environments) to participate in collaborative problem-solving activities for the purpose of developing solutions or products for audiences inside and outside the classroom. (4, 5)
Information Power; Information Literacy Standards 1-4:
- Standard 1: The student who is information literate accesses information efficiently and effectively.
- Standard 3: The student who is information literate uses information accurately and creatively.
- Standard 7: The student who contributes positively to the learning community and to society is information literate and recognizes the importance of information to a democratic society.
http://www.galeschools.com/lesson_plans/primary/bhm2007.htm
4.09375
A pioneering British expedition to sample a lake under the Antarctic ice hopes to find unknown forms of life and clues to future climate impacts. The mission will use hot water to melt its way through ice 3km (2 miles) thick to reach Lake Ellsworth, which has been isolated from the outside world for between 125,000 and one million years. The team hopes to be the first to sample a sub-glacial Antarctic lake. The project, funded to the tune of £7m by the UK's Natural Environment Research Council, aims to obtain samples of the lake water itself and of sediment on the lake floor. Understanding the West Antarctic Ice Sheet is seen as crucial to forecasting future climate change impacts, as it holds enough ice to raise global sea levels by between 3m (10ft) and 7m (23ft). Exploring sub-glacial lakes may also help scientists design missions to search for life on other worlds such as Jupiter's moon Europa, which is thought to feature a liquid ocean beneath a thick layer of ice. The UK team has carefully designed its equipment and its procedures in order to avoid taking surface organisms down as they drill. "Just about everywhere we look on the planet, we find life, from the outer reaches of the stratosphere to the deepest ocean trenches," said Dr David Pearce from the British Antarctic Survey (BAS), who heads the search for microbiological life in Lake Ellsworth. "Any form of life we find there, we won't have encountered before - there will probably be viruses, and we may have bacteria, archaea (other single-celled organisms) and... maybe fungi." If the lake contains no life, said Dr Pearce, that would be interesting as well, helping to define the conditions under which life can and cannot exist. This means that transporting surface life into this pristine environment would be a disaster that could not be undone. An engineering team leaves the UK in the coming week with 70 tonnes of gear. The heavy equipment has to be airlifted in to Antarctica and then hauled over land to the drilling location. "Our project will look for life in Lake Ellsworth, and look for the climate record of the West Antarctic Ice Sheet," said the project's principal investigator Professor Martin Siegert from Edinburgh University. "If we're successful, we'll make profound discoveries on both the limits to life on Earth and the history of West Antarctica," he told reporters. Much of the equipment has been designed and built at the National Oceanography Centre in Southampton, under the supervision of Matt Mowlem. "This is an unknown environment - we don't know for example whether there will be dissolved gases in the water," he said. "So the water at its pressure of 300 atmospheres will be sampled. But when we pull the probe up and the flasks hit the cold air in the borehole, the water will try to freeze; the pressure then increases to around 2,700 atmospheres, and that's greater than anything experienced in ocean engineering." The equipment will be delivered to the Ellsworth base during the coming Antarctic summer, and stored away against the harsh winter. The main scientific party will fly out in about a year's time, unpack the equipment, and begin the hunt.
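The quoted "300 atmospheres" is roughly what hydrostatic pressure under 3 km of ice predicts. Here is a back-of-envelope check in Perl; the density, gravity, and depth values are rounded assumptions of mine, not the expedition's own numbers:

#!/usr/bin/perl
use strict;
use warnings;

# Rough check of the pressure under ~3 km of ice (rounded inputs).
my $density = 917;      # kg per cubic metre, typical glacial ice
my $gravity = 9.81;     # m/s^2
my $depth   = 3_000;    # metres of ice overhead
my $atm     = 101_325;  # pascals per standard atmosphere

my $pressure_pa = $density * $gravity * $depth;
printf "Pressure at the lake: %.2e Pa, about %.0f atmospheres\n",
       $pressure_pa, $pressure_pa / $atm;

These inputs give about 270 atmospheres, the same ballpark as the figure quoted above.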
http://theweatherclub.org.uk/news/article/hunting-aliens-in-the-antarctic
4.0625
the science of combining English and mathematics to solve problems, using each subject as a bridge to learn the other. To englimatize basic mathematics, think of +, -, x, and / as verbs. They are the action words in math problems. The 'numbers' are the 'nouns'. Students should always begin by identifying the nouns and the associated verbs to understand what 'actions' need to take place. When writing an essay or doing math homework, look for the connecting Englimatics to make both assignments easier to complete.
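Purely as a playful illustration of the nouns-and-verbs idea (no such program appears in the definition itself), a few lines of Perl that tag the tokens of a simple expression:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical "englimatizer": numbers are nouns, operators are verbs.
my $expression = '3 + 4 - 2';
for my $token (split ' ', $expression) {
    if ($token =~ /^\d+$/) {
        print "$token is a noun\n";
    } elsif ($token =~ m{^[-+*/x]$}) {
        print "$token is a verb\n";
    }
}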
http://www.urbandictionary.com/define.php?term=englimatics
4.28125
Solar Eclipses Overview

A solar eclipse occurs when the shadow cone of the Moon intersects the surface of the Earth and is observable by anyone within this shadow zone (see Figure 1).

Figure 1: The type of eclipse that is observed depends on the position of the Earth in the Moon's shadow.

Two conditions have to be met for a solar eclipse to occur. The first concerns the relationship between the orbits of the Earth and the Moon, which are not in the same plane, but are inclined at around 5 degrees (5° 8' 43") to each other. The Moon crosses the plane of the Earth's orbit twice in each complete lunar orbit. For an eclipse to occur, the Moon must be near one of these intersection points, called nodes. The second condition is that the Sun, the Earth, and the Moon must also be lined up, corresponding to the phase of the New Moon.

Types of solar eclipses: annular, partial, total

The Moon's shadow consists of two cone-shaped areas (see Figure 1), known as the umbra (externally tangent to the Sun and Moon) and the penumbra (internally tangent to the Sun and Moon). For an observer standing between the Moon and the umbra cone summit, the eclipse is total. If the observer is beyond the cone summit, the eclipse is annular (ring-like): the apparent diameter of the Moon is too small to mask the whole solar disk. For an observer standing in the penumbra, only a part of the Sun is masked: the eclipse is partial.

Figure 2: Why the eclipse goes west to east

The most favourable conditions for a total eclipse are when the Moon is at its perigee, the Earth is farthest from the Sun (around July), and the Sun is observed near zenith. When these conditions are all met, one can have a totality duration of more than 7 minutes.
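The total-versus-annular distinction above comes down to whether the tip of the umbra reaches the observer. A minimal Perl sketch of that similar-triangles geometry follows; the rounded radii and distances are my own illustrative values, not figures from this article:

#!/usr/bin/perl
use strict;
use warnings;

# Rounded values (assumptions for illustration), all in kilometres.
my $r_sun    = 696_000;      # radius of the Sun
my $r_moon   = 1_737;        # radius of the Moon
my $sun_moon = 149_600_000;  # mean Sun-Moon distance

# Length of the umbra cone behind the Moon (similar triangles).
my $umbra_length = $r_moon * $sun_moon / ($r_sun - $r_moon);

for my $moon_earth (363_300, 405_500) {   # perigee and apogee
    my $type = $umbra_length >= $moon_earth ? 'total' : 'annular';
    printf "Moon at %d km: umbra reaches %.0f km, so a %s eclipse\n",
           $moon_earth, $umbra_length, $type;
}

With these numbers the umbra extends to roughly 374,000 km, so it reaches Earth near lunar perigee (a total eclipse is possible) but falls short near apogee (the eclipse is annular), matching the perigee condition noted above.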
http://clusterlaunch.esa.int/science-e/www/object/index.cfm?fobjectid=37889
4.125
As the nation underwent political upheavals in pursuit of democratic development, the Korean Constitution has been amended nine times, the last time on October 29, 1987.

The manuscript of the first Constitution of the Republic of Korea (Photo courtesy of the Korea University Museum)

The current Constitution represents a major advancement in the direction of full democratization. Apart from the legitimate process by which it was passed, a number of substantive changes are notable. They include the curtailment of presidential powers, the strengthening of the power of the legislature and additional devices for the protection of human rights. In particular, the creation of a new, independent Constitutional Court played a vital role in making Korea a more democratic and free society. The Constitution consists of a preamble, 130 articles, and six supplementary rules. It is divided into 10 chapters: General Provisions, Rights and Duties of Citizens, the National Assembly, the Executive, the Courts, the Constitutional Court, Election Management, Local Authority, the Economy, and Amendments to the Constitution. The basic principles of the Korean Constitution include the sovereignty of the people, separation of powers, the pursuit of peaceful and democratic unification of South and North Korea, the pursuit of international peace and cooperation, the rule of law and the responsibility of the state to promote welfare. A constitutional amendment requires special procedures different from other legislation. Either the President or a majority of the National Assembly may submit a proposal for a constitutional amendment. An amendment needs to be passed not only by the National Assembly but also in a national referendum. The former requires support of two-thirds or more of the National Assembly members, while the latter requires more than one half of all votes cast by more than one half of eligible voters in a national referendum.
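The two-stage amendment rule lends itself to a small worked check. A sketch of the thresholds described above (two-thirds of Assembly members, then a majority of votes cast with a majority of eligible voters participating); the vote counts are hypothetical, chosen only to exercise the rule:

#!/usr/bin/perl
use strict;
use warnings;

sub amendment_passes {
    my ($yes_assembly, $assembly_size,
        $yes_votes, $votes_cast, $eligible_voters) = @_;

    my $assembly_ok   = $yes_assembly >= 2/3 * $assembly_size;
    my $turnout_ok    = $votes_cast   >  $eligible_voters / 2;
    my $referendum_ok = $yes_votes    >  $votes_cast / 2;

    return $assembly_ok && $turnout_ok && $referendum_ok;
}

# Hypothetical numbers: 200 of 299 members, then a referendum.
if (amendment_passes(200, 299, 16_000_000, 30_000_000, 44_000_000)) {
    print "Amendment passes\n";
} else {
    print "Amendment fails\n";
}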
http://www.korea.net/Government/Constitution-and-Government/Constitution
4.125
Agglutinins are antibodies that cause the red blood cells to clump together.
- Cold agglutinins are active at cold temperatures.
- Febrile (warm) agglutinins are active at normal body temperatures.
This article discusses the blood test used to measure the level of these antibodies in the blood.
Alternative names: Cold agglutinins; Weil-Felix reaction; Widal's test; Warm agglutinins; Agglutinins
How the test is performed: A blood sample is needed. For information on how this is done, see: Venipuncture
How to prepare for the test: There is no special preparation.
How the test will feel: When the needle is inserted to draw blood, some people feel moderate pain, while others feel only a prick or stinging sensation. Afterward, there may be some throbbing.
Why the test is performed: This test is done to diagnose certain infections and to determine the cause of hemolytic anemia. Distinguishing between warm and cold agglutinins can help explain why the hemolytic anemia is occurring and can direct therapy.
Normal results:
- Warm agglutinins: no agglutination in titers at or below 1:80
- Cold agglutinins: no agglutination in titers at or below 1:16
The examples above are common measurements for results of these tests. Normal value ranges may vary slightly among different laboratories. Some labs use different measurements or test different samples. Talk to your doctor about the meaning of your specific test results.
What abnormal results mean: An abnormal (positive) result means there were agglutinins in the blood sample.
Warm agglutinins may occur with:
Cold agglutinins may occur with:
- Infections, especially mononucleosis and Mycoplasma pneumonia
- Chicken pox (varicella)
- Cytomegalovirus infection
- Cancer, including lymphoma and multiple myeloma
- Systemic lupus erythematosus
- Waldenstrom macroglobulinemia
What the risks are: Veins and arteries vary in size from one patient to another and from one side of the body to the other. Obtaining a blood sample from some people may be more difficult than from others. Other risks associated with having blood drawn are slight but may include:
- Excessive bleeding
- Fainting or feeling light-headed
- Hematoma (blood accumulating under the skin)
- Infection (a slight risk any time the skin is broken)
If cold agglutinin disease is suspected, the individual needs to be kept warm.
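Titer results such as "1:80" can be compared numerically by their dilution factor. A minimal sketch of that comparison, using the cutoffs from the normal ranges above; the parsing helper and the sample results are my own illustration, not part of the original test description:

#!/usr/bin/perl
use strict;
use warnings;

# The dilution factor of a titer such as "1:80" is the number
# after the colon.
sub titer_factor {
    my ($titer) = @_;
    my ($factor) = $titer =~ /^1:(\d+)$/
        or die "Unexpected titer format: $titer";
    return $factor;
}

my %normal_cutoff = (warm => 80, cold => 16);  # from the ranges above

for my $case (['warm', '1:40'], ['cold', '1:64']) {
    my ($kind, $result) = @$case;
    my $flag = titer_factor($result) > $normal_cutoff{$kind}
             ? 'abnormal (positive)' : 'within normal range';
    print "$kind agglutinins at $result: $flag\n";
}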
http://www.parkerhospital.org/body.cfm?id=212&action=detail&AEArticleID=003549&AEProductID=Adam2004_1&AEProjectTypeIDURL=APT_1
4.0625
Rocks Found That Could Store Greenhouse Gas

Geologists have mapped 6,000 square miles of large rock formations in the United States that could be used to store some of the excess carbon dioxide building up in Earth's atmosphere. The carbon dioxide released by fossil fuel burning has been continually accumulating in the atmosphere since the start of the Industrial Revolution. While some of the greenhouse gas is taken up by plants and absorbed by the ocean, a significant amount is still hanging out in the air, trapping some of the heat that Earth's surface would otherwise radiate to space and thereby warming the globe. Scientists and engineers have proposed several ways to artificially trap and store some of this excess carbon dioxide in underground aquifers and other large rock formations. Now scientists at Columbia University's Earth Institute and the U.S. Geological Survey have surveyed the United States and found 6,000 square miles (15,500 square kilometers) of so-called ultramafic rocks at or near the surface that could be ideal for storing the excess gas. The locations of the rocks are detailed in a USGS report. Originating deep in the earth, these rocks contain minerals that react naturally with carbon dioxide to form solid minerals, a process called mineral carbonation that could make for an ideal storage mechanism. Other so-called carbon sequestration schemes have focused on storing carbon dioxide in liquid or gas form, but these proposals have met with concerns about leaks. The major drawback to natural mineral carbonation is its slow pace: normally, it takes thousands of years for rocks to react with sizable quantities of carbon dioxide. But scientists are experimenting with ways to speed the reaction up by dissolving carbon dioxide in water and injecting it into the rock, as well as capturing heat generated by the reaction to accelerate the process. "It offers a way to permanently get rid of carbon dioxide emissions," said Juerg Matter, a scientist at Columbia's Lamont-Doherty Earth Observatory, where a range of projects to tackle the issue is underway. The United States' ultramafic rocks could be enough to stash more than 500 years of U.S. carbon dioxide production, said the report's lead author, Sam Krevor, a graduate student working through the Earth Institute's Lenfest Center for Sustainable Energy. Most of the locations are conveniently clustered in strips along the east and west coasts — some near major cities including New York, Baltimore and San Francisco. "We're trying to show that anyone within a reasonable distance of these rock formations could use this process to sequester as much carbon dioxide as possible," Krevor said. Klaus Lackner, who helped originate the idea of mineral sequestration in the 1990s, hopes a global mapping effort can be undertaken to find more such storage areas. "It's a really big step forward," he said. Another rock, common volcanic basalt, also reacts with carbon dioxide, and efforts are underway to map this rock type in detail as well.
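As a quick check of the article's own unit conversion (the square-mile-to-square-kilometre factor is the standard one; the script is mine):

#!/usr/bin/perl
use strict;
use warnings;

my $sq_miles        = 6_000;
my $km2_per_sq_mile = 2.589988;  # standard conversion factor

printf "%d square miles is about %.0f square kilometres\n",
       $sq_miles, $sq_miles * $km2_per_sq_mile;   # ~15,540

That rounds to the 15,500 square kilometers quoted above.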
http://www.livescience.com/3364-rocks-store-greenhouse-gas.html
4.15625
Subatomic particles are particles that are smaller than an atom. In 1940, the number of subatomic particles known to science could be counted on the fingers of one hand: protons, neutrons, electrons, neutrinos, and positrons. The first three particles were known to be the building blocks from which atoms are made: protons and neutrons in atomic nuclei and electrons in orbit around those nuclei. Neutrinos and positrons were somewhat peculiar particles discovered outside Earth's atmosphere and of uncertain origin or significance. That view of matter changed dramatically over the next two decades. With the invention of particle accelerators (atom-smashers) and the discovery of nuclear fission and fusion, the number of known subatomic particles increased. Scientists discovered a number of particles that exist at energies higher than those normally observed in our everyday lives: sigma particles, lambda particles, delta particles, epsilon particles, and other particles in positive, negative, and neutral forms. By the end of the 1950s, so many subatomic particles had been discovered that some physicists referred to their list as a "particle zoo."

The quark model

In 1964, American physicist Murray Gell-Mann (1929– ) and physicist George Zweig (1937– ), then working at CERN in Switzerland, independently suggested a way out of the particle zoo. They suggested that the nearly 100 subatomic particles that had been discovered so far were not really elementary (fundamental) particles. Instead, they suggested that only a relatively few elementary particles existed, and the other subatomic particles that had been discovered were composed of various combinations of these truly elementary particles.

Words to Know
- Antiparticles: Subatomic particles similar to the proton, neutron, electron, and other subatomic particles, but having one property (such as electric charge) opposite them.
- Atomic mass unit (amu): A unit of mass measurement for small particles.
- Atomic number: The number of protons in the nucleus of an atom.
- Elementary particle: A subatomic particle that cannot be broken down into any simpler particle.
- Energy levels: The regions in an atom in which electrons are most likely to be found.
- Gluon: The elementary particle thought to be responsible for carrying the strong force (which binds together neutrons and protons in the atomic nucleus).
- Graviton: The elementary particle thought to be responsible for carrying the gravitational force.
- Isotopes: Forms of an element in which atoms have the same number of protons but different numbers of neutrons.
- Lepton: A type of elementary particle.
- Photon: An elementary particle that carries electromagnetic force.
- Quark: A type of elementary particle.
- Spin: A fundamental property of all subatomic particles corresponding to their rotation on their axes.

The truly elementary particles were given the names quarks and leptons. Each group of particles, in turn, consists of six different types of particles. The six quarks, for example, were given the rather fanciful names of up, down, charm, strange, top (or truth), and bottom (or beauty). These six quarks could be combined, according to Gell-Mann and Zweig, to produce particles such as the proton (two up quarks and one down quark) and the neutron (one up quark and two down quarks). In addition to quarks and leptons, scientists hypothesized the existence of certain particles that "carry" various kinds of forces. One of those particles was already well known: the photon.
The photon is a strange type of particle with no mass that apparently is responsible for the transmission of electromagnetic energy from one place to another. In the 1980s, three other force-carrying particles were also discovered: the W⁺, W⁻, and Z⁰ bosons. These particles carry certain forces that can be observed during the radioactive decay of matter. (Radioactive elements spontaneously emit energy in the form of particles or waves by disintegration of their atomic nuclei.) Scientists have hypothesized the existence of two other force-carrying particles, one that carries the strong force, the gluon (which binds together protons and neutrons in the nucleus), and one that carries gravitational force, the graviton.

Five important subatomic particles

For most beginning science students, the five most important subatomic particles are the proton, neutron, electron, neutrino, and positron. Each of these particles can be described completely by its mass, electric charge, and spin. Because the mass of subatomic particles is so small, it is usually not measured in ounces or grams but in atomic mass units (label: amu) or electron volts (label: eV). An atomic mass unit is approximately equal to the mass of a proton or neutron. An electron volt is actually a unit of energy but can be used to measure mass because of the relationship between mass and energy (E = mc²). All subatomic particles (indeed, all particles) can have one of three electric charges: positive, negative, or none (neutral). All subatomic particles also have a property known as spin, meaning that they rotate on their axes in much the same way that planets such as Earth do. In general, the spin of a subatomic particle can be clockwise or counterclockwise, although the details of particle spin can become quite complex.

Proton. The proton is a positively charged subatomic particle with an atomic mass of about 1 amu. Protons are one of the fundamental constituents of all atoms. Along with neutrons, they are found in a very concentrated region of space within atoms referred to as the nucleus. The number of protons determines the chemical identity of an atom. This property is so important that it is given a special name: the atomic number. Each element in the periodic table has a unique number of protons in its nucleus and, hence, a unique atomic number.

Neutron. A neutron has a mass of about 1 amu and no electric charge. It is found in the nuclei of atoms along with protons. The neutron is normally a stable particle in that it can remain unchanged within the nucleus for an infinite period of time. Under some circumstances, however, a neutron can undergo spontaneous decay, breaking apart into a proton and an electron. When not contained within an atomic nucleus, the half-life for this change—the time required for half of any sample of neutrons to undergo decay—is about 11 minutes. The nuclei of all atoms with the exception of the hydrogen-1 isotope contain neutrons. The nuclei of atoms of any one element may contain different numbers of neutrons. For example, the element carbon is made of at least three different kinds of atoms. The nuclei of all three kinds of atoms contain six protons. But some nuclei contain six neutrons, others contain seven neutrons, and still others contain eight neutrons. These forms of an element that contain the same number of protons but different numbers of neutrons are known as isotopes of the element.

Electron.
Electrons are particles carrying a single unit of negative electricity with a mass of about 1/1836 amu, or about 0.00055 amu. All atoms contain one or more electrons located in the space outside the atomic nucleus. Electrons are arranged in specific regions of the atom known as energy levels. Each energy level in an atom may contain some maximum number of electrons, ranging from a minimum of two to a maximum of eight. Electrons are leptons. Unlike protons and neutrons, they are not thought to consist of any smaller particles but are regarded themselves as elementary particles that cannot be broken down into anything simpler. All electrical phenomena are caused by the existence or absence of electrons or by their movement through a material.

Neutrino. Neutrinos are elusive subatomic particles that are created by some of the most basic physical processes of the universe, like decay of radioactive elements and fusion reactions that power the Sun. They were originally hypothesized in 1930 by Austrian-born physicist Wolfgang Pauli (1900–1958). Pauli was trying to find a way to explain the apparent loss of energy that occurs during certain nuclear reactions. Neutrinos ("little neutral ones") proved very difficult to actually find in nature, however. They have no electrical charge and possibly no mass. They rarely interact with other matter. They can penetrate nearly any form of matter by sliding through the spaces between atoms. Because of these properties, neutrinos escaped detection for 25 years after Pauli's prediction. Then, in 1956, American physicists Frederick Reines and Clyde Cowan succeeded in detecting neutrinos produced by the nuclear reactors at the Savannah River Plant. By 1962, the particle accelerator at Brookhaven National Laboratory was generating enough neutrinos to conduct an experiment on their properties. Later, physicists discovered a second type of neutrino, the muon neutrino. Traditionally, scientists have thought that neutrinos have zero mass because no experiment has ever detected mass. If neutrinos do have a mass, it must be less than about one hundred-millionth the mass of the proton, the sensitivity limit of the experiments. Experiments conducted during late 1994 at Los Alamos National Laboratory hinted at the possibility that neutrinos do have a very small, but nonzero, mass. Then in 1998, Japanese researchers found evidence that neutrinos have at least a small mass, but their experiments did not allow them to determine the exact value for the mass. In 2000, at the Fermi National Accelerator Laboratory near Chicago, a team of 54 physicists from the United States, Japan, South Korea, and Greece detected a third type of neutrino, the tau neutrino, considered to be the most elusive member of the neutrino family.

Positron. A positron is a subatomic particle identical in every way to an electron except for its electric charge. It carries a single unit of positive electricity rather than a single unit of negative electricity. The positron was hypothesized in the late 1920s by English physicist Paul Dirac (1902–1984) and was first observed by American physicist Carl Anderson (1905–1991) in a cosmic ray shower. The positron was the first antiparticle discovered—the first particle that has properties similar to protons, neutrons, and electrons, but with one property exactly the opposite of them.
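The point above that mass can be quoted in energy units follows directly from E = mc². A short sketch of the conversion for three of the particles discussed, using rounded reference constants of my own choosing rather than values from the article:

#!/usr/bin/perl
use strict;
use warnings;

my $c  = 2.998e8;     # speed of light, m/s
my $ev = 1.602e-19;   # joules per electron volt

# Rest masses in kilograms (rounded reference values).
my %mass_kg = (
    proton   => 1.6726e-27,
    neutron  => 1.6749e-27,
    electron => 9.1094e-31,
);

for my $particle (sort keys %mass_kg) {
    my $energy_ev = $mass_kg{$particle} * $c**2 / $ev;   # E = mc^2
    printf "%-8s  %.3e eV  (%.1f MeV)\n",
           $particle, $energy_ev, $energy_ev / 1e6;
}

The output lands on the familiar figures: about 938 MeV for the proton and neutron, and about 0.5 MeV for the electron.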
http://www.scienceclarified.com/Sp-Th/Subatomic-Particles.html
4.125
Balance Math & More Level 1 is a reproducible workbook for grades 2-5. This 46-page workbook contains activities that combine mathematical computation with logical thinking for a good mental workout. This workbook contains three different types of exercises, and they increase in difficulty throughout the book. The first type is called "Balance Math." In this activity, students are presented with three balances. On one side of the balance are shapes (or one shape, in the earlier exercises), and the other side of the balance assigns a numerical value to these shapes. The first two balances give information, and the final balance asks the student to figure out the numerical value for the shape or shapes on the balance. The second type of exercise is called "Inside Out Math." This activity uses a grid with four squares. Each of the squares contains one of the following: a+c, a+d, b+c, and b+d (or the same letters using subtraction). There are various numerical values filled in, and the student is required to solve for a, b, c, and d. The third type of exercise is called "Tic Tac Math." In this exercise, a tic-tac-toe grid is partially filled in with numerical values. The student must complete the grid so that all rows, columns, and diagonals equal the same sum. Balance Math & More Level 1 is a fun introduction to algebraic thinking, though I do think second graders might have difficulty with some of the activities in the book, especially toward the end. However, a third, fourth, fifth, or even sixth grader who enjoys logic puzzles would enjoy this book. The puzzles are challenging, and I'd recommend this book as a great supplement to any math program.
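The "Inside Out Math" grids have a handy property worth noting: because the four cells hold a+c, a+d, b+c, and b+d, the two diagonals must have equal sums, since (a+c) + (b+d) = (a+d) + (b+c). A small Perl sketch that uses this identity to recover a missing cell (the sample values are mine, not taken from the workbook):

#!/usr/bin/perl
use strict;
use warnings;

# The four cells hold a+c, a+d, b+c, and b+d, so the missing one
# follows from (a+c) + (b+d) = (a+d) + (b+c).
my %cell = (
    'a+c' => 7,
    'a+d' => 9,
    'b+c' => 4,
    'b+d' => undef,   # unknown
);

$cell{'b+d'} = $cell{'a+d'} + $cell{'b+c'} - $cell{'a+c'};
print "b+d must be $cell{'b+d'}\n";   # 9 + 4 - 7 = 6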
http://thehomeschoolmagazine.com/Homeschool_Reviews/4767.php
4.15625
In the Channel Stability section, we discussed how stream channels can go through a multi-step evolution following a major disturbance within a watershed. Watershed disturbances can be either human-caused or natural, and can have repercussions well beyond the reorganization of the stream channel. Becoming aware of the potential implications of human actions can aid us in making better-informed land use and management decisions. Natural disturbances include drought, floods, wildfires, insects, and disease. In many cases, ecosystems can restore equilibrium following these types of events without human intervention. Ecosystems by design are resilient; biological systems have evolved under conditions of periodic droughts, floods, and fires. In contrast, human alterations to stream channels can have significant rippling effects within stream systems. Dams, constructed for reasons including generating hydroelectric power and storing water, disrupt the natural fluctuations in stream flows and diminish the amount of water reaching the downstream portions of the watershed. Sediment that is transported via the stream is trapped behind the dam and does not reach the downstream portion of the river, affecting hydrology and habitat. Additionally, the temperature of the stream exiting the dam is much lower than the natural stream temperature, threatening biological communities that evolved under warmer stream conditions.

Animation: how stream corridors respond differently to flooding events depending on how they have been altered.
http://ag.arizona.edu/watershedsteward/resources/module/Stream/stream_disturbances_pg1.htm
4.34375
Threatened and Endangered Species Program

Since life began, species have come and gone. While extinctions do occur naturally, scientific evidence suggests that the current rate of extinction is much higher than the natural rate. Biologists estimate that since the Pilgrims landed at Plymouth Rock in 1620, more than 5,000 species, subspecies, and varieties of our Nation's plants and animals have become extinct. Under the Endangered Species Act of 1973, the U.S. Fish and Wildlife Service has the primary responsibility to coordinate the conservation of those plants and animals that are threatened with extinction and the ecosystems that support them. The Chesapeake Bay Field Office is responsible for protecting threatened and endangered species that occur in Maryland, Delaware, the District of Columbia, and portions of Virginia.

Why should we save endangered species? All living creatures, including people, are part of a complex balanced network called the biosphere. The removal of a single species from the biosphere can set off chain reactions that affect other species. By saving endangered species we preserve the natural diversity of life on earth. Endangered and threatened species may also provide the chemical compounds necessary for new medicines, biological controls for agricultural pests, and new food sources, and act as environmental barometers alerting us to problems affecting wildlife, air, soil, or water.
http://www.fws.gov/ChesapeakeBay/Endang.htm
4.03125
In the recent geologic past, volcanic activity dramatically impacted the Grand Canyon. In the western Grand Canyon, hundreds of volcanic eruptions occurred over the past two million years. At least a dozen times, lava cascaded down the walls of the Inner Gorge, forming massive lava dams that blocked the flow of the Colorado River. Three of these lava dams were over 1,000 feet high, forming lakes similar to reservoirs such as Lake Powell or Lake Mead. Some of the lakes were over 100 miles long and filled the lower portion of the Grand Canyon for many years before finally overtopping the dam and eroding much of it away. Cinder cones and the remnants of lava flows and dams are visible in the Toroweap area and from the river near Lava Falls. Just southeast of Grand Canyon, near Flagstaff, is Sunset Crater Volcano National Monument, where in A.D. 1064 a series of eruptions built the park's namesake cinder cone. About 45 earthquakes occurred in or near the Grand Canyon during the 1900s. Of these, five registered between 5.0 and 6.0 on the Richter Scale. Dozens of faults cross the canyon, with at least several active in the last 100 years.

Did You Know? Within the Grand Canyon, the type and abundance of organisms is directly related to the presence or absence of water. The Colorado River and its tributaries, as well as springs, seeps, stock tanks, and ephemeral pools, provide oases to flora and fauna in this semi-arid southwest desert area.
http://www.nps.gov/grca/naturescience/geologicactivity.htm
4.3125
An In-Depth Look at Language

Language is a learned system of symbolic communication. Humans use symbols to communicate meaning. These symbols may be sounds, signs, or things that are abstractions applied to the thing they signify. Language provides the rules that allow humans to interpret speech, which is defined as patterned vocalizations. Linguistics is the study of how language is organized and how it functions. Formal linguistics studies the grammar of languages, the rules of language organization. The ultimate goal of formal linguistics is a universal grammar, or an explanation of how the brain understands and processes language. Formal linguistics includes three main schools: traditional linguistics, structural linguistics, and generative or transformational linguistics. Traditional linguistics is concerned with the grammatical parts of speech, such as nouns, adjective clauses, and verbs. Structural linguistics studies the arrangement of linguistic forms and excludes meaning as a field of study. It is concerned with morphology, syntax, and phonology. Generative or transformational linguistics includes the study of meaning and is concerned with the structures of linguistic forms.

The Structure of Language

Language is the set of rules that govern written and spoken human communication. These rules make what would be arbitrary sounds and signs intelligible and meaningful. The rules that predict what sounds are meaningful in different languages and how those sounds can be combined are called phonology. The rules that govern which combinations of sounds form larger units of meaning (words and sentences) and how those sounds are manipulated to form different words are called morphology.
- Basic Structure of Language: An explanation of phonetic alphabets, phonemes, and the building blocks of language.
- The Structure of Language: Explanation and exercises.

Phonemes are the smallest units of sound, and morphemes are the smallest units of sound that carry meaning. Morphemes may have more than one allomorph, or variant of the morpheme, that represents the actual sound that corresponds to it. Which variant is used is determined by the preceding letter or sound. Phonology is the study of how sounds form patterns to create phonemes and allophones. Phonological rules govern the addition, subtraction, and ordering of sounds in words. These rules give speakers the necessary information to pronounce words. To learn a language, one must learn the basic units of sound that form the language. Some phonemes exist in some languages and not in others, which can make pronouncing and understanding a language difficult for non-native speakers.
- Phonology: Explanation of phonology and phonemes. Also includes exercises.
- International Phonetic Association: IPA homepage; includes sound recordings and a chart of the International Phonetic Alphabet.
- Segmental Phonology: A resource for those interested in studying segmental phonology; requires a basic understanding of phonetics and phonology.
- Phonetics: An online course in phonetics.
- Articulatory Phonology: An explanation of articulatory phonology.
- Phonology and Phonetics: Definitions and explanations of phonetics, phonology, and their relationship to English spelling.

Morphology is the study of the internal structure of words and the rules by which words are formed.
Phonemes are combined with each other into larger units of sound called morphemes. Morphemes are the smallest units of sound that convey meaning. Words can be formed by one or more morphemes. Morphemes are classified as "free" or "bound". Free morphemes can form words on their own. Bound morphemes can form parts of words but cannot be independent words. Examples of bound morphemes are prefixes and suffixes. Understanding morphemes and their combinations allows speakers to form and learn new words.
- Morphology: Exercises and links on morphology.
- Morphology and Morphemes: Definition of morphology, morphemes, allomorphs, roots, affixes, and the Greek and Latin roots of English words.
- Lexemes and Morphemes: Explanations of morphology, morphemes, lexemes, and Lexeme-Morpheme Base Morphology.
- Morphemes, Morphology, and Allomorphs: Definitions and examples.
- Morphology and Morphological Rules: Definitions, examples, and rules.

Grammar is analyzed in two ways: through morphology and syntax. Syntax is the study of the standardized rules that control how words can be combined to form larger units of meaning, called sentences. Speakers of a language learn syntactical rules from family and friends and refine them through the formal study of grammar. Speakers learn the rules of syntax in order to combine the language's morphemes into different patterns of meaning. For example, a syntactical rule of English is to place the subject before the verb in a declarative sentence.
- The Syntax of Language: An introduction to syntax.
- A Slide Presentation on Syntax: Includes the major principles and components of syntax. From the Department of English at the College of DuPage.
- English Syntax: Resources for students of English syntax.
- What is Syntax?: Definition and examples.

All human languages have systematic rules that govern how sounds, words, and sentences can be formed and combined to communicate meaning. Every human language is learned rather than biologically inherited, unlike the systems of vocal communication used among nonhuman primates. While most human languages are spoken, with the exception of sign languages, speech and language are not the same. Speech is patterned vocalization. Animals vocalize, as do human babies, but these vocalizations are not speech because they do not conform to set patterns. Human languages are also symbolic systems of communication, which means that human beings use arbitrary symbols—sounds or words—and apply meaning to them.
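The earlier claim that "which variant is used is determined by the preceding letter or sound" can be made concrete with the English plural morpheme, whose allomorphs are chosen by the final sound of the stem. A rough Perl sketch follows; it keys on spelling rather than true phonology, so the rules are simplifying assumptions of mine:

#!/usr/bin/perl
use strict;
use warnings;

# Crude allomorph chooser for the English plural morpheme.
# Real phonology keys on sounds; matching spelling is a rough proxy.
sub plural_allomorph {
    my ($word) = @_;
    return '/iz/' if $word =~ /(s|z|x|ch|sh)$/;  # sibilant endings
    return '/s/'  if $word =~ /[pktf]$/;         # voiceless endings
    return '/z/';                                 # voiced elsewhere
}

for my $word (qw(cat dog bush fox cup)) {
    printf "%-5s -> plural morpheme realized as %s\n",
           $word, plural_allomorph($word);
}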
http://www.qualityansweringservice.com/depth-look-language
4.34375
The Norwegian tail grab or hval kla is another of the inventions that made industrial whaling using factory ships and slipways on those ships possible. A dead whale could be taken to the slipway and a line passed around it to start the process of winching it up the slipway. But such a great weight and relatively thin lines could cause problems, not the least of which was passing a strong enough line around the whale to haul it up the slipway and then have it easily removed once on deck. This claw was large enough and strong enough (weighing several tons itself) to "grab" hold of the tail of the whale fairly easily at its narrowest part, the "small", and haul it onto the flensing deck. It could also be taken off again relatively easily once the whale was in place, ready for the next one. No man had to enter the dangerous zone of the slipway, and ropes did not need to be handled to haul the whale except from a distance. The size of the grab can be seen by comparison with the man in the last picture. Operating this contraption was in itself a skilled job, with the ship and whale moving with the swell and waves and the grab itself suspended above the whale, perhaps swinging in a rhythm of its own. At the crucial point, the grab, fully opened, would be dropped onto the whale and, with a crash that could be heard throughout the whole ship, would grab the whale's tail; the winch attached to the rear end would pull, and the jaws would close together to bring the whale aboard. Note in these pictures how the end parts of the flukes of the whale have been removed; this was done by the catcher boats immediately after the capture of the whale. It is sometimes said that this was done superstitiously, to prevent a dead whale that had been inflated, buoyed, and left for some time before transport to the factory ship from somehow finning itself many miles away. It probably also had a more practical use in that whales were towed backwards to the factory ship by a rope placed around the small, and intact flukes would act as a brake, slowing this passage down. The flukes were not always cut in this manner.
http://www.coolantarctica.com/gallery/whales_whaling/0057.htm
4.0625
In a new finding that could have game-changing effects if borne out, two astrophysicists think they've finally tracked down the elusive signature of dark matter. This invisible substance is thought to make up much of the universe, but scientists have little idea what it is. They can only infer the existence of dark matter by measuring its gravitational tug on the normal matter that they can see. Now, after sifting through observations of the center of our Milky Way galaxy, two researchers think they've found evidence of the annihilation of dark matter particles in powerful explosions. "Nothing we tried besides dark matter came anywhere close to being able to accommodate the features of the observation," Dan Hooper, of the Fermi National Accelerator Laboratory in Batavia, Ill., and the University of Chicago, told SPACE.com. "It's always hard to be sure there isn't something you just haven't thought of. But I've talked to a lot of experts and so far I haven't heard anything that was a plausible alternative." Hooper conducted the analysis with Lisa Goodenough, a graduate student at New York University.

Dark matter destruction

The idea of dark matter was first proposed in the 1930s, after the velocities of galaxies and stars suggested the universe contained much more mass than what could be seen. Dark matter would not reflect light, so it couldn't be observed directly by telescopes. Now scientists calculate dark matter makes up roughly 80 percent of all matter, with regular atoms contributing a puny 20 percent. The Fermi Gamma-ray Space Telescope, which has scanned the heavens in high-energy gamma-ray light since it was launched in 2008, has observed a signal of gamma-rays at the very center of the galaxy that was brighter than expected. Hooper and Goodenough tested many models to explain what could be creating this light. They ultimately concluded it must be caused by dark matter particles that are packed in so densely that they are destroying each other and releasing energy in the form of light. Physicists have theorized that dark matter particles might be their own antimatter partners, and thus when two dark matter particles meet under the right circumstances, they would destroy each other. Alternatively, dark matter particles might be meeting anti-dark matter particles at the galactic center. Either way, the researchers think the Milky Way's gamma-ray glow is caused by dark matter explosions. By studying the data on this radiation, Hooper and Goodenough calculated that dark matter must be made of particles called WIMPs (weakly interacting massive particles) with masses between 7.3 and 9.2 GeV (giga electron volts), roughly eight to ten times the mass of a proton. They also calculated a property known as the cross-section, which describes how likely the particle is to interact with others. Knowing these two properties would represent a huge leap forward in our understanding of dark matter. "It's the biggest thing that's happened in dark matter since we learned it existed," Hooper said. "So long as no unexpected alternative explanations come forward, I think yes, we've finally found it." The researchers have submitted a paper describing their findings to the journal Physics Letters B, but it has not yet gone through the peer-review process.

Some skepticism remains

Not everyone is ready to accept that dark matter has been found. Hooper and Goodenough based their analysis on data released to the public from the Fermi observatory's Large Area Telescope.
However, the official Fermi team, a large collaboration of international scientists, has not finished studying the intriguing glow. While they don't exclude the possibility that it is dark matter, team members are not ready to dismiss the possibility of another explanation. "We feel that astrophysical interpretations for the gamma-ray signals from the region of the galactic center have to be further explored," said Seth Digel, analysis coordinator for the Large Area Telescope collaboration and a staff physicist at the SLAC National Accelerator Laboratory in Menlo Park, Calif. "I can't and won't say what they've done is wrong, but as a collaboration we don't have our own final understanding of the data." Fermi scientists stressed that the analysis of the Milky Way's center is very complex, because there are so many bright sources of gamma-ray light in this crowded region. Various types of spinning stars called pulsars, as well as remnants left over from supernovas, also contribute confusing signals. "More work needs to be done in this direction, and people within the collaboration are working hard to accomplish this goal. Until this is done, it is too difficult to interpret the data," said Simona Murgia, another SLAC scientist and Fermi science team member. Hooper agreed that the case is not yet closed. "I want a lot of people who are experts to think about this hard and try to make it go away," he said. "If we all agree we can't, then we'll have our answer." One reason he and Goodenough think they are on the right track is that their calculation of the mass of dark matter particles aligns with some promising hints from other studies, he said. Two ground-based experiments aimed at detecting dark matter have found preliminary indications of particles with roughly the same mass. The University of Chicago's CoGeNT project, buried deep in the Soudan iron mine in northeastern Minnesota, and DAMA, an Italian experiment underground near the Gran Sasso Mountains outside of Rome, both found signals that they can't completely attribute to normal particles, but can't prove are from dark matter. "Part of why this picture is so compelling has to do with those in fact," Hooper said. "I would argue that it's likely that all three of these experiments are seeing the same dark matter particle."

The Sagan standard

Still, it will take a lot of work to convince most astrophysicists that such a slippery substance has been captured at last. "It's a complicated task to interpret what Dan and Lisa are seeing," said Doug Finkbeiner, a researcher at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. "I do not find it persuasive, but that doesn't mean it is wrong." Some scientists said we finally may be getting close to solving the mystery of dark matter.
Michael Turner, director of the Kavli Institute for Cosmological Physics at the University of Chicago, said that between Fermi, the ground-based experiments, and the recently opened Large Hadron Collider particle accelerator at the CERN laboratory in Switzerland, scientists will likely confirm the existence of dark matter within the next decade. For now, though, he's still waiting. "This result is very intriguing but doesn't yet rise to the Sagan standard: extraordinary claims require extraordinary evidence," Turner said. Other explanations would have to be eliminated, he said. "Nature knows many ways to make gamma rays."
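As a quick arithmetic check on how the quoted WIMP masses compare with the proton (the calculation is mine, using the standard proton rest mass of about 0.938 GeV, not a figure from the researchers):

#!/usr/bin/perl
use strict;
use warnings;

my $proton_gev = 0.938;            # proton rest mass in GeV
my @wimp_range = (7.3, 9.2);       # WIMP mass range from the analysis

printf "%.1f GeV is %.1f proton masses\n", $_, $_ / $proton_gev
    for @wimp_range;

The range works out to just under eight to just under ten proton masses, consistent with the "roughly eight to ten times" comparison above.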
http://www.nbcnews.com/id/39874873/ns/technology_and_science-space/t/has-dark-matter-finally-been-seen/
4.4375
Missouri's First Blacks

The first Black slaves to enter what would later be named Missouri arrived in 1719 as unwilling participants in the new French mining venture. Des Ursins brought five Blacks with him, and although he failed to find the silver mines he sought, he did discover several rich lead deposits. In 1720, Philippe François Renault was sent from France to direct lead-mining operations. He may have brought with him as many as 500 Black slaves from the French island of Haiti. These were the first permanent Black residents of Missouri. The Company of the West contracted to supply Renault with 25 additional Blacks annually. By 1725, Renault's mines were yielding 1,500 pounds of lead per day.

The French Explorers

The arrival of French explorers and traders radically changed slavery among the Indians. The French wanted to buy Indian slaves, providing tribes such as the Osage and Missouri with guns and ammunition in return for captives. Once one tribe acquired weapons, other groups felt compelled to do the same. Consequently, rather than a by-product of conflict, slavery became its cause. The only way to avoid becoming enslaved was to be stronger than the enemy tribe. Strength was often obtained by capturing slaves and bartering them for weapons. The rise of slave-trading for gain had begun.

The Black men and women who finally gained their freedom at the end of the Civil War were not the first Black freedmen of the state. Although the distinction between free Blacks and slaves was vague, a free Negro class existed throughout the period of slavery in Missouri. The presence of free Blacks in an all-slave society threatened to undermine the very foundation upon which slavery was built. The continuation of the slave system was based upon the assumption that whites should exercise indisputable control over Blacks. Freedmen, regardless of the theoretical rights and equalities which freedom implied, could not be allowed to subvert that system by acting as if they were as good as whites.

The Versatility of Missouri Slaves

Missouri slaves had a wider range of skills and occupations than slaves in the deep South because of the different type of farming in Missouri. Missouri's land was abundant and fertile, but the cold weather meant a shorter growing season and did not permit the growth of cotton in large quantities. Farmers and their slaves practiced mixed farming. They produced hemp, tobacco, wheat, oats, hay, corn, and other feed grains. Missouri also became well known for its fine cattle, sheep, horses, and pigs. Consequently, the Missouri slave became a multi-talented worker.

Civil War Soldiers

Putting on the uniform of the United States enhanced the Black man's self-esteem and his dignity. It gave him a sense of identity with the struggle for human freedom, and compelled him to look beyond his own experiences. Sergeant Prince Rivers, a Black soldier of the 1st South Carolina Volunteers, spoke for Black soldiers everywhere when he summed up what the war meant to him: "A new day." Throughout the former slave states, Blacks believed a new day had dawned. Signs of optimism abounded: a new status as freedmen, a new sense of belonging and worth, and new opportunities in education.

The Black Emphasis on Education

As the war drew to a close, many Black and white leaders, aware that slavery was a dying institution, tried to arrange educational opportunities for slaves and free Blacks alike. They viewed education as the single most important key to the Black movement into mainstream American society.
Black and white educational efforts on behalf of Blacks were so extensive in St. Louis that a Black Board of Education was established. The unofficial board directed four schools with 499 students. By 1865, the system had eight teachers and 600 pupils.

In the spring of 1879, thousands of Southern Blacks passed through St. Louis and Kansas City on their way to Kansas. The citizens of these two cities had often witnessed the arrival of emigrants traveling west. However, few had taxed their resources and patience as the participants of the Exodus of 1879 did. Many of the Black emigrants were destitute when they arrived in St. Louis and had no means to continue their journey to Kansas. Within three months, the inhabitants of St. Louis and Kansas City organized relief committees to look after these "Exodusters".

Migration of the Black Population

When the fighting broke out in 1914, European immigration virtually stopped. Southern Blacks began to move north by the thousands to fill the labor gap. Rural Missouri Blacks, hearing of better economic and social opportunities in the cities, moved to St. Louis and Kansas City to work for factories and railroads. Greater numbers of Blacks living in cities meant overcrowded neighborhoods and unsanitary, crime-ridden living conditions. When Black families resorted to moving into white neighborhoods, white families often retaliated with violence.

Blacks and the Depression

The stock market crash of 1929 sent the American economy into a downward plunge from which it would not recover for more than a decade. Millions of Americans, accustomed to relative comfort and security, faced unemployment, handouts, and even soup lines. The Great Depression hit Blacks hardest. They were last to be hired and first to be fired. Picketing was rampant as whites now competed for jobs that were once regarded as "nigger" tasks. The trend was reflected even in the state capitol, where white elevator operators replaced Blacks.

Civil Rights Movement

When the Civil War brought freedom but no justice, countless African American civil rights leaders, clergy, educators, philanthropists, and public servants fought on. Undeterred by the lash of the whip, the lynch mob, or the law of the land, they held America accountable to its promise of "liberty and justice for all". Slowly but surely, common, everyday people vanquished the stumbling blocks of segregation that barred African Americans from America's courtrooms, hotel rooms, dining rooms, restrooms, emergency rooms, locker rooms, and boardrooms. Each door they pried open led to greater opportunities, not only for African Americans, but for every other disenfranchised group in the land.

Kansas City's Entrepreneurship 1900-1920

Between 1900 and 1920 a small but steadily emerging middle-class citizenry began the serious work of building itself economically, politically, and socially. This twenty-year span of economic growth was characterized by a growing spirit of entrepreneurship. African Americans opened a variety of businesses, largely clustered in two areas. One area was on 18th Street along Paseo, Highland, Vine, and Woodland. African American businesses also opened in the area around 12th Street along Woodland and Vine. Some African American professional offices and a few businesses such as the Burton Publishing Company, Southside Pressing Company, and the Ashcraft Barber Shop were located in Kansas City's downtown area. And a few businesses such as the Urbank's Drug Store opened on Independence Avenue near Harrison.
The predominance of African American businesses in the 18th Street and 12th Street areas created a new sense of vibrancy and provided residents with the goods, services, and products needed to conduct their daily lives. Entrepreneurial ventures by African Americans in Kansas City paid off for a few. By 1911, operating within the African American community were 85 tailor shops, 75 pool halls, 25 dry cleaners, four undertaking parlors, seven nightclubs, one shoe store, one dry goods store, and a number of restaurants, all of which were owned by African Americans. By 1915, Kansas City's African American community had six Black stores doing an annual business of $60,000 and six undertakers doing an annual business of $100,000. And in 1920 the Colored Chamber of Commerce was formed. The Watkins, Gates, and Blankenship families are just a few of the Kansas City African American family businesses that started during this exciting era and still exist today. The history and growth of these businesses are highlighted in our traveling exhibits.
http://www.blackarchives.org/articles/african-americans-missouri
4.125
During the Classical Era, many changes in instrumental style took place. The classical style evolved a great deal during the period. Sonata form was the basic structure in which composers wrote instrumental music. Sonata form was applied to solo sonatas, chamber music, symphonies, and concertos. Musical compositions of this time contained three or four movements, each with its own special characteristics.

The first movement of a sonata was called the sonata-allegro. It consisted of three sections:
1) Exposition: This section presented the main theme of the movement in the tonic key. The theme then transitioned by a bridge and modulated to the dominant key, or relative major key if the movement was in a minor key. The second theme was presented in the dominant key. This section concluded with a closing theme or codetta.
2) Development: This section used the material from the exposition, which the composer "developed" and expanded. Motives were presented in various keys, registers, and groupings of instruments. In this section the composer also used new themes that were not found in the exposition section. The composer ended this section on the dominant and moved directly into the recapitulation.
3) Recapitulation: The recapitulation was a restatement of the exposition, but with all subsections remaining in the tonic key.

In the second movement of a sonata, there were three specifications that usually occurred: it was written in a slow tempo, it was in a contrasting key (usually the subdominant or dominant) in relation to the whole work, and it was more lyrical than the other movements.

The third movement in the classical sonata was called the minuet. Like the other movements, this one also had special characteristics. It was written in a moderately fast tempo, played in the tonic key, and was written in three-four time. The minuet had three sections: minuet, trio, and a repeat of the minuet. In a sonata with three movements, the minuet was left out. In some of Haydn's and in most of Beethoven's works in sonata form, the third movement was called a scherzo. It utilized the same aspects of the minuet, but was more humorous in nature. Sometimes the two middle movements were reversed, so that the minuet came second and the slow movement third.

The fourth movement, or finale, also had distinct characteristics. It had a lively tempo, was played in the tonic key, and was usually played in sonata-allegro form.

Another important form of instrumental music was the symphony, which blossomed during the 18th century. The basic form of the classical symphony was the Italian overture, called sinfonia. It was an orchestral composition arranged in three movements (fast-slow-fast). Instrumentation commonly found by the end of the 1700s included:
1. Four woodwind instruments in pairs (flutes, oboes, clarinets, and bassoons)
2. Trumpets, horns, and timpani in pairs
3. String choir with first and second violins, violas, cellos, and string basses
Orchestration utilized the following:
1. The strings remained the most important sound in the orchestra.
2. Themes were played by first violins.
3. Harmonies were usually played by second violins and violas.
4. Cellos and basses were doubled; however, the basses sounded an octave lower.
5. Brass instruments, without valves, were only used in tutti passages and played harmonies instead of main thematic material.

The Classical solo concerto was similar to that of the Baroque but differed in the style and structure of movements.
The Classical concerto followed the fast-slow-fast formula, but omitted the minuet movement, thereby containing only three movements. The first movement was written in sonata-allegro form, but had two separate expositions. The first exposition introduced the principal themes by the orchestra in the tonic key. The second exposition had a solo instrument convey the theme in a more brilliant and showy style. In the next stage the composer developed and expanded these musical ideas. At the conclusion of the development section, the recapitulation began. At this point, the composer restated the main themes of the movement. Near the end of the recapitulation a cadenza was played, freely improvised in a virtuosic manner. During the 1800s, cadenzas were usually written out beforehand by the composer or performer. The second movement was written in a contrasting key. It utilized a slow tempo and was stylistically more lyrical than the first; it was the least virtuosic movement of the three. The third movement was written in rondo form. It had a lively tempo, and was stylistically lighter than the other movements. Sometimes a cadenza was added.
Chamber music was its own distinct musical entity, very different from the orchestral medium. It was composed for a very small ensemble with only a few members and with only one instrument to a part, and it was at its height in music literature during the Classical era. The divertimento was composed for various media, such as small chamber ensembles and small orchestras. It had three to ten movements, which included minuets, dances, standard sonata-form movements, and marches. This music was meant for outdoor and informal performances and was less sophisticated than the symphony. Haydn wrote over 60 divertimentos, and Mozart wrote more than 25. String quartets were the most popular chamber medium of the Classical era. They were made up of one cello, two violins, and a viola, and were written in four movements using the Classical sonata form. Other chamber music: music was also written for mixed quartets, which used three string instruments and one additional instrument (usually oboe, clarinet, piano or flute). There was also music written for string trios, mixed trios, string quintets, and mixed quintets. Solo sonatas for piano or harpsichord were important during the Classical era. Well-known composers of this style were Karl Philipp Emanuel Bach, J.C. Bach, and Wilhelm Friedemann Bach. Additionally, Haydn, Mozart, and Beethoven also wrote piano sonatas.
http://library.thinkquest.org/15413/history/history-cla-inst.htm
4
Gallipoli and Dardanelles Strait, Turkey The city of Gallipoli (Gelibolu in Turkish) sits at a crossroads between the Marmara and Aegean Seas, connected by the Dardanelles Strait. The strait is a 61-kilometer-long drowned valley formed along a fault (fracture in Earth’s crust). The fracture formed as the Arabian, Indian, and African tectonic plates collided with the Eurasian plate during the Tertiary period, approximately 2-65 million years ago. This faulting created the rugged terrain of western Turkey visible in the lower half of this astronaut photograph, as well as the great mountain ranges of the Alps and Himalayas. Plate collision continues today as Turkey moves westward in relation to Eurasia. The movement leads to frequent strike-slip earthquakes (quakes in which the relative ground motion along the fault is forward or backward, rather than up or down). The urbanized area of modern Gallipoli is visible as a light gray to pink region at the entrance to the Dardanelles Strait. Water in the Strait flows in both northeast and southwest directions due to opposite surface and undercurrents. The Strait has a long history of strategic importance as it provides a conduit between the Mediterranean and Black Seas, as well as access to Ankara, Turkey’s capital, to the northeast (not shown). Several ships are visible in the Strait to the southwest of Gallipoli (image center left). The Battle of Gallipoli—part of an Allied plan to capture Istanbul, then the capital of the Ottoman Empire—was fought near the city during World War I. Astronaut photograph ISS014-E-8138 was acquired November 9, 2006, with a Kodak 760C digital camera using a 180 mm lens, and is provided by the ISS Crew Earth Observations experiment and the Image Science & Analysis Group, Johnson Space Center. The image in this article has been cropped and enhanced to improve contrast. The International Space Station Program supports the laboratory to help astronauts take pictures of Earth that will be of the greatest value to scientists and the public, and to make those images freely available on the Internet. Additional images taken by astronauts and cosmonauts can be viewed at the NASA/JSC Gateway to Astronaut Photography of Earth.
http://earth.jsc.nasa.gov/EarthObservatory/GallipoliandDardanellesStrait_Turkey.htm
4
Desks of Power: A U.S. Government Guide for Kids Article brought to you by: Phillip Donaldson The United States has enjoyed a long and interesting history based upon principles that were established over two hundred years ago. At one time in its history, the American colonies were under the control of the British. As part of the British Empire, the people living in the thirteen original colonies had to follow all of the laws and rules established for them thousands of miles away. The colonists did not like having no say in what happened to them, and they rebelled. This led to the colonies signing the Declaration of Independence, which established their claim to be an independent country no longer controlled by the British. With this Declaration the colonies fought for independence in the Revolutionary War, and when the British surrendered, the war ended and the colonies were free. With that freedom came the need to set up a government and country. The newly free colonies created the United States Constitution, which established rules and procedures for how the government would function. One of the biggest parts of the Constitution was creating the different branches of government, each with a way to monitor the actions of the others. This system of checks and balances assured that no one would have too much power in the government. The United States government and its history are something all citizens should be familiar with, and younger people can learn about the many aspects of the government too. To help kids find out about how the country was founded and how it runs, we have put together a number of links and information. We hope you enjoy learning about the United States and how the government works!
http://www.beyondtheofficedoor.com/desks-of-power-government-guide-for-kids.html
4.375
Teaching Plan 3 Explore the Circumcenter of a Triangle This lesson plan introduces the concept of the circumcenter of a triangle by having students explore with computers running Geometer's Sketchpad software. Students are able to observe and explore possible results (images) on screen by carrying out their own ideas. IL Learning Standards 1. Understand the concept of the circumcenter of a triangle and related knowledge. 2. Be able to use computers with Geometer's Sketchpad to observe possible results and solve geometric problems. 1. Computers and Geometer's Sketchpad software 2. Paper, pencils, and rulers Lesson Plan Day 1 - Introduction of the basic definition, review of related concepts, and class discussion Day 2 - Group activity to answer questions by using computers with Sketchpad Day 3 - Group discussion, sharing results, and making conclusions Day 1: 1. The instructor introduces the basic definition of the circumcenter and reviews the related concepts of the centroid, incenter, and orthocenter of a triangle. 2. Discuss students' thoughts and other related questions about the circumcenter, such as: How many circumcenters does a triangle have? Is the circumcenter always inside the triangle? If not, describe the possible results and how they depend on what kind of triangle it is. 3. Then, the instructor and students turn to the computers to discuss how to draw figures and find their answers using them. Day 2: The instructor has 2-3 students form a group to work at the computers and collect data in order to reach conclusions for the following questions. The instructor should circulate among the groups to observe students' learning and offer help if students have problems operating the computers or the Sketchpad software. 1. Does a triangle have only one circumcenter? Explain your reasoning. 2. Is the circumcenter always inside the triangle? If not, describe the possible results and how they depend on what kind of triangle it is. Worksheet#1 and GSP file. 3. What are the different properties among the centroid, incenter, orthocenter, and circumcenter? 4. For what kind of triangle do the centroid, incenter, orthocenter, and circumcenter overlap? GSP file 5. Which three points among the centroid, incenter, orthocenter, and circumcenter lie on a line? (This line is called the Euler line.) Describe your experimental result and explain it. GSP file. 6. In a triangle ABC, suppose that O is the circumcenter of triangle ABC. Observe the relation between angle ABC and angle AOC. Make a conclusion and explain it. Worksheet#2 and GSP file. 7. In a triangle ABC, suppose that O is the circumcenter of triangle ABC. Observe the lengths of OA, OB, and OC. Are they equal? Explain. Let O be the center and the length of OA be the radius, and draw a circle. Observe the positions of points B and C and explain them. GSP file. (This circle is called the circumscribed circle of triangle ABC.) Day 3: In this class, students present their results for discussion, share them among groups, and make the final conclusions for the questions of the Day 2 activity. Finally, if possible, the instructor should ask students to develop a geometric proof for each of the above questions, and let students know that many results from dynamic models do not constitute a proof. In a triangle ABC, AB = 3 cm, BC = 4 cm, CA = 5 cm. 1) What kind of triangle is it? Why?
2) Suppose that O is the circumcenter of triangle ABC; the sum of OA, OB, and OC is ______. 1) In an acute triangle ABC, suppose that O is the circumcenter of triangle ABC and the angle BAC is 65 degrees; then the angle BOC is ________ degrees. 2) In a triangle DEF, angle DEF is an obtuse angle. Suppose O is the circumcenter of triangle DEF and the angle DEF is 130 degrees; then the angle DOF is ________ degrees. In a triangle ABC, let A' be the midpoint of BC, B' be the midpoint of AC, and C' be the midpoint of AB, and let O be the circumcenter of triangle ABC. Explain why O is the orthocenter of triangle A'B'C'. (Hint: perpendicular lines) There is an arc BCD which is part of a circle. Can you find the center of this circle and draw the other part of the circle? Explain your method. (Hint: three points form a triangle and determine a circle.) Advantages: 1. It replaces traditional geometry teaching, in which geometry is taught by verbal description, with dynamic drawing. 2. It helps the teacher teach without relying on blackboards and chalk to draw figures. 3. Computers with Sketchpad software not only allow students to manipulate geometric shapes to discover and explore geometric relationships, but also verify possible results, provide a creative activity for students' ideas, and enhance students' geometric intuition. 4. They facilitate the creation of a rich mathematical learning environment that supports students' geometric proofs and helps establish geometric concepts. Disadvantages: 1. It cannot replace traditional logical geometric proof - lots of examples do not make a proof. 2. Students cannot get the maximum learning benefit from computers if the instructor does not offer appropriate direction and guidance. The instructor should also know what kind of learning environment with computers is most likely to encourage and stimulate students' learning. 1. Szymanski, W. A. (1994). Geometric computerized proofs = drawing package + symbolic computation software. Journal of Computers in Mathematics and Science Teaching, 13, 433-444. 2. Silver, J. A. (1998). Can computers teach proofs? Mathematics Teacher, 91, 660-663. Any Comment: Yi-wen Chen firstname.lastname@example.org
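For a quick numerical check outside Sketchpad, the short Python sketch below computes the circumcenter directly from coordinates by solving the perpendicular-bisector equations in closed form. This is our own illustration, not part of the original lesson plan; the function name and the sample triangle are chosen to match the 3-4-5 assessment question above.

import math

def circumcenter(A, B, C):
    # The circumcenter is the point equidistant from all three vertices.
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("The points are collinear; no circumcenter exists.")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# The 3-4-5 right triangle from the assessment (AB = 3, BC = 4, CA = 5):
A, B, C = (0.0, 0.0), (3.0, 0.0), (3.0, 4.0)
O = circumcenter(A, B, C)
print(O)                                               # (1.5, 2.0)
print([round(math.dist(O, P), 2) for P in (A, B, C)])  # [2.5, 2.5, 2.5]

Running it shows the circumcenter of a right triangle landing on the midpoint of the hypotenuse, with OA = OB = OC = 2.5 cm and therefore OA + OB + OC = 7.5 cm, which is one of the results students should discover for themselves.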
http://mste.illinois.edu/courses/ci499sp01/students/ychen17/project336/teachplan3.html
4.21875
Photograph by Bates Littlehales As a plane soars over the high desert of southern Peru, the dull pale sameness of the rocks and sand organizes and changes form. Distinct white lines gradually evolve from tan and rust-red. Strips of white crisscross a desert so dry that it rains less than an inch every year. The landscape changes as lines take shape to form simple geometric designs: trapezoids, straight lines, rectangles, triangles, and swirls. Some of the swirls and zigzags start to form more distinct shapes: a hummingbird, a spider, a monkey. These are the renowned Nasca lines—subject of mystery for over 80 years. How were they formed? What purpose could they have served? Were aliens involved? The lines are found in a region of Peru just over 200 miles southeast of Lima, near the modern town of Nasca. In total, there are over 800 straight lines, 300 geometric figures and 70 animal and plant designs, also called biomorphs. Some of the straight lines run up to 30 miles, while the biomorphs range from 50 to 1200 feet in length (as large as the Empire State Building). The Lines Revealed, and the First Theories Peruvian archaeologist Toribio Mejia Xesspe was the first to systematically study the lines, in 1926. However, since the lines are virtually impossible to identify from ground level, they were first brought to public awareness with the advent of flight—by pilots flying commercial planes over Peru in the 1930s. American professor Paul Kosok investigated and found himself at the foot of a line on June 22, 1941—just one day after the winter solstice. At the end of a full day studying the lines, Kosok looked up from his work to catch the sunset in direct alignment with the line. Kosok called the 310 square mile stretch of high desert “the largest astronomy book in the world”. What Are the Lines? Where Did They Come From? The lines are known as geoglyphs – drawings on the ground made by removing rocks and earth to create a “negative” image. The rocks which cover the desert have oxidized and weathered to a deep rust color, and when the top 12-15 inches of rock is removed, a light-colored, high contrasting sand is exposed. Because there’s so little rain, wind and erosion, the exposed designs have stayed largely intact for 500 to 2000 years. Scientists believe that the majority of lines were made by the Nasca people who flourished from around A.D. 1 to 700. Certain areas of the pampa look like a well-used chalkboard, with lines overlapping other lines, and designs cut through with straight lines of both ancient and more modern origin. Kosok was followed by the German Maria Reiche, who became known as the Lady of the Lines. Reiche studied the lines for 40 years and fought unyieldingly for her theories on the lines’ astronomical and calendrical purpose (she received a National Geographic grant in 1974 for her work). Reiche battled single-handedly to protect the site; she even lived in a small house near the desert so she could personally protect the lines from reckless visitors. The Kosok-Reiche astronomy theories held sway until the 1970s, when a group of American researchers arrived in Peru to study the glyphs. This new wave of research started to poke holes in the archeo-astronomy view of the lines (not to mention the radical theories in the ‘60s relating to aliens and ancient astronauts).
Johan Reinhard, a National Geographic Explorer-in-Residence, brought a multidisciplinary approach to the analysis of the lines: “Look at the large ecological system, what’s around Nasca, where were the Nasca people located.” In a region that receives only about 20 minutes of rain per year, water was clearly an important factor. "It seems likely that most of the lines did not point at anything on the geographical or celestial horizon, but rather led to places where rituals were performed to obtain water and fertility of crops," wrote Reinhard in his book The Nasca Lines: A New Perspective on their Origin and Meanings. Anthony Aveni, a former National Geographic grantee, agrees, "Our discoveries clearly showed that the straight lines and trapezoids are related to water…but not used to find water, but rather used in connection with rituals." "The trapezoids are big wide spaces where people can come in and out," says Aveni. "The rituals were likely involved with the ancient need to propitiate or pay a debt to the gods…probably to plead for water." Reinhard points out that spiral designs and themes have also been found at other ancient Peruvian sites. Animal symbolism is common throughout the Andes and is found in the biomorphs drawn upon the Nasca plain: spiders are believed to be a sign of rain, hummingbirds are associated with fertility, and monkeys are found in the Amazon—an area with an abundance of water. "No single evaluation proves a theory about the lines, but the combination of archeology, ethnohistory, and anthropology builds a solid case," says Reinhard. Add new technological research to the mix, and there’s no doubt that the world’s understanding of the Nasca lines will continue to evolve.
http://science.nationalgeographic.com/science/archaeology/nasca-lines/
4.34375
In the previous articles in this series (see below), you learned some background information about sonar mapping and echo location, you collected some data, and you turned that data into a picture of the ocean floor. This article will provide some information about turning that demonstration into a science fair project. Background information was covered in a previous article. Older students will need to do additional research. Put some of your research information on your display. Don’t use just words. Include some diagrams and pictures. If your science fair rules require you to have a problem and hypothesis, you will need to come up with an appropriate question and answer. These are sometimes more difficult to form for demonstrations than for experiments. You might need to do something like “Can I demonstrate how echo location and sonar mapping work?” for your problem and “Yes, I can demonstrate how echo location and sonar mapping work with a partner and a bell.” for your hypothesis. The previous articles showed you how to collect your data and create a graph. Do multiple runs of this demonstration to create several graphs. These will look nice on your display. This is a fairly simple science fair project for a beginner and it can be done in a week or less. Additional help with creating your science fair project can be found in a series of articles about the science fair process. Because the concept is simple, take the time to make a nice display and guide the viewer through your process. Hopefully they will learn as much from this demonstration as you did. This article is part of the Sonar Mapping series. See the list below for links to the other articles in this series.
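If you want your display to show the arithmetic behind the graphs, the small Python sketch below converts echo round-trip times into depths, taking half the round-trip time multiplied by the speed of sound. This is a hypothetical illustration, not part of the original demonstration: the station readings are made up, and the speed of sound in water (roughly 1,500 m/s) is an assumed constant.

SPEED_OF_SOUND_WATER = 1500.0  # metres per second, approximate

def depth_from_echo(round_trip_seconds):
    # The pulse travels down and back, so halve the round trip.
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2.0

echo_times = [0.40, 0.52, 0.61, 0.58, 0.33]  # made-up readings, in seconds

for station, t in enumerate(echo_times, start=1):
    print(f"Station {station}: depth = {depth_from_echo(t):.0f} m")

Plotting the resulting depths against the station positions gives exactly the kind of ocean-floor profile described earlier in this series.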
http://the-science-mom.com/182/science-demonstration-sonar-mapping-part-4-creating-a-science-fair-project/
4.25
Nova was a high-power laser built at the Lawrence Livermore National Laboratory (LLNL) in 1984 which conducted advanced inertial confinement fusion (ICF) experiments until its dismantling in 1999. Nova was the first ICF experiment built with the intention of reaching "ignition", a chain reaction of nuclear fusion that releases a large amount of energy. Although Nova failed in this goal, the data it generated clearly defined the problem as being mostly a result of hydrodynamic (Rayleigh–Taylor) instability, leading to the design of the National Ignition Facility, Nova's successor. Despite the lack of ignition, Nova also generated considerable amounts of data on high-density matter physics, which is useful in both fusion power and nuclear weapons research. Inertial confinement fusion (ICF) devices use drivers to rapidly heat the outer layers of a target in order to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium and tritium. The heat of the laser burns the surface of the pellet into a plasma, which explodes off the surface. The remaining portion of the target is driven inwards due to Newton's Third Law, eventually collapsing into a small point of very high density. The rapid blowoff also creates a shock wave that travels towards the center of the compressed fuel. When it reaches the center of the fuel and meets the shock from the other side of the target, the energy in the shock wave further heats and compresses the tiny volume around it. If the temperature and density of that small spot can be raised high enough, fusion reactions will occur. The fusion reactions release high-energy particles, some of which (primarily alpha particles) collide with the high density fuel around them and slow down. This heats the fuel further, and can potentially cause that fuel to undergo fusion as well. Given the right overall conditions of the compressed fuel—high enough density and temperature—this heating process can result in a chain reaction, burning outward from the center where the shock wave started the reaction. This is a condition known as ignition, which can lead to a significant portion of the fuel in the target undergoing fusion, and the release of significant amounts of energy. To date most ICF experiments have used lasers to heat the targets. Calculations show that the energy must be delivered quickly in order to compress the core before it disassembles, as well as creating a suitable shock wave. The energy must also be focused extremely evenly across the target's outer surface in order to collapse the fuel into a symmetric core. Although other "drivers" have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features. LLNL's history with the ICF program starts with physicist John Nuckolls, who predicted in 1972 that ignition could be achieved with laser energies of about 1 kJ, while "high gain" would require energies around 1 MJ. Although this sounds very low powered compared to modern machines, at the time it was just beyond the state of the art, and led to a number of programs to produce lasers in this power range. Prior to the construction of Nova, LLNL had designed and built a series of ever-larger lasers that explored the problems of basic ICF design. LLNL was primarily interested in the Nd:glass laser, which, at the time, was one of a very few high-energy laser designs known.
LLNL had decided early on to concentrate on glass lasers, while other facilities studied gas lasers using carbon dioxide (e.g. Antares laser, Los Alamos National Laboratory) or KrF (e.g. Nike laser, Naval Research Laboratory). Building large Nd:glass lasers had not been attempted before, and LLNL's early research focussed primarily on how to make these devices. One problem was the homogeneity of the beams. Even minor variations in intensity of the beams would result in "self-focusing" in the air and glass optics in a process known as Kerr lensing. The resulting beam included small "filaments" of extremely high light intensity, so high it would damage the glass optics of the device. This problem was solved in the Cyclops laser with the introduction of the spatial filtering technique. Cyclops was followed by the Argus laser of greater power, which explored the problems of controlling more than one beam and illuminating a target more evenly. All of this work culminated in the Shiva laser, a proof-of-concept design for a high power system that included 20 separate "laser amplifiers" that were directed around the target to illuminate it. It was during experiments with Shiva that another serious unexpected problem appeared. The infrared light generated by the Nd:glass lasers was found to interact very strongly with the electrons in the plasma created during the initial heating through the process of stimulated Raman scattering. This process, referred to as "hot electron pre-heating", carried away a great amount of the laser's energy, and also caused the core of the target to heat before it reached maximum compression. This meant that much less energy was being deposited in the center of the collapse, both due to the reduction in implosion energy, as well as the outward force of the heated core. Although it was known that shorter wavelengths would reduce this problem, it had earlier been expected that the IR frequencies used in Shiva would be "short enough". This proved not to be the case. A solution to this problem was explored in the form of efficient frequency multipliers, optical devices that combine several photons into one of higher energy, and thus frequency. These devices were quickly introduced and tested experimentally on the OMEGA laser and others, proving effective. Although the process is only about 50% efficient, and half the original laser power is lost, the resulting ultraviolet light couples much more efficiently to the target plasma and is much more effective in collapsing the target to high density. With these solutions in hand, LLNL decided to build a device with the power needed to produce ignition conditions. Design started in the late 1970s, with construction following shortly starting with the testbed Novette laser to validate the basic beamline and frequency multiplier design. This was a time of repeated energy crises in the U.S. and funding was not difficult to find given the large amounts of money available for alternative energy and nuclear weapons research. During the initial construction phase, Nuckolls found an error in his calculations, and an October 1979 review chaired by John Foster Jr. of TRW confirmed that there was no way Nova would reach ignition. The Nova design was then modified into a smaller design that added frequency conversion to 351 nm light, which would increase coupling efficiency. The "new Nova" emerged as a system with ten laser amplifiers, or beamlines. 
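As a back-of-the-envelope illustration of what frequency tripling and the quoted pulse powers mean, here is a small Python sketch. It is our own, not from the article; the 40 kJ energy and 2.5 ns pulse length are assumed values picked from the ranges the article quotes for Nova's frequency-tripled output.

FUNDAMENTAL_NM = 1054.0  # Nd:glass fundamental wavelength, nanometres

def harmonic_wavelength_nm(n):
    # n photons of the fundamental combine into one photon with n times
    # the frequency, i.e. one n-th of the wavelength.
    return FUNDAMENTAL_NM / n

print(harmonic_wavelength_nm(3))      # ~351 nm, the UV light sent to targets

energy_joules = 40e3                  # assumed: ~40 kJ of tripled light
pulse_seconds = 2.5e-9                # assumed: within the 2-4 ns range quoted
print(energy_joules / pulse_seconds)  # 1.6e+13 W, about 16 trillion watts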
Each beamline consisted of a series of Nd:glass amplifiers separated by spatial filters and other optics for cleaning up the resulting beams. Although techniques for folding the beamlines were known as early as Shiva, they were not well developed at this point in time. Nova ended up with a single fold in its layout, and the laser bay containing the beamlines was 300 feet (91 m) long. To the casual observer it appears to contain twenty 300-foot (91 m) long beamlines, but due to the fold each of the ten is actually almost 600 feet (180 m) long in terms of optical path length. Prior to firing, the Nd:glass amplifiers are first pumped with a series of xenon flash lamps surrounding them. Some of the light produced by the lamps is captured in the glass, leading to a population inversion that allows for amplification via stimulated emission. This process is quite inefficient, and only about 1 to 1.5% of the power fed into the lamps is actually turned into laser energy. In order to produce the sort of laser power required for Nova, the lamps had to be very large, fed power from a large bank of capacitors located under the laser bay. The flash also generates a large amount of heat which distorts the glass, requiring time for the lamps and glass to cool before they can be fired again. This limits Nova to about six firings a day at the maximum. Once pumped and ready for firing, a small pulse of laser light is fed into the beamlines. The Nd:glass disks each dump additional power into the beam as it passes through them. After passing through a number of amplifiers the light pulse is "cleaned up" in a spatial filter before being fed into another series of amplifiers. At each stage additional optics were used to increase the diameter of the beam and allow the use of larger and larger amplifier disks. In total, Nova contained fifteen amplifiers and five filters of increasing size in the beamlines, with an option to add an additional amplifier on the last stage, although it is not clear if these were used in practice. From there all ten beams pass into the experiment area at one end of the laser bay. Here a series of mirrors reflects the beams to impinge in the center of the bay from all angles. Optical devices in some of the paths slow the beams so that they all reach the center at the same time (within about a picosecond), as some of the beams have longer paths to the center than others. Frequency multipliers upconvert the light to green and blue (UV) just prior to entering the "target chamber". Nova is arranged so any remaining IR or green light is focused short of the center of the chamber. The Nova laser as a whole was capable of delivering approximately 100 kilojoules of infrared light at 1054 nm, or 40-45 kilojoules of frequency tripled light at 351 nm (the third harmonic of the Nd:glass fundamental line at 1054 nm) in a pulse duration of about 2 to 4 nanoseconds, and thus was capable of producing a UV pulse in the range of 16 trillion watts. Fusion in Nova Research on Nova was focussed on the "indirect drive" approach, where the lasers shine on the inside surface of a thin metal foil, typically made of gold, lead, or another "high-Z" metal. When heated by the laser, the metal re-radiates this energy as diffuse x-rays, which are more efficient than UV at compressing the fuel pellet. In order to emit x-rays, the metal must be heated to very high temperatures, which uses up a considerable amount of the laser energy.
So while the compression is more efficient, the overall energy delivered to the target is nevertheless much smaller. The reason for the x-ray conversion is not to improve energy delivery, but to "smooth" the energy profile; since the metal foil spreads out the heat somewhat, the anisotropies in the original laser are greatly reduced. The foil shells, or "hohlraums", are generally formed as small open-ended cylinders, with the laser arranged to shine in the open ends at an oblique angle in order to strike the inner surface. In order to support the indirect drive research at Nova, a second experimental area was built "past" the main one, opposite the laser bay. The system was arranged to focus all ten beams into two sets of five each, which passed into this second area and then into either end of the target chamber, and from there into the hohlraums. Confusingly, the indirect drive approach was not made widely public until 1993. Documents from the era published in general science magazines and similar material either gloss over the issue, or imply that Nova was using the direct drive approach, lacking the hohlraum. It was only during the design of NIF that the topic became public, so Nova was old news by that point. As had happened with the earlier Shiva, Nova failed to meet expectations in terms of fusion output. In this case the problem was tracked to instabilities that "mixed" the fuel during collapse and upset the formation and transmission of the shock wave. The maximum fusion yield on Nova was about 10^13 neutrons per shot. The problem was caused by Nova's inability to closely match the output energy of each of the beamlines, which meant that different areas of the pellet received different amounts of heating across its surface. This led to "hot spots" on the pellet which were imprinted into the imploding plasma, seeding Rayleigh–Taylor instabilities and thereby mixing the plasma so the center did not collapse uniformly. Nevertheless, Nova remained a useful instrument even in its original form, and the main target chamber and beamlines were used for many years even after it was modified as outlined below. A number of different techniques for smoothing the beams were attempted over its lifetime, both to improve Nova as well as to better understand NIF. These experiments added considerably not only to the understanding of ICF, but also to high-density physics in general, and even the evolution of the galaxy and supernovas. Two Beam Shortly after completion of Nova, modifications were made to improve it as an experimental device. One problem was that the experimental chamber took a long time to refit for another "shot", longer than the time needed to cool down the lasers. In order to improve utilization of the laser, a second experimental chamber was built "past" the original, with optics that combined the ten beamlines into two. Nova had been built up against the older Shiva buildings, with the two experimental chambers "back to back" and the beamlines extending outward from the center target areas. The Two Beam system was installed by passing the beamguides and related optics through the now unused Shiva experimental area and placing the smaller experimental chamber in Shiva's beam bay. LMF and Nova Upgrade Nova's partial success, combined with other experimental numbers, prompted the Department of Energy to request a custom military ICF facility they called the "Laboratory Microfusion Facility" (LMF) that could achieve fusion yield between 100 and 1000 MJ.
Based on the LASNEX computer models, it was estimated that LMF would require a driver of about 10 MJ, in spite of nuclear tests that suggested a higher power. Building such a device was within the state of the art, but would be expensive, on the order of $1 billion. LLNL returned a design with a 5 MJ 350 nm (UV) driver laser that would be able to reach about 200 MJ yield, which was enough to access the majority of the LMF goals. The program was estimated to cost about $600 million in FY 1989 dollars, plus an additional $250 million to upgrade it to a full 1000 MJ if needed, and would grow to well over $1 billion if LMF were to meet all of the goals the DOE asked for. Other labs also proposed their own LMF designs using other technologies. Faced with this enormous project, in 1989/90 the National Academy of Sciences conducted a second review of the US ICF efforts on behalf of the US Congress. The report concluded that "considering the extrapolations required in target physics and driver performance, as well as the likely $1 billion cost, the committee believes that an LMF [i.e. a Laser Microfusion Facility with yields to one gigajoule] is too large a step to take directly from the present program." Their report suggested that the primary goal of the program in the short term should be resolving the various issues related to ignition, and that a full-scale LMF should not be attempted until these problems were resolved. The report was also critical of the gas laser experiments being carried out at LANL, and suggested they, and similar projects at other labs, be dropped. The report accepted the LASNEX numbers and continued to approve an approach with laser energy around 10 MJ. Nevertheless the authors were aware of the potential for higher energy requirements, and noted "Indeed, if it did turn out that a 100-MJ driver were required for ignition and gain, one would have to rethink the entire approach to, and rationale for, ICF." In July 1992 LLNL responded to these suggestions with the Nova Upgrade, which would reuse the majority of the existing Nova facility, along with the adjacent Shiva facility. The resulting system would be much lower power than the LMF concept, with a driver of about 1 to 2 MJ. The new design included a number of features that advanced the state of the art in the driver section, including the multi-pass design in the main amplifiers, and 18 beamlines (up from 10) that were split into 288 "beamlets" as they entered the target area in order to improve the uniformity of illumination. The plans called for the installation of two main banks of laser beam lines, one in the existing Nova beam line room, and the other in the older Shiva building next door, extending through its laser bay and target area into an upgraded Nova target area. The lasers would deliver about 500 TW in a 4 ns pulse. The upgrades were expected to allow the new Nova to produce fusion yields between 2 and 20 MJ. The initial estimates from 1992 put construction costs at around $400 million, with construction taking place from 1995 to 1999. For reasons that are not well recorded in the historical record, later in 1992 LLNL updated their Nova Upgrade proposal and stated that the existing Nova/Shiva buildings would no longer be able to contain the new system, and that a new building about three times as large would be needed. From then on the plans evolved into the current National Ignition Facility.
Starting in the late 1980s a new method of creating very short but very high power laser pulses was developed, known as chirped pulse amplification, or CPA. Starting in 1992, LLNL staff modified one of Nova's existing arms to build an experimental CPA laser that produced up to 1.25 PW. Known simply as Petawatt, it operated until 1999, when Nova was dismantled to make way for NIF. The basic amplification system used in Nova and other high-power lasers of its era was limited in terms of power density and pulse length. One problem was that the amplifier glass responded over a period of time, not instantaneously, and very short pulses would not be strongly amplified. Another problem was that the high power densities led to the same sorts of self-focusing problems that had caused trouble in earlier designs, but at such a magnitude that even measures like spatial filtering would not be enough; in fact the power densities were high enough to cause filaments to form in air. CPA avoids both of these problems by spreading out the laser pulse in time. It does this by reflecting a relatively multi-chromatic (as compared to most lasers) pulse off a pair of diffraction gratings, which splits it spatially into different frequencies, essentially the same thing a simple prism does with visible light. These individual frequencies have to travel different distances when reflected back into the beamline, resulting in the pulse being "stretched out" in time. This longer pulse is fed into the amplifiers as normal, which now have time to respond normally. After amplification the beams are sent into a second pair of gratings "in reverse" to recombine them into a single short pulse with high power. In order to avoid filamentation or damage to the optical elements, the entire end of the beamline is placed in a large vacuum chamber. Although Petawatt was instrumental in advancing the practical basis for the concept of "fast ignition fusion", by the time it was operational as a proof-of-concept device, the decision to move ahead with NIF had already been taken. Further work on the fast ignition approach continues, and will potentially reach a level of development far in advance of NIF at HiPER, an experimental system under development in the European Union. If successful, HiPER should generate fusion energy over twice that of NIF, while requiring a laser system of less than one-quarter the power and one-tenth the cost. Fast ignition is one of the more promising approaches to fusion power. "Death" of Nova When Nova was being dismantled to make way for NIF, the target chamber was lent to France for temporary use during the development of Laser Megajoule, a system similar to NIF in many ways. This loan was controversial, as the only other operational laser at LLNL at the time, Beamlet (a single experimental beamline for NIF), had recently been sent to Sandia National Laboratory in New Mexico. This left LLNL with no large laser facility until NIF started operation, which was then estimated as being 2003 at the earliest. Work on NIF was not declared formally completed until March 31, 2009. - "How NIF works", Lawrence Livermore National Laboratory. Retrieved on October 2, 2007. - Per F. Peterson, "Inertial Fusion Energy: A Tutorial on the Technology and Economics", University of California, Berkeley, 1998. Retrieved on May 7, 2008. - Per F. Peterson, "How IFE Targets Work", University of California, Berkeley, 1998. Retrieved on May 8, 2008. - Per F.
Peterson, "Drivers for Inertial Fusion Energy", University of California, Berkeley, 1998. Retrieved on May 8, 2008. - Nuckolls et al., "Laser Compression of Matter to Super-High Densities: Thermonuclear (CTR) Applications", Nature Vol. 239, 1972, pp. 129 - John Lindl, "The Edward Teller Medal Lecture: The Evolution Toward Indirect Drive and Two Decades of Progress Toward ICF Ignition and Burn", 11th International Workshop on Laser Interaction and Related Plasma Phenomena, December 1994. Retrieved on May 7, 2008. - "Building increasingly powerful lasers", Year of Physics, 2005, Lawrence Livermore National Laboratory - J. A. Glaze, "Shiva: A 30 terawatt glass laser for fusion research", presented at the ANS Annual Meeting, San Diego, 18–23 June 1978 - "Empowering Light: Historical Accomplishments in Laser Research", Science & Technology Review, September 2002, pp. 20-29 - Matthew McKinzie and Christopher Paine, "When Peer Review Fails", NDRC. Retrieved on May 7, 2008. - Ted Perry, Bruce Remington, "Nova Laser Experiments and Stockpile Stewardship", Science & Technology Review, September 1997, pp. 5-13 - "A Virtual Reality Tour of Nova", Lawrence Livermore National Laboratory– opening diagram shows the modified beamline arrangement. - Moody et all, "Beam smoothing effects on stimulated Raman and Brillouin backscattering in laser-produced plasmas", Journal of Fusion Energy, Vol. 12, No. 3, September 1993, DOI 10.1007/BF01079677, pp. 323-330 - Dixit et all, "Random phase plates for beam smoothing on the Nova laser", Applied Optics, Vol. 32, Issue 14, pp. 2543-2554 - "Colossal Laser Headed for Scrap Heap", ScienceNOW, November 14, 1997 - "Nova Upgrade– A Proposed ICF Facility to Demonstrate Ignition and Gain", Lawrence Livermore National Laboratory ICF Program, July 1992 - "Review of the Department of Energy’s Inertial Confinement Fusion Program, Final Report", National Academy of Sciences - Tobin, M.T et all, "Target area for Nova Upgrade: containing ignition and beyond", Fusion Engineering, 1991, pg. 650–655. Retrieved on May 7, 2008. - An image of the design can be found in "Progress Toward Ignition and Burn Propagation in Interial Confinement Fusion", Physics Today, September 1992, p. 40 - Letter from Charles Curtis, Undersecretary of Energy, June 15, 1995 - Michael Perry, "The Amazing Power of the Petawatt", Science & Technology Review, March 2000, pp. 4-12 - Michael Perry, "Crossing the Petawatt Threshold", Science & Technology Review, December 1996, pp. 4-11 - "US sends Livermore laser target chamber to France on loan", Nature, Vol. 402, pp. 709-710, doi:10.1038/45336 - Kilkenny, J.D.; et al. (May 1992). "Recent Nova Experimental Results". Fusion Technology 21 (3): 1340–1343 Part 2A. - Hammel, B.A. (December 2006). "The NIF Ignition Program: progress and planning". Plasma Physics and Controlled Fusion 48 (12B): B497–B506 Sp. Iss. SI. doi:10.1088/0741-3335/48/12B/S47. - Coleman, L.W. (December 1987). "Recent Experiments With The Nova Laser". Journal of Fusion Energy 6 (4): 319–327. doi:10.1007/BF01052066.
http://en.wikipedia.org/wiki/Nova_(laser)
4.09375
Writing Equations: Precipitation Reactions
Equations written to represent precipitation reactions can be written in one of three ways:
- In a precipitation reaction a product of the reaction is only slightly soluble, or insoluble. This product is formed as a solid, also known as a precipitate.
- Solubility rules can be used to determine if a product is insoluble (forms a precipitate)
- Ions in solution that are not used to form the precipitate are called spectator ions
- It is important to include the states of matter in the chemical equation: (s) for solid, the precipitate; (g) for gas; (l) for liquid; (aq) for substances in aqueous solution
- Molecular Equations: All reactants and products are written as if they are molecules
- Ionic Equations: All reactants and products that are soluble are written as ions; only the precipitate is written as if it were a molecule
- Net Ionic Equations: Only the reactants and product taking part in the reaction are written in the equation, the reactants as ions, the product as a molecule. Spectator ions are not included in the equation
Consider the reaction between solutions of sodium chloride, NaCl(aq), and silver nitrate, AgNO3(aq). The possible products of the reaction are sodium nitrate, NaNO3, and silver chloride, AgCl. From the solubility rules we find that sodium nitrate, NaNO3, is soluble since all Group I ions form soluble salts and also all nitrates are soluble. Silver chloride, AgCl, is insoluble since all chlorides are soluble EXCEPT those of silver, lead (II), mercury (I), copper (I) and thallium.
Writing the precipitation reaction equations
- Molecular Equation: All species in the reaction are written as if they are molecules; species in solution must include the (aq), and the precipitate must include the (s). That is: NaCl(aq), AgNO3(aq), NaNO3(aq), AgCl(s)
NaCl(aq) + AgNO3(aq) -----> NaNO3(aq) + AgCl(s)
- Ionic Equation: All species in solution are written as ions; the precipitate is written as if a molecule. That is: REACTANTS: Na+(aq), Cl-(aq), Ag+(aq), NO3-(aq) PRODUCTS: Na+(aq), NO3-(aq), AgCl(s)
Na+(aq) + Cl-(aq) + Ag+(aq) + NO3-(aq) ------> Na+(aq) + NO3-(aq) + AgCl(s)
- Net Ionic Equation: Written as for the ionic equation except that spectator ions are not included. That is, Na+(aq) and NO3-(aq) are left out; only the species involved in producing the precipitate, Ag+(aq), Cl-(aq) and AgCl(s), are included in the equation:
Ag+(aq) + Cl-(aq) ------> AgCl(s)
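To make the role of the solubility rules concrete, here is a minimal Python sketch that classifies the two possible products of the NaCl + AgNO3 reaction and prints the net ionic equation. The rule table is a deliberately tiny, illustrative subset of the full solubility rules, and the names are our own.

SOLUBLE_ANIONS = {"NO3-"}                # all nitrates are soluble
SOLUBLE_CATIONS = {"Na+", "K+", "NH4+"}  # Group I and ammonium salts dissolve
INSOLUBLE_CHLORIDES = {"Ag+", "Pb2+", "Hg2(2+)", "Cu+", "Tl+"}

def is_soluble(cation, anion):
    if cation in SOLUBLE_CATIONS or anion in SOLUBLE_ANIONS:
        return True
    if anion == "Cl-":
        return cation not in INSOLUBLE_CHLORIDES
    return False  # pairs this toy table does not cover are treated as insoluble

# Double displacement: the reactants swap partners.
possible_products = [("Na+", "NO3-"), ("Ag+", "Cl-")]

for cation, anion in possible_products:
    if is_soluble(cation, anion):
        print(f"{cation} and {anion} stay in solution (spectator ions)")
    else:
        # The formula of the solid comes from combining the two ions (here AgCl).
        print(f"Net ionic equation: {cation}(aq) + {anion}(aq) ------> precipitate (s)")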
http://www.ausetute.com.au/ppteeqtn.html
4.53125
Author Isabel Allende The following adaptable classroom activities suggest various approaches for introducing and/or extending learning on the writing of Isabel Allende. They are inspired by a conversation between Bill Moyers and Allende from the 6/13/03 NOW with BILL MOYERS broadcast. (Note: A free transcript of this interview is available on the NOW Web site. Teachers may also tape the broadcast off-air and use it in the classroom for one year. Alternatively, programs are available for purchase from ShopPBS.) 1. Memory v. Make Believe Allende talks about the power of memories and how her memories have inspired some of her greatest works. However, she also admits that she often invents details or information related to these memories to fill in the gaps and make her stories more interesting. Help students explore memory-based writing by first having them share their recollections of an event with another person who remembers the same experience. How do the memories of each person compare? Next, ask students to choose another memory of an event that is vivid in their mind and write it down the way that they recall it. Then, using their imaginations, have students add to the memory by fabricating or exaggerating details, events, characters, and other information. The goal is to use the memory as a basis for creating a story they can share. When stories are complete, break students into small groups and have them share their original written account of the event, and then the story they developed based on the memory. Finish the exercise by facilitating a discussion that addresses these questions:
- What was more interesting to hear about, the actual memory, or the story created from the memory? Why?
- Was it difficult to use your imagination to change the memory into a story?
- What challenges did you face in writing the memory-based story?
- Why is fiction often more exciting than fact?
- Why is imagination important?
- What role does imagination play in people's everyday lives?
Students will now be better prepared to read imaginative histories from Allende and others. Please see the Extension Ideas section of NOW's lesson plan related to Allende's work for recommendations. 2. Responding to Violence After the September 11th attacks on the United States, Allende said she no longer felt like a foreigner in the U.S. because that day she felt the same vulnerability as other Americans. She then believed she had finally "gained a country." Other Americans have also felt stronger feelings of nationalism as a result of 9/11. Ask students to describe how their patriotism was affected by the September 11th attacks. Why did that shared experience do so much to unite Americans? On a different September 11th, this time in 1973, Allende witnessed another violent event. She saw the presidential palace in Chile get bombed during a coup that ended the rule of her uncle, Salvador Allende. (She describes these events during her conversation with Bill Moyers. See also "September 11, 1973: The Day Democracy Died in Chile" from the BBC for more information on the 1973 coup.) How might Allende's experience on September 11, 1973 have influenced her reaction to the events of September 11, 2001? Have students express their ideas in a poem, song, collage, painting, or other creative piece and then share their work with the class. 3. Painting Word Pictures Allende says she wrote her book, MY INVENTED COUNTRY: A NOSTALGIC JOURNEY THROUGH CHILE, so people would fall in love with Chile.
To help accomplish this goal, Allende makes specific word choices and uses imagery in an effort to influence reader perceptions. Help students explore how she does this by reading an excerpt from MY INVENTED COUNTRY that describes the geography and topography of Chile. Ask students to identify language that they feel paints a particularly vivid picture. Then, encourage them to try their own hand at creating "word pictures." Have students imagine a place that is meaningful to them. Then, ask them to describe this place by incorporating language that considers the five senses and the emotions evoked by the place. Their goal should be to have readers feel as though they are a part of the place, not an outsider looking in. Use the steps of the writing process to pre-write, write, revise, edit, and present their work. When students have completed the assignment, have them share and comment on one another's work in small discussion groups. About the Author Lisa Prososki is an independent educational consultant who taught middle school and high school social studies, English, reading, and technology courses for twelve years. Prososki has worked with PBS TeacherSource and has authored many lesson plans for various PBS programs over the past five years. In addition to conducting workshops for teachers at various state and national conventions, Prososki has also worked as an editor and authored one book.
http://www.pbs.org/now/printable/classroom_allendestarters_print.html
4.1875
Concentrated in the coastal plain of the southeastern United States are a series of elliptical or oval depressions. These depressions are called bays, named for the sweet bay, loblolly bay and red bay trees found growing around them. Of the 500,000 bays estimated to exist, most are small and few are greater than 500 feet in length. Singletary Lake, however, is approximately 4,000 feet long. While in the past nearly all bays contained open water, today most are filled with wet organic soils and are overgrown with swamp vegetation. Only a few relict lakes remain. In addition to Singletary Lake, Baytree Lake, Jones Lake, Salters Lake, Lake Waccamaw and White Lake are included within the North Carolina State Parks system. The origin of the Carolina bays has long been a matter of speculation and debate. Many hypotheses regarding how bay lakes originated have been proposed, including underground springs, the dissolution of subsurface minerals and meteor showers. So far, no single explanation has gained universal acceptance. One of the best-supported theories proposes that, approximately 10,000 to 15,000 years ago, when the region was covered by water, strong winds created currents that carved the shallow depressions. Once the water receded, these depressions formed the lakes we now call Carolina bays. Bay lakes are shallow, ranging from eight to nearly 12 feet in depth. Though not the largest of the Bladen County lakes, Singletary Lake, with a shoreline of almost four miles, is the deepest at 11.8 feet. Like other bay lakes, Singletary is not fed by streams or springs but depends upon rainfall and runoff from the surrounding land. Therefore, the water level fluctuates with local precipitation. Usually, vegetation is established almost completely around the margins of bay lakes. Trees and shrubs along lake perimeters reduce wave and current action, permitting sediments to accumulate and encouraging new plant growth. Peat is produced gradually from dead organic matter along the shoreline, and eventually trees take root. Slowly, the bay forest grows into the lake, gradually reducing its size. Today Singletary Lake is only 44 percent of its original size. Like other bay lakes, Singletary ultimately may be reduced to a moist bog. Singletary Lake is surrounded by typical bay vegetation. Trees include red bay, loblolly bay, pond pine and Atlantic white cedar, and shrubs include pepperbush, gallberry, leucothoe, huckleberry and sheepkill. Areas of the park with the highest elevations provide habitats for turkey oak, longleaf pine, blueberry and holly. Listen to the melodies of songbirds in a park that is home to wood duck, pileated woodpecker and red-tailed hawk. The red-cockaded woodpecker, an endangered species, also resides at Singletary Lake. Catch a glimpse of a wild turkey, white-tailed deer or rabbit, or see their tracks in the sandy soil. Fence lizard, carpenter frog, southern toad and box turtle also reside in the park. Located in the park and designated as a natural area by the Society of American Foresters in the early 1960s, the Turkey Oak Natural Area will remain in its natural state to be used for scientific and educational study. This 133-acre area, named for its predominant tree, consists of both a coarse sand ridge at the southeastern end of the lake and a portion of the bay bog. All of the primary plant community types around the Carolina bays are represented.
The rare white wicky, a relative of mountain laurel, grows in the area, as well as a variety of carnivorous plants. Contact the park office to arrange an exploration for your group or class.
http://www.ncparks.gov/Visit/parks/sila/ecology.php
4.125
Key Stage 4 - Thunderstorms How far away is the thunderstorm? Are thunderstorms dangerous? Facts and figures After a time, the top of the cloud turns to ice (usually below a temperature of -20 °C) and streams away in the winds at the level of the cloud top, giving it a characteristic anvil shape. An atom consists of three basic parts: a proton (which has a positive charge), a neutron (which has no charge) and an electron (which has a negative charge). Electrons cling to the positively charged centre of the atom because they have a negative electrical charge. During a thunderstorm, some of the atoms in the cloud lose electrons while others gain them. When a cloud is composed entirely of water droplets, there is very little transfer of electrons. As a storm cloud grows in height, the temperature of the water droplets higher up falls. They continue in the liquid state below 0 °C as super-cooled water, but eventually they begin to turn to ice, usually at a temperature below -20 °C. These ice particles often collide and the smaller particles lose an electron to the larger, thereby gaining a positive charge. The small particles are propelled towards the top of the cloud by strong internal winds while the larger particles start to fall. This causes the top of the cloud to develop a strong positive charge. The larger, negatively charged, ice particles begin to 'capture' super-cooled water droplets, turning them instantly to ice and thereby growing, some reaching a sufficient size to start falling. This leads to the base of the cloud becoming negatively charged which, in turn, induces a positive charge on the ground below. In time, the potential gradient between cloud and ground, or between adjacent clouds, becomes large enough to overcome the resistance of the air and there is a massive, very rapid transfer of electrons, which appears as a lightning flash. There are several types of lightning, all of which are made up of different parts and none of which are alike. Lightning that shoots from the cloud to the ground is made up of four main parts: a stepped leader, upward streamers, return strokes and dart leaders. As negative charges collect at the base of the cloud, they repel the electrons near the ground's surface. This leaves the ground and the objects on it with a positive charge. As the attraction between the cloud and the ground grows stronger, electrons shoot down from the cloud. The electrons move in a path that spreads in different directions - like a river delta. Each step is approximately 50 metres long and the branching path is called a stepped leader. Further electrons follow, making new branches. The average speed at which the stepped leader cuts through the air is about 270,000 miles per hour. As the stepped leader approaches the ground, positive electrical sparks rise from tall objects such as trees and buildings. These sparks are known as upward streamers. When the stepped leader meets the upward streamer, the lightning channel is completed. When the lightning channel is complete, the electrons in the channel rush towards the ground. This is the return stroke which lights up the channel. The first electrons to reach the ground light up the bottom of the channel. The upper part of the channel glows as the electrons move rapidly down it. Therefore, the light from the flash starts at the ground and moves upwards. The branches of the stepped leader are also lit up, but not as brightly as the main channel, as there are fewer electrons present.
The lightning flash ends when there are no electrons left in the channel. If lightning flickers, it is probably because there has been more than one return stroke. Following a lightning flash, the lightning channel is momentarily empty, and it is then possible for electrons from another part of the cloud to enter it. The movement of these electrons into the channel is called a dart leader, and it causes another return stroke to occur. The repeated return strokes and dart leaders make the lightning appear to flicker because of the great speed at which they occur.

The word 'thunder' is derived from 'Thor', the Norse god of thunder. He was supposed to be a red-bearded man of tremendous strength, his greatest attribute being the ability to forge thunderbolts. The word Thursday is also derived from his name.

Thunder is the sharp or rumbling sound that accompanies lightning. It is caused by the intense heating and expansion of the air along the path of the lightning. The rumble of thunder is caused by the noise passing through layers of the atmosphere at different temperatures. Thunder lasts longer than lightning because of the time it takes for the sound to travel from different parts of the flash.

The distance to a thunderstorm can be estimated roughly by measuring the interval between the lightning flash and the start of the thunder. The light reaches you almost instantly, but sound travels at only about 340 metres per second, so it covers a kilometre in roughly three seconds. If you count the time in seconds and then divide by three, you will have the approximate distance in kilometres (a worked example of this calculation follows below). Thunder is rarely heard at a distance of more than 20 km.

Most people are frightened by the crackles and rumbles of thunder rather than the flash of lightning. However, thunder cannot hurt anybody, and the risk of being struck by lightning is far less than that of being killed in a car crash. Ninety per cent of lightning discharges go from cloud to cloud or between parts of the same cloud, never actually reaching the earth. Most of the discharges that do strike the ground cause little or no damage or harm. Lightning takes the shortest and quickest route to the ground, usually via a high object standing alone.

Web page reproduced with the kind permission of the Met Office
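A minimal worked example of the flash-to-bang rule described above, sketched in Perl in the style of the code elsewhere in this document. The nine-second interval is an assumed example value, not part of the Met Office page:

#!/usr/bin/perl
use strict;
use warnings;

# Sound covers roughly 1 km every 3 seconds (speed of sound ~340 m/s),
# so dividing the flash-to-bang interval by three gives the distance in km.
my $seconds = 9; # assumed interval between seeing the flash and hearing thunder
my $km = $seconds / 3;
printf "The storm is roughly %.1f km away\n", $km;

Running this prints "The storm is roughly 3.0 km away": a nine-second delay puts the storm about three kilometres off, consistent with the divide-by-three rule.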
http://metlink.org/weather-climate-resources-teenagers/key-stage-4-weather/ks4-thunderstorms.html
4.1875
The roots of colonial cricket

African scholars at English missionary institutions were keen cricketers and made the game their own. PIC/A1319, Cory Library for Historical Research, Rhodes University. Used with permission.

Throughout the 1800s cricket became increasingly popular in the British colonies. Created by the English, the game was introduced by government officials and the military to the West Indies and South Africa at about the same time: to Barbados in 1806 and to Cape Town in 1808. In Barbados, newspapers carried cricket reports alongside notices about slave sales and the fluctuations in sugar prices.

After the enslaved Africans were freed in 1838, Britain recognised the power of the game to reinforce the social hierarchy in its colonies. The white classes needed constant reminding that they were a 'race' set apart from the Africans in their midst. Cricket was watched only by 'highly respectable ladies and gentlemen', while the plantation owners prepared the cricket field and provided the hospitality.

The boundary of Empire

Throughout their Empire the British created boundaries. The cricket 'boundary' is therefore deeply symbolic for people struggling to free themselves. Cricket was taught in missionary schools in order to impart English values to the indigenous people, yet the Africans came to love the game as much as the English. Colonial cricket gradually evolved away from being a tool of social control: adopted by freed Africans throughout the world, the game would take on boundaries very different from those understood by its originators.
http://www.liverpoolmuseums.org.uk/ism/exhibitions/beyondtheboundary/colonial_cricket.aspx
4
Tips for Parents: Raising Children to Enjoy Math and Science

Make wonder part of everyday life. Use play, conversation, and activities of everyday life to help your child learn the skills and ways of thinking needed for science and math. Here are some simple tips:

- Focus on your child's interests. They're going to know more and ask more questions about what they love.
- Talk with them about what you're doing, and make it a two-way conversation. "I'm using this spoon to stir the cocoa in the hot milk. What else do we stir to mix up?" "I'm going to drag my foot on the right when we go down on the toboggan. Which way do you think we'll turn? How come?"
- Observe: Ask them to notice small details. "What shapes do you see in those ice crystals? Are they more alike or different? In what ways?" "Where do you first see the moon in the sky? Is it in the same place every day?"
- Sort: "Which tracks have three toes and which have four?" "Can you sort the adult mittens and hats from the kids'?"
- Measure: "Who's got the longer skating stride—Daddy or you?" "How wide is this snowman at the base? Why does it need to be bigger at the bottom than the top?"
- Predict: "How long do you think it will take the icicle to melt?" "If you have more weight on the sled, do you think you'll go faster or slower?"
- Girls are just as curious as boys. Let her experiment with hand tools or water play with cups in the bathtub. Ask her to describe what happens and figure out an explanation for what she notices. Chemistry sets or tool sets are great toys for girls as well as boys.
- Hands-on works best. Take them to the Science Museum, a nature center, or zoo where they can do hands-on exhibits and have fun while they learn.
- Take things apart to see how they work. Look at the insides of an old remote control, a broken wind-up toy or a battery-operated gadget.
- Find positive role models in science and technology careers. Ask friends or family members in science or technology to talk to your child or give a tour of their worksite.
- For young kids, reading readiness and science readiness develop at the same time. Use reading time to incorporate science and math skills: "How many bunnies are in the orchard?" "What shapes are the same on this page?" "What do you think would happen if ... ?"

Reprinted with the permission of the Science Museum of Minnesota. © 2008 Science Museum of Minnesota.
http://www.education.com/reference/article/Ref_Tips_Parents_3/