4.0625
What is Autism? Autism is a developmental disorder with onset before age 3; subtle signs of the disorder are often present in early infancy. The presentation of symptoms can vary greatly; however, the characteristics necessary for diagnosing it are as follows: - Impairment in reciprocal social interaction (e.g., limited eye contact, responding to people as if they are objects). - Communicative deficits (e.g., limited or no verbal communication skills, problems using pronouns). - Repetitive behavior or marked adherence to specific routines (e.g., body rocking, problems transitioning from one activity or environment to another). Though much is not yet known about the cause(s) of autism, the best current scientific evidence indicates that autism has strong genetic roots and that the characteristics of autism are related to abnormalities in brain development. Certain environmental variables, such as congenital rubella (when an expectant mother contracts German measles), may also be related to the development of autism. The range of impairment and the variability in behavioral symptoms of autism present many challenges in determining whether autism is one kind of problem or many different problems that are somewhat related. The earliest reliable estimates of the prevalence of autism indicated that this disorder occurs in roughly 2-4 children per 10,000. More recently it has been found that the prevalence of the autism spectrum disorders is likely closer to 1-1.5 per 100 children. Autism can be reliably diagnosed by or before age 3. Expert clinicians can usually detect symptoms of autism during infancy, although a formal diagnosis is generally not made until the child fails to develop functional language by age 2. Boys are three to four times more likely to be affected by autism than girls, and children with autism often also have mental retardation. Autism occurs in all racial, ethnic, and social groups. Although there is currently no known cure for autism, autism treatment is available. Persons with autism can make progress if they receive appropriate, individualized intervention that addresses their specific needs. Research findings indicate that younger children who receive intensive, individualized behavioral intervention (i.e., Applied Behavior Analysis, or ABA) can make marked progress, with some eventually losing their diagnosis of autism. ABA has been shown to produce improvements in intellectual development and adaptive skills. We have compiled some of our frequently asked questions about autism and related disorders into a convenient reference, intended to provide basic information about autism and autism treatment to the general public. Please note that it is not intended to constitute medical advice; we always recommend you speak with a trained professional.
http://www.necc.org/research/understanding-autism.aspx
4.1875
27 January 2013 Let's look beyond the confines of one country. By trial and error, most countries have ended up with first house sizes that correspond to the cube root of their populations. For a US population of 315 million, this would mean 680 seats in the House. The US followed the cube root pattern up to the early 1900s, when the House size was frozen at 435. Yet the population kept growing. In contrast, most countries change the size of their houses when the population changes. The cube root of the population rule has been known for more than 30 years. (See, most recently, Rein Taagepera, Predicting Party Sizes, Oxford University Press, 2007). Why does cube root work for most countries? Apply the engineering notion of minimizing major cost. Consider the communication load on a single representative. With too few representatives, each has too many constituents. With too many representatives, their communication load inside the assembly shoots up. Cube root of population turns out to be optimal. The US House needs to be bigger than the current one, but much smaller than in Johnson’s proposal.
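A quick sketch of the rule in Python (the 315 million population is the figure quoted above; the rest is illustrative arithmetic):

```python
# Cube-root rule of assembly sizes: seats ~ population ** (1/3).
pop = 315_000_000            # US population cited in the letter
seats = round(pop ** (1 / 3))
print(seats)                 # ~680, versus the House's actual 435 seats
```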
http://jcolomer.blogspot.com/2013/01/is-bigger-house-better-sent-to-ny-times.html
4.09375
Lesson Title: An Introduction to Islam and Muhammad Grade Level: 6-12 Subjects: World History, Language Arts, and Visual Arts Estimated Time of Completion: 3-4 class periods - Instructional Objectives: - Students will have the opportunity to compare the three main monotheistic belief systems and create a chart showing their findings. - Students will have the opportunity to expand their vocabulary as it relates to The Growth of Islam and Muhammad. - Students will have the opportunity to create a parallel timeline comparing major events in Muhammad's life and events taking place in another part of the world. (see example parallel timeline) - Students will have the opportunity to share their findings with their classmates. - This lesson correlates to the following national standards for social studies, language arts, and visual arts, established by the Mid-continent Regional Educational Laboratory (McREL): - Understands the spread of Islam in Southwest Asia and the Mediterranean region - Understands the influence of Islamic ideas and practices on other cultures and social behavior - Understands the effect of geography on different groups and their trade practices - Understands the significance of Baghdad - Understands how the Muslims spread Islamic beliefs and established their empire - Understands significant aspects of Islamic civilization - Understands challenges to Muslim civilization - Uses the general skills and strategies of the writing process - Gathers and uses information for research purposes - Uses listening and speaking strategies for different purposes - Uses viewing skills and strategies to understand and interpret visual media - Knows how to use structures and functions of art - Understands the visual arts in relation to history and cultures - Materials Needed: This lesson is based on Video One (The Messenger) of the PBS video series Islam: Empire of Faith. (See the adaptation section at the end of the lesson for suggestions if the video is not available.) Students will have the opportunity to compare the major monotheistic belief systems of the world, create vocabulary sheets that will help them understand the new vocabulary they will be introduced to in this unit, and prepare a parallel timeline showing events in the Arabian Peninsula region compared to other regions. - Write the following excerpt from the video on the board: "For the west, much of the history of Islam has been obscured behind a veil of fear and misunderstanding." (time cue 3:00 from video) - Discuss with students what they think this statement means. (It should bring up quite a bit of discussion relating to belief systems.) As students are responding, ask how many students are familiar with the word Islam and perhaps its meaning (the literal meaning is peace, surrender of one's will to God). Ask students about the facts and misconceptions they have as they relate to this word. - This icebreaking discussion will lead into your handing out the worksheet An Introduction to Monotheistic Belief Systems. Explain to students that over the next few days you will be introducing them to the history of Islam, and you want them to gain a basic understanding of terms they may not be familiar with. You will do this by helping them fill in the sheet they are now receiving, and later a vocabulary worksheet they will be completing. - Give students approximately five minutes to fill in the worksheet by themselves or in small groups. 
- Review and discuss the handout with students, making sure they have correct information in appropriate places. (This is not graded, but used for basic knowledge building.) Use the teacher's key for correct responses. - Introduce students to "The Messenger," Part 1 of the video Islam: Empire of Faith. Let them know that this video will introduce them to one of the most influential individuals in history, Muhammad. After viewing the video, they will complete a vocabulary sheet that will introduce them to terms associated with Islam, and will create a parallel timeline showing events from Muhammad's life and events that were taking place at the same time in other areas of the world. Suggest they take notes as they watch the video. - You may want to write the following time cues on the board to help them as they view the video. These excerpts should help students generate ideas of what to take notes on as they relate to Muhammad. They should also jot down words that are new to them, and events that relate to the theme of the lesson: The Growth of Islam. 5:00 Muhammad is born 6:01 Age 6 events 12:30 Muhammad as a merchant 13:40 Muhammad (some characteristics) 16:40 His message 18:45 Voice of a poet, or voice of God? 23:50 Following increases 24:00 Tribal leaders' response to Muhammad's message 25:25 How are Muhammad's followers treated? 25:45 Muhammad's personal losses 26:10 Muhammad asked to negotiate 27:50 A new calendar 28:20 Treatment of those with other beliefs 30:00 Muhammad's revelation relating to prayer (where to face) 31:30 Enemies join forces against Muslims 33:55 Tide of battles turns in favor of Muslims 36:25 Muhammad's troops after victory circle the Kaaba 37:30 Idols destroyed in the Kaaba; response by Bedouins 38:55 Muhammad dies 39:48 Who will succeed Muhammad? 40:45 Growth of Islam after Muhammad - Show the video. You may want to stop at various points to discuss and ask if there are any questions. This will also allow students time to jot down notes during discussion time. - After the video, debrief. Ask the students what they learned. Were there any misconceptions they had about Islam that were cleared up by viewing this video? Ask for some samples of words they were introduced to that they had not heard before. Jot them down on the board. - Hand out a copy of the vocabulary sheet. Allow students time to try to fill it in on their own, then allow use of a dictionary. You may want to have students work on this in pairs. - After students have had time to complete the worksheet, review it with them. (You should determine whether you want this to be a graded assignment or not.) - Now that students have had the opportunity to gain general background through the introduction of the worksheets and video, hand out the parallel timeline assignment. - Instruct students that this assignment is to help them gain an understanding of not only what was happening during Muhammad's life in the Arabian Peninsula, but also what was happening at the same time in other geographic regions around the world. - Review the Parallel Timeline Requirement Sheet. - Allow students time to research using the Internet, library resources, and possibly viewing the video in small groups for additional information and note taking. - After students have completed the parallel timeline, ask for volunteers to present their findings. - Assessment Recommendations: - For upper level grades, you may want students to write a short biography of Muhammad in addition to creating a timeline. 
- For lower level grades or special needs students, you can pair students together to make this a partner or small group project. You can also adapt the number of required entries on the timeline. - If your classroom does not have access to the video, use this site and other Web sites to gain a general understanding of Muhammad's life and work, along with information relating to the rise of Islam.
http://www.pbs.org/empires/islam/lesson1.html
4.1875
Simple math explains dramatic beak shape variation in Darwin's finches. February 22nd, 2010, in Biology / Evolution. (Image credit: Otger Campàs and Michael Brenner, Harvard School of Engineering and Applied Sciences.) From how massive humpbacks glide through the sea with ease to the efficient way fungal spores fly, applied mathematicians at Harvard have excavated the equations behind a variety of complex phenomena. The latest numerical feat by Otger Campàs and Michael Brenner, working closely with a team of Harvard evolutionary biologists led by Arhat Abzhanov, zeroes in on perhaps the most famous icon of evolution: the beaks of Darwin's finches. In a study appearing in the February 16 Early Edition of the Proceedings of the National Academy of Sciences (PNAS), the researchers demonstrate that simple changes in beak length and depth can explain the important morphological diversity of all beak shapes within the famous genus Geospiza. Broadly, the work suggests that a few simple mathematical rules may be responsible for complicated biological adaptations. The investigation began at Harvard's Museum of Comparative Zoology, where Campàs, a postdoctoral fellow at the Harvard School of Engineering and Applied Sciences (SEAS), and Ricardo Mallarino, a graduate student in the Department of Organismic and Evolutionary Biology (OEB) at Harvard, obtained photographs of beak profiles from specimens of Darwin's finches. Using digitization techniques, the researchers found that 14 distinct beak shapes that at first glance look unrelated could be categorized into three broader group shapes. Despite the striking variety of sizes and shapes, mathematically, the beaks within a particular group differ only by their scales. "It is not possible, however, to explain the full diversity of beak shapes of all Darwin's finches with only changes in beak length and depth," explains Campàs. "By combining shear transformations (basically, what happens when you transform a square into a rhombus by shoving the sides toward one another) with changes in length and depth, we can then collapse all beak shapes onto a common shape." Using micro-computed tomography (CT) scans of the heads of the different species in the genus Geospiza, Anthony Herrel, an Associate of the Museum of Comparative Zoology, helped the team go one step further, verifying that the bone structure of the birds exhibits a similar scaling pattern as the beaks. Thus, beak shape variation seems to be constrained by only three parameters: the length and depth of the scaling transformation and the degree of shear. Brenner, Glover Professor of Applied Mathematics at SEAS, says he is "astonished" that so few variables can help explain such great diversity. The mechanism that allows organisms to adapt so readily to new environments may be a relatively "easy" process. "This is really significant because it means that adaptive changes in phenotype can be explained by modifications in a few simple parameters," adds Mallarino. "These results have encouraged us to try to find the remaining molecules responsible for causing these changes." In fact, the mathematical findings also have a parallel genetic basis. 
Abzhanov, an assistant professor in OEB, and his collaborators explored the role of the two genes responsible for controlling beak shape variation. Bmp4 expression affects width and depth and Calmodulin expression relates to length. It turns out that the expression levels of the two genes, in particular Bmp4, are fundamentally related to the scaling transformations. "We wanted to know how beaks changed on a fundamental level during evolution of Darwin's finches and how many unique beak shapes we need yet to explain using our developmental genetics approach," says Abzhanov. "Our joint study demonstrates that we understand the species-level variation really well where scaling transformations match up perfectly with expression and function of developmental genes which regulate precisely such type of change. Now we want to understand how novel beak shapes resulting from higher order transformations evolved in Darwin's finches and beyond." Campàs reflects that the finding helps to address an idea that Darwin raised nearly 175 years ago in the Voyage of the Beagle: "The most curious fact is the perfect gradation in the size of the beaks in the different species of Geospiza, from one as large as that of a hawfinch to that of a chaffinch, and even to that of a warbler … Seeing this gradation and diversity of structure in one small, intimately related group of birds, one might really fancy that from an original paucity of birds in this archipelago [Galapagos], one species had been taken and modified for different ends." Provided by Harvard University "Simple math explains dramatic beak shape variation in Darwin's finches." February 22nd, 2010. http://phys.org/news186083215.html
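The scaling-plus-shear collapse described above can be sketched in a few lines of code. This is an illustration under assumed conventions (beak profiles as (x, y) point arrays, made-up parameter values), not the study's actual analysis:

```python
import numpy as np

# Illustrative sketch only: apply a scaling followed by a shear to a beak
# profile, the two operations the study says collapse group shapes onto a
# common shape. Profile points and parameter values here are invented.
def transform(profile, s_x, s_y, k):
    """profile: (N, 2) array of (x, y) outline points; s_x, s_y: length and
    depth scale factors; k: shear factor (square -> rhombus)."""
    scale = np.array([[s_x, 0.0], [0.0, s_y]])
    shear = np.array([[1.0, k], [0.0, 1.0]])
    return profile @ (shear @ scale).T

toy_profile = np.array([[0.0, 0.0], [1.0, 0.4], [2.0, 0.0]])
print(transform(toy_profile, 0.5, 0.8, 0.1))
```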
http://phys.org/print186083215.html
4
Howard University professor Alain Locke published The New Negro, a landmark anthology of essays, poetry, and fiction by African American writers including Claude McKay, Langston Hughes, Nella Larsen, Jean Toomer, and others. The New York City neighborhood of Harlem became a major cultural center for African Americans. Black artists, musicians, and writers based in Harlem created a social and artistic community, producing major works and challenging barriers created by Jim Crow. Langston Hughes (1902–1967) was a prominent figure of the Harlem Renaissance whose poetry, plays, essays, and novels addressed various aspects of black culture and experience. Born in Joplin, Missouri, Hughes was raised in the Midwest by his mother and grandmother. In 1921, he enrolled at Columbia University just after his first published poem, “The Negro Speaks of Rivers,” appeared in The Crisis. During the 1920s, Hughes spent time writing abroad and in Harlem. He befriended other major African American writers and artists of the... Harlem is a New York City neighborhood in upper Manhattan. After World War I, the neighborhood grew into a center of African American art, literature, and culture, a movement known as the “Harlem Renaissance.”
http://www.gilderlehrman.org/category/coverage-geographical/harlem
4.3125
The climate of Uranus is heavily influenced both by its lack of internal heat, which limits atmospheric activity, and by its extreme axial tilt, which induces intense seasonal variation. Uranus' atmosphere is remarkably bland in comparison to the other gas giants which it otherwise closely resembles. When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet. Later observations from the ground or by the Hubble Space Telescope made in the 1990s and the 2000s revealed bright clouds in the northern (winter) hemisphere of the planet. In 2006 a dark spot similar to the Great Dark Spot on Neptune was detected. In 1986 Voyager 2 discovered that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands. Their boundary is located at about −45 degrees of latitude. A narrow band straddling the latitudinal range from −45 to −50 degrees is the brightest large feature on the visible surface of the planet; it is called the southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar. Unfortunately, Voyager 2 arrived during the height of the planet's southern summer and could not observe the northern hemisphere. However, at the end of the 1990s and the beginning of the twenty-first century, when the northern polar region came into view, the Hubble Space Telescope (HST) and the Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere. Uranus thus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar. In 2007, however, when Uranus passed its equinox, the southern collar almost disappeared, while a faint northern collar emerged near 45 degrees of latitude. The visible latitudinal structure of Uranus is different from that of Jupiter and Saturn, which demonstrate multiple narrow and colorful bands. In addition to the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north of the collar. In all other respects Uranus looked like a dynamically dead planet in 1986. However, in the 1990s the number of observed bright cloud features grew considerably. The majority of them were found in the northern hemisphere as it started to become visible. The common though incorrect explanation of this fact was that bright clouds are easier to identify in the dark part of the planet, whereas in the southern hemisphere the bright collar masks them. Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper, and brighter. They appear to lie at a higher altitude, which is connected to the fact that until 2004 (see below) no southern polar cloud had been observed at a wavelength of 2.2 micrometres, which is sensitive to methane absorption, while northern clouds have been regularly observed in this wavelength band. The lifetime of clouds spans several orders of magnitude: some small clouds live for hours, while at least one southern cloud has persisted since the Voyager flyby. Recent observations also discovered that cloud features on Uranus have a lot in common with those on Neptune, although the weather on Uranus is much calmer. The dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature was imaged. 
In that year, observations from both the Hubble Space Telescope and the Keck Telescope revealed a small dark spot in the northern (winter) hemisphere of Uranus. It was located at a latitude of about 28 ± 1° and measured approximately 2° (1300 km) in latitude and 5° (2700 km) in longitude. The feature, called the Uranus Dark Spot (UDS), moved in the prograde direction relative to the planet with an average speed of 43.1 ± 0.1 m/s, which is almost 20 m/s faster than the speed of clouds at the same latitude. The latitude of UDS was approximately constant. The feature was variable in size and appearance and was often accompanied by bright white clouds called the Bright Companion (BC), which moved with nearly the same speed as UDS itself. The behavior and appearance of UDS and its bright companion were similar to Neptunian Great Dark Spots (GDS) and their bright companions, respectively, though UDS was significantly smaller. This similarity suggests that they have the same origin. GDS were hypothesized to be anticyclonic vortices in the atmosphere of Neptune, whereas their bright companions were thought to be methane clouds formed in places where the air is rising (orographic clouds). UDS is supposed to have a similar nature, although it looked different from GDS at some wavelengths. While GDS had the highest contrast at 0.47 μm, UDS was not visible at this wavelength. On the other hand, UDS demonstrated the highest contrast at 1.6 μm, where GDS were not detected. This implies that dark spots on the two ice giants are located at somewhat different pressure levels: the Uranian feature probably lies near 4 bar. The dark color of UDS (as well as GDS) may be caused by thinning of the underlying hydrogen sulfide or ammonium hydrosulfide clouds. The tracking of numerous cloud features allowed the determination of the zonal winds blowing in the upper troposphere of Uranus. At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds range from −100 to −50 m/s. Wind speeds increase with distance from the equator, reaching zero near ±20° latitude, where the troposphere's temperature minimum is located. Closer to the poles, the winds shift to a prograde direction, flowing with the planet's rotation. Wind speeds continue to increase, reaching maxima at ±60° latitude before falling to zero at the poles. Wind speeds at −40° latitude range from 150 to 200 m/s. Since the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure. In contrast, in the northern hemisphere maximum speeds as high as 240 m/s are observed near +50 degrees of latitude. These speeds sometimes lead to incorrect assertions that winds are faster in the northern hemisphere. In fact, latitude for latitude, winds are slightly slower in the northern part of Uranus, especially at the midlatitudes from ±20 to ±40 degrees. There is currently no agreement about whether any changes in wind speed have occurred since 1986, and nothing is known about the much slower meridional winds. Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere has existed for less than 84 Earth years, or one full Uranian year. A number of discoveries have, however, been made. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in the brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes. 
A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s. Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice. The majority of this variability is believed to occur due to changes in the viewing geometry. Uranus is an oblate spheroid, which causes its visible area to become larger when viewed from the poles. This explains in part its brighter appearance at solstices. Uranus is also known to exhibit strong meridional variations in albedo (see above). For instance, the south polar region of Uranus is much brighter than the equatorial bands. In addition, both poles demonstrate elevated brightness in the microwave part of the spectrum, while the polar stratosphere is known to be cooler than the equatorial one. So seasonal change seems to happen as follows: the poles, which are bright in both the visible and microwave spectral bands, come into view at the solstices, resulting in a brighter planet, while the dark equator is visible mainly near the equinoxes, resulting in a darker planet. In addition, occultations at the solstices probe the hotter equatorial stratosphere. However, there are some reasons to believe that true seasonal changes are happening on Uranus. While the planet is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of seasonal change outlined above. During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim. This information implies that the visible pole brightens some time before the solstice and darkens after the equinox. Detailed analysis of the visible and microwave data revealed that the periodic changes in brightness are not completely symmetrical around the solstices, which also indicates a change in the albedo patterns. In addition, the microwave data showed increases in pole–equator contrast after the 1986 solstice. Finally, in the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright), while the northern hemisphere demonstrated increasing activity, such as cloud formations and stronger winds, bolstering expectations that it would brighten soon. In particular, an analog of the bright polar collar present in the southern hemisphere at −45° was expected to appear in the northern part of the planet. This indeed happened in 2007 when the planet passed its equinox: a faint northern polar collar arose, while the southern collar became nearly invisible, although the zonal wind profile remained asymmetric, with northern winds slightly slower than southern. The mechanism of these physical changes is still not clear. Near the summer and winter solstices, Uranus' hemispheres lie alternately either in the full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere. The bright collar at −45° latitude is also connected with methane clouds. Other changes in the southern polar region can be explained by changes in the lower cloud layers. The variation of the microwave emission from the planet is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection. 
For a short period in autumn 2004, a number of large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance. Observations included record-breaking wind speeds of 824 km/h and a persistent thunderstorm referred to as the "Fourth of July fireworks". Why this sudden upsurge in activity should be occurring is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather. Several explanations have been proposed for the calm weather on Uranus. One is that Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low internal thermal flux. Why Uranus' heat flux is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun. Uranus, by contrast, radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e., heat) part of the spectrum is 1.06 ± 0.08 times the solar energy absorbed in its atmosphere. In fact, Uranus' heat flux is only 0.042 ± 0.047 W/m², which is lower than the internal heat flux of Earth, about 0.075 W/m². The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C), making Uranus the coldest planet in the Solar System, colder than Neptune. Another hypothesis states that when Uranus was "knocked over" by the supermassive impactor which caused its extreme axial tilt, the event also caused it to expel most of its primordial heat, leaving it with a depleted core temperature. A further hypothesis is that some form of barrier exists in Uranus' upper layers which prevents the core's heat from reaching the surface. For example, convection may take place in a set of compositionally different layers, which may inhibit upward heat transport.
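As a numeric recap of the zonal wind profile described earlier, here is a small Python sketch; the anchor points are the approximate figures quoted in the article, and the linear interpolation between them is purely illustrative:

```python
import numpy as np

# Approximate zonal wind anchors from the text (latitude in degrees,
# speed in m/s; negative = retrograde). Interpolation is illustrative only.
lats  = [-90, -60, -40, -20,   0,  20,  50,  60,  90]
winds = [  0, 200, 175,   0, -75,   0, 240, 240,  0]

def zonal_wind(lat_deg):
    return float(np.interp(lat_deg, lats, winds))

print(zonal_wind(-40))  # 175.0, within the 150-200 m/s quoted for -40 deg
```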
http://www.mashpedia.com/Climate_of_Uranus
4
A fracture zone is a linear oceanic feature, often hundreds or even thousands of kilometers long, resulting from the action of offset mid-ocean ridge axis segments. Fracture zones are a consequence of plate tectonics. Lithospheric plates on either side of an active transform fault move in opposite directions; here, strike-slip activity occurs. Fracture zones extend past the transform faults, away from the ridge axis; seismically inactive (because both plate segments are moving in the same direction), they display evidence of past transform fault activity, primarily in the different ages of the crust on opposite sides of the zone. In actual usage, many transform faults aligned with fracture zones are often loosely referred to as "fracture zones," although technically they are not. See also - U.S. Geological Survey: Understanding plate motions - NOAA, National Geophysical Data Center & World Data Center A for Marine Geology & Geophysics (See: The Fracture Zones)
http://en.wikipedia.org/wiki/Fracture_zone
4.15625
Kiowa, North American Indians of Kiowa-Tanoan linguistic stock who are believed to have migrated from what is now southwestern Montana into the southern Great Plains in the 18th century. Numbering some 3,000 at the time, they were accompanied on the migration by the Kiowa Apache, a small southern Apache band that became closely associated with the Kiowa. Guided by the Crow, the Kiowa learned the technologies and customs of the Plains Indians and eventually formed a lasting peace with the Comanche, Arapaho, and Southern Cheyenne. The name Kiowa may be a variant of their name for themselves, Kai-i-gwu, meaning “principal people.” The Kiowa and their confederates were among the last of the Plains tribes to capitulate to the U.S. Cavalry. Since 1868 they have shared a reservation with the Comanche between the Washita and Red rivers, centring on Anadarko, Oklahoma. Before their surrender, Kiowa culture was typical of nomadic Plains Indians. After they acquired horses from the Spanish, their economy focused on equestrian bison hunting. They lived in large tepees and moved camp frequently in pursuit of game. Kiowa warriors attained rank according to their exploits in war, including killing an enemy or touching his body during combat. Traditional Kiowa religion included the belief that dreams and visions gave individuals supernatural power in war, hunting, and healing. Ten medicine bundles, believed to protect the tribe, became central in the Kiowan Sun Dance. The Kiowa and the Comanche were instrumental in spreading peyotism (see Native American church). The Kiowa were also notable for their pictographic histories of tribal events, recorded twice each year. Each summer and winter from 1832 to 1939, one or more Kiowa artists created a sketch or drawing that depicted the events of the past six months; in the early years of this practice, the drawings were made on dressed skins, while artists working later in the period drew on ledger paper. The National Anthropological Archives of the Smithsonian Institution contain a number of these extraordinary drawings. Early 21st-century population estimates indicated more than 12,000 individuals of Kiowa descent.
http://www.britannica.com/EBchecked/topic/318957/Kiowa
4.21875
(Linnaeus in Hasselquist, 1762) Map: distribution in 2006 of Aedes aegypti (blue) and epidemic dengue (red). The yellow fever mosquito, Aedes aegypti, is a mosquito that can spread the dengue fever, chikungunya, and yellow fever viruses, among other diseases. The mosquito can be recognized by white markings on its legs and a marking in the form of a lyre on the thorax. The mosquito originated in Africa but is now found in tropical and subtropical regions throughout the world. Spread of disease and prevention Aedes aegypti is a vector for transmitting several tropical fevers. Only the female bites for blood, which she needs to mature her eggs. Understanding how the mosquito detects its host is a crucial step in understanding the spread of the diseases it carries. Aedes aegypti is attracted to chemical compounds that are emitted by mammals. These compounds include ammonia, carbon dioxide, lactic acid, and octenol. Scientists at the Agricultural Research Service have studied the specific chemical structure of octenol in order to better understand why this chemical attracts the mosquito to its host. They found that the mosquito has a preference for "right-handed" (dextrorotatory) octenol molecules. The CDC traveler's page on preventing dengue fever suggests using mosquito repellents that contain DEET (N,N-diethyl-meta-toluamide, 20% to 30% concentration, but not more). It also suggests the following: - Although Aedes aegypti mosquitoes most commonly bite at dusk and dawn, indoors, in shady areas, or when the weather is cloudy, "they can bite and spread infection all year long and at any time of day." - The mosquito's preferred breeding areas are areas of stagnant water, such as flower vases, uncovered barrels, buckets, and discarded tires, but the most dangerous areas are wet shower floors and toilet tanks, as they allow the mosquitoes to breed in the residence. Research has shown that certain chemicals emanating from bacteria in water containers stimulate the female mosquitoes to lay their eggs. They are particularly motivated to lay eggs in water containers that have the correct amounts of specific fatty acids associated with bacteria involved in the degradation of leaves and other organic matter in water. The chemicals associated with the microbial stew are far more stimulating to discerning female mosquitoes than plain or filtered water in which the bacteria once lived. - Wear long-sleeved clothing and long trousers when outdoors during the day and evening. - Spray permethrin or DEET repellents on clothing, as mosquitoes may bite through thin clothing. - Use mosquito netting over the bed if the bedroom is not air conditioned or screened. For additional protection, treat the mosquito netting with the insecticide permethrin. - Spray permethrin or a similar insecticide in the bedroom before retiring. Although the lifespan of an adult Aedes aegypti is two to four weeks depending on conditions, the eggs can be viable for over a year in a dry state, which allows the mosquito to re-emerge after a cold winter or dry spell. Genetic modification A. aegypti is the subject of investigations that genetically modify the mosquitoes. The modified strain, known as OX513A, requires the antibiotic tetracycline to develop beyond the larval stage and passes this trait on to its offspring. Modified males raised in a laboratory will develop normally, as they are supplied with this chemical, and can be released into the wild. However, their subsequent offspring will find no tetracycline in their environment and will never develop into adults. 
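To make the suppression mechanism concrete, here is a toy discrete-generation model; it is my own illustration, not taken from the trials, and the release size and growth rate are invented:

```python
# Toy model of OX513A-style suppression (illustrative assumptions only).
# Offspring fathered by modified males die before adulthood in the wild,
# so the viable brood shrinks as released males outnumber wild males.
def next_pop(pop, released_males, growth=1.2):
    females, males = pop / 2, pop / 2
    wild_fathered = males / (males + released_males)  # chance father is wild
    return growth * females * wild_fathered * 2       # viable offspring

pop = 1000.0
for gen in range(1, 6):
    pop = next_pop(pop, released_males=5000)
    print(gen, round(pop, 1))   # population collapses within a few generations
```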
An Oxford firm, Oxitec, is performing a pilot program in Juazeiro, Brazil, to test the effectiveness of these modifications in reducing disease spread. A 2010 study carried out in the Cayman Islands saw the release of over three million OX513A mosquitoes. The wild population of mosquitoes subsequently dropped by 80%. The genome of this species of mosquito was sequenced and analyzed by a consortium including scientists at The Institute for Genomic Research (now part of the J. Craig Venter Institute), the European Bioinformatics Institute, the Broad Institute, and the University of Notre Dame, and published in 2007. The effort in sequencing its DNA was intended to provide new avenues for research into insecticides and possible genetic modification to prevent the spread of virus. This was the second mosquito species to have its genome sequenced in full (the first was Anopheles gambiae). The published data included the 1.38 billion base pairs containing the insect's estimated 15,419 protein-encoding genes. The sequence also gives estimates of when the species diverged from Drosophila melanogaster (the common fruit fly) and when it diverged from Anopheles gambiae. Scientific name The species was first named (as Culex aegypti) in a 1757 publication by Fredric Hasselquist titled Iter Palaestinum ("A Journey to Palestine"). Hasselquist was provided with the names and descriptions by his mentor, Carl Linnaeus. Iter Palaestinum was later translated into German and published in 1762 as Reise nach Palästina. Since the latter is an uncritical reproduction of the former, they are both considered to pre-date the starting point for zoological nomenclature in 1758. Nonetheless, the name Aedes aegypti was frequently used, starting with H. G. Dyar in 1920. In order to stabilise the nomenclature, a petition to the International Commission on Zoological Nomenclature was made by P. F. Mattingly, Alan Stone and Kenneth L. Knight in 1962. It also transpired that, although the name Aedes aegypti was universally used for the yellow fever mosquito, Linnaeus had actually described a species now known as Aedes (Ochlerotatus) caspius. In 1964, the commission ruled in favour of the proposal, validating Linnaeus' name and transferring it to the species for which it was in general use. The yellow fever mosquito belongs to the tribe Aedini of the dipteran family Culicidae, to the genus Aedes, and to the subgenus Stegomyia. According to one recent analysis, the subgenus Stegomyia of the genus Aedes should be raised to the level of genus. The proposed name change has been ignored by most scientists; at least one scientific journal, the Journal of Medical Entomology, has officially encouraged authors dealing with aedine mosquitoes to continue to use the traditional names, unless they have particular reasons for not doing so.
"ARS Study Provides a Better Understanding of How Mosquitoes Find a Host". U.S. Department of Agriculture. Archived from the original on 8 October 2010. Retrieved 2010-08-27. - Banfield, William G.; Woke, P. A.; MacKay, C. M.; Cooper, H. L. (28 May 1965). "Mosquito Transmission of a Reticulum Cell Sarcoma of Hamsters". Science 148 (3674): 1239–1240. doi:10.1126/science.148.3674.1239. PMID 14280009. - "Travelers' Health Outbreak Notice". Centers for Disease Control and Prevention. June 02, 2010. Archived from the original on 26 August 2010. Retrieved 2010-08-27. - "Dengue Virus: Vector And Transmission". Retrieved 19 October 2012. - "Lay Your Eggs Here". Newswise, Inc. 07-03-2008. Retrieved 2010-08-27. - Catherine Zettel & Phillip Kaufman. "Yellow fever mosquito Aedes aegypti". University of Florida, Institute of Food and Agricultural Sciences. Retrieved 2010-08-27. - Roland Mortimer. "Aedes aegypti and Dengue fever". Onview.net Ltd, Microscopy-UK. Retrieved 2010-08-27. - Michael Specter, "The Mosquito Solution", The New Yorker, July 9-16, 2012, page 38. - Conal Urquhart (15 July 2012). "Can GM mosquitoes rid the world of a major killer?". The Observer. Retrieved 2012-07-15. - Heather Kowalski (May 17, 2007). "Scientists at J. Craig Venter Institute publish draft genome sequence from Aedes aegypti, mosquito responsible for yellow fever, dengue fever". J. Craig Venter Institute. - Vishvanath Nene, Jennifer R. Wortman, Daniel Lawson, Brian Haas, Chinnappa Kodira et al. (June 2007). "Genome sequence of Aedes aegypti, a major arbovirus vector". Science 316 (5832): 1718–1723. Bibcode:2007Sci...316.1718N. doi:10.1126/science.1138878. PMC 2868357. PMID 17510324. - P. F. Mattingly, Alan Stone & Kenneth L. Knight (1962). "Culex aegypti Linnaeus, 1762 (Insecta, Diptera); proposed validation and interpretation under the plenary powers of the species so named. Z.N.(S.) 1216" (PDF). Bulletin of Zoological Nomenclature 19 (4): 208–219. - International Commission on Zoological Nomenclature (1964). "Culex aegypti Linnaeus, 1762 (Insecta, Diptera): validated and interpreted under the plenary powers". Bulletin of Zoological Nomenclature 21 (4): 246–248. - John F. Reinert, Ralph E. Harbach & Ian J. Kitching (2004). "Phylogeny and classification of Aedini (Diptera: Culicidae), based on morphological characters of all life stages" (PDF). Zoological Journal of the Linnean Society 142 (3): 289–368. doi:10.1111/j.1096-3642.2004.00144.x. - Andrew Polaszek (January 2006). "Two words colliding: resistance to changes in the scientific names of animals – Aedes vs Stegomyia". Trends in Parasitology 22 (1): 8–9. doi:10.1016/j.pt.2005.11.003. PMID 16300998. - "Journal of Medical Entomology Policy on Names of Aedine Mosquito Genera and Subgenera". Entomological Society of America. Retrieved August 31, 2011. |External identifiers for Aedes aegypti| |Encyclopedia of Life||740699| |Also found in: Wikispecies| |Wikimedia Commons has media related to: Aedes aegypti| - VectorBase's genomic resource for Aedes aegypti - Aedes aegypti page from University of Sydney, Australia - Aedes aegypti and Dengue fever - United States CDC page on dengue fever containing information on prevalence of Aedes aegypti worldwide and past efforts to eradicate it - Aedes aegypti on the UF / IFAS Featured Creatures Web site - Walter Reed Hospital Distribution, taxonomy, references etc. Excellent image. - Aedes aegypti at MetaPathogen: taxonomy, life cycle, facts - THE ECOLOGY AND BIOLOGY OF Aedes aegypti (L.) 
AND Aedes albopictus (Skuse) (DIPTERA: CULICIDAE) AND THE RESISTANCE STATUS OF Aedes albopictus (FIELD STRAIN) AGAINST ORGANOPHOSPHATES IN PENANG, MALAYSIA
http://en.wikipedia.org/wiki/Aedes_aegypti
4.125
Many Canadians discriminated against the Chinese. The Chinese looked different, spoke a different language, and brought their own ways of life with them from China. Many Canadians had never met a Chinese person and formed false opinions out of fear. If each group had known the other better, perhaps there wouldn't have been as much prejudice. Some Canadians thought that the Chinese would take jobs away from them. Others had wrong or exaggerated ideas about the way the Chinese lived. They were accused of being dirty and disease carriers because of their crowded living conditions. Chinese workers were paid less than white workers because many people believed that the Chinese needed less to live on. They thought that the Chinese were content to live with less and would settle for food that lacked variety and quality. Because most early Chinese immigrants were men, people assumed they had no families to support. Many Chinese were called names and were victims of physical assault. Chinese could not even be buried in public cemeteries with non-Chinese.
http://www.collectionscanada.gc.ca/settlement/kids/021013-2031.5-e.html
4.03125
1. To set apart from others by visible marks; to make distinctive or discernible by exhibiting differences; to mark off by some characteristic. Not more distinguished by her purple vest, Than by the charming features of her face. (Dryden) Milton has distinguished the sweetbrier and the eglantine. (Nares) 2. To separate by definition of terms or logical division of a subject with regard to difference; as, to distinguish sounds into high and low. Moses distinguished the causes of the flood into those that belong to the heavens, and those that belong to the earth. (T. Burnet) 3. To recognize or discern by marks, signs, or characteristic quality or qualities; to know and discriminate (anything) from other things with which it might be confounded; as, to distinguish the sound of a drum. We are enabled to distinguish good from evil, as well as truth from falsehood. (Watts) Nor more can you distinguish of a man, Than of his outward show. (Shak)
http://www.biology-online.org/dictionary/Distinguish
4.21875
The Chesapeake & Ohio was a major force in opening the coalfields of southern West Virginia in the years after the Civil War. The railroad provided the impetus for the growth of Huntington in the west, White Sulphur Springs in the east, and towns in between such as Hinton and Thurmond. Many West Virginians can name members of their families who were, or still are, employees of the C&O and its successors. The Chesapeake & Ohio Railroad was created in 1868 by the merger of the Virginia Central Railroad and the Covington & Ohio Railroad. At the beginning of the Civil War, the Virginia Central extended from Richmond to Waynesboro and was the second largest railroad in the state. While the Covington & Ohio had been chartered in 1853 to build from the western terminus of the Virginia Central to the Ohio River, no construction had taken place at the outbreak of the Civil War. The Virginia Central, located where the fighting raged, suffered significant damage during the war. When the war ended, Gen. William C. Wickham, a Confederate cavalry officer, became president of the railroad and restored the entire line to operation early in 1866. In July 1867, the Virginia Central reached Covington, near the boundary between Virginia and the new state of West Virginia. Wickham also became president of the Chesapeake & Ohio after the 1868 merger and immediately began seeking funds to extend the railroad to the Ohio River. The C&O soon attracted the support of Collis P. Huntington, one of the greatest railroad entrepreneurs in American history. Huntington had recently been the major figure in the completion of the first transcontinental railroad, the Union Pacific-Central Pacific. He saw the C&O as the eastern section of a true transcontinental railroad. His contacts with New York financiers were desperately needed by the C&O. To benefit from these contacts, however, the board and the stockholders of the railroad were forced to reorganize the company, turning operations over to Huntington and his friends. Once in control, Huntington expanded the railroad rapidly. Construction started west from Covington and east from the newly developed town of Huntington on the banks of the Ohio. Using more than 7,000 men (including the legendary John Henry, the ‘‘steel driving man’’), and well over the projected construction cost of $15 million, the line was completed on January 29, 1873. Construction was expensive and difficult because of the mountains between Covington and Hinton. West of Hinton the line followed the New River for a considerable distance, the narrow valley also a challenge to construction workers. Many fatalities and injuries resulted. The difficult construction and the rapid expansion of the C&O, in addition to the national depression of 1873, forced the company into default. In 1878, the Chesapeake & Ohio Railroad was sold at foreclosure, reorganized, and renamed the Chesapeake & Ohio Railway. Financial conditions gradually improved as coal mines were opened along the route. A major step forward took place in 1882 when a 75-mile line was constructed from Richmond to Tidewater at Newport News, connecting the railroad with ocean shipping. In the 1880s, the C&O pushed west to Cincinnati and smaller railroads were absorbed at a rate too rapid to sustain. Consequently, the line was forced into receivership again in 1887. A foreclosure sale was averted when J. P. Morgan interests purchased control of the company and instituted a successful financial recovery. 
In the process Huntington’s role with the C&O ended, and his place was taken by Melville E. Ingalls. The Ingalls era (1888–1900) began with the entry of the C&O into Cincinnati. Ingalls’s leadership provided stability and with financial support from Morgan the company continued to expand. By 1900, the C&O had more than doubled the mileage it operated, new locomotives and rolling stock had been purchased, the line had been ballasted and re-laid with heavier rails, and the company paid a dollar per share annual dividend. In the first decade of the new century, the Pennsylvania Railroad and the New York Central purchased large amounts of C&O stock in an effort to control rates for shipping coal. To further this end, the C&O purchased some of the small coal-hauling railroads in West Virginia that connected with the C&O, such as the Coal River & Western Railroad and the Guyandot Valley Railroad. Branches to Big Creek, Buffalo Creek, Rum Creek, Piney Creek, Cabin Creek, and Dingess Run, for example, were also constructed. The 98-mile Greenbrier Railway Company, purchased by the C&O in 1907, ran up the Greenbrier Valley to Bartow. Completed in 1904, it was built to provide access to timber for the large sawmills at Ronceverte. Eventually known as the Durbin Branch, it connected with what is now the Cass Scenic Railroad at Cass. In Ohio the C&O gradually absorbed the Hocking Valley Railroad after 1905, giving connections to Columbus and Toledo. Other expansion in the Midwest gave the C&O access to Chicago and the Great Lakes. After World War I, the C&O went through several leadership changes before ending up under the direction of brothers Oris P. and Mantis J. Van Sweringen, colorful real estate developers in Cleveland. The Van Sweringens attempted to combine their railroads—the C&O, the Erie, the Pere Marquette, and the Nickel Plate—into one major system. While the Interstate Commerce Commission refused to allow the merger and the Van Sweringen empire was eventually disbanded, one result was the C&O’s acquisition of the Pere Marquette Railroad, operating primarily in Michigan. Because of its coal traffic the C&O survived the Great Depression better than most railroads. From 1937 to 1954, under the direction of Robert R. Young, the railroad grew to a 5,100-mile system with 35,000 employees and an annual revenue of $319 million. In 1962, with Walter J. Tuohy as president, the C&O acquired the faltering Baltimore & Ohio, in a contest with the New York Central which had earlier wanted to merge with the C&O. In 1972, the C&O, B&O, and Western Maryland were merged into the Chessie System. In 1980, the current CSX corporation was created, merging the Chessie System and the Seaboard Coast Line. Today, as one of the major rail systems in the United States, CSX has acquired significant parts of Conrail, thus reducing the major railroad systems of West Virginia to two: CSX and Norfolk Southern. This Article was written by Robert L. Frey Last Revised on October 12, 2010 Turner, Charles W., et al. Chessie's Road. Alderson: C&O Historical Society, 1986.
http://www.wvencyclopedia.org/articles/1143
4.3125
Indian removal was a 19th-century policy of the government of the United States to relocate Native American tribes living east of the Mississippi River to lands west of the river. The Indian Removal Act was signed into law by President Andrew Jackson on May 28, 1830. Since the presidency of Thomas Jefferson, America's policy had been to allow Native Americans to remain east of the Mississippi as long as they became assimilated or "civilized". His original plan was to guide the Natives towards adopting a sedentary agricultural lifestyle, in large part due to "the decrease of game rendering their subsistence by hunting insufficient". Jefferson's expectation was that by assimilating them into an agricultural lifestyle, they would become economically dependent on trade with white Americans, and would thereby be willing to give up land that they would otherwise not part with, in exchange for trade goods. In an 1803 letter to William Henry Harrison, Jefferson wrote: When they withdraw themselves to the culture of a small piece of land, they will perceive how useless to them are their extensive forests, and will be willing to pare them off from time to time in exchange for necessaries for their farms and families. To promote this disposition to exchange lands, which they have to spare and we want, for necessaries, which we have to spare and they want, we shall push our trading houses, and be glad to see the good and influential individuals among them run in debt, because we observe that when these debts get beyond what the individuals can pay, they become willing to lop them off by a cession of lands. At our trading houses, too, we mean to sell so low as merely to repay us cost and charges, so as neither to lessen or enlarge our capital. This is what private traders cannot do, for they must gain; they will consequently retire from the competition, and we shall thus get clear of this pest without giving offence or umbrage to the Indians. In this way our settlements will gradually circumscribe and approach the Indians, and they will in time either incorporate with us as citizens of the United States, or remove beyond the Mississippi. The former is certainly the termination of their history most happy for themselves; but, in the whole course of this, it is essential to cultivate their love. As to their fear, we presume that our strength and their weakness is now so visible that they must see we have only to shut our hand to crush them, and that all our liberalities to them proceed from motives of pure humanity only. Should any tribe be foolhardy enough to take up the hatchet at any time, the seizing the whole country of that tribe, and driving them across the Mississippi, as the only condition of peace, would be an example to others, and a furtherance of our final consolidation. There was a long history of Native American land being purchased, usually by treaty and sometimes under coercion. In the early 19th century the notion of "land exchange" developed and began to be incorporated into land cession treaties. Native Americans would relinquish land in the east in exchange for equal or comparable land west of the Mississippi River. This idea was proposed as early as 1803, by Jefferson, but was not used in actual treaties until 1817, when the Cherokee agreed to cede two large tracts of land in the east for one of equal size in present-day Arkansas. Many other treaties of this nature quickly followed. 
The process was used in President Andrew Jackson's removal policy under the Indian Removal Act of 1830.

Calhoun's plan

Under President James Monroe, Secretary of War John C. Calhoun devised the first plans for Indian removal. By late 1824, Monroe approved Calhoun's plans and in a special message to the Senate on January 27, 1825, requested the creation of the Arkansas Territory and Indian Territory. The Indians east of the Mississippi were to voluntarily exchange their lands for lands west of the river. The Senate accepted Monroe's request and asked Calhoun to draft a bill, which was killed in the House of Representatives by the Georgia delegation. President John Quincy Adams continued the Calhoun–Monroe policy and was determined to remove the Indians by non-forceful means, but Georgia refused to submit to Adams' request, forcing Adams to negotiate a treaty with the Creeks and Cherokees that granted Georgia the land it wanted. When Jackson became president, he agreed that the Indians should be forced to exchange eastern lands for western lands.

Indian Removal Act

Andrew Jackson was elected president of the United States in 1828, and with his inauguration in 1829 the government stance toward Indians turned harsher. Jackson abandoned the policy of his predecessors of treating different Indian groups as separate nations. Instead, he aggressively pursued plans to move all Indian tribes living east of the Mississippi River to west of the Mississippi. At Jackson's request, the United States Congress opened a fierce debate on an Indian removal bill. In the end, the bill passed, but the vote was close. The Senate passed the measure 28–19, the House 102–97. Jackson signed the legislation into law on May 28, 1830.

In 1830, the majority of the "Five Civilized Tribes"—the Chickasaw, Choctaw, Creek, Seminole, and Cherokee—were living east of the Mississippi as they had for thousands of years. The Indian Removal Act of 1830 implemented the U.S. government policy towards the Indian populations, which called for relocation of Native American tribes living east of the Mississippi River to lands west of the river. While it did not authorize the forced removal of the indigenous tribes, it authorized the President to negotiate land exchange treaties with tribes located in lands of the United States.

Choctaw

On September 27, 1830, the Choctaw signed the Treaty of Dancing Rabbit Creek and became the first Native American tribe to be voluntarily removed. The agreement represented one of the largest transfers of land signed between the U.S. government and Native Americans without being instigated by warfare. By the treaty, the Choctaw signed away their remaining traditional homelands, opening them up for European-American settlement in Mississippi. When the Choctaw reached Little Rock, a Choctaw chief (thought to be Thomas Harkins or Nitikechi) stated to the Arkansas Gazette that the removal was a "trail of tears and death".

In the whole scene there was an air of ruin and destruction, something which betrayed a final and irrevocable adieu; one couldn't watch without feeling one's heart wrung. The Indians were tranquil, but sombre and taciturn. There was one who could speak English and of whom I asked why the Chactas were leaving their country. "To be free," he answered. I could never get any other reason out of him. We ... watch the expulsion ...
of one of the most celebrated and ancient American peoples. —Alexis de Tocqueville, Democracy in America

While the Indian Removal Act made the relocation of the tribes voluntary, it was often abused by government officials. The best-known example is the Treaty of New Echota. It was negotiated and signed by a small faction of Cherokee tribal members, not the tribal leadership, on December 29, 1835. It resulted in the forced relocation of the tribe in 1838. An estimated 4,000 Cherokee died in the march, now known as the Trail of Tears. Missionary organizer Jeremiah Evarts urged the Cherokee Nation to take their case to the U.S. Supreme Court. The Marshall court ruled that while Native American tribes were sovereign nations (Cherokee Nation v. Georgia, 1831), state laws had no force on tribal lands (Worcester v. Georgia, 1832). In spite of the Cherokees' acculturation, many white settlers and land speculators simply desired the land. Some claimed their presence was a threat to peace and security. Some U.S. states, like Georgia in 1830, passed a law which prohibited whites from living on Native American territory after March 31, 1831, without a license from the state. This law was written to justify removing white missionaries who were helping the Native Americans resist removal.

Seminole

In 1835, the Seminole refused to leave their lands in Florida, leading to the Second Seminole War. Osceola led the Seminole in their fight against removal. Based in the Everglades of Florida, Osceola and his band used surprise attacks to defeat the U.S. Army in many battles. In 1837, Osceola was seized by deceit upon the orders of U.S. General T.S. Jesup when Osceola came under a flag of truce to negotiate peace. He died in prison. Some Seminole traveled deeper into the Everglades, while others moved west. Removal continued out west, and numerous wars ensued over land.

Muscogee (Creek)

In the aftermath of the Treaty of Fort Jackson and the Treaty of Washington, the Muscogee were confined to a small strip of land in present-day east central Alabama. Following the Indian Removal Act, in 1832 the Creek National Council signed the Treaty of Cusseta, ceding their remaining lands east of the Mississippi to the U.S., and accepting relocation to the Indian Territory. Most Muscogee were removed to Indian Territory during the Trail of Tears in 1834, although some remained behind.

Friends and Brothers – By permission of the Great Spirit above, and the voice of the people, I have been made President of the United States, and now speak to you as your Father and friend, and request you to listen. Your warriors have known me long. You know I love my white and red children, and always speak with a straight, and not with a forked tongue; that I have always told you the truth ... Where you now are, you and my white children are too near to each other to live in harmony and peace. Your game is destroyed, and many of your people will not work and till the earth. Beyond the great River Mississippi, where a part of your nation has gone, your Father has provided a country large enough for all of you, and he advises you to remove to it. There your white brothers will not trouble you; they will have no claim to the land, and you can live upon it, you and all your children, as long as the grass grows or the water runs, in peace and plenty. It will be yours forever.
For the improvements in the country where you now live, and for all the stock which you cannot take with you, your Father will pay you a fair price ... —President Andrew Jackson addressing the Creek, 1829

Chickasaw

Unlike other tribes, who exchanged land grants, the Chickasaw were to receive mostly financial compensation of $3 million from the United States for their lands east of the Mississippi River. In 1836, the Chickasaw reached an agreement to purchase land from the previously removed Choctaw after a bitter five-year debate. They paid the Choctaw $530,000 for the westernmost part of Choctaw land. The first group of Chickasaw moved in 1837. The $3 million that the U.S. owed the Chickasaw went unpaid for nearly 30 years.

As a result, the five tribes were resettled in the new Indian Territory in modern-day Oklahoma and parts of Kansas. Some indigenous nations resisted forced migration more forcefully. Those few who stayed behind eventually formed tribal groups including the Eastern Band Cherokee, based in North Carolina, the Mississippi Band of Choctaw Indians, the Seminole Tribe of Florida, and the Creeks in Atmore, Alabama.

Southern removals

| Nation | Population east of the Mississippi before removal treaty | Removal treaty | Years of major emigration | Total number emigrated or forcibly removed | Number stayed in Southeast | Deaths during removal | Deaths from warfare |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Choctaw | 19,554 + white citizens of the Choctaw Nation + 6,000 black slaves | Dancing Rabbit Creek (1830) | 1831–1836 | 12,500 | 7,000 | 2,000–4,000+ (cholera) | none |
| Creek | 22,700 + 900 black slaves | Cusseta (1832) | 1834–1837 | 19,600 | 100s | 3,500 (disease after removal) | ? (Second Creek War) |
| Chickasaw | 4,914 + 1,156 black slaves | Pontotoc Creek (1832) | 1837–1847 | over 4,000 | 100s | 500–800 | none |
| Cherokee | … + 2,000 black slaves | New Echota (1835) | 1836–1838 | 20,000 + 2,000 slaves | 1,000 | 2,000–8,000 | none |
| Seminole | 5,000 + fugitive slaves | Payne's Landing (1832) | 1832–1842 | 2,833 | 250–500 | — | 700 (Second Seminole War) |

Many figures have been rounded.

The North

Tribes in the Old Northwest were far smaller and more fragmented than the Five Civilized Tribes, so the treaty and emigration process was more piecemeal. Bands of Shawnee, Ottawa, Potawatomi, Sauk, and Meskwaki (Fox) signed treaties and relocated to the Indian Territory. In 1832, a Sauk chief named Black Hawk led a band of Sauk and Fox back to their lands in Illinois. In the Black Hawk War, the U.S. Army and Illinois militia defeated Black Hawk and his army. The Iroquois were also supposed to be part of the Indian removal, and the Treaty of Buffalo Creek arranged for them to be removed to land in Wisconsin and Kansas. However, the land company that was to purchase the land for the territories reneged on the deal, and subsequent treaties in 1842 and 1857 gave back most of the Iroquois' reservations untouched. Only the Buffalo Creek Reservation was ever dissolved as part of the removal program; a small portion was purchased back over a century later to build a casino.

See also
- Indian removals in Indiana
- Manifest Destiny
- Potawatomi Trail of Death
- Timeline of Cherokee removal
- Daniel Sabin Butrick (Buttrick), walked with the Cherokee Nation on their Trail of Tears
- Yamasee War (1715–1717)

References
- Jefferson, Thomas (1803). "President Thomas Jefferson to William Henry Harrison, Governor of the Indiana Territory". Retrieved 2009-03-12.
- Buckley, Jay. William Clark: Indian Diplomat. University of Oklahoma Press, 2008, p. 193.
- Prucha (1994), pp. 146–165.
- Mahon, John K. History of the Second Seminole War: 1835–1842. University of Florida Press, 1985, pp. 57, 72.
- Kane, Sharyn, & Keeton, Richard. "As Long as Grass Grows". Fort Benning – The Land and the People. SEAC. Retrieved 2010-08-07.
- Watson, Chris. "The Choctaw Trail of Tears". The Bicycling Guitarist. Retrieved 2008-04-29.
- de Tocqueville, Alexis (1835–1840). "Tocqueville and Beaumont on Race". Retrieved 2008-04-28.
- Hoxie, Frederick (1984). A Final Promise: The Campaign to Assimilate the Indians, 1880–1920. Lincoln: University of Nebraska Press.
- Remini, Robert. Andrew Jackson and his Indian Wars, p. 257.
- Burt, Jesse, & Ferguson, Bob (1973). "The Removal". Indians of the Southeast: Then and Now. Nashville: Abingdon Press. pp. 170–173. ISBN 0-687-18793-1.
- Foreman, p. 47 n.10 (1830 census).
- Several thousand more emigrated West from 1844–49; Foreman, pp. 103–4.
- Foreman, p. 111 (1832 census).
- Remini, p. 272.
- Thornton, Russell. "Demography of the Trail of Tears", p. 85.
- Prucha, p. 233.
- Low figure from Prucha, p. 233; high from Wallace, p. 101.
- Lewis, James. "The Black Hawk War of 1832", Abraham Lincoln Digitization Project, Northern Illinois University, p. 2D. Retrieved July 12, 2011.

Bibliography
- Anderson, William L., ed. Cherokee Removal: Before and After. Athens, Georgia: University of Georgia Press, 1991. ISBN 0-8203-1482-X.
- Ehle, John. Trail of Tears: The Rise and Fall of the Cherokee Nation. New York: Doubleday, 1988. ISBN 0-385-23953-X.
- Foreman, Grant. Indian Removal: The Emigration of the Five Civilized Tribes of Indians. Norman, Oklahoma: University of Oklahoma Press, 1932, 11th printing 1989. ISBN 0-8061-1172-0.
- Prucha, Francis Paul. The Great Father: The United States Government and the American Indians. Volume I. Lincoln, Nebraska: University of Nebraska Press, 1984. ISBN 0-8032-3668-9.
- Prucha, Francis Paul. American Indian Treaties: The History of a Political Anomaly. University of California Press, 1994. ISBN 0-520-20895-1.
- Remini, Robert V. Andrew Jackson and his Indian Wars. New York: Viking, 2001. ISBN 0-670-91025-2.
- Satz, Ronald N. American Indian Policy in the Jacksonian Era. Originally published Lincoln, Nebraska: University of Nebraska Press, 1975. Republished Norman, Oklahoma: University of Oklahoma Press, 2002. ISBN 0-8061-4332-1 (2002 edition).
- Thornton, Russell. American Indian Holocaust and Survival: A Population History Since 1492. Norman, Oklahoma: University of Oklahoma Press, 1987. ISBN 0-8061-2074-6.
- Wallace, Anthony F.C. The Long, Bitter Trail: Andrew Jackson and the Indians. New York: Hill and Wang, 1993. ISBN 0-8090-1552-8 (paperback); ISBN 0-8090-6631-9 (hardback).
- Zinn, Howard. A People's History of the United States: American Beginnings to Reconstruction. Vol. 1. New York: New Press, 2003. ISBN 978-1-56584-724-8.

External links
- PBS article on Indian Removal
- Critical Resources: Text of the Removal Act and other documents.
- Indian Removal from Digital History by S. Mintz
http://en.wikipedia.org/wiki/Indian_removal
4.125
Animated Equatorial Mount Tutorial

Many of the telescopes purchased by newcomers to astronomy are supplied with German Equatorial Mounts, or GEMs. While very useful once understood, the motions of a GEM can be baffling at first. The proper way of using this arrangement of two rotating components, with the whole apparatus tilted at an angle, is anything but obvious. The most common sorts of questions about GEMs are "How do I point the telescope at [some part of the sky]?" or "Why is the telescope pointed at the ground when I try to view an object in the south?" There exist some excellent written descriptions of how a GEM operates, but it is a challenge to convey in words the operation of a moving mechanism. These pages are an attempt to pictorially illustrate the motions of a GEM.

An equatorial mount, of which the GEM is but one variety, simplifies the tracking of celestial objects. The motions of the mount compensate for the rotation of the earth, allowing the observer to keep an object in view. With a properly aligned equatorial mount, the telescope need be moved in only one axis to track an object. The two axes of the GEM are known as Right Ascension (RA) and Declination (DEC). When the mount is polar aligned, moving the telescope in RA is all that's necessary to track a celestial object. Movement in both axes will likely be required to place an object in the eyepiece of the telescope, but once found, movement in RA alone will keep the object in view.

When you click one of the links below, you'll see a stop-motion animation of a GEM in operation. All of the animation sequences begin with the telescope in the "Start Position" shown in the photo above, in which the telescope is aimed more or less at Polaris (the North Star) and is positioned directly above the mount.

- Moving to View Near the Zenith
- Moving to View the Southeast Sky
- Moving to View the Southwest Sky
- Moving to View Due South
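To make the idea of RA tracking concrete, here is a small Python sketch (not part of the original tutorial) that computes the rate at which the RA axis must turn to cancel the earth's rotation; the constant and the printed labels are my own illustrative choices.

```python
SIDEREAL_DAY_S = 86164.0905  # one full rotation of the sky relative to the stars, in seconds

# The RA axis must turn 360 degrees per sidereal day to hold a star still in the eyepiece.
deg_per_hour = 360.0 / (SIDEREAL_DAY_S / 3600.0)
arcsec_per_sec = 360.0 * 3600.0 / SIDEREAL_DAY_S

print(f"{deg_per_hour:.3f} deg/hour = {arcsec_per_sec:.3f} arcsec/second")
# ~15.041 deg/hour, the "sidereal rate" a motorized RA drive runs at
```

This is why a single motor on the polar (RA) axis is enough for tracking once the mount is polar aligned: the DEC axis never needs to move.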
http://astronomyboy.com/eq/
4.15625
German: Sports Stats

Connect to Your Teaching

Reflect on Your Practice
As you reflect on these questions, write down your responses or discuss them as a group.
- How do you use modeling when assigning tasks?
- What do you do when students want to go beyond the types of responses you designed the activity to elicit?
- What instructional support is needed for students to be able to work successfully with authentic or content-rich materials?
- What kinds of activities do you use to connect language learning to other curricula such as math or social studies?
- What are some ways to encourage beginning writers to extend their ideas?

Watch Other Videos
Watch other videos in the Teaching Foreign Languages K-12 library for more examples of teaching methodologies like those you've just seen. Note: All videos in this series are subtitled in English. Creating Travel Advice (Spanish) illustrates reading strategies for challenging authentic materials, and Communicating About Sports (Chinese) features students expressing their sports likes and dislikes.

Put It Into Practice
Try these ideas in your classroom.
- Try to communicate in the target language during all classroom activities. When giving directions or explaining activity procedures, model one or two tasks for students to help them understand the assignment. Check for comprehension informally as you proceed. For example, Ms. Garcia showed students how to fill in their graphs by drawing happy, sad, and "so-so" faces and providing written language examples. She also modeled how the class graph should be organized before letting students take the lead. While it may seem faster and easier to give directions in English, in reality, doing so breaks the atmosphere and removes an opportunity for language learning. If students themselves fall back on English, continue to respond in the target language to maintain that atmosphere. Although several times her students asked questions or elaborated in English when their ideas were linguistically too difficult for them, Ms. Garcia continued to respond in German.
- To help students develop interpretive communication skills, design a reading plan for a text or for audio-visual material that leads students through previewing, skimming/scanning, and closer-look activities. Ms. Garcia designed a plan consistent with her school's language arts process; she defined the stages as predicting, scanning, listening, and following along. Begin with a prereading activity that includes predicting, brainstorming, creating a graphic organizer, and/or interpreting visuals. Next, have students skim and/or scan the text to focus on what they understand. This kind of activity gets at meaning while keeping students from getting stuck on what they don't understand. Ms. Garcia called this a "quick read" and asked students to "read like lightning." Once students have a basic understanding of the material, choose how closely you want them to study the text. Ms. Garcia asked students to identify favorite sports in Germany and to compare the sports' popularity in the U.S.
- Give students opportunities to share their language proficiency with the rest of the school. Ms. Garcia's students appeared on their school's televised morning announcements with a skit they had produced. Look for events and venues that give your students the chance to present skits, announcements, or other materials to the school community.
http://www.learner.org/libraries/tfl/german/garcia/connect.html
4.34375
- Use their mathematical knowledge to invent problems
- Devise and use problem solving strategies to explore situations mathematically

When you look at the Copymasters, you will realise that this problem could be used many times during the year in any of the curriculum Strands. Alternatively you might want to mix up the sets of answers and use the problem towards the end of the year.

To be able to make up a problem from scratch like this requires a deeper understanding of the problem than just being able to solve it. Hence this problem will give you both a means of assessing a child’s knowledge of the recent maths that they have done and a way of delving deeper into the subject.

It is likely that students who haven’t been given an exercise like this before will do no better than produce a sum such as 312 + 423 = 735, where 735 is the answer required. There are two directions that you can work from here. The first is to see what other sum they can make that has the answer 735 (perhaps a subtraction, multiplication or division sum). This gives them the opportunity to explore the number 735, it helps to cement their basic number facts, and it gives them a chance to use all of the four arithmetic operations. The second direction is to get them to embed the sum in a story. So you could ask them to put the problem into an actual situation with, say, cakes or money or clothes.

Some students may need to be prompted to help them produce an answer. You may need to recall for them what sums they have done so far. Try to lead them on to word problems. They will probably only be able to mimic these problems. But even that is a start towards deeper learning and understanding. More able students might be extended to problems that need more than one step or might be challenged to produce a problem with an answer such as ‘Hannah’.

This open problem allows the students to use their imagination. It should give them the chance to invent some interesting word problems and to put the mathematics that they have learned so far together in creative ways. If you plan to assess a particular area of mathematics using this problem, then it should probably not be used until the students have become confident in handling that area.

Finally you might like to note that this problem is one of a series of problems that extend from Level 1 to Level 6. It might be useful for you to see how the problems develop. The lessons are You Be The Teacher (Level 1), Make Up Your Own (Level 2), Invent-A-Problem (Level 3), Create a Question (Level 4) and Working Backwards (Level 5). It may be that there are some useful ideas in the other Levels that you can use with your class at this Level.

Choose and open a sealed envelope. Make up a problem whose answer is the one that you have found in the envelope.

There are several ways to approach this problem. If you think that this problem can be tackled easily by many of your class, then adopt Method A. This way you give little in the way of hints to the class as a whole. However, you may need to help some of them individually. Method B is for a class that needs more help, and it begins with a short session to remind the class of some of the things that they have done so far.

Method A: Start with all the students together. Tell them that today they are going to do something different. The aim is to make up their own maths problems. There is only one restriction. That is, the answer has to be the one you are going to give them in a sealed envelope.
The problems that they make up can be anything they like.
- Let the students work on the problem together in small groups. Help the ones who are having trouble. Those who finish quickly could be asked to make up another problem using another envelope.
- When a child has produced a problem, put the problem into a sealed envelope. When you have collected enough problems, the envelopes could be given to other students to solve either straight away or later on.
- You could also give small prizes for the best problem, the funniest problem, and so on. The students could vote for the problems they like best.

Method B: Start with the whole class together.
- Tell them that you would like them to make up some problems of their own today. Ask them what problems they can remember working on.
- Ask the students to give you a number. Then try to get them to make up a problem using that number as the answer.
- Now say that you have some answers in sealed envelopes. You want them to make up a problem that has that number as the answer.
- Let them work in groups to come up with some problems of their own. If they can only produce a sum rather than a problem, then you could get them to find other sums that make up that number or help them to produce a word problem.
- The students’ problems could go into an envelope for later use.
- Those students who finish quickly might like to try to write another problem, solve someone else’s problem or try the Extension problem.
- Pose some of the students’ problems from the sealed envelopes for the whole class to solve.
- You might like to keep some of these problems to use with the class over the next few weeks. You could also give small prizes for the best problem, the funniest problem, and so on. The students could vote for the problems they like best.

Extension: The problem is still the same except that the answers are taken from set 10.

The solutions here will depend on your class. We would like to see some of them so that we could put them on this web site.
http://www.nzmaths.co.nz/resource/working-backwards
4.03125
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common Defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.

1. The Constitution was written in 1787. A group of men, called the Framers, met to write the Constitution. They felt a set of rules was needed to govern the country. Benjamin Franklin, Alexander Hamilton, George Washington, and James Madison were some of the more well-known Framers of the Constitution. The Framers (delegates to the Constitutional Convention) met in Independence Hall in Philadelphia. After much debate and a great deal of hard work, they finally agreed to the words in the Constitution. After the Constitution was written, the states had to approve it. It took some time for that to happen, but all of the states finally did.

2. The United States Constitution divides the government into its three branches: the Executive Branch (the President), the Legislative Branch (Congress), and the Judicial Branch (the courts). The people elect the President, and the President enforces the laws. The people elect the members of Congress, and Congress makes laws. The members of the Supreme Court are appointed by the President and approved by the Senate. The Court decides what the law means when there are questions.

3. The Constitution describes the different powers given to each of these branches of government and talks about how they are supposed to function and work together. The Constitution made sure that no single branch of the government could have too much power. This is called a system of "checks and balances."

4. The Constitution also outlines the procedures for going to war. It states that the President is the commander in chief of the country's armed forces.

5. When the Constitution was written, the Framers knew that future generations would want to make changes. They wanted to make it possible to change the Constitution without needing to resort to revolution. They wanted to be sure the process wasn't too difficult or too easy. To address this issue, the Framers added an amendment process. An amendment to the Constitution is a change that can add to the Constitution or change an older part of it. An amendment can even overturn a previous amendment, as the 21st did to the 18th. There are a few methods to amend the Constitution, but the most common is to pass an amendment through the Congress on a two-thirds vote. After that, the amendment goes to the states, and if three-quarters of the states pass the amendment, it is considered a part of the Constitution and has been ratified. There have been 27 amendments to the Constitution.

6. The first ten amendments to the Constitution are called the Bill of Rights. These 10 amendments guarantee that the citizens of the United States have their rights protected. Here is a list of the Bill of Rights:
Amendment 1 - Freedom of Religion, Press, Speech
Amendment 2 - Right to Bear Arms
Amendment 3 - Quartering of Soldiers
Amendment 4 - Search and Seizure
Amendment 5 - Trial and Punishment, Compensation for Takings
Amendment 6 - Right to Speedy Trial, Confrontation of Witnesses
Amendment 7 - Trial by Jury in Civil Cases
Amendment 8 - Cruel and Unusual Punishment
Amendment 9 - Construction of Constitution
Amendment 10 - Powers of the States and People
7. The United States Constitution was adopted on September 17, 1787, in Philadelphia at the Constitutional Convention. James Madison is known as the Father of the Constitution.

8. The original Constitution included a compromise clause that kept Congress from banning the importation of slaves for twenty years. The failure to resolve the issue of slavery quickly might have led to the Civil War.

9. The law is the set of rules that we live by. The Constitution is the highest law. It belongs to the United States. It belongs to all Americans.

10. The first 10 amendments, the Bill of Rights, were added in 1791. The last amendment was added in 1992. Some of the most famous and important amendments say that black men can vote, that all women can vote, and that the President can only be elected twice. Here is a list of the remaining amendments:
Amendment 11 - Judicial Limits
Amendment 12 - Choosing the President, Vice President
Amendment 13 - Slavery Abolished
Amendment 14 - Citizenship Rights
Amendment 15 - Right to Vote
Amendment 16 - Status of Income Tax Clarified
Amendment 17 - Senators Elected by Popular Vote
Amendment 18 - Alcohol Abolished
Amendment 19 - Women's Suffrage
Amendment 20 - Presidential, Congressional Terms
Amendment 21 - Amendment 18 Repealed
Amendment 22 - Presidential Term Limits
Amendment 23 - Presidential Vote for District of Columbia
Amendment 24 - Poll Taxes Barred
Amendment 25 - Presidential Disability and Succession
Amendment 26 - Voting Age Set to 18 Years
Amendment 27 - Limiting Changes to Congressional Pay
http://www.kidskonnect.com/component/content/article/16-history/437-constitution.html
4.25
Rationale: This lesson is designed to help children learn to identify the sound a letter makes. Letters stand for phonemes, and before a child can distinguish how letters match phonemes, they have to recognize the phonemes first. They will learn to recognize and identify the correspondence i = /i/. By learning this correspondence, they will become more fluent readers.

Materials: A large picture of "Mickey the Insect" without his six legs; six sticky insect legs to be put on the legless Mickey; the book In the Garden (Educational Insights); a piece of paper and a pen/pencil for recording a running record; a list of words to use in the letterbox lesson (2 phonemes: in, it; 3 phonemes: icky, mit, fix; 4 phonemes: risk); letterboxes; laminated letters cut into individual pieces, including i, n, t, k, y, m, f, x, r, s, and the letters c and k taped together to represent the phoneme ck = /k/. You will also need a standard pencil for your student and a worksheet numbered 1-5, with adequate drawing space between numbers; this material will be used in the assessment activity.

1. First, introduce the lesson by explaining that each letter makes its own sound, such as i = /i/. You can do this by saying the representation: "Icky Mickey the Insect." Then allow the student to repeat this representation to you. Demonstrate its use in realistic events, such as an alarm going off: i…i…i… Have the student repeat this to you, ensuring understanding. Then explain to the student how he/she will find the sound i = /i/ in many words, such as "icky" and "insect."

2. Now place two letterboxes in front of the child as well as the ten letters, instructing the child to turn them over to the lower-case side, allowing the child to "help" the teacher out. At this time you should have the picture (body) of Mickey the Insect out, as well as the six legs that will be stuck onto Mickey as the student gets each letterbox word correct. As the words progress through the number of phonemes, the number of letterboxes will increase. Repeat the letterbox lesson with the six words and legs until Mickey the Insect is put back together so he can run back into In the Garden.

3. Next, put the letterboxes away and spell the words for the child, asking him/her to read the words to you. Tell him/her that it is their turn to read the words to you because they did such a good job sounding them out, encouraging and motivating each step of the way.

4. Lastly, give the child the book In the Garden and ask him/her to read it to you. As the child reads this book, take a running record of the child's miscues, allowing you to know which correspondences need to be worked on. Make sure your student holds the book while he/she is reading to you; this is a very important element with beginning readers.

5. For an assessment activity, repeat the representation "Icky Mickey the Insect," and allow the child to repeat after you. This will reinforce the correspondence i = /i/. Then give the child an activity worksheet numbered 1 to 5, with a large drawing space beside each number. For the instructions, tell the student you will be calling out two words for each number, and he/she needs to draw a picture of the word that makes the sound /i/. Get everything organized once again with adequate drawing space and encourage him/her to do their best. "You know you can!" Say: #1, which word says /i/: pig or log? Draw it!
Repeat with the remaining four questions: #2. lap or mit? #3. Mickey (Mouse) or sat? #4. dog or insect? #5. leg or kid?

Reference: Murray, B.A., & Lesniak, T. (1999). The letterbox lesson: A hands-on approach for teaching decoding. The Reading Teacher, 644-650.
http://www.auburn.edu/academic/education/reading_genie/chall/masonel.html
4.0625
Capillary action or capillarity is the ability of a narrow tube to draw a liquid upwards against the force of gravity. It occurs when the adhesive intermolecular forces between the liquid and a solid are stronger than the cohesive intermolecular forces within the liquid. The effect causes a concave meniscus to form where the liquid is in contact with a vertical surface. The same effect is what causes porous materials to soak up liquids.

A common apparatus used to demonstrate capillary action is the capillary tube. When the lower end of a vertical glass tube is placed in a liquid such as water, a concave meniscus forms. Surface tension pulls the liquid column up until there is a sufficient weight of liquid for gravitational forces to overcome the intermolecular forces. The weight of the liquid column is proportional to the square of the tube's diameter, but the contact area between the liquid and the tube is proportional only to the diameter of the tube, so a narrow tube will draw a liquid column higher than a wide tube. For example, a glass tube 0.1 mm in diameter will lift a 30 cm column of water.

With some pairs of materials, such as mercury and glass, the interatomic forces within the liquid exceed those between the solid and the liquid, so a convex meniscus forms and capillary action works in reverse.

The height h in metres of a liquid column is given by:

h = 2T cos θ / (ρgr)

where:
- T = interfacial surface tension (N/m)
- θ = contact angle
- ρ = density of liquid (kg/m³)
- g = acceleration due to gravity (m/s²)
- r = radius of tube (m)

For a water-filled glass tube in air at sea level,
- T = 0.0728 N/m at 20°C
- θ = 20°
- ρ = 1000 kg/m³
- g = 9.80665 m/s²

and so the height of the liquid column is given by:

h ≈ 1.4 × 10⁻⁵ / r

with h and r in metres. A tube of radius 0.05 mm (0.1 mm diameter) thus gives h ≈ 0.28 m, consistent with the 30 cm example above.
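To make the arithmetic above easy to check, here is a small Python sketch (not part of the original encyclopedia entry) that evaluates the capillary-rise formula; the function name and argument order are my own illustrative choices.

```python
import math

def capillary_rise(T, theta_deg, rho, r, g=9.80665):
    """Height (m) of a liquid column in a narrow tube: h = 2*T*cos(theta) / (rho*g*r)."""
    return 2.0 * T * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water in a glass tube of 0.1 mm diameter (r = 0.05 mm), using the values above:
h = capillary_rise(T=0.0728, theta_deg=20.0, rho=1000.0, r=0.05e-3)
print(f"h = {h:.3f} m")  # ~0.279 m, matching the 30 cm column quoted above
```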
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Capillary_action
4
The Kalsun Math Tool for Addition and Subtraction is a simple educational device, specially designed with patented features, for early use by children from pre-KG to U.K.G. It is also useful for students in special education. The purposes of the tool are to help children:
- Learn and develop a strong number sense
- Understand properties of addition
- Recognise patterns and relationships among numbers
- Enhance memory and mental skills through building concepts
- Develop algebraic thinking

The colourful and student-friendly device presents the number sequence on slidable and rotatable blocks that serve as the reckoning slide on the top row, and provides a set of movable blocks on the bottom slide that are aligned neatly with the blocks on the top row in matching colours to help with counting and solving problems. This design helps build a math foundation in children and improve classroom performance by:
- Combining all modes of learning - kinesthetic, tactile, and visual
- Minimising errors associated with loose pieces and use of fingers
- Helping to visualise patterns and relationships with ease

- Children can explore one or more numbers on a daily basis, master the addition facts of each number, and learn the various combinations of arriving at the number in a systematic way.
- Once the children are taught the use of the tool for the first few numbers, they can use it themselves for learning other numbers and develop number sense on their own.
- The repeated use of the tool will give them a clear understanding of the numbers, thereby enabling them to memorise addition and subtraction facts with ease, in turn encouraging them to love math.
- The use of the tool helps with addition and subtraction problems, and develops mental arithmetic.
http://jayavidya.com/kalfun.php
4.46875
1929: A Turning Point During the Weimar Republic

It is 1929, and the misery that had aided the efforts of Weimar’s enemies in the early 20s has been relieved by five years of economic growth and rising incomes. Germany has been admitted to the League of Nations and is once more an accepted member of the international community. Certainly the bitterness at Germany's defeat in the Great War and the humiliation of the Treaty of Versailles have not been forgotten, but most Germans appear to have come to terms with the new Republic and its leaders.

Gustav Stresemann has just died. Germany has, in part as a result of his efforts, become a respected member of the international community again. Stresemann often spoke before the League of Nations. With his French and American counterparts Aristide Briand and Frank Kellogg, he had helped negotiate the Paris Peace Pact, which bore the names of his fellow diplomats: Kellogg-Briand. Once again Gustav Stresemann had decided to take on the arduous job of leading a battle for a policy he felt was in his nation’s vital interest, even though he was tired and ill and knew that the opposition would be stubborn and vitriolic. Stresemann was the major force in negotiating and guiding the Young Plan through a plebiscite. This plan, although opposed by those on the right wing, won majority approval and further reduced Germany’s reparations payments.

How had Weimar Germany become by 1929 a peaceful, relatively prosperous, and creative society, given its chaotic and crisis-ridden beginnings? What significant factors contributed to the survival and success of the Republic? What were the Republic’s vulnerabilities, which would allow its enemies to undermine it in the period between 1929 and 1933?

The Weimar Republic was a bold experiment. It was Germany's first democracy, a state in which elected representatives had real power. The new Weimar constitution attempted to blend the European parliamentary system with the American presidential system. In the pre-World War I period, only men twenty-five years of age and older had the right to vote, and their elected representatives had very little power. The Weimar constitution gave all men and women twenty years of age and older the right to vote. Women made up more than 52% of the potential electorate, and their support was vital to the new Republic.

From a ballot which often had thirty or more parties on it, Germans chose legislators who would make the policies that shaped their lives. Parties spanning a broad political spectrum, from Communists on the far left to National Socialists (Nazis) on the far right, competed in the Weimar elections. The Chancellor and the Cabinet needed to be approved by the Reichstag (legislature) and needed the Reichstag's continued support to stay in power. Although the constitution makers expected the Chancellor to be the head of government, they included emergency provisions that would ultimately undermine the Republic. Gustav Stresemann was briefly Chancellor in 1923 and for six years foreign minister and close advisor to Chancellors.

The constitution gave emergency powers to the directly elected President and made him the Commander-in-Chief of the armed forces. In times of crisis, these presidential powers would prove decisive. During the stable periods, Weimar Chancellors formed legislative majorities based on coalitions primarily of the Social Democrats, the Democratic Party, and the Catholic Center Party, all moderate parties that supported the Republic.
However, as the economic situation deteriorated in 1930 and many disillusioned voters turned to extremist parties, the Republic's supporters could no longer command a majority. German democracy could no longer function as its creators had hoped. Ironically, by 1932 Adolf Hitler, a dedicated foe of the Weimar Republic, was the only political leader capable of commanding a legislative majority. On January 30, 1933, an aged President von Hindenburg reluctantly named Hitler Chancellor of the Republic. Using his legislative majority and the support of Hindenburg's emergency presidential powers, Hitler proceeded to destroy the Weimar Republic.

Germany emerged from World War I with huge debts incurred to finance a costly war of more than four years. The treasury was empty, the currency was losing value, and Germany needed to pay its war debts and the huge reparations bill imposed on it by the Treaty of Versailles, which officially ended the war. The treaty also deprived Germany of territory, natural resources, and even ships, trains, and factory equipment. Her population was undernourished and contained many impoverished widows, orphans, and disabled veterans. The new German government struggled to deal with these crises, which had produced a serious hyperinflation. By 1924, after years of crisis management and attempts at tax and finance reform, the economy was stabilized with the help of foreign, particularly American, loans. A period of relative prosperity prevailed from 1924 to 1929. This relative "golden age" was reflected in the strong support for moderate pro-Weimar political parties in the 1928 elections. However, economic disaster struck with the onset of the world depression in 1929. The American stock market crash and bank failures led to a recall of American loans to Germany. This development added to Germany's economic hardship. Mass unemployment and suffering followed. Many Germans became increasingly disillusioned with the Weimar Republic and began to turn toward radical anti-democratic parties whose representatives promised to relieve their economic hardships.

Rigid class separation and considerable friction among the classes characterized pre-World War I German society. Aristocratic landowners looked down on middle- and working-class Germans and only grudgingly associated with wealthy businessmen and industrialists. Members of the middle class guarded their status and considered themselves to be superior to factory workers. The cooperation between middle- and working-class citizens, which had broken the aristocracy's monopoly of power in England, had not developed in Germany. In Weimar Germany, class distinctions, while somewhat modified, were still important. In particular, the middle class battled to preserve their higher social status and monetary advantages over the working class. Ruth Fischer wanted her German Communist Party to champion the cause of the unemployed and unrepresented. Gender issues were also controversial, as some women's groups and the left-wing political parties attempted to create more equality between the sexes. Ruth Fischer struggled to keep the Communist Party focused on these issues. As the Stalinists forced her out of the party, the Communists lost this focus.
The constitution mandated considerable gender equality, but tradition and the civil and criminal codes were still strongly patriarchal and contributed to perpetuating inequality. Marriage and divorce laws and questions of morality and sexuality were all areas of ferment and debate.

Weimar Germany was a center of artistic innovation, great creativity, and considerable experimentation. In film, the visual arts, architecture, craft, theater, and music, Germans were in the forefront of the most exciting developments. The unprecedented freedom and widespread latitude for varieties of cultural expression led to an explosion of artistic production. In the Bauhaus arts and crafts school, in the studios of the film company UFA, in the theater of Max Reinhardt, and in the studios of the New Objectivity (Neue Sachlichkeit) artists, cutting-edge work was being produced. While many applauded these efforts, conservative and radical right-wing critics decried the new cultural products as decadent and immoral. They condemned Weimar Germany as a new Sodom and Gomorrah and attacked American influences, such as jazz music, as contributors to the decay.

Weimar Germany had a population that was about 65% Protestant, 34% Catholic, and 1% Jewish. After German unification in 1871, the government had strongly favored the two major Protestant Churches, Lutheran and Reformed, which thought of themselves as state-sponsored churches. At the same time, the government had harassed and restricted the Catholic Church. Although German Catholics had only seen restrictions slowly lifted in the pre-World War I period, they nevertheless demonstrated their patriotism in World War I. German Jews, who had faced centuries of persecution and restriction, finally achieved legal equality in 1871. Jews also fought in record numbers during World War I, and many distinguished themselves in combat. Antisemites refused to believe the army's own figures and records and accused the Jews of undermining the war effort. The new legal equality of the Weimar period did not translate into social equality, and the Jews remained the "other" in Germany.

Catholics and Jews both benefited from the founding of the Weimar Republic. Catholics entered the government in leadership positions, and Jews participated actively in Weimar cultural life. Many Protestant clergymen resented the loss of their privileged status. While many slowly accepted the new Republic, others were never reconciled to it. Both Protestant and Catholic clergy were suspicious of the Socialists, who were a part of the ruling group in Weimar and who often voiced Marxist hostility toward religion. Conflicts over religion and education and over religion and gender policies were often intense during the Weimar years. The growth of the Communist Party in Germany alarmed Protestant and Catholic clergy, and the strong support the Catholic Center Party had given to the Republic weakened in the last years of the Republic. While Jews had unprecedented opportunities during the Weimar period, their accomplishments and increased visibility added resentment to long-standing prejudices and hatreds and fueled a growing antisemitism.
http://weimar.facinghistory.org/content/1929-turning-point-during-weimar-republic
4.25
Brown dwarfs are objects which are too large to be called planets and too small to be stars. They have masses that range between twice the mass of Jupiter and the lower mass limit for nuclear reactions (0.08 times the mass of our sun). Brown dwarfs are thought to form in the same way that stars do - from a collapsing cloud of gas and dust. However, as the cloud collapses, it does not form an object which is dense enough at its core to trigger nuclear fusion. The conversion of hydrogen into helium by nuclear fusion is what fuels a star and causes it to shine. Brown dwarfs were only a theoretical concept until the first ones were discovered in the mid-1990s. It is now thought that there might be as many brown dwarfs as there are stars.

[Artist's rendition of a brown dwarf: Philip Lucas (Univ. Hertfordshire) and Patrick Roche (Univ. Oxford), UKIRT]

Brown dwarfs are very dim and cool compared with stars. The best hope for finding brown dwarfs is in using infrared telescopes, which can detect the heat from these objects even though they are too cool to radiate visible light. Many brown dwarfs have also been discovered embedded in large clouds of gas and dust. Since infrared radiation can penetrate through the dusty regions of space, brown dwarfs can be discovered by infrared telescopes, even deep within thick clouds. 2MASS (Two Micron All Sky Survey) data revealed the coolest known brown dwarf. To the left is an infrared image of the Trapezium star cluster in the Orion Nebula. This image was part of a survey done at the United Kingdom Infrared Telescope (UKIRT) in which over 100 brown dwarf candidates were identified in the infrared.

The discovery of objects like brown dwarfs will also give astronomers a better idea about the fate of our universe. The motion of the stars and galaxies is influenced by material which has not yet been detected. Much of this invisible dark matter, which astronomers call "missing mass", could be made up of brown dwarfs. Our universe is currently expanding, due to the Big Bang. If there is enough mass, it is thought that the expansion of the universe will eventually slow down and then the universe will start collapsing. This scenario could mean that the universe goes through an endless cycle of expansions and contractions, with a new Big Bang occurring every time the universe ends its collapse. If there is not enough mass for the universe to collapse, then it will expand forever. We will only know the fate of the universe when we can accurately estimate how much mass the universe has in it. The detection of missing-mass objects such as brown dwarfs will likely be a key to answering this question.

[Artist's rendition by Robert Hurt, IPAC]

Brown dwarfs were only a theoretical concept when the Spitzer Space Telescope was first proposed. Since the mid-1990s, various infrared telescopes and surveys have identified a few hundred of these objects. Spitzer will devote much of its time to the discovery and characterization of brown dwarfs. It is expected that Spitzer will study thousands of these objects, including those only slightly larger than Jupiter. This will provide astronomers with enough data on brown dwarfs for good-quality statistical studies.
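As a quick sense of scale for the mass range quoted above, here is a tiny Python sketch (not from the original article) converting the hydrogen-fusion limit into Jupiter masses; the solar-to-Jupiter mass ratio of roughly 1047 is an assumption I am supplying.

```python
SUN_TO_JUPITER_MASS = 1047.0      # approximate ratio M_sun / M_jupiter

lower_bound_mjup = 2.0                         # the article's lower bound
upper_bound_mjup = 0.08 * SUN_TO_JUPITER_MASS  # fusion limit of 0.08 M_sun

print(f"Brown dwarf mass range: {lower_bound_mjup:.0f} to "
      f"{upper_bound_mjup:.0f} Jupiter masses")  # roughly 2 to 84 M_jup
```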
http://coolcosmos.ipac.caltech.edu/cosmic_classroom/cosmic_reference/brown_dwarfs.html
4.21875
Palaeontology is the study of fossils and what they reveal about the history of our planet. In marine environments, microfossils collected within layers of sediment cores provide a rich source of information about the environmental history of an area.

Microfossils are fossils that generally are smaller than 4 mm in length. There are a range of microfossils in marine sediments, including:
- Calcareous microfossils, e.g. foraminifera, ostracods, coccoliths, pteropods
- Siliceous microfossils, e.g. diatoms, radiolaria, spicules
- Organic microfossils, e.g. pollen, spores, dinoflagellate cysts

Different organisms and species have specific distributions that are related to environmental factors such as:
- water depth
- temperature
- salinity
- light levels
- oxygen content
- sediment type
- food sources, and
- current strength.

Mapping the occurrence and abundance of microfossil species down sediment cores collected from the ocean reveals changes in environmental conditions through time, when combined with knowledge of how organisms are related to the different environmental factors.

Planktonic foraminifera can be considered to be the thermometers of the oceans. Living in the near-surface layers of the ocean, these tiny calcareous animals are very particular about the temperatures they will tolerate. The association of different species of foraminifera with specific temperature ranges makes these organisms a very powerful tool for palaeoclimate reconstructions.

Calcareous organisms, such as planktonic and benthic foraminifera, are incredibly useful sources of chemical information. Analysis of the carbonate chemistry of the foraminiferal test (or shell) reveals changes in the isotopic composition of the sea water at the time the organism was growing. The carbon isotope ratio is linked to changes in biological productivity and therefore nutrient concentration. The oxygen isotope ratio tells us about past changes in temperature and global sea level.

Knowledge about the timing of the appearance and disappearance of certain microfossils can also be used, in conjunction with their distribution and abundance in cores, to assign ages to the different sediments. Fossils which are known to have occurred over a relatively short time span and are widespread, such as ammonites, are particularly useful as stratigraphic markers.

The carbonate chemistry of calcareous organisms can also be useful for aging sediment through radiometric dating. By comparing the 14C isotopes of the sample to the known natural decay rate of 14C in the atmosphere, an age for the organism, and hence the sediment layer in which it was deposited, can be obtained for up to 60 000 years before present.

Geoscience Australia is using palaeontology to characterise and understand modern seabed environments. It is also used to develop an understanding of past environmental changes as context for predicting future changes to the ocean environment. Recent areas of research include:
- The history of Antarctic ice sheets and shelf communities (see also Antarctica): analysis of a sediment core collected beneath the Amery Ice Shelf a hundred kilometres from open water provided the first glimpse of this seabed environment and its biological community.
- Patterns in the diversity and ecology of biological communities (see also Surrogacy): investigation of the distribution of species of benthic foraminifera in surface sediments from Torres Strait revealed the range of seabed environments in the area which act to structure these benthic communities.
Analysis of the composition of surface sediments in Torres Strait also provided important baseline information on sediment sources, which helped to develop a better understanding of the factors influencing seagrass dieback.
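To illustrate the radiocarbon arithmetic mentioned above, here is a minimal Python sketch (mine, not from the original page). It uses the conventional Libby half-life of 5,568 years that radiocarbon ages are reported against, and real dates would then be calibrated against curves such as IntCal, so treat the function and its names as illustrative.

```python
import math

LIBBY_HALF_LIFE_YEARS = 5568.0  # conventional 14C half-life used for reporting ages

def radiocarbon_age(fraction_modern):
    """Conventional 14C age (years BP) from the measured 14C activity,
    expressed as a fraction of the modern reference ratio."""
    mean_life = LIBBY_HALF_LIFE_YEARS / math.log(2.0)  # ~8,033 years
    return mean_life * math.log(1.0 / fraction_modern)

# A foraminifera sample retaining 25% of the modern 14C ratio:
print(f"{radiocarbon_age(0.25):,.0f} years BP")  # ~11,136 years, i.e. two half-lives
```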
http://www.ga.gov.au/marine/disciplines/palaeontology.html
4.0625
Curriculum quality is a key element of IDRA’s Quality Schools Action Framework (Robledo Montecel, 2005). IDRA believes that this key element has to be in place to ensure a quality education for all students, in all content areas, in all schools and at all grade levels.

When you think of quality mathematics curricula, what do you envision? Massachusetts Institute of Technology professor and world-renowned mathematician and educator Seymour Papert asks us to think of curricula in a new way, replacing a system where students learn something on a scheduled day with one where they learn something when they need it, in an environment that shows meaning and gives context as to why it is being learned. It is student-centered, where students use what they are learning (Curtis, 2001).

Think for a moment what you would expect… teachers doing and saying; students doing and saying; and parents doing and saying. Reflect on the outcomes and possibilities that would unfold for students, families, teachers and the community if all schools had a quality mathematics curriculum in place.

Standard of Quality Math Curriculum

The National Council of Teachers of Mathematics includes in its Principles and Standards for School Mathematics the curriculum principle: “A curriculum is more than a collection of activities: it must be coherent, focused on important mathematics and well articulated across the grades” (2000). This principle provides a framework in which to make instructional decisions and policies that impact student success and achievement in mathematics. A quality mathematics curriculum must be vertically aligned, connecting and building upon concepts within and across grade levels, engaging students in meaningful mathematics where they see the value of learning the concepts, and facilitating the development of a student’s productive disposition toward mathematics (NCTM, 2000; Kilpatrick, et al., 2001).

Throughout Texas, school districts have invested many resources in creating a variety of curricula in attempts to meet national and state standards. A shift has occurred over the past decade from the optional use of course curricula to a more pervasive and monitored use. Although use of district mathematics curricula is more often the case than not, the quality of such curricula spans many levels:
- Mediocre test-driven curriculum where the only expectation for students is to pass a punitive, high-stakes standardized test;
- Scripted lessons and timelines detailing verbatim what teachers will say and dictating what materials will be used, leaving no room for teacher creativity or student investigation; or
- Highly challenging and engaging curriculum that is standards-driven and that values teachers’ professional expertise and values students as mathematics learners.

Sample Process Used in Math Smart!

IDRA models the development of highly challenging and engaging curricula through its Math Smart! program. Math Smart! integrates the Five Dimensions of Mathematical Proficiency with strategies for engaging students, dynamic technology tools for building and deepening student mathematical thinking, strategies for supporting English language learners, and strategies for engaging and valuing parents through a variety of methods. The process is outlined below.

Planning with Teachers

Planning sessions are an opportunity for mathematics teachers to reflect on math concepts and their teaching practice. In a planning session IDRA held with Math Smart!
Algebra I teachers at one school, teachers reviewed the timeline and discussed how they were exploring the concepts of quadratic functions, finding roots, maximum and minimum values, and evaluating the functions with their students. Teachers wanted to put into practice elements of the Math Smart! program in the curriculum and lead into polynomials and polynomial properties. What resulted was a deep discussion on how to bring to life quadratic functions, roots and maximums through kicking a soccer ball or football and using physics. A plan for integrating non-traditional, brain-researched teaching strategies where students discover and present their own methods for simplifying polynomials, finding roots and real-life applications was also developed from the discussion among teachers. Planning that reflects the teaching practice where teachers also explore the actual concepts is an integral part of building a quality mathematics curriculum. Curriculum development becomes a collaborative effort and parallels what we want to happen in the classroom, where communication and discovery are two-way: students and teachers participating in conversations about mathematical ideas. Thus, quality curriculum development integrates the teacher and the reflection on the teaching practice and mathematics, where district content specialists and teachers participate in collaborative curriculum development. Curriculum that Engages Students Taking what they had planned, teachers developed an activity that engaged students from the moment the bell rang. The following is a sample from one classroom. Lesson Introduction: Engaging Students – The teacher began the class by telling students that if she knew how long a football they kicked was in flight, she could figure out exactly how high that ball went without having to chase the ball with a meter stick and ladder. None of her students believed her, and they asked her to “prove it.” She proceeded to show them a video that she had downloaded from the United Streaming Video resource (that her school has a subscription to) of classic football games and soccer kicks. Students worked in groups of three, beginning with a warm-up activity (see box below) that included a timed brainstorm about quadratic functions in their everyday lives. She asked students to sketch a graph of the football in motion from the video. As a closing to the introduction part of the lesson and to describe the next part of the lesson, she showed a humorous video of how “not” to kick the football. Humor, not sarcasm, is a highly effective strategy for engaging students. Students were eager to take on the task of finding their own quadratic functions for their kicks. Experiencing Quadratic Functions – Using soccer balls and stopwatches and working in groups of three outside, students kicked the ball and recorded the times the ball was in motion (see activity below). Many questions about how their graph would change surfaced as they were experiencing mathematics in motion. Students wondered about how the graph would differ if they kicked the ball straight up versus across the field, and what if they kicked the ball off the ground versus when it is on the ground. Every student was engaged in the activity. Part of the success of this is attributed to the physicality of the activity. Students were outside of their sterile classroom, and the soccer field became their lab.
The act of doing something helps students remember properties of quadratics, what the roots mean, what the maximum/minimum mean and what happens when we change any of the parameters. They have something to tie it to. When a student is taking a state-mandated test and comes across a problem asking about the change in a parameter, what will the student call upon – an exact equation that she worked on or the experience that explored what happens if the ball was kicked 0.5 meters off of the ground, and how it would affect the graph, the equation, and the maximum value? This was an activity that students found valuable. Many of the students were involved in sports and were able to relate their life experiences to quadratic functions. Bringing it Back to the Classroom – After collecting the data and taking a much-needed water break, students went back to the classroom and began using a well-known quadratic function for finding vertical distance to find their own quadratic functions. Using cognitively-guided instruction techniques and building academic language from students’ natural language, the teacher was impressed and energized by how students were able to connect to the meaning of the coefficients and constants for initial velocity (v₀), initial height (h₀), and the dependent variable, vertical distance (d). Students discussed in groups and shared with the whole group the meaning of the roots and the maximum in their own graphs, connecting them to their real-life application. Students said such things as: “The first root is where time and distance are both 0, or the origin, because I had not kicked the ball yet, and the second root is when the ball landed, and also the vertical distance is 0. This connects to when our teacher explained that the roots are where the parabola crosses the x-axis.” Another student explained initial velocity as how fast the ball is going at kick-off, but then the ball slows down because it is going up but gains speed as it is coming back down and will reach that velocity again right at the moment it lands. These are highly complex mathematical ideas that students so readily explained as the meaning of the function d = -5t² + v₀t + h₀ was being explored in conjunction with the graphs they had sketched. It also enabled the teacher to bring in the idea of instantaneous rate of change, a concept formally presented in Calculus I, to her Algebra I students. This teacher has the expectation for all of her students to go on to Calculus I. It shows in statements she makes, such as, “When you get to calculus, you will hear the term ‘instantaneous rate of change’ to describe how fast the ball is going along the path.” Finding the Functions and Making Conjectures – Students readily volunteered to present to and get guidance from each other in trying to figure out how they would first calculate the initial velocity, as it was easy to find the initial height (which was 0 because the ball was on the ground when it was kicked). One student volunteered that even though he “didn’t know what to do,” he would “get help from the class.” The class eagerly helped him, justifying and bringing in ways that they knew how to “do the math” (i.e., solve equations to find the initial velocity given the time and the vertical distance after t number of seconds). Once students found the initial velocity, they were able to write their very own quadratic function describing their own kicks.
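For readers who want to reproduce the students’ computation, here is a minimal Python sketch of it. It is an illustration, not part of the IDRA lesson materials: the d = -5t² + v₀t + h₀ model is taken from the article, while the function name and the 2.4-second flight time are made up for the example.

```python
# A minimal sketch of the computation described above, using the lesson's
# model d = -5t^2 + v0*t + h0 (metres and seconds; the -5 approximates -g/2
# with g ≈ 10 m/s^2). The 2.4-second flight time is a made-up example value.

def kick_parameters(flight_time, h0=0.0):
    """Return (v0, max_height) for a ball landing flight_time seconds after the kick."""
    # The ball lands when d = 0 again: 0 = -5*T**2 + v0*T + h0, so v0 = 5*T - h0/T.
    v0 = 5 * flight_time - h0 / flight_time
    # The vertex of the parabola (maximum height) occurs at t = v0 / 10.
    t_peak = v0 / 10
    max_height = -5 * t_peak**2 + v0 * t_peak + h0
    return v0, max_height

v0, peak = kick_parameters(2.4)
print(f"initial velocity = {v0:.1f} m/s, maximum height = {peak:.1f} m")
# initial velocity = 12.0 m/s, maximum height = 7.2 m
```

With a 2.4-second hang time and a kick from the ground, the model gives an initial velocity of 12 m/s and a maximum height of 7.2 m, the kind of answers the students derived from their own field data.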
Students Challenging Students – The beauty of mathematics is in the “what if’s” – variables changing, parameters and coefficients changing, and analyzing what it all means and how it applies. Using an engaging activity paves the way for students to begin thinking of “what if” questions. It gives them the experience of mathematics. As indicated above, students began asking the “what if” questions when they were out in the field collecting data. It was natural for them to do this, without being prompted by the teacher. Students were able to answer their questions using their graphing calculators, quadratic functions and natural mathematical reasoning abilities. In closing the activity, students had to present one of the quadratic functions from the group, indicating the roots, the maximum height of the ball, why they chose that kick, and a what if question to their fellow classmates. Some of the questions included: what if we were on another planet where the gravity is not so strong, what do you think the graph would look like? And, what if I kicked the ball at a faster initial velocity and it was three feet off the ground, how would my equation change? As a result of planning and of teachers’ experiencing with students a highly challenging and engaging activity that had them involved in mathematical conversations, these Math Smart! teachers wanted to continue to contribute to the district curricula and include collaboration and teaching practice reflections as an ongoing way of ensuring a quality mathematics curriculum for their students. IDRA and the teachers were able to explore a model for creating a quality mathematics curriculum: reflecting on current curricula, sharing ideas on how to get students involved and appeal to their interests so they find mathematics valuable, using available resources, breaking out of traditional one-way conversations into two-way conversations with students about the mathematics, and realizing that as time and technologies change, so too will the curriculum. Quality curriculum is dynamic; involves teacher practitioners in ongoing reflection, development and refinement; values students’ experiences and the knowledge they bring; and is rigorous and vertically aligned so that students are not only prepared to enter higher-level mathematics courses, but also experience higher-level mathematics within their current courses. Curtis, D. Start With the Pyramid (San Rafael, Calif.: The George Lucas Educational Foundation, 2001), http://www.edutopia.org/php/article.php?id=Art_884&key=037. Kilpatrick, J., and J. Swafford, B. Findell (Eds). Adding it Up: Helping Children Learn Mathematics (Washington, D.C.: National Research Council Mathematics Learning Study Committee, November 2001). National Council of Teachers of Mathematics. Principles and Standards for School Mathematics (Reston, Va.: National Council of Teachers of Mathematics, 2000). Robledo Montecel, M. “A Quality Schools Action Framework – Framing Systems Change for Student Success,” IDRA Newsletter (San Antonio, Texas: Intercultural Development Research Association, November-December 2005). Kathryn Brown is the technology coordinator in the IDRA Division of Professional Development. Comments and questions may be directed to her via e-mail at
http://www.idra.org/IDRA_Newsletter/April_2006_Curriculum_Quality/_Re-Invigorating_Math_Curricula/
4.25
All ionic compounds, and therefore most solids, are crystals. That is, they consist of a regular, infinitely repeated array of cations and anions: a crystal structure. The arrangement of this crystal structure is not the same for all ionic compounds; rather, it varies depending on the nature (principally the size) of the ions which form the compound. The crystal structures of many ionic solids can be rationalized into about eight principal types. First it is best to review the basics of crystallography. The structures of all crystalline solids are described by a regular array of atoms (or ions). For every structure there is a smallest possible set of atoms, the unit cell, which when infinitely reproduced completely describes the structure. The unit cell is in turn described by a Bravais lattice, an array of points in 3-D space which has the property that every point has an identical surrounding arrangement of points. The Bravais lattice is a mathematical abstraction, and a central concept not only in crystal chemistry but in geometry too, so you will often find basic crystallography treated as more of a mathematical than a chemical subject. There are seven shapes of unit cell, as defined by the angles and relative lengths within the 3-D Bravais lattice: these shapes are cubic, hexagonal, tetragonal, rhombohedral, orthorhombic, monoclinic and triclinic. For definitions please see the nodes in each case. The lattice points defining a unit cell can take one of four types of centering systems: primitive (present only at the vertices), c-centred (the vertices and the centres of two opposing faces), body-centred (the vertices and the centre), and face-centred (the vertices and the centres of all faces). Some of the seven shapes can have more than one of these systems, so that there are a total of fourteen distinct Bravais lattices. The above material is well covered on E2, but dispersed over a number of un-indexed nodes: see lattice, Bravais lattice, unit cell, crystal, crystallographic groups, crystal classes, crystal systems. It is often feasible to idealize atoms and ions as hard spheres, just like a baseball or tennis ball. Many elemental solids can be described by arrangements of such atoms in one of the fourteen types above: for example, nickel and copper have a face-centred cubic arrangement, iron is body-centred cubic under standard conditions, and magnesium and zinc are hexagonal. However it is also possible to describe many binary ionic solids by a simple expansion of this system. An important concept in simple ionic structures is holes: the empty space between atoms or ions in their crystallographic arrangement. It's quite easy to visualize that, if you pack spheres together as closely as possible, you cannot get them to fill all the space. When atoms are packed with an optimum level of efficiency - occupying the most possible space - it is called close packing. As it happens, two arrangements which both show close packing are face-centred cubic (which is also called cubic close packed) and hexagonal. It can be shown mathematically that cubic close packing, and less straightforwardly that hexagonal close packing, both have an efficiency of 74%. These are the most important lattice types for simple ionic crystallography. The atoms in these arrangements leave between them two important types of hole: tetrahedral and octahedral. A tetrahedral hole is the space between four atoms which describe a tetrahedron; n close-packed atoms leave 2n tetrahedral holes.
An octahedral hole is the space between six atoms describing an octahedron; n close-packed atoms leave n octahedral holes. Many ionic solids have crystal structures which can be rationalized as a cubic close packed (CCP) or hexagonal close packed (HCP) arrangement of one type of ion, with the counterions occupying a certain proportion of the holes. The actual structure adopted by a particular compound depends principally on the relative sizes of the ions. If the anions and cations have very different sizes, it will be possible to fit one into the holes in the close-packed arrangement of the other; otherwise close-packing may not be possible. On the other hand, there is a considerable degree of variation, in some cases even in the same compound. The main crystal structures for simple ionic compounds are as follows, with examples of compounds which adopt them:
- Rock-salt: CCP arrangement of the anion with the cation in all the octahedral holes. Named after NaCl, also adopted by KBr, CaO, ScN, and many others.
- Fluorite: CCP arrangement of the cation with the anion in all the tetrahedral holes. Named after fluorite, CaF2, also adopted by BaCl2, PbO2 and others. The reverse, antifluorite, is adopted by K2O, Na2S, and others.
- Zinc blende: CCP arrangement of the anion with the cation in half of the tetrahedral holes. Named after the blende form of zinc sulphide, ZnS; also adopted by CuCl, CdS and others.
- Wurtzite: HCP arrangement of the anion with the cation in half the tetrahedral holes. Named after the wurtzite form of ZnS; also adopted by ZnO, SiC and others.
- Nickel arsenide: HCP arrangement of the anion with the cation in all the octahedral holes. Also adopted by FeS, PtSn and others.
- Rutile: HCP arrangement of the anion with the cation in half the octahedral holes. Named after the rutile form of titanium dioxide, TiO2, and also adopted by WO2, MgF2 and others.
- Cadmium iodide: HCP arrangement of the anion with the cation in the octahedral holes of alternate layers.
- Caesium chloride: Primitive cubic arrangement of one type of ion with the counterion at the body centre. This structure is not close-packed, but consists of interlocking cubes of each type of ion.
Reference: Shriver, D.F. and Atkins, P.W. Inorganic Chemistry (third edition). 2001, Oxford University Press.
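As a numerical footnote to the 74% close-packing figure and the hole counts quoted above, the following Python sketch (my own check, not from the writeup or from Shriver and Atkins) derives the packing fraction from the standard FCC geometry, in which spheres touch along the face diagonal of the cubic cell.

```python
from math import pi, sqrt

# FCC (cubic close packed): 4 atoms per cubic unit cell, spheres touching
# along the face diagonal, so 4r = a*sqrt(2) and a = 2*sqrt(2)*r.
r = 1.0                          # sphere radius (arbitrary units)
a = 2 * sqrt(2) * r              # cube edge length
atoms_per_cell = 4
occupied = atoms_per_cell * (4 / 3) * pi * r**3
print(f"packing fraction: {occupied / a**3:.4f}")   # 0.7405 -> the 74% above

# Hole bookkeeping for n close-packed atoms: n octahedral, 2n tetrahedral.
n = atoms_per_cell
print(f"octahedral holes per cell: {n}, tetrahedral holes per cell: {2 * n}")
```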
http://everything2.com/user/HexFailure/writeups/ionic+compound
4.15625
Causes of a Natural Hazard This title explores the structure of the earth and what is below the surface. We break down the earth into the three layers of core, mantle and crust to explore what is within. The surface of the earth is not one solid mass; it is broken up into pieces known as 'plates'. It is at the edges of these plates that we get the most activity and some interesting geographical features. The edges of the plates are known as plate boundaries, or margins, and we will examine the four main types of plate margins. This title also explores the natural hazards commonly found at each type of plate margin, such as earthquakes and volcanoes. Curriculum and Exam Board Information
- Convergent, divergent and conservative boundaries; cross-section diagrams of the boundaries to show main features
- How movement leads to earthquakes and volcanoes
- The impact of human activities such as deforestation
- The impact of human activities such as grazing
- The impact of human activities such as urbanisation
- The severity, frequency and duration of tectonic, atmospheric and terrestrial hazards, long term hazards such as global warming
- The severity, frequency and duration of tectonic, atmospheric and terrestrial hazards, medium term local hazards such as forest fire
- The severity, frequency and duration of tectonic, atmospheric and terrestrial hazards, short term local hazards, such as fog
http://www.gcsepod.co.uk/subjects/geography/hazards-and-tectonics/causes-of-a-natural-hazard/
4.15625
Just like dissection can reveal important information about animal structure and morphology, so can attempting to recreate and mimic aspects of animals with robotics. That’s the approach of a group of researchers at Brown University, who have developed a robotic wing to mimic a bat wing in flight. They are using this wing to measure the aerodynamics of a bat’s flight, results which may someday help us build better flying machines. The complexity of wings Bats have very different wings from those of flying insects and birds, and their wings are correspondingly more structurally complex. Bat wings make their owners capable of commuting and migrating long distances, carrying heavy loads, flying fast, and being able to fly in narrow spaces like between trees. Their wings have up to 25 actively controlled joints and 34 degrees of freedom. By comparison, the human arm is said to have 7 degrees of freedom. Bats use their elbows and wrists in flight, in combination with a shoulder equipped with tons of muscles for three dimensional rotation. They also use their hindlimbs, back feet and fingers to control the overall shape of the wing and the angle of flight. Bats’ wing membranes are able to stretch and recoil with changes in wing fold, and the skin of the membranes is attached all along the side of the bat’s body from neck to ankle. The membrane skin that stretches between their adapted digits is much thinner than the skin of similarly small non-flying mammals, and bats put that skin to the test across a much larger range of expansion and contraction than most mammals put their skin through. This skin flexibility allows bats to vary their motions in flight and to use the skin like a parachute to passively capture air during flight. In order to deal with the incredible complexity in bat wing motion for the purposes of creating a model of the wing, the researchers chose to focus on a joint arrangement using just seven of the twenty-five possible bat wing joints. They created a wing that could actively fold and expand just like a real bat wing, but which was built to focus on the effects of flapping wings on aerodynamic force. The importance of making mistakes Making a robotic bat wing requires both biology and engineering. Joseph Bahlman, lead author on the study and a PhD candidate in biology, says he finds himself learning how to be an engineer. And sometimes, in that engineering process, building structures that don’t work is more informative than building structures that do. When we can see something is missing, that one of these things is not like the other, it becomes clearer what the variables are. Creating a model of nature is like the process of evolution on speed. Baba Brinkman, the evolution rapper, tells us that science (and evolution) is about performance, feedback, and revision. The scientists got plenty of feedback from their robot—the wing skeleton broke frequently at the elbow, a stress that bats must contend with in the wild; the researchers dealt with this by wrapping steel around the elbow to mimic the ligaments of a biological joint. They determined that ligaments probably play a role in preventing breakage at the elbow in nature, as do muscles. The bat’s fused forearm muscles may help prevent its elbow dislocating in flight. The researchers also struggled with tears in their wing membrane, which taught them the value of the loose connective tissue that most vertebrates have between the skin and underlying muscles and bones.
They mimicked this tissue by creating an intermediate network of elastic fibers connecting the membrane tissue and the skeleton, which reduced tearing considerably. A potential ramification of this research is to create “micro air vehicles,” bat-sized planes that can be used for surveillance and research. In the more immediate term, though, it tells us a small bit more about the astounding complexity of bat flight.
http://blogs.discovermagazine.com/visualscience/2013/03/09/robotic-wing-reveals-secrets-of-bat-flight/
4.71875
Voting and the Constitution Students will learn about the Constitution’s many provisions for voting. Students will participate in an informal discussion of the election process, including the Electoral College, the evolution of voting rights, and how the Constitution has been amended to keep up with the times. History: Understands patterns of change and continuity in the historical succession of related events; Understands that specific ideas had an impact on history; Analyzes the influence specific ideas and beliefs had on a period of history Civics: Knows the fundamental values of American democracy; Knows the fundamental principles of American democracy; Knows that a constitutional government is a fundamental principle of American democracy; Understands the meaning of civic responsibilities as distinguished from personal responsibilities, and understands contemporary issues that involve civic responsibilities; Understands how citizens’ responsibilities as Americans could require the subordination of their personal rights and interests to the public good Language Arts: (Listening and Speaking) Listens in order to understand topic, purpose, and perspective in spoken texts; (Reading) Draws conclusions and makes inferences based on explicit and implicit information in texts; Summarizes and paraphrases information in texts; (Writing) Uses a variety of resource materials to gather information for research topics; Life Skills: Understands that personal values influence the types of conclusions people make 1. Explain to students that one of the foundations of the Constitution was the right of the citizen to vote. Point out that voting is the first step in running a democratic government; nothing can happen before leaders are elected. Since the Constitution was the framework for the government, it had to include rules for how government officials were elected to office. 2. Distribute the Our Three Branches worksheet. Explain that the methods by which officials are elected or selected differ for the three branches of government. Divide the class into groups of three or four. Ask each group to use a copy of the Constitution and other resources to research how each branch’s officials are elected. Allow students to work for fifteen minutes to complete their worksheets. Once they are finished, go over the answers as a class. Answers could include:
• Legislative Branch: Members of the House are elected every two years from each state. The candidate who receives the most popular votes wins the election. Members of the Senate are elected every six years. Initially, senators were elected by state legislatures, but the 17th Amendment called for the direct election of senators by people in their state.
• Executive Branch: The president and vice president are elected by the Electoral College, not the popular vote. The electors are chosen by the states, and each state gets as many electors as it has senators and representatives. After the November election every four years, these electors vote for the presidential and vice presidential candidates who received the most popular votes in their state.
• Judicial Branch: The public does not vote for any federal judge directly, but has some measure of representation in the nomination process. The President nominates justices for the Supreme Court, but the Senate must approve of the selection, as it must approve of the many judges in lower federal courts. In the state court systems, judges are usually elected by the public.
Worksheet Answers: 1.
The Legislative Branch; 2. President and vice president; 3. You must be at least 25 years old, be a U.S. citizen for 7 years, and be an inhabitant of the state in which you are running; a Senator has to be at least 30 years old and a U.S. citizen for 9 years; 4. The vice president, who casts the deciding vote if there is a tie; 5. They are nominated by the president and confirmed by the Senate. 3. Distribute The Right to Vote worksheet. Remind students that the qualifications for voting have changed a lot over the past 200 years. Briefly discuss the fact that the Constitution has been amended numerous times to establish new voting rules. Point out that, in every case, these rules allowed more people to vote. Instruct students to complete Parts I and II of the worksheet. When they are finished, review the answers as a class.
• (1870) Amendment 15. Voting Rights – Black Suffrage
• (1913) Amendment 17. Direct Election of Senators
• (1920) Amendment 19. Women’s Right to Vote
• (1961) Amendment 23. Presidential Elections for the District of Columbia
• (1964) Amendment 24. Poll Tax Ended
• (1971) Amendment 26. Vote for Eighteen-Year-Olds
Part II Answers: 1. 23rd Amendment; 2. 132 years; 3. 1971, 21 years old; 4. 5 years; 5. Senators were elected by the state legislature.
http://www.scholastic.com/browse/lessonplan.jsp?id=1124
4.34375
The First Peoples in the Settlement and Colonization Archaeologists believe that the first people to settle the Americas came here from Asia, walking across a land bridge between Siberia and Alaska. These Asian nomads followed herds of animals on which they depended for food. Eventually, these people settled all habitable regions of North and South America and the Caribbean islands. Groups were small and widely scattered, and each one eventually developed a tribal identity, language, and culture all its own. No one knows when these first Americans arrived. The oldest human bones ever found in North America are 13,000 years old, but other archaeological evidence suggests that human habitation goes back much farther than that. Archaeologists continue to gather and study the evidence. This map shows human settlement in North America at the time the first Europeans began to explore the continent. You can see the diversity of American-Indian tribes and the places where they settled. The early American cultures had two major characteristics: diversity and unity. Both characteristics were related to the land, the climate, and natural forces. Across all American Indian tribes, culture was dictated by the climate and natural resources in the area where the people settled. Native Americans hunted local animals, ate local fruits and vegetables, and made their houses of whatever natural materials were easily found in the area. Except for nomadic tribes, American Indians did not travel very far beyond what they considered to be their own territory. Tribes of the Mississippi delta, for instance, would never journey upriver to communicate or trade with tribes at a distance. Therefore, cultural exchange among Native Americans remained at a minimum, and tribal identities remained distinct and individual. All Native-American cultures were (and remain) united by certain shared characteristics. The most important was respect for the physical environment. American Indians depended entirely on the land for their food, clothing, and shelter, so they treated it with care. Native-American religious rituals for many tribes involved prayers for good weather, harvests, and hunting. American Indians believed that nature was not to be mastered, but to be served and maintained. Compared to Europeans, American Indians were not technologically advanced. They made everything they needed, but they did not invent machines. The tools and weapons they made were relatively crude and unsophisticated, because their needs were simple. Ancient American-Indian pottery and woven baskets remain both beautiful and functional to this day. Politically, most North American Indian tribes were democratic. Because tribes were small groups of people, it was easy to consult everyone’s opinion and consider it in making decisions. Most Native-American cultures were matrilineal; the women of the tribes held important positions as heads of families. However, chiefs were male, and in theory if not in fact, councils of men made most tribal decisions. Tribes that lived near one another communicated and traded on a regular basis. These groups of tribes formed nations—tribal associations based on similar linguistic, religious, political, and cultural characteristics. Democracy in pre-Columbian America reached its most sophisticated form among the tribes of the Iroquois Nation of the Northeast. The Seneca, Cayuga, Oneida, Onondaga, and Mohawk tribes were prone to quarrel.
During the 1400s, tribal leaders agreed that it was time to form a regular council in which conflicts could be settled peacefully. They agreed to form a confederacy. Elders and chiefs chosen by popular vote from each of the five tribes would meet to discuss issues of importance to their people. The founders of the council agreed that all decisions were to be made based on the welfare of the people. Chiefs could be removed from the council for committing crimes. A sixth nation, the Tuscarora, joined the Iroquois Confederacy during the 1700s. In the colonial period, the Confederacy provided a powerful bulwark against British expansion. Although its power to affect national policy waned after the American Revolution, it continues to meet to this day.
http://www.education.com/study-help/article/us-history-settlement-colonization-first-peoples/
4.15625
Computer chips capable of speeding data around by rippling the electrons on the surface of metal wires just got a step closer, researchers say. Mark Brongersma, at Stanford University in Palo Alto, California, US, has found a new way to model the three-dimensional propagation of these ripples - called plasmons - in two dimensions. He says the new model is much simpler and more intuitive than existing simulations and will be crucial in the design of plasmonic components for computer chips. Plasmons travel at the speed of light and are created when light hits a metal at a particular angle, causing waves to propagate through electrons near the surface. "Right now, the simulations are so complex that only a few groups in the world can carry them out," says Brongersma. "The new model came out of a desire for much simpler models." Currently the biggest application for plasmons is in gold-coated glass biosensors, which detect when particular proteins or DNA are present - the bio-matter changes the angle at which light hitting the surface produces the most intense plasmons. But scientists would love to use plasmons to ferry data around computer chips because they could operate at frequencies 100,000 times faster than today's Pentium chips, without requiring thicker wiring. Ordinary light waves can transmit data at similarly high frequencies, but using photons to carry data across a computer chip is currently impossible. This is because the size of the optical fibre that carries the light waves must be about half the wavelength of the light, which is over twice the thickness of the wires in modern chips. "The big advantage of plasmons is that you can make the devices the same size as electrical components but give them the speed of photons," says Brongersma. Plasmon-carrying wires could also be made out of copper or aluminium, like the interconnects on today's computer chips. Brongersma points out that the speed of these interconnects has become a key limiting factor. While transistors have become faster at switching as manufacturers find ways to make them smaller and smaller, the wires that carry the data are not getting any faster. "We need to find new ways to connect transistors together," he says. To develop the new, simpler model, Brongersma showed that the intensity pattern of a plasmon travelling across the surface of a metal strip was the same as for a light wave travelling through an optical fibre. He says this indicates that traditional "ray-tracer" programs for modelling light waves should work for plasmons too. Such models are necessary if devices that generate and route multiple plasmons are ever to be designed, Brongersma says. He will publish the work in an upcoming issue of Optics Letters. "Any advance that aids design is a good thing," says Harry Atwater of the California Institute of Technology in Pasadena, US. But he warns that a bigger hurdle to plasmon-based computer chips is finding plasmon sources that are compatible with silicon. "The most important element is not the design tools, it is having the ingenuity to know what to do with them," he says.
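The size argument in the article can be made concrete with a back-of-the-envelope comparison. In this sketch both numbers are illustrative assumptions (near-infrared light and a mid-2000s interconnect width), not figures reported by New Scientist:

```python
# Back-of-the-envelope version of the size argument above: an optical
# waveguide needs to be roughly half a wavelength across, while on-chip
# wires are much narrower. Both values below are illustrative assumptions.
wavelength_nm = 1310                      # assumed near-infrared wavelength
min_waveguide_nm = wavelength_nm / 2      # ~half-wavelength confinement limit
wire_width_nm = 200                       # assumed chip interconnect width

print(f"smallest optical waveguide: ~{min_waveguide_nm:.0f} nm")
print(f"typical chip interconnect:  ~{wire_width_nm} nm")
print(f"a photonic guide would be ~{min_waveguide_nm / wire_width_nm:.1f}x too wide; "
      "plasmons on the metal wire itself avoid the problem")
```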
http://www.newscientist.com/article/dn7164-plasmonic-computer-chips-move-closer.html
4.15625
Scientists think that dark energy, the weird force blamed for propelling the universe to expand at an accelerated speed, probably turned on between 5 and 7 billion years ago. Now astronomers have mapped thousands of galaxies from this era, and have determined the most precise distances to them yet, in an effort to get to the bottom of the dark energy mystery. Dark energy is thought to represent about 74 percent of the universe’s total mass and energy, dwarfing ordinary matter. While its existence has never been directly confirmed, the strange force remains the leading explanation for why galaxies are speeding up as they spread farther and farther apart from each other. As Ariel Sanchez, a research scientist at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, explains, ordinary matter is only a few percent of the universe. The largest component of the universe is dark energy, an irreducible energy associated with space itself that is causing the expansion of the universe to accelerate. But the expansion of the universe hasn’t always been accelerating. Theorists think that before roughly 5 to 7 billion years ago, the expansion of the universe was slowing, due to the inward pull of gravity. Then, around that time, the expansion stopped slowing and started speeding up from the force of dark energy. To study these changes in cosmic expansion, scientists must measure the distances between galaxies now, as well as during different epochs of the distant past. They can do this by looking at very distant galaxies whose light is only reaching us now after traveling billions of years, which can paint a picture of what the universe looked like billions of years ago. Now, astronomers have created the most accurate map yet of galaxies in the distant universe, offering a window into the past and, possibly, into dark energy. The map comes from data collected by the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the third Sloan Digital Sky Survey (SDSS-III).
http://forcetoknow.com/space/map-distant-galaxies-reveal-dark-energy-history.html
4.46875
Free teaching guide for classroom use. About the book: Trickster looks for teachers for mankind and finds them in coyote, crow, raven and rabbit. About the guide: This guide includes discussion questions and projects appropriate for book clubs, literature circles, library, home and classroom study. It is intended to encourage discussion and experimentation with rhyme, and provoke thought and insight into the subject and themes of this book, including humility, folk tales and songs. Many cultures and traditions use animals to represent human qualities. Give each child a blank piece of paper or have them open to a clean page if they are using a writing journal. Have them write down the names or draw pictures of three animals that they feel represent them, and why. Discuss, and compare how and why certain animals may have different aspects associated with them depending on culture, geographical location etc. Topics for Discussion: Rabbit’s Song utilizes the traditional elements of folk tale and folk song. Discuss and share examples of both of these genres, then compare and contrast to Rabbit’s Song. CREATIVE WRITING ACTIVITY: Using another well-known folk tale, have students turn the story into a poem/song in the spirit of Rabbit’s Song; students can put the words to music by using the melody of Rabbit’s Song—BUT they must use the same syllabic pattern in order for this to work. This exercise can be explored with several stories/melodies. Discuss the character of Trickster. Who do you think he is? Share examples of other Trickster stories and compare and contrast with how he is presented in Rabbit’s Song (visually and in words). What did Trickster find lacking in bear, cat, tiger, dog, wolf and owl (review each animal and his reasoning)? Do you agree or disagree with Trickster’s choices? Why or why not? What is Trickster trying to teach mankind? Why does he choose the animals that he does (review each animal and his reasoning)? Do you agree with his choices—again, why or why not? CREATIVE WRITING ACTIVITY: If you were Trickster, what would you want to teach mankind, and what animals would you use to do the job? Have students do this as a journal exercise, or as an extended activity to write their own version of Rabbit’s Song—in their writing they must explain their choices fully. Why do you think that Rabbit calls the chosen animals “the least?” Why would Trickster choose these animals (discuss specifically the purpose as stated in the verse, “Together we shall thwart the pains the gods do throw to earth, and turn aside their fiery darts with merriment and mirth”)? Use the following links as resources to the Trickster tales portrayed in Rabbit’s Song. Coyote Sets the Stars: http://www.angelfire.com/rock3/countryboy79/yayiighaz.html Projects Across the Curriculum: Vocabulary. Have students define the following words and use each in a sentence that shows they know the meaning of the words. Review the words as they come up in Rabbit’s Song. totem install tawny merriment mirth render throng foolish boon cavorting naughty thwart pause humble hare (as opposed to rabbit) Writing in rhyme. Define RHYME. Using Rabbit’s Song as a model, review selected stanzas to illustrate rhyming words. Have students make lists of rhyming words for practice. Discuss why there are some words that do not rhyme. Examine the rhyming words in Rabbit’s Song to find the story’s RHYMING PATTERN. Define SYLLABLES. Find multi-syllabled words in the lines of selected stanzas. Count the syllables in each line.
Define RHYTHM. Listen to SJ Tucker’s recording of Rabbit’s Song; discuss which syllables are emphasized. United Kingdom. Trickster tales can be found in many cultures, but Rabbit’s Song mentions the country of Wales. Find Wales on a map. Identify the other countries in the UNITED KINGDOM. Discuss language and cultural differences (it’s a small area in the grand scheme of the planet—but there are many differences between the peoples that live there!) Heraldry and Totem Poles. Rabbit’s Song utilizes animals to define human characteristics. Define HERALDRY and its role in Medieval Europe. Explain the parts of a heraldic shield; discuss the role of animal/color etc. symbolism. Compare to how animals and colors are used in Rabbit’s Song. Have students build a heraldic shield to represent themselves or their family. Explain and discuss choices. Alternative—This exercise/project can also be done with TOTEM Poles. Discuss and define totem poles and totem animals. Utilizing Native American symbolism, have students construct a totem pole for their families; animal choices, symbolic colors etc. must be explained. http://www.fleurdelis.com/meanings.htm http://www.yourchildlearns.com/heraldry.htm http://www.heraldryclipart.com/ http://www.native-languages.org/totem.htm http://www.enchantedlearning.com/crafts/na/totempole/ http://www.dltk-kids.com/canada/mtotem.html Folk songs. Play “Rabbit’s Song.” Discuss the nature of folk songs—songs that tell stories. Define and discuss the traditional role of the BARD. Define and discuss ORAL HISTORY and ORAL TRADITION. What is the story of Rabbit’s Song? Listen to other folk songs and discuss storyline, melody and style (traditional instruments used by different cultures, and in different time periods and geographical areas). http://folkmusic.about.com/od/toptens/tp/Top10_SS.htm (link to a list of American folk singers) SJ Tucker, The Irish Rovers and Loreena McKennitt have albums that feature Celtic folk songs. Pandora.com is a good resource for discovering new folk songs/folk music (it filters music through style and instrumentation to find similar melodies). Watercolor. Artist W. Lyon Martin did the illustrations for Rabbit’s Song using WATERCOLOR. Explain and experiment with watercolor technique. Alternative uses for color. Is a rabbit green? W. Lyon Martin uses alternative color choices throughout Rabbit’s Song. Discuss how the use of color affects the story being told. Discuss color symbolism; what different colors can indicate depending on time, culture, geography etc. then apply to Rabbit’s Song; is it appropriate for a folk tale—why or why not? Use the following as resources for folk/fairy tale coloring pages. Have students experiment with the use of alternative color. Discuss and explain color choices. Alternative—Read other folktales/trickster tales (http://www.americanfolklore.net/tricksters.html) and have students illustrate the stories or scenes from the stories using alternative color and/or watercolor technique. Compare/contrast and discuss choices. Counting and adding. Count the syllables (beats) in each line and add them up. Add the total number of each line to get a stanza total, and then a grand total for the entire story. Patterns and series. Utilize Rabbit’s Song to define patterns and series (numbers follow sequences and patterns just like rhyming poetry). Identify the rhyming pattern of Rabbit’s Song, as well as other rhyming poems (try other forms such as the limerick or sonnet).
Use multiplication tables to illustrate number patterns or series. Examine and discuss the similarities between the mathematical and literary patterns. Constellations. Utilize W. Lyon Martin’s illustration to introduce the concept of constellations. Discuss star positions and how they were viewed/named by different cultures. Do the constellations resemble the figure they represent? Cross Curriculum Project: (Language Arts, Social Studies, Science, Art) Assign a constellation to each student. Have them research the mythology (from different cultures) behind each constellation and present their findings as an oral presentation and/or in writing. Students may also create art to illustrate their findings. Further suggestions for cross curriculum projects are detailed in the resources below.
http://www.magicalchildbooks.com/guides/rabbitsguide/
4.03125
The Union of Soviet Socialist Republics (Russian: Сою́з Сове́тских Социалисти́ческих Респу́блик, Soyuz Sovetskikh Sotsialisticheskikh Respublik), abbreviated to USSR (Russian: СССР, SSSR) or the Soviet Union (Russian: Советский Союз, Sovetsky Soyuz), was a constitutionally socialist state that existed between 1922 and 1991, ruled as a single-party state by the Communist Party with Moscow as its capital. A union of 15 subnational Soviet republics, its government and economy were highly centralised. The Soviet Union had its roots in the Russian Revolution of 1917, which deposed Nicholas II, ending three hundred years of Romanov dynastic rule. The Bolsheviks, led by Vladimir Lenin, stormed the Winter Palace in Petrograd and overthrew the Provisional Government. The Russian Socialist Federative Soviet Republic was established and a civil war began. The Red Army entered several territories of the former Russian Empire and helped local communists seize power. In 1922, the Bolsheviks were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian and Byelorussian republics. Following Lenin's death in 1924, a troika collective leadership and a brief power struggle, Joseph Stalin came to power in the late 1920s. Stalin committed the state ideology to Marxism–Leninism and a centralised planned economy was initiated. As a result, the country underwent a period of rapid industrialisation and collectivisation which laid the basis for its later war effort and dominance after World War II. However, Stalin repressed both Communist Party members and elements of the population through his authoritarian rule. During World War II, Nazi Germany invaded the Soviet Union in 1941, opening the largest and bloodiest theatre of war in history and violating an earlier non-aggression pact between the two countries. The Soviet Union suffered the largest loss of life in the war, but halted the Axis advance at intense battles such as at Stalingrad, eventually driving through Eastern Europe and capturing Berlin in 1945. Having played the decisive role in the Allied victory in Europe, the Soviet Union consequently occupied much of Central and Eastern Europe and emerged as one of the world's two superpowers after the war. Together with these new socialist satellite states, through which it established economic and military pacts, it became involved in the Cold War, a prolonged ideological and political struggle against the Western Bloc, and in particular the other superpower, the United States. A de-Stalinisation period followed Stalin's death, reducing the harshest aspects of society. The Soviet Union then went on to initiate significant technological achievements of the 20th century, including launching the first ever satellite and world's first human spaceflight, which led it into the Space Race. The Cuban Missile Crisis in 1962 marked a period of extreme tension between the two superpowers, considered the closest to a mutual nuclear confrontation. In the 1970s, a relaxation of relations followed, but tensions resumed when, after a Communist-led revolution in Afghanistan, Soviet forces entered the country by request of the new regime. The occupation drained economic resources and dragged on without achieving meaningful political results.
In the late 1980s the last Soviet leader, Mikhail Gorbachev, sought reforms in the Union, introducing the policies of glasnost and perestroika in an attempt to end the period of economic stagnation and democratize the government. However, this led to the rise of strong nationalist and separatist movements. By 1991, the country was in turmoil as the Baltic republics began to secede. A referendum resulted in the vast majority of participating citizens voting in favour of preserving the Union as a renewed federation. In August 1991, a coup d'état attempt by hardliners against Gorbachev and aimed at preserving the country, instead led to its collapse. On 25 December 1991, the USSR was dissolved into 15 post-Soviet states. The Russian Federation, successor of the Russian SFSR, assumed the Soviet Union's rights and obligations and is recognised as its continued legal personality.
http://www.primidi.com/soviet_union
4
It turns out the 40-year-old belief that the moon’s surface is dry was wrong. New observations from three separate spacecraft, on three different missions, have confirmed “unambiguous evidence” of water across the moon’s surface, even in sunlit regions, according to Space.com. There’s not a *lot* of water; one ton of the top layer of the surface would hold about 32 ounces of water, the report said. But it’s there–as both H2O molecules and hydroxyl (hydrogen and oxygen chemically bonded)–and could be harnessed as a source of drinking water or fuel for a future permanent moon base. This is in addition to the polar ice found by NASA’s Lunar Reconnaissance Orbiter. The back story: forty years ago, astronauts brought back lunar rock samples. Trace amounts of water were detected at the time. But scientists assumed it was due to contamination from Earth, since the containers had leaked, according to the article. But now observations made over the last 10 years by Chandrayaan-1, NASA’s Deep Impact probe, and even NASA’s Cassini spacecraft have proved the presence of water conclusively. NASA is planning a 2pm EST briefing today to discuss the findings.
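For scale, the article’s 32-ounces-per-ton figure works out to roughly 0.1 percent water by mass. The conversion below assumes US fluid ounces and a short ton, which is my reading of the units, not something the article specifies:

```python
# Rough concentration implied by the "32 ounces per ton" figure above.
# Reading the units as US fluid ounces and a US short ton is an assumption.
FL_OZ_TO_KG = 0.0296     # 1 US fluid ounce of water ≈ 0.0296 kg
SHORT_TON_KG = 907.2     # 1 US short ton in kg

water_kg = 32 * FL_OZ_TO_KG
ppm = water_kg / SHORT_TON_KG * 1e6
print(f"~{ppm:.0f} ppm by mass (~{ppm / 1e4:.2f}% water)")
# ~1044 ppm by mass (~0.10% water)
```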
http://www.geek.com/geek-cetera/scientists-water-found-on-the-moon-1365007/
4.1875
Over geologic time, plate movements in concert with other geologic processes, such as glacial and stream erosion, have created some of nature's most magnificent scenery. The Himalayas, the Swiss Alps, and the Andes are some spectacular examples. Yet violent earthquakes related to plate tectonics have caused terrible catastrophes -- such as the magnitude-7.7 earthquake that struck the Chinese province of Hebei in 1976 and killed as many as 800,000 people. Most earthquakes and volcanic eruptions do not strike randomly but occur in specific areas, such as along plate boundaries. One such area is the circum-Pacific Ring of Fire, where the Pacific Plate meets many surrounding plates. The Ring of Fire is the most seismically and volcanically active zone in the world. Because many major population centers are located near active fault zones, such as the San Andreas, millions of people have suffered personal and economic losses as a result of destructive earthquakes, and even more have experienced earthquake motions. Not surprisingly, some people believe that, when the "Big One" hits, California will suddenly "break off" and "fall into the Pacific," or that the Earth will "open up" along the fault and "swallow" people, cars, and houses. Such beliefs have no scientific basis whatsoever. Although ground slippage commonly takes place in a large earthquake, the Earth will not open up. Nor will California fall into the sea, because the fault zone only extends about 15 km deep, which is only about a quarter of the thickness of the continental crust. Furthermore, California is composed of continental crust, whose relatively low density keeps it riding high, like an iceberg above the ocean. Aerial view, looking north toward San Francisco, of Crystal Springs Reservoir, which follows the San Andreas fault zone. (Photograph by Robert E. Wallace, USGS.) Like all transform plate boundaries, the San Andreas is a strike-slip fault, movement along which is dominantly horizontal. Specifically, the San Andreas fault zone separates the Pacific and North American Plates, which are slowly grinding past each other in a roughly north-south direction. The Pacific Plate (western side of the fault) is moving horizontally in a northerly direction relative to the North American Plate (eastern side of the fault). Evidence of the sideways shift of these two landmasses can be found all along the fault zone, as seen from the differences in topography, geologic structures, and, sometimes, vegetation of the terrain from one side of the fault to the other. For example, the San Andreas runs directly along Crystal Springs Reservoir on the San Francisco Peninsula. Topographically, this reservoir fills a long, straight, narrow valley that was formed by erosion of the easily erodible rocks mashed within the fault zone. Movement along the San Andreas can occur either in sudden jolts or in a slow, steady motion called creep. Fault segments that are actively creeping experience many small to moderate earthquakes that cause little or no damage. These creeping segments are separated by segments of infrequent earthquake activity (called seismic gaps), areas that are stuck or locked in place within the fault zone. Locked segments of the fault store a tremendous amount of energy that can build up for decades, or even centuries, before being unleashed in devastating earthquakes. 
For example, the Great San Francisco Earthquake (magnitude 8.3) in 1906 ruptured along a previously locked 430-km-long segment of the San Andreas, extending from Cape Mendocino south to San Juan Bautista.

[Map: The San Andreas and a few of the other faults in California, segments of which display different behavior: locked or creeping (see text). (Simplified from USGS Professional Paper 1515.)]

The stresses that accumulate along a locked segment of the fault, and their sudden release, can be visualized by bending a stick until it breaks. The stick will bend fairly easily, up to a certain point, until the stress becomes too great and it snaps. The vibrations felt when the stick breaks represent the sudden release of the stored-up energy. Similarly, the seismic vibrations produced when the ground suddenly ruptures radiate out through the Earth's interior from the rupture point, called the earthquake focus. The geographic point directly above the focus is called the earthquake epicenter. In a major earthquake, the energy released can cause damage hundreds to thousands of kilometers away from the epicenter.

[Photo: A dramatic photograph of horses killed by falling debris during the Great San Francisco Earthquake of 1906, when a locked segment of the San Andreas fault suddenly lurched, causing a devastating magnitude-8.3 earthquake. (Photograph by Edith Irvine, courtesy of Brigham Young University Library, Provo, Utah.)]

The magnitude-7.1 Loma Prieta earthquake of October 1989 occurred along a segment of the San Andreas Fault which had been locked since the great 1906 San Francisco earthquake. Even though the earthquake's focus (approximately 80 km south of San Francisco) was centered in a sparsely populated part of the Santa Cruz Mountains, the earthquake still caused 62 deaths and nearly $6 billion in damage. Following the Loma Prieta earthquake, the fault remains locked from Pt. Arena, where it enters California from the ocean, south through San Francisco and the peninsula west of San Francisco Bay, thus posing the threat of a potentially destructive earthquake occurring in a much more densely populated area.

The lesser-known Hayward Fault, running east of San Francisco Bay, however, may pose a potential threat as great as, or perhaps even greater than, the San Andreas. From the televised scenes of the damage caused by the 7.2-magnitude earthquake that struck Kobe, Japan, on 16 January 1995, Bay Area residents saw the possible devastation that could occur if a comparable-size earthquake were to strike along the Hayward Fault. This is because the Hayward and the Nojima fault that produced the Kobe earthquake are quite similar in several ways. Not only are they of the same type (strike-slip), they are also about the same length (60-80 km), and both cut through densely populated urban areas, with many buildings, freeways, and other structures built on unstable bay landfill.

On 17 January 1994, one of the costliest natural disasters in United States history struck southern California. A magnitude-6.6 earthquake hit near Northridge, a community located in the populous San Fernando Valley within the City of Los Angeles, California. This disaster, which killed more than 60 people, caused an estimated $30 billion in damage, nearly five times that resulting from the Loma Prieta earthquake. The Northridge earthquake did not directly involve movement along one of the strands of the San Andreas Fault system.
It instead occurred along the Santa Monica Mountains Thrust Fault, one of several smaller, concealed faults (called blind thrust faults) south of the San Andreas Fault zone where it bends to the east, roughly paralleling the Transverse Mountain Range. With a thrust fault, whose plane is inclined to the Earth's surface, one side moves upward over the other. Movement along a blind thrust fault does not break the ground surface, thus making it difficult or impossible to map these hidden but potentially dangerous faults. Although scientists have found measurable uplift at several places in the Transverse Range, they have not found any conclusive evidence of ground rupture from the 1994 Northridge earthquake. Similar earthquakes struck the region in 1971 and 1987; the San Fernando earthquake (1971) caused substantial damage, including the collapse of a hospital and several freeway overpasses.

Not all fault movement is as violent and destructive. Near the city of Hollister in central California, the Calaveras Fault bends toward the San Andreas. Here, the Calaveras fault creeps at a slow, steady pace, posing little danger. Much of the Calaveras fault creeps at an average rate of 5 to 6 mm/yr. On average, Hollister has some 20,000 earthquakes a year, most of which are too small to be felt by residents. It is rare for an area undergoing creep to experience an earthquake with a magnitude greater than 6.0, because stress is continually being relieved and, therefore, does not accumulate. Fault-creep movement generally is non-threatening, resulting only in gradual offset of roads, fences, sidewalks, pipelines, and other structures that cross the fault. However, the persistence of fault creep does pose a costly nuisance in terms of maintenance and repair.

Mid-plate earthquakes -- those occurring in the interiors of plates -- are much less frequent than those along plate boundaries and more difficult to explain. Earthquakes along the Atlantic seaboard of the United States are most likely related in some way to the westward movement of the North American Plate away from the Mid-Atlantic Ridge, a continuing process begun with the break-up of Pangaea. However, the causes of these infrequent earthquakes are still not understood. East Coast earthquakes, such as the one that struck Charleston, South Carolina, in 1886, are felt over a much larger area than earthquakes occurring on the West Coast, because the eastern half of the country is mainly composed of older rock that has not been fractured and cracked by frequent earthquake activity in the recent geologic past. Rock that is highly fractured and crushed absorbs more seismic energy than rock that is less fractured. The Charleston earthquake, with an estimated magnitude of about 7.0, was felt as far away as Chicago, more than 1,300 km to the northwest, whereas the 7.1-magnitude Loma Prieta earthquake was felt no farther than Los Angeles, about 500 km south. The most widely felt earthquakes ever to strike the United States were centered near the town of New Madrid, Missouri, in 1811 and 1812. Three earthquakes, felt as far away as Washington, D.C., were each estimated to be above 8.0 in magnitude.
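To get a feel for what those magnitude comparisons mean physically, here is a minimal Python sketch using the standard Gutenberg-Richter scaling, in which radiated seismic energy grows as 10^(1.5 M); the relation is textbook seismology rather than anything computed in the text above:

    def energy_ratio(m1, m2):
        # Ratio of seismic energy radiated by a magnitude-m1 quake
        # vs a magnitude-m2 quake, using log10(E) ~ 1.5 * M.
        return 10 ** (1.5 * (m1 - m2))

    print(round(energy_ratio(8.0, 7.0), 1))   # 31.6: one magnitude step ~ 32x the energy
    print(round(energy_ratio(8.3, 7.1), 1))   # 63.1: the 1906 quake vs. Loma Prieta

So a New Madrid-size magnitude-8 shock releases roughly 30 times the energy of the Charleston or Loma Prieta earthquakes, even before the attenuation differences described above are taken into account.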
Most of us do not associate earthquakes with New York City, but beneath Manhattan is a network of intersecting faults, a few of which are capable of causing earthquakes. The most recent earthquake to strike New York City occurred in 1985 and measured 4.0 in magnitude, and a pair of earthquakes (magnitude 4.0 and 4.5) shook Reading, Pennsylvania, in January 1994, causing minor damage.

[Photos: Left: Creep along the Calaveras fault has bent the retaining wall and offset the sidewalk along 5th Street in Hollister, California (about 75 km south-southeast of San Jose). Right: Close-up of the offset of the curb. (Photographs by W. Jacquelyne Kious.)]

We know in general how most earthquakes occur, but can we predict when they will strike? This question has challenged and frustrated scientists studying likely precursors to moderate and large earthquakes. Since the early 1980s, geologists and seismologists have been intensively studying a segment of the San Andreas near the small town of Parkfield, located about halfway between San Francisco and Los Angeles, to try to detect the physical and chemical changes that might take place -- both above and below ground -- before an earthquake strikes. The USGS and State and local agencies have blanketed Parkfield and the surrounding countryside with seismographs, creep meters, stress meters, and other ground-motion measurement devices.

The Parkfield segment has experienced earthquakes measuring magnitude 6.0 about every 22 years on average since 1881. During the most recent two earthquakes (1934, 1966), the same section of the fault slipped and the amount of slippage was about the same. In 1983, this evidence, in addition to the earlier recorded history of earthquake activity, led the USGS to predict that there was a 95 percent chance of a 6.0 earthquake striking Parkfield before 1993. But the anticipated earthquake of magnitude 6.0 or greater did not materialize.

The Parkfield experiment is continuing, and its primary goals remain unchanged: to issue a short-term prediction; to monitor and analyze geophysical and geochemical effects before, during, and after the anticipated earthquake; and to develop effective communications between scientists, emergency-management officials, and the public in responding to earthquake hazards. While scientists are studying and identifying possible precursors leading to the next Parkfield earthquake, they also are looking at these same precursors to see if they may be occurring along other segments of the fault.

Studies of past earthquakes, together with data and experience gained from the Parkfield experiment, have been used by geoscientists to estimate the probabilities of major earthquakes occurring along the entire San Andreas Fault system. In 1988, the USGS identified six segments of the San Andreas as most likely to be hit by a magnitude 6.5 or larger earthquake within the next thirty years (1988-2018). The Loma Prieta earthquake in 1989 occurred along one of these six segments. The Parkfield experiment and other studies carried out by the USGS as part of the National Earthquake Hazards Reduction Program have led to an increased official and public awareness of the inevitability of future earthquake activity in California. Consequently, residents and State and local officials have become more diligent in planning and preparing for the next big earthquake.
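The 22-year Parkfield average can be reproduced from the commonly cited sequence of moderate Parkfield earthquakes (the event dates below come from the standard literature, not from the text above, so treat them as illustrative):

    # Commonly cited Parkfield magnitude-6 events
    events = [1857, 1881, 1901, 1922, 1934, 1966]
    intervals = [b - a for a, b in zip(events, events[1:])]

    print(intervals)                        # [24, 20, 21, 12, 32]
    print(sum(intervals) / len(intervals))  # 21.8 years on average

The scatter in the individual intervals (12 to 32 years) also hints at why a prediction window as tight as 1988-1993 could pass without the expected earthquake.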
As with earthquakes, volcanic activity is linked to plate-tectonic processes. Most of the world's active above-sea volcanoes are located near convergent plate boundaries where subduction is occurring, particularly around the Pacific basin. However, much more volcanism -- producing about three quarters of all lava erupted on Earth -- takes place unseen beneath the ocean, mostly along the oceanic spreading centers, such as the Mid-Atlantic Ridge and the East Pacific Rise.

Subduction-zone volcanoes like Mount St. Helens (in Washington State) and Mount Pinatubo (Luzon, Philippines) are called composite cones and typically erupt with explosive force, because the magma is too stiff to allow easy escape of volcanic gases. As a consequence, tremendous internal pressures mount as the trapped gases expand during ascent, before the pent-up pressure is suddenly released in a violent eruption. Such an explosive process can be compared to putting your thumb over an opened bottle of a carbonated drink, shaking it vigorously, and then quickly removing the thumb. The shaking action separates the gases from the liquid to form bubbles, increasing the internal pressure. Quick release of the thumb allows the gases and liquid to gush out with explosive speed and force.

In 1991, two volcanoes on the western edge of the Philippine Plate produced major eruptions. On June 15, Mount Pinatubo spewed ash 40 km into the air and produced huge ash flows (also called pyroclastic flows) and mudflows that devastated a large area around the volcano. Pinatubo, located 90 km from Manila, had been dormant for 600 years before the 1991 eruption, which ranks as one of the largest eruptions in this century. Also in 1991, Japan's Unzen Volcano, located on the Island of Kyushu about 40 km east of Nagasaki, awakened from its 200-year slumber to produce a new lava dome at its summit. Beginning in June, repeated collapses of this active dome generated destructive ash flows that swept down its slopes at speeds as high as 200 km per hour. Unzen is one of more than 75 active volcanoes in Japan; its eruption in 1792 killed more than 15,000 people -- the worst volcanic disaster in the country's history.

While the Unzen eruptions have caused deaths and considerable local damage, the impact of the June 1991 eruption of Mount Pinatubo was global. Slightly cooler than usual temperatures recorded worldwide and the brilliant sunsets and sunrises have been attributed to this eruption, which sent fine ash and gases high into the stratosphere, forming a large volcanic cloud that drifted around the world. The sulfur dioxide (SO2) in this cloud -- about 22 million tons -- combined with water to form droplets of sulfuric acid, blocking some of the sunlight from reaching the Earth and thereby cooling temperatures in some regions by as much as 0.5 °C. An eruption the size of Mount Pinatubo could affect the weather for a few years.

A similar phenomenon occurred in April of 1815 with the cataclysmic eruption of Tambora Volcano in Indonesia, the most powerful eruption in recorded history. Tambora's volcanic cloud lowered global temperatures by as much as 3 °C. Even a year after the eruption, most of the northern hemisphere experienced sharply cooler temperatures during the summer months. In parts of Europe and in North America, 1816 was known as "the year without a summer."

Apart from possibly affecting climate, volcanic clouds from explosive eruptions also pose a hazard to aviation safety. During the past two decades, more than 60 airplanes, mostly commercial jetliners, have been damaged by in-flight encounters with volcanic ash. Some of these encounters have resulted in the power loss of all engines, necessitating emergency landings.
Luckily, to date no crashes have happened because of jet aircraft flying into volcanic ash.

[Figure: Diagram showing the lower two layers of the atmosphere: the troposphere and the stratosphere. The tropopause -- the boundary between these two layers -- varies in altitude from 8 to 18 km (dashed white lines), depending on Earth latitude and season of the year. The summit of Mt. Everest (inset photograph) and the altitudes commonly flown by commercial jetliners are given for reference. (Photograph by David G. Howell, USGS.)]

Since the year A.D. 1600, nearly 300,000 people have been killed by volcanic eruptions. Most deaths were caused by pyroclastic flows and mudflows, deadly hazards which often accompany explosive eruptions of subduction-zone volcanoes. Pyroclastic flows, also called nuées ardentes ("glowing clouds" in French), are fast-moving, avalanche-like, ground-hugging incandescent mixtures of hot volcanic debris, ash, and gases that can travel at speeds in excess of 150 km per hour. Approximately 30,000 people were killed by pyroclastic flows during the 1902 eruption of Mont Pelée on the Island of Martinique in the Caribbean. In March-April 1982, three explosive eruptions of El Chichón Volcano in the State of Chiapas, southeastern Mexico, caused the worst volcanic disaster in that country's history. Villages within 8 km of the volcano were destroyed by pyroclastic flows, killing more than 2,000 people.

Mudflows (also called debris flows or lahars, an Indonesian term for volcanic mudflows) are mixtures of volcanic debris and water. The water usually comes from two sources: rainfall or the melting of snow and ice by hot volcanic debris. Depending on the proportion of water to volcanic material, mudflows can range from soupy floods to thick flows that have the consistency of wet cement. As mudflows sweep down the steep sides of composite volcanoes, they have the strength and speed to flatten or bury everything in their paths. Hot ash and pyroclastic flows from the November 1985 eruption of the Nevado del Ruiz Volcano in Colombia, South America, melted snow and ice atop the 5,390-m-high Andean peak; the ensuing mudflows buried the city of Armero, killing 25,000 people.

Eruptions of Hawaiian and most other mid-plate volcanoes differ greatly from those of composite cones. Mauna Loa and Kilauea, on the island of Hawaii, are known as shield volcanoes, because they resemble the wide, rounded shape of an ancient warrior's shield. Shield volcanoes tend to erupt non-explosively, mainly pouring out huge volumes of fluid lava. Hawaiian-type eruptions are rarely life threatening because the lava advances slowly enough to allow safe evacuation of people, but large lava flows can cause considerable economic loss by destroying property and agricultural lands. For example, lava from the ongoing eruption of Kilauea, which began in January 1983, has destroyed more than 200 structures, buried kilometers of highways, and disrupted the daily lives of local residents. Because Hawaiian volcanoes erupt frequently and pose little danger to humans, they provide an ideal natural laboratory to safely study volcanic phenomena at close range. The USGS Hawaiian Volcano Observatory, on the rim of Kilauea, was among the world's first modern volcano observatories, established early in this century.

In recorded history, explosive eruptions at subduction-zone (convergent-boundary) volcanoes have posed the greatest hazard to civilizations.
Yet scientists have estimated that about three quarters of the material erupted on Earth each year originates at spreading mid-ocean ridges. However, no deep submarine eruption has yet been observed "live" by scientists. Because the great water depths preclude easy observation, few detailed studies have been made of the numerous possible eruption sites along the tremendous length (50,000 km) of the global mid-oceanic ridge system. Recently, however, repeated surveys of specific sites along the Juan de Fuca Ridge, off the coasts of Oregon and Washington, have mapped deposits of fresh lava, which must have been erupted sometime between the surveys. In June 1993, seismic signals typically associated with submarine eruptions -- called T-phases -- were detected along part of the spreading Juan de Fuca Ridge and interpreted as being caused by eruptive activity.

Iceland, where the Mid-Atlantic Ridge is exposed on land, is a different story. It is easy to see many Icelandic volcanoes erupt non-explosively from fissure vents, in similar fashion to typical Hawaiian eruptions; others, like Hekla Volcano, erupt explosively. (After Hekla's catastrophic eruption in 1104, it was thought in the Christian world to be the "Mouth to Hell.") The voluminous, but mostly non-explosive, eruption at Lakagígar (Laki), Iceland, in 1783 resulted in one of the world's worst volcanic disasters. About 9,000 people -- almost 20% of the country's population at the time -- died of starvation after the eruption, because their livestock had perished from grazing on grass contaminated by fluorine-rich gases emitted during this eight-month-long eruption.

Major earthquakes occurring along subduction zones are especially hazardous, because they can trigger tsunamis (from the Japanese for "harbor wave") and pose a potential danger to coastal communities and islands that dot the Pacific. Tsunamis are often mistakenly called "tidal waves" when, in fact, they have nothing to do with tidal action. Rather, tsunamis are seismic sea waves caused by earthquakes, submarine landslides, and, infrequently, by eruptions of island volcanoes. During a major earthquake, the seafloor can move by several meters and an enormous amount of water is suddenly set into motion, sloshing back and forth for several hours. The result is a series of waves that race across the ocean at speeds of more than 800 km per hour, comparable to those of commercial jetliners. The energy and momentum of these transoceanic waves can take them thousands of kilometers from their origin before slamming into far-distant islands or coastal areas.

[Photo: A giant wave engulfs the pier at Hilo, Hawaii, during the 1946 tsunami, which killed 159 people. The arrow points to a man who was swept away seconds later. (Retouched photograph courtesy of NOAA/EDIS.)]

To someone on a ship in the open ocean, the passage of a tsunami wave would barely elevate the water surface. However, when it reaches shallower water near the coastline and "touches bottom," the tsunami wave increases in height, piling up into an enormous wall of water. As a tsunami approaches the shore, the water near shore commonly recedes for several minutes -- long enough for someone to be lured out to collect exposed sea shells, fish, etc. -- before suddenly rushing back toward land with frightening speed and height.
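The jetliner-like speeds quoted above follow from shallow-water wave physics: a tsunami's wavelength is so much longer than the ocean is deep that its speed depends only on the water depth, v = sqrt(g d). A minimal Python sketch (the depths are illustrative round numbers):

    import math

    def tsunami_speed_kmh(depth_m, g=9.81):
        # Shallow-water wave speed v = sqrt(g * d), converted from m/s to km/h
        return math.sqrt(g * depth_m) * 3.6

    for depth in (1000, 4000, 5000):   # meters of water
        print(depth, round(tsunami_speed_kmh(depth)), "km/h")
    # 4000-5000 m of open ocean gives roughly 710-800 km/h,
    # consistent with the ~800 km per hour figure quoted above

The same formula explains why the wave slows and piles up as it "touches bottom" near shore: as the depth shrinks, so does the speed, and the wave's energy is squeezed into a shorter, taller waveform.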
The 1883 eruption of Krakatau Volcano, located in the Sunda Straits between the islands of Sumatra and Java, Indonesia, provides an excellent example of an eruption-caused tsunami. A series of tsunamis washed away 165 coastal villages on Java and Sumatra, killing 36,000 people. The larger tsunamis were recorded by tide gauges as far away as the southern coast of the Arabian Peninsula, more than 7,000 km from Krakatau! Because of past killer tsunamis, which have caused hundreds of deaths on the Island of Hawaii and elsewhere, the International Tsunami Information Center was created in 1965. This center issues tsunami warnings based on earthquake and wave-height information gathered from seismic and tide-gauge stations located around the Pacific Ocean basin and on Hawaii.

Many of the Earth's natural resources of energy, minerals, and soil are concentrated near past or present plate boundaries. The utilization of these readily available resources has sustained human civilizations, both now and in the past.

Volcanoes can clearly cause much damage and destruction, but in the long term they also have benefited people. Over thousands to millions of years, the physical breakdown and chemical weathering of volcanic rocks have formed some of the most fertile soils on Earth. In tropical, rainy regions, such as the windward (northeastern) side of the Island of Hawaii, the formation of fertile soil and growth of lush vegetation following an eruption can be as fast as a few hundred years. Some of the earliest civilizations (for example, Greek, Etruscan, and Roman) settled on the rich, fertile volcanic soils in the Mediterranean-Aegean region. Some of the best rice-growing regions of Indonesia are in the shadow of active volcanoes. Similarly, many prime agricultural regions in the western United States have fertile soils wholly or largely of volcanic origin.

Most of the metallic minerals mined in the world, such as copper, gold, silver, lead, and zinc, are associated with magmas found deep within the roots of extinct volcanoes located above subduction zones. Rising magma does not always reach the surface to erupt; instead it may slowly cool and harden beneath the volcano to form a wide variety of crystalline rocks (generally called plutonic or granitic rocks). Some of the best examples of such deep-seated granitic rocks, later exposed by erosion, are magnificently displayed in California's Yosemite National Park.

Ore deposits commonly form around the magma bodies that feed volcanoes because there is a ready supply of heat, which convectively moves and circulates ore-bearing fluids. The metals, originally scattered in trace amounts in magma or surrounding solid rocks, become concentrated by circulating hot fluids and can be redeposited, under favorable temperature and pressure conditions, to form rich mineral veins. The active volcanic vents along the spreading mid-ocean ridges create ideal environments for the circulation of fluids rich in minerals and for ore deposition. Water as hot as 380 °C gushes out of geothermal springs along the spreading centers. The water has been heated during circulation by contact with the hot volcanic rocks forming the ridge. Deep-sea hot springs containing an abundance of dark-colored ore minerals (sulfides) of iron, copper, zinc, nickel, and other metals are called "black smokers."

On rare occasions, such deep-sea ore deposits are later exposed in remnants of ancient oceanic crust that have been scraped off and left ("beached") on top of continental crust during past subduction processes. The Troodos Massif on the Island of Cyprus is perhaps the best-known example of such ancient oceanic crust.
Cyprus was an important source of copper in the ancient world, and Romans called copper the "Cyprian metal"; the Latin word for copper is cyprium.

Oil and natural gas are the products of the deep burial and decomposition of accumulated organic material in geologic basins that flank mountain ranges formed by plate-tectonic processes. Heat and pressure at depth transform the decomposed organic material into tiny pockets of gas and liquid petroleum, which then migrate through the pore spaces and larger openings in the surrounding rocks and collect in reservoirs, generally within 5 km of the Earth's surface. Coal is also a product of accumulated decomposed plant debris, later buried and compacted beneath overlying sediments. Most coal originated as peat in ancient swamps created many millions of years ago, associated with the draining and flooding of landmasses caused by changes in sea level related to plate tectonics and other geologic processes. For example, the Appalachian coal deposits formed about 300 million years ago in a low-lying basin that was alternately flooded and drained.

Geothermal energy can be harnessed from the Earth's natural heat associated with active volcanoes or geologically young inactive volcanoes still giving off heat at depth. Steam from high-temperature geothermal fluids can be used to drive turbines and generate electrical power, while lower temperature fluids provide hot water for space-heating purposes, heat for greenhouses and industrial uses, and hot or warm springs at resort spas. For example, geothermal heat warms more than 70 percent of the homes in Iceland, and The Geysers geothermal field in Northern California produces enough electricity to meet the power demands of San Francisco. In addition to being an energy resource, some geothermal waters also contain sulfur, gold, silver, and mercury that can be recovered as a byproduct of energy production.

As global population increases and more countries become industrialized, the world demand for mineral and energy resources will continue to grow. Because people have been using natural resources for millennia, most of the easily located mineral, fossil-fuel, and geothermal resources have already been tapped. By necessity, the world's focus has turned to the more remote and inaccessible regions of the world, such as the ocean floor, the polar continents, and the resources that lie deeper in the Earth's crust. Finding and developing such resources without damage to the environment will present a formidable challenge in the coming decades. An improved knowledge of the relationship between plate tectonics and natural resources is essential to meeting this challenge.

[Photo: Farmer plowing a lush rice paddy in central Java, Indonesia; Sundoro Volcano looms in the background. The most highly prized rice-growing areas have fertile soils formed from the breakdown of young volcanic deposits. (Photograph by Robert I. Tilling, USGS.)]

The long-term benefits of plate tectonics should serve as a constant reminder to us that the planet Earth occupies a unique niche in our solar system. Appreciation of the concept of plate tectonics and its consequences has reinforced the notion that the Earth is an integrated whole, not a random collection of isolated parts. The global effort to better understand this revolutionary concept has helped to unite the earth-sciences community and to underscore the linkages between the many different scientific disciplines.
As we enter the 21st century, when the Earth's finite resources will be further strained by explosive population growth, earth scientists must strive to better understand our dynamic planet. We must become more resourceful in reaping the long-term benefits of plate tectonics, while coping with its short-term adverse impacts, such as earthquakes and volcanic eruptions.
http://pubs.usgs.gov/gip/dynamic/tectonics.html
4.4375
Forces and the Laws of Motion introduces your student to Newton's Three Laws of Motion and to contact and at-a-distance forces in physics. They will learn about each of the 7 contact forces and both of the at-a-distance forces. When they are finished with the lecture they will review the new vocabulary words they learned in the unit, take an interactive laws quiz, match the forces they learned about, and distinguish between contact and at-a-distance forces. There is a 20-question final quiz as well as a certificate of completion that will allow them to review any missed answers and print their results. Click here to access the Interactive Forces & Motion Unit Study.
http://www.thesimplehomeschool.com/subscribers/6-12-unit-studies/6-12-physical-science/forces-a-motion.html
4.09375
Copyright © 2007 Dorling Kindersley

A computer is an electronic machine that obeys instructions telling it how to present information in a more useful form. Its HARDWARE is the actual machine, including parts such as the screen. The hardware stores instructions as a computer program, or SOFTWARE. Hardware and software work together to change basic data into something people can use. A long list of numbers, for example, can be presented as a colourful picture.

The body of the computer and the devices that plug into it, such as the keyboard, are called its hardware. The body contains the parts that store and process information. These include the hard disk, which stores programs and files permanently. Faster, electronic memory holds the data being processed. A chip called the processor does most of the work, helped by others that do special jobs, such as displaying images. Today's personal computer may have a big colour screen, loudspeakers, and possibly a camera. It is thousands of times more powerful than computers built around 30 years ago, which were so bulky they could fill a whole room. This improvement is due to the microprocessor (invented in 1971), which replaced hundreds of separate computer parts with a single microchip.

A computer's hard disk (usually several disks spinning together) stores information permanently as magnetic spots on the disks' surface. The hard disk is too slow to keep up with the processor, so all data has to be read from the disk into fast, electronic RAM (random-access memory) before use. RAM chips lose their contents as soon as the computer is switched off, so any new data that will be needed again must be saved on the hard disk.

Computers store and process information in the form of bits. A bit can stand for one of just two different things, such as "yes" and "no". For example, a hard disk stores information as magnetic spots with the magnetism pointing up or down. When bits are grouped together, they allow more choices. Every extra bit doubles the possibilities, so a byte (a group of eight bits) can stand for 256 different things. A modern PC can handle billions of bits per second and store up to 120 gigabytes (nearly a trillion bits) on its hard disk.

A computer needs software, which consists of sets of instructions called programs, to tell it what to do. Different programs allow people to write letters, play games, or connect to the Internet. Software is written in special languages by computer programmers. The languages are then translated into instructions that can be understood by the computer's microprocessor.
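The doubling rule and the disk figure above are easy to verify with a few lines of Python (a sketch; the 120 GB example uses decimal gigabytes, 1 GB = 10^9 bytes):

    # Each extra bit doubles the number of distinct values a group of bits can hold
    for bits in (1, 2, 8):
        print(bits, "bit(s) ->", 2 ** bits, "possibilities")   # 2, 4, 256

    # A 120-gigabyte hard disk, expressed in bits
    disk_bits = 120 * 10**9 * 8
    print(disk_bits)   # 960,000,000,000 -- nearly a trillion bits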
http://www.factmonster.com/dk/science/encyclopedia/computers.html
4.125
In September the chlorophyll starts to disappear from the leaves of plants. This reveals the yellow and red pigments that bring us the explosion of colors we now have in store. A research team at Umeå Plant Science Center (UPSC) has now identified a protein that helps bring out the color splendor of plants in the fall. The research team's findings are being published in the scientific journal PNAS, Proceedings of the National Academy of Sciences.

The different pigments in a leaf are bound to different proteins. Most of the chlorophyll, which lends plants their green color, is bound to a protein called LHCII. Every individual protein is incredibly small (nearly a million times smaller than the human eye can perceive), but it is possible to see them if there are many of them together. LHCII is probably the most prevalent membrane protein on Earth. There is so much of it, in fact, that it is visible from space -- in satellite images of the Earth the tropical and temperate forest areas are green.

In the tropics there is no autumn, but in our climate deciduous trees and other perennials lose their chlorophyll in the fall. The reason for this is that the proteins in the leaves contain amino acids that the plant needs to recycle. The leaves' proteins are therefore degraded and the amino acids are stored in the trunk, branches, and roots until next year, when they are used as building blocks for new leaves. Other proteins, so-called proteases, have the task of degrading these proteins, and there is extensive research under way in this field. For example, the 2004 Nobel Prize in Chemistry went to three scientists who work with protein degradation. Proteases are extremely important for all living organisms, but the proteases that break down chlorophyll-binding proteins are the only ones whose activities can be observed from space.

Working with the model plant mouse-ear cress (Arabidopsis thaliana), a research team at Umeå Plant Science Center (UPSC), in association with a Polish scientist, has identified a protease that degrades LHCII. The researchers assumed that the protease belonged to a certain family of proteases, the so-called FtsH proteases, and they used genetically modified mouse-ear cress plants in which various FtsH proteases had been removed. One of these plants had a severely impaired ability to degrade LHCII. This led the researchers to conclude that the protease FtsH6 helps degrade LHCII.

The article is titled "AtFtsH6 is involved in the degradation of the light harvesting complex II during high light acclimation and senescence." The authors are Agnieszka Zelisko, Maribel Garcia-Lorenzo, Grzegorz Jackowski, Stefan Jansson, and Christiane Funk. Grzegorz Jackowski works at Adam Mickiewicz University in Poznan; among the UPSC scientists, Stefan Jansson works at the Department of Plant Physiology and the others at the Department of Biochemistry. The article is being published this week in the Early Edition articles of Proceedings of the National Academy of Sciences of the USA ( http://www.pnas.org/papbyrecent.shtml).

Umeå Plant Science Center, UPSC, was established in 1999 in collaboration between the Department of Plant Physiology at Umeå University and the Department of Forest Genetics and Plant Physiology at the Swedish University of Agricultural Sciences (SLU) in Umeå.

Source: EurekAlert.
http://psychcentral.com/news/archives/2005-09/src-pba090205.html
4.15625
The history of African Americans in the U.S. Civil War is marked by the 186,097 African Americans (7,122 officers and 178,975 enlisted men) who served in 163 units of the Union Army; many more African Americans served in the Union Navy. Both free African Americans and runaway slaves joined the fight. On the Confederate side, blacks, both free and slave, were used for labor, but the issue of whether to arm them, and under what terms, became a major source of debate amongst those in the South.

The U.S. Congress passed a confiscation act in July 1862 that freed slaves of owners in rebellion against the United States, and a militia act that empowered the President to use freed slaves in any capacity in the army. President Abraham Lincoln, however, was concerned with public opinion in the four border states that remained in the Union, as well as with northern Democrats who supported the war. Lincoln opposed early efforts to recruit black soldiers, even though he accepted their use as laborers. Union Army setbacks in battles over the summer of 1862 forced Lincoln into the more drastic response of emancipating all slaves in states at war with the Union. In September 1862 Lincoln issued his preliminary proclamation that all slaves in rebellious states would be free as of January 1.

Recruitment of colored regiments began in full force following the Emancipation Proclamation of January 1863. The United States War Department issued General Order Number 143 on May 22, 1863, establishing a "Bureau of Colored Troops" to facilitate the recruitment of African-American soldiers to fight for the Union Army. Regiments, including infantry, cavalry, light artillery, and heavy artillery units, were recruited from all states of the Union and became known as the United States Colored Troops (USCT). Approximately 175 regiments of over 178,000 free blacks and freed slaves served during the last two years of the war, and bolstered the Union war effort at a critical time. By war's end, the USCT were approximately a tenth of all Union troops. There were 2,751 USCT combat casualties during the war, and 68,178 losses from all causes.

USCT regiments were led by white officers, and rank advancement was limited for black soldiers. The Supervisory Committee for Recruiting Colored Regiments in Philadelphia opened a Free Military Academy for Applicants for the Command of Colored Troops at the end of 1863. For a time, black soldiers received less pay than their white counterparts. Famous members of USCT regiments included Martin Robinson Delany and the sons of Frederick Douglass. Soldiers who fought in the Army of the James were eligible for the Butler Medal, commissioned by that army's commander, Maj. Gen. Benjamin Butler.

USCT regiments fought in all theaters of the war, but mainly served as garrison troops in rear areas. The most famous USCT action took place at the Battle of the Crater during the Siege of Petersburg, where regiments of USCT troops suffered heavy casualties attempting to break through Confederate lines. Other notable engagements include Battery Wagner and the Battle of Nashville. USCT soldiers often became victims of battlefield atrocities, most notably at Fort Pillow. The prisoner exchange cartel broke down over the Confederacy's position on black prisoners of war: Confederate law stated that blacks captured in uniform were to be tried as slave insurrectionists in civil courts -- a capital offense.
Although this rarely, if ever, happened, it became a stumbling block for prisoner exchange. USCT soldiers were among the first Union forces to enter Richmond, Virginia, after its fall in April 1865. The 41st USCT regiment was present at the surrender of the Army of Northern Virginia at Appomattox. Following the war, USCT regiments served as occupation troops in former Confederate states.

In actual numbers, African American soldiers comprised 10% of the entire Union Army. Losses among African Americans were high: approximately 20% of all African Americans enrolled in the military lost their lives during the Civil War. Notably, their mortality rate was significantly higher than that of white soldiers:

[We] find, according to the revised official data, that of the slightly over two million troops in the United States Volunteers, over 316,000 died (from all causes), or 15.2%. Of the 67,000 Regular Army (white) troops, 8.6%, or not quite 6,000, died. Of the approximately 180,000 United States Colored Troops, however, over 36,000 died, or 20.5%. In other words, the mortality rate amongst the United States Colored Troops in the Civil War was thirty-five percent greater than that among other troops, notwithstanding the fact that the former were not enrolled until some eighteen months after the fighting began. - Herbert Aptheker

In general, white soldiers and officers believed that black men lacked the ability to fight and fight well. In October 1862, African American soldiers of the 1st Kansas Colored Volunteers silenced their critics by repulsing attacking Confederate guerrillas at the Skirmish at Island Mound, Missouri. By August 1863, 14 Negro regiments were in the field and ready for service. At the Battle of Port Hudson, Louisiana, on May 27, 1863, the African American soldiers bravely advanced over open ground in the face of deadly artillery fire. Although the attack failed, the black soldiers proved their capability to withstand the heat of battle, with General Banks recording in his official report: "Whatever doubt may have existed heretofore as to the efficiency of organizations of this character, the history of this day proves...in this class of troops effective supporters and defenders."

The most widely known battle fought by African Americans was the assault on Fort Wagner, South Carolina, by the 54th Massachusetts Infantry on July 18, 1863. The 54th volunteered to lead the assault on the strongly fortified Confederate positions. The soldiers of the 54th scaled the fort's parapet, and were only driven back after brutal hand-to-hand combat. Despite the defeat, the unit was hailed for its valor, which spurred further African-American recruitment, giving the Union a numerical military advantage from a population the Confederacy did not attempt to exploit until the closing days of the war.

African American soldiers participated in every major campaign of 1864-65 except Sherman's Atlanta Campaign in Georgia. The year 1864 was especially eventful for African American troops. On April 12, 1864, at the Battle of Fort Pillow, Tennessee, Confederate General Nathan Bedford Forrest led his 2,500 men against the Union-held fortification, occupied by 292 black and 285 white soldiers. After driving in the Union pickets and giving the garrison an opportunity to surrender, Forrest's men swarmed into the fort with little difficulty and drove the Federals down the river's bluff into a deadly crossfire. Casualties were high and only sixty-two of the U.S. Colored Troops survived the fight.
Many accused the Confederates of perpetrating a massacre of black troops, and the controversy continues today. The battle cry for the Negro soldier east of the Mississippi River became "Remember Fort Pillow!"

The Battle of Chaffin's Farm, Virginia, became one of the most heroic engagements involving African Americans. On September 29, 1864, the African American division of the Eighteenth Corps, after being pinned down by Confederate artillery fire for about 30 minutes, charged the earthworks and rushed up the slopes of the heights. During the hour-long engagement the division suffered tremendous casualties. Of the twenty-five African Americans who were awarded the Medal of Honor during the Civil War, fourteen received the honor as a result of their actions at Chaffin's Farm.

Although black soldiers proved themselves reputable soldiers, discrimination in pay and other areas remained widespread. According to the Militia Act of 1862, soldiers of African descent were to receive $10.00 a month, with an optional deduction for clothing at $3.00. In contrast, white privates received thirteen dollars per month plus a clothing allowance of $3.50. Many regiments struggled for equal pay, some refusing any money until June 15, 1864, when Congress granted equal pay for all black soldiers. Besides discrimination in pay, colored units were often disproportionately assigned laborer work. General Daniel Ullman, commander of the Corps d'Afrique, remarked "I fear that many high officials outside of Washington have no other intention than that these men shall be used as diggers and drudges."

Like the army, the Union Navy was officially ambivalent at the beginning of the war towards the use of either Northern free blacks or runaway slaves. The constant stream of escaped slaves seeking refuge aboard Union ships, however, forced the navy to formulate a policy towards them. Secretary of the Navy Gideon Welles, in a terse order, pointed out the following:

It is not the policy of this Government to invite or encourage this kind of desertion and yet, under the circumstances, no other course...could be adopted without violating every principle of humanity. To return them would be impolitic as well as cruel...you will do well to employ them. - Gideon Welles, Secretary of the Navy

In time, the Union Navy would see almost 16% of its ranks supplied by African Americans, performing in a wide range of enlisted roles. In contrast to the Army, the Navy from the outset not only paid equal wages to white and black sailors, but offered considerably more for even entry-level enlisted positions. Food rations and medical care were also better than in the Army, with the Navy benefiting from a regular stream of supplies from Union-held ports. Becoming a commissioned officer, however, was still out of reach for black sailors. Only the rank of petty officer would be offered to black sailors, and in practice only to free blacks, who often were the only ones with naval careers long enough to justify the rank.

Jane E. Schultz, in her essay "Seldom Thanked, Never Praised, and Scarcely Recognized: Gender and Racism in Civil War Hospitals," wrote, "Approximately 10 percent of the Union's female relief workforce was of African descent: free blacks of diverse education and class background who earned wages or worked without pay in the larger cause of freedom, and runaway slaves who sought sanctuary in military camps and hospitals."
"Nearly 40% of the Confederacy's population were unfree...the work required to sustain the same society during war naturally fell disproportionately on black shoulders as well. By drawing so many white men into the army, indeed, the war multiplied the importance of the black work force." Even Georgia's Governor Joseph E. Brown noted that "the country and the army are mainly dependent upon slave labor for support." The impressment of slaves, and conscription of freedmen, into direct military labor, initially came on the impetus of state legislatures, and by 1864 6 states had regulated impressment (Florida, Virginia, Alabama, Louisiana, Mississippi, and South Carolina, in order of authorization) as well as the Confederate Congress. Slave labor was used in a wide variety of support roles, from infrastructure and mining, to teamster and medical roles such as hospital attendants and nurses. The idea of arming slaves for use as soldiers was speculated on from the onset of the war, but not seriously considered by Davis or others in his administration. As the Union saw victories in the fall 1862 and the spring of 1863, however, the need for more manpower was acknowledged by the Confederacy in the form of conscription of white men, and the national impressment of free and slave blacks into laborer positions. State militias composed of freedmen were offered, but the War Department spurned the offer. One of the more notable state militias was the all black 1st Louisiana Native Guard, a militia unit composed of free men of color. The unit was short lived, and forced to disband in February 1862. The unit was "intended as a response to demands from members of New Orleans' substantial free black population that they be permitted to participate in the defense of their state, the unit was used by Confederate authorities for public display and propaganda purposes but was not allowed to fight." A Union army regiment was later formed under the same name after General Butler took control of the city. In January 1864, General Patrick Cleburne and several other Confederate officers in the Army of the Tennessee proposed using slaves as soldiers in the national army to buttress falling troop numbers. Cleburne recommended offering slaves their freedom if they fought and survived. Confederate President Jefferson Davis refused to consider Cleburne's proposal and forbade further discussion of the idea. In fact, a number of prominent generals dissented, including Howell Cobb, Beauregard, and Anderson. Despite the suppression of Cleburne's idea, the question of enlisting slaves into the army had not faded away, but had become a fixture of debate amongst the columns of Southern newspapers and southern society in the winter of 1864. Representative of the two sides in the debate were the Richmond Enquirer and the Charleston Courier: ...whenever the subjugation of Virginia or the employment of her slaves as soldiers are alternative propositions, then certainly we are for making them soldiers, and giving freedom to those negroes that escape the casualties of battle. - Nathaniel Tyler in the Richmond Enquirer Slavery, God's institution of labor, and the primary political element of our Confederation of Government, state sovereignty...must stand or fall together. To talk of maintaining independence while we abolish slavery is simply to talk folly. - Charleston Courier On January 11, 1865 General Robert E. Lee wrote the Confederate Congress urging them to arm and enlist black slaves in exchange for their freedom. 
On March 13, the Confederate Congress passed legislation to raise and enlist companies of black soldiers. The legislation was then promulgated into military policy by Davis in General Order No. 14 on March 23, 1865. The emancipation offered, however, was reliant upon a master's consent: "no slave will be accepted as a recruit unless with his own consent and with the approbation of his master by a written instrument conferring, as far as he may, the rights of a freedman." Despite calculations by Virginia's state auditor that some 4,700 free black males and more than 25,000 male slaves between eighteen and forty-five years of age were fit for service, only a small number were raised in the intervening months, most coming from two local hospitals, Windsor and Jackson, as well as a formal recruiting center created by General Ewell and staffed by Majors Pegram and Turner. A month after the order was issued, the number was still "forty or fifty colored soldiers, enlisted under the act of congress". In his memoirs, Davis stated, "There did not remain time enough to obtain any result from its provisions."

A few other lesser-known Confederate militia units of free men of color were raised throughout Louisiana at the beginning of the war. These units included the Baton Rouge Guards under Capt. Henry Favrot, portions of the Pointe Coupee Light Infantry under Capt. Ferdinand Claiborne, and the Augustin Guards and Monet's Guards of Natchitoches under Dr. Jean Burdin. The only official duties ever given to the Natchitoches units were funeral honor guard details.

One account of an unidentified African American fighting for the Confederacy, from two Southern 1862 newspapers, tells of "a huge negro" fighting under the command of Confederate Major General John C. Breckinridge against the 14th Maine Infantry Regiment in a battle near Baton Rouge, Louisiana, on August 5, 1862. The man was described as being "armed and equipped with knapsack, musket, and uniform", and helping to lead the attack. Whether the man was a freedman or a slave is unknown.

THE BATTALION from Camps Winder and Jackson, under the command of Dr. Chambliss, including the company of colored troops under Captain Grimes, will parade on the square on Wednesday evening, at 4 o'clock. This is the first company of negro troops raised in Virginia. It was organized about a month since, by Dr. Chambliss, from the employees of the hospitals, and served on the lines during the recent Sheridan raid. - Richmond Sentinel, March 21, 1865

Naval historian Ivan Musicant has written that there were blacks who served in the Confederate Navy. Musicant wrote:

Free blacks could enlist with the approval of the local squadron commander, or the Navy Department, and slaves were permitted to serve with their master's consent. It was stipulated that no draft of seamen to a newly commissioned vessel could number more than 5 per cent blacks. Though figures are lacking, a fair number of blacks served as coal heavers, officers' stewards, or at the top end, as highly skilled tidewater pilots.

Prisoner exchanges between the Union and Confederacy were suspended when the Confederacy refused to return black soldiers captured in uniform. In October 1862, the Confederate Congress issued a resolution declaring that all Negroes, free and slave, should be delivered to their respective states "to be dealt with according to the present and future laws of such State or States".
In a letter to General Beauregard on this issue, Secretary Seddon pointed out that "Slaves in flagrant rebellion are subject to death by the laws of every slave-holding State" but that "to guard, however, against possible abuse...the order of execution should be reposed in the general commanding the special locality of the capture." However, Seddon, concerned about the "embarrassments attending this question", urged that former slaves be sent back to their owners. As for freemen, they would be handed over to Confederates for confinement and put to hard labor. The experience of colored troops and their white officers in prison life was not significantly different from that of members of white units.
http://www.bilerico.com/2008/05/black_history_us_colored_troops.php
4.15625
Sunspot number is the most important index for tracking the level of solar activity. It is calculated as R = k (10g + n), where n is the number of individual spots, g is the number of sunspot groups, and k is a constant that is different for each station. Records of sunspot number go back to the mid-17th century. This site lists values of the daily mean sunspot number from 1818 to the present, monthly sunspot counts from 1749 to the present, and yearly sunspot counts from 1700 to the present. The most obvious period in these records is an 11.4-year period called the solar cycle.

Another measure of solar activity is provided by the 10.7 centimeter radio flux. Since intense emissions at radio wavelengths are produced in magnetically active regions, a proxy for sunspot number can be created using measurements of the radio flux. A statistical correlation between the sunspot number and the f10.7 flux, based on 40 years of data, is given by R = 1.14 S - 73.21, where S is the solar flux (density) value in solar flux units. Tabulated values of the f10.7 flux for use in this calculation are available from DRAO, National Research Council of Canada.
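Both calculations described above fit in a few lines of Python (a sketch; the station constant k and the example inputs are illustrative):

    def wolf_number(groups, spots, k=1.0):
        # Relative (Wolf) sunspot number: R = k * (10g + n)
        return k * (10 * groups + spots)

    def sunspot_from_flux(s):
        # Proxy for R from the 10.7 cm radio flux, using the fit quoted above
        return 1.14 * s - 73.21

    print(wolf_number(groups=3, spots=15))     # 45.0
    print(round(sunspot_from_flux(150), 1))    # 97.8 for S = 150 solar flux units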
http://www.windows2universe.org/spaceweather/solar_cycle.html
4.40625
How are you going to get students engaged? Develop student interest and link their prior knowledge. Start the Student Learning Map of the unit with students. Preview key vocabulary with students. Read aloud your favorite story. As you read, complete the "Strategies for Active Reading" graphic organizer with the students. Use the document camera as you read, think aloud, and complete the organizer so that the students can follow along. Discuss what constitutes "good reading" with the students and have them complete a reading interest survey. Finish by asking those who are willing to bring in their favorite stories to share.

1. Tell the students that they are in luck! They will be listening to you read your favorite story!
2. Give each student the attached Strategies for Active Reading graphic organizer. Be ready to fill one out using a document camera so that your students are able to see what you are writing.
3. As you read, complete the organizer. Using a think-aloud strategy, let them hear the connections you are making.
4. After you finish, begin a discussion with the students about why this is your favorite story and what, in your eyes, constitutes a good book or a good story.
5. Move the focus of the discussion to the students, and list -- or let them list -- on the board their standards for good reading.
6. See if you can get titles of books and stories that they remember reading and enjoying.
7. Have them complete a Reading Interest Inventory and collect them.
8. Homework: Ask those who can to bring in a favorite story to share.

Materials: document camera, favorite story, attached graphic organizers
http://publish.learningfocused.com/5515842/5/1/
4.03125
Transistors are electronic switches that control the flow of current from one part of a circuit to the next, and form the foundations of modern computing. Transistors are usually made from semiconductors - materials that allow electric current to flow through them only under certain controllable circumstances. Current research is looking into the possibility of using doped diamond as a semiconductor in order to create hard-wearing transistors with a wide band gap, high thermal conductivity, and the ability to withstand high electric fields without breaking. Now, Mutsuko Hatano and co-workers at Tokyo Institute of Technology, together with colleagues across Japan, have succeeded in fabricating a new design of transistor using diamond doped with phosphorus and boron [1]. The new transistor operates accurately at high temperatures and can prove useful in power devices. "Naturally, diamond is an insulator," explains Hatano. "However it becomes semi-conducting when doped with boron or phosphorus. We and our co-workers at the National Institute of Advanced Industrial Science and Technology discovered a unique way of selectively growing doped diamonds. We applied this technique to fabricate diamond junction field-effect transistors." Junction field-effect transistors (JFETs) work by altering the conductivity of the channel through which the current flows. Each transistor has a drain, a source, and a gate. The state of the gate (either 'on' or 'off') determines the current flowing through the channel between the source and the drain. Rather like pinching or squeezing a hosepipe to prevent water flowing, JFETs allow the channel to remain open (on state) or closed (off state). Hatano and her team built up JFETs by doping diamond with impure gases containing either boron or phosphorus during the chemical vapor deposition process. Phosphorus has five free electrons as opposed to diamond's four, so every atom effectively adds an extra electron (n-type doping). Boron, on the other hand, has only three electrons, so every atom creates a 'hole' (p-type doping). The team created the desired shape and structure of each transistor. The flow channel was made up of p-type diamond, with the n-type diamond forming a unique structure of two gates placed on either side of the channel. When open, the p-type channel is full of holes, meaning there is plenty of space for the holes in the current to flow. However, once a voltage is passed simultaneously through the n-type gates, the holes are filled in to create a depletion layer that closes off the channel to current. The flow of current can therefore be carefully controlled according to the voltage passing through the gates. This is the first transistor of its kind to be made from diamond, and it functions accurately even at higher temperatures. The ability of the lateral-gated diamond transistors to withstand high currents and high voltages when stacked vertically means the new devices could be very valuable in power applications. "The present work is still just the first step," explains Hatano. "We are going to evaluate and improve the device characteristics. At present, diamond substrates are expensive to use as a power device on a large scale. This will be solved if diamond is grown on another, cheaper substrate in future." Reference: 1. Takayuki Iwasaki et al.,
Diamond junction field-effect transistors with selectively grown n+-side gates. Applied Physics Express 5 (2012).
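To get a feel for the gate action described in the article, here is a minimal sketch using the standard textbook (Shockley square-law) JFET model. This is not the measured behavior of the diamond device: the I_DSS and pinch-off values are hypothetical placeholders, and the model is written in the usual n-channel convention (the diamond transistor is the p-channel analogue, with holes as carriers).

def jfet_drain_current(v_gs, i_dss=1e-3, v_p=-4.0):
    # Saturation drain current of a generic n-channel JFET:
    # I_D = I_DSS * (1 - V_GS/V_P)^2. The channel is fully pinched
    # off (the depletion layers meet) once V_GS reaches V_P.
    if v_gs <= v_p:
        return 0.0
    return i_dss * (1.0 - v_gs / v_p) ** 2

for v in (0.0, -1.0, -2.0, -3.0, -4.0):
    print(f"V_GS = {v:+.1f} V -> I_D = {jfet_drain_current(v) * 1e3:.3f} mA")

Sweeping the gate voltage toward pinch-off shows the drain current being squeezed smoothly to zero, the same open-to-closed channel behavior the hosepipe analogy describes.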
http://www.electronics.ca/presscenter/articles/1942/1/Tokyo-Institute-of-Technology-Has-Developed-a-New-Type-of-Transistor-Made-from-Diamond-that-Could-Prove-Valuable-for-High-Power-Applications/Page1.html/print/1942
4.125
CRAB NEBULA: Supernova Remnant & Pulsar
What Do These Images Tell Us?
The Crab Nebula consists of a pulsar, a rapidly rotating neutron star at the center, surrounded by a bright diffuse nebula. The nebula, which is about six light years across, is expanding outward at 3 million miles per hour. The filamentary system visible in the optical images is near the outer boundary of this expansion. Both the nebula and the pulsar are bright sources of radiation at all wavelengths.
The radiation we observe from the Crab Nebula is produced mainly by high-energy particles accelerated by the neutron star. These energetic particles, which near the neutron star are thought to include anti-matter positrons as well as electrons, spiral around magnetic field lines in the nebula and give off radiation by the "synchrotron" process.
Comparing the X-ray, optical, infrared, and radio images of the Crab shows that the nebula appears most compact in X-rays and largest in the radio. The X-ray nebula shown in the Chandra image is about 40% as large as the optical nebula, which is in turn about 80% as large as the radio image. This can be understood by following the history of energetic electrons produced by the neutron star. Electrons with very high energies radiate mostly X-rays.
Chandra's X-ray image of the Crab Nebula directly traces the most energetic particles being produced by the pulsar. This amazing image reveals an unprecedented level of detail about the highly energetic particle winds and will allow scientists to probe deep into the dynamics of this cosmic powerhouse.
As time goes on, and the electrons move outward, they lose energy to radiation. The diffuse optical light comes from intermediate-energy particles produced by the pulsar. The optical light from the filaments is due to hot gas at temperatures of tens of thousands of degrees.
The infrared radiation comes from electrons with energies lower than those producing the optical light. Additional infrared radiation comes from dust grains mixed in with the hot gas in the filaments.
Radio waves come from the lowest-energy electrons. They can travel the greatest distance and define the full extent of the nebula.
The Crab's central pulsar was discovered in 1968 by radio astronomers. The pulsar was then identified as a source of periodic optical and X-ray radiation. The periodic flashes of radiation are caused by a beam from the rapidly rotating neutron star.
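The quoted size ratios chain together into a quick calculation; a minimal sketch, treating the six-light-year figure as the full radio extent (an assumption, since the text does not say which band it refers to):

radio_ly = 6.0                 # full extent of the nebula (radio)
optical_ly = 0.80 * radio_ly   # optical nebula ~80% as large: 4.8 ly
xray_ly = 0.40 * optical_ly    # X-ray nebula ~40% of optical: ~1.9 ly
print(radio_ly, optical_ly, round(xray_ly, 1))

So the X-ray nebula spans only about a third (0.40 x 0.80 = 32%) of the radio extent, consistent with the highest-energy electrons radiating away their energy before they can drift far from the pulsar.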
http://chandra.harvard.edu/photo/0052/what.html?set=cxcpub
4.375
As Fast As a Mouse! The ultimate goal of reading is comprehension, and in order to fully comprehend texts, students must be able to read them fluently. To be able to read fluently means to be able to read quickly, smoothly, and expressively. This lesson will help students do just that through repeated readings, timed readings, and one-minute reads.
For each pair of students: a copy of If You Give a Mouse a Cookie and a stopwatch. For each student: a reading chart to record the number of words read after each one-minute read, illustrated like so: a mouse works from a cookie, up to a glass of milk, up to a straw, and so on, as in the book; a repeated-reading checklist to see whether the student remembered words, read faster, read smoother, and read with more expression after each one-minute read; and a pencil.
- Introduce the lesson by explaining that in order to become successful readers, we must read quickly, smoothly, and with expression. Introduce the term "fluency." "Have you ever heard that practice makes perfect? Well, it's true with reading. The more practice you have reading a story, the easier it becomes to understand what the story is actually about. This is called fluency." "Today we are going to learn how to read fluently by practicing reading this book, If You Give a Mouse a Cookie, as fast as we can." Introduce the book with an engaging booktalk. "Reading fast helps you keep up with the story instead of reading word by word." The teacher will model how not to read by reading the first page of If You Give a Mouse a Cookie very slowly without fluency, sounding out each word and taking long pauses between each one. "I-f y-o-u g-i-v-e a (pause) m-o-u-s-e a (pause) c-o-o-k-i-e." Did that sound good? No, it was very slow and choppy and boring! Now read the same page, although this time read with fluency and expression and use different tones of voice. "If you give a mouse a cookie...." Did it sound better that time? Yes, it did; it was faster, smoother, and I used a lot more expression, didn't I? This helped me to understand what is going on in the book. That's what it means to be a fluent reader. Now you all get to practice reading fluently by reading the same book several times.
- Divide the students into pairs and allow them to time each other for one minute as they each read the book. Give each pair of students a copy of If You Give a Mouse a Cookie and a stopwatch. Make sure the students know how to use the stopwatch. Give each student a pencil and a one-minute reading chart. "You and your partner are going to practice fluency by reading If You Give a Mouse a Cookie for one minute. One person will read aloud while the other person times a minute with the stopwatch. Once your minute is up, you will count the number of words you have just read and write it on the cookie (the first thing the mouse wants). You will read again for one minute and write the number of words you read on the glass of milk (the second thing the mouse wants). You will continue to do this a few more times, writing the number of words you read each time on the next thing the mouse wants (in the book). Your goal is to get a higher number of words each time." Give enough time for at least three repeated readings. "Your partner will then fill out the repeated reading checklist. After that, you will switch and your partner will read and you'll fill out his/her checklist. I will be walking around in case you have a question or need help. Okay?"
Collect each child's one-minute reading chart as well as his/her reading checklist.
Compare the number of words the child read in his/her one-minute readings. The goal is for the number to increase each time. Review the repeated reading checklist. Hopefully, after the second and third readings, the student will have become more fluent, that is, reading smoother and more expressively.
Melton, Shealy. "Ready to Race." http://www.auburn.edu/rdggenie/connect/meltongf.html
Numeroff, Laura J. If You Give a Mouse a Cookie. 1985. Illustrated by Felicia Bond. 28 pages.
http://www.auburn.edu/academic/education/reading_genie/persp/wildgf.html
4.0625
The stereo microscope is essentially two compound microscopes designed to view samples at low magnification using light illumination. These two microscopes allow the sample to be viewed at the exact same point but at different angles, and these differing angles allow the sample to be viewed in three dimensions. This microscope is typically used to perform work that requires a closer look, such as dissection, watch-making, sorting, and microsurgery. Stereo microscopes are also used to examine the surfaces of solid specimens. Horatio S. Greenough was the man who invented the stereo microscope. He was an American instrument designer and the son of a famous sculptor. In the 1890s Greenough submitted his stereo microscope plans to the Carl Zeiss Company. The company agreed to produce the microscope, but added an additional feature to the design: image-erecting prisms. His stereo microscope design became the forefather of the modern designs that followed. Greenough's design has withstood the test of time and is still used for certain applications today.
http://www.isaicai.org/2010/12/who-invented-stereo-microscope.html
4.25
Prairie Populations is a South Dakota wildlife population census activity for students in grades 5-10. This activity is designed to provide the students with an open-ended problem-solving situation that integrates mathematics and language arts with biology. The activity is designed as a unit that takes the students through an entire learning cycle in which they investigate and experiment, learn from experts through a practical, real-life situation, and finally apply the knowledge and skills they have learned.
Students will be able to: 1) explain why scientists census wildlife populations; 2) use a variety of techniques to estimate a population; 3) apply their mathematical skills of estimation, multiplication, averaging, geometry, fractions and percents; 4) learn about waterfowl populations at a South Dakota wildlife refuge; and 5) demonstrate their knowledge and skills by conducting a census of their choosing.
Students will be presented with a population to census. They must assess the problem, design two procedures to estimate the population, determine which procedure yields the most accurate information, and make a report on the best census technique as determined by their investigations. Students will then learn about real-life population census activities at a South Dakota wildlife refuge. Finally, students will conduct and report on a census of their own choosing.
There are many instances when biologists are interested in the size of wildlife populations. A population is defined as the individuals of a single species found at a particular location at a particular time. Most often biologists census populations as a basis for wildlife management decisions. For example, an accurate estimation of the males and females in a population must be known before managers can decide how many hunting or fishing licenses can be issued. Population censuses are conducted to determine if a species is threatened or endangered. Species with very low populations are included on state and/or federal lists that ensure special consideration will be taken to protect the species from extinction. Some populations are monitored because too many of the species could cause serious habitat deterioration problems for people, livestock or agriculture crops. When populations exceed acceptable levels, control measures are initiated. These could include establishment of hunting seasons to harvest excess numbers or, in cases of insect problems, application of pesticides.
In many cases, population censuses include information about the sex and age group ratios. These data are important for predicting future trends within the population. A population with very few juveniles, for example, could indicate that the species is experiencing a disruption of the reproductive cycle that will soon result in a population crash. Biologists use this information to call for studies of the ecology of the species to determine the exact nature of the problem.
Biologists also are interested in assessing wildlife die-offs. Possible causes for bird kills are:
When the death of a large number of plants or animals occurs, biologists are called to assess the extent of the problem. Even in these situations, when the organisms are dead and therefore stationary, it is difficult to obtain accurate counts. Censusing by counting each individual in an entire area is usually too time consuming and costly, and sometimes impossible. Good estimating strategies are essential.
Scientists have developed several techniques to help determine approximate population size:
Items that will be needed are 200 (or more) toothpicks (more elaborate models could be used), a grassy field, stakes, string, paper, graph paper, pencils, calculators, tape measures, and several hula-hoops (for older students who can calculate areas of circles).
1. Review the concept of population. Ask the students to brainstorm about why biologists might want to know how many individuals there are in a particular wildlife population. Discuss their ideas and either suggest additional ones or have students contact wildlife biologists for more information.
2. Choose a large open area - grassy lawn, field, park or school yard - that should be staked out by the teacher and eventually measured by the students. An area 100' by 50' would do. Randomly scatter throughout the study area a predetermined number of models representing dead birds. The size of the area and number of birds should be chosen with consideration of the difficulty of the mathematics that will be required to complete the activity. For younger students use increments of 100 models so the calculation of fractions (or percents) will be easier. Students should not be told the size of the field or the number of models.
3. Present the students with the following problem: During the late summer of the year a motorist was passing by a tall radio tower near an open field that had some water in it. The motorist noticed many dead birds scattered about. He was so concerned by this unusual sight that he contacted a wildlife biologist in the nearest South Dakota Game, Fish and Parks office. The biologist examined the field and took a few of the birds on which to run tests to determine the cause of the tragedy. The biologist had to determine the number of birds that were lost for a report that she was required to file with the South Dakota state government. Because counting each individual bird was too time consuming and costly, the biologist wanted to devise a strategy to estimate the number of birds killed.
4. Have the students work in small groups. The students have two tasks. First, to brainstorm and/or research the possible causes that would result in a large die-off of birds as described in the scenario. Some possibilities are explained for the teacher's reference in the background section above. Second, the students should design a population estimation strategy for the biologist to use that will be the most accurate. To help in this endeavor, tell the students you have prepared a model of the situation for them to experiment with in which one toothpick represents one dead bird. Provide each team a piece of graph paper on which they can construct a scale drawing. First, have the students calculate the area of the field containing the dead birds. (Students who cannot yet calculate areas can do the activity by establishing grids of whatever size they would like and counting the number of grid boxes in the scale drawing.)
5. Each group should decide on two techniques that could be used to estimate the number of dead birds in the field. Use one trial of each of the two techniques to estimate the population. When sampling the field, students can count the models but they should not move or remove them. The hula-hoops or grids made of string can be used to delineate sample sections of the field.
6. Once a group has estimated the population using two different strategies, tell them the actual number of dead birds in the field.
Students should then calculate the accuracy of their procedures. Ask students what could be done to increase the accuracy of their procedure. If they suggest using larger samples or increased numbers of trials, have them make these improvements, recalculate the total, and see if the accuracy of their estimate is improved. Finally, have the students join hands and walk the entire length of the field, picking up each toothpick they see. What percent accuracy was obtained using this strategy? Younger students who are not yet familiar with the idea of a percent can do the entire exercise using fractions.
7. Have the student groups share their results with the other teams. The students should discuss the relative merits of each technique. How did the techniques compare in difficulty, time required, and accuracy?
8. Each student should write a report recommending a census technique to the biologist. The report should describe the census technique, contain a labeled to-scale drawing of their sampling, and explain why the student recommends that particular strategy. Remind the students that an excellent solution is one that provides high accuracy, is easy to do, and requires the least amount of time and effort.
South Dakota Experience
Ask each student to guess how many geese stop at Sand Lake National Wildlife Refuge during the spring migration. Geese migrate through South Dakota early each spring on their way to their Canadian breeding grounds and again in the fall on their return trip south. They stop over at Sand Lake to rest and eat during March and April, and again in October and November. Biologists count the number of geese to determine how many individuals use the Sand Lake resource during migration. Spring migration populations of geese at Sand Lake NWR average 600,000 birds, with peaks reaching as high as 1.3 million birds in some years.
After doing the Prairie Population activity, students should be taken to a wildlife refuge where they can visit with refuge personnel to learn about the value of the refuge to wildlife populations, find out about causes of bird deaths at the refuge, and discuss population census activities conducted at the refuge. The addresses and phone numbers of the refuges in South Dakota are listed in the Natural Source Chapter 1: South Dakota Directory.
Now that the students have an understanding of why population censusing is important and how counts can be made, and have learned about a population that is counted yearly in South Dakota, they should be prepared to use the knowledge they have acquired. Ask the students to conduct a census of any population (such as the number of dandelions in the school yard or the number of left-handed students in the school) that is of interest to them, and write a brief description of the census strategy they used and a summary of their findings. Products produced by the students can be used for evaluation.
1. Have students select a sampling technique and estimate the total from one sampling of the population of bird models. Repeat the calculation based on the average of two samples, then three samples, and so on. Have students graph the accuracy achieved using each number of trials. At what point does an increase in the number of trials no longer significantly improve the accuracy of the estimation? Students can determine the optimum number of trials that should be used in order to achieve the most accurate census.
2.
Have students test their ability to estimate populations by using Wildlife Counts, a computer wildlife-counting simulation that is used to train wildlife biologists and help them practice their skills.
The idea for this activity developed from my having heard a research presentation by Dr. Philibert and her colleagues from the University of Saskatchewan. I am grateful to Dr. Philibert for granting me permission to use the study as a model for the activity.
Philibert, Helene, G. Wobeser, and R. Clark, 1990. Estimation of Mortality in Wild Birds: Examination of Methods. U. of Saskatchewan, Saskatoon, Saskatchewan, Canada, S7N OWO.
Welty, J.C. and Luis Baptista, 1988. The Life of Birds, 4th Ed. Saunders College Publishing, N.Y.
Wildlife Counts Computer Simulation, IBM or Apple II, 2215 Meadow Lane, Juneau, AK 99801. Phone: (907) 789-0326
Dr. Erika Tallman, Education Department, Northern State University, Aberdeen, SD. 1992.
Ted Benzon, Art Carter, Maggie Hachmeister and John Wrede, all of South Dakota Dept. of Game, Fish and Parks.
John Koerner, Manager of Sand Lake National Wildlife Refuge, Columbia, SD 57433.
Special thanks are owed to Mrs. Karen Taylor's 5th grade class and Mrs. Jean Rahja's 6th grade class in Aberdeen, South Dakota for field testing the activity. Publication of the Prairie Population activity was funded by the Prairie Pothole Joint Venture of the North American Waterfowl Management Plan.
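For teachers who want to check student arithmetic, here is a minimal Python sketch of the plot-sampling estimate the activity describes. The field size matches the suggested 100' by 50' area; the plot size, sample counts, and true count are made-up classroom values:

field_area = 100 * 50           # square feet, the suggested study area
plot_area = 5 * 5               # one 5' x 5' string-grid plot (assumed size)
plot_counts = [1, 2, 0, 1, 2]   # "dead birds" counted in five sample plots

mean_count = sum(plot_counts) / len(plot_counts)
estimate = mean_count * (field_area / plot_area)

actual = 200                    # toothpicks the teacher actually scattered
accuracy = 100 * (1 - abs(estimate - actual) / actual)
print(f"Estimated {estimate:.0f} birds vs. {actual} actual ({accuracy:.0f}% accuracy)")

Averaging more plots generally tightens the estimate, which is exactly the pattern Extension 1 asks students to graph.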
http://www3.northern.edu/natsource/DAKOTA1/Prairi1.htm
4.15625
Why is Differentiated Instruction such a hot topic? It is THE bridge from Content to Learner.
- Educators need many Pieces - different kinds of instructional and assessment strategies and a variety of differentiated resources that address the wide range of knowledge, ability levels, interests, and learning styles students bring to the classroom.
- Teachers require a variety of research-based instructional strategies and resources for all students in order to meet benchmarks and Common Core standards, and to provide interventions, enrichment, and acceleration.
- Strategies and materials must support the accountability component of educating students.
What are Research-Based Differentiation Strategies? Accountable, teacher-tested instructional strategies that address a variety of student characteristics and promote student achievement.
- One of the Pieces is Research-Based Instructional Strategies. These strategies differentiate Content, Process, and/or Product.
- After pre-testing, Content may be remediated, accelerated, or enriched using basic or more complex resources. Examples of strategies are acceleration, compacting, flexible pacing, and introduction of more complex concepts.
- Process is differentiated by addressing different learning styles, levels of thinking, and kinds of thinking. The content is not different. Teachers may use Bloom's Taxonomy, Higher Level Questioning, CPS, Williams' Taxonomy, etc.
- Product is differentiated by addressing different learning styles and by providing choice in variety and different levels of complexity of products.
- Other Differentiation Strategies (see the list below)
What are Differentiated Resources? They are resources that address the wide range of knowledge, ability levels, interests, and learning styles students bring to the classroom and that address different instructional strategies.
One of the Pieces offered by Pieces of Learning is a wide selection of differentiated resources especially designed for the Differentiated Classroom.
Once I have the Strategies for Differentiated Instruction and Differentiated Resources - where is the Professional Development? Pieces of Learning's Professional Development Presenters.
Other Differentiation Strategies: Acceleration; Curriculum Compacting; Flexible Grouping; Literature Circles; Independent Study; Telescoping; Problem-Based Learning; Learning Centers; Tiered Instruction; Tic-Tac-Toe Choices; Differentiated Assessment; Brain Compatible Learning; Collaborative Learning; Project Based Learning; Inquiry Based Learning; Creative Problem Solving.
- Differentiating the Process: Our Math Rules! series seeks right answers, but the focus is the differentiation of the process to arrive at the right answer as well as enriching the content.
- Differentiating the Content for Primary High Ability Thinkers: Our P.E.T.S.(TM) (Primary Education Thinking Skills) series presents activities to teach different kinds of thinking skills to young students; the content is differentiated for high ability thinkers, and various instructional strategies including Telescoping and Enrichment are used. Another practical resource: Demystifying Differentiation in Elementary School.
- Differentiating the Content and Process Through Questioning: Our Thinking Skills series includes Asking Smart Questions, Questioning Makes the Difference, and The Quick Question Workbook, all presenting questioning strategies to use with any content area, thus differentiating the content and process.
- Differentiating Using Various Research-Based Strategies: Our Postcards From ...
series uses Bloom's Taxonomy, Williams' Taxonomy, SCAMPER, Tic-Tac-Toe, Choice, and Creative Problem Solving strategies to differentiate process. More practical resources: Demystifying Differentiation in Middle School and Demystifying Differentiation in Elementary School.
- Bloom's Taxonomy: Bloom's & Beyond, Bloom's Differentiated Enrichment Units, and Differentiating Lessons Using Bloom's Taxonomy.
- Assessment: Addressing instructional strategies (Tic-Tac-Toe, Individual Lesson Plans, and Tiered Assignments) and assessment in a variety of ways are our best sellers Activities & Assessments for the Differentiated Classroom, Successful Teaching in the Differentiated Classroom, and Activities and Assessments Using the Common Core Standards.
- Rubrics: The Products Tool Bag series and Solving the Assessment Puzzle: Piece by Piece provide rubrics for 100s of products. Use the Product Criteria Cards to create ...
- TWO Essential Resources for new and experienced educators: Teaching Tools for the 21st Century & Successful Teaching in the Differentiated Classroom assist teachers as they begin their journey into Differentiation. They put theory into practice that you can use tomorrow. The author addresses and illustrates Differentiation in a standards-based world.
http://www.differentiatedresources.com/
4.25
About the Lever: This simple machine has been around for thousands of years. Levers are used to increase force to move objects. They are composed of two parts, the handle and the fulcrum. To move the object, push on the handle. The fulcrum is the point on which the lever balances. Seesaws and bottle openers are levers. A fork is a lever in which your hand is the fulcrum. To demonstrate a lever to your class, take the children to an open area outside.
Vocabulary: lever, simple machine, force, motion
For this activity, you will need: a plank (approximately 40 inches long); a log (a section of wood similar to what you would burn in a fireplace); a large rock (with uneven edges); chart paper and markers
Directions:
1. Demonstrate that the rock is too heavy to pick up by letting the children try to lift the rock.
2. Place the log a short distance from the rock.
3. Rest the plank on the log, pushing one end under the edge of the rock.
4. Push down on the other end of the plank till the rock begins to lift up.
5. Encourage the children to take turns using the lever to lift the rock.
6. In the classroom, record the children's responses on a chart titled, "What We Learned About Levers."
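The arithmetic behind the demonstration is the law of the lever: effort force times effort-arm length equals load force times load-arm length. A minimal sketch with made-up numbers for the rock and plank:

rock_weight = 150.0   # pounds; too heavy for a child to lift directly
load_arm = 0.5        # feet from the rock end of the plank to the log
effort_arm = 2.5      # feet from the children's hands to the log

effort_needed = rock_weight * load_arm / effort_arm
print(f"Lifting the {rock_weight:.0f} lb rock takes about {effort_needed:.0f} lb of push")

With the log close to the rock, the children push with only about 30 pounds of force, which is why a rock they could not budge by hand rises easily.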
http://www.teachersdomain.org/resource/viewtext_printer_friendly/resource/16003
4.03125
Dinosaurs are a group of reptiles that dominated the land for over 160 million years. They evolved diverse shapes and sizes, from the fearsome giant Spinosaurus to the turkey-sized Microraptor, and were able to survive in a variety of ecosystems. One of the reasons for the dinosaurs' success is that they had straight legs, perpendicular to their bodies. This allowed them to move faster than other reptiles that had a sprawling stance like today's lizards and crocodiles. Most dinosaur species became extinct around 65 million years ago, but the descendants of one dinosaur group, birds, are still with us today. Dinosaurs are descended from archosaurs, a large group of reptiles that appeared about 250 million years ago. Archosaurs also gave rise to non-dinosaur reptiles such as the pterosaurs (flying reptiles), now extinct, and the ancestors of modern crocodiles. These and many other types of ancient reptiles are often wrongly called dinosaurs.
Stegosaurus, showing the upright stance shared by all dinosaurs
http://www.nhm.ac.uk/nature-online/life/dinosaurs-other-extinct-creatures/dino-directory/about-dinosaurs/index.html
4.0625
Populus is a taxonomic genus of 25-35 species of trees in the family Salicaceae. They are best known as poplars, aspens, and cottonwoods, which are native to most of the Northern Hemisphere and are used for a variety of purposes such as fuel pellets, cabinet doors, and wind breaks for landscaping. The poplar has many varieties; a few are the Balsam Poplar, the White Poplar, and the Eastern Cottonwood, and by studying these we can gain a deeper knowledge of the poplar tree.
The Balsam Poplar (Populus balsamifera) is a perennial that requires much sun but little water. It often blooms in the spring/summer time, and has yellow flowers and brown seeds. The maximum height is approximately 80 feet. The White Poplar (Populus alba) is also a perennial that thrives in the spring/summertime. It is slightly larger than the Balsam, growing to a maximum of 100 feet. This tree also has yellow flowers, but differs in its white seeds (reminiscent of its name), and it needs slightly less water than the Balsam but still a lot of sun. The Eastern Cottonwood (Populus deltoides) can outgrow all previously mentioned trees with a maximum of 190 feet. Like the White Poplar it bears yellow flowers and white seeds. As with the Balsam and White Poplar, the Eastern Cottonwood needs a lot of sunshine and grows in the spring/summertime. Its cousin, the Fremont Cottonwood (Populus fremontii), only grows to about 90 feet and bears white flowers and seeds. Like all poplars it grows in spring and summer for maximum sunshine and needs a medium amount of water.
In addition to sexual reproduction, poplars can undergo "vegetative reproduction". This form of plant reproduction is unique in that it involves no seeds or spores of any kind; instead, young plants stem off from the parent, usually from the root area. This is extremely useful for plants in large forests that are sparse and don't grow in clusters where spores or things of that nature would work successfully. It has been a curiosity to many scientists how the products of vegetative reproduction can be almost completely different from the parent plant and how their life spans are unusually long.
Hybrids and Uses
Hybrids are ever increasing in our world today. Production stops for no one, and agriculture must keep up with the times by producing better, faster, and stronger versions of its crops. Poplars are extremely tall trees and are coveted for their wood supply. Uses include kitchen cabinets and countertops, firewood, wood for furniture products, and more. Now with hybrid trees we can breed new varieties specialized for the things we need from the wood, such as pelletizing the wood for fuel, increasing forest density, and making for an easily maintained, healthier tree (easier for farmers to take care of and keep in good condition). Another perk of hybrid trees is the increasing interest from conservation groups. The hybrid trees have a better guarantee of living longer and stronger, and reproduction can be monitored easily. This makes for a perfect situation for breeders who want mass quantities of a good, long-living tree.
- Poplar Tree - Information About Gardening, Garden Guides
- Hybrid Poplar, WesMin RC&D
- Vegetative Reproduction, Wikipedia
http://www.creationwiki.org/Populus
4.03125
Lesson summary for: Webcast: Selection in action
In lecture two of a four-part series, evolutionary biologist David Kingsley discusses how just a few small genetic changes can have a big effect on morphology, using examples from maize, dog breeding, and stickleback fish. This lecture is available from Howard Hughes Medical Institute's BioInteractive website.
This lecture may be most useful for advanced high school biology courses. Clips of the lecture (now available as an indexed video with synchronized slides) might provide students with an experience similar to that of a first-year college student. An interesting and useful exercise would be to have students watch the lecture (or part of it), take notes, and then process with classmates what the experience was like (both in terms of the content they learned and the way in which the lecture format challenged them to listen, absorb, and take notes).
- All life forms use the same basic DNA building blocks.
- Artificial selection provides a model for natural selection.
- People selectively breed domesticated plants and animals to produce offspring with preferred characteristics.
- Evolution results from selection acting upon genetic variation within a population.
- Complex structures may be produced incrementally by the accumulation of smaller useful mutations.
- Speciation is the splitting of one ancestral lineage into two or more descendant lineages.
- Occupying new environments can provide new selection pressures and new opportunities, leading to speciation.
- Scientists use the similarity of DNA nucleotide sequences to infer the relatedness of taxa.
- Scientists use experimental evidence to study evolutionary processes.
- Scientists use artificial selection as a model to learn about natural selection.
- As with other scientific disciplines, evolutionary biology has applications that factor into everyday life.
http://evolution.berkeley.edu/evolibrary/search/lessonsummary.php?thisaudience=9-12&resource_id=137
4.03125
Materials: several short but interesting books (e.g., "Hey Al" or "I'm Not Going to Get Up Today"), fluency checklists
Procedure:
1. Dialogue: We have learned many things about letters and their sounds. These factors help us to read better. All people read in different ways. Today we are going to talk about the correct and incorrect ways to read. (Demonstrate the correct way to read by reading a passage quickly and smoothly. Demonstrate the wrong way to read by reading a passage choppily and without expression.) I will explain to the students that to be a good reader, you must read fast, but slowly enough to read smoothly and with emotion. Tell students: If I was telling you that a wasp was going to sting you, I wouldn't say slowly and dully, "A wasp is going to sting you." I would say it quickly and with feeling and excitement. For others to be able to understand us, we must learn to read smoothly.
2. Today we are going to practice how to read fluently. I want you to get in groups of two. You are going to pick a story from the ones I have available and read to your partner. When the first person is done, the other reader should try to read it faster and with more feeling than their partner. When finished, you will switch places and read another story. First, we will do one together. (I will read a passage using the incorrect method, that is, choppily and dully. Then I will ask a student from the class to reread the story more quickly and with more expression.)
3. I want you to practice individually. I want you to go to a quiet place in the room with a partner. You will choose another story and read to each other. I will come around and assess your reading by listening to you, and I may pass out a fluency checklist. If you need any help, ask your partner. If the two of you are unable to solve it, raise your hand and I will help you.
4. I am so proud of everyone. We will practice this at the end of the day to get more practice. Remember to always read with expression.
5. The assessment will be made through my listening to the students read to each other. I could also ask the students to evaluate each other using fluency checklists.
Reference: Murray, Bruce, ed. (2000). Reading Lesson Designs. p. 48. "Read It Like You Mean It" by Kelli Mason.
http://www.auburn.edu/academic/education/reading_genie/insights/grovergf.html
4.15625
Lakes: Biological Processes
The aquatic environment is shaped by complex interactions among a variety of physical, chemical, and biological factors. For example, physical factors such as climate, land topography, bedrock geology, and soil type influence the amount of water flowing in streams and discharging to lakes, as well as the types of materials (chemicals and particulates) found in the water. In turn, these physical and chemical factors support a community of biological organisms unique to a water environment.
The presence and abundance of light in lakes control many biological processes. Green plants convert the light energy of the Sun into chemical energy (and ultimately plant tissue) through a process called photosynthesis. As sunlight strikes water, it is reflected from the surface (much like a mirror), scattered by particles in the water, and absorbed by the water itself. Gradually, much of the light gets used up until there is not enough light energy remaining at depth to support plant photosynthesis. The surface depths of a lake that receive sufficient light to allow photosynthesis make up the euphotic zone. The lower limit of the euphotic zone is approximated by the 1-percent light level, or that depth where only 1 percent of the surface sunlight remains. The depth of the euphotic zone may range from as little as 1 meter (3 feet) in very turbid (cloudy) lakes to as much as 31 meters (100 feet) in very clear lakes.
The shallow, nearshore waters of the lake where light penetrates all the way to the bottom are called the littoral zone (see the figure). The littoral community is considered the most diverse and abundant biological community in lakes. In the littoral community, plants (macrophytes) rooted in the sediments receive enough light to grow. Some rooted plants (emergents), such as cattails, emerge from the water surface. Other plants (floating-leaved), such as water lilies, have leaves that float on the surface. Still other plants (submergents) stay entirely submerged. The diversity of plants and the structure they add to the littoral zone attract an abundance of aquatic life. Many fish build nests here, and young fish find protection among the plants from predators. A multitude of aquatic insects (food for many fish) live on and feed among the plants and sediments of the littoral. Turtles, frogs, and many other aquatic organisms call the littoral community home.
The zone of open, deeper water found farther out from the littoral zone is the pelagic community. Here, light is still abundant, and the waters are frequently mixed by wind. Tiny, free-floating plants and animals (plankton) live here along with cruising fish. The deep open water beneath the pelagic and euphotic zone is the profundal zone. The lack of light does not allow plants to grow here, but many fish and tiny crustaceans may still live here. Along the bottom of the lake lies the benthic zone, home to a variety of bottom-dwelling organisms such as catfish.
Living in the Water
Water is a medium of extreme properties that strongly shape the nature of the organisms that can survive in it. Thus, life in the water requires special adaptations. Oxygen, plentiful in the atmosphere for land animals, is much less abundant in water. Air-breathing aquatic organisms must have specialized and efficient mechanisms, such as gills, to extract oxygen from water. Except for benthic organisms that live on the lake or stream bottom, most aquatic organisms require some means to regulate their buoyancy so that they can remain suspended in the water.
Many fish have air bladders, lightweight bones, and scales—all adaptations to increase buoyancy. Plankton may have long spines or elaborate shapes to increase their surface area, which slows down their sinking rate. Because ponds or streams may dry up, many aquatic organisms can enter a resting stage during development or may aestivate, as some amphibians may do in summer drought. On the other hand, the annual range in natural water temperatures (approximately 0 to 30°C, or 32 to 86°F in temperate areas) is much lower than the range that land plants and animals must face (−28 to 40°C, or −18 to 104°F).
Aquatic plants and animals interact with each other through a series of interconnecting pathways called a food web. Each different level in the food web or chain is called a trophic level because each represents a different type of productivity. The schematic below illustrates the food web and microbial loop for the pelagic zone of a typical fresh-water lake. Microbes are important in enabling and sustaining nutrient cycles.
Phytoplankton (predominantly algae), like terrestrial plants, require sunlight, water, and nutrients for photosynthesis. The algae and rooted macrophytes are primary producers in the aquatic environment. By converting light energy into chemical energy via photosynthesis, they create the food (energy) needed for the entire aquatic food web. As such, they are at the base of the food chain. Algal groups are organized generally by color, such as green algae, yellow-brown algae, and so on.*
Zooplankton, such as the shrimp-like Daphnia and Bosmina, are the primary consumers because they eat the primary producers. Zooplankton are considered herbivores because they consume plant material and are the functional equivalent of cows or rabbits on land. Planktivores are organisms that eat zooplankton. Aquatic organisms that are planktivorous include fish, such as minnows, small sunfish, and gizzard shad, as well as a variety of aquatic insect larvae. The piscivores are at the top of the aquatic food web and are fish-eating fish, such as bass, pike, and walleye. Piscivores are keystone species, in that their influence may cascade down the food web, affecting other organisms in lower trophic levels. For example, if the piscivore population is too high, they could eat all the planktivores. Without the fish planktivores to eat them, the zooplankton population could increase and do a better job feeding on the algae. This would lead to an increase in lake transparency. The reverse effect can happen if too few piscivores exist, which may be a result of overfishing or poor reproduction.
SEE ALSO Algal Blooms in Fresh Water; Ecology, Fresh-Water; Lake Health, Assessing; Lake Management Issues; Lakes: Chemical Processes; Life in Water; Microbes in Lakes and Streams; Nutrients in Lakes and Streams; Plankton.
William W. Jones
* See the "Ecology, Fresh-Water" entry for a summary table of fresh-water algal groups.
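The "1-percent light level" follows from the standard Beer-Lambert attenuation law, I(z) = I0 * exp(-kz). A minimal Python sketch; the extinction coefficients k are illustrative values chosen so the output reproduces the 1 meter and 31 meter extremes quoted above:

import math

def euphotic_depth(k_per_meter):
    # Depth where only 1% of surface light remains:
    # 0.01 = exp(-k*z)  =>  z = ln(100) / k
    return math.log(100) / k_per_meter

print(f"turbid lake (k = 4.6 /m):  {euphotic_depth(4.6):.1f} m")
print(f"clear lake  (k = 0.15 /m): {euphotic_depth(0.15):.1f} m")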
http://www.waterencyclopedia.com/Hy-La/Lakes-Biological-Processes.html
4
Titanic Primordial Pull
How Earth Got Hot?
If a time machine could take us back 4.6 billion years to the Earth's birth, we'd see our sun shining 20 to 25 percent less brightly than today. Without an earthly greenhouse to trap the sun's energy and warm the atmosphere, our world would be a spinning ball of ice. Life may never have evolved.
Life on the Edge. South Pole view from space. Credit: NASA
But life did evolve, so greenhouse gases must have been around to warm the Earth. Evidence from the geologic record indicates an abundance of the greenhouse gas carbon dioxide. Methane probably was present as well, but that greenhouse gas doesn't leave enough of a geologic footprint to detect with certainty. Rocks from the era, which contain iron carbonate instead of iron oxide, indicate that molecular oxygen wasn't around. Stone fingerprints of flowing streams, liquid oceans and minerals formed from evaporation confirm that 3 billion years ago, Earth was warm enough for liquid water.
Now, the geologic record revealed in some of Earth's oldest rocks is telling a surprising tale of collapse of that greenhouse -- and its subsequent regeneration. But even more surprising, say the Stanford scientists who report these findings in the May 25 issue of the journal Geology, is the critical role that rocks played in the evolution of the early atmosphere. "This is really the first time we've tried to put together a picture of how the early atmosphere, early climate and early continental evolution went hand in hand," said Donald R. Lowe, a professor of geological and environmental science who wrote the paper with Michael M. Tice, a graduate student investigating early life. NASA's Exobiology Program funded their work. "In the geologic past, climate and atmosphere were really profoundly influenced by development of continents."
The record in the rocks
To piece together geologic clues about what the early atmosphere was like and how it evolved, Lowe, a field geologist, has spent virtually every summer since 1977 in South Africa or Western Australia collecting rocks that are, literally, older than the hills. Some of the Earth's oldest rocks, they are about 3.2 to 3.5 billion years old. "The further back you go, generally, the harder it is to find a faithful record, rocks that haven't been twisted and squeezed and metamorphosed and otherwise altered," Lowe says. "We're looking back just about as far as the sedimentary record goes."
After measuring and mapping rocks, Lowe brings samples back to Stanford to cut into sections so thin that their features can be revealed under a microscope. Collaborators participate in geochemical and isotopic analyses and computer modeling that further reveal the rocks' histories.
The geologic record tells a story in which continents removed the greenhouse gas carbon dioxide from an early atmosphere that may have been as hot as 70 degrees Celsius (158 F). At this time the Earth was mostly ocean. It was too hot to have any polar ice caps. Lowe hypothesizes that rain combined with atmospheric carbon dioxide to make carbonic acid, which weathered jutting mountains of newly formed continental crust. Carbonic acid dissociated to form hydrogen ions, which found their way into the structures of weathering minerals, and bicarbonate, which was carried down rivers and streams to be deposited as limestone and other minerals in ocean sediments. Over time, great slabs of oceanic crust were pulled down, or subducted, into the Earth's mantle.
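In equation form, the weathering pathway described above is standard carbonate chemistry (a textbook summary, not equations quoted from Lowe and Tice):

CO2 + H2O -> H2CO3                      (carbon dioxide plus rainwater forms carbonic acid)
H2CO3 -> H+ + HCO3-                     (carbonic acid dissociates into hydrogen ions and bicarbonate)
Ca2+ + 2 HCO3- -> CaCO3 + CO2 + H2O     (bicarbonate is locked into limestone in ocean sediments)

The hydrogen ions are taken up by weathering minerals, while the bicarbonate rides rivers to the sea, just as the paragraph above describes.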
The carbon that was locked into this subducted oceanic crust was essentially lost, tied up for the 60 million years or so that it took the minerals to get recycled back to the surface or outgassed through volcanoes.
Scientists would like to know the origin of the atmospheric patches on Saturn's moon Titan, as imaged by Hubble. Image credit: Hubble Space Telescope/UA Smith
The hot early atmosphere probably contained methane too, Lowe says. As carbon dioxide levels fell due to weathering, at some point levels of carbon dioxide and methane became about equal, he conjectures. This caused the methane to aerosolize into fine particles, creating a haze akin to that which today is present in the atmosphere of Saturn's moon Titan. This "Titan Effect" occurred on Earth 2.7 to 2.8 billion years ago. The Titan Effect removed methane from the atmosphere, and the haze filtered out light; both caused further cooling, perhaps a temperature drop of 40 to 50 degrees Celsius. Eventually, about 3 billion years ago, the greenhouse just collapsed, Lowe and Tice theorize, and the Earth's first glaciation may have occurred 2.9 billion years ago.
The rise after the fall
Here the rocks reveal an odd twist in the story -- eventual regeneration of the greenhouse. Recall that 3 billion years ago, Earth was essentially Waterworld. There weren't any plants or animals to affect the atmosphere. Even algae hadn't evolved yet. Primitive photosynthetic microbes were around and may have played a role in generating methane and consuming minor amounts of carbon dioxide. As long as rapid continental weathering continued, carbonate was deposited on the oceanic crust and subducted into what Lowe calls "a big storage facility ... that kept most of the carbon dioxide out of the atmosphere."
But as carbon dioxide was removed from the atmosphere and incorporated into rock, weathering slowed down -- there was less carbonic acid to erode mountains, and the mountains were becoming lower. But volcanoes were still spewing into the atmosphere large amounts of carbon from recycled oceanic crust. "So eventually the carbon dioxide level climbs again," Lowe says. "It may never return to its full glorious 70 degrees Centigrade level, but it probably climbed to make the Earth warm again."
The jigsaw of continents that combined in the supercontinent Gondwanaland. Continental drift and plate tectonics spread the land masses across the globe.
This summer, Lowe and Tice will collect samples that will allow them to determine the temperature of this time interval, about 2.6 to 2.7 billion years ago, to get a better idea of how hot Earth got. New continents formed and weathered, again taking carbon dioxide out of the atmosphere. About 3 billion years ago, maybe 10 or 15 percent of the Earth's present area in continental crust had formed. By 2.5 billion years ago, an enormous amount of new continental crust had formed -- about 50 to 60 percent of the present area of continental crust. During this second cycle, weathering of the larger amount of rock caused even greater atmospheric cooling, spurring a profound glaciation about 2.3 to 2.4 billion years ago. Over the past few million years we have been oscillating back and forth between glacial and interglacial epochs, Lowe says. We are in an interglacial period right now. It's a transition, and scientists are still trying to understand the magnitude of global climate change caused by humans in recent history compared to that caused by natural processes over the ages.
"We're disturbing the system at rates that greatly exceed those that have characterized climatic changes in the past," Lowe said. "Nonetheless, virtually all of the experiments, virtually all of the variations and all of the climate changes that we're trying to understand today have happened before. Nature's done most of these experiments already. If we can analyze ancient climates, atmospheric compositions and the interplay among the crust, atmosphere, life and climate in the geologic past, we can take some first steps at understanding what is happening today and likely to happen tomorrow."
http://www.astrobio.net/pressrelease/1004/titanic-primordial-pull
4.15625
The ability to hear is one sense that is extremely important for human beings. Hearing is the basis of an individual's ability to communicate. The ability of an individual to hear sounds is governed by the functioning of many different parts of the head. The outer cartilage of the ear deflects sounds to the ear canal. These sounds make the tympanum vibrate. This vibration, in turn, affects the bones of the ear. The movement of these bones is picked up and sent to the brain through the nervous system. A failure in any of these parts of the system will result in the individual being unable to hear. Hearing loss associated with degeneration of ear parts may be partial. Hearing loss associated with nerve conduction, brain issues and membrane issues is usually complete and often permanent.
The auditory brainstem response audiometry test is a test used to determine the passage of signals that are received by the ears. The auditory brainstem response audiometry test is also called the ABEP test for hearing. The test is conducted by placing electrodes on the head and the back of the head. These electrodes are designed to pick up nerve impulses as they pass through the brain. This shows whether the signals are passing to the brain or not. The auditory brainstem response audiometry test is particularly useful for infants or, in rare cases, if the patient does not cooperate with other types of testing where the response of the patient is needed. The ABEP medical test is even applicable to those who may have learning difficulties and other mental disabilities.
During the test, the doctor or technician will arrange the electrode configuration for auditory brainstem response audiometry such that the entire nerve and the brainstem can be tested to see where there is any impediment. In many cases, the signals may be generated in the ear but may not be reaching the brain. For the auditory brainstem response audiometry test, the stimulus used is usually a clicking sound that is played to each ear. This sound is distinctive and should produce a distinctive reading if it is being processed by the ear and the brain. The auditory brainstem response audiometry test is therefore an advanced hearing test that is used to check for hearing loss in an individual who is unable to respond properly to the tester. The auditory brainstem response audiometry test is also used to check if the problem is related to the ear or to the nervous system.
http://www.medicalhealthtests.com/auditory-brainstem-response-audiometry.html
4.0625
Restless. Messy. Easily distracted. These are just some of the words used to describe people with attention deficit/hyperactivity disorder, more commonly referred to as attention deficit disorder (ADD). According to the Attention Deficit Disorder Association (ADDA), 5 to 10 percent of children and 3 to 6 percent of adults throughout the world have ADD. Experts estimate that one-half to two-thirds of children with the disorder will continue to have symptoms and behaviors of ADD as adults. Some adults who have ADD may not have been diagnosed as children because their symptoms were not recognized. The symptoms often become more apparent when they begin to take on the demands of adult responsibilities and develop adult relationships. ADD can have a significant social impact on a person's life, affecting relationships in the family and on the job.
What is ADD?
The official medical term for this condition is attention deficit hyperactivity disorder, or AD/HD. The popular term for the condition has been shortened to ADD. ADD has been classified into three broad categories, depending on whether the majority of symptoms are hyperactive or attention deficit, or a combination of both. People with symptoms of both hyperactivity and attention deficit have "combined type ADD"; those whose symptoms are mainly attention deficit have "predominantly inattentive type ADD." Those whose symptoms are mainly hyperactivity have "predominantly hyperactive-impulsive type ADD." Symptoms of hyperactivity tend to decrease as a person ages and are less common in adults.
ADD is caused by differences in the parts of the brain that control thoughts, emotions and actions. These differences are probably inherited. They lead people with ADD to act inappropriately and be inattentive, impulsive and disorganized. According to the attention deficit association, people with ADD have problems with these functions:
- Stopping and thinking before acting or responding
- Analyzing or anticipating needs and problems, and coming up with effective solutions
- Short-term working memory; problems receiving, storing and accessing information in short-term memory
- Becoming and staying organized
- Focusing and starting on a task
- Maintaining attention and working until a task has been completed
- Controlling emotions, motivation and activity level; jumping to conclusions, not being able to wait
In most people, the ability to perform all these functions slowly develops as they grow and mature from childhood to adulthood. The demands of adulthood require a person to be able to do all of these complex functions. In some people whose ADD went undiagnosed in childhood, problems caused by ADD may not become apparent until they are teens or adults and they begin to try to handle more complex functions and demands.
The attention deficit association says that symptoms of ADD can range from mild to severe. Symptoms that may be noticed by friends, family and coworkers include problems with learning, self-control, addiction, independent functioning, social interaction, health maintenance and organizing the tasks of daily life. ADD can cause problems like these:
- Being unable to keep a job or not keeping jobs long
- Not achieving educational goals otherwise within their ability
- Having marital difficulties
- Having accidents, traffic violations or arrests
- Frequent episodes of anger or rage
Symptoms of ADD also can be symptoms of other health, emotional, learning, cognitive and language problems.
Experts estimate that 30-50 percent of people with ADD have other psychiatric conditions, such as anxiety, depression and eating disorders. Your health care provider can determine if symptoms are from a developmental, vision, hearing, psychiatric or medical problem. If your provider makes a diagnosis of ADD, he or she may refer you to a specialist who has training and experience treating ADD. ADD is a lifelong condition. It is often less bothersome for adults than it is for children, but it is not something that goes away. When a problem is so severe that it continues to interfere with your personal life or career, you should seek help. Under federal law, ADD is considered a disability. If you have ADD, your employer must make appropriate and reasonable accommodations to help you work more efficiently and productively. ADD can't be cured, but the symptoms of ADD may be eased with certain kinds of medication and behavioral therapy or counseling. Medication works on the chemical balance in the brain to relieve symptoms so that you can concentrate on behavioral or cognitive therapy. You may be prescribed a stimulant or non-stimulant medication, although stimulant medications are probably the more effective treatment. Stimulant medications are generally safe, but can have side effects such as insomnia, nervousness and decreased appetite. If you take a stimulant medication, you should have your blood pressure and heart rate checked periodically. Non-stimulant medications include norepinephrine reuptake inhibitors, antidepressants and drugs for high blood pressure. Only two drugs have been approved to treat adults with ADD: the stimulant dextroamphetamine/amphetamine (Adderall XR) and the non-stimulant atomoxetine (Strattera). Multiple studies have been conducted to evaluate other medications, namely those used for children with ADD. These other medications include the stimulants methylphenidate (Ritalin) and dextroamphetamine (Dexedrine). Ask your health care provider for the latest information. Behavioral or cognitive therapy can help you to change certain behaviors, deal with the emotional effects, and learn to improve time management and organizational skills. Your health care provider should evaluate your medication and other treatment methods on a regular basis. Because ADD treatment is tailored to each person, your treatment may change as your life changes. You can restructure your daily routine to help you to cope with your behavior. Making your day highly structured and your schedule consistent can help.
What to do
If you think you may have ADD, talk to your health care provider or a medical professional who has experience in diagnosing and treating adults with the condition. Be prepared to provide as much early history to that professional as possible, including parent and school records.
http://www.firstcalleap.org/oth/Page.asp?PageID=OTH005351
4.28125
In science and history, consilience (also convergence of evidence or concordance of evidence) refers to the principle that evidence from independent, unrelated sources can "converge" to strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence are very strong on their own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus. The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures the distance between the Great Pyramids of Giza by laser rangefinding, by satellite imaging, or with a meter stick - in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc. Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not. As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion. When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity. Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. 
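The strength-in-numbers argument above is multiplicative, and a tiny worked example makes it concrete. The sketch below in Python assumes perfectly independent methods and an illustrative 90 percent per-method reliability (the same figures used in this article's notes); real methods are rarely this clean, so treat it as a caricature of the idea rather than a calculation about any particular science.

```python
# If each independent method is wrong with probability p, the chance that
# n agreeing methods are ALL wrong at once is p**n (independence assumed).
p_wrong = 0.10  # illustrative: each method's positive result is 90% reliable

for n in (1, 3, 5):
    all_wrong = p_wrong ** n
    print(f"{n} agreeing method(s) -> combined reliability {1 - all_wrong:.5%}")

# Output: 90.00000%, 99.90000%, 99.99900%.
# Shared failure modes break the assumption: correlated errors do not
# multiply away, which is why consilience demands genuinely unrelated
# measurement mechanisms.
```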
This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different. For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.
Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.
Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.
Deviations from consilience
Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong. Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience.
More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.
In history
Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar's civil war occurred, and so forth.
Consilience has also been discussed in reference to Holocaust denial. "We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole." That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.
Outside the sciences
In addition to the sciences, consilience can be important to the arts, ethics, and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.
History of the concept
Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment. Whewell's definition was that:
The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.
More recent descriptions include: "Where there is convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation." "Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion." -- Edward O. Wilson
Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the humanist biologist Edward Osborne Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959). Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.
A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first advocated for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th century utopian futurist and anarchist.
See also
- Scientific method
- Tree of Knowledge System
- Unified Science
- Coherentism in the philosophy of science
Notes and references
- Wilson, Edward O. (1998). Consilience: The Unity of Knowledge. New York: Knopf. ISBN 978-0-679-45077-1. OCLC 36528112.
- Shermer, Michael (2000). Denying History: Who Says the Holocaust Never Happened and Why Do They Say It? University of California Press.
- Note that this is not the same as performing the same measurement several times. While repetition does provide evidence because it shows that the measurement is being performed consistently, it would not be consilience and would be more vulnerable to error.
- Statistically, if three different tests are each 90% reliable when they give a positive result, a positive result from all three tests would be 99.9% reliable; five such tests would be 99.999% reliable, and so forth. This requires the tests to be statistically independent, analogous to the requirement for independence in the methods of measurement.
- John N. Bahcall, nobelprize.org
- Weinberg, S. (1993). Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature. New York: Vintage Books.
- Scientific American, March 2005. "The Fossil Fallacy."
- For example, in linguistics: see Converging Evidence: Methodological and Theoretical Issues for Linguistic Research, edited by Doris Schonefeld.
- For example, see Imre Lakatos, in Criticism and the Growth of Knowledge (1970).
- More generally, anything which results in a false positive or false negative.
- Shermer, Michael (2002). In Darwin's Shadow: The Life and Science of Alfred Russel Wallace. Oxford University Press. p. 319.
- Whewell, William (1840). The Philosophy of the Inductive Sciences, Founded Upon Their History. 2 vols. London: John W. Parker.
- A Companion to the Philosophy of History and Historiography, section 28. Aviezer Tucker (editor).
http://en.wikipedia.org/wiki/Consilience
4.09375
Creative Writing Lesson Plans
"How pleasant to know Mr. Lear!
Who has written such volumes of stuff!
Some think him ill-tempered and queer,
But a few think him pleasant enough."
Lesson Plan by Beverly L. Adams-Gordon
Permission to reproduce this lesson plan for non-commercial individual or classroom use is granted so long as no text is taken out of this document, including any and all copyright messages and ordering information found at the end of this article. Copyright ©1996 by Beverly L. Adams-Gordon
This series of lesson plans is designed to help you introduce reading and creative writing activities in a fun way. You can conduct the unit in as little as one week or, optimally, over a several-week period. The length of time you need for the unit will depend on how many of the assignments and other activities you pursue. In this unit, you will introduce the student to the limerick and other zany rhymes made famous by Edward Lear in the 1850's. The lessons use these limericks to introduce a number of basic poetic devices. Understanding and using these devices can improve the student's general writing. They also serve as models for teaching the disciplined, systematic art of limerick and poetry writing. All of the sessions use material found in books by Edward Lear that are a part of the Animated Artist Series. However, since these poems and limericks are in the public domain, you may find them in a variety of books and may also find additional examples that you can use in this unit.
After following this unit, the student will:
- be familiar with the work of Edward Lear;
- experience listening to poetry for enjoyment;
- be introduced to interpretive reading;
- recognize the rhyme patterns in poetry;
- recognize the rhythm pattern of limericks;
- recognize that limericks are humorous;
- appreciate limericks as a form of creative expression;
- have attempted writing a limerick;
- be familiar with other writers who contributed to the limerick form.
Lear, Limerick, and Literature lessons provide unit-related ideas appropriate for a variety of ages and abilities (5-adult). However, most students ages 10 and up will be able to successfully complete all of the assignments. A number of books containing limericks, other compositions, and the art of Edward Lear (see Bibliography), along with works by others such as Lewis Carroll, Dr. Seuss, and Shel Silverstein, will be needed. A rhyming dictionary is helpful.
The limerick is, simply, a funny five-line story told in verse that has a particular pattern of rhyme and rhythm. It is a form of light verse that was popularized by Edward Lear with the publication of his Book of Nonsense in 1846. While Edward Lear made the limerick popular, it has much earlier origins. According to the Encyclopedia Britannica and Langford Reed (a limerick scholar and historian), the limerick is believed to have originated from a song brought back from France by returning members of the Irish Brigade in the 18th century. The chorus of this song was "Will you come up to Limerick?" Impromptu verses were added to this chorus telling the adventures of persons from various Irish cities. The first English verse in something like limerick form is the jingle "Hickory, Dickory, Dock," of which the earliest printed version dates from 1744. The existence of an earlier French version of "Hickory, Dickory, Dock" offers some support to Reed's theory of a French origin. Limericks enjoyed great popularity during the 1800's and early 1900's. Many of the earlier limericks were off-color and/or libelous of public officials.
Later limericks were milder and more suited to general circulation. Rudyard Kipling, Arnold Bennett, E.V. Knox, and even Pres. Woodrow Wilson contributed some noteworthy limericks to the tradition. In the first decade of the 20th century, the limerick became very popular. Many publications and businesses organized limerick contests to promote products or readership. The sale of rhyming dictionaries is said to have boomed. Today the limerick is far better known in England and Ireland than anywhere else in the world. Yet, until very recently, competitions seldom called for creating limericks ... that is, until The Annual Worldwide Castlemoyle Kid's Limerick Writing Contest. Your students (ages 5-19) are encouraged to enter the contest as part of participating in this unit. See the rules and entry requirements at the end of this lesson plan.
About Edward Lear
Edward Lear was born May 12, 1812 at Highgate (near London), England. He was the youngest of 21 children and was raised and educated at home by his eldest sister Ann. At 15 he began supporting himself by drawing. In 1831 (at age 19) he was employed by the London Zoo and published his first work, Illustrations of the Family of the Psittacidae. This volume was the first book of colored drawings of parrots to be published in England. It was followed by similar work at the British Museum. Later (1832-36), he was employed by the 13th Earl of Derby to complete drawings of the Earl's private menagerie at Knowsley. It was here that he produced many of the rhymes contained in his first Book of Nonsense. He wrote the zany verses and illustrated them for the children of the Earl. Lear found that the exacting work of drawing animals was affecting his eyesight, so he began devoting his time to landscape painting. He traveled extensively, creating journals and carefully finished watercolor landscapes of the places he visited. Lear also earned a living by providing art lessons to the children and ladies of the wealthy aristocracy. In 1846, he gave a series of drawing lessons to Queen Victoria. His letters to his many friends were often illustrated and always full of humorous puns and deliberate misspellings. Edward Lear's Book of Nonsense, which was published in 1846, was the first children's book with illustrations provided by the author, a combination of talents common today. Lear's pictures, with their few lines expressing so much and their humor of exaggeration so appropriate to the text, were a real contributing step forward in the work of illustrating children's books. Lear published three volumes of bird and animal drawings, seven illustrated volumes of travel books, e.g., Journals of a Landscape Painter in Greece and Albania (1851), and four books of nonsense. Tennyson's Poems, with illustrations from Lear's landscapes, was published in 1889, following Lear's death. In 1996, MAXIMA New Media released the first in a series of multimedia Lear titles. It is part of Maxima's Animated Artist Series. The release of Edward Lear's Book of Nonsense was timed to coincide with the 150th anniversary of the book's first publication. Maxima New Media plans to release four Lear titles. More Nonsense is expected to be available in the Fall of 1996. Maxima New Media products are distributed to educational stores and home school suppliers in the United States by Castlemoyle Books. Copyright ©1997 by Beverly L. Adams-Gordon
How Pleasant To Know Mr.
Lear To provide the student familiarity with Edward Lear and his lyrics This portion of the unit is designed to provide a broad introduction to the limerick and the genius of Edward Lear. In this section the key element is to provide for much listening, reading, and enjoyment of the sounds of poetry. Tell the student about Edward Lear. Show them the book Edward Lear's Book of Nonsense (and/or other books in the Animated Artists series). Let them know some of the outstanding facts of his career. (See background section, "About Edward Lear" pg. 41 of Edward Lear's Book of Nonsense, or an encyclopedia for more Read to the students at least one each of Lear's limericks, lyric poems, and nonsense verses. Discuss and help the students determine inductively (see for themselves) some of the main differences between each of the major types of verse used by Lear. The following are good choices of each example: Lyric poems: How Pleasant To Know Mr. Lear (page 48 Edward Lear's Book of Nonsense) Nonsense verse: Any "Absurd ABC" (from pages 7-24, Edward Lear's Book of Nonsense) Limerick: Any "Crazy Colors" or "Funny Faces" (from pages 27-40 Edward Lear's Book of Nonsense) Make sure that you point out to the children that Lear was also famous for his illustrations. Discuss with them how he uses his art to reinforce the message of his lyrics, limericks, and verses. Select one or more of these activities which are appropriate for your students' age, interests, and/or ability: 1a) Allow the students to explore the book(s), CD-ROM multi-media program(s), and/or listen to the audio CD-ROM(s) on their own. You may wish to have other anthologies of limericks available for students to explore. See bibliography. 1b) If you make it a habit to have daily read-aloud time - you may want to begin reading Alice's Adventures in Wonderland, or Through the Looking-Glass by Lewis Carroll at some point in this unit. Carroll's books contain some of the best nonsense verse, limericks, and rhyme ever written. 1c) Have students select one of Lear's works to read to the family or class. The student should be prepared to explain why they selected the poem. They also should have read the poem several times to assure smooth reading. 1d) Students may work in small groups to make choral reading presentations of either an assigned or a selected longer Lear poem, such as the "Owl and the Pussycat," "The Duck and The Kangaroo," or "Mr. and Mrs. Discobbolos" (see bibliography). 1e) Selected pieces can be used for dictation or penmanship exercises. The "Absurd ABC" all make excellent penmanship exercises because of his use of alliteration (repetition of the beginning sounds). 1f) You may wish to assign older students to conduct research on either limericks or Lear. Have them write an original biographical sketch which can be used as part of one of the projects described in the Culminating Projects. They may wish to add Lear's picture and a mini biographical sketch to their history time-line and learn more about the period in which he lived. There are several web sites which provide information on Edward Lear, as well as numerous texts available (see bibliography). 1g) Reread or have your older students read Lear's poetic self-portrait (pg. 48 of Edward Lear's Book of Nonsense). Have students make up a similar rhyme that describes themselves. 1h) Read and discuss the vocabulary of one of Lear's longer pieces. After reading it to the student(s) provide them an un-illustrated copy of one of Lear's longer poems for them to illustrate. 
Copyright ©1996 by Beverly L. Adams-Gordon
Making The Words Sing
To recognize and appreciate that poets use various devices to create sound in poetry.
Poets bring sound to poetry by using such devices as rhyming couplets, onomatopoeia, and alliteration. Familiarity with these techniques can aid all students to become better writers because, while sound plays a primary role in the effectiveness of poetry, it also plays a role in the effectiveness of prose. Sometimes when faced with two possible ways to structure a sentence, a writer selects one rather than the other simply because it sounds better. Work with sound/meaning relationships prepares children to make such decisions as they gain sophistication as creative writers. A primary purpose, therefore, of playing with the sounds of poetry is to develop heightened awareness of the significance of word music in communication.
For each of the poetic forms below, make a brief introduction to your students and then, with the student, analyze the suggested piece to determine how it is applied. Do not over-analyze the poems. Over-analysis can ruin children's favorite poems, so keep it on the light side. Pointing out or helping them discover one or two techniques the author used from each poem is sufficient. Here is a sample lesson:
"Today we are going to learn about techniques poets use to paint pictures with words and sounds. One technique that many poets use is alliteration. An alliteration is when the poet repeats the beginning sounds of words, like in the tongue twister 'Peter Piper picked a peck of pickled peppers.' Can you see where Lear uses alliteration in this piece?"
The Comfortable Confidential Cow, who sat in her Red Morocco Arm Chair and Toasted her own Bread at the parlour Fire. (from page 8 of Edward Lear's Book of Nonsense)
Provide several sample pieces for students to discover the technique. Then send them looking for more in other poems or writing. Remember, the key thing is not to over-analyze any one poem. Follow the "alliteration hunt" by having the student attempt writing a one-line alliteration. (Or applying whatever technique was explored.) Note: The writing activity can be tied to your spelling program by asking the student to try and use one or more study/list words to complete the task.
Follow this lesson pattern with as many of the poetic forms listed below as you feel are appropriate for your students. Key forms for this unit are marked with an asterisk. Each is followed with the title and source of a good example of its use.
Stanzas: Lines or verses ordered into a complete group are called a stanza; stanzas are described, according to the number of lines they include, as couplets, tercets, quatrains, sestets, octaves, etc.
Couplets: Couplets are two lines that rhyme together and are approximately the same length. Example: The Duck and The Kangaroo by Edward Lear.
Onomatopoeia: Using words to imitate sounds is known as onomatopoeia, from a Greek word meaning "name-making." Some examples of onomatopoeia are hiss, zoom, and scratch. These words imitate natural sounds. Lear only occasionally used this device.
Alliteration: Sometimes poets bring sound to poetry by repeating the beginning sounds of words, like "smooth seams." The repetition of an initial sound, usually a consonant, is called alliteration. Example: Any of the "Absurd ABC" nonsense verses found in Edward Lear's Book of Nonsense.
Personification: The writer is using personification when he has inanimate objects or animals take on human characteristics, qualities, and/or actions.
Lear uses personification extensively in his "Absurd ABC" nonsense verses and most of his longer poems. The Owl and The Pussycat and many of Lear's longer works are excellent examples of the use of personification. (Ask your students if owls and cats really do those things.) Other examples are found in many of the "Absurd ABC" verses or The Owl and The Pussycat.
The lesson procedure presented above provides a basic activity to use with each of the poetic forms. Below are additional activities which you may wish to assign, depending on the age and ability level of your student.
2a) Alliterative Fun. Each child selects a verb with the same beginning sound as his or her first name to complete a sentence starting with the name; for example: John jumps, Mary munches, Debra dances, etc. This kind of work is "pre-poetry" in that the end product generally does not contain the clear images that are the essence of poetry.
2b) Alliterative Speech Parts. Similar to the activity above, you may create longer alliterations in a predetermined sequence of speech parts. This is an excellent way to introduce or review the parts of speech. For example:
(noun, verb) Mary marches
(noun, verb, adverb) Mary marches merrily
(adjective, noun, verb, adverb) Meritous Mary marches merrily
and so on.
Copyright ©1996 by Beverly L. Adams-Gordon
Recognizing the elements of a limerick
Students will quickly pick out the five-line pattern of the limerick as well as discover the aabba rhyming pattern. They can tap out the rhythm of Lear's limericks, perhaps on rhythm band instruments, so that they feel the stress on the second, fifth, and eighth syllables of each line. After reading several limericks to the class and showing them copies of some, help them "discover" the pattern, rhythm, and rhyme scheme of a limerick. They will need to work with several limericks to really form the generalization of the limerick pattern. Note: The selections in the first section of Edward Lear's Book of Nonsense, "Absurd ABC," are not limericks, but nonsense verses; have students examine the "Crazy Colors" and "Funny Faces" selections to determine the elements of a limerick. Here is a limerick by Eve Merriam which you can teach to your students as a culminating activity for this lesson. It will help them remember how to write a limerick and serves as a great summary. (This example is from Leaning on a Limerick.)
When a limerick line starts out first,
What follows is fated, accursed:
If the third line takes tea,
The fourth must agree.
While five, two, and one pool their thirst.
Older students should be taught how to read and record the shorthand of rhyming schemes (e.g., aabba, abba, etc.).
A limerick has a pattern of syllables as well as a rhyming scheme. This provides an excellent opportunity for introduction or review of syllabication. Clap out the syllables to help your students discover this pattern. For younger students you can create clapping charts, or slashes on the board for each clap - this way they will have a record of the syllabication scheme. Creating a series of such charts can help students discover the rhythm of various forms of verse. You can also have them tap out the rhythm on rhythm sticks and/or drums. If you are teaching primary students, you may wish to consult Math Their Way for suggestions of profitable ways to apply this activity to your math program.
The accent on specific syllables of each line also helps build the rhythm of the limerick. Have the student compare the accent (after you teach or review accent) of a limerick and another rhyming poem. Expand the clapping charts to make the accented syllables darker lines.
Verse may be defined as an obviously rhythmical use of language, manipulating accent, stress, and cadence in such a way as to create a recurrent pattern of emphasis. A simplistic explanation is that meter describes the rhythm. Explain what is meant by masculine meter (stress or emphasis on the last syllable). Discuss how these elements affect how you read a poem or limerick. Have the student practice using this information in interpretive readings. Reading to the beat is an important aspect of oral interpretation. The main meters used in English verse are iambic, trochaic, anapaestic, and dactylic (usually appearing in a catalectic form). Discussion of meter can become quite involved, but it is well worth the effort with older, interested students. For more details on meter, consult or have the student consult the Encyclopedia Britannica.
The Elements Of Limericks
There are five lines. Note: Lines 3 and 4 are often printed on the same physical line.
Rhyming scheme (a a b b a): Lines 1, 2, and 5 rhyme; lines 3 and 4 rhyme.
Number of syllables: Some of the examples in textbooks vary, but the number of syllables usually follows this pattern: Line 1, 8 syllables; Line 2, 8 syllables; Line 3, 5 syllables; Line 4, 5 syllables; Line 5, 8 syllables.
Lines 1, 2, and 5 contain 3 accented syllables. Lines 3 and 4 contain 2 accented syllables.
There is no required metrical scheme, but each line usually has a masculine ending; that is, each phrase is stressed, or emphasized, on the last syllable.
Limericks thrive on the lack of harmonious agreement between parts. They contain a broad humor that most students over 8 to 10 years old appreciate. Junior High age students seem to really appreciate the limerick form. Younger students, preschool to eight, really enjoy the rhythm and rhyme of the limerick.
Copyright ©1996 by Beverly L. Adams-Gordon
To use the limerick form to create original humorous rhymes.
Most students will find limerick writing an enjoyable activity, because limericks can be just as ridiculous as each author wishes. Once they have a solid introduction to the limerick, writing a limerick is easy. The teaching value lies in two areas.
Disciplined Expression: Limerick writing requires strict adherence to the patterns of rhyme, rhythm, and the number of syllables per line.
This will make students more acutely aware of these elements, will increase their respect for poetry as a discipline, and will guide their attitudes toward a healthy acceptance of personal orderliness and self-control.
Self-Expression: Encourage all students to be active participants - they are capable of writing and reciting their own original limericks. One of the chief underlying facts recognized as sound teaching is that a student must generate his own thoughts and apply new skills himself if the values of these skills are to be lasting. The memorization of rules and the mute acceptance of someone else's efforts cannot alone develop a student's potential. Only after considerable oral work with limericks (such as described in the above lessons) should young people be expected to compose some of their own. Even after this exposure, most students will need some guided practice before being expected to work on their own. The purpose of this lesson is to provide that guided practice.
Guided Limerick Writing
Before Beginning: Remind the students of the elements of a limerick and their rhyme and rhythm patterns. You may want to have them repeat Eve Merriam's limerick presented in the last lesson. (Use this discussion to identify those who would benefit from reteaching.)
Step 1: Write a person's name on the chalkboard. (Preferably not the name of a member of your group.) Also be sure to select a name that is easily rhymed. Then brainstorm - creating a cluster map around the name - all the words you can think of that rhyme with this name. Explain that sometimes Lear was known to create a nonsense word if he couldn't come up with a rhyme. This is often referred to as "pulling a Lear." Spend some time on this activity, creating cluster maps for other names, place names, and common objects suggested by your students. You may wish to assign students the task of creating rhyme maps of their own as independent work or homework. They should be required to come up with at least one rhyme map for a person or place and at least one for an object or event.
Step 2: Using two of the rhyming maps you created as a group, one for a place or a person and one for an object or an event, rewrite a limerick with the student's help by changing the last word of each line of one of Lear's limericks. Explain how it is sometimes necessary to change a few other words to have the limerick make some sense. Thus, this limerick by Edward Lear:
There was a Young Lady of Norway,
Who casually sat in a doorway;
When the door squeezed her flat,
She exclaimed, "What of that?"
This courageous Young Lady of Norway.
becomes:
There was a Young Lady of Trife,
Who casually sat on a knife,
When the knife it went in,
She started to grin,
So courageous, she gave up her life.
a) Have the student select a limerick to rewrite, using two rhyming maps of their own creation. It may be easier for some students to change the last word of the first line and the third line, then create rhyme maps for the words they used to replace the original words. Finally, have them change the other lines as needed to maintain rhyme, rhythm, and sense. (If you suggest students work in this manner, remember you will need to model the activity.) Ask them to read the original and the rewritten versions aloud.
b) If you have younger students, it may be sufficient to simply have them take the last word from each sentence of their favorite limerick or nonsense verse and make a list of as many rhyming words as they can.
Step 3: Model original limerick composition.
With your students, list names or place-names (including nonsense names) you could use to end the first line of a limerick. Then create a rhyming map for this word. Brainstorm with the student to list all of the possibilities. Next, have the student imagine a funny situation or action. They should create a rhyme map for these words also. Now, guide them through making some notes about this event to use in a limerick. They might also note and create rhyme maps of objects which could be related to the actions. For each of these items have them create a rhyming map. Have them use this information to write their own limericks.
c) Another way to encourage successful limerick writing is to furnish the first line for students. Students may work in small groups or individually to finish a limerick for which they are given the first line. The following lines may be useful for this:
- There was a young fellow named Katz...
- Question marks to me are a bore...
- There was a little girl named Brit...
Copyright ©1996 by Beverly L. Adams-Gordon
Culminating Limerick Projects
To provide opportunity for independent application and synthesis of previous lessons.
Ending any creative writing or expository writing unit with the creation of a published product is motivating and propels students to sustained effort. The activities below provide just a few ideas of how students can publish their own limerick works. You and/or your students may think of many additional ideas while working through the unit. Make a note of these ideas as they occur to you and present them to the students along with the activities below. Select one or more of these activities which are appropriate for your students' age, interest, and ability. You may allow your student or small groups of students to make a choice from these activities or assign a particular activity.
5a. Limerick Anthology. Have students create an anthology of limericks of their own creation or a collection of favorites found in books or on World Wide Web sites. (You may wish to discuss copyright law with your older students.) The anthology should contain at least eight limericks. Ask students to copy a different limerick on each page of their book, give credit to the author (including Anonymous), and illustrate each poem (a la Lear). Have them complete their book including all important book parts (title page, covers, etc.). Display the completed anthologies or mass-produce them as a class/group publishing project.
5b. Calendar Limericks. Have students write a limerick using the names of the months in the first line. Use these limericks along with illustrations to decorate each page of a calendar. These can be made as gifts or as a class or small group publishing project. Here is an example for a January limerick:
January brings with it the snow,
Makes our feet and fingers glow,
Thin ice it can crack
You'll fall on your back,
Off to the hospital you'll go.
5c. Progressive Limericks. Create a progressive limerick. Have the first person or group create a limerick. The succeeding groups/persons must continue the theme and characters of the first. As a result, a zany story develops. You might want to keep this going awhile, even beyond the unit. It can be on a computer (class web page) or you could use a large roll of butcher paper. Artistic students can add the illustrations. For an example of an excellent ongoing Progressive Limerick (original verses from 1925) you may wish to check out The Nantucket Limerick Website.
At the time of the writing of this lesson plan this site had no objectionable material; however, things change and not all links were checked. I suggest you download the limericks and check them carefully before presenting them to your students.
5d. Limerick Contests. Enter the Annual Worldwide Castlemoyle Kids Limerick Writing Contest or other limerick contests. Following are the rules for the Castlemoyle contest. Other contests and their rules can be located at some of the web sites listed in the bibliography.
Official Entry and Rules
Annual Worldwide Castlemoyle Kid's Limerick Writing Contest
All entries must be received by May 12 of the year of the contest. Entries received after 5 p.m. (PST), May 12 will be entered in the next year's contest.
Each entry must be sent in a separate envelope or on a separate postcard. They can also be e-mailed. (Addresses provided below.)
Each entry must include the full name, birthdate, home address and home phone number of the entrant. School name and teacher are also encouraged if the entry is part of a class project. A parent's full name and signature must also be provided on each entry - whether entered through a school project or not.
Entries will be judged on their originality, creativity, and adherence to limerick conventions. Judging will be administered by the staff at Castlemoyle Books. Entries containing (in the opinion of the contest administrators) objectionable or obscene material will be disqualified.
All entries, whether selected as a winner or not or disqualified for any reason, become the property of Castlemoyle Books to be used in any manner deemed suitable to said publishers. They may be published in an anthology of limericks, used to publicize future contests, or used in any other manner. In either case no remuneration, except as specified for winning entries, will be awarded.
Four prizes of $25 US Savings Bonds will be awarded, one each to the following age groups: 5-7 year olds; 8-10 year olds; 12-14 year olds; and 15-19 year olds.
Submission of an entry constitutes agreement by both the author of the entry and his or her parent to all contest policies and releases.
Send entries to: Castlemoyle Books, 6701 180th St SW, Lynnwood WA 98037 or FAX: 206/787-0631 or E-mail: firstname.lastname@example.org.
Extensions and Activities
To provide extension, review, and additional application of material provided in the unit.
1. Another form of disciplined poetry is the acrostic. An acrostic is a poem that has a visual dimension in that the letters of the subject word are written in bold print and form the beginning letters of the lines. Even very young children, who have just learned to differentiate among beginning sounds of words, can write or patch together simple acrostics. The very young write only one related word or phrase next to each letter of a word listed downward on their page.
2. Introduce the student to The Haiku. Haikus are three-line verses that in the hands of Japanese poet masters of the seventeenth century became delicate instruments for expressing feelings and pictures about nature and especially about seasonal variations. The poems are just 17 syllables that pattern in three lines consisting of 5 - 7 - 5 syllables respectively. Because a haiku is comparable to a single image captured on film, pictures are a practical material for triggering the word pictures that are the stuff of haiku. This is particularly true of nature shots glorifying the beauty of God's creations.
One teacher used this idea to inspire her sixth-graders to look for the essential quality within a nature scene and to express it with directness. She provided her students with a series of Japanese prints from calendars and other books purchased just for this purpose. Each child who felt inspired selected a print to think and write about. Here is a haiku written by a student who selected a delicate lotus as her subject:
The pink swamp flower
has a beauty of its own
a heavy fragrance.
3. You may also wish to introduce the student to The Senryu. The senryu is a Japanese poem structurally similar to the haiku but concentrates on human rather than physical nature. Many students, especially boys, will find this more pleasing as they often prefer to write about topics such as baseball and so on.
4. Introduce another disciplined form of poetry such as the Japanese Tanka. The tanka, like the haiku, focuses on nature and seasons but is a bit longer. It is also an older form of poetry, dating to the fourth century. It consists of five lines and 31 syllables distributed according to the pattern 5 - 7 - 5 - 7 - 7.
5a. The Cinquain is not of Japanese origin, as many imagine because of its similarity to haiku and tanka. As developed by Adelaide Crapsey, cinquains consist of 5 thought lines that follow a 2-4-6-8-2 syllable pattern for a total of 22 syllables:
...up our big Birch tree
to hide his acorn treats...
5b. Some teachers have simplified this form so that the number of words rather than syllables per line is the major structural requirement:
first line = one word
second line = two words
third line = three words
fourth line = four words
fifth line = one word
6. Teach your students to create a diamante. A diamante is a relatively structured form made up of seven lines that contain a contrast. Not only is it highly disciplined, it is also great for reviewing (even teaching) the parts of speech. The diamante, as devised by Iris Tiedt, follows this pattern:
First Line: a noun (word that names an object or idea)
Second Line: two adjectives (that describe the first noun)
Third Line: three participles (verbs with -ing or -ed endings) associated with the first noun.
Fourth Line: four nouns - two referring to the noun in line one, two to the noun in line seven.
Fifth Line: three participles that are associated with the noun given in line seven.
Sixth Line: two adjectives that describe the line seven noun.
Seventh Line: a noun that is the opposite of the one given in the first line.
Example by Beverly L. Adams-Gordon:
...praying, striving, growing
worker, friend, victim, loser
crying, lying, dying...
Copyright ©1996 by Beverly L. Adams-Gordon
Books, Periodicals, and more ...
Arts Council of Great Britain. Edward Lear, an exhibition of oil paintings,
Baring-Gould, William Stuart, The lure of the limerick: an uninhibited history, C. N. Potter [c1967]
Encyclopaedia Britannica, Vol. 13 & 14. Edward Lear and Limerick (Vol. 14, pages 37 & 38)
Cerf, Bennett, Out on a limerick; a collection of over 300 of the world's best,
Chitty, Susan, Lady, That singular person called Lear: a biography of Edward Lear, Atheneum, 1989, c1988.
Davidson, Angus, Edward Lear, landscape painter and nonsense poet, 1812-1888.
Hofer, Philip, Edward Lear. Oxford University Press, 1952.
Hofer, Philip, Edward Lear as a landscape draughtsman, Belknap Press of Harvard University Press, 1967.
Kamen, Gloria, Edward Lear, king of nonsense: a biography, Atheneum, 1990.
Kelen, Emery, Mr. Nonsense: a life of Edward Lear, T. Nelson.
Lear, Edward, How pleasant to know Mr.
Lear: Edward Lear's selected works, Holiday House, 1982.
-----, The complete nonsense of Edward Lear, Dover Publications [c1951]
-----, Later letters of Edward Lear, author of "The book of nonsense,"
-----, The nonsense verse of Edward Lear, Harmony Books, c1984.
-----, Indian journal; watercolours and extracts from the diary of Edward Lear,
-----, Letters of Edward Lear, author of "The book of nonsense," to Chichester Fortescue.
-----, An Edward Lear alphabet, Lothrop, Lee & Shepard, 1983.
-----, A book of bosh: lyrics and prose of Edward Lear, Puffin Books, 1975.
Lehmann, John, Edward Lear and his world, Scribner, c1977.
Levi, Peter. Edward Lear: a biography, Scribner, 1995.
Moss, Howard, Writing against time; critical essays and reviews. Morrow, 1969.
Noakes, Vivien, Edward Lear; the life of a wanderer. Houghton Mifflin, 1969.
-----, Edward Lear, 1812-1888, H.N. Abrams, 1986, c1985.
-----, The Animal tale treasury, Putnam, 1986.
Reed, Langford (ed.), The Complete Limerick Book (1925)
Rosenbloom, Joseph. The looniest limerick book in the world, Sterling, c1982.
Vaughn, Stanton. Limerick lyrics, G. Sully and Co. [c1906]
Byrom, Thomas. Nonsense and wonder: the poems and cartoons of Edward Lear, Dutton.
Mr. Punch's limerick book; Loring & Mussey, 1935.
The Limerick: 1700 examples, with notes, variants and index. Citadel Press, 1979.
The Limerick: 1700 examples, with notes, variants and index. Bell Pub. Co., c1969.
Happy birthday, Moon and other stories for young children, Children's Circle [1989?], c1971.
The Pigs' wedding and other stories, Children's Circle, 1990.
Lear, Edward, Edward Lear's Book of Nonsense (illustrated). MAXIMA New Media, Kokhav Yair, Israel, c1995.
World Wide Web Sites ...
At the time of the writing of this lesson plan these sites had no objectionable material; however, things change and not all links from these sites were checked. I suggest you download the limericks and check them carefully before presenting them to your students.
MAXIMA New Media - Creators of the "Animated Artist Series," of which Edward Lear's Book of Nonsense is the first book. More Nonsense is expected in Fall, 1996.
There once was a man from Nantucket... - An ongoing Progressive Limerick (original verses from 1925).
"Waiting Room Limericks" - Not just for doctors! The SETI League, Inc.
Winning The Toast Point Limerick Contest - Structure
The "Italian" Edward Lear - Lear spent a great portion of his life in Italy. Here's an Italian web site with some very interesting material.
The WEBster: Limericks and other Poetics
Creative Writing Lesson Plans is produced by: 694 Main Street, Post Office Box 520, Pomeroy, Washington 99347. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval systems, without permission in writing from the publisher. Additional copies of this lesson plan are available from the above address. Copyright © 1996 by Beverly L. Adams-Gordon
The Animated Artist Series - CD-ROM Gift Sets are published by Maxima New Media Ltd., Israel. They are distributed by Castlemoyle Books (see address above). Copyright ©1997 by Beverly L. Adams-Gordon
Edward Lear's Book of Nonsense
Edward Lear is one of the most well-loved children's poets. Often compared to Lewis Carroll, though less well-known, his work has delighted children and adults for the past one hundred and fifty years.
Best known for the Owl and the Pussycat, Lear is unique in that he was a multi-faceted artist. He started out as a naturalist illustrator, and his work is considered on par with that of Audubon. He came into his own with his nonsense illustrations. Lear anticipated animation in his drawings, and the movement embedded in his simple lines just begs to come alive. In celebration of the 150th anniversary of the original publication of Edward Lear's Book of Nonsense, the meritorious mouse plays a merry minuet on the piano-forte and the Fizzgiggious Fish dances to classical jazz on the new CD-ROM companion to the re-released book. This gift package includes a book and a multimedia CD-ROM. The book features more than fifty of the whimsical limericks and zany illustrations for which Lear was famous. The CD-ROM brings the charming illustrations to life. An audio-track CD, which can be used in a standard CD player, contains original music and narration. "Everyone in the family, from infants to grandpas, will get a kick out of this multiple media edition of Lear's Book of Nonsense. It's sophisticated and silly, classy and original, and guaranteed to be a scroobiously, gloriously runcible delight!!" (Aron Trauring) ISBN 1-888297-01-8, $18.95.
http://www.castlemoyle.com/lear/learte.htm
4.34375
What happened the last time the Earth's climate shifted from "icehouse" to "hothouse"? And what does it tell us about climate change today? John Isbell is on a quest to coax that information from the last time it happened on a vegetated Earth. The only problem is, that was between 290 and 335 million years ago. The information from the past forms the all-important baseline needed to predict what the added effects of human activity will bring. During this period, the late Paleozoic Era, the modern continents were packed together in two huge supercontinents. One, called Gondwana, comprised most of the Southern Hemisphere, including what is now the South Pole, Australia, South America, India and Africa. The work of Isbell, a specialist in late Paleozoic glaciation, has shaken the common belief that Gondwana was covered by one massive sheet of ice that gradually and steadily melted away as conditions warmed. Isbell has determined that at least 22 individual ice sheets were located in various places over the region. And the state of glaciation during the period was unstable, marked by dramatic swings in climate and atmospheric carbon dioxide levels. He has uncovered evidence that parts of eastern Australia were covered in ice during the tail end of the era, when the climate was warming, but that polar Antarctica was not during the same period. He believes local events, such as mountain building, played a large role in the waxing and waning of glaciers during the transition. "If we figure out what happened with the glaciers – and add it to what we know about other conditions, like carbon cycling – we will be able to unlock the answers to climate change."
http://www5.uwm.edu/news/2013/01/11/john-isbell/
4.0625
The Longview Race Riot occurred during the Red Summer, as May to October of 1919 has been called. It was the second of twenty-five major racial conflicts that occurred throughout the United States during these months. Black slavery in America usually evokes images of the antebellum South, but few realize that members of the Five Civilized Tribes--the Cherokees, Choctaws, Chickasaws, Creeks, and Seminoles--in Indian Territory, today's Oklahoma, also had slaves. Like their counterparts in the South, Indian slaveholders feared slave revolts. Those fears came true in 1842 when slaves in the Cherokee Nation made a daring dash for freedom. This is an educational website dedicated to providing resources and information for teachers, scholars and the general public on the role that enslaved Africans played in the making of America through their struggles and sacrifices for freedom. The Stono Rebellion was the largest rebellion mounted by slaves against slave owners in colonial America. The Stono Rebellion's location was near the Stono River in South Carolina. The details of the 1739 event are uncertain, as documentation for the incident comes from only one firsthand report and several secondhand reports. Violent conflicts between white colonists and black slaves were common in Saint-Domingue. Bands of runaway slaves, known as maroons (marrons), entrenched themselves in bastions in the colony's mountains and forests, from which they harried white-owned plantations both to secure provisions and weaponry and to avenge themselves against the inhabitants. Upcoming efforts to commemorate the 100th anniversary of the 1908 Springfield Race Riot range from the subtle to the confrontational. All are attempting to find just the right balance to reflect on one of the worst moments of the city's history. On November 23, 1733, slaves carrying bundles of wood were let into the fort at Coral Bay. Concealed in the wood were cane knives, which the rebels used to kill the half-asleep and surprised soldiers who were guarding the fort. One soldier, John Gabriel, escaped by hiding under his bed and running away when he had a chance. He was able to get to St. Thomas in a small boat and tell the story to Danish officials there.
http://www.blackhistorypages.com/Riots__Rebellions_and_Insurrections/?s=H
4.1875
Most bats use echolocation for orientation and prey catching. They emit species- and situation-specific ultrasonic sounds and receive from the echoes information about the location, the shape, and the velocity of located targets. It is necessary to study the animals in their habitats under natural conditions in order to understand the foraging and echolocation behavior of bats. The Technical University of Darmstadt, Germany, developed a signal-processing system which makes it possible to analyze bat signals and track a bat's flight path in the field. The propagation time differences of bat sounds between several microphones are measured with a new correlation technique. A correlation receiver compresses the bat's sounds into narrow pulses and allows a time resolution of 2 μs. The signals can be detected and analyzed with high reliability down to a signal-to-noise ratio of -2 dB. The microphone configuration is optimized for the tracking problem and keeps Doppler ambiguity low. The distance-related location error of a flight-path tracking is between 0.2% and 2%. The range of the tracking method is between 15 and 50 m.
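The core of such a tracking method is estimating the time difference of arrival (TDOA) between microphone pairs from the peak of a cross-correlation. The following Python sketch illustrates that idea only; the sample rate, the synthetic chirp, and the delay are stand-in assumptions, not parameters of the Darmstadt system.

```python
import numpy as np

# Minimal sketch of TDOA estimation by cross-correlation.
# Sample rate and signals are illustrative assumptions.
FS = 500_000  # 500 kHz sampling, enough for ultrasonic bat calls

def tdoa(sig_a: np.ndarray, sig_b: np.ndarray, fs: float = FS) -> float:
    """Return the delay (seconds) of sig_b relative to sig_a,
    taken from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(sig_a) - 1)
    return lag / fs

# Demo: a short synthetic downward FM sweep arriving 50 microseconds
# later at the second microphone.
t = np.arange(0, 0.002, 1 / FS)
chirp = np.sin(2 * np.pi * (80_000 - 10_000_000 * t) * t)
delay_samples = 25  # 25 samples = 50 microseconds at 500 kHz
mic_a = np.concatenate([chirp, np.zeros(100)])
mic_b = np.concatenate([np.zeros(delay_samples), chirp,
                        np.zeros(100 - delay_samples)])

print(f"estimated delay: {tdoa(mic_a, mic_b) * 1e6:.1f} microseconds")  # ~50.0
```

With delays from three or more microphone pairs and a known array geometry, the source position can then be solved by multilateration; that geometric step is omitted here.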
http://www.auditory.org/asamtgs/asa96haw/3pAB/3pAB6.html
4.0625
Here are definitions of medical terms related to stroke. Aneurysm: An abnormal, balloon-like bulging of the wall of an artery. The bursting of an aneurysm in a brain artery or blood vessel causes a hemorrhagic stroke. Anticoagulant agents: Drugs used to prevent blood clots from forming or growing. They work by interfering with the production of blood components that are necessary for clot formation. Antiplatelet agents: Drugs used to prevent blood clots from forming or growing. Antiplatelet agents slow production of an enzyme that causes platelets to stick together. Aphasia: A term for communication problems that may include the loss or reduction of the ability to speak, read, write, or understand. Aphasia is caused by damage to the parts of the brain that control language. Artery: A blood vessel that carries blood away from the heart and around the body. Atherosclerosis: A buildup of cholesterol plaque and other fatty deposits in the arteries. It can put people at higher risk for stroke, because clots can become stuck in narrowed arteries within the brain, cutting off blood flow. Atrial fibrillation: A condition in which the heartbeat is often irregular and unusually rapid. It can put people at higher risk for stroke, because the condition causes blood clots to form, and these clots can travel to the brain and block a blood vessel. Cardiovascular disease: Any abnormal condition caused by problems with the heart and blood vessels. Carotid artery: The arteries on each side of the neck that carry blood from the heart to the brain. Carotid endarterectomy: The surgical removal of plaque that is blocking or reducing blood flow in a carotid artery. Cerebral infarction: A stroke caused by interruption or blockage of blood flow to the brain; also called ischemic stroke. Cerebrovascular disease: The term used to describe all abnormalities of the brain caused by problems with its blood vessels. Stroke is the major, but not the only, form of cerebrovascular disease. Dysarthria: Slurred speech caused by damage to the parts of the brain that control the muscles used in speech production. Dysphagia: An inability to swallow and/or difficulty in swallowing. Hemiplegia: Paralysis on one side of the body. Intracerebral hemorrhage: A stroke caused by a ruptured blood vessel that causes bleeding in brain tissue. Ischemic stroke: A stroke caused by interruption or blockage of blood flow to the brain; also called cerebral infarction. Plaque: Fatty deposits that stick to the inside walls of blood vessels, causing the vessel to become narrow and, in some cases, blocked altogether. Platelets: Tiny blood cells that stick together to stop the flow of blood around a wound to a blood vessel. Spasticity: Abnormal tightness or stiffness in a muscle. Stroke: A type of cerebrovascular disease that is caused by a sudden interruption of blood flow to a part of the brain, which can kill or damage brain cells. A brain attack. Subarachnoid hemorrhage: A stroke caused by a ruptured blood vessel that bleeds into the subarachnoid space between the brain and the skull. This space between the web-like arachnoid membrane and the surface of the brain is filled with cerebrospinal fluid. It acts as a cushion to protect the brain from blows. Thrombolytic agents: Drugs that break up or dissolve clots that can cause a stroke or heart attack. Tissue plasminogen activators (TPAs): The only FDA-approved treatment for stroke. 
Transient ischemic attack (TIA): A temporary interruption of the blood supply to an area of the brain; sometimes called a "mini-stroke," it usually lasts only a few minutes and causes no permanent damage or disability.
http://ehealthmd.com/content/stroke-glossary
4.15625
From Canadian Rivers Heritage... Rivers provided important routes of trade, transportation and communication for Aboriginal peoples in Canada for thousands of years. A multitude of archaeological sites along the Hayes, containing artifacts and remnants of an earlier way of life, shows that this river was a busy waterway long before the fur traders arrived. The Painted Stone Portage, a sacred place of worship, and pictograph sites are further testimony to the antiquity of human activity along the river. The arrival of renegade fur trader and "coureur de bois" Pierre Esprit Radisson in the mid-1600s heralded the beginning of a new way of life for Aboriginal peoples on the Hayes River and throughout western Canada. Several key Hudson's Bay Company posts were established along the Hayes as the fur trade grew into Canada's first industry. York Factory, the Hudson's Bay Company's principal fur trade depot at the mouth of the Hayes, was the Company's centre of operations for over 200 years. York Boats, used to carry settlers, furs and cargo to and from Canada's early settlements, have come to symbolize the Hayes River. Evidence of this historic era can be seen along the route: grave sites, trappers' cabins, the ruins of Hudson's Bay Company outposts, rock-log dams and the remnants of a tramway on the Robinson Portage. The Hayes River route was also key to inland exploration and commerce by Europeans. Many of Canada's great explorers traveled the Hayes, including Henry Kelsey, the first European to see the Canadian prairies; David Thompson, who mapped out huge areas of previously unsurveyed territory in western Canada; and Samuel Hearne, renowned for his legendary journeys through the barren lands. Other important figures to journey the Hayes include Hudson's Bay Company surveyors Peter Fidler and Philip Turnor, the legendary explorer Sir John Franklin en route to the 'Polar Seas', and famous surveyor J.B. Tyrrell, of the Geological Survey of Canada. National Historic Sites have been designated by the government of Canada at York Factory and Norway House to commemorate their significance in the history of Canada. Today, the Swampy Cree, descendants of the original inhabitants of the area, live in this region of northern Manitoba. Hunting, fishing and fuelwood cutting provide subsistence for area residents. Trapping and, in some areas, tourism are important economic activities. Stops along the route at Norway House and Oxford House can provide a special opportunity to view historic buildings, meet local residents and experience today's way of life in a northern community.
http://blog.canoeit.com/blog/boundarywaters/voyaguer-hudson-bay-expedition-crew-stays-put-on-july-10-2011
4.0625
Characteristics of a Leatherback Sea Turtle
by Suzanne McCullough White
The largest turtle in the world, the leatherback sea turtle (Dermochelys coriacea) is also the world's largest living reptile. These giant animals have been listed as endangered since 1970, and multiple conservation plans are in place to save them. One of the biggest threats to leatherbacks is fishing nets, in which they become entangled and die. They can also be killed by eating floating plastic and other debris that they mistake for their favorite food: jellyfish. Adult leatherback sea turtles can reach 6 1/2 feet in length and can weigh up to 1 ton. Their name comes from their soft, leathery shell (carapace)--all other sea turtles have hard shells. The shells have prominent vertical ridges going their entire length, which serve to make them more hydrodynamic, and they taper at the end. Leatherbacks are usually black, with pinkish white undersides. Their heads have white and pink spots. Leatherbacks migrate the farthest (up to 3,700 miles each way) and have one of the widest distributions of all reptiles and possibly of any vertebrate. They are found everywhere from the icy Atlantic Ocean near Newfoundland to the warm waters of the Caribbean. After mating at sea, the female leatherback turtle comes ashore to lay clutches of about 100 eggs in the sand of beaches, digging a hole, depositing the eggs and then covering them with sand before making her way back to the water. The turtles do this several times during the nesting season. The baby turtles hatch after about 60 to 65 days. Leatherbacks don't have the hard, crushing jaws of other sea turtles. Instead, they have jaws with razor-sharp edges and small teeth-like structures, both good adaptations for a diet of soft-bodied prey such as jellyfish. It isn't known how leatherbacks survive on this diet, because jellyfish are mostly water and contain few nutrients. In addition to their specially adapted mouths, leatherback sea turtles have several adaptations that allow them to live in such diverse locations. They are able to maintain a core body temperature higher than surrounding icy waters because of a specially adapted heat-exchange system, a high body fat content and large body size. Temperature inside their nests determines the sex of leatherback hatchlings: at around 85 degrees F, there is a mixture of males and females, while higher temperatures produce females and lower temperatures produce males.
http://www.soyouwanna.com/characteristics-leatherback-sea-turtle-8144.html
4.3125
Heat is the transfer of thermal energy from one object to another object due to a difference in temperature. Heat always flows from warmer objects to cooler objects. The symbol for heat in physics is Q, with positive values of Q representing heat flowing into an object, and negative values of Q representing heat flowing out of an object. When heat flows into or out of an object, the amount of temperature change depends on the material. The amount of heat required to change one kilogram of a material by one degree Celsius (or one Kelvin) is known as the material's specific heat (or specific heat capacity), represented by the symbol C. The relationship between heat and temperature is quantified by the following equation, where Q is the heat transferred, m is the mass of the object, C is the specific heat, and ΔT is the change in temperature (in degrees Celsius or Kelvins): Q = mCΔT.
Question: A half-carat diamond (0.0001 kg) absorbs five Joules of heat. How much does the temperature of the diamond increase?
Question: A three-kilogram aluminum pot is filled with five kilograms of water. How much heat is absorbed by the pot and water when both are heated from 25°C to 95°C?
Answer: Using the table of specific heats, you can find the heat added to each item separately, and then combine them to get the total heat added. The total heat absorbed, therefore, must be 1.65×10⁶ Joules.
Question: Two solid metal blocks are placed in an insulated container. If there is a net flow of heat between the blocks, they must have different
- initial temperatures
- melting points
- specific heat
- heats of fusion
Answer: (1) since heat flows from warmer objects to cooler objects.
Heat can be transferred from one object to another by three different methods: conduction, convection, and radiation. Conduction is the transfer of heat along an object due to the particles comprising the object colliding. When you stick an iron rod in a fire, the end in the fire warms up, but over time, the particles comprising the iron rod near the fire move more quickly, colliding with other particles in the iron, speeding them up, and so on, and so on, resulting in heat transfer down the length of the iron rod until the end you're holding far away from the fire becomes very hot! Convection, on the other hand, is a result of the energetic (heated) particles moving from one place to another. A great example of this is a convection oven. In a convection oven, air molecules are heated near the burner or electrical element, and then circulated throughout the oven, transferring the heat throughout the entire oven's volume. Radiation is the transfer of heat through electromagnetic waves. Think of a campfire or fireplace on a cold evening. When you want to warm up, you place your hands up in front of you, allowing your hands to absorb the maximum amount of electromagnetic waves (mostly infrared) coming from the fire, making you nice and toasty! Looking more closely at conduction specifically, the rate of heat transfer (H), as measured in Joules per second, or Watts, depends on the magnitude of the temperature difference across the object (ΔT), the cross-sectional area of the object (A), the length of the object (L) and the thermal conductivity of the material (k): H = kAΔT/L. Thermal conductivities are typically provided to you in a problem, or you can look them up in a table of thermal conductivities.
Question: Find the rate of heat transfer through a 5 mm thick glass window with a cross-sectional area of 0.4 m² if the inside temperature is 300K and the outside temperature is 250K.
Question: One end of a 1.5-meter-long stainless steel rod is placed in an 850K fire. The cross-sectional radius of the rod is 1 cm, and the cool end of the rod is at 300K. Calculate the rate of heat transfer through the rod.
Answer: To solve this problem, you must first find the cross-sectional area of the rod (A = πr² = π(0.01 m)² ≈ 3.14×10⁻⁴ m²). Next, calculate the rate of heat transfer through the rod using H = kAΔT/L, with the thermal conductivity of stainless steel taken from your reference table.
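To make the arithmetic concrete, here is a small Python sketch of both relationships, Q = mCΔT and H = kAΔT/L, applied to the examples above. The material constants below are typical textbook values, not the ones from this course's own reference table, so treat them as assumptions; your table may give slightly different numbers.

```python
import math

# Illustrative material constants (typical textbook values; assumed here,
# not taken from the course's reference table).
SPECIFIC_HEAT = {"water": 4186.0, "aluminum": 900.0}    # J/(kg*K)
CONDUCTIVITY = {"glass": 0.8, "stainless steel": 16.0}  # W/(m*K)

def heat_for_temp_change(m: float, c: float, dT: float) -> float:
    """Q = m * C * deltaT, in Joules."""
    return m * c * dT

def conduction_rate(k: float, area: float, dT: float, length: float) -> float:
    """H = k * A * deltaT / L, in Watts."""
    return k * area * dT / length

# Aluminum pot (3 kg) plus water (5 kg), heated from 25 C to 95 C:
q_total = (heat_for_temp_change(5, SPECIFIC_HEAT["water"], 70)
           + heat_for_temp_change(3, SPECIFIC_HEAT["aluminum"], 70))
print(f"pot + water: {q_total:.3g} J")   # ~1.65e+06 J, matching the answer above

# Glass window, 5 mm thick, 0.4 m^2, 300 K inside vs 250 K outside:
print(f"window: {conduction_rate(CONDUCTIVITY['glass'], 0.4, 50, 0.005):.0f} W")

# Stainless steel rod: first find the cross-sectional area, then H.
area = math.pi * 0.01**2                 # r = 1 cm
h_rod = conduction_rate(CONDUCTIVITY["stainless steel"], area, 850 - 300, 1.5)
print(f"rod: {h_rod:.2f} W")             # roughly 1.8 W with the assumed k
```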
http://aplusphysics.com/courses/honors/thermo/heat.html
4.40625
This is a level 4 statistics activity from the Figure It Out series.
- find all possible outcomes using a table
- find the probability of an event occurring
- evaluate a statement about a probability event
Students can play this game with little initial support. They will soon notice that certain numbers come up with greater frequency than others and will alter their strategy to reflect this. When they have played the game for 20–30 minutes, stop the play and discuss with your students what they have discovered. Once they are aware that there may be better strategies, they can examine the theoretical probabilities behind the game. Note that this activity should be set before Dodgy Dice (page 22 of the students' book), which requires a more formal understanding of the same ideas.
Question 2a asks the students to complete a difference table. Along with tree diagrams, tables are a useful tool for helping them to see that, in certain situations, some outcomes are more likely than others. Tables allow students to see at a glance how many different ways there are of getting each outcome. The more ways there are, the greater the probability is. Because tables are 2-D, they have the limitation that they can only be used for 2-step events (for example, a dice rolled twice).
Question 2b asks for a bar graph showing the frequency of each of the differences. If your students are using computers for this task, they may have difficulty getting the computer to label the horizontal axis correctly. If this is the case, they should follow these steps:
- Choose Chart from the menu bar.
- Select Source Data.
- Click on the Series tab.
- Click the cursor in the panel that says "Category (X) axis labels".
- Go to the spreadsheet and highlight the cells with the correct labels in them (0–5).
- Click on OK.
Question 2c asks the students to turn the frequencies they have found in question 2b into probabilities. Suggest that they think in terms of "chances out of 36" and then write their answers as fractions. For more discussion on this, see the notes for What's the Chance? (page 23 of the students' book).
Question 3b is another good opportunity for developing the concept of long-run relative frequency. Discuss with your students how they might "prove" their strategy is better. Some will want to prove it by playing the game according to that strategy. In this case, ask them, "What will it prove if you win a game? Does it mean you will also win the next game? Will you win every game you play using this strategy? If not, how often are you likely to win? How could you work this out?" Students may want to play a lot of games to see what happens. At some point you can ask, "Is there a quicker way of working out how often you should win?" This may encourage students to think about the underlying probabilities.
Answers to Game
A game for investigating probability.
Answers to Activity
1. Answers will vary.
3. a. Finn's strategy is unlikely to be effective. Although 1 has the highest probability, in the long run, he can expect to get it only 10 times out of 36. For the other 26 out of 36 times, his turn will count for nothing.
b. A more effective strategy is to spread his counters from 0–3, with proportionally more on 1. He then has a 30/36 probability of getting a useful result each time he rolls the dice.
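For teachers who want to check the theoretical probabilities behind the game, a short Python sketch (offered here as an illustration, not part of the Figure It Out materials) can enumerate all 36 equally likely outcomes and tally each difference:

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two dice and
# tally the difference between the two numbers.
diffs = Counter(abs(a - b) for a in range(1, 7) for b in range(1, 7))

for d in sorted(diffs):
    p = Fraction(diffs[d], 36)
    print(f"difference {d}: {diffs[d]:>2} ways, probability {p}")

# Probability that the difference lands in 0-3 (the suggested strategy):
print(sum(Fraction(diffs[d], 36) for d in range(4)))  # 5/6, i.e. 30/36
```

Running this confirms the figures in the answers: a difference of 1 occurs 10 ways out of 36, and differences 0–3 together cover 30 of the 36 outcomes.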
http://www.nzmaths.co.nz/resource/wallowing-whales
4.09375
WHAT ARE DATA PROTOCOLS?
This section introduces the International Organization for Standardization (ISO) OSI reference model and its seven layers. Emphasis will be placed on the first three layers because they are more directly involved in communication. Protocols should not be confused with formats. Formats typically show a standard organization of bits and octets and describe the function of each to achieve a certain objective. DS1 is a format, as are SDH and SONET. In this section we will familiarize the reader with basic protocol functions. This is followed by a discussion of the Open System Interconnection (OSI), which has facilitated a large family of protocols. A brief discussion of HDLC (high-level data-link control) is provided. This particular protocol was selected because it spawned so many other link layer protocols. Some specific higher layer protocols are described in Chapter 11.
Basic Protocol Functions
There are a number of basic protocol functions. Typical among these are:
Segmentation and reassembly (SAR)
Encapsulation
Connection control
Ordered delivery
Flow control
A short description of each follows.
Segmentation and reassembly. Segmentation refers to breaking up the data message or file into blocks, packets, or frames with some bounded size. Which term we use depends on the semantics of the system. There is a newer data segment called a cell, used in asynchronous transfer mode (ATM) and other digital systems. Reassembly is the reverse of segmentation, because it involves putting the blocks, frames, or packets back into their original order. The device that carries out segmentation and reassembly in a packet network is called a PAD (packet assembler/disassembler).
Encapsulation. Encapsulation is the adding of header and control information in front of the text or info field, and parity information, which is generally carried behind the text or info field. (A toy sketch of segmentation and encapsulation appears at the end of this section.)
Connection control. There are three stages of connection control:
1. Connection establishment
2. Data transfer
3. Connection termination
Some of the more sophisticated protocols also provide connection interrupt and recovery capabilities to cope with errors and other sorts of interruptions.
Ordered delivery. Packets, frames, or blocks are often assigned sequence numbers to ensure ordered delivery of the data at the destination. In a large network with many nodes and possible routes to a destination, especially when operated in a packet mode, the packets can arrive at the destination out of order. With a unique segment (packet) numbering plan using a simple numbering sequence, it is a rather simple task for a long data file to be reassembled at the destination in its original order.
Flow control. Flow control refers to the management of the data flow from source to destination such that buffer memories do not overflow, but maintain full capacity of all
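As the toy sketch promised above, the Python code below segments a message into bounded-size blocks and encapsulates each in a made-up frame layout: a sequence-number header in front and a checksum trailer behind, with reassembly restoring ordered delivery. The frame format is purely hypothetical, invented here for illustration; it does not correspond to HDLC or any standard protocol.

```python
import struct
import zlib

MAX_PAYLOAD = 8  # tiny segment size so the example stays readable

def segment(data: bytes, size: int = MAX_PAYLOAD) -> list[bytes]:
    """Segmentation: split a message into bounded-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def encapsulate(seq: int, payload: bytes) -> bytes:
    """Encapsulation: prepend a header (sequence number + length) and
    append a trailer (CRC-32 over header + payload)."""
    header = struct.pack("!HB", seq, len(payload))
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

def reassemble(frames: list[bytes]) -> bytes:
    """Reassembly: verify each checksum, then restore ordered delivery
    by sorting on the sequence numbers."""
    parts = {}
    for f in frames:
        seq, length = struct.unpack("!HB", f[:3])
        payload, crc = f[3:3 + length], f[3 + length:]
        assert struct.unpack("!I", crc)[0] == zlib.crc32(f[:3 + length])
        parts[seq] = payload
    return b"".join(parts[s] for s in sorted(parts))

frames = [encapsulate(i, p)
          for i, p in enumerate(segment(b"protocols move data in frames"))]
# Even if frames arrive out of order, sequence numbers restore the message.
assert reassemble(frames[::-1]) == b"protocols move data in frames"
```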
http://search-pdf-files.com/pdf/4726868-control-data-connection-protocols-destination
4.28125
Quasi-stellar radio sources (quasars) are notoriously difficult to study because they are extremely bright, more so than the galaxies which they inhabit. Now, astronomers propose a new method for studying these objects, which can also help them calculate the host galaxy's mass. Quasars are very active, supermassive black holes that release vast amounts of radiation from their poles. This radiation is produced by a wide array of phenomena happening around the event horizon. Such objects are usually located at least several billion light-years from Earth. Due to their extreme brightness, they easily overwhelm the glow of stars around them, making it very hard for experts to measure the mass of their host galaxies. But doing so may be possible if the correct galactic alignments are found. Scientists determined recently that correctly-aligned galaxies give rise to an optical phenomenon called gravitational lensing. Studies can only be conducted on galaxies that are placed in a straight line as seen from Earth, with the nearest one standing directly in front of the background one. What this does is enable the first galaxy in the "string" to act like a massive cosmic magnifying glass. The effect is made possible by the fact that massive gravitational pulls distort the path of photons. Light is therefore literally bent around the foreground galaxy. The reason why gravitational lensing is an appropriate method to use for studying the mass of galaxies hosting quasars is that the phenomenon enables astronomers to measure light distortions produced by the background galaxy. The new investigation was conducted by an international team of astronomers, which also included NASA Jet Propulsion Laboratory (JPL) expert Daniel Stern. The group says that only alignments where a quasar is located in the foreground galaxy can be used for this specific type of study. "The amount of the background galaxy's distortion can be used to accurately measure the lensing galaxy's mass," experts at the JPL explain in a press release. Thus far, experts have managed to find only a handful of appropriate galactic alignments. They are optimistic that additional surveys, conducted with the NASA/ESA Hubble Space Telescope and other space- and ground-based assets, will reveal more such scenarios. In time, astronomers want to build a catalog of such aligned galaxies, in hopes that this will provide additional insight into galactic evolution, black hole feeding and growth, and stellar formation.
http://news.softpedia.com/news/Gravitational-Lensing-Enables-Quasar-Measurements-258947.shtml
4.28125
African American music
African American music (also called black music, formerly known as race music) is an umbrella term given to a range of music and musical genres, such as afrobeat, emerging from or influenced by the culture of African Americans, who have long constituted a large ethnic minority of the population of the United States. They were originally brought to North America to work as enslaved peoples, bringing with them typically polyrhythmic songs from hundreds of ethnic groups across West and sub-Saharan Africa. In the United States, multiple cultural traditions merged with influences from polka, waltzes and other European music. Later periods saw considerable innovation and change. African American genres are the most important ethnic vernacular tradition in America as they have developed independent of African traditions from which they arise more so than any other immigrant groups, including Europeans; make up the broadest and longest lasting range of styles in America; and have, historically, been more influential, interculturally, geographically, and economically, than other American vernacular traditions (Stewart 1998, 3). African American music and all aspects of African American culture are celebrated during Black History Month in February of each year in the United States.
Features common to most African American music styles include:
- call and response
- vocality (or special vocal effects): guttural effects, interpolated vocality, falsetto, Afro-melismas, lyric improvisation, vocal rhythmization
- blue notes
- rhythm: syncopation, concrescence, tension, improvisation, percussion, swung note
- texture: antiphony, homophony, polyphony, heterophony
- harmony: vernacular progressions; complex multi-part harmony, as in spirituals and barbershop music (Stewart 1998: 5-15)
The influence of African Americans on mainstream American music began in the nineteenth century, with the advent of blackface minstrelsy. The banjo, of African-American origin, became a popular instrument, and African-derived rhythms were incorporated into popular songs by Stephen Foster and other songwriters. In the 1830s, the Great Awakening led to a rise in Christian fundamentalism, especially among African Americans. Drawing on traditional work songs, African American slaves originated and performed a wide variety of Spirituals and other Christian music. Many of these songs were coded messages of subversion against slaveholders, or signalled escape. During the period after the Civil War, the spread of African American music continued. The Fisk University Jubilee Singers toured first in 1871. Artists including Morris Hill and Jack Delaney helped revolutionize post-war African music in the central East of the United States. In the following years, the Hampton Students and professional jubilee troupes formed and toured. The first black musical-comedy troupe, Hyers Sisters Comic Opera Co, was organized in 1876. (Southern 221) By the end of the nineteenth century, African American music was an integral part of mainstream American culture. Ragtime performers like Scott Joplin became popular, and some soon became associated with the Harlem Renaissance and early civil rights activists.
Early twentieth century
The early part of the twentieth century saw a constant rise in popularity of African American blues and jazz. As well as developments in the visual arts, the Harlem Renaissance of the early twentieth century led to developments in music.
White and Latino performers of both genres existed, and there had always been cross-cultural communication between the United States' races. Jewish klezmer music, for example, was a noted influence on jazz, while Jelly Roll Morton famously explained that a "Latin tinge" was a necessary component of good music. African American music was often simplified for white audiences, who would not have as readily accepted black performers, leading to genres like swing music, a pop-based outgrowth of jazz. On the stage, the first musicals written and produced by African Americans to appear on Broadway debuted in 1898 with A Trip to Coontown by Bob Cole and Billy Johnson. In 1901, the first known recording of black musicians was that of Bert Williams and George Walker; this set featured music from Broadway musicals. The first black opera was performed in 1911 with Scott Joplin's Treemonisha. The following year, the first in a series of annual black symphony orchestra concerts was performed at Carnegie Hall. (Southern 221, 222) The return of the black musical to Broadway occurred in 1921 with Sissle and Blake's Shuffle Along. In 1927, a concert survey of black music was performed at Carnegie Hall, including jazz, spirituals and the symphonic music of W.C. Handy's Orchestra and Jubilee singers. The first major film musical with a black cast was King Vidor's Hallelujah of 1929. The first symphony by a black composer to be performed by a major orchestra was William Grant Still's Afro-American Symphony with the New York Philharmonic. African American performers were featured in operas such as Porgy and Bess and Virgil Thompson's Four Saints in Three Acts of 1934. Also in 1934, William Dawson's Negro Folk Symphony became the second African American composer's work to receive attention from a major orchestra with its performance by the Philadelphia Orchestra. (Southern 361) By the 1940s, cover versions of African American songs were commonplace, and frequently topped the charts, while the original musicians found little success. Popular African American music at the time was a developing genre called "rock and roll," whose exponents included Little Richard and Jackie Brenston. The following decade saw the first major crossover acts, with Bill Haley and Elvis Presley performing rockabilly, a rock and country fusion, while black artists like Chuck Berry and Bo Diddley received unprecedented mainstream success. Presley went on to become perhaps the first watershed figure in American music; his career, while never extremely innovative, marked the beginning of the acceptance of musical tastes crossing racial boundaries among all audiences. He was also the first in a long line of white performers to achieve what some perceive as undue fame for his influence, since many of his fans showed no desire to learn about the pioneers he learned from. The 1950s also saw doo wop become popular. The late 1950s brought vastly increased popularity of hard blues from the earliest part of the century, both in the United States and the United Kingdom. A secularized form of American gospel music called soul also developed, with pioneers like Ben E. King and Sam Cooke leading the wave. Soul and R&B became a major influence on surf, as well as on chart-topping girl groups like The Angels and The Shangri-Las, only some of whom were white. Black divas like Diana Ross & the Supremes and Aretha Franklin became 1960s "crossover" stars.
In the UK, British blues gradually became a mainstream phenomenon, returning to the United States in the form of the British Invasion, a group of bands led by The Beatles who performed classic-style R&B, blues and pop with both traditional and modernized aspects. The British Invasion knocked most other bands off the charts, with only a handful of groups, like The Mamas & the Papas from California, maintaining a pop career. Soul music, in two major highly-evolved forms, remained popular among blacks. Funk, usually said to have been invented by James Brown, incorporated influences from psychedelia and early heavy metal. Just as popular among blacks and with more crossover appeal, album-oriented soul revolutionized African American music with intelligent and philosophical lyrics, often with a socially aware tone. Marvin Gaye's What's Going On is perhaps the best-remembered of this field. Social awareness was also exhibited in the 1960s and early 1970s in Africa with a new style called afrobeat, which consisted of Yoruba music, jazz, and funk.
The 1970s and 1980s
The 1970s was one of the greatest decades for melodic music by black bands; unlike much contemporary rap, hip hop traces its roots to the melodic black music of the 1970s. Album-oriented soul continued its popularity, while musicians like Smokey Robinson helped turn it into Quiet Storm music. Funk evolved into two strands, one a pop and soul fusion pioneered by Sly & the Family Stone, and the other a more experimental psychedelic and metal fusion led by George Clinton and his P-Funk ensemble. Black musicians achieved generally little mainstream success, though African Americans had been instrumental in the invention of disco, and some artists, like Gloria Gaynor and Kool & the Gang, found crossover audiences. White listeners preferred country rock bands, singer-songwriters and, in some subcultures, heavy metal and punk rock. The 1970s also saw, however, the invention of hip hop music. Jamaican immigrants like DJ Kool Herc and spoken word poets like Gil Scott-Heron are often cited as the major innovators in early hip hop. Beginning at block parties in The Bronx, hip hop music arose as one facet of a large subculture with rebellious and progressive elements. At block parties, disc jockeys spun records, most typically funk, while MCs introduced tracks to the dancing audience. Over time, DJs began isolating and repeating the percussion breaks, producing constant, eminently danceable beats, over which the MCs began improvising more complex introductions and, eventually, lyrics. In the 1980s, black pop artists included Michael Jackson, Lionel Richie, Whitney Houston, and Prince, who sang a type of pop dance-soul that fed into New Jack Swing by the end of the decade. These artists were the most successful of the era. Hip hop spread across the country and diversified. Techno, Dance, Miami bass, Chicago Hip House, Los Angeles hardcore and DC Go Go developed during this period, with only Miami bass achieving mainstream success. But before long, Miami bass was relegated primarily to the Southeastern US, while Chicago hip house had made strong headway on college campuses and in dance arenas (i.e., the warehouse sound, the rave). The DC go-go sound, like Miami bass, became essentially a regional sound that didn't muster much mass appeal. The Chicago house sound expanded into the Detroit music environment and mutated into more electronic and industrial sounds, creating Detroit techno, acid, and jungle.
Mating these experimental, usually DJ-oriented, sounds with the prevalence of the multiethnic New York City disco sound from the 1970s and 1980s created a brand of music that was most appreciated in the huge discotheques located in cities like Chicago, New York, Los Angeles, Detroit, and Boston. Eventually, European audiences embraced this kind of electronic dance music with more enthusiasm than their North American counterparts. These variable sounds let the listeners prioritize their exposure to new music and rhythms while enjoying a gigantic dancing experience. In the latter half of the decade, around 1986, rap took off into the mainstream with Run-D.M.C.'s Raising Hell and the Beastie Boys' Licensed to Ill, which became the first rap album to reach the No. 1 spot on the Billboard 200. Both of these groups mixed rap and rock together, which appealed to rock and rap audiences. Hip hop took off from its roots, and the golden age hip hop scene started. Hip hop remained primarily an American phenomenon until the 1990s, when its popularity became worldwide. The golden age scene would die out in the early 1990s when gangsta rap and g-funk took over.
The 1990s and 2000s
Hip hop and R&B were the most popular genres of music for African Americans in this period; for the first time, African American music also became popular with other groups, including white, Asian, and Latino audiences. Contemporary R&B, as the post-disco version of soul music came to be known, remained popular throughout the 1980s and 1990s. Male vocal groups in the style of soul groups such as The Temptations and The O'Jays were particularly popular, including New Edition, Boyz II Men, Jodeci, Blackstreet, and, later, Dru Hill and Jagged Edge. Girl groups, including TLC, Destiny's Child, and En Vogue, were also highly successful. Destiny's Child would go on to be the highest-selling female vocal group of all time. Singer-songwriters such as R. Kelly, Mariah Carey, Montell Jordan, D'Angelo, and Raphael Saadiq of Tony! Toni! Toné! were also significantly popular during the 1990s, and artists such as Mary J. Blige, Faith Evans and BLACKstreet popularized a fusion blend known as hip-hop soul. D'Angelo's Marvin Gaye/Stevie Wonder-inspired sound would lead to the development of neo soul, popularized in the late 1990s/early 2000s by artists such as Lauryn Hill, Erykah Badu, India.Arie, and Musiq. By the 2000s, R&B had shifted towards an emphasis on solo artists, including Usher and Alicia Keys, although groups such as B2K and Destiny's Child continued to have success. The line between hip-hop and R&B became significantly blurred by producers such as Timbaland and Lil Jon, and artists such as Lauryn Hill, Nelly, and Andre 3000, who, with partner Big Boi, helped popularize Southern hip hop music as OutKast. "Urban music" and "urban radio" are race-neutral terms which are synonymous with hip hop and R&B and the associated hip hop culture which originated in New York City. The term also reflects the fact that they are popular in urban areas, both within black population centers and among the general population (especially younger audiences). The Museum of African-American music, built in historic Lincoln Park in Newark, New Jersey, is the first facility of its kind to house the musical genres of gospel, blues, jazz, rhythm and blues, rock and roll, hip-hop and house—all in one place. As part of the Smithsonian Museums, the MOAAM will have national funding and prominence. And in Nashville, Tennessee, the new Museum of African American Music, Art and Culture
recognizes the rich contribution of African Americans to the musical tradition that is alive and well in the world today. As an educational center and tourist attraction, it reaches a wider audience, much like the music itself.
- Burnim, Mellonee V., and Portia K. Maultsby. African American Music: An Introduction. NY: Routledge, 2006. ISBN 0415941377
- Jones, Ferdinand, and Arthur C. Jones. The Triumph of the Soul: Cultural and Psychological Aspects of African American Music. Westport, Conn: Praeger, 2001. ISBN 0275953653
- Southern, Eileen. The Music of Black Americans: A History. W. W. Norton & Company, 1997. ISBN 0393971414
- Stewart, Earl L. African American Music: An Introduction. NY: Schirmer Books; London: Prentice Hall International, 1998. ISBN 0028602943
All links retrieved August 29, 2012.
- Shall We Gather at the River - a collection of African American sacred music, made available for public use by the State Archives of Florida
- A History of African American Music, Carnegie Hall
- African-American Sheet Music, 1850-1920, The Library of Congress
- National Museum of African American Music
http://www.newworldencyclopedia.org/entry/African_American_music
4.03125
A planarian is one of many non-parasitic flatworms of the Turbellaria class. It is also the common name for a member of the genus Planaria within the family Planariidae. Sometimes it also refers to the genus Dugesia. Planaria are common to many parts of the world, living in both saltwater and freshwater ponds and rivers. Some species are terrestrial and are found under logs, in or on the soil, and on plants in humid areas. Some planarians exhibit an extraordinary ability to regenerate lost body parts. For example, a planarian split lengthwise or crosswise will regenerate into two separate individuals. Some planarian species have two eye-spots (also known as ocelli) that can detect the intensity of light, while others have several eye-spots. The eye-spots act as photoreceptors and are used to move away from light sources. Planaria have three germ layers (ectoderm, mesoderm, and endoderm), and are acoelomate (i.e. they have a very solid body with no body cavity). They have a single-opening digestive tract; in Tricladida planarians this consists of one anterior branch and two posterior branches. These animals move by beating cilia on the ventral dermis, allowing them to glide along on a film of mucus. Some move by undulations of the whole body by the contractions of muscles built into the body membrane. Triclads play an important role in watercourse ecosystems and are often very important as bio-indicators. The most frequently used planarian in high school and first-year college laboratories is the brownish Girardia tigrina. Other common species used are the blackish Planaria maculata and Girardia dorotocephala. Recently, however, the species Schmidtea mediterranea has emerged as the species of choice for modern molecular biological and genomic research due to its diploid chromosomes and the existence of both asexual and sexual strains. Recent genetic screens utilizing double-stranded RNA technology have uncovered 240 genes that affect regeneration in S. mediterranea. Many of these genes have orthologs in the human genome.
Anatomy and physiology
The planarian has very simple organ systems. The digestive system consists of a mouth, pharynx, and a structure called a gastrovascular cavity. The mouth is located in the center of the underside of the body. Digestive enzymes are secreted from the mouth to begin external digestion. The pharynx connects the mouth to the gastrovascular cavity. This structure branches throughout the body, allowing nutrients from food to reach all extremities. Planaria eat living or dead small animals that they suck with their muscular mouths. Food passes from the mouth through the pharynx into the intestines, where it is digested, and its nutrients then diffuse to the rest of the body. Planaria receive oxygen and release carbon dioxide by diffusion. The excretory system is made of many tubes with many flame cells and excretory pores on them. Flame cells remove unwanted liquids from the body by passing them through ducts that lead to excretory pores, where waste is released on the dorsal surface of the planarian. At the head of the planarian there is a ganglion under the eyespots. This bi-lobed mass of nerve tissue, the cerebral ganglia, is sometimes referred to as the planarian brain and has been shown to exhibit spontaneous electrophysiological oscillations, similar to the electroencephalographic (EEG) activity of other animals.
From the ganglion there are two nerve cords which extend the length of the tail. There are many transverse nerves connected to the nerve cords extending from the brain, which makes the nerve system look like a ladder. With a ladder-like nerve system, the planarian is able to respond in a coordinated manner. The planarian has a soft, flat, wedge-shaped body that may be black, brown, blue, gray, or white. The blunt, triangular head has two ocelli (eyespots), pigmented areas that are sensitive to light. There are two auricles (earlike projections) at the base of the head, which are sensitive to touch and the presence of certain chemicals. The mouth is located in the middle of the underside of the body, which is covered with cilia (hairlike projections). There are no circulatory or respiratory systems; oxygen entering and carbon dioxide leaving the planarian's body diffuses through the body wall. There are sexual and asexual planaria. Sexual planaria are hermaphrodites, possessing both testes and ovaries. Thus, one of their gametes will combine with the gamete of another planarian. Each planarian transports its secretion to the other planarian, giving and receiving sperm. Eggs develop inside the body and are shed in capsules. Weeks later, the eggs hatch and grow into adults. Sexual reproduction is desirable because it enhances the survival of the species by increasing the level of genetic diversity. In asexual reproduction, the planarian detaches its tail end, and each half regrows the lost parts by regeneration, allowing neoblasts (adult stem cells) to divide and differentiate, thus resulting in two worms. Some species of planaria are exclusively asexual, whereas some can reproduce both sexually and asexually.
Planarians as a model system in modern biological and biomedical research
The life history traits of planarians make them a model system for investigating a number of biological processes, many of which may have implications for human health and disease. Advances in molecular genetic technologies have made the study of gene function possible in these animals, and scientists are studying them worldwide. Like other invertebrate model organisms, for example C. elegans and D. melanogaster, planarians are relatively simple, which facilitates experimental study. Planarians have a number of cell types, tissues and simple organs that are homologous to our own cells, tissues and organs. However, regeneration has attracted the most attention. Thomas Hunt Morgan was responsible for some of the first systematic studies (which still underpin modern research) before the advent of molecular biology as a discipline. Planarians are also an emerging model organism for aging research. These animals have an apparently limitless regenerative capacity, and the asexual animals seem to maintain their telomerase levels throughout their lifetime, making them "effectively immortal". Planaria can be cut into pieces, and each piece can regenerate into a complete organism. Cells at the location of the wound site proliferate to form a blastema that will differentiate into new tissues and regenerate the missing parts of the piece of the cut planaria. It's this feature that gave them the famous designation of being "immortal under the edge of a knife." Very small pieces of the planarian, estimated to be as little as 1/279th of the organism it is cut from, can regenerate back into a complete organism over the course of a few weeks.
New tissues can grow due to pluripotent stem cells that have the ability to create all the various cell types. These adult stem cells are called neoblasts, and they comprise around 20% of cells in the adult animal. They are the only proliferating cells in the worm, and they differentiate into progeny that replace older cells. In addition, existing tissue is remodeled to restore the symmetry and proportion of the new planaria that forms from a piece of a cut-up organism. The organism itself does not have to be completely cut into separate pieces for the regeneration phenomenon to be witnessed. In fact, if the head of a planaria is cut in half down its centre, and each side retained on the organism, it's possible for the planaria to regenerate two heads and continue to live.
Biochemical memory experiments
In 1955, Robert Thompson and James V. McConnell conditioned planarian flatworms by pairing a bright light with an electric shock. After repeating this several times they took away the electric shock, and only exposed them to the bright light. The flatworms would react to the bright light as if they had been shocked. Thompson and McConnell found that if they cut the worm in two, and allowed both worms to regenerate, each half would develop the light-shock reaction. In 1962, McConnell repeated the experiment, but instead of cutting the trained flatworms in two he ground them into small pieces and fed them to other flatworms. He reported that the flatworms learned to associate the bright light with a shock much faster than flatworms that had not been fed trained worms. This experiment was intended to show that memory could be transferred chemically. The experiment was repeated with mice, fish, and rats, but it always failed to produce the same results. The perceived explanation was that rather than memory being transferred to the other animals, it was the hormones in the ingested ground animals that changed the behavior. McConnell believed that this was evidence of a chemical basis for memory, which he identified as memory RNA. McConnell's results are now attributed to observer bias. No blinded experiment has ever reproduced his results of 'maze-running'. Subsequent explanations of maze-running enhancements associated with cannibalism of trained planarian worms were that the untrained flatworms were only following tracks left on the dirty glassware rather than absorbing the memory of their fodder.
http://en.wikipedia.org/wiki/Planarian
4.21875
Ernest Rutherford FRS
Philosophers and scientists have tried for centuries to answer the question of what matter is made of. In 1911 Ernest Rutherford FRS made an important leap forward, unveiling an atomic model in which electrons orbit a central nucleus. This online exhibition celebrates Rutherford's achievements and traces the history of the atom from the ancient world to the sub-atomic age.

Ancient Greek, Roman and Hindu philosophers suggested that the universe consisted of tiny indivisible particles: atoms. Atoms were unchangeable and moved constantly through space. Aristotle, however, believed matter had four fundamental properties: wet, dry, hot, and cold, and that all earthly substances were composed of four 'elements': water, earth, fire, and air. Aristotle's view was very influential in Europe throughout the middle ages and persisted into the seventeenth century. People speculated that matter might be transformed from one type (hot and dry, for example) to another (hot and wet). This theory underpinned the practice of alchemy. Alchemy meant much more than the search for the Philosopher's Stone (a mystical substance that transmuted metals into gold and cured all diseases). Alchemy was supposed to bring its practitioners closer to spiritual perfection and give them a secret understanding of the universe. However, in some respects it is difficult to distinguish between alchemy and early chemistry. Alchemists used metallurgical techniques and developed new equipment, such as more efficient furnaces. Alchemical experiments conducted on different substances could lead to important discoveries; for example, in 1669 the German alchemist Hennig Brand discovered the element phosphorus while working with a solution of urine. Robert Boyle and Sir Isaac Newton, two of the leading scientists in seventeenth-century England, were both very interested in alchemy.

In 1808 the Manchester-based scientist John Dalton FRS published A New System of Chemical Philosophy, in which he outlined his theories about elements and atoms. He proposed several groundbreaking ideas: that each element is composed of atoms identical to the other atoms in that element; that atoms of one element combine with atoms of another element to form compounds; and that the atoms of each element are characterised by their weight, which can be calculated by systematic analysis. Dalton also proposed a system of pictorial representations of different atoms. However, it was difficult to use, and the system of alphabetical notation used by the Swedish chemist Berzelius was later adopted universally (e.g. 'H2O'). Dalton was often invited to lecture on his theories, and he used a set of small wooden balls with holes and spokes to illustrate the combination of elements.

Around 1875 the British physicist William Crookes FRS experimented with a new piece of equipment. It consisted of an evacuated glass cylinder with metal electrodes at either end. When a high voltage was applied, electrons travelled in straight lines from the cathode to the anode. However, they had so much momentum that many flew past the anode, hitting the end wall of the tube and causing it to glow. The stream of electrons was called a cathode ray. In 1896 J J Thomson, working with colleagues at the Cavendish Laboratory in Cambridge, performed experiments showing that cathode rays were not waves, molecules or atoms, but unique particles.
He called them 'corpuscles', and showed that they were about a thousand times less massive than the smallest atom. In 1904 Thomson proposed a model of the atom in which these corpuscles (now called electrons) were 'enclosed in a sphere of uniform positive electrification'. This was later dubbed the plum pudding model.

Born in 1871 near Nelson in the South Island of New Zealand, Rutherford was the son of a flax miller. He was educated at the local grammar school, then won a scholarship that allowed him to attend Canterbury College in Christchurch. He studied mathematics and physics, discovering a talent and enthusiasm for experiment. This would characterise his whole research career. In 1895 Rutherford moved to Cambridge to work at the Cavendish Laboratory with J J Thomson. He worked on newly-discovered x-rays and radioactivity, the exciting phenomena of the day. It was while at McGill University in Montreal that Rutherford and Frederick Soddy FRS reached the startling conclusion that 'transmutation' occurs naturally in some elements. They had identified the processes of radioactive decay in radium and thorium. Rutherford jokingly warned Soddy, 'don't call it transmutation, they'll have our heads off as alchemists!' Rutherford was awarded the 1908 Nobel Prize in Chemistry for his work on radioactive decay. At the University of Manchester he collaborated with Hans Geiger and an undergraduate student, Ernest Marsden, on an experiment in which a thin piece of gold foil was bombarded with alpha particles. Surprisingly, some particles bounced back. The results suggested to Rutherford that the atom consisted of a small positively-charged nucleus orbited by electrons. His model was revised by Niels Bohr in 1913 but remained an important breakthrough in the understanding of atomic structure.

The ground-breaking work of Rutherford and his collaborators launched the new field of nuclear physics. Following his lead, researchers such as James Chadwick FRS and Patrick Blackett FRS delved further into the structure of the atom. In 1932, Chadwick announced the existence of the neutron. Physicists and others became interested in the potential use of atomic energy. Worldwide curiosity about CERN's newly-constructed Large Hadron Collider demonstrates that our fascination with the tiniest particles of the natural world continues.

The Royal Society would like to thank the following people and institutions for their help in organising this exhibition:
Ms Terri Elder, University of Canterbury, Christchurch, New Zealand
Prof. Mary Fowler
Mr Joshua Nall, The Whipple Museum for the History of Science, Cambridge
Ms Catherine Rushmore, Museum of Science and Industry, Manchester
Prof. Ken Strongman, University of Canterbury, Christchurch, New Zealand
Dr Gordon Squires, Cavendish Laboratory, University of Cambridge
http://royalsociety.org/exhibitions/2009/rutherford/
4.03125
Many people believe that astronomers want to build telescopes on tall mountains or put them in space so they can be ``closer'' to the objects they are observing. This is INcorrect! The nearest star is over 41,500,000,000,000 kilometers (26 trillion miles) away. If you ignore the 300-million-kilometer variation in the distance due to the Earth's motion around the Sun and the 12,756-kilometer variation due to the Earth's rotation, being 4 kilometers closer on a tall mountain amounts to a difference of at most 1 × 10^-11 percent. Telescopes in space get up to 1 × 10^-9 percent closer (again ignoring the much larger variations due to the Earth's orbit around the Sun and the telescope's orbit around the Earth). These are extremely small differences---the distances to even the nearest stars are around 100,000 times greater than the distances between the planets in our solar system. The real reason large telescopes are built on tall mountains or put in space is to get away from the distortion of starlight by the atmosphere: poor seeing, reddening, extinction, and the addition of absorption lines to stellar spectra. The famous observing site at the Kitt Peak National Observatory has many large telescopes, including the 4-meter Mayall telescope (top right) and the McMath Solar Telescope (the triangular one at the lower right). Although it is over 60 kilometers from Tucson, AZ, light pollution from the increasing population of that city has stopped the construction of any more telescopes on the mountain. The Mauna Kea Observatory is probably the best observing site in the world. Many very large telescopes are at the 4,177-meter summit of the extinct volcano. Because of the elevation, the telescopes are above most of the water vapor in the atmosphere, so infrared astronomy can be done. Kitt Peak's elevation of 2,070 meters is too low for infrared telescopes.
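To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python (a minimal sketch: the star distance and mountain height are the round numbers quoted above, while the 600-kilometer orbit altitude is an illustrative assumption, not a figure from this page):

# How much "closer" to the nearest star do a mountaintop and an orbit get you?
nearest_star_km = 41_500_000_000_000   # distance to the nearest star (from above)
mountain_km = 4                        # height of a tall mountain
orbit_km = 600                         # assumed low-Earth-orbit altitude

for label, gain_km in [("mountain", mountain_km), ("orbit", orbit_km)]:
    percent_closer = gain_km / nearest_star_km * 100
    print(f"{label}: {percent_closer:.1e} percent closer")

# mountain: 9.6e-12 percent closer  (the ~1e-11 percent quoted above)
# orbit:    1.4e-09 percent closer  (the ~1e-9 percent quoted above)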
http://www.astronomynotes.com/telescop/s10.htm
4.0625
How to be even 'hotter' at History
Click here to access a number of activities that will help prepare you for A-Level History. You will develop skills in:
- Causation – how and why events take place in History
- Source analysis – extracting information, interpreting historical sources and identifying similarities/differences in sources
- Interpretation – forming judgements about particular historical events
- Supporting points with a precise range of evidence
- Historiography – understanding how and why historians have interpreted the past
You will also see an outline of KEY CONTENT covered in Year 12 and your SUMMER TASKS for both units. There are also suggested EXTENSION ACTIVITIES.
http://www.chartersschool.org.uk/a-level-upskills
4
By the 1950s—when MacKinney began seriously building up his slide collection—several camera and film options existed for amateur photographers who wanted to maximize the potential of both black-and-white and color photographic technologies. Cameras became easier to use and transport, and more adaptable to diverse lighting conditions. Thus, from 1945 through the 1950s, cameras intended for commercial sale benefited from smaller, more streamlined body designs, the development of eye-level cameras, built-in close-up lenses, and flash attachments and/or flash synchronization.1 More importantly for MacKinney's work, film technology also evolved to better accommodate the needs of photographers aiming for slide projection of color images. In the 1940s, the Eastman Kodak Company developed a new kind of positive film to offer in addition to its existing Kodachrome film: Ektachrome. A chromogenic camera film (a kind of film which undergoes a developing process in which chemical synthesis produces color2), Ektachrome was more appealing than Kodachrome film for projection because of its method for creating color. While Kodachrome films required dye injection during development, Ektachrome films contained dyes already built into the emulsion (the photo-sensitive coating on the film). Built-in dyes allowed for a simplified development process and ensured reduced damage from exposure to projector lights.3 Despite their advantages over Kodachrome slides, Ektachrome slides (of which the MacKinney collection is almost entirely comprised) from their inception to the late 1960s had one remarkable disadvantage: poor dark fading stability. Although Ektachrome slides withstood projection-induced deterioration, they would fade even in dark storage at a much more rapid pace than their Kodachrome counterparts—at their introduction, Ektachrome slides could fade at least 20 times faster.4 Perhaps even more problematic was Kodak's failure to inform professional and amateur photographers of Ektachrome's far inferior dark fading stability.5 Thus, from 1946 to 1976, photographers using Kodak Process E-1, E-2, and E-3 Ektachrome films were producing unique color transparencies, but they were unaware that these films could also be subject to severe loss of cyan and yellow dye while in dark storage.6 Given the uniqueness of the MacKinney slides—they are originals that cannot be reproduced from a negative—and the potential damage that could result from poor dark fading stability, fungus growth on the emulsion, and/or excessive handling (slides of any variety deteriorate from physical damage, dirt, fingerprints, and scratches), the digitization of MacKinney's collection provides users with replicas of the slides in their current condition while simultaneously protecting the originals. Stored in tightly-packed cigar boxes, the slide collection was initially given to UNC's Art Department by MacKinney's widow, Abigail MacKinney, shortly after his death. Not sure what to do with the collection, the Art Department contacted Michael McVaugh when he began teaching at UNC in 1964-65. McVaugh suggested giving the master set to the National Library of Medicine in Bethesda (where it still resides) and, since the Art Department had no use for the duplicate set, took charge of it. Over the next forty years the slides were occasionally consulted for various purposes, but they were not heavily used and astonishingly have suffered virtually no physical deterioration.
While the MacKinney slides' physical integrity remains excellent, they display varying levels of discoloration, ranging from moderate to severe color shifts toward green and blue.7 Hoping that others might benefit from the collection and concerned about its long-term preservation, McVaugh instigated its digitization. The MacKinney slides were digitized from late June through August in 2007 using the Nikon SuperCool SCU 9000. In order to create a master archive of the images at the highest resolution possible, the slides were scanned five at a time at 4000 pixels per inch. Copies of these images were then adjusted in Photoshop; they were sharpened, cropped, and color adjusted. These manipulations of the digital images of the slides sought to represent accurately the condition of the original slide and to produce a natural appearance for screen-viewing.8 1 Brian Coe, Cameras: From Daguerreotypes to Instant Pictures (Gothenburg, Sweden: Crown 2 Henry Wilhelm and Carol Brower, The Permanence and Care of Color Photographs: Traditional and Digital Color Prints, Color Negatives, Slides, and Motion Pictures (Grinnell, Iowa: Preservation, 1993): 21. 3 This information on the differences between Ektachrome and Kodachrome films for slide projection courtesy of Keith Longiotti of the North Carolina Collection Photographic Archives. 4 Wilhelm and Brower, 25. 5 Wilhelm and Brower, 25. 6 Wilhelm and Brower, 26. From 1966 to 1977, amateur photographers had access to E-4 Ektachrome films, which were much more stable than the E-3 films used by professionals during the same time period. In 1977, Kodak replaced all previous Ektachrome films with E-6 films, guaranteeing equal dark fading stability for both amateur and professional films. 7 Information concerning slide condition courtesy of Jennifer Merriman and Bill Richards of CDLA - Digital Production Center. 8 This information courtesy of Richards. This site is intended for educational purposes. The manuscripts represented are not held by the University of North Carolina. Those seeking provenance, reproductions and permissions should contact the holding repository.The scene above depicts a man surrounded by zodiac symbols. The image is a part of a manuscript held by the Fitzwilliam Museum in Cambridge, MS 167, folio 35v.
http://www.lib.unc.edu/dc/mackinney/about.html?CISOROOT=/mackinney
4.40625
At the Heritage Foundation's Foundry blog, Julia Shaw reflects on the significance of today's date in history: On December 6, 1865, the 13th Amendment was adopted and slavery was abolished. There has always been intense debate about the existence of slavery in American history, precisely because it raises questions about this nation's dedication to liberty and human equality. At the time of the Founding, there were about half a million slaves in the United States, mostly in the five southernmost states, where these individuals made up 40 percent of the population. From the outset, the Constitution contained three key compromises: on enumeration, the slave trade, and fugitive slaves. But this raises the question: If slavery was contrary to the principles of the American Founding, why didn't the Framers ban slavery in the Constitution when they drafted it? The Founders recognized that slavery blatantly contradicted America's dedication to liberty and equal rights. But they also recognized that if their new country was to survive, they would have to form a strong union and that the Southern states would never ratify a constitution that abolished slavery. The Founders therefore had to refrain from immediately abolishing slavery. This compromise allowed for the immediate survival of the union while setting in motion the eventual eradication of slavery. In countless writings, both public and private, the Founders made clear that they found slavery abhorrent and wished for it to be eradicated. More importantly, they took many actions to curtail its expansion and eliminate it in certain places. The Northwest Ordinance, one of Congress's very first laws, banned slavery and the slave trade in America's first territory. President Thomas Jefferson signed a national ban on the slave trade on January 1, 1808—the first day after the Constitution's 20-year ban on prohibiting the slave trade expired. The opposition to slavery was not confined to the federal government. By 1821, slavery had been fully abolished by half the states in the union.
http://www.fedsocblog.com/blog/this_day_in_constitutional_history_passage_of_the_13th_amendment/
4.1875
Steven S. Skiena

One way to convert from names to integers is to use the letters to form a base ``alphabet-size'' number system: To convert ``STEVE'' to a number, observe that e is the 5th letter of the alphabet, s is the 19th letter, t is the 20th letter, and v is the 22nd letter. Thus ``STEVE'' corresponds to 19·26^4 + 20·26^3 + 5·26^2 + 22·26 + 5 = 9,038,021. Thus one way we could represent a table of names would be to set aside an array big enough to contain one element for each possible string of letters, then store data in the elements corresponding to real people. By computing this function, we immediately know where the person's phone number is stored!!

What's the Problem?
Because we must leave room for every possible string, this method will use an incredible amount of memory. We need a data structure to represent a sparse table, one where almost all entries will be empty. We can reduce the number of boxes we need if we are willing to put more than one thing in the same box! Example: suppose we use the base alphabet number system, then take the remainder modulo the table size. Now the table is much smaller, but we need a way to deal with the fact that more than one (but hopefully very few) keys can get mapped to the same array element.

The Basics of Hashing
The basic idea of hashing is to apply a function to the search key so we can determine where the item is without looking at the other items. To make the table of reasonable size, we must allow for collisions, two distinct keys mapped to the same location. There are several clever techniques we will see to develop good hash functions and deal with the problems of duplicates. The verb ``hash'' means ``to mix up'', and so we seek a function to mix up keys as well as possible. The best possible hash function would hash m keys into n ``buckets'' with no more than ⌈m/n⌉ keys per bucket. Such a function is called a perfect hash function.

How can we build a hash function? Let us consider hashing character strings to integers. The ORD function returns the character code associated with a given character. By using the ``base character size'' number system, we can map each string to an integer. [Figure: histograms of two hash functions - hashing Social Security Numbers by their first three digits produces badly clustered buckets (those digits encode geography), while hashing by the last three digits spreads the keys much more evenly.]

What is the big picture? Ideas for Hash Functions

Prime Numbers are Good Things
Suppose we wanted to hash check totals by the dollar value in pennies mod 1000. What happens? $10.99, $20.99, and $30.99 all map to 99. Prices tend to be clumped by similar last digits, so we get clustering. If we instead use a prime-numbered modulus like 1007, these clusters will get broken: 1099 mod 1007 = 92, 2099 mod 1007 = 85, and 3099 mod 1007 = 78. In general, it is a good idea to use a prime modulus for the hash table size, since it is less likely the data will be multiples of large primes as opposed to small primes - all multiples of 4 get mapped to even numbers in an even-sized hash table!

The Birthday Paradox
No matter how good our hash function is, we had better be prepared for collisions, because of the birthday paradox. Assuming 365 days a year, what is the probability that two people share a birthday? Once the first person has fixed their birthday, the second person has 364 possible days to be born to avoid a collision, or a 364/365 chance. With three people, the probability that no two share is (364/365) × (363/365). In general, the probability of there being no collisions after n insertions into an m-element table is (m/m) × ((m−1)/m) × ((m−2)/m) × ... × ((m−n+1)/m). When m = 366, this probability sinks below 1/2 when N = 23 and to almost 0 when N ≥ 50. The moral is that collisions are common, even with good hash functions.
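To see these pieces in code, here is a minimal Python sketch (the function names and example table size are my own illustration, not from the lecture) of the base-alphabet encoding, a prime-modulus hash, and the no-collision probability behind the birthday paradox:

def string_to_int(s, alpha=26):
    """Interpret a word as a base-alpha number, with a=1, ..., z=26."""
    value = 0
    for ch in s.lower():
        value = value * alpha + (ord(ch) - ord('a') + 1)
    return value

def hash_key(s, m=1007):
    """Map the (huge) base-alpha number into a table of prime size m."""
    return string_to_int(s) % m

def prob_no_collision(n, m=366):
    """Probability that n insertions into an m-slot table never collide."""
    p = 1.0
    for i in range(n):
        p *= (m - i) / m
    return p

print(string_to_int("steve"))   # 9038021 -- the base-26 value computed above
print(hash_key("steve"))        # 196 -- its slot in a 1007-element table
print(prob_no_collision(23))    # ~0.5 -- the birthday paradox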
What about Collisions?
No matter how good our hash functions are, we must deal with collisions. What do we do when the spot in the table we need is occupied?

Collision Resolution by Chaining
The easiest approach is to let each element in the hash table be a pointer to a list of keys. Insertion, deletion, and query reduce to the corresponding problems on linked lists. If the n keys are distributed uniformly in a table of size m, each chain has expected length n/m, so each operation takes expected O(1 + n/m) time. Chaining is easy, but devotes a considerable amount of memory to pointers, which could instead be used to make the table larger. Still, it is my preferred method.

Open Addressing
We can dispense with all these pointers by using an implicit reference derived from a simple function: if the space we want to use is filled, we can examine the remaining locations sequentially (h, h+1, h+2, ...), quadratically (h, h+1, h+4, h+9, ...), or by using a second hash function (``double hashing''). The reason for using a more complicated scheme is to avoid long runs from similarly hashed keys. Deletion in an open addressing scheme is ugly, since removing one element can break a chain of insertions, making some elements inaccessible.

Performance on Set Operations
With either chaining or open addressing, search, insertion, and deletion all take expected constant time, but the worst case is linear in the number of keys. Pragmatically, a hash table is often the best data structure to maintain a dictionary. However, the worst-case running time is unpredictable. The best worst-case bounds on a dictionary come from balanced binary trees, such as red-black trees.
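To make the chaining scheme concrete, here is a minimal Python sketch of a chained hash table (the class and method names are my own illustration; Python's built-in hash stands in for the string function above):

class ChainedHashTable:
    """A dictionary using separate chaining: each slot holds a list of (key, value) pairs."""

    def __init__(self, m=1007):              # prime table size, as argued above
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _slot(self, key):
        return hash(key) % self.m

    def insert(self, key, value):
        bucket = self.buckets[self._slot(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                     # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def search(self, key):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):                   # easy with chaining, ugly with open addressing
        slot = self._slot(key)
        self.buckets[slot] = [(k, v) for (k, v) in self.buckets[slot] if k != key]

table = ChainedHashTable()
table.insert("steve", "555-1234")
print(table.search("steve"))                 # -> 555-1234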
http://www.cs.sunysb.edu/~skiena/214/lectures/lect21/lect21.html
4.09375
Seeing Motion: Lecture Notes - 07/13/99

Motion perception can be broken into 2 main categories:
- perceiving motion of objects in the world ("Object motion").
- perceiving motion of ourselves through the world ("Observer motion").
The organization of this lecture reflects this split; note that the first half does not correspond to an assigned reading (but see Chapter 8 of the course text for an advanced treatment, if you'd like), while the second half reviews a selection of the material from Chapter 13.

1. What is motion?
- Motion is often defined as a change in position over time. From high school, you may remember the equation speed = distance/time.
- Motion can be broken into two main components: direction and speed.
- However, for studying visual motion, there are several other distinctions that are helpful. Consider the variety of types of motion information that you are capable of using in everyday life:
1. Simple translation: watching a ball thrown across your field of view, you can easily perceive that the ball is moving relative to the world.
2. Complex motions: overlooking a crowded street, with cars, pedestrians, and cyclists all moving in different directions at once, you can detect the individual directions and speeds of each person or vehicle, but can also detect the general "flow" of traffic.
3. Apparent motion: looking at a neon street sign, where a series of lights flash one after another, it appears to move.
4. Stroboscopic motion: you see movement on TV or at the movies, although you're actually watching a series of static images.
5. Motion aftereffects: after watching a waterfall for a few minutes, you look at the nearby rocks, and they appear to move upward, even though they're obviously not actually changing their position over time.
6. Structure from motion: a well-camouflaged animal is impossible to see as long as it doesn't move. When it does, you can easily identify it.
7. Optic flow: as you walk down the street, objects in the world change their position in your field of view, and you are easily able to use this information to navigate the world.
8. Induced movement: sitting on a stationary train and looking out the window, the train on the rail next to yours begins to move. You at first feel like your train is moving, even though it is not.
9. Eye movements: your eyes are constantly moving, making small shaking motions or long, smooth motions. In either case, the image falling on your retina changes, but the world seems stationary.

2. Object motion
- Let's call examples 1-6 "Object Motion", since they all involve objects changing their positions over time.
- Although objects are really moving out in the world, your visual system infers motion. Often this inference is correct, but sometimes it is not. Illusions of movement are therefore often used to uncover the basic mechanisms that we use to perceive real motion:
- Motion (movement) aftereffect ("MAE"). After viewing the attached movie of continuous motion in the same direction, stationary objects appear to move in the opposite direction. Think about how the adaptation of direction-selective neurons in V1 might contribute to this illusory percept of movement.
- Movement-without-motion (film viewed in class). In these demonstrations, the stimulus also appears to move but never actually changes position.
This stimulus separates the contributions of "local" and "global" motion-detection mechanisms: the local (or short-range) detectors register motion (e.g., each contour seems to be moving), but the more global (long-range) detectors know that the larger stimulus is not actually changing position. This suggests a hierarchy of motion detectors in the visual system.
- In the case of apparent motion, a stimulus is flashed briefly in one location and then another (usually identical) stimulus is flashed in another (nearby) location. Under certain circumstances, observers perceive the first stimulus moving to the position of the second (as if there were only 1 stimulus, actually moving from the first position to the second). This is the well-known principle underlying neon marquee lights that you see at old theatres and casinos. The perception of apparent motion depends crucially on the interval of time between the flashing of the two stimuli (the "ISI": inter-stimulus interval):
- If the ISI is < 30 msec (.03 sec), the two stimuli appear to be simultaneous.
- If the ISI is between 30 and 60 msec, the first stimulus will appear to move partially toward the second stimulus.
- If the ISI is > 60 msec (but less than about 200 msec), the first stimulus appears to move smoothly and continuously to the location of the second stimulus (apparent motion is achieved).
- When the ISI is longer (> 200 msec), you (veridically) perceive the first stimulus being flashed, followed by the second stimulus being flashed at a different location.
- Stroboscopic motion: Apparent motion is 1 type of stroboscopic motion. A more familiar instance of stroboscopic motion occurs in movies and on television. As you know, film is made of a series of individual, static frames (snapshots), presented rapidly. Early movies showed 16 frames per second; although fast enough for stroboscopic motion, people were still able to detect the flicker between frames (hence the name "flicks"). To solve this problem, film engineers had to play upon not just apparent motion but also "visual persistence". To see this phenomenon, wave a pencil quickly in front of your face. You'll see the pencil blur, leaving a brief streak behind it at fast speeds. The early frame rate of movies was too slow to utilize this persistence to build a more coherent, smooth percept, but a change to 24 frames per second was adequate, and the "flicking" went away. Current films still show 24 frames per second, but now flash each frame 3 times, artificially increasing the flicker rate to 72 flashes/sec, which puts modern film well above our ability to detect flicker.

3. Simple motion detectors in V1
How does our visual system infer motion from the variety of cases described above?
- We already know from the experiments of Hubel and Wiesel that there are directionally-selective neurons in V1.
- Building a directionally-selective receptor: when an object moves, its representation in the retinal image moves, and therefore stimulates a consecutive series of photoreceptors. See the simple circuit proposed in the figure, based on simple inhibitory connections, and the responses to 2 different directions of motion ((a)-(b) for rightward motion, (c)-(d) for leftward).
- Direction tuning curves: direction-selective complex cells fire maximally when a bar of light is moved through their receptive field in a certain ("preferred") direction, and fire less and less as the direction is changed more and more from the preferred one. [see tuning curves]
- Note that the direction tuning curves for direction-selective neurons in V1 are measured by passing a bar through the receptive field. The direction of movement of the bar is assumed to be perpendicular to the orientation of the bar. However, if the bar extends across the entire receptive field, its motion could actually be in any of a number of directions. This is known as the aperture problem. The aperture problem reminds us that the direction and speed of motion of a bar (or a "grating" pattern of bars) are ambiguous when edges or texture are not present. Also note that we, as people, rarely see motion through an aperture that is ambiguous-- but the receptive field of each motion-responsive neuron is essentially an aperture, so the aperture problem is something that our motion pathway must always solve.
- When two grating patterns with different orientations are superimposed, the resulting plaid pattern (usually) appears to move as a whole. What is the perceived direction?
- When a plaid grating pattern is passed through the receptive field of direction-selective V1 neurons, the neurons respond to the individual components: i.e., if one of the two superimposed gratings is moving in the preferred direction, the cell will fire maximally. These neurons are therefore often called component cells. However, our percept is of a coherent plaid, drifting in a direction that is not the same as the direction of motion of either of the component gratings.

4. Motion detectors in area MT
- To see the neural mechanisms that underlie our actual percept of plaid-pattern motion, we need to leave V1 and head to extrastriate area MT. MT can be defined according to the F-A-C-T criteria we have discussed earlier:
- Function. MT is notoriously responsive to motion. Compare the fMRI responses of various brain areas to motion (left) and flicker (right). While most areas respond to both, MT responds much more strongly to motion vs stationary stimuli than to flicker vs mean-gray stimuli. This reflects the fact that approximately 90% of MT neurons are direction-sensitive.
- Architecture. Staining for the presence of cytochrome oxidase, an enzyme used in metabolism, is often used to help identify different visual areas. See the distinctive patch (quite dark and dense) that is evident in MT. Perhaps more interestingly, MT has a columnar architecture, no longer based on eye-of-origin (ocular dominance columns) or orientation preference (orientation columns/"pinwheels"), but instead systematically organized with regard to the preferred direction of motion of its neurons. [see the direction-columns in MT]
- Connections. The direction-selective neurons in V1 project (connect) to MT neurons.
- Topography. Indeed, MT has its own retinotopic map of the world (first demonstrated in humans by R Khan and R Dougherty, a graduate student and a postdoc here in the Psych department at Stanford). However, the receptive fields of MT neurons are much larger than those of V1 neurons (i.e., MT-neuron receptive fields cover or "see" a patch of visual space 100 times the size of V1 neuron receptive fields), suggesting that they are able to integrate more global motion signals.
- When a plaid pattern is passed through the receptive field of most MT neurons, the cell responds to the direction of the plaid, i.e., the perceived direction of motion, and not to the directions of motion of the component gratings.
These cells are called pattern cells. [see the different responses of component and pattern cells to a plaid pattern]
- Building a pattern cell: a pattern cell can receive inputs from a series of component cells, each of which has a different preferred direction, but all of which have receptive fields covering the same part of visual space.
- The responses of MT neurons have additional correspondences to our conscious percepts of motion. Bill Newsome and colleagues (here at Stanford) trained monkeys to perform a difficult motion discrimination task, judging the direction of motion of noisy, not-very-coherent moving dot stimuli. Monkeys view the dots and decide which direction the dots moved. When the experimenters would electrically stimulate an MT column selective for upward motion, the monkey would be more likely to respond that the stimulus moved upward. [see the Newsome experimental setup, stimulus, and results]
- Human MT and the motion aftereffect: The time course of activity in human MT seems to match the time course of the perceptual MAE (as measured by Roger Tootell at Harvard/MGH). [see the fMRI data]

5. Observer motion
- Information derived from vision is a major guide to our actions; the perception of motion is especially central to our own movements through the world (see examples 7 and 8 at top).
- Optic flow and heading perception: an idea originally described by JJ Gibson, the optic flow field is a representation of the direction and speed of motion of the visual field, relative to the observer. [see an optic flow field for a pilot landing a plane] Note that near points move fast (large arrows) while faraway points move more slowly (small arrows). All of the arrows point away from a central spot, the focus of expansion, which corresponds to where the observer is headed.
- J.J. Gibson hypothesized that the visual system estimates the optic flow from the changing pattern of light on the retina, and uses it to estimate the 3D motions of observers and objects. Note that as observer motion becomes more complex (e.g., eye and head movements while walking) and objects in the scene move relative to one another (e.g., a bird flies across the pilot's field of view), the optic flow field becomes increasingly complex. No matter how complex, the optic flow field provides information that underlies many aspects of our ability to navigate the physical world.
- Avoiding collisions: We are quite good at detecting objects that are on a path that will make contact with us, even as early as 8 days old. Such "looming" objects become increasingly large on the retina as they approach us, and we appear to make reflexive defensive responses ("flinching") that are based on a (wisely conservative) estimate of "tau", the "time to impact" of the object.
- Maintaining balance: We use optic flow information, in addition to proprioceptive (= "where we are in space") information from joints and muscles, to compensate for slight imbalances. In a "swinging room" apparatus, the optic flow can be artificially swayed, and people make unconscious compensatory leanings in the opposite direction. Children are especially sensitive to this manipulation. Adults can confirm the importance of optic flow information to balance by closing their eyes while standing on one leg.
- Induced motion: Although the train example at top (motion example #8) often produces an illusory feeling of motion, this usually only happens when the other (actually-moving) train is close. Why doesn't induced motion occur as often in cars?
- See the text for more research applying these concepts to athletics and driving.

6. Eye movements
- Our eyes are constantly making small, fast movements, although your perception of the world isn't nearly so "jumpy". However, there are ways to make your eye movements more obvious; see this demo using afterimages.
- In fact, as we already learned, images stabilized on the retina fade. The quick eye movements that you are constantly making keep "refreshing" the visual system, so that it keeps detecting change, and nothing fades.
- There are 2 main types of larger eye movements: saccades and smooth pursuit. Saccades are fast, point-A-to-point-B motions. Smooth pursuit movements, meanwhile, follow moving objects gradually. Interestingly, smooth pursuit can only be performed when there's something to pursue; try moving your eyes gradually and evenly from left to right. You'll notice that they jump, and are difficult to control. Now, move your finger across your field of view from left to right, and follow it with your eyes. You should have no trouble now.
- Corollary discharge: Despite the nearly-constant motions of our eyes (as well as our heads), we perceive a steady world. This is likely due to an integration of visual information with information about eye movements/position and body movements. Usually, this compensatory process works. However, it can be misled, especially when you don't control your eye movements in the traditional way. Try gently pushing on the side of one eye (keeping the other closed): the world will appear to jiggle around, because there isn't an eye-muscle (motor) signal available to compensate for the motion of the eye and the subsequent motion of the retinal image.

7. Motion processing after MT: areas MST and STS
- Area MT sends projections to area MST, which appears to process more complex motion patterns. It includes cells that respond to expansion/contraction (optic flow?), and to other combinations of rotation and translation. Additionally, there is evidence that MST is the "comparator" of corollary discharge, as it receives inputs from vestibular and movement areas of the brain. [see sample MST responses to expansion and rotation]
- Area STS, meanwhile, has neurons that are responsive to biological motion: see the in-class demo of "point-light walkers". This may underlie our surprisingly acute ability to recognize human-like figures from very impoverished information (remember the Peter Gabriel video for "Sledgehammer"? You can identify the dance steps merely from a few light bulbs stuck on the dancers).

8. Where to now?
Next lecture, we'll examine another way we perceive the "where" in the visual world: seeing the shape and location of objects in three dimensions.
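As a concrete illustration of the delay-and-compare circuit sketched in section 3, here is a minimal Python sketch of a Reichardt-style motion detector on a one-dimensional "retina" (the stimulus and all numbers are invented for illustration, not physiological values, and this is only one classic way to model such a detector):

def detect_motion(frames, x, dx=1):
    """Correlate receptor x (delayed by one frame) with receptor x+dx,
    minus the mirror-image pairing; positive means rightward motion."""
    response = 0.0
    for t in range(1, len(frames)):
        rightward = frames[t - 1][x] * frames[t][x + dx]
        leftward = frames[t - 1][x + dx] * frames[t][x]
        response += rightward - leftward
    return response

# A bright spot stepping rightward across a 5-receptor retina:
moving_right = [[1, 0, 0, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 0, 1, 0]]

print(detect_motion(moving_right, x=0))        # positive -> rightward
print(detect_motion(moving_right[::-1], x=0))  # negative -> leftward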
http://www-psych.stanford.edu/~lera/psych115s/notes/lecture7/
4.09375
The International School for Holocaust Studies
What Did Oskar Schindler View from the Hill?
Righteous Among the Nations: Oskar Schindler as a Study Case
Grades: 9 - 12
Duration: 1 hour
This is the last project Zita Turgeman z"l was working on, one of many she was involved with at the International School for Holocaust Studies at Yad Vashem.
- Learn about the unique efforts and actions of the Righteous Among the Nations to help Jews during the Holocaust.
- Analyze the motivations of Oskar Schindler to help Jews survive.
- Identify the process of change that Schindler underwent from being a Nazi businessman to a rescuer of Jews.
Begin the lesson by asking students who, in their opinion, the Righteous Among the Nations are, or whether they have heard about what these people did. Explain to students the following basic definition of Righteous Among the Nations as defined by Yad Vashem: Righteous Among the Nations are non-Jews who risked their lives during the Holocaust to save Jews in countries under Nazi rule or countries that collaborated with the German regime. This lesson focuses on one of the Righteous Among the Nations: Oskar Schindler.
Divide the class into 5-6 groups, giving each group three documents about Oskar Schindler and his actions during the Holocaust. Ask each group to assume the role of a committee of judges, requesting that they discuss whether Schindler, on the basis of the historical documents in hand, is entitled to receive the title of Righteous Among the Nations. Each group is asked to arrive at a unanimous decision.
Note to the Teacher: Two of the documents are testimonies given by Jews describing Schindler's behavior upon his arrival in Krakow, Poland. The third document is a letter written on Schindler's behalf in case the Allied armies, upon liberation, accused him of being a Nazi and arrested him.
Inform students that Yad Vashem, the Holocaust Martyrs' and Heroes' Remembrance Authority in Israel, has a committee of historians and jurists that thoroughly reviews every case, based on written testimonies and other relevant documentation. After studying each case, the committee reaches a conclusion as to whether or not this person should be recognized as a Righteous Among the Nations.
Organize a whole-class discussion based on each committee's presentation of its conclusion. On chart paper make two columns, one entitled PRO and the other CON, and record the considerations of the students. Together with the students, try to identify common criteria that they have found during their work in groups. Teachers may wish to guide the students, pointing out some of the following information based on the criteria as outlined by Yad Vashem:
- An attempt, with the active involvement of the rescuer, to save a Jew, regardless of whether the attempt(s) ended in success or failure.
- Acknowledged mortal risk for the rescuer during the endeavor - during the Nazi regime, the warnings clearly stated that whoever extended a hand to assist the Jews placed not only their own life at risk but also the lives of their loved ones.
- Humanitarian motives as the primary incentive - the rescuer must not have received material compensation as a condition of their actions.
- The rescuer is a non-Jew.
For the Teacher: If time allows, consider holding a short discussion about the above criteria. These guidelines are not clear-cut, and various interpretations may be made. For example, some diplomats, such as Raoul Wallenberg, helped save thousands of Jews from death.
As diplomats they had diplomatic immunity and did not risk their lives per se. In addition, in a few cases some Righteous eventually married the person they saved. More information is available on the Righteous Among the Nations webpage.

Part 2: Excerpts from the Film "Schindler's List"
It is important to note that this Hollywood movie is not an historical document but rather an artistic feature film based on the interpretation of an historical event by the director, Steven Spielberg. Together with the students, analyze the different documents they have received from the Schindler case. How can such different testimonies describe the same person? In an effort to grapple with this question, view the scene in which Oskar Schindler, riding on a horse with his mistress, looks down from a hilltop. From this hill, he sees the evacuation of the Krakow Ghetto. The film is shot in black-and-white; in this scene, however, he suddenly sees a girl wearing a red coat. Schindler sees a human being in front of him, and his perspective changes. Ask students what they understand from this scene. On the blackboard, write the verb: TO SEE. Elaborate on the following with your students:
- Schindler saw an individual.
- Schindler saw one person in a mass of human beings.
- What is the value of seeing another human being in his/her despair, and is it enough just to "see" that person?
- How did Schindler translate what he saw into action?
In another scene, Schindler meets the commander of the Plaszow concentration camp, Amon Goeth. During this conversation, he bargains for the lives of the Jewish workers from his factory. Schindler pays Goeth a large sum of money, and Goeth is pleased. Schindler manages to make Goeth believe that he is only interested in keeping his workers for financial gain. After viewing this scene, ask students once again what they have seen. On the blackboard write down the verbs: TO DECIDE and TO ACT. Analyze with the students the following points in connection with what they saw in the film clip:
- After seeing the evacuation of the Krakow Jews, Schindler becomes an active, involved rescuer.
- Schindler's interests and motivations change dramatically, and the money he received through the exploitation of Jews is used to save Jewish lives.
- Schindler tries to convince another factory owner, Medrich, to act in a similar manner. Medrich refuses because he believes that the risk is too great. Despite the high risk, Schindler decides to act, and therefore his actions become extraordinary.
- Schindler is portrayed as a normal human being as opposed to an angel. Throughout the film, we notice his conduct in various situations, including his affairs with women. Righteous people were not necessarily the most "righteous" people in their daily lives.
- In your opinion, what are the messages that Righteous people pass on to us?
It is important to note that we do not know why one individual is ready to take risks whereas another is not. In addition, the lessons that the Righteous teach us are not necessarily those of virtue or justice; rather, they shed light on the complexity of human beings and their actions depending on various circumstances and situations. The question of a message for future generations is often raised when dealing with the Holocaust. The deeds of the Righteous serve as a model of human courage and the virtue of humankind. As educators and as people we realize that most people are bystanders - and not rescuers who risk their own lives to save others.
We want to encourage students to be more sensitive and empathetic, understanding that the silent majority also bears a responsibility for the misfortune of its suffering minority. Students are not expected to immediately dedicate their lives to altruistic causes, but rather to begin with personal introspection.

Natan Werzel's testimony
"In 1939, before the war, I bought some machines from an enamel factory at an auction. Schindler came to my factory like a robber, without any official appointment, and announced that as long as I ran the business well, I would not be harmed. High-ranking German officers used to come to Schindler to buy and sell. I worked there for roughly a year or a year and a half. Schindler's attitude towards me and towards the other Jews was generally good. One day he told me: 'In Russia they line you up at the wall if you know too much.' I knew all sorts of things about him. At the end of 1941 he discharged me. In the summer of 1942 he sent for me. He explained that he was under police investigation, and that it was forbidden for Germans to buy businesses from Jews. He demanded that I sign forged documents indicating that I had sold my machines to a Pole before the war. I refused. He offered a bribe, and still I refused. He went to another room. Half an hour later, some SS men in black uniforms turned up and started beating me. Schindler himself was also beating and cursing me. I just lay there, and then I lost consciousness. After I woke up, Schindler said to me: 'Will you sign now, you cheat?' I said I would. That night I had to go see a doctor. When I returned to my village, a clerk from the Ministry of Foreign Currencies in Cracow suddenly arrested me. He found jewelry in my house and took it. Then he said: 'You can get this back from Schindler!' This means that Schindler had told on me."

Julius Wiener's testimony to the Committee, 10/10/1956
(The Wiener family used to own a wholesale shop for enamel.)
"On 15/10/1939 Oskar Schindler broke into our shop in a manner reminiscent of gangsters. He put his hand on the cash register, locked the doors, and then announced that from that moment on he would be taking over the running of the business. He attacked my father very rudely, spouting insults at him. He also threatened him with a gun, and when my wife tried to interfere, he shouted at her: 'Shut up, you Jewish pig! Now you will get to know me and Hitler!' He demanded that my father kiss Hitler's portrait. He forced us to sign some papers handing over ownership of the business. He didn't let my father come to the shop, but I had to continue working there for a living."
(Mr. Wiener says that two months after this incident, Schindler accused him of cheating. The accusation was over the measurement of enamel; Schindler had staged a similar false accusation of cheating at another of his factories. He threw Mr. Wiener out of the shop and ordered him not to return. The next day Mr. Wiener did return and tried to speak with Schindler.)
"Around noon, some SS men came into the factory. They wore uniforms. Schindler pointed at me and told one of them: 'Give him a quick haircut!' The five SS men took me to the back room, locked the door and brutally began to beat and punch me all over my body. After a while I fell on the floor, wounded and bleeding, and then lost consciousness. Later, when I woke up, I saw my assailants pouring water on me.
The hooligan who had received the orders from Schindler grabbed me, sat me down on a chair and said to me: 'You lousy Jew, if you dare to bother the manager (Schindler) again, if you dare to come either here or to his factory ever again, you will go to the place from which no one returns!' I did not come back. I understood that Schindler's goal was to learn from me how to run the business. The minute this goal was achieved, he threw me to the streets like a discarded object…"

A Letter Written by Schindler's Former Workers
Signed: Isaak Stern, former employee of the Palestine Office in Krakow; Dr. Hilfstein; Chaim Salpeter, former President of the Zionist Executive in Krakow for Galicia and Silesia

We, the undersigned Jews from Krakow, inmates of Plaszow concentration camp, have, since 1942, worked in Director Schindler's business. Since Schindler took over management of the business, it was his exclusive goal to protect us from resettlement, which would have meant our ultimate liquidation. During the entire period in which we worked for Director Schindler he did everything possible to save the lives of the greatest possible number of Jews, in spite of the tremendous difficulties, especially at a time when employing Jewish workers caused great difficulties with the authorities. Director Schindler took care of our sustenance, and as a result, during the whole period of our employment by him there was not a single case of unnatural death. All in all he employed more than 1,000 Jews in Krakow. As the Russian front line approached and it became necessary to transfer us to a different concentration camp, Director Schindler relocated his business to Bruennlitz near Zwittau. There were huge difficulties connected with relocating the business, and he took great pains to carry the plan through. The fact that he obtained permission to create a camp in which not only women and men, but also families could stay together is unique within the territory of the Reich. Special mention must be given to the fact that our resettlement to Bruennlitz was carried out by way of a list of names, put together in Krakow and approved by the Central Administration of all concentration camps in Oranienburg (a unique case). After the men had been interned in Gross-Rosen concentration camp for no more than a couple of days, and the women for 3 weeks in Auschwitz concentration camp, we may state with certainty that with our arrival in Bruennlitz we owe our lives solely to the efforts of Director Schindler and his humane treatment of his workers. Director Schindler took care of the improvement of our living standards by providing us with extra food and clothing. No money was spared, and his one and only goal was the humanistic ideal of saving our lives from inevitable death. It is only thanks to the ceaseless efforts and interventions of Director Schindler with the authorities in question that we stayed in Bruennlitz, in spite of the existing danger that, with the approaching front, we would all have been moved away by the leaders of the camp, which would have meant our ultimate end. This we declare today, on this day of the declaration of the end of the war, as we await our official liberation and the opportunity to return to our destroyed families and homes. Here we are, a gathering of 1100 people, 800 men and 300 women.
All Jewish workers who were inmates of the Gross-Rosen and Auschwitz concentration camps respectively declare wholeheartedly their gratitude towards Director Schindler, and we herewith state that it is exclusively due to his efforts that we were permitted to witness this moment, the end of the war. Concerning Director Schindler's treatment of the Jews, one event that took place during our internment in Bruennlitz in January of this year deserves special mention: a transport of Jewish inmates that had been evacuated from the Goleschow outpost of the Auschwitz concentration camp ended up near us. This transport consisted exclusively of more than 100 sick people from a hospital which had been cleared during the liquidation of the camp. These people reached us frozen and almost unable to carry on living after having wandered for weeks. No other camp was willing to accept this transport, and it was Director Schindler alone who personally took care of these people, giving them shelter on his factory premises even though there was not the slightest chance of them ever being employed. He gave considerable sums out of his own private funds to enable their recovery as quickly as possible. He organized medical aid and established a special hospital room for those people who were bedridden. It was only because of his personal care that it was possible to save 80 of these people from their inevitable death and to restore them to life. We sincerely plead with you to help Director Schindler in any way possible, and especially to enable him to establish a new life, because in all he did for us, both in Krakow and in Bruennlitz, he sacrificed his entire fortune. Bruennlitz, May 8th 1945."
http://yadvashem.org/yv/en/education/lesson_plans/schindler.asp
4.40625
When scientists first began using rockets for research, their eyes were focused upward, on the mysteries that lay beyond our atmosphere and our planet. But it wasn't long before they realized that this new technology could also give them a unique vantage point from which to look back at Earth. Scientists working with V-2 and early sounding rockets for the Naval Research Laboratory (NRL) took the first steps in this direction almost ten years before Goddard was formed. The scientists put aircraft gun cameras on several rockets in an attempt to determine which way the rockets were pointing. When the film from one of these rockets was developed, it had recorded images of a huge tropical storm over Brownsville, Texas. Because the rocket
For one thing, the Earth's shape and gravity field affected the orbit of satellites. So at the beginning of the space age, Goddard's tracking and characterizing the orbit of the first satellites was in and of itself a scientific endeavor. From that orbital data, scientists could infer information about the Earth's gravity field, which is affected by the distribution of its mass. The Earth, as it turns out, is not perfectly round, and its mass is not perfectly distributed. There are places where land or ocean topography results in denser or less dense mass accumulation. The centrifugal force of the Earth's rotation combines with gravity and these mass concentrations to create bulges and depressions in the planet. In fact, although we think of the Earth as round, Goddard's research showed us that it is really slightly pear-shaped.

Successive Goddard satellites enabled scientists to gather much more precise information about the Earth's shape as well as exact positions of points on the planet. In fact, within 10 years, scientists had learned as much again about global positioning, the size and shape of the Earth, and its gravity field as their predecessors had learned in the previous 200 years. Laser reflectors on Goddard satellites launched in 1965, 1968, and 1976, for example, allowed scientists to make much more precise measurements between points, which enabled them to determine the exact location or movement of objects. The laser reflectors developed for Goddard's LAGEOS satellite, launched in 1976, could determine movement or position within a few centimeters (the timing arithmetic behind that precision is sketched below). Among other things, the satellite data told scientists that the continents seem to be inherently rigid bodies, even if they contain divisive bodies of water, such as the Mississippi River, and that continental plate movement appears to occur at a constant rate over time. Plate movement information provided by satellites has also helped geologists track the dynamics that lead up to earthquakes, which is an important step in predicting these potentially catastrophic events. The satellite positioning technique used for this plate tectonic research was the precursor to the Global Positioning System (GPS) technology that now uses a constellation of satellites to provide precise three-dimensional navigation for aircraft and other vehicles.

Yet although a viable commercial market is developing for GPS technology today, the greatest commercial application of space has remained the field of communication satellites.4 For all the talk about the commercial possibilities of space, the only area that has proven substantially profitable since 1959 is communication satellites, and Goddard played an important role in developing the early versions of these spacecraft. The industry managers who were conducting research studies and contemplating investment in this field in 1959 could not have predicted the staggering explosion of demand for communications that has accompanied the so-called "Information Age." But they saw how dramatically demand for telephone service had increased since World War II, and they saw potential in other communications technology markets, such as better or broader transmission for television and radio signals. As a result, several companies were even willing to invest their own money, if necessary, to develop communication satellites.
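The centimeter-level figure quoted above for satellite laser ranging implies an extraordinary feat of timekeeping, since range is recovered from the round-trip travel time of a light pulse. A minimal sketch of the arithmetic, in Python; the altitude is approximate and used only for illustration, not a mission specification:

    # A minimal sketch, assuming a LAGEOS-class orbit near 5,900 km;
    # the point is the timing budget, not mission-accurate numbers.
    C = 299_792_458.0                      # speed of light, m/s

    def range_from_round_trip(dt_seconds):
        """One-way range recovered from a measured round-trip light time."""
        return C * dt_seconds / 2.0

    altitude_m = 5.9e6                     # approximate LAGEOS altitude
    round_trip_s = 2.0 * altitude_m / C
    print(f"round-trip light time: {round_trip_s * 1e3:.1f} ms")    # ~39 ms

    # Resolving 1 cm in range means timing that pulse to ~67 picoseconds.
    dt_for_1cm = 2.0 * 0.01 / C
    print(f"timing precision for 1 cm: {dt_for_1cm * 1e12:.0f} ps")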
The Department of Defense (DoD) actually had been working on communication satellite technology for a number of years, and it wanted to keep control of what it considered a critical technology. So when NASA was organized, responsibility for communication satellite technology development was split between the new space agency and the DoD. The DoD would continue responsibility for "active" communication satellites, which added power to incoming signals and actively transmitted the signals back to ground stations. NASA's role was initially limited to "passive" communication satellites, which relied on simply reflecting signals off the satellite to send them back to Earth.5

NASA's first communication satellite, consequently, was a passive spacecraft called "Echo." It was based on a balloon design by an engineer at NASA's Langley Research Center and developed by Langley, Goddard, JPL and AT&T. Echo was, in essence, a giant mylar balloon, 100 feet in diameter, that could "bounce" a radio signal back down to another ground station a long distance away from the first one. Echo I, the world's first communication satellite, was successfully put into orbit on 12 August 1960. Soon after launch, it reflected a pre-taped message from President Dwight Eisenhower across the country and other radio messages to Europe, demonstrating the potential of global radio communications via satellite. It also generated a lot of public interest, because the sphere was so large that it could be seen from the ground with the naked eye as it passed by overhead.

Echo I had some problems, however. The sphere seemed to buckle somewhat, hampering its signal-reflecting ability. So in 1964, a larger and stronger passive satellite, Echo II, was put into orbit. Echo II was made of a material 20 times more resistant to buckling than Echo I and was almost 40 feet wider in diameter. Echo II also experienced some difficulties with buckling. But the main reason the Echo satellites were not pursued any further was not that the concept wouldn't work. It was simply that it was eclipsed by much better technology - active communication satellites.6

Syncom, Telstar, and Relay

By 1960, Hughes, RCA, and AT&T were all advocating the development of active communication satellites. They differed in the kind of satellite they recommended, however. Hughes felt strongly that the best system would be based on geosynchronous satellites. Geosynchronous satellites are in very high orbits - 22,300 miles above the ground. This high orbit allows their orbital speed to match the rotation speed of the Earth, which means they can remain essentially stable over one spot, providing a broad range of coverage 24 hours a day. Three of these satellites, for example, can provide coverage of the entire world, with the exception of the poles. The disadvantage of using geosynchronous satellites for communications is that sending a signal up 22,300 miles and back causes a time-delay of approximately a quarter second in the signal (a quick check of both numbers follows below). Arguing that this delay would be too annoying for telephone subscribers, both RCA and AT&T supported a bigger constellation of satellites in medium Earth orbit, only a few hundred miles above the Earth.7

The Department of Defense had been working on its own geosynchronous communication satellite, but the project was running into significant development problems and delays.
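Both numbers in the geosynchronous argument, the 22,300-mile altitude and the quarter-second delay, can be checked from first principles: Kepler's third law fixes the altitude at which an orbital period matches one Earth rotation, and the speed of light sets a floor on the delay. A quick sketch using standard physical constants (a straight-up-and-back signal path, so this is a lower bound on the delay):

    import math

    MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_378_137.0         # Earth's equatorial radius, m
    C = 299_792_458.0             # speed of light, m/s
    T_SIDEREAL = 86_164.1         # one Earth rotation (sidereal day), s

    # Kepler's third law: the orbital radius whose period matches Earth's spin.
    r = (MU_EARTH * T_SIDEREAL**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    altitude_m = r - R_EARTH
    print(f"geostationary altitude: {altitude_m / 1000:,.0f} km "
          f"(~{altitude_m / 1609.344:,.0f} miles)")    # ~35,786 km, ~22,236 mi

    # Light-speed delay for a signal sent straight up and back down:
    print(f"minimum round-trip delay: {2.0 * altitude_m / C:.2f} s")   # ~0.24 s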
NASA had been given permission by 1960 to pursue active communication satellite technology as well as passive systems, so the DoD approached NASA about giving Hughes a sole-source contract to develop an experimental geosynchronous satellite. The result was Syncom, a geosynchronous satellite design built by Hughes under contract to Goddard. Hughes already had begun investing its own money and effort in the technology, so Syncom I was ready for Goddard to launch in February 1963 - only 17 months after the contract was awarded. Syncom I stopped sending signals a few seconds before it was inserted into its final orbit, but Syncom II was launched successfully five months later, demonstrating the viability of the system. The third Syncom satellite, launched in August 1964, transmitted live television coverage of the Olympic Games in Tokyo, Japan to stations in North America and Europe.

Although the military favored the geosynchronous concept, it was not the only technology being developed. In 1961, Goddard began working with RCA on the "Relay" satellite, which was launched 13 December 1962. Relay was designed to demonstrate the feasibility of medium-orbit, wide-band communications satellite technology and to help develop the ground station operations necessary for such a system. It was a very successful project, transmitting even color television signals across wide distances.

AT&T, meanwhile, had run into political problems with NASA and government officials who were concerned that the big telecommunications conglomerate would end up monopolizing what was recognized as potentially powerful technology. But when NASA chose to fund RCA's Relay satellite instead of AT&T's design, AT&T decided to simply use its own money to develop a medium orbit communications satellite, which it called Telstar. NASA would launch the satellite, but AT&T would reimburse NASA for the costs involved. Telstar 1 was launched on 10 July 1962, and a second Telstar satellite followed less than a year later. Both satellites were very successful, and Telstar 2 demonstrated that it could even transmit both color and black and white television signals between the United States and Europe.

In some senses, Relay and Telstar were competitors. But RCA and AT&T, who were both working with managers at Goddard, reportedly cooperated very well with each other. Each of the efforts was seen as helping to advance the technology necessary for this new satellite industry to become viable, and both companies saw the potential profit of that in the long run.

By 1962, it was clear that satellite communications technology worked, and there was going to be money made in its use. Fearful of the powerful monopoly satellites could offer a single company, Congress passed the Satellite Communications Act, setting up a consortium of existing communications carriers to run the satellite communications industry. Individual companies could bid to sell satellites to the consortium, but no single company would own the system. NASA would launch the satellites for Comsat, as the consortium was called, but Comsat would run the operations. In 1964, the Comsat consortium was expanded further with the formation of the International Telecommunications Satellite Organization, commonly known as "Intelsat," to establish a framework for international use of communication satellites. These organizations had the responsibility for choosing the type of satellite technology the system would use.
The work of RCA, AT&T and Hughes had proven that either medium-altitude or geosynchronous satellites could work. But in 1965, the consortiums finally decided to base the international system on geosynchronous satellites similar to the Syncom design.8

Applications Technology Satellites

Having helped to develop the prototype satellites, Goddard stepped back from operational communication satellites and focused its efforts on developing advanced technology for future systems. Between 1966 and 1974, Goddard launched a total of six Applications Technology Satellites (ATS) to research advanced technology for communications and meteorological spacecraft. The ATS spacecraft were all put into geosynchronous orbits and investigated microwave and millimeter wavelengths for communication transmissions, methods for aircraft and marine navigation and communications, and various control technologies to improve geosynchronous satellites. Four of the spacecraft were highly successful and provided valuable data for improving future communication satellites. The sixth ATS spacecraft, launched 30 May 1974, even experimented with transmitting health and education television to small, low-cost ground stations in remote areas. It also tested a geosynchronous satellite's ability to provide tracking and data transmission services for other satellites. Goddard's research in this area, and the expertise the Center developed in the process, made it possible for NASA to develop the Tracking and Data Relay Satellite System (TDRSS) the agency still uses today.9

After ATS-6, NASA transferred responsibility for future communication satellite research to the Lewis Research Center. Goddard, however, maintained responsibility for developing and operating the TDRSS tracking and data satellite system.10

Statistically, the United States has the world's most violent weather. In a typical year, the U.S. will endure some 10,000 violent thunderstorms, 5,000 floods, 1,000 tornadoes, and several hurricanes.11 Improving weather prediction, therefore, has been a high priority of meteorologists here for a very long time. The early sounding rocket flights began to indicate some of the possibilities space flight might offer in terms of understanding and forecasting the weather, and they prompted the military to pursue development of a meteorological satellite. The Advanced Research Projects Agency (ARPA)12 had a group of scientists and engineers working on this project at the U.S. Army Signal Engineering Laboratories in Ft. Monmouth, New Jersey when NASA was first organized. Recognizing the country's history of providing weather services to the public through a civilian agency, the military agreed to transfer the research group to NASA. These scientists and engineers became one of the founding units of Goddard in 1958.

Television and Infrared Observation Satellites

These Goddard researchers were working on a project called the Television and Infrared Observation Satellite (TIROS). When it was launched on 1 April 1960, it became the world's first meteorological satellite, returning thousands of images of cloud cover and spiralling storm systems. Goddard's Explorer VI satellite had recorded some crude cloud cover images before TIROS I was launched, but the TIROS satellite was the first spacecraft dedicated to meteorological data gathering and transmitted the first really good cloud cover photographs.13
Clearly, there was a lot of potential in this new technology, and other meteorological satellites soon followed the first TIROS spacecraft. Despite its name, the first TIROS carried only television cameras. The second TIROS satellite, launched in November 1960, also included an infrared instrument, which gave it the ability to detect cloud cover even at night.

The TIROS capabilities were limited, but the satellites still provided a tremendous service in terms of weather forecasting. One of the biggest obstacles meteorologists faced was the local, "spotty" nature of the data they could obtain. Weather balloons and ocean buoys could only collect data in their immediate area. Huge sections of the globe, especially over the oceans, were dark areas where little meteorological information was available. This made forecasting a difficult task, especially for coastal areas. Sounding rockets offered the ability to take measurements at all altitudes of the atmosphere, which helped provide temperature, density and water vapor information. But sounding rockets, too, were limited in the scope of their coverage. Satellites offered the first chance to get a "big picture" perspective on weather patterns and storm systems as they travelled around the globe.

Because weather forecasting was an operational task that usually fell under the management of the Weather Bureau, there was some disagreement about who should have responsibility for designing and operating this new class of satellite. Some people at Goddard felt that NASA should take the lead, because the new technology was satellite-based. The Weather Bureau, on the other hand, was going to be paying for the satellites and wanted control over the type of spacecraft and instruments they were funding. When the dust settled, it was decided that NASA would conduct research on advanced meteorological satellite technology and would manage the building, launching and testing of operational weather satellites. The Weather Bureau would have final say over operational satellite design, however, and would take over management of spacecraft operations after the initial test phase was completed.14

The TIROS satellites continued to improve throughout the early 1960s. Although the spacecraft were officially research satellites, they also provided the Weather Bureau with a semi-operational weather satellite system from 1961 to 1965. TIROS III, launched in July 1961, detected numerous hurricanes, tropical storms, and weather fronts around the world that conventional ground networks missed or would not have seen for several more days.15 TIROS IX, launched in January 1965, was the first of the series launched into a polar orbit, rotating around the Earth in a north-south direction. This orientation allowed the satellite to cross the equator at the same time each day and provided coverage of the entire globe, including the higher latitudes and polar regions, as its orbit precessed around the Earth.

The later TIROS satellites also improved their coverage by changing the location of the spacecraft's camera. The TIROS satellites were designed like a wheel of cheese. The wheel spun around but, like a toy top or gyroscope, the axis of the wheel kept pointing in the same direction as the satellite orbited the Earth. The cameras were placed on the satellite's axis, which allowed them to take continuous pictures of the Earth only when that surface was actually facing the planet.
Like dancers doing a do-si-do, however, the surface with the cameras would be pointing parallel to or away from the Earth for more than half of the satellite's orbit. TIROS IX (and the operational TIROS satellites) put the camera on the rotating section of the wheel, which was kept facing perpendicular to the Earth throughout its orbit. This made the satellite operate more like a dancer twirling around while circling her partner. While the camera could only take pictures every few seconds, when the section of the wheel holding the camera rotated past the Earth, it could continue taking photographs throughout the satellite's entire orbit.

In 1964, Goddard took another step in developing more advanced weather satellites when it launched the first NIMBUS spacecraft. NASA had originally envisioned the larger and more sophisticated NIMBUS as the design for the Weather Bureau's operational satellites. The Weather Bureau decided that the NIMBUS spacecraft were too large and expensive, however, and opted to stay with the simpler TIROS design for the operational system. So the NIMBUS satellites were used as research vehicles to develop advanced instruments and technology for future weather satellites. Between 1964 and 1978, Goddard developed and launched a total of seven Nimbus research satellites.

In 1965, the Weather Bureau was absorbed into a new agency called the Environmental Science Services Administration (ESSA). The next year, NASA launched the first satellite in ESSA's operational weather system. The satellite was designed like the TIROS IX spacecraft and was designated "ESSA 1." As per NASA's agreement, Goddard continued to manage the building, launching and testing of ESSA's operational spacecraft, even as the Center's scientists and engineers worked to develop more advanced technology with separate research satellites.

The ESSA satellites were divided into two types. One took visual images of the Earth with an Automatic Picture Transmission (APT) camera system and transmitted them in real time to stations around the globe. The other stored its images on board and later transmitted them to a central ground station for global analysis. These first ESSA satellites were deployed in pairs in "Sun-synchronous" polar orbits around the Earth, crossing the same point at approximately the same time each day.

In 1970, Goddard launched an improved operational spacecraft for ESSA using "second generation" weather satellite technology. The Improved TIROS Operational System (ITOS), as the design was initially called, combined the functions of the previous pairs of ESSA satellites into a single spacecraft and added a day and night scanning radiometer. This improvement meant that meteorologists could get global cloud cover information every 12 hours instead of every 24 hours. Soon after ITOS 1 was launched, ESSA evolved into the National Oceanic and Atmospheric Administration (NOAA), and successive ITOS satellites were redesignated as NOAA 1, 2, 3, etc. This designation system for NOAA's polar-orbiting satellites continues to this day.

In 1978, NASA launched the first of what was called the "third generation" of polar orbiting satellites. The TIROS-N design was a much bigger, three-axis-stabilized spacecraft that incorporated much more advanced equipment. The TIROS-N series of instruments, used aboard operational NOAA satellites today, provided much more accurate sea-surface temperature information, which is necessary to predict a phenomenon like an El Nino weather pattern.
They also could identify snow and sea ice and could provide much better temperature profiles for different altitudes in the atmosphere. But while the lower-altitude polar satellites can observe some phenomena in more detail because they are relatively close to the Earth, they can't provide the continuous "big picture" information a geosynchronous satellite can offer. So for the past 25 years, NOAA has operated two weather satellite systems - the TIROS series of polar orbiting satellites at lower altitudes, and two geosynchronous satellites more than 22,300 miles above the Earth.16

While polar-orbiting satellites were an improvement over the more equatorial-orbiting TIROS satellites, scientists realized that they could get a much better perspective on weather systems from a geosynchronous spacecraft. Goddard's research teams started investigating this technology with the launch of the first Applications Technology Satellite (ATS-1) in 1966. Because the ATS had a geosynchronous orbit that kept it "parked" above one spot, meteorologists could get progressive photographs of the same area over a period of time as often as every 30 minutes. The "satellite photos" showing changes in cloud cover that we now almost take for granted during nightly newscasts are made possible by geosynchronous weather satellites. Those cloud movement images also allowed meteorologists to infer wind currents and speeds. This information is particularly useful in determining weather patterns over areas of the world such as oceans or the tropics, where conventional aircraft and balloon methods can't easily gather data. Goddard's ATS III satellite, launched in 1967, included a multi-color scanner that could provide images in color, as well. Shortly after its launch, ATS III took the first color image of the entire Earth, a photo made possible by the satellite's 22,300 mile high orbit.17

In 1974, Goddard followed its ATS work with a dedicated geosynchronous weather satellite called the Synchronous Meteorological Satellite (SMS). Both SMS-1 and SMS-2 were research prototypes, but they still provided meteorologists with practical information as they tested out new technology. In addition to providing continuous coverage of a broad area, the SMS satellites collected and relayed weather data from 10,000 automatic ground stations in six hours, giving forecasters more timely and detailed data than they had ever had before.

Goddard launched NOAA's first operational geostationary18 satellite, designated the Geostationary Operational Environmental Satellite (GOES), in October 1975. That satellite has led to a whole family of GOES spacecraft. As with previous operational satellites, Goddard managed the building, launching and testing of the GOES spacecraft. The first seven GOES spacecraft, while geostationary, were still "spinning" designs like NOAA's earlier operational ESSA satellites. In the early 1980s, however, NOAA decided that it wanted the new series of geostationary GOES spacecraft to be three-axis stabilized, as well, and to incorporate significantly more advanced instruments. In addition, NOAA decided to award a single contract directly with an industry manufacturer for the spacecraft and instruments, instead of working separate instrument and spacecraft contracts through Goddard. Goddard typically developed new instruments and technology on research satellites before putting them onto an operational spacecraft for NOAA.
The plan for GOES 8,19 however, called for incorporating new technology instruments directly into a spacecraft that was itself a new design and also had an operational mission. Meteorologists across the country were going to rely on the new instruments for accurate weather forecasting information, which put a tremendous amount of added pressure on the designers. But the contractor selected to build the instruments underestimated the cost and complexity of developing the GOES 8 instruments. In addition, Goddard's traditional "Phase B" design study, which would have generated more concrete estimates of the time and cost involved in the instrument development, was eliminated on the GOES 8 project.

The study was skipped in an attempt to save time, because NOAA was facing a potential crisis with its geostationary satellite system. NOAA wanted to have two geostationary satellites up at any given point in order to adequately cover both coasts of the country. But the GOES 5 satellite failed in 1984, leaving only one geostationary satellite, GOES 6, in operation. The early demise of GOES 4 and GOES 5 left NOAA uneasy about how long GOES 6 would last, prompting the "streamlining" efforts on the GOES 8 spacecraft design. The problem became even more serious in 1986 when the launch vehicle for the GOES G spacecraft, which would have become GOES 7, failed after launch. Another GOES satellite was successfully launched in 1987, but the GOES 6 spacecraft failed in January 1989, leaving the United States once again with only one operational geostationary weather satellite.

By 1991, when the GOES 8 project could not predict a realistic launch date, because working instruments for the spacecraft still hadn't been developed, Congress began to investigate the issue. The GOES 7 spacecraft was aging, and managers and elected officials realized that it was entirely possible that the country might soon find itself without any geostationary satellite coverage at all. To buy the time necessary to fix the GOES 8 project and alleviate concerns about coverage, NASA arranged with the Europeans to "borrow" one of their Eumetsat geostationary satellites. The satellite was allowed to "drift" further west so it sat closer to the North American coast, allowing NOAA to move the GOES 7 satellite further west.

Meanwhile, Goddard began to take a more active role in the GOES 8 project. A bigger GOES 8 project office was established at the Center and Goddard brought in some of its best instrument experts to work on the project, both at Goddard and at the contractor's facilities. Goddard, after all, had some of the best meteorological instrument-building expertise in the country. But because Goddard was not directly in charge of the instrument sub-contract, the Center had been handicapped in making that knowledge and experience available to the beleaguered contractor. The project was a sobering reminder of the difficulties that could ensue when, in an effort to save time and money, designers attempted to streamline a development project or combine research and operational functions into a single spacecraft.

But in 1994, the GOES 8 spacecraft was finally successfully launched, and the results have been impressive. Its advanced instruments performed as advertised, improving the spacecraft's focusing and atmospheric sounding abilities and significantly reducing the amount of time the satellite needed to scan any particular area.20
Earth Resources Satellites

As meteorological satellite technology developed and improved, Goddard scientists realized that the same instruments used for obtaining weather information could be used for other purposes, as well. Meteorologists could look at radiation that travelled back up from the Earth's surface to determine things like water vapor content and temperature profiles at different altitudes in the atmosphere. But those same emissions could reveal potentially valuable information about the Earth's surface, as well. Objects at a temperature above absolute zero emit radiation, many of them at precise and unique wavelengths in the electromagnetic spectrum. So by analyzing the emissions of any object, from a star or comet to a particular section of forest or farmland, scientists can learn important things about its chemical composition.

Instruments on the Nimbus spacecraft had the ability to look at reflected solar radiation from the Earth in several different wavelengths. As early as 1964, scientists began discussing the possibilities of experimenting with this technology to see what it might be able to show us about not only the atmosphere, but also resources on the Earth. The result was the Earth Resources Technology Satellite (ERTS), launched in 1972 and later given the more popular name "Landsat 1." The spacecraft was based on a Nimbus satellite, with a multi-channel radiometer to look at different wavelength bands where the reflected energy from surfaces such as forests, water, or different crops would fall.

The satellite instruments also had much better resolution than the Nimbus instruments. Each swath of the Earth covered by the Nimbus scanner was 1500 miles wide, with each pixel in the picture representing five miles. The polar-orbiting ERTS satellite instrument could focus in on a swath only 115 miles wide, with each pixel representing 80 meters. This resolution allowed scientists to view a small enough section of land, in enough detail, to conduct a worthwhile analysis of what it contained. Images from the ERTS/Landsat satellite, for example, showed scientists a 25-mile wide geological feature near Reno, Nevada that appeared to be a previously undiscovered meteor crater. Other images collected by the satellite were useful in discovering water-bearing rocks in Nebraska, Illinois and New York and determining that water pollution drifted off the Atlantic coast as a cohesive unit, instead of dissipating in the ocean currents.

The success of the ERTS satellite prompted scientists to want to explore this use of satellite technology further. They began working on instruments that could get pixel resolutions as high as five meters, but were told to discontinue that research because of national security concerns. If a civilian satellite provided data that detailed, it might allow foreign countries to find out critical information about military installations or other important targets in the U.S. This example illustrates one of the ongoing difficulties with Earth resource satellite research. The fact that the same information can be used for both scientific and practical purposes often creates complications with not only who should be responsible for the work, but how and where the information will be used. In any event, the follow-on satellite, "Landsat 2," was limited to the same levels of resolution.
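The resolution jump from Nimbus-class scanners to ERTS is easier to appreciate as raw pixel arithmetic. A short sketch using only the swath and pixel figures quoted above, plus a miles-to-meters conversion:

    MILE_M = 1609.344   # meters per statute mile

    scanners = {
        "Nimbus":       (1500 * MILE_M, 5 * MILE_M),   # (swath width, pixel size)
        "ERTS/Landsat": (115 * MILE_M, 80.0),
    }
    for name, (swath_m, pixel_m) in scanners.items():
        print(f"{name}: {swath_m / pixel_m:,.0f} pixels across a swath, "
              f"{(pixel_m / 1000.0) ** 2:,.4f} km^2 of ground per pixel")

    # Each ERTS pixel covered roughly 10,000 times less ground area than a
    # Nimbus pixel, the difference between seeing weather and seeing fields.
    ratio = (scanners["Nimbus"][1] / scanners["ERTS/Landsat"][1]) ** 2
    print(f"area per pixel, Nimbus vs. ERTS: ~{ratio:,.0f}x")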
More recent Landsat spacecraft, however, have been able to improve instrument resolution further.21 Landsat 2 was launched in January 1975 and looked at land areas for an even greater number of variables than its ERTS predecessor, integrating information from ground stations with data obtained by the satellite's instruments. Because wet land and green crops reflect solar energy at different wavelengths than dry soil or brown plants, Landsat imagery enabled researchers to look at soil moisture levels and crop health over wide areas, as well as soil temperature, stream flows, and snow depth. Its data was used by the U.S. Department of Agriculture, the U.S. Forest Service, the Department of Commerce, the Army Corps of Engineers, the Environmental Protection Agency and the Department of Interior, as well as agencies from foreign countries.22

The Landsat program clearly was a success, particularly from a scientific perspective. It proved that satellite technology could determine valuable information about precious natural resources, agricultural activity, and environmental hazards. The question was who should operate the satellites. Once the instruments were developed, the Landsat spacecraft were going to be collecting the same data, over and over, instead of exploring new areas and technology. One could argue that by examining the evolution of land resources over time, scientists were still exploring new processes and gathering new scientific information about the Earth. But that same information was being used predominantly for practical purposes of natural resource management, agricultural and urban planning, and monitoring environmental hazards. NASA had never seen its role as providing ongoing, practical information, but there was no other agency with the expertise or charter to operate land resource satellites. As a result, NASA continued to manage the building, launch, and space operation of the Landsat satellites until 1984. Processing and distribution of the satellite's data was managed by the Department of Interior, through an Earth Resources Observation System (EROS) Data Center that was built by the U.S. Geological Survey in Sioux Falls, South Dakota in 1972.

In 1979, the Carter Administration developed a new policy in which the Landsat program would be managed by NOAA and eventually turned over to the private sector. In 1984, the first Reagan Administration put that policy into effect, soliciting commercial bids for operating the system, which at that point consisted of two operational satellites. Landsat 4 had been launched in 1982 and Landsat 5 was launched in 1984. Ownership and operation of the system was officially turned over to the EOSAT Company in 1985, which sold the images to anyone who wanted them, including the government. At the same time, responsibility for overseeing the program was transferred from NASA to NOAA. Under the new program guidelines, the next spacecraft in the Landsat program, Landsat 6, would also be constructed independently by industry.

There were two big drawbacks with this move, however, as everyone soon found out. The first was that although there was something of a market for Landsat images, it was nothing like that surrounding the communication satellite industry. The EOSAT company found itself struggling to stay afloat. Prices for images jumped from the couple of hundred dollars per image that EROS had charged to $4,000 per shot, and EOSAT still found itself bordering on insolvency.
Being a private company, EOSAT also was concerned with making a profit, not archiving data for the good of science or the government. Government budgets wouldn't allow for purchasing thousands of archival images at $4,000 apiece, so the EROS Data Center only bought a few selected images each year. As a result, many of the scientific or archival benefits the system could have created were lost.

In 1992, the Land Remote Sensing Policy Act reversed the 1984 decision to commercialize the Landsat system, noting the scientific, national security, economic, and social utility of the Landsat images. Landsat 6 was launched the following year, but the spacecraft failed to reach orbit and ended up in the Indian Ocean. This launch failure was discouraging, but planning for the next Landsat satellite was already underway. Goddard had agreed to manage design of a new data ground station for the satellite, and NASA and the Department of Defense initially agreed to divide responsibility for managing the satellite development. But the Air Force subsequently pulled out of the project and, in May 1994, management of the Landsat system was turned over to NASA, the U.S. Geological Survey (USGS), and NOAA. At the same time, Goddard assumed sole management responsibility for developing Landsat 7.

The only U.S. land resource satellites in operation at the moment are still Landsat 4 and 5, which are both degrading in capability. Landsat 5, in fact, is the only satellite still able to transmit images. The redesigned Landsat 7 satellite is scheduled for launch by mid-1999, and its data will once again be made available through the upgraded EROS facilities in Sioux Falls, South Dakota. Until then, scientists, farmers and other users of land resource information have to rely on Landsat 5 images through EOSAT, or they have to turn to foreign companies for the information. The French and the Indians have both created commercial companies to sell land resource information from their satellites, but both companies are being heavily subsidized by their governments while a market for the images is developed. There is probably a viable commercial market that could be developed in the United States, as well. But it may be that the demand either needs to grow substantially on its own or would need government subsidy before a commercialization effort could succeed. The issue of scientific versus practical access to the information would also still have to be resolved.

No matter how the organization of the system is eventually structured, Landsat imagery has proven itself an extremely valuable tool for not only natural resource management but urban planning and agricultural assistance, as well. Former NASA Administrator James Fletcher even commented in 1975 that if he had one space-age development to save the world, it would be Landsat and its successor satellites.23 Without question, the Landsat technology has enabled us to learn much more about the Earth and its land-based resources. And as the population and industrial production on the planet increase, learning about the Earth and potential dangers to it has become an increasingly important priority for scientists and policy-makers alike.24

Atmospheric Research Satellites

One of the main elements scientists are trying to learn about the Earth is the composition and behavior of its atmosphere. In fact, Goddard's scientists have been investigating the dynamics of the Earth's atmosphere for scientific, as well as meteorological, purposes since the inception of the Center.
Explorers 17, 19, and 32, for example, all researched various aspects of the density, composition, pressure and temperature of the Earth's atmosphere. Explorers 51 and 54, also known as "Atmosphere Explorers," investigated the chemical processes and energy transfer mechanisms that control the atmosphere.

Another goal of Goddard's atmospheric scientists was to understand and measure what was called the "Earth Radiation Budget." Scientists knew that radiation from the Sun enters the Earth's atmosphere. Some of that energy is reflected back into space, but most of it penetrates the atmosphere to warm the surface of the Earth. The Earth, in turn, radiates energy back into space. Scientists knew that the overall radiation received and released was about equal, but they wanted to know more about the dynamics of the process and seasonal or other fluctuations that might exist. Understanding this process is important because the excesses and deficits in this "budget," as well as variations in it over time or at different locations, create the energy to drive our planet's heating and weather patterns.

The first satellite to investigate the dynamics of the Earth Radiation Budget was Explorer VII, launched in 1959. Nimbus 2 provided the first global picture of the radiation budget, showing that the amount of energy reflected by the Earth's atmosphere was lower than scientists had thought. Additional instruments on Nimbus 3, 5, and 6, as well as operational TIROS and ESSA satellites, explored the dynamics of this complex process further. In the early 1980s, researchers developed an Earth Radiation Budget Experiment (ERBE) instrument that could better analyze the short-wavelength energy received from the Sun and the longer-wavelength energy radiated into space from the Earth. This instrument was put on a special Earth Radiation Budget Satellite (ERBS) launched in 1984, as well as the NOAA-9 and NOAA-10 weather satellites.

This instrument has provided scientists with information on how different kinds of clouds affect the amount of energy trapped in the Earth's atmosphere. Lower, thicker clouds, for example, reflect a portion of the Sun's energy back into space, creating a cooling effect on the surface and atmosphere of the Earth. High, thin cirrus clouds, on the other hand, let the Sun's energy in but trap some of the Earth's outgoing infrared radiation, reflecting it back to the ground. As a result, they can have a warming effect on the Earth's atmosphere. This warming effect can, in turn, create more evaporation, leading to more moisture in the air. This moisture can trap even more radiation in the atmosphere, creating a warming cycle that could influence the long-term climate of the Earth.

Because clouds and atmospheric water vapor seem to play a significant role in the radiation budget of the Earth as well as the amount of global warming and climate change that may occur over the next century, scientists are attempting to find out more about the convection cycle that transports water vapor into the atmosphere. In 1997, Goddard launched the Tropical Rainfall Measuring Mission (TRMM) satellite into a near-equatorial orbit to look more closely at the convection cycle in the tropics that powers much of the rest of the world's cloud and weather patterns. The TRMM satellite's Clouds and the Earth's Radiant Energy System (CERES) instrument, built by NASA's Langley Research Center, is an improved version of the earlier ERBE experiment.
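The "budget" these instruments measure can be summarized in a single balance: averaged over the globe, absorbed sunlight must equal the thermal energy the Earth radiates back to space. A worked version with standard textbook values (illustrative constants, not ERBE measurements):

    SOLAR_CONSTANT = 1361.0    # sunlight arriving at Earth's distance, W/m^2
    ALBEDO = 0.30              # fraction reflected straight back to space
    SIGMA = 5.670374419e-8     # Stefan-Boltzmann constant, W/m^2/K^4

    # Average over the whole sphere: a sphere has 4x the surface area of the
    # disk that actually intercepts the sunlight, hence the factor of 4.
    absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0

    # Balance absorbed sunlight against thermal emission, sigma * T^4.
    t_effective = (absorbed / SIGMA) ** 0.25
    print(f"absorbed flux: {absorbed:.0f} W/m^2")          # ~238 W/m^2
    print(f"effective temperature: {t_effective:.0f} K")   # ~255 K

    # The gap between this ~255 K and the real surface average (~288 K) is
    # the greenhouse effect, which is why cloud and water vapor budgets matter.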
While TRMM's focus is on convection and rainfall in the lower atmosphere, some of that moisture does get transported into the upper atmosphere, where it can play a role in changing the Earth's radiation budget and overall climate.25

An even greater amount of atmospheric research, however, has been focused on a once little-known chemical compound of three oxygen atoms called ozone. Ozone, as most Americans now know, is a chemical in the upper atmosphere that blocks incoming ultraviolet rays from the Sun, protecting us from skin cancer and other harmful effects caused by ultraviolet radiation. The ozone layer was first brought into the spotlight in the 1960s, when designers began working on the proposed Supersonic Transport (SST). Some scientists and environmentalists were concerned that the jet's high-altitude emissions might damage the ozone layer, and the federal government funded several research studies to evaluate the risk. The cancellation of the SST in 1971 shelved the issue, at least temporarily, but two years later a much greater potential threat emerged. In 1973, two researchers at the University of California, Irvine came up with the astounding theory that certain man-made chemicals, called chlorofluorocarbons (CFCs), could damage the atmosphere's ozone layer. These chemicals were widely used in everything from hair spray to air conditioning systems, which meant that the world might have a dangerously serious problem on its hands.

In 1975, Congress directed NASA to develop a "comprehensive program of research, technology and monitoring of phenomena of the upper atmosphere" to evaluate the potential risk of ozone damage further. NASA was already conducting atmospheric research, but the Congressional mandate supported even wider efforts. NASA was not the only organization looking into the problem, either. Researchers around the world began focusing on learning more about the chemistry of the upper atmosphere and the behavior of the ozone layer.

Goddard's Nimbus IV research satellite, launched in 1970, already had an instrument on it to analyze ultraviolet rays that were "backscattered," or reflected, from different altitudes in the Earth's atmosphere. Different wavelengths of UV radiation should be absorbed by the ozone at different levels in the atmosphere. So by analyzing how much UV radiation was still present in different wavelengths, researchers could develop a profile of how thick or thin the ozone layer was at different altitudes and locations. In 1978, Goddard launched the last and most capable of its Nimbus-series satellites. Nimbus 7 carried an improved version of this experiment, called the Solar Backscatter Ultraviolet (SBUV) instrument. It also carried a new sensor called the Total Ozone Mapping Spectrometer (TOMS). As opposed to the SBUV, which provided a vertical profile of ozone in the atmosphere, the TOMS instrument generated a high-density map of the total amount of ozone in the atmosphere.

A similar instrument, called the SBUV-2, has been put on weather satellites since the early 1980s. For a number of years, the Space Shuttle periodically flew a Goddard instrument called the Shuttle Solar Backscatter Ultraviolet (SSBUV) experiment that was used to calibrate the SBUV-2 satellite instruments to insure the readings continued to be accurate. In the last couple of years, however, scientists have developed data-processing methods of calibrating the instruments, eliminating the need for the Shuttle experiments.
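The backscatter technique works because ozone absorbs some ultraviolet wavelengths far more strongly than others. A toy Beer-Lambert model makes the idea concrete; the cross-sections below are rough order-of-magnitude values chosen for illustration, not instrument calibration numbers:

    import math

    DU_TO_MOLEC_CM2 = 2.687e16         # one Dobson Unit, molecules per cm^2
    column = 300.0 * DU_TO_MOLEC_CM2   # a typical total ozone column

    # Rough ozone absorption cross-sections (cm^2); the steep wavelength
    # dependence is the physical basis of the SBUV/TOMS technique.
    cross_sections = {"305 nm": 3e-19, "312 nm": 1e-19, "320 nm": 3e-20}

    for wavelength, sigma in cross_sections.items():
        tau = sigma * column           # optical depth along the path
        print(f"{wavelength}: optical depth {tau:4.2f}, "
              f"transmitted fraction {math.exp(-tau):5.1%}")
    # Comparing backscattered intensities across wavelengths like these lets
    # an instrument infer how much ozone the sunlight has passed through.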
Yet it was actually not a NASA satellite that discovered the "hole" that finally developed in the ozone layer. In May 1985, a British researcher in Antarctica published a paper announcing that he had detected an astounding 40% loss in the ozone layer over Antarctica the previous winter. When Goddard researchers went back and looked at their TOMS data from that time period, they discovered that the data indicated the exact same phenomenon. Indeed, the satellite indicated an area of ozone layer thinning, or "hole,"26 the size of the Continental U.S.

How had researchers missed a development that drastic? Ironically enough, it was because the anomaly was so drastic. The TOMS data analysis software had been programmed to flag grossly anomalous data points, which were assumed to be errors. Nobody had expected the ozone loss to be as great as it was, so the data points over the area where the loss had occurred looked like problems with the instrument or its calibration.

Once the Nimbus 7 data was verified, Goddard's researchers generated a visual map of the area over Antarctica where the ozone loss had occurred. In fact, the ability to generate visual images of the ozone layer and its "holes" has been among the significant contributions NASA's ozone-related satellites have made to the public debate over the issue. Data points are hard for most people to fully understand. But for non-scientists, a visual image showing a gap in a protective layer over Antarctica or North America makes the problem not only clear, but somehow very real.

The problem then became determining what was causing the loss of ozone. The problem was a particularly sticky one, because it was going to relate directly to legislation and restrictions that would be extremely costly for industry. By 1978, the Environmental Protection Agency (EPA) had already moved to ban the use of CFCs in aerosols. By 1985, the United Nations Environmental Program (UNEP) was calling on nations to take measures to protect the ozone and, in 1987, forty-three nations signed the "Montreal Protocol," agreeing to cut CFC production 50% by the year 2000.

The CFC theory was based on a prediction that chlorofluorocarbons, when they reached the upper atmosphere, released chlorine and fluorine. The chlorine, it was suspected, was reacting with the ozone to form chlorine monoxide - a chemical that is able to destroy a large amount of ozone in a very short period of time. Because the issue was the subject of so much debate, NASA launched numerous research efforts to try to validate or disprove the theory. In addition to satellite observations, NASA sent teams of researchers and aircraft to Antarctica to take in situ readings of the ozone layer and the ozone "hole" itself. These findings were then supplemented with the bigger picture perspective the TOMS and SBUV instruments could provide.

The TOMS instrument on Nimbus 7 was not supposed to last more than a couple of years. But the information it was providing was considered so critical to the debate that Goddard researchers undertook an enormous effort to keep the instrument working, even as it aged and began to degrade. The TOMS instrument also hadn't been designed to show long-term trends, so the data processing techniques had to be significantly improved to give researchers that kind of information.
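The software lesson buried in the ozone-hole episode generalizes well beyond TOMS: any automated screen built around "plausible" values will silently discard a real event that falls outside them. A schematic of the trap, with invented numbers rather than actual TOMS data or the actual TOMS code:

    # Synthetic illustration of naive quality control on ozone readings.
    CLIMATOLOGY_DU = 300.0    # expected total ozone, Dobson Units
    TOLERANCE_DU = 60.0       # anything further out is deemed "implausible"

    def screen(readings):
        """Naive quality control: reject values far from the expected one."""
        kept, flagged = [], []
        for value in readings:
            (kept if abs(value - CLIMATOLOGY_DU) <= TOLERANCE_DU
             else flagged).append(value)
        return kept, flagged

    # A scan crossing a genuine, drastic ozone depletion...
    october_scan = [295, 288, 180, 172, 165, 290, 301]
    kept, flagged = screen(october_scan)
    print("kept:   ", kept)      # the unremarkable readings
    print("flagged:", flagged)   # the real signal, discarded as "errors"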
In the end, Goddard was able to keep the Nimbus 7 TOMS instrument operating for almost 15 years, which provided ozone monitoring until Goddard was able to launch a replacement TOMS instrument on a Russian satellite in 1991.27

A more comprehensive project to study the upper atmosphere and the ozone layer was launched in 1991, as well. The satellite, called the Upper Atmosphere Research Satellite (UARS), was one of the results of Congress's 1975 mandate for NASA to pursue additional ozone research. Although its goal is to try to understand the chemistry and dynamics of the upper atmosphere, the focus of UARS is clearly on ozone research. Original plans called for the spacecraft to be launched from the Shuttle in the mid-1980s, but the backlog that followed the Challenger explosion delayed its launch until 1991. Once in orbit, however, the more advanced instruments on board the UARS satellite were able to map chlorine monoxide levels in the stratosphere. Within months, the satellite was able to confirm what the Antarctic aircraft expeditions and Nimbus 7 satellite had already reported - that there was a clear and causal link between levels of chlorine, formation of chlorine monoxide, and levels of ozone loss in the upper atmosphere.

Since the launch of UARS, the TOMS instrument has been put on several additional satellites to insure that we have a continuing ability to monitor changes in the ozone layer. A Russian satellite called Meteor 3 took measurements with a TOMS instrument from 1991 until the satellite ceased operating in 1994. The TOMS instrument was also incorporated into a Japanese satellite called the Advanced Earth Observing System (ADEOS) that was launched in 1996. ADEOS, which researchers hoped could provide TOMS coverage until the next scheduled TOMS instrument launch in 1999, failed after less than a year in orbit. But fortunately, Goddard had another TOMS instrument ready for launch on a small NASA satellite called an Earth Probe, which was put into orbit with the Pegasus launch vehicle in 1996. Researchers hope that this instrument will continue to provide coverage and data until the next scheduled TOMS instrument launch.

All of these satellites have given us a much clearer picture of what the ozone layer is, how it interacts with various other chemicals, and what causes it to deteriorate. These pieces of information are essential elements for us to have if we want to figure out how best to protect what is arguably one of our most precious natural resources. Using the UARS satellite, scientists have been able to track the progress of CFCs up into the stratosphere and have detected the build-up of chlorine monoxide over North America and the Arctic as well as Antarctica. Scientists also have discovered that ozone loss is much greater when the temperature of the stratosphere is cold. In 1997, for example, particularly cold stratospheric temperatures created the first Antarctic-type of ozone hole over North America.

Another factor in ozone loss is the level of aerosols, or particulate matter, in the upper atmosphere. The vast majority of aerosols come from soot, other pollution, or volcanic activity, and Goddard's scientists have been studying the effects of these particles in the atmosphere ever since the launch of the Nimbus I spacecraft in 1964. Goddard's 1984 Earth Radiation Budget Satellite (ERBS), which is still operational, carries a Stratospheric Aerosol and Gas Experiment (SAGE II) that tracks aerosol levels in the lower and upper atmosphere.
The Halogen Occultation Experiment (HALOE) instrument on UARS also measures aerosol intensity and distribution. In 1991, both UARS and SAGE II were used to track the movement and dispersal of the massive aerosol cloud created by the Mt. Pinatubo volcano eruption in the Philippines. The eruption caused stratospheric aerosol levels to increase to as much as 100 times their pre-eruption levels, creating spectacular sunsets around the world but causing some other effects, as well. These volcanic clouds appear to help cool the Earth, which could affect global warming trends, but the aerosols in these clouds seem to increase the amount of ozone loss in the stratosphere, as well.

The good news is, the atmosphere seems to be beginning to heal itself. In 1979 there was no ozone hole. Throughout the 1980s, while legislative and policy debates raged over the issue, the hole developed and grew steadily larger. In 1989, most U.S. companies finally ceased production of CFC chemicals and, in 1990, the U.N. strengthened its Montreal Protocol to call for the complete phaseout of CFCs by the year 2000. Nature is slow to react to changes in our behavior but, by 1997, scientists finally began to see a levelling out and even a slight decrease in chlorine monoxide levels and ozone loss in the upper atmosphere.28

Continued public interest in this topic has made ozone research a little more complicated for the scientists involved. Priorities and pressures in the program have changed along with Presidential administrations and Congressional agendas and, as much as scientists can argue that data is simply data, they cannot hope to please everyone in such a politically charged arena. Some environmentalists argue that the problem is much worse than NASA is making it out to be, while more conservative politicians have argued that NASA's scientists are blowing the issue out of proportion.29

But at this point a few things are clear. The production of CFC chemicals was, in fact, harming a critical component of our planet's atmosphere. It took a variety of ground and space instruments to detect and map the nature and extent of the problem. But the perspective offered by Goddard's satellites allowed scientists and the general public to get a clear overview of the problem and map the progression of events that caused it. This information has had a direct impact on changing the world's industrial practices which, in turn, have begun to slow the damage and allow the planet to heal itself. The practical implications of Earth-oriented satellite data may make life a little more complicated for the scientists involved, but no one can argue the significance or impact of the work. By developing the technology to view and analyze the Earth from space, we have given ourselves an invaluable tool for helping us understand and protect the planet on which we live.

One of the biggest advantages to remote sensing of the Earth from satellites stems from the fact that the majority of the Earth's surface area is extremely difficult to study from the ground. The world's oceans cover 71% of the Earth's surface and comprise 99% of its living area. Atmospheric convective activity over the tropical ocean area is believed to drive a significant amount of the world's weather. Yet until recently, the only way to map or analyze this powerful planetary element was with buoys, ships or aircraft. But these methods could only obtain data from various individual points, and the process was extremely difficult, expensive, and time-consuming.
Satellites, therefore, offered oceanographers a tremendous advantage. A two-minute ocean color satellite image, for example, contains more measurements than a ship travelling 10 knots could make in a decade. This ability has allowed scientists to learn a lot more about the vast open stretches of ocean that influence our weather, our global climate, and our everyday lives.30

Although Goddard's early meteorological satellites were not geared specifically toward analyzing ocean characteristics, some of the instruments could provide information about the ocean as well as the atmosphere. The passive microwave sensors that allowed scientists to "see" through clouds better, for example, also let them map the distribution of sea ice around the world. Changes in sea ice distribution can indicate climate changes and affect sea levels around the world, which makes this an important parameter to monitor. At the same time, this information also has allowed scientists to locate open passageways for ships trying to get through the moving ice floes of the Arctic region. By 1970, NOAA weather satellites also had instruments that could measure the temperature of the ocean surface in areas where there was no cloud cover, and the Landsat satellites could provide some information on snow and ice distributions. But since the late 1970s, much more sophisticated ocean-sensing satellite technology has emerged.31

The Nimbus 7 satellite, for example, carried an improved microwave instrument that could generate a much more detailed picture of sea ice distribution than either the earlier Nimbus or Landsat satellites. Nimbus 7 also carried the first Coastal Zone Color Scanner (CZCS), which allowed scientists to map pollutants and sediment near coastlines. The CZCS also showed the location of ocean phytoplankton around the world. Phytoplankton are tiny, carbon dioxide-absorbing plants that constitute the lowest rung on the ocean food chain. So phytoplankton generally mark spots where larger fish may be found. But because they bloom where nutrient-rich water from the deep ocean comes up near the surface, their presence also gives scientists clues about the ocean's currents and circulation. Nimbus 7 continued to send back ocean color information until 1984.

Scientists at Goddard continued working on ocean color sensor development throughout the 1980s, and a more advanced coastal zone ocean color instrument was launched on the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite in 1997. In contrast to most scientific satellites, SeaWiFS was funded and launched by a private company instead of by NASA. Most of the ocean color data the satellite provides is purchased by NASA and other research institutions, but the company is selling some data to the fishing industry, as well.32

Since the launch of the Nimbus 7 and TIROS-N satellites in 1978, scientists have also been able to get much better information on global ocean surface temperatures. Sea surface temperatures tell scientists about ocean circulation, because they can use the temperature information to track the movement of warmer and cooler bodies of water. Changes in sea surface temperatures can also indicate the development of phenomena such as El Nino climate patterns. In fact, one of the most marked indications of a developing El Nino condition, which can cause heavy rains in some parts of the world and devastating drought in others, is an unusually warm tongue of water moving eastward from the western equatorial Pacific Ocean.
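In practice, spotting that warm tongue amounts to comparing current sea surface temperatures against a long-term average for the same region and season. A minimal sketch with invented numbers; a sustained anomaly of roughly half a degree Celsius over an equatorial Pacific box is a commonly used warning threshold:

    # Invented numbers standing in for one month of gridded satellite SSTs
    # over an equatorial Pacific box, compared against long-term means.
    climatology_c = [26.1, 26.3, 26.0, 25.8, 26.2]   # long-term mean, per cell
    current_c     = [27.4, 27.9, 27.5, 27.0, 27.8]   # this month's retrievals

    anomalies = [now - mean for now, mean in zip(current_c, climatology_c)]
    mean_anomaly = sum(anomalies) / len(anomalies)
    print(f"mean SST anomaly: {mean_anomaly:+.2f} C")   # +1.44 C here

    # A sustained anomaly of roughly +0.5 C over such a box is a common
    # operational warning sign of developing El Nino conditions.
    if mean_anomaly >= 0.5:
        print("warm-tongue signature detected")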
NOAA weather satellites have carried instruments to measure sea surface temperature since 1981, and NASA's EOS AM-1 satellite, scheduled for launch in 1999, incorporates an instrument that can measure those temperatures with even more precision.

The launch of Nimbus 7 also gave researchers the ability to look at surface winds, which help drive ocean circulation. With Nimbus 7, however, scientists had to infer surface winds by looking at slight differentiations in microwave emissions coming from the ocean surface. A scatterometer designed specifically to measure surface winds was not launched until the Europeans launched ERS-1 in 1991. Another scatterometer was launched on the Japanese ADEOS spacecraft in 1996. Because ADEOS failed less than a year after launch, Goddard researchers have begun an intensive effort to launch another scatterometer, called QuickSCAT, on a NASA spacecraft. JPL project managers are being aided in this effort by the Goddard-developed Rapid Spacecraft Procurement Initiative, which will allow them to incorporate the instrument into an existing small spacecraft design. Using this streamlined process, scientists hope to have QuickSCAT in orbit by the end of 1998.33

In the 1970s, researchers at the Wallops Flight Facility also began experimenting with radar altimetry to determine sea surface height, although they were pleased if they could get accuracy within a meter. In 1992, however, a joint satellite project between NASA and the French Centre National d'Etudes Spatiales (CNES) called TOPEX/Poseidon put a much more accurate radar altimeter into orbit. Goddard managed the development of the TOPEX radar altimeter, which can measure sea surface height within a few centimeters.

In addition to offering useful information for maritime weather reports, this sea level data tells scientists some important things about ocean movement. For one thing, sea surface height indicates the build-up of water in one area of the world or another. One of the very first precursors to an El Nino condition, for example, is a rise in ocean levels in the western equatorial Pacific, caused by stronger-than-normal easterly trade winds. Sea level also tells scientists important information about the amount of heat the ocean is storing. If the sea level in a particular area is low, it means that the area of warm, upper-level water is shallow. This means that colder, deeper water can reach the surface there, driving ocean circulation and bringing nutrients up from below, leading to the production of phytoplankton. The upwelling of cold water will also cool down the sea surface temperature, reducing the amount of water that evaporates into the atmosphere.

All of these improvements in satellite capabilities gave oceanographers and scientists an opportunity to integrate on-site surface measurements from buoys or ships with the more global perspective available from space. As a result, we are finally beginning to piece together a more complete picture of our oceans and the role they play in the Earth's biosystems and climate. In fact, one of the most significant results of ocean-oriented satellite research was the realization that ocean and atmospheric processes were intimately linked to each other. To really understand the dynamics of the ocean or the atmosphere, we needed to look at the combined global system they comprised.34

El Nino and Global Change

The main catalyst that prompted scientists to start looking at the oceans and atmosphere as an integrated system was the El Nino event of 1982-83.
The rains and drought associated with the unusual weather pattern caused eight billion dollars of damage, leading to several international research programs to try to understand and predict the phenomenon better. The research efforts included measurements by ships, aircraft, ocean buoys, and satellites, and the work is continuing today. But by 1996, scientists had begun to understand the warning signals and patterns of a strong El Nino event. They also had the technology to track atmospheric wind currents and cloud formation, ocean color, sea surface temperatures, sea surface levels and sea surface winds, which let them accurately predict the heavy rains and severe droughts that occurred at points around the world throughout the 1997-98 winter.

The reason the 1982-83 El Nino prompted a change to a more integrated ocean-atmospheric approach is that the El Nino phenomenon does not exist in the ocean or the atmosphere by itself. It's the coupled interactions between the two elements that cause this periodic weather pattern to occur. The term El Nino, which means "The Child," was coined by fishermen on the Pacific coast of Central America who noticed a warming of their coastal ocean waters, along with a decline in fish population, near the Christ Child's birthday in December. But as scientists have discovered, the sequence of events that causes that warming begins many months earlier, in winds headed the opposite direction.

In a normal year, strong easterly trade winds blowing near the equator drag warmer, upper-level ocean water to the western edge of the Pacific Ocean. That build-up of warm water causes convection up into the tropical atmosphere, leading to rainfall along the Indonesian and Australian coastlines. It also leads to upwelling of colder, nutrient-rich water along the eastern equatorial Pacific coastlines, along Central and South America.

In an El Nino year, however, a period of stronger-than-normal trade winds that significantly raises sea levels in the western Pacific is followed by a sharp drop in those winds. The unusually weak trade winds allow the large build-up of warm water in the western tropical Pacific to flow eastward along the equator. That change moves the convection and rainfall off the Indonesian and Australian coasts, causing severe drought in those areas and, as the warm water reaches the eastern edge of the Pacific Ocean, much heavier than normal rainfall occurs along the western coastlines of North, Central, and South America. The movement of warm water toward the eastern Pacific also keeps the colder ocean water from coming up to the surface, keeping phytoplankton from growing and reducing the presence of fish further up on the food chain.

In other words, an El Nino is the result of a change in atmospheric winds, which causes a change in ocean currents and sea level distribution, which causes a change in sea surface temperature, which causes a change in water vapor entering the atmosphere, which causes further changes in the wind currents, and so on, creating a cyclical pattern. Scientists still don't know exactly what causes the initial change in atmospheric winds, but they now realize that they need to look at a global system of water, land and air interactions in order to find the answer. And satellites play a critical role in being able to do that.
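The self-reinforcing, delayed chain of cause and effect just described is often illustrated with the "delayed oscillator" conceptual model of El Nino, usually credited to Suarez and Schopf (1988): a local warming feedback competes with a delayed negative feedback carried by slow ocean waves. The sketch below is a minimal illustration of that textbook idea, with arbitrary nondimensional parameters; it is not the model these researchers used:

```python
import numpy as np

# Minimal "delayed oscillator" sketch: dT/dt = T - T^3 - alpha * T(t - delta).
# T is a sea surface temperature anomaly; the delayed term stands in for the
# slow ocean-wave feedback. Parameters are illustrative, not fitted to data.
alpha, delta = 0.75, 6.0       # delayed-feedback strength and delay
dt, steps = 0.05, 6000
lag = int(delta / dt)

T = np.zeros(steps)
T[:lag + 1] = 0.1              # start from a small warm anomaly
for i in range(lag, steps - 1):
    dT = T[i] - T[i]**3 - alpha * T[i - lag]
    T[i + 1] = T[i] + dt * dT

# With these parameters the anomaly should neither blow up nor die out, but
# settle into a sustained warm/cold oscillation: a cartoon of the cycle.
print(f"late-time anomaly range: {T[steps//2:].min():+.2f} to {T[steps//2:].max():+.2f}")
```

The point of the cartoon is that a cyclical pattern needs no external pacemaker: a growth term plus a sufficiently delayed negative feedback is enough to produce repeating warm and cold phases.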
An El Nino weather pattern is the biggest short-term "coupled" atmospheric and oceanographic climate signal on the planet after the change in seasons, which is why it prompted researchers to take a more interdisciplinary approach to studying it. But scientists are beginning to realize that many of the Earth's climatic changes or phenomena are really coupled events that require a broader approach in order to understand. In fact, the 1990s have seen the emergence of a new type of scientist who is neither oceanographer nor atmospheric specialist, but is an amphibious kind of researcher focusing on the broader issue of climate change.35

One of the other important topics these researchers are currently trying to assess is the issue of global warming. Back in 1896, a Swedish chemist named Svante Arrhenius predicted that the increasing carbon dioxide emissions from the industrial revolution would eventually cause the Earth to become several degrees warmer. The reason for this warming was due to what has become known as the "greenhouse effect." In essence, carbon dioxide and other "greenhouse gases," such as water vapor, allow the short-wavelength radiation from the Sun to pass through the atmosphere, warming the Earth. But the gases absorb the longer-wavelength energy travelling back from the Earth into space, radiating part of that energy back down to the Earth again. Just as the glass in a greenhouse allows the Sun through but traps the heat inside, these gases end up trapping a certain amount of heat in the Earth's atmosphere, causing the Earth to become warmer.

The effect of this warming could be small or great, depending on how much the temperature actually changes. If it is only a degree or two, the effect would be relatively small. But a larger change in climate could melt polar ice, causing the sea level to rise several feet and wiping out numerous coastal communities and resources. If the warming happened rapidly, vegetation might not have time to adjust to the climate change, which could affect the world's food supply as well as timber and other natural resources.

The critical question, then, is how great a danger global warming is. And the answer to that is dependent on numerous factors. One, obviously, is the amount of carbon dioxide and other emissions we put into the air - a concern that has driven efforts to reduce our carbon dioxide-producing fossil fuel consumption. But the amount of carbon dioxide in the air is also dependent on how much can be absorbed again by plant life on Earth - a figure that scientists depend on satellites in order to compute. Landsat images can tell scientists how much deforestation is occurring around the world, and how much healthy plant life remains to absorb CO2. Until recently, however, the amount of CO2 absorbed by the world's oceans was unknown. The ocean color images of SeaWiFS are helping to fill that gap, because the phytoplankton it tracks are a major source of carbon dioxide absorption in the oceans.

Another part of the global warming equation is how much water vapor is in the atmosphere - a factor that is driven by ocean processes, especially in the heat furnace of the tropics. As a result, scientists are trying to learn more about the transfer of heat and water vapor between the ocean and different levels of the atmosphere, using tools such as Goddard's TRMM and UARS satellites.
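The greenhouse mechanism described above can be made quantitative with a textbook zero-dimensional energy balance: sunlight warms the surface, and a single "gray" atmospheric layer absorbs a fraction of the outgoing infrared and re-radiates half of it back down. This is a standard classroom sketch, not one of the research models discussed below, and the emissivity value is an illustrative assumption chosen to reproduce the familiar 33-degree greenhouse warming:

```python
# Zero-dimensional energy balance with a single gray atmospheric layer.
# Standard textbook sketch; the emissivity eps is an illustrative assumption.
S     = 1361.0    # solar constant, W/m^2
a     = 0.30      # planetary albedo
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

# With no greenhouse absorption, the surface radiates straight to space:
T_bare = (S * (1 - a) / (4 * sigma)) ** 0.25   # ~255 K, well below freezing

# A layer absorbing a fraction eps of outgoing infrared and re-emitting
# half of it downward raises the surface temperature to Te*(2/(2-eps))^0.25:
eps = 0.78                                     # assumed infrared emissivity
T_surface = T_bare * (2 / (2 - eps)) ** 0.25   # ~288 K, close to Earth's mean

print(f"no greenhouse: {T_bare:.0f} K, with gray layer: {T_surface:.0f} K")
```

The gap between the two numbers, roughly 33 degrees, is the natural greenhouse effect; the policy question is how much additional warming comes from pushing the infrared absorption higher.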
All of these numbers and factors are fed into atmospheric and global computer models, many of which have been developed at the Goddard Institute for Space Studies (GISS) in New York City. These models then try to predict how our global climate may change based on current emissions, population trends, and known facts about ocean and atmospheric processes. While these models have been successful in predicting short-term effects, such as the global temperature drop after the Mt. Pinatubo volcano eruption, the problem with trying to predict global change is that it's a very long-term process, with many factors that may change over time. We have only been studying the Earth in bits and pieces, and for only a short span of years. In order to really understand which climate changes are short-term variations and which ones are longer trends of more permanent change, scientists needed to observe and measure the global, integrated climate systems of Planet Earth over a long period of time. This realization was the impetus for NASA's Mission to Planet Earth, or the Earth Science Enterprise.36

Earth Science Enterprise

In some senses, the origins of what became NASA's "Mission to Planet Earth" (MTPE) began in the late 1970s, when we began studying the overall climate and planetary processes of other planets in our solar system. Scientists began to realize that we had never taken that kind of "big picture" look at our own planet, and that such an effort might yield some important and fascinating results. But an even larger spur to the effort was simply the development of knowledge and technology that gave scientists both the capability and an understanding of the importance of looking at the Earth from a more global, systems perspective. Discussions along these lines were already underway when the El Nino event of 1982-83 and the discovery of the ozone "hole" in 1985 elevated the level of interest and support for global climate change research to an almost crisis level.

Although the "Mission to Planet Earth" was not announced as a formal new NASA program until 1990, work on the satellites to perform the mission was underway before that. In 1991, Goddard's UARS satellite became the first official MTPE spacecraft to be launched. Although the program has now changed its name to the Earth Science Enterprise, suffered several budget cuts, and narrowed its focus from overall global change to global climate change (leaving out changes in solid land masses), the basic goal of the program echoes what was initiated in 1990.

In essence, the Earth Science Enterprise aims to integrate satellite, aircraft and ground-based instruments to monitor 24 interrelated processes and parameters in the planet's oceans and atmosphere over a 15-year period. Phase I of the program consisted of integrating information from satellites such as UARS, the TOMS Earth Probe, TRMM, TOPEX/Poseidon, ADEOS and SeaWiFS with Space Shuttle research payloads, research aircraft and ground station observations. Phase II is scheduled to begin in 1999 with the launch of Landsat 7 and the first in a series of Earth Observing System (EOS) satellites. The EOS spacecraft are extremely large research platforms with many different instruments to look at various atmospheric and ocean processes that affect natural resources and the overall global climate. They will be polar-orbiting satellites, with orbital paths that will allow the different satellites to take measurements at different times of the day.
EOS AM-1 is scheduled for launch in late 1998. EOS PM-1 is scheduled for launch around the year 2000. The first in an EOS altimetry series of satellites, which will study the role of oceans, ocean winds and ocean-atmosphere interactions in climate systems, will launch in 2000. An EOS CHEM-1 satellite, which will look at the behavior of ozone and greenhouse gases, measure pollution and the effect of aerosols on global climate, is scheduled for launch in 2002. Follow-on missions will continue the work of these initial observation satellites over a 15-year period.

There is still much we don't know about our own planet. Indeed, the first priority of the Earth Science Enterprise satellites is simply to try to fill in the gaps in what we know about the behavior and dynamics of our oceans and our atmosphere. Then scientists can begin to look at how those elements interact, and what impact they have and will have on global climate and climate change. Only then will we really know how great a danger global warming is, or how much our planet can absorb the man-made elements we are creating in greater and greater amounts.37

It's an ambitious task. But until the advent of satellite technology, the job would have been impossible to even imagine undertaking. Satellites have given us the ability to map and study large sections of the planet that would be difficult to cover from the planet's surface. Surface and aircraft measurements also play a critical role in these studies. But satellites were the breakthrough that gave us the unique ability to stand back far enough from the trees to see the complete and complex forest in which we live.

For centuries, humankind has stared at the stars and dreamed of travelling among them. We imagined ourselves zipping through asteroid fields, transfixed by spectacular sights of meteors, stars, and distant galaxies. Yet when the astronauts first left the planet, they were surprised to find themselves transfixed not by distant stars, but by the awe-inspiring view their spaceship gave them of the place they had just left - a dazzling, mysterious planet they affectionately nicknamed the "Big Blue Marble." As our horizons expanded into the universe, so did our perspective and understanding of the place we call home. As an astronaut on an international Space Shuttle crew put it, "The first day or so we all pointed to our countries. The third or fourth day we were pointing to our continents. By the fifth day we were aware of only one Earth."38

Satellites have given this perspective to all of us, expanding our horizons and deepening our understanding of the planet we inhabit. If the world is suddenly a smaller place, with cellular phones, paging systems, and Internet service connecting friends from distant lands, it's because satellites have advanced our communication abilities far beyond anything Alexander Graham Bell ever imagined. If we have more than a few hours' notice of hurricanes or storm fronts, it's because weather satellites have enabled meteorologists to better understand the dynamics of weather systems and track those systems as they develop around the world. If we can detect and correct damage to our ozone layer or give advance warning of a strong El Nino winter, it's because satellites have helped scientists better understand the changing dynamics of our atmosphere and our oceans. We now understand that our individual "homes" are affected by events on the far side of the globe.
From both a climatic and environmental perspective, we have realized that our home is indeed "one Earth," and we need to look at its entirety in order to understand and protect it. The practical implications of this information sometimes make the scientific pursuit of this understanding more complicated than our explorations into the deeper universe. But no one would dispute the inherent worth of the information or the advantages satellites offer.

The satellites developed by Goddard and its many partners have expanded both our capabilities and our understanding of the complex processes within our Earth's atmosphere. Those efforts may be slightly less mind-bending than our search for space-time anomalies or unexplainable black holes, but they are perhaps even more important. After all, there may be millions of galaxies in the universe. But until we find a way to reach them, this planet is the only one we have. And the better we understand it, the better our chances are of preserving it - not only for ourselves, but for the generations to come.
http://history.nasa.gov/SP-4312/ch5.htm
4.21875
The first settlement in Virginia, at Jamestown, was undertaken in 1607 by the Virginia Company of London. (Thus, like New Amsterdam and the Massachusetts Bay Company, Virginia began life as a business operation.) The company was reorganized and rechartered in 1618, but in 1624 the charter was revoked, and Virginia became a royal colony for the remainder of the colonial period.

The county system in Virginia was not created all at once but grew in a series of stages from about 1618 to 1642. The Virginia Company in 1618 ordered the organization of the colony into four jurisdictions known as cities or boroughs. Based on this beginning, various courts were established, which formed the basis for the full-blown counties. By the end of this period, nine counties existed from which all other Virginia counties were descended: Charles City, Elizabeth City, Henrico, Isle of Wight, James City, Northampton, Northumberland, Warwick, and York.

In Virginia, the principal method for transferring land from the colony to individuals for the first century or so of the colony’s existence was the headright system, in which land was awarded for each person transported into the colony. A single grant of land might be based on a head of household, the remaining members of the household, and a number of servants, and so might run to many hundreds of acres. (Less important were grants of land based on military service. Later in the history of the colony, rights to acquire land could be purchased.) After the Crown took over Virginia in 1624, these transfers of land were referred to as “crown grants,” except for the land distributed after 1649 by the Northern Neck Proprietors, which became known as “proprietary grants.” This separate proprietary land office was established by Charles II in that year as part of his political maneuvering in his attempts to regain the throne.

The actual process of making the grant of land was divided into four steps. First, the person or group of persons wishing to exercise a headright or a military right submitted to the land office a petition that stated the number of acres requested and described the location of the land desired. In the case of headright grants, the petition also included the names of the persons upon whom the claim was based. The land office then issued a warrant, authorizing a surveyor to lay out the land. When the survey was completed and returned, the land office then issued a patent, which became the legal basis for ownership of the land.

The massive collection of Virginia land patents, from the years 1623 to 1776, has been abstracted and published in seven volumes, the first three volumes of which were edited by Nell Marion Nugent. Several of these volumes contain extensive and useful introductory essays on the records and the system that created them. The Northern Neck land office, eventually controlled by the Fairfax family, covered an area in northern Virginia that was eventually divided into nearly two dozen counties in Virginia and West Virginia. The grants made by the Northern Neck proprietary have also been published.
In 1987, Richard Slatten published “Interpreting Headrights in Colonial-Virginia Patents: Uses and Abuses,” an article that described the granting process in detail and demonstrated techniques for interpreting the patents in the solution of genealogical problems.

This same system, with variations, was used in all colonies other than the four New England colonies (although there were other important land-granting processes in New York and New Jersey). From one colony to another there might be differences in terminology, but the underlying process was much the same. For instance, the initial petition might also be called the “entry” and the survey might be designated the “plat.”

Kentucky was admitted to the Union in 1792 as the fifteenth state. When European settlement of the region that would become Kentucky began, the area was under the jurisdiction of Virginia and was first considered to be part of Augusta County. This situation continued until 1772, when Fincastle County was erected, to include all of what became Kentucky. In 1777, Montgomery County was formed, and the Fincastle records came into the possession of that county.

Charles E. Drake, “Drakes of Isle of Wight County, Virginia: Reconstructing an Immigrant Family from Fragile Clues,” National Genealogical Society Quarterly 79 (1991): 19–32. The author carefully analyzes scattered clues from early deed, probate, and patent books of Isle of Wight County and of the colony of Virginia to estimate the ages of a group of early Drake immigrants to that county and colony. Combining this information with a 1658 passenger list led to the discovery of the English origin of these Drakes. This research was followed up with a study of the early generations of the family, again using a similar range of records.

Margaret R. Amundson, “The Taliaferro-French Connection: Using Deeds to Prove Marriages and Parentage,” National Genealogical Society Quarterly 83 (1995): 192–98. The author employs a plenitude of deeds and wills from several early Virginia counties, along with a few church records and entries from court order books, to identify the spouses of several members of the French and Taliaferro families of early eighteenth-century Virginia.

Notes:
- Edgar MacDonald, “The Myth of Virginia County Formation in 1634,” National Genealogical Society Quarterly 92 (2004): 58–63.
- Nell Marion Nugent, ed., Cavaliers and Pioneers: Abstracts of Virginia Land Patents, 7 vols. (Baltimore, 1963–99).
- Nell Marion Nugent, Supplement, Northern Neck Grants No. 1, 1690–1692 (Richmond, 1980); Gertrude E. Gray, Virginia Northern Neck Land Grants, 1694–1775, 2 vols. (Baltimore, 1987, 1988).
- Richard Slatten, “Interpreting Headrights in Colonial-Virginia Patents: Uses and Abuses,” National Genealogical Society Quarterly 75 (1987): 169–79.
http://www.ancestry.com/wiki/index.php?title=Colonial_Virginia&oldid=14554
4
Gestapo

The Geheime Staatspolizei (German for "secret state police"), commonly abbreviated as Gestapo, formed the secret state police force of Nazi Germany. Recruited from professional police officers, its role and organization were quickly established by Hermann Göring after Hitler came to power in early 1933. Rudolf Diels was the first head of the organization, initially called Department 1A of the Prussian State Police.

The role of the Gestapo was to investigate and combat "all tendencies dangerous to the State." It had the authority to investigate treason, espionage and sabotage cases, and cases of criminal attacks on the Party and State. The Gestapo's actions were not restricted by the law or subject to judicial review. The Nazi jurist Dr. Werner Best stated, "As long as the [Gestapo]... carries out the will of the leadership, it is acting legally." The Gestapo was specifically exempted from being responsible to administrative courts, where citizens normally could sue the state to conform to laws.

The power of the Gestapo most open to misuse was Schutzhaft or "protective custody" - a euphemism for the power to imprison people without judicial proceedings, typically in concentration camps. The person imprisoned even had to sign their own Schutzhaftbefehl (the document declaring the person was to be imprisoned); normally this signature was forced by torture. At the Nuremberg Trials the entire organization was charged with crimes against humanity.
http://www.encyclopedia4u.com/g/gestapo.html
4.1875
Researchers pinpoint date and rate of Earth's most extreme extinction

It's well known that Earth's most severe mass extinction occurred about 250 million years ago. What's not well known is the specific time when the extinctions occurred. A team of researchers from North America and China have published a paper in Science which explicitly provides the date and rate of extinction.

"This is the first paper to provide rates of such massive extinction," says Dr. Charles Henderson, professor in the Department of Geoscience at the University of Calgary and co-author of the paper, Calibrating the end-Permian mass extinction. "Our information narrows down the possibilities of what triggered the massive extinction and any potential kill mechanism must coincide with this time."

About 95 percent of marine life and 70 percent of terrestrial life became extinct during what is known as the end-Permian, a time when continents were all one land mass called Pangea. The environment ranged from desert to lush forest. Four-limbed vertebrates were becoming diverse and among them were primitive amphibians, reptiles and a group that would, one day, include mammals.

Through the application of various dating techniques to well-preserved sedimentary sections from South China to Tibet, researchers determined that the mass extinction peaked about 252.28 million years ago and lasted less than 200,000 years, with most of the extinction lasting about 20,000 years.

"These dates are important as it will allow us to understand the physical and biological changes that took place," says Henderson. "We do not discuss modern climate change, but obviously global warming is a biodiversity concern today. The geologic record tells us that 'change' happens all the time, and from this great extinction life did recover."

There is ongoing debate over whether the death of both marine and terrestrial life coincided, as well as over kill mechanisms, which may include rapid global warming, hypercapnia (a condition where there is too much CO2 in the blood stream), continental aridity and massive wildfires. The conclusion of this study says extinctions of most marine and terrestrial life took place at the same time. And the trigger, as suggested by these researchers and others, was the massive release of CO2 from volcanic flows known as the Siberian traps, now found in northern Russia.

Henderson's conodont research was integrated with other data to establish the study's findings. Conodonts are extinct, soft-bodied eel-like creatures with numerous tiny teeth that provide critical information on everything from hydrocarbon deposits to global extinctions.
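The summary does not spell out which dating techniques were involved, but high-precision ages of this kind generally rest on radioactive decay, typically uranium-lead measurements in zircon crystals from volcanic ash beds bracketing the fossil record. As background, and assuming a closed system with no initial daughter product, the governing relations are:

\[
N(t) = N_0\, e^{-\lambda t},
\qquad
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{N}\right),
\qquad
\lambda = \frac{\ln 2}{t_{1/2}},
\]

where \(N\) is the number of parent atoms remaining, \(D\) the number of daughter atoms accumulated, and \(t_{1/2}\) the half-life. Measuring the \(D/N\) ratio in ash layers above and below the extinction interval is what allows an age like 252.28 million years, and a duration under 200,000 years, to be pinned down.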
Source: University of Calgary
http://esciencenews.com/articles/2011/11/17/researchers.pinpoint.date.and.rate.earths.most.extreme.extinction
4.09375
The Components Of Manifest Destiny

The notion of Manifest Destiny had many components, each serving people in different ways. Manifest Destiny reflected both the pride that characterized American nationalism in the mid 19th century and the idealistic vision of social perfection through God and the church. Both fueled much of the reform energy of the time. Individually, the components created separate reasons to conquer new land. Together they exemplified America's ideological need to dominate from pole to pole (Demkin, Chapter 8). For example, the Puritan notion of establishing a "city on a hill" was eventually secularized into Manifest Destiny - a sort of materialistic, religious, utopian destiny.

By the 1840's, expansion was at its highest. The Santa Fe Trail went from Independence to the Old Spanish Trail, which went into Los Angeles. The Oxbow Route headed from Missouri to California. Others headed out on the Oregon Trail to the Pacific Northwest. In 1845, approximately 5,000 people traveled the Oregon Trail to Oregon's Willamette Valley. The Oregon Trail was the longest of the pioneer trails that went West. It traversed more than 2,000 miles through prairie, desert, and rugged mountain land from Independence, Missouri to the Northwest. In its short life, 300,000 settlers traveled this trail, marking their path by the landmarks first identified by Lewis and Clark. Thirty thousand graves mark the trail of these pioneers. In the wake of continual death and hardship, the allure of Manifest Destiny continued to drive expansionist interests. Beginning with the first wagon in 1831, to the formation of the territorial government in 1848, Manifest Destiny was responsible for making America grow.

Manifest Destiny was the reason for the revived interest in territorial expansion. With a sense of mission, people were tempted by the boundless tracts and sparsely settled land lying just beyond the borders of their country. There was also the growing desire to develop trade with the Far East. Going West would eventually open new trade routes. Last but not least, there was a renewed fear that the security of the United States might be impaired by foreign intervention in areas along its borders. The easiest way to conquer those fears was to conquer land beyond its borders and expand American territories.

The settlements that extended across the Western territories promised the American dream: the freedom and independence of a seemingly limitless land. This, coupled with the agrarian spirit, produced an attitude that nothing was going to stand in the way of progress, the progress of Manifest Destiny. In the name of this doctrine, Americans took whatever land they wanted. With a belief that Manifest Destiny gave them a right and power to do so, many simply settled, planted and farmed Indian land. The large-scale annihilation and movement of Native Americans onto Indian reservations reached its peak in the late 19th century. The U.S. government intended to destroy tribal governments and break up Indian reservations under what was then considered the progressive Manifest Destiny doctrine.

The arrogance that flowed from the Manifest Destiny philosophy was exemplified when Albert T. Beveridge rose before the U.S. Senate and announced: "God has not been preparing the English-speaking and Teutonic peoples for a thousand years for nothing but vain and idle self-admiration. No! He has made us the master organizers of the world to establish system where chaos reigns...
He has made us adepts in government that we may administer government among savages and senile peoples." Theodore Roosevelt, Henry Cabot Lodge, and John Hay, each in turn, endorsed with a strong sense of certainty the view that the Anglo-Saxon [Americans] were destined to rule the world. Such views expressed in the 19th century and in the early 20th century continue to ring true in the minds of many non-Indian property owners. "The superiority of the 'white race' is the foundation on which the Anti-Indian Movement organizers and right-wing helpers rest their efforts to dismember Indian tribes." (Ryser).
http://www.let.rug.nl/usa/essays/1801-1900/manifest-destiny/the-components-of-manifest-destiny.php
4.1875
One of the most difficult things for “neurotypical” (a term used by some adults with AS to describe people without AS) educators to understand is the effect of deficits in theory of mind and perspective-taking on the behavior of students with AS. The term “theory of mind” (ToM) was originally used in relation to the psychological development of young children. It is described as a naturally developing ability to discern the thoughts, feelings, ideas, and intentions of others. The primary importance of theory of mind is that this ability allows one to predict the behavior of others.

Simon Baron-Cohen used this same term, “theory of mind,” to describe the cognitive process which, if impaired, most likely accounts for the constellation of characteristics present in children with AS and other autism spectrum disorders. He described these students as having a kind of “mindblindness”: an inability to understand that other people have thoughts, feelings, and beliefs that are different from their own. Research has found that impaired ToM is specific to individuals with autism spectrum disorders. This lack of ToM makes it very difficult for students with AS to understand and predict the behavior of other people and to understand the social context that guides others on a daily basis.
http://alaskaarc.org/2009/08/theory-of-mind-and-perspective/
4.0625
The four humours were four fluids that were supposed to permeate the body and influence its health. The concept was developed by ancient Greek thinkers around 400 BC and it was directly linked with another popular theory, that of the four elements (Empedocles). Paired qualities were associated with each humour and its season. The four humours, their corresponding elements, seasons and sites of formation, and resulting temperaments are:

- Blood - air - spring - liver - sanguine
- Yellow bile - fire - summer - gallbladder - choleric
- Black bile - earth - autumn - spleen - melancholic
- Phlegm - water - winter - brain and lungs - phlegmatic

It is believed that Hippocrates was the one who applied this idea to medicine. "Humoralism," or the doctrine of the Four Temperaments, as a medical theory retained its popularity for centuries largely through the influence of the writings of Galen (131-201 CE) and was decisively displaced only in 1858 by Rudolf Virchow's newly published theories of cellular pathology. While Galen thought that humours were formed in the body, rather than ingested, he believed that different foods had varying potential to be acted upon by the body to produce different humours. Warm foods, for example, tended to produce yellow bile, while cold foods tended to produce phlegm. Seasons of the year, periods of life, geographic regions and occupations also influenced the nature of the humours formed.

The imbalance of humours, or "dyscrasia," was thought to be the direct cause of all diseases. Health was associated with a balance of humours, or eucrasia. The qualities of the humours, in turn, influenced the nature of the diseases they caused. Yellow bile caused warm diseases and phlegm caused cold diseases. In On the Temperaments, Galen further emphasized the importance of the qualities. An ideal temperament involved a balanced mixture of the four qualities. Galen identified four temperaments in which one of the qualities, warm, cold, moist and dry, predominated and four more in which a combination of two, warm and moist, warm and dry, cold and dry and cold and moist, dominated. These last four, named for the humours with which they were associated - that is, sanguine, choleric, melancholic and phlegmatic - eventually became better known than the others. While the term "temperament" came to refer just to psychological dispositions, Galen used it to refer to bodily dispositions, which determined a person's susceptibility to particular diseases as well as behavioral and emotional inclinations.

Methods of treatment like bloodletting, emetics and purges were aimed at expelling a harmful surplus of a humour. They were still in the mainstream of American medicine after the Civil War. Although completely refuted by modern science, the theory formed the basis of thinking about causes of health problems for more than a thousand years. It was first seriously challenged only just before the 18th century.

There are still remnants of the theory of the four humours in current medical language. For example, we refer to humoral immunity or humoral regulation to mean substances like hormones and antibodies that are circulated throughout the body, or use the term blood dyscrasia to refer to any blood disease or abnormality. The theory was a modest advance over previous views on human health that tried to explain illness in terms of the divine. From then on, practitioners started to look for natural causes of disease and to provide natural treatments.
The Unani school of Indian medicine, still apparently practiced in India, is very similar to Galenic medicine in its emphasis on the four humours, and in treatments based on controlling intake, general environment, and the use of purging as a way of relieving humoral imbalances.
http://allwebhunt.com/wiki-article-tab.cfm/four_humours
4.15625
Instruments of Prehistory

It may be that one of the key reasons for the lack of reference in the archaeological literature to musical instruments in British prehistory is not just an emphasis on economics, landscapes, or art but the poor preservation of the archaeological record. From the Neolithic period the only recognised instruments from Britain are hypothetical bone flutes or pipes. Of the five examples commonly mentioned, the example from Lincoln may be of Roman date; the crane-bone example from Wilsford, just to the south of Stonehenge, and the two examples from Penywyrlod and Skara Brae are broken and fragmentary; and the Avebury instrument is lost. At some sites perforated cattle toes, such as those from Skara Brae, may have been used as whistles. We have a fragment of a possible Bronze Age horn from Scotland, but this is nothing in comparison with the large number found in Ireland.

This does not seem like a promising starting point, but with the help of ethnography and better survival conditions in other European countries we can expand our vision of the range of music that may have been played in prehistoric Britain. So the likelihood of the possible flutes or pipes mentioned above being genuine is bolstered by the fact that the earliest accepted bird-bone and mammoth-ivory wind instruments, from Geissenklösterle, Germany, are far earlier, ca 36000 BC. Musical behaviour is very old and important. Elsewhere in Europe, clay whistles and globular flutes are known from the 5th and 4th millennium BC, while many hundreds of clay drums are known from Germany and the surrounding countries of Northern Europe, with very similar artefacts known from China. In France several examples of clay horns are known, which suggests that earlier precursors to the Irish horns may have existed in Britain. This supposition is strengthened by the importance of cattle in ceremonial contexts from the earliest Neolithic and beyond, although horn also does not survive well.

This is a key point: ethnographic assessments of hunter-gatherer societies suggest unaltered organic material was used for instruments, so if they survive there may be no sign that they were instruments by our modern-day standards. At Charavines, France, a 42 cm tube of elder was discovered which has been interpreted as a flute, perhaps played in the style of a Scandinavian overtone willow-flute; or maybe it was a tube to blow on a fire. A bullroarer survives from Denmark, ca 8500 years old, and these were still in existence in the twentieth century. Similarly in Denmark a bow has survived, from approximately 7000 years ago, which may have been used as a musical bow, as found in southern Africa and South America. Bows may also be used for drilling, starting fires and of course hunting.

In addition to this range of possible instruments we have the human body, equipped for rhythmic clapping and stamping and various forms of singing and chanting. When looking at objects we assume they had one function, but pots may be used as drums as readily as a small piece of bone, shell or acorn cup as a whistle. And it is very unlikely that the scarcity of the British prehistoric musical record reflects the nature of the music in the past.

Dr. Simon Wyatt
http://ambpnetwork.wordpress.com/introductions-to-the-field/instruments-of-prehistory/
4.0625
As more and more people around the world flock to cities, urban areas in developing nations are struggling to keep up with the human influx and the waste that people produce. In 2010 roughly 2.5 billion people lacked basic sanitation, according to the World Health Organization.

A team of engineers has developed a tool that may prove to be a solution: fuel cells that harness a mix of microbes to clean wastewater while producing their own power. The technology is young, but it shows promise, said Hong Liu, an associate professor of biological and ecological engineering at Oregon State University who heads the team.

The researchers' fuel cells generate more power from waste than other, similar fuel cells, according to a recent article in the journal Energy and Environmental Science. If scaled up, the technology would even outpace the digesters that many treatment plants currently use to extract energy from wastewater, Dr. Liu said. Wastewater itself contains more than nine times the energy that typical treatment plants currently use to clean it. The team aims to capture just enough to make the treatment process self-sustaining, she said.

So far, the researchers have built a modest contraption that they say has the potential to do just that. The fuel cell resembles a book, Dr. Liu said - quite a slim one, at half a centimeter thick. The current version holds roughly one cup of water.

The fuel cell consists of two electrodes: a platinum-coated cathode and an anode covered with several kinds of bacteria, Dr. Liu said. The microbes on the anode break down the organic material in wastewater, producing carbon dioxide, protons and electrons. The electrons then flow through a wire to the fuel cell's cathode, generating an electric current. There, at the cathode, the platinum coating jump-starts a chemical reaction that combines the electrons, the protons and oxygen in the air to make water. At the end of the process, the fuel cell has yielded water and carbon dioxide, ridding wastewater of some of its unwanted contents and generating electricity along the way.

The researchers hope to build a bigger system and reduce the cost of manufacturing the fuel cells, Dr. Liu said. She and her colleagues envision stacking the fuel cells together to make a unit of about 20 liters (about five gallons), large enough for them to test in areas that lack a central system for collecting sewage and treating wastewater. So far, groups in India, Malaysia and the Netherlands have contacted her about potential partnerships, she said. "I don't mind going anywhere, as long as I can help them make the treatment more efficient," Dr. Liu said.

The group is focusing on building a treatment system that could be used in developing countries to clean relatively small amounts of wastewater - say, from a single factory or a group of homes, she said. If engineers improve the fuel cells to the point that they can produce electricity in addition to powering themselves, she believes that such a system could replace the digesters that cities' treatment plants currently use. "If the energy gain is really high, we can challenge the traditional way we treat wastewater," Dr. Liu said.
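The electron bookkeeping described above can be summarized with a pair of half-reactions. Wastewater organics are a complex mixture, so the anode reaction below uses acetate purely as a representative stand-in; that substrate choice is an assumption for illustration, not a detail from the article:

\[
\text{Anode:}\quad \mathrm{CH_3COO^-} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{CO_2} + 7\,\mathrm{H^+} + 8\,e^-
\]
\[
\text{Cathode:}\quad 2\,\mathrm{O_2} + 8\,\mathrm{H^+} + 8\,e^- \;\longrightarrow\; 4\,\mathrm{H_2O}
\]

The electrons released at the anode and routed through the external wire are the measured current, and the net products are carbon dioxide and water, matching the article's description of the cell's output.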
http://green.blogs.nytimes.com/2012/08/16/in-fuel-cells-some-hope-for-urban-sanitation/?src=twr
4.03125
Pre-Columbian Native Americans

The first residents of what is now the United States emigrated from Asia over 30,000 years ago, crossing from present-day Russia into present-day Alaska and then heading south. A migration of humans from Eurasia to the Americas took place via Beringia, a land bridge which formerly connected the two continents across what is now the Bering Strait. Falling sea levels created the Bering land bridge that joined Siberia to Alaska, a process that began about 60,000 – 25,000 years ago. The most recent date by which this migration had taken place is about 12,000 years ago. These early Paleoamericans soon spread throughout the Americas, diversifying into many hundreds of culturally distinct nations and tribes. The North American climate finally stabilized about 10,000 years ago, with climatic conditions very similar to today's. This led to widespread migration, cultivation of crops and a dramatic rise in population all over the Americas. Between about 8000 BC and 1492 AD, there were numerous and complex events that shaped the North American tribes, their culture, language, range and more.

Pre-Columbian Indian Cultures Timeline

13,000 BC (near the end of the Ice Age): First migration of Paleo-Indians into North America by people of the Beringian subcontinent.

9,200 BC (Clovis Culture): Known for invention of superbly crafted grooved or fluted stone projectiles (Clovis points), first found near Clovis, New Mexico, in 1932. Clovis points have been found throughout the Americas. Hunted big game, notably mammoths.

8,900 BC (Folsom Culture): Named for a site found near Folsom, New Mexico, in 1926. Developed a smaller, thinner, fluted spear point than the Clovis type. Hunted big game, notably the huge bison ancestor of the modern buffalo. First used a spear-throwing device called an atlatl (an Aztec word for “spear-thrower”).

8,500 BC (Plano or Plainview Culture): Named after the site in Plainview, Texas. They are associated primarily with the Great Plains area. Were bison hunters. Developed a delicately flaked spear point that lacked fluting. Adopted a mass-hunting technique (jump-kill) to drive animal herds off a cliff. Preserved meat in the form of pemmican (from the Cree word pimîhkân, a concentrated mixture of fat and protein used as a nutritious food). First to use grinding stones to grind seeds and meat.

6,500 BC (Northwest Coast Indians): Some modern descendants are the Tlingit, Haida, Kwakiutl, Nootka, and Makah tribes. Settled along the shores, rivers, and creeks of southeastern Alaska to northern California. A maritime culture, they were expert canoe builders. Salmon fishing was important. Some tribes hunted whales and other sea mammals. Developed a high culture without the benefit of agriculture, pottery, or influence of ancient Mexican civilizations. Tribes lived in large, complex communities and constructed multifamily cedar plank houses. Evolved a caste system of chiefs, commoners, and slaves. Were highly skilled in crafts and woodworking that reached their height after European contact, which provided them steel tools. Placed an inordinate value on accumulated wealth and property. Held lavish feasts (called potlatches) to display their wealth and social status. Important site: Ozette, Washington (a Makah village).

500 BC – 200 AD (Adena Culture): Named for the estate called Adena near Chillicothe, Ohio, where their earthwork mounds were first found. Culture was centered in present-day southern Ohio, but they also lived in Pennsylvania, Indiana, Kentucky, and West Virginia.
Were the pioneer mound builders in the U.S. and constructed spectacular burial and effigy mounds. Settled in villages of circular post-and-wattle houses. Primarily hunter-gatherers, they farmed corn, tobacco, squash, pumpkins, and sunflowers at an early date. Important sites: The Adena Mound, Ohio; Grave Creek Mound, West Virginia; Monks Mound, Illinois, is the largest mound. May have built the Great Serpent Mound in Ohio.

300–1300 AD (Hohokam people): Believed to be ancestors of the modern Papago (Tohono O’odham) and Pima (Akimel O’odham) Indian groups. Settled in present-day Arizona. Were desert farmers. Cultivated corn. Were first to grow cotton in the Southwest. Wove cotton fabrics. Built pit houses and later multi-storied buildings (pueblos). Constructed a vast network of irrigation systems; major canals were over 30 miles long. Built ball courts and truncated pyramids similar to those found in Middle America. First in the world known to master etching (etched shells with fermented saguaro juice). Traded with Mesoamerican Toltecs. Important sites: Pueblo Grande, Arizona; Snaketown, Arizona; Casa Grande, Arizona.

300 BC – 1100 AD (Mogollon Culture): Were highland farmers but also hunters in what is now eastern Arizona and southwestern New Mexico. Named after a cluster of mountain peaks along the Arizona-New Mexico border. They developed pit houses and later dwelt in pueblos. Were accomplished stoneworkers. Famous for magnificent black-on-white painted pottery (Mimbres Valley pottery), the finest North American native ceramics. Important settlements: Casa Malpais, Arizona (first ancient catacombs in the U.S., discovered there in 1990); Gila Cliff, New Mexico; Galaz, New Mexico. Casa Grandes, in Mexico, was the largest settlement.

300 BC – 1300 AD (Anasazi): Their descendants are the Hopi and other Pueblo Indians. Inhabited the Colorado Plateau “four corners,” where Arizona, New Mexico, Utah, and Colorado meet. An agricultural society that cultivated cotton and wove cotton fabrics. The early Anasazi are known as the Basketmaker People for their extraordinary basketwork. Were skilled workers in stone. Carved stone Kachina dolls. Built pit houses, later apartment-like pueblos. Constructed road networks. Were avid astronomers. Used a solar calendar. Traded with Mesoamerican Toltecs. Important sites: Chaco Canyon, New Mexico; Mesa Verde, Colorado; Canyon de Chelly, Arizona; Bandelier, New Mexico; Betatakin, Arizona. The Acoma Pueblo, New Mexico, built circa A.D. 1300 and still occupied, may be the oldest continuously inhabited village in the U.S.

100 BC – 500 AD (Hopewell Culture): May be ancestors of present-day Zuni Indians. Named after a site in southern Ohio. Lived in the Ohio Valley and the central Mississippi and Illinois River Valleys. Were both hunter-gatherers and farmers. Villages were built along rivers, characterized by large conical or dome-shaped burial mounds and elaborate earthen walls enclosing large oval or rectangular areas. Were highly skilled craftsmen in pottery, stone, sculpture, and metalworking, especially copper. Engaged in widespread trade all over northern America extending west to the Rocky Mountains. Important sites: Newark Mound, Ohio; Great Serpent Mound, Ohio; Crooks Mound, Louisiana.

700 AD – European contact (Mississippi Culture): Major tribes of the Southeast are their modern descendants. Extended from the Mississippi Valley into Alabama, Georgia, and Florida. Constructed large flat-topped earthen mounds on which were built wooden temples and meeting houses and residences of chiefs and priests.
(They were also known as Temple Mound Builders.) Built huge cedar pole circles (“woodhenges”) for astronomical observations. Were highly skilled hunters with bow and arrow. Practiced large-scale farming of corn, beans, and squash. Were skilled craftsmen. The falcon and the jaguar were common symbols in their art. Had clear ties with Mexico. The largest Mississippian center and largest of all mounds (Monks Mound) was at Cahokia, Illinois. Other great temple centers were at Spiro, Oklahoma; Moundville, Alabama; and Etowah, Georgia.

Eric the Red / Leif Ericsson

Eric the Red (about 950 AD – 1000 AD) was born in Norway and was best known for colonizing Greenland. Eric the Red (also Erik Thorvaldson, Eirik Raude or Eirik Torvaldsson) spent three years sailing around and exploring the southern part of what he dubbed Greenland. In 986 he left Iceland with more than 20 ships and around 400-500 people. He arrived in Greenland with 14 boats and an estimated 350 colonizers. Although the settlement eventually disappeared, it opened the door to centuries of occasional explorations of the area and colonization attempts by northern Europeans.

Leif Eriksson (975 AD – 1020 AD) was born in Iceland. He was a leader of Viking expeditions and may have been the first European to reach North America.

Age of Exploration

The Age of Discovery, also known as the Age of Exploration, was a period in history starting in the 15th century and continuing into the early 17th century during which Europeans engaged in intensive exploration of the world, establishing direct contacts with Africa, the Americas, Asia and Oceania and mapping the planet. The pioneering Portuguese and Spanish traveled long distances over open ocean in search of alternative trade routes to “the Indies,” drawn by the trade in gold, silver and spices.

1492 – 1493: Christopher Columbus (representing Spain) set sail from the port of Palos, in southern Spain, on August 3, 1492. Sighted land in the Bahamas on October 12, 1492, and discovered Cuba, Española and other islands in the West Indies.

1497: John Cabot (representing England) - Giovanni Caboto, born in Genoa, sailed for England from Bristol on May 20, 1497. Reached Belle Island on the northern coast of Newfoundland on June 24, 1497. He sailed down the east coast of Newfoundland to the southern corner, landing only once on Newfoundland, at Belle Island. He never landed again on the coast before returning to England on July 30, 1497.

1499 – 1500: João Fernandes (representing Portugal) - An Azorean farmer who sailed from Terceira, viewed Greenland and discovered Labrador.

1500: Gaspar Corte Real (representing Portugal) sailed from the Azores and explored Newfoundland, looking for the Northwest Passage, and returned to Lisbon in the autumn of 1500.

1513: Juan Ponce de Leon (representing Spain) set sail from San German, Puerto Rico, on March 3, 1513, in search of the Fountain of Youth. They sailed northwest and on April 2nd sighted what he thought was a large island, which he named Pascua Florida, because it was Easter season and there were many flowers in the area. On April 3rd, he went ashore to claim it for Spain. He landed in a small inlet near Daytona Beach. He also discovered a strong current (the Gulf Stream) that forced his ships, which were sailing south, to sail backwards. He sailed down the coast of Florida, past the Florida Keys, and up the western coast to Charlotte Harbor.
Returning home, he sailed west, skirting the Yucatan, past the north coast of Cuba, and back to Puerto Rico, arriving home on October 10th, 1513.
1518: Alonso Alvarez de Pineda (representing Spain) sailed at the end of 1518. They landed on the west coast of Florida, encountered the same reception that Ponce de Leon had received, and continued up the coast. They discovered the Mississippi River and sailed 20 miles up it. They then continued west and south along the coast of Texas. At a place called Chila, they were defeated by the Indians, and Pineda was killed. The natives managed to burn all but one of the ships in the fleet. The survivors arrived in Vera Cruz and joined Cortez's army, which was already there. Pineda was able to navigate along the Gulf of Mexico coast and positively prove that there was no passage to the Pacific Ocean. He, and not De Soto or La Salle, discovered the Mississippi River.
1520: João Alvares Fagundes (representing Portugal) sailed from Portugal in 1520 to explore Codfish Land (Newfoundland) and the Gulf of St. Lawrence. He discovered St. Pierre, Miquelon, and the many islands between Newfoundland and the St. Lawrence, including Penguin Island. He also sailed along the coast of Nova Scotia and discovered the Bay of Fundy. He went back in 1521 and 1525 to establish colonies in the area.
1521: Juan Ponce de Leon (representing Spain) set sail from San Juan, Puerto Rico, on February 15th, 1521. He went to colonize Florida, bringing seeds and priests to convert the Indians. He reached Sanibel Island, on the west coast of Florida, where he had a battle with the natives and received an arrow wound that became infected. They returned to Cuba, where he died in July.
1524: Giovanni da Verrazzano (representing France) sailed from Dieppe, France, on January 17, 1524. He made landfall on March 1st, 1524, at Cape Fear, the southernmost of North Carolina's three capes. They sailed south for about 110 miles and then turned north; to avoid running into any Spaniards, he sailed another 250 miles north along the coast. He explored the coasts of Georgia, North and South Carolina, and as far north as New York Bay and Arcadia. He returned and anchored at Dieppe on July 8, 1524.
1527: Pánfilo Narváez (representing Spain) sailed from Barrameda on February 22nd, 1527, with a commission to colonize all the lands between Florida and Mexico. With a force of 260 men, he landed in Florida, near St. Petersburg, on May 1st. He sent his ship on to Mexico to wait for him while he marched up the coast of Florida. He traveled north, battling Indians all the way to Apalachee. They constructed four boats there and continued on to Pensacola Bay, battling Indians all the way. They crossed the Mississippi River in their boats. Eventually, all of their boats were lost, and the Indians killed them all except for four men, one being Cabeza de Vaca.
1527 – 1528: John Rut (representing England) set sail from Plymouth, England, on June 10th. On July 21st, they arrived in Newfoundland and, looking for the Northwest Passage, sailed as far as Labrador. His ship was seen by Spaniards at Mona Island and later, on November 25, 1527, in Española. The Spanish reported that the ship was lost. In Puerto Rico, they took in supplies and returned to England in the spring of 1528.
1527 – 1536: Alvar Nuñez de Vera (Cabeza de Vaca) (representing Spain) was one of the four surviving members of the Pánfilo Narváez expedition. On November 8, 1527, the boat he was on capsized, and they managed to swim to shore.
They walked to Texas and then Mexico, with the help of friendly Indians who fed them along the way. They lost all of their clothes along the way and continued naked. Cabeza de Vaca wrote that they shed their skin twice a year, like serpents. They finally reached Mexico City on July 25th, 1536. It took him and his three companions nine years to complete the trip.
1534 – 1536: Jacques Cartier (representing France) set sail from Saint-Malo on April 20th, 1534, and made landfall at Newfoundland on May 10th. Cartier sailed all around the coasts of Newfoundland, Labrador, and Arcadia, and all the islands in the area. He returned on September 5th, 1534. Jacques Cartier set sail from Saint-Malo again on May 19, 1535, and sighted Funk Island on July 7th. They did not stop at Newfoundland but proceeded to explore the area of the Gulf of St. Lawrence, the islands in the gulf, and Canada. They sailed up the Saguenay and the St. Lawrence River. They arrived at what is now Quebec on September 10th, and on October 2nd they were at the place where Montreal is today. Sailing back down the river, he wintered in Quebec. He reached Saint-Malo on July 15, 1536.
1539 – 1543: Hernando de Soto (representing Spain), setting sail from Havana, Cuba, landed near Fort Myers, Florida, on May 25, 1539, with a force of 570 men and 223 horses. De Soto, who got his training with Pizarro in Peru, had no respect for the indigenous population of Florida. His basic strategy was to enter an Indian town, capture the chief, demand provisions, then move to the next village, capture that chief, and then release the chief from the previous village. During his expedition, de Soto killed many Indians wherever he went. They marched north from Florida into Georgia, Alabama, and Tennessee. Near Memphis, he built barges and crossed the Mississippi River, two years after landing in Florida. They marched through Arkansas and Oklahoma, then back to the Mississippi. De Soto died of fever on May 21, 1542, at the mouth of the Red River. His successor, Luis Moscoso, continued the expedition, spending a fourth winter at the mouth of the Arkansas River. He built a ship, sailed down the Mississippi, across the Gulf of Mexico, and arrived in Mexico in September 1543 with 311 men out of the original 570.
1541 – 1542: Jacques Cartier (representing France) set sail from Saint-Malo on May 23rd, 1541. There were five ships in the fleet, which was going to colonize Canada. On August 23rd, 1541, it anchored off the banks of the future Quebec. They established a settlement and continued their exploration, sailing up the Ottawa River. He arrived back in Saint-Malo in October 1542.
Spanish expeditions reached the Appalachian Mountains, the Mississippi River, the Grand Canyon and the Great Plains. In 1540, Hernando de Soto undertook an extensive exploration of the present US and, in the same year, Francisco Vázquez de Coronado led 2,000 Spaniards and Native Mexican Americans across the modern Arizona–Mexico border and traveled as far as central Kansas. Other Spanish explorers include Lucas Vásquez de Ayllón, Pánfilo de Narváez, Sebastián Vizcaíno, Juan Rodríguez Cabrillo, Gaspar de Portolà, Pedro Menéndez de Avilés, Álvar Núñez Cabeza de Vaca, Tristán de Luna y Arellano and Juan de Oñate. The Spanish sent some settlers, creating the first permanent European settlement in the continental United States at St. Augustine, Florida, in 1565, but it was in such a harsh political environment that it attracted few settlers and never expanded.
Much larger and more important Spanish settlements included Santa Fe, Albuquerque, San Antonio, Tucson, San Diego, Los Angeles and San Francisco. Most Spanish settlements were along the California coast or the Santa Fe River in New Mexico.
Nieuw-Nederland, or New Netherland, was the 17th-century Dutch colonial province on the eastern coast of North America. The Dutch claimed territory from the Delmarva Peninsula to Buzzards Bay, while their settlements concentrated on the Hudson River Valley, where they traded furs with the Indians to the north and were a barrier to Yankee expansion from New England. Their capital, New Amsterdam, was located at the southern tip of the island of Manhattan and was renamed New York when the English seized the colony in 1664. The Dutch were Calvinists who built the Reformed Church in America, but they were tolerant of other religions and cultures. The colony left an enduring legacy on American cultural and political life, including a secular broadmindedness and mercantile pragmatism in the city, a rural traditionalism in the countryside typified by the story of Rip Van Winkle, and politicians such as Martin Van Buren, Theodore Roosevelt, Franklin D. Roosevelt and Eleanor Roosevelt.
New France was the area colonized by France in North America during a period extending from 1534 to 1763, when Britain and Spain took control. There were few permanent settlers outside Quebec, but fur traders ranged widely, working with numerous Indian tribes who often became military allies in France's wars with Britain. The territory was divided into five colonies: Canada, Acadia, Hudson Bay, Newfoundland and Louisiana. After 1750 the Acadians—French settlers who had been expelled by the British from Acadia (Nova Scotia)—resettled in Louisiana, where they developed a distinctive rural Cajun culture that still exists. They became American citizens in 1803 with the Louisiana Purchase. Other French villages along the Mississippi and Illinois rivers were absorbed when the Americans started arriving after 1770.
In 1607, the Virginia Company of London established the Jamestown Settlement on the James River, both named after King James I. The strip of land along the eastern seacoast was settled primarily by English colonists in the 17th century, along with much smaller numbers of Dutch and Swedes. Colonial America was defined by a severe labor shortage that was met with forms of slavery and indentured servitude, and by a British policy of benign neglect that permitted the development of an American spirit distinct from that of its European founders. Over half of all European immigrants to Colonial America arrived as indentured servants. The first successful English colony was established in 1607, on the James River at Jamestown. It languished for decades until a new wave of settlers arrived in the late 17th century and established commercial agriculture based on tobacco. Between the late 1610s and the Revolution, the British shipped an estimated 50,000 convicts to their American colonies. During the Georgian era English officials exiled 1,000 prisoners across the Atlantic every year. One example of conflict between Native Americans and English settlers was the 1622 Powhatan uprising in Virginia, in which Native Americans killed hundreds of English settlers. The largest conflict between Native Americans and English settlers in the 17th century was King Philip's War in New England, although the Yamasee War may have been bloodier. The Plymouth Colony was established in 1620.
New England was initially settled primarily by Puritans, who established the Massachusetts Bay Colony in 1630. The Middle Colonies, consisting of the present-day states of New York, New Jersey, Pennsylvania, and Delaware, were characterized by a large degree of diversity. The first attempted English settlement south of Virginia was the Province of Carolina, with the Georgia Colony, the last of the Thirteen Colonies, established in 1733. Several colonies were used as penal settlements from the 1620s until the American Revolution. Methodism became the prevalent religion among colonial citizens after the First Great Awakening, a religious revival led by the preacher Jonathan Edwards in 1734. Source: http://en.wikipedia.org/wiki/History_of_the_United_States, http://en.wikipedia.org/wiki/Native_Americans_in_the_United_States, http://bruceruiz.net/PanamaHistory/age_of_exploration_time_line.htm
http://storiesofusa.com/pre-colonial-america/
4.09375
The Thirteenth Amendment to the U.S. Constitution, sent to the states for ratification in February 1865 with the unanimous support of congressional Republicans and the firm endorsement of President Abraham Lincoln, contained two short sections. The first prohibited slavery and involuntary servitude except as punishment for convicted criminals. The second pronounced in arguably vague terms that Congress had the power to enforce this prohibition "by appropriate legislation." The amendment reflected the North's determination, after four years of Civil War, to make legal, permanent, and more encompassing Lincoln's 1863 Emancipation Proclamation. However, what rights would be granted the former slaves and what powers Congress had under the enforcement clause were left ambiguous. After Lincoln's assassination, President Andrew Johnson made clear to the South that ratification of the Thirteenth Amendment was one of his minimum requirements for readmission to the Union. The proceedings and results in North Carolina were typical of the actions southern states offered in response. On 29 Nov. 1865, two days after the new North Carolina legislature convened, Rufus Y. McAden introduced a resolution for approval of the Thirteenth Amendment. Debate centered on the second section. Many North Carolinians feared that this provision would allow Congress to regulate civil rights, thus depriving the states of their traditional control over race relations and legal privileges. Supporters tried to assure doubters that Secretary of State William H. Seward was correct in his interpretation that the "clause is really restraining in its effect, instead of enlarging the power of Congress." Faced with the knowledge that rejection of the amendment meant the continuation of federal control over the state, the North Carolina House approved the amendment 100 to 4. In the Senate, the same debate ensued, but on 4 December that body also ratified the amendment. But opposition forces quickly regrouped, and on the same day Senator A. D. McLean of Cumberland County introduced a resolution "touching" the Thirteenth Amendment; the resolution explicitly stated that North Carolina ratified the amendment only "in the sense given to it" by Seward, "to wit: That it does not enlarge powers of Congress to legislate on the subject of freed men within the States." Although the General Assembly clearly understood that McLean's resolution had no legal effect, both houses endorsed it. By 15 Dec. 1865, the necessary three-fourths of the states had ratified the Thirteenth Amendment and Seward proclaimed it in effect. With that action slavery, already recognized as ended in North Carolina after the state's 1865 constitutional convention, was now legally and permanently terminated by the U.S. Constitution. Roberta Sue Alexander, North Carolina Faces the Freedmen: Race Relations during Presidential Reconstruction, 1865-67 (1985). Herman Belz, A New Birth of Freedom: The Republican Party and Freedmen's Rights, 1861-1866 (1976). Michael L. Benedict, A Compromise of Principle: Congressional Republicans and Reconstruction, 1863-1869 (1974). Harold M. Hyman and William M. Wiecek, Equal Justice under Law: Constitutional Development, 1835-1875 (1982). "Primary Documents in American History: 13th Amendment to the U.S. Constitution." The Library of Congress http://www.loc.gov/rr/program/bib/ourdocs/13thamendment.html (accessed September 19, 2012). 
"Resolution Touching the Amendment To The Constitution Of The United States, Ratified At This Session of the General Assembly, Known as the Thirteenth Article." Public laws of the State of North-Carolina, passed by the General Assembly. Raleigh [N.C.]: Robt. W. Best. 1866. p.140. http://digital.ncdcr.gov/u?/p249901coll22,177357 (accessed September 19, 2012). "CRS Annotated Constitution: Thirteenth Amendment." Legal Information Institute, Cornell University Law School. http://www.law.cornell.edu/anncon/html/amdt13_user.html#amdt13_hd4 (accessed September 19, 2012). "13th Amendment to the U.S. Constitution: Abolition of Slavery (1865)." The National Archives and Records Administration. http://www.ourdocuments.gov/doc.php?doc=40 (accessed September 19, 2012). "John G. Nicolay to Abraham Lincoln, Tuesday, January 31, 1865 (Telegram reporting passage of 13th Amendment by Congress)." The Abraham Lincoln Papers at the Library of Congress. http://memory.loc.gov/cgi-bin/ampage?collId=mal&fileName=mal1/403/4037900/malpage.db&recNum=0 (accessed September 19, 2012). 1 January 2006 | Alexander, Roberta Sue
http://www.ncpedia.org/thirteenth-amendment
4.0625
French and French Canadians
The French were latecomers to North America. During the Renaissance, while Britain, Holland, Spain, and Portugal conducted a systematic invasion of the Western Hemisphere, French kings had remained preoccupied with European politics. By the time that Jacques Cartier, in 1534, and Samuel de Champlain and Aymar de Chatte, in 1603, crossed the Atlantic Ocean, the only lands yet to be claimed were far north and lacking either precious metals or arable land—and therefore unattractive to their British rivals. Champlain founded Québec in 1608, Montréal in 1611. With the ascension to the French throne of Louis XIV in 1661, France began colonization in earnest.
French expansion followed the waterways—the St. Lawrence River; Lakes Erie, Huron, and Michigan; then the Fox and Illinois Rivers and on to the Mississippi. This presence intensified when Robert Cavelier de La Salle and Henri de Tonti, plus 30 Frenchmen and a handful of Native American allies, built a settlement at Starved Rock. La Salle and his party eventually canoed their way down the Mississippi to the Gulf of Mexico, and, on April 9, 1682, they claimed all the North American continent for France—with the exception of the 13 British colonies and New Spain (Mexico). In 1717, Illinois ceased to be part of New France (Canada) and was transferred to the government of Louisiana. Regardless of administrative jurisdiction, the Midwest remained an essential component of the French empire of North America, acting as a hinge between New France and Louisiana. The waterways between Lake Michigan and the Mississippi River were essential instruments of communication and remained essential to the fur traders who continued to venture southwest from Montreal and Quebec.
The Treaty of Paris (1763) marked the end of the French presence in North America, with France surrendering its territories between the Atlantic and the Mississippi to Britain and transferring everything west of the Mississippi to Spain. A new distinction between French Canadians and the French became clearer with the American Revolution and the separation between Canada, a British colony, and the United States. French cultural presence in the Midwest would all but disappear by the early 1840s, but the French in Canada, benefiting from larger numbers and a cohesive grouping around the Roman Catholic Church in a well-defined territory, retained a distinctively French culture.
During the second half of the nineteenth century, French Canadians migrating in the face of intense economic pressure at home made their way to the Kankakee area, where they founded the town of Bourbonnais. In the 1870s a large number of French Canadian families settled in the Brighton Park area of Chicago, where some of their descendants still remain. Meanwhile, a few individuals, especially artists and persons in luxury trades, migrated from France to Chicago between the Franco-Prussian War and World War I. They were few because France avoided the severe economic crises that stimulated emigration from elsewhere in Europe, and because it had its own colonies for the would-be emigrant. Interest in French culture was paradoxically maintained by Chicago society, whose members, in the 1890s, traveled extensively to France, where they bought the French art that would eventually enable the creation of the Art Institute's impressionist collection. This activity also led to the founding, in 1897, of the Alliance Française.
Alvord, Clarence Walworth. The Illinois Country. 1920.
Balesi, Charles J.
The Time of the French in the Heart of North America, 1673–1818. 1st ed. 1992; 3rd ed. 2000.
Eccles, William J. Canada under Louis XIV. 1964.
The Electronic Encyclopedia of Chicago © 2005 Chicago Historical Society. The Encyclopedia of Chicago © 2004 The Newberry Library.
http://www.encyclopedia.chicagohistory.org/pages/488.html
4.46875
- Understand the different ways to save money and reasons for doing so - Use addition, subtraction, multiplication, and division (with whole numbers, fractions, decimals and/or percents, mixed numbers) to solve real-world math problems that will help them to understand how a savings account works and the concept of interest Chart paper or chalkboard, marker or chalk, Student Magazine Pages: How Money Can Grow (PDF), pencils Assess students' understanding of and experience with the concept of savings. Ask: Have you ever saved money to buy something? What were you saving for? How much did it cost? How long did it take you to save the money? How did you save the money? - Discuss the definition of savings with students. People have short-term goals when they save for things over a short period of time. This is usually for less expensive items. Long-term goals are usually for more expensive items that take a long time to save for, such as college or a house. - Ask students for examples of short- and long-term goals. - Now discuss the concept of interest as it applies to a savings account. Solicit ideas about what interest paid means when it refers to a savings account. Interest is the fee a bank pays someone for letting the bank use his or her money. Do the Math: Show students the math behind interest in the chart below: What are examples of long- and short-term goals? Why is it important to have goals? How does saving money help you reach goals? What are some things that you might like to save for? Introduce the concept of philanthropy, which means giving time and money to causes or charities. Discuss with students why they think this is an important goal to have. Assign Student Magazine Pages 10-11: Kids Helping Others (PDF) as an at-home or in-class assignment. Language Arts Extension: Expository: Writing Situation: By setting goals, you can help to accomplish something in the future. Directions for Writing: Think about a goal that you have. Now explain why this goal is important to you and what you can do to reach it. Narrative: Writing Situation: There are all sorts of goals that people have. Directions for Writing: Write a story about a real or imagined goal that someone has. Have students read Student Magazine Pages 8-9: How Money Can Grow (PDF), in class or at home, and complete the goals on page 8 and the problems on page 9. Student Magazine Answers (PDF)
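To make the interest arithmetic concrete for students, here is a minimal worked example; the principal and rate below are illustrative figures only, not values taken from the Student Magazine chart.
Interest = Principal × Rate × Time
Year 1: $100.00 × 0.02 (2% per year) × 1 year = $2.00 interest, so the balance grows to $102.00.
Year 2: if the money stays in the account, interest is figured on the new balance: $102.00 × 0.02 = $2.04, and the balance grows to $104.04.
This is why savings compound: each year's interest is earned on a slightly larger amount.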
http://www.scholastic.com/browse/lessonplan.jsp?id=373
4.25
The Earth's terrain varies greatly from place to place. About 70.8% of the surface is covered by water, with much of the continental shelf below sea level. The submerged surface has mountainous features, including a globe-spanning mid-ocean ridge system, as well as undersea volcanoes, oceanic trenches, submarine canyons, oceanic plateaus and abyssal plains. The remaining 29.2% not covered by water—148.94 million km2 (57.51 million sq mi)—consists of mountains, deserts, plains, plateaus, and other geomorphologies.
The planetary surface undergoes reshaping over geological time periods because of tectonics and erosion. The surface features built up or deformed through plate tectonics are subject to steady weathering from precipitation, thermal cycles, and chemical effects. Glaciation, coastal erosion, the build-up of coral reefs, and large meteorite impacts also act to reshape the landscape.
[Figure: Present-day Earth altimetry and bathymetry. Data from the National Geophysical Data Center's TerrainBase Digital Terrain Model.]
The continental crust consists of lower density material such as the igneous rocks granite and andesite. Less common is basalt, a denser volcanic rock that is the primary constituent of the ocean floors. Sedimentary rock is formed from the accumulation of sediment that becomes compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form only about 5% of the crust. The third form of rock material found on Earth is metamorphic rock, which is created from the transformation of pre-existing rock types through high pressures, high temperatures, or both. The most abundant silicate minerals on the Earth's surface include quartz, the feldspars, amphibole, mica, pyroxene and olivine. Common carbonate minerals include calcite (found in limestone) and dolomite.
The pedosphere is the outermost layer of the Earth that is composed of soil and subject to soil formation processes. It exists at the interface of the lithosphere, atmosphere, hydrosphere and biosphere. Currently the total arable land is 13.31% of the land surface, with only 4.71% supporting permanent crops. Close to 40% of the Earth's land surface is presently used for cropland and pasture, or an estimated 1.3×10^7 km2 of cropland and 3.4×10^7 km2 of pastureland.
The elevation of the land surface of the Earth varies from the low point of −418 m at the Dead Sea to a 2005-estimated maximum altitude of 8,848 m at the top of Mount Everest. The mean height of land above sea level is 840 m.
http://www.trunitydemo2.net/boxford_classroom/articles/view/202444/?topic=74904
4.5625
Theme: Freedom of Choice
Grades: 7-8
The Giver is a science fiction story about a 12-year-old boy who must choose between a world of sameness or one filled with both the intense joys and pains of life. Jonas lives in a "perfect" world, devoid of strife or conflict. When Jonas begins training for his life assignment as the Receiver of Memory, he meets his teacher, a man called The Giver. As The Giver transfers to Jonas the memories of the world, Jonas begins to realize that his seemingly perfect world has many flaws. When the life of a baby, whom Jonas has become attached to, is threatened, Jonas must decide where his loyalties lie.
- Classifying: Making Choices. To stimulate an oral discussion on freedom of choice, invite students to brainstorm things they do every day. Then have students classify each activity as 1) one that is totally their choice, 2) one in which they have some choice, or 3) one in which they have no choice. Students might work individually or as a group to chart their answers. Have students look for patterns in the types of items that appear under each heading. Conclude by revealing to students that the freedom to choose is an important issue in the novel they will read.
- Connecting to Real Life: Book of Rules. Begin a discussion about whether it is important to follow rules and the reasons for rules in our society. Then ask students to create, independently or as a class, a list of rules they follow at home, at school, or in their community. Ask them to divide the rules into two groups: those that they believe are important and essential and those that are not important or are unnecessary. Suggest that they display their two lists of rules in a collage on a bulletin board or other wall area to use in later comparisons with the rules of Jonas's community.
- A Great Debate. Have students debate the question "Is it better for all people to be alike or for people to be different?" First assign students to one of two groups: Pro-Sameness or Pro-Diversity. To prepare for the debate, have each group brainstorm ideas to support their side and organize their best defense. You may wish to allow time for students to find facts that support their position from the novel, from almanacs, and other sources.
- Act it Out. Ask students to consider what would happen if the freedom to make any choice were suddenly taken away from them. Instruct a group of students to write a skit in which the characters are without freedom of choice; have a second set of students perform the skit.
- Genetic Engineering. The Community has been genetically altered for Sameness. Instruct students to research genetic engineering and tell whether they think it is right or wrong to tamper with nature in this way. Have them write a persuasive essay on this issue, using examples from their research as well as from The Giver.
Research another utopian-like community, such as the Shakers. Write a comparison between that community and the one presented in The Giver, in which you consider the rules of conduct within the community as well as its relationship with the outside world.
http://www.classzone.com/novelguides/litcons/giver/guide.cfm
4.0625
The United Nations is an organization that traces its origins to the Allied opposition to the Axis powers during World War II. Its original purpose was similar to that of the failed League of Nations that grew out of World War I. The term was first used on January 1, 1942, by the then-26 nations who were at war with the Axis. On that date, they released a "United Nations Declaration." Franklin D. Roosevelt is regarded as the originator of the term.
In October 1943, a statement by the four primary anti-Axis nations (Britain, the United States, the Soviet Union, and China) declared their recognition of a need for an organization, after the war, to "maintain international peace and security." Plans were developed further, and on August 21, 1944, representatives of the United States, Britain and the Soviet Union met at Dumbarton Oaks outside of Washington DC. They were joined later by China. Dumbarton Oaks was an 1801 Federal Style home which Mildred and Robert Woods Bliss purchased in 1920 and donated to Harvard in 1940.
The Dumbarton Oaks conference agreed that the new postwar organization would be modeled very generally on the League of Nations, with a general assembly as well as a council in which the great powers would wield greater influence. At Yalta, it was further agreed that the great powers would have veto power in the security council but would not exercise it for procedural questions. It was also agreed that all those countries that declared war on either Germany or Japan by March 1, 1945, would be eligible to join. The agreement further specified that two additional Soviet socialist republics, specifically Ukraine and White Russia, would be admitted as full members of the General Assembly.
The final conference to complete the draft of a charter for the United Nations opened in San Francisco on April 25, 1945. The United States was represented by Edward R. Stettinius Jr., the Secretary of State, who presided. A total of 50 nations attended. Each proposed clause in the charter was debated and required a two-thirds vote of approval. The wartime alliance between the Soviet Union and the English-speaking Allies was coming under increasing strain, with disagreements about postwar policies becoming more evident. There were also splits between the Western democracies and the Communist bloc, between the industrially developed and developing nations, and between the great powers and smaller powers, with the latter resenting the special treatment afforded to the great powers in the Security Council.
The charter was signed by all 50 countries on June 26, 1945. Poland, which had not been present, signed later to complete the list of 51 original countries. The United States Senate ratified the treaty by a vote of 89-2 on July 28. For the charter to become effective, the Big Five (USA, Britain, USSR, China, France) had to approve the charter, plus a majority of the other 51. This was achieved on October 24, 1945, which has since been recognized as United Nations Day.
The League of Nations was formally disbanded on April 18, 1946, in Geneva. This final act, signed by Sean Lester, the league's final Secretary-General, gave the UN the assets of the league and consigned the League of Nations to history.
Finding a permanent home for the United Nations required time and extended negotiations. The general assembly met in London in early 1946, and thereafter met at various temporary locations around New York City.
Eventually, the UN accepted the offer by John D. Rockefeller of land in Manhattan on the East River. Construction of the distinctive United Nations headquarters was completed in 1952.
It didn't take long for the first international conflict to reach the Security Council. Iran soon complained that the Soviet Union was not abiding by its wartime understandings and had not withdrawn its troops from Iranian territory. The Security Council contented itself with encouraging the two parties to resolve the problem amicably. This was soon followed by the conflict in Palestine. With Britain unwilling to continue to administer its mandate, the United Nations was trying to implement a partition when fighting broke out. The outcome was a greater territory in the hands of the new nation of Israel than would have been granted by the UN, and also a refugee problem that was not yet resolved six decades later.
The Soviet Union was unhappy with the proceedings of the United Nations in 1950 and instituted a boycott of meetings. They were thus boycotting the Security Council when North Korea invaded South Korea and could not veto a resolution authorizing the use of force to resist the invasion. In their absence, the Security Council voted the necessary resolution and, to prevent later obstruction by the Soviets, transferred management of the military forces to the General Assembly. The result of this was that the United States, although providing most of the men and materiel, did not itself declare war on North Korea.
The charter of the United Nations gave more executive power to the Secretary-General than had been the case with the League of Nations. The first UN Secretary-General was Trygve Lie of Norway. The Soviet Union became unhappy with his actions with regard to Korea and withdrew their support. Lie resigned in 1953 and was replaced with Dag Hammarskjöld of Sweden. Hammarskjöld in turn got himself in Soviet bad graces by his activities regarding the Congo in the period of rapid decolonization of Africa after 1960. So upset were the Soviets that they proposed to end the position of Secretary-General and replace it with a triumvirate, which would represent the three major blocs in the UN and would consequently become completely ineffective. Before this proposal could be put through, a plane carrying Hammarskjöld in the Congo crashed on September 18, 1961, killing him. As a gesture to the growing number of nations from the developing world, the candidacy of U Thant of Burma was put forward. The Soviet Union had little choice but to accept.
The United Nations was the scene of a number of famous speeches during the Cold War. Fidel Castro made several addresses condemning the United States. America's ambassador to the UN, Adlai E. Stevenson, used the international forum to present the evidence of a Soviet buildup of missiles in Cuba. And Nikita Khrushchev infamously pounded his shoe on his desk to interrupt a speech he didn't like.
The United Nations has continued to be the scene of contentious debate since the end of the Cold War, but it has become more and more involved through its various agencies in international developments far removed from military confrontation.
[Photo: United Nations Secretariat Building. To the right of the UN are the Citicorp building and the Trump World Tower.]
[Photo: The view of the United Nations Secretariat Building from Queens West. On the left is the ...]
United Nations Headquarters (UN Photo #106192C). Address: United Nations, New York, New York 10017, USA. The United Nations Headquarters is located in Manhattan, New York City, and covers approximately eighteen acres, from 42nd to 48th Streets between First Avenue and the ...
United Nations Day, 2005: "This enduring truth inspired those who created the United Nations, and it continues to do so 60 years later. With courage and conscience, we will meet our responsibilities to protect the lives and rights of others. As we do this, we will help ..."
http://www.u-s-history.com/pages/h2055.html
4
PHP - Operators
In all programming languages, operators are used to manipulate or perform operations on variables and values. You have already seen the string concatenation operator "." in the Echo Lesson and the assignment operator "=" in pretty much every PHP example so far. There are many operators used in PHP, so we have separated them into the following categories to make it easier to learn them all.
- Assignment Operators
- Arithmetic Operators
- Comparison Operators
- String Operators
- Combination Arithmetic & Assignment Operators
Assignment operators are used to set a variable equal to a value or set a variable to another variable's value. Such an assignment of value is done with the "=", or equal character. Example:
- $my_var = 4;
- $another_var = $my_var;
Now both $my_var and $another_var contain the value 4. Assignments can also be used in conjunction with arithmetic operators.
| Operator | English | Example |
| + | Addition | 2 + 4 |
| - | Subtraction | 6 - 2 |
| * | Multiplication | 5 * 3 |
| / | Division | 15 / 3 |
| % | Modulus | 43 % 10 |
$addition = 2 + 4;
$subtraction = 6 - 2;
$multiplication = 5 * 3;
$division = 15 / 3;
$modulus = 5 % 2;
echo "Perform addition: 2 + 4 = ".$addition."<br />";
echo "Perform subtraction: 6 - 2 = ".$subtraction."<br />";
echo "Perform multiplication: 5 * 3 = ".$multiplication."<br />";
echo "Perform division: 15 / 3 = ".$division."<br />";
echo "Perform modulus: 5 % 2 = " . $modulus . ". Modulus is the remainder after the division operation has been performed. In this case it was 5 / 2, which has a remainder of 1.";
Perform addition: 2 + 4 = 6
Perform subtraction: 6 - 2 = 4
Perform multiplication: 5 * 3 = 15
Perform division: 15 / 3 = 5
Perform modulus: 5 % 2 = 1. Modulus is the remainder after the division operation has been performed. In this case it was 5 / 2, which has a remainder of 1.
Comparisons are used to check the relationship between variables and/or values. If you would like to see a simple example of a comparison operator in action, check out our If Statement Lesson. Comparison operators are used inside conditional statements and evaluate to either true or false. Here are the most important comparison operators of PHP. Assume: $x = 4 and $y = 5;
| Operator | English | Example | Result |
| == | Equal To | $x == $y | false |
| != | Not Equal To | $x != $y | true |
| < | Less Than | $x < $y | true |
| > | Greater Than | $x > $y | false |
| <= | Less Than or Equal To | $x <= $y | true |
| >= | Greater Than or Equal To | $x >= $y | false |
As we have already seen in the Echo Lesson, the period "." is used to add two strings together, or more technically, the period is the concatenation operator for strings.
$a_string = "Hello";
$another_string = " Billy";
$new_string = $a_string . $another_string;
echo $new_string . "!";
Combination Arithmetic & Assignment Operators
In programming it is a very common task to have to increment a variable by some fixed amount. The most common example of this is a counter. Say you want to increment a counter by 1; you would write:
- $counter = $counter + 1;
However, there is a shorthand for doing this. This combination assignment/arithmetic operator would accomplish the same task:
- $counter += 1;
The downside to this combination operator is that it reduces code readability to those programmers who are not used to such an operator. Here are some examples of other common shorthand operators. In general, "+=" and "-=" are the most widely used combination operators.
| Operator | English | Example | Equivalent Operation |
| += | Plus Equals | $x += 2; | $x = $x + 2; |
| -= | Minus Equals | $x -= 4; | $x = $x - 4; |
| *= | Multiply Equals | $x *= 3; | $x = $x * 3; |
| /= | Divide Equals | $x /= 2; | $x = $x / 2; |
| %= | Modulo Equals | $x %= 5; | $x = $x % 5; |
| .= | Concatenate Equals | $my_str .= "hello"; | $my_str = $my_str . "hello"; |
Pre/Post-Increment & Pre/Post-Decrement
This may seem a bit absurd, but there is even a shorter shorthand for the common task of adding 1 or subtracting 1 from a variable. To add one to a variable, or "increment," use the "++" operator:
- $x++; which is equivalent to $x += 1; or $x = $x + 1;
To subtract 1 from a variable, or "decrement," use the "--" operator:
- $x--; which is equivalent to $x -= 1; or $x = $x - 1;
In addition to this "shorterhand" technique, you can specify whether you want to increment before the line of code is being executed or after the line has executed. Our PHP code below will display the difference.
$x = 4;
echo "The value of x with post-plusplus = " . $x++;
echo "<br /> The value of x after the post-plusplus is " . $x;
$x = 4;
echo "<br />The value of x with pre-plusplus = " . ++$x;
echo "<br /> The value of x after the pre-plusplus is " . $x;
The value of x with post-plusplus = 4
The value of x after the post-plusplus is 5
The value of x with pre-plusplus = 5
The value of x after the pre-plusplus is 5
As you can see the value of $x++ is not reflected in the echoed text because the variable is not incremented until after the line of code is executed. However, with the pre-increment "++$x" the variable does reflect the addition immediately.
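To see several of these operator families working together, here is a small sketch of our own (not part of the original Tizag lesson; the variable names $savings and $week are made up for this example). It uses a comparison operator to control a while loop, a combination operator to accumulate a total, and the post-increment operator as a counter:
$savings = 0; // running total
$week = 1; // loop counter
while ($week <= 4) { // comparison operator: evaluates to true or false
    $savings += 5; // shorthand for $savings = $savings + 5;
    $week++; // post-increment the counter
}
if ($savings >= 20) { // comparison operator inside a conditional
    echo "Saved $" . $savings . " in four weeks!";
} else {
    echo "Still saving...";
}
This prints "Saved $20 in four weeks!" because the loop body runs four times (weeks 1 through 4), adding 5 on each pass, so $savings reaches 20 and the first branch of the if statement is taken.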
http://www.tizag.com/phpT/operators.php
4.03125
The use of a republic as a form of government goes back at least as far as ancient Akkad. The best known ancient republic was the Roman Republic, which lasted from 509 BC until 44 BC. In the Roman Republic, the principles of annuality (holding office for a term of only one year) and collegiality (holding office with at least two men at the same time) were usually observed. In modern times, the head of state of a republic is usually a single person, the president, but there are some exceptions, such as Switzerland, which has a seven-member council as its head of state, called the Bundesrat, and San Marino, where the position of head of state is shared by two people.
There is certainly nothing that says that, between monarchies and republics, one is necessarily more democratic than the other, since the powers of the head of state (whether monarch or president) may be purely ceremonial (although an elected head of state within a democratic system is generally considered more democratic than a monarchy). Monarchs generally reign for life, and when they die they are succeeded by a relative, either chosen by themselves or determined according to set rules. The presidents of republics, by contrast, are generally elected for a limited term, and their successors are chosen by the body that elected them. These days even non-democratic republics generally claim to be democratic, though the outcome of the election may be assured, and still maintain the ritual of regularly electing their head of state; and frequently in these states heads of state have left office voluntarily (through resignation or retirement) or been forced out (through constitutional means) by other members of the ruling elite. But there are still some exceptions -- each new Emperor of the Holy Roman Empire, for instance, was elected by the chief princes of the empire, though over the centuries the custom developed of always electing successive members of a particular family to that office. Perhaps the most significant exception among the forms of today's monarchies is the oligarchical form of election used in the United Kingdom (described under Privy Council).
Republics in the Soviet Union were member states which had to meet three criteria to be named republics: 1) be on the periphery of the Soviet Union so as to be able to take advantage of their theoretical right to secede; 2) be economically strong enough to be self-sufficient upon secession; and 3) be named after at least one million people of the ethnic group which should make up the majority population of said republic. The republics were originally created by Stalin, and new ones continued to be created even in later years.
Before roughly the 18th century, all known republics were also more or less democratic. This is why in older texts you will often see republic being used interchangeably with democracy. In recent times there have been a large number of not-so-democratic republics, and the definition of the word has become more constrained.
http://allwebhunt.com/wiki-article-tab.cfm/republic
4.0625
Sounds: 26 in Middle English
The consonants are divided into stops, affricates, fricatives, nasals, and lateral resonants. These ten-dollar words basically refer to the parts of the mouth necessary to make the sounds--lips, blade of tongue, back of tongue, top of the mouth, and so on. You can go back to the earlier chart of the human body for reference if necessary.
Stops involve the complete closure of the air passage (i.e., the lips or parts of the tongue are completely closed to make the sound initially).
Affricates involve a stop plus a movement through a fricative position (i.e., the blade of the tongue initially moves up in the position of a stop, but then moves through a fricative or spirant position rather than remaining in the "stop" position).
Fricatives involve a constriction of the air passage, but air still "hisses" around the edges of the lip or tongue.
Nasals involve complete closure of the oral passage with the nasal passage open. The vibrating air often makes it feel like something is vibrating just behind or under your nose when you make this sound.
Lateral resonants, or "liquids," occur when air is expelled through passages on the sides of the tongue. This is a sound very common in Welsh and in some "lilting" languages.
Glides involve so much vowel sound it's difficult to pinpoint exactly where these sounds end and vowel sounds begin. That's why some linguists call them "semi-vowels."
It's easier to understand these terms with examples, so look below at the complete chart and sound through the examples aloud. If anybody near you gives you funny looks, simply make the sounds more loudly until they go away in fear of this crazy person making weird noises.
If you have trouble seeing the chart below, you can click here to download and print out a pdf file of this material. Also available is a lengthier International Phonetics Chart of consonants not limited to Middle English sounds.
Note that some sounds are "voiced" and others are "unvoiced." This means the sounds are made exactly the same way in each case, in terms of where your lips and tongue are, but in the case of "voiced" sounds, your vocal cords vibrate as you make the noise. (Get it? Voiced=vocal, as in vocal cords?) To feel the difference, tilt your head back, and put your hand on the front of your throat. Say the unvoiced sound a few times, and you shouldn't feel the vocal vibrations. Then try saying the voiced version of the same sound, and your fingers should feel the vibration.
Those virgules (diagonal slashes like /this/) in the far left-hand column are secret linguist code. These virgules tell linguists that the symbols or materials between the slashes represent a physical sound rather than a written symbol. For instance, this example refers to the sound of the word spoon: /spun/. On the other hand, this example refers to the way we write the word spoon, the way it appears on a page of text: <spoon>. The slashes and brackets look a bit like html code, don't they? You should keep that difference between // and <> straight in your mind, or the next few sections will be really confusing. When you are done looking at the material on this page, click here to move on to the vowels.
- This webpage is adapted from materials Professor James Boren designed for his Chaucer students at the University of Oregon. Any errors in this webpage are the result of my own scribal corruptions rather than a product of the original work. --Kip Wheeler
http://web.cn.edu/kwheeler/gvs_consonants.html
4.15625
Module 4 - physical and chemical parameters
Waterwatch Australia Steering Committee
Environment Australia, July 2002
ISBN 0 6425 4856 0
Turbidity: opacity or muddiness caused by particles of extraneous matter; not clear or transparent.
In general, the more material that is suspended in water, the greater is the water's turbidity and the lower its clarity. Suspended material can be particles of clay, silt, sand, algae, plankton, micro-organisms and other substances. Turbidity affects how far light can penetrate into the water. It is not related to water colour: tannin-rich waters that flow through peaty areas are highly coloured but are usually clear, with very low turbidity. Measures of turbidity are not measures of the concentration, type or size of particles present, though turbidity is often used as an indicator of the total amount of material suspended in the water (called total suspended solids). Turbidity can indicate the presence of sediment that has run off from construction, agricultural practices, logging or industrial discharges.
Suspended particles absorb heat, so water temperature rises faster in turbid water than it does in clear water. Then, since warm water holds less dissolved oxygen than cold water, the concentration of dissolved oxygen decreases. If penetration of light into the water is restricted, photosynthesis of green plants in the water is also restricted. This means less food and oxygen is available for aquatic animals. Plants that can either photosynthesise in low light or control their position in the water, such as blue-green algae, have an advantage in highly turbid waters. Suspended silt particles eventually settle into the spaces between the gravel and rocks on the bed of a waterbody and decrease the amount and type of habitat available for creatures that live in those crevices. Suspended particles can clog fish gills, inducing disease, slower growth and, in extreme cases, death. Fine particles suspended in water carry harmful bacteria and attached contaminants, such as excess nutrients and toxic materials. This is a concern for drinking water, which often requires disinfection with chlorine to kill harmful bacteria.
Turbidity is affected by a range of conditions and activities in the catchment. Regular turbidity monitoring may detect changes to erosion patterns in the catchment over time. Event monitoring (before, during and immediately after rain) above and below suspected sources of sediment can indicate the extent of particular runoff problems.
Turbidity can be measured using a Secchi disk, a turbidity meter or a turbidity tube. The turbidity tube is adequate for most purposes, but if your waterways are generally very clear a turbidity meter may be more suitable. The Secchi disk is useful only in non-flowing, relatively deep water. Turbidity is best measured on-site in the field, but if necessary it can be measured later, within 24 hours of sampling, provided the sample bottles are filled completely, leaving no air gap at the top.
A Secchi disk allows you to measure the water's transparency. The clearer (less turbid) the water, the greater the depth to which the disk must be lowered before it disappears from view. This is why Secchi disks are not useful in shallow water. The main advantage of Secchi disks is that they are cheap and easy to use. The Secchi disk is a black and white 20 cm diameter disk which is attached to a long tape measure or cord marked in metres. Ask your Waterwatch coordinator how to obtain a Secchi disk.
Lower the disk into the water until it disappears and then raise it until it reappears. The depth (as indicated on the tape) at which you can see the disk is the Secchi disk reading (see Figure 4.6). Turbidity is a relative measure. It is usually expressed as nephelometric turbidity units (NTU) or as metres depth. Other units, such as formazin turbidity units (FTU) or Jackson turbidity units (JTU), are specific to particular types of turbidity meter and their methods of calibration. They are not absolute measures. (They can be converted to NTU by calculation, if you need to, and the method may be explained on the instrument.) Turbidity meters measure the intensity of a light beam when it has been scattered by particles in the water. They are effective over a wide range - from 0 to 1000 NTU. The turbidity tube reads turbidity by absorbing light rather than scattering light, so it overestimates turbidity in samples that are highly coloured and underestimates turbidity in samples containing very fine particulates, such as clay. However, it is very simple to use and gives good comparative measures. The turbidity tube is a long thin clear plastic tube, sealed at one end with a white plastic disc with three black squiggly lines on it (seen when looking down the tube). The tube has a scale marked on the side. Your Waterwatch coordinator can tell you how to obtain a turbidity tube. *The scale is non-linear (logarithmic) and there are gaps between numbers. When the water level is between two numbers, record the value as less than the last number. If you can see the wavy lines when the water is at the top, record the result as 'less than 10 NTU'. Wash the turbidity tube thoroughly with tap water and ensure the tube is kept clean and free from contamination. No calibration is required and the tube reads from 10 to 400 NTU. Make sure the Secchi disk is clean. Have a second person check the result. Check the accuracy of markings on the Secchi disk cord. Perform the test in shade if possible. Undertake a field replicate test every 10 samples. For river monitoring, Secchi disks have limited use because the river bottom is often visible. Also, the disk is often swept downstream by the current, making accurate measurements impossible. You may consider using a Secchi disk if you wish to monitor the clarity of a lake or deep slow moving river or estuary, and the water is too clear - i.e. less than 10 nephelometric turbidity units (<10 NTU) - for accurate turbidity tube readings, and your group cannot afford to buy a turbidity meter. To avoid error make sure the turbidity tube is clean and free of scratches. Perform the test in shade if possible. Have a second person check the result. Note if highly coloured water is present, because it may elevate the readings. Test a field replicate every 10 samples. Natural (or background) turbidity levels in waterways vary from <1 NTU in mountain streams to hundreds of NTU during rainfall events or in naturally turbid waters. Turbidity is affected by river flow, so be sure to measure the river flow when you collect your sample. Interpreting turbidity readings requires information about the natural turbidity in your area. There are large variations in turbidity in Australian river systems; inland rivers tend to be naturally more turbid than coastal rivers. Find out the normal range in your catchment from your Waterwatch coordinator, natural resource management agency or local council. 
Then, over a series of measurements, build up a picture of the turbidity and its variation in your own waterbody. If, on any particular sampling occasion, you record values that differ markedly from those expected for that time of year or flow rate, you should contact your Waterwatch coordinator and ask about the relevant trigger values discussed in the revised national water quality guidelines (ANZECC/ARMCANZ 2000).
http://www.waterwatch.org.au/publications/module4/turbidity.html
4.65625
Lesson Plans and Worksheets
Poisoning Protection Teacher Resources
Students explore poison prevention. In this poison prevention instructional activity, students define poisons and discover what poisons are and how they can harm people. Students hear about different examples of poisoning and view examples. Students do worksheets and take letters home to their parents about poisons.
Middle schoolers research and explore the safety considerations around exposure to poisons in real-world situations. They review, discuss, and investigate the types, sources, effects of, and responses to poisons by creating general questions to pose to the whole class. In addition, they access the internet to view the website of the Regional Poison Control Center.
Investigate common poisons and how to stay safe with your class. Students will be able to identify common poisons and the symbols that represent poison (i.e., skull and crossbones). Additionally, they will compare and contrast candy and medications and study poison prevention. They will also research common poisons on the Internet.
Text intensive, this presentation lists 51 guidelines for safety in the chemistry laboratory. If you choose to use it, make sure to demonstrate the safe procedures along the way and point out where the fume hood, first aid kit, eyewash station, shower, fire extinguisher, and fire blanket are located. Provide copies of the guidelines for young chemists to keep and ask them to sign a set in agreement to follow them.
http://www.lessonplanet.com/lesson-plans/poisoning-protection
4.46875
Research has shown that "good readers" utilize specific strategies that increase their understanding of text. These comprehension strategies help us engage with the text and enhance our enjoyment of reading. "True comprehension goes beyond literal understanding and involves the reader's interaction with text. If students are to become thoughtful, insightful readers, they must extend their thinking beyond a superficial understanding of the text." Stephanie Harvey and Anne Goudvis
Each month the school will focus on one of these comprehension strategies. Although we use these strategies simultaneously as we read, and each strategy may be discussed at other times during the school year, these will be the focus strategies for those months. Click on the strategy below to learn more and discover ways you can help your child gain greater meaning from reading.
http://ccusd93.org/education/components/scrapbook/default.php?sectiondetailid=23433