What Is a Steady State Ecosystem?

Some ecosystems exist in a steady state, or homeostasis. In steady-state systems, the amount of input and the amount of output are equal: any matter entering the system is matched by matter exiting the system. An ecosystem includes living organisms and the environment that they inhabit and depend on for resources. Environmental scientists who study system interactions, or system dynamics, have identified a few important patterns in these interactions.

Some lakes exist as steady-state systems in terms of their water volume. For example, a lake that has a stream feeding water into it may also be losing water that soaks into the ground or exits by another stream. In this way, even though the stream provides a constant input of water into the lake, the lake also experiences a constant and equal output of water. As a result, the total amount of water within the lake stays the same.

Most systems continually shift inputs and outputs to maintain a steady state. Your body temperature, which remains fairly constant, is one example. When your body gets too hot, it releases heat through sweating; when your body gets too cold, it generates more heat through shivering. In this way, your body keeps your temperature at a steady state by making minor adjustments to its energy inputs and outputs.

Like your body temperature, many natural systems respond to inputs by adjusting outputs. In fact, maintaining a steady state without change is difficult (and rare), so as systems try to reach equilibrium, they constantly shift inputs and outputs. The adjustments that a system makes as inputs enter or outputs exit are called feedbacks. The two types of feedbacks are:

- Negative feedbacks: These feedbacks slow down or suppress changes, sometimes helping the system return to a steady state.
- Positive feedbacks: These feedbacks amplify change, sending the system further away from a steady state.
Feedbacks often set off a chain of changes, called a feedback loop, in the system. For example, the internal regulation of your body temperature is a negative feedback loop. A change in your body temperature triggers parts of the system (your body) to respond by increasing (shivering) or decreasing (sweating) the temperature and sending it back toward a steady state, thus suppressing change.

On the other hand, population growth can create a positive feedback loop. When more births occur, the next generation has more people to have more babies. In time, these babies grow up to have more babies, who grow up to have more babies, and so on. Thus, positive feedback loops can lead to runaway effects, sending a system far from its steady state.

In the context of systems, the terms positive and negative don't mean good and bad. In fact, positive feedbacks are often more dangerous than negative feedbacks because they move a system further from stability.
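The difference between the two loop types can be illustrated with a toy simulation. This is a minimal sketch with invented constants (set point, gain, birth rate), not a model from the article; Python is used for illustration:

```python
# Toy models of the two feedback types described above.
# All constants (set point, gain, birth rate) are invented for illustration.

def thermoregulate(temp, set_point=37.0, gain=0.5, steps=10):
    """Negative feedback: each adjustment opposes the deviation,
    nudging the system back toward its steady state."""
    history = [temp]
    for _ in range(steps):
        temp -= gain * (temp - set_point)  # sweat/shiver response
        history.append(round(temp, 3))
    return history

def population(pop, birth_rate=0.1, steps=10):
    """Positive feedback: each generation's growth feeds the next,
    driving the system away from any steady state."""
    history = [pop]
    for _ in range(steps):
        pop += birth_rate * pop  # more people -> more births
        history.append(round(pop, 1))
    return history

print(thermoregulate(40.0))  # converges toward the 37.0 set point
print(population(1000.0))    # grows faster each step (runaway effect)
```

Run as-is, the first series settles near 37.0 while the second multiplies by roughly 1.1 each step, which is the runaway behaviour described above.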
http://www.dummies.com/how-to/content/what-is-a-steady-state-ecosystem.html
The science of eclipses

Did you know that the ancient Greeks were able to work out the diameter of the Earth using data from lunar eclipses? The study of Earth's shadow projected on the Moon allows us to deduce that Earth is spherical, and the ancient Greeks worked this out. Using lunar eclipse timing, as far back as the third century BC, Aristarchus of Samos estimated the lunar diameter. Using Eratosthenes' measurement of Earth's diameter, he deduced the Earth-Moon distance. Hipparchus (c. 150 BC) and Ptolemy (2nd century AD) refined the measurements of the lunar diameter and the Earth-Moon distance with impressive precision.

In the 17th century, in order to improve longitude determination, absolute cartography made use of lunar eclipse phenomena, which were observable simultaneously from different points. Today, during lunar eclipses, laser-ranging measurements can be made with great accuracy using reflectors placed on the Moon during the Apollo and Lunokhod missions. This has allowed more precise measurement of lunar acceleration and of the slowing of Earth's rotation. Analysis of light refracted through Earth's atmosphere during lunar eclipses has also made it possible to show that atmospheric ozone is confined to a layer between 50 and 80 kilometres above Earth's surface.

Eclipses and scientific discovery

The ancient Greeks and Romans used dated references to eclipses to improve the calendar. They also noted phenomena related to eclipses. The corona seen during eclipses was only identified as a solar phenomenon in the middle of the 19th century. Until then it was thought that the corona might come from terrestrial smoke, or that it indicated a lunar atmosphere. Kepler attributed it to solar light refracted by the atmosphere of the Moon. Even Halley (who successfully predicted the path of the 1715 eclipse) and Arago interpreted the corona to be lunar in origin. It was Cassini who established a link with the solar zodiacal light in 1683.
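The Greek chain of reasoning mentioned at the start of this section (shadow-transit timing gives the Moon's size relative to Earth; the Moon's apparent size then gives its distance) can be reproduced with modern figures. This is a rough reconstruction using present-day values, not the ancients' own numbers:

```python
import math

earth_diameter_km = 12_742         # modern value for Earth's diameter
moon_to_earth_ratio = 0.27         # Moon/Earth size ratio, deducible from
                                   # how long the Moon takes to cross
                                   # Earth's shadow during a lunar eclipse
moon_angular_diameter_deg = 0.52   # Moon's apparent diameter in the sky

moon_diameter_km = moon_to_earth_ratio * earth_diameter_km

# Small-angle approximation: distance = true size / angular size (in radians)
distance_km = moon_diameter_km / math.radians(moon_angular_diameter_deg)

print(f"Moon diameter ~ {moon_diameter_km:,.0f} km")
print(f"Earth-Moon distance ~ {distance_km:,.0f} km")
```

The result lands within a few percent of the modern mean distance of about 384,400 km, which is why eclipse timing was such a powerful tool for ancient astronomy.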
The British amateur astronomer Francis Baily observed, during the 1836 annular eclipse, the irregularities of the lunar limb (the bright points at the Moon's edge now known as Baily's beads). The first successful photograph of a total eclipse was taken in July 1851 by Berkowski from Königsberg. At the 1860 eclipse, photographs obtained by W. De La Rue and A. Secchi from two sites 500 km apart showed that prominences could not belong to the Moon, but were in fact of solar origin.

From 1842, the use of spectroscopes allowed the recognition of hydrogen emission, as well as a new, unknown emission line measured by Janssen at the 1868 eclipse from India. This line was later shown by Ramsay (in 1895) to come from an element then unknown on Earth, which was therefore given the name 'helium', now measured as the second most abundant element in the Universe. Coronal eclipse spectra taken in 1869 also showed mysterious green and yellow spectral lines, at first attributed to another unknown element given the name 'coronium'. It was only much later, after the development of quantum mechanics and the measurement of spark discharge spectra by Bowen and Edlén (1939), that the physicist Grotrian was able to solve the mystery of coronium. Grotrian showed that these mysterious transitions in fact come from iron in a very high state of ionisation due to the extreme temperature of the corona (iron having lost nine electrons for the red coronal line and 13 electrons for the green coronal line). This can only occur at temperatures exceeding a million degrees. This discovery led to another puzzle, still unsolved today but one to which SOHO has unveiled fundamental clues: what heats the corona?

Another famous eclipse, in 1919, allowed Arthur Eddington to confirm the space-time distortion in a gravity field predicted by Einstein's general relativity. An earlier German expedition to conduct this test, in August 1914, failed when the team was taken prisoner in Russia before being able to perform the key experiment.
In 1919, Eddington selected two observing sites, in Brazil and on Principe Island. The eclipse pictures showed an offset in the positions of stars, due to solar gravitational bending of light, that confirmed Einstein's theory.

What can be measured during solar eclipses?

Eclipses made it possible to determine with precision the shape of the Moon, and their study improved the prediction of ephemerides. Even today, a total solar eclipse still allows astrophysicists to make valuable scientific measurements, particularly when co-ordinated with measurements from observatories in space. Solar eclipses enable scientists to measure accurately the diameter of the Sun and to search for variations in that diameter over long time scales. Geophysicists measure eclipse phenomena induced in the high terrestrial atmosphere. Total solar eclipses allow the observation of structures of the solar corona that cannot usually be studied because the daytime sky is normally far brighter than the corona.

The structures in the corona are similar to the patterns seen around a magnet. In fact, sunspots were shown to be solar surface magnetic structures, which have their counterpart in the corona. The study of the solar corona gives us much information about the Sun's surface and its global variations. The morphology of the corona changes as the surface magnetic field reorganises during the solar cycle, which can be seen in eclipse pictures taken at different epochs. The re-analysis of historical eclipse reports and documents could help us understand long-term solar magnetic variations. One can follow these magnetically confined structures deep into the interplanetary medium. Eclipses make it possible to diagnose the physical conditions of temperature (more than 1 million degrees), density and dynamics, both in the corona and at the base of the sources of the solar wind.
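The deflection Eddington set out to measure follows from general relativity's prediction for a light ray grazing the solar limb, delta = 4GM/(c^2 R). A quick check with standard constants (values taken from reference tables, not from this article):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# Relativistic deflection of a light ray grazing the solar limb
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(f"deflection ~ {deflection_arcsec:.2f} arcsec")  # about 1.75 arcsec
```

A purely Newtonian calculation gives half this value, so an offset of about 1.75 arcseconds in stellar positions, rather than about 0.87, is what singled out Einstein's theory.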
These dynamic instabilities and the solar wind pervade the whole solar system and interact with Earth's magnetosphere.

Artificial eclipses and coronagraphs

Until the invention of the coronagraph in 1930, the rare glimpses afforded by solar eclipses were the only opportunities to observe and study the solar corona. The French astrophysicist Bernard Lyot developed the coronagraph, an instrument that made it possible for the first time to occult the solar disk in order to study the inner corona (creating an artificial eclipse). Coronagraph observation is still limited by the stray light scattered by the daytime atmosphere, and only works from clear, high-altitude sites such as the Pic du Midi, Sacramento Peak and Hawaii observatories. Used with additional filtering techniques to isolate specific emission, it has given interesting results. Lyot made a spectacular movie using the Pic du Midi instrument, showing giant prominences, arches and coronal mass ejections. Unfortunately, he died soon after a total solar eclipse expedition to Khartoum in 1952.

The solar corona from space

A revolution in the study of the corona came with the space age. In early sounding-rocket experiments, extreme-ultraviolet (EUV) and X-ray telescopes gave a view of the Sun very different from that previously seen in visible light. X-ray radiation arises from high-temperature coronal plasmas, and with these telescopes the corona can be mapped over the whole solar disk, not only above the limb as during eclipses. The X-ray and EUV instruments on the Skylab platform provided motion pictures of the solar corona, with the discovery of coronal holes of low X-ray emission and of changes in strongly emitting active magnetic regions. Space coronagraphs on the Solar Maximum Mission, launched in 1980, also mapped the outer corona in visible light, extending to long timescales the previously rare coronal snapshots obtained during eclipses.
The Japanese X-ray satellite Yohkoh, launched in 1991, has obtained millions of X-ray images of the dynamic solar corona. The ultimate observations of the solar corona are now being obtained with SOHO, the Solar and Heliospheric Observatory. These include data from the Extreme ultraviolet Imaging Telescope (EIT), the spectro-imagers CDS and SUMER, an ultraviolet coronagraph (UVCS) measuring intensities and flows in the corona, and a three-channel visible-light coronagraph (LASCO) covering an impressive range of distances, from 0.2 to 30 solar radii above the limb. In addition, the MDI experiment maps the surface magnetic field, and in-situ particle detectors measure the solar wind and its instabilities about 1.5 million kilometres sunward of Earth, before they reach us. SOHO now gives us a continuous view of the solar corona.

Co-ordinated eclipse-space observations

In this era of orbiting solar observatories, is there still a scientific benefit in making eclipse observations from Earth? The biggest benefit comes from co-ordinating modern ground-based eclipse observations with space measurements. There are still new discoveries to be made from eclipses, using the latest methods of investigation: very accurate timing, fast measurement rates, wavelengths not covered from space (such as the infrared and visible ranges), and new experimental techniques. The interpretation of eclipse data together with space data gives us new insights into earlier eclipse observations, and also allows the study of the long-term historical variability of the solar corona and of the solar magnetic cycle. Since the launch of SOHO in 1995, co-ordinated campaigns have been conducted during the total solar eclipses of 26 February 1998 and 11 August 1999. SOHO measurements are analysed together with ground-based eclipse results, providing important insights into the nature of our Sun.

Last update: 28 September 2004
http://www.esa.int/Our_Activities/Space_Science/The_science_of_eclipses/(print)
The Wonder Behind the Wizard of Oz, by Cleo M. Coppa Guide Entry to 95.02.02: My unit is intended for seventh and eighth grade drama students. This unit is structured to be taught over a 6-8 week period. One of the major objectives of my unit is to expose my students to how the classic film “The Wizard of Oz” was created. In order to accomplish this objective, I have reviewed the process in which the L. Frank Baum novel The Wizard of Oz was adapted into the 1939 screenplay written by Noel Langley, Florence Ryerson and Edgar Allan Woolf. I have provided some background information on the author, L. Frank Baum. I think students will find Baum a very interesting character. In addition, I have provided a set of study questions for each chapter in Baum’s unabridged version of his novel that I feel teachers will find useful. Finally, a major portion of my unit compares the book to the film.
http://www.yale.edu/ynhti/curriculum/guides/1995/2/95.02.02.x.html
Reading For Meaning: Tutoring Elementary Students to Enhance Comprehension

This article provides tutors with proven techniques for helping students acquire comprehension skills and strategies. In addition to building background knowledge about comprehension, it looks at six comprehension strategies and activities that support each strategy.

Imagine three different children reading the following page from the popular story, M & M and the Bad News Babies, by Pat Ross.

Mandy put a pink sea castle into the fish tank. Mimi added six yellow stones that glowed in the dark. The friends M and M had been fixing up the old fish tank all week. "Now all we need are the fish," said Mimi. "But fish cost money," said Mandy.

Think about what the following readers did to understand this passage.

Reader 1: Mark is familiar with other stories about the two friends M and M by Pat Ross and already knows that this story is an adventure about two girls named Mandy and Mimi. With this knowledge, he made a connection between what he already knows about the book series and this story. Mark knows that the author gives clues about the girls' adventures in the title, so he predicted that the two girls would try to earn money to buy fish for their fish tank and that their attempts would result in mishap. Finally, he ended by asking himself, "What kind of trouble will Mandy and Mimi get into in this story?" and turned the page to read more.

Reader 2: Lizzy is not familiar with other stories about M and M. Instead of making a text-to-text connection, she quickly previewed the text and activated her prior knowledge about fish tanks. Lizzy made a text-to-self connection between her prior knowledge and the information in the story. She knows that people put different things in their fish tanks to decorate them and that fish cost money. Finally, she asked herself, "How will Mandy and Mimi earn money for the fish they want?" and turned the page to read more.
Reader 3: Paul also quickly previewed the text, realizing that he doesn't know anything about fish or fish tanks. He imagined that the friends must have a new hobby, pet, or homework assignment. While he doesn't know about fish, Paul does know about earning money by doing chores. He made a text-to-world connection between his prior knowledge and clues from the passage. He ended his reading by asking himself, "I wonder if Mandy and Mimi will do chores to earn money for the fish they want?" and turned the page to read more.

These examples demonstrate three paths to understanding the passage. Each path requires "in the head" strategies before, during, and after reading. The different paths help demonstrate that reading is an active thinking process and reveal what good readers do as they read. All children — whether struggling or proficient readers — benefit from learning to internalize and apply these comprehension strategies.

This article will provide you, the tutor, with proven techniques for helping students acquire comprehension skills and strategies. Using these strategies to support students' developing abilities to read strategically and actively will make your work more effective. In addition to building your own background knowledge about comprehension, you'll explore six comprehension strategies and follow a tutor, Tina, and her student, Allison, through activities and conversations that support each strategy.

What is comprehension?

Comprehension is the "essence of reading" (Durkin, 1993). It is a complex thinking process that requires the reader to construct meaning from the text. The well-known children's author Katherine Paterson describes the relationship between reader and writer this way: "Once a book is published, it no longer belongs to me... The work now belongs to the creative mind of my readers... It's a wonderful feeling when readers hear what I thought I was trying to say, but there is no law that they must.
Frankly, it is even more thrilling for a reader to find something in my writing that I hadn't until that moment known was there" (Paterson, 1981).

Children need explicit instruction in reading comprehension. The role of the tutor is to help children become aware of the variety of problem-solving strategies that enable them to independently understand, discuss, and interpret text. Children who are given this kind of support become more proficient readers (National Reading Panel, 2000). Let's look at what good readers do.

Good readers have strong listening comprehension skills. Comprehension develops through reading and listening to texts read aloud (Honig, Diamond, & Gutlohn, 2000). For young children and beginning readers, listening to someone read aloud provides opportunities for them to comprehend text they would not be able to read for themselves (Gillet & Temple, 1994). Developing children's listening comprehension helps them become more skillful at text comprehension (Fountas & Pinnell, 1996).

Good readers recognize that reading is more than decoding words. Decoding is the ability to sound out a written word and figure out the spoken word it represents. While children cannot understand text they cannot decode, it is also true that decoded words are meaningless unless they are understood (Maria, 1990).

Good readers make connections. Good readers experience the wonderful sensation of getting lost in text. They relate what they read to other books, to their own experiences, and to universal themes and the world around them. These types of connections are called text-to-text, text-to-self, and text-to-world connections (Keene & Zimmerman, 1997).

Good readers think about their thinking. Good readers are aware of their own thought processes (Honig et al., 2000). Irvin (1998) points out that explicit instruction in comprehension skills helps develop children's metacognition — the ability to think about their thinking.
Good readers use metacognition to "think about and have control over their reading" (Armbruster, Lehr, & Osborn, 2001).

Good readers read a lot of good books! To be good readers, children need to read a lot. Allington (2001) points out that reading practice is a powerful contributor to the development of accurate, fluent, high-comprehension reading. Your work as a tutor provides not only additional learning time but additional reading time for the children you work with. Increasing the volume of children's reading and helping them develop comprehension strategies are characteristics of effective reading support (Donahue, Voelkl, Campbell, & Mazzeo, 1999).

What are the comprehension strategies?

Research has shown that a major aspect of reading instruction is transforming comprehension skills into explicit strategies we teach to students (Simmons & Kameenui, 1998). Children need to learn to use comprehension strategies before, during, and after they read. Tutors need to explicitly model comprehension strategies and help students understand when and how to use them (Honig et al., 2000).

The Comprehension Strategies
- Activating prior knowledge
- Answering and generating questions
- Making and verifying predictions
- Using mental imagery and visualization
- Monitoring comprehension
- Recognizing story structure

The Tutor's Role
- Provide an explicit description of the strategy and when it should be used
- Model the strategy
- Collaboratively use the strategy in action
- Guide your tutee in practice using the strategy
- Allow the student to use the strategy

Activating prior knowledge

Tina, the tutor, invites her student, Allison, to read the title of the book, M&M and the Bad News Babies, and then preview the book.

Tina: What does the title tell you about the story?
Allison: It's about babies who get into trouble.
Tina: Let's take a look at the first chapter and see what we can find out.
Allison: Look, they have a fish tank. I have fish, too.
My fish live in a fish bowl, not a big tank. (Allison points to the picture of the tank on the page.)
Tina: Allison, you are doing exactly what good readers do before they read. Good readers preview the book and think about what they already know about the topic. As we continue to read, keep in mind what you know about babysitting and doing chores. This may help you understand the story.

Why It's Important

Good readers make use of their prior knowledge and experiences to help them understand what they are reading. When a student activates her prior knowledge, the resulting connection provides a framework for any new information she will learn while reading (Graves, Juel, & Graves, 1998). This also helps ensure that the reader will remember the text after reading.

How to Support Your Tutee
- Page through the book and ask students what they already know about the topic, broad concept, author, or genre. For example, Tina learns that Allison has a fish tank, like the characters in the story.
- Draw the student's attention to key vocabulary or phrases. Tina draws Allison's attention to topic words such as fish tank, babysitting, and twins.
- Talk about print and text features and the way the text is organized. For example, Tina points out that the text is divided into four chapters.

Another strategy for activating prior knowledge: the K-W-L chart. In addition to previewing the book with Allison, Tina decides to use a K-W-L chart (an example follows) as a way to explain the strategy further. K-W-L charts are especially helpful with nonfiction or expository text. Before reading, draw a K-W-L chart like the one below on a sheet of paper.
- What I Know: In the K column, list what the child already knows about the topic. If necessary, model a response to get the conversation started.
- What I Want To Know: Then point to the W column and ask the student what he would like to learn by reading the text. Write responses in the form of questions.
Use the questions to help set a purpose for reading.
- What I Learned: While reading, turn the student's attention to the W column. As he discovers the answers to his questions, record them and any new learnings in the L column. After reading, help your student summarize the text using all three columns.

Answering and generating questions

Before beginning the next session with Allison, Tina invites her to talk about what they read last time.

Tina: Do you remember what was happening when we read last week?
Allison: (Pauses) Well, they had to babysit. The mom left the babies.
Tina: Right. How do you think Mandy and Mimi felt about that?
Allison: Well, it didn't really seem like they wanted to babysit. But they did it anyhow.
Tina: I wonder why they decided to do it. Do you have any ideas?
Tina: I'm going to reread the section. Listen to find out why the girls decided to babysit the twins. (Reads through the line, "I'd pay you," offered Mrs. Green.)
Allison: That's right! They did it to get money to buy fish.

Why It's Important

Good readers ask questions before, during, and after reading. Asking, reflecting on, and answering questions enhances understanding. Encouraging students to ask themselves questions about the text will help them improve their comprehension of the story as well as recall selected elements. Questioning helps students:
- Understand the purpose for reading
- Focus their attention on what they are to learn
- Think actively as they read
- Monitor their comprehension
- Review content and relate what they have learned to what they already know

When children ask their own questions about text, they become more aware of their level of comprehension (Armbruster et al., 2001).
How to Support Your Tutee
- Ask open-ended questions that help students think actively about text
- Share your own questions as you read together
- Help students find clues in the text or use information they already have in their heads to answer questions
- Invite children to keep track of their questions by telling you or by writing them into a notebook or on a sheet of paper

Making and verifying predictions

After reading the first two chapters, Tina pauses to model how to make predictions, or good guesses, about the story.

Tina: Allison, let's think about what might happen next in the story. What kind of a contest could they have?
Allison: I don't know. Something else is going to happen with the babies. Maybe they will have a race with the babies — a crawling race.
Tina: (Writes down Allison's two predictions.) Using clues from the story to think about what might happen next is called making predictions, or very good guesses. Good readers think about what can happen in a story and then read to find out if their prediction was correct. People who read a lot are constantly making predictions. Let's read the next two pages to see how the story matches up with your predictions.
Allison: Great, let's see which one will win!

Why It's Important

Good readers make and confirm predictions when they form a connection between prior knowledge and new information in the text. Proficient readers have learned how to make informed predictions about what they read, read to confirm those predictions, and revise or make new predictions based on what they find out.

How to Support Your Tutee
- Before reading, model how to use all available information (e.g., title of the book, prior knowledge, genre, author) to make a prediction. Useful questions include:
  - What clues about the story does the title provide?
  - What does the illustration on the cover make you think of?
  - Is this a real or make-believe story? How can you tell?
- Remind your tutee that many predictions may be wrong, and that's okay.
- As your tutee reads, prompt her to confirm, revise, and make new predictions.
- After reading, review and evaluate predictions made before and during reading.

Using mental imagery and visualization

Tina: Allison, one of the strategies readers use to help them really understand a story is making pictures in their minds. It's called visualization. Try to visualize where the twins might be.
Allison: I can see them looking at the fish tank.
Tina: Keep going, what are they doing?
Allison: Well, they have to climb up to get a good look. Oops! They knock it over. The yellow stones are rolling everywhere.
Tina: You're doing a good job of seeing things that are suggested by the story. Visualizing while you read is important. It's like making a movie of the story in your mind.

Why It's Important

Visualization is a type of inference, or informed guess, about the text. Readers make a visual representation in their minds of what they read. By using prior knowledge and background experiences, readers connect the author's writing with a personal picture. Through guided visualization, students learn how to create mental pictures as they read.

How to Support Your Tutee

To practice visualization, invite your student to listen carefully as you suggest some things he is going to see in his mind. Then describe an everyday object — one that is not within view — such as an animal, something to eat, a piece of sports equipment, or an article of clothing. Give the student a few moments to form an image in his mind. Then invite him to name and describe more details, such as color, size, shape, and smell. Vary this activity by allowing the student to choose the subject of the visualization himself. Use guided visualization either to prepare students for reading or to deepen understanding as they read. For example, have your student reread a passage from a book that describes something that is not pictured.
You might say: The sentence that reads, "My mother is downstairs fixing her bike," said Mandy, leads me to picture a mother wearing jeans and sneakers working to fix a flat bike tire. What do you see?
- What sentences helped you make your mental picture?
- Were your images of the characters the same or did they change? Why?
- Did making your mental picture give you any new ideas or questions about the story?

Monitoring comprehension

It becomes clear that Allison misunderstood a part of the story, M&M and the Bad News Babies. The text reads, Then they made silly faces at each other. Their silly faces looked just like two babies sucking on bottles. Under the text is an illustration of the two girls. Allison looks at the picture and thinks that it shows the two girls sticking out their tongues at each other.

Allison: They look like they're angry but that wouldn't make sense.
Tina: Think about what you should do when you come to a point in a story where something doesn't make sense.
Allison: (Rereads the text and finds the word silly.) Oh, I see. This word is "silly." They are making a silly fish face.
Tina: Does the story make sense now?
Allison: Yes, because the faces look like babies sucking a bottle.

Why It's Important

Good readers monitor their comprehension, while poor readers are less likely to do so (Simmons & Kameenui, 1998). Beginning readers are also less likely to use strategies to keep their reading on track (Paris & Oka, 1986).

How to Support Your Tutee

Help your tutee learn to:
- Be aware of what they do understand
- Identify what they do not understand
- Use appropriate "fix-up" strategies to resolve problems in comprehension

Recognizing story structure

After reading M & M and the Bad News Babies, Tina and Allison flip back through the book to better understand what happened first, next, and last in the story.

Tina: (Writes the words Beginning, Middle, and Ending onto a sheet of paper.) Allison, let's think back to the beginning of the story. What happened first?
Allison: M and M were fixing up an old fish tank.
Tina: Good, then what happened next?
Allison: The neighbor, I think her name is Mrs. Green, knocked on their door. (Allison turns to the part of the book to confirm the neighbor's name.) Yup, her name is Mrs. Green. Mrs. Green came by and asked M and M to babysit the twins.
Tina: Anything else important happen?
Allison: The babies got into a lot of trouble. They wrecked the plants and comic books. The girls thought the babies wrecked their fish tank, but they didn't.
Tina: What happened at the end of the story?
Allison: The girls used their babysitting money to buy fish. They named the fish after the twins.

Why It's Important

The setting of a story tells when and where the story takes place. Some stories have specific settings, while others occur at an indefinite time or place. Sometimes the setting changes within the story. Characters are the people, animals, and other individuals that populate a story. The main character is sometimes called the protagonist and generally drives the plot; the main character's rival is called the antagonist. The plot of a story tells what happened. It is the action of the story and gives it a beginning, middle, and ending. In general, the plot consists of the following:
- A problem that the main character must solve
- The steps the character takes to solve it
- The resolution of the problem
- How the story ends

The theme is the big idea that the author wants the reader to understand. Often the conclusion of the story reveals the theme.

How to Support Your Tutee

In addition to talking about and modeling thinking about what happened first, next, and last in the story, use a story map to help children understand the elements of story structure. See the Story Map below for an example.

As we think back on Tina and Allison, we realize that helping students become active, strategic readers requires working with them on key comprehension strategies while fostering a love of reading.
The strategies presented in this article encourage children to develop their metacognition, or to think about their own thinking while they read. Learning to read actively and purposefully helps children become proficient readers and prevents later reading difficulties. Your work with these strategies includes direct explanation, modeling, guided practice, and independent application. As a tutor, your work also includes modeling a love of reading and reading often. Remember, children need lots of experiences with good books and an environment that supports taking risks as readers. References Allington, R.L. (2001). What really matters for struggling readers: Designing research-based programs. New York, NY: Longman. Armbruster, B.B., Lehr, F., & Osborn, J. (2001). Put reading first: The research building blocks for teaching children to read, kindergarten through grade 3. Washington, DC: National Institute for Literacy. Donahue, P.L., Voelkl, K.E., Campbell, J.R., & Mazzeo, J. (with Donahue, J., Finnegan, R., et al.). (1999). NAEP 1998 reading report card for the nation and the states. Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved March 31, 2004, from http://nces.ed.gov/nationsreportcard//pdf/main1998/1999500.pdf Durkin, D. (1993). Teaching them to read (6th ed.). Boston, MA: Allyn & Bacon. Farstrup, A.E., & Samuels, S.J. (Eds.). (2002). What research has to say about reading instruction (3rd ed.). Newark, DE: International Reading Association. Fountas, I.C., & Pinnell, G.S. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann. Gillet, J.W., & Temple, C. (with Mathews, S.R., II, & Young, J.P.). (1994). Understanding reading problems: Assessment and instruction (4th ed.). New York, NY: HarperCollins College. Graves, M.F., Juel, C., & Graves, B.B. (1998). Teaching reading in the 21st century. Boston, MA: Allyn & Bacon. 
Honig, B., Diamond, L., & Gutlohn, L. (2000). Teaching reading sourcebook: For kindergarten through eighth grade. Novato, CA: Arena Press, & Emeryville, CA: Consortium on Reading Excellence. Irvin, J.L. (1998). Reading and the middle school student: Strategies to enhance literacy (2nd ed.). Boston, MA: Allyn & Bacon. Keene, E.O., & Zimmerman, S. (1997). Mosaic of thought: Teaching comprehension in a reader’s workshop. Portsmouth, NH: Heinemann. Maria, K. (1990). Reading comprehension instruction: Issues and strategies. Parkton, MD: York Press. National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Reports of the subgroups. Washington, DC: National Institute of Child Health and Human Development. Paris, S.G., & Oka, E.R. (1986). Children’s reading strategies, metacognition, and motivation. Developmental Review, 6(1), 25–56. Paterson, K. (1988). Gates of excellence: On reading and writing books for children. New York, NY: Lodestar Books. Pearson, P.D., & Duke, N.K. (2002). Comprehension instruction in the primary grades. In C.C. Block & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp. 247–258). New York, NY: Guilford Press. Ross, P. (1985). M & M and the bad news babies. New York, NY: Puffin Books. Simmons, D.C., & Kameenui, E.J. (Eds.). (1998). What reading research tells us about children with diverse learning needs: Bases and basics. Mahwah, NJ: Lawrence Erlbaum.
http://www.readingrockets.org/article/22800/
A watershed is the area of land that drains into a body of water such as a river, lake, stream or bay. It is separated from other watersheds by high points in the area such as hills or slopes. It includes not only the waterway itself but also the entire land area that drains to it. For example, the watershed of a lake would include not only the streams entering the lake but also the land area that drains into those streams and eventually the lake. Drainage basins generally refer to large watersheds that encompass the watersheds of many smaller rivers and streams. Humans have an impact on watersheds in a number of ways. One way people influence watersheds is by changing where stormwater flows. By changing the contour of the land and adding stormwater systems, people change how and where the water goes. The storm drains and catch basins you see along the sidewalks and streets lead to a system of underground pipes that drain directly to local waterways. So where the melted snowflake from your sidewalk goes may be down the storm drain, through stormwater pipes and out to the local river. Another way people affect a watershed is by adding potential pollution sources to the watershed. The type of pollutant a rain droplet might pick up on its way through a watershed depends in part on how the land it travels through is used. How the land in a watershed is used by people, whether it is farms, houses or shopping centers, has a direct impact on the water quality of the watershed. When it rains, stormwater carries with it the effects of human activities as it drains off the land into the local waterway. As rain washes over a parking lot, it might pick up litter, road salt and motor oil and carry these pollutants to a local stream. On a farm, rain might wash fertilizers and soil into a pond. Snow melt might wash fertilizers and pesticides from a suburban lawn. To reduce this pollution of stormwater, it's important to practice pollution prevention. 
That means preventing pollution at the source: recycling motor oil instead of pouring it onto the street, cleaning up after pets, putting trash into containers rather than littering and reducing our use of fertilizers, pesticides and deicers. What is the Water Cycle? For millions of years, the Earth's water has been constantly recycled and reused. It is important to understand how water moves through the Earth's water cycle, which is defined as the movement of water from the Earth's surface into the atmosphere and back to the Earth's surface again. When it rains, the rainwater flows overland into waterways or it is absorbed by the ground or plants. Water evaporates from land and water bodies, becoming water vapor in the atmosphere. Water is also released from trees and other plants through "transpiration." The water vapor from evaporation and transpiration forms clouds in the atmosphere, which in turn provide precipitation (rain, hail, snow, sleet) to start the cycle over again. This process of water recycling, known as the water cycle, repeats itself continuously. What is Ground Water? Where does the water that rains on your home go? After it leaves your lawn, street or sidewalk, where is it headed? Does it wander into wetlands? Does it puddle in your backyard? Does it zip down a sink hole? If it soaks into the ground, it becomes ground water. A sizable amount of rainwater runoff seeps into the ground to become ground water. Ground water moves into water-filled layers of porous geologic formations called aquifers. If the aquifer is close to the surface, its ground water can flow into nearby waterways or wetlands, providing a base flow. Depending on your location, aquifers containing ground water can range from a few feet below the surface to several hundred feet underground. Aquifer recharge areas are locations where rainwater and other precipitation seep into the Earth's surface to enter an aquifer. 
Contrary to popular belief, aquifers are not flowing underground streams or lakes. Ground water moves at an irregular pace, seeping through porous soils, from shallow to deeper areas and from places where it enters the Earth's surface to where it is discharged or withdrawn. A system of more than 100 aquifers is scattered throughout New Jersey, covering 7,500 square miles. Why is Ground Water Important? Ground water is the primary drinking water source for half of the state's population. Most of this water is obtained from individual domestic wells or public water supplies which tap into aquifers. New Jersey agriculture also depends on a steady supply of clean ground water for irrigation. Ground Water Complications Humans have an impact on ground water in a number of ways. One way people influence ground water is by changing where stormwater flows. By changing the contour of the land and adding impervious surfaces such as roads, parking lots and rooftops, people change how and where water goes. When it rains, the stormwater in a developed area is less able to soak into the ground because the land is now covered with roads, rooftops and parking lots. Less ground water will be recharged and more water will flow directly into streams and rivers. Another way people affect ground water is by adding potential pollution sources. How the land above ground water is used by people, whether it is farms, houses or shopping centers, has a direct impact on ground water quality. As rain washes over a parking lot, it might pick up road salt and motor oil and carry these pollutants to a local aquifer. On a farm or suburban lawn, snow melt might soak fertilizers and pesticides into the ground. When ground water is properly used, the amount pumped out for human purposes is less than what nature supplies to recharge the aquifer. If overused, more water is pumped out than is recharged. 
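The recharge-versus-withdrawal balance described above can be sketched as a toy annual water budget. The storage and recharge figures below are hypothetical round numbers chosen for illustration, not actual New Jersey data:

```python
# Toy annual water budget for an aquifer (hypothetical numbers, for
# illustration only -- not actual New Jersey figures).

def aquifer_storage(storage, recharge, withdrawal):
    """Return end-of-year storage after natural recharge and pumping."""
    return storage + recharge - withdrawal

storage = 1000.0   # million gallons currently in the aquifer (assumed)
recharge = 120.0   # million gallons recharged by precipitation per year (assumed)

# Proper use: pumping less than the yearly recharge leaves storage intact.
print(aquifer_storage(storage, recharge, withdrawal=100.0))  # 1020.0

# Overuse: pumping more than the recharge draws the aquifer down each year.
for year in (1, 2, 3):
    storage = aquifer_storage(storage, recharge, withdrawal=150.0)
    print(year, storage)  # storage shrinks by 30 million gallons per year
```

The point of the sketch is simply that whenever withdrawal exceeds recharge, storage declines every year, which is what makes an overused aquifer harder to use and more vulnerable to salt water intrusion.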
With less ground water in the aquifer, it becomes more difficult to use and more susceptible to pollution and salt water intrusion. Conserving water through efficient water use can help prevent pollution. Using less water reduces the runoff of agricultural pollutants such as pesticides and fertilizers. Diverting less water from waterways or aquifers leaves more water in streams or lakes, protecting existing ecosystems such as wetlands (which absorb certain types of pollution) and water supplies. Water conservation can also save money by reducing pumping and treatment costs both before water reaches your home and after it leaves. Reduced water use may extend the life of existing sewage treatment facilities. It can also eliminate the need to develop a new water supply. New wells and reservoirs are expensive and time-consuming to locate and build. How Does Urbanization Change a Watershed? Urbanization (or development) has a great effect on local water resources. It changes how water flows in the watershed and what flows in the water. Both surface and ground water are changed. As a watershed becomes developed, trees, shrubs and other plants are replaced with impervious surfaces (roads, rooftops, parking lots and other hard surfaces that do not allow stormwater to soak into the ground). Without the plants to store and slow the flow of stormwater, the rate of stormwater runoff is increased. Less stormwater is able to soak into the ground because sidewalks, roads, parking lots and rooftops block this infiltration. This means a greater volume of water reaches the waterway faster and less of that water is able to infiltrate to ground water. This, in turn, leads to more flooding after storms but reduced flow in streams and rivers during dry periods. The reduced amount of infiltrating water can lower ground water levels, which in turn can stress local waterways that depend on steadier flows of water. 
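The effect of replacing vegetation with pavement on peak runoff can be roughed out with the Rational Method (Q = C × i × A), a standard back-of-the-envelope formula in stormwater engineering. The runoff coefficients, drainage area, and storm intensity below are typical textbook values assumed for illustration, not figures from this article:

```python
def peak_runoff_cfs(runoff_coeff, intensity_in_per_hr, area_acres):
    """Rational Method: Q = C * i * A, giving peak flow in cubic feet per second."""
    return runoff_coeff * intensity_in_per_hr * area_acres

AREA = 10.0   # drainage area in acres (assumed)
STORM = 1.5   # rainfall intensity in inches per hour (assumed)

# Typical textbook runoff coefficients (assumed values):
wooded = peak_runoff_cfs(0.15, STORM, AREA)  # woods and lawns absorb most rain
paved = peak_runoff_cfs(0.90, STORM, AREA)   # pavement sheds nearly all of it

print(f"wooded site: {wooded:.2f} cfs, paved site: {paved:.2f} cfs")
print(f"paving multiplies peak runoff by {paved / wooded:.0f}x")
```

With these assumed coefficients, paving the same ten acres multiplies peak flow six-fold, which is the mechanism behind the flooding and low dry-weather flows described above.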
In the stream, more erosion of stream banks and scouring of channels will occur due to the increased volume of runoff. This degrades habitat for plant and animal life that depend on clear water. Sediment from eroded stream banks clogs the gills of fish and blocks light needed for plants. The sediment settles to fill in stream channels, lakes and reservoirs. This also increases flooding and the need for dredging to clear streams or lakes for boating. In addition to the high flows caused by urbanization, the increased runoff also contains increased contaminants. These include litter, cigarette butts and other debris from sidewalks and streets, motor oil poured into storm sewers, heavy metals from brake linings, settled air pollutants from car exhaust and pesticides and fertilizers from lawn care. These contaminants reach local waterways quickly after a storm. Stormwater Sewer Basics Stormwater flows into the stormwater system through a storm drain. These are frequently located along the curbs of parking lots and roadways. The grate that prevents larger objects from flowing into the storm sewer system is called a catch basin. Once below ground, the stormwater flows through pipes which lead to an outfall, where the stormwater enters a stream, river or lake. In most areas of New Jersey, the stormwater sewer goes directly to the local waterway without any treatment. In some areas of the state, the outfall may lead to a stormwater management basin. These basins control the flow of stormwater and can also improve water quality, depending on how they are designed. These basins are frequently seen in newer commercial and residential areas. In some older urban areas of the state, the stormwater and sanitary sewer systems may be combined. Here both stormwater and sewage from households and businesses travel together in the same pipes. Both stormwater and sewage are treated at sewage treatment plants except during heavy rains. 
During these occasions, both the stormwater and untreated sewage exceed the capacity of the treatment plant, and this overflow is directed into local waterways. Protecting Stormwater Sewers In the first rush of water from a rainstorm, much of the debris and other pollutants that had settled on the land surface and in the stormwater sewer since the last storm will be picked up and carried into the local stream. This can significantly add to water quality problems. It is therefore important to protect the stormwater system from sources of pollution. The following should never be dumped down storm drains, road gutters or catch basins: motor oil, pet waste, grass trimmings, leaves, debris and hazardous chemicals of any kind. Anything dumped in our stormwater collection systems will be carried into our streams. Controlling Stormwater Flow Managing stormwater to reduce the impact of development on local watersheds and aquifers relies on minimizing disruption of the natural flow of stormwater, in both quality and quantity. By designing with nature, the impact of urbanization can be greatly reduced. This can be accomplished by following these principles: - minimizing impervious surfaces; - maximizing natural areas of dense vegetation; - using structural stormwater controls such as stormwater management basins; and - practicing pollution prevention by avoiding contact between stormwater and pollutants. You Can Make a Difference in Your Own Backyard Managing stormwater in your own backyard is important. As an integral part of the watershed you live in, what you do in your backyard makes a difference. Here are some examples of what you can do at home: - Reduce impervious surfaces by using pavers or bricks rather than concrete for a driveway or sidewalk. - Divert rain from paved surfaces onto grass to permit gradual infiltration. - Landscape with the environment in mind. 
Choose the appropriate plants, shrubs and trees for the soil in your yard; don't select plants that need lots of watering (which increases surface runoff), fertilizers or pesticides. - Maintain your car properly so that motor oil, brake linings, exhaust and other fluids don't contribute to water pollution. - Keep stormwater clean. Never dump litter, motor oil, animal waste, or leaves into storm drains or catch basins. What is Nonpoint Source Pollution? Nonpoint source pollution, or "people pollution," is the contamination of our ground water, waterways, and ocean that results from everyday activities such as fertilizing the lawn, walking pets, changing motor oil and littering. With each rainfall, pollutants generated by these activities are washed into storm drains that flow into our waterways and ocean. They also can soak into the ground, contaminating the ground water below. Each one of us, whether we know it or not, contributes to nonpoint source pollution through our daily activities. As a result, nonpoint source pollution is the BIGGEST threat to many of our ponds, creeks, lakes, wells, streams, rivers and bays, our ground water and the ocean. The collective impact of nonpoint source pollution threatens aquatic and marine life, recreational water activities, the fishing industry, tourism and our precious drinking water resources. Ultimately, the cost becomes the burden of every New Jersey resident. But there's good news - in our everyday activities we can stop nonpoint source pollution and keep our environment clean. Simple changes in YOUR daily lifestyle can make a tremendous difference in the quality of New Jersey's water resources. Here are just a few ways you can reduce nonpoint source pollution. LITTER: Place litter, including cigarette butts and fast food containers, in trash receptacles. Never throw litter in streets or down storm drains. Recycle as much as possible. 
FERTILIZERS: Fertilizers contain nitrates and phosphates that, in abundance, cause blooms of algae that can lead to fish kills. Avoid the overuse of fertilizers and do not apply them before a heavy rainfall. PESTICIDES: Many household products made to exterminate pests also are toxic to humans, animals, aquatic organisms and plants. Use alternatives whenever possible. If you do use a pesticide, follow the label directions carefully. HOUSEHOLD HAZARDOUS PRODUCTS: Many common household products (paint thinners, moth balls, drain and oven cleaners, to name a few) contain toxic ingredients. When improperly used or discarded, these products are a threat to public health and the environment. Do not discard with the regular household trash. Use natural and less toxic alternatives whenever possible. Contact your County Solid Waste Management Office for information regarding household hazardous waste collection in your area. MOTOR OIL: Used motor oil contains toxic chemicals that are harmful to animals, humans and fish. Do not dump used motor oil down storm drains or on the ground. Recycle all used motor oil by taking it to a local public or private recycling center. CAR WASHING: Wash your car only when necessary. Consider using a commercial car wash that recycles its wash water. Like fertilizers, many car detergents contain phosphate. If you wash your car at home, use a non-phosphate detergent. PET WASTE: Animal wastes contain bacteria and viruses that can contaminate shellfish and cause the closing of bathing beaches. Pet owners should use newspaper, bags or scoopers to pick up after pets and dispose of wastes in the garbage or toilet. SEPTIC SYSTEMS: An improperly working septic system can contaminate ground water and create public health problems. Avoid adding unnecessary grease, household hazardous products and solids to your septic system. Inspect your tank annually and pump it out every three to five years depending on its use. 
BOAT DISCHARGES: Dumping boat sewage overboard introduces bacteria and viruses into the water. Boat owners should always use marine sanitation devices and pump-out facilities at marinas. As you can see, these suggestions are simple and easy to apply to your daily lifestyle. Making a commitment to change at least one habit can result in benefits that will be shared by all of us and add to the health and beauty of New Jersey's water resources.
http://www.nj.gov/dep/watershedrestoration/info.html
Speed of gravity Isaac Newton's mechanical systems included the concept of a force that operated between two objects: gravity. The quantity of force was dependent on the masses of the two objects, with more massive objects exerting more force. This led to a problem: it seemed that each object had to "know" about the other in order to exert the proper amount of force on it. This troubled Newton, who commented that he made no claims as to how it could work. Given two bodies attracting each other, the question then arises as to the speed of propagation of the force itself. Newton demonstrated that unless the force was instantaneous, relative motion would lead to the non-conservation of angular momentum. He could observe that this was not the case; in fact, the conservation of angular momentum was one of the observations that led to his theory of gravitation in the first place. He therefore concluded that gravity was instantaneous. Michael Faraday's work on electromagnetism in the mid-1800s provided a new framework for understanding electromagnetic forces. In these "field theories" the objects in question do not act on each other, but on space itself. Other objects react to that field, not to the distant object itself. There is no requirement for one object to have any "knowledge" of the other. With this simple change, many of the philosophical problems of Newton's seminal work simply disappeared while the answers stayed the same, and in many cases the answers were easier to calculate. By viewing gravity as being transmitted by a field rather than a force, it is possible for gravity to be transmitted at a finite speed without running into the problems that troubled Newton. If gravity is transmitted by a field, a moving object will cause the field potentials to be non-circular. Hence by using a delayed field rather than a delayed force, one can show that the force will point to where an object is currently rather than where it was in the past. 
Gravity still travels at a finite speed in this picture, because a sudden change in the direction of an object will not be noticed by the object it is pulling without a delay. A similar effect occurs in electromagnetic fields. This view has some major implications for how physicists view the world. Until the mid-19th century, the standard view among physicists was that forces are the fundamental entity and fields merely mathematical shorthand to describe the behavior of forces. Since the late 19th century, physicists have gradually come to view fields as the more fundamental entity and forces merely manifestations of the behavior of fields. The fact that a delayed-force theory leads to wrong answers while a delayed-field theory leads to right ones is one reason why. The belief that fields rather than forces were the fundamental entity was one of the main motivating factors that led Albert Einstein to develop his theory of general relativity in the early 20th century to replace Newtonian gravity, which was widely considered defective because it relied on instantaneous forces rather than fields to transmit gravity. In general relativity (GR), the field is elevated to the only real concern. The gravitational field is equated with the curvature of space-time, and propagations (including gravity waves) can be shown, according to this theory, to travel at a single speed, cg. This finite speed may at first seem to lead to exactly the same sorts of problems that Newton was originally concerned with. Although the calculations are considerably more complicated, one can show that general relativity does not suffer from these problems, just as classical delayed-potential theory does not. However, Tom Van Flandern has made a name for himself by insisting that this proves general relativity incorrect. Other physicists who have interacted with him argue that the objection he presents was resolved in the 19th century. 
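The delayed-force problem can be made quantitative with a back-of-the-envelope calculation: if gravity were a simple force aberrated by its light-travel time, the Sun's pull on the Earth would be misaligned by roughly the ratio of the Earth's orbital speed to c, the same ~20-arcsecond angle familiar from stellar aberration. The sketch below uses standard textbook values:

```python
import math

C = 2.998e8        # speed of light, m/s
V_EARTH = 2.978e4  # Earth's mean orbital speed, m/s

# Naive aberration angle for a force propagating at speed c: theta ~ v/c.
theta_rad = V_EARTH / C
theta_arcsec = math.degrees(theta_rad) * 3600.0

print(f"aberration angle: {theta_arcsec:.1f} arcseconds")  # roughly 20 arcsec

# Even this tiny misalignment would act like a steady tangential push on the
# Earth, measurably changing its orbit over astronomical time. This is why a
# delayed *force* theory fails, while in a delayed *field* theory (including
# general relativity) velocity-dependent terms cancel the aberration.
```

The calculation only illustrates the size of the effect a delayed-force theory would predict; it is not part of the original article's argument.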
In 1999, Yuri Ivanov, a Russian researcher from Izhevsk, confirmed the results of Roland Eötvös's experiments, according to which a copper ball should fall with an acceleration a fraction of 0.000000001 greater than that of a water drop. Ivanov discovered a phenomenon he called the shady gravitational effect (Russian: теневой гравитационный эффект): the gravity of the Sun, passing through the denser layers of the Earth's core, is partially weakened and forms a kind of gravitational shadow on the opposite side of the planet. At the winter solstice of 2002, a torsion balance reacted to the change in the attracting force of the Sun eight minutes before midnight. This would mean that the speed of propagation of gravity is practically instantaneous (according to Laplace's calculations, this speed should exceed the speed of light by a factor of at least six million). In September 2002, Sergei Kopeikin made an indirect experimental measurement of the speed of gravity, using Ed Fomalont's data from a transit of Jupiter across the line of sight of a bright radio source. The result, presented in January 2003, placed the speed of gravity somewhere in the range between 0.8 and 1.2 times the speed of light, which is consistent with the theoretical prediction of general relativity that the speed of gravity is exactly the same as the speed of light. Some physicists have criticised the conclusions drawn from this experiment on the grounds that, as it was structured, the experiment was incapable of finding any results other than agreement with the speed of light. This criticism originates from the belief that the fundamental speed c is electromagnetic in origin, so that, according to those physicists, the Einstein equations must depend on the physical speed of light, which explains why gravity always propagates at the speed of light in that theory. 
An alternative point of view is that the Einstein equations describe the origin and evolution of space-time curvature and gravitational waves, which are conceptually independent of the electromagnetic field; hence, the fundamental speed c in the Einstein equations cannot be interpreted as the physical speed of light, even though it must have the same numerical value as the speed of light in vacuum if general relativity is correct. Perhaps the most illustrative way to distinguish the two speeds is to denote the speed of gravity in the Einstein equations as cg and the speed of light in Maxwell's equations as c. The Kopeikin-Fomalont experiment observed the bending of a quasar's light caused by the time-dependent gravitational field of Jupiter and measured the ratio c/cg. This observation shows that the ratio is unity to a precision of 20%. On the other hand, Tom Van Flandern, at the U.S. Army's Research Lab at the University of Maryland, calculates the speed of gravity as ≥ 2×10¹⁰ c (that is, at least twenty billion times the speed of light). If true, this could explain why gravity appears to be an instantaneous (i.e., effectively infinite-speed) force rather than a finite one. Van Flandern's theories are controversial and not well accepted among physicists. - Does Gravity Travel at the Speed of Light? 
in The Physics FAQ - New Scientist story on experimental measurement of the speed of gravity - Testing Relativistic Effect of Propagation of Gravity by Very-Long Baseline Interferometry - Measuring the Speed of Propagation of Gravity - The Measurement of the Light Deflection from Jupiter: Experimental Results - A criticism of the above, and another - Aberration and the Speed of Gravity - Aberration and the Speed of Gravity in the Jovian Deflection Experiment - The post-Newtonian treatment of the VLBI experiment on September 8, 2002 - The Speed of Gravity in General Relativity and Theoretical Interpretation of the Jovian Deflection Experiment - The Speed of Gravity - Repeal of the Speed Limit, by Tom Van Flandern, Meta Research - The Speed of Gravity - What the Experiments Say - Tom Van Flandern views - Experiments indicate that the speed of gravity is minimum 20 billion times c, by Alfonso Leon Guillen Gomez
http://www.exampleproblems.com/wiki/index.php/Speed_of_gravity
The Civil War 1850–1865 Key People & Terms A zealous, itinerant radical who crusaded violently against slavery in the 1850s. Brown moved to Kansas with his family in the mid-1850s to prevent the territory from becoming a slave state. In 1856, he and a band of vigilantes helped spark the Bleeding Kansas crisis when they slaughtered five border ruffians at the Pottawatomie Massacre. Three years later, Brown led another group of men in the Harpers Ferry Raid to incite a slave rebellion. He was captured during the raid and hanged shortly before the election of 1860. Brown's death was cheered in the South but mourned in the North. A pro-Southern Democrat who was elected the fifteenth president of the United States in 1856. Buchanan defeated John Frémont of the new Republican Party and former president Millard Fillmore of the Know-Nothing Party in one of the most hotly contested elections in U.S. history. During his term, Buchanan supported the Lecompton Constitution to admit Kansas as a slave state, weathered the Panic of 1857, and did nothing to prevent South Carolina's secession from the Union. A former senator from Mississippi who was selected as the first president of the Confederacy in 1861. Overworked and underappreciated by his fellow Confederates, Davis struggled throughout the Civil War to unify the Southern states under the central government they had established. An influential Democratic senator and presidential candidate from Illinois. Douglas pushed the 1854 Kansas-Nebraska Act through Congress to entice railroad developers to build a transcontinental railroad line in the North. The act opened the Kansas and Nebraska territories to slavery and thus effectively repealed the Missouri Compromise of 1820. A champion of popular sovereignty, he announced his Freeport Doctrine in the Lincoln-Douglas debates against Abraham Lincoln in 1858. 
Although Douglas was the most popular Democrat, Southern party members refused to nominate him for the presidency in 1860 because he had rejected the Lecompton Constitution to make Kansas a slave state. As a result, the party split: Northern Democrats nominated Douglas, while Southern Democrats nominated John C. Breckinridge. In the election of 1860, Douglas toured the country in an effort to save the Union. Ulysses S. Grant The Union's top general in the Civil War, who went on to become the eighteenth U.S. president. Nicknamed "Unconditional Surrender" Grant, he waged total war against the South in 1863 and 1864. Robert E. Lee Arguably the most brilliant general in the U.S. Army in 1860, who turned down Abraham Lincoln's offer to command the Union forces in favor of commanding the Army of Northern Virginia for the Confederacy. Although Lee loved the United States, he felt he had to stand by his native state of Virginia. His defeat at the Battle of Gettysburg proved to be the turning point in the war in favor of the North. Lee surrendered to Ulysses S. Grant at Appomattox Court House in April 1865, effectively ending the Civil War. A former lawyer from Illinois who became the sixteenth president of the United States in the election of 1860. Because Lincoln was a Republican and was associated with the abolitionist cause, his election prompted South Carolina to secede from the Union. Lincoln, who believed that the states had never truly left the Union legally, fought the war until the South surrendered unconditionally. During the war, in 1863, Lincoln issued the largely symbolic Emancipation Proclamation to free all slaves in the South. Just at the war's end, in April 1865, Lincoln was assassinated by John Wilkes Booth in Washington, D.C. A young, first-rate U.S. Army general who commanded the Union army against the Confederates during the Civil War. 
Unfortunately, McClellan proved to be overly cautious and was always reluctant to engage Confederate forces at a time when Abraham Lincoln badly needed military victories to satisfy Northern public opinion. McClellan did manage to defeat Robert E. Lee at the Battle of Antietam in 1862, which gave Lincoln the opportunity to issue the Emancipation Proclamation. Lincoln eventually fired McClellan, however, after the general began to publicly criticize the president's ability to command. In 1864, McClellan ran against Lincoln for president as a Peace Democrat on a peace platform but was defeated. Fourteenth president of the United States, elected in 1852 as a proslavery Democrat from New England. Pierce combined his desire for empire and westward expansion with the South's desire to find new slave territories. He tacitly backed William Walker's attempt to seize Nicaragua and used the Ostend Manifesto to try to acquire Cuba from Spain. Pierce also oversaw the opening of trade relations with Japan, upon the return of Commodore Matthew Perry, and authorized the Gadsden Purchase from Mexico in 1853. William Tecumseh Sherman A close friend of Ulysses S. Grant who served as a general in the Union army during the Civil War. Sherman, like Grant, understood that the war would only truly be won when the Union forces had broken the will of the Southern public to fight. Sherman is best known for the total war he and his expedition force waged on the South during his March to the Sea. A senator from Massachusetts who delivered an antislavery speech in the wake of the Bleeding Kansas crisis in 1856. In response, Sumner was caned nearly to death by South Carolina congressman Preston Brooks on the Senate floor. The attack indicated just how passionately some Southerners viewed the popular sovereignty and slavery issue. A hero of the Mexican War who became the second and last Whig president in 1848. 
In order to avoid controversy over the westward expansion of slavery in the Mexican Cession, Taylor campaigned without a solid platform. He died after only sixteen months in office and was replaced by Millard Fillmore. Bleeding Kansas A violent crisis that enveloped Kansas after Congress passed the Kansas-Nebraska Act in 1854. After the act passed, hundreds of Missourians crossed the border to make Kansas a slave state. Outraged by the intimidation tactics these “border ruffians” used to bully settlers, many Northern abolitionists moved to Kansas as well in the hopes of making the territory free. Tensions mounted until proslavery men burned the Free-Soil town of Lawrence, Kansas. John Brown and a band of abolitionist vigilantes countered by killing five men at the Pottawatomie Massacre in 1856. In many ways, Bleeding Kansas was a prelude to the war that loomed ahead. Border Ruffians A group of hundreds of Missourians who crossed the border into Kansas, hoping to make Kansas a slave state after Congress passed the Kansas-Nebraska Act in 1854. The border ruffians rigged the elections to choose delegates for the Kansas constitutional convention, with the aim of making Kansas a new slave state. They succeeded and drafted the proslavery Lecompton Constitution in the winter of 1857. Outraged, many Northern abolitionists settled in Kansas to counter the border ruffians. The territory erupted into a civil war that became known as Bleeding Kansas. In 1858, the Senate rejected the Lecompton Constitution on the grounds that the elections had been rigged. Compromise of 1850 A bundle of legislation that enabled the North and South to end, temporarily, the debate over the expansion of slavery. First proposed by Henry Clay and championed by Stephen Douglas, the Compromise of 1850 contained several provisions.
California was admitted as a free state; the other western territories were left to popular sovereignty; the slave trade (but not slavery itself) was banned in Washington, D.C.; Texas ceded disputed land to New Mexico Territory; and a new Fugitive Slave Law was enacted. Though the compromise was only a temporary solution, it effectively postponed the Civil War, and this extra time allowed the Northern industrial economy to grow in the decade before the war. Dred Scott v. Sanford A landmark 1857 Supreme Court decision that effectively ruled that slaves were property. Dred Scott, the slave of a Southern army doctor, had lived with his master in Illinois and Wisconsin in the 1830s. While there, he married a free woman and had a daughter. Scott and his daughter eventually returned to the South. Scott sued his master for his and his family’s freedom, but Chief Justice Roger Taney and a conservative Supreme Court ruled against Scott, arguing that Congress had no right to restrict the movement of private property. Moreover, Taney ruled that blacks like Scott could not file lawsuits in federal courts because they were not citizens. The 1857 decision outraged Northerners and drove them further apart from the South. Emancipation Proclamation A presidential proclamation that nominally freed all slaves in the Confederacy. President Abraham Lincoln, emboldened by the Union victory at the Battle of Antietam, issued the proclamation on January 1, 1863. The proclamation did not free all slaves (North and South), because Lincoln did not want the proslavery border states to secede in anger. Though the proclamation had no immediate effect on black slaves in the South, it did mark an ideological turning point in the war, because it irrevocably linked emancipation with the restoration of the Union. Free-Soil Party A party formed by disgruntled Northern abolitionists in 1848, when Democrats nominated Lewis Cass for president and Whigs nominated the politically inept Zachary Taylor.
Former president Martin Van Buren became the Free-Soil candidate for president, campaigning for the Wilmot Proviso and against popular sovereignty and the westward expansion of slavery. Van Buren received no votes in the electoral college but did detract enough popular votes from Cass to throw the election to Taylor. Fugitive Slave Act A law passed under the Compromise of 1850 that forced Northerners to return runaway slaves to the South. Angered by the fact that many Northerners supported the Underground Railroad, Southerners demanded this new and stronger Fugitive Slave Act as part of the compromise. The act was so unpopular in the North that federal troops were often required to enforce it. One slave in Boston, Massachusetts, had to be escorted by 300 soldiers and a U.S. Navy ship. The law, like the Dred Scott v. Sanford decision, drove the North and South even further apart. Hampton Roads Conference A peace conference that Jefferson Davis requested in the winter of 1865, aware that the end of the war was near. At the conference, Abraham Lincoln’s representatives opened negotiations by demanding the unconditional surrender of the South and full emancipation of all slaves. The Southern delegation, however, refused anything less than full independence. The conference thus ended without resolution. However, the war ended only a few months later, completely on Lincoln’s terms. Harpers Ferry Raid An October 16, 1859, raid by John Brown, the infamous Free-Soiler who had killed five proslavery men at the Pottawatomie Massacre. This time around, Brown stormed an arsenal at Harpers Ferry, Virginia (present-day West Virginia), with twenty other men. He hoped the raid would prompt slaves throughout Virginia and the South to rise up against their masters. There was no rebellion, though, and Brown and his men found themselves cornered inside the arsenal. A long standoff ensued. Half the raiders were killed and the rest, including Brown, captured. 
After a speedy trial, Brown was convicted of treason and hanged. Although his death was cheered in the South, he became an abolitionist martyr in the North. Lecompton Constitution The Kansas constitution that resulted when hundreds of proslavery border ruffians from Missouri crossed into Kansas after the Kansas-Nebraska Act of 1854 and rigged the elections to choose delegates for the Kansas constitutional convention. The border ruffians succeeded and submitted the proslavery Lecompton Constitution in the winter of 1857. After taking office that same year, pro-Southern president James Buchanan immediately accepted the constitution to make Kansas a new slave state. Democrat Stephen Douglas, however, rejected the Lecompton Constitution in the Senate on the grounds that the elections had been rigged. The South denounced Douglas as a traitor when a new (and more honest) vote in Kansas overwhelmingly made the territory free. Kansas was finally admitted into the Union as a free state in January 1861. Liberty Party A Northern abolitionist party formed in 1840 when the abolitionist movement split into a social wing and a political wing. The Liberty Party nominated James G. Birney in the election of 1844 against Whig Henry Clay and Democrat James K. Polk. Surprisingly, the Liberty Party detracted just enough votes from Clay to throw the election to the Democrats. Lincoln-Douglas Debates A series of public debates between the relatively unknown former congressman Abraham Lincoln and Stephen Douglas in their home state of Illinois in 1858. Hoping to steal Douglas’s seat in the Senate in the national elections that year, Lincoln wanted to be the first to put the question of slavery to the voters. The “Little Giant” Douglas accepted and engaged Lincoln in a total of seven debates, each in front of several thousand people. Even though Lincoln lost the Senate seat, the debates made Lincoln a national figure. Peace Democrats A Northern party, also nicknamed the “Copperheads” after the poisonous snake, that criticized Abraham Lincoln and the Civil War.
The Peace Democrats did not particularly care that the Southern states had seceded and wanted to let them go in peace. The Copperheads nominated George McClellan for president in 1864 on a peace platform but lost to Lincoln and the Republican Party. Popular Sovereignty The idea that citizens in the West should vote to determine whether their respective territories would become free states or slave states upon admission to the Union. Popular sovereignty was first proposed by presidential candidate Lewis Cass in 1848 and later championed by Stephen Douglas. The Whigs and the Republican Party flatly rejected popular sovereignty, because they opposed the westward expansion of slavery. Pottawatomie Massacre The killing of five proslavery men near Pottawatomie Creek, Kansas, by John Brown and a band of abolitionist vigilantes in retaliation for the burning of Free-Soil Lawrence, Kansas. Neither Brown nor any of his men were brought to justice. Instead, border ruffians and other proslavery settlers responded in kind and sparked the “Bleeding Kansas” crisis. Eventually, the entire territory became embroiled in a bloody civil war that foreshadowed the war between the North and South. Uncle Tom’s Cabin A novel, published by Harriet Beecher Stowe in 1852, that turned Northern public opinion against slavery and the South more than anything else in the decade before the Civil War. Uncle Tom’s Cabin became the first American bestseller almost overnight and went on to sell 250,000 copies in just a few short months. In the wake of the strengthened Fugitive Slave Act, Northerners identified with the black slave protagonist and pitied his suffering. The book affected the North so much that when Abraham Lincoln met Stowe in 1863, he called her “the little woman who wrote the book that made this great war.”
http://www.sparknotes.com/history/american/civilwar/terms.html
Acculturation: The process of acquiring or adapting to a new culture, its traditions, customs, and patterns of daily living. Mutual Constitution: The reciprocal way in which an individual is shaped by the surrounding culture and simultaneously shapes the culture with his or her behavior. The two modes of mutual constitution are independent (focus on the uniqueness of the individual, being unique) and interdependent (focus on group or community, sense of connection and responsibility to larger group). Protestant Ethic: A phrase that describes and relates to early American culture, emphasizing individual achievement, personal responsibility, self-sufficiency, and control over the environment. TRIOS: Psychologist James Jones's theory that the residual influences and harsh experiences of slavery surface in some African-Americans' conceptions of time, rhythm, improvisation, speech, and spirituality.
http://www.learner.org/series/discoveringpsychology/26/e26glossary.html
What can you say about the child who will be first on the playground tomorrow morning at breaktime in your school? What statements can you make about the car that passes the school gates at 11am on Monday? How will you come up with statements and test your ideas? This activity asks you to collect information about the birds you see in the garden. Are there patterns in the data or do the birds seem to visit randomly? Take a look at these data collected by children in 1986 as part of the Domesday Project. What do they tell you? What do you think about the way they are presented? A group of children are discussing the height of a tall tree. How would you go about finding out its height? This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items. The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern? In this investigation, we look at Pascal's Triangle in a slightly different way - rotated and with the top line of ones taken off. This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares. This challenge extends the Plants investigation so now four or more children are involved. Have a go at this 3D extension to the Pebbles problem. A challenging activity focusing on finding all possible ways of stacking rods. It starts quite simply but offers great opportunities for number discoveries and patterns! Explore the different tunes you can make with these five gourds. What are the similarities and differences between the two tunes you make? Explore Alex's number plumber. What questions would you like to ask? What do you think is happening to the numbers?
In this article for teachers, Bernard gives an example of taking an initial activity and getting questions going that lead to others. What is the smallest number of tiles needed to tile this patio? Can you investigate patios of different sizes? All types of mathematical problems serve a useful purpose in mathematics teaching, but different types of problem will achieve different learning objectives. In general, more open-ended problems have... A description of some experiments in which you can make discoveries about triangles. An investigation that gives you the opportunity to make and justify... How many shapes can you build from three red and two green cubes? Can you use what you've found out to predict the number for four red and two green? 48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers? How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways? Using different numbers of sticks, how many different triangles are you able to make? Can you make any rules about the numbers of sticks that make the most triangles? I cut this square into two different shapes. What can you say about the relationship between them? What do these two triangles have in common? How are they related? Can you find ways of joining cubes together so that 28 faces are visible? What is the largest cuboid you can wrap in an A3 sheet of paper? In this investigation, you must try to make houses using cubes. If the base must not spill over 4 squares and you have 7 cubes which stand for 7 rooms, what different designs can you come up with? Investigate what happens when you add house numbers along a street in different ways. In my local town there are three supermarkets which each has a special deal on some products. If you bought all your shopping in one shop, where would be the cheapest? This problem is based on the story of the Pied Piper of Hamelin.
Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether! I like to walk along the cracks of the paving stones, but not the outside edge of the path itself. How many different routes can you find for me to take? How many different shaped boxes can you design for 36 sweets in one layer? Can you arrange the sweets so that no sweets of the same colour are next to each other in any direction? An activity making various patterns with 2 x 1 rectangular tiles. Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"? In this investigation, you are challenged to make mobile phone numbers which are easy to remember. What happens if you make a sequence adding 2 each time? If the answer's 2010, what could the question be? This challenge asks you to investigate the total number of cards that would be sent if four children send one to all three others. How many would be sent if there were five children? Six? What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it? Can you find out how the 6-triangle shape is transformed in these tessellations? Will the tessellations go on for ever? Why or why not? Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs. Follow the directions for circling numbers in the matrix. Add all the circled numbers together. Note your answer. Try again with a different starting number. What do you notice? Explore one of these five pictures. Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here. How many models can you find which obey these rules? Can you create more models that follow these rules? In this investigation we are going to count the number of 1s, 2s, 3s etc in numbers. Can you predict what will happen?
We think this 3x3 version of the game is often harder than the 5x5 version. Do you agree? If so, why do you think that might be? Work with numbers big and small to estimate and calculate various quantities in biological contexts.
http://nrich.maths.org/public/leg.php?code=-333&cl=2&cldcmpid=4937
Fun Classroom Activities The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying. 1. Board Game Create a board game of "Gorgias." 2. Comic Book Draw a comic book version of "Gorgias." 3. Cover Art Design new cover art for "Gorgias." 4. Your Voice Choose a chapter of Gorgias and insert your voice into it, proposing an alternate argument. 5. Newspaper Article Write a newspaper article describing a new country that has been founded on the principles of Socrates' ideal government. 6. Poem Write a poem describing Socrates' idea of death and the transition to the...
http://www.bookrags.com/lessonplan/gorgias/funactivities.html
A+ Certification/Floppy disk drive The Floppy Disk Drive (FDD) magnetically reads and writes information to and from floppy disks. Floppy disks are a form of removable storage. The FDD is mounted inside the computer unit and is only removed from the system for repairs or upgrades. The standard size for modern floppy disks is a 3½-inch disk with a hard plastic exterior shell that protects the thin, flexible disk inside. Standard floppy disks hold 1.44 MB of data, which is useful for simple files, such as Microsoft Word documents, but not very effective for graphical content. With the introduction of the USB removable drive, the floppy disk drive has largely disappeared from newer computer systems. Floppy disks Floppy disks are legacy storage media that hold only a small amount of data. Despite their larger size, the 8-inch and 5.25-inch disks store less data than the 3.5-inch disk, due to less efficient manufacturing and design. The disks themselves have three major components. The outer envelope provides protection against some external hazards, such as dust (older floppies are also stored in an additional envelope). On the inside of the envelope, a cloth liner placed on both sides of the magnetic film reduces friction as the disk spins. The 3½-inch disk also has a sliding metal tab that provides some additional protection. The notable components of the floppy disk are: - The envelope, either a soft plastic for the larger disks, or a hard plastic shell for the 3½-inch floppy - For the 3½-inch floppy, a metal shield. - For some floppies, a hole or tab used to indicate write protection, disk density, etc. - On the inside of the envelope, a sheet used to reduce friction for the spinning component. - A flexible magnetic-coated film that contains the data. - A hub (for the 3½-inch floppy) or hole used to allow the magnetic-coated film to rotate in the drive. Floppy disks also provide a way to protect against accidental write operations.
With a 5¼-inch floppy, the write-enable notch on the upper-right edge (aligned with the bottom of the label) must be open for the disk to be writable; covering the notch write-protects the disk, and a disk without a notch can be made writable by cutting one. With a 3½-inch floppy, there is a slidable write-protect tab on the back of the disk. Floppy drives The floppy drives are used to read the floppy disks. When installing a floppy drive, it is connected using a floppy drive cable and a 4-pin Berg or Molex power connector (depending on the drive). An external drive uses its own connection (e.g. USB). In MS-DOS (and related operating systems), the drive letter assigned to the drive depends on the physical connection selected on the FDD cable. If it is connected to the end of the cable, it is drive A. If it is connected to the middle, it is drive B. Up to two devices may be connected. While some FDD cables may include a pair of connectors at the end and at the middle, only one connector in the pair may be used to operate a drive.
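The end-of-cable/middle-of-cable rule above can be sketched as a tiny lookup. This is a hypothetical helper for illustration only (the function name and structure are not part of any real tool):

```python
# Sketch of the MS-DOS drive-letter rule for floppy drives on a
# standard two-connector FDD cable (hypothetical helper function).
def floppy_drive_letter(cable_position):
    """Map a drive's position on the FDD cable to its DOS drive letter."""
    letters = {
        "end": "A",     # connector at the end of the cable -> drive A
        "middle": "B",  # connector in the middle of the cable -> drive B
    }
    if cable_position not in letters:
        raise ValueError("position must be 'end' or 'middle'")
    return letters[cable_position]

print(floppy_drive_letter("end"))     # A
print(floppy_drive_letter("middle"))  # B
```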
http://en.wikibooks.org/wiki/A%2B_Certification/Floppy_disk_drive
THE ORIGIN OF THE POTOMAC RIVER VALLEY AND THE CARVING OF GREAT FALLS As the sea finally withdrew from the Atlantic Coastal Plain in the Washington area between 10 and 20 million years ago, streams draining eastward from the Appalachian Highlands spread a blanket of sand and gravel over the newly exposed sea floor and nearby parts of the Piedmont Plateau. This deposit was not laid down by a single major river, but by numerous streams that constantly shifted their courses back and forth to form a complex series of fan-shaped deposits that coalesced into the blanket of sand and gravel. Remnants of this blanket are still preserved capping some of the highest hills in the Piedmont near Tysons Corner, Va., 5 miles south of Great Falls. The deposit is a source of sand and gravel used for construction purposes in the metropolitan area. Continued slow uplift of the Piedmont Plateau and the Appalachian Highlands to the west increased the slope of the land surface, causing the streams to deepen their valleys and eventually to coalesce into a river which was to become the Potomac. As this river deepened its valley, scattered remnants of its former flood plains were left at various levels as gravel-covered terraces. About 2 million years ago the river had succeeded in carving a broad, open valley in approximately its present position. With the beginning of continental glaciation in the Pleistocene Epoch (about 2 to 3 million years ago), sea level was lowered, and the Potomac River began deepening this early valley. As water was withdrawn from the oceans to form the great ice sheets on the land, sea level around the world fell by as much as 500 feet. Most of the continental shelf off the eastern United States was exposed, and the shoreline lay as much as 75 miles east of its present position. Actually, continental glaciation occurred not just once, but at least four times in the last 2 to 3 million years. The last glacial episode ended only about 15 thousand years ago.
As sea level fell, the river cut correspondingly deeper into the floor of its former valley. The valley was rapidly deepened in the soft, easily eroded materials in the Coastal Plain, but in the hard rocks of the Piedmont Plateau the down-cutting was much slower. It was this downcutting into the hard bedrock floor of the older wider valley that produced the spectacular rocky gorge of the Potomac between Little Falls and Great Falls. At Great Falls the river encounters a series of thick layers of metamorphosed sandstone that are particularly resistant to erosion, and these hard ledges have slowed the progress of valley cutting. The river valley above Great Falls thus remains essentially the unmodified, original pre-Pleistocene valley, but below the falls the river flows in a gorge excavated within the last 2 million years. Along the gorge the original valley floor can be recognized as a flat gravel-covered bench 50 to 60 feet above the present river level. MacArthur Boulevard follows this bench from Cabin John to Anglers Inn. Some of the details of the cutting of the gorge and the sculpturing of Great Falls are illustrated in the block diagrams. Last Updated: 01-Mar-2005
http://www.nps.gov/history/history/online_books/grfa/sec4.htm
Finally, a Solid Look at Earth's Core Scientists have long thought Earth's core is solid. Now they have some solid evidence. The core is thought to be a two-part construction. The inner core is solid iron, and it is surrounded by a molten outer core, theory holds. Around the core is the mantle, and near the planet's surface is a thin crust -- the part that breaks now and then and creates earthquakes. The core was discovered in 1936 by monitoring the internal rumbles of earthquakes, which send seismic waves rippling through the planet. The waves, which are much like sound waves, are bent when they pass through layers of differing densities, just as light is bent as it enters water. By noting a wave's travel time, much can be inferred about the Earth's insides. Yet for more than 60 years, the solidity of the core has remained in the realm of theory. A study announced today involved complex monitoring of seismic waves passing through the planet. The technique is not new, but this is the first time it's been employed so effectively to probe the heart of our world. First, some jargon:
P is what scientists call the wave
K stands for the outer core
J is the inner core
[Figure: path of a PKJKP wave]
So a wave that rolls through it all is called PKJKP. An earthquake sends seismic waves in all directions. The surface waves are sometimes frighteningly obvious. Seismic waves passing through the mantle and traversing much of the planet's interior are routinely studied when they reach another continent. But no PKJKP wave has ever been reliably detected until now. Aimin Cao of the University of California-Berkeley and colleagues studied archived data from about 20 large earthquakes, all monitored by an array of German seismic detectors back in the 1980s and '90s. The trick to detecting a PKJKP wave is in noting the changes it goes through as it rattles from one side of the planet to the other.
What starts out as a compression wave changes to what scientists call a shear wave. "A PKJKP traverses the inner core as a shear wave, so this is the direct evidence that the inner core is solid," Cao told LiveScience, "because only in the solid material the shear wave can exist. In the liquid material, say water, only the compressional wave can travel through." The arrival time and slowness of the waves agree with theoretical predictions of PKJKP waves, which indicates a solid core. The results were published today online by the journal Science.
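The phase-naming jargon above can be illustrated with a small decoder. The letter-to-layer mapping follows the article's own list (P, K, J); `decode_phase` is a hypothetical helper, not a real seismology library call:

```python
# Decode a seismic phase name such as "PKJKP" using the letter codes
# given in the article: P = compressional wave through the mantle,
# K = outer core, J = inner core. Illustrative sketch only.
LAYERS = {
    "P": "mantle (compressional wave)",
    "K": "outer core",
    "J": "inner core (shear wave)",
}

def decode_phase(name):
    """Return the sequence of layers a phase name traverses, in order."""
    return [LAYERS[letter] for letter in name]

for leg in decode_phase("PKJKP"):
    print(leg)
```

Reading the output top to bottom traces the wave's path: down through the mantle and outer core, through the solid inner core as a shear wave, then back out the other side.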
http://www.livescience.com/6980-finally-solid-earth-core.html
Plants are static; they cannot 'stalk' their prey. Instead carnivorous plants lure it, trap it, digest it and absorb the nutrients as a sort of soup. Prey is usually insects and other small invertebrates, but occasionally frogs, birds and small mammals may be caught by large tropical species. There are four methods of trapping. Snap traps: the Venus fly trap (Dionaea) has leaves like a man trap. Modified, toothed leaf tips with sensitive trigger-hairs snap shut on prey, which is digested by enzymes secreted from glands on the inside of the traps. Pitfall traps: Sarracenia, Cephalotus, Darlingtonia and Nepenthes all use this method. Insects are attracted to the colours and sweet secretions inside the pitchers, but lose their footing on the smooth hairs and waxy surface, falling to the bottom of the pitcher where they are digested, either by plant enzymes or by bacteria. Sticky surfaces: these are used by the sundews (Drosera) and the butterworts (Pinguicula). Insects are attracted to shiny glands covering the leaves but become covered in sticky, dew-like secretions and cannot escape. Suction traps: a trapping method used by Utricularia involves an underwater bladder with trapdoor entry. Tiny animals are sucked into the bladder in a rush of water as the trapdoor flies open.
http://www.kew.org/plants/carnivorous/trapping.html
An ambitious project to model the cerebral cortex in silicon is under way at Stanford. The man-made brain could help scientists understand how the most recently evolved part of our brain performs its complex computational feats, allowing us to understand language, recognize faces, and schedule the day. It could also lead to new neural prosthetics. “Brains do things in technically and conceptually novel ways–they can solve rather effortlessly issues which we cannot yet resolve with the largest and most modern digital machines,” says Rodney Douglas, a professor at the Institute of Neuroinformatics, in Zurich. “One of the ways to explore this is to develop hardware that goes in the same direction.” Neurons communicate with a series of electrical pulses; chemical signals transiently change the electrical properties of individual cells, which in turn trigger an electrical change in the next neuron in the circuit. In the 1980s, Carver Mead, a pioneer in microelectronics at the California Institute of Technology, realized that the same transistors used to build computer chips could be used to build circuits that mimicked the electrical properties of neurons. Since then, scientists and engineers have been using these transistor-based neurons to build more-complicated neural circuits, modeling the retina, the cochlea (the part of the inner ear that translates sound waves into neural signals), and the hippocampus (a part of the brain crucial for memory). They call the process neuromorphing. Now Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. 
Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex. “We want to be able to explore different ideas, different connectivity patterns, different operations in these areas,” says Boahen. “It’s not really possible to explore that right now.” Boahen ultimately plans to build chips that other scientists can buy and use to test their own theories of how the cortex operates. That new knowledge can then be built into the next generation of chips.
http://www.technologyreview.com/news/407297/building-the-cortex-in-silicon/
An Introduction to MATLAB: Basic Operations
MATLAB is a programming language that is very useful for numerical simulation and data analysis. The following tutorials are intended to give you an introduction to scientific computing in MATLAB. Lots of MATLAB demos are available online. You can work through these at your leisure, if you want. Everything you need for EOS 225 should be included in the following tutorials.
At its simplest, we can use MATLAB as a calculator. Type a simple sum, say
2 + 3
What do you get?
ans = 5
Now try a multiplication, say
3*7
What do you get?
ans = 21
We can also do more complicated operations, like taking exponents: for "3 squared" type
3^2
ans = 9
For "two to the fourth power" type
2^4
ans = 16
"Scientific notation" is expressed with "10^" replaced by "e" - that is, 10^7 is written 1e7 and 2.15x10^-3 is written 2.15e-3. For example:
1.5e-2
ans = 0.0150
2e-3 * 1000
ans = 2
MATLAB has all of the basic arithmetic operations built in:
+ addition
- subtraction
* multiplication
/ division
^ exponentiation
as well as many more complicated functions (e.g. trigonometric, exponential):
sin(x) sine of x (in radians)
cos(x) cosine of x (in radians)
exp(x) exponential of x
log(x) base e logarithm of x (normally written ln)
The above are just a sample - MATLAB has lots of built-in functions. When working with arithmetic operations, it's important to be clear about the order in which they are to be carried out. This can be specified by the use of brackets. For example, if you want to multiply 5 by 2 then add 3, we can type
5*2 + 3
ans = 13
and we get the correct value. If we want to multiply 5 by the sum of 2 and 3, we type
5*(2 + 3)
ans = 25
and this gives us the correct value. Carefully note the placement of the brackets. If you don't put brackets, MATLAB has its own built-in order of operations: multiplication/division first, then addition/subtraction. For example:
5*2 + 3
ans = 13
gives the same answer as (5*2)+3. As another example, if we want to divide 8 by 2 and then subtract 3, we type
8/2 - 3
ans = 1
and get the right answer.
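The bracketing rules above can be collected into one short script; the numbers are the same ones used in the text:

```matlab
% Operator precedence in MATLAB: * and / bind tighter than + and -,
% and brackets override the default order.
5*2 + 3      % multiplication first, then addition: ans = 13
5*(2 + 3)    % brackets force the addition first:   ans = 25
8/2 - 3      % division first, then subtraction:    ans = 1
8/(2 - 3)    % brackets force the subtraction first: ans = -8
```

When in doubt, add brackets: they cost a few keystrokes but make the intended order explicit.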
To divide 8 by the difference between 2 and 3, we type 8/(2-3):

ans = -8

and again get the right answer. If we type 8/2-3 we get

ans = 1

the first answer - the order of operations was division first, then subtraction. In general, it's good to use brackets - they involve more typing, and may make a computation look more cumbersome, but they help reduce ambiguity regarding what you want the computation to do.

This is a good point to make a general comment about computing. Computers are actually quite stupid - they do what you tell them to, not what you want them to do. When you type any commands into a computer program like MATLAB, you need to be very careful that these two things match exactly.

You can always get help in MATLAB by typing "help". Type this alone and you'll get a big list of directories you can get more information about - which is not always too useful. It's more useful to type "help" with some other command that you'd like to know more about. E.g., help sin gives:

SIN Sine of argument in radians.
SIN(X) is the sine of the elements of X.
See also ASIN, SIND.
Reference page in Help browser: doc sin

and help atan gives:

ATAN Inverse tangent, result in radians.
ATAN(X) is the arctangent of the elements of X.
See also ATAN2, TAN, ATAND.
Reference page in Help browser: doc atan

You can get a list of all the built-in elementary math functions by typing help elfun:

Elementary math functions.

Trigonometric.
sin - Sine.
sind - Sine of argument in degrees.
sinh - Hyperbolic sine.
asin - Inverse sine.
asind - Inverse sine, result in degrees.
asinh - Inverse hyperbolic sine.
cos - Cosine.
cosd - Cosine of argument in degrees.
cosh - Hyperbolic cosine.
acos - Inverse cosine.
acosd - Inverse cosine, result in degrees.
acosh - Inverse hyperbolic cosine.
tan - Tangent.
tand - Tangent of argument in degrees.
tanh - Hyperbolic tangent.
atan - Inverse tangent.
atand - Inverse tangent, result in degrees.
atan2 - Four quadrant inverse tangent.
atanh - Inverse hyperbolic tangent.
sec - Secant.
secd - Secant of argument in degrees.
sech - Hyperbolic secant.
asec - Inverse secant.
asecd - Inverse secant, result in degrees.
asech - Inverse hyperbolic secant.
csc - Cosecant.
cscd - Cosecant of argument in degrees.
csch - Hyperbolic cosecant.
acsc - Inverse cosecant.
acscd - Inverse cosecant, result in degrees.
acsch - Inverse hyperbolic cosecant.
cot - Cotangent.
cotd - Cotangent of argument in degrees.
coth - Hyperbolic cotangent.
acot - Inverse cotangent.
acotd - Inverse cotangent, result in degrees.
acoth - Inverse hyperbolic cotangent.
hypot - Square root of sum of squares.

Exponential.
exp - Exponential.
expm1 - Compute exp(x)-1 accurately.
log - Natural logarithm.
log1p - Compute log(1+x) accurately.
log10 - Common (base 10) logarithm.
log2 - Base 2 logarithm and dissect floating point number.
pow2 - Base 2 power and scale floating point number.
realpow - Power that will error out on complex result.
reallog - Natural logarithm of real number.
realsqrt - Square root of number greater than or equal to zero.
sqrt - Square root.
nthroot - Real n-th root of real numbers.
nextpow2 - Next higher power of 2.

Complex.
abs - Absolute value.
angle - Phase angle.
complex - Construct complex data from real and imaginary parts.
conj - Complex conjugate.
imag - Complex imaginary part.
real - Complex real part.
unwrap - Unwrap phase angle.
isreal - True for real array.
cplxpair - Sort numbers into complex conjugate pairs.

Rounding and remainder.
fix - Round towards zero.
floor - Round towards minus infinity.
ceil - Round towards plus infinity.
round - Round towards nearest integer.
mod - Modulus (signed remainder after division).
rem - Remainder after division.
sign - Signum.

MATLAB can be used like a calculator - but it's much more. It's also a programming language, with all of the basic components of any such language. The first and most basic of these components is one that we use all the time in math - the variable.
Like in math, variables are generally denoted symbolically by individual characters (like "a" or "x") or by strings of characters (like "var1" or "new_value"). In class we've distinguished between variables and parameters - but denoted both of these by characters. MATLAB doesn't make this distinction - any numerical quantity given a symbolic name is a variable.

How do we assign a value to a variable? Easy - just use the equality sign. For example,

a = 3

sets the value 3 to the variable a. As another example,

b = 2

sets the value 2 to the variable b. We can carry out mathematical operations with these variables: e.g. a+b gives

ans = 5

a*b gives

ans = 6

a^b gives

ans = 9

and exp(a) gives

ans = 20.0855

Although the operation of setting a value to a variable looks like an algebraic equality like we use all the time in math, in fact it's something quite different. The statement a = 3 should not be interpreted as "a is equal to 3". It should be interpreted as "take the value 3 and assign it to the variable a". This difference in interpretation has important consequences. In algebra, we can write a = 3 or 3 = a - these are equivalent. The = symbol in MATLAB is not symmetric - the command a = b should be interpreted as "take the value of b and assign it to the variable a" - there's a single directionality. And so, while we can type a = 3 with no problem, if we type 3 = a we get an error message. The value 3 is fixed - we can't assign another number to it. It is what it is.

Another consequence of the way that the = operator works is that a statement like a = a+1 makes perfect sense. In algebra, this would imply that 0 = 1, which is of course nonsense. In MATLAB, it means "take the value that a has, add one to it, then assign that value to a". This changes the value of a, but that's allowed. For example, type:

a = 3
a = a+1

First a is assigned the value 3, then (by adding one) it becomes 4.
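The a = a+1 idiom is the basis of counters and running totals, which turn up in almost every program. A minimal sketch:

```matlab
% A running total built by repeated reassignment
total = 0;          % start at zero
total = total + 3;  % total is now 3
total = total + 4;  % total is now 7
total               % no semicolon, so MATLAB displays: total = 7
```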
There are some built-in variables; one of the most useful is pi. Typing pi gives

ans = 3.1416

We can also assign the output of a mathematical operation to a new variable: e.g.

b = a*exp(a)

b = 218.3926

If you want MATLAB to just assign the value of a calculation to a variable without telling you the answer right away, all you have to do is put a semicolon after the calculation:

b = a*exp(a);

Being able to use variables is very convenient, particularly when you're doing a multi-step calculation with the same quantity and want to be able to change the value. For example:

a = 1;
b = 3*a;
c = a*b^2;
d = c*b-a;
d

d = 26

Now say I want to do the same calculation with a = 3; all I need to do is make one change:

a = 3;
b = 3*a;
c = a*b^2;
d = c*b-a;

How does this make things any easier? Well, it didn't really here - we still had to type out the equations for b, c, and d all over again. But we'll see that in a stand-alone computer program it's very useful to be able to do this.

In fact, the sequence of operations above is an example of a computer program. Operations are carried out in a particular order, with the results of earlier computations being fed into later ones. It is very important to understand this sequential structure of programming. In a program, things happen in a very particular order: the order you tell them to have. It's very important to make sure you get this order right. This is pretty straightforward in the above example, but can be much more complicated in more complicated programs.

Any time a variable is created, it's kept in memory until you purposefully get rid of it (or quit the program). This can be useful - you can always use the variable again later. It can also make things harder - for example, in a long program you may try using a variable name that you've already used for another variable earlier in the program, leading to confusion. It can therefore sometimes be useful to make MATLAB forget about a variable; for this the "clear" command is used.
For example, define

b = 3;

Now if we ask what b is (by typing b), we'll get back that it's 3:

b = 3

Using the command clear b removes b from memory; now if we ask about b we get the error message that it's not a variable in memory - we've succeeded in getting rid of it. To get rid of everything in memory, just type clear all.

An important idea in programming is that of an array (or matrix). This is just an ordered sequence of numbers (known as elements): e.g.

M = [1, 22, -0.4]

is a 3-element array in which the first element is 1, the second element is 22, and the third element is -0.4. These are ordered - in this particular array, these numbers always occur in this sequence - but this doesn't mean that there's any particular structure to the ordering in general. That is - in an array, numbers don't have to increase or decrease or anything like that. The elements can be in any order - but that order partly defines the array. Also note that the numbers can be integers or rational numbers, and positive or negative.

While the elements of the array can be any kind of number, their positions are identified by integers: there is a first, a second, a third, a fourth, etc. up until the end of the array. It's standard to indicate the position of the array using bracket notation: in the above example, the first element is M(1) = 1, the second element is M(2) = 22, and the third element is M(3) = -0.4. These integers counting off position in the array are known as "indices" (singular "index").

All programming languages use arrays, but MATLAB is designed to make them particularly easy to work with (the "MAT" is for "matrix"). To make the array above in MATLAB all you need to do is type

M = [1 22 -0.4]

M = 1.0000 22.0000 -0.4000

Then to look at individual elements of the array, just ask for them by index number: M(1) gives

ans = 1

M(2) gives

ans = 22

and M(3) gives

ans = -0.4000

We can also ask for certain ranges of an array, using the "colon" operator.
For an array M we can ask for element i through element j by typing M(i:j). For example, M(1:2) gives

ans = 1 22

and M(2:3) gives

ans = 22.0000 -0.4000

If we want all elements of the array, we can type the colon on its own: M(:) gives

ans = 1.0000 22.0000 -0.4000

We can also use this notation to make arrays with a particular structure. Typing

M = a:b:c

makes an array that starts with first element M(1) = a and increases with increment b:

M(2) = a+b
M(3) = a+2b
M(4) = a+3b

and so on. The array stops at the largest value of N for which M(N) <= c. For example:

M = 1:1:3

M = 1 2 3

The array starts with 1, increases by 1, and ends at 3.

M = 1:.5:3

M = 1.0000 1.5000 2.0000 2.5000 3.0000

The array starts at 1, increases by 0.5, and ends at 3.

M = 1:.6:3

M = 1.0000 1.6000 2.2000 2.8000

Here the array starts at 1, increases by 0.6, and ends at 2.8 - because making one more step in the array would make the last element bigger than 3.

M = 3:-.5:1

M = 3.0000 2.5000 2.0000 1.5000 1.0000

This kind of array can also be decreasing. If the increment size b isn't specified, a default value of 1 is used:

M = 1:5

M = 1 2 3 4 5

That is, the array a:c is the same as the array a:1:c.

It is important to note that while the elements of an array can be any kind of number, the indices must be positive integers (1 and bigger). Trying non-positive or fractional indices will result in an error message.

Each of the elements of an array is a variable on its own, which can be used in a mathematical operation. E.g., with M = 1:5, typing M(3)+1 gives

ans = 4

and M(5)+1 gives

ans = 6

The array itself is also a kind of variable - an array variable. You need to be careful with arithmetic operations (addition, subtraction, multiplication, division, exponentiation) when it comes to arrays - these things can be defined, but they have to be defined correctly. We'll look at this later.

In MATLAB, when most functions are fed an array as an argument they give back an array of the function acting on each element. That is, for the function f and the array M, g = f(M) is an array such that g(i) = f(M(i)).
For example:

a = 0:4;
b = exp(a)

b = 1.0000 2.7183 7.3891 20.0855 54.5982

Let's define two arrays of the same size and plot them:

a = 1:5;
b = exp(a);
plot(a,b)

What we get is a plot of the array a versus the array b - in this case, a discrete version of the exponential function exp(x) over the range x=1 to x=5. We can plot all sorts of things: the program

a = 0:.01:5;
b = cos(2*pi*a);
plot(a,b)

sets the variable a as a fine discretisation of the range from x=0 to x=5, defines b as the cosine of 2 pi x over that range, and plots a against b - showing us the familiar sinusoidal waves. We can also do all sorts of things with plots - stretch them vertically and horizontally, flip them upside down, give them titles and label the axes, have multiple subplots in a single plot... but we'll come to these as we need them.

Arithmetic operations (addition, subtraction, multiplication, division) between an array and a scalar (a single number) are straightforward. If we add an array and a scalar, every element in the array is added to that scalar: the ith element of the sum of the array M and the scalar a is M(i)+a. For example:

M = [1 3 -.5 7];
M2 = M+1

M2 = 2.0000 4.0000 0.5000 8.0000

Similarly, we can subtract, multiply by, and divide by a scalar:

M3 = 3*M

M3 = 3.0000 9.0000 -1.5000 21.0000

M4 = M/10

M4 = 0.1000 0.3000 -0.0500 0.7000

It's even possible to add, subtract, multiply and divide arrays with other arrays - but we have to be careful doing this. In particular, we can only do these things between arrays of the same size: that is, we can't add a 5-element array to a 10-element array. If the arrays are the same size, these arithmetic operations are straightforward. For example, the sum of the N-element array a and the N-element array b is an N-element array c whose ith element is c(i) = a(i)+b(i):

a = [1 2 3];
b = [2 -1 4];
c = a+b

c = 3 1 7

That is, addition is element-wise. It's just the same with subtraction:

d = a-b

d = -1 3 -1

With multiplication we use a somewhat different notation.
Mathematics defines a special kind of multiplication between arrays - matrix multiplication - which is not what we're doing here. However, it's what MATLAB thinks you're doing if you use the * sign between arrays. To multiply arrays element-wise (like with addition), we need to use the .* notation (note the "." before the "*"):

e = a.*b

e = 2 -2 12

Similarly, to divide, we don't use /, but rather ./ :

f = a./b

f = 0.5000 -2.0000 0.7500

(once again, note the dot). As we'll see over and over again, it's very useful to be able to carry out arithmetic operations between arrays. For example, say we want to make a plot of x versus 1/x between x = 2 and x = 4. Then we can type in the program:

x = 2:.1:4;
y = 1./x;
plot(x,y)
xlabel('x');
ylabel('y');

Note how we put the labels on the axes - using the commands xlabel and ylabel, with the arguments 'x' and 'y'. Because the arguments are character strings - not numbers - they need to be in single quotes. The axis labels can be more complicated, e.g.

xlabel('x (between 1 and 5)')
ylabel('y = 1/x')

We haven't talked yet about how to exponentiate an array. To take the array M to the power b element-wise, we type M.^b (note again the "." before the "^"). As an example:

x = [1 2 3 4];
y = x.^2

y = 1 4 9 16

As another example, we can redo the earlier program:

x = 2:.1:4;
y = x.^(-1);
plot(x,y)
xlabel('x');
ylabel('y');

Note that we put the "-1" in brackets - this makes sure that the minus sign associated with making the exponent negative is applied before the "^" of the exponentiation. In this case, we don't have to do this - but when programming it doesn't hurt to be as specific as possible.

These are the basic tools that we'll need to use MATLAB. Subsequent tutorials will cover other aspects of writing a program - but what we've talked about above forms the core. Everything that follows will build upon the material in this tutorial.
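As a quick recap, here is a complete mini-program (an illustrative sketch, not from the tutorial itself) that combines variables, arrays, element-wise arithmetic, plotting, and axis labels:

```matlab
% Plot y = x*exp(-x) over 0 <= x <= 5
x = 0:0.1:5;        % a finely discretised range
y = x.*exp(-x);     % element-wise product of two arrays (note the .*)
plot(x,y)
xlabel('x');
ylabel('y = x exp(-x)');
```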
The following exercises will use the tools we've learned above and are designed to get you thinking about programming. In writing your programs, you'll need to be very careful to think through:

(1) what is the goal of the program (what do I need it to do?)
(2) what do I need to tell MATLAB?
(3) what order do I need to tell it in?

It might be useful to sketch the program out first, before typing anything into MATLAB. It can even be useful to write the program out on paper first and walk through it step by step, seeing if it will do what you think it should.

Exercise 1. Plot the following functions:
(a) y = 3x+2 with x = 0, 0.25, 0.5, ..., 7.75, 8
(b) y = exp(-x^2) with x = 0, 0.1, 0.2, ..., 2
(c) y = ln(exp(x^-1)) with x = 1, 1.5, 2, ..., 4
(d) y = (ln(exp(x)))^-1 with x = 1, 1.5, 2, ..., 4

Exercise 2. A mountain range has a tectonic uplift rate of 1 mm/yr and an erosional timescale of 1 million years. If the mountain range starts with a height h(0) = 0 at time t = 0, write a program that predicts and plots the height h(t) at t = 0, 1 million, 2 million, 3 million, 4 million, and 5 million years (neglecting isostatic effects). Label the axes of this plot, including units.

Exercise 3. Repeat Exercise 2 in the case that the erosional timescale is 500,000 years.

Exercise 4. Repeat Exercise 3 in the case that the tectonic uplift rate is 2 mm/yr.
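For Exercise 2, one possible starting point is sketched below. It assumes the linear erosion model dh/dt = U - h/tau (uplift rate U, erosional timescale tau), whose solution with h(0) = 0 is h(t) = U*tau*(1 - exp(-t/tau)); check your course notes for the model you are actually expected to use, and note that the units (metres and years) are my choice here.

```matlab
% Mountain height under uplift and erosion (assumed linear erosion model)
U   = 1e-3;          % uplift rate: 1 mm/yr = 1e-3 m/yr
tau = 1e6;           % erosional timescale, years
t   = 0:1e6:5e6;     % times from 0 to 5 million years
h   = U*tau*(1 - exp(-t/tau));   % height in metres at each time
plot(t,h)
xlabel('time (years)');
ylabel('height (m)');
```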
http://web.uvic.ca/~monahana/eos225/matlab_tutorial/tutorial_1/introduction_to_matlab.html
By the end of 1946 communal violence was escalating and the British began to fear that India would descend into civil war. The British government's representative, Lord Wavell, put forward a breakdown plan as a safeguard in the event of political deadlock. Wavell, however, believed that once the disadvantages of the Pakistan scheme were exposed, Jinnah would see the advantages of working for the best possible terms inside a united India. He wrote: 'Unfortunately the fact that Pakistan, when soberly and realistically examined, is found to be a very unattractive proposition, will place the Moslems in a very disadvantageous position for making satisfactory terms with India for a Federal Union.' This view was based on a report which claimed that a future Pakistan would have no manufacturing or industrial areas of importance, and no ports, except Karachi, or rail centres. It was also argued that the connection between East and West Pakistan would be difficult to defend and maintain. The report concluded: 'It is hard to resist the conclusion that taking all considerations into account the splitting up of India will be the reverse of beneficial as far as the livelihood of its people is concerned'. Lord Mountbatten replaced Lord Wavell as Viceroy of India in 1947. Read a letter of instructions from the Labour Prime Minister, Clement Attlee, to Lord Mountbatten written in March 1947. Mountbatten's first proposed solution for the Indian subcontinent, known as the 'May Plan', was rejected by Congress leader Jawaharlal Nehru on the grounds it would cause the 'balkanisation of India'. The following month the 'May Plan' was replaced by the 'June Plan', under which provinces would have to choose between India and Pakistan. Bengal and Punjab both voted for partition.
The subcontinent was partitioned on 15 August 1947 and Pakistan came into existence, even though several princely states had still to decide which of the two new countries to join. The new boundaries, drawn up by the Radcliffe Commission, cut through Bengal and the Punjab. The partition of India into two successor states, India and Pakistan, resulted in the transfer of approximately eight million Muslims, and equivalent numbers of Sikhs and Hindus, across the Indo-Pakistan borders in the north-west and north-east of the subcontinent in 1947. The largest single refugee movement of the 20th century was accompanied by communal violence and atrocities committed on all sides of the religious spectrum, with a death toll calculated at approximately 1 million. In South Asia it has been referred to as a holocaust. Apart from the psychological scars, which have yet to heal, there were the practical problems of the fragmentation of refugee populations and the loss of family land holdings, particularly in the Punjab, an area from which many migrants to Britain emigrated. It is no coincidence, then, that the vast majority (possibly as much as 75%) of post-war immigrants to Britain, prior to the 1970s, came from regions directly affected by partition.

Creators: Dr. Shompa Lahiri
http://www.movinghere.org.uk/galleries/histories/asian/origins/partition2.htm
Slavery was rife throughout ancient history. Most, if not all, ancient civilizations practiced this institution, and it is described (and defended) in early writings of the Sumerians, Babylonians, and Egyptians. It was also practiced by early societies in central America and Africa. (See Bernard Lewis's work Race and Slavery in the Middle East1 for a detailed chapter on the origins and practices of slavery.)

The Qur'an prescribes a humanitarian approach to slavery -- free men could not be enslaved, and those faithful to foreign religions could live as protected persons, dhimmis, under Muslim rule (as long as they maintained payment of taxes called Kharaj and Jizya). However, the spread of the Islamic Empire resulted in a much harsher interpretation of the law. For example, if a dhimmi was unable to pay the taxes he could be enslaved, and people from outside the borders of the Islamic Empire were considered an acceptable source of slaves.

Although the law required owners to treat slaves well and provide medical treatment, a slave had no right to be heard in court (testimony by slaves was forbidden), had no right to property, could marry only with the permission of their owner, and was considered to be a chattel, that is the (moveable) property, of the slave owner. Conversion to Islam did not automatically give a slave freedom, nor did it confer freedom to their children. Whilst highly educated slaves and those in the military did win their freedom, those used for basic duties rarely achieved it. In addition, the recorded mortality rate was high -- this was still significant even as late as the nineteenth century and was remarked upon by western travelers in North Africa and Egypt.
Slaves were obtained through conquest, tribute from vassal states (in the first such treaty, Nubia was required to provide hundreds of male and female slaves), offspring (children of slaves were also slaves, but since many slaves were castrated this was not as common as it had been in the Roman empire), and purchase. The latter method provided the majority of slaves, and at the borders of the Islamic Empire vast numbers of new slaves were castrated ready for sale (Islamic law did not allow the mutilation of slaves, so it was done before they crossed the border). The majority of these slaves came from Europe and Africa -- there were always enterprising locals ready to kidnap or capture their fellow countrymen.

Black Africans were transported to the Islamic empire across the Sahara to Morocco and Tunisia from West Africa, from Chad to Libya, along the Nile from East Africa, and up the coast of East Africa to the Persian Gulf. This trade had been well entrenched for over 600 years before Europeans arrived, and had driven the rapid expansion of Islam across North Africa.

By the time of the Ottoman Empire, the majority of slaves were obtained by raiding in Africa. Russian expansion had put an end to the source of "exceptionally beautiful" female and "brave" male slaves from the Caucasus -- the women were highly prized in the harem, the men in the military. The great trade networks across north Africa had as much to do with the safe transportation of slaves as of other goods. An analysis of prices at various slave markets shows that eunuchs fetched higher prices than other males, encouraging the castration of slaves before export.

Documentation suggests that slaves throughout the Islamic world were mainly used for menial domestic and commercial purposes. Eunuchs were especially prized as bodyguards and confidential servants; women as concubines and menials. A Muslim slave owner was entitled by law to use slaves for sexual pleasure.
As primary source material becomes available to Western scholars, the bias towards urban slaves is being questioned. Records also show that thousands of slaves were used in gangs for agriculture and mining. Large landowners and rulers used thousands of such slaves, usually in dire conditions: "of the Saharan salt mines it is said that no slave lived there for more than five years.1" 1. Bernard Lewis Race and Slavery in the Middle East: An Historical Enquiry, Chapter 1 -- Slavery, Oxford Univ Press 1994. This text revised from an article first published on About.com on 2 April 2001.
http://africanhistory.about.com/od/slavery/a/IslamRoleSlavery01.htm
Have you ever wondered how a parachute works—or which design features are most important in slowing someone's descent? Parachutes come in many different shapes and sizes, but they work based on the same general principles. In this activity, you will test differently sized parachutes to see how changes in the size of the parachute affect flight. What do you think will work better: a bigger parachute or a smaller one? In the sport of skydiving, a person jumps out of an airplane from a very high altitude, falls through the air, and releases a parachute to help the skydiver slow his or her way down and land safely on the ground. How does the parachute break the free fall so well? As the skydiver is falling, the force of gravity is pulling the person and his or her parachute toward the earth. The force of gravity can make an object fall very fast! The parachute slows the skydiver down because it causes air resistance, or drag force. The air pushes the parachute back up and creates a force opposite to the force of gravity. As the skydiver falls, these "push and pull" forces are nearly in balance.* • Heavy-weight garbage bag • Four pennies • A safe, high surface, about two meters from the ground. A good place may be a secure balcony, deck or playground platform. • Stopwatch (accurate to at least 0.1 seconds) • Cut open the garbage bag to make a flat sheet of plastic. • Cut two squares out of the garbage bag. Make one be 20 centimeters by 20 cm (about eight inches by eight inches) and one be 50 cm by 50 cm (20 inches by 20 inches). • Tie a knot in each of the four corners of each square. • Cut the string into eight pieces that are 40 cm (about 16 inches) long each. • Tie one end of each piece of string around each of the knots, positioning the string right above the knot. • For each square, hold the center of the square in one hand and pull all of the strings with the other hand to collect them. Tie the free end of the strings together with an overhand knot. 
• Securely tape two pennies to the end of the strings on each square. What do you think the purpose of the pennies is? Hint: you can try letting the squares float to the ground without the pennies first and see what happens. • Your parachutes are now ready to test! • Bring a stopwatch and the parachutes to the safe, high surface you found that is about two meters from the ground. • Release the smaller parachute from high above the ground and time how long it takes for it to fall to the ground. Try this parachute two more times, releasing it from the same height each time. About how long did it take to fall on average? • Release the larger parachute from the same height and time how long it takes for it to fall to the ground. Try this parachute two more times. About how long did it take the larger parachute to fall? • Which parachute took longer to fall to the ground? Why do you think this is? • Extra: Make some more parachutes out of differently sized squares from the garbage bag. Test out your new parachutes. Do you see a clear trend between the size of the parachutes and how long it takes them to fall? • Extra: In this activity, you tested one variable—the surface area of the parachute—but there are a lot of other variables that affect how well a parachute works. Try this activity again but this time vary the material that the parachute is made out of (you could try nylon, cotton, tissue paper, etcetera), the shape of the parachute, the length of the string, the weight of the string, the load (by increasing or decreasing the number of pennies), or the height at which the parachute is dropped. How does changing one of these other variables affect how well the parachute works? Observations and results Did the larger parachute take longer to fall to the ground than the smaller parachute did? How large a parachute is (in other words, the parachute's surface area) affects its air resistance, or drag force. The larger the parachute, the greater the drag force. 
In the case of these parachutes, the drag force is opposite to the force of gravity, so the drag force slows the parachutes down as they fall. Consequently, the larger parachute, with its greater drag force, takes longer to reach the ground than the smaller parachute. Although the force of gravity is greater on the larger, slightly heavier parachute than the smaller, lighter one, the relative increase in the drag force on the larger parachute is greater than the increase in the force of gravity. *Correction (9/19/12): The last sentence in this paragraph, which erroneously described the effect of drag and gravity on the skydiver, was removed after posting. More to explore Calculating Drag from Aerocon Skydiving from the Physics Classroom Parachute Descent Calculations from Randy Culp Parachutes: Does Size Matter? from Science Buddies This activity brought to you in partnership with Science Buddies
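The qualitative argument above (bigger canopy, more drag, slower fall) can be made rough and quantitative. The sketch below is not part of the original activity; it assumes the standard quadratic drag law F = 0.5*rho*Cd*A*v^2, so the terminal speed is v = sqrt(2*m*g/(rho*Cd*A)), and the mass and drag coefficient used are guesses for illustration only:

```matlab
% Rough terminal-speed estimate for the two parachutes (all values assumed)
g   = 9.81;             % gravity, m/s^2
rho = 1.2;              % air density, kg/m^3
Cd  = 1.5;              % assumed drag coefficient for a flat plastic canopy
m   = 0.01;             % assumed mass of bag + string + two pennies, kg
A   = [0.2^2, 0.5^2];   % canopy areas of the 20 cm and 50 cm squares, m^2
v   = sqrt(2*m*g./(rho*Cd*A))   % larger canopy gives a smaller terminal speed
```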
http://www.scientificamerican.com/article.cfm?id=bring-science-home-parachute
San Ignacio Lagoon Ecosystem
By Tom Lewis, Naturalist

San Ignacio Lagoon is more than a breeding ground for gray whales. It is an entire ecosystem, where a variety of species depend on one another for their survival. The lagoon's waters, marshlands, and sandy beaches rank among the most productive on earth, and support an amazing variety of plants and animals. Large populations of fish, invertebrates, birds, turtles, and marine mammals make this place their home.

The lagoon ecosystem is composed of several very different habitats: a sandy beach habitat on the barrier islands near the lagoon entrance, a mudflat, a mangrove marsh habitat, and of course the marine habitat of the lagoon itself.

The sandy beach habitat is home to hundreds of different species of crabs, mollusks (animals with shells), and worms. It is also a vital feeding area for many different species of wading birds that depend on the other creatures for food.

The intertidal mudflat habitat is home to thousands of animals including a variety of tubeworms, crabs, snails, and sea slugs. These animals have become completely adapted to the mudflat; they could not live in a sandy habitat.

The mangrove marsh habitat is perhaps the most productive section of the lagoon ecosystem. The mangrove is a salt-tolerant tree that grows in intertidal areas of tropical and subtropical oceans. Mangrove marshes are critically important resting and feeding grounds for numerous migratory bird species, and a home to several resident species of birds. The mangrove marshes, like all ocean marshes, are also the nurseries for many different species of ocean fishes and the homes of many different species of invertebrates. Fishes move into the marsh to lay their eggs. When the young hatch, they use the quiet waters of the marsh to gain strength before they venture out into the open ocean.

Laguna San Ignacio is one of the few remaining undisturbed marine lagoon habitats in North America.
It supports a large number of vertebrate and invertebrate species different from the ones that live in the lagoon's other habitats. The organic productivity (the amount of life and life-giving nutrients) of the marine habitat is extraordinarily high. Imagine the tides flushing and recharging the lagoon ecosystem. Each rising tide stirs up nutrients from the bottom sediment and recharges areas of stagnant water with oxygen. Ocean fish ride in with the tide to feed. Ebb tides flush out dissolved material and carry decaying organic matter as well as living organisms offshore.

The marine habitat is home to green sea turtles, bottlenose dolphins, numerous species of fishes, and humans. A small population of Mexican fishing families depends on the lagoon for their livelihood. They fish the lagoon in harmony with nature, and coexist with all the other creatures that call this lagoon home.

San Ignacio Lagoon is one of the Earth's true natural wonders. It is our job to make sure that this place lives on for future generations.

- What might be some human-caused and natural threats to the lagoon ecosystem?
- Why do you think Tom says it's our job to make sure the lagoon ecosystem lives on for future generations?
- Think about the ecosystem YOU live in. How are you helping to take care of it?

Science Education Standards
- All populations living together and the physical factors with which they interact compose an ecosystem.
- Organisms change environments in ways that can be either beneficial or detrimental for themselves and other organisms.
http://www.learner.org/jnorth/tm/gwhale/LagoonEcosystem.html
4.21875
This lesson explores the strategies used by animals to survive in their environments. Students will explore the special features of a number of different animals through a series of demonstrations and activities.

This lesson investigates the needs of these creatures and gives students the tools to become more aware of how they share an environment with city-dwelling animals.

In this lesson, students will learn about the process of pollination, the animals that pollinate, and the strategies that they can use to promote pollinator health and wellbeing.

Science isn’t always neat and tidy — it’s sometimes “GROSS!” Blood, vomit, brains, and pellets aren’t things you always want to talk about, but they provide valuable information about animals and how they live. We can expand the way we think about the world by exploring the interesting world of animals and the so-called “gross” things about them. Explore why vomit is important, play a bloodsucker’s game and do some dissecting to learn more about how “gross” things are actually “cool”!

Hummingbirds are found throughout most of British Columbia. This small, warm-blooded group of birds has several unique adaptations and abilities that allow it to survive our cool climate. This lesson explores characteristics of hummingbirds and the steps we can take to make our environment more welcoming to hummingbirds.

A watershed is a section of land where all of the area’s water is collected and funnelled into the same waterway. Many human activities can negatively affect animals, particularly wild salmon, in their natural watershed habitat. The health of the watershed animal population depends on the health of their natural habitat. In this lesson, students will identify each stage of the salmon lifecycle and identify factors that affect salmon survival.
http://resources.scienceworld.ca/creatures
4.0625
The Earth is not the fixed, solid mass that we usually envision. It is actually a sphere of solids and molten rock fluids that are gradually but continuously moving and changing. For example, South America is drifting away from Africa at about the speed your fingernails grow. Earthquakes and volcanoes are reminders of the Earth's instability and changing face.

The Earth's crust is divided into numerous tectonic plates. These push against each other, rise and fall, tilt and slide, buckle and crumple, break apart and merge together. As a result, sediments from the bottom of ancient seas can today be found in rocks on the tops of mountains. In fact, the summit of Mount Everest is marine limestone, formed just this way.

For more than half a billion years, photosynthesis has made life possible on Earth. Plants absorb solar energy and use it to convert carbon dioxide and water into oxygen and carbohydrates, such as sugar, starch and cellulose. These carbohydrates and other organic materials eventually settle on the ground and in stream, lake and ocean bottoms.

As these organic materials become more deeply buried, heat and pressure transform them into solid, liquid or gaseous hydrocarbons known as fossil fuels: coal, crude oil or natural gas. Coal is generally formed from the remains of terrestrial (land-based) plants. Oil is typically derived from marine (water-based) plants and animals, mainly algae, that have been gently "cooked" for at least one million years at a temperature between 50° and 150° Celsius. Natural gas can be formed from almost any marine or terrestrial organic materials, under a wide variety of temperatures and pressures.

Due to the force of gravity and the pressure created by the overlying rock layers, oil and natural gas seldom stay in the source rock in which they are formed. Instead, they move through the underground layers of sedimentary rocks until they either escape at the surface or are trapped by a barrier of less permeable rock.
Most of the world's petroleum has been found trapped in porous rocks under relatively impermeable formations. These reservoirs are often long distances away from the original source rock.

A seep occurs when hydrocarbons migrate to the Earth's surface. Over time, huge amounts of these hydrocarbons have escaped into the atmosphere. Flowing water can also wash away hydrocarbons. Sometimes only the lighter, more volatile compounds are removed, leaving behind reservoirs of heavier types of crude oil. The Athabasca oil sands in northeastern Alberta are one example of a petroleum resource that has lost its lighter components or fractions. The tar-like bitumen in the oil sands was formed largely by the effects of bacterial processes, water flows and oxidation on the petroleum in the reservoir.

Petroleum Communication Foundation. Our Petroleum Challenge: Exploring Canada's Oil and Gas Industry, Sixth Edition. Calgary: Petroleum Communication Foundation, 1999. With permission from the Centre for Energy.
http://wayback.archive-it.org/2217/20101208162445/http:/www.abheritage.ca/abresources/inventory/resources_hydrocarbons.html
4.1875
Runoff (surface runoff)

Runoff is an important part of the water cycle. When rain falls from the sky, some of the water is absorbed into the ground and some of the water flows with gravity, usually into a ditch, sewer, lake, stream, river or another body of water. The amount of water that is absorbed into the ground depends on the type of surface covering the ground. In a forest, for example, the ground is mostly soft soil and leaf material, which can absorb more water. On paved streets and sidewalks, however, the rain cannot be absorbed, so much of the rain that falls in a city or residential area will become runoff.

Runoff can easily carry pollutants into major bodies of water. City streets often have a huge amount of automobile pollution. Farms may use pesticides and fertilizers, and if they have livestock then there is also a lot of animal waste on farmland. When the rain falls, it washes away a lot of these things, but they don’t just disappear. The runoff might end up in a nearby river, and with it will be the pollution that comes from the farms and the city. Not only can some of these chemicals be harmful to living creatures (including humans!), but they can also be harmful to ecosystems by introducing too much of a certain type of nutrient (like phosphorus or nitrogen in fertilizers). This can severely disrupt the balance of an ecosystem by allowing some species to thrive at the expense of others.
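The idea that the fraction of rain becoming runoff depends on the surface can be sketched with a simple runoff-coefficient model. The coefficients below are illustrative assumptions chosen for this sketch, not measured values:

```python
# Toy runoff estimate in the style of the "rational method":
#   runoff volume ~ coefficient x rainfall depth x area
# Coefficients are illustrative assumptions, not measured values.

RUNOFF_COEFFICIENTS = {
    "forest": 0.10,     # soft soil and leaf litter absorb most rainfall
    "grassland": 0.35,
    "pavement": 0.90,   # paved surfaces absorb almost nothing
}

def runoff_volume_m3(surface: str, rainfall_mm: float, area_m2: float) -> float:
    """Estimate runoff volume (m^3) for a rainfall event over one surface type."""
    c = RUNOFF_COEFFICIENTS[surface]
    rainfall_m = rainfall_mm / 1000.0  # convert mm of rain to metres
    return c * rainfall_m * area_m2

# 10 mm of rain on one hectare (10,000 m^2):
print(runoff_volume_m3("forest", 10, 10_000))    # most water soaks in
print(runoff_volume_m3("pavement", 10, 10_000))  # most water runs off
```

The same storm over pavement produces roughly nine times the runoff of the forested plot in this model, which is the mechanism behind urban runoff carrying pollution into rivers.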
http://www2.southeastern.edu/orgs/pbrp/lessons/definitions/runoff.html
4.28125
Early history traces the development of the Somali people to an Arab sultanate, which was founded in the seventh century A.D. by Koreishite immigrants from Yemen. During the 15th and 16th centuries, Portuguese traders landed in present Somali territory and ruled several coastal towns. The sultan of Oman and Zanzibar subsequently took control of these towns and their surrounding territory. Somalia's modern history began in the late 19th century, when various European powers began to trade and establish themselves in the area. The British East India Company's desire for unrestricted harbor facilities led to the conclusion of treaties with the sultan of Tajura as early as 1840. It was not until 1886, however, that the British gained control over northern Somalia through treaties with various Somali chiefs who were guaranteed British protection. British objectives centered on safeguarding trade links to the east and securing local sources of food and provisions for its coaling station in Aden. The boundary between Ethiopia and British Somaliland was established in 1897 through treaty negotiations between British negotiators and King Menelik. During the first two decades of this century, British rule was challenged through persistent attacks led by Mohamed Abdullah. A long series of intermittent engagements and truces ended in 1920 when British warplanes bombed Abdullah's stronghold at Taleex. Although Abdullah was defeated as much by rival Somali factions as by British forces, he was lauded as a popular hero and stands as a major figure of national identity to some Somalis. In 1885, Italy obtained commercial advantages in the area from the sultan of Zanzibar and in 1889 concluded agreements with the sultans of Obbia and Aluula, who placed their territories under Italy's protection. Between 1897 and 1908, Italy made agreements with the Ethiopians and the British that marked out the boundaries of Italian Somaliland.
The Italian Government assumed direct administration, giving the territory colonial status. Italian occupation gradually extended inland. In 1924, the Jubaland Province of Kenya, including the town and port of Kismayo, was ceded to Italy by the United Kingdom. The subjugation and occupation of the independent sultanates of Obbia and Mijertein, begun in 1925, were completed in 1927. In the late 1920s, Italian and Somali influence expanded into the Ogaden region of eastern Ethiopia. Continuing incursions climaxed in 1935 when Italian forces launched an offensive that led to the capture of Addis Ababa and the Italian annexation of Ethiopia in 1936. Following Italy's declaration of war on the United Kingdom in June 1940, Italian troops overran British Somaliland and drove out the British garrison. In 1941, British forces began operations against the Italian East African Empire and quickly brought the greater part of Italian Somaliland under British control. From 1941 to 1950, while Somalia was under British military administration, transition toward self-government was begun through the establishment of local courts, planning committees, and the Protectorate Advisory Council. In 1948 Britain turned the Ogaden and neighboring Somali territories over to Ethiopia. In Article 23 of the 1947 peace treaty, Italy renounced all rights and titles to Italian Somaliland. In accordance with treaty stipulations, on September 15, 1948, the Four Powers referred the question of disposal of former Italian colonies to the UN General Assembly. On November 21, 1949, the General Assembly adopted a resolution recommending that Italian Somaliland be placed under an international trusteeship system for 10 years, with Italy as the administering authority, followed by independence for Italian Somaliland. In 1959, at the request of the Somali Government, the UN General Assembly advanced the date of independence from December 2 to July 1, 1960. 
Meanwhile, rapid progress toward self-government was being made in British Somaliland. Elections for the Legislative Assembly were held in February 1960, and one of the first acts of the new legislature was to request that the United Kingdom grant the area independence so that it could be united with Italian Somaliland when the latter became independent. The protectorate became independent on June 26, 1960; five days later, on July 1, it joined Italian Somaliland to form the Somali Republic. In June 1961, Somalia adopted its first national constitution in a countrywide referendum, which provided for a democratic state with a parliamentary form of government based on European models. During the early post-independence period, political parties reflected clan loyalties, which contributed to a basic split between the regional interests of the former British-controlled north and the Italian-controlled south. There also was substantial conflict between pro-Arab, pan-Somali militants intent on national unification with the Somali-inhabited territories in Ethiopia and Kenya and the "modernists," who wished to give priority to economic and social development and improving relations with other African countries. Gradually, the Somali Youth League, formed under British auspices in 1943, assumed a dominant position and succeeded in cutting across regional and clan loyalties. Under the leadership of Mohamed Ibrahim Egal, prime minister from 1967 to 1969, Somalia greatly improved its relations with Kenya and Ethiopia. The process of party-based constitutional democracy came to an abrupt end, however, on October 21, 1969, when the army and police, led by Maj. Gen. Mohamed Siad Barre, seized power in a bloodless coup. Following the coup, executive and legislative power was vested in the 20-member Supreme Revolutionary Council (SRC), headed by Maj. Gen. Siad Barre as president. 
The SRC pursued a course of "scientific socialism" that reflected both ideological and economic dependence on the Soviet Union. The government instituted a national security service, centralized control over information, and initiated a number of grassroots development projects. Perhaps the most impressive success was a crash program that introduced an orthography for the Somali language and brought literacy to a substantial percentage of the population. The SRC became increasingly radical in foreign affairs, and in 1974, Somalia and the Soviet Union concluded a treaty of friendship and cooperation. As early as 1972, tensions began increasing along the Somali-Ethiopian border; these tensions heightened after the accession to power in Ethiopia in 1973 of the Mengistu Hailemariam regime, which turned increasingly toward the Soviet Union. In the mid-1970s, the Western Somali Liberation Front (WSLF) began guerrilla operations in the Ogaden region of Ethiopia. Fighting increased, and in July 1977, the Somali National Army (SNA) crossed into the Ogaden to support the insurgents. The SNA moved quickly toward Harer, Jijiga, and Dire Dawa, the principal cities of the region. Subsequently, the Soviet Union, Somalia's most important source of arms, embargoed weapons shipments to Somalia. The Soviets switched their full support to Ethiopia, with massive infusions of Soviet arms and 10,000-15,000 Cuban troops. In November 1977, President Siad Barre expelled all Soviet advisers and abrogated the friendship agreement with the U.S.S.R. In March 1978, Somali forces retreated into Somalia; however, the WSLF continues to carry out sporadic but greatly reduced guerrilla activity in the Ogaden. Such activities also were subsequently undertaken by another dissident group, the Ogaden National Liberation Front (ONLF). Following the 1977 Ogaden war, President Barre looked to the West for international support, military equipment, and economic aid. 
The United States and other Western countries traditionally were reluctant to provide arms because of the Somali Government's support for insurgency in Ethiopia. In 1978, the United States reopened the U.S. Agency for International Development mission in Somalia. Two years later, an agreement was concluded that gave U.S. forces access to military facilities in Somalia. In the summer of 1982, Ethiopian forces invaded Somalia along the central border, and the United States provided two emergency airlifts to help Somalia defend its territorial integrity. From 1982 to 1990 the United States viewed Somalia as a partner in defense. Somali officers of the National Armed Forces were trained in U.S. military schools in civilian as well as military subjects. Within Somalia, Siad Barre's regime confronted insurgencies in the northeast and northwest, whose aim was to overthrow his government. By 1988, Siad Barre was openly at war with sectors of his nation. At the President's order, aircraft from the Somali National Air Force bombed the cities in the northwest province, attacking civilian as well as insurgent targets. The warfare in the northwest sped up the decay already evident elsewhere in the republic. Economic crisis, brought on by the cost of anti-insurgency activities, caused further hardship as Siad Barre and his cronies looted the national treasury. By 1990, the insurgency in the northwest was largely successful. The army dissolved into competing armed groups loyal to former commanders or to clan-tribal leaders. The economy was in shambles, and hundreds of thousands of Somalis fled their homes. In 1991, Siad Barre and forces loyal to him fled the capital; he later died in exile in Nigeria. In the same year, Somaliland declared itself independent of the rest of Somalia, with its capital in Hargeisa. 
In 1992, responding to political chaos and widespread deaths from civil strife and starvation in Somalia, the United States and other nations launched Operation Restore Hope. Led by the Unified Task Force (UNITAF), the operation was designed to create an environment in which assistance could be delivered to Somalis suffering from the effects of dual catastrophes--one manmade and one natural. UNITAF was followed by the United Nations Operation in Somalia (UNOSOM). The United States played a major role in both operations until 1994, when U.S. forces withdrew. The prevailing chaos in much of Somalia after 1991 contributed to growing influence by various Islamic groups, including al-Tabliq, al-Islah (supported by Saudi Arabia), and Al-Ittihad Al-Islami (Islamic Unity). These groups, which are among the main non-clan-based forces in Somalia, share the goal of establishing an Islamic state. They differ in their approach; in particular, Al-Ittihad supports the use of violence to achieve that goal and has claimed responsibility for terrorist acts. In the mid-1990s, Al-Ittihad came to dominate territory in Puntland as well as central Somalia near Gedo. It was forcibly expelled from these localities by Puntland forces as well as Ethiopian attacks in the Gedo region. Since that time, Al-Ittihad has adopted a longer term strategy based on integration into local communities and establishment of Islamic schools, courts, and relief centers. After the attack on the United States of September 11, 2001, Somalia gained greater international attention as a possible base for terrorism--a concern that became the primary element in U.S. policy toward Somalia. The United States and other members of the anti-terrorism coalition examined a variety of short- and long-term measures designed to cope with the threat of terrorism in and emanating from Somalia. 
Economic sanctions were applied to Al-Ittihad and to the Al-Barakaat group of companies, based in Dubai, which conducted currency exchanges and remittance transfers in Somalia. The United Nations also took an increased interest in Somalia, including proposals for an increased UN presence and for strengthening a 1992 arms embargo.
http://www.historyofnations.net/africa/somalia.html
4.34375
Transposons are sequences of DNA that can move around to different positions within the genome of a single cell, a process called transposition. In the process, they can cause mutations and change the amount of DNA in the genome. Transposons were also once called "jumping genes", and are examples of mobile genetic elements. They were discovered by Barbara McClintock early in her career, a discovery that earned her a Nobel Prize in 1983.

There are a variety of mobile genetic elements, and they can be grouped based on their mechanism of transposition. Class I mobile genetic elements, or retrotransposons, move in the genome by being transcribed to RNA and then back to DNA by reverse transcriptase, while class II mobile genetic elements move directly from one position to another within the genome using a transposase to "cut and paste" them. Transposons are very useful to researchers as a means to alter DNA inside a living organism. Transposons also make up a large fraction of many genomes, which is evident in the C-values of eukaryotic species.
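The two mechanisms can be illustrated with a toy model in which the genome is just a list of named segments and "TN" marks a transposon. This is a sketch for intuition, not a biological simulator:

```python
# Toy model of the two transposition mechanisms described above.
# The genome is a list of named segments; "TN" marks a transposon.

def class_ii_transpose(genome, src, dest):
    """'Cut and paste' (DNA-mediated): the transposon leaves its old site."""
    tn = genome.pop(src)
    genome.insert(dest, tn)
    return genome

def class_i_transpose(genome, src, dest):
    """'Copy and paste' (retrotransposon): transcribed to RNA, reverse-
    transcribed back to DNA, and inserted elsewhere. The original stays,
    so the genome grows by one copy -- changing the amount of DNA."""
    genome.insert(dest, genome[src])
    return genome

genome = ["geneA", "TN", "geneB", "geneC"]
print(class_ii_transpose(list(genome), 1, 3))  # ['geneA', 'geneB', 'geneC', 'TN']
print(class_i_transpose(list(genome), 1, 3))   # ['geneA', 'TN', 'geneB', 'TN', 'geneC']
```

Note that only the class I (copy-and-paste) mechanism lengthens the genome, which is one reason retrotransposons can account for so much of a species' C-value.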
http://www.biosolutions.info/2012/07/genetic-evidence-transposons.html
4.0625
Cells can divide, and in unicellular organisms, this makes more organisms. In multicellular organisms, cell division is used for growth, development, and repair of the organism. Cell division is controlled by DNA, and exact copies of the DNA must be given to the daughter cells (note the use of the terms "mother" cell and "daughter" cells).

Bacteria reproduce by a simple process called binary fission. They have one chromosome, which is attached to the cell membrane. This chromosome replicates, then the two copies are pulled apart as the cell grows. Eventually the cell pinches in two to make two cells.

Eukaryotes do mitosis. In mitosis, each daughter cell gets about half of the cytoplasm from the mother cell and one set or copy of the DNA. Before cell division occurs, the cell first has to replicate the chromosomes so each daughter cell can have a set. When the chromosomes are replicated and getting ready to divide, they consist of two identical halves called sister chromatids, which are joined by a central region, the centromere. Each chromosome is one long molecule of DNA and special proteins. DNA makes up the genes, and we say that genes are on chromosomes, or that chromosomes contain or are made of genes. Some of the proteins in the chromosomes turn off the genes that are not needed in that cell. For example, while every cell in your body contains exactly the same genes, you don't need your eye-color gene operational in cells in your big toe, nor toenail-shape genes active in cells in your stomach.

Two basic types of cells occur in the bodies of eukaryotes. Somatic cells are general body cells. These have the same number of chromosomes as each other within the body of an organism. The number of chromosomes in somatic cells is consistent among organisms of the same species, but varies from species to species. These chromosomes come in pairs, where one chromosome in each pair is from the mother and one is from the father.
Actually, since most organisms have more than one pair of chromosomes, it would also be correct to say that the organism received one set of chromosomes from its mother and one matching set from its father, and that these sets match in pairs.

The other type of cells found in eukaryotes is gametes or sex cells, consisting of eggs in females and sperm in males. These special reproductive cells have only one set (half as many) of chromosomes, consisting of one chromosome from each pair. In humans ONLY, the somatic cells have 46 chromosomes arranged in 23 pairs (= two sets of 23 each), while gametes have 23 individual chromosomes (= one set). In fruit flies, somatic cells have 8 chromosomes (= 4 pairs or 2 sets) and gametes have 4 chromosomes (= 1 set). Geneticists use the term “-ploid” to refer to one set of chromosomes in an organism, and that term is typically combined with another word stem that describes the number of sets of chromosomes present. For example, a cell with one set of chromosomes is called haploid, a cell with two sets of chromosomes is diploid, and a cell with four sets of chromosomes (not usually a “normal” condition, but sometimes possible) is tetraploid.

Technically, mitosis is specifically the process of division of the chromosomes, while cytokinesis is officially the process of division of the cytoplasm to form two cells. In most cells, cytokinesis follows or occurs along with the last part of mitosis.

Remember centrioles? They consist of nine sets of three microtubules, occur in animal cells only, and are involved in division of the chromosomes. Each animal cell has a pair of centrioles located just outside the nucleus. The two centrioles in the pair are oriented at right angles to each other. Just before mitosis, the centrioles replicate, so the cell now has four (two sets of two) as it starts mitosis.

The stages in mitosis include (interphase), prophase, metaphase, anaphase, and telophase.
Remembering IPMAT or Intelligent People Meet At Three (or is that Twelve?) can help you remember the stages in order. Strictly speaking, interphase is the stage in which a cell spends most of its life and is not part of the process of mitosis, per se, but is usually discussed along with the other stages. Interphase may appear to be a resting stage, but cell growth, replication of the chromosomes, and many other activities are taking place during this time. Near the end of interphase, just before the cell starts into the other stages of mitosis, if the cell is an animal cell, the centrioles replicate so there are two pairs. At this time, the strands of DNA that make up the chromosomes are unwound within the nucleus and do not appear as distinct chromosomes. Thus, at this stage, the genetic material is often referred to as chromatin. From here, the cell goes through all other stages of mitosis.

In prophase, the chromosomes start to coil, shorten, and become distinct. In animals, the centrioles begin to migrate to the poles of the cell. The mitotic spindle or polar fibers begin to form from the poles of the cell towards the equator. In animals, this starts as asters around the centrioles. Eventually, the spindle mechanism finishes growing toward the equator and interacts with the centromeres to line up and, later, move the chromosomes. Also at this time, the nuclear envelope starts to disintegrate.

Metaphase is characterized by the lining up of the chromosomes along the equator of the cell, or what is called the metaphase plate. The nuclear envelope has totally disintegrated, and the polar fibers have reached the centromeres of the chromosomes and have begun interacting with them.

In anaphase, the sister chromatids separate at the centromeres, and thus can now be called chromosomes. These are pulled to the poles of the cell by the mitotic spindle.

In telophase, the new daughter nuclei and nuclear envelopes start to reform and the chromosomes uncoil.
Telophase frequently includes the start of cytokinesis. In animal cells, cytokinesis starts with a cleavage furrow or indentation around the middle that eventually pinches in, dividing the cell in two. In plants, cytokinesis begins with a series of vesicles that form at the equator of the cell, which subsequently join until the cell is divided in two.

One interesting offshoot of the study of mitosis is tissue culture. In tissue culture, the cells to be studied are removed from the organism's body and grown on a sterile, artificial medium. When grown in this manner, normal cells typically grow one layer thick on the surface of the sterile medium and will undergo only 20 to 50 mitotic divisions, then cease to be able to reproduce. Also, typically, when all cells are touching neighbors all around, they stop dividing. This phenomenon is known as contact inhibition. In sharp contrast, cancer cells will not stop growing with one layer on the surface of the medium, but grow multiple layers and fill the dish. They do not exhibit contact inhibition: they don't stop growing when touching on all sides. Also, cancer cells appear to have no limit to the number of generations they can produce.

Back in the early 1950s, a biopsy of cervical cancer was removed from a woman named Henrietta Lacks and grown in tissue culture. While Ms. Lacks died long ago, HeLa cells are a widely-cultured research organism available through a number of biological supply companies. Within the past few years, an interesting issue has arisen regarding these cells: are they still human? While HeLa cells currently being grown in tissue culture are descendants of the original human cancer cells, by now they have mutated so much that it's questionable whether they can still be considered human tissue, especially since they were abnormal cancer cells to begin with.
Tissue culture is now a widely-used means of more effectively and quickly finding the right drugs to treat cancer. Typically, in the past, people with cancer were subjected to one toxic drug after another in hopes of finding one that would be effective against that particular cancer. Unfortunately, by the time the right drug was found, it frequently was too late to do any good. Now, when a person is diagnosed with cancer, a biopsy can be taken and a number of cultures of cells can be grown. Each of these cultures can be subjected to a different drug, thus enabling doctors to find the right drug sooner, while it may still be of help, and without needlessly subjecting the person to many kinds of toxic chemicals.

Within our bodies, different cells do mitosis at different rates. Skin cells continuously do mitosis and divide, thus our skin is constantly renewed and repaired. In sharp contrast, most nerve cells stop doing mitosis soon after birth (Caution: overconsumption of alcohol can kill nerve/brain cells, and they can never be replaced; they will never “grow back.”). Liver cells are somewhere in between. In a healthy adult, liver cells normally do not divide, but can divide to repair minor damage. Major liver damage or a disease like cirrhosis is too much damage to be repaired through mitosis. In contrast, it is possible to use one adult liver to do liver transplants for four babies, and if all goes well, these pieces can eventually regenerate whole livers.

Copyright © 1996 by J. Stein Carter. All rights reserved.
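The chromosome-set arithmetic described earlier (humans: 46 = 2 sets of 23; fruit flies: 8 = 2 sets of 4) can be sketched in a few lines. The helper names here are our own, not standard genetics software:

```python
# Chromosome-set arithmetic from the text. The species numbers are the ones
# given above (human 46, fruit fly 8); the helper names are our own.

PLOIDY_NAMES = {1: "haploid", 2: "diploid", 4: "tetraploid"}

def gamete_count(somatic_count: int) -> int:
    """Gametes carry one set: half the chromosome number of a diploid somatic cell."""
    assert somatic_count % 2 == 0, "diploid chromosomes come in pairs"
    return somatic_count // 2

def ploidy_name(n_sets: int) -> str:
    """Name for a cell with the given number of chromosome sets."""
    return PLOIDY_NAMES.get(n_sets, f"{n_sets}-ploid")

print(gamete_count(46))   # human gametes: 23 chromosomes (one set)
print(gamete_count(8))    # fruit fly gametes: 4 chromosomes (one set)
print(ploidy_name(2))     # somatic cells with two sets are diploid
```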
http://biology.clc.uc.edu/courses/bio104/mitosis.htm
4.28125
In mechanical engineering, backlash is the striking back of connected wheels in a piece of mechanism when pressure is applied. Another source defines it as the maximum distance through which one part of something can be moved without moving a connected part. In the context of gears, backlash, sometimes called lash or play, is clearance between mating components, or the amount of lost motion due to clearance or slackness when movement is reversed and contact is re-established. For example, in a pair of gears, backlash is the amount of clearance between mated gear teeth.

Theoretically, the backlash should be zero, but in actual practice some backlash must be allowed to prevent jamming. It is unavoidable for nearly all reversing mechanical couplings, although its effects can be negated. Depending on the application, it may or may not be desirable. Reasons for requiring backlash include allowing for lubrication, manufacturing errors, deflection under load and thermal expansion. Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, and runout. The greater the accuracy, the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distances between the gears.
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by:

    b = t_ideal − t_actual

where b is the backlash due to tooth thickness modifications, t_ideal is the tooth thickness on the pitch circle for ideal gearing (no backlash), and t_actual is the actual tooth thickness.

Backlash, measured on the pitch circle, due to operating center modifications is defined by:

    b = 2 Δc tan(φ)

where b is the backlash due to operating center distance modifications, Δc is the difference between actual and ideal operating center distances, and φ is the pressure angle.

Standard practice is to make allowance for half the backlash in the tooth thickness of each gear. However, if the pinion (the smaller of the two gears) is significantly smaller than the gear it is meshing with, then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth. The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired.

In a gear train, backlash is cumulative. When a gear train is reversed, the driving gear is turned a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate calculation from the small errors introduced at each change of direction; at large power outputs backlash sends shocks through the whole system and can damage teeth and other components.

Anti-backlash designs

In certain applications, backlash is an undesirable characteristic and should be minimized.

Gear trains where positioning is key but power transmission is light

The best example here is an analog radio tuning dial where one may make precise tuning movements both forwards and backwards. Specialized gear designs allow this.
One of the more common designs splits the gear into two gears, each half the thickness of the original. One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by small coil springs that rotate the free gear relative to the fixed gear. In this way, the spring tension rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated. Leadscrews where positioning and power are both important Another area where backlash matters is in leadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context is screw threads. The linear sliding axes (machine slides) of machine tools are an example application. Most machine slides for many decades, and many even today, were simple-but-accurate cast iron linear bearing surfaces, such as a dovetail slide or box slide, with an Acme leadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, the way that machinists compensate for the effect of backlash is to approach all precise positions using the same direction of travel. This means that if they have been dialing left, and now they want to move to a rightward point, they move rightward all the way past it and then dial leftward back to it. The setups, tool approaches, and toolpaths are designed around this constraint. The next step up from the simple nut is a split nut, whose halves can be adjusted and locked with screws so that one side rides leftward thread faces, and the other side rides rightward faces. 
Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate all backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore this idea can't totally obviate the always-approach-from-the-same-direction concept; but backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient and in some non-precise work is enough to allow one to ignore the backlash (i.e., act as if there weren't any). CNCs can be programmed to use the always-approach-from-the-same-direction concept, but that is not the normal way they are used today, because hydraulic anti-backlash split nuts and newer forms of leadscrew other than Acme/trapezoidal, such as recirculating ball screws or duplex worm gear sets, effectively eliminate the backlash. The axis can move in either direction without the go-past-and-come-back motion. The simplest CNCs, such as microlathes or manual-to-CNC conversions, use just the simple old nut-and-Acme-screw drive. The controls can be programmed with a parameter value entered for the total backlash on each axis, and the machine will automatically add that much to the program's distance-to-go when it changes directions. This [programmatic] "backlash compensation", as it's called, is a useful trick for capital-frugal applications. "Professional-grade" CNCs, though, use the more expensive backlash-eliminating drives mentioned above. This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with ease and constant rigidity. 
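The parameter-based backlash compensation described above can be sketched as a toy model (this is not any real controller's firmware, just the idea: on a direction reversal, add the stored backlash to the distance-to-go):

```python
class Axis:
    """Toy single-axis controller with programmatic backlash compensation.

    When the commanded direction reverses, the stored per-axis backlash is
    added to the raw move so the slide actually reaches the commanded point
    instead of stopping short by the lost motion.
    """

    def __init__(self, backlash: float):
        self.backlash = backlash
        self.last_direction = 0  # -1, +1, or 0 (no move yet)

    def compensated_move(self, distance: float) -> float:
        """Return the motor travel needed for a commanded slide move."""
        direction = (distance > 0) - (distance < 0)
        if direction == 0:
            return 0.0
        travel = distance
        if self.last_direction != 0 and direction != self.last_direction:
            # Direction reversal: first take up the lost motion.
            travel += direction * self.backlash
        self.last_direction = direction
        return travel

x = Axis(backlash=0.002)  # 2 thousandths of an inch, as in the text
print(x.compensated_move(+0.500))            # first move: no compensation
print(x.compensated_move(+0.250))            # same direction: unchanged
print(round(x.compensated_move(-0.100), 4))  # reversal: 0.002 added -> -0.102
```

This is exactly why the compensation only helps positioning, not rigidity: the slide still sits on one flank of the nut, and cutting forces can push it across the gap.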
Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in a closed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and prone to oscillation.

Minimum backlash

Minimum backlash is the minimum transverse backlash at the operating pitch circle allowable when the gear tooth with the greatest allowable functional tooth thickness is in mesh with the pinion tooth having its greatest allowable functional tooth thickness, at the tightest allowable center distance, under static conditions.

Backlash variation

Backlash variation is the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears.

Gear couplings use backlash to allow for angular misalignment. Backlash is undesirable in precision positioning applications such as machine tool tables. It can be minimized by tighter design features such as ball screws instead of leadscrews, and by using preloaded bearings. A preloaded bearing uses a spring or other compressive force to maintain bearing surfaces in contact despite reversal of direction.

There can be significant backlash in unsynchronized transmissions because of the intentional gap between dog gears (also known as dog clutches). The gap is necessary so that the driver or electronics can engage the gears easily while synchronizing the engine speed with the driveshaft speed. If the clearance were small, it would be nearly impossible to engage the gears because the teeth would interfere with each other in most configurations. In synchronized transmissions, synchromesh solves this problem.

References
- Backlash, archived from the original on 2010-02-09, retrieved 2010-02-09.
- Jones, Franklin Day; Ryffel, Henry H. (1984), Gear Design Simplified (3rd ed.), Industrial Press Inc., p. 20, ISBN 978-0-8311-1159-5.
- Adler, Michael, Meccano Frontlash Mechanism, archived from the original on 2010-02-09, retrieved 2010-02-09.
- Gear Nomenclature, Definition of Terms with Symbols. American Gear Manufacturers Association. p. 72. ISBN 1-55589-846-7. OCLC 65562739. ANSI/AGMA 1012-G05.
http://en.wikipedia.org/wiki/Backlash_(engineering)
4.15625
Refraction, Dispersion, Reflection and Diffraction

The change of angle which occurs at the interface when light traveling through a vacuum or air enters a medium such as glass or water is called refraction. The amount of refraction that takes place is expressed by a quantity called the index of refraction or refractive index. If the angle of incidence in the figure is i and the angle of refraction is r, these quantities are related by:

Index of refraction n = sin i / sin r

n is independent of the angle of incidence. Strictly speaking, it is called the index of refraction of the medium on the side of the transmission with respect to the side of incidence. If sunlight is passed through a prism, it is split up into a continuous spectrum, commonly thought of as being made up of seven colours: violet, indigo, blue, green, yellow, orange and red. This is caused by the difference between the angles of refraction of light of different wavelengths. Different types of optical glass are made for lenses; some have a high index of refraction while others have a low index of refraction. The splitting of light of different colors due to the different indices of refraction at different wavelengths when the light enters a lens or prism is called dispersion. The wavelength of monochromatic visible light varies from about 400 nm (1 nm = 1 nanometer = 1/1,000,000,000 meter) for violet to about 700 nm for red. This difference between wavelengths is what gives us our sense of color. There are different types of optical glass, some producing high dispersion and others producing low dispersion. The relative refraction of the many different wavelengths between blue and red is called partial dispersion. In lenses made of normal optical glass, this partial dispersion has a certain definite trend. However, it is possible to have glass of low dispersion and large partial dispersion, or high dispersion and small partial dispersion; such glass is called abnormal optical glass.
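The relation n = sin i / sin r can be rearranged to find the refraction angle for a given incidence angle. A small sketch (the glass index of 1.5 is a typical assumed value, not a figure from the text):

```python
import math

def refraction_angle(incidence_deg: float, n: float) -> float:
    """Angle of refraction from Snell's law, n = sin(i) / sin(r),
    so r = asin(sin(i) / n). Assumes light entering the denser medium (n > 1)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

# Light hitting glass (n ~ 1.5) at 30 degrees bends toward the normal:
print(round(refraction_angle(30.0, 1.5), 2))  # → 19.47
```

Because n varies slightly with wavelength, running this with a marginally larger n for violet than for red gives slightly different angles, which is exactly the dispersion described above.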
In reflection, in contrast to refraction, light striking a medium such as glass is turned back at the interface rather than passing through it. If the interface is smooth relative to the wavelength of the light, the angle of incidence equals the angle of reflection, but if the roughness of the interface is on the same scale as the wavelength of the light or smaller, the reflected light is scattered in many directions. To keep the amount of light reflected by a lens to a minimum and maximize the amount of light that passes through, it is important for the front surface of the lens to be coated. Measures such as anti-reflective paint, electrostatic powder, etc. are used to prevent fogging due to reflection off of the metallic surface of the lens barrel. The amount of light passing through a photographic lens is adjusted by a diaphragm. The process by which light gets in behind the edge of the diaphragm is called diffraction. This causes the image on the film to be lower in contrast and resolution and to lose sharpness. The diffraction effect increases as the f-stop value becomes larger. The occurrence of diffraction depends not only on the diameter of the diaphragm opening, but also on the wavelength of the light, the focal length of the lens and the aperture ratio.
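The claim that diffraction worsens as the f-stop value becomes larger can be illustrated with the standard Airy-disk approximation (blur diameter ≈ 2.44 × wavelength × f-number). That formula is the usual textbook estimate, not something stated in the text above:

```python
def airy_disk_diameter_um(wavelength_nm: float, f_number: float) -> float:
    """Approximate diffraction blur (Airy disk diameter) in micrometers:
    d ~ 2.44 * wavelength * f-number."""
    return 2.44 * wavelength_nm * f_number / 1000.0

# Green light (550 nm) at several apertures: blur grows with the f-stop.
for n in (2.8, 8.0, 22.0):
    print(f"f/{n}: {airy_disk_diameter_um(550.0, n):.1f} um")
```

At f/22 the blur spot is an order of magnitude larger than at f/2.8, which is why stopping a lens far down softens the image even though depth of field increases.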
http://primegraphics.co.uk/refraction,-dispersion,-reflection-and-diffraction.html
4.09375
CARYN E. NEUMANN

The Berlin Tunnel involved an attempt by American and British intelligence to adjust to the late 1940s Soviet shift from wireless transmissions to landlines by tapping Soviet and East German communication cables via a tunnel dug below the communist sector of the German city. The tunnel, which lasted from March 1955 until its discovery by Soviet troops in April 1956, provided difficult-to-obtain military intelligence, as well as information about scientific and political developments behind the Iron Curtain. The brainchild of the CIA, the Berlin Tunnel aimed to collect Soviet intelligence passed along an underground hub of telecommunications cables adjacent to the U.S. sector of the divided city. While Operation Gold had been conceived in 1951, detailed plans were not in place until August 1953 and the concept did not receive CIA approval until January 1954. The delays centered on the difficulty of discovering exactly which cables were used for Soviet communications and where these cables were located. While the CIA relied upon a number of East German sources to get information, a contact in the long-distance department of an East Berlin post office proved especially useful by providing books that identified cable users. Another contact in the East German Ministry of Post and Telecommunications provided detailed official maps of Soviet cable lines. Tunnel construction then began early in 1954. The CIA, using the U.S. Army as a front, designed a warehouse that led to a subterranean passageway about 1800 feet long (900 feet into Soviet territory) and 16.5 feet deep. A West Berlin contractor built the warehouse under the misconception that the unusually deep basement and ramps to accommodate forklifts were part of a new and improved quartermaster warehouse design. A detachment of U.S.
Army engineers dug the tunnel, but the British army drove the vertical shaft from the end of the tunnel to the target cables and British telecommunications experts made the actual tap. In order to disguise evidence of digging, the army installed washers and dryers on site to clean the fatigues of the construction workers. As a defense against possible Soviet attackers, a heavy torchproof steel door separated the preamplification chamber, where the signals were isolated for recording, and the vertical shaft of the tap chamber. A microphone in the tap chamber permitted security personnel to monitor any activity in the area. Sandbags along the tunnel walls muffled sounds and served as shelves. Construction of the entire tunnel complex ended in March 1955 and the taps began in May. The KGB soon became aware of Operation Gold through George Blake, a Dutch-born British double agent for the Soviets who entered MI-6 in 1953. Blake, employed in a technical division, gave information about the tunnel to the KGB when the project was still in the planning stages. In order to attack the tunnel, the Soviets would have to compromise Blake, and they found it preferable to sacrifice some information rather than their valuable agent. The KGB did not inform anyone in Germany, including the East Germans or the Soviet users of the cable lines, about the taps. When Blake received a transfer in 1955, the Soviets were free to "discover" the tunnel. Soviet and American accounts of the tunnel discovery do not match, with the Soviets creating a fanciful and widely-circulated account of Soviet technicians surprising Americans as they sipped coffee in the tunnel. In reality, with Blake safely out of the way, Soviet Premier Nikita Khrushchev planned to use the tunnel to score propaganda points, but he did not wish to embarrass the British government on the eve of his visit to the island nation. He planned to emphasize the American role in the tunnel while downplaying British involvement.
Accordingly, Soviet troops began to dig on the night of April 21, 1956. American personnel, using night vision equipment, detected 40 to 50 Soviet soldiers digging at three- to five-foot intervals. Given ample warning, the Americans retreated behind the steel door. The Soviets, unable to open the door, dug through an adjacent wall to get into the preamplification chamber. Once inside, they cut the tap cables and the microphone went dead. Although it came to an embarrassing end, the Berlin Tunnel counts as a successful intelligence operation. The American and British governments used 50,000 reels of tape to capture 443,000 fully transcribed conversations (368,000 Soviet and 75,000 East German), which in turn led to 1,750 intelligence reports. Besides revealing the latest developments in Soviet atomic research, the tapes indicated disagreements between the Soviets and the East Germans over the status of West Berlin. Despite Soviet claims to the contrary, the tunnel provided much more than carefully planned misinformation.

FURTHER READING:

Miller, Nathan. Spying for America: The Hidden History of U.S. Intelligence. New York: Paragon House, 1989.

Murphy, David E., Sergei A. Kondrashev, and George Bailey. Battleground Berlin: CIA vs. KGB in the Cold War. New Haven: Yale University Press, 1997.
http://www.faqs.org/espionage/Ba-Bl/Berlin-Tunnel.html
4.09375
Detailed calculations show that electrons passing through a ‘turbine’ in which one type of carbon nanotube (CNT) is suspended inside another type of CNT would cause the inner CNT to rotate, forming an electron ‘windmill’ or turbine. The researchers also suggest that atoms or molecules could be pumped through the spinning inner CNT to form patterns of atoms or molecules—a nanotech inkjet printer. From NewScientist.com news service, written by Kate McAlpine: “‘Electron turbine’ could print designer molecules“, via KurzweilAI.net

A carbon nanotube that spins in a current of electrons, like a wind turbine in a breeze, could become the world’s smallest printer or shrink computer memory, UK researchers say. The design is simple. A carbon nanotube 10 nanometres long and 1 nm wide is suspended between two others, its ends nested inside them to form a rotating joint. When a direct current is passed along the tubes, the central one spins around. That design has as yet only been tested using advanced computer simulations by Colin Lambert and colleagues at Lancaster University, Lancashire, UK. But Adrian Bachtold of the Catalan Institute for Nanotechnology, who was not involved in the work, intends to build the electron turbines and says it should be straightforward. Researchers have made or designed a range of small-scale motors in recent years, using everything from DNA to light-sensitive molecules to cell-transport proteins. …The Lancaster researchers say their motor could be used to pump atoms and molecules through the spinning middle tube. Multiple pumps could precisely control a chemical reaction, driving atoms in a pattern to engineer new molecules. “It’s like a nanoscale inkjet printer,” says Lambert.

The New Scientist article includes a link to the preprint of the research publication. The preprint presents the calculations on electron flow through the CNTs that justify the conclusion that the inner CNT will rotate.
However, as far as I can see, it does not provide much information about how this device could be used to arrange atoms and molecules into patterns for designer molecules. The only statement I could find on this topic: For example, if the electrical contacts in Fig. 1 are replaced by reservoirs of atoms or molecules and a pressure difference applied to drive the atoms or molecules from left to right, then the resulting transfer of angular momentum may also be sufficient to drive the motor, as could a flux of phonons resulting from a temperature difference between the ends of the device. No doubt the researchers have ideas for how to build a molecule printer beyond what they present in this paper. Let’s hope that they follow with more on this concept and that someone soon succeeds in building such a device and testing it as a molecule printer.
http://www.foresight.org/nanodot/?p=2766
4.25
The lessons presented in McGraw-Hill Health focus on three major domains of health: physical health, emotional and intellectual health, and social health. There are ten major health strands that students can relate to. These include: personal health; growth and development; emotional and intellectual health; family and social health; nutrition; physical activity and fitness; disease prevention and control; alcohol, tobacco and drugs; safety, injury, and violence prevention; and community and environmental health. Through the blending of content and activities, instructional emphasis will be placed on the acquisition of six major life skills that children need to make wise health choices and to adopt health behaviors that will contribute to their well-being. These life skills are: make decisions, set goals, obtain help, manage stress, practice refusal skills, and resolve conflicts in a variety of situations, both at home and at school. When children take responsibility for their own health attitudes and behaviors, they begin to develop coping strategies that will help them throughout their lives. Through the learning of the life skills, the students will develop five abilities that will also help them in their overall development of good health. These abilities are: take responsibility, develop self-esteem, respect others, communicate effectively, and think critically. Students will have many opportunities to practice these abilities and to make the connection that each one has to good health. Through the lessons, life skills, and abilities, students will have a solid foundation for good health.
http://www.ccsd59.org/g345health
4.0625
Main part of a prototype Nuclear Magnetic Resonance machine, Europe, 1970-1975

This is the magnet and electronic circuit for a prototype Nuclear Magnetic Resonance (NMR) machine used in medicine, commonly called MRI (Magnetic Resonance Imaging). MRI uses NMR signals, produced with high-frequency radio waves, to build up a picture of the human body. MRI does not expose the body to ionizing radiation or invasive surgery, and it can image soft tissues more effectively than X-ray-based methods.

Glossary: NMR machine

Magnetic Resonance Imaging (MRI) is a technique for producing high quality images of internal organs and tissues. MRI uses radio waves to achieve its results. It is particularly effective in detecting cancers. Nuclear magnetic resonance is a technique used to detect what chemicals make up a sample containing unknown materials, or proportions of material. The sample is exposed to radio waves, and the frequency of electromagnetic energy that the sample absorbs is recorded. Because different atoms absorb unique frequencies of radiation, it is possible to determine what sort of atoms are present in the sample. In medicine, NMR imaging is now known as Magnetic Resonance Imaging (MRI).
http://www.sciencemuseum.org.uk/broughttolife/objects/display.aspx?id=6014&image=2
4.28125
Students hop, skip, or jump around a circle as they create healthy dinner menus. Students will identify all five food groupings and recognize the importance of eating a variety of healthy foods for dinner.

Materials: one note card and pencil for each student

- Ask the students to form a circle.
- Next, ask them what food groupings are (categories of different types of foods based on what they provide for and how they affect our bodies).
- Then, ask or tell them the five food groupings (fruits; vegetables; milk and milk products; grains; and meats, beans, and nuts). Ask for a few examples of healthy foods from each grouping (see below).
- Remind them it is important to eat a variety of foods because each one does something different and important for our bodies.
- Give each student a piece of paper and a pencil.
- Tell them you are going to play "Dinner Menu" and they are dieticians. Explain that dieticians are nutrition experts who can help people plan their diets.
- Tell them to write their name on their note card and then leave it (and their pencil) on the floor in their spot. Explain that on your signal they should move around the area using a locomotor movement you name (e.g. skip, hop on one foot, jump, slide, etc.).
- When you say "MENU", students should find the nearest free note card and pencil. Name one of the five food groupings (fruits; vegetables; milk and milk products; grains; or meats, beans, and nuts) and have them write down one healthy dinner food in that food grouping, fold the paper over, put the pencil and note card face down, and begin moving around the play area again.
- You can change movements and directions to keep students interested. Go through each of the five food groupings.
- Ask the students to return to their original spots. Have them share and discuss their menus.

Children should be encouraged to consume a variety of nutrient-rich foods low in fat and added sugar.
There are five food groupings:
- Milk and milk products grouping—contains vitamin D and calcium, which keep bones and teeth strong. Includes low-fat or skim milk, yogurt, and cottage, cheddar, mozzarella, and gouda cheese.
- Vegetable grouping and fruit grouping—contain vitamins A, B, and C, which make eyes sparkle, skin smooth, and help fight off colds; also contain fiber, which helps the body digest food, keeps teeth and gums healthy, and helps cuts heal quickly. Includes raspberries, apples, kiwi, watermelon, peas, carrots, spinach, and squash.
- Meat, beans, and nuts grouping—contains iron, which makes blood healthy, helps brains grow, and builds muscle. Includes peanut butter, grilled chicken, turkey, salmon, and tuna.
- Grains grouping—contains carbohydrates, which give the body energy. Includes whole wheat pasta, bread, and brown rice.

"Slow" foods refer to foods high in fat and added sugar, which can slow the body down.

Less Healthy ("Slow") Dinner Foods and Drinks:
- fried fish sticks
- high-fat pepperoni pizza
- General Tso's chicken
- sweet & sour chicken
http://nyrrf.org/ycr/eat/activity/grade5/5d2.asp
4.15625
What It Is: A net importer is a country that buys more from other countries than it sells to other countries. Often, countries are net importers in some industries (natural gas, for example) but net exporters in others. How It Works/Example: Net imports are measured by comparing the value of the goods imported over a specific time period to the value of similar goods exported during that period. The formula for net imports is: Net Imports = Value of Imports - Value of Exports For example, let's suppose Canada sold $3 billion of gasoline to other countries last year, but it also bought $7 billion of gasoline from other countries last year. Using the formula above, Canada's net gasoline imports are: Net Imports = $7 billion - $3 billion = $4 billion In this example, Canada is a net importer of gasoline. Why It Matters: When the value of goods exported is higher than the value of goods imported (a net exporter), the country is said to have a positive balance of trade for the period. Conversely, a country that imports more goods than it exports (a net importer) has a negative balance of trade. When taken as a whole, this in turn can be an indicator of a country's savings rate, future exchange rates, and to some degree its self-sufficiency (though economists constantly debate the idea).
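The formula above is trivially expressed in code; a small sketch using the article's Canada example:

```python
def net_imports(imports_value: float, exports_value: float) -> float:
    """Net Imports = Value of Imports - Value of Exports.
    Positive result => net importer; negative result => net exporter."""
    return imports_value - exports_value

# Canada's gasoline trade from the example (values in billions of dollars):
result = net_imports(imports_value=7.0, exports_value=3.0)
print(result)  # → 4.0, so Canada is a net importer of gasoline
```

A negative result for the same function would indicate a net exporter, i.e. a positive balance of trade in that good.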
http://www.investinganswers.com/financial-dictionary/economics/net-importer-2459
4.0625
The Big Bang

The Big Bang theory states that the universe arose from a singularity of virtually no size, which gave rise to the dimensions of space and time, in addition to all matter and energy. At the beginning of the Big Bang, the four fundamental forces began to separate from each other. Early in its history (10^-36 to 10^-32 seconds), the universe underwent a period of short, but dramatic, hyper-inflationary expansion. The cause of this inflation is unknown, but was required for life to be possible in the universe. Quarks and antiquarks combined to annihilate each other. Originally, the ratio of quarks to antiquarks was expected to be exactly equal to one, since neither would be expected to have been produced in preference to the other. If the ratio were exactly equal to one, the universe would have consisted solely of energy - not very conducive to the existence of life. However, recent research showed that the charge–parity violation could have resulted naturally given the three known masses of quark families.[1] However, this just pushes the fine tuning down a level, to ask why quarks display the masses they have. Those masses must be fine tuned in order to achieve a universe that contains any matter at all.

Large, just right-sized universe

Even so, the universe is enormous compared to the size of our Solar System. Isn't the immense size of the universe evidence that humans are really insignificant, contradicting the idea that a God concerned with humanity created the universe? It turns out that the universe could not have been much smaller than it is in order for nuclear fusion to have occurred during the first 3 minutes after the Big Bang. Without this brief period of nucleosynthesis, the early universe would have consisted entirely of hydrogen.[2] Likewise, the universe could not have been much larger than it is, or life would not have been possible.
If the universe were just one part in 10^59 larger,[3] the universe would have collapsed before life was possible. Since there are only 10^80 baryons in the universe, this means that an addition of just 10^21 baryons (about the mass of a grain of sand) would have made life impossible. The universe is exactly the size it must be for life to exist at all.

Early evolution of the universe

Cosmologists assume that the universe could have evolved in any of a number of ways, and that the process is entirely random. Based upon this assumption, nearly all possible universes would consist solely of thermal radiation (no matter). Of the tiny subset of universes that would contain matter, a small subset would be similar to ours. A very small subset of those would have originated through inflationary conditions. Therefore, universes that are conducive to life "are almost always created by fluctuations into the[se] 'miraculous' states," according to atheist cosmologist Dr. L. Dyson.[4]

Just right laws of physics

The laws of physics must have values very close to those observed or the universe does not work "well enough" to support life. What happens when we vary the constants? The strong nuclear force (which holds atoms together) has a value such that when two hydrogen atoms fuse, 0.7% of the mass is converted into energy. If the value were 0.6%, then a proton could not bond to a neutron, and the universe would consist only of hydrogen. If the value were 0.8%, then fusion would happen so readily that no hydrogen would have survived from the Big Bang. Other constants must be fine-tuned to an even more stringent degree. The cosmic microwave background varies by one part in 100,000. If this factor were slightly smaller, the universe would exist only as a collection of diffuse gas, since no stars or galaxies could ever form. If this factor were slightly larger, the universe would consist solely of large black holes.
Likewise, the ratio of electrons to protons cannot vary by more than 1 part in 10^37 or else electromagnetic interactions would prevent chemical reactions. In addition, if the ratio of the electromagnetic force constant to the gravitational constant were greater by more than 1 part in 10^40, then electromagnetism would dominate gravity, preventing the formation of stars and galaxies. If the expansion rate of the universe were 1 part in 10^55 less than what it is, then the universe would have already collapsed. The most recently discovered physical law, the cosmological constant or dark energy, is the closest to zero of all the physical constants. In fact, a change of only 1 part in 10^120 would completely negate the effect.

Universal probability bounds

"Unlikely things happen all the time." This is the mantra of the anti-design movement. However, there is an absolute physical limit for improbable events to happen in our universe. The universe contains only 10^80 baryons and has only been around for 13.7 billion years (10^18 sec). Since the smallest unit of time is Planck time (10^-45 sec),[5] the lowest probability event that can ever happen in the history of the universe is:

10^80 x 10^18 x 10^45 = 10^143

So, although it would be possible that one or two constants might require unusual fine-tuning by chance, it would be virtually impossible that all of them would require such fine-tuning. Some physicists have indicated that any of a number of different physical laws would be compatible with our present universe. However, it is not just the current state of the universe that must be compatible with the physical laws. Even more stringent are the initial conditions of the universe, since even minor deviations would have completely disrupted the process. For example, adding a grain of sand to the weight of the universe now would have no effect. However, adding even this small amount of weight at the beginning of the universe would have resulted in its collapse early in its history.
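The arithmetic of that probability bound, checked in code using the article's own figures (the variable names are mine):

```python
# Universal probability bound as the article computes it:
baryons = 10**80             # baryons in the observable universe
age_seconds = 10**18         # ~13.7 billion years expressed in seconds
planck_per_second = 10**45   # the article's figure for Planck-time ticks per second

bound = baryons * age_seconds * planck_per_second
print(bound == 10**143)  # → True
```

Note the exponents simply add: 80 + 18 + 45 = 143, which is all the displayed equation asserts.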
What do cosmologists say?

Even though many atheists would like to dismiss such evidence of design, cosmologists know better, and have made statements such as the following, which reveal the depth of the problem for the atheistic worldview:

* "This type of universe, however, seems to require a degree of fine-tuning of the initial conditions that is in apparent conflict with 'common wisdom'."[6]
* "Polarization is predicted. It's been detected and it's in line with theoretical predictions. We're stuck with this preposterous universe."[7]
* "In all of these worlds statistically miraculous (but not impossible) events would be necessary to assemble and preserve the fragile nuclei that would ordinarily be destroyed by the higher temperatures. However, although each of the corresponding histories is extremely unlikely, there are so many more of them than those that evolve without "miracles," that they would vastly dominate the livable universes that would be created by Poincare recurrences. We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely."[8]
http://reasonablekansans.blogspot.com/2010/08/fine-tuning-of-universe.html
The Social Gospel movement is a Protestant Christian intellectual movement that was most prominent in the early 20th century United States and Canada. The movement applied Christian ethics to social problems, especially issues of social justice such as wealth perceived as excessive, poverty, alcoholism, crime, racial tensions, slums, bad hygiene, child labor, inadequate labor unions, poor schools, and the danger of war. Theologically, the Social Gospellers sought to operationalize the Lord's Prayer (Matthew 6:10): "Thy kingdom come, Thy will be done on earth as it is in heaven." They typically were post-millennialist; that is, they believed the Second Coming could not happen until humankind rid itself of social evils by human effort. Social Gospel leaders were predominantly associated with the liberal wing of the Progressive Movement and most were theologically liberal, although they were typically conservative when it came to their views on social issues. Important leaders include Richard T. Ely, Josiah Strong, Washington Gladden, and Walter Rauschenbusch. Although most scholars agree that the Social Gospel movement peaked in the early 20th century, there is disagreement over when the movement began to decline, with some asserting that the destruction and trauma caused by World War I left many disillusioned with the Social Gospel's ideals while others argue that World War I actually stimulated the Social Gospelers' reform efforts. Theories regarding the decline of the Social Gospel after World War I often cite the rise of neo-orthodoxy as a contributing factor in the movement's decline. Many of the Social Gospel's ideas reappeared in the Civil Rights Movement of the 1960s. "Social Gospel" principles continue to inspire newer movements such as Christians Against Poverty. United States The Social Gospel affected much of Protestant America. 
The Presbyterians described its goals in 1910 by proclaiming: The great ends of the church are the proclamation of the gospel for the salvation of humankind; the shelter, nurture, and spiritual fellowship of the children of God; the maintenance of divine worship; the preservation of truth; the promotion of social righteousness; and the exhibition of the Kingdom of Heaven to the world. In the late 19th century, many Protestants were disgusted by the poverty level and the low quality of living in the slums. The social gospel movement provided a religious rationale for action to address those concerns. Activists in the Social Gospel movement hoped that through public health measures, as well as enforced schooling, the poor could develop talents and skills and that the quality of their moral lives would begin to improve. Important concerns of the Social Gospel movement were labor reforms, such as abolishing child labor and regulating the hours of work by mothers. By 1920 they were crusading against the 12-hour day for workers at U.S. Steel. Differing Theology and Doctrine One of the defining theologians for the Social Gospel movement was Walter Rauschenbusch, a Baptist pastor of a congregation located in Hell’s Kitchen. Rauschenbusch railed against what he regarded as the selfishness of capitalism and promoted a form of Christian Socialism that endorsed the creation of labor unions and cooperative economics. While pastors like Rauschenbusch were combining their expertise in Biblical ethics and economic studies and research to preach theological claims around the need for social reform, others such as Dwight Moody refused to preach about social issues based on personal experience. Pastor Moody’s experience led him to believe that the poor were too particular in receiving charity. Moody claimed that concentrating on social aid distracted people from the life-saving message of the Gospel.
Rauschenbusch sought to address the problems of the city with socialist ideas which proved to be frightening to the middle classes, the primary supporters of the Social Gospel. In contrast, Moody attempted to save people from the city and was very effective in influencing the middle class Americans who were moving into the city with traditional style revivals. Rauschenbusch's A Theology for the Social Gospel (1917) The social gospel movement was not a unified and well-focused movement, for it contained members who disagreed with the conclusions of others within the movement. Rauschenbusch stated that the movement needed “a theology to make it effective” and likewise, “theology needs the social gospel to vitalize it.” In A Theology for the Social Gospel (1917), Rauschenbusch takes up the task of creating “a systematic theology large enough to match [our social gospel] and vital enough to back it.” He believed that the social gospel would be “a permanent addition to our spiritual outlook and that its arrival constitutes a stage in the development of the Christian religion,” and thus a systematic tool for using it was necessary. In A Theology for the Social Gospel, Rauschenbusch states that the individualistic gospel has made sinfulness of the individual clear, but it has not shed light on institutionalized sinfulness: “It has not evoked faith in the will and power of God to redeem the permanent institutions of human society from their inherited guilt of oppression and extortion.” This ideology would be inherited by liberation theologians and civil rights advocates and leaders such as Martin Luther King Jr. The “kingdom of God” is crucial to Rauschenbusch’s proposed theology of the social gospel. He states that the ideology and doctrine of the “the kingdom of God,” of which Jesus Christ reportedly “always spoke” has been gradually replaced by that of the Church.
This was done at first by the early church out of what appeared to be necessity, but Rauschenbusch calls Christians to return to the doctrine of “the kingdom of God.” Of course, such a replacement has cost theology and Christians at large a great deal: the way we view Jesus and the synoptic gospels, the ethical principles of Jesus, and worship rituals have all been affected by this shift. In promoting a return to the doctrine of the “kingdom of God,” he clarified that the “kingdom of God”: is not subject to the pitfalls of the Church; it can test and correct the Church; is a prophetic, future-focused ideology and a revolutionary, social and political force that understands all creation to be sacred; and it can help save the problematic, sinful social order. Many reformers inspired by the movement opened settlement houses, most notably Hull House in Chicago operated by Jane Addams. They helped the poor and immigrants improve their lives. Settlement houses offered services such as daycare, education, and health care to needy people in slum neighborhoods. The YMCA was created originally to help rural youth adjust to the city without losing their religion, but by the 1890s became a powerful instrument of the Social Gospel. Nearly all the denominations (including Catholics) engaged in foreign missions, which often had a social gospel component, especially in terms of medical uplift. The Black denominations, especially the African Methodist Episcopal church (AME) and the African Methodist Episcopal Zion church (AMEZ) had active programs in support of the Social Gospel. Both evangelical ("pietistic") and liturgical ("high church") elements supported the Social Gospel, although only the pietists were active in promoting Prohibition. In the United States prior to World War I, the Social Gospel was the religious wing of the progressive movement which had the aim of combating injustice, suffering and poverty in society.
Denver, Colorado, was a center of Social Gospel activism. Thomas Uzzel led the Methodist People's Tabernacle from 1885 to 1910. He established a free dispensary for medical emergencies, an employment bureau for job seekers, a summer camp for children, night schools for extended learning, and English language classes. Myron Reed of the First Congregational Church became a spokesman, from 1884 to 1894, for labor unions on issues such as workers' compensation. His middle-class congregation encouraged Reed to move on when he became a Socialist, and he organized a nondenominational church. The Baptist minister Jim Goodhart set up an employment bureau, and provided food and lodging for tramps and hobos at the mission he ran. He became city chaplain and director of public welfare of Denver in 1918. Besides these Protestants, Reform Jews and Catholics helped build Denver's social welfare system in the early 20th century. New Deal During the New Deal of the 1930s Social Gospel themes could be seen in the work of Harry Hopkins, Will Alexander and Mary McLeod Bethune, who added a new concern with African Americans. After 1940, the movement withered, but was invigorated in the 1950s by black leaders like Baptist minister Martin Luther King and the civil rights movement. After 1980 it weakened again as a major force inside mainstream churches; indeed those churches were losing strength. Examples of its continued existence can still be found, notably the organization known as the Call to Renewal and more local organizations like the Virginia Interfaith Center for Public Policy. Another modern example can be found in the work of Reverend John Steinbruck, senior pastor of Luther Place Memorial Church in Washington, D.C., from 1970 to 1997, who was an articulate and passionate preacher of the Social Gospel and a leading voice locally and nationally for the homeless, Central American refugees, and the victims of persecution and prejudice.
Social Gospel and Labor Movements Because the Social Gospel was primarily concerned with the day-to-day life of laypeople, one of the ways in which it made its message heard was through labor movements. Particularly, the Social Gospel had a profound effect upon the American Federation of Labor (AFL). The AFL began a movement called Labor Forward, a pro-Christian group that “preached unionization like a revival.” In Philadelphia, this movement was counteracted by bringing in the revivalist Billy Sunday, himself firmly anti-union, who believed “that the organized shops destroyed individual freedom.” Legacy of the Social Gospel While the Social Gospel was short-lived historically, it had a lasting impact on the policies of most of the mainline denominations in the United States. Most began programs for social reform, which led to ecumenical cooperation and, in 1910, to the formation of the Federal Council of Churches, although this cooperation about social issues often led to charges of socialism. It is likely that the Social Gospel's strong sense of leadership by the people led to women's suffrage, and that the emphasis it placed on morality led to prohibition. The Cooperative Commonwealth Federation, a political party that was later reformed as the New Democratic Party, was founded on social gospel principles in the 1930s by J.S. Woodsworth, a Methodist minister. Woodsworth wrote extensively about the social gospel from experiences gained while working with immigrant slum dwellers in Winnipeg from 1904 to 1913. His writings called for the Kingdom of God "here and now". This political party took power in the province of Saskatchewan in 1944. This group, led by Tommy Douglas, a Baptist minister, introduced universal medicare, family allowance and old age pensions. This political party has since largely lost its religious basis, and became a secular social democratic party.
In literature The Social Gospel theme is reflected in the novels In His Steps (1897) and The Reformer (1902), by the Congregational minister Charles Sheldon, who coined the motto "What would Jesus do?" In his personal life, Sheldon was committed to Christian Socialism and identified strongly with the Social Gospel movement. Walter Rauschenbusch, one of the leading early theologians of the Social Gospel in the United States, indicated that his theology had been inspired by Sheldon's novels. In 1892, Rauschenbusch and several other leading writers and advocates of the Social Gospel formed a group called the Brotherhood of the Kingdom. Members of this group produced many of the written works that defined the theology of the Social Gospel movement and gave it public prominence. These included Rauschenbusch's Christianity and the Social Crisis (1907) and Christianizing the Social Order (1912), as well as Samuel Zane Batten's The New Citizenship (1898) and The Social Task of Christianity (1911). The 21st century In the United States, the Social Gospel is still influential in mainline Protestant denominations such as the African Methodist Episcopal Church, the Evangelical Lutheran Church in America, the Presbyterian Church USA, the United Church of Christ, the Christian Church (Disciples of Christ), and the United Methodist Church; it seems to be growing in the Episcopal Church as well, especially with that church's effort to support the ONE Campaign. In Canada, it is widely present in the United Church and in the Anglican Church. Social Gospel elements can also be found in many service and relief agencies associated with Protestant denominations and the Catholic Church in the United States. It also remains influential among Christian socialist circles in Britain in the Church of England, Methodist and Calvinist movements. In Catholicism, liberation theology is considered by some to have been a radical Marxist attempt to promote the Social Gospel.
See also - Christian socialism - Christian humanism - Emerging church - Evangelical left - The Gospel of Wealth - Liberation theology - Prosperity theology - Social justice - Christian Social Union (Church of England) - Salem Bland - Cecelia Tichi, Civic passions: seven who launched progressive America (and what they teach us) (2009), p. 221 - For the most part, they rejected premillennialist theology (which was predominant in the Southern United States), according to which the Second Coming of Christ was imminent, and Christians should devote their energies to preparing for it rather than addressing the issue of social evils. - White, Jr. (1990), and Ahlstrom (1974). - White, Jr. and Hopkins (1975) and Handy (1966). - Visser 't Hooft (1928) - Ahlstrom (1974), Hopkins (1940), White, Jr. and Hopkins (1975), and Handy (1966) - Christopher H. Evans, The social gospel today (2001), p. 149 - Rogers and Blade 1998 - Howard C. Kee, et al., Christianity: A Social and Cultural History, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 1998), 476-478 - Kee, Howard C., et al. Christianity: A Social and Cultural History, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 1998), p. 478 - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. p. 1. - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. p. 2. - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. p. 5. - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. p. 131. - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. p. 132. - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Abingdon Press, 1917. pp. 133-134. - Walter Rauschenbusch, A Theology for the Social Gospel (New York: Abingdon Press, 1917), pp. 134-137.
- Hopkins (1940) - Luker (1998) - Marty (1986) - Jeremy Bonner, "Religion," in Rick Newby, ed., The Rocky Mountain Region (Greenwood Press, 2004), p. 370 - Howard C. Kee, et al., Christianity: A Social and Cultural History, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 1998), 479-480 - "A BRIEF HISTORY OF THE NDP". Retrieved 2009-10-14. - The Encyclopedia of Saskatchewan. "The Encyclopedia of Saskatchewan "Social Gospel"". University of Regina (Saskatchewan, Canada). Retrieved 2009-10-14. Further reading Secondary sources - Sydney E. Ahlstrom. A Religious History of the American People (1974) - Susan Curtis. A Consuming Faith: The Social Gospel and Modern American Culture (1991) - Jacob H. Dorn. Socialism and Christianity in Early 20th Century America (1998), online edition - Brian J. Fraser. The Social Uplifters: Presbyterian Progressives and the Social Gospel in Canada, 1875-1915 (1990) - Robert T. Handy, ed. The Social Gospel in America, 1870-1920 (1966). - Charles Howard Hopkins. The Rise of the Social Gospel in American Protestantism, 1865-1915 (1940), online edition - Benjamin L. Hartley. Evangelicals at a Crossroads: Revivalism and Social Reform in Boston, 1860-1910 (University of New Hampshire Press/University Press of New England; 2011), 304 pages; looks at Methodist, Salvation Army, Baptist, and nondenominational Christians - William R. Hutchison. "The Americanness of the Social Gospel; An Inquiry in Comparative History," Church History, Vol. 44, No. 3 (Sep., 1975), pp. 367–381, online in JSTOR - Maurice C. Latta, "The Background for the Social Gospel in American Protestantism," Church History, Vol. 5, No. 3 (Sep., 1936), pp. 256–270, online at JSTOR - Ralph E. Luker. The Social Gospel in Black and White: American Racial Reform, 1885-1912 (1998), excerpt and text search - Marty, Martin E. Modern American Religion, Vol. 1: The Irony of It All, 1893-1919 (1986); Modern American Religion. Vol.
2: The Noise of Conflict, 1919-1941 (1991) - Muller, Dorothea R. "The Social Philosophy of Josiah Strong: Social Christianity and American Progressivism," Church History (1959), Vol. 28, No. 2, pp. 183–201, at JSTOR - Rader, Benjamin G. "Richard T. Ely: Lay Spokesman for the Social Gospel." Journal of American History, 53:1 (June 1966), in JSTOR - Rogers, Jack B., and Robert E. Blade, "The Great Ends of the Church: Two Perspectives," Journal of Presbyterian History (1998) 76:181-186. - Smith, Gary Scott. "To Reconstruct the World: Walter Rauschenbusch and Social Change," Fides et Historia (1991) 23:40-63. - Visser 't Hooft, Willem A. The Background of the Social Gospel in America (1928). - White, Ronald C., Jr. Liberty and Justice for All: Racial Reform and the Social Gospel (1877-1925) (1990). - White, Ronald C., Jr. and C. Howard Hopkins. The Social Gospel: Religion and Reform in Changing America (1975). Primary sources - Batten, Samuel Zane. The Social Task of Christianity: A Summons to the New Crusade. New York: Fleming H. Revell Co., 1911. online edition - Gladden, Washington. Who Wrote the Bible? A Book for the People. Boston: Houghton, Mifflin and Co., 1891. online edition - Mathews, Shailer. Jesus on Social Institutions. New York: Macmillan, 1928. - Mathews, Shailer. The Spiritual Interpretation of History. William Belden Noble lectures. Cambridge: Harvard University Press, 1916. online edition - Peabody, Francis Greenwood. Jesus Christ and the Social Question: An Examination of the Teaching of Jesus in Its Relation to Some of the Problems of Modern Social Life. New York: Macmillan, 1900. online edition - Rauschenbusch, Walter. Christianity and the Social Crisis. New York: 1912. online edition - Rauschenbusch, Walter. A Theology for the Social Gospel. New York: Macmillan Co., 1917. online 1922 edition - Sheldon, Charles Monroe. In His Steps: "What Would Jesus Do?" London: Simpkin, Marshall, Hamilton, Kent & Co., 1897. online edition - Strong, Josiah.
The New Era; Or, The Coming Kingdom. New York: The Baker & Taylor Co., 1893. online edition - Thomas, Lewis Herbert, ed. The Making of a Socialist: The Recollections of T.C. Douglas. Edmonton, Alta.: University of Alberta Press, 1982. - Warren, Rick. Purpose Driven Life. Grand Rapids, Michigan: Zondervan.
http://en.wikipedia.org/wiki/Social_Gospel
A Turing machine is a very simple computer that manipulates symbols on a strip of tape to perform feats of logic. There isn't really much of a purpose to them these days; they exist as a novelty based on early computational theory by the great mathematician and computer scientist Alan Turing. They're made as a type of thought experiment to show the advantages and limits of mechanical computing. To really understand what a Turing machine demonstrates, you probably have to be the type who can speak binary. Many have been made over the years, but one caught our eye on YouTube this week (video below). We didn't notice it because it's elegant or attractive--indeed, it's rather harsh-looking--but because it's entirely mechanical. It uses magnets and springs, but no electronics or even electricity. It was made by British hobbyist Jim MacArthur as a demonstration for a Maker Faire in the U.K. Most Turing machines use a type of tape on which symbols are punched, but this one moves along a metal grid. Ball bearings are dropped into grid squares based on the data input via a series of small levers. The positions of the balls on the grid act as symbols. When you know what you're doing, the pattern of ball bearings on the grid can be translated into a rough program. For a logic unit, it uses a left-or-right switch mechanism to create binary input. It has up to 5 input symbols that allow for 10 "states." If that doesn't make sense to you, that's OK, it's not really supposed to. It's a technical way of saying that while this DIY machine won't catch up to a pocket calculator anytime soon, it's still an impressive feat of engineering for not having any batteries.
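For readers who don't speak binary, the tape-and-table idea is small enough to sketch in a few lines of Python. This is a generic textbook-style machine, not a model of MacArthur's ball-bearing device; the bit-flipping rule table is a made-up example:

```python
# Minimal Turing machine sketch: a tape of symbols, a read/write head,
# and a transition table mapping (state, symbol) -> (write, move, next state).

def run_turing_machine(tape, transitions, state="start", blank=" "):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Example rule table: flip each bit, halt on the first blank square.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine("0110", flip))  # -> 1001
```

Everything interesting lives in the rule table; the machinery itself never changes, which is exactly why the tape-and-table model can be built out of ball bearings and levers.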
http://news.cnet.com/8300-5_3-0.html?keyword=turing
Light travels in waves of different wavelengths. Light is emitted from sources such as the sun, lightbulbs, and fire in many different wavelengths; however, the human eye can only detect certain kinds. The eye is capable of sensing light of wavelengths ranging from about 400 to 700 nanometers. The wavelength of the light determines its color; for example, light that has a wavelength of 520 nanometers appears green. Any color that is of a single wavelength (such as 520 nanometers) is called monochromatic. Scientists have displayed all of the monochromatic colors in what is called the visual spectrum, and for this reason monochromatic colors are often called spectral colors. The spectrum to the left shows the many different wavelengths of light: the further to the right on the graph, the longer the wavelength. These colors may look familiar to you, as they are often simplified to the colors of the rainbow: ROYGBIV. Notice that at both the beginning and the end of the graph the spectrum fades to black because the human eye is no longer capable of processing light outside of the 400-700 nanometer range. But these are not all of the colors that we are able to see. By mixing light of different wavelengths, it is possible to create different colors! (You will learn more about mixing colors in Color Changes.) Colors formed by mixing different wavelengths of light are called polychromatic colors. The colors we can see because of mixing are shown on something called a Chromaticity Diagram. This shows all of the spectral colors along the edges and the colors formed by mixing them in between. Notice that white is in the center of the diagram; white light is actually a mixture of all wavelengths of light. Color is often classified based on the chromaticity diagram model. The HSL Classification System defines colors in terms of hue, saturation, and lightness. Hue is the closest wavelength of light to the color on the chromaticity diagram.
Saturation (also called purity) measures how close the color is to the wavelength on the diagram. A color of a 520nm hue and 100% saturation would be pure 520nm light, so it would appear green. A color of a 520nm hue and 0% saturation would be in the center of the diagram, so it would appear white or gray. The final number for color classification is lightness, which measures the intensity of the color. Lightness of 0% would be black because no light would be reflected. Lightness of 100% would be white because all light would be reflected. Lightness between these extremes creates different shades of colors. The Primary Colors The human eye only has 3 color channels. In other words, our eyes are not able to distinguish the particular wavelengths of light and can only use three different sensors to determine what color an object should appear. We see the millions of colors around us because of the way that the color channels send signals to the brain. The color channels are called Red, Green, and Blue. Each channel is sensitive to a certain range of light. At 520nm the green channel is very sensitive but the blue channel is only moderately sensitive and the red channel is barely sensitive at all. When 520nm light strikes the eye, a very strong "green" signal, a moderate "blue" signal, and a weak "red" signal are sent to the brain. The brain combines these three signals to determine that the light is 520nm, making the light appear green. Because red, green, and blue are the three channels through which the human eye sees color, these are called the primary colors of light. Any color visible to humans can be described as a combination of signals from the red, green, and blue channels, so the RGB Color Classification System was created. This system defines a color in terms of the amount of stimulation it creates in each of the three color channels. An activity using the RGB Color Classification System is in the next section, Color Changes.
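The HSL and RGB descriptions above can be played with using Python's standard colorsys module. One caveat: colorsys orders the values as HLS and expresses hue as a 0-1 fraction of the color wheel rather than a wavelength in nanometers, so this is only a loose analogue of the chromaticity-diagram definition in the text:

```python
import colorsys

# colorsys converts between RGB and HLS (hue, lightness, saturation),
# all as fractions in [0, 1]. Hue here is an angle on the color wheel,
# not a wavelength, so 1/3 of the way around corresponds to pure green.

# Fully saturated green hue at 50% lightness -> pure green:
r, g, b = colorsys.hls_to_rgb(1/3, 0.5, 1.0)
print(r, g, b)  # 0.0 1.0 0.0

# Zero saturation collapses any hue to gray, as the text describes:
print(colorsys.hls_to_rgb(1/3, 0.5, 0.0))  # (0.5, 0.5, 0.5)
```

Dropping saturation to 0% lands in the gray center of the diagram regardless of hue, which is exactly the behavior the text describes for a 0%-saturation color.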
http://library.thinkquest.org/27066/color/nlproperties.html
PRE-ALGEBRA Lesson 166 of 171 Probability of Independent Events Students learn that two events are independent if the outcome of the first event does not affect the outcome of the second event; for example, flipping a coin twice. The probability of independent events can be found by multiplying the probability of the first event by the probability of the second event.
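The multiplication rule from the lesson can be sketched directly; the Monte Carlo check below is an illustration of mine, not part of the lesson:

```python
import random

# Independent events: P(A and B) = P(A) * P(B).
# Two fair coin flips: P(heads) = 1/2 each, so P(both heads) = 1/4.
p_heads = 0.5
p_both = p_heads * p_heads
print(p_both)  # 0.25

# Monte Carlo sanity check: simulate many pairs of flips and count
# how often both come up heads; the fraction should be near 0.25.
random.seed(0)
trials = 100_000
hits = sum(random.random() < 0.5 and random.random() < 0.5
           for _ in range(trials))
print(abs(hits / trials - 0.25) < 0.02)  # True
```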
http://www.mathhelp.com/how_to/probability/probability_of_independent_events.php
Hadron Collider could test a hyperdrive Rather than just creating black holes that would wipe out the earth, or at least turn it into a larger version of Detroit, the Large Hadron Collider could help develop a hyperdrive which would allow spacecraft to approach the speed of light. According to Technology Review, the idea for hyperdrive propulsion goes back to 1924, when the German mathematician David Hilbert worked out a side effect of Einstein's theory of relativity. Hilbert looked at the interaction between a relativistic particle moving toward or away from a stationary mass. If the relativistic particle had a velocity greater than about half the speed of light, a stationary mass should repel it. At least, that's how it would appear to a distant observer. Franklin Felber, an independent physicist based in the United States, said the idea had been forgotten. Now he thinks that this effect could be exploited to propel an initially stationary mass to closer to the speed of light. Felber predicts that this speed can be achieved without generating the severe stresses that could damage a space vehicle or its occupants. However, to prove his theory he needs to borrow the Large Hadron Collider, which can accelerate particles to the kind of energies that generate this repulsive force. The repulsive force that Felber predicts will be tiny, but it could be detected using a resonant test mass. And since the experiment wouldn't interfere with the LHC's main business of colliding particles, it could run alongside anything else the collider was doing.
http://www.tgdaily.com/space-features/44245-hadron-collider-could-test-a-hyperdrive
Taxation without representation at issue It was not so much the new duties that caused consternation among New England merchants. It was rather the fact that steps were being taken to enforce them effectively, an entirely new development. For over a generation, New Englanders had been accustomed to importing the larger part of their molasses from the French and Dutch West Indies without paying a duty. They contended that payment of even the small duty imposed would be ruinous. As it happened, the Sugar Act's preamble gave the colonists an opportunity to rationalize their discontent on constitutional grounds. The power of Parliament to tax colonial commodities for the regulation of imperial trade had been long accepted in theory though not always in practice, but the power to tax "for improving the revenue of this Kingdom" as stated in the Revenue Act of 1764 was new and hence debatable. The constitutional issue became an entering wedge in the great dispute which was finally to split the empire asunder. "One single act of parliament," wrote James Otis, an early patriot, "has set more people a-thinking in six months, more than they had done in their whole lives before." Merchants, legislatures, and town meetings protested against the expediency of the law, and colonial lawyers like Samuel Adams found in the preamble the first intimation of "taxation without representation," the watchword which was to draw so many to the cause of the patriots against the mother country. Later in the same year, Parliament enacted a Currency Act "to prevent paper bills of credit hereafter issued in any of His Majesty's colonies from being made legal tender." Since the colonies were a deficit trade area and were constantly short of "hard money," this added a serious burden to the colonial economy.
Equally objectionable from the colonial viewpoint was the Billeting Act, passed early in 1765, which required colonies where royal troops were stationed to provide quarters and certain supplies for their support. Strong as was the opposition to these acts, it was the last of the measures inaugurating the new colonial system which set off organized resistance. This was the famous Stamp Act. It provided that revenue stamps be affixed to all newspapers, broadsides, pamphlets, licenses, leases, or other legal documents, the revenue so secured to be expended for the sole purpose of "defending, protecting, and securing" the colonies. Only Americans were to be appointed as agents to collect the tax, and the burden seemed so evenly and lightly distributed that the measure passed Parliament with little debate or attention. So violent was its reception in the thirteen colonies, however, that it astonished moderate men everywhere. It was the act's peculiar misfortune that it aroused the hostility of the most powerful and the most articulate groups in the colonies: journalists, lawyers, clergymen, merchants, and businessmen, and that it bore equally on all sections of the country: north, south, and west. Soon leading merchants whose every bill of lading would be taxed organized for resistance and formed nonimportation associations. Business came to a temporary standstill, and trade with the mother country fell off enormously in the summer of 1765. Prominent men organized as "Sons of Liberty," and political opposition was soon expressed in violence. Inflamed crowds paraded the crooked streets of Boston. From Massachusetts to South Carolina, the act was nullified, and mobs forced luckless agents to resign their offices and destroyed the hated stamps.
The great significance of the Stamp Act lay not alone in its precipitation of revolutionary resistance but also in the fact that it forced Americans to formulate a theory of imperial relations that would accommodate itself to American conditions. The Virginia Assembly, for example, passed, on the instigation of Patrick Henry, a set of resolutions denouncing taxation without representation as a dangerous and unprecedented innovation and a threat to colonial liberties. A few days later, the Massachusetts House invited all the colonies to appoint delegates to a Congress in New York to consider the Stamp Act menace. This Congress, in October 1765, was the first intercolonial meeting summoned on American initiative. Twenty-seven bold and able men from nine colonies seized this opportunity to mobilize colonial opinion against parliamentary interference in American affairs. And after considerable debate, the Congress adopted a set of resolutions asserting that "no taxes ever have been or can be constitutionally imposed on them, but by their respective legislatures" and that the Stamp Act had a "manifest tendency to subvert the rights and liberties of the colonists."
http://www.let.rug.nl/usa/outlines/history-1963/the-winning-of-independence/taxation-without-representation-at-issue.php
4.0625
Extinction refers to the complete elimination of a given species. Species become extinct all the time (an estimated 99% of all identified species are extinct); however, there have been times in geologic history called "mass extinction events" when large numbers of species vanish forever over a relatively short (geologically speaking) time. The most famous of these is the Cretaceous-Tertiary extinction, often known as the "wiping out of the dinos." However, the largest extinction was the Permian-Triassic event, which eliminated 96% of marine species and 70% of land species. Extinction events in the past could have any number of causes, from giant meteor impacts to oxygenation of the atmosphere. However, it is very likely that the rise of modern Homo sapiens (that's us) has precipitated a mass extinction that we are still in the middle of. It is impossible to accurately estimate the rate of disappearing species, with some estimates ranging up to over 100,000 a year. This is roughly 100 times the "normal" background level of extinction.
http://rationalwiki.org/w/index.php?title=Extinction&oldid=923189
4.1875
Microscopic plants less than 5mm across may be able to change the paths of 500-km-wide tropical storms, due to their ability to change the colour of the surface of the sea. Phytoplankton is as common in the oceans as grass is on land, and blooms when cold, nutrient-rich water upwells from the depths. That bloom turns the ocean's surface from a deep dark blue to a murky turquoise, henceforth known as murkquoise. The murkquoise stops the sun from penetrating as far as it normally does into the surface of the sea, making the surface layer much warmer and the depths cooler. As a result, hurricanes tend to be stronger and last longer. While these results haven’t been isolated in the real world, and there are plenty of other factors affecting hurricane formation too, results from numerical models suggest that reducing the amount of phytoplankton could keep hurricanes weaker and confine them to equatorial latitudes. At the Geophysical Fluid Dynamics Laboratory at the US National Oceanic and Atmospheric Administration, researchers simulated a large-scale “phytocide” in the Pacific Ocean and observed the effects on the sea surface and atmosphere. The results were clear: a 15 percent drop in the number of hurricanes that formed each year. Those that did form didn’t track as far north, either. Instead, they wobbled along the equator before fizzling out. Hurricane activity in the subtropical north-west of the Pacific dropped a whopping 70 percent. But why? Well, removing the murkquoise and allowing the sun deeper into the ocean cools the surface. That in turn cools the air above the surface of the sea, allowing more cool dry air to descend from above. When a hurricane moves into this large-scale region of cool, dry, descending air, the descending air counters the storm's moist, warm updrafts, and so the storm weakens.
The sinking air is also carried along the surface to the equator, where it rises again, strengthening the already-powerful westerly winds in the upper atmosphere of the tropics. These winds, if strong enough, can behead storm systems that are beginning to organise into a hurricane by literally blowing them away. But before you dig out the industrial-strength herbicide to dump into the ocean and reduce the risk of hurricanes, it might be wise to consider the implications. Killing off phytoplankton would be like removing all grass from land. Grazing herbivores would be deprived of their food source and die, depriving carnivores of their food source too. Before too long, the oceans would be barren and sterile. Probably too great a price to pay for a slightly lowered risk of hurricanes.
http://www.wired.co.uk/news/archive/2010-09/27/phytoplankton-hurricanes
4.1875
One of the most intriguing processes in nature is the colonisation of islands by plants and animals. How do these tiny, remote specks in the ocean become populated by myriads of plants and animals? For centuries, it has been a source of wonder to explorers and scientists to discover how remote islands, thousands of kilometres from the nearest land, become populated by rich assemblies of living organisms. Three methods have been identified as the principal means by which plants are dispersed over the oceans: wind, water, and transport by birds. Many seeds are tiny and can be blown hundreds of kilometres in wind currents; or larger seeds may have a parachute of silky hairs attached, enabling them to float on air currents over long distances. Some insects can similarly be blown in wind currents, and some spiders weave a parachute of silk for air transport. Many oceanic island plants have arrived as seeds that float; they must be buoyant and have a tough seed coat to survive in sea water. Birds can carry seeds in their stomach or on their feathers, and can even carry eggs of snails or crustaceans on their feet. Certain groups of plants and animals are specially adapted for long-distance dispersal, and many islands in the Pacific share similar species or genera. For example, the mountain rose Metrosideros nervulosa of Lord Howe Island has relatives across the Pacific to Hawaii. Metrosideros seeds are tiny and blow long distances in wind currents, and they survive the cold temperatures encountered as they are carried aloft by winds.
http://www.lhimuseum.com/page/view/environment/biogeography
4
Mapping Africa has been designed to serve two major purposes. First, the unit teaches students about the basic physical and political geography of Africa. Second, it introduces or reviews fundamental geographical concepts and vocabulary in an African context. In a world so shrunken in time and distance that we can communicate almost instantly with any other city on any continent and fly to even more corners of the world in a matter of hours, a knowledge of different places can no longer be considered a luxury. Instead, it has become a necessity. Our interdependence is now so complete that actions, be they economic, political, social, or environmental, in one world region can have immediate repercussions in another world region. The continent of Africa is often in the news, though the coverage of political upheaval, famines, and droughts rarely conveys much understanding of the continent's complex geography and environment. The unfamiliarity of the names of African countries and cities, the recent period of colonial governance, and the images that have been conveyed through films and television make it all the more important to provide students with a basic grounding in information about this region. This unit will offer a geographic introduction to the map of Africa, with a focus on sub-Saharan Africa in terms of its examples and information.

- to teach students key geographical terms that are important for communicating geographical ideas
- to introduce and reinforce students' knowledge of the physical and political geography of Africa
- to teach students the interaction of climate, landform, and natural vegetation
- to improve students' understanding of location
- to develop and practice chart and map reading skills
- to provide opportunities for group work

Mapping Africa meets the guidelines for teaching geography that were adopted by the California State Board of Education in its Model Curriculum Standards for World History, Culture, and Geography.
http://spice.stanford.edu/catalog/10070/
4.09375
BSA Supply No. 35869

Cinematography includes the fundamentals of producing motion pictures, including the use of effective light, accurate focus, careful composition (or arrangement), and appropriate camera movement to tell stories. In earning the badge, Scouts will also learn to develop a story and describe other pre- and postproduction processes necessary for making a quality motion picture.

- Do the following:
  - Discuss and demonstrate the proper elements of a good motion picture. In your discussion, include visual storytelling, rhythm, the 180-axis rule, camera movement, framing and composition of camera shots, and lens selection.
  - Discuss the cinematographer's role in the moviemaking process.
- Do ONE of the following:
  - With your parent's permission and your counselor's approval, visit a film set or television production studio and watch how production work is done.
  - Explain to your counselor the elements of the zoom lens and three important parts.
  - Find out about three career opportunities in cinematography. Pick one and find out the education, training, and experience required for this profession. Discuss this career with your counselor. Explain why this profession might interest you.
- In a three- or four-paragraph treatment, tell the story you plan to film, making sure that the treatment conveys a visual picture.
- Prepare a storyboard for your motion picture (this can be done with rough sketches and stick figures).
- Demonstrate the following motion-picture shooting techniques:
  - Using a tripod
  - Panning a camera
  - Framing a shot
  - Selecting an angle
  - Selecting proper lighting
  - Handheld shooting
- Using motion-picture shooting techniques, plan ONE of the following programs (start with a treatment and complete the requirement by presenting the program to a pack or to your troop, patrol, or class):
  - Film or videotape a court of honor and show it to an audience.
  - Create a short feature of your own design, using the techniques you learned.
- Shoot a vignette that could be used to train a new Scout in a Scouting skill. Architecture, Art, Communications, Model Design and Building, Photography, Public Speaking, and Theater merit badge pamphlets - Andersen, Yvonne. Make Your Own Animated Movies and Videotapes: Film and Video Techniques From the Yellow Ball Workshop. Little Brown and Company, 1991. - Andrew, James Dudley, ed. The Image in Dispute: Art and Cinema in the Age of Photography. University of Texas Press, 1997. - Box, Harry. Set Lighting Technician's Handbook: Film Lighting Equipment, Practice, and Electrical Distribution. Focal Press, 2003. - Brown, Blain. Cinematography: Image Making for Cinematographers, Directors, and Videographers. Focal Press, 2002. - Ettedgui, Peter. Cinematography: Screencraft. Focal Press, 1999. - Griffith, Richard, Arthur Mayer, and Eileen Bowser. The Movies: Revised and Updated Edition of the Classic History of American Motion Pictures. Random House Value Publishing, 1992. - Katz, Steven D. Film Directing, Cinematic Motion, second ed. Michael Wiese Productions, 2004. - ------. Film Directing Shot by Shot: Visualizing from Concept to Screen. Michael Wiese Productions, 1991. - Laybourne, Kit. The Animation Book: A Complete Guide to Animated Filmmaking, revised ed. Three Rivers Press, 1998. - Lowell, Ross. Matters of Light and Depth. Lower Light Management, 1999. - Malkiewicz, Kris. Cinematography: The Classic Guide to Filmmaking, third ed. Fireside Press, 2005. - Maltin, Leonard. The Art of the Cinematographer: A Survey and Interviews With Five Masters. Dover Publications, 1978. - Mascelli, Joseph V. The Five C's of Cinematography: Motion Picture Filming Techniques. Silman-James Press, 1998. - Oxlade, Chris. Movies. Heinemann, 1997. - Rickitt, Richard. Special Effects: The History and Technique. Watson-Guptill Publications, 2000. - Samuelson, David W. David Samuelson's "Hands-On" Manual for Cinematographers. Focal Press, 1994. - Scott, Elaine. 
Movie Magic: Behind the Scenes With Special Effects. HarperCollins Publishers, 1995. - Zettl, Herbert. Sight, Sound, Motion: Applied Media Aesthetics, third ed. Wadsworth Publishing Company, 1998.

Organizations and Web Sites

- Exposure: The Internet Resource for Low-Budget Film-Makers. Web site: http://www.exposure.co.uk
- Moving Image Collections. Web site: http://mic.imtc.gatech.edu
- New York Film Academy. Web site: http://www.nyfa.com
http://councils.scouting.org/sitecore/content/Home/BoysScouts/AdvancementandAwards/MeritBadges/mb-CINE
4.28125
A line is the shortest path between two points, extended straight and infinitely far in both directions; it is infinitely long and infinitely thin. In the coordinate plane, the location of a line is defined by two points whose coordinates are known: the line passes through both points and continues to an infinite distance in both directions. A line has no endpoints. Here we distinguish the two words line and line segment: a line segment includes its two endpoints, while a line has no endpoints at all. If two distinct lines 'A' and 'B' are drawn in a plane, either they intersect each other or they are parallel to each other. Two intersecting lines share exactly one common point, the point of intersection, and this point lies on both lines. Parallel lines have no common points. Any point on a line divides it into two parts, which are known as rays. A ray is a piece of a line that has exactly one endpoint; rays are used in defining angles. The line is a basic design tool in geometry. A line has length but no width, and it suggests a direction along which we can follow a path. If a path is not straight, it is usually known as a curve or arc. In plane geometry, the word line is used to indicate a straight line: an object that is straight and infinitely long. A geometric line is always one-dimensional; its width is always zero. If we draw a line with the help of a pencil, the drawn mark has a measurable width, but the ideal line it represents does not. According to a basic theorem of geometry, if two points lie in a plane then there is exactly one line that passes through the two points. So, to write the equation of a line, we need two points. The equation of a line can be written as y = mx + c, where 'x' and 'y' are the coordinates of any point on the line,
'm' is the slope, and 'c' is the y-intercept. A line, or straight line, can be used to represent the basic dimensions of an object, such as its height, width, and length. When an object is placed in two-dimensional space, straight lines are used to measure its height and length; when an object is placed in three-dimensional space, its height, length, and width are all measured along straight lines. We come across various lines in our daily life, for example while lining up for the assembly session at school or while standing at a ticket counter. Lines can be of the following types: straight, curved, parallel, perpendicular, and intersecting. A line is the set of infinitely many points which join together to form it; a line extends in both directions, and so we say that it has no fixed length. When we say that two given lines are perpendicular lines, we mean that they intersect at right angles. A pair of lines is parallel if the two lines lie in the same plane, remain at an equal distance from each other, and never meet. Parallel planes are two planes that do not intersect: if two planes 'A' and 'B' are parallel to each other, we write A || B.
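The two-point recipe for the equation y = mx + c can be sketched in a few lines of code. This is a minimal illustration only; the function name and point values are mine, not from the source.

```python
# Sketch: deriving y = mx + c from two known points on the line.

def line_through(p1, p2):
    """Return slope m and y-intercept c of the line y = mx + c
    passing through two distinct points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope is undefined")
    m = (y2 - y1) / (x2 - x1)   # slope: rise over run
    c = y1 - m * x1             # solve y1 = m*x1 + c for c
    return m, c

m, c = line_through((0, 1), (2, 5))
print(m, c)   # 2.0 1.0, i.e. the line y = 2x + 1
```

As the theorem above says, two points determine exactly one line, so two points are exactly the input this function needs.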
http://www.tutorcircle.com/line-t3Ijp.html
4.09375
How does pharmacogenetic testing work? Genes are the basic units of genetic material, the segments of DNA that usually code for the production of specific proteins, including the proteins known as enzymes. Each person has two copies of most genes: one copy is inherited from their mother and one copy is inherited from their father. Each gene is made up of a specific genetic code, which is a sequence of nucleotides. Each nucleotide can be one of four different nucleotides (A, T, G, or C). For each nucleotide position in the gene, one of the four nucleotides is the predominant nucleotide in the general population. This nucleotide is usually referred to as "wild type." If an individual has a nucleotide that is different from "wild type" in one copy of their genes, that person is said to have a heterozygous variant. If an individual has the same variant nucleotide in both copies of their genes, that person is said to have a homozygous variant. Nucleotide or genetic variants (also called polymorphisms or mutations) occur throughout the population. Some genetic variants are benign — they do not produce any known negative effect or may be associated with features like height, hair colour, and eye colour. Other genetic variants may be known to cause specific diseases. Other variants may be associated with variable response to specific medications. Pharmacogenetic tests look for genetic variants that are associated with variable response to specific medications. These variants occur in genes that code for drug-metabolizing enzymes, drug targets, or proteins involved in the immune response. Pharmacogenetic tests can determine if a variant is heterozygous or homozygous, which can affect an individual's response or reaction to a drug. When are the tests requested? 
A doctor may test a patient's genes for specific variations that are known to be involved in variable response to a drug at any time during treatment (for example, prior to treatment, during the initial phase of treatment, or later in the treatment). The results of the testing may be combined with the individual's clinical information, including age, weight, health and other drugs that they are taking, to help tailor therapy to the specific individual. Sometimes, the doctor may use this information to adjust the medication dose or sometimes to choose a different drug. Pharmacogenetic testing is intended to give the doctor additional information but may not replace the need for therapeutic drug monitoring. Pharmacogenetic testing for a specific gene is only performed once since a person's genetic makeup does not change over time. Depending on the medication, a single gene may be ordered or multiple genes may be ordered. An example of a medication for which multiple genes are usually evaluated is warfarin, which can be affected by genetic variation in genes known as CYP2C9 and VKORC1. Testing may be ordered prior to starting specific drug therapies or if someone who has started taking a drug is experiencing side effects or having trouble establishing and/or maintaining a stable dose. Sometimes patients may not experience such issues until other medications that affect the metabolism or action of the drug in question are added or discontinued.
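The wild-type / heterozygous / homozygous terminology described above can be illustrated with a small sketch. The function and its inputs are hypothetical teaching aids, not part of any real pharmacogenetic testing pipeline.

```python
# Illustrative sketch: classifying a genotype at one nucleotide position,
# given the two inherited alleles (one from each parent) and the
# predominant "wild type" nucleotide for that position.

def classify_genotype(allele1, allele2, wild_type):
    """Each allele is one of the four nucleotides A, T, G, or C."""
    if allele1 == allele2 == wild_type:
        return "wild type"
    if allele1 == allele2:
        # the same variant nucleotide in both copies of the gene
        return "homozygous variant"
    if wild_type in (allele1, allele2):
        # a variant nucleotide in only one copy of the gene
        return "heterozygous variant"
    # two different variant nucleotides, one in each copy
    return "compound heterozygous"

print(classify_genotype("G", "G", wild_type="G"))  # wild type
print(classify_genotype("G", "A", wild_type="G"))  # heterozygous variant
print(classify_genotype("A", "A", wild_type="G"))  # homozygous variant
```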
http://www.labtestsonline.org.uk/understanding/analytes/pharmacogenetic-tests/start/1
4.125
The History of the Earth: Classroom Activity

This activity has benefited from input from faculty educators beyond the author through a review and suggestion process as a part of an activity development workshop. Workshop participants were provided with a set of criteria against which they evaluated each others' activities. For information about the criteria used for this review, see http://serc.carleton.edu/teacherprep/workshops/workshop07/activityreview.html. This page first made public: Mar 14, 2007.

In this classroom activity, students first use an interview with an older adult to construct a timescale. Students use criteria of their choosing to divide the timescale into periods, then compare and contrast timescales among the class. Students are next given important events in the history of the earth and are invited to first develop a scaled representation of the earth's history based on their prior knowledge. Students then use classroom and Internet resources to place the same events in the proper order and at the correct locations along the timescale. Finally, students investigate the geologic timescale and place eons and eras of geologic time on the same scale as earth events. Comparisons are drawn between the human life timescale and geologic time. For assessment, the instructor grades written student responses to questions in the student course pack. The student course pack activity and instructor notes are provided. Learn more about the course for which this activity was developed.
Upon completion of this activity students should be able to:

- identify major events in the history of the earth and place these in the correct relative sequence,
- distinguish between instantaneous and gradual events in earth's history,
- explain how the geologic timescale was created,
- recognize the time span of eras and eons of geologic time, and
- represent amounts of time as linear distances.

Context for Use

This activity is used in an introductory-level physical and historical geology class specifically designed for pre-service elementary teachers. The activity takes place during one 2-hour-and-20-minute class period. Activity, small group work and discussion, and whole class discussion are integrated during the activity; there is no separate lecture. The activity takes place at the end of the historical geology unit of the course, after students have learned material related to fossils and relative and absolute age dating techniques. Required equipment includes computers with Internet access.

Teaching Notes and Tips

The instructor notes Instructor's Notes (Acrobat (PDF) 48kB Feb22 07) include a list of materials to prepare and assemble prior to starting the activity, suggestions for how to introduce the activity and engage the class in a discussion of the topic, prompting questions to draw out student prior knowledge of the topic, a suggested sequence of events and approximate time to complete each part of the activity, tips for where students generally encounter problems in completing the activity, suggestions for engaging students in class discussion of the topic, and an answer guide to the assessment questions in the student course pack.
The student course pack activity Activity Sheet (Acrobat (PDF) 65kB Feb22 07) includes a statement of the problem and objectives of the activity, questions for students to consider prior to completing the activity, student materials and procedures for completing the activity, and questions to turn in to the instructor for assessment once the activity is complete. Formative assessment occurs as students complete the activity. The instructor can gauge student learning by circulating among student groups and questioning individual students as they complete the activity, monitoring student progress as they construct the timescales, and asking questions to guide the whole class discussion of the topic. Summative assessment of learning occurs as students write responses to questions included in the student course pack activity. Responses are graded (grading key included in the instructor notes) and instructor feedback guides the whole class wrap-up discussion of what has been learned from the activity.

Controlled Vocabulary Terms

Resource Type: Activities: Classroom Activity, Lab Activity
Grade Level: College Lower (13-14): Introductory Level
Ready for Use: Ready to Use
Topics: Time/Earth History
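The idea of representing amounts of time as linear distances, one of the activity's learning objectives, can be sketched as a quick calculation. The event list, the approximate ages, and the 100 cm scale here are illustrative choices of mine, not taken from the course pack.

```python
# Sketch: mapping each event's age proportionally onto a 100 cm timeline,
# with the formation of the Earth at 0 cm and the present day at 100 cm.
# Ages are in millions of years (Ma) and are approximate.

EARTH_AGE_MA = 4540
TIMELINE_CM = 100

events = {
    "Formation of the Earth": 4540,
    "Oldest known fossils": 3500,
    "Cambrian explosion": 541,
    "Extinction of the dinosaurs": 66,
    "First modern humans": 0.3,
}

for name, age_ma in events.items():
    # Distance from the start of the timeline is proportional to
    # the time elapsed since the event.
    position_cm = (EARTH_AGE_MA - age_ma) / EARTH_AGE_MA * TIMELINE_CM
    print(f"{name}: {position_cm:.1f} cm from the start")
```

On this scale the dinosaur extinction lands at about 98.5 cm, which makes the compressed nature of recent geologic time easy to see.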
http://serc.carleton.edu/teacherprep/resources/activities/earthhistory.html
4
REDD+: Reducing Emissions from Deforestation and Forest Degradation

Forests and other natural ecosystems play an essential role in regulating our global climate by absorbing carbon dioxide from our atmosphere and storing it as biomass. Our world’s tropical rainforests are estimated to contain roughly one-quarter of all the carbon in the terrestrial biosphere. Human land use pressures such as development and agriculture are causing alarming rates of tropical deforestation today. The loss of our planet’s tropical forests and peat lands currently accounts for an astounding 15% of annual carbon dioxide emissions. At the Copenhagen climate talks of December 2009, the international community acknowledged that averting the potentially devastating effects of an increase in global temperatures of 2°C cannot be achieved without protecting our planet’s forests. Tropical forest conservation is one of the most cost-effective ways of mitigating the effects of climate change. To encourage developing countries to protect their tropical forests, developed countries are pledging to provide financial incentives for keeping forests standing. This is known as REDD+ (defined by the UNFCCC as “policy approaches and positive incentives on issues relating to reducing emissions from deforestation and forest degradation in developing countries; and the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks in developing countries”). The Copenhagen Accord, brokered by the United States, China, India, Brazil and South Africa at the conclusion of the 15th Conference of the Parties of the United Nations Framework Convention on Climate Change, includes a clear commitment to REDD+. - A Copenhagen Green Climate Fund has been created to support climate change mitigation projects in developing nations.
Six countries, including the United States, United Kingdom, Norway, France, Australia, and Japan, have already agreed to contribute US$3.5 billion over the next three years, with a commitment for longer-term funding of US$100 billion for activities that address climate change, including REDD+. - If carefully implemented to ensure financial transparency and environmental and social safeguards, REDD+ programs have the unprecedented potential to bring together the interests of governments, communities and the private sector to protect our planet’s forests in a collaborative and equitable system. - REDD+ could also provide important domestic economic benefits to United States farmers and foresters, by reducing market competition between U.S.-produced food and timber products and external products illegally sourced from areas of tropical deforestation.
http://www.forestjustice.org/our-issues/deforestation/redd/
4.03125
Technology Tools For The Classroom: Podcasts

We mentioned the impact that technology has made in the classroom, and briefly discussed ways to integrate QR codes and Digital Storytelling. Now we want to share another tool: podcasts. The word podcast is taken from the words iPod and broadcast, but podcasting is not limited to iPod users. A podcast is a pre-recorded audio program that can be downloaded to a computer or mobile device. Podcasts are often made available through websites for immediate delivery, and users may even subscribe to have episodes fed to their device on a daily or weekly basis. Here are some ways to use podcasting in the classroom:

- Organize substitute teachers: Provide substitute teachers with instructions for a class, or demonstrate how a typical class period runs. You can even break down your podcasts by subject.
- School announcements: Allow students to create announcements to share among peers. They can interview administrators, teachers and classmates to create a weekly or daily news outlet. Schools can even make important announcements via podcasts on their websites.
- Inform absent students: Help an absent student to catch up on missed material. Provide a brief summary of the class they missed so that they can remain up-to-date. Use handouts as digital images and create a slide show.
- Communicate with parents: Share your students’ work with parents and engage them in class happenings. School administrators can also inform parents about recent events and important dates.

There are endless ways to use podcasts in the classroom, and the above-mentioned are just a few among many. They give a creative approach to engaging students, parents, and faculty. Podcasts can even give life to monotonous lesson plans and day-to-day activities. The following video provides more background on podcasts and how to use them to enhance teaching: Teachers.tv

Can you come up with another way to use podcasts in the classroom?
http://www.pearsonschoolsystems.com/blog/?p=292
4.5
It is sometimes useful to characterize the orientation of a line by referring to its direction in a dipping plane. For example, you may have ripple marks on a bed or slickensides on a fault, and it may be difficult to determine their trend and plunge accurately. On the other hand, it is very important that these lines are contained within some particular plane (the bed and the fault, respectively). In cases like these, the pitch measurement is sometimes used. Pitch is defined as the angle between some line in a plane and a horizontal line, measured in the plane. A line can have the same pitch in two directions, so it is important to define directions precisely. Here's how to do this using descriptive geometry. In practice, problems like this are generally done with a stereonet. There are also special diagrams that allow the solution to be read off directly. We can solve the problem easily if we note that structure contours on a map are foreshortened views of the real thing. If the plane dips with an angle D, the mapped contours are compressed by a factor of cos D. You can determine the true spacing of the contours by calculation or by measuring distance AB in the cross-section. Note: a pitching line can never have a plunge greater than the dip of the plane!

1. Given the fault shown and slickensides with the observed pitch, find their trend and plunge.
2. Construct structure contours for the plane. Find the true spacing by cross-section or trigonometry and construct a second set of contours with true spacing.
3. On the true set of contours, construct the pitching line as it appears on the dipping plane. Project distances along the contours back to the map view.
4. On the map view, construct the map trace of the line. Find its trend and find the plunge by trigonometry or drawing a cross-section.

We know the pitch and the dip. What we want to find is the trend and plunge.
The trend is just the strike of the bed (given by the structure contours) plus or minus angle XAB in the top diagram. It will be plus or minus depending on which way the pitch is measured relative to the strike. The plunge (P) is found from the diagrams below. Note: we use B to denote location in the vertical cross section, and B' to denote the same point viewed in the dipping plane. If X is the angle between the trend and the strike, P is the plunge, and D is the dip of the layer, then tan P = tan D sin X (the standard apparent-dip relation).

Created 5 January 1999, Last Updated 26 January 2012. Not an official UW Green Bay site.
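The graphical construction above has a direct trigonometric counterpart, sketched below. This is my own minimal illustration, not code from the page: it assumes the pitch is measured in the plane from the strike azimuth toward the dip direction, uses tan X = tan(pitch) × cos(dip) for the map angle (the cos D foreshortening of the contours), and tan P = tan(dip) × sin(X) for the plunge.

```python
import math

# Sketch: trend and plunge of a pitching line, from strike, dip, and pitch.

def trend_and_plunge(strike_deg, dip_deg, pitch_deg):
    d = math.radians(dip_deg)
    r = math.radians(pitch_deg)
    # Angle X between the line's trend and the strike, in map view:
    # the in-plane pitch angle is foreshortened by cos D.
    x = math.atan(math.tan(r) * math.cos(d))
    # Plunge from the apparent-dip relation tan P = tan D * sin X.
    p = math.atan(math.tan(d) * math.sin(x))
    trend = (strike_deg + math.degrees(x)) % 360   # assumes pitch toward increasing azimuth
    return trend, math.degrees(p)

trend, plunge = trend_and_plunge(strike_deg=90, dip_deg=30, pitch_deg=40)
print(round(trend, 1), round(plunge, 1))   # 126.0 18.7
```

Note the plunge (about 18.7°) comes out less than the 30° dip, consistent with the rule that a pitching line can never plunge more steeply than its plane dips.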
http://www.uwgb.edu/dutchs/STRUCTGE/sl35.htm
4.1875
Area And Perimeter Powerpoint PPT
This page aggregates snippets from many classroom slide decks on area and perimeter. The definitions and formulas the decks share:
Perimeter: the distance around the outside of a closed plane figure, found by adding the lengths of all its sides. For a rectangle, P = 2L + 2W; for a square, P = 4s. Example: a rectangle 8 cm by 6 cm has perimeter 8 + 6 + 8 + 6 = 28 cm.
Area: the number of square units needed to cover the surface inside a figure, reported in square units. Rectangle: A = length × width (base × height). Square: A = s². Parallelogram: A = b × h. Triangle: A = ½ × base × height.
Circle: the perimeter of a circle is called the circumference, C = π × diameter; the area is A = π × r². Example: radius 5 gives area ≈ 3.14 × 25 = 78.5 and circumference ≈ 3.14 × 10 = 31.4.
Surface area of a prism: the lateral area is LA = Ph (perimeter of the base × height of the prism), and the total surface area is SA = Ph + 2B, where B is the area of the base (also written SA = 2B + LSA).
Effect of change: doubling a rectangle's dimensions (e.g. 7 ft × 4 ft to 14 ft × 8 ft) doubles its perimeter and quadruples its area.
Irregular shapes: to find the perimeter, first make sure you have the length of every side, decomposing the shape into simpler pieces if necessary.
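The perimeter and area formulas repeated across these slides can be collected into a few small helper functions; a sketch (function names are ours):

```python
import math

def rectangle(length, width):
    """Perimeter and area of a rectangle: P = 2L + 2W, A = L * W."""
    return 2 * length + 2 * width, length * width

def triangle_area(base, height):
    """Area of a triangle: one-half the base times the height."""
    return 0.5 * base * height

def circle(radius):
    """Circumference (pi * diameter) and area (pi * r^2) of a circle."""
    return math.pi * 2 * radius, math.pi * radius ** 2

# The 8 cm x 6 cm pool rectangle from the slides:
p, a = rectangle(8, 6)
print(p, a)  # 28 48
```

Note that perimeter scales linearly with the dimensions while area scales with their product, which is the "effect of change" result the slides illustrate with the doubled rectangle.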
http://freepdfdb.com/ppt/area-and-perimeter-powerpoint
4.21875
'Gifted and talented' describes children and young people with one or more abilities developed to a level significantly ahead of their year group. 'Gifted' learners are those who have abilities in one or more academic subjects, such as Maths and English. 'Talented' learners are those who have particular abilities in sport, music, design, or the creative and performing arts. Gifted and/or talented children may display some of the following characteristics:
· is intently focused
· asks insightful questions
· sees beyond the obvious
· provides creative and original solutions
· has great intellectual curiosity
· learns easily and readily
· possesses an unusual imagination
This list is not exhaustive, nor does displaying some or all of these qualities necessarily mean a child is gifted.
http://www.impington.cambs.sch.uk/curriculum/gifted-and-talented/aboutgandt.html?showall=&start=1
4
Our transcription: The cloud of material from which the planets formed started out as mostly gas with a small amount of dust. That dust was very fine indeed, but the particles collided with each other, and during those collisions they stuck to each other, so that material started to coagulate into still bigger bodies. You have to imagine that we went from dust particles all the way up to objects that were kilometers across. Those objects were colliding with each other, making still bigger bodies, so we think of the formation of solid planets like the Earth as a hierarchical process: a process that starts off with very small things, very large in number, moves toward intermediate-sized things, smaller in number, and eventually ends up with a very small number of large objects. As an intermediate stage in this process, we could imagine that the region where the Earth formed contained hundreds of objects that were about the size of the Earth's moon. Most of those objects then coalesced to make a single planet, the Earth.
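The hierarchical coagulation described in this transcript can be caricatured in a few lines of code; a toy sketch only (random pairwise sticking, no physics), showing many small bodies becoming a few large ones while total mass is conserved:

```python
import random

def accrete(n_grains, seed=1):
    """Toy model of hierarchical planet formation: start with many
    unit-mass dust grains and repeatedly merge random pairs, so a
    large number of small bodies becomes a small number of large ones.
    Illustrative only; real accretion physics is far richer."""
    random.seed(seed)
    bodies = [1.0] * n_grains
    while len(bodies) > 3:  # stop once only a handful of "planets" remain
        i, j = random.sample(range(len(bodies)), 2)
        bodies[i] += bodies[j]  # colliding bodies stick together
        bodies.pop(j)
    return bodies

planets = accrete(1000)
print(len(planets), sum(planets))  # 3 1000.0  (mass conserved)
```

Each merge removes one body, so the population shrinks one collision at a time, exactly the large-number-to-small-number progression the transcript describes.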
http://walrus.wr.usgs.gov/infobank/programs/html/school/moviepage/02.01.12.html
4.03125
Climate change explained
The Earth's climate is driven by a continuous flow of energy from the sun. Heat energy from the sun passes through the Earth's atmosphere and warms the Earth's surface. As its temperature increases, the Earth sends heat energy (infrared radiation) back into the atmosphere. Some of this heat is absorbed by gases in the atmosphere, such as carbon dioxide (CO2), water vapour, methane, nitrous oxide, ozone and halocarbons. Watch this video from National Geographic for a short visual explanation of climate change.
The greenhouse effect
Carbon dioxide has long been part of the atmosphere: 4 billion years ago its concentration was much higher than today (80%, compared with today's 0.03%). But most of it was removed through photosynthesis over time, and the carbon became locked in organisms and then in deposits such as coal, oil and natural gas inside the Earth's crust. A natural carbon dioxide cycle keeps the amount of CO2 in our atmosphere in balance. Decaying plants, volcanic eruptions and the respiration of animals release natural CO2 into the atmosphere, where it stays for about 100 years. It is removed again from the atmosphere by photosynthesis in plants and by dissolution in water (for instance in the oceans). The amount of naturally produced CO2 is almost perfectly balanced by the amount naturally removed, but even small changes caused by human activities can have a significant impact on this balance.
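The near-balance of natural sources and sinks, and the effect of a small human perturbation on it, can be sketched with a one-box model. The flux and stock numbers below are illustrative placeholders, not measured values:

```python
def co2_stock(years, natural_in=100.0, natural_out=100.0, human_in=0.0, start=750.0):
    """Atmospheric CO2 stock (arbitrary units) under constant annual fluxes.
    natural_in and natural_out cancel when the cycle is in balance, so any
    persistent human_in accumulates year after year. Illustrative numbers."""
    stock = start
    for _ in range(years):
        stock += natural_in + human_in - natural_out
    return stock

print(co2_stock(100))                # balanced cycle: 750.0, unchanged
print(co2_stock(100, human_in=5.0))  # small extra input: 1250.0
```

Even though the human flux is tiny next to the natural ones, it is one-sided, which is why it shifts the balance over decades rather than washing out.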
http://wwf.panda.org/about_our_earth/aboutcc/how_cc_works/
4.125
Lesson Plan: Sculpting
Materials:
- Books and photographs of North Carolina wildlife
- Clay and clay tools
- Water containers
- Glazes and/or paint
Objectives:
- Identify animals/wildlife found in North Carolina
- Identify animal/wildlife habitats located throughout the three regions of North Carolina
- Discuss how artists portray animals/wildlife in three-dimensional form
- Create a clay animal/wildlife from a North Carolina region
Web site connections: the From the Land section; "Animals: A scavenger hunt of objects"; Crystal King works with 4th graders; Billy Ray Hussey's Fu Lioness (clay sculpture).
Procedure:
- Have students think about all the animals that live in North Carolina. Begin by making lists on the board. Divide these into the three regions of North Carolina. Explain that wildlife is an important resource for the state. Collect and display photographs of animals that live in North Carolina.
- Explain that artists often choose animals as subject matter. Many of these artists are influenced by the animals that live around them in the local environment. Other artists like to create animals that live in other areas or are fantastic creatures of their imagination.
- Look at the animals that have inspired North Carolina artists in the From the Land section of the site. How many of these wildlife examples are found in North Carolina today? Look at animal sculptures by clay artists such as Crystal King and Billy Ray Hussey. What types of animals are found in their work?
- Explain that students will make clay sculptures of animals/wildlife found in one or all of the three regions of North Carolina. To learn as much about an animal as possible, students will need to do some preparatory work.
- Have students choose an animal that currently lives in North Carolina. Have students research their animal by making detailed drawings and recording important facts. Have students assemble their information in a poster format that showcases their drawing and lists three facts about the animal/wildlife (habitat, characteristics, etc.).
- Display students' posters in the classroom. Students are now ready to turn their investigations into three-dimensional form.
- Distribute clay. Have students create three-dimensional sculptures. Be sure students understand the concepts of three dimensions (width, height, depth) and of sculpture (art in the round). To begin, students will need to make a hollow clay body for their animal. This can be done by forming the body of the animal and then digging out clay from the underside with a clay tool. Hollow out enough clay to allow easy drying and reduce the weight of the sculpture.
- After forming the body, have students add heads and appropriate appendages. Students will need to be careful to score the clay (scratch the areas where the pieces of clay meet so that they adhere better) when adding these details.
- Next, have students create the textures found on the animal. An assortment of clay tools will be handy for creating the look of feathers, fur and other textures.
- When dry, apply commercial underglazes if desired and bisque fire the sculptures.
- Finally, glaze the sculptures for the final firing. If this step is omitted, paint the sculptures using tempera, acrylic or watercolor paint.
- As an extra step, have students create a three-dimensional diorama of the habitat and place the animal in front of the backdrop. A shoebox makes a good display case for the sculpture and the diorama.
- Display the clay sculptures. As students look at them, point out the variety of wildlife found across North Carolina.
Assessment:
- Do students have an understanding that artists use wildlife as inspiration for their artwork? Can students identify a North Carolina artist who uses wildlife as subject matter in his/her clay art?
http://www.mintmuseum.org/craftingnc/08-menu-07-c.htm
4.40625
SOURCES OF GAMMA RAYS Brighter colors in the Cygnus region indicate greater numbers of gamma rays detected by the Fermi gamma-ray space telescope. Credit: NASA/DOE/International LAT Team Gamma rays have the smallest wavelengths and the most energy of any wave in the electromagnetic spectrum. They are produced by the hottest and most energetic objects in the universe, such as neutron stars and pulsars, supernova explosions, and regions around black holes. On Earth, gamma waves are generated by nuclear explosions, lightning, and the less dramatic activity of radioactive decay. DETECTING GAMMA RAYS Unlike optical light and x-rays, gamma rays cannot be captured and reflected by mirrors. Gamma-ray wavelengths are so short that they can pass through the space within the atoms of a detector. Gamma-ray detectors typically contain densely packed crystal blocks. As gamma rays pass through, they collide with electrons in the crystal. This process is called Compton scattering, wherein a gamma ray strikes an electron and loses energy, similar to what happens when a cue ball strikes an eight ball. These collisions create charged particles that can be detected by the sensor. GAMMA RAY BURSTS Gamma-ray bursts are the most energetic and luminous electromagnetic events since the Big Bang and can release more energy in 10 seconds than our Sun will emit in its entire 10-billion-year expected lifetime! Gamma-ray astronomy presents unique opportunities to explore these exotic objects. By exploring the universe at these high energies, scientists can search for new physics, testing theories and performing experiments that are not possible in Earth-bound laboratories. If we could see gamma rays, the night sky would look strange and unfamiliar.
The familiar view of constantly shining constellations would be replaced by ever-changing bursts of high-energy gamma radiation that last fractions of a second to minutes, popping like cosmic flashbulbs, momentarily dominating the gamma-ray sky and then fading. NASA's Swift satellite recorded the gamma-ray blast caused by a black hole being born 12.8 billion light years away (below). This object is among the most distant objects ever detected. Credit: NASA/Swift/Stefan Immler, et al. COMPOSITION OF PLANETS Scientists can use gamma rays to determine the elements on other planets. The Mercury Surface, Space Environment, Geochemistry, and Ranging (MESSENGER) Gamma-Ray Spectrometer (GRS) can measure gamma rays emitted by the nuclei of atoms on planet Mercury's surface that are struck by cosmic rays. When struck by cosmic rays, chemical elements in soils and rocks emit uniquely identifiable signatures of energy in the form of gamma rays. These data can help scientists look for geologically important elements such as hydrogen, magnesium, silicon, oxygen, iron, titanium, sodium, and calcium. The gamma-ray spectrometer on NASA's Mars Odyssey Orbiter detects and maps these signatures, such as this map (below) showing hydrogen concentrations of Martian surface soils. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio GAMMA RAY SKY Gamma rays also stream from stars, supernovas, pulsars, and black hole accretion disks to wash our sky with gamma-ray light. These gamma-ray streams were imaged using NASA's Fermi gamma-ray space telescope to map out the Milky Way galaxy by creating a full 360-degree view of the galaxy from our perspective here on Earth. Credit: NASA/DOE/International LAT Team A FULL-SPECTRUM IMAGE The composite image below of the Cas A supernova remnant shows the full spectrum in one image. Gamma rays from Fermi are shown in magenta; x-rays from the Chandra Observatory are blue and green. 
The visible light data captured by the Hubble space telescope are displayed in yellow. Infrared data from the Spitzer space telescope are shown in red, and radio data from the Very Large Array are displayed in orange. Credit: NASA/DOE/Fermi LAT Collaboration, CXC/SAO/JPL-Caltech/Steward/O. Krause et al., and NRAO/AUI
Source: Science Mission Directorate, "Gamma Rays," Mission:Science, National Aeronautics and Space Administration, 2010.
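The Compton-scattering energy loss described in the detection section above follows the standard Compton formula; a minimal sketch, taking the electron rest energy as 511 keV:

```python
import math

ELECTRON_REST_KEV = 511.0  # electron rest energy, m_e * c^2, in keV

def compton_scattered_energy(e_kev, theta_deg):
    """Energy of a gamma ray after Compton scattering through angle theta:
    E' = E / (1 + (E / m_e c^2) * (1 - cos theta)).
    The energy lost goes to the recoiling electron, which the crystal detects."""
    theta = math.radians(theta_deg)
    return e_kev / (1.0 + (e_kev / ELECTRON_REST_KEV) * (1.0 - math.cos(theta)))

# A 511 keV gamma ray backscattered straight behind (theta = 180 deg)
# keeps exactly one third of its energy:
print(compton_scattered_energy(511.0, 180.0))  # ~170.3 keV
```

The higher the incoming energy and the larger the scattering angle, the more energy is transferred to the electron, which is why stacked crystal blocks can track a gamma ray through several successive scatters.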
http://missionscience.nasa.gov/ems/12_gammarays.html
4.09375
In etymology, a root is the core form of a word, often in a primitive attestation or even in a reconstruction. Root forms are important in deducing the structure of language families. The Appendix of Indo-European Roots below is designed to let the reader trace English words derived from Indo-European languages back to their fundamental components in Proto-Indo-European, the parent language of all ancient and modern Indo-European languages. Indo-European is the name given, for geographic reasons, to the large and well-defined linguistic family of some 150 languages spoken by about three billion people; it includes most of the major languages of Europe and western Asia, which belong to this single superfamily. Popular languages in this superfamily include English, Spanish, French, Portuguese, German, Italian, Russian, Persian, Hindi, Punjabi, and Urdu.
http://archimedes-lab.org/root_index/indo_european_roots.html
4.0625
Mercury's prime meridian, or 0° longitude, crosses through the left side of this image. The prime meridian was defined as the longitude where the Sun was directly overhead as Mercury passed through its first perihelion in the year 1950. The area was first seen by a spacecraft during MESSENGER's second Mercury flyby and is located to the northwest of the impact crater Derain. The image here has been placed into a map projection with north to the top. The original image was binned on the spacecraft from its original 1024 x 1024 pixel size to 512 x 512. This type of image compression helps to reduce the amount of data that must be downlinked across interplanetary space from MESSENGER to the Deep Space Network on Earth. On March 17, 2011 (March 18, 2011, UTC), MESSENGER became the first spacecraft ever to orbit the planet Mercury. The mission is currently in its commissioning phase, during which spacecraft and instrument performance are verified through a series of specially designed checkout activities. In the course of the one-year primary mission, the spacecraft's seven scientific instruments and radio science investigation will unravel the history and evolution of the Solar System's innermost planet. Visit the Why Mercury? section of this website to learn more about the science questions that the MESSENGER mission has set out to answer.
Image Mission Elapsed Time (MET): 209937428
Image ID: 67124
Instrument: Wide Angle Camera (WAC) of the Mercury Dual Imaging System (MDIS)
WAC filter: 7 (748 nanometers)
Center Latitude: 5.9°
Center Longitude: 4.6° E
Resolution: 1253 meters/pixel
Scale: The horizontal width of the scene is about 875 kilometers (550 miles)
These images are from MESSENGER, a NASA Discovery mission to conduct the first orbital study of the innermost planet, Mercury. For information regarding the use of images, see the MESSENGER image use policy.
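The on-board binning described above (1024 x 1024 down to 512 x 512) amounts to combining each 2 x 2 block of pixels into one, quartering the data volume to downlink. A toy sketch using simple block averaging (the flight software's actual binning scheme may differ):

```python
def bin2x2(image):
    """Average each 2x2 block of pixels, halving both image dimensions.
    Expects a list of equal-length rows with even dimensions."""
    rows, cols = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1] +
             image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]

frame = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
print(bin2x2(frame))  # [[1.0, 2.0], [3.0, 4.0]]
```

The cost is spatial resolution: each output pixel now covers twice the ground distance per side, which is reflected in the image's quoted meters-per-pixel figure.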
http://photojournal.jpl.nasa.gov/catalog/PIA14197
4.21875
WASHINGTON — The earliest known ancestor of most animals may have been a minute creature shaped like a flattened helmet and barely visible to the naked eye, according to a new fossil discovery. The findings should help researchers understand how complex life evolved and may offer clues to the curious proliferation of new animals known as the Cambrian Explosion. The ancestors of most animal lineages first appeared during this episode of rapid evolution approximately 540 million years ago. The new fossils, however, lived around 55 million years earlier. Despite these animals’ minute size, their biology was relatively complex. Thus, animals with a fairly sophisticated “genetic toolkit” may have existed well before the Cambrian Explosion. What led them to diversify so dramatically in the Cambrian is still an open question. The researchers report their discovery in the journal Science, published by AAAS, the nonprofit science society. Why sponges don’t shake hands Like most modern animals, the animals that emerged during the Cambrian Explosion had two-sided body plans instead of circular ones. That is, they had a left and right side, a top and bottom, and a mouth and anus. This type of body plan is known as “bilateral symmetry.” In contrast, many sponges and cnidarians such as corals have “radial symmetry,” which means that cutting the shape in half — in any direction — produces two sides that are mirror images of each other. The fossils that David Bottjer of the University of Southern California and his colleagues from China, the United States and Taiwan discovered in China’s Doushantuo Formation may be the earliest known examples of bilaterally symmetric animals. “This discovery helps us to learn more about the murky origins of bilaterian animals, which are most of what we see on Earth. You and I would have to go scuba diving to see cnidarians and sponges,” Bottjer said. 
The researchers named the new animal “Vernanimalcula,” or “small spring animal,” referring to the fact that it lived after a “wintry” period of extensive glaciation. A long fuse? The genetic programming that produces a bilaterally symmetric body plan is relatively complex, which is one of the things that makes the sudden “explosion” of many bilaterians at the start of the Cambrian such a puzzle. One of the key questions about early animal evolution has been the relationship between complexity and size. Did one develop before the other? Did they go hand in hand? Most of the bilaterian animals appearing in the Cambrian were substantially larger than the microscopic creatures discovered by Bottjer and his colleagues. The Vernanimalcula fossils suggest that “maybe complex animals were around beforehand, and it was just the ability to grow large that caused the Cambrian Explosion,” Bottjer said. “The Cambrian Explosion may have had a really long fuse,” he added. Which came first: ‘Chicken’ or eggs? For a while now, scientists have been discovering signs of animal life around 20 to 30 million years before the Cambrian Explosion. These fossils, known as the “Ediacaran fauna,” were relatively large but primitive. “These fauna were soft — a lot of them were just big flat sheets with compartments,” Bottjer said. This has confused people somewhat as to how to classify them, but most researchers would agree that the majority of them were primitive cnidarians, according to Bottjer. “There were probably some bilaterians around. We just don’t have much of a record. Then you go back farther in time and we don’t have any record of any of these Ediacaran animals. That’s where our fossils come in,” he said. The 580 million-year-old to 600 million-year-old Doushantuo Formation, where the Vernanimalcula fossils were found, has already yielded some tantalizing signs of animal life before the Ediacaran. 
Scientists previously discovered tiny eggs and embryos in this sedimentary rock layer, although it wasn’t certain whether the “chicken” that laid these eggs was a bilaterian. Little vacuum cleaners The new fossils, which may indeed be related to the embryos found nearby, have many of the bilaterian characteristics that could make them early ancestors of most modern animals. In order to identify the fossils and then distinguish their mouths, internal organs and other structures, the researchers sliced off paper-thin sections of the rock and studied them under a microscope. The researchers then used a computer program to reconstruct a three-dimensional image of what the animal may have looked like. “They were probably little vacuum cleaners, with suctionlike mouths. Basically just a little guy scooting along the ocean bottom, probably sucking up microbes,” Bottjer said. He thinks the animals used suction for feeding because of the signs of muscles around the mouths in the fossils. The “scooting” part is speculation, however; the fossils don’t reveal how the animals moved. Fossils few and far between Only a handful of the fossils the researchers identified in their rock slices were actually Vernanimalcula specimens. The others were small bits of sponges and cnidarians, as well as eggs and embryos. Bottjer thinks that’s because preserving the soft tiny animals with their internal organs intact requires an unusual set of conditions in which phosphate — which we have in our bones and teeth — speedily works its way into the cellular structures. In fact, the lack of known Precambrian fossils is one of the reasons that scientists still have so many questions about how the earliest animals evolved. Bottjer is optimistic, however, that he and his colleagues will find more specimens to study. “People have only really been looking for the last 10 years. 
Lots of times people think there’s nothing more to find but soon they’re saying, ‘Wow, we found something else!’ The fossil record is better than we think it is,” he said. © 2013 American Association for the Advancement of Science
http://www.nbcnews.com/id/5112628/
4.03125
A rare find of stunningly intact fossils of prehistoric plankton will allow researchers to study how the tiny marine organisms cope with rising acidity in the oceans. Finding such intact specimens of coccolithophores, micrometre-sized marine plankton encased in discs of calcium carbonate, is a real coup — searching for fossils of calcified single-celled organisms often yields only skeletal bits that have fallen to the ocean floor. Scanning electron microscope image of rock surfaces collected from the Bass River core in New Jersey. Image Credit: Paul Bown “Breaking open undisturbed 56-million-year-old sediment samples, we can image coccolithophores — right down to their intracellular vesicles — using a scanning electron microscope,” said Paul Bown, a palaeoceanographer at University College London, who this week presented images of the fossils at the Third International Symposium on the Ocean in a High CO2 World in Monterey, California. A growing concern among scientists is that ocean acidification, driven by climate change, will reduce the abundance of calcium carbonate in the seas, making it difficult for algae to form their microscopic plating, essential for their survival. With intact fossils in hand, researchers can compare the sizes, shapes, thickness and growth rates of ancient and modern coccolithophores. Read the full article: Nature doi:10.1038/nature.2012.11500
http://www.surfaceoa.org.uk/?p=1976
4.15625
Increased air pollution may have delayed global warming in the eastern U.S. in the late 20th century. A higher incidence of particulate air pollution (aerosols) in the eastern United States between 1930 and 1990 (peaking around 1980) created a "warming hole" over the region late in the 20th century.
Temperature anomaly trend, 1930-1990. Image from GISS.
This "warming hole" was an area where the warming that would be expected from increasing greenhouse gases was delayed, according to climate researchers from Harvard University. While greenhouse gases like carbon dioxide and methane warm the Earth's surface, tiny particles in the air (such as aerosols) can have the reverse effect on regional scales, according to the Harvard School of Engineering and Applied Sciences press release. Aerosol pollution reflects incoming sunlight, causing a cooling effect at the surface.
Clouds in clean air are composed of a relatively small number of large droplets (below left). As a consequence, the clouds are somewhat dark and translucent. In air with high concentrations of aerosols, water can easily condense on the particles, creating a large number of small droplets (right). These clouds are dense, very reflective, and bright white. Image courtesy of NASA.
Air pollution concentrations in the eastern U.S. began to diminish in the 1980s and 1990s after the passage of, and changes to, the Clean Air Act. Excerpt from the Harvard press release: "For the sake of protecting human health and reducing acid rain, we've now cut the emissions that lead to particulate pollution," he adds, "but these cuts have caused the greenhouse warming in this region to ramp up to match the global trend." At this point, most of the "catch-up" warming has already occurred. "No one is suggesting that we should stop improving air quality, but it's important to understand the consequences. Clearing the air could lead to regional warming," says co-author Loretta J.
Mickley, a Senior Research Fellow in atmospheric chemistry at SEAS. The analysis was based on a combination of two complex models of Earth systems. The results of this study were also published in the journal Atmospheric Chemistry and Physics.
http://www.accuweather.com/en/weather-blogs/climatechange/the-eastern-us-warming-hole-of-1/64548
On July 8, 1950, President Harry Truman appointed Gen. Douglas MacArthur commander in chief of United Nations forces in the Korean War. General MacArthur thus became the first leader of military forces fighting under a United Nations banner. Eleven days earlier, President Truman announced that he had ordered United States air and naval forces to fight with South Korea’s Army, two days after Communist North Korea invaded South Korea. The invasion had prompted the Security Council of the United Nations to call for a cease-fire and for all combatants to return to their former positions on either side of the 38th Parallel, which divides the two Koreas. The New York Times reported that the Security Council had, the previous day, adopted Resolution 84, calling for the United States to name the commander general of the combined land, air and naval units then battling the North Korean forces on the divided peninsula. Some interested parties had demanded that the United Nations itself field what the Times article called “an outright United Nations police force.” The use of United States troops alone, serving at the direction of President Truman but under a United Nations flag, was termed “a workable compromise.”
http://learning.blogs.nytimes.com/tag/military-strategy/
Although nearly 40 years have passed since Brazil banned slash-and-burn practices in its Atlantic Forest, the destruction lingers. New research reveals that charred plant material is leaching out of the soil and into rivers, eventually making its way to the ocean. So much of this “black carbon” is entering the marine ecosystem that it could be hurting ocean life, although further tests will be needed to confirm this possibility. People have used fire to shape Earth’s vegetation for millennia. In Brazil’s Atlantic Forest, Europeans began burning trees to make way for settlements and agriculture in the 16th century. What once blanketed 1.3 million square kilometers and ranked as one of the world’s largest tropical forests had shrunk to 8% of its former size by 1973, when protective laws were put in place. But that’s not the end of the story, according to researchers led by Carlos Eduardo de Rezende, an aquatic biogeochemist at the State University of Norte Fluminense in Rio de Janeiro, and Thorsten Dittmar, a marine geochemist at the Max-Planck Institute for Marine Microbiology in Bremen, Germany. The team discovered high levels of black carbon in the region’s soil and in the Paraiba do Sul River, the largest river that exclusively drains the area once occupied by the Atlantic Forest. Locals still burn sugarcane each year as a preharvest way of prepping the soil, but the researchers found that this could not account for the amount of black carbon they were seeing. To figure out how much black carbon the burned forest originally released, Rezende and colleagues looked to the neighboring Amazon forest for clues. Other studies reported black carbon rates for burning tracts of virgin Amazon rainforest, so they extrapolated those figures to match the historical range of the slashed-and-burned Atlantic Forest, which once had similar woody tree species to the Amazon. They calculated that torching the Atlantic Forest released about 200 to 500 million tonnes of black carbon. 
Given the material’s half-life, they estimate that it will take between 630 and 2200 years for just half of the black carbon to leach out of the region’s soils. Black carbon typically leaves the soil when rainwater carries the material into nearby rivers. From there, the rivers deposit it in the ocean. To calculate just how much carbon this process may be adding to the sea, the researchers collected river samples once every 2 weeks from 1997 to 2008. They found that the dissolved black carbon continues to be exported from the soil at approximately the same levels each year during the rainy season. More than 2700 tonnes of the former forest’s dissolved black carbon enters the ocean each year from the Paraiba do Sul River alone, the team reports online today in Nature Geoscience. Scaling their findings up, Rezende and his colleagues estimate that the former forest’s total cleared area sends between 50,000 and 70,000 tonnes of dissolved black carbon to the marine environment. “This kind of long-term time series is really essential to understanding global environmental change,” says Carrie Masiello, an Earth systems scientist at Rice University in Houston, Texas, who was not involved in the study. What becomes of this black carbon upon entering the ocean, however, is still unknown. One of the researchers’ previous studies found black carbon in the remote depths of the oceans surrounding Antarctica, and Dittmar suspects that much of the black carbon eventually winds up in deep ocean deposits around the globe. Only further investigation will reveal how much of it makes its way from the river transport to the deep ocean, however, and how it might affect marine life, especially microbial communities that live in and feed on small organic particles. “What’s exciting about this paper is it shows that tropical deforestation is not a small scale process,” Masiello says. 
Because slash-and-burn is still rampant in tropical locales around the planet, she explains, deforestation may very well be changing the way carbon cycles through the world’s oceans.
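A back-of-envelope check shows the figures above are mutually consistent. The assumption here is mine, not the authors': treating black carbon leaching as simple first-order exponential decay, so that a half-life t_half implies a yearly loss of ln 2 / t_half of the remaining stock.

```python
import math

# Hypothetical consistency check using the article's figures:
# 200-500 million tonnes of black carbon stock, 630-2200 year half-life.
def annual_export(stock_tonnes, half_life_years):
    """Tonnes leached per year if the stock decays exponentially with the given half-life."""
    rate = math.log(2) / half_life_years   # fraction of the stock lost per year
    return rate * stock_tonnes

# Smallest stock with the slowest leaching, and largest stock with the fastest:
low = annual_export(200e6, 2200)
high = annual_export(500e6, 630)
print(f"{low:,.0f} to {high:,.0f} tonnes per year")   # roughly 63,000 to 550,000
```

The reported 50,000 to 70,000 tonnes per year sits at the low end of this range, consistent with a large stock leaching out slowly.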
http://news.sciencemag.org/sciencenow/2012/08/years-after-slash-and-burn-brazi.html
Bridging Literature and Mathematics by Visualizing Mathematical Concepts
Grades: 3-5
Lesson Plan Type: Standard Lesson
Estimated Time: Five 50-minute sessions
Grosse Pointe Woods, Michigan
Math-related, informational books, like Steve Jenkins' Actual Size and David M. Schwartz' If You Hopped Like a Frog, provide the focus for this lesson, which connects reading, writing, math, and science. By exploring the life-size images in Actual Size and the comparisons to familiar objects in both books, students visualize measurements and mathematical proportions, which, in turn, teaches ratio. Students first begin with a read-aloud and discussion of Actual Size and then use their hands to make size comparisons with the illustrations in the book. Next they listen to and discuss If You Hopped Like a Frog. They then talk about the similarities and differences between the two books and complete a Venn diagram. Finally, students apply these strategies to their own research and writing, bridging literature and mathematics as they research and write about an animal from one of the texts and then share their work with the class. Interactive Venn Diagram: Students use this online tool to compare and contrast the elements of two stories read in class. Multigenre Mapper: Students use this online tool to publish their writing, including a drawing and three written texts. 
Stephanie Harvey suggests that teachers "surround kids with compelling nonfiction of every type and form" (13) and provide children with time to "research topics of interest and to practice reading and writing strategies" (14). The visual, language, and mathematical features of the math-related book pair that provides the focus of this lesson serve as powerful examples for children to examine critically and to inspire their own nonfiction writing. At the same time, these books incorporate real-world applications of linear, area, and other forms of measurement, as well as the concept of ratio (NCTM, 2000, Connections Standard; Whitin & Whitin, 2004). These books can also inspire an inquiry stance toward scientific learning that is advocated by the National Science Teachers Association (NRC, 1996). Harvey, Stephanie. "Nonfiction Inquiry: Using Real Reading and Writing to Explore the World." Language Arts 80.1 (September 2002): 12-22. Whitin, David J., and Phyllis Whitin. 2004. New Visions for Linking Literature and Mathematics. Urbana, IL: National Council of Teachers of English; Reston, VA: National Council of Teachers of Mathematics. National Council of Teachers of Mathematics. 2000. Principles and Standards for School Mathematics. Reston, VA: National Council of Teachers of Mathematics. National Research Council. 1996. National Science Education Standards. Arlington, VA: National Academy Press.
http://www.readwritethink.org/classroom-resources/lesson-plans/bridging-literature-mathematics-visualizing-822.html
A major advantage of positional numeral systems over other systems of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number by every digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible pairs of digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is a lot simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing the products, and then adding them all together in a second phase, computers add the products to the result as soon as they are computed. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n² operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n²). An old method for multiplication that doesn't require multiplication tables is the peasant multiplication algorithm; this is actually a method of multiplication using base 2. For systems that need to multiply huge numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, this algorithm is too slow. These systems employ Karatsuba multiplication, which was discovered in 1962 and proceeds as follows: suppose you work in base 10 (unlike most computer implementations) and want to multiply two n-digit numbers x and y, and assume n = 2m is even (if not, pad with zeros at the left end). 
We can then split each factor at the m-th digit, writing x = x1·10^m + x0 and y = y1·10^m + y0, where x1, y1 are the leading halves and x0, y0 the trailing halves. The product is xy = x1y1·10^(2m) + (x1y0 + x0y1)·10^m + x0y0, and the middle coefficient can be obtained with a single extra multiplication, because x1y0 + x0y1 = (x1 + x0)(y1 + y0) − x1y1 − x0y0. Only three multiplications of m-digit numbers are therefore needed instead of four. If T(n) denotes the time it takes to multiply two n-digit numbers with Karatsuba's method, then we can write T(n) = 3T(n/2) + cn, which solves to T(n) = Θ(n^(log2 3)) ≈ Θ(n^1.585). It is possible to experimentally verify whether a given system uses Karatsuba's method or long multiplication: take your favorite two 100,000-digit numbers, multiply them and measure the time it takes. Then take your favorite two 200,000-digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second time will be about three times as long as the first; if long multiplication is being used, it will be about four times as long. Another method of multiplication is called Toom-Cook (or Toom-3). There exist even faster algorithms, based on the fast Fourier transform. The idea, due to Strassen (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings. Instead of computing a convolution, one can first compute the discrete Fourier transforms, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See convolution theorem.) The fastest known method based on this idea was described in 1972 by Schönhage and Strassen and has a time complexity of Θ(n ln(n) ln(ln(n))). These approaches are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. The GIMPS distributed Internet prime search project deals with numbers having several million digits and employs a Fourier transform based multiplication algorithm. Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems by using modular arithmetic instead of complex numbers. All the above multiplication algorithms can also be used to multiply polynomials. 
A simple improvement can be made to the basic recursive multiplication algorithm. This may not help so much for multiplication by real or complex values, but it is useful for multiplication of very large integers, which are supported in some programming languages such as Haskell, Ruby, and Common Lisp.
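The three-multiplication recursion described above can be sketched in a few lines of Python. This is a minimal illustration of the splitting scheme, not how production bignum libraries implement it:

```python
def karatsuba(x, y):
    """Multiply two non-negative integers by splitting each into high and low halves."""
    if x < 10 or y < 10:                       # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2     # split position, in digits
    x1, x0 = divmod(x, 10 ** m)                # x = x1*10^m + x0
    y1, y0 = divmod(y, 10 ** m)                # y = y1*10^m + y0
    z0 = karatsuba(x0, y0)                     # low product
    z2 = karatsuba(x1, y1)                     # high product
    z1 = karatsuba(x0 + x1, y0 + y1) - z0 - z2 # middle term from ONE extra multiply
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))   # 7006652, same as 1234 * 5678
```

Timing this on inputs of 100,000 and then 200,000 digits reproduces the experimental test described above: doubling the digit count roughly triples the running time.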
http://www.fact-index.com/m/mu/multiplication_algorithm.html
European and Canadian System From the Middle Ages or earlier, many trades in France and other European countries organized themselves into communities which came to be known as corporations or guilds. The guild system was characterized by the creation among craftsmen of a hierarchy comprising apprentices, journeymen and masters. The masters headed the guilds and elected juries responsible for drawing up and implementing regulations. Among these regulations were those governing apprenticeship and access to mastership: a long training period and an often rigorous entry procedure were imposed on apprentices; journeymen wishing to become masters had to pay a large fee and produce an original work of superior quality, ie, a masterpiece. When the first French craftsmen arrived in NEW FRANCE in the 17th century, they soon discovered that it was impossible to carry on their strict guild traditions and regulations. There was too much to be done in the new country for them to concentrate solely on their own trade; their time was divided among exercising their craft, clearing the land, fishing and trading furs. When the first villages sprang up and the training of a core group of craftsmen became necessary, the French system was no longer suitable, because 17th- and 18th-century Canada lacked specialized manpower. Guilds, as they had existed in France, were abandoned; however, the hierarchy and system of practical training were maintained. Members of the professions enjoyed better conditions: already favoured by a higher social rank, they came to Canada in fewer numbers and were able to devote most of their time to their professional duties. Certain characteristics of the apprenticeship system in England, France, the US, Québec and the Maritimes bear striking resemblances, particularly the age of apprentices and the duration of apprenticeship. 
With the exception of many anglophone masters, who hired their apprentices at a younger age and for a longer period, most masters hired apprentices around the age of 16 for a 3-year period. Training was generally completed around the age of 21 but there were some exceptions, notably in the case of apprentices who were orphans. Authorities used official apprenticeship as a means of placing orphans in families and ensuring that they received training in a particular trade. Orphans generally began their training at a much younger age and worked for longer periods than other apprentices. The age of apprentices in the professions was essentially the same as that of craft apprentices, but the length of the training differed considerably. For example, legislation established the training period of lawyers and notaries at 5 years. Working conditions reveal the characteristics of apprenticeship and the marked differences between craftsmen and professionals. Among 17th- and 18th-century craftsmen, the traditional organization of production and work dominated. With the exception of items produced at the FORGES SAINT-MAURICE and a few large workshops, each piece was the work of one craftsman, master or journeyman, sometimes assisted by an apprentice. Craftsmen generally worked in small workshops (often attached to their homes) and owned all their tools. Work was usually done to order and division of labour was almost nonexistent. The major sources of energy were still muscle power and water; raw materials were processed mainly by hand tools. Under this system, working relations were not defined solely on the basis of labour supply and demand as today but also in terms of rights, obligations, and personal relationships which were often very authoritarian in nature. Contracts stipulated that the apprentice obey the master, work on his behalf and strive to learn his trade. 
In return, the master agreed to reveal all the secrets of his craft and provide accommodation, food, clothing and a small annual salary, paid either in cash or in kind. The apprentice worked 6 days a week, his hours varying depending on the trade and whether it was practised indoors or outdoors. Working days were generally from 5 AM to 8 or 9 PM, with a minimum of 2 hours for lunch and dinner. Apprentices worked 12-14 hours a day, a little longer than the journeyman or master, since they had to prepare the others' work before the shop opened and had to clean up after closing. Some apprentices in the 17th and 18th centuries received religious and academic as well as technical training. Once the apprenticeship period was over, young workers might work for a few years as journeymen and then open their own shops when they had enough money; others inherited their fathers' shops. The journeyman's salary was 4-6 times greater than that of the apprentice. At this intermediate stage, the journeyman entered the job market and made products of his own. Some journeymen trained apprentices or were responsible for the internal operation of the workshop in the master's absence. The working conditions of apprentices to merchants or in the professions were very different from those of craft apprentices. Apprentices to the professions usually came from wealthier families, worked fewer hours, received some education before beginning their apprenticeship and were not required to perform domestic duties. Changes in the Nineteenth Century The economic boom of the early 19th century, certain British influences and the process of urbanization provoked irreversible changes in MANUFACTURING, work organization and conditions of apprenticeship. In order to meet the competition from imported products in the expanding local market, many master craftsmen also acted as merchants and manufacturers and modified their production methods. 
They used machine tools, grouped several craftsmen together under the same roof, shared tasks and were thus able to hire unskilled labour to make products on a large scale. This development marked the transition from individual artisan to workshop and, subsequently, to factory production in urban centres. The transition was a long and complex process spread over nearly the entire 19th century and craftsmen's workshops and factories co-existed for a long time. These major changes affected the role of traditional apprenticeship in the work force and society. Apprentices gradually became a source of cheap manpower and were hired more for their labour than for training purposes. They were hired at increasingly younger ages and their contracts were prolonged and expanded to embrace several tasks so that they could be used as cheap labour for longer periods. The traditional responsibilities of masters (support and moral and religious education) were replaced by a cash payment. Some masters failed to teach apprentices "the secrets of the trade"; others mistreated them. Apprentices often ran away, since they had difficulty breaking their contracts. If caught, they were prosecuted. While the traditional apprenticeship system deteriorated, other institutions increased in importance. At the beginning of the 19th century, evening schools began to replace the education previously given by the master and his wife. In order to have more control over the education of apprentices and young journeymen, masters and merchants followed the example of their counterparts in Great Britain and established technical institutes in the major cities of eastern Canada (Halifax, Québec City and Montréal). These institutes were viewed by the authorities as training centres where workers could learn respect for the establishment, discipline in the workplace and proper social behaviour. 
Sunday schools and temperance groups, established during the first half of the 19th century, supported the institutes in the pursuit of a common goal: to teach workers how to use their leisure time so that they would put more effort into their work. During the second half of the 19th century, technical institutes partly replaced workshops as the place in which training was provided. The founding of the first unions in the 1830s and professional corporations in the late 1840s resulted, in part, from a desire to control access to occupations and to protect their members. With the exception of the construction trades and the professions, which succeeded in limiting the number of apprentices, trades (eg, shoemaking, coopering, tinsmithing) were threatened by technological change. Schools gradually took over responsibility for training professionals; hence, the creation of professional associations (eg, of doctors and notaries) gave professionals an opportunity to control not only the quality of education but also the number of graduates (see EDUCATION, HISTORY OF). Although government authorities at first feared giving so much power to doctors and notaries, these 2 professional groups were granted such powers (in 1845 and 1847) after applying considerable pressure. Authors: Jean-Pierre Hardy and David-Thiery Ruddel
http://www.thecanadianencyclopedia.com/articles/apprenticeship-in-early-canada
Logic gates perform basic logical functions and are the fundamental building blocks of digital integrated circuits. Most logic gates take two binary values as input and output a single value of 1 or 0. Some circuits may have only a few logic gates, while others, such as microprocessors, may have millions of them. There are seven different types of logic gates, which are outlined below. In the following examples, each logic gate except the NOT gate has two inputs, A and B, which can either be 1 (True) or 0 (False). The resulting output is a single value of 1 if the result is true, or 0 if the result is false.
- AND - True if A and B are both True
- OR - True if either A or B is True
- NOT - Inverts the value: True if the input is False; False if the input is True
- XOR - True if exactly one of A and B is True; False if both are True or both are False
- NAND - AND followed by NOT: False only if A and B are both True
- NOR - OR followed by NOT: True only if A and B are both False
- XNOR - XOR followed by NOT: True if A and B are both True or both False
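The definitions above can be modelled directly as small Python functions on 0/1 values; the following sketch prints the full truth table for the two-input gates:

```python
# Each gate maps 0/1 inputs to a 0/1 output, mirroring the definitions above.
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def XOR(a, b):  return a ^ b
def NAND(a, b): return NOT(AND(a, b))   # AND followed by NOT
def NOR(a, b):  return NOT(OR(a, b))    # OR followed by NOT
def XNOR(a, b): return NOT(XOR(a, b))   # XOR followed by NOT

print("A B | AND OR XOR NAND NOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), OR(a, b), XOR(a, b),
              NAND(a, b), NOR(a, b), XNOR(a, b))
```

Note that NAND and NOR are each "universal": any of the other gates can be built from combinations of just one of them, which is why they are so common in real chips.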
http://www.techterms.com/definition/logicgate
Finnish Declaration of Independence The Finnish declaration of independence (Finnish: Suomen itsenäisyysjulistus) was adopted by the Parliament of Finland on 6 December 1917. It declared Finland an independent nation among nations and a sovereign republic, thereby ending the country's status as the Russian Grand Duchy of Finland. Revolution in Russia The February and October Revolutions of 1917 had also ignited hopes in the Grand Duchy of Finland. After the abdication of Grand Duke Nicholas II on 15 March 1917, the personal union between Russia and Finland lost its legal base – at least according to the view in Helsinki. There were negotiations between the Russian Interim Government and Finnish authorities. The resulting proposal, approved by the Interim Government, was heavily rewritten in the Parliament and transformed into the so-called Power Act (Finnish: Valtalaki, Swedish: Maktlagen), in which the Parliament declared itself to hold all powers of legislation, except in respect of foreign policy and military issues, and also that it could be dissolved only by itself. At the time of voting it was believed that the Interim Government would be defeated. The Interim Government survived, however; it did not approve the act and dissolved the Parliament. After new elections and the defeat of the Interim Government, on 5 November, the Parliament declared itself to be "the possessor of supreme State power" in Finland, based on Finland's Constitution, and more precisely on §38 of the old Instrument of Government of 1772, which had been enacted by the Estates after Gustav III's bloodless coup. On 15 November 1917, the Bolsheviks declared a general right of self-determination, including the right of complete secession, "for the Peoples of Russia". On the same day the Finnish Parliament issued a declaration by which it assumed, pro tempore, all powers of the Sovereign in Finland. 
The old Instrument of Government was however no longer deemed suitable. Leading circles had long held monarchism and hereditary nobility to be antiquated, and advocated a republican constitution for Finland. The Senate of Finland, the government the Parliament had appointed in November, came back to the Parliament with a proposal for a new republican Instrument of Government on 4 December. The Declaration of Independence was technically given the form of a preamble of the proposition, and was intended to be agreed by the Parliament. Parliament adopted the Declaration on 6 December. On 18 December (31 December N. S.) the Soviet Russian government issued a Decree, recognizing Finland's independence, and on December 22 (January 4, 1918 N. S.) it was approved by the highest Soviet executive body – VTsIK. The Declaration and 15 November With reference to the declaration of 15 November, the declaration says: - The people of Finland have by this step taken their fate in their own hands; a step both justified and demanded by present conditions. The people of Finland feel deeply that they cannot fulfil their national and international duty without complete sovereignty. The century-old desire for freedom awaits fulfilment now; Finland's people step forward as a free nation among the other nations in the world. - (...) The people of Finland dare to confidently await how other nations in the world recognize that with their full independence and freedom, the people of Finland can do their best in fulfilment of those purposes that will win them a place amongst civilized peoples. Estonia, Latvia, Lithuania as well as Ukraine declared their independence from Russia during the same period. See Estonian War of Independence, Latvian Independence and Lithuanian Wars of Independence. These three countries were occupied by, and annexed into, the Soviet Union (1940-1941, 1944-1991). See Occupation of the Baltic states. Text of Finland’s Declaration of Independence To The Finnish People. 
At the Finnish Parliament session today, has the Finnish Senate by its chairman forwarded to the Diet, among other things, a Proposition for a new form of government for Finland. By submitting the draft to the Parliament, has the Finnish Senate chairman on behalf of the Finnish Senate stated: The Finnish Parliament has on 15th day of the last November, in support of Section 38 of the Constitution, declared to be the Supreme holder of the State Authority as well as set up a Government to the country, that has taken to its primary task the realization and safeguarding Finland’s independence as a state. The people of Finland have by this step taken their fate in their own hands: a step both justified and demanded by present conditions. The people of Finland feel deeply that they cannot fulfil their national duty and their universal human obligations without a complete sovereignty. The century-old desire for freedom awaits fulfilment now; The People of Finland has to step forward as an independent nation among the other nations in the world. Achieving this goal requires mainly some measures by the Parliament. Finland’s current form of government, which is currently incompatible with the conditions, requires a complete renewal and therefore has the Government now submitted a proposition for a new Constitution to the Parliament’s council, a proposition that is based on the principle that Finland is to be a sovereign republic. Considering that, the main features of the new polity has to be carried into effect immediately, the Government has at the same time delivered a bill of acts in this matter, which mean to satisfy the most urgent renewal needs before the establishment of the new Constitution. The same goal also calls for measures from the part of the Government. The Government will approach foreign powers to seek an international recognition of our country’s independence as a state. 
At the present moment this is particularly all the more necessary, when the grave situation caused by the country’s complete isolation, famine and unemployment compels the Government to establish actual relations to the foreign powers, which prompt assistance in satisfying the necessities of life and in importing the essential goods for the industry, are our only rescue from the imminent famine and industrial stagnation. The Russian people have, after subverting the Tsarist Regime, in a number of occasions expressed its intention to favour the Finnish people the right to determine its own fate, which is based on its centuries-old cultural development. And widely over all the horrors of the war is heard a voice, that one of the goals of the present war is to be, that no nation shall be forced against its will to be dependent on another (nation). The Finnish people believe that the free Russian people and its constitutive National Assembly don’t want to prevent Finland’s aspiration to enter the multitude of the free and independent nations. At the same time the People of Finland dare to hope that the other nations of the world recognizes, that with their full independence and freedom the People of Finland can do their best in fulfilment of those purposes that will win them an independent position amongst the people of the civilized world. At the same time as the Government has wanted to let all the Finnish citizens to know these words, the Government turns to the citizens, as well as the private and public authorities, calling everyone on their own behalf with rapt attention to follow the (law and) order by filling their patriotic duty, to strain all their strength for achieving the nations common goal in this point of time, which has such an importance and decisiveness, that there have never before been in the life of the Finnish people. In Helsinki, 4 December 1917. The Finnish Senate: P.E. Svinhufvud. E.N. Setälä. Kyösti Kallio. Jalmar Castrén. Onni Talas. 
Arthur Castrén. Heikki Renvall. Juhani Arajärvi. Alexander Frey. E.Y. Pehkonen. Hardship burdened the common people, which had already resulted in alarming polarization and would soon ignite the Civil War. The declaration actually addresses this problem: - The Government will approach foreign powers to seek the recognition of our political independence. All the complications, famine and unemployment ensuing from the present external isolation make it urgent for the Government to establish direct contacts with foreign powers without delay. Urgent, concrete assistance in the form of necessities for living and industry is our only rescue from imminent famine and industrial standstill. 6 December was later declared a national holiday, Finland's Independence Day. The 90th anniversary of Finland's Declaration of Independence was selected as the main motif for a €5 commemorative coin minted in 2007. The reverse shows petroglyph aesthetics, while the obverse has a nine-oar boat with rowers as a symbol of a characteristically Finnish trait: collaboration. One can also distinguish signs of music and the strings of a Finnish zither in the coin's design. - Valtalaki, 25th July 1917 - Translation from the Finnish language by B.Holm, 25th of July 2009. (Clarifications by the translator are in brackets.) - Declaration of independence (Finnish) from Wikisource - Declaration of independence (Swedish) from Wikisource - Instrument of Government (Swedish) from Wikisource - Audio recording of Svinhufvud reading the speech in 1937 from YLE
http://en.m.wikipedia.org/wiki/Finnish_Declaration_of_Independence
First, the Earth (whether flat or spherical) was considered to be the center of the universe. Then, the Sun was considered to be the center of the universe. Eventually, mankind came to realize that the Sun is just one of 200 to 400 billion stars within the Milky Way galaxy, which itself is just one among hundreds of billions of galaxies in the known universe, and there may even be other universes. Astronomers have long believed that many stars would have planets around them, including some Earth-like planets. Carl Sagan wrote and spoke extensively about this in the 1970s and 80s, but we did not have the technology to detect such planets at the time, so the discussions remained theoretical. There were no data points by which to estimate what percentage of stars had what number of planets, of which what fraction were Earth-like. The first confirmed extrasolar planet was discovered in 1995. Since then, continually improving technology has yielded discoveries at a rate of more than one per month, for a grand total of about 176 to date. So far, most known extrasolar planets have been Jupiter-sized or larger, with the detection of Earth-sized planets beyond our current technology. But the Impact of Computing is finding its way here as well, and new instruments will continue to deliver an exponentially growing ability to detect smaller and more distant planets. Mere projection of the rate of discovery since 1995 predicts that thousands of planets, some of them Earth-sized, will be discovered by 2015. To comfortably expect this, we just need to examine whether advances in astronomical observation are keeping up with this trend. Let's take a detailed look at the chart below from a Jet Propulsion Laboratory publication, which packs in a lot of information. The bottom horizontal axis is the distance from the star, and the top horizontal axis is the orbital period (the top and bottom scales can contradict each other for stars of different mass, but let's put that aside for now).
The right vertical axis is the mass as a multiple of the Earth's mass. The left vertical axis is the same thing, merely in Jupiter masses (318 times that of the Earth). Current detection capability represents the area above the purple and first two blue lines, and the blue, red, and yellow dots represent known extrasolar planets. Planets less massive than Saturn have been detected only when they are very close to their stars. The green band represents the zone on the chart where an Earth-like planet, with similar mass and distance from its star as our Earth, would reside. Such a planet would be a candidate for life. The Kepler Space Observatory will launch in mid-2008, and by 2010-11 will be able to detect planets in the green zone around stars as far as 1000 light years away. It is set to examine 100,000 different stars, so it would be very surprising if the KSO didn't find dozens of planets in the green-zone. After 2015, instruments up to 1000 times more advanced than those today, such as the Overwhelmingly Large Telescope and others, will enable us to conduct more detailed observations of the hundreds of green-zone planets that will be identified by then. We will begin to get an idea of their color (and thus the presence of oceans) and atmospheric composition. From there, we will have a distinct list of candidate planets that could support Earth-like life. This will be a fun one to watch over the next decade. Wait for the first headline of 'Earth-like planet discovered' in 2010 or 2011.
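The projection argument above can be sketched with a few lines of arithmetic. This is a toy extrapolation using only the figures quoted in the text (first discovery in 1995, roughly 176 known planets as of this writing); the growth factor is an assumption derived from those two data points, not a published estimate.

```python
# Toy extrapolation of the exoplanet discovery curve, using only the
# article's own figures: ~176 known planets in 2006, first confirmed
# discovery in 1995.  We assume N(t) = N0 * g**(t - t0) and solve for g.
known_2006 = 176
years_elapsed = 2006 - 1995           # ~11 years of discoveries

# Growth factor implied by going from 1 planet to 176 in 11 years:
growth = known_2006 ** (1 / years_elapsed)

def projected_count(year, base_year=2006, base_count=known_2006):
    """Extrapolate the curve forward, assuming the same growth factor."""
    return base_count * growth ** (year - base_year)

print(round(growth, 2))               # roughly 1.6x per year
print(round(projected_count(2015)))   # thousands, as the text projects
```

An exponential fit over two data points is obviously crude, but it shows why "thousands by 2015" is the natural projection if the discovery rate keeps compounding.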
http://www.singularity2050.com/2006/03/planets_around_.html
Halteres (singular halter or haltere) are small knobbed structures modified from the hind wings (or front wings in the case of Strepsiptera) in some two-winged insects. They are flapped rapidly and function as gyroscopes, informing the insect about rotation of the body during flight. The word 'halter' comes from Greek ἁλτήρ, a double-knobbed device used in Ancient Greece by athletes during training in jumping. In Diptera, the formation of the haltere during metamorphosis is dependent on the homeotic gene Ultrabithorax (Ubx). If this gene is experimentally deactivated, the haltere will develop into a fully developed wing. This is an excellent illustration of an important mechanism: a simple homeotic gene change can result in a radically different phenotype. Halteres flap up and down as the wings do and operate as vibrating structure gyroscopes. Every vibrating object tends to maintain its plane of vibration if its support is rotated, a result of Newton's first law. If the body of the insect changes direction in flight or rotates about its axis, the vibrating halteres thus exert a force on the body. The insect detects this force with sensory organs known as campaniform sensilla located at the base of the halteres. The planes of vibration of the two halteres are orthogonal to each other, each forming an angle of about 45 degrees with the axis of the insect; this increases the amount of information gained from the halteres. Halteres thus act as a balancing and guidance system, helping these insects to perform their fast aerobatics. In addition to providing rapid feedback to the muscles steering the wings, they also play an important role in stabilizing the head during flight. - Klowden, M. J. (2007). Physiological Systems in Insects. Elsevier/Academic Press. pp. 497-499. - QED: These tiny flying machines really are magnificent, The Telegraph, 9 February 2005 - Photo of a haltere of a mosquito (Nematocera)
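The gyroscopic sensing described above comes down to the Coriolis force on the vibrating knob: a body rotation with angular velocity omega puts a force F = -2m(omega x v) on a knob of mass m moving at velocity v, directed out of the beat plane, which the campaniform sensilla register. A minimal numerical sketch, where all values are illustrative assumptions rather than measured insect data:

```python
# Sketch of the gyroscopic principle behind halteres (all numbers are
# illustrative assumptions, not measured fly data).  The knob of mass m
# vibrates with velocity v in its beat plane; when the body rotates with
# angular velocity omega, the knob feels a Coriolis force
# F = -2 * m * (omega x v), perpendicular to the beat plane, which the
# campaniform sensilla at the haltere's base register as strain.

def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 1e-8                    # knob mass in kg (assumed)
v = (0.0, 1.0, 0.0)         # tip velocity in the beat plane (y), m/s
omega = (0.0, 0.0, 10.0)    # body roll rate about z, rad/s

f = tuple(-2 * m * c for c in cross(omega, v))
print(f)   # the force lies along x, i.e. out of the beat plane
```

Note how the force is perpendicular to both the beat velocity and the rotation axis; with the two halteres beating in orthogonal planes, the insect can separate rotations about different body axes.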
http://en.wikipedia.org/wiki/Halteres
CIVIL-RIGHTS MOVEMENT. Civil-rights campaigns in Texas are generally associated with the state's two most prominent ethnic minorities: African Americans and Mexican Americans. Mexican Americans have made efforts to bring about improved political circumstances since the Anglo-American domination of Texas began in 1836. African Texans have fought for civil rights since their emancipation from slavery in 1865. Organized campaigns, however, were not launched until the early twentieth century. Issues of immediate concern to Mexican Americans after the Texas Revolution centered around racist actions. In the 1850s, Tejanos faced expulsion from their Central Texas homes on the accusation that they helped slaves escape to Mexico. Others became victims of Anglo wrath around the Goliad area during the Cart War of 1857, as they did in South Texas in 1859 after Juan N. Cortina's capture of Brownsville. Following the Civil War, both the newly freed slaves and Tejanos faced further atrocities. In the 1880s, white men in East Texas used violence as a method of political control, and lynching became the common form of retaliation for alleged rapes of white women or for other insults or injuries perpetrated upon white society. Mexican Americans of South Texas experienced similar forms of brutality. The Ku Klux Klan, the White Caps, law officials, and the Texas Rangers, all acting as agents of white authority, regularly terrorized both Mexican Americans and black Texans. De facto segregation followed emancipation. Freedmen found themselves barred from most public places and schools and, as the nineteenth century wore on, confined to certain residential areas of towns. By the early twentieth century, such practices had been sanctioned by law. Whites never formulated these statutes with Tejanos in mind, but they enforced them through social custom nonetheless.
By the 1880s and 1890s, furthermore, minority groups faced legal drives to disfranchise them, while Anglos also turned to a variety of informal means to weaken their political strength. African and Mexican Americans faced terrorist tactics, literacy tests, the stuffing of ballots, and accusations of incompetence when they won office. Political bosses in South Texas and the El Paso valley, meantime, attempted to dominate Mexican voters through the controlled franchise. In 1902 the legislature passed the poll-tax law (see ELECTION LAWS), and the next year Texas Democrats implemented the white primary. These mechanisms disfranchised blacks, and Mexican Americans for that matter, for white society did not regard Tejanos as belonging to the "white" race. Progressive reformers of the age viewed both minority groups as having a corrupting influence on politics. By the late 1920s, Texas politicians had effectively immobilized African-Texan voters through court cases that defined political parties as private organizations which could exclude members. Some scholars have estimated that no more than 40,000 of the estimated 160,000 eligible black voters retained their franchise in the 1920s. Newer Jim Crow laws in the early twentieth century increased the segregation of the races, and in the cities, black migrants from the rural areas joined their urban compatriots in ghettoes. The laws ordinarily did not target Mexicans, but were enforced on the premise that Mexicans were an inferior and unhygienic people. Thus Tejanos were relegated to separate residential areas or designated public facilities. Hispanics, though mostly Catholic, worshiped in largely segregated churches. Blacks and Hispanics attended segregated and inferior "colored" and "Mexican" schools. As late as the mid-1950s, the state legislature passed segregationist laws directed at blacks (and by implication at Tejanos), some dealing with education, others with residential areas and public accommodations. Gov. R.
Allan Shivers, who opposed the 1954 Brown v. Board of Education decision, went so far as to call out the Texas Rangers at Mansfield in 1956 to prevent black students from entering the public school (see MANSFIELD SCHOOL DESEGREGATION INCIDENT). Although Marion Price Daniel, Sr., Shivers's successor, was more tolerant, the integration process in Texas was slow and painful. Supreme Court decisions in 1969 and 1971 ordered school districts to increase the number of black students in white schools through the extremely controversial practice of busing. Violence in the era up to the Great Depression years resembled that of the nineteenth century. In the ten-year period before 1910, white Texans lynched about 100 black men, at times after sadistic torture. Between 1900 and 1920, numerous race riots broke out, with black Texans generally witnessing their homes and neighborhoods destroyed in acts of vengeance. Similarly, Tejanos became victims of Anglo wrath for insult, injury, or the death of a white man, and Anglos applied lynch law to Tejanos with the same vindictiveness as they did to blacks. African and Mexican Americans criticized segregationist policies and white injustices via their newspapers, labor organizations, and self-help societies. Black state conventions issued periodic protests in the 1880s and 1890s. On particular occasions during the nineteenth century, communities joined in support of leaders rising up against perceived wrongs or in behalf of those unjustly condemned. Tejanos, for one, rallied behind Juan N. Cortina and Catarino Garza, and contributed to the Gregorio Cortez Defense Network, which campaigned for the defense of a tenant farmer named Gregorio Cortez, who killed a sheriff in Karnes County in self-defense in 1901. The period between 1900 and 1930 saw continued efforts by minorities to break down racial barriers.
In 1911 Mexican-American leaders met at the Congreso Mexicanista in Laredo and addressed the common problems of land loss, lynchings, ethnic subordination, educational inequalities, and various other degradations. In 1919 the Brownsville legislator J. T. Canales spearheaded a successful effort to reduce the size of the Texas Ranger force in the wake of various atrocities the rangers had committed in the preceding decade. La Agrupación Protectora Mexicana, founded in 1921, had as its intent the protection of farm renters and laborers facing expulsion by their landlords. Much of the leadership on behalf of civil rights came from the ranks of the middle class. Black leaders established a chapter of the National Association for the Advancement of Colored People in Houston in 1912, three years after the founding of the national organization; by 1930 some thirty chapters existed throughout the state. The association pursued the elimination of the white primary and other obstacles to voting, as well as the desegregation of schools, institutions of higher education, and public places. Tejanos established their own organizations to pursue similar objectives, among them the Orden Hijos de America (Order of Sons of America). The order was succeeded in 1929 by the League of United Latin American Citizens, which committed itself to the same goals of racial equality. Mexican Americans and Black Texans continued their advocacy for equality during the depression era. In San Antonio, Tejanos founded La Liga Pro-Defensa Escolar (School Improvement League), which succeeded in getting the city's school board to build three new elementary schools and make improvements in existing facilities. Mexican Americans in the Gulf Coast area near Houston and in El Paso organized the Confederación de Organizaciones Mexicanas y Latino Americanas in the late 1930s, also for the purpose of eradicating racist policies.
The black movement, for its part, won increased white support in the 1930s from the ranks of the Association of Southern Women for the Prevention of Lynching and from such prominent congressmen as Maury Maverick. After World War II, Tejano war veterans founded the American G.I. Forum, and by the 1950s, LULAC and the Forum had become the foremost Mexican-American groups using the legal system to remove segregation, educational inequities, and various other discriminatory practices. In 1961 the politically oriented Political Association of Spanish Speaking Organizations joined LULAC and the G.I. Forum in pursuing the goal of mobilizing the Texas-Mexican electorate in an effort to prod mainstream politicians to heed the needs of Hispanics. African Americans, meantime, undertook poll-tax and voter-registration drives through the Democratic Progressive Voters League; the white primary had been declared illegal in 1944. During the 1960s the Progressive Voters League worked to inform black people about political issues and encouraged them to vote. During the 1960s both African Americans and Mexican Americans took part in national movements intended to bring down racial barriers. Black Texans held demonstrations within the state to protest the endurance of segregated conditions. They also instituted boycotts of racist merchants. In conjunction with the National March on Washington in 1963, approximately 900 protesters marched on the state Capitol. The group, which included Hispanics, blacks, and whites, attacked the slow pace of desegregation in the state and Governor John Connally's opposition to the pending civil-rights bill in Washington. By the latter half of the sixties, some segments of the black community flocked to the cause of "black power" and accepted violence as a means of social redress, though the destruction of property and life in Texas in no way compared to that in some other states.
In a similar manner, Tejanos took part in the Chicano movement of the era, and some, especially youths, supported the movement's militancy, its denunciation of "gringos," and its talk of separatism from American society. The Raza Unida party spearheaded the movement during the 1970s; as a political party, Raza Unida offered solutions to inequalities previously addressed by reformist groups such as LULAC and the G.I. Forum. Members used demonstrations, boycotts, and confrontational approaches, but violence of significant magnitude seldom materialized. The movement declined by the mid-1970s. During the same period, the federal government pursued an agenda designed to achieve racial equality, and Texas Mexicans and Black Texans both profited from this initiative. The Twenty-fourth Amendment, ratified in 1964, barred the poll tax in federal elections, and that same year Congress passed the Civil Rights Act outlawing the Jim Crow tradition. Texas followed suit in 1969 by repealing its own separatist statutes. The federal Voting Rights Act of 1965 eliminated local restrictions on voting and required that federal marshals monitor election proceedings. Ten years later, another voting-rights act demanded modification or elimination of at-large elections. After the 1960s several organizations joined LULAC and the G.I. Forum in the cause of equality for Mexican Americans. The Mexican American Legal Defense and Education Fund, founded in 1968, emerged as the most successful civil-rights organization of the late twentieth century. It focused on the state's inequitable system of financing schools, redistricting, and related problems. The Southwest Voter Registration Education Project worked to increase the political participation of Tejanos and to remove obstacles to Tejano empowerment. Groups at the city level that sought to help barrio residents included COPS in San Antonio and EPISO in the El Paso area.
The struggle for civil rights also produced a number of favorable court decisions. Black Texans won a judicial victory in 1927 when the Supreme Court ruled in Nixon v. Herndon that the white primary violated constitutional guarantees. When the state circumvented the decision by declaring political parties to be private organizations that had the right to decide their own memberships, blacks again turned to the courts. Not until the case of Smith v. Allwright (1944) did the Supreme Court overturn the practice. The post-World War II era came to be a time of increased successes for civil-rights litigants. The case of Sweatt v. Painter (1950) integrated the University of Texas law school, and in its wake several undergraduate colleges in the state desegregated. The famous case of Brown v. Board of Education (1954) produced the integration of schools, buses, restaurants, and other public accommodations. Mexican Americans won similar decisions that struck at discriminatory traditions. The decision in Delgado v. Del Rio ISD (1948) made it illegal for school boards to designate specific buildings for Mexican-American students on school grounds; Hernandez v. Driscoll CISD (1957) stated that retaining Mexican-American children for four years in the first two grades amounted to discrimination based on race; Cisneros v. Corpus Christi ISD (1970) recognized Mexican Americans as an "identifiable ethnic group" so as to prevent the subterfuge of combining Mexicans and blacks to meet integration; and Edgewood ISD v. Kirby (1989) held that the system of financing public education in the state discriminated against Mexican Americans. In another significant case, Hernandez v. State of Texas (1954), the United States Supreme Court recognized Mexican Americans as a class whose rights Anglos had violated through Jim Crow practices. Evan Anders, Boss Rule in South Texas: The Progressive Era (Austin: University of Texas Press, 1982). 
Alwyn Barr, Black Texans: A History of Negroes in Texas, 1528–1971 (Austin: Jenkins, 1973). Arnoldo De León, The Tejano Community, 1836–1900 (Albuquerque: University of New Mexico Press, 1982). Arnoldo De León, They Called Them Greasers: Anglo Attitudes Toward Mexicans in Texas, 1821–1900 (Austin: University of Texas Press, 1983). Ignacio M. Garcia, United We Win: The Rise and Fall of La Raza Unida Party (Tucson: University of Arizona Mexican American Studies Research Center, 1989). Michael L. Gillette, "Blacks Challenge the White University," Southwestern Historical Quarterly 86 (October 1982). Michael L. Gillette, "The Rise of the NAACP in Texas," Southwestern Historical Quarterly 81 (April 1978). Darlene Clark Hine, Black Victory: The Rise and Fall of the White Primary in Texas (Millwood, New York: KTO Press, 1979). David Montejano, Anglos and Mexicans in the Making of Texas, 1836–1986 (Austin: University of Texas Press, 1987). Merline Pitre, Through Many Dangers, Toils and Snares: The Black Leadership of Texas, 1868–1900 (Austin: Eakin, 1985). Guadalupe San Miguel, Jr., "Let All of Them Take Heed": Mexican Americans and the Campaign for Educational Equality in Texas (Austin: University of Texas Press, 1987). James Smallwood, Time of Hope, Time of Despair: Black Texans during Reconstruction (London: Kennikat, 1981). The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article. Arnoldo De León and Robert A. Calvert, "CIVIL-RIGHTS MOVEMENT," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/pkcfl), accessed May 25, 2013. Published by the Texas State Historical Association.
http://www.tshaonline.org/handbook/online/articles/pkcfl
In Direct, Joint, and Inverse Variation, our instructor begins with direct variation and the constant of variation before moving into graphs of constant variation. Joint variation with three variables is next, followed by inverse variation and its graph. Last are proportions, which make it easier to solve direct and inverse variations. Extra examples at the end make sure you understand all the concepts. Use the values given in a problem to find the constant of variation, then use this value together with the other given values to find the missing value. Direct, Joint, and Inverse Variation Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
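The method described, finding the constant of variation k from given values and then reusing it for the missing value, can be sketched in a few lines. The example numbers are invented for illustration:

```python
# Each kind of variation determines its constant k from one known pair
# (or triple) of values; the same k then yields any missing value.

def direct_k(x, y):          # direct variation:  y = k * x
    return y / x

def inverse_k(x, y):         # inverse variation: y = k / x
    return y * x

def joint_k(x, y, z):        # joint variation:   z = k * x * y
    return z / (x * y)

# Direct: y varies directly with x, and y = 12 when x = 3.
k = direct_k(3, 12)          # k = 4
print(k * 7)                 # y when x = 7  -> 28.0

# Inverse: y varies inversely with x, and y = 10 when x = 2.
k = inverse_k(2, 10)         # k = 20
print(k / 5)                 # y when x = 5  -> 4.0
```

The same pattern extends to joint variation: from one known (x, y, z) triple, `joint_k` recovers k, and k * x * y gives z for any other pair.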
http://www.educator.com/mathematics/algebra-2/eaton/direct-joint-and-inverse-variation.php
Leadership in Energy and Environmental Design (LEED) is a nationally accepted benchmark for the design, performance and operation of green buildings. While few schools in the United States are officially labeled "green" construction, there are many smaller things that can make a school "greener," or more environmentally friendly. The LEED Rating System for Existing Buildings addresses: - whole-building cleaning and maintenance issues, including chemical use - ongoing indoor air quality - energy efficiency - water efficiency - recycling programs and facilities - exterior maintenance programs - systems upgrades to meet green building energy, water, indoor air quality and lighting performance standards From this rating system we can derive some fundamental questions that students can ask and research. Based on the findings, students can work toward more energy-efficient and environmentally friendly building management. 1. Begin by asking the students about the definition of "green." Use any pedagogical method for brainstorming ideas that you prefer, e.g. jigsaw, class call-out, or think-pair-share. After students are able to think independently and as a class, derive a workable definition for what it means to be "green." 2. Follow this discussion by asking students to reflect on how "green" they think they are and how "green" the school is. Once they have recorded their responses in a journal and discussed these responses with their neighbor, ask them what specific criteria they used to classify both themselves and their school. 3. Discuss with them how their ideas are similar to nationally recognized benchmarks for green buildings (LEED). Teachers, review this Web site prior to the discussion: USGBC: LEED for Existing Buildings 4. Discuss how any actions in science, or specifically to "green up" a building or lifestyle, should be based on information, in this case data that is easily collectable. Hand out the "What Shade of Green Is Your School?" worksheet. 5.
Assign or have students volunteer for one of the six sections on the worksheet, except for section three. If there is a computer lab available, have the students research the benefits of being "greener" in their assigned areas of research. What are the benefits? 6. Once all sections are completed (except section three), have students store their data and complete section three. 7. If there are glass, bottle and aluminum can recycling bins in the school, have each pair of students count the number of these recyclables that are deposited in an equal number of trash cans and recycling bins. Compile class data to determine the percent of cans that are recycled and the percent of recyclable cans that end up in the general trash and then a landfill. If your school has no recycling bins, determine the total amount of cans and bottles that could be recycled. In either case, be sure to determine how long it has been since the receptacles were emptied or changed. Have students extrapolate how many cans are recycled and thrown away in a school year. How can the school improve on this? As an extension, if your state has a bottle deposit, calculate the estimated amount of money that could be made if you turned in all redeemable cans over a year. Assume the proportions that occur during the audit would remain the same. Post-activity discussion: Have each group share their findings with the class. This can be done as formally as needed. I often use this as an opportunity to fulfill the state requirements for various types of speaking presentations. Determine what can be changed in the school to make it more environmentally friendly. Extensions include presentations to the administration and custodial services of the findings and suggestions for change.
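The audit arithmetic in steps 6 and 7 (recycling percentage, extrapolation to a school year, and the optional deposit value) can be sketched as follows; all counts here are invented sample data, and the 5-cent deposit is an assumption for states that have one:

```python
# Invented sample counts from a classroom audit; substitute real data.
cans_in_recycling = 120       # recyclables found in recycling bins
cans_in_trash = 80            # recyclables found in trash cans
days_since_emptied = 5        # how long the receptacles accumulated
school_days_per_year = 180
deposit_per_can = 0.05        # assumed 5-cent bottle deposit

total = cans_in_recycling + cans_in_trash
recycle_rate = cans_in_recycling / total
cans_per_day = total / days_since_emptied
yearly_cans = cans_per_day * school_days_per_year

print(f"{recycle_rate:.0%} of recyclable cans are recycled")   # 60%
print(f"{yearly_cans:.0f} cans per school year")               # 7200
print(f"${yearly_cans * deposit_per_can:.2f} in redeemable deposits")
```

As the lesson notes, the extrapolation assumes the audit-week proportions hold all year; that assumption itself is worth discussing with students.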
http://www.pbs.org/newshour/extra/teachers/lessonplans/science/green_schools.html
*UPDATE* 22nd August 2012. Some climatologists believe that this catastrophic reduction in Arctic ice cover is directly responsible for the extreme weather being experienced in the Northern hemisphere, a symptom of climate instability brought about by global warming. Coincident with the Arctic melt has been the sudden appearance of massive methane plumes in the Arctic Ocean. The combination of the loss of Arctic ice, causing an increase in the capacity of the sea to absorb the sun's heat, together with the resultant thawing of the permafrost, causing the release of natural stores of the potent greenhouse gas methane, represents two of the most concerning feedbacks that are predicted to greatly accelerate the vicious cycle of global warming and consequent climate disruption. Already, the effects of extreme weather on food production have been pronounced this year, something also predicted by climatologists. 2010 was the joint warmest year on record. Despite rumours that the world has been cooling recently, the most comprehensive and carefully detailed study of climate data made so far by NASA's Goddard Institute for Space Studies (GISS) provides the clearest demonstration yet that 2010 (along with 2005) was the warmest year on record for global surface temperatures, culminating the warmest decade on record. Researchers took a number of factors into consideration when compiling and analysing the data in order to answer some of the chief criticisms recently levelled against climate science.
This includes taking into account: - Proximity of observing stations to urban areas (local urban warming effect) - Changes in the Sun's irradiation of the Earth - The El Niño/La Niña-Southern Oscillation (ENSO) effects on global climate - The effect of volcanism on global climate - Explaining how regional extremes of weather are an integral and predicted part of global warming, symptoms of a destabilised climate system - Explaining how the short-term cooling effects of pollution merely mask the longer-term warming effect of the same sources of pollution - Taking into account the various effects of aerosols on climate - Cataloguing the general ecosystem changes (both physical and biological) precipitated by global warming In addition, the GISS paper "Atmospheric CO2: Principal control knob governing Earth's temperature" explains how CO2, rather than H2O, is the most important regulator of the Earth's long-term global temperature. To help illustrate these changes, NASA's climate website outlines the key indicators with up-to-date interactive graphs and satellite imagery, and the US National Oceanic and Atmospheric Administration (NOAA) has further useful interactive graphs illustrating the historic changes and oscillations in many of these climate variables. The GISS findings tally closely with those of NOAA's National Climatic Data Center (NCDC), the Japanese Meteorological Agency, and the Met Office Hadley Centre in the United Kingdom. Meanwhile, figures from the Carbon Dioxide Information Analysis Center reveal no let-up in the global emission of anthropogenic carbon from fossil fuels, which reached its highest level in 2008 (updated 2009-10 figures can be found here), while this year Arctic sea ice has reached its lowest volume on record.
According to the International Energy Agency (IEA), the world has just five more years to prevent itself from becoming locked in to a high-carbon energy infrastructure that will necessitate dangerous and irreversible climate change. The late Stephen Schneider's famous warning back in 1979.
http://www.edwardgoldsmith.org/1263/2010-was-the-joint-warmest-year-on-record/
Article Summary: These are all good tips for developing a plan of attack in math problem solving. If you use these 20 tips as a basis for developing your own problem solving technique, you will be successful. Most students use the tips described above for a few problems and then adapt them to fit their style of learning and problem solving. Solving problems, especially word problems, is always a challenge. To become a good problem solver you need to have a plan or method which is easy to follow to determine what needs to be solved. Then the plan is carried out to solve the problem. The key is to have a plan which works in any math problem solving situation. For students having trouble with problem solving, the following 20 tips are provided to help children become good problem solvers. Tip 1: When given a problem to solve, look for clues to determine what math operation is needed to solve the problem, for example addition, subtraction, etc. Tip 2: Read the problem carefully as you look for clues and important information. Write down the clues, underline them, or highlight them. Tip 3: Look for key words like sum, difference, product, perimeter, area, etc. They will lead you to what operation you need to use. Rewrite the problem if necessary. Tip 4: Look for what you need to find out, for example: how many will you have left, the total will be, everyone gets red, everyone gets one of each, etc. They will also lead you to the type of operation needed to solve the problem. Tip 5: Use variable symbols, such as "X", for missing information. Tip 6: Eliminate all non-essential information by drawing a line through this distracting information. Tip 7: Addition problems use words like sum, total, in all, and perimeter. Tip 8: Subtraction problems use words like difference, how much more, and exceeds. Tip 9: Multiplication problems use words like product, total, area, and times. Tip 10: Division problems use words like share, distribute, quotient, and average.
Tip 11: Draw sketches, drawings, and models to see the problem.
Tip 12: Use guess-and-check techniques to see if you are on the right track.
Tip 13: Ask yourself if you have ever seen a problem like this before; if so, how did you solve it?
Tip 14: Use a formula for solving the problem, for example the formula for finding the area of a circle.
Tip 15: Develop a plan based on the information that you have determined to be important to solving the problem.
Tip 16: Carry out the plan using the math operations you determined would find the answer.
Tip 17: See if the answer seems reasonable. If it does, then you are probably OK; if not, then check your work.
Tip 18: Work the problem in reverse, starting with the answer, to see if you wind up with your original problem.
Tip 19: Do not forget about units of measure as you work the problem, such as inches, pounds, ounces, feet, yards, or meters. Not using units of measure may result in the wrong answer.
Tip 20: Ask yourself: did you answer the problem? Are you sure? How do you know you are sure?
These are all good tips for developing a plan of attack in math problem solving. If you use these 20 tips as a basis for developing your own problem solving technique, you will be successful. Most students use the tips described above for a few problems and then adapt them to fit their style of learning and problem solving. This is perfectly fine, because these 20 tips are only meant as a starting point for learning how to solve problems. One tip that is not mentioned above is that as you develop a strategy for solving math problems, this strategy will become your strategy for solving problems in other subjects and for dealing with the problems you will encounter in life as you continue to grow.
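The key-word tips above (sum, difference, product, share, and so on) can be illustrated with a minimal sketch. The function name, the keyword lists, and the naive substring matching are illustrative assumptions, not part of the original article; note that "total" appears in both the addition and multiplication lists, so some matches are ambiguous.

```python
# A minimal sketch of the key-word tips: scan a word problem for
# words that hint at which math operation is needed.
KEYWORDS = {
    "addition": ["sum", "total", "in all", "perimeter"],
    "subtraction": ["difference", "how much more", "exceeds"],
    "multiplication": ["product", "total", "area", "times"],
    "division": ["share", "distribute", "quotient", "average"],
}

def suggest_operations(problem_text):
    """Return the operations whose key words appear in the problem."""
    text = problem_text.lower()
    return [op for op, words in KEYWORDS.items()
            if any(word in text for word in words)]

print(suggest_operations("What is the sum of 4 and 5?"))       # ['addition']
print(suggest_operations("Share 12 apples among 3 friends."))  # ['division']
```

This is only a rough heuristic, which is exactly why the tips also tell you to read carefully and check whether the answer is reasonable.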
http://www.mathworksheetscenter.com/mathtips/goodproblemsolvers.html
4.03125
Radium is a naturally-occurring silvery white radioactive metal that can exist in several forms called isotopes. It is formed when uranium and thorium (two other natural radioactive substances) decay (break down) in the environment. Radium has been found at very low levels in soil, water, rocks, coal, plants, and food. For example, a typical amount might be one picogram of radium per gram of soil or rock. This would be about one part of radium in one trillion (1,000,000,000,000) parts of soil or rock. These levels are not expected to change with time. Some of the radiation from radium is constantly being released into the environment. It is this release of radiation that causes concern about the safety of radium and all other radioactive substances. Each isotope of radium releases radiation at its own rate. One isotope, radium-224 for example, releases half of its radiation in about three and a half days; whereas another isotope, radium-226, releases half of its radiation in about 1,600 years. When radium decays it divides into two parts. One part is called radiation, and the second part is called a daughter. The daughter, like radium, is not stable; and it also divides into radiation and another daughter. The dividing continues until a stable, nonradioactive daughter is formed. During the decay process, alpha, beta, and gamma radiations are released. Alpha particles can travel only a short distance and cannot travel through your skin. Beta particles can penetrate through your skin, but they cannot go all the way through your body. Gamma radiation, however, can go all the way through your body. Thus, there are several types of decay products that result from radium decay. Because radium is present, usually at very low levels, in the surrounding environment, you are always exposed to it and to the small amounts of radiation that it releases to its surroundings. 
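The half-life figures quoted above can be turned into a simple calculation: the fraction of a radioactive sample remaining after time t is (1/2) raised to the power t divided by the half-life. The function below is a small sketch using the approximate half-lives given in the text.

```python
# Fraction of a radioactive sample remaining after time t,
# given its half-life (both in the same time units).
def fraction_remaining(t, half_life):
    """Fraction of the original radioactive material left after time t."""
    return 0.5 ** (t / half_life)

# Radium-224: half of it decays in about 3.5 days.
print(fraction_remaining(3.5, 3.5))   # 0.5
print(fraction_remaining(7.0, 3.5))   # 0.25

# Radium-226: half-life of about 1,600 years, so after a human
# lifetime (~80 years) almost all of it is still present.
print(round(fraction_remaining(80, 1600), 3))  # 0.966
```

This is why radium-226 levels in soil "are not expected to change with time" on any human timescale, while radium-224 effectively disappears within weeks.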
You may be exposed to higher levels of radium if you live in an area where it is released into the air from the burning of coal or other fuels, or if your drinking water is taken from a source that is high in natural radium, such as a deep well, or from a source near a radioactive waste disposal site. Levels of radium in public drinking water are usually less than one picocurie per liter of water (about one quart), although higher levels (more than 5 picocuries per liter) have been found. A picocurie (pCi) is a very small amount of radioactivity, and it is associated with about a trillionth of a gram (a picogram) of radium. (There are approximately 28 grams in an ounce.) No information is available about the amounts of radium that are generally present in food and air. You may also be exposed to higher levels of radium if you work in a uranium mine or in a plant that processes uranium ores. Pathways in the body Radium can enter the body when it is breathed in or swallowed. It is not known if it can be taken in through the skin. If you breathe radium into your lungs, some may remain there for months; but it will gradually enter the blood stream and be carried to all parts of the body, especially the bones. For months after exposure, very small amounts leave the body daily through the feces and urine. If radium is swallowed in water or with food, most of it (about 80%) will promptly leave the body in the feces. The other 20% will enter the blood stream and be carried to all parts of the body, especially the bones. Some of this radium will then be excreted in the feces and urine on a daily basis. There is no clear evidence that long-term exposure to radium at the levels that are normally present in the environment (for example, 1 pCi of radium per gram of soil) is likely to result in harmful health effects. 
However, exposure to higher levels of radium over a long period of time may result in harmful effects including anemia, cataracts, fractured teeth, cancer (especially bone cancer), and death. Some of these effects may take years to develop and are mostly due to gamma radiation. Radium gives off gamma radiation, which can travel fairly long distances through air. Therefore, just being near radium at the high levels that may be found at some hazardous waste sites may be dangerous to your health. The relationship between the amount of radium that you are exposed to and the amount of time necessary to produce these effects is not known. Although there is some uncertainty as to how much exposure to radium increases your chances of developing a harmful health effect, the greater your total exposure to radium, the more likely you are to develop one of these diseases. There are few medical tests to determine whether you have been exposed to radium. There is a urine test to determine whether you have been exposed to a source of radioactivity such as radium. There is also a test that measures the amount of radon, a breakdown product of radium, in exhaled air. These tests require special equipment and cannot be done in a doctor's office. Another test can measure the total amount of radioactivity in the body; however, this test is not used except in special cases of high exposure. Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Agency for Toxic Substances and Disease Registry. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information.
The use of information from the Agency for Toxic Substances and Disease Registry should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
http://www.eoearth.org/article/Health_effects_of_radium
4.1875
Home: The Story of Maine
A Part of the Main: European Settlement of the Mainland
Lesson #4: A Field Trip to the Maine State Museum
For use with Module 1
Alignment with the Learning Results:
A CLEAR AND EFFECTIVE COMMUNICATOR: Uses oral, written, visual, artistic, and technological modes of expression. Reads, listens to, and interprets messages from multiple sources.
HISTORICAL KNOWLEDGE, CONCEPTS, AND PATTERNS: Students will develop historical knowledge of major events, people, and enduring themes in the United States, in Maine, and throughout world history. MIDDLE GRADES 5-8: Students will be able to demonstrate an understanding of selected themes in Maine, United States, and world history.
HUMAN INTERACTION WITH ENVIRONMENTS: Students will understand and analyze the relationships among people and their physical environments. MIDDLE GRADES 5-8: Students will be able to analyze how technology shapes the physical and human characteristics of places and regions.
Objectives:
- Visit and analyze the Maine State Museum's exhibit "12,000 Years in Maine."
- Create an artifact and write a description of that artifact that demonstrates their understanding of the way technology shapes culture.
Timing: 3-4 class periods, with time outside of class to complete the assignment
Materials:
- Access to the Maine State Museum's archaeological exhibit, "12,000 Years in Maine"
- Assignment Sheet #4
- Grading Rubric #4
- Field Trip Worksheet
Background: Archaeologists study the material remains of ancient cultures. Much of what they find buried under layers and layers of earth are remnants of a certain culture's technology. A defining technology of the Paleo-Indian culture was the fluted projectile point—a tip they affixed to the end of a spear and used for hunting. Birchbark canoes were another sophisticated technology—a perfect boat form for traversing Maine's watery landscape. Just before the European settlers arrived, Maine Indians began using a new technology—the ceramic pot.
Our technology today looks very different from that used by Maine's ancient peoples. But in the same ways that their cultures were defined and transformed by their technologies, our culture is defined and transformed by ours. This topic is especially pertinent to the Information Age—the tool of the Internet is rapidly transforming our society and culture. This lesson encourages students to compare their own culture's technology to that of cultures living in Maine thousands of years ago.
Important Terms:
Technology—the practical application of knowledge
Archaeology—the scientific study of material remains of past human life and activities
Culture—the customary beliefs, social customs, and material traits of a racial, religious, or social group
Preparation for the Field Trip:
- Put the list of important terms on the board without the definitions. Ask students to define the terms. Write down the student definitions. Then have students look the words up. Discuss the dictionary definitions of these terms.
- Ask students to list all the "latest technologies" they can think of. Make a list on the board. How do these technologies affect our culture? How do they affect the way we eat, communicate, worship, and die?
- Watch Module 1 with students. Tell them to look for evidence of the technologies of Maine's ancient cultures. Make a list on the board of these technologies next to the one you've made of our technologies. Compare the two lists.
- Give students their Field Trip Worksheets. Go over them together. Make sure it is clear to them what they will need to look for as they go through the exhibit at the Maine State Museum.
After the Field Trip:
- Discuss the field trip with the class. What did they feel were the most advanced technologies of Maine's ancient peoples? How did those technologies affect the culture of the time? How did their technologies change over time? What technologies did Europeans bring to Native Americans in Maine that transformed their cultures?
- Give students Assignment Sheet #4.
Go over the assignment and the grading rubric with students. They will be playing the role of an archaeologist 1,500 years in the future who is investigating Maine cultures in the year 2000. What kinds of remains will we leave behind? Do some brainstorming with students to get them thinking, then allow them to work on their assignments on their own. See Assignment Sheet #4 for details.
- Have students evaluate their own work according to Grading Rubric #4. Grade them yourself, using the same rubric.
- Have students create a physical replica of the tool they describe in their writing assignment.
- Discuss the political and ethical implications of archaeology. Traditionally, archaeology has been the field of European-American scholars studying the cultures of Native Americans. In the past, some Native Americans have objected to the digging up of ancient burial grounds. Other Native peoples feel that archaeology is an important way to learn about the cultures of their ancestors. What do students think? What role should ethics and diplomacy play in archaeology? Hold a debate on the question of whether or not archaeologists should have the right to excavate the ancient burial grounds of Native Americans.
http://www.mpbn.net/homestom/Lsn4S2.html
4
The Long-billed Curlew is the largest North American shorebird. It has a very long, decurved bill, which is longer on adult females than on males and juveniles. It is mottled brown overall, with cinnamon underwings. Sometimes a striped head pattern is evident, but it is far less pronounced than the head stripes on the Whimbrel. It is similar in size, shape, and color to the Marbled Godwit, but the curlew's decurved bill distinguishes it from the upturned bill of the Marbled Godwit. Dry grasslands and shrub savannahs are the traditional breeding habitats of Long-billed Curlews. They also nest in grain fields and pastures. During migration and winter, they can be found on coastal mudflats and marshes, and less commonly in fields and grasslands. These birds often gather in small flocks and forage by walking quickly along with their long bills extended forward, probing for food. In summer, earthworms and other invertebrates are common prey. Berries may also be important food at certain times of the year. Birds in coastal areas eat crabs and other aquatic creatures. Males attract females and defend their territories with undulating flight displays, fluttering and gliding while calling. The nest is on the ground in the open, but is often located next to an object such as a rock, a shrub, or even a pile of cow manure. The nest itself is a shallow scrape, usually sparsely lined with vegetation, sometimes with a rim built up around the edge. Both parents help incubate the four eggs for 27-30 days. The young leave the nest shortly after hatching and feed themselves, although both parents tend them and lead them to a marshy or damp area to find food. The young begin to fly at 32-45 days. This short-distance migrant is one of the earliest breeding shorebirds, returning from wintering grounds from California to Mexico in mid-March, before their nesting grounds dry out. The adults leave by mid-July, with the young of the year leaving in mid-August.
Once abundant, Long-billed Curlews declined as a result of hunting in the 1800s. Protection has helped the birds rebound, and now habitat destruction is their biggest threat. As more and more native grassland is converted to agriculture, the amount of potential Long-billed Curlew nesting habitat is shrinking. The Canadian Wildlife Service estimates the current population at about 20,000 birds.
When and Where to Find in Washington
Long-billed Curlews breed in eastern Washington in the central Columbia Basin and up through the Okanogan Valley. They are uncommon throughout the state during migration. They generally winter south of Washington, but a flock winters around Tokeland at Willapa Bay (Pacific County) every winter. Bill's Spit at Ocean Shores is another place to look for them.
http://www.birdweb.org/birdweb/bird/long-billed_curlew
4
We all know that we use our muscles to exercise and to keep our bodies functioning. But did you know that you're born with certain types of muscle fibers in certain amounts? By understanding how our muscles are actually constructed, you can better understand why certain exercises and workouts work well and why some people are genetically predisposed to building muscle.
What are Muscle Fibers?
The human body is made up of many different muscles, but they can all be categorized into three main groups: cardiac, smooth, and striated. Cardiac and smooth muscles are both involuntary, meaning they function without conscious control. You would find cardiac muscle in the heart and smooth muscle in your other organs. Striated skeletal muscle is voluntarily controlled and, as the name suggests, is attached to the skeleton. Muscles are made up of myocytes (muscle cells, more commonly known as muscle fibers), which contain long rod-like structures called myofibrils, composed of different types of protein. These proteins are grouped into thin and thick portions called filaments. Your muscles are able to contract when these thick and thin portions slide along each other. When skeletal muscles contract they cause a movement at a joint. They are able to do so because they are attached to bones by tendons. While all skeletal muscles share these properties, they can be further categorized by muscle fiber type.
What are the Different Muscle Fiber Types?
Skeletal muscle fibers can be divided into three types: Type I, Type IIa, and Type IIb. These are distinguished by differences in the amount of mitochondria (the powerhouse of the cell) they contain, how quickly they contract, their color, and other factors.
Type I fibers:
- Red in color due to high concentrations of myoglobin (the compound in muscles that carries oxygen)
- Very resistant to fatigue
- Contain large amounts of mitochondria
- Contract slowly
- Produce a low amount of power when contracted
- Used in aerobic activities such as long distance running
- Also called slow twitch fibers
Type IIa fibers:
- Red in color due to high concentrations of myoglobin
- Resistant to fatigue (but not as much as Type I fibers)
- Contain large amounts of mitochondria
- Contract relatively quickly
- Produce a moderate amount of power when contracted
- Used in long-term anaerobic activities such as swimming (activities lasting less than 30 minutes)
- Also called fast twitch A fibers
Type IIb fibers:
- White in color due to low myoglobin concentrations
- Fatigue very easily
- Contain low amounts of mitochondria
- Contract very quickly
- Produce a high amount of power when contracted
- Used in short-term anaerobic activities such as sprinting and lifting heavy weights (activities lasting less than a minute)
- Also called fast twitch B fibers
Individual muscles in the body are made of a mixture of different fiber types and their composition will vary depending on what the muscle is used for. For example, postural muscles (e.g. spinal muscles, hip flexors, calves) are predominantly made up of Type I fibers because they do not need to produce a lot of power and are very resistant to fatigue. Furthermore, when a muscle contracts, only the fibers that are needed will contract. If a weak contraction occurs, only the Type I muscle fibers will contract. If a strong contraction occurs that requires a lot of power (like in lifting a heavy weight), the Type IIa and IIb fibers will be activated along with the Type I fibers, with the Type IIa and IIb fibers activating last.
Can Muscle Fibers Change?
All of us are born with a set percentage of these muscle fibers.
However, some theories claim that you can change the properties of your muscle fibers based on what type of exercises you do. For instance, someone born with a certain amount of fast twitch and slow twitch fibers can, through training such as sprinting or heavy weightlifting, develop slow twitch muscle fibers that exhibit some characteristics of fast twitch muscle fibers. So, while you may not be blessed with the slow twitch muscle makeup of an Olympic marathon runner or the fast twitch fiber makeup of a sprinter, it is possible to improve your performance through proper training and hard work.
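The recruitment order described earlier (Type I fibers first, then Type IIa, then Type IIb as the required force grows) can be sketched as a tiny lookup. The effort thresholds here are made-up numbers purely for illustration, not physiological values.

```python
# Toy illustration of fiber recruitment order: Type I fibers fire
# first, with IIa and then IIb added only as required force grows.
RECRUITMENT_ORDER = [
    ("Type I", 0.0),    # always recruited first
    ("Type IIa", 0.4),  # joins for moderately strong contractions
    ("Type IIb", 0.7),  # joins last, for near-maximal efforts
]

def recruited_fibers(effort):
    """Fiber types active at a given effort level (0.0 to 1.0)."""
    return [name for name, threshold in RECRUITMENT_ORDER
            if effort >= threshold]

print(recruited_fibers(0.2))  # ['Type I']
print(recruited_fibers(0.9))  # ['Type I', 'Type IIa', 'Type IIb']
```

The point of the sketch is the ordering: a light effort never recruits the fast twitch fibers, which is why heavy lifting and sprinting are the activities that train Type II fibers.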
http://www.builtlean.com/2012/09/10/muscle-fiber-types/
4
Seeing the Invisible Colors of Mars
by Janice Bishop - Principal Investigator
Jan. 29, 2004
The color of Mars tells us which minerals are present, and these minerals provide information about water and environmental factors on Mars. The red color comes from iron oxides and varies from orange to red to violet depending on the mineral structure. In the visible region a spectrometer acts like our eyes do and recognizes colors such as green, blue and red. The Pancam on Spirit and Opportunity records these colors in spectral images. The Mini-Thermal Emission Spectrometer (Mini-TES) is another spectrometer on the Mars Exploration Rovers (MER) and measures infrared radiation. Our eyes cannot "see" the infrared radiation, but the spectrometer can. Rocks are composed of minerals and each mineral has a certain spectrum that can be measured by the spectrometer. Spectroscopy involves measuring the energy absorbed or reflected at certain wavelengths. Infrared spectroscopy primarily measures vibrational energies of the atomic bonds in the mineral structure. Bonds such as Si-O, Fe-O, H2O (water), SO4, CO3 each have different vibrational energies that are measured by the spectrometer. These clusters of atoms are the building blocks of minerals and each mineral has several infrared absorptions in its spectrum. The Mini-TES instruments on Spirit and Opportunity and the TES instrument on the Mars Global Surveyor orbiter are measuring these mineral components and the scientists must try to recreate which minerals are present in the martian rocks and soil by comparing the infrared energies detected with the known spectral properties of minerals from lab measurements. The Mars Express orbiter also has a spectrometer called Omega that is measuring the near-infrared region. This works in a similar way to the Mini-TES, but collects data from a complementary wavelength region.
A third spectrometer called CRISM will cover visible and near-infrared wavelengths (some that we can see, plus some that are similar to those measured by Omega) and is scheduled to fly to Mars on the Mars Reconnaissance Orbiter in 2005. Combining visible, near-infrared and mid-infrared spectra provides the most clues to scientists trying to figure out the mineralogy of Mars. The rocks and soils on Mars are composed of a variety of minerals such as silicates (pyroxene, feldspar and olivine), iron oxides, sulfates, and carbonates. The minerals tell a story about how each rock or soil unit formed and what has happened to it since it formed. We know a lot about the minerals present on Mars from detailed studies of the martian meteorites and from chemistry and spectroscopy of the surface. Still, we only have some pieces to the puzzle and many that are needed to assemble the full picture are missing. In order for scientists to be able to interpret the spectral data of Mars, it is necessary to measure the spectral patterns of rocks and minerals in the lab and in the field on Earth. Mars scientists are studying rocks from a number of field sites including volcanoes, deserts, hydrothermal areas, impact craters, the Arctic and the Antarctic. My field sites have focused on alteration of volcanic material (e.g. Hawaii, Iceland), sedimentation of volcanic material in Antarctic lakes, and rocks forming in hydrothermal regions associated with volcanoes. These samples include a number of minerals such as iron oxides/oxyhydroxides, clays, carbonates and sulfates that provide information about aqueous processes, temperature, pH, etc. Pure samples of these minerals are obtained as well in order to characterize their spectral properties. In many cases, small differences in chemistry or grain size can influence the spectral properties. 
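The matching step described above, comparing a measured spectrum against laboratory spectra of known minerals, can be sketched as a linear mixing problem. This is my simplified illustration, not mission code, and all the spectra below are made-up numbers: real analyses use many more wavelengths, nonlinear mixing effects, and calibrated libraries.

```python
# Toy spectral unmixing: express a measured spectrum as a linear
# mixture of laboratory mineral spectra via least squares.
import numpy as np

# Toy "lab library": reflectance of three minerals at four wavelengths.
lab_spectra = np.array([
    [0.9, 0.2, 0.5, 0.1],   # e.g. a pyroxene (made-up values)
    [0.1, 0.8, 0.3, 0.4],   # e.g. an olivine (made-up values)
    [0.3, 0.3, 0.9, 0.6],   # e.g. an iron oxide (made-up values)
])

# Pretend the instrument saw a 60/40 mix of the first two minerals.
measured = 0.6 * lab_spectra[0] + 0.4 * lab_spectra[1]

# Solve for the mixing proportions that best reproduce the measurement.
proportions, *_ = np.linalg.lstsq(lab_spectra.T, measured, rcond=None)
print(np.round(proportions, 2))  # approximately [0.6, 0.4, 0.0]
```

Recovering the proportions is only possible if the library spectra are measured in the lab first, which is exactly why the field and laboratory work described here matters.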
I am a Co-Investigator for the UC Berkeley NAI team called BioMars and we are particularly interested in finding ways to characterize and identify Fe- and S-bearing minerals that may be associated with life. My team is in the process of measuring spectra of rocks collected at new field sites. Our goal is to learn how to remotely characterize rocks that can provide information about whether or not conditions were supportive of life on Mars. Because each instrument on the many Mars orbital and landed missions measures spectra covering a different wavelength range and resolution, the lab and field data measured for this project will also be modified to match the various spectrometers collecting data of Mars. This will enable us to know what the spectrum of key rocks and minerals associated with life would look like on Mars if they are observed by any of the martian spacecraft.
http://www.seti.org/node/837
4.28125
Chemistry Lesson 1: Atomic and Molecular Structure (Grades 9-12)
Lesson topics: Connection Among the Location in the Table, the Atomic Number, and Mass | How to Identify Metals, Semimetals, Nonmetals, and Halogens | How to Identify Alkaline Metals, Alkaline Earth Metals, and Transition Metals | Lanthanide, Actinide, Transactinide, and Transuranium Elements | Ionization Energy, Electronegativity, Relative Sizes | How Many Electrons Can Bond? | Size and Mass | Location and Quantum Electron Configuration | Summary
IONIZATION ENERGY, ELECTRONEGATIVITY, RELATIVE SIZES
The amount of energy required to pull an electron off a neutral atom is called the ionization energy. Since each successive electron shell is larger than the previous one, the electrons in the shells further from the nucleus require less energy to be pulled off. In other words, the larger the shell number or the further down in the Periodic Table of the Elements, the lower the ionization energy. As we go from left to right on the Periodic Table of the Elements, there are more and more protons in the nucleus. The greater number of protons pulls more strongly on the valence electrons (the electrons in the outermost shell). This causes an increase in the ionization energy. To summarize, moving down the table causes a decrease in the ionization energy. Moving to the right causes an increase in the ionization energy. See the figure for the trend. Electronegativity is the measure of how easy it is to place another electron in the neutral atom. It is almost the opposite of ionization energy, except that some elements don't accept another electron and it becomes impossible to measure a value of electronegativity. As the shells get larger, it is harder to put an electron in the atom. As more protons get added to the nucleus, the easier it is to add an electron. The noble gases don't want to get another electron, so it is almost impossible to put an extra one in the atom.
To summarize, moving down the table causes a decrease in electronegativity. Moving to the right causes an increase in electronegativity. The arrows in the figure show increasing electronegativity. Atomic size is based on the distance between two atoms. One method uses the distance between different atoms that are bonded together; another method uses the distance between like atoms of the element in a solid. Since the atom is incredibly small, scientists use small units to measure the size of an atom, either Angstroms (Å, 1 x 10–10 m), nanometers (nm, 1 x 10–9 m), or picometers (pm, 1 x 10–12 m). When a neutral atom gains electrons (becomes an ion), its size increases because there are no longer the same numbers of protons as electrons; when there are fewer protons than electrons in an atom, there is less positive force to pull the electrons in tighter so the atom’s radius becomes larger. When a neutral atom loses an electron, it is left with more protons than electrons and so there is more positive force pulling the remaining electrons in tighter, with the atomic radius becoming smaller. The size of the neutral atoms increases when moving down a group (column). This is because there are electrons added to the next larger shell. The size of the atoms decreases when moving from left to right (along a period). This is because there are more protons in the nucleus that pull on the electrons. The pull increases, the electrons get closer and the size decreases.
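The ionization-energy trends summarized above can be checked against real data. The sketch below uses approximate first ionization energies (in electron volts) from standard reference tables; the values are rounded and the element selection is mine, not part of the lesson.

```python
# First ionization energies in eV (approximate reference values).
first_ionization_eV = {
    "H": 13.6, "Li": 5.4, "Na": 5.1, "K": 4.3,   # group 1, top to bottom
    "Be": 9.3, "B": 8.3, "C": 11.3, "N": 14.5,
    "O": 13.6, "F": 17.4, "Ne": 21.6,            # rest of period 2
}

# Moving down a group: ionization energy decreases.
group_1 = [first_ionization_eV[e] for e in ("H", "Li", "Na", "K")]
print(group_1 == sorted(group_1, reverse=True))  # True

# Moving left to right across period 2: a general increase
# (with small exceptions such as B < Be and O < N).
period_2 = [first_ionization_eV[e] for e in ("Li", "Be", "B", "C", "N", "O", "F", "Ne")]
print(period_2[0] < period_2[-1])  # True
```

Note the two small exceptions in period 2 (B below Be, O below N): the overall trend holds, but subshell structure produces minor dips the summary above does not cover.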
http://www.etap.org/demo/chem1/instruction5tutor.html
4.125
Cuneiform was an early kind of writing that developed in Mesopotamia during the height of Sumerian power (c. 2100–2000 BCE) and was impressed on tablets of clay. It used stereotyped pictures (‘pictographic’) that at first represented things and actions but later came to represent sounds and concepts. The symbols were wedge-shaped marks. The earliest texts were practical aides-memoire for scribes working on accounts, but later there were religious items and narratives which had existed before they were written down. The tablets from Tell el-Amarna, consisting of correspondence between Palestine and Egypt in the 14th cent. BCE, are written in cuneiform script.
http://www.oxfordbiblicalstudies.com/article/opr/t94/e465
4.03125
Interactive Java Tutorials Pigments and dyes are responsible for most of the color that humans see in the real world. Books, magazines, signs, and billboards are printed with colored inks that create colors through the process of color subtraction. This interactive tutorial explores how individual subtractive primary colors can be separated from a full-color photograph, and then how they can be reassembled to create the original scene. The tutorial initializes with a color photograph of mixed fruit displayed in the upper left-hand corner of the tutorial window. Adjacent to and below the full color photograph are the four individual color separations that result from dissecting the image into cyan (C), magenta (M), yellow (Y), and black (K) components. In order to operate the tutorial, use the mouse cursor to superimpose the color separations over one another. As additional separations are added, the resulting image acquires the realism evident in the color photograph. When any two of the primary subtractive colors are added, they produce a primary additive color. For example, adding magenta and cyan together produces the color blue, while adding yellow and magenta together produces red. In a similar manner, adding yellow and cyan produces green. When all three primary subtractive colors are added, the three primary additive colors are removed from white light leaving black (the absence of any color). White cannot be produced by any combination of the primary subtractive colors, which is the main reason that no mixture of colored paints or inks can be used to print white. Human eyes, skin, and hair contain natural protein pigments that reflect the colors we see in the people around us (in addition to any assistance by colors used in facial makeup and hair dyes). Modern color desktop printers create beautiful prints that are produced with colored inks through the process of color subtraction. 
In a similar manner, automobiles, airplanes, houses, and other buildings are coated with paints containing a variety of pigments. The concept of color subtraction, as discussed above, is responsible for most of the color produced by the objects just described. For many years, artists and printers have searched for substances containing dyes and pigments that are particularly good at subtracting specific colors. All color photographs, and other images that are painted or printed, are produced using just four colored inks or dyes: magenta, cyan, yellow (the subtractive primaries) and black (see Figure 1). Mixing inks or dyes having these colors in varying proportions can produce the colors necessary to reproduce just about any image or color. The three subtractive primaries could (in theory) be used alone; however, the limitations of most dyes and pigments make it necessary to add black to achieve true color tones. When an image is being prepared for printing in a book or magazine, it is first separated into the component subtractive primaries, either photographically or with a computer as illustrated above in Figure 1. Each separated component is made into a film that is used to prepare a printing plate for that color. The final image is created by sequentially printing each color plate, one on top of another, using the appropriate ink to form a composite that recreates the appearance of the original. Paint is also produced in a somewhat similar manner: base pigments containing the subtractive primaries are mixed together to form the various colors used in final paint preparations. Matthew J. Parry-Hill, Robert T. Sutter and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310. © 1998-2013 by Michael W. Davidson and The Florida State University. All Rights Reserved.
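The separation step described above can also be illustrated numerically. A common textbook conversion from RGB to CMYK (an assumption of this sketch, not necessarily the exact method used by the tutorial or by commercial prepress software) extracts the black component first and then scales the remaining color:

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB values in [0, 1] to (C, M, Y, K), each in [0, 1]."""
    k = 1.0 - max(r, g, b)          # black: distance of brightest channel from white
    if k == 1.0:                    # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)


print(rgb_to_cmyk(1.0, 0.0, 0.0))  # pure red  -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.5, 0.5, 0.5))  # mid gray  -> (0.0, 0.0, 0.0, 0.5)
```

Note how a pure red pixel separates into full magenta and yellow with no cyan, matching the mixing rules above, while a neutral gray is carried entirely by the black plate — the practical reason the fourth (K) ink is used.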
http://micro.magnet.fsu.edu/primer/java/primarycolors/colorseparation/
4.28125
Science Fair Project Encyclopedia Solubility equilibrium describes the chemical equilibrium between solid and dissolved states of a compound. The substance that is dissolved can be an organic solid such as sugar or an ionic solid such as salt. The main difference is that ionic solids dissociate into constituent ions when they dissolve in water. Most commonly water is the solvent of interest, although the same basic principles apply with any solvent. Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms: Sugar(s) ⇌ Sugar(aq). We can write an equilibrium expression for this reaction, as for any chemical reaction (products over reactants): K = [Sugar(aq)] / {Sugar(s)}, where K is called the equilibrium (or solubility) constant and the square brackets mean molar concentration in mol/L (sometimes called molarity with symbol M). Because the concentration of a solid doesn't make sense, we use the curly brackets, which mean activity, around the solid. Luckily, the activity of a solid is almost always equal to one. So, we have a very simple expression: K = [Sugar(aq)]. This statement says that water at equilibrium with solid sugar contains dissolved sugar at a concentration equal to K. For table sugar (sucrose) at 25 °C, K = 1.971 mol/L. (This solution is very concentrated; sucrose is extremely soluble in water.) This is the maximum amount of sugar that can dissolve at 25 °C; the solution is saturated. If the concentration is below saturation, more sugar dissolves until the solution reaches saturation, or all the solid is consumed. If more sugar is present than is allowed by the solubility expression then the solution is supersaturated and solid will precipitate until the saturation concentration is reached. This process can be slow; the equilibrium expression describes concentrations when the system reaches equilibrium, not how fast it gets there. Ionic compounds normally dissociate into their constituent ions when they dissolve in water.
For example, for calcium sulfate: CaSO4(s) ⇌ Ca2+(aq) + SO42-(aq). As for the previous example, the equilibrium expression is: K = [Ca2+][SO42-] / {CaSO4(s)}, where K is called the equilibrium (or solubility) constant, the square brackets mean molar concentration (M, or mol/L), and curly brackets mean activity. Since the activity of a pure solid is equal to one, this expression reduces to the solubility product expression: Ksp = [Ca2+][SO42-]. This expression says that an aqueous solution in equilibrium with (saturated with) solid calcium sulfate has concentrations of these two ions such that their product equals Ksp; for calcium sulfate Ksp=4.93×10-5. If the solution contains only calcium sulfate, the concentration of each ion is: [Ca2+] = [SO42-] = √Ksp ≈ 7.0×10-3 mol/L. Solubility constants have been experimentally determined for a large number of compounds and tables are readily available. For ionic compounds the constants are called solubility products. Concentration units are assumed to be molar (moles per liter) unless otherwise stated. Solubility is sometimes listed in mass units such as grams dissolved per liter of water. Solubility (and equilibrium) constants are themselves dimensionless; the lack of units in the constant looks inconsistent, but it comes about because the use of molar concentration in the solubility expression is only an approximation to activity, a unitless quantity that is approximately equal to molarity at low concentrations. The common ion effect refers to the fact that solubility equilibria shift in response to Le Chatelier's Principle. In the above example, addition of sulfate ions to a saturated solution of calcium sulfate causes CaSO4 to precipitate until the ions in solution again satisfy the solubility expression. (Addition of sulfate ions could be accomplished by adding a very soluble salt, such as Na2SO4.) Solubility is sensitive to temperature. For example, sugar is more soluble in hot water than cool water, but the solubility of calcium sulfate decreases as the solution is heated.
These effects occur because solubility constants, like other types of equilibrium constant, are functions of temperature. A thermodynamic approach is required to predict how much and in what direction a particular constant changes. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
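The arithmetic in the calcium sulfate discussion is easy to reproduce. The short sketch below uses the Ksp value quoted above to compute the saturation concentration in pure water, then illustrates the common ion effect; the 0.1 M sulfate figure is an assumption chosen for this illustration, not a value from the article.

```python
import math

KSP_CASO4 = 4.93e-5  # solubility product of CaSO4 quoted in the article (mol^2/L^2)

# Pure water: [Ca2+] = [SO4^2-] = s, so Ksp = s^2 and s = sqrt(Ksp).
s = math.sqrt(KSP_CASO4)
print(f"Saturation in pure water: {s:.2e} mol/L")  # ~7.0e-3 mol/L

# Common ion effect: suppose 0.1 mol/L sulfate is already present (e.g. from Na2SO4).
# Then Ksp = [Ca2+] * (0.1 + [Ca2+]); neglecting the small [Ca2+] term gives:
ca = KSP_CASO4 / 0.1
print(f"Calcium solubility with 0.1 M sulfate: {ca:.2e} mol/L")  # ~4.9e-4 mol/L
```

The second result shows the shift predicted by Le Chatelier's Principle: adding the common sulfate ion suppresses calcium sulfate solubility by more than a factor of ten.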
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Solubility_equilibrium
4.03125
Malaria is a serious condition that is common in some tropical countries. It is important that you take measures to reduce your risk of infection when you travel to these areas. This leaflet gives general information about malaria and how to avoid it. You should always see a doctor or nurse before travelling to a country with a malarial risk. They are provided with up-to-date information about the best antimalarial medication for each country. What is malaria? Malaria is a serious infection. It is common in tropical countries such as parts of Africa, Asia and South America. Malaria is a disease caused by a parasite (germ) called Plasmodium that lives in mosquitoes. The parasite is passed to humans from a mosquito bite. There are four types of plasmodium that cause malaria. These are called Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale and Plasmodium malariae. Most cases of malaria brought into the UK are due to Plasmodium falciparum. This type of malaria is also the most likely to result in severe illness and/or death. Most infections occur in travellers returning to the UK (rather than visitors coming to the UK). The risk of getting malaria is greatest if you do not take antimalarial medication or do not take it properly. People who take last-minute holidays and those visiting friends or relatives abroad have been shown to be the least likely to take antimalarial medication. Each year around 1,700 people in the UK develop malaria which has been caught whilst abroad. Seven people died from malaria in the UK in 2010. Malaria can kill people very quickly if it is not diagnosed promptly. The most common symptom of malaria is a high fever. Malaria can also cause muscle pains, headaches, diarrhoea and a cough. Note: if you feel unwell and have recently visited an area in which there is malaria, you should seek prompt medical advice, even if you have taken your antimalarial medication correctly. See separate leaflet called 'Malaria' for more detail. 
Preventing malaria - four steps There is an ABCD for prevention of malaria. This is: - Awareness of risk of malaria. - Bite prevention. - Chemoprophylaxis (taking antimalarial medication exactly as prescribed). - Prompt Diagnosis and treatment. Awareness of the risk of malaria The risk varies between countries and the type of trip. For example, back-packing or travelling to rural areas is generally more risky than staying in urban hotels. In some countries the risk varies between seasons - malaria is more common in the wet season. The main type of parasite, and the amount of resistance to medication, varies in different countries. Although risk varies, all travellers to malaria-risk countries should take precautions to prevent malaria. The mosquitoes which transmit malaria commonly fly from dusk to dawn and therefore evenings and nights are the most dangerous time for transmission. You should apply an effective insect repellent to clothing and any exposed skin. Diethyltoluamide (DEET) is safe and the most effective insect repellent and can be sprayed on to clothes. Protection lasts up to three hours for 20% DEET, up to six hours for 30% and up to 12 hours for 50%. There is no further increase in duration of protection beyond a concentration of 50%. When both sunscreen and DEET are required, DEET should be applied after the sunscreen has been applied. DEET can be used on babies and children over two months of age. In addition, DEET can be used, in a concentration of up to 50%, if you are pregnant. It is also safe to use if you are breast-feeding. If you sleep outdoors or in an unscreened room, you should use mosquito nets impregnated with an insecticide (such as a pyrethroid). The net should be long enough to fall to the floor all round your bed and be tucked under the mattress. Check the net regularly for holes. Nets need to be re-impregnated with insecticide every six to twelve months (depending on how frequently the net is washed) to remain effective.
Long-lasting nets, in which the pyrethroid is incorporated into the material of the net itself, are now available and can last up to five years. If practical, you should try to cover up bare areas with long-sleeved, loose-fitting clothing, long trousers and socks - if you are outside after sunset - to reduce the risk of mosquitoes biting. Clothing may be sprayed or impregnated with permethrin, which reduces the risk of being bitten through your clothes. Sleeping in an air-conditioned room reduces the likelihood of mosquito bites, due to the room temperature being lowered. Doors, windows and other possible mosquito entry routes to sleeping accommodation should be screened with fine mesh netting. You should spray the room before dusk with an insecticide (usually a pyrethroid) to kill any mosquitoes that may have come into the room during the day. If electricity is available, you should use an electrically heated device to vaporise a tablet containing a synthetic pyrethroid in the room during the night. The burning of a mosquito coil is not as effective. Herbal remedies have not been tested for their ability to prevent or treat malaria and are therefore not recommended. Likewise, there is no scientific proof that homoeopathic remedies are effective in either preventing or treating malaria, and they are also not recommended. Antimalarial medication (chemoprophylaxis) Antimalarial medication helps to prevent malaria. The best medication to take depends on the country you visit. This is because the type of parasite varies between different parts of the world. Also, in some areas the parasite has become resistant to certain medicines. Antimalarials that you buy in the tropics or over the Internet may be fake. It is therefore recommended that you obtain your antimalarial treatment from your doctor's surgery, a pharmacist or a travel clinic. Medications to protect against malaria are not funded by the NHS.
You will need to buy them, regardless of where you obtain them. The type of medication advised will depend upon the area you are travelling to. It will also depend on any health problems you have, any medication you are currently taking, the length of your stay, and also any problems you may have had with antimalarial medication in the past. You should seek advice for each new trip abroad. Do not assume that the medication that you took for your last trip will be advised for your next trip, even to the same country. There is a changing pattern of resistance to some medicines by the parasites. Doctors, nurses, pharmacists and travel clinics are updated regularly on the best medication to take for each country. You must take the medication exactly as advised. This usually involves starting the medication up to a week or more before you go on your trip. This allows the level of medicine in your body to become effective. It also gives time to check for any side-effects before travelling. It is also essential that you continue taking the medication for the correct time advised after returning to the UK (often for four weeks). The most common reason for malaria to develop in travellers is because the antimalarial medication is not taken correctly. For example, some doses may be missed or forgotten, or the tablets may be stopped too soon after returning from the journey. What are the side-effects with antimalarial medication? Antimalarial medication is usually well tolerated. The most common side-effects are minor and include nausea (feeling sick) or diarrhoea. However, some people develop more severe side-effects. Therefore, always read the information sheet which comes with a particular medicine for a list of possible side-effects and cautions. Usually, it is best to take the medication after meals to reduce possible side-effects. If you are taking doxycycline then you need to use a high-factor sunscreen. 
This is because this medication makes the skin more sensitive to the effects of the sun. Around 1 in 20 people taking mefloquine may develop headaches or have problems with sleep. Note: medication is only a part of protection against malaria. It is not 100% effective and does not guarantee that you will not get malaria. The advice above on avoiding mosquito bites is just as important, even when you are taking antimalarial medication. Symptoms of malaria (to help with prompt diagnosis) Symptoms are similar to flu. They include fever, shivers, sweating, backache, joint pains, headache, vomiting, diarrhoea and sometimes delirium. These symptoms may take a week or more to develop after you have been bitten by a mosquito. Occasionally, it takes a year for symptoms to develop. This means that you should suspect malaria in anyone with a feverish illness who has travelled to a malaria-risk area within the past year, especially in the previous three months. - Pregnant women are at particular risk of severe malaria and should, ideally, not go to malaria-risk areas. Full discussion with a doctor is advisable if you are pregnant and intend to travel. Most antimalarial medications are thought to be safe for the unborn child. Some, such as mefloquine, should be avoided in the first twelve weeks of pregnancy. - Non-pregnant women taking mefloquine should avoid becoming pregnant. You should continue with contraception for three months after the last dose. - If you have epilepsy, kidney failure, some forms of mental illness, and some other uncommon illnesses, you may have a restricted choice of antimalarial medication. This may be due to your condition, or to possible interactions with other medication that you may be taking. - If you do not have a spleen (if you have had it removed) or your spleen does not work well, then you have a particularly high risk of developing severe malaria. Ideally, you should not travel to a malaria-risk country.
However, if travel is essential, every effort should be made to avoid infection and you should be very strict about taking your antimalarial medication. - Travellers going to remote places far from medical facilities sometimes take emergency medication with them. This can be used to treat suspected malaria until proper medical care is available. Further reading & references - Guidelines for malaria prevention in travellers from the United Kingdom, Health Protection Agency (January 2007) - Malaria, National Travel Health Network and Centre (NaTHNaC) - Malaria Fact Sheet No 94, World Health Organization, 2010 - Chiodini J; The standard of malaria prevention advice in UK primary care. Travel Med Infect Dis. 2009 May;7(3):165-8. Epub 2009 Mar 21. - Lalloo DG, Hill DR; Preventing malaria in travellers. BMJ. 2008 Jun 14;336(7657):1362-6. Original Author: Dr Tim Kenny | Current Version: Dr Laurence Knott | Peer Reviewer: Dr Tim Kenny | Last Checked: 15/12/2011 | Document ID: 4416 Version: 41 | © EMIS. Disclaimer: This article is for information only and should not be used for the diagnosis or treatment of medical conditions. EMIS has used all reasonable care in compiling the information but makes no warranty as to its accuracy. Consult a doctor or other health care professional for diagnosis and treatment of medical conditions. For details see our conditions.
http://www.patient.co.uk/health/Malaria-Prevention.htm
4.09375
Making a lesson plan is easy. Creating an effective lesson plan is the key to effective teaching and a critical factor in achieving positive student outcomes. For guidance in creating lesson plans, see the tutorial on "What to Consider When Writing a Lesson Plan." Also check out our Lesson Plans Center for more great advice on writing and delivering lessons. Directions: Just fill in the sections below. If you would like to save your lesson plan, make sure to save it right after it has been generated.
http://www.teach-nology.com/web_tools/lesson_plan/
4.15625
Lesson Plans and Worksheets Browse by Subject Drought Teacher Resources Find teacher-approved drought educational resource ideas and activities. Learners discuss a natural disaster. In this droughts lesson plan, students discover how droughts occur and how they affect the society dealing with one. They discuss how the population of Australia deals with droughts and work on different questions that deal with the effects of droughts and how they happen. This lesson includes links to information and an assessment guide. After a class discussion and an investigation of what kinds of plants can be found around the school campus, youngsters design a garden plan that uses drought-tolerant plants and flowers. The garden is planted, and tended to, by the class for the entire year. Pupils are encouraged to share their new knowledge with their families, and to plant a similar garden at their homes. The Times covered a drought in 2011, which affected producers, consumers, and sellers. The class gets informed about climate and the economics of agriculture as they read this article and answer each of the 11 comprehension questions. A map and video link are embedded in the questions for further exploration. Seventh graders review the water cycle and its relationship to weather around the world. They focus their attention on extreme weather phenomena such as floods, hurricanes, tornadoes, and drought. Pupils draw a complete water cycle and place the weather phenomena in the correct area of the water cycle. Students examine the impact of floods, hurricanes, tornadoes, and droughts. They conduct Internet research on various weather websites, complete a Weather Disaster Information sheet, save images of weather disasters on Google Images, and create a computer slideshow presentation. Students locate Lake Mead, then read a news article about Lake Mead drying up and how that would affect water and power supplies to the region.
In this current events lesson, the teacher introduces the article with a map and vocabulary activity, then students read the news report and participate in a think-pair-share discussion. Lesson includes interdisciplinary follow-up activities.
http://www.lessonplanet.com/lesson-plans/drought/2
4.125
Stem cells are unspecialized cells found in multi-cellular organisms with the ability to proliferate into any one of the body's more than 200 cell types. In simpler language, these cells can be transformed into other cell types found in the body when given the right stimuli. Thus, they can be induced to form liver, skin, red blood cells, etc. The ability of stem cells to transform varies, as some are more adaptable to transformation than others. Characteristically, they can also divide to produce more cells of the same type, and they retain this ability to divide throughout their life. Scientists have developed special technology through which they can mould these cells to become precisely the cell type that is required. The advancement in research has helped create interest in exploring the possibilities of fully functional differentiated cells such as cardiomyocytes, neurons, and bone and cartilage. Primarily, these stem cells are divided into: - Embryonic stem cells and umbilical cord stem cells - both are sources of human embryonic stem cells. These are pluripotent cells, which means that they can divide into any of the three germ layers: endoderm (inner stomach lining, lung and gastrointestinal tract), mesoderm (urogenitals, muscle, bone and blood) and ectoderm (epidermal tissue and nervous system). These cells are more adaptable and can give rise to any fetal or fully grown cell type. - Developed stem cells - these are multipotent cells which can give rise to only limited types of cells. These are again differentiated into: - Neuronal stem cells - Haematopoietic stem cells - Skin stem cells Each of these can give rise to only specific cell types; for example, neuronal stem cells can give rise only to nerve cells and not blood cells or liver cells.
Thus, their function is limited. Human embryonic stem cells (hES) have triggered many debates in recent times. The impact of these debates has been so deep that it has affected the progress of the research and clinical trials that this therapy is undergoing. Despite all this, scientists are committed and further research is ongoing. Scientists are attempting different ways of isolating these cells in improved culture conditions, such that the work does not provoke ethical and political conflicts. Collection issues: while extracting the cells, it is important to ensure that they are free from microbiological contamination, can be identified easily, and are kept away from circumstances that can change their genotype (internal coding or blueprint) and phenotype (physical appearance of the organism).
http://www.medexpressrx.com/research/stem-cells.aspx
4.0625
Teachers can learn to design lessons around popular music in the target language. Each song should have one primary and several secondary vocabulary themes appropriate to level (i.e., a city theme with vocabulary such as places in town/shopping vocabulary/transit vocabulary, etc.); At least 90% of the grammar and vocabulary should be on, or slightly above, level (songs are pedagogically problematic if they are "all over the map" grammatically or thematically); Ideally, there should be some repetition of structure (refrains, dependent clauses, etc.). This imprints the structure in memory, from which spontaneous or new utterances may later be created, and it is a great jumping-off point for patterned writing/parallel sentences; There should be pervasive and consistent use of verb tense (present tense only, preterit/imperfect contrast, or conditional/past subjunctive relationship, etc.), or gender/number agreement (a song that describes or lists people or things), etc.; other miscellaneous grammatical structures may be present only once, and can still serve as a springboard for lesson-writing (personal 'a', tener idioms, etc.) The song must be agreeable to listen to. This applies to all grade levels, but especially for fifth grade and up, it must be something that they will respond to: it can't be too "kiddie", yet it must be melodic; it might be culturally "authentic", but it doesn't have to be. I've found that young people who listen almost exclusively to non-melodic music such as rap respond positively to melodic music as well. This is a wonderful opportunity to reinforce art and music in the schools. The song should invite kinesthetic movement, dramatic interpretation, lend itself to illustration, and/or be rich in visual imagery. Benefits of Using Melodic Song to Teach Language • Presenting the target language through melodic music expands yet further the learning modality options you are providing for your students (aural-musical).
• Probably nothing imprints linguistic patterns better than words wedded to memorable music. Because of the unique impressive nature of melodic music, students will retain grammatical structures and vocabulary for the rest of their lives. • Students' inherently positive response to upbeat, melodic music makes them completely engaged in the activity. • A correlation between music and improved academic performance really does exist. The currently debated question about the so-called "Mozart effect" deals only with the passive listening to music while studying or taking exams, which has nothing to do with the active learning of language through the lyrics of melodic music. Music is mathematical by nature, and its "terrain" provides a fertile place for language learning to take hold and develop. • Music, being indigenous to its geographical place of creation, as well as to the cultural and social environment in which it arises, naturally transmits and reflects the culture in which it is created. Music is, of all sounds that exist, the most richly textured and interesting. • Creative culminating activities for proficiency take learning the language to the next level: 1) student-created booklets illustrating the lyrics 2) karaoke, sing-along, or lip-sync video performances 3) dramatic interpretations/mime/acting out performances 4) dance and choreography--moving hands, head, feet, and body to the music in creative ways 5) re-writing the song either altogether in an original and creative lyric (for those who can), or by substituting all the nouns, or adjectives, or other parts of speech so as to make a new song lyric, and much more.
• All of (Howard Gardner's) seven multiple intelligences are addressed when teaching language through music with the appropriate accompanying exercises: 1) kinesthetic (dance, clapping, stomping, body movement, percussion) 2) musical (listening, singing, playing, distinguishing) 3) linguistic (interpreting lyrics while listening or through exercises) 4) logical/mathematical (music is math) 5) social (choral, dance, cooperative learning with the exercises) 6) visual (illustrations, dramatizations, video) 7) individual (the fallback for all of the written exercises, as well as with individual projects and culminating activities). • Activities can be done in cooperative learning groups, thus promoting classroom cohesion. • Songs and activities can be used either to introduce new material, or reinforce previously learned material. • MUSIC teaches LANGUAGE by way of ART. We need more beauty and art in our schools, and in our lives (yes, that is my opinion, and I'm proud to say it!). Many students today, especially in their teens, are listening to some not-so-pretty music; fortunately, in my teaching experience, those very same students respond to positive lyrics and melodic music! Being a lover of catchy pop songs, and recognizing the extraordinary power of the marriage of melody and words as an aid in memorization (which is fundamental to language acquisition), I began introducing Spanish-language songs, and translating popular songs in English to Spanish, and using them to "spice up" my language classroom. This activity, although fun and popular, had only a modest pedagogical effectiveness, since most authentic pop songs (in whatever language) are not written with teaching language in mind, and are almost invariably "all over the map," grammatically speaking, which does not lend itself to a focused lesson.
Moreover, "rap music" (regardless of its current popularity among some teens) and "chants"--even if written with teaching in mind--lack the aid to memory provided by good melodies, which are naturally more interesting and therefore more memorable. Although rhythm by itself, in such chants, or in rhyme, is also an aid to memory, it would be helpful if the rhythm were interesting. Programmed beats on an electronic drum machine by an instant musician do not always inspire passion. Fortunately, I have found that melodic music appeals to everyone, even those who listen to "unmelodic" music. I began writing songs specifically for the classroom, and re-writing the lyrics to my best songs written over the years, with the idea of teaching the language but at the same time creating songs that were appealing on their own, but just so happened to tidily teach a series of grammar structures or a thematic vocabulary group. Obviously, most teachers are not musicians, let alone songwriters. Lately I have been scouring for songs in Spanish that meet the criteria described above, to which I write lessons. The problem is that perhaps only one in a hundred meet this set of criteria (this is the fundamental reason why it is difficult to use music in the language classroom for more than "culture" or translation or cloze activities--most songs don't meet the criteria, which would allow for a profound exploration of the language). About Tom Blodget Tom Blodget, M.A., is an accomplished musician/songwriter and veteran Spanish teacher. He has published three books and CDs that reflect the educational purposes of the title of this essay, both in Spanish, in a series entitled Musicapaedia. Many thanks to Tom Blodget for permission to display this presentation. © Tom Blodget. All rights reserved. Used with permission.
http://www.songsforteaching.com/musicapaedia/teachingtargetlanguagethroughlyrics.htm
4.28125
Photographs by Linda Hartley TOOLS FOR LEARNING Good wall displays should support the process of learning. Pupils should refer to the display to support them in acquiring new conceptual knowledge, carrying out learning tasks or consolidating prior learning. Consider the height and position for easy access for pupils. INTERACTIVE Displays where children can engage, respond and contribute hold their attention and therefore reinforce the key points more effectively. Static presentations run the risk of blending into the background and becoming invisible, whereas interactive displays draw in children's attention. Making these displays can be fun and there are limitless ways of creating interaction - involve the children in thinking of innovative ideas. PROMOTERS OF INCLUSION To ensure that your wall displays are inclusive consider the following points: use a range of images that reflect the ethnic diversity of the pupils; use the home languages for vocabulary labels or pupils' writing where relevant and useful; avoid stereotypes; evaluate the content of your topics: have you used Literacy texts from a variety of ethnic groups? Have you included references to other cultures in mathematics, e.g. through tessellating patterns, number systems, acknowledging the roots of our number system? The use of dual language posters can save time when preparing wall displays and resources; however, make sure that the displays are relevant to current learning and meaningful to the pupils. Asking pupils to contribute their own dual language labels, captions or dual language texts can be an exciting and manageable way of including pupils. SUPPORTIVE OF PRIOR LEARNING Wall displays can be effective in reinforcing concepts and skills from prior lessons. For these types of displays consider making them interactive so that children actively engage in the revision through answering questions, matching items, creating sentences/phrases, adding ideas, solving problems etc.
This will make the learning memorable and fun and lead to more 'stickiness' in terms of long-term recall. Types of display Pupils with English as an additional language can benefit greatly from classroom wall displays. Consider creating a range of different displays to support pupils in different ways. Working walls support learning of concepts, skills and vocabulary and are effective in displaying key aspects of the current or recent lesson. Working walls are dynamic and ever-changing to reflect work in progress. They may show a process such as 'our brainstorm', 'our story draft' or 'our reworking or finished product'. They are designed to remind pupils of recent concepts and skills covered, allowing them to consolidate key points. Pupils may even add to some displays with ideas or vocabulary on sticky notes. These examples of literacy working walls show a range of features such as clear targets or learning intentions, key features of genre, key vocabulary and sentence structures used. They are also visually stimulating and some provide opportunities for pupil participation. Working walls should display work in progress or evolving work rather than neat finished products. The working wall should provide useful tools to support children in their own writing. This numeracy working wall shows key components such as step-by-step guides, the learning intention, demonstration sheets, key vocabulary and visual support. Interactive displays engage pupils Interactive displays are an excellent way to enable pupils to consolidate key points by actively taking part in the display. Displays of completed work inspire and encourage pride in achievements, and can also provide opportunities for consolidation.
Including certain key elements can maximise their usefulness. Evaluate your displays to maximise effectiveness Carrying out a self-evaluation of your displays can be a useful way of finding out what aspects are included and what aspects pupils find useful. Pupil questionnaires can elicit valuable feedback which can be used to enhance your wall displays, for example through questions for the children to answer.
http://www.eal-teaching-strategies.com/walldisplays.html
4.21875
Drying of the American West Part B: What's Responsible for Lower Reservoir Levels? The Natural Resources Defense Council (NRDC) is an environmental action agency that represents 1.2 million members in courts of law using the expertise of more than 350 lawyers, scientists, and other professionals. In collaboration with the Rocky Mountain Climate Organization (RMCO), NRDC recently compiled a report titled Hotter and Drier: The West's Changed Climate. The report draws on over 100 scientific studies and government reports to document changes in temperature and precipitation across the west and a population "explosion" of people living in places such as Las Vegas and Phoenix where they are dependent on the river's water. - Visit the overview page for the report, titled Hotter and Drier: The West's Changed Climate. Click the various states on the interactive map to see photographs of the effects of a hotter, drier climate in the west. Chapter 3 of the Hotter and Drier publication is specifically about the Colorado River Basin. Your teacher may assign you to download the report and read this chapter. In addition to showing increasing temperature trends across the basin, it cites reports that indicate: - As early as 2030, the average flow of the river could be reduced to only half of the level on which the Colorado River Compact is based. The Compact is the legal agreement used to divide Colorado River water among the states through which it flows. - If current levels of water use continue, there is a 50 percent chance that by 2023, water levels in the river's two main reservoirs, Lake Powell and Lake Mead, will fall below their outlets. This means that the reservoirs will have effectively gone dry. - By the end of the century, "Dust Bowl" conditions will be the new climate norm of the Southwest. The report also cites trends in reduced volume and shorter duration of snowpack (the volume of water that exists as snow on the surface during winter months) across the basin.
Historically, melting snowpack has fed the river gradually through the spring months. Since 1985, snow has been melting earlier and faster, flowing downhill in the late winter and leaving the land drier during the spring. Stop and Think: 4. Compare snowpack in a watershed to a dam on a river. How are they alike? How are they different? 5. What effect does the El Nino Southern Oscillation (ENSO) have on water supplies to the Colorado River Basin? 6. How does the increasing population of sunbelt cities in the Lower Colorado River Basin contribute to lowering reservoir levels? Natural flow of the river Since the early 1900s, dams on the Colorado River and its tributaries have diverted huge volumes of water away from the river. The dams have also increased the amount of water that is lost to the atmosphere by evaporation. By accounting for all the water that has been removed from the river system upstream, scientists have been able to reconstruct the "natural flow" record of the Colorado River at Lee's Ferry, the point directly below Glen Canyon Dam (the dam that forms the reservoir called Lake Powell). This natural flow record is important because it shows the variability in streamflow due to climate alone, apart from changes in the use and management of the river. - Examine the graph below. Interpret the three lines to understand the climate trend of the past and project it into the future. - The Colorado River Compact, the legal agreement that divides the river water among the basin states, was established based on the assumption that the average annual flow at Lee's Ferry was about 16.4 MAF. This was based on the 20 years of gage records available in 1922. However, the flow since 1922 has been generally lower than these early gaged flows, so the amount of water that the states have to share is smaller than expected. There is not enough water in the river, on average, to fulfill all of the legal entitlements that states have to the water.
How the Colorado River Basin states will solve this issue remains to be seen.
http://serc.carleton.edu/eslabs/drought/6b.html
4.09375
The Halifax Gibbet was the forerunner of the guillotine. In fact, the guillotine was inspired by the Halifax Gibbet, with the former working on the same principle as the latter. The principle of the guillotine – a sharp-bladed instrument being held above and then dropped some distance onto a condemned person’s neck – was first used in Medieval England. It is believed that this method of execution was first used in Halifax – hence its name – in the C13th. A law known as the Gibbet Law gave the Lord of the Manor of Halifax the power to condemn someone to death by the Halifax Gibbet if they were found guilty of stealing something that was worth more than 13p. The first recorded use of the Halifax Gibbet was in 1286 when John of Dalton was executed – though no records survive to explain what he was guilty of. The Halifax Gibbet was a wooden structure 15 feet high with an axe-shaped blade at the top. This was held up by a rope. Once the condemned prisoner had been securely fastened, the executioner would cut the rope. In theory, the weight of the blade and the speed at which it fell would decapitate the condemned. The Halifax Gibbet was used on market days. This would ensure many people were in the town to witness the execution, and the hope was that the fearsome sight of the Gibbet would act as a deterrent to those who might have considered a life of crime. If a condemned prisoner escaped on the day of his/her execution and crossed outside of the town’s boundary, he/she was safe as long as the condemned never returned to Halifax. John Lacey, in the reign of James I, did escape on the day of his execution. He returned to the town in 1623, a full seven years after the year he should have been executed. Lacey was recognised, arrested and executed on the Gibbet. The Halifax Gibbet was last used in 1650. The first recorded use of what was known as the guillotine was in 1789.
http://www.historylearningsite.co.uk/halifax_gibbet.htm
4.4375
Studying phage, a primitive class of virus that infects bacteria by injecting its genomic DNA into host cells, researchers have gained insight into the driving force behind this poorly understood injection process, which has been proposed in the past to occur through the release of pressure accumulated within the viral particle itself. Almost all phages (also known as bacteriophages) are formed of a capsid structure, or head, in which the viral genome is packaged during morphogenesis, and a tail structure that ensures the attachment of the phage to the host bacteria. A common feature of phages is that during infection, only their genome is transferred to the bacterial host's cytoplasm, whereas the capsid and tail remain bound to the cell surface. This situation is very different from that found in most eukaryotic viruses, including those that infect humans, in that the envelope of these viruses fuses with the host plasma membrane so that the genome is delivered without directly contacting the membrane. Phage nucleic acid transport poses a fascinating biophysical problem: transport is unidirectional and linear, and it concerns a unique molecule whose length may be 50 times that of the bacterium itself. The driving force for DNA transport is still poorly defined. It was hypothesized that the internal pressure built up during packaging of the DNA in the phage capsid was responsible for DNA ejection. This pressure results from the condensation of the DNA during morphogenesis – for example, another group recently showed that the pressure at the final stage of encapsulation for a particular bacteriophage reached a value of 60 atmospheres, which is close to ten times the pressure inside a bottle of champagne. In the new work reported this week, researchers have evaluated whether the energy thus stored is sufficient to permit phage DNA ejection, or only to initiate that process.
The researchers used fluorescently labeled phage DNA to investigate in real time (and with a time resolution of 750 milliseconds) the dynamics of DNA ejection from single phages. The ejected DNA was measured at different stages of the ejection process after being stretched by an applied hydrodynamic flow. The study demonstrated that DNA release is not an all-or-none process, but rather is unexpectedly complex. DNA release occurred at a very high rate, reaching 75,000 base pairs of DNA/sec, but in a stepwise fashion. Pausing times were observed during ejection, and ejection was transiently arrested at definite positions of the genome in close proximity to genetically defined physical interruptions in the DNA. The authors discuss the relevance of this stepwise ejection to the transfer of phage DNA in vivo. Source: Eurekalert & others. Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009. Published on PsychCentral.com.
http://psychcentral.com/news/archives/2005-03/cp-bfd030305.html
4.125
Using the Musical Keyboard (Introduction to Basic Music Theory) The main focus of this lesson is to instruct on how to play the simplest of chords on a keyboard while showing how to obtain them with some small amount of understanding. Hopefully, this will help the guitarist understand how chords are played on the guitar more easily than using the guitar alone. It will also provide the guitarist with a quick introduction to the keyboard. By using the information provided below, the guitarist can more easily figure out what notes are being played with particular chords. Having had some small amount of musical training while I was young, I can say where things usually begin when a new student is being taught piano. It starts at middle C. Middle C – The Starting Point So, where is middle C? The placement of middle C on a musical staff can be researched on your own time; it is covered in a number of other places on the Internet. This lesson will show how to finger all the major chords, minor chords, and dominant 7th chords, hopefully without overwhelming you with music theory. So, let us have a look at a diagram of some keyboards. Note that middle C is off centre and not the middle note on the keyboard. This is because this and many other keyboards, as well as full-size pianos, are not symmetrical about middle C. The reasoning behind that is a topic of research that goes into the development of music as a whole. What is important is that you can learn how to locate middle C with a little practice. Other common sizes for keyboards are 73 keys, 76 keys, and 88 keys. These can be seen below in Figure 3, Figure 4 and Figure 5, respectively. While middle C is not usually (see Figure 3) in the middle of the keyboard, it is almost there. It makes a practical place to start in terms of range of musical sounds. It also makes a very good place to start when studying music theory.
The C Major Scale The notes of the C major scale are: Each note in the C major scale can be numbered using regular numerals and Roman numerals: Note that I listed the next C in the scale, showing an octave instead of stopping at B as shown in Figure 2. This has to do with showing the formula for a major scale. This will make learning how to apply the formula to other notes easier later. Notice that when the C major scale is numbered using Roman numerals, some are numbered with capital letters and some are numbered with lower case letters. (Jumping a little bit ahead, all chords in the C major family are built using only notes from the C major scale.) When playing chords in the C major family, very little thinking has to be done because only the white keys are played. When each finger of the right hand is assigned to one key, everything falls into place. If you place the thumb of the right hand on middle C, the rest of the fingers will each fall on one key naturally. Refer to Figure 6 below as an example. So, start by playing the C major chord, commonly referred to as C. Place the thumb of the right hand on middle C (or any C), skip using the index finger, place the middle finger on E, skip the ring finger, and place the little finger on G. That is the simplest C chord you can make. To play the D minor (commonly shown as Dm) chord, just move the hand to the right one white key so the thumb plays D, the middle finger plays F, and the little finger plays A. To play the E minor chord (commonly shown as Em), move the hand to the right one white key. This is the same for all successive chords of the C major scale. Now go back to the C major scale where it is numbered with Roman numerals. Those notes numbered with capital Roman numerals have chords that are major chords. Those numbered with lower case Roman numerals have chords that are minor chords. The exception to this last statement is the vii° chord. The vii° chord is a diminished chord.
(It is a chord with a minor 3rd and a 5th that is lowered by a half-step. This information can be left for later exploration of music theory.) When it comes to playing chords an octave higher, it is easy using a piano or other keyboard instrument. When playing guitar it is different because you can form different versions of the same chord in different places on the fingerboard. If playing an electric guitar it is easier to play chords one octave higher because the fingers can be placed that high on the fretboard (fingerboard) more easily due to the way the guitar is built. For the C major scale, the chords are shown for the respective notes below: Difference between Major Chords and Minor Chords A full chord must be constructed of at least three notes. Any chord in the family of the C major scale (and any major scale for that matter) begins with the note which is the name of the chord, plus the third note up from that note and the fifth note up from that note. For a C chord, that means the chord is made up of the notes C (I), E (iii), and G (V) of the C major scale. What makes a minor chord minor? The answer is that the 2nd note in the chord (the major 3rd) is made a minor 3rd. This means that the 2nd note is lowered by a half-step. Example: D notes: D, F#, A – note that F# is not a note in the C major scale. Now it can be seen that the D chord is not a chord in the C major chord family. Dm notes: D, F, A – note that the major 3rd (F#) is lowered to F. I once saw a musical play about a couple of piano students that humorously told the stories of their careers starting from their early days. The piano teacher asked the students the question, “What makes a minor chord sound minor?” The answer was that a minor chord sounds sad whereas a major chord sounds happy. When you play a minor chord in comparison, this generally sounds true.
Just using the knowledge associated with the C major scale, we know where the major chords are for the notes C, F and G. We then know the minor chords are for the notes D, E and A, and the diminished chord is associated with the note B. Remember, the method for playing all of the chords in the C major scale is provided in the paragraph below Figure 6. All chords in the C major chord family can be played by using the thumb, the middle finger and the little finger. Actually, for later use and knowledge, the same holds true for playing the chords with the left hand, but starting with the little finger and moving to the right. The fingers used on the left hand are the little finger, the middle finger and the thumb. Extending the Knowledge of Minor Chords to Find the Rest of the Major Chords on the Keyboard Figure 7 shows an octave of keys from the notes C to C. The top of the figure shows how an octave normally looks, while the bottom of the figure shows the octave as if the black keys in the octave were extended to the full length of the white keys. The extension of the black keys is done to show that there is a movement of one half-step between all keys, black or white, even though some white keys have no black keys between them. Note that there is no sharp (#) or flat (b) between the notes E and F or between B and C. Aside: However, the movement from the notes E to F and B to C, or the movement from F to E and C to B, is still only one half-step. This is important to understand because using this knowledge along with knowledge of what notes are in the C major scale allows us to figure out for ourselves the formula for the major scale if we so wish. More importantly, with this knowledge, if we forget the formula for the major scale, we can refer to the C major scale to figure it out. Because we know what makes a minor chord minor, we can extend that knowledge to figure out what the major chords are for the notes D, E and A by using the chords Dm, Em and Am.
Place the right-hand fingers on a keyboard for one of the minor chords mentioned. Just move the middle finger (the one on the 2nd note of the chord) up a half-step. To move up a half-step is to move up by one key – black or white. Refer to Figure 7 above for reference. Using Dm to find D: Using the notes D, F and A => moving the middle finger up by one half-step gives the notes D, F# and A. Refer to Figure 8 as an example. Imagine moving the fingers from the notes indicated on the bottom chord of Figure 8, Dm, to the top chord of Figure 8, D. Using Em to find E: Using the notes E, G and B => moving the middle finger up by one half-step gives the notes E, G# and B. Using Am to find A: Using the notes A, C and E => moving the middle finger up by one half-step gives the notes A, C# and E. Using Bdim to find B: Using the notes B, D and F => moving the middle finger up by one half-step and the little finger up by one half-step gives the notes B, D# and F#. We can also use the above knowledge to figure out what the minor chords are for C, F and G. To do this, simply finger the chord and move the middle finger down one half-step. Cm has the notes C, Eb and G. Fm has the notes F, Ab and C. Gm has the notes G, Bb and D. Now it is possible to figure out all of the major and minor chords for all the notes on the keyboard. It is good to note that this method is easiest to use for the white keys. The only chord that has not been explicitly discussed is Bdim. Bdim has a minor 3rd and a lowered (diminished) 5th. You should be able to figure out or research what the notes are for the chords B and Bm. You could also use the major scale formula to obtain the B major scale and work from there. NOTE: It is important to reference the keyboard (a real one or the diagrams) when studying this material to have a visual aid. The Major Scale Formula The major scale (as well as every other scale) has a set formula.
However, if you know the C major scale and the key spacing, you can figure out the formula every time. Again, it is important to know that there are no black keys between the keys B and C, or E and F. Half-Steps and Whole-Steps: A half-step is a movement (up or down) from one key to the one immediately next to it. (Refer to Figure 7). Examples: C to C#, G# to A, E to F, B to C, A# to A, C to B, or G to F#. A whole-step is a movement of 2 half-steps. Examples: C to D, E to F#, A# to C or F to D#. C Major Scale Formula: W W H W W W H, where - W = Whole-step - H = Half-step By using the major scale formula you can figure out all the major scales. This information can be used in many ways, such as figuring out all of the major chords on the keyboard. Because a major chord is made up of the 1st note of the chord, the major 3rd up from the 1st note, and the 5th up from the 1st note, the major scale formula will provide you with the major chords in the root note chord family for the root (I) note, the fourth (IV) note and the fifth (V) note of the major scale. Again, remember that you have already been provided with the method of figuring out all the major and minor chords for all the keys on the keyboard. Stick with the white keys for now. By figuring out all of the major scales and putting them in ascending order you end up with half of the Cycle of Fifths. This is information used for chord progressions in many songs. The numbering of the notes in the chord family (originally presented in the scale) is also often used in chord progressions of songs. When figuring out a major scale, it is a good indication that it is correct if the 7th note is a half-step below the 8th note. If the chords in the music you are playing are contained within the major scale, you can use that scale to solo. Dominant Seventh Chords Dominant 7th chords are often associated with a bluesy sound.
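The whole-step/half-step bookkeeping above is easy to check for yourself with a short program. Here is a minimal sketch (the `NOTES` list, the function name and the sharps-only note spelling are my own choices for illustration, not part of the lesson) that applies the W W H W W W H formula to any root note:

```python
# The twelve notes in order, one half-step apart (spelled with sharps only).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Major scale formula from the lesson: W = 2 half-steps, H = 1 half-step.
MAJOR_FORMULA = [2, 2, 1, 2, 2, 2, 1]

def major_scale(root):
    """Return the 8 notes (one octave) of the major scale starting on `root`."""
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_FORMULA:
        i = (i + step) % 12  # wrap around past B back to C
        scale.append(NOTES[i])
    return scale

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
```

Note that every scale the function produces ends with its 7th note a half-step below the octave, which matches the lesson's tip for checking that a scale is correct.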
To figure out how to play a dominant 7th chord, lower the (major) 7th by a half-step and fit it into the chord fingering. Dominant 7th chords are written as follows: A7, B7, C7, etc. The 7th note of the C major scale is B; lowered a half-step, the dominant 7th is A#/Bb. A# and Bb are the same note – they are called enharmonic equivalents. The notes of the C7 chord are C, E, G, and A#, or C, E, G, and Bb. Applying This Knowledge to the Guitar One of the main advantages to learning about music theory using a keyboard is that the keyboard is a much more linear instrument than the guitar. One key follows directly after another. On the guitar, when you get to the last fret on one string, the next note on the next string is not the next note as it is on the keyboard. The same note of the same pitch appears at more than one place on the guitar. To take this theory and apply it to guitar, remember that standard tuning on a 6-string guitar is (low to high): E A D G B e. You can remember this by using the letters of standard tuning as an acronym: Eddie Ate Dynamite, Good Bye Eddie. Another piece of information that is important to know about the guitar is that a movement of 1 fret (up or down) is a movement of a half-step. A movement of 2 frets is a whole-step. Now you can pick out scales and chords on the guitar as well as the keyboard. The rest is for you to explore.
http://www.guitarnoise.com/lessons/simple-chords-on-keyboard-and-guitar/
4.125
1st Nine Weeks Math: For the next few weeks we focus on learning strategies for memorizing addition and subtraction facts. We will also focus on quantitative reasoning and comparing and ordering numbers. Science: We started off the year reviewing science safety and identifying science tools. This 9 weeks your child will observe and identify properties and changes of matter. We will learn about various forms of energy and compare patterns of movement. Your child will keep a science notebook this year. Please encourage your child to share his/her notebook with you. Social Studies: This 9 weeks in social studies we will learn about the various communities in which we live. We begin by learning the elements of a community and the purpose of the communities we live in. We will also learn about the physical and human characteristics of the various communities we belong to. Language Arts: We begin our 9 weeks learning what a sentence is. We will review parts of sentences and then continue to develop our understanding of the different types of sentences. Later in the 9 weeks we will begin developing our understanding of nouns. Reading: The big idea for Unit 1 is: There are different kinds of communities. In this unit we will explore: why the order of events in a story is important; how families are alike and different; why an author writes a story; what causes characters to change; and what clues tell you where and when a story takes place. Study Skills: We will work on setting goals, organization and working cooperatively. In addition, we will work on paraphrasing and retelling information. We will learn to locate parts of a book and how to read a newsletter.
http://www.eisd.net/Page/4999
4.125
With the proclamation of 5th April 1815, King Friedrich Wilhelm III took possession of the territories granted to him at the Congress of Vienna. Thereby the predominant part of what is today North Rhine-Westphalia became Prussian and was politically reunited for the first time since the decline of the Carolingian empire. At the Rhine and the Ruhr the first phases of industrialization were unfolding, which would herald the industrial revolution. The expansion of transport and traffic in the region proceeded rapidly, turning the developments of centuries upside down. What had been a centre of commerce or trade yesterday could, in a short space of time, sink into insignificance because it was disconnected from the new iron railways. The Rhineland was greatly influenced by its time under the French and clung to many of the improvements achieved at that time, such as the French civil and commercial codes, the chambers of commerce and the constitutional districts. In addition, the Rhineland was almost 80% Catholic and thereby quite different from the old Prussian regions, which were characteristically Protestant. The early industrialization at the Rhine and Ruhr, its favourable location for transportation and a strong, self-confident bourgeoisie in the towns provided a phase of modernization which had a positive effect throughout Prussia.
http://www.wir-rheinlaender.lvr.de/engl_version/rhineland_prussians/
4.0625
OIL SPILL TRACKING: Recently, scientists and mathematicians at the University of North Carolina at Chapel Hill have developed a tool that could track the spread of oil spills – even before they happen. By modeling the surface of the ocean and factoring in potential wind and weather patterns, scientists can predict where oil that stays on the surface of the water will spread. The researchers hope the modeling tools will help clean-up crews decide where to marshal their resources in the event of a future spill. ABOUT METHANE: Methane sources such as cows, oceans, wetlands, and natural gas pipes have more impact on the global atmosphere than previously thought. Methane was released along with oil in the Deepwater Horizon spill and even seeps naturally from the floor of the Gulf of Mexico. When methane breaks down chemically in the atmosphere and combines with other chemicals, it produces ozone, atmospheric scientists say. Like methane, ozone is a greenhouse gas, and it is also the main component of smog. Researchers say that even something as simple as tightening a leaky gas pipe can make a difference, reducing the amount of methane released into the atmosphere.
http://www.aip.org/dbis/AGU/stories/21120.html
4.15625
Grades 1, 2 | Math In this math worksheet, your child will get practice with place value by writing each number as a sum of 10s and 1s. This worksheet was originally published in Math Made Easy for 1st Grade by © Dorling Kindersley Limited. Skills: understanding place value; 1s, 10s, and 100s; writing expanded form. Common Core standards: CCSS.Math.Content.1.NBT.B.2, CCSS.Math.Content.1.OA.D.8, CCSS.Math.Content.2.NBT.A.3
http://www.greatschools.org/worksheets-activities/5337-expanded-form.gs