How to Create a New Custom Dictionary in Excel 2010

In Excel 2010, you can create custom dictionaries to use when spell checking your worksheets. You use the Add to Dictionary button in the Spelling dialog box to add unknown words to a custom dictionary. By default, Excel adds these words to a custom dictionary file named CUSTOM.DIC, but you can create a new custom dictionary to use as the default, if you prefer.

1. Click the File tab and then click Options. The Excel Options dialog box appears.

2. Click the Proofing tab and then click the Custom Dictionaries button. Excel opens the Custom Dictionaries dialog box, where you can create a new custom dictionary.

3. Click the New button. Excel opens the Create Custom Dictionary dialog box.

4. Type the name for your new custom dictionary and then click the Save button. The name of the custom dictionary you created appears underneath CUSTOM.DIC (Default) in the Dictionary List box.

5. (Optional) Click the dictionary’s name in the Dictionary List box and then click the Change Default button. This makes the new custom dictionary the default dictionary into which new words are saved.

6. Click the Edit Word List button. Excel opens a dialog box with an alphabetical list of the words in that custom dictionary. If you just created the dictionary, it will be empty.

7. Type a word you want to add to your custom dictionary in the Word(s) text box and click Add. Continue until you're satisfied with your custom dictionary.

8. Click OK until you've returned to your worksheet. Now you're ready to get back to work.

If you make the custom dictionary your default, Excel continues to add all unknown words to your new custom dictionary until you change the default back to the original custom dictionary (or to another custom one that you’ve created). To change back and start adding unknown words to the original custom dictionary, select the CUSTOM.DIC file in the Custom Dictionaries dialog box and click the Change Default button.
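The steps above all go through Excel's dialogs, but a custom dictionary file is just a word list, so you can also prepare one in advance and point the Custom Dictionaries dialog at it with the Add button. The sketch below is a hypothetical Python helper, assuming (as Office 2010 generally does) that a .dic file is one word per line, saved as UTF-16 LE text with a byte-order mark; check a .dic file Excel itself saved before relying on this format.

```python
# Hypothetical helper (not part of Excel): build an Office-style custom
# dictionary file. Assumption: Office 2010 .dic files are plain word lists,
# one word per line, UTF-16 LE with a byte-order mark.

def write_custom_dic(path, words):
    """Sort and de-duplicate `words`, then save them as a .dic word list."""
    unique = sorted({w.strip() for w in words if w.strip()})
    with open(path, "w", encoding="utf-16-le", newline="") as f:
        f.write("\ufeff")              # byte-order mark, as Office writes it
        f.write("\n".join(unique))
    return unique

# Example: seed a dictionary with project-specific terms.
print(write_custom_dic("myterms.dic", ["VLOOKUPs", "dummies", "VLOOKUPs"]))
# → ['VLOOKUPs', 'dummies']
```

After saving the file with a .dic extension, add it in the Custom Dictionaries dialog and, optionally, make it the default as in step 5.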
Source: http://www.dummies.com/how-to/content/how-to-create-a-new-custom-dictionary-in-excel-201.html
It’s the same question: economic development or conservation? During April, the headline said, “Uganda Seeks to Reconcile Oil-Nature.” Now, in Tanzania, “Proposed Serengeti Highway Is Lined With Prospects and Fears.” The places differ but the issues are the same. In Uganda, the debate concerns oil drilling. In Tanzania, it is road building.

Privately and publicly, oil will generate revenue for Uganda. Similarly, northern Tanzania will benefit from a new road that will bisect Serengeti National Park. It will facilitate medical care, carry much-needed goods, and enable the spread of electricity and cell phone service. And both projects will irreparably harm priceless wildlife. What to do? An economist would suggest assessing the externalities.

The Economic Lesson

Economists see positive externalities wherever a transaction between two parties affects a third individual or group in some beneficial way. They see negative externalities when the impact on a third party is harmful. Vaccines usually have positive externalities, while pollution is the typical example of a negative externality. Taking externalities an economic step further, we can look at cost. On a demand and supply graph, the equilibrium price of a decision that has a positive externality is too high because the benefits experienced by society go uncounted. Correspondingly, the equilibrium price of a decision with negative externalities is too cheap because the costs imposed on society go uncounted.
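To make the graph argument concrete, here is a small numerical sketch (the demand, supply, and external-cost figures are invented for illustration, not taken from the article): when a per-unit external cost is left out of the supply curve, the market equilibrium produces too much of the good at too low a price relative to the social optimum.

```python
# Invented linear example of a negative externality.
# Demand:          P = 100 - 2*Q
# Private supply:  P = 10 + Q        (producers' own costs only)
# External cost:   15 per unit       (borne by third parties)

def equilibrium(supply_intercept):
    """Solve 100 - 2*Q = supply_intercept + Q for quantity and price."""
    q = (100 - supply_intercept) / 3
    return q, 100 - 2 * q

q_mkt, p_mkt = equilibrium(10)       # market ignores the external cost
q_soc, p_soc = equilibrium(10 + 15)  # social supply curve includes it

print(q_mkt, p_mkt)  # 30.0 40.0 -> too much produced...
print(q_soc, p_soc)  # 25.0 50.0 -> ...at too cheap a price
```

The gap between the two outcomes (5 extra units sold at a 10-unit discount) is exactly the uncounted cost the article describes.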
Source: http://www.econlife.com/tag/tanzania/
Dorsal metatarsal arteries

Arteries are blood vessels that convey blood away from the heart. Arteries can be divided into systemic arteries, which carry blood from the heart to the entire body, and pulmonary arteries, which are responsible for carrying blood from the heart to the lungs. Apart from the pulmonary and umbilical arteries, the blood in the arteries is oxygenated. Systemic arteries are classified as elastic or muscular depending on their composition. The larger arteries are generally elastic and the smaller ones tend to be muscular. Systemic arteries carry blood to the arterioles, which are the smallest type of arteries and are responsible for delivering blood to the capillaries. Arterioles also regulate blood pressure by the muscular contraction of their walls. Capillaries are tiny blood vessels in which gases and nutrients are exchanged. Arteries are subject to atherosclerosis, which occurs when plaque builds up inside the artery walls as a result of cholesterol, smoking, high blood sugar, and other factors. This can lead to a heart attack or a stroke, two of the leading causes of death in developed countries.

Written and medically reviewed by the Healthline Editorial Team
Source: http://www.healthline.com/human-body-maps/dorsal-metatarsal-arteries
By Ryan Gueningsman

When considering what to talk about and in what format to present her thoughts on Veterans Day, Major Patricia Osmon of Delano realized that since she will be at school speaking to students who are missing class, she might as well touch on some different school subjects.

“Since you are missing out on some class time today, and since I am a school board member, I would like to visit some of those subjects you might be missing out on,” Osmon said, starting with history, a subject she said she struggled with the most.

Osmon said President Woodrow Wilson proclaimed the first Armistice Day on Nov. 11, 1919, to remember the signing of the Armistice that had ended World War I fighting a year earlier. Formally, the war did not end until later, with the signing of the Treaty of Versailles. “In 1938, that Nov. 11 became a legal holiday,” Osmon said. In 1954, Congress amended the name, changing it to Veterans Day. The meaning has evolved to be a day to honor the veterans of America collectively and as individuals.

“Veterans Day differs from Memorial Day,” she said. “Veterans Day is celebrated to thank living veterans and members of the military for their service; in contrast, Memorial Day honors those who have served and died.”

Osmon said the numbers vary by the millions, but it is most commonly reported that there are 24.9 million veterans of the United States military. Of those, Osmon said:

• 1.5 million are women;
• 2.3 million are from World War II;
• 2.7 million are from the Korean War;
• 7.6 million are from Vietnam;
• 4.5 million are from the Gulf Wars;
• 5.6 million are from peacetime only;
• 78,000 served in three wars; and
• 5.5 million are disabled.

Osmon said the first image that comes to mind in this day of conflicts in Afghanistan and Iraq is one of the infantry soldier holding an M16, engaged in battle. “Veterans are everywhere,” Osmon said, and explained that veterans are much more diverse than that.
Veterans may have served a variety of roles in their military service, she said, and there are many places one will find veterans where you least expect them. So, what makes them special?

“The simple answer is that they put their life at risk in support of the president’s national agenda, which is to protect the American way of life,” Osmon said.

In order to be a member of the military, she said, one must make a number of sacrifices. Service in the military means time away from family, birthdays missed, and holidays celebrated abroad. “It means young men and women scoring touchdowns or performing plays and not finding their parents in the audience,” she added. “Serving in the military consistently challenges you to do things outside your normal/typical experience, and you carry this with you for the rest of your life.”

Osmon used “art” to paint a picture for those in attendance, telling the story of a soldier who had returned from Afghanistan in June. She described him as “happy-go-lucky,” and said he is the nicest guy one will ever meet and is expecting his first child with his wife any day. Osmon said the man is an average guy who loves football and the Vikings, and he works as a radiologist technician.

The soldier said it’s been hard for him to adjust to being back home. He said it’s awkward talking about his experiences with family and friends; part of him wants to talk about it, yet he feels they will never quite grasp what he experienced, so most of the time he chooses not to talk about it.

“Certain memories haunt him,” Osmon said, adding that it wasn’t the traumatic injuries he saw or men dying on the operating table, but instead the times he stood in a double line of soldiers who formed the corridor linking the hospital to the medevac helicopter. “The double column of soldiers would salute as the flag-draped coffin passed from the hospital morgue to the air transport to take them home.
He said he stood at attention saluting, watching the coffins pass, thinking of the family members who had just lost a father, mother, or brother.”

Osmon said she could see how the experience marked him. “He went from all smiles and talk of the new baby to having a faraway look as he described the memory of those who went home under a draped flag,” she said.

Osmon said each veteran has their own stories and memories. “It marks them and they carry it with them for the rest of their lives,” she said.

Osmon concluded her remarks with a personal story about a training mission in Georgia called Operation Golden Medic, which resulted in the “best thank you” she could have ever received at that moment. “Many stories of bravery and heroics are being told today, but not every veteran’s story is a battle story,” she said.

Osmon said veterans are marked by these experiences, and this is why recognition is appreciated. “It is why this day means so much . . . it is because we have seen . . . we have experienced . . . we have sacrificed,” she said. “We may not want to talk about it, but it did happen. We did see it, we did experience it, and we did it for you.”

She encouraged students and those on hand to look through the veneer of those around them and simply say “thank you.” “It is in part by their service that we retain our position on the international stage and enjoy the luxuries that come along with being an American,” Osmon said. “The American way of life.”

Osmon is a graduate of Bates College in Lewiston, ME, and holds both master’s and doctorate degrees from Baylor University in physical therapy. She has a total of 17 years of military service and currently serves with the 452nd Combat Support Group at Ft. Snelling as chief physical therapist. She is also employed by the Meeker and Wright Special Education Cooperative as a physical therapist, and serves as an adjunct professor at Dunwoody Institute in its physical therapy assistant program.
In addition to all these roles, she also serves on the Delano School Board. The Veterans Day program also included the presentation of Patriot Pen essay winners, and the presentation of the Walter Grotz essay scholarship. The winning essays will be published in next week’s newspaper.
Source: http://www.herald-journal.com/archives/2010/stories/osmon-vets-day.html
February 4, 2013 (Chicago Tribune) -- At age 17 in the 1970s, Marvin Seppala dropped out of high school and became the first adolescent admitted at Hazelden, when treatment for alcoholism was in its infancy. Afterward, he returned to his Minnesota town and finished high school in the homes of merciful teachers. Between relapses, he applied for a job as a janitor at the Mayo Clinic and somehow landed one as a lab tech. The work inspired him to get straight at age 19, go to college, then move ahead to medical school. Now a renowned expert on addiction treatment and psychiatry, Seppala returned to Hazelden (hazelden.org) a few years ago -- as chief medical officer.

His recovery shows that predisposition to alcohol abuse isn't doom or destiny. But people with alcoholism in their backgrounds, whether they or close family members have struggled, often worry that they or their children will inherit it, for good reason: a genetic history of alcoholism is the biggest risk factor for alcoholism.

"Having an alcoholic parent, especially, tremendously raises the risk, by as much as six times the general population," Seppala said. "Honestly, when I got married, I didn't want children. I was so scared they'd go through the same thing I did."

More has been learned about risk factors for alcoholism in the years since Seppala decided that having children was worth the risk. (He has two healthy children in their 20s.) Awareness can't guarantee someone won't fall into alcoholism. But it can't hurt. We asked Seppala and Sarah Allen Benton, author of "Understanding the High-Functioning Alcoholic" (Rowman & Littlefield), to shed light on a few of the more shadowy risk factors and red flags. They were game.

Early onset of alcohol use: "If you start drinking before age 15, you increase the risk of alcoholism by 40 percent regardless of family history," said Benton, a therapist specializing in substance abuse and dual diagnosis treatment at McLean Hospital in Waltham, Mass.
Just postponing the use of alcohol can significantly reduce the risk of alcoholism, Benton said, partly because the part of the brain that understands risk and inhibits problematic behaviors hasn't completely developed before age 21. "The more we learn about the neuroscience of addiction, the more we realize that the brain can be molded by our experience, particularly by substances we use," Seppala said. "Early on, substance use could be molding people's brains in a manner that they could be more at risk. It's not absolutely proven." Benton, who has been in recovery from alcoholism since 2004, said she started drinking when she was 14. "Most people I know who are alcoholics started drinking at age 14 or before," she said.

High tolerance to alcohol: A high tolerance to alcohol can be a red flag. One longitudinal study of people who had an alcoholic mother or father tested the fine motor coordination of the study participants at a young age, before and after drinking one shot. "Those who tolerated alcohol without much change at all in their coordination, suggestive of high tolerance, were a lot more likely to wind up alcoholic than those who were remarkably affected by alcohol and couldn't perform well on the test after the shot," Seppala said. Some experts say it helps explain why children of alcoholics often end up marrying one, despite their best efforts. In their young dating years, they are drawn to mates who seem to be able to "hold their liquor," unlike what they may have witnessed in their alcoholic parent. Alas, alcoholism is progressive and looks different at different stages.

Compulsive tendencies: Experts suggest that one compulsive or addictive behavior often begets another. One early clue of an alcohol compulsion is a lack of recognition of consequences to use, Seppala said. "You see the consequences mounting and the individual doesn't alter the behavior," Seppala said. "I did all kinds of crazy things. Dropping out of high school was a big one.
But before that I dropped out of sports and band and choir, all these things I enjoyed. And it never occurred to me it was because of the drugs and alcohol." Brain scans of addicts show damage to the prefrontal cortex. In recovery, people often talk about being compulsive about other things. "Early in my own recovery, all of a sudden I would go off and do one thing and one thing only," Seppala said. "It wasn't pure compulsivity driving me, but that was part of it. I also can be prone to overworking. ... Compulsivity is an ongoing feature that people in recovery really do have to deal with." Benton has found dialectical behavior therapy, which involves building mindfulness, interpersonal effectiveness, distress tolerance and emotion regulation, to be particularly helpful to recovering alcoholics. "Learning ways to take care of yourself, to calm the nervous system, even if you don't have an addiction, is going to help you," Benton said.

Another mental health disorder: "It's very common for alcoholics to have an underlying mood issue," Benton said. What came first? It's hard to say. "Someone who suffers anxiety may drink to feel calmer, but when it leaves their system, they get rebound anxiety. The system is stimulated and anxiety symptoms are worsened, so then they drink again. It becomes a cycle. If they're just getting sober and not dealing with the underlying mental health issue, it's a recipe for relapse."

Genetics: It bears repeating that genetics accounts for 50 percent of the chance of alcoholism, Benton said. "It's not a guarantee, but ... people underestimate the power of genetics in alcoholism. A majority of alcoholics will say they grew up in an alcoholic home. Again, do your drinking patterns happen because of genetics or because of what they saw when they were younger or the family culture? It's probably all of that. But again, the genetics are loaded on this one." That said, Benton doesn't want to obsess over drinking as she raises her young daughter.
"I plan to have an open dialogue with her, and I want her to feel she can talk to my husband and me," Benton said. "I know my first drinking experiences, I blacked out, and I thought that was normal. I hid it from my parents. It's not that I want to condone it. I want to find a balance between condoning it and openness." "The main thing," Seppala said, "is raising your children to the best of your ability, providing a really loving environment for them. There are so many limits to what we can do. But those two things are powerful." KNOW THE FACTS About 8.5 percent of Americans suffer from alcohol-use disorders, and 25 percent of children have been exposed to alcohol-use disorders in their family. For more information on alcoholism awareness, including insights for parents and young adults, go to the website of The National Council on Alcoholism and Drug Dependence, ncadd.org. (c)2013 Chicago Tribune Distributed by McClatchy-Tribune News Service.
Source: http://www.intelihealth.com/IH/ihtIH/EMIHC000/333/341/1473678.html
Heart Valve Problems

What Is It?

The heart has four valves: the aortic, mitral, tricuspid and pulmonary valves. Like valves used in house plumbing, the heart valves open to allow fluid (blood) to be pumped forward, and they close to prevent fluid from flowing backward. Human heart valves are flaps of tissue called leaflets or cusps. Heart valve problems fall into two major categories:

- Stenosis - The opening of the valve is too narrow, and this interferes with the forward flow of blood.
- Regurgitation - The valve doesn't close properly. It leaks, sometimes causing a significant backflow of blood.

Heart valve problems can be congenital, which means present at birth, or acquired after birth. A heart valve problem is classified as congenital when some factor during fetal development causes the valve to form abnormally. Congenital heart valve disease affects about 1 in 1,000 newborns. Most of these infants have stenosis of either the pulmonary or the aortic valve. Most of the time, a specific reason for the congenital heart valve problem cannot be determined. However, researchers believe that many cases are caused by genetic (inherited) factors, because there is a higher risk of valve abnormalities in the parents and siblings of affected newborns, compared with the overall risk in the general population. Sometimes, the heart defect is related to health or environmental factors that affected the mother during pregnancy. These factors include diabetes, phenylketonuria, rubella infection, systemic lupus erythematosus (SLE or lupus) or drugs taken by the mother (alcohol, lithium, certain seizure medications).

A heart valve problem is acquired if it occurs in a valve that was structurally normal at birth. Common causes of acquired heart valve problems, discussed below, include rheumatic fever, endocarditis and pulmonary hypertension. Heart valve problems affect each valve in a slightly different way.
The aortic valve opens to allow blood to pass from the left ventricle to the aorta, the massive blood vessel that directs oxygenated blood from the heart to the rest of the body. Disorders of this valve include:

- Congenital aortic stenosis - When a child is born with congenital aortic stenosis, the problem is usually a bicuspid aortic valve, meaning the valve has two flaps instead of the usual three. In about 10% of affected newborns, the aortic valve is so narrow that the child develops severe cardiac symptoms in the first year of life. In the remaining 90%, congenital aortic stenosis is discovered when a heart murmur is found during a physical examination or a person develops symptoms later in life.
- Acquired aortic stenosis - In adulthood, aortic stenosis typically is caused by rheumatic fever or idiopathic calcific aortic stenosis. Some recent research suggests that the same processes that cause atherosclerosis in the arteries of the heart may contribute to the development of aortic stenosis.
- Aortic regurgitation - In aortic regurgitation, the aortic valve does not close properly, allowing blood to flow backward into the left ventricle. This decreases the forward flow of oxygenated blood through the aorta, while the backflow into the ventricle eventually dilates (stretches) the ventricle out of shape. In the past, adults with aortic regurgitation often had rheumatic fever in childhood. Today, other causes are more common, such as congenital heart disease, an infection called endocarditis, and connective tissue disorders.

Aortic valve problems in adults are more common in men than women.

The mitral valve opens to allow blood to pass from the left atrium to the left ventricle. Disorders of this valve include:

- Mitral stenosis - Congenital mitral stenosis is rare. The typical adult patient is a woman whose mitral valve was damaged by rheumatic fever.
- Mitral valve prolapse - In this condition, the leaflets of the mitral valve fail to close properly.
It is a common condition, particularly among women between the ages of 14 and 30. The underlying cause is unknown, and the majority of patients never have symptoms. In most women with this condition, mitral valve prolapse has no significance. However, in men, the prolapse is related to abnormalities of the valve leaflets that tend to get worse over time. This can lead to severe mitral regurgitation.

- Mitral regurgitation - In the past, rheumatic fever was the most common cause of mitral regurgitation. Today, mitral valve prolapse in men, endocarditis, ischemic heart disease and dilated cardiomyopathy are the most common causes.

The pulmonary valve, or pulmonic valve, is located between the right ventricle and the pulmonary artery. It allows oxygen-poor blood to flow from the right side of the heart to the lungs for oxygenation. Disorders of this valve include:

- Congenital pulmonic stenosis - In the relatively few newborns with severe congenital pulmonic stenosis, the child develops heart failure or cyanosis (a bluish color to the lips, fingernails and skin) within the first month of life. In most cases, the valve is deformed, with two or three leaflets partially fused.
- Adult disorders of the pulmonic valve - In adults, the pulmonic valve most often is damaged because of pulmonary hypertension (abnormally high pressure within the blood vessels in the lungs), usually related to chronic obstructive pulmonary disease. Damage from rheumatic fever and endocarditis is relatively rare.

The tricuspid valve allows blood to flow from the right atrium to the right ventricle. Disorders of this valve include:

- Tricuspid stenosis - This usually is caused by an episode of rheumatic fever, which often damages the mitral valve at the same time. Tricuspid stenosis is rare in North America and Europe.
- Tricuspid regurgitation - Tricuspid regurgitation typically occurs because of pulmonary hypertension, but it also can be caused by heart failure, myocardial infarction, endocarditis or trauma.

Symptoms

Many people with mild heart valve problems do not have any symptoms, and the abnormal valve is discovered only when a heart murmur is heard during a physical examination. For more severe heart valve problems, symptoms vary slightly depending on which valve is involved.

- Congenital heart valve problems - Severe valve narrowing can cause a condition called cyanosis, in which the skin becomes bluish, and symptoms of heart failure.
- Aortic stenosis - Aortic stenosis usually does not cause symptoms until the valve opening narrows to about one-third of normal. Symptoms include shortness of breath during exertion (exertional dyspnea), heart-related chest pain (angina pectoris) and fainting spells (syncope).
- Aortic regurgitation - A patient can have significant aortic regurgitation for 10 to 15 years without developing significant symptoms. When symptoms begin, there may be palpitations; cardiac arrhythmias; shortness of breath during exertion; breathlessness while lying down (orthopnea); sudden, severe shortness of breath during the middle of the night (paroxysmal nocturnal dyspnea); sweating; angina; and symptoms of heart failure.
- Mitral stenosis - Symptoms include shortness of breath on exertion; sudden, severe shortness of breath during the middle of the night; cardiac arrhythmias, especially atrial fibrillation; and coughing up blood (hemoptysis). In some patients, blood clots (thrombi) form in the left atrium. These clots can travel through blood vessels and damage the brain, spleen or kidneys.
- Mitral regurgitation - Symptoms include fatigue, shortness of breath during exertion and breathlessness while lying down.
- Pulmonic valve problems - Symptoms include fatigue, fainting spells and symptoms of heart failure.
- Tricuspid regurgitation - This does not usually cause symptoms unless it is severe and associated with pulmonary hypertension. Leg swelling and more generalized fluid retention can occur.

Diagnosis

If you are having symptoms, your doctor will begin by evaluating your risk of heart valve problems. Your doctor will ask questions about your family history of heart problems; your personal history of rheumatic fever, syphilis, hypertension, arteriosclerosis or connective tissue disorders; and your risk of endocarditis caused by intravenous (IV) drug use or a recent medical or dental procedure. If the patient is an infant, the doctor will ask about the mother's health and environmental risk factors during pregnancy.

Your doctor may suspect that you have a heart valve problem based on your specific symptoms and medical history. To support the diagnosis, your doctor will examine you, paying special attention to your heart. Your doctor will evaluate the size of your heart (to check for enlargement) and use a stethoscope to listen for heart murmurs. Because specific heart valve problems produce specific types of heart murmurs, your doctor often can make a tentative diagnosis based on your murmur's distinctive sound and whether the murmur occurs when the heart is pumping or resting.

To confirm the diagnosis of a heart valve problem and to evaluate its effects on your heart, your doctor will order diagnostic tests. These may include an electrocardiogram (EKG), a chest X-ray, blood tests to check for infection in patients with suspected endocarditis, an echocardiogram, Doppler echocardiography and cardiac catheterization. In people who do not have any symptoms, diagnostic testing may become necessary after your doctor discovers a new heart murmur during a routine physical exam.

Expected Duration

In general, heart valve problems persist throughout life and may gradually worsen with time. Those caused by endocarditis sometimes may produce severe symptoms and rapid deterioration within a few days.
Prevention

There is no way to prevent the majority of congenital heart valve problems. Pregnant women should have regularly scheduled prenatal care and should avoid using alcohol. You can prevent many acquired heart valve abnormalities by preventing rheumatic fever. To do this, take antibiotics exactly as prescribed whenever you have strep throat.

Treatment

If you have a mild heart valve problem without any symptoms, your doctor may simply monitor your condition. Researchers are studying whether the medications called statins may slow the progression of aortic stenosis, but there is not yet any evidence that these drugs decrease the need for surgery. If you have moderate or severe symptoms, your treatment will be determined by the severity of your symptoms and the results of diagnostic tests. Although your doctor can give you medications to temporarily treat symptoms such as angina, cardiac arrhythmias and heart failure, you eventually may need to have the abnormal valve repaired or replaced. This can be done in several different ways:

- Percutaneous balloon valvuloplasty (for stenosis) - In this procedure, a tiny catheter with a balloon at its tip is passed through the narrowed heart valve. The balloon then is inflated and pulled back through the narrowed valve to widen it.
- Valvotomy using traditional surgery (for stenosis) - In this procedure, the surgeon opens the heart and separates valve leaflets that are fused together.
- Valve repair (for regurgitation) - In this procedure, the surgeon opens the heart and repairs the valve leaflets so that they close more effectively.
- Valve replacement - Defective heart valves can be replaced with a mechanical heart valve made of plastic or Dacron, or a biological valve made of tissue taken from a pig, cow or deceased human donor. After surgery, patients with mechanical valves must take anticoagulant medications to prevent blood clots.
When to Call a Professional

Call your doctor immediately if you begin to experience any symptoms that may be related to a heart problem, especially shortness of breath, chest pain, rapid or irregular heartbeat, or fainting spells. If you have been diagnosed with a heart valve problem, ask your doctor whether you are at risk of endocarditis. If so, you will need to take antibiotics before undergoing any medical or dental procedure in which bacteria may enter your blood and infect your abnormal valve.

Prognosis

Among patients who undergo surgical treatments for heart valve problems, the major risks occur during and immediately after surgery. After that, the outlook is usually excellent. People who have had surgery are at much higher risk of developing an infection on the heart valve (endocarditis) throughout life.

Additional Resources

American Heart Association (AHA)
7272 Greenville Ave.
Dallas, TX 75231

National Heart, Lung, and Blood Institute (NHLBI)
P.O. Box 30105
Bethesda, MD 20824-0105

American College of Cardiology
9111 Old Georgetown Road
Bethesda, MD 20814-1699
Toll-Free: 1-800-253-4636, ext. 694
Title: Gordin Kaplan Award Lecture. Building a relationship: science and the community
Journal: Canadian Journal of Physiology and Pharmacology
Abstract: An appreciation for the intrinsic relationship that exists among science, scientists, and the public must be established. Both practitioners of science and the public should be made more aware that science is part of everyday life and that the definition of a scientist should be more encompassing. It is generally accepted that the public supports but often does not understand the goals of the scientific community. This is often due to a lack of effective communication. The scientific community has accepted accountability to the public, and attempts have been made to improve our image. Programs by groups and individuals to interact with the public, particularly school children, have grown. However, there is still a need to expand this area. It is our responsibility to find our niche in science awareness programs as speakers, mentors, or facilitators. Participation in these programs is an essential part of a professional scientist's career and should be encouraged by administration. Interaction with the public improves our ability to explain science in lay terms and the relevance of our work to the community. End points should be established to measure success: not just numbers of students entering 'scientific' careers but also science literacy. Developing this strategy will not only improve the image of the scientific community with the public but also build a lasting relationship where needs and aspirations will be mutually appreciated.
By Diane Bones, Pure Matters

Excessive sun exposure leads to premature aging of the skin and, in some individuals, skin cancer. The good news is that most skin cancers are curable if they are caught and treated early. It is easy to routinely inspect your body for any skin changes. Should any growth, mole, sore or discoloration appear suddenly or begin to change, you should see your healthcare provider. Here's a rundown of some of your skin's worst enemies:

Basal cell carcinoma

This is the most common type of skin cancer. It usually appears on sun-exposed areas of the body as small, non-healing growths or as a fleshy bump. Most commonly seen on the face, these tumors are also frequently found on the ears, chest, back, arms and hands. This cancer is found mainly in people with light hair, eyes and complexions (people who usually burn easily). Fortunately, these tumors don't spread quickly. It may take many months or sometimes even years for one to reach a diameter of one-half inch. Left untreated, a tumor can begin to bleed, crust over, and slowly invade the underlying fat, muscle, nerves and bones. Basal cell carcinoma seldom spreads to other parts of the body and is rarely fatal.

Squamous cell carcinoma

These tumors appear as non-healing ulcers, nodules or red scaly patches. They are typically found on the rim of the ear, face, lips and mouth. They can develop into large masses and cause a lot of local destruction. This cancer, unlike basal cell carcinoma, can spread to other parts of the body and cause death. Both basal and squamous cell carcinoma are rarely found in people with dark skin. The cure rate is high when either is caught and properly treated.

Melanoma

A very serious cancer characterized by the uncontrolled growth of pigment-producing cells, although melanoma can be non-pigmented. Melanoma can suddenly appear without warning within or near a mole or dark spot.
It is found most frequently on women's legs and on the upper backs of both women and men; however, it can appear anywhere on the body. Melanoma is believed to be event-driven: one or more sunburns during childhood or adolescence can set the stage for developing melanoma at a later date. It is more common in light-skinned people. Past sunburns, sun exposure during youth and even heredity are factors in developing melanoma. Dark-skinned people also can develop it, especially on the hands and feet, under the nails and in the mouth. Treating melanoma in its early stage, before it has metastasized or invaded the deeper layers of skin, is usually successful. But once it grows and sinks into the skin, the chances of it spreading and causing death are greatly increased.
You must have heard about different kinds of competitions, and your children must have participated in many of them. But today, I will tell you about a unique and very interesting competition: the National Handwriting Olympiad.

The National Handwriting Olympiad is a handwriting competition conducted at the national level. It is unique because participants do not have to do any special preparation, and it is interesting because even a participant who does not win never really loses. You must be wondering how this is possible. In this competition, students across India participate through their schools. The most interesting aspect is that even students with illegible handwriting can take part. The reason lies in the way the handwriting sheets are evaluated: handwriting experts scrutinize the sheets and give necessary suggestions as well as corrective advice to each and every student. This advice helps your child improve his or her handwriting.

Experts in the field of handwriting have said that "handwriting reflects the personality of an individual." If you want to ensure success in your child's academic career, you will have to focus on his or her handwriting from childhood itself. The best-known handwriting institute of India, Write-Right, found in its survey that 96% of merit students have good, legible handwriting and better control over languages, and that, if worked on scientifically, the handwriting of every single student can be improved within 7 to 10 days.

The main aim of this competition is to spread awareness of the importance of handwriting among students, so that they can focus on handwriting right from the beginning and not lose marks due to illegible handwriting. Through this type of competition:
• Your child will get an opportunity to improve his/her handwriting.
• Your child will become aware of the importance of legible handwriting.
• Your child will be awarded/rewarded.

So friends, isn't it a unique competition, where even students who lose can win? After learning about this competition, you will want your child to participate, so why delay? Talk to the teachers today. For more information, log on to www.handwritingolympiad.com
As children, many of us can attest to hearing mom repeat quotes like these: "eat your veggies, mind your manners, and get a good night's rest." A new study demonstrates why mom was right, especially about getting a good night's rest. Lead study author Aric A. Prather, PhD, clinical health psychologist and Robert Wood Johnson Foundation Health & Society Scholar at UCSF and UC Berkeley, found that lack of sleep lowers the effectiveness of vaccines.

"With the emergence of our 24-hour lifestyle, longer working hours, and the rise in the use of technology, chronic sleep deprivation has become a way of life for many Americans. These findings should help raise awareness in the public health community about the clear connection between sleep and health," he said.

Conducted at the University of Pittsburgh, the study consisted of 70 women and 55 men between the ages of 40 and 60. All were in relatively good health and were nonsmokers. Researchers administered the standard three doses of hepatitis B vaccine. The first and second doses were administered a month apart, followed by a booster shot six months later. Before both the second and third shots, each patient's antibody levels were measured. Following the final shot, researchers measured antibody levels once again.

The results showed that people who received on average only six hours of sleep per night considerably hurt their chances of developing antibody responses to the vaccine and were 11.5 times more likely to be vulnerable to viruses. "Sleeping fewer than six hours conferred a significant risk of being unprotected as compared with sleeping more than seven hours per night," researchers stated. The researchers reiterated the importance of sleep and additionally noted that a lack of sleep may have detrimental effects on parts of the immune system that are essential to vaccine response.
"Based on our findings and existing laboratory evidence, sleep may belong on the list of behavioral risk factors that influence vaccination efficacy," Prather said. "While there is more work to be done in this area, in time physicians and other health care professionals who administer vaccines may want to consider asking their patients about their sleep patterns, since lack of sleep may significantly affect the potency of the vaccination." Published by Medicaldaily.com
Tuesday, 14 February 2012

I read a very interesting article titled "Is India Doing Enough For Its Children?". The article paints a very sad picture for the reader, stating that a 17-year-old mother was taken to a wooden shack moments after giving birth to her premature son. Since she belonged to a poor community, which believes a woman to be impure moments after giving birth, neither the poor woman nor her son received any medical treatment, and the newborn died just two months after birth.

Even though India has improved economically over the years, the article points out that it accounts for a shocking 20% of the world's child mortality. Even more shockingly, the article points out, half of these deaths occur within the first month after birth. Lack of nutrition among both mothers and children is one of the main reasons for such high mortality rates. Surprisingly, the malnutrition problem in India is three times more severe than that of Ethiopia; nevertheless, Ethiopia has lower child mortality rates than India, so there must be another leading factor contributing to such great losses.

Over the years the mortality of children under the age of 5 has decreased, but as the article states, the numbers are still far from India's goal for 2015. The same trend is seen in other countries with large populations and widespread poverty. Poverty and lack of education for mothers seem to be among the biggest contributors to India's high child mortality rates; better-off places in India do not suffer these horrendous losses. As many as 80% of the Indian population were not aware of this pressing issue in their country. Key contributors to such high mortality rates in India seem to be cultural attitudes that deem birth unclean, hence allowing it to happen in unclean facilities without proper training, as well as lack of nutrition.
The article tells how the young mother was escorted to give birth in a cow's shed, lined with cow dung, which many Indians believe purifies childbirth. The article concludes that in India it is normal for a woman to lose a few babies before finally having one child reach adulthood. As noted in the article, many poor communities suffer similar consequences. Lack of education derails people's ability to understand such delicate procedures as childbirth, deeming it unclean and exposing both new mother and newborn to harsh bacteria and diseases by not providing clean facilities and trained staff. The world needs to move forward in educating these societies and providing training, as well as clean facilities, to ensure the health and safety of both mother and child. Although there are many extremist societies that believe in "nature taking its course," the world should not be turning its back on the countries in greatest need of proper education, training and equipment. What do you think? Is there anything we can do to help out?
Memo to Coal States: Adapt or Die Whether or not Congress passes climate legislation setting a cap on carbon dioxide, the future of coal in the United States looks bleak. Coal production in Central Appalachia is expected to decline by nearly 50 percent in the next decade, according to a new report from environmental consulting firm Downstream Strategies. Coal-producing counties in southern West Virginia, eastern Kentucky, southwest Virginia, and eastern Tennessee can expect a sharp decline in production, which has already fallen substantially over the last 12 years. Production peaked in 1997 at 290 million tons and fell 20 percent by 2008, due largely to increased competition from other regions and types of fuel and the depletion of the most accessible coal deposits. Though there are still substantial coal reserves, the study predicts a 46 percent decline in production by 2020 and a 58 percent decline by 2035. And that's not even counting the effect of strict emissions regulations that would likely reduce future demand for coal and cause further decline in coal jobs. In the region examined in the report, 37,000 workers were directly or indirectly employed by the coal industry in 2008, accounting for up to 40 percent of the jobs in some counties. "Should substantial declines occur as projected, coal-producing counties will face significant losses in employment and tax revenue, and state governments will collect fewer taxes from the coal industry," write report authors Rory McIlmoil and Evan Hansen.
"State policy-makers across the Central Appalachian region should therefore take the necessary steps to ensure that new jobs and sources of revenue will be available in the counties likely to experience the greatest impact from the decline."The authors recommend that affected states introduce renewable energy standards of 25 percent by 2025, as well as tax incentives for renewable production, grants, energy bond and loan programs, and job training programs to help workers transition to new industries. And their message may be getting through: even the coal industry's best friend in the Senate, Robert Byrd, has recently warned that coal states must "anticipate change and adapt to it, or resist and be overrun by it."
(NaturalNews) The American College of Gastroenterology recently held its 76th Annual meeting in Washington D.C. At this meeting, two different studies were presented that looked at the effectiveness of probiotic use in the treatment of antibiotic-associated diarrhea and Clostridium difficile-associated diarrhea, which is a complication of long-term antibiotic use. Researchers from the Maimonides Medical Center in Brooklyn, New York conducted a meta-analysis that looked at 22 different studies and included 3096 patients. They found that probiotic prophylaxis significantly reduced the chances of developing antibiotic-associated diarrhea. Researchers from Beth Israel Deaconess Medical Center at Harvard Medical School found similar results when they conducted a meta-analysis of 28 randomized controlled trials involving 3338 patients.

Dangers of Antibiotics

Antibiotic use has long been associated with short-term health problems such as diarrhea, rashes and stomachaches. Between 5% and 39% of all patients put on antibiotics experience diarrhea as a complication, and those over the age of 65 are most at risk. Broad-spectrum antibiotics carry a greater risk than narrow-spectrum antibiotics; however, all antibiotics impart risk, and antibiotic-associated diarrhea can occur up to several weeks after stopping antibiotics.

Probiotics are live microorganisms that, when consumed, benefit the host. Various strains of bacteria and even one strain of yeast have been shown to be beneficial to humans. The use of antibiotics, especially broad-spectrum antibiotics, kills good and bad bacteria alike. Loss of beneficial bacteria makes one more susceptible to diarrhea and other gastrointestinal upsets. Recent studies have shown that taking probiotics may help offset some of the negative consequences associated with antibiotic use: you need to replace the beneficial bacteria that are lost when antibiotics are taken.

Dr.
Steven Shamah, MD, presented the findings of his team's meta-analysis. Of the 22 different studies his team looked at, 63% of the patients included were adults, and all were treated with a variety of probiotics. Thirty-five percent of the studies used S. boulardii, and probiotic treatment length ranged from 5 days to 3 weeks. The meta-analysis found that preventative probiotics reduced the odds of developing antibiotic-associated diarrhea by 60%.

The researchers at the Beth Israel Deaconess Medical Center at Harvard Medical School showed that probiotics were effective at preventing diarrhea in both children and adults, regardless of the type of probiotic or the type of antibiotic used. These patients were receiving single or combination antibiotics to treat a variety of conditions. The researchers also found that probiotics were preventative against diarrhea when antibiotics were taken as treatment for H. pylori.

A review published in May 2011 in Therapeutic Advances in Gastroenterology also showed that probiotics are effective against antibiotic-associated diarrhea; Lactobacillus, S. boulardii and L. rhamnosus GG were the strains found most effective or used in the most studies.

Diarrhea is just one complication of antibiotic use. Other short-term problems include rash and stomachache. However, antibiotic use can also result in long-term problems, the most common of which is overgrowth of Candida yeast. In fact, Candida overgrowth is at the bottom of many common health problems, including headaches, athlete's foot, chronic pain, mood swings and PMS.
Therefore, it is highly recommended that any individual who chooses to take antibiotics also takes probiotics to prevent both short- and long-term health problems from antibiotic use.

Sources for this article include:
http://www.medicalnewstoday.com/releases/236882.php
http://www.naturalnews.com/027198_candida_antibiotics_health.html
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3105609/

About the author: Amelia Bentrup is the owner and editor of http://www.my-home-remedies.com, a well-researched collection of natural home remedies. Discover natural cures for a variety of ailments and find specific information and safety guidelines for various herbs, vitamins, minerals and essential oils.
ATLANTA (AP) — A new report estimates that half the meat and poultry sold in the supermarket may be tainted with the staph germ. That estimate is based on 136 samples of beef, chicken, pork and turkey purchased from grocery stores in Chicago, Los Angeles, Washington, D.C., Flagstaff, Ariz. and Fort Lauderdale, Fla. Researchers found more than half contained Staphylococcus aureus, a bacteria that can make people sick. Worse, half of those contaminated samples had a form of the bacteria resistant to at least three kinds of antibiotics. Proper cooking should kill the germs. But the report suggests that consumers should be careful to wash their hands and take other steps not to spread bacteria during food preparation. The nonprofit Translational Genomics Research Institute in Arizona did the work. © Copyright 2013 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Welcome to the companion Web site to "Fireworks!," originally broadcast on January 29, 2002. This explosive NOVA presents the colorful history of pyrotechnics and reveals how hi-tech firing systems are transforming public displays into a dazzling, split-second science. Here's what you'll find online:

Name That Shell
Watch video clips of fireworks bursting in air and find out how well you know your chrysanthemums from your peonies, your roman candles from your palm trees.

Anatomy of a Firework
Where you see brilliant light and vivid color, a pyrotechnician sees a successful lift charge, black powder mix, time-delay fuse, bursting charge, and other essential ingredients.

Pyrotechnically Speaking
Dr. John Conkling, adjunct professor of chemistry at Washington College and former executive director of the American Pyrotechnics Association, describes what it is about fireworks that gets him, well, all fired up.

On Fire (Hot Science)
This virtual laboratory lets you explore the basics of combustion, including how a fire ignites, what a flame is made of, and how burning molecules rearrange themselves.

Plus Resources and a Teacher's Guide.

NOVA Online is produced for PBS by the WGBH Science Unit. Funding for NOVA is provided by ExxonMobil, David H. Koch, the Howard Hughes Medical Institute, the Corporation for Public Broadcasting, and public television viewers. © created January 2002
[Photo courtesy of PowerLight Corporation: The Mauna Lani, located in Hawaii, is the largest solar-powered resort in the world. Its solar electric systems reduce operational costs as well as contribute to Hawaii's sustainability and environmental preservation by offsetting diesel combustion.]

Saving energy through lighting requires either reducing electricity use or reducing the amount of time the lights are on. To reduce electricity use, you can replace existing fluorescent lights with more energy-efficient models, which draw lower wattage but provide approximately the same light output. Replace incandescent lights (regular old bulbs) that are on more than a few hours a day with compact fluorescent lights (CFLs). For your efforts, you'll see payback in about two years, Barrie says.

Compact fluorescent lights combine the efficiency of fluorescent with the convenience of incandescent. The technology has been around for nearly 20 years, but it has become more popular in the past decade because of advancements in the fixtures that fit them. They have also improved in terms of the amount of light they produce, says Joe Rey-Barreau, former director of education at the American Lighting Association. Compact fluorescents are about four times more efficient than a standard incandescent light, meaning a 25-Watt CFL equals about a 100-Watt incandescent bulb. CFLs also have a longer life than incandescent bulbs. A 100-Watt incandescent brightens a room for about 750 to 1,000 hours, while a CFL keeps going for an average of 10,000 hours. "From the point of view of just efficiency and long-term maintenance, you really can't beat these little guys," Rey-Barreau says.

Until recent years, one of the technical problems with CFLs was their aversion to cold weather. When the temperature got below freezing, they often wouldn't work. Improvements now allow use of them down to about zero degrees Fahrenheit.
If you're really going for the top of the line, then you'd want to look into high intensity discharge (HID) lamps, which are commonly used for outdoor and street lighting. In terms of lighting efficacy, measured in lumens per Watt, these produce the most light for the energy consumed: incandescents produce 17 to 20 lumens per Watt, fluorescents 80 to 100, and HIDs 80 to 140.

[Photo courtesy of PowerLight Corporation: Covering 10,000 square feet of the hotel roof area, the solar electric system atop the Mauna Lani Bay Hotel lowers air conditioning requirements and extends roof life by protecting it from the damaging effects of the weather, all while generating clean electricity.]

For outdoor applications, such as lighting pathways or a field, Rey-Barreau says that HIDs are very useful. But make sure you choose the right type of HID; there are three: mercury, metal halide and high pressure sodium. Mercury HIDs have an efficacy rating of about 60 to 80 lumens per Watt, and they give off a greenish-blue tint. "In an outdoor environment, they make plants look really cool," Rey-Barreau says. Metal halide is really just an improved mercury bulb. It provides higher efficacy (about 80 to 100 lumens per Watt, sometimes higher) and a life span of up to 20,000 hours. Rey-Barreau says these are being used almost exclusively now in outdoor sports facilities and shopping malls. The third type, high pressure sodium, is the most efficient commercially available light bulb, producing a 120 to 140 lumens per Watt rating with a life span of at least 24,000 hours. But its yellowish-orange tint makes people look jaundiced, so these lights are mostly confined to highways.

Whether you'll need a new fixture to fit your new light depends on the fixture, but compact fluorescents come in a variety of shapes and sizes; about 70 percent of incandescent fixtures will handle a direct switch to a compact fluorescent. For added savings, invest in motion detectors.
If you're running an overnight camp, for instance, kids who stumble bleary-eyed into the bathroom at 2 a.m. might stumble out again without turning off the light. A motion-detector system could keep them in check by clicking the light off after a certain amount of time. "But you want to put a long timer on the lights, so people aren't left in the dark," Barrie says. An average motion detector runs about $20 and can simply replace a regular switch.
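The efficiency comparison above reduces to simple arithmetic. Here is a minimal sketch; the electricity price and daily on-time are my illustrative assumptions, not figures from the article:

```python
# Compare a 100 W incandescent with the roughly equivalent 25 W CFL
# mentioned above. Price and usage below are assumed for illustration.
HOURS_PER_DAY = 6        # assumed daily on-time
PRICE_PER_KWH = 0.12     # assumed electricity price, $/kWh

def annual_cost(watts, hours_per_day=HOURS_PER_DAY, price=PRICE_PER_KWH):
    """Annual electricity cost in dollars for one bulb."""
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * price

incandescent = annual_cost(100)   # ~$26.28/year
cfl = annual_cost(25)             # ~$6.57/year
savings = incandescent - cfl      # ~$19.71/year per bulb

# Efficacy cross-check using the lumens-per-Watt figures quoted above:
# a 100 W incandescent at ~17 lm/W gives ~1700 lumens; matching that
# with a fluorescent at ~80 lm/W needs only ~21 W.
print(f"annual savings per bulb: ${savings:.2f}")
```

At roughly $20 saved per bulb per year under these assumptions, a CFL costing a few dollars more than an incandescent pays for itself well within the two-year payback the article cites.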
(A) Curiosity will trundle around its landing site looking for interesting rock features to study. Its top speed is about 4 cm/s
(B) The mission has 17 cameras. They will identify particular targets, and a laser will zap those rocks to probe their chemistry
(C) If the signal is significant, Curiosity will swing over instruments on its arm for close-up investigation. These include a microscope
(D) Samples drilled from rock, or scooped from the soil, can be delivered to two hi-tech analysis labs inside the rover body
(E) The results are sent to Earth through antennas on the rover deck. Return commands tell the rover where it should drive next

The robot fired its ChemCam laser at a tennis-ball-sized stone lying about 2.5 m away on the ground. The brief but powerful burst of light from the instrument vapourised the surface of the rock, revealing details of its basic chemistry. This was just target practice for ChemCam, proving it is ready to begin the serious business of investigating the geology of the Red Planet. It is part of a suite of instruments on the one-tonne robot, which landed two weeks ago in a deep equatorial depression known as Gale Crater. Over the course of one Martian year, Curiosity will try to determine whether past environments at its touchdown location could ever have supported life.

The US-French ChemCam instrument will be a critical part of that investigation, helping to select the most interesting objects for study. The inaugural target of the laser was a 7 cm-wide rock dubbed "Coronation" (previously N165). It had no particular science value and was expected to be just another lump of ubiquitous Martian basalt, a volcanic rock. Its appeal was the nice smooth face it offered to the laser. ChemCam zapped it with 30 pulses of infrared light during a 10-second period. Each pulse delivered more than a million watts of power to a tiny spot for about five billionths of a second.
The instrument observed the resulting spark through a telescope; the component colours would have told scientists which atomic elements were present.
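The pulse figures quoted above can be sanity-checked with simple arithmetic: energy is power multiplied by duration. This sketch just multiplies the stated round numbers (my rounding, not NASA's):

```python
# Energy delivered by one ChemCam pulse: power x duration.
power_w = 1e6          # "more than a million watts"
duration_s = 5e-9      # "about five billionths of a second"
pulses = 30            # pulses fired at the Coronation rock

energy_per_pulse_j = power_w * duration_s      # 5e-3 J = 5 millijoules
total_energy_j = energy_per_pulse_j * pulses   # 0.15 J over the 10 seconds

print(f"per pulse: {energy_per_pulse_j * 1e3:.1f} mJ, "
      f"total: {total_energy_j:.2f} J")
```

The point is that "a million watts" sounds enormous, but because each pulse lasts only nanoseconds, the total energy delivered is a fraction of a joule, enough to vaporise a pinpoint of rock without damaging the target area around it.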
It depends. First, cedar chips take a very long time to decompose, and will affect the soil and water, but not significantly differently than the decomposition of cedar that occurs naturally in forests. However, I suggest that you avoid trying to decompose them with other materials such as garden waste or household vegetable matter.

I assume by "household compost" you mean vegetable and fruit waste from inside the home. In this case, the important thing is to ensure that the composting is done in a container that is fully protected from rats in particular. Some things, like red worms, significantly improve the efficiency of composting these wastes. It is best that you have good drainage to ensure the composted material dries out fairly well. It is a good idea to turn it once in a while, i.e., mix it up; there is a neat tool which makes this quite easy. Done properly, this will not adversely affect the water or soil.

The one thing that I would NOT include in my compost pile is evergreen clippings, needles or cones. These don't compost very well, and will make the soil too acidic.
Oct. 28, 2004 ANN ARBOR, Mich. -- Picture a honeycomb in which each compartment is coated with living cells from a person's mouth, skin or a piece of bone. University of Michigan associate professor Nicholas Kotov believes that one day, the cells in those honeycombs can be used to grow spare parts for our bodies, or even an entire artificial immune system in a bottle. An immune system in a bottle would allow faster and easier production of a flu vaccine, thus preventing another shortage, he said. In addition, the immune system in a bottle will give scientists clues about how to design vaccines that activate an immune response to the unchanging part of a flu virus, making yearly vaccinations, quite possibly, unnecessary, Kotov said.

In the paper "Inverted Colloidal Crystals as 3-D Cell Scaffolds," published last month in the journal Langmuir, Kotov's lab in the chemical engineering department and other collaborators introduced a way to build those cell-incubating honeycombs---called scaffolds---so that even though the cells occupy different compartments in the honeycomb, they share the same conditions, just as they would if growing in the body. Collaborators on the paper include researchers from Oklahoma State University, University of Texas Medical Branch and Stillwater, Oklahoma-based Nomadics Inc. Kotov has appointments in the biomedical, materials science and chemical engineering departments.

The research is so important that the Defense Advanced Research Projects Agency (DARPA) has funded a consortium of research institutions for $10 million to grow the immune system in a bottle. Scientists can study the artificial immune system to see how it reacts to biological hazards and their countermeasures, and use the data to make more effective countermeasures, said Jan Walker, DARPA spokesman.
The birthplace of this artificial immune system is Kotov's three-dimensional scaffold, which is composed of inverted colloidal crystals, also called photonic crystals. Colloidal crystals are hexagonally ordered lattices of highly uniform spherical particles that are packed together. They have a wide range of diameters, from nanometers to micrometers, and this versatility is critical for controlling the life cycle of cells and how they change (i.e. differentiation).

Kotov's team didn't use robotics or complicated computer set-ups to make the scaffolds. Instead, they used heat and gel to make a simple mold. First, they infiltrated the crystal with sol gel. When the gel hardened in the channels between the spheres, scientists heated the crystal to burn away all but the walls left by the hardened gel. What's left is an inverted replica, or a mold, of the crystal.

Historically, scientists cultured cells in plates or dishes where they grow in two-dimensional colonies. But because cells proliferate three-dimensionally in the body, it's critical that scientists develop a three-dimensional scaffold for cell cultures so the cells' development can mimic what happens inside us. This is particularly important for differentiation of stem cells into different lineages of immune cells. The inverted colloidal crystal scaffold could stimulate differentiation of human stem cells from the blood of adults into functional T and B cells. T and B cells help target and kill foreign invaders. "The uniformity of the environment affects the way the cells are developing," Kotov said. "This is particularly relevant for stem cells and other cells that can differentiate. These scaffolds offer a very good control over the environment." The final goal of the DARPA project will be replication of the function of the human bone marrow and thymus.
Besides University of Texas Medical Branch and Nomadics Inc., it also includes Harvard University, Massachusetts General Hospital, Scientific Research Laboratory Inc., and Fred Hutchinson Cancer Center. Later, the artificial bone marrow and thymus will be integrated with other elements of the human immune system being developed by the multi-university team led by VaxDesign Inc. The ability of the inverted colloidal crystal scaffolds to control the differentiation process of the cells also opens possibilities for their use in the treatment of leukemia and other forms of cancer.

For information on Kotov: http://www.engin.umich.edu/dept/cheme/people/kotov.html
The Kotov research group: http://www.engin.umich.edu/dept/che/research/kotov/
The Defense Advanced Research Projects Agency: http://www.darpa.mil/

The University of Michigan College of Engineering is ranked among the top engineering schools in the country. Michigan Engineering boasts one of the largest research budgets of any public university, at $139 million for 2003. Michigan Engineering has 11 departments and two NSF Engineering Research Centers. Within those departments and centers, there is a special emphasis on research in three emerging industries: nanotechnology and integrated microsystems; cellular and molecular biotechnology; and information technology. The College is seeking to raise $110 million for capital building projects and program support in these areas to further research discovery. The CoE's goal is to advance academic scholarship and market cutting-edge research to improve public health and well-being. For more information see the CoE home page: http://www.engin.umich.edu/index.html
Constitution of Oregon: 2011 Version

Sec. 1. Election to accept or reject Constitution
2. Questions submitted to voters
3. Majority of votes required to accept or reject Constitution
4. Vote on certain sections of Constitution
5. Apportionment of Senators and Representatives
6. Election under Constitution; organization of state
7. Former laws continued in force
8. Officers to continue in office
9. Crimes against territory
10. Saving existing rights and liabilities
11. Judicial districts

Section 1. Election to accept or reject Constitution. For the purpose of taking the vote of the electors of the State, for the acceptance or rejection of this Constitution, an election shall be held on the second Monday of November, in the year 1857, to be conducted according to existing laws regulating the election of Delegates in Congress, so far as applicable, except as herein otherwise provided.

Section 2. Questions submitted to voters. Each elector who offers to vote upon this Constitution, shall be asked by the judges of election this question: Do you vote for the Constitution? Yes, or No. And also this question: Do you vote for Slavery in Oregon? Yes, or No. And in the poll books shall be columns headed respectively. “Constitution, Yes.” “Constitution, No.” “Slavery, Yes.” “Slavery, No.” And the names of the electors shall be entered in the poll books, together with their answers to the said questions, under their appropriate heads. The abstracts of the votes transmitted to the Secretary of the Territory, shall be publicly opened, and canvassed by the Governor and Secretary, or by either of them in the absence of the other; and the Governor, or in his absence the Secretary, shall forthwith issue his proclamation, and publish the same in the several newspapers printed in this State, declaring the result of the said election upon each of said questions. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]

Section 3.
Majority of votes required to accept or reject Constitution. If a majority of all the votes given for, and against the Constitution, shall be given for the Constitution, then this Constitution shall be deemed to be approved, and accepted by the electors of the State, and shall take effect accordingly; and if a majority of such votes shall be given against the Constitution, then this Constitution shall be deemed to be rejected by the electors of the State, and shall be void.– Section 4. Vote on certain sections of Constitution. If this Constitution shall be accepted by the electors, and a majority of all the votes given for, and against slavery, shall be given for slavery, then the following section shall be added to the Bill of Rights, and shall be part of this Constitution: “Sec. ___ “Persons lawfully held as slaves in any State, Territory, or District of the United States, under the laws thereof, may be brought into this State, and such Slaves, and their descendants may be held as slaves within this State, and shall not be emancipated without the consent of their owners.” And if a majority of such votes shall be given against slavery, then the foregoing section shall not, but the following sections shall be added to the Bill of Rights, and shall be a part of this Constitution. “Sec. ___ There shall be neither slavery, nor involuntary servitude in the State, otherwise than as a punishment for crime, whereof the party shall have been duly convicted.” [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002] Note: See sections 34 and 35 of Article I, Oregon Constitution. Section 5. Apportionment of Senators and Representatives. Until an enumeration of the inhabitants of the State shall be made, and the senators and representatives apportioned as directed in the Constitution, the County of Marion shall have two senators, and four representatives. Linn two senators, and four representatives. 
Lane two senators, and three representatives. Clackamas and Wasco, one senator jointly, and Clackamas three representatives, and Wasco one representative. Yamhill one senator, and two representatives. Polk one senator, and two representatives. Benton one senator, and two representatives. Multnomah, one senator, and two representatives. Washington, Columbia, Clatsop, and Tillamook one senator jointly, and Washington one representative, and Washington and Columbia one representative jointly, and Clatsop and Tillamook one representative jointly. Douglas, one senator, and two representatives. Jackson one senator, and three representatives. Josephine one senator, and one representative. Umpqua, Coos and Curry, one senator jointly, and Umpqua one representative, and Coos and Curry one representative jointly. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002] Section 6. Election under Constitution; organization of state. If this Constitution shall be ratified, an election shall be held on the first Monday of June 1858, for the election of members of the Legislative Assembly, a Representative in Congress, and State and County officers, and the Legislative Assembly shall convene at the Capital on the first Monday of July 1858, and proceed to elect two senators in Congress, and make such further provision as may be necessary to the complete organization of a State government.– Section 7. Former laws continued in force. All laws in force in the Territory of Oregon when this Constitution takes effect, and consistent therewith, shall continue in force until altered, or repealed.– Section 8. Officers to continue in office. All officers of the Territory of Oregon, or under its laws, when this Constitution takes effect, shall continue in office, until superseded by the State authorities.– Section 9. Crimes against territory. 
Crimes and misdemeanors committed against the Territory of Oregon shall be punished by the State, as they might have been punished by the Territory, if the change of government had not been made.– Section 10. Saving existing rights and liabilities. All property and rights of the Territory, and of the several counties, subdivisions, and political bodies corporate, of, or in the Territory, including fines, penalties, forfeitures, debts and claims, of whatsoever nature, and recognizances, obligations, and undertakings to, or for the use of the Territory, or any county, political corporation, office, or otherwise, to or for the public, shall inure to the State, or remain to the county, local division, corporation, officer, or public, as if the change of government had not been made. And private rights shall not be affected by such change.– Section 11. Judicial districts. Until otherwise provided by law, the judicial districts of the State, shall be constituted as follows: The counties of Jackson, Josephine, and Douglas, shall constitute the first district. The counties of Umpqua, Coos, Curry, Lane, and Benton, shall constitute the second district.–The counties of Linn, Marion, Polk, Yamhill and Washington, shall constitute the third district.–The counties of Clackamas, Multnomah, Wasco, Columbia, Clatsop, and Tillamook, shall constitute the fourth district–and the County of Tillamook shall be attached to the county of Clatsop for judicial purposes.–
Artful teaching: Integrating the arts for understanding across the curriculum, K-8
Source: Teachers College Press, Teachers College, Columbia University; National Art Education Association, New York; Reston, VA, p.181 (2010)
Call Number: Cubb LB1591.5 .U6 I67 2010
Keywords: Art and society--United States, Art--Study and teaching--United States
Contents: Introduction : Five best questions about arts integration : what to ask before you start / David M. Donahue and Jennifer Stuart (with Todd Elkin and Arzu Mistry) -- What is art? / Laurie Polster -- How does art connect to social justice? / David M. Donohue ... [et al.] -- Creating alliances for arts learning and arts integration / Louise Music -- Seeing is believing : making our learning through the arts visible / Stephanie Violet Juno -- Leadership for and in the arts / Lynda Tredway and Rebecca Wheat -- Arts integration : one school, one step at a time / Debra Koppman -- Musical people, a musical school / Sarah Willner -- Visual prompts in writing instruction : working with middle school English language learners / Dafney Blanca Dabach -- Creativity as classroom management : using drama and hip-hop / Evan Hastings -- Keeping reading and writing personal and powerful : bringing poetry writing and bookmaking together / Cathleen Micheaels -- Learning and teaching dance in the elementary classroom / Patty Yancey -- Working with K-12 students : teaching artists' perspectives / Ann Wettrich.
"The book includes rich and lively examples of public school teachers integrating visual arts, music, drama and dance with subject matter, including English, social studies, science, and mathematics. Readers will come away with a deeper understanding of why and how to use the arts every day, in every school, to reach every child."
Clothes Make The Man By Rabbi Elly Broch "You shall make vestments of sanctity for Aaron your brother and his sons to minister to me." (Shemos/Exodus 28:4) The garments that were made to be worn by the Kohanim (priests) who served in the Temple were extremely ornate and impressive, as a glory to G-d to Whom they served. Sefer HaChinuch (1) expounds upon the importance of a salubrious Temple, and for the garments worn by the Kohanim to be so beautiful, thus introducing a foundation in Torah psychology. A person is influenced by his acts and the external environment in which he finds himself. One chosen to serve in the Temple must maintain a sense of awe, as he is in the presence of G-d. In addition, those who come to the Temple to atone for their indiscretions or to show gratitude to G-d must also conduct themselves as appropriate in the Creator's presence. Due to the environment in the Temple, engendered by the physical stimuli, such as the ornate implements and beautiful clothing of the Kohanim, plus the lofty spiritual sensitivity and sterling character of those around this precinct, those who experienced the Temple were transformed. Moreover, whenever the Kohanim noticed the clothing they were wearing, this reinforced the concept as to the importance of what they were doing and Whom they were serving. The clothing that they were wearing and the environment in which they served were all conducive to awareness of G-d. The Talmud (Bava Basra 21a) describes the indispensable work of Yehoshua ben Gamla, who lived during the Second Temple period. He instituted public education for children of all backgrounds and socioeconomic status. Initially, he established schools only in Jerusalem, which required many to travel, based on the verse "From Zion will go out the Torah." 
(Yeshaya/Isaiah 2:3) Tosafos (2) explain that since those in Jerusalem would experience great sanctity, and would witness the Kohanim performing the service, this would encourage greater fear of Heaven and enthusiasm for Torah. Yehoshua Ben Gamla was not only concerned with what the students would learn, but also the environment most conducive for growth. We are all influenced by our dress and surroundings to a greater degree than we are aware. Journals are replete with research concerning conformity, and the influence of positive and negative environmental factors on later development. Since we tend to gravitate towards the norms of our surroundings, it is important to choose our environments carefully. Seemingly unimportant details such as the clothes we wear and the environment in which we live can exert a large influence on our behavior, our thoughts and our service of G-d. Have a Good Shabbos! (1) Classic work on the 613 Torah commandments, their rationale and their regulations, by an anonymous thirteenth century Spanish author (2) The glosses of twelfth and thirteenth century French and German rabbis on the Babylonian Talmud printed in all editions of that work alongside the Text Copyright © 2005 by Rabbi Elly Broch Kol HaKollel is a publication of The Milwaukee Kollel Center for Jewish Studies · 5007 West Keefe Avenue · Milwaukee, Wisconsin · 414-447-7999
LONGVIEW JUNCTION, TX LONGVIEW JUNCTION, TEXAS. Longview Junction, in eastern Gregg County, began in 1873 when the International-Great Northern Railroad completed its line from Hearne to Longview and intersected with the rails of the newly built Texas and Pacific. The tracks connected a mile east of the T&P depot in downtown Longview. A second T&P depot and the I-GN depot were located at the junction. Longview businessmen formed the Longview and Junction Railway Company in 1883 to provide transportation between the main T&P depot in downtown Longview and Longview Junction. The street railway began with one car and one mule. In 1896 a larger, two-mule car was inaugurated. An electric trolley replaced the mule-drawn cars in 1912 and continued until the system was discontinued in 1922. By 1877 the Barner brothers were operating a sawmill at the junction with a capacity of 20,000 board feet a day. Longview Junction prospered in the early 1880s as dwellings and businesses for the railroad industry and its workers were built. A Catholic church was constructed in 1883. The brick, two-story, seventy-five-room Mobberly Hotel was completed in 1884. Local businesses in May 1885 included two saloons, a gambling house, a grocery store, a fruit and cigar stand, a drugstore and news stand, and a dressmaker, as well as two boarding houses, two restaurants, the Mobberly Hotel, and the Junction Hotel. Between 1890 and 1896 the number of dwellings in a five-block area of Longview Junction increased 200 percent. A private, two-room school opened in 1896. In 1904 the city of Longview annexed land on all sides of its corporate limits, including the site of Longview Junction. The additional residents made possible the issuance of bonds for many of Longview's first improvements, including wooden-block pavement for streets, cement sidewalks, and street lights. In 1919 a grade crossing at the junction, one of the longest in Texas, spanned eleven sets of tracks. 
In 1939 the Texas and Pacific Railroad Company constructed an underpass that eliminated the dangerous crossing. The Texas and Pacific moved its division offices and shops from Longview Junction to Mineola in January 1929, thus removing 700 families and a large payroll from the Longview area.

Marker Files, Texas Historical Commission, Austin. Eugene W. McWhorter, Traditions of the Land: The History of Gregg County (Longview, Texas: Gregg County Historical Foundation, 1989).

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Charlotte Allgood, "LONGVIEW JUNCTION, TX," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/hrlag), accessed May 18, 2013. Published by the Texas State Historical Association.
What is Twice a Stranger? ‘Twice a Stranger’ is a cross media project about the greatest forced migrations of the 20th century, when millions of people were uprooted and moved to new homelands. Based on oral video testimonies, rare film archives and photos this project brings visitors face-to-face with the survivors of these traumatic events. From the Greek-Turkish exchange in 1922-24, ‘Twice a Stranger’ travels to the German-Polish forced migration at the end of WW II, to the Partition of India, and to the Cyprus crises of the 1960s and 70s. ‘Twice a Stranger’ is distributed online and via a multi-media exhibition, hosted at the Benaki Museum in Athens, the Leventis Municipal Museum in Nicosia and the Istanbul Bilgi University. The project is accompanied by educational programmes, storytelling sessions, documentary screenings, a children's book, culinary nights, music events, community and outreach events. See our CALENDAR for a detailed list of scheduled events. "Twice a Stranger" is funded with the support of the European Commission (Culture Programme). It is created by Anemon, co-organised by the University of Oxford (Refugee Studies Centre), the Benaki Museum, the Leventis Municipal Museum, the Istanbul Bilgi University and Tolle Idee!, in partnership with ERT, the A.G.Leventis Foundation, the Goethe-Institut, the British Council, the Megaron Athens Concert Hall, the Greek Council for Refugees and the Centre for Asia Minor Studies.
Questions over what to do with a derelict Japanese shrimping vessel floating in the Gulf of Alaska are circulating around several governmental agencies. The ship is a remnant of last year's Japanese earthquake and subsequent tsunami. The Coast Guard's main concerns with the ship are potential maritime traffic problems, as well as its environmental impact. US Senators Mark Begich and Maria Cantwell are also worried about the vessel's impact. They say that research needs to be done in order to better understand possible impacts from debris from the quake. The ship is currently headed towards Sitka at a rate of about one mile per hour, but Coast Guard officials say there is no immediate concern.
Sequestration, a term meaning “to set apart,” is used in several different contexts in the legal community. It can refer to setting apart people, temporarily holding property, or freezing assets, for somewhat different reasons. Usually sequestration is ordered when there is a concern that interference could result in a miscarriage of justice or when there is a worry that someone may attempt to abscond with property or assets if they are not set aside. In terms of people, one form of sequestration is jury sequestration. A jury may be sequestered when there is a reason to fear jury tampering. For the duration of the trial and the juror deliberations, the jurors are isolated from the public so that people cannot intimidate, bribe, or otherwise interfere with the jurors. This may also be done when a case is so high profile that there are concerns that the jurors will be exposed to prejudicial information, ranging from reports on the news to comments from friends and family members. Witnesses are also sequestered. Commonly, witnesses are not allowed to sit in the courtroom when they are not testifying. This is done to prevent prejudice. If, for example, a witness hears an account from another witness, she or he may change the story told on the stand. Sequestering witnesses is designed to ensure that witnesses use their own words and only testify about facts they personally know. Assets and property can be sequestered by court order to await the outcome of a trial. This may be done in ownership disputes, with the court taking custody until the matter is decided, and it can also occur in cases in which there are worries that someone may attempt to remove, damage, or otherwise compromise property and assets. Sequestration orders can extend to items which are not directly being used as evidence in a trial. Property which is sequestered must be adequately cared for while it is under the supervision of the court.
When property and assets are seized or frozen as a result of such an order, the process must be documented in detail. Receipts must be provided and the property will only be released when the rightful owner is identified and confirmed. If someone believes that assets have been wrongly seized, the sequestration can be challenged. This provides mechanisms for appeal in situations where, for example, John Q. Public's bank account is mistakenly frozen instead of John R. Public's account. I had never realized that witnesses were supposed to be sequestered. In the movies they often bring out the surprise, star witness with a big flourish but I thought that was more because they wanted to keep it a secret from the other lawyers (and the audience) until the last minute. Often they'll show a person who is also a witness sitting in the room the whole time the trial is on I guess for the dramatic effect. I know, I know, it's a little silly to take law proceedings on a TV show for granted as real. I think it would bug me though, if I was a witness for something, to never know what the other people were saying, however practical that might be. It's getting to the point where it is almost impossible to truly sequester a jury. What with the technology that most people carry around in their pockets now. If you have an I-phone, you'll have access to all the news reports you care to look up, which means that you can absolutely be influenced by the media, even in a jury room. And it can be hard to justify taking a phone off someone who might need it to communicate with their children or something. Even putting someone in a hotel and taking their phone away might not work if they have cable TV, or an internet connection. And most of the time the damage is already done on big, high profile cases, with the media reporting on crimes before they even get to the jury stage.
“Now the purpose of this letter is to entreat President Lincoln to put forth his Proclamation, appointing the last Thursday in November … as the National Thanksgiving … the permanency and unity of our Great American Festival of Thanksgiving would be forever secured.” — Sarah Hale, 1788-1879

What do Edgar Allan Poe, “Mary Had a Little Lamb” and Thanksgiving have in common? Answer: Sarah Josepha Buell Hale. Sarah Buell Hale was born in New Hampshire to parents who believed in educating daughters as well as sons. Sarah married lawyer David Hale but was widowed nine years later. With five children to support, she turned to writing. She authored the rhyme “Mary Had a Little Lamb,” which is based on a true story. She also single-handedly established Thanksgiving as a national holiday. Sarah Hale became editor of “Godey’s Lady’s Book,” publishing the works of literary greats including Longfellow, Hawthorne, Edgar Allan Poe and Frances Burnett, who later wrote the children’s classic “The Secret Garden.” Poe published so frequently in “Godey’s Lady’s Book” that one biographer quipped that it must have been Poe’s main source of income. At this time, the only nationally designated holidays were Washington’s Birthday and Independence Day. Sarah Hale wanted a third, Thanksgiving Day, a day when the entire nation gave thanks at the same time. According to Sarah Hale, “There is a deep moral influence in these periodical seasons of rejoicing in which whole communities participate. They bring out … the best sympathies in our natures.” She wrote copious amounts of letters and editorials over the span of 40 years in support of this cause. Each President in turn received her plea, including Taylor, Fillmore, Pierce and Buchanan. Each President declined her request. By the mid-1800s relations between the North and South were worsening. In an 1859 editorial, Hale wrote that a united day of Thanksgiving would help bring the country together.
She was unable to prevent the war but did finally succeed in her goal. In 1863 President Abraham Lincoln finally proclaimed the permanent establishment of a national day of Thanksgiving. There are conflicting claims as to who deserves credit for the first Thanksgiving. Plymouth Rock is the most accepted but, chronologically speaking, Plymouth comes in last. The earliest claimant is St. Augustine Florida. Fifth grade teacher Robyn Gioia has written a children’s book titled, “America’s Real First Thanksgiving.” Her research shows that Spanish explorer Don Pedro Menendez de Aviles landed in what is now St. Augustine on Sept. 8, 1565 and held Mass and afterward shared a feast of Thanksgiving with the Timucua Indians. The Timucua brought corn, beans, squash, nuts and shellfish, while the Spanish made a pork, bean and onion stew. In 1598 Juan de Onate took a group of 600 people, 83 wagons and 7000 animals across today’s Rio Grande. Upon arrival just south of modern day El Paso, they held a feast of Thanksgiving with the Manso Indians. Jamestown, Virginia settlers held a Thanksgiving ceremony in 1610 and Berkeley Plantation on the James River in Virginia also claims the first Thanksgiving held on Dec. 4, 1619. Plymouth Plantation is the site of the most famous Thanksgiving, when in 1621 a feast was held with deer, corn, fish and wild turkey brought by the Pilgrims and Wampanoag Indians. Sarah Hale implored presidents, governors and citizens to establish a national holiday of Thanksgiving. A day with all giving thanks could bind us together, she felt, bringing out the best in our natures. This year as we celebrate Thanksgiving, we remember Sarah Josepha Hale and her dream, and celebrate our national unity and blessings of providence, liberty and freedom. — Gordon Mercer is past president and on the Board of Trustees of Pi Gamma Mu International Honor Society and professor emeritus at Western Carolina University. Marcia Gaines Mercer is a published author and columnist.
In 1975, the U.S. Fish and Wildlife Service listed the brown (grizzly) bear as a threatened species in the Lower 48 states under the Endangered Species Act, meaning it is considered likely to become endangered. In Alaska, where there are estimated to be over 30,000 brown bears, they are classified as a game animal with regionally established regulations. Brown bears reach weights of 300-1,500 pounds. The coat color ranges from shades of blond, brown, black or a combination of these; the long outer guard hairs are often tipped with white or silver, giving the bear a grizzled appearance, hence the name. The brown or grizzly bear has a large hump over the shoulders, a muscle mass used to power the forelimbs in digging. The head is large and round with a concave facial profile. In spite of its massive size, this bear can run at speeds of up to 35 mph.
Mars Express captures battered Tharsis Tholus volcano on Mars
Scientists think the volcano emptied its magma chamber during eruptions, and, as the lava ran out onto the surface, the volcano collapsed, forming a large caldera.
November 15, 2011
The latest image released from Mars Express reveals a large extinct volcano that has been battered and deformed over eons. [Image: Battered volcano Tharsis Tholus. Credit: ESA/DLR/FU Berlin (G. Neukum)] By earthly standards, Tharsis Tholus is a giant, towering 5 miles (8 kilometers) above the surrounding terrain, with a base stretching over 100 by 80 miles (155 by 125 km). Yet on Mars, it is just an average-sized volcano. What marks it out as unusual is its battered condition. Shown here in this image taken by the High Resolution Stereo Camera (HRSC) on the European Space Agency’s (ESA) Mars Express spacecraft, the volcanic edifice has been marked by dramatic events. At least two large sections have collapsed around its eastern and western flanks during its four-billion-year history, and these catastrophes are now visible as scarps up to several miles high. The main feature of Tharsis Tholus is, however, the caldera in its center. It has an almost circular outline, about 20 by 21 miles (32 by 34 km), and is ringed by faults that have allowed the caldera floor to subside by as much as 1.7 miles (2.7 km). It is thought that the volcano emptied its magma chamber during eruptions, and, as the lava ran out onto the surface, the chamber roof was no longer able to support its own weight. So the volcano collapsed, forming the large caldera.
As part of the Microscopic Masterpieces class, Rockwood School District third-grade students at the Center for Creative Learning (CCL) in Ellisville worked as authentic scientists and mathematicians to create artwork of enlarged insect parts. They sought accuracy during each stage of the process for the purpose of relaying correct information to their audience, according to Rockwood materials. CCL teacher Sharon Smith said the students began by researching insects and studying their tiniest parts, according to a Rockwood news release. “What amazed me was the perseverance of these students,” Smith said in the release. Students from throughout Rockwood attend the CCL gifted education program. Chris Hartley, entomologist and coordinator of education programs at the Butterfly House, visited the third-grade class to discuss the importance of accuracy when drawing insects. In addition, Ken Brown, Ph.D., Rockwood parent and product development specialist for BASF Pest Control Solutions, shared his insect spreading boards and his personally drawn insect enlargements with students. The students then selected their insect parts from computer images. Students chose a variety of wings, abdomens, antennae, heads and thoraxes to enlarge. After learning and practicing the mathematical design process of drawing to scale, they created their own grids, drew fluent lines, implemented proper shading, constructively critiqued each other’s work and modified their pieces when necessary. Smith said students kept reflection journals to document their accomplishments as well as their struggles during the process. “The time, patience and determination to replicate each detail was inspirational. The students learned so much about insects and so much about themselves through this process, and they are excited to share their work with the community,” she said.
PH sovereignty based on Unclos, principles of international law
By the Department of Foreign Affairs
(Editor’s Note: The following is the position paper from the Department of Foreign Affairs on the Philippine standoff with China at Panatag Shoal, also known as Bajo de Masinloc and Scarborough Shoal, in the West Philippine Sea.)
Bajo de Masinloc is an integral part of the Philippine territory. It is part of the Municipality of Masinloc, Province of Zambales. It is located 124 nautical miles (220 kilometers) west of Zambales and is within the 200-nautical-mile (370-kilometer) exclusive economic zone (EEZ) and Philippine continental shelf. A Philippine Navy surveillance aircraft, patrolling the area to enforce the Philippine Fisheries Code and marine environment laws, spotted eight Chinese fishing vessels anchored inside Bajo de Masinloc (Panatag Shoal) on Sunday, April 8, 2012. On April 10, the Philippine Navy sent the BRP Gregorio del Pilar to the area. In accordance with established rules of engagement, an inspection team was dispatched and it reported finding large amounts of illegally collected corals, giant clams and live sharks in the compartments of the Chinese fishing vessels. The actions of the Chinese fishing vessels are a serious violation of the Philippines’ sovereignty and maritime jurisdiction. The poaching of endangered marine resources is a violation of the Fisheries Code and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
Basis of sovereignty
Bajo de Masinloc (international name, Scarborough Shoal) is not an island. Bajo de Masinloc is also not part of the Spratlys. Bajo de Masinloc is a ring-shaped coral reef, which has several rocks encircling a lagoon. About five of these rocks are above water during high tide. Some of these rocks are about three meters high and can be seen above the water. The rest of the rocks and reefs are submerged during high tide.
Bajo de Masinloc’s chain of reefs and rocks is about 124 nautical miles (220 km) from the nearest coast of Luzon and approximately 472 nautical miles (850 km) from the nearest coast of China. Bajo de Masinloc is located approximately along latitude 15°08’N and longitude 117°45’E. The rocks of Bajo de Masinloc are situated north of the Spratlys. Obviously, then, the rocks of Bajo de Masinloc are also within the 200-nautical-mile EEZ and the 200-nautical-mile continental shelf of the Philippines. A distinction has to be made between the rocks of Bajo de Masinloc and the larger body of water and continental shelf where the geological features are situated. The rights or nature of rights of the Philippines over Bajo de Masinloc are different from the rights it exercises over the larger body of water and continental shelf. The Philippines exercises full sovereignty and jurisdiction over the rocks of Bajo de Masinloc, and sovereign rights over the waters and continental shelf where the rocks of Bajo de Masinloc are situated. The basis of Philippine sovereignty and jurisdiction over the rocks of Bajo de Masinloc is distinct from that of its sovereign rights over the larger body of water and continental shelf.
A. Public international law
The rocks of Bajo de Masinloc are Philippine territory. The basis of Philippine sovereignty and jurisdiction over the rocks is not premised on the cession by Spain of the Philippine archipelago to the United States under the Treaty of Paris. That the rocks of Bajo de Masinloc are not included or within the limits of the Treaty of Paris, as alleged by China, is therefore immaterial and of no consequence. Philippine sovereignty and jurisdiction over the rocks is likewise not premised on proximity or the fact that the rocks are within its 200-nautical-mile EEZ or continental shelf under the UN Convention on the Law of the Sea (Unclos).
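The distance figures above (124 nautical miles to Luzon, well inside the 200-nautical-mile EEZ limit) are a straightforward great-circle computation. As a sketch, the check can be done with the haversine formula; the Zambales coastal coordinate below is an illustrative assumption, not an official baseline point:

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points given in
    decimal degrees, returned in nautical miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

# Bajo de Masinloc, per the position paper: 15°08'N, 117°45'E
shoal = (15 + 8 / 60, 117 + 45 / 60)
# Illustrative point on the Zambales coast (hypothetical coordinate)
zambales_coast = (15.53, 119.90)

d = great_circle_nm(*shoal, *zambales_coast)
print(f"Distance to coast: {d:.0f} nm -> within 200-nm EEZ: {d <= 200}")
```

With these coordinates the distance comes out in the neighborhood of the paper's 124-nautical-mile figure; any point within 200 nautical miles of the baseline falls inside the EEZ under Unclos Part V.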
Although the Philippines necessarily exercises sovereign rights over its EEZ and continental shelf, the reason why the rocks of Bajo de Masinloc are Philippine territory is anchored on other principles of public international law. As decided in a number of cases by international courts or tribunals, most notably the Palmas Island Case, a mode for acquiring territorial ownership over a piece of real estate is effective exercise of jurisdiction. In the Palmas case, sovereignty over the Palmas Island was adjudged in favor of the Netherlands on the basis of “effective exercise of jurisdiction” although the island may have been historically discovered by Spain and historically ceded to the United States in the Treaty of Paris. In the case of Bajo de Masinloc, the Philippines, since it gained independence, has exercised both effective occupation and effective jurisdiction over Bajo de Masinloc. The name Bajo de Masinloc (which means Shallows of Masinloc or Masinloc Shoal) itself identifies the shoal as a particular political subdivision of the Philippine province of Zambales, known as Masinloc. One of the earliest known and most accurate maps of the area, named Carta Hydrographical y Chorographica de las Yslas Filipinas by Fr. Pedro Murillo Velarde, SJ, and published in 1734, showed Bajo de Masinloc as part of Zambales. The name Bajo de Masinloc was given to the shoal by the Spanish colonizers. In 1792, another map, drawn by the Alejandro Malaspina expedition and published in 1808 in Madrid, Spain, also showed Bajo de Masinloc as part of Philippine territory. This map showed the route of the Malaspina expedition to and around the shoal. It was reproduced in the Atlas of the 1939 Philippine Census. The Mapa General, Islas Filipinas, Observatorio de Manila published in 1990 by the US Coast and Geodetic Survey, also showed Bajo de Masinloc as part of the Philippines. 
Philippine flags have been erected on some of the islets of the shoal, including a flag raised on an 8.3-meter-high flagpole in 1965 and another Philippine flag raised by Congressmen Roque Ablan and Jose Yap in 1997. In 1965, the Philippines built and operated a small lighthouse on one of the islets in the shoal. In 1992, the Philippine Navy rehabilitated the lighthouse and reported it to the International Maritime Organization for publication in the List of Lights (currently this lighthouse is not working). Bajo de Masinloc was also used as a target range by Philippine and US naval forces stationed in Subic Bay in Zambales. The Philippines’ Department of Environment and Natural Resources, together with the University of the Philippines, has also been conducting scientific, topographic and marine studies in the shoal. Filipino fishermen have always considered the shoal their fishing grounds because of its proximity to the coast of southwest Luzon. In 2009, when the Philippines passed an amended Archipelagic Baselines Law fully consistent with Unclos, Bajo de Masinloc was classified under the “Regime of Islands” consistent with the Law of the Sea. “Section 2. The baseline in the following areas over which the Philippines likewise exercises sovereignty and jurisdiction shall be determined as “Regime of Islands” under the Republic of the Philippines consistent with Article 121 of the Unclos: a) The Kalayaan Island Group as constituted under Presidential Decree No. 1596; and b) Bajo de Masinloc, also known as Scarborough Shoal.”
Comments on Chinese claims
But what about the historical claim of China over Bajo de Masinloc (Scarborough Shoal)? Does China have superior right over Bajo de Masinloc on the basis of its so-called historical claim? China is claiming Bajo de Masinloc based on historical arguments, claiming it to have been discovered by the Yuan Dynasty.
China is also claiming that Bajo de Masinloc has been reflected in various official Chinese maps and has been named by China in various official documents. Chinese assertions based on historical claims must be substantiated by a clear historic title. It should be noted that under public international law, historical claims are not historical titles. A claim by itself, including a historical claim, cannot be a basis for acquiring a territory. Under international law, the modes of acquiring a territory are: discovery, effective occupation, prescription, cession and accretion. Also, under public international law, for a historical claim to mature into a historical title, a mere showing of long usage is not enough. Other criteria have to be satisfied, such as that the usage must be open, continuous, adverse or in the concept of an owner, peaceful and acquiesced in by other states. Mere silence by other states toward one’s claim is not acquiescence under international law. Acquiescence must be affirmative, such that other states recognize the claim as a right on the part of the claimant that other states ought to respect as a matter of duty. There is no indication that the international community has acquiesced to China’s so-called historical claim. Naming and placing on maps are also not bases for determining sovereignty. In international case law relating to questions of sovereignty and ownership of land features, names and maps are not significant factors in international tribunals’ determination of sovereignty. What about China’s claims that Bajo de Masinloc is traditional fishing waters of Chinese fishermen? Under international law, fishing rights are not a mode of acquiring sovereignty (or even sovereign rights) over an area. Neither could it be construed that the act of fishing by Chinese fishermen is a sovereign act of a state, nor can it be considered a display of state authority. Fishing is an economic activity done by private individuals.
For occupation to be effective there has to be a clear demonstration of the intention and will of a state to act as sovereign, and there has to be peaceful and continuous display of state authority, which the Philippines has consistently demonstrated. Besides, when Unclos took effect, it precisely appropriated various maritime zones to coastal states, eliminating so-called historical waters and justly appropriating the resources of the seas to the coastal states to which the seas are appurtenant. “Traditional fishing rights” is in fact mentioned only in Article 51 of Unclos, which calls for archipelagic states to respect such rights, if such exist, in their archipelagic waters. It should also be noted that in this particular case, the activities of these so-called fishermen can hardly be described as fishing. The evidence culled by the Philippine Navy showed clearly that this was poaching, involving the harvesting of endangered marine species, which is illegal in the Philippines and illegal under international law, specifically the CITES.
B. Basis of sovereign rights
As earlier indicated, there is a distinction between the rocks of Bajo de Masinloc and the waters around them. The question of ownership of the rocks is governed by the principles of public international law relating to modes for acquiring territories. On the other hand, the extent of its adjacent waters is governed by Unclos. The waters outside of the maritime area of Bajo de Masinloc are also governed by Unclos. As noted, there are only about five rocks in Bajo de Masinloc that are above water during high tide. The rest are submerged during high tide. Accordingly, these rocks have a maximum of only 12 nautical miles of territorial waters under Article 121 of Unclos. Since the Philippines has sovereignty over the rocks of Bajo de Masinloc, it follows that it also has sovereignty over their 12-nautical-mile territorial waters.
But what about the waters outside the 12 nautical miles territorial waters of the rocks of Bajo de Masinloc, what is the nature of these waters including the continental shelves? Which state has sovereign rights over them? As noted, Bajo de Masinloc is located approximately at latitude 15°08’N and longitude 117°45’E. It is approximately 124 nautical miles off the nearest coast of Zambales. Clearly, the rocks of Bajo de Masinloc are within the 200 nautical miles EEZ and continental shelf of the Philippines. Therefore, the waters and continental shelves outside of the 12 nautical miles territorial waters of the rocks of Bajo de Masinloc appropriately belong to the EEZ and continental shelf of the Philippines. As such, the Philippines exercises exclusive sovereign rights to explore and exploit the resources within these areas to the exclusion of other countries under Unclos. Part V of Unclos, specifically provides that the Philippines exercises exclusive sovereign rights to explore, exploit, conserve and manage resources whether living or nonliving, in this area. Although other states have the right of freedom of navigation over these areas, such rights could not be exercised to the detriment of the internationally recognized sovereign rights of the Philippines to explore and exploit the resources in its 200 nautical miles EEZ and continental shelf. To do otherwise would be in violation of international law, specifically Unclos. Therefore, the current action of the Chinese surveillance vessels within the Philippine EEZ is obviously inconsistent with its right of freedom of navigation and in violation of the sovereign rights of the Philippines under Unclos. It must also be noted that the Chinese fishermen earlier apprehended by Philippine law enforcement agents may have poached not only in Bajo de Masinloc but likely also in the EEZ of the Philippines. Therefore, these poachers have violated the sovereign rights of the Philippines under Unclos. 
PH archeological vessel
The Philippine National Museum has been undertaking an official marine archaeological survey in the vicinity of Bajo de Masinloc. The archaeological survey is being conducted by the Philippine National Museum on board the Philippine-flag MY Saranggani. Chinese maritime surveillance vessels have been harassing the MY Saranggani. The Philippines has strongly protested the harassments by the Chinese side. The actions by the Chinese vessels are in violation of the sovereign right and jurisdiction of the Philippines to conduct marine research or studies in its EEZ. The Philippine Navy, during a routine sovereignty patrol, saw eight fishing vessels moored at Bajo de Masinloc on April 10. The Philippine side inspected the vessels and discovered that they were Chinese fishing vessels and on board were illegally obtained endangered corals and giant clams, in violation of the Philippine Fisheries Code. The Philippines staunchly protects its marine environment from any form of illegal fishing and poaching. It is a state party to the CITES and the Convention on Biological Diversity. This illicit activity has also undermined the work of the Philippine government as a member of the Coral Triangle Initiative. The coral colonies in Bajo de Masinloc have been in existence for centuries. The Philippines is committed to the process of consultations with China toward a peaceful and diplomatic solution to the situation. As the Department of Foreign Affairs works toward a diplomatic solution, the Philippine Coast Guard is in the area and is continuing to enforce relevant Philippine laws.
Molecular 'on-off' switch for Parkinson's disease discovered
(Medical Xpress) -- Scientists at the Medical Research Council (MRC) Protein Phosphorylation Unit at the University of Dundee have discovered a new molecular switch that acts to protect the brain from developing Parkinson's disease. The findings have helped scientists understand how genetic mutations in a gene called PINK1 lead to Parkinson's in patients as young as 8 years old - which could eventually lead to new ways to diagnose and treat the condition. The job of some proteins inside cells is to switch other important proteins on or off. Understanding how these proteins work and which proteins they target could be the key to why nerve cells die in Parkinson's - and how we can save them. But despite intensive research, the target of the PINK1 enzyme (which is made by the PINK1 gene) has eluded scientists for almost a decade. Now the Dundee team has found that PINK1 switches on a protein called Parkin, whose main job is to keep cells healthy by removing damaged proteins. Mutations in the gene that makes Parkin can also cause inherited forms of Parkinson's in younger patients. The team was led jointly by Dr Miratul Muqit and Professor Dario Alessi at the University of Dundee. "Parkinson's is a devastating degenerative brain disorder and currently we have no drugs in the clinic that can cure or slow the disease down," said Dr Muqit, a Wellcome Trust Clinician Scientist in the MRC Protein Phosphorylation Unit. "Over the last decade, many genes have been linked to Parkinson's but a major roadblock has been determining the function of these genes in the brain and how the mutations lead to brain degeneration." Dr Muqit said, "Our work suggests this pathway can't be switched on in Parkinson's patients with genetic mutations in PINK1 or Parkin. More research will be needed to see whether this also happens in Parkinson's patients who do not carry these mutations."
Professor Alessi, Director of the MRC Protein Phosphorylation Unit, added, "Now that we have identified this pathway, the key next step will be to identify the nature of these damaged proteins that are normally removed by Parkin. Although further studies are required, our findings also suggest that designer drugs that switch this pathway on could be used to treat Parkinson's." The research was funded by the Medical Research Council, Wellcome Trust, Parkinson's UK, the J. Macdonald Menzies Charitable Trust and the Michael J. Fox Foundation. The research is published in the latest edition of the journal Open Biology. The paper was co-authored with Dr Helen Walden from Cancer Research UK's London Research Institute. Journal reference: Open Biology. Provided by University of Dundee
In the latest issue of Nature Genetics (volume 22, July 1, 1999), the first complete clone-based physical map of a plant genome is published. The work was conducted in the group of Dr. Thomas Altmann at the Max Planck Institute of Molecular Plant Physiology, Golm, in collaboration with groups at the Max Planck Institute of Molecular Genetics, Berlin, Germany, the University of Pennsylvania, Philadelphia, and Washington University, St. Louis, USA. The map covers the entire nuclear genome of the higher plant Arabidopsis thaliana. Furthermore, this map is the first ever assembled (for any organism) entirely on the basis of BAC (bacterial artificial chromosome) clones, the premier system for cloning and maintenance of large genomic DNA. A physical map of a genome shows the localisation of all cloned DNA segments of an organism in relative order and distribution over the different chromosomes. The existing Arabidopsis physical maps were predominantly based on YACs (yeast artificial chromosomes, a system for cloning and maintenance of large DNA fragments). The map presented here is highly reliable and offers strongly increased resolution, due to the properties of the BAC cloning system. It is a representation of the entire Arabidopsis genome as a set of 8,285 overlapping BAC clones. The sequence analysis of these BAC clones, currently being done in the framework of the International Arabidopsis Genome Initiative, will lead to elucidation of the complete genomic DNA sequence within the next years. To date, complete genomic DNA sequences are available only for yeast and several prokaryotic microorganisms. Arabidopsis thaliana - a small flowering plant which is also called thale cress and which possesses a very small genome of only 5 chromosomes - is the major model organism of plant molecular biology.
Contact: Thomas Altmann
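A clone-based physical map is, at bottom, a set of overlapping intervals ordered along each chromosome; sequencing projects then typically choose a minimal tiling path, the fewest clones that together still cover the region. As an illustrative sketch (the clone coordinates are made up, not actual BAC data), a greedy interval-covering pass does this:

```python
def minimal_tiling_path(clones, genome_start, genome_end):
    """Greedily select the fewest overlapping clones (start, end)
    that together cover the interval [genome_start, genome_end]."""
    clones = sorted(clones)           # order by start coordinate
    path, covered, i = [], genome_start, 0
    while covered < genome_end:
        best = None
        # among clones starting at or before the covered point,
        # take the one reaching farthest to the right
        while i < len(clones) and clones[i][0] <= covered:
            if best is None or clones[i][1] > best[1]:
                best = clones[i]
            i += 1
        if best is None:              # no clone bridges this point
            raise ValueError(f"coverage gap at position {covered}")
        path.append(best)
        covered = best[1]
    return path

# toy example: hypothetical clone coordinates in kilobases
clones = [(0, 90), (50, 160), (80, 150), (140, 260), (200, 300)]
print(minimal_tiling_path(clones, 0, 300))
# -> [(0, 90), (50, 160), (140, 260), (200, 300)]
```

The greedy choice is safe because, once the covered point advances, any clone skipped earlier ends no later than the clone that was chosen, so it can never extend coverage further. In the real map, overlaps were inferred from fingerprint and hybridization data rather than known coordinates.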
The Epitaphios (Greek: Επιτάφιος, epitaphios, or Επιτάφιον, epitaphion; Slavonic: Плащаница, plashchanitsa; Arabic: نعش, naash) is an icon, today most often found as a large cloth, embroidered and often richly adorned, which is used during the services of Great Friday and Holy Saturday. It also exists in painted or mosaic form, on walls or panels. The icon depicts Christ after he has been removed from the cross, lying supine, as his body is being prepared for burial. The scene is taken from the Gospel of St. John 19:38-42. Shown around him, and mourning his death, may be his mother (the Theotokos); John the beloved disciple; Joseph of Arimathea; and Mary Magdalene, as well as angels. Nicodemus and others may also be depicted. Sometimes, the body of Christ appears alone, except for angels, as if lying in state. The oldest surviving embroidered icon, of about 1200 (Venice), is in this form. The equivalent subjects in the West are called the "Anointing of Christ's body", or Lamentation (with a group present), or the Pietà, with just Christ held by Mary. Usually, the troparion of the day is embroidered around the edges of the icon:
- The Noble Joseph, taking Thy most pure body down from the Tree and having wrapped it in pure linen and spices, laid it in a new tomb.
In the Late Byzantine period, it was commonly painted below a Christ Pantocrator on the apse of the prothesis of churches, illustrating a liturgical hymn which celebrated Christ "On the throne above and in the tomb below". The icon, in particular a panel mosaic version taken to Rome, probably in the 12th century, developed in the West into the Man of Sorrows subject, which was enormously popular in the Late Middle Ages, though this shows a live Christ, normally with eyes open. The Epitaphios is used on the last two days of Holy Week in the Byzantine rite, as part of the ceremonies marking the death and resurrection of Christ. It is then placed on the Holy Table, where it remains throughout the Paschal season.
Vespers on Good Friday
The Deposition from the Cross. Prior to the Apokathelosis (lit. "taking-down from the tree") Vespers on the afternoon of Great Friday, the priest and deacon will place the Epitaphios on the Holy Table. The priest may also anoint the Epitaphios with perfumed oil. A chalice veil and the Gospel Book are placed on top of the Epitaphios. This may be either the large Gospel Book used at the Divine Liturgy, or it may be a small one. During the reading of the Gospel lesson (compiled from selections of all four Gospels) which recounts the death of Christ, an icon depicting the soma (corpus) of Christ is taken down from a cross which has been set up in the middle of the church. The soma is wrapped in a white cloth and taken into the sanctuary. Near the end of the service, the priest and deacon, accompanied by acolytes with candles and incense, bring the Epitaphios in procession from the Holy Table into the center of the church and place it on a table which is often richly decorated for that purpose. The Gospel Book is laid on top of the Epitaphios. In some Greek churches, an elaborately carved canopy stands over the Epitaphios. This bier or catafalque represents the Tomb of Christ. The Tomb is often sprinkled with flower petals and rosewater, decorated with candles, and ceremonially censed as a mark of respect. The bells of the church are tolled, and in traditionally Orthodox countries, flags are lowered to half-mast. Then the priest and faithful venerate the Epitaphios as the choir chants hymns. In Slavic churches, the service of Compline will be served next, during which a special Canon will be chanted which recalls the lamentations of the Theotokos. The faithful continue to visit the tomb and venerate the Epitaphios throughout the afternoon and evening, until Matins -- which is usually served in the evening during Holy Week, so that the largest number of people can attend.
The form which the veneration of the Epitaphios takes will vary between ethnic traditions. Some will make three prostrations, then kiss the image of Christ on the Epitaphios and the Gospel Book, and then make three more prostrations. Sometimes, the faithful will crawl under the table on which the Epitaphios has been placed, as though entering into death with Christ. Others may simply light a candle and/or say a short prayer with bowed head.
Matins on Holy Saturday
The Burial of Christ. During Matins, Lamentations (Greek: Επιτάφιος Θρήνος, epitaphios thrênos, lit. "winding-sheet lamentation"; or Εγκομια, enkomia, "praises") are sung before the Epitaphios as at the tomb of Christ, while all hold lighted candles. The verses of these Lamentations are interspersed between the verses of Psalm 118 (the chanting of this psalm forms a major part of the Orthodox funeral service). The psalm is divided into three sections, called stases. At the beginning of each stasis, the priest or deacon will perform a censing. At the third and final stasis, the priest will sprinkle rosewater on the Epitaphios and the congregation, symbolizing the anointing of Christ's body with spices. Near the end of Matins, during the Great Doxology, a solemn procession with the Epitaphios is held, with bells ringing the funeral toll, commemorating the burial procession of Christ. In Slavic churches, the Epitaphios alone is carried in procession with candles and incense. It may be carried by hand or raised up on poles like a canopy. Many Greek churches, however, will carry the entire bier, with its carved canopy attached. In societies where Byzantine Christianity is traditional, the processions may take extremely long routes through the streets, with processions from different parishes joining together in a central location. Where this is not possible, the procession goes three times around the outside of the church building.
The procession is accompanied by the singing of the Trisagion, typically in a melodic form used at funerals. Those unable to attend the church service will often come out to balconies where the procession passes, holding lit candles and sometimes hand-held censers. In many Greek villages the Epitaphios is also paraded in the cemetery, among the graves, as a covenant of eternal life to those who have passed away. At the end of the procession, the Epitaphios is brought back to the church. Sometimes, after the clergy carry the Epitaphios in, they will stop just inside the entrance to the church, and hold the Epitaphios above the door, so that all who enter the church will pass under it (symbolically entering into the grave with Christ) and then kiss the Gospel Book. In Greek churches, the Epitaphios is then brought directly to the sanctuary, where it remains on the Holy Table until Ascension Thursday. In Slavic churches, it is brought back to the catafalque in the middle of the church (and honored further with more petals, rosewater and incense), where it remains until the Midnight Office at the Paschal Vigil on Great Saturday night. Where the Epitaphios remains in the center of the church, the faithful will continue to venerate it throughout Great Saturday.
Liturgy on Holy Saturday
The Hours on Holy Saturday will be read near the Epitaphios, and certain portions of the Liturgy that would normally be done at the Holy Doors (Litanies, reading the Gospel, the Great Entrance, etc.) are instead done in front of the Epitaphios. In the Slavic use, during the Midnight Office, after the Opening and Psalm 50, the Canon of Great Saturday is chanted (repeated from the Matins service the night before) as a reflection upon the meaning of Christ's death and His Harrowing of Hell.
During the last Ode of the Canon, at the words, "weep not for me, O Mother, for I shall arise...", the priest and deacon dramatically raise the Epitaphios (which represents the dead body of Christ) from the bier and carry it into the sanctuary, laying it upon the Holy Table, where it will remain throughout the Paschal season as a reminder of the burial cloth left in the empty tomb (John 20:5). During Bright Week (Easter Week), the Royal Doors of the sanctuary remain open as a symbol of the empty tomb of Christ. The Epitaphios is clearly visible through the open doors, and thus symbolizes the winding sheet left in the tomb after the resurrection. At the end of Bright Week, the Holy Doors are closed, but the Epitaphios remains on the Holy Table for 40 days, as a reminder of Jesus' physical appearances to his disciples from the time of his Resurrection until his Ascension into heaven.
Epitaphios of the Theotokos
An Epitaphios of the Theotokos also exists. This too is a richly embroidered cloth icon, but depicting instead the body of the Theotokos lying in state. This is used on the feast of the Dormition of the Theotokos on 15 August, known in the West as the Assumption of Mary. The Epitaphios of the Theotokos is used with corresponding hymns of lamentation, placed on a bier, and carried in procession in the same way as the Epitaphios of Christ, although it is never placed on the Holy Table. The Rite of the "Burial of the Theotokos" began in Jerusalem, and from there it was carried to Russia, where it was used in the Uspensky (Dormition) cathedral in Moscow. Its use has slowly spread among the Russian Orthodox, though it is not by any means a standard service in all parishes, or even most cathedrals or monasteries. In Jerusalem, the service is chanted during the All-Night Vigil of the Dormition. In some Russian churches and monasteries, it is served on the third day after Dormition.
- ↑ G. Schiller, Iconography of Christian Art, Vol. II, 1972 (English trans. from German), Lund Humphries, London, p. 199, ISBN 853313245
It's a chicken and egg question. Where do the infectious protein particles called prions come from? Essentially clumps of misfolded proteins, prions cause neurodegenerative disorders, such as mad cow/Creutzfeldt-Jakob disease, in humans and animals. Prions trigger the misfolding and aggregation of their properly folded protein counterparts, but they usually need some kind of "seed" to get started. Biochemists at Emory University School of Medicine have identified a yeast protein called Lsb2 that can promote spontaneous prion formation. This unstable, short-lived protein is strongly induced by cellular stresses such as heat. Lsb2's properties also illustrate how cells have developed ways to control and regulate prion formation. Research in yeast has shown that sometimes, prions can actually help cells adapt to different conditions. The results are published in the July 22 issue of the journal Molecular Cell. The senior author is Keith Wilkinson, PhD, professor of biochemistry at Emory University School of Medicine. The first author is senior associate Tatiana Chernova, PhD. The aggregated form of proteins connected with several other neurodegenerative diseases such as Alzheimer's, Parkinson's and Huntington's can, in some circumstances, act like prions. So the Emory team's finding provides insight into how the ways that cells deal with stress might lead to poisonous protein aggregation in human diseases. "A direct human homolog of Lsb2 doesn't exist, but there may be a protein that performs the same function," Wilkinson says. "The mechanism may say more about other types of protein aggregates than about classical prions in humans. This mechanism of seeding and growth may be more important for aggregate formation in diseases such as Huntington's." Lsb2 does not appear to form stable prions by itself. Rather, it seems to bind to and encourage the aggregation of another protein, Sup35, which does form prions.
"Our model is that stress induces high levels of Lsb2, which allows the accumulation of misfolded prion proteins," Wilkinson says. "Lsb2 protects enough of these newborn prion particles from the quality control machinery for a few of them to get out." Explore further: Unlocking secrets of cell reproduction More information: T.A. Chernova et al. Prion Induction by the Short-lived Stress Induced Protein Lsb2 Is Regulated by Ubiquitination and Association with the Actin Cytoskeleton Mol. Cell (2011).
I think you're looking in the wrong direction. There are two aspects to consider: the security of your laptop, and the security of your connections. For the security of your connections, what matters is that you are using SSL (or TLS — treat it as a synonym of SSL) with a correct certificate. An HTTPS connection means HTTP (the usual web protocol) over SSL. SSL provides end-to-end confidentiality and integrity protection, so it doesn't matter whether you are browsing from a “secure” network or from a public wifi hotspot. What does “correct certificate” mean? A certificate is a website's “identity card”, providing a cryptographic means for your browser to verify that the website is who it claims to be. If the certificate verification didn't happen, you would have no way to know whether the SSL connection was going to the legitimate website or to a man-in-the-middle. To a first approximation, you need to check three things to know that you have a secure connection to the desired website:
- The URL must begin with https://, and browsers will typically show a padlock icon next to the URL.
- If you see any scary warning, the connection is not secure. (A scary warning could be due to server misconfiguration too, and this is unfortunately more common than it should be. But if you see a scary warning when attempting to connect to your bank, I don't advise bypassing the warning.)
- You must be connecting to the right URL in the first place. This means you should always connect to your bank from a bookmark, not by typing the URL (risk of typo) and never ever by clicking in an email or web link that you're not 200% sure comes from the bank (42nd National Bank is probably not a legitimate site).
A VPN doesn't add much security over an HTTPS connection.
A VPN protects the connection from your laptop to the VPN endpoint, which includes the point at which attacks are most likely (the local network your laptop is plugged into or the wifi hotspot that it's connected to), but HTTPS provides end-to-end confidentiality and integrity anyway. VPNs have their uses, but they're essentially irrelevant for web banking:
- An enterprise VPN connects your laptop to your enterprise network. The main point is to make securing the enterprise network a lot easier: anyone trying to connect to a server on the enterprise network must have passed some form of authentication already, either physically on the premises or logically by possessing the VPN key/password.
- A VPN can provide a bit of privacy at the location where your laptop is connecting from: anyone snooping there will only see your VPN traffic as a whole, instead of individual connections which are undecipherable (if using SSL correctly) but whose endpoint is clearly identified.
- A VPN can let you connect to sites that are blocked by an enterprise, ISP or government firewall, as long as those sites are visible from the VPN endpoint.
As far as securing a connection from your laptop is concerned, WEP and WPA(2) are completely irrelevant. They are technologies for securing a wifi access point; a laptop connecting to that access point doesn't benefit from them in any useful fashion. IPsec, SSL/TLS, SSH can be technologies underlying a secure connection such as a VPN, but they're not really relevant at your level. They compete on ease of set up, possibility of piercing through firewalls, performance, but not on security. DNSSEC today isn't widely deployed. Until then, assume that DNS is insecure, and rely on SSL to tell you whether you're connecting to the right site. Connection hijacking could happen at the IP level anyway. Finally, none of these are relevant to securing your computer against external or internal attacks.
For external attacks tried by someone on the local network, what matters is not what protocols you actively use but what protocols you have open on your machine. The defense is not to run services that you don't use, to have sane firewall settings (most laptops don't need to accept any form of incoming connection) and to keep your operating system and applications up to date. The biggest attack vector nowadays is through content that you have retrieved, e.g. a web page that attempts to exploit a bug in your web browser. The defense against these is not to download risky files such as executables, to avoid browsing dodgy sites or clicking on links in suspicious emails, and to keep your operating system and applications up to date.
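The certificate checks described earlier can be sketched with Python's standard `ssl` module. This is an illustrative snippet, not a full client: `fetch_cert` is a hypothetical helper name, and the point is that a default context already enforces both certificate verification and hostname matching, just as a browser does.

```python
import socket
import ssl

def fetch_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection to `host`, verifying its certificate against
    the system trust store, and return the peer certificate on success.
    A bad or mismatched certificate raises ssl.SSLError instead."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# The default context enforces the checks discussed above out of the box:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # the certificate chain must verify
print(ctx.check_hostname)                    # the name in the cert must match the host
```

Calling `fetch_cert("example.com")` on a machine with network access either returns the site's certificate or raises an error — which is exactly the "scary warning" case: the right reaction is to stop, not to disable verification.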
Capturing Climate Change
Predicting and preparing for the effects of global warming
This is part of IEEE Spectrum's special report: Critical Challenges 2002: Technology Takes On
The government of Tuvalu, a Pacific Island nation, made a plea last summer for countries to take in Tuvalu evacuees, fearing a rising sea level will ultimately sink the country. New Zealand is considering the request. Lowland flooding and salt-water intrusion into drinking water are already happening. Researchers at Iowa State University have started work on corn hybrids that would thrive in significantly different growing conditions from those common today, including different temperatures, hours of daylight, and precipitation levels. The Alaska Department of Transportation is testing ways of preserving permafrost under roads to prevent the sudden formation of sinkholes. One idea, painting highways white to reflect the sun's heat, failed because drivers had trouble with the glare. These efforts are not unrelated, but are signs of preparations being made to deal with the increase in global mean temperatures expected by the end of the century, a change of 1.4 °C to 5.8 °C from 1990, that will have impacts in the lifetimes of current generations. (By comparison, the difference between global mean temperatures today and during the Ice Age some 20 000 years ago is roughly 4 °C.) Global mean temperature is the area average of the surface temperature over the globe.
Debate in decline
Though for decades arguments have raged over whether human activities cause changes in climate, these battles may be nearing an end. It is hard to dispute that the earth's climate is getting warmer. The apparent reason is a measurable increase in greenhouse gases, most notably carbon dioxide, but also methane, nitrous oxide, chlorofluorocarbons (CFCs), and ozone. Some do disagree. And this group, while not large, is vocal.
Some accept the evidence for a warming planet, but not that it is due to human activities. Others think a negative feedback effect will kick in or that the effects will be minor or even positive. For example, Richard S. Lindzen, Alfred P. Sloan Professor of Meteorology of the Massachusetts Institute of Technology in Cambridge, dismisses the existence of a connection between the rises in atmospheric carbon dioxide and global mean temperatures. This is a key point, for if global temperature increases do not depend on an increase in carbon dioxide, then plans to reduce the amount of it entering the atmosphere, as proposed in the Kyoto Protocol, are pointless. Also doubtful are Sallie L. Baliunas and Willie Soon, researchers at the Harvard-Smithsonian Center for Astrophysics, Washington, D.C., who contest linking increased industrial activities to increased atmospheric carbon dioxide. Meanwhile, the National Tidal Facility in Australia has questioned whether the sea level change seen at Tuvalu represents more than anomalies caused by weather patterns. But many scientists say global warming is real and will have serious effects. They also believe that nothing we do now can immediately stop it. Our best efforts, though important, will only slow it down. The questions of today are how well the effects can be predicted and how to cope with them. According to the 2001 report of the International Panel on Climate Change (IPCC), a group of some 3000 scientists from around the world convened by the United Nations, "there is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities." Global warming is a catch phrase for the increase in the globe's mean temperature due to a buildup of atmospheric greenhouse gases. It also refers to the negative effects caused by that temperature rise, like melting glaciers, higher oceans, or different precipitation patterns. 
The evidence mounts
Holding the biggest piece of the pie in terms of greenhouse gases is carbon dioxide. It is fairly easy to measure because it mixes uniformly through the layers of the atmosphere on time scales of a year or two. So it is no surprise that researchers, looking to quantify evidence supporting global warming, checked to determine whether the amount of carbon dioxide has been increasing at any one location over time. Since 1958, these measurements have been made directly at a site in Hawaii; data for prior years is gathered by sampling bubbles of air trapped in ice cores [see figure]. The measurements indicate that while carbon dioxide levels have varied over hundreds of thousands of years, the upswing that began with the industrial age about 200 years ago shows an unprecedented rate of change. The next bit of evidence is global mean temperature. This data is gathered from instrument records back to 1860, plus indicators that are sensitive to climate, such as tree rings, ice layers as measured in cores of ice from glaciers, ice caps, and ice sheets, and annual coral rings from cores in coral colonies. Over the last hundred years, the data shows an increase of at least 0.55 °C, a larger fluctuation than in any other past millennium. "That increase," said Kevin Trenberth, head of the climate analysis section for the National Center for Atmospheric Research [NCAR] in Boulder, Colo., "is one of the main reasons we believe we have detected climate change. The trends in the temperature record, particularly in the last 20 years, are now outside the realm of natural variability. They're not caused by variations in solar radiation. They're not caused by pollution from volcanic eruptions. The [increase in] global mean temperature is outside the realm of anything that can be accounted for except by the increases in greenhouse gases." Other evidence indicates the world is getting warmer. With a few exceptions, glaciers are melting.
Oceans, measured consistently since the 1950s, are warming up. Sea ice in the Arctic Ocean, to cite recently declassified submarine data, has thinned by about 40 percent since the 1970s and diminished in extent. Sea level rose about 15 cm in the past 100 years, what with glaciers melting and oceans warming. The freezing season, or how long lakes and rivers around the world remain frozen in winter, has decreased by one to two weeks. Vegetation is creeping up mountains. And the list goes on.
Making the connection
The theory behind global warming starts with the Greenhouse Effect, first defined over 100 years ago. Greenhouse gases do not stop the sun's radiation from penetrating the atmosphere and reaching the earth, where it is converted into heat. But once that happens, the gases act as a blanket, reducing the amount of heat that can escape. This natural effect makes the planet habitable. If the amount of greenhouse gases increases, according to the next step in this theory, the earth gets warmer. Therefore, since human activities--including the burning of fossil fuels and wood, the cutting down of forests, and the intensification of agriculture--cause such an increase, then human activities are responsible for global warming. Climate models are used to test this theory. They grew out of efforts to get computers to predict the weather. They use information on what forces influence the weather and climate: for example, the amount of solar energy reaching the earth and its distribution; the earth's surface characteristics; the composition of the atmosphere, including the amount of particles in it; and basic laws of physics expressed as mathematical equations, including those for the conservation of thermodynamic energy, the conservation of air mass and water, and the behavior of air and water as fluids.
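The "blanket" idea can be illustrated with a toy zero-dimensional energy-balance calculation — a sketch only, vastly simpler than the gridded models described in this article. Absorbed sunlight must balance outgoing infrared, S(1 - a)/4 = εσT⁴; a thicker greenhouse blanket (lower effective emissivity ε) forces the surface temperature T upward. The emissivity values below are illustrative, chosen to bracket the no-greenhouse and present-day cases.

```python
# Toy zero-dimensional energy-balance model (illustrative only).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # fraction of incoming sunlight reflected to space

def surface_temp(emissivity: float) -> float:
    """Equilibrium surface temperature (K) where absorbed sunlight
    balances outgoing infrared: S0*(1-ALBEDO)/4 = eps*SIGMA*T**4."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# Less infrared escapes through a thicker greenhouse "blanket"
# (lower effective emissivity), so the surface must warm to compensate.
print(round(surface_temp(1.00), 1))   # ~255 K: bare planet, no greenhouse effect
print(round(surface_temp(0.612), 1))  # ~288 K: roughly today's global mean
```

The roughly 33 K gap between the two answers is the natural greenhouse effect that makes the planet habitable; adding greenhouse gases nudges the effective emissivity lower still.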
All this data serves to relate temperatures, pressures, winds, humidity, clouds, and rainfall to one another in a physically consistent way to simulate the climate and make predictions. These models run on supercomputers: on vector machines that use a single or a small number of central processors to operate on arrays of numbers or on massively parallel computers. Today, climate models typically consider the earth as a grid of 100-kilometer-square boxes, with about 20 layers in the atmosphere plus 20 more in the oceans [see figure, left]. (Using a higher resolution would add to the accuracy of models, but could bring even today's fastest supercomputers to their knees.) Early on, climate models simply dealt with the atmosphere, focusing on temperature patterns worldwide to predict the development of storms several months out. Today's climate models link models of the atmosphere, the oceans, and land surfaces, including vegetation--light grasses reflect sunlight, for example, whereas dark trees absorb it [see figure, above]. Scientists all over the world work on climate models. Nearly 30 key modeling efforts are ongoing at 17 facilities. "Though it is hard to put an overall score on the performance of a model, the models that do best come from the UK Meteorological Office in Bracknell; the Max Planck Institute for Meteorology in Hamburg, Germany; and NCAR," Curtis C. Covey, a physicist in the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory, Livermore, Calif., told IEEE Spectrum. "The distinction between the various models tends to be in the degree of complexity they assign to certain processes," said William Collins, a scientist at NCAR. The basic science they use, he said, is the same, because researchers quickly publish any advances they make. Besides these ongoing climate-modeling efforts, a new project has been launched in Yokohama, Japan. NEC Corp. 
is building an ultrahigh-speed 640-node parallel supercomputer to have a maximum performance of 40 teraflops. Called the Earth Simulator, this system is to run a climate model with a resolution in the tens rather than the hundreds of kilometers. It is expected to be operational this March. Researchers elsewhere in the world are eager for opportunities to run their models on this supercomputer. What that kind of computing power will allow, besides the increased resolution, Collins told Spectrum, is the running of ensembles of simulations--that is, groups of simulations with slightly different initial conditions. In effect, any differences in the outcome would be due to short-term variability in weather, not overall climate changes, and averaging the answers would lead to a clearer picture of climate change. It will also allow simulations to run many more years than today's typical simulations of 100 to 300 years, an advantage that would, again, remove short-term climate fluctuations from long-term climate change, making it easier to distinguish the signal from the noise. Meanwhile, in the United States, new computers have begun to be delivered to NCAR. IBM Corp.'s Blue Sky parallel supercomputer, when fully installed by the end of this year, will have a speed of 7 teraflops. This capability will allow researchers to run 50 or so one-century climate simulations per month, compared with six today. Given surface temperatures in the mid-19th century, along with information about changes in solar radiation, air pollution, and increases in carbon dioxide, the models around the world today can quite accurately generate simulations of the earth's average surface temperature over the last 100 years. Take out the carbon dioxide and air pollution data, however, and the models diverge from observations. Climatologists point to this fact as the "smoking gun," linking human activities to climate change [again, see figure]. Today's climate models do have weaknesses, though. 
One is cloud simulation. "We do not have a first-principles theory for how clouds form and how they interact with moisture," said Collins. "Since clouds play a major role in the climate system, that lack of a theory remains a serious unknown." Understanding how clouds behave has been identified by the IPCC as the next major challenge for future climate models. Another uncertainty in the models is the role of aerosols. These are microscopic particles, like soot from combustion of fossil fuel or volcanic eruptions. Some, but not all, act to shade the earth. Aerosols can complicate the global warming equation; cutting fossil fuel use may not have as big an effect as expected because of the corresponding reduction in aerosols. "We don't fully understand their optical properties," said Collins. "Their main effect is to reflect sunlight, which cools the climate a little. But there are also aerosols, like soot from diesel engines, that absorb sunlight and heat the atmosphere. And the real joker in the deck is the indirect effect of aerosols on clouds--aerosols make clouds brighter, causing them to reflect more sunlight away from earth." Both the cloud and aerosol problems are areas of much research worldwide.
Eye on the earth
Inputs for climate models stem from several sources. Ground-based weather stations supply temperature and humidity. Ships supply ocean temperatures. Boreholes provide temperatures several kilometers deep in the earth. Weather balloons give humidity, temperature, and wind information from various levels in the atmosphere, and weather satellites add factors like cloud cover. Solar radiation data derives from multiple sources, including the Department of Agriculture and the National Oceanic and Atmospheric Administration (NOAA), which collect it at the surface to help farmers predict crop growth, and NASA with NOAA, which collect it from space.
The problem with all this data, Collins told Spectrum, is that, while it is accurate enough for short-term weather forecasting, it doesn't do a great job for long-term climate modeling. The readings drift, he said, so "it's very hard to look for subtle changes in the climate system when you're fighting this enormous instrumental artifact." Somewhat more consistent data comes from the Global Climate Observing System, a network of about 1000 ground-based stations established in 1992 and scattered around the world. However, reliability of this data varies as well because nations change instruments and locations to better use them for weather forecasting, and some regions of the world are only sparsely covered. Better data would make the models more accurate. One new effort is a NASA project, internally nicknamed A-train, that will put in place a "train" of polar-orbiting satellites as part of the Earth Observing System to make a variety of measurements. The group of satellites is expected to be completely assembled by 2004. Participating in this international project with the United States are Canada, England, France, and Japan. In another undertaking later this year, a satellite called IceSAT is expected to be launched by NASA. IceSAT will use lidar (laser radar) to monitor the height of ice sheets above sea level to a resolution of centimeters. NCAR's Collins is hoping that this satellite will also provide information about the three-dimensional distribution of aerosols in the atmosphere until the A-train starts measuring that phenomenon directly. Another satellite data-collection effort uses, in part, satellites already in orbit--the 24 satellites of the global positioning system (GPS).
In a 1992 paper, Michael Bevis of the University of Hawaii suggested that by adding a barometer and a thermometer to GPS reference stations on the ground, and by augmenting the analysis of the data they collect, it would be possible to compute the total atmospheric water vapor content overlying each GPS station, and to use these measurements as input to weather forecasting systems. This approach, known as ground-based GPS/Met, is now undergoing pre-operational trials by national weather services in the United States, Japan, Germany, and elsewhere. This idea was also of critical interest to climate change researchers, because changes in water vapor are related to global warming. Tom Yunck, a scientist at the Jet Propulsion Laboratory, in Pasadena, Calif., suggested that a more detailed atmospheric profile could be obtained from GPS signals received by GPS receivers located on dedicated satellites in low earth orbit. A 1995 test by the University Consortium for Atmospheric Research, Boulder, Colo., showed that it is indeed possible to create high-resolution vertical profiles of the pressure, temperature, and water vapor in the atmosphere. In 2000, two other satellites, testing the same phenomenon, were launched, one a joint effort by the United States and Germany and the other an effort between the United States and Argentina. The next evolutionary step of this technology is due to launch in 2005. Cosmic--for Constellation Observation System for Meteorology, Ionosphere, and Climate--will be a network of GPS receivers on six satellites. A joint venture of the United States and Taiwan, Cosmic will provide high-resolution (less than 1 km vertically) atmospheric profiling of the entire globe, a total of 3000 profiles a day. Besides providing better information for climate models, Cosmic is expected to improve weather forecasting dramatically, particularly for regions over the oceans, where there is a dearth of information on atmospheric conditions. 
Frankly, if all the climate models had to do was demonstrate the existence of global warming, better measurements and better models would be unnecessary. But the models have a perhaps more critical role to play. "There is more pressure on the climate models to come up with answers as to what is going to happen, because if we can't stop [global warming], we want to know how we're going to have to adapt to it," said Gerald Meehl, a research climatologist at NCAR. It is unfortunately clear that carbon dioxide is not going to level off at anything near today's level of approximately 370 parts per million. The number quoted now as a possible goal is stabilization at twice pre-industrial levels, or 550 parts per million. This goal would require reducing carbon dioxide emissions to levels that are half or less than half of those today. "I think that's a little high," John Firor, former director of NCAR, told Spectrum. "We've got a lot of impacts already today--to double those impacts is worrisome. I'd be more content if we went to 450 parts per million, which is still troublesome, but is in the range of possibility. What is not possible is getting back to pre-industrial levels." Or even stabilizing at today's levels. In terms of temperature, the best--meaning the smallest increase--most scientists are hoping for is an increase in global mean temperatures of 1.4 °C (using the estimates of the United Nations panel's 2001 report) by 2100. The worst case projects an increase of 5.8 °C. These numbers were among many derived from 35 scenarios of how the future will unfold. One scenario, for example, projects rapid worldwide economic development, car ownership climbing in the Third World and staying high in developed countries, and continued dependence on oil and natural gas. 
Another scenario paints a picture in which there is greater concern worldwide for environmental sustainability, while educational levels increase worldwide, reducing population growth, and certain regions move toward using less carbon-based fuel. The price of warming The UN panel did not make any attempt to identify which scenario, or which degree of warming, is most likely. But even taking the lowest global mean temperature--1.4 °C--effects will be noticeable. "It's hard to imagine stabilization [by human intervention] at a level where there would be negligible negative effects everywhere," said David Schimel, senior scientist at NCAR, "because it is going to take some effects to motivate changes. And because we're close to the thresholds for negative effects already in some regions, it may not be long before we see significant consequences." The negative effects cover many areas. At a minimum, because of some glacial melting and the fact that as water warms, it expands, sea level is projected to rise 20 cm by the year 2100. However, recent satellite measurements are also showing a melting of Greenland's ice sheet; adding this water into the calculations means that sea level could go up as high as 100 cm in that same time. Forests are expected to migrate north, as optimal conditions for tree growth change. The composition of forests is also expected to change, as trees with windblown seeds migrate faster than those that simply drop their seeds. Forest fire patterns would probably change, with fires becoming bigger and hotter, both because of warmer and drier conditions and because plants would grow faster, providing more fuel. When the mean temperature increases about 2 °C, world agriculture would have to make serious adjustments. "We can say that we're sure that at this point, there will be an effect," Schimel said. 
"But we're pretty sure that ecosystems are more sensitive than our current models suggest, and suspect that there could be an effect long before that definitive point." New plant varieties would have to be developed to handle a growing season in which either the ground warms while the days are short or the ground temperature is unchanged but the sunlight lasts longer. In other words, optimal summer growing temperatures for various crops will be found north of their locations today. This is doable for most major crops, according to Elwynn Taylor, an agricultural meteorologist at Iowa State University in Ames. All the same, some specialty crops, including cranberries, strawberries, coffee, and tea, can grow in only a limited range of conditions, even with intensive genetic engineering. On the plus side, in some areas a longer growing season may mean time for an additional crop to be planted and harvested each year, and could open up the possibility of different crops. Weather, in general, would get more extreme--when it rained, for example, it would rain harder, because the air would hold more moisture. Global warming also increases drying, creating a greater risk of drought in places that already get scant rainfall. The battle against disease would become a little tougher, too. Dengue fever, recently found in Texas, and the West Nile virus, found in New York City, are both mosquito-borne; they have not previously been a major problem in the United States and other temperate regions because winter cold tends to kill off the disease-bearing mosquitoes. The recent outbreak of West Nile virus, though, showed that it can survive New York winters. Cholera also thrives in warmer weather. "Except for sea level rise, which is a pure effect of climate change, the other problems we study already exist," Firor told Spectrum. "Malaria kills a million children every year. Forests are being destroyed by chainsaws. 
What climate change will do is amplify and exacerbate current global problems." Designers of climate models are now trying to predict how the negative effects of global warming could play out in different regions. As a result, said Linda Mearns, deputy director of the Environmental and Social Impacts Group at NCAR, "There's been an explosion in regional climate modeling." If countries projected to be adversely affected turn up the political heat, such models could play a part in reducing greenhouse gas emissions.
In your backyard
These regional models, which today typically go down to resolutions of about 50 km, are nested inside global models in order to respond as global conditions change. Running them has shown several surprising effects that warrant further study. One such unexpected result came out of a study Mearns did in 1998 with Filippo Giorgi, a scientist at the International Centre for Theoretical Physics, Trieste, Italy. In a 50-km simulation of the western United States that included the first attempt to accurately represent the Rocky Mountains, precipitation over the Great Plains emerged as significantly different (heavier in most seasons) from the results of the global model used to direct the regional model. One outcome many global climate models show is a decrease in rainfall in the center of continents during the growing season, because the warmer air increases evaporation. Such projections worry countries like China, in particular. Its scientists are already noting that a warming and drying trend is hurting agriculture, which is already being pushed to its limits of production. Determining future changes in regional rainfall is not something climate models do well, though they are improving. "We don't know whether wet areas will get wetter and dry drier, or if dry areas get wetter and wet areas get drier. Or some combination of that," said Michael Glantz, a senior scientist at NCAR. 
Glantz is looking at likely winners and losers in the global warming game, and precipitation is an important area of the research. Said NCAR's Schimel: "If the southeastern United States, for example, gets a lot warmer and rainfall increases, it will still be a forest. But if it gets two degrees warmer and rainfall stays the same, then it becomes a giant tinderbox." In a number of areas, especially in the United States and Europe, multiple thresholds are apparent--that is, changes will depend on different relative moves of temperature and rainfall. At this point, though, Schimel told Spectrum, "We just don't know how regional temperatures and rainfall will track each other." The first losers may be Pacific Island nations like Tuvalu, Kiribati, the Maldives, and dozens more. Some urban areas, like Rio de Janeiro, New Orleans, New York City, and Dhaka, risk inundation if the sea level rises 1 meter, which is within the realm of possibility. Low-lying countries like the Netherlands, India, and Bangladesh would also be hurt. There may be winners, even so. For countries suffering extended droughts, like Ethiopia, a change in precipitation patterns could hardly make it worse and might make it better, according to Glantz. The tourist industry may be one of the first business sectors to feel direct effects. For ski towns worldwide, for example, a few weeks' difference in the length of the snowy season would have a dramatic economic impact. Individuals, businesses, and governments are worried enough about the effects of global warming to begin taking action. Hence, the Kyoto Protocol. This international treaty, signed in December 1997, is now on the table for countries to ratify. Countries that do so will make commitments for emissions cuts, with the industrial countries cutting the most. The goal is to cut emissions of greenhouse gases by an average 5.2 percent below 1990 levels by 2008-2012. 
The protocol is considered binding once the treaty is ratified by industrialized countries that contributed 55 percent of the greenhouse gases emitted in 1990. U.S. President George W. Bush last March indicated that the United States would not abide by the Kyoto agreement. But last October, in Marrakesh, Morocco, the final details of the treaty were worked out to the satisfaction of Russia and Japan, which had been wavering in their support. Included are credits for countries having large forests, which act to absorb carbon dioxide, as well as sanctions to be imposed on countries failing to make the agreed-upon cuts. The Kyoto Protocol could be ratified--without U.S. participation--this year. In the eyes of some, the Kyoto Protocol, even if ratified, is simply the proverbial drop in the bucket, with no real direct effect. It may, however, have an important indirect effect. "Everybody knows it won't do much," said Firor, the former NCAR director. "If everybody, including the developing countries, signs on to it and obeys it, it will reduce global emissions by a few percent. That's not enough to get to a stable atmosphere; you'd have to go down to at least half of current emissions to do that." But, Firor told Spectrum, "Kyoto wasn't intended to do much. It was intended to prove to the developing countries that the big rich guys were willing to do something. It's a demonstration project." There are precedents for starting with such a small first step. One is ozone depletion. Industry resisted the Montreal Protocol, signed in 1987, which in itself did little. But subsequent agreements derived from that protocol led to eliminating the worst emissions of ozone-depleting chemicals from industrialized countries--and with little economic impact. 
"Anything you can do to slow down the rate at which we're changing the temperature is to the good," said NCAR's Glantz, "because it provides more time to understand how people are contributing to the changes in temperature" and more time to prepare for the effects of those changes. Even without a ratified international agreement, many countries have begun, or at least are beginning to plan, cuts in greenhouse gas emissions. China, driven by urban air pollution, is cutting coal use. The European Union, based in Brussels, Belgium, is establishing policies for achieving the cuts called for by Kyoto. In one action, the Union has drafted a law establishing emissions trading between companies, a policy seen as a critical tool for enabling countries to meet Kyoto targets. So far, the United States has promised nothing, though its actions could impact the problem if, for example, sport utility vehicles were mandated to meet the gasoline efficiency standards of ordinary vehicles, or computer-controlled heat-management systems were installed in more buildings. Cutting emissions isn't the only answer. Scientists are also working the other half of the equation: increasing the amount of carbon dioxide absorbed on earth. Plants perform this task, growing faster when there is more carbon dioxide in the air. Oceans absorb it, slowly taking it in until they store two orders of magnitude more than the atmosphere. Various schemes have been suggested to increase storage, like feeding iron into the oceans, so that algae that absorb carbon dioxide proliferate. Or one could simply do nothing to stop global warming. "On a pessimistic day," said Schimel, "it's not hard to imagine that we'll just take the easy way out, use fossil fuel indiscriminately, and buy a lot of air conditioning. That scenario leads you to carbon dioxide levels of a thousand parts per million and global mean temperatures up many degrees from today." 
The good news is that on most days scientists are cautiously optimistic. Said NCAR's Trenberth: "Maybe we can't make the problem go away, but we can certainly make scientific advances, we can slow down the rate of warming, and we can gain enough time to allow us to adapt."
To Probe Further
Soon it won't take a supercomputer in your basement to participate in climate modeling, because a distributed climate-modeling project is to be launched. Similar to the SETI@home technology, which is searching for extraterrestrial intelligence, this effort, funded by the Coupled-Ocean Atmosphere Processes and European Climate (Coapec) research program, will use idle computer cycles to run climate models. Coapec is a program of the Natural Environment Research Council (NERC), Swindon, UK. For more information or to volunteer your computer, see http://www.climateprediction.com. The Intergovernmental Panel on Climate Change, Geneva, brings together climate scientists from around the world. Convened by the United Nations, the group recently approved its third assessment report, available at http://www.ipcc.ch. For comparisons of modeling efforts in development around the world to combat global warming, see http://www-pcmdi.llnl.gov. For more information on the effort to collect climate data using global positioning system receivers, see http://www.cosmic.ucar.edu. More details on NASA's Earth Observing System are available at http://eospso.gsfc.nasa.gov. Articles on climate change are available at http://www.cgd.ucar.edu/cas/GLOB_CHANGE/glob_change.html. In the book Cool Companies (Island Press, Washington, D.C., 1999), Joseph J. Romm details how some 50 companies increased their energy efficiency to their economic benefit. See the site at http://www.coolcompanies.org.
Feb 11, 2013 8:34 AM, By Bob McCarthy
The costs and benefits
In an ideal world, sound systems would be invisible and inaudible. It is easy for us to comprehend the invisible part, since we are constantly told by architects and set designers that they can’t stand the sight of a stationary rectangular box. Amazingly, lights moving around changing color and brightness, spilling light out of their sides and backs, do not bother these folks. But a black box, God forbid one with a tiny LED on it, is an abomination. Why the prejudice against seeing speakers? This ties in to the secondary desire for the speakers to be inaudible, and by that I mean that we strive to create the illusion that the sound is magically coming directly from the stage performers rather than the rectangular boxes. Visible boxes break the magician’s illusion, resulting in a strong desire to hide them. The troubling part of all of this is that hiding the speakers visually can actually make hiding them audibly much more challenging. This is one of the tradeoffs in the game of sound image control. There are more, most notably in the categories of intelligibility, tonal modification, and uniformity. Maintaining a realistic sound image is a balancing act between relative level, time, distance, and angle. The first installment of this two-part article (part two will appear in a later issue) will explore how we perceive sound image and how we can control its placement with multiple speakers. The second part will cover examples of image placement and control in typical sound systems.
Sound Image Perception
Our sound image experience comprises two primary aspects and a variety of secondary ones. The dominant features are source direction and range, which give us the source location relative to our ears. The source angular relationship (its bearing) is subdivided into vertical and horizontal planes, which are decoded separately by the ear-brain system. These will be described momentarily. 
Our range perception is more complex, relying on a memory map that compares what we are hearing to our expectations regarding the particular sound source material. Our expectations are influenced highly by a secondary sense: sight, which gives us a framework with which to normalize what we are hearing. If we see a violin across the room, we compare what we hear to our memory of a distant violin sound, rather than what we experience when we are playing it ourselves (which, in my case, would sound like a nearby cat being tortured). This is not at all to say that a blind person lacks range-finder capability. They will have mapped the range clues much more finely than sighted folks since they lack the secondary sense backup that the eyes provide. The secondary range clues include the sound level, frequency response, and direct/reverberant ratio. Seeing the source distance and the shape and materials of a room give context to the range expectations. We adjust our sonic expectations when we see that we are in a large reflective environment. In such a context, it can be difficult to carry on a conversation even at a fairly close distance. “Objects may be closer than they sonically appear” would be a fair warning in a highly reverberant environment. By contrast, if we are blindfolded in an anechoic chamber, we will find it much harder to determine the range. Adjustments of level and frequency response can, in fact, alter our range perception without moving the speaker. Let’s set up an experiment to illustrate your sound image detection system in action. You are blindfolded in a room with a continuously moving sound source. How accurately will you be able to track the moving source’s bearing and range? The easiest aspect to localize is the horizontal position. This is because we have a two-channel detection system: our binaural hearing. 
The source location is double-checked by a pair of two-channel comparisons between the arrivals at our ears: relative time and relative level. As the sound source moves off of the horizontal center, it arrives first and louder at one ear. These two findings confirm each other to provide the localization clue. The vertical location is found by each ear individually using a memory-mapped signature. This is unique for each ear and for each person (and animal) because it is derived from memorizing the comb-filtered frequency response created by the reflections of our outer ear as the sound enters the ear canal. We have never heard sound that was not reflected off of this structure and therefore we have normalized our hearing to this response. Each vertical orientation of the sound source creates a slightly different set of reflections into the canal. These microscopic differences are recognized by the ear and linked to the memory of the vertical position of sound sources previously localized in our life.
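The relative-time cue described above can be quantified with a classic spherical-head approximation (Woodworth's formula) for the interaural time difference. The head radius and speed of sound below are assumed typical values, not measurements from this article.

```python
import math

# Woodworth's spherical-head approximation for interaural time
# difference (ITD): the extra path to the far ear is the straight-line
# portion plus the arc the wave travels around the head.

HEAD_RADIUS_M = 0.0875   # average adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a distant source at the given azimuth.

    0 degrees = straight ahead (no difference between the ears);
    90 degrees = directly to one side (maximum difference).
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# The delay grows smoothly from zero straight ahead to roughly 0.65 ms
# at the side -- the sub-millisecond range the ear-brain system decodes.
for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e3:.3f} ms")
```

The sub-millisecond scale of these differences is what makes relative level the useful confirming cue: level differences remain robust at high frequencies, where the wavelength becomes too short for timing comparisons to be unambiguous.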
Global Initiative for Food-related Scientific Advice (GIFSA)
What and Why?
For over 50 years WHO and FAO have been the international source of scientific advice on matters related to the safety of food. This advice provides the evidence base for a linkage between substances in food and disease, or the prevention of disease. This evidence base is necessary to define the best preventive measures to avoid foodborne disease and promote human health as it links to our daily food. It also enables both national and international solutions to these problems, a necessary pre-condition in the present period of increased international food trade. Therefore it is important that the FAO/WHO scientific advice provides the basis for food standards, guidelines and codes of practice developed by the FAO/WHO Codex Alimentarius Commission. WHO and FAO devote significant resources to the provision of scientific advice in relation to food. However, there has been an increase in the demand for scientific advice. More and more links between food and disease have been discovered, and beneficial food constituents are now also investigated and linked to the prevention of disease. The FAO/WHO framework for scientific advice is based on a transparent and global selection of experts representing the best in food safety and nutrition science. Over recent years the scope of scientific advice has therefore been broadened to include zoonotic diseases, biotechnology and nutrition as well as new emerging priority issues. As a result, FAO and WHO need to overcome a number of challenges in order to continue to support the technical consultations, workshops and related activities, which form the basis of FAO/WHO scientific advice. Over the past few years FAO and WHO have revised their advice framework to ensure the independence, quality, timeliness and sustainability of the provision of scientific advice. 
In order to specifically address the issue of sustainability of the provision of scientific advice, FAO and WHO are establishing a Global Initiative for Food-related Scientific Advice (GIFSA). The specific objectives of the GIFSA are:
- To increase awareness of the FAO/WHO programme of work on the provision of scientific advice,
- To mobilize technical, financial and human resources to support the provision of scientific advice in food safety and nutrition,
- To promote the timeliness of the provision of scientific advice by WHO and FAO, while ensuring the continuation of the highest level of integrity and quality.
Characteristics of the Fund
The main focus of GIFSA is to establish a mechanism to facilitate the provision of extrabudgetary resources for scientific advice activities. Contributions are accepted from governments, organizations and foundations in accordance with WHO and FAO rules. Two separate accounts will be maintained, one at WHO and one at FAO. An FAO/WHO Committee manages the GIFSA, and procedures have been developed to ensure that all resources provided through GIFSA will be allocated to activities in an independent and transparent manner, taking into consideration the criteria for prioritization of activities already agreed by Codex, FAO and WHO and the specific needs of FAO and WHO Member countries.
Tue January 3, 2012
Raising The Minimum Wage: Who Does It Help?
For some of America's lowest-paid workers, the new year means a pay raise. Some states set their own minimum wages, above the federal rate of $7.25 an hour, and that rekindles an old debate over whether minimum wages make sense — especially at a time of high unemployment. Like several other states', Washington state's minimum wage is indexed to the cost of living. This year, the formula has raised the statewide minimum from $8.67 to $9.04 an hour, making it the nation's highest statewide rate. Zack Colon is just out of college, and while he prepares for grad school, he works as a chef. He says that extra 37 cents an hour will make an appreciable difference to his weekly income. "Ten, fifteen dollars is like two meals. Any raise is, like, big," Colon says. But for Colon's boss, the raise is an unwelcome financial burden. Skyler Riley is a young entrepreneur; two years ago this 20-something opened his restaurant — Rainin' Ribs — just outside Seattle. Even though he owns the place, Riley also works the till. "With your payroll, you know you can operate a lot yourself, you can certainly cut hours in a way that you physically use your own two hands," Riley says. "But when you're raising the payroll costs of an hourly minimum wage, it's definitely not helping." Pro-business groups in Washington state say the $9-an-hour wage may prove to be a "tipping point," a kind of sticker shock that could discourage employers from hiring more people. There's even a theory that the state minimum wage — which is counted separately from tips — might explain the understaffing and lousy service at some Seattle restaurants. It's Economy 101: The more you raise the price of something — employment, in this case — the less of it there'll be. 
"I don't think there's any sensible economist who thinks you could double the minimum wage and not throw a lot of people out of work," says David Neumark, director of the Center for Economics and Public Policy at University of California, Irvine. There is a debate, he says, over the effect of incremental raises for the small group of largely unskilled workers who earn the minimum wage. "The consensus from a lot of studies I've surveyed — including my own — says that a 10 percent increase in the minimum reduces employment of those very low-skilled groups by about 1 to 2 percent," he says. Keep in mind, that's 1 to 2 percent of the people earning minimum wage, and they make up only about 5 percent of the workforce nationally. So the job losses are pretty tiny. Defenders of the minimum wage say it's even less than that, pointing to a couple of recent studies that show zero net job loss. David Cooper, an analyst with the pro-labor Economic Policy Institute, says the minimum wage is especially necessary now. "When you have lines of the unemployed around the corner looking for jobs, there's no real pressure for employers to raise wages," Cooper says. And in this age of Occupy Wall Street, Cooper says, pushing up that wage floor is one way to address growing income inequality. "Increases in the minimum wage are essentially a shift from corporate profits to low-wage employees," he says. "And we know that low-wage employees spend more of their money. They're going to spend essentially every penny they get, so that increased demand is going to result in more economic activity and potentially more jobs." Still, economists on both sides tend to agree that the minimum wage itself isn't that big of a factor for America's working poor. For one thing, many of the people earning the minimum aren't poor at all; they're teenagers or middle-class part-timers looking for extra income. The working poor tend to earn more — because they have to if they're supporting a family. 
For them, the benefit of recent minimum-wage hikes has been relatively small, especially when compared with other anti-poverty programs such as the federal Earned Income Tax Credit.
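Neumark's 1-to-2-percent figure quoted above can be turned into concrete numbers. In the sketch below, the workforce size and the 5 percent minimum-wage share are illustrative assumptions, not data from the article.

```python
# Back-of-the-envelope arithmetic for the employment elasticity Neumark
# describes: a 10 percent minimum-wage increase reducing employment of
# minimum-wage earners by roughly 1 to 2 percent.

def estimated_job_loss(workforce: int, min_wage_share: float,
                       wage_increase_pct: float, elasticity: float) -> float:
    """Estimated jobs lost among minimum-wage earners.

    elasticity: percent employment change per percent wage change;
    Neumark's survey of studies implies roughly -0.1 to -0.2.
    """
    earners = workforce * min_wage_share
    employment_change_pct = elasticity * wage_increase_pct  # e.g. -1.5
    return -earners * employment_change_pct / 100.0

# A hypothetical labor market of 1,000,000 workers, 5% at the minimum,
# a 10% raise, and a mid-range elasticity of -0.15: about 750 of the
# 50,000 minimum-wage jobs -- tiny relative to the whole workforce,
# which is why the article calls the losses "pretty tiny."
print(round(estimated_job_loss(1_000_000, 0.05, 10.0, -0.15)))
```

The same arithmetic also shows why the zero-net-job-loss studies cited by the Economic Policy Institute are hard to distinguish from small negative effects: the predicted losses are a rounding error against normal labor-market churn.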
Montessori in the Home
Why Family Matters
In Montessori education there are three important roles:
- The child
- The parent
- The teacher
Each role is unique, essential, and interrelated. Like the sides of an equilateral triangle, each role is a distinct and separate part, and yet each connects directly with every other. Teachers provide the social, public, outside-family, general education; parents provide the individual, private, intimate, specific education. Children are most fully supported when the adults in their lives communicate with each other and trust each other; it is therefore critical that both are responsive to the child. Parent involvement is essential throughout every child’s education. According to A New Generation of Evidence: The Family is Critical to Student Achievement, a report by Henderson and Berla for the National Committee for Citizens in Education, “the most accurate predictor of a student’s achievement in school is not income or social status but the extent to which that student’s family is able to:
- Create a home environment that encourages exploration and learning
- Express high (but not unrealistic) expectations for their children’s achievement and future careers
- Become involved in their children’s education at school and in the community” (p. 160)
The Parent’s Role
The Montessori approach to education is based on universal principles of human development. They are essential to the work of Montessori teachers, and they are essential to parenting practices too. Building home environments that harmonize with your child’s Montessori class will support their education and development. Your task will be to study and understand, to observe your child at home and at school, to deepen your awareness of Montessori tenets, and to collaborate with your child’s teacher in his or her education. 
Specific opportunities may include the following: - Parent lectures - Parent-teacher conferences - Open houses - Classroom observations - Read Maria Montessori’s books - Teacher guided discussion groups The family has an amazing power. This is why parents and teachers need to study and reflect, to understand the role of each other, and to help one another help the child.
Seiho Takeuchi, Strutting Cockerels, watercolor on paper, signed. Japanese, 1864-1942. The image measures 13 x 18 inches and the frame 25 x 20 inches. Takeuchi Seiho (born December 20, 1864; died August 23, 1942) was the pseudonym of a Japanese painter of the nihonga genre, active from the Meiji through the early Showa period. One of the founders of nihonga, his works spanned half a century and he was regarded as master of the prewar Kyoto circle of painters. His real name was Takeuchi Tsunekichi. Seiho was born in Kyoto. As a child, he loved to draw and wanted to become an artist. He was a disciple of Kono Bairei of the Maruyama-Shijo school of traditional painting. In 1882, two of his works received awards at the Naikoku Kaiga Kyoshinkai (Domestic Painting Competition), one of the first modern painting competitions in Japan, which launched him on his career. During the Exposition Universelle in Paris (1900), he toured Europe, where he studied Western art. After returning to Japan he established a unique style, combining the realist techniques of the traditional Japanese Maruyama-Shijo school with Western forms of realism borrowed from the techniques of Turner and Corot. This subsequently became one of the principal styles of modern Nihonga. His favorite subjects were animals--often in amusing poses, such as a monkey riding on a horse. He was also noted for his landscapes. From the start of the Bunten exhibitions in 1907, Seiho served on the judging committee. In 1909 he became a professor at the Kyoto Municipal College of Painting (the forerunner to the Kyoto City University of Arts). Seiho also established his own private school, the Chikujokai. Many of his students later went on to establish themselves as noted artists, including Tokuoka Shinsen and Uemura Shoen. In 1913, Seiho was appointed as a court painter to the Imperial Household Agency, and in 1919 was nominated to the Imperial Fine Arts Academy (Teikoku Bijutsuin). 
He was one of the first persons to be awarded the Order of Culture when it was established in 1937. He initially used characters for the first name of his pseudonym, and this name was possibly pronounced as Saiho. Museums that have his works: Yamatane Museum, Important Cultural Property, Tokyo National Museum, Kyoto National Museum of Modern Art, Tokyo National Museum of Modern Art, Kyoto Municipal Museum of Art, Imperial Household Agency, Sannomaru Shozokan, Kachu'an Takeuchi Seiho Memorial Gallery. Notable pupils: Uemura Shoen, Ono Chikkyo, Tsuchida Bakusen, Nishimura Go'un, Hashimoto Kansetsu. Bio information obtained from Wikipedia.org. References: Araki, Tsune (ed.), Dai Nihon shôga meika taikan, Tokyo 1975 (1934), p. 1633; Conant, Ellen P., Nihonga, transcending the past: Japanese-style painting, 1868-1968, Saint Louis 1995, pp. 322-323; Harada, Heisaku, Takeuchi Seihô, Kyoto 1981; Morioka, Michiyo and Paul Berry, Modern Masters of Kyoto, Seattle 1999, pp. 130-137; Roberts, Laurance P., A Dictionary of Japanese artists, New York 1976, p. 171. Shipping extra. Connecticut residents and buyers picking up in Connecticut add 6.35% state sales tax. Buyers outside the USA are responsible for any taxes, tariffs or customs that might apply. *** If you wish to see examples of similar items we have sold and/or appraised please go to our affiliate site www.OneofaKindAntiques.com and click the Archives / Homepage logo *** Art (paintings, prints, frames)
http://www.antiques.com/classified/Art--paintings--prints--frames-/Animals/Antique-8304-Seiho-Takeuchi-Strutting-Cockerels--watercolor-on-paper
Significance and Use

The intent of this guide is to provide the reader with information concerning possible reasons for paint failures where the paint is used over a latex sealant.

1.1 This guide describes the practical considerations that may be used to determine the compatibility of a paint or coating to be applied over a latex sealant or caulk. It evaluates the appearance and not the performance characteristics of the coated or painted joint.

1.2 The committee with jurisdiction over this standard is not aware of any comparable standards published by other organizations.

1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.

1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory requirements prior to use.

2. Referenced Documents (purchase separately)

The documents listed below are referenced within the subject standard but are not provided as part of the standard.

C717 Terminology of Building Seals and Sealants
D1729 Practice for Visual Appraisal of Colors and Color Differences of Diffusely-Illuminated Opaque Materials
D2244 Practice for Calculation of Color Tolerances and Color Differences from Instrumentally Measured Color Coordinates
E284 Terminology of Appearance

Keywords: cracking; latex sealant; paint; compatibility; cracking--sealants; latex sealants; paintability; sealants; waterborne materials/applications; caulking compounds and sealants

ICS Number Code 87.040 (Paints and varnishes)
http://www.astm.org/Standards/C1520.htm
¶ But Solomon was building his own house thirteen years, and he finished all his house. .2 He built also the house of the forest of Lebanon; the length thereof was an hundred cubits, and the breadth thereof fifty cubits, and the height thereof thirty cubits, upon four rows of cedar pillars, with cedar beams upon the pillars. 3 And it was covered with cedar above upon the beams, that lay on forty five pillars, fifteen in a row. 4 And there were windows in three rows, and light was against light in three ranks. 5 And all the doors and posts were square, with the windows: and light was against light in three ranks. 6 And he made a porch of pillars; the length thereof was fifty cubits, and the breadth thereof thirty cubits: and the porch was before them: and the other pillars and the thick beam were before them. - Own house Actually the palace complex with various buildings. With Solomon's wide interests, we may assume that he was the chief architect. 1 - Thirteen years These were after seven years completing the temple 1ki0638. The temple would have taken longer without the extensive work of preparation 1ch2202. The construction thus lasted for twenty years 1ki0910, 2ch0801. 2 - House of the forest This may have been his private dwelling separate from or adjoining the queen's house v8. The "king's house" (1ki0910 linked above) must have included both of these along with other buildings but no one knows. 4 - Windows Compare 1ki0604. | 7 Then he made a porch for the throne where he might judge, even the porch of judgment: and it was covered with cedar from one side of the floor to the other. .8 And his house where he dwelt had another court within the porch, which was of the like work. Solomon made also an house for Pharaoh's daughter, whom he had taken to wife, like unto this porch. 
9 All these were of costly stones, according to the measures of hewed stones, sawed with saws, within and without, even from the foundation unto the coping, and so on the outside toward the great court. | 7 - This may have been a chamber in the house of the forest of Lebanon. It may then have been a special area at the entrance of the throne room, or it could have been a separate building. 8 - His house Little is known about the King's living quarters. | 10 And the foundation was of costly stones, even great stones, stones of ten cubits, and stones of eight cubits. 11 And above were costly stones, after the measures of hewed stones, and cedars. .12 And the great court round about was with three rows of hewed stones, and a row of cedar beams, both for the inner court of the house of the LORD, and for the porch of the house. |12 - The great court The whole complex may have been surrounded by a walled courtyard.| ¶ And king Solomon sent and fetched Hiram out of Tyre. 14 He was a widow's son of the tribe of Naphtali, and his father was a man of Tyre, a worker in brass: and he was filled with wisdom, and understanding, and cunning to work all works in brass. And he came to king Solomon, and wrought all his work. .15 For he cast two pillars of brass, of eighteen cubits high apiece: and a line of twelve cubits did compass either of them about. 16 And he made two chapiters of molten brass, to set upon the tops of the pillars: the height of the one chapiter was five cubits, and the height of the other chapiter was five cubits: .17 And nets of checker work, and wreaths of chain work, for the chapiters which were upon the top of the pillars; seven for the one chapiter, and seven for the other chapiter. |13 - Hiram There could have been two people with the same name 2ch0207.| | 18 And he made the pillars, and two rows round about upon the one network, to cover the chapiters that were upon the top, with pomegranates: and so did he for the other chapiter.
19 And the chapiters that were upon the top of the pillars were of lily work in the porch, four cubits. 20 And the chapiters upon the two pillars had pomegranates also above, over against the belly which was by the network: and the pomegranates were two hundred in rows round about upon the other chapiter. .21 And he set up the pillars in the porch of the temple: and he set up the right pillar, and called the name thereof Jachin: and he set up the left pillar, and called the name thereof Boaz. |21 - Name ... Jachin ... Boaz Meaning "He shall establish" and probably "In Him is strength." They would have represented the source of Israel's existence and strength ps02807, is4524, je1619, ps04601| | 22 And upon the top of the pillars was lily work: so was the work of the pillars finished. .23 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about. .24 And under the brim of it round about there were knops compassing it, ten in a cubit, compassing the sea round about: the knops were cast in two rows, when it was cast. 25 It stood upon twelve oxen, three looking toward the north, and three looking toward the west, and three looking toward the south, and three looking toward the east: and the sea was set above upon them, and all their hinder parts were inward. .26 And it was an hand breadth thick, and the brim thereof was wrought like the brim of a cup, with flowers of lilies: it contained two thousand baths. |23 - Molten sea A brass laver holding water as before the tabernacle ex3017, ex3808 except much larger. Its capacity of 2,000 baths v26 comes to about 12,000 gallons or 44,000 liters! Seas were for priests to wash in before their ministry in the temple 2ch0406.| | 27 And he made ten bases of brass; four cubits was the length of one base, and four cubits the breadth thereof, and three cubits the height of it.
28 And the work of the bases was on this manner: they had borders, and the borders were between the ledges: 29 And on the borders that were between the ledges were lions, oxen, and cherubims: and upon the ledges there was a base above: and beneath the lions and oxen were certain additions made of thin work. 30 And every base had four brasen wheels, and plates of brass: and the four corners thereof had undersetters: under the laver were undersetters molten, at the side of every addition. 31 And the mouth of it within the chapiter and above was a cubit: but the mouth thereof was round after the work of the base, a cubit and an half: and also upon the mouth of it were gravings with their borders, foursquare, not round. |27-39 - Ten bases of brass These were trucks or four-wheeled carriages, for the support and conveyance of the lavers. The description of their structure shows that they were elegantly fitted up and skilfully adapted to their purpose. They stood, not on the axles, but on four rests attached to the axles, so that the figured sides were considerably raised above the wheels. They were all exactly alike in form and size. The lavers which were borne upon them were vessels capable each of holding three hundred gallons of water, upwards of a ton weight. The whole, when full of water, would be no less than two tons." (JFB)| | 32 And under the borders were four wheels; and the axletrees of the wheels were joined to the base: and the height of a wheel was a cubit and half a cubit. 33 And the work of the wheels was like the work of a chariot wheel: their axletrees, and their naves, and their felloes, and their spokes, were all molten. 34 And there were four undersetters to the four corners of one base: and the undersetters were of the very base itself. 35 And in the top of the base was there a round compass of half a cubit high: and on the top of the base the ledges thereof and the borders thereof were of the same. 
36 For on the plates of the ledges thereof, and on the borders thereof, he graved cherubims, lions, and palm trees, according to the proportion of every one, and additions round about. 37 After this manner he made the ten bases: all of them had one casting, one measure, and one size. | 38 Then made he ten lavers of brass: one laver contained forty baths: and every laver was four cubits: and upon every one of the ten bases one laver. .39 And he put five bases on the right side of the house, and five on the left side of the house: and he set the sea on the right side of the house eastward over against the south. .40 And Hiram made the lavers, and the shovels, and the basons. So Hiram made an end of doing all the work that he made king Solomon for the house of the LORD: .41 The two pillars, and the two bowls of the chapiters that were on the top of the two pillars; and the two networks, to cover the two bowls of the chapiters which were upon the top of the pillars; .42 And four hundred pomegranates for the two networks, even two rows of pomegranates for one network, to cover the two bowls of the chapiters that were upon the pillars; 43 And the ten bases, and ten lavers on the bases; 44 And one sea, and twelve oxen under the sea; .45 And the pots, and the shovels, and the basons: and all these vessels, which Hiram made to king Solomon for the house of the LORD, were of bright brass. | 46 In the plain of Jordan did the king cast them, in the clay ground between Succoth and Zarthan. .47 And Solomon left all the vessels unweighed, because they were exceeding many: neither was the weight of the brass found out. |46 - Succoth (means "booths") "A spot in the valley of the Jordan and near the Jabbok, where Jacob set up his tents on his return from Mesopotamia. Joshua assigned the city subsequently built here to the tribe of Gad Gideon tore the flesh of the principal men of Succoth with thorn and briars, because they returned him a haughty answer when pursuing the Midianites. 
It seems to have lain on the east side of the Jordan; but may possibly have been on the west side, at the place now called Sakut. Compare Ps 60:6." (ATS Dict.) ge3317, jos1327, jg0805. See map, near Jabbok. "Succoth" is also the name of the first encampment of the exodus.| ¶ And Solomon made all the vessels that pertained unto the house of the LORD: the altar of gold, and the table of gold, whereupon the shewbread was, .49 And the candlesticks of pure gold, five on the right side, and five on the left, before the oracle, with the flowers, and the lamps, and the tongs of gold, .50 And the bowls, and the snuffers, and the basons, and the spoons, and the censers of pure gold; and the hinges of gold, both for the doors of the inner house, the most holy place, and for the doors of the house, to wit, of the temple. .51 So was ended all the work that king Solomon made for the house of the LORD. And Solomon brought in the things which David his father had dedicated; even the silver, and the gold, and the vessels, did he put among the treasures of the house of the LORD. |49 - Candlesticks (lampstands) "made, probably, according to the model of that in the tabernacle, which, along with the other articles of furniture, were deposited with due honor, as sacred relics, in the temple. But these seem not to have been used in the temple service; for Solomon made new lavers, tables, and candlesticks, ten of each. (See further regarding the dimensions and furniture of the temple, in 2Ch 3:1-5:14)." (JFB).|
http://www.bibleexplained.com/other-early/1&2-Kings/1ki07.html
The Cambridge Glass Fair
three hundred years of collectable glass in one day

In 1837, the brothers James and John Hartley established the Wear Glass Works in Sunderland which became well known for the manufacture of window glass, including crown, cylinder and Patent Rolled Plate or PRP, a type of mainly patterned cast glass which was much thinner than that produced elsewhere. At the Great Exhibition in 1851 Hartley's was awarded a prize medal for PRP for roofing. Various problems led to the closure of the firm in 1892. In the same year James Hartley's grandson, James Hartley Jnr., rented a redundant bottle works in Monkwearmouth and with a small team of ex-Hartley glassblowers continued to produce cylinder and crown glass. The name changed to Hartley Wood & Co. in 1895 when Alfred Wood, the leading colour mixer from Hartley's, joined as a partner. The firm prospered in the 1920s due to the demand for memorial windows after the Great War, and in the period after World War II because of the widespread damage sustained by churches. In 1956, the Clean Air Act meant that the old coal-fired furnace was replaced by four oil-fired modern furnaces. New legislation brought about other changes to dangerous materials used, especially in the manufacture of the streaky colour combinations known as 'Antique Glass' for which Hartley Wood was famous. The difference this made to the metal can be most easily seen in the art glass made by the firm from the 1930s onwards, examples of which can be seen here. The earlier vases are uneven and heavy and the glass has an oily look, while the later pieces are mould-blown, lighter and more regular in shape. The company was sold to Pilkington's in 1983, but despite investment Hartley Wood closed in 1989. The foyer exhibition will bring together a number of these interesting and striking pieces in various shapes and colourways from two private collections.
This area of glass is deserving of more attention and, as can be seen from the images accompanying this article, with even a small number of vases placed in close proximity you can have your very own stained glass window. For further information about Hartley Wood glass, please see the article by Susan Newell published in The Journal of the Glass Association Vol.6 dated 2001.
http://www.cambridgeglassfair.com/exhibitions/pastexhibitions/2004-02-hartleywood.htm
Presented here is a complete set of cartographic map sheets from a high-resolution Iapetus atlas, a project of the Cassini Imaging Team. The map sheets form a three-quadrangle series covering the entire surface of Iapetus. As noted on the map, while both Saragossa Terra and Roncevaux Terra are bright regions on the moon's surface, they are distinct from each other in that the former has a slightly reddish color and the latter does not. The map sheets cover the entire surface of Iapetus at a nominal scale of 1:3,000,000. The map data was acquired by the Cassini imaging experiment. The mean radius of Iapetus used for projection of the maps is 736 kilometers (457 miles). Names for features have been approved by the International Astronomical Union (IAU). The Cassini Equinox Mission is a joint United States and European endeavor. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL. The imaging team consists of scientists from the US, England, France, and Germany. The imaging operations center and team lead (Dr. C. Porco) are based at the Space Science Institute in Boulder, Colo.
http://www.ciclops.org/view/5219/The_Iapetus_Atlas
General Biology Research Guide

Databases: Links to databases indexing magazine, journal and newspaper articles

Library Catalogs: Books, videos and other audiovisuals

Reference Works: Find background information, illustrations and data
- Biology Data Book. 3 v. Reference QH 310 .A392
- Encyclopedia of Bioethics. 5 vols. Reference QH 332 .E52 2004
- Dictionary of Biology. General QH 302.5 .D54 2004
- Encyclopedia of Evolution. 2 vols. Reference QH 360.2 .O83 2002
- Encyclopedia of Life Sciences. 20 vols. Reference QH 302.5 .E54 2001
- Facts on File Biology Handbook. Reference QH 310 .F23 2000
- Facts on File Dictionary of Cell and Molecular Biology. Reference QH 575 .F33 2003
- Life Sciences on File. Reference QH 318 .L45 1999
- Magill's Survey of Science: Life Science Series. 7 vols. General QH 307.2 .M34
- Penguin Dictionary of Biology Credo Reference Online
- Synopsis and Classification of Living Organisms. Reference QH 83 .S892

Internet Science Sites: Major science gateways
- Comprehensive search engine for science-specific topics. Results cite Science Web sites and provide abstracts from Medline, Science Direct, U.S. Patent office documents and more.
- The gateway to government science information. Provides access to over 47 million pages of government science information from 30 databases and 1700 websites, many part of the "deep web" that is inaccessible to search engines.
- Highwire Press A division of the Stanford University Libraries, this is the largest repository of free full-text life science articles in the world. Search for PubMed articles, access the contents of 780 HighWire hosted journals, and find listings in other free and pay-per-view sources.

Internet Biology Sites
- The Biology Project: an Online Interactive Resource for Learning Biology Good supplement to biology text with tutorials on basic biology topics and many illustrations.
- Cells Alive Fine images of many types of cells, explanations of cell structure, and animations of cell division phases. - Integrated Taxonomic Information System Authoritative taxonomic information for plants, animals, fungi and microbes of North America and the world. Includes accepted and non-accepted names, taxonomic hierarchy for kingdom to species, and links to Google images when available for a species listed. - Kimball's Biology Pages Online reference to biology terms with graphic illustrations of many concepts. - Lynn Fancher's Biology Bookmarks Extensive links to biology gateway sites, as well as internet sites for specific areas of biology. - Natural Selection Links to thousands of hand-selected sites about the natural world, maintained by the Natural History Museum of London. - Seven Biological Challenges Links to essays by noted researchers and useful websites in areas of biotechnology, evolution, environment and more. - The Tree of Life Collaborative project with information on biological classification, phylogeny, and the diversity of organisms. - The Virtual Library: Biosciences Extensive list of links to valuable websites in many areas of the biological sciences. - Yahoo's Index of Biological Links Search for information by biological subject categories or by keyword. Citing sources: MLA, APA and CBE (Biology) formats
http://www.cod.edu/library/libweb/Peters/BIOLOGY/generalbiology.htm
Several recent reports of “swimmer's itch” have occurred as a result of swimming or wading in shallow areas with very warm water temperatures. Children and individuals with compromised immune systems or other health issues may want to avoid water contact at warm and shallow beaches — particularly North Nets, South Nets and Long Beach. An advisory is a warning to swimmers, but it is not a beach closure, and all public beaches at Pyramid Lake remain open. During an advisory, a beach is posted with warning signs when the water contains levels of bacteria that indicate there may be an increased risk of developing minor skin, eye, ear, nose and throat infections and stomach disorders. Anyone who chooses to swim during an advisory is asked to avoid dunking their head in, or swallowing, the water. Anyone swimming should take additional precautions, including drying off vigorously with a towel, showering shortly after swimming and avoiding the ingestion of lake water. If visitors to Pyramid Lake do experience symptoms of “swimmer's itch” they are encouraged to report it to the tribe by calling 888-225-2668.
http://www.dailysparkstribune.com/view/full_story/19689487/article-Water-quality-advisory-for-Pyramid-Lake?instance=region_in_brief_secondary_story
Evidence points to AIDS epidemic peaking but no time soon

Most Americans are already intellectually and emotionally disconnected from the first 15 years of AIDS. Even for gay men and lesbians who lived through those days of darkness, who witnessed the horror first-hand, the terrible reality has faded to a distant memory. But the recent anniversary of the first published notice of the disease printed 25 years ago this month in the June 5, 1981, issue of Morbidity and Mortality Weekly Report calls us to somber remembrance of a time when hope itself seemed a fading mirage in the distance. And in the 15 years or so until the introduction of protease inhibitors in the mid-1990s, the disease ravaged the gay male community, nearly wiping out a whole generation of gay men. The first report told of five young gay men in three Los Angeles hospitals who had developed an unusual cluster of symptoms. The men were not acquainted with one another, but the symptoms they developed were striking: all had Pneumocystis carinii pneumonia, an extremely rare form of the disease usually seen in cancer chemotherapy patients or organ transplant patients whose immune systems were artificially compromised to prevent organ rejection. All five also displayed candidiasis, a fungal infection of the mouth and throat, and cytomegalovirus, an agent well-known today to cause serious complications in AIDS patients. The CDC followed up a month later with another report about gay men in California and New York who had contracted Kaposi's sarcoma, a rare but fatal skin cancer associated with AIDS. The condition had certainly been present earlier. Nobody knows for sure when or where, but the AIDS epidemic is thought to have begun in the dark forests of West Africa when a virus lurking in the blood of a monkey or a chimpanzee made the leap from one species to another, infecting a hunter. Researchers have found HIV in a blood sample collected in 1959 from a man in Kinshasa, Congo.
Genetic analysis of his blood suggested the HIV infection stemmed from a single virus in the late 1940s or early 1950s. For decades at least, the early human infections went unnoticed on a continent where life is harsh and short. Then came the CDC’s reports, marking a defining moment in the perception of the emerging epidemic. In the early days of the epidemic, just the mention of AIDS elicited snickers and jokes. Few saw it as a major threat. It was the “gay plague,” and for some, divine retribution for a lifestyle Christian fundamentalists and other conservatives consider sinful. When heterosexuals began to contract the disease through blood transfusions and other medical procedures, they were often portrayed as “innocent” victims of a disease spread by the immoral behavior of others. Largely because of negative attitudes toward homosexuality, the new illness was largely dismissed, a concern only for scientists and the gay community. Even in scientific circles the malady was called GRID, for gay-related immune deficiency. It took gay activists more than a year to convince scientists to remove “gay” from the disease’s name, since the cause also existed in other at-risk people: hemophiliacs, Haitians, Africans. In 1982, the CDC re-named the disease, and it took the moniker it is known by today: acquired immune deficiency syndrome, or AIDS. During these bleak early years, gay activists stood virtually alone in responding to AIDS. New organizations emerged to provide information and services in the hardest-hit communities. The Gay Men’s Health Crisis, the first community AIDS service provider in the United States, was established in New York City in 1982. In 1983, the National Association of People with AIDS was founded. That same year, a group of HIV-positive demonstrators forcibly took the stage at a U.S. health conference to issue a statement, referred to as The Denver Principles, setting forth the rights of people with AIDS. 
All of this activity occurred below the radar of most Americans. The gay sponsors of these organizations raised their funds, wrote their bylaws, and carried out their missions virtually unaided. Then, in 1985, a fading matinee idol, Rock Hudson, seared the affliction into the collective conscience of America and the world. Hudson announced that he was HIV-positive and fled to Paris in search of treatment. The haunting image of a critically ill Hudson being wheeled on a gurney from his chartered airliner, unsuccessful in his search for an extension of life in Paris, filled television screens and cemented in the nation's psyche the hopelessness of an AIDS diagnosis. "In a heartbeat, a generation of Americans was lost to AIDS," said Joe Solmonese, president of the Human Rights Campaign. But advances in medicine that have made the disease manageable in the developed world haven't reached the rest. AIDS could kill 31 million people in India and 18 million in China by 2025, according to projections by U.N. population researchers. By then in Africa, where the virus has wrought the most devastation, researchers said the toll could reach 100 million. "It is the worst and deadliest epidemic that humankind has ever experienced," Mark Stirling, the director of East and Southern Africa for UNAIDS, said in an interview. Even if new infections stopped immediately, additional African deaths alone would exceed 40 million, Stirling said. Efforts to find an effective vaccine have failed dismally, so far. The International AIDS Vaccine Initiative says 30 are being tested in small-scale trials. More money and more efforts are being poured into prevention campaigns but the efforts are uneven. Globally, just 1 in 5 HIV patients get the drugs they need, according to a recent report by UNAIDS.
Stirling said that despite the advances, the toll over the next 25 years will go far beyond the 34 million thought to have died from the Black Death in 14th century Europe or the 20 to 40 million who perished in the 1918 flu epidemic. AIDS is the leading cause of death in Africa, which has accounted for nearly half of all global AIDS deaths. The epidemic is still growing and its peak could be a decade or more away. In at least seven countries, the U.N. estimates that AIDS has reduced life expectancy to 40 years or less. In Botswana, which has the world's highest infection rate, a child born today can expect to live less than 30 years. Africa's misery hangs like a sword over Asia, Eastern Europe and the Caribbean. Researchers don't expect the infection rates to rival those in Africa. But Asia's population is so big that even low infection rates could easily translate into tens of millions of deaths. Although fewer than 1 percent of its people are infected, India has topped South Africa as the country with the most infections: 5.7 million to 5.5 million, according to UNAIDS. In the early years, too much time, money and effort was spent on the wrong priorities, Stirling said. "Over the last 25 years, the one real weakness was the search for the magic bullet. There is no quick and simple fix," he said. "But with the recent successes we are starting to see the end of the epidemic." The pace of change over the last couple of years suggests the number of new infections can be reduced by 50 to 60 percent by 2020 if the momentum continues. The Associated Press contributed to this report. This article appeared in the Dallas Voice print edition, June 30, 2006.
http://www.dallasvoice.com/a-quarter-century-of-plague-an-epidemic-unending-1021376.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962795
1,577
2.671875
3
How the Common Core changes everything

It’s well established that the Common Core State Standards (CCSS)—adopted in principle by forty-six states—won’t get any real traction unless they’re comprehensively and faithfully implemented at the state and local levels. (They also have implications for federal policy and programs, of course.) But what is comprehensive implementation? True, we’ve heard much palaver about what the Common Core portends for assessment, for teachers’ professional development, and for curricular/instructional materials. All true, all crucial, and all probably the most urgent. But these issues are also just the tip of the CCSS iceberg, most of which remains invisible under water. What I haven’t seen yet is clear recognition that the Common Core, taken seriously, eventually changes everything in American education and that implementation, done right, must be comprehensive. Which means what? Start with a substantial analogy: World War II. A new book profiles General Albert C. Wedemeyer, who was assigned by General Marshall to the Army’s “War Plans Department” as the conflict loomed and (I quote the Wall Street Journal’s book review) “tasked…with reducing America’s mobilization requirements to a single document.” Then FDR asked Wedemeyer’s team to turn it “into a blueprint on how to defeat America’s likely enemies in a future war.” The book explains: Completed in an astonishing 90 days, this plan laid down all the critical politico-military-industrial assumptions for the looming conflict, correctly identifying America’s adversaries and where the main fighting would take place and estimating the industrial capacity needed to feed the war machines of China, the Soviet Union, Great Britain and the United States and how much war materiel could be spared to allies.
Wedemeyer proposed overrunning Germany and Japan with an army of nearly nine million draftees, a number that he concluded would leave sufficient factory workers and farmers back home to feed the troops and keep the tanks, bombers and artillery shells rolling off assembly lines. He also called for an invasion of Europe in 1943, before Germany could strengthen its defenses. The Wedemeyer plan was carried out only in part. (Churchill was convinced that an early invasion of the European continent would end disastrously.) But, dramatic though this may seem, full-on Common Core implementation will demand a plan of similar comprehensiveness and vision. I don’t know who will play the role of Albert Wedemeyer, but maybe we can accelerate the process by offering a provisional table of contents. Here’s my list of topics that the plan should include, submitted with some humility, as I’ve surely overlooked important items (bring ‘em on!), and with some trepidation, as understanding these implications may cause Common Core skeptics to stiffen their resistance.

1. Curriculum guides for teachers
The Common Core sets forth what students need to have learned by the end of each year. It doesn’t help teachers with “scope and sequence,” much less lesson planning. Not all teachers want such help—and there’s much shrieking about “national curriculum”—but some will welcome guidance. (Keep it voluntary!)

2. Textbooks that are truly aligned
Every big ELA- and math-textbook publisher has already declared that its products are “aligned” with the Common Core, but mostly that’s not true. Who is going to apply the excellent publishers’ guidelines produced by Student Achievement Partners and actually rate the textbooks (and fast-proliferating digital resources) on how well they’re aligned?

3. Additional instructional materials
Traditional U.S. textbooks and “reading programs”—bulky, lumbering, and linear—aren’t going to work very well with the instructional demands of the Common Core.
Teachers will need to be able to muster instructional resources from many sources, including electronic ones. (Some excellent nonprofit groups are already at work on materials that will end up being freely available, which also portends a radical transformation of the textbook market!)

4. Curriculum narrowing
How do we keep K-12 education from being whittled down to the two subjects in the Common Core? Yes, “next generation” science standards are in the works. But what about history, geography, civics, languages, and the arts? Health and phys ed? Where do they fit? What standards will apply? How will they be taught and assessed?

5. Professional development
Hundreds of thousands of current teachers need to update, alter, and amplify their own knowledge base and pedagogical arsenal if they’re to succeed in imparting the Common Core to their pupils.

6. Teacher (and principal) preparation
Pretty nearly every teacher-preparation program in the land, whether university-based or “alternative,” will need to revamp its own standards and curriculum if it’s to prepare tomorrow’s instructors to impart the Common Core to their students. Ditto for those who purport to train school leaders. A lot of professors will need to change their ways, too!

7. Teacher evaluations
These will change, too, with implications for everything that is attached to them (tenure decisions, merit pay, layoffs, and more). Will it get harder or easier to make “value-added” calculations at the classroom level once Common Core assessments kick in? What about rubrics for teacher observations? (And how well trained will those observers be in what to look for in a Common Core classroom?)

8. The day and year
I wager that today’s standard school day and year will prove insufficient for many kids to master the Common Core plus everything else that they need to learn. Can instructional time be individualized, too, in school or online? What are the budget implications?

9.
Promotion and graduation requirements
Some states have third-grade “reading guarantees,” but what about the rest of the K-12 sequence? Will going from sixth grade to seventh hinge on a student having mastered the Common Core standards for sixth? What about entering high school? Earning a diploma? Will this continue to be based on Carnegie units and course credits or on actual mastery? (And what about subjects outside ELA and math?) What does Common Core portend for the two dozen or so states with high-school-graduation tests that are pegged to yesterday’s ninth- or tenth-grade expectations?

10. Internal organization of schools
Though the Common Core is built around grade levels, kids don’t learn at the same speed—and individualization of instruction grows ever more important. What about moving kids forward as they master stuff rather than through lock-step progressions? Why can’t one be in third grade for ELA, say, and fourth or fifth for math? How about those who will need five years rather than four to master the challenges of high school? (Today they’re counted as “dropouts” in most states’ statistics!) And since we can no longer afford to individualize by shrinking class size further, we’ll need to rely more on technology—which the new assessments also need—and on more flexible ways of organizing school itself.

11. Early education
Remind yourself what the Common Core expects Kindergartners to learn (review page ten of the ELA standards or page eleven of math standards). Then ask yourself what must a child know and be able to do upon entry into Kindergarten to maximize the odds that she will be ready to succeed there. Then ponder how few of today’s preschool programs (Head Start included) have standards, curricula, and staff that are up to this challenge? And how many of today’s needy pre-Kindergartners don’t even have access to those programs?

12. Technology
This stuff is evolving at warp speed, with profound implications for schooling and for kids’ lives.
Much of it’s about communication and entertainment, but as those realms overlap more with formal education—for good and ill—and as more kids gain 24/7 access to all of them, what will this mean for K-12 schooling? How much of it will actually take place in school? How much will require flesh-and-blood instructors—and of what sorts? And what’s to become of libraries, book rooms, backpacks, and the rest? Picture every school kid with her own iPad in hand…

13. Assessment and accountability
New Common Core assessments are under development for deployment in 2014-15—and let’s hope they turn out well—but for states and districts to make good use of them means rethinking their entire approach to student assessment, right down to the classroom level. How will the end of a “six-week unit,” for example, be assessed? What about those end-of-week vocabulary reviews? Weekly reports to parents on what was and wasn’t learned?

14. Accountability systems
Most state accountability systems incorporate multiple factors, including but not limited to student test scores, which are geared to current state standards and tests. Every state does it differently—and those differences are apt to widen as federal NCLB prescriptions ease with recent waivers and (maybe someday) ESEA reauthorization. States that embrace the Common Core will need to reconstruct their accountability systems, as will districts that have their own.

15. Alternate assessments
Then there’s the GED and other ways of gauging “equivalency” for those who don’t earn a conventional on-schedule diploma. Big changes are afoot there, but will the new tests equate to the Common Core—and redress the long-standing problem of the GED: namely that people possessing it don’t fare much better in life than dropouts?

16. Graduation rates
What happens, politically, when graduation rates plummet and dropout rates soar, at least for a few years? Are states and communities ready for this? Nobody ever is.
But does that mean we’ll “phase in” the more rigorous graduation expectations? How long will that take?

17. Higher education
Once Common Core rigor takes hold (if ever) of high-school-exit expectations, will our universities actually accept that diploma as proof of college readiness? Will it yield automatic admission and placement into credit-bearing college courses? If not, why should K-12 students (and parents and taxpayers) take it seriously? If it does, what happens to faculty members who have been teaching remedial courses? What happens to collegiate English and math classes if entering students are truly prepared? Will that compulsory first-year writing course still be needed?

18. Career education
The Common Core claims to be geared to college and career readiness. We know that not everyone is headed to (or belongs in) college, at least not the four-year kind. But what exactly are the implications for employer expectations, hiring practices, and on-the-job training? (How about the armed forces as a major employer?) How about secondary-level technical-vocational education? Will Common Core expectations make it into those institutions, too? That will likely mean major-league curricular and instructional alteration.

19. NCLB and other federal policy
Everybody knows this but it needs underscoring: When Congress gets around to reauthorizing the Elementary and Secondary Education Act—and other programs such as IDEA, Head Start and TRIO—it must contend with the changed expectations that most states will have for their students and the implications of those changes for the special populations, additional services, and so forth that Uncle Sam focuses on. And it must do so without turning the Common Core itself into a federal mandate. (Remember, four states want no part of it—and at least a few more are apt to back out along the way.) The role of the Nation’s Report Card will evolve, too.
If most states end up using new English language arts and math assessments, calibrated to Common Core standards, at the individual, building, district, and state levels, there will be less cause to press for NAEP (and PISA, TIMSS, etc.) to be administered to everybody. But NAEP will remain the crucial external auditor for Common Core states and those that do their own thing. At the same time, the curricular frameworks that determine what NAEP assesses may need to be re-examined. Yikes. It’s sort of scary. Daunting. Politically and organizationally challenging. Expensive in a time of tight budgets. Disruptive to myriad entrenched institutions and practices. But if we don’t wrap our minds around the totality of it, we may not win this war. Are you listening, General Wedemeyer?
(CNN) -- In designing modern and sustainable buildings in the United Arab Emirates, architects are taking cues from an ancient Arabic design tradition. A high-tech shading system running up the facade of the Al Bahar Towers in Abu Dhabi was inspired by "mashrabiya," latticed screens commonly seen in Islamic architecture that diffuse sunlight and keep buildings cool without blocking light. "Not allowing the sun to land directly on the skin of the building, causing overheating and glare, was a very simple concept," said Abdulmajid Karanouh, the buildings' architect. "And that's why using the mashrabiya, inspired from the past and inspired by nature, was a no-brainer." Wrapping around most of the 25-story buildings' sides, the screens are arranged as an array of repeating geometric patterns and are computer-controlled to respond to the sun's movement, unfolding like an umbrella when the sun hits them. The screens fold closed and the automated mechanism shuts off each day after the sun goes down. The north sides of the buildings never receive direct sunlight and are left unshaded by the screens. In Abu Dhabi, solar rays can heat the outside surface of windows up to 90 degrees Celsius (nearly 200 degrees Fahrenheit). By shielding the glass from the sun, the screens keep the buildings cool, reduce glare and let in diffused natural light. Using this method, the buildings require less artificial lighting and 50% less air conditioning. With the desert sun beating down on the Gulf's cities, solar energy is an important environmental factor. But desert dust and sand make photovoltaic panels less practical than one would expect in this part of the world. Karanouh says even a thin layer of dust can reduce the efficiency of solar panels by nearly half, and proper maintenance means regular cleaning using water jets pumping fresh water, a scarcity in an arid country like the United Arab Emirates.
"You might find that you are spending so much energy to desalinate the water and get it to where it needs to be and then clean the panels, you'll find out that that energy may equate or even exceed the energy that you get out of the photovoltaic panels," he said. In Qatar, the Doha Tower, a striking cylindrical building, was designed along the same lines; it is covered entirely in a latticed screen that uses a multi-layered pattern constructed of aluminum and glass. Both structures were named as best buildings of 2012 by the Chicago-based Council on Tall Buildings and Urban Habitat, which recognizes sustainable architecture. The United Arab Emirates is not normally thought of as a leader in combating climate change -- it has one of the highest levels of carbon dioxide emissions per capita in the world, according to the World Bank -- but Abu Dhabi has in recent years drawn attention for its innovative renewable energy building projects. Most notable is Masdar City, the emirate's much-hyped planned city, which is still under construction. It was originally slated to be carbon neutral but now aims for environmental sustainability. The area has green features like a 10 megawatt solar power plant and a 45-meter-tall wind tower that helps regulate air temperatures in the public square by controlling air movement.
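The article describes the control behavior in words: screens unfold when the sun strikes a facade, fold after sunset, and the permanently shaded north faces stay unshaded. As a rough illustration only (the function name, the cosine-of-incidence rule, and the deployment scaling are all assumptions for this sketch, not details of the towers' actual control system), the logic might look like this:

```python
import math

def screen_open_fraction(sun_elevation_deg, sun_azimuth_deg, facade_azimuth_deg):
    """Return how far a facade's shading screens unfold (0 = folded, 1 = fully open).

    Toy model: screens deploy only while the sun is up and actually
    shining on the facade; deployment scales with the incidence of
    direct sunlight on the facade's outward normal.
    """
    if sun_elevation_deg <= 0:
        return 0.0  # after sunset the automated mechanism shuts off
    # Component of the sun's direction along the facade's outward normal
    delta = math.radians(sun_azimuth_deg - facade_azimuth_deg)
    incidence = math.cos(delta) * math.cos(math.radians(sun_elevation_deg))
    # A facade in shade (sun behind it) keeps its screens folded
    return max(0.0, incidence)
```

With the sun due south, a north-facing facade (azimuth 0 degrees) gets an incidence of zero and stays folded, matching the article's note that the north sides are left unshaded.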
On this page... - Federal Regulations - State Rules - Quality Indicators for Assistive Technology (QIAT) Services - State AT Liaisons - AEA AT Contacts - AT Consideration - Areas of AT - Funding Sources - Frequently Asked Questions Assistive Technology (AT) can help students with disabilities be fully included in the general education classroom or be a powerful intervention tool for young children before entering school. AT helps students who have disabilities learn the material in a way that they can understand it. AT helps eliminate barriers students may face that prevent them from being at the same level as their classmates. AT can be anything from a simple device, such as a magnifying glass, to a complex device, such as a computerized communication system. AT benefits children of all ages, with all types and severities of disabilities. It is key for success in school and future work. Simply put, AT is any device that allows a person with a disability to do what they need or want to do. It can be bought in a store or on-line, it can be home-made or specially designed for a specific person. It can be part of a system of devices. And in some cases, it might be an "off the shelf" device. This would be something like a garage door opener, easily available for persons who are not disabled, but for the person with a disability it is considered AT because it allows them to do something they otherwise would not be able to do. Under the Individuals with Disabilities Education Act (IDEA) Amendments of 1997 and subsequent revisions, the team that develops an individual education program (IEP) for a child must consider whether the child requires assistive technology devices and services.
IDEA defines assistive technology in the following ways: "The term 'assistive technology device' means any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability." "The term 'assistive technology service' means any service that directly assists a child with a disability in the selection, acquisition, or use of an assistive technology device." Such term includes: - The evaluation of the needs of such child, including a functional evaluation of the child in the child's customary environment; - Purchasing, leasing, or otherwise providing for the acquisition of assistive technology devices by such child; - Selecting, designing, fitting, customizing, adapting, applying, maintaining, repairing, or replacing of assistive technology devices; - Coordinating and using other therapies, interventions, or services with assistive technology devices, such as those associated with existing education and rehabilitation plans and programs; - Training or technical assistance for such child, or where appropriate, the family of such child; and - Training or technical assistance for professionals (including individuals providing education and rehabilitation services), employers, or other individuals who provide services to, employ, or are otherwise substantially involved in the major life functions of such child.
281--Iowa Administrative Code 41.6(256B,34CFR300) Assistive technology service. "Assistive technology service" means any service that directly assists a child with a disability in the selection, acquisition, or use of an assistive technology device. The term includes the following: 1. The evaluation of the needs of a child with a disability, including a functional evaluation of the child in the child’s customary environment; 2. Purchasing, leasing, or otherwise providing for the acquisition of assistive technology devices by children with disabilities; 3. Selecting, designing, fitting, customizing, adapting, applying, maintaining, repairing, or replacing assistive technology devices; 4. Coordinating and using other therapies, interventions, or services with assistive technology devices, such as those associated with existing education and rehabilitation plans and programs; 5. Training or technical assistance for a child with a disability or, if appropriate, that child’s family; and 6. Training or technical assistance for professionals (including individuals providing education or rehabilitation services), employers, or other individuals who provide services to, employ, or are otherwise substantially involved in the major life functions of that child. 
QIAT is a nationwide grassroots group that includes hundreds of individuals who provide input into the ongoing process of identifying, disseminating, and implementing a set of widely-applicable Quality Indicators for Assistive Technology Services in School Settings that can be used as a tool to support: - school districts as they strive to develop and provide quality assistive technology services aligned to federal, state and local mandates - assistive technology service providers as they evaluate and constantly improve their services - consumers of assistive technology services as they seek adequate assistive technology services which meet their needs - universities and professional developers as they conduct research and deliver programs that promote the development of the competencies needed to provide quality assistive technology services - policy makers as they attempt to develop judicious and equitable policies related to assistive technology services. The State Assistive Technology Liaisons Team is a group of Assistive Technology professionals representing each of the state's 9 AEAs, large LEAs, the Iowa Program for Assistive Technology, and several of the state's colleges and universities across the state. This team collaborates to address and advise the Department of Education on systemic assistive technology issues and initiatives in Iowa's K-12 schools. The following documents are provided to assist IEP teams in the documentation of the consideration process and to provide local teams information about AT considerations. The use of these documents is optional. AT Consideration SETT - Electronic Version - Use this version to fill out using Microsoft Word. AT Consideration SETT - Print Version - Use this version to print and fill out with a pen or pencil. Activities for Daily Living (ADL) - An example is eating from a scoop dish or drinking from a cut-out cup.
Communication Adaptations (CMA) - An example is Picture Exchange Communication Symbols (PECS) or a communication device. Computer Access (CAC) - An example is an adapted mouse or a switch. Environmental Controls/Access (ECA) - An example is an adapted remote or software to control lights, fans, etc. Hearing (HRG) - An example is an amplified classroom or a written copy of the directions. Learning and Studying Adaptations (LSA) - An example is content materials in an alternate format or highlighters. Math (MAT) - An example is software or a talking calculator. Mobility Adaptations (MOA) - An example is a wheelchair or a walker. Reading (RDG) - An example is text to speech software or a talking dictionary. - Accessible Instructional Materials (AIM) are specialized formats of curricular content that can be used by and with print-disabled learners. They include formats such as Braille, audio, large print, and electronic text. While AIM themselves (the actual specialized formats) are not AT, the use of AIM by a student, with the exception of paper-based embossed Braille and large print, requires AT to "ride on," including refreshable Braille and enlarged text. Visit the True AIM webpages for more information. Seating Adaptations (SEA) - An example is supported seating or a seat belt. Technologies for Vision (TVA) - An example is screen modification or audio books. Written Language Adaptations (WLA) - An example is a pencil grip or word prediction software. Assistive Technology activities are funded by Federal Part B grant dollars provided to the State of Iowa. Iowa COMPASS is Iowa's leading source of information on assistive technology and disability services. There are organizations that will pay for or provide for free assistive technology and home accessibility modifications. Visit the Iowa COMPASS website to find them. Assistive Technology FAQs - This document has questions and answers about Assistive Technology.
Iowa Program for Assistive Technology (IPAT) IPAT is a statewide program of the Center for Disabilities and Development at the University of Iowa. IPAT's goal is to increase access to assistive technology devices and services across all environments: home, school, work and community. IPAT collaborates with the Iowa Department of Education; Bureau of Children, Family and Community and the Area Education Agencies to improve student access to assistive technology and services. To learn about IPAT go to http://www.iowaat.org . To find out about assistive technology devices and services in Iowa, call Iowa COMPASS at 800-779-2001 or TTY 877-686-0032. You can also access Iowa COMPASS on-line at: www.iowacompass.org The Iowa Educators Consortium (IEC) IEC, is an initiative of the Iowa Area Education Agencies. IEC purchases allow schools to take advantage of aggressive pricing based on the purchasing volume of many Iowa schools. In addition to aggressive pricing, the IEC frees valuable LEA staff time in researching and procuring products. Advisory committees work with vendors, manufacturers, product reviews and product literature to determine the best product/cost value for schools. The IEC offers discounted pricing on a wide variety of products. For information specific to Assistive Technology go to http://www.iec-ia.org/ . Click on “Media & Technology” in the left column and then on “Assistive Technology and Special Needs.” Audio Visual and computer equipment are located by clicking on “AV & Computer.” For more information contact: Jerry Cochrane, Coordinator Iowa Educators Consortium 1120 33rd Avenue SW Cedar Rapids, IA 52404 Phone: 319-399-6741, 800-798-9771x6741 IEC Website: www.iec-ia.org Iowa Center for Assistive Technology Education and Research (ICATER) ICATER is an assistive technology resource center located within the University of Iowa College of Education that serves the university, as well as communities throughout the state. 
The Center provides students with disabilities, parents, College of Education students, and education professionals hands-on assistive technology training, information, and materials. ICATER also conducts and collaborates on research projects, resulting in innovative methods and best practices of assistive technology usage. Through these training programs and research projects, ICATER impacts all students with disabilities by providing access to a variety of assistive technology devices, helping them accomplish their educational goals. Please contact us at www.education.uiowa.edu/icater.
The design article comes from the book Practical Applications in Digital Signal Processing by Richard Newbold and will be presented in several parts. The book is published by Prentice Hall and is a massive 1152 pages. An outline of the book including preface and chapter descriptions is provided here. Available in both print and eBook formats. See the Prentice Hall site for more information, or purchase from Amazon.

Chapter 11 Digital Data Locked Loops

In previous chapters of the book Practical Applications in Digital Signal Processing, elastic store memories were utilized to reclock asynchronous input tributary bit streams prior to being multiplexed into a synchronous output tributary. We utilized two levels of telephone multiplex signals to demonstrate the use of the elastic store memory. Specifically, we used elastic store memories to multiplex two asynchronous DS-1 bit streams into a single DS-1C bit stream. Each input bit stream was associated with its own independent bit clock, and the streams were asynchronous to one another. Once the lower level bit streams are multiplexed into a higher level bit stream, all clock information associated with the lower level streams is essentially lost. The problem we have now is, how can we reverse this multiplex (i.e., how can we demultiplex the two DS-1 streams and synthesize a bit clock for each stream that is on average identical to its original clock)? The DS-1/DS-1C example is only one of an infinite number of possible examples. The same question can be asked of any demultiplex processing where the multiplexed tributaries were originally asynchronous to one another. The answer to these questions is to utilize a digital data locked loop (DLL). The DLL is a fairly simple device that uses an elastic store memory to synthesize a bit stream clock and then synchronizes the demultiplexed bit stream with that clock. All this takes place with no prior knowledge of the original clock frequency. DLLs are suited for many applications.
In order to maintain continuity within this book, we will describe and design a DLL that can be used to demultiplex the DS-1C tributary that we discussed in detail in Chapter 10, “Elastic Store Memory.” There is no reason, however, to restrict the usage of a DLL to only telephony applications. The DLL we describe in this chapter can be considered a base model that with a few modifications can be used for many other applications as well.

11.1 Digital Data Locked Loop Design

To help us better understand the design of the DLL, we need to have an overall picture of the type of signals and the functional path of the signals we will be processing. For this reason we will utilize the DLL in a simple bit stream demultiplexer to synthesize a bit clock and resynchronize the recovered bit stream. The functional blocks of a demultiplexer are illustrated in Figure 11.1. This book is only concerned with the shaded blocks in this figure. These blocks are the ones that utilize the DLL to synthesize and resync the recovered bit streams. It is not the intention of this book to discuss all the other processing that goes on within a demultiplexer, but we will need to briefly describe the format of the demultiplexed signals that serve as inputs to the DLL. For this reason, we will briefly explain the end-to-end signal flow. The tributary demultiplex block receives the high-level multiplex input bit stream and then demultiplexes the bit streams associated with each tributary. At the output of the tributary demultiplex, the bit stream is accompanied by a gated clock that is used to indicate the existence of a valid tributary bit. You can envision the gated clock as the high-rate input clock with missing teeth.

Figure 11.1 Simplified demultiplexer block diagram

As shown in Figure 11.1, the clock teeth are present whenever a new information bit is recovered by the demultiplexer.
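To make the gated-clock idea concrete, here is a small Python sketch (not from the book; the slot map and function are invented for illustration). It pulls one tributary out of an interleaved stream and records which high-rate ticks still carry a clock "tooth" — note the irregular spacing of the surviving teeth, which is exactly why the recovered stream must be reclocked before hand-off:

```python
def demux_tributary(multiplexed_bits, slot_map, tributary_id):
    """Recover one tributary from a multiplexed stream via a 'gated clock'.

    multiplexed_bits : sequence of bits at the high line rate
    slot_map         : same length; a tributary id for payload slots,
                       None for overhead bits
    Returns (bits, tick_indices): the tributary's bits plus the positions
    of the clock teeth that survive the gating.
    """
    bits, ticks = [], []
    for i, (b, owner) in enumerate(zip(multiplexed_bits, slot_map)):
        if owner == tributary_id:   # a clock tooth is present for this slot
            bits.append(b)
            ticks.append(i)
    return bits, ticks
```

Running this on a toy five-slot frame where slots 0 and 3 belong to tributary 0 yields teeth at ticks 0 and 3 — a gap of three high-rate periods followed by shorter ones, just as the "missing teeth" picture suggests.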
Clock teeth are missing whenever the demultiplexer is off processing an overhead bit or a bit from another embedded tributary, and the corresponding bit periods shrink and expand accordingly. Clearly this is not the format and timing we would like a recovered bit stream to have before handing it off to any external processors. Instead, we would rather synthesize a valid 50% duty cycle bit clock and then synchronize the recovered bit stream to that clock.

We will use this chapter to design a DLL architecture that is relevant to a real-world digital signal processing (DSP) application. Since the reader is already familiar with the real-world DS-1 and DS-1C signals from the previous chapter, we will use these signals as the input and output of our DLL-based demultiplexer. The input to the demultiplexer is a DS-1C, which carries two DS-1 tributaries. The tributary demultiplex block in Figure 11.1 outputs the gated-clock version of both tributaries. The block diagram shows that we select only one tributary for further processing; this is sufficient for our discussion and development of the DLL architecture, and enhancing the design to process multiple bit streams is straightforward.

The selected bit stream and its associated gated clock are fed to the DLL block, where the loop synthesizes a bit clock from the input bit stream and uses this clock to strobe the bit stream out of the DLL. The time-aligned, newly formatted bit stream and the 50% duty cycle bit clock output from the DLL circuit are illustrated in Figure 11.1.

The reader should remember that we have no idea what the frequency of the original tributary bit clock was. All we know is that it must lie within ±77.2 Hz of the 1.544 MHz center frequency, and even then the original clock may have been drifting over time between the two limits. The clock that our DLL synthesizes must, on average, match the frequency of the original clock, and it should track its drift over time.
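To make the gated-clock idea concrete, here is a small Python sketch. It is not from the book: the frame layout (every fifth bit is overhead, information bits alternating between two tributaries) and the clock rate used in the example are invented assumptions for illustration, not the real DS-1C format. Each recovered tributary arrives with a "missing teeth" clock, and the average rate of those teeth is what the DLL's synthesized clock must match.

```python
def demultiplex_gated(ds1c_bits, tributary):
    """Toy tributary demultiplex. Every 5th high-rate bit is treated as
    overhead, and the remaining information bits alternate between
    tributary 0 and tributary 1 (an invented layout, NOT real DS-1C).
    Returns (bit, tick) pairs: the 'gated clock' has a tooth only at
    the ticks where this tributary produced an information bit."""
    teeth = []
    info_count = 0
    for tick, bit in enumerate(ds1c_bits):
        if tick % 5 == 4:              # overhead bit: no clock tooth at all
            continue
        if info_count % 2 == tributary:
            teeth.append((bit, tick))  # a tooth of this tributary's gated clock
        info_count += 1
    return teeth

def average_gated_rate(teeth, n_ticks, f_high):
    """Average rate of a gated clock, in Hz: the frequency the DLL's
    synthesized 50% duty cycle clock must match on average."""
    return len(teeth) / n_ticks * f_high

bits = [i % 2 for i in range(100)]     # a toy 100-bit high-rate stream
trib0 = demultiplex_gated(bits, 0)     # 40 irregularly spaced teeth
```

Running this on the 100-bit toy stream, tributary 0's teeth fall at ticks 0, 2, 5, 7, 10, ... — the gaps at ticks 1, 3, 4, and so on are the "missing teeth" where the demultiplexer was off processing other bits. For a hypothetical 1 kHz high-rate clock, `average_gated_rate` reports a 400 Hz average tributary rate, which is the frequency an elastic store plus synthesized read clock must settle to.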
Found 0 - 10 results of 36 programs matching keyword "rocks"

Mechanical Engineer Armen Toorian explains that the wheel tracks of the Mars rover Curiosity are used to determine how far the rover has travelled on the red planet.

Lead Curiosity Driver Matt Heverly and Research Scientist Bethany Ehlmann elaborate on the unusual working conditions involved with a Mars rover expedition.

Research Scientist Bethany Ehlmann and Mechanical Designer Scott McGinley explain some of the scientific instruments aboard the Mars rover Curiosity.

Engineers at the Jet Propulsion Laboratory (JPL) explain how they simulate martian conditions and conduct tests with model rovers to prepare the Curiosity rover for its journey to Mars and its work on the red planet.

A glimpse of the full-scale model of the Mars rover, Curiosity, on display at the Exploratorium from August 1st to September 16, 2012. This model is on loan from JPL, NASA's Jet Propulsion Laboratory, and there are only two on loan in the United States!

The ground under our San Franciscan feet is constantly on the move. Join Exploratorium educator Ken Finn as we visit some spots around town where exposed rocks reveal the tale of an active earth.

In this video interview from Greenland, geologist Tom Neumann from the University of Vermont explains how he and his colleagues are attempting to read the history of the Greenland Ice Sheet by collecting and analyzing rocks spit out from the base of the glacier.

On his team’s first day out in the field near Kangerlussuaq, geomorphologist Paul Bierman of the University of Vermont explains what kind of rocks they look for to help determine the last time Greenland was free of ice.

The two Mars Rovers are alive and well after surviving their second Martian winter. Come and see photos of discoveries they made during their third year on Mars, with Exploratorium Senior Scientist Paul Doherty.

In this zany competition, teachers will have ten minutes to create a science activity from a special secret ingredient. This week: rocks!
US 5182793 A

An apparatus and method for assisting persons in making decisions, using a computer programmed with artificial intelligence techniques. Real world objects and events pertaining to a particular domain are represented in a knowledge base. Best choices for solving problems are made according to the application of rules, which may be applied absolutely, comparatively, by weight, or ordered, according to methods selected by the user. The invention also permits the user to select from among various decision making strategies and permits the user to observe the effects of choices in hypothetical scenarios.

1. In a computer system having a stored system of rules and a knowledge base of facts, events, and programming, an improved method for using said computer to display a problem scenario and modify the scenario according to a best choice among alternative choices in a particular solution domain, comprising the steps of:

representing a real world problem to be solved as at least one data object;

generating a problem scenario from a number of data objects, wherein said objects are arranged as a symbolic spreadsheet in which cells of said spreadsheet are frames representing said objects and their attributes and wherein said cells are linked by relationship attributes of said objects;

displaying said problem scenario to a user;

generating a pool of candidate solutions to said problem scenario;

determining a candidate selection strategy for evaluating said candidates, wherein said strategy defines a method for applying rules;

evaluating said candidates, using said candidate selection strategy to access and apply a combination of hard elimination rules, soft elimination rules, and comparison rules from a stored system of rules, to each of said candidate solutions until a best candidate is selected;

assigning a solution value representing said best candidate to variables of said problem scenario;

modifying said scenario to substitute said solution value, such that a new scenario reflecting the effects of said solution value is generated, wherein all objects having attributes affected by said solution value are modified; and

displaying said modified scenario to a user.

2. The method of claim 1, wherein said candidate generating step further comprises updating the characteristics of said candidates in accordance with predetermined data or user input.

3. The method of claim 1, wherein said strategy is selected in accordance with user input.

4. The method of claim 1, wherein said evaluating step comprises evaluating said candidates using a hierarchical method.

5. The method of claim 1, wherein said evaluating step comprises evaluating said candidates using a rule weighting method.

6. The method of claim 5, wherein said rule weighting method includes a step for short circuiting said evaluation step by determining that a current best candidate cannot be surpassed.

7. The method of claim 29, wherein said rule weighting method learns best candidate selection by adjusting said rules.

8. The method of claim 1, wherein said candidate evaluating step further comprises using soft elimination rules.

9. The method of claim 1, wherein said candidate evaluating step further comprises using comparison rules.

10. The method of claim 1, wherein said candidate evaluating step further comprises using scoring rules.

11. The method of claim 1, wherein said candidate evaluating step further comprises using stop rules.

12. The method of claim 1, and further comprising the step of using said best choice in a final action.

13. The method of claim 1, wherein said candidate generating step further comprises updating the characteristics of said alternative choices in accordance with predetermined data or user input.

14. The method of claim 1, wherein said computer provides more than one method of applying said rules during said step of evaluating.

This application is a continuation of application Ser. No. 07/821,234, filed Jan.
9, 1992, now abandoned, which is a continuation of application Ser. No. 07/373,420, filed Jun. 30, 1989, now abandoned.

This invention relates in general to computer processing, and in particular to an apparatus and method, using artificial intelligence programming techniques, for assisting human users in decision making. Although there is no consensus on a definition of "artificial intelligence", it is sometimes generally defined as a computer programming style in which programs operate on data according to rules to solve problems. Artificial intelligence involves the use of symbolic, as opposed to numeric, representations of data. Using computer processing to relate these symbolic representations is referred to as "symbolic processing", and permits computers to represent real world objects in the form of symbols and to then develop associations between those symbols.

A feature common to artificial intelligence programs is that they all involve knowledge, and must represent knowledge in a manner that can be used by a computer. Specific applications of artificial intelligence, including those using symbolic processing, are associated with knowledge bases. A knowledge base for a particular application includes facts about the application and rules for applying those facts, i.e., declarative and procedural knowledge relevant to the domain. The "facts" of a knowledge base may include objects, events, and relationships. To develop useful knowledge bases, the computer industry has recognized a need to combine the efforts of both software engineers and experts in the particular domain. Generally, the software engineer develops the expert system, and the domain expert provides information for the knowledge base. However, even this approach to creating knowledge bases ignores the expertise of the user, who may have his or her own skills to add to the decision making process.
Thus, there is a need for a knowledge-based system that permits the skills of the user to contribute to the knowledge base. One application of artificial intelligence is decision support for human users, especially in the form of modeling a particular real world or hypothetical operation. The operation's domain includes all objects, events, and relationships that affect behavior within the operation. Yet, many existing systems are relatively inflexible, and rely on rule-based inference engines. These systems do not compare favorably to the ability of human intelligence to make decisions on what rules apply and how to apply them. There is a need for improved methods for applying rules. One aspect of the invention is an apparatus for aiding human users in making decisions relative to events in a particular domain of operations. The invention may be embodied in a computer system, which has a stored knowledge base, and in which the user interacts with a decision processor subsystem. Features of the apparatus include assisting a user in choosing among alternative choices relevant to the domain. The apparatus permits the user to select from a number of types of rules and other data to develop a choice method. The apparatus also permits the user to develop a strategy that includes a set of parameter values and methods designed for a particular set of choices. The invention is implemented with programming that uses artificial intelligence techniques, including object oriented programming. For this reason, a decision processor subsystem may be sufficiently generalized such that it has modules that may be used in a number of different computer systems. Thus, an aspect of the invention is a processor apparatus that is programmed to assist the user in choosing among alternative actions. Another aspect of the invention is a method of programming a computer to assist a user in selecting among possible alternatives. 
Specific features of the programming include representations of various types of data relevant to the selection process and various functions to implement the rules. The programming provides a number of different rules, and permits rules to be applied in varying ways. The programming also permits a multiplicity of methods for applying rules and enables the user to choose a desired method. Another aspect of the invention is a method of using a computer to select a best choice among alternatives in a particular domain. Features of the method include applying different rules, applying rules in different ways, selecting among a multiplicity of methods to make a choice, and adopting strategies for decision making. A further feature of the invention permits decisions to be made in the context of hypothetical scenarios. A technical advantage of the invention is that computer-aided decision making is accomplished in a manner that more nearly approximates human decision making. Rules may be applied non-absolutely and comparatively. A further advantage of the invention is that it allows a person to combine his or her expertise interactively with a computer system that has artificial intelligence capabilities. A still further advantage of the invention is that it combines the ability of a computer to determine the effects of events with its ability to make choices relevant to those events. The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, is best understood by reference to the following description of illustrative embodiments, read in conjunction with the accompanying drawings. FIG. 1 is a block diagram of a computer system in accordance with the invention. FIG. 2 illustrates a method for creating a program architecture for programming a computer in accordance with the present invention. FIG. 
3 is a flowchart illustrating a method of computer-aided decision making in accordance with the present invention. FIGS. 4A and 4B are flowcharts illustrating, in further detail, steps of the evaluation step of FIG. 3. FIG. 5 is a flowchart illustrating a method of computer-aided decision making, using a strategy generated by said computer. FIG. 6 is a flowchart illustrating a learning routine used with the method of FIG. 4B. FIG. 7 is a flowchart illustrating a method of computer-aided decision making, using a scenario generated by said computer.

FIG. 1 represents an apparatus constructed in accordance with the present invention. In one embodiment, the invention is part of a computer network, which includes a host computer 10 and a number of stations, each in communication with host 10 by means of a network communication system 15. An example of an application of the apparatus is a computer network for monitoring an airline flight operation, in which the domain of the operation includes all events affecting flight schedules. As will be explained below, the system provides real time support for enabling a user to comprehend the scope of a problem, observe the details of problem side effects, generate multiple possibilities for improving the situation, and evaluate alternatives for improvement. The use of this specific application is exemplary, and the apparatus and method of this invention could be used for any number of other applications in many diverse areas.

A typical station is identified with the reference numeral 20. Used in a network, each station 20 includes a terminal emulator to enable it to interact with host 10. However, it is not necessary to the invention that such a network be used, and the station may be a stand-alone processing unit.
Whether used as a terminal in a network or as a stand alone processing unit, station 20 has a number of components, including a transaction interface 30, a user interface 40, a knowledge base 50, and a decision processor system 60. These components deliver and receive information to and from each other by means of a bus 17 and other communication means. In addition, station 20 has other components (not shown) typical of a data processing terminal or stand-alone unit commonly in use. For example, memory other than that used for knowledge base 50 stores data and programming, and a timer (not shown) provides timing functions. Transaction interface 30 permits data used by station 20 to be kept current with events that occur in the domain of the operation. Thus, transaction interface 30 is in communication with host 10 as well as with knowledge base 50. Of course, if station 20 is a stand-alone processing unit, transaction interface 30 will be in communication with some other input device and may be combined with user interface 40. Transaction interface 30 is also in communication with decision processor system 60, so that information based on decisions made using the invention can be channeled from station 20 to another station or to an output device. The hardware associated with transaction interface 30 may be any one of a number of well-known input/output and other peripheral devices designed for the functions herein described. User interface 40 provides access for a user to the functions of the invention. User interface 40 allows input through a keyboard or other input device, and displays for permitting the user to interact with the invention. User interface 40 is also in communication with decision processor system 60. The hardware associated with user interface 40 may be any number of well-known input/output and other peripheral devices. Knowledge base 50 contains the data necessary to perform the different functions of the system. 
The hardware associated with knowledge base 50 may be any memory device for electronically storing information, such as a digital storage device. Knowledge base 50 is conceptually similar to a data base of a standard data processing system, except that it contains a number of artificial intelligence structures to enable decision making. More specifically, knowledge base 50 is arranged as a semantic network of frames. In general, a frame is a knowledge representation structure that represents "objects", i.e., physical items, facts, and events, in the real world as groups of attributes. A frame contains slots, each of which can have a value. As explained below, these values can include programs to be executed. In a system of frames, frames may inherit values from other frames.

The frames within knowledge base 50 may include reasoning programming, which is more accurately described as "application behavior code." This programming is object-oriented, which means that information is oriented around the objects that the programming manipulates. Objects in knowledge base 50 may "behave" and thereby cause data and relationships between objects in the knowledge base to change. To effect such behavior, the programming makes use of demons: programs that are invoked when certain data elements in frames are accessed and that determine what to do when certain conditions arise. The programming within knowledge base 50 permits objects to be represented and their interrelationships to be manipulated in a manner analogous to cells of a numeric spreadsheet. This feature is explained in more detail below, in connection with effects processor 62.

Decision processor system 60 includes two subsystems, an effects processor 62 and a strategy processor 66, each associated with special programming to accomplish the functions described below.
The hardware associated with each processor may be any one of a number of well-known devices capable of executing computer instructions, such as a microprocessor.

Effects processor 62 embodies the concept of a "symbolic spreadsheet". The symbolic spreadsheet is analogous to the numeric spreadsheets in common use today, in which numbers or formulas are assigned to cells, and the user can change a value in one cell and immediately see the effect on the values of other cells. In contrast to numeric spreadsheets, effects processor 62 uses frames as cells to symbolically represent components of a particular domain. Values, complex entities, or even programs may be assigned to cells. The frames are linked by means of descriptions of their relationships. When a change in the domain occurs or when a hypothetical event is proposed by a user, the programming determines the effect of those changes on other aspects of the operation. A term used to refer to this type of programming is "constraint propagation."

For the airline flight operations example, cells represent objects, such as airplanes, crews, and airports, and events that affect them, such as flight delays, airport closings, or maintenance delays. When the system receives an event, effects processor 62 determines its effects and updates knowledge base 50. A feature of effects processor 62 is that it is programmed to permit the user to create and examine scenarios, i.e., sets of hypothetical changes. A scenario is generated by knowledge base 50, such that the real world is replaced by the scenario while the real data of the knowledge base is maintained. Thus, effects processor 62 operates either in response to input from knowledge base 50 or to input from user-created scenarios.

A second subsystem of decision processing system 60 is the strategy processor 66.
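Before turning to the strategy processor, the frame-and-demon propagation just described can be sketched in a few lines of Python. This is an illustrative sketch only: the patent's implementation (in LISP) is not shown here, and the airline objects, slot names, and 30-minute turnaround figure below are hypothetical.

```python
class Frame:
    """Minimal frame: named slots plus 'demons' -- callbacks fired when a
    slot changes, used to propagate effects to related frames."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)
        self.demons = {}                  # slot name -> list of callbacks

    def on_change(self, slot, fn):
        self.demons.setdefault(slot, []).append(fn)

    def put(self, slot, value):
        self.slots[slot] = value
        for fn in self.demons.get(slot, ()):  # run demons on the changed slot
            fn(value)

# Hypothetical scenario: delaying one flight's arrival pushes back the
# departure of the connecting flight that shares its aircraft.
flight1 = Frame("AA100", arrival=900)     # times in minutes past midnight
flight2 = Frame("AA200", departure=930)
MIN_TURNAROUND = 30
flight1.on_change(
    "arrival",
    lambda t: flight2.put("departure",
                          max(flight2.slots["departure"], t + MIN_TURNAROUND)))

flight1.put("arrival", 945)               # a 45-minute delay event arrives
```

Changing one "cell" (the arrival slot) immediately updates every linked cell, which is the symbolic-spreadsheet behavior attributed to effects processor 62; under this sketch, a hypothetical scenario would simply be a copy of the frames on which such events are tried out without disturbing the real data.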
The programming of strategy processor 66 permits several basic functions: solution of "best choice" problems using a particular method, variation of methods, and variation of strategies. Strategy processor 66 may operate with or without effects processor 62. In other words, neither the programming nor the hardware associated with strategy processor 66 is dependent on effects processor 62. If station 20 has no effects processor 62, communication among the system components, such as knowledge base 50 and user interface 40, may be directly with strategy processor 66 by means of bus 17 or other communication means. Furthermore, strategy processor 66 may be programmed in a manner that is independent of other representations or applications and requires no other files. This feature is described below in connection with FIG. 2. The functions performed by strategy processor 66 are described below in connection with FIGS. 3-6. Strategy processor 66 and effects processor 62 are in communication with both user interface 40 and with each other. They may thereby interact with the user and with each other, to permit the user to determine effects of events depending on a particular choice. This interrelationship is discussed in further detail in connection with FIG. 7. Referring now to FIG. 2, another aspect of this invention is the method by which a computer may be programmed to carry out the functions of strategy processor 66. In the preferred embodiment, the programming is expressed in LISP, a well-known computer language amenable to symbolic processing. The features of LISP are described in a number of commercially available publications, with a distinctive feature being the use of lists as a data structure. Yet, the invention could be implemented with other programming languages, with the primary characteristics of the programming being that it be capable of expressing what to do with the data and rules described below, such that the same functional results are achieved. 
The terms "functions" and "data types" are associated with LISP, and other programming languages may use different terms for similar programming structures. As indicated in FIG. 2, the programming of the invention is directed to functions and data types that will represent a decision making process at a number of levels. At one level, the decision may be the result of a method selected from alternatives provided by the functions and data types. At another level, a set of methods may comprise a strategy for making particular choices. Yet, a feature of the invention is that the selections permitted at each level, i.e., choice methods and strategies, may be interactive or programmatic. In other words, the user can make the selections of methods and strategies, or the selections can be the result of programming. In the latter case, the inherent learning and heuristic capabilities of artificial intelligence can be used. Thus, the "user" could be either a person operating the equipment or a program that exchanges information with another.

Furthermore, not all of the data types and functions described below are essential to operation of the invention in its simplest form, and with simple modifications, the invention may accomplish the function of assisting a user in making decisions with some, and not necessarily all, of the data types and functions described herein. For example, the decision making function can be used without any selecting methods. It is not necessary to the invention that there be more than one method of decision making or that there be more than one strategy. The strategy programming can be used for managing a set of values only, with no associated choice methods. Thus, in the programming of FIG. 2, the data types and functions associated with defining multiple methods and multiple strategies are not essential to the invention, and the programming is easily modified to operate without them. As indicated in FIG.
2, in general, the programming steps are defining functions and creating data types. An additional step is organizing these data types and functions into a programming structure to control the application of the functions and access to data. FIG. 2 is directed to the architectural features of the invention, with functional features and their implementation being further explained in connection with FIGS. 3-7.

Candidate 200 is a data type that represents the candidates involved in a selection process. Candidate 200 is associated with functions from Candidate Move 210, and a data type, Rules 205. Candidate 200 is created and maintained by Candidate Generator 201.

Candidate Generator 201 is a data type that specifies how the candidate pool is created, maintained, and accessed. Its internal structure is a list of the generator name, data, and functions from Candidate Move 210. The data in Candidate Generator 201 may range from a current candidate pool to a set of functions for determining candidates. Candidate Generator 201 uses functions defined as Candidate Move 210 to create and update the candidate pool and to produce each subsequent candidate on demand. Candidate Generator 201 permits more than one type of candidate generation; thus each Candidate Generator is identified with a name. Candidate Generator is an element of Method 204. Choose 230 determines whether more than one generator will be used during a decision making process, as specified by the current method.

Candidate Move 210 is a set of related user-defined functions that provides an initial pool of candidates or some user-defined data structure representing the information required to produce each subsequent candidate on demand. Candidate Move also provides the means to extract a single candidate from the candidate pool and a method for reducing the candidate pool after a single candidate has been extracted.
Candidate Move 210 is created by Define Candidate Move 227, a system function, and is accessed by Define Candidate Generator 225, a system function.

State 202, a data type, stores runtime information about a candidate, i.e., its "state" information, both system and user generated. The system created information includes the number assigned to the candidate relative to the current candidate pool and the name of the generator that produced the candidate. State is updated by a system function, Put State 226, and is accessible as a parameter from within Rules 205 or Final Action 214. State Update 211 is a user-defined function that updates a candidate's state information, using the system function Put State 226. State Update 211 is created by the system function Define State Update 228, and is accessed as an element of Method 204.

The preferred embodiment of the invention includes a data type, Global Context 203, which represents information bearing on the best candidate that is not a property of individual candidates. It is a list of datum keyword and value pairs, initialized by an argument to Choose 230. It is accessed by a Get system function and updated with a Put system function. Global Context Update 212 is a user-defined function that makes changes and additions to the global list, using the Get and Put system function 220 referred to in the preceding paragraph. It is created by Define 231, a system defined function, and is accessed as an element of choice method in Method 204.

Method 204, a data type, represents information from which a method for making a choice in a particular way is selected. The internal structure of Method 204 is comprised of a number of slots, including at least one of each of the following: a choice name, a method name, a global update function, a candidate generator, a state update function, rules, final action function, rule weights, soft violation inclusion option, scaling option, and approach.
The information in these slots is associated with the data types and functions described herein. Rule weights, soft violation inclusion, and the scaling option are discussed below in connection with FIGS. 3-5. Method 204 permits choices to be made using one of a multiplicity of possible methods.

Rules 205 is a data type comprised of at least one set of rules. In the preferred embodiment there are a number of types of rules, including elimination rules, comparison rules, scoring rules, and stop rules. Rules are represented by functions, which are created and maintained by a Define system function 222. Thus, the rules within Rules 205 may include functional programming. Basic characteristics of each type of rule are described immediately below, with functional characteristics described in connection with FIGS. 3-7. Rules are an element of Method 204 and return certain values, as explained below.

Elimination Rules 215 is a function that determines whether a candidate can be eliminated from the selection process based upon some aspect of that candidate. Elimination Rules 215 accepts a candidate and that candidate's current state information list as arguments and returns a non-nil value if the candidate is to be eliminated from consideration; otherwise it returns a nil value. The rules in Elimination Rules may include both "soft" and "hard" elimination rules, which are used differently in Choose 230. Hard elimination rules represent insurmountable constraints. Soft elimination rules represent constraints that are not absolute and may be balanced against other considerations.

Comparison Rules 216 is a function that determines which of two candidates is "better" or whether they are "equal". The rules in Comparison Rules 216 express opinions or preferences, which may conflict. Comparison Rules 216 accepts two candidates and the corresponding current state information list for each candidate as arguments.
The return value of Comparison Rules 216 is a keyword, which depends on whether the rule determines that the first candidate is better, that the second candidate is better, that it cannot make a determination which is better, or that it does not apply. Comparison Rules 216 may also return a tentative result, and optionally, the numeric strength of the tentative result. Scoring Rules 217 is a function that gives a candidate a numeric weight for comparison with other candidates, based upon some characteristic of that candidate. Scoring Rules 217 is used in one of the approaches of Choose 230, as explained below. Scoring Rules 217 accepts a candidate and the corresponding current state information list for that candidate as arguments and returns a number to be used as the rank weight for the candidate. Stop Rules 218 is a function that determines if Choose 230 may be halted prematurely. Stop Rules 218 accepts as an argument a keyword describing the current situation, i.e., whether the current candidate is the new best candidate or whether the Candidate Generator has stopped producing candidates. Depending on the current situation, Stop Rules 218 also accepts as a keyword either the current candidate or the current best candidate if the generator has stopped producing candidates, and the state information list of the aforementioned candidate. Stop Rules 218 returns whether or not to stop the decision making process. Although not shown in FIG. 2, another feature of the programming method of the invention is Learn, a system function for automatically adjusting rule weights. The characteristics of Learn are explained in connection with FIG. 6. Choose 230 is a system function that picks the best candidate. Its arguments include: a method, unless the method is determined by a strategy, a candidate pool, and global context information. Choose 230 includes a number of algorithms that embody different choice approaches, which are specified in Method 204. 
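The interplay of hard elimination, soft elimination, and scoring rules described above can be illustrated with a short Python sketch. This is an interpretation of the rule types for illustration, not the patented implementation; the aircraft data, rule weights, and the `choose` signature are all invented for the example.

```python
def choose(candidates, hard_rules, soft_rules, scoring_rules):
    """Weighted-choose sketch: hard rules eliminate a candidate outright,
    soft rules subtract their weight when violated, scoring rules add to
    the candidate's rank weight; the highest net score wins."""
    best, best_score = None, None
    for cand in candidates:
        if any(rule(cand) for rule in hard_rules):
            continue                       # insurmountable constraint
        score = sum(fn(cand) for fn in scoring_rules)
        score -= sum(w for rule, w in soft_rules if rule(cand))
        if best_score is None or score > best_score:
            best, best_score = cand, score
    return best

# Hypothetical example: pick a spare aircraft for a delayed flight.
aircraft = [
    {"id": "N1", "seats": 120, "miles_to_gate": 0,   "in_maintenance": True},
    {"id": "N2", "seats": 150, "miles_to_gate": 500, "in_maintenance": False},
    {"id": "N3", "seats": 100, "miles_to_gate": 0,   "in_maintenance": False},
]
hard  = [lambda a: a["in_maintenance"]]            # cannot fly at all
soft  = [(lambda a: a["miles_to_gate"] > 0, 40)]   # ferrying is costly, not fatal
score = [lambda a: a["seats"] / 10]                # prefer capacity

best = choose(aircraft, hard, soft, score)
```

Here N1 is eliminated outright, and N2's capacity advantage is outweighed by its soft-rule penalty, so N3 wins. A hierarchical approach would instead apply the rule sets in strict order; the weighted form shown lets a strong score outweigh a soft violation, which is the "balanced against other considerations" behavior described for soft elimination rules.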
Two such approaches, a hierarchical method and a weighted choose approach, are explained in connection with FIGS. 4A and 4B. Final Action 214 is a user defined function that enhances the decision making process by doing something with a candidate chosen as best. Its inputs are the best candidate and the state information list of that candidate. The returned value is a value returned from a call to Choose 230. Final Action 214 is created by a Define system function 231 and is an element of Method 204. There may be more than one action in Final Action 214, and final actions may be applied in sequence. Strategy 206 is a data type that names a given situation in which various choices are to be made and designates the strategy parameters and choice methods pertinent to the situation. For a given choice making situation, Strategy 206 designates which method is to be used to make a best choice. Its internal structure includes a parameter name/value pair list and a choice/method pair list. Strategy is created by Define Strategy 223, a system function, which is accessed by a number of other system functions. Strategy Parameters 208 is a user defined data type, accessed by Rules 205, such that rules behave differently depending on strategy parameter values. Strategy Parameters 208 is created by Define Strategy Parameters 248, a system function. The values within Strategy Parameters can be obtained interactively from a user or from another program. The strategy feature is further discussed in connection with FIG. 5. FIGS. 3-7 illustrate another aspect of the invention, which is a method for using a computer to assist a user in selecting a best choice among alternative choices when making a decision. Essentially, FIGS. 3, 4A, and 4B illustrate the method of making a best choice, given a particular choice method. FIGS. 5 and 6 illustrate the method of the invention using strategy selection and learning features. FIG.
7 illustrates a method of the invention using a scenario feature. These methods can be implemented with the functions and data types discussed in connection with FIG. 2. As indicated above, the "begin" stage of each method can be initiated by a user and the method can be executed interactively, or the method can be programmatic and called or used (or both) by other programming. The method of FIG. 3 assumes that there exists a pool of candidates. Step 300 is updating the characteristics of these candidates. A programming structure for implementing Step 300 is the Global Context Update 212. Step 302 is executing a candidate generation procedure of the computer, such that candidates may be evaluated singularly or in multiples for comparison purposes, during the other steps of the method. Candidate generation can range from a simple process that provides the initial pool of candidates as a simple list and extracts elements from the list one at a time, to a more complex process in which candidates are generated on demand using a data structure that represents the candidate pool. The generation of each candidate may be a computationally iterative process, so only as many candidates as required are generated. This latter process can be implemented with the aid of stop rules. However, for some choose options, the entire list of candidates must be generated to do a complete sort of preferred choices or perform the full weighted choose explained below. Although not indicated in FIG. 3, Step 302 may be repeated, i.e., there may be more than one candidate generation. Programming for implementing Step 302 is described in connection with FIG. 2, in particular, Candidate Generator 201, Candidate Move 210, and Choose 230. Whether or not there will be more than one candidate generation is determined by Choose 230. Step 304 is executing a procedure for selecting a best candidate for resolving the decision.
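The on-demand candidate generation of Step 302 maps naturally onto a lazy generator: candidates are produced one at a time, so a stop rule can halt the process before the whole pool is materialized. This is a minimal sketch under assumed data (integer candidates, a made-up "good enough" stop rule), not the patent's Candidate Generator 201 itself.

```python
# Sketch of on-demand candidate generation (Step 302): candidates are
# yielded lazily, so stop rules can halt before the pool is exhausted.

def candidate_generator(pool):
    """Simplest form: extract elements from an initial list one at a time."""
    for candidate in pool:
        yield candidate

def choose_first_acceptable(gen, stop_rule):
    """Consume candidates only until a stop rule fires."""
    examined = []
    for cand in gen:
        examined.append(cand)
        if stop_rule(cand):
            return cand, examined
    return None, examined

best, seen = choose_first_acceptable(
    candidate_generator([3, 7, 12, 20]),
    stop_rule=lambda c: c >= 10,   # hypothetical "good enough" rule
)
# Only three candidates were generated; 20 was never produced.
```

As the text notes, some choose options (a complete sort, the full weighted choose) defeat this laziness because they require the entire pool.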
Within Step 304 are alternative approaches, which are described below in connection with FIGS. 4A and 4B. An example of programming for implementing Step 304 is the function Choose 230. Step 305 is determining whether there are any final action functions, such as Final Action 214. If there are, Step 306 is performing the final action. If there are no final actions, Step 308 returns the best candidate to the user. FIGS. 4A and 4B illustrate substeps of Step 304. As will be explained in connection with each figure, the invention provides two choice approaches: a hierarchical approach and a weighted choose approach. Regardless of the approach, it is assumed that a pool of candidates with updated information is available from the data types and functions described above. Furthermore, if rules are to have rule weights, the rules are sorted accordingly. Both approaches include steps involving the application of hard and soft elimination rules. If a hard elimination rule fails for a candidate, the candidate is eliminated and the next candidate is considered. Soft elimination rules are applied to each candidate that has passed the hard elimination rules. Unlike hard elimination rules, soft elimination rules may continue to be applied to a candidate, regardless of whether a failure has already occurred. A soft elimination rule score is accumulated for each candidate by adding in the rule weight of each rule passed. In addition to receiving scores, the candidates are assigned to one of two groups: a group containing candidates that have passed all soft elimination rules applied so far, and a group containing candidates that have failed at least one soft elimination rule. Programming to implement the rules is described above in connection with Rules 205, Method 204, and Choose 230. The particular approach for decision making may include scaling with respect to soft elimination rules. 
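The soft-elimination bookkeeping described above can be sketched directly: each candidate accumulates the weight of every soft rule it passes, and the pool splits into a "passed all" group and a "failed at least one" group. A minimal Python sketch, with hypothetical rules and weights (the patent keeps these in Rules 205 and Method 204):

```python
# Sketch of soft-elimination scoring: soft failures do not eliminate a
# candidate; instead a score is accumulated from the weights of rules
# passed, and candidates are grouped by whether they passed all rules.

def apply_soft_rules(candidates, rules):
    """rules: list of (rule_fn, weight); rule_fn returns truthy on failure."""
    scores, passed_all, failed_some = {}, [], []
    for cand in candidates:
        score, clean = 0.0, True
        for rule, weight in rules:
            if rule(cand):          # the rule "fails" for this candidate
                clean = False       # noted, but the candidate survives
            else:
                score += weight     # add the weight of each rule passed
        scores[cand] = score
        (passed_all if clean else failed_some).append(cand)
    return scores, passed_all, failed_some

rules = [(lambda c: c > 10, 2.0),   # hypothetical soft rule: fails for c > 10
         (lambda c: c % 2, 1.0)]    # hypothetical soft rule: fails for odd c
scores, ok, bad = apply_soft_rules([4, 7, 12], rules)
```

The two groups matter later: the weighted choose method can decide whether soft-rule violators still take part in the comparison-rule stage.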
If scaling is to be applied, the final soft elimination score is multiplied by the number of candidates currently being considered. This allows the soft elimination score to be combined on an equal basis with the subsequent comparison rule score. An example of implementation of such scaling is an element, such as a flag, in Method 204. Both approaches include steps for applying stop rules. If a stop rule succeeds for a candidate, the decision making process stops. One example of a stop rule is when the latest best candidate is sufficient. Another example is when the decision making has taken too much time. This is implemented by saving the starting time in a programming structure, such as Global Context Update 212, and checking the current time with the stop rule. A third example of a stop rule is when a maximum number of candidates is reached. This can be implemented by storing the number of candidates in a data structure, such as in State 202. A fourth stop rule is when the decision making process has reached the end of a candidate generator with at least one viable candidate or when the current best candidate is good enough. This rule can be implemented by checking Candidate Generator 201. Both approaches may also include steps for applying comparison rules. Comparison rules compare two candidates to determine which is better, i.e., a pairwise comparison. A candidate is better if the current rule returns a definitive result to that effect. If the rule returns "equal", or the rule returns a tentative result, then the next rule in sequence is considered. A tentative result may be either general or include a degree of certainty. Tentative results are remembered because if all comparison rules are exhausted without a definitive result being obtained, the tentative result having the highest certainty or deriving from the highest priority rule is used. The highest priority rule is the rule returning the highest certainty or ordered first in user-specified ordering.
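Two of the stop-rule examples above (a time budget saved in the global context, and a maximum candidate count kept in state) can be sketched as small rule factories. The signature follows the Stop Rules 218 description; the factory pattern and field names are illustrative assumptions.

```python
import time

# Sketch of two stop-rule examples: a time budget checked against a saved
# starting time, and a maximum number of candidates kept in state.

def make_time_stop_rule(start_time, budget_seconds):
    """Stop when decision making has taken too much time."""
    return lambda situation, candidate, state: (
        time.time() - start_time > budget_seconds)

def make_count_stop_rule(max_candidates):
    """Stop when a maximum number of candidates has been examined."""
    return lambda situation, candidate, state: (
        state.get("candidates_seen", 0) >= max_candidates)  # hypothetical field

stop = make_count_stop_rule(3)
over_budget = make_time_stop_rule(time.time() - 10, budget_seconds=5)
```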
Programming structures for implementing this step are described in connection with Rules 205 and Method 204. FIG. 4A illustrates a hierarchical approach to choosing a best candidate. Step 400 is getting a candidate. Step 402 is applying hard elimination rules. Step 403 is determining whether the candidate was eliminated by the hard elimination rules. If the candidate was eliminated in Step 402, Step 400 is repeated to get another candidate, unless the candidate pool is exhausted. If the candidate was not eliminated in Step 402, Step 404 is determining whether soft elimination rules are specified. An example of implementing Step 404 is the data type Method 204. If no soft elimination rules are specified, Step 405 is applying comparison rules to the candidates not eliminated in Steps 402 and 403. To apply comparison rules, each candidate is compared against the current best candidate using one comparison rule at a time. This process continues until one of the two candidates is deemed better. The winner becomes the current best candidate. The comparison rules are ordered by the user in some meaningful way. After Step 405, Step 406 applies stop rules, and Step 407 is determining whether a stop rule prematurely ends the selection process. Returning to Step 404, if soft elimination rules are specified, Step 410 is applying them to the candidates that are not eliminated in Steps 402 and 403, using the soft elimination rules as explained above. Step 412 is applying comparison rules to the remaining candidate pool to pick the best candidate. Step 414 is applying stop rules, and if a stop rule succeeds, Step 415 terminates the selection method. If no stop rule succeeds, Step 416 determines whether there are remaining candidates to be evaluated, and if so, Step 412 is repeated. If Step 416 determines that there are no more candidates, the method is terminated. FIG. 4B shows a weighted choose approach to making a choice in accordance with Step 304 of FIG. 3.
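The hierarchical approach of FIG. 4A reduces to a tournament against the current best candidate, applying comparison rules in priority order until one returns a definitive result. A minimal sketch with hypothetical integer candidates and rules (stop rules and soft elimination omitted for brevity):

```python
# Sketch of the hierarchical approach (FIG. 4A): each surviving candidate
# challenges the current best, one comparison rule at a time.

def compare(best, challenger, comparison_rules):
    """Apply rules in priority order; the first definitive result wins."""
    for rule in comparison_rules:
        result = rule(best, challenger)
        if result == "first":
            return best
        if result == "second":
            return challenger
        # "equal" or no determination: fall through to the next rule
    return best  # no rule preferred the challenger; keep the current best

def hierarchical_choose(candidates, hard_rules, comparison_rules):
    best = None
    for cand in candidates:
        if any(rule(cand) for rule in hard_rules):
            continue                 # eliminated; get the next candidate
        best = cand if best is None else compare(best, cand, comparison_rules)
    return best

best = hierarchical_choose(
    [5, 8, 3, 11],
    hard_rules=[lambda c: c > 10],   # hypothetical: eliminate anything over 10
    comparison_rules=[lambda a, b: "first" if a > b else
                      "second" if b > a else "equal"],
)
```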
Step 430 is getting a candidate. Step 432 is applying hard elimination rules, and Step 439 is determining whether the candidate is eliminated. If the candidate is thereby eliminated, Steps 430 and 432 are repeated and the method gets another candidate, unless Step 431 determines that there are no more candidates. If the candidate is not eliminated, Step 434 is adding the candidate to the candidate pool of surviving candidates. These steps continue for each candidate. Step 433 is applying soft elimination rules and determining a soft elimination score for each candidate. A feature of the invention is that the treatment of violators of soft elimination during the application of comparison rules may be different, depending upon the particular method being used. One alternative is to include all violators in the comparison rule steps. A second alternative is to include all violators only if no candidates pass all soft constraints. A third alternative is to include those candidates violating only the previous soft rule if no candidates pass all rules, and either move to the following soft elimination rule or move immediately to the comparison rule stage. Other alternatives may be as varied as desired and implemented by appropriate programming. The programming structure of Method 204 permits implementation of this step. Step 435 is determining a partial sums score for every comparison rule, which has the form: PS(i) = Sum(j = i to n) MaxGain(j), where i corresponds to the comparison rule in question and n is the number of comparison rules. MaxGain is the most that a second place candidate can gain on a first place candidate in total score: MaxGain(i) = (n-1) * rule weight, where i corresponds to the comparison rule in question and n is the number of candidates. Step 437 is determining whether the comparison rule steps of FIG. 4B can be short circuited after the application of a comparison rule to the complete candidate pool.
This short circuit occurs if the remaining comparison rules cannot cause any other candidate to have a higher total score. The premise of the short circuit is that a comparison rule can affect the total scores of candidates in either of the following ways: (1) the current first place candidate's score will increase by at least 1 * rule weight, and (2) the current second place candidate's score will increase by at most n * rule weight, where n is the number of candidates. Step 437 determines the difference between the current highest total score and the current second highest total score. If this difference is greater than the partial sum entry from the partial summation list corresponding to the comparison rule about to be applied, the comparison rule iteration is terminated. Step 437 is not executed if there are scoring rules. If there is no short circuit, Steps 436-444 form an iterative loop for applying comparison rules. More specifically, Step 436 is getting a comparison rule, and Step 438 is sorting the candidates using the comparison rule as a pairwise sort predicate. Step 440 is getting candidates from the sorted list. Step 442 is determining a candidate total as a product of its rank weight times the rule weight plus the old candidate total. To determine rank weights, Step 442 gives the first candidate in the sorted list a rank weight equal to n, the number of candidates in the current pool. Step 442 gives the second candidate a rank weight equal to n-1, unless Step 438 determined that candidate to be equal to the first candidate, in which case it receives the same rank weight. This ranking continues for each candidate. After Step 442, Step 443 is applying stop rules, and Step 444 stops the comparison rule phase if a stop rule applies. Steps 445 and 446 ensure that Steps 436-444 are repeated for each candidate and each rule. Steps 448-458 are an iterative loop for applying scoring rules. 
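The partial-sums short circuit of Steps 435 and 437 follows directly from the formulas above: with n candidates, one rule can close the gap between first and second place by at most (n - 1) * rule weight, so once the current gap exceeds the sum of the remaining MaxGain values, no later rule can change the winner. A sketch with hypothetical rule weights:

```python
# Sketch of the partial-sums short circuit (Steps 435 and 437).
# MaxGain(i) = (n_candidates - 1) * rule_weight(i);
# PS(i) = sum of MaxGain(j) for j = i .. last rule.

def partial_sums(rule_weights, n_candidates):
    """Partial summation list, one entry per comparison rule."""
    max_gain = [(n_candidates - 1) * w for w in rule_weights]
    ps, total = [], 0.0
    for g in reversed(max_gain):     # accumulate from the last rule backward
        total += g
        ps.append(total)
    return list(reversed(ps))

def can_short_circuit(totals, ps_next):
    """Stop if the remaining rules cannot overturn the current leader."""
    ordered = sorted(totals, reverse=True)
    return (ordered[0] - ordered[1]) > ps_next

ps = partial_sums([3.0, 2.0, 1.0], n_candidates=4)
done = can_short_circuit([25.0, 5.0, 4.0], ps_next=ps[1])
```

As the text notes, this check is skipped whenever scoring rules are present, since those can still change the totals.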
Step 448 is getting a scoring rule and Step 450 is getting a candidate from the current pool. Step 452 is applying the scoring rule from Step 448 to each candidate individually. Step 454 determines a new candidate total as the product of the rank weight of the candidate returned by the scoring rule times the rule weight of the scoring rule. This product is added to the candidate total from Step 442 or from the previous scoring rule iteration. After Step 454, Step 456 applies stop rules, and Step 458 stops the scoring rule phase if a stop rule applies. Steps 460 and 462 ensure that Steps 448-458 are repeated for each candidate and each rule. Step 464 determines the final candidate total as the sum of the soft elimination score from Step 435 and the candidate total from Step 454. If desired, the soft elimination score may be scaled in the manner explained above. Step 466 is determining the best choice as the candidate having the highest score as a result of the steps of FIG. 4B. Referring again to FIG. 3, after a best choice has been determined with the choice approach of either FIG. 4A or 4B, Step 306 is carrying out any final action that may be specified. If there is no final action, Step 308 is returning a best candidate, or alternatively a ranked set of recommended candidates. An important feature of the invention is that the steps of FIG. 3 may be carried out with or without the prior selection of a strategy. This feature of the invention is best illustrated by FIG. 5. As discussed in connection with Strategy 206 in FIG. 2, the programming of the invention permits multiple strategies to be defined. A strategy has a name and consists of a set of values for strategy parameters, and a set of methods for making particular choices. Programming for implementing selection methods has already been described in connection with FIG. 2, especially in connection with Method 204. As shown in FIG.
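The rank-weight assignment of Step 442 and the per-rule total update of Steps 442 and 454 can be sketched as follows. The tie handling (an "equal" candidate shares the previous candidate's rank weight) follows one plausible reading of the description; the patent does not fully specify how ranks resume after a tie, so this is an assumption.

```python
# Sketch of rank-weight assignment (Step 442): after sorting with a
# comparison rule, the first candidate gets rank weight n, the next n-1,
# and candidates the rule judged "equal" share the same rank weight.

def rank_weights(sorted_candidates, equal, n):
    """equal(a, b) -> True if the rule could not separate a and b."""
    weights, current = {}, n
    for i, cand in enumerate(sorted_candidates):
        if i > 0 and equal(sorted_candidates[i - 1], cand):
            weights[cand] = weights[sorted_candidates[i - 1]]  # shared on tie
        else:
            weights[cand] = current
        current -= 1
    return weights

def add_rule_totals(totals, weights, rule_weight):
    """Steps 442/454: new total = old total + rank weight * rule weight."""
    return {c: totals.get(c, 0.0) + weights[c] * rule_weight for c in weights}

w = rank_weights(["a", "b", "c"], equal=lambda x, y: False, n=3)
totals = add_rule_totals({}, w, rule_weight=2.0)
```

Step 464's final total is then this accumulated total plus the (optionally scaled) soft elimination score.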
5, using the invention with the strategy selection features involves a first step, Step 510, of adopting a strategy. When a strategy is adopted, the parameter values and choice methods of that strategy become the current values and choice methods. Step 512 is executing the decision making steps of FIG. 3 to choose a best candidate. Step 514 determines if the best candidate is acceptable. If it is, Step 516 is using the best candidate in other programming or as desired by the user. Steps 518 and 520 of FIG. 5 illustrate another feature of the invention, which may be used during the development stage of the decision making system. This is a learning feature, which is invoked by the user after a decision making operation has failed to produce the desired result. In general, the learning feature is a rule weight adjusting feature in connection with the weighted choose method, so that the candidate who is the user's best choice will receive the highest score, and thereby improve the knowledge base. The learning steps are iterative for each "rule instance", i.e., each part of the weighted choose method in which a particular rule was evaluated for the entire set of candidates surviving elimination. A rule instance is the rule weight and a list of candidate/rank weight pairs. Step 518 determines whether this learning feature is to be invoked. If so, Step 520 is performing it. The substeps of Step 520 are illustrated in FIG. 6. It is assumed that the programming has already saved each candidate and its state information, including the total score for each candidate, and each rule instance. Step 610 is getting a rule instance. Step 612 ensures that the learning steps continue for each rule instance. Step 614 is determining the relationship between the current rule and the current candidate's total, T. 
For each rule, there are three status categories: (1) every candidate that has a higher current T than the desired candidate is ranked higher by the current rule, or (2) every candidate that has a higher current T than the desired candidate is ranked lower by the current rule, or (3) neither category (1) nor (2) exists. According to the category in which a rule falls, the remaining steps increase or decrease that rule's weight or set the rule aside. If the rule falls within category 3, Step 618 sets the rule aside. Steps 620-626 ensure that if the rule is set aside, after any rule that falls within category (1) or (2) has had its weight adjusted, the set aside rules can be looked at again to determine if any one of them now falls within category (1) or (2). If the rule does not fall in category 3, Step 628 is getting the next candidate, with the goal of looping through all candidates except the desired candidate. Step 630 is determining if there are candidates remaining to be considered. If there are candidates, Step 632 is determining whether that candidate should be considered. This will be the case if the T for the current candidate is less than the T for the desired candidate, and (1) for a category 1 rule, the ranked position of the current candidate with respect to the rule is higher than that of the desired candidate, or (2) for a category 2 rule, the ranked position of the current candidate is lower than that of the desired candidate. If the candidate is not to be considered, Step 628 is repeated. If the candidate is to be considered, Step 634 is determining delta as a function of T_other and r_other, the current total score and rule number associated with that candidate, and T_desired and r_desired, the total score and rule number associated with the desired candidate. Step 636 is comparing T_other to T_desired.
If T_other is greater than T_desired, Step 638 is determining a largest overtake value, LO, such that LO is the maximum of the current LO and delta. If T_other is not greater than T_desired, Step 640 is determining a smallest catch up value, SC, such that SC is the minimum of the current SC and delta. After either Step 638 or Step 640, Step 628 is repeated and an LO or SC is computed for the next candidate. Referring back to Step 630, if there are no more candidates left to consider, Step 642 is scaling the LO. This scaling is accomplished by adding the number of rules remaining in the rule instance pool to the number of rules remaining in the set aside pool and dividing that sum into the largest delta. Step 644 is recomputing delta as the minimum of the scaled LO and SC. Step 646 is determining whether the rule is a category 1 rule. If so, Step 648 is computing a new rule weight as the old rule weight minus delta. If the rule is not in category 1, which implies that it is in category 2, the new rule weight is the old rule weight plus delta. Step 652 recalculates T for each candidate, and checks the pool of set aside rules, as indicated above. After every rule instance has been evaluated by Steps 614-652, Step 654 is determining whether the desired candidate has a better position in a sorted list of candidate totals than it did with the original rule weights. If so, Step 656 returns the adjusted rule weights. Referring again to FIG. 5, if the learning feature is not invoked, Steps 510-514 are repeated until an acceptable candidate is determined. FIG. 7 illustrates a feature of the invention that integrates features of the decision processing system of FIG. 1. In particular, FIG. 7 illustrates a method for generating trial solutions using different strategies. The method may be implemented using the apparatus and programming discussed above in connection with decision processor 60 and knowledge base 50.
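The rule-categorization step of the learning feature (Step 614) can be sketched as below. Only the categorization is shown, since the text does not reproduce the formula for delta; the rank encoding (larger number means ranked higher by the rule) and the handling of the no-higher-candidates case are illustrative assumptions.

```python
# Sketch of Step 614 of the learning feature: classify a rule instance
# into one of the three status categories described above, based on how
# the rule ranks the candidates whose total T exceeds the desired one's.

def categorize(rule_ranks, totals, desired):
    """Return 1, 2, or 3 per the three status categories."""
    higher = [c for c in totals if c != desired and totals[c] > totals[desired]]
    if not higher:
        return 3   # no candidate exceeds the desired one: nothing to adjust
    if all(rule_ranks[c] > rule_ranks[desired] for c in higher):
        return 1   # every higher-T candidate is ranked higher by this rule
    if all(rule_ranks[c] < rule_ranks[desired] for c in higher):
        return 2   # every higher-T candidate is ranked lower by this rule
    return 3       # mixed: the rule is set aside (Step 618)

# Hypothetical rule instance: larger rank number = ranked higher by the rule.
ranks = {"a": 3, "b": 2, "desired": 1}
totals = {"a": 10.0, "b": 9.0, "desired": 8.0}
cat = categorize(ranks, totals, "desired")
```

A category 1 rule would then have its weight decreased by delta (Step 648) and a category 2 rule increased, nudging the desired candidate toward the top of the sorted totals.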
Step 710 is entering an "effects mode" of a computer system programmed with constraint propagation capabilities. Step 712 is determining whether a new scenario is to be created. If a new scenario is to be created, Step 714 is creating the scenario and associating a particular strategy with the scenario. If no new scenario is to be created, Step 716 is determining whether a previously selected scenario is to be processed. If not, Step 718 is leaving the effects mode and selecting a solution. Referring again to Step 716, if a previous scenario is to be used, Step 720 is selecting a scenario. Once a scenario has been created or selected, Step 722 is entering that scenario, which implies the adoption of an accompanying strategy. Step 724 is solving the problem by entering the decision making aspects of the invention. The programming discussed above in connection with FIGS. 3, 4A, and 4B is directed to this step. Although the invention has been described with reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiment, as well as alternative embodiments, of the invention will become apparent to persons skilled in the art upon reference to the description of the invention. It is, therefore, contemplated that the appended claims will cover such modifications that fall within the true scope of the invention.
By: Nola Taylor Redd Published: 09/10/2012 11:04 AM EDT on SPACE.com In the past decade, astronomers have observed clay materials on Mars that seem to indicate large bodies of water once covered the Martian surface. But new research suggests that magma could form some of these slick deposits rapidly, and ancient Mars may not have been as wet as we thought. A region of French Polynesia has similar deposits of these strange clays, which scientists found were formed by cooling magma rather than water. "It was the first time that clays were shown to originate from another process than aqueous alteration," researcher Alain Meunier, of the Université de Poitiers in France, told SPACE.com by email. "The consequence was that, even if clays need water to be formed, this does not mean that they need liquid water." Since water is thought to be essential for all life, the Martian clay findings complicate the question of whether early Mars was likely to have been hospitable to life. Water vs. magma Along riverbeds, near glaciers, and near oceans, clays on Earth tend to appear near sources of water. Layers of rock gradually weather away, their chemicals transported and mixing to form clay. The process takes time, and so the presence of clays on Mars would seem to indicate relatively long-standing bodies of water, such as oceans, lakes, and streams. But four years ago, Meunier, working with a group of geologists, found that clays at the Moruroa Atoll in French Polynesia formed quickly with cooling magma rather than slowly with cold ocean water. As the magma cooled, small voids inside the solidifying lava behaved as tiny pressure cookers, forming the last generation of minerals, including clays. The iron-rich clays found at this Pacific Ocean atoll are similar in composition to some Martian mineral mixes.
The only samples on Earth that originated on the Martian surface come from rocks blown from the Red Planet long ago that traveled through space to our world. One such sample is the Lafayette Meteorite, a rock of unknown origin that was found in the archives of Purdue University and not identified as of Martian origin until 1931. Studying the meteorite with an eye toward the formation processes at Moruroa, Meunier's team, which included several geologists from the French-Polynesian group, found a number of similarities. "The authors demonstrate pretty convincing evidence that some of the water that led to clay formation was derived from the magmatic gases," Brian Hynek of the University of Colorado told SPACE.com. Hynek, who was not involved in the research, wrote a commentary piece that appeared alongside the results, which were published in the journal Nature Geoscience on Sunday (Sept. 9). An accompanying graphic depicts how outgassing from Mars lava, rather than standing water, may have created moist clays in the ancient past. Drier surface conditions The slick deposits on Mars provided a peek into the state of the surface early in the planet's history. "Considering that clays witness the presence of liquid water, they implied that the physical conditions prevailing at the surface of the young planet were compatible with the liquid state," Meunier said. Although Mars today is too cold for liquid water, with too thin of an atmosphere to hold onto it, the water-related formation of clays has been one of the indicators that early Mars was warmer and wetter. "The possibility of a magmatic origin for clays changes these considerations," Meunier said. But the results don't mean that early Mars was a barren desert. There are other signs that the young planet had water on its crust, including extensive river systems, lakes, and oceans. Hynek pointed out that not all Martian meteorites show evidence of a magma-related formation.
Furthermore, only a handful of samples have traveled to Earth from the Red Planet, and they only come from a narrow range of times and locations on Mars. "I don't think this new research changes our general picture of early Mars," Hynek said. "It just provides an additional mechanism for forming clay minerals." A "stepping stone" for life Because water is considered essential for living organisms to evolve, scientists think areas boasting clays could be good sites to search for life on Mars. But areas with magma-formed clays would be less ideal for hosting life. "[This] clay formation process would have been quick and hot, and thus not good for biology," Hynek said. However, it's unlikely that all clays on Mars were created by the same process. "As on Earth, clays probably formed in many different ways across the planet, and some of those are more favorable for biology." Even in the unlikely scenario that all clays across Mars were created by cooling magma, the minerals they contain have been implicated in the early biochemical processes that led to RNA and DNA, the backbone of life as we know it. Their presence alone could be considered an important stepping stone for the earliest biological and chemical processes, according to Hynek. At the same time, early Mars was not the only time water lay on the surface. "Liquid water has undoubtedly existed on Mars at a later epoch," Meunier said. There are two NASA rovers currently exploring the Red Planet that are well situated to help further scientists' understanding of how the clays evolved. The Opportunity rover landed with its now-defunct sister rover Spirit in 2004, and continues to study Mars. The Mars rover Curiosity, which landed in August of 2012, is preparing to delve into the geological history of Mars. "The Gale Crater, which will be explored by Curiosity, is a wonderful place to research the traces of the pre-biotic chemistry," Meunier said.
Although insects are commonly thought of as pests in just about every region of the world, one must take the time to realize their benefits in our everyday lives. Without insects operating dutifully about their ecosystems, our world would be a very different place than the one we've come to enjoy. It takes a "micro-level" vision of their world to truly understand their importance to our own. Without honey bees, we would not have honey or beeswax. Before sugar cane was introduced throughout Europe (about 700 AD), people used honey to sweeten various foods and drinks. The red food dye known as 'cochineal', for example, is made from the crushed bodies of a species of insect native to South America, the very same insect used by the ancient Aztecs almost 600 years ago in a variety of roles throughout their advanced civilization. Bees in particular assist in the process known as pollination. Pollination is the process of development for a flower's seeds. Flower seeds must be fertilized by pollen from the same or another flower in order to reproduce. Pollen can be dispersed by the wind or transferred on the bodies of insects such as bees. Some insects are naturally drawn to flowers by scent, color, and the sweetness of their nectar. As they traverse the surface of these flowers, their bodies unknowingly pick up the pollen, ready for transport to a new location. The act of pollination is actually more important to the living and working world than the production of honey or beeswax! So imagine a world where pollination is not possible. Insects also serve science directly: modern scientists breed a particular species of fruit fly to help them understand genetic and inherited diseases in humans. Without this type of research, our knowledge of what ails humanity would not be as advanced as it is.
When such a common "pest" and annoyance to our everyday lives can become a helper or savior of countless future lives, one starts to develop a certain respect for the complexity that is an insect.

Field Crickets are known to feed upon the eggs and pupae of indoor pests. Though they primarily feed on plant matter outdoors, they can also be found feeding on animal remains, joining a host of other insects that rely on animal remains as a source of food.

Though Blister Beetles can cause serious blistering to human skin, the chemical they secrete from their joints, called "Cantharidin," is, ironically, used in some wart removal products.

Fast-moving and scary-looking, the House Centipede is actually quite the predator in the under-workings of a home. Though sometimes found in bath tubs and basins, the house centipede primarily resides in cool, dark areas such as crawl spaces, where it can hunt larger insects (including the dreaded cockroach). So which would you rather have meandering about your home? The helpful House Centipede or the loathsome cockroach?

Lady Bugs are your ultimate garden protectors, feeding on insects bent on the destruction of your plants.

Love 'em or loathe 'em, spiders serve a greater purpose than creeping you out. Spiders are the ultimate insect exterminators and work to keep the insect population in check by feeding on just about anything with more legs than you.

Dragonflies love to eat insects. What this means for you is population control of the little critters, in particular the almighty mosquito.
The road statistics each year clearly demonstrate that you can never be too careful with road and car safety, particularly when you have children with you.

Car safety for babies
- Ensure you choose a car restraint that's appropriate for your baby's size and weight. Babies under 8kg or 70cm should always travel in a rear-facing restraint.
- Babies should have good head control and be able to sit before they are moved to a more upright, front-facing car seat.
- Make sure that your baby's restraint straps and fasteners are adjusted to fit his body correctly.
- Have the restraint properly fitted to your car - every car model has a slightly different way of installing capsules and car seats, so make sure that yours is correctly and safely fitted.
- Never hold your baby while the car is moving, for example, while travelling in a taxi, as your arms aren't strong enough to protect him in the event of an accident.
- Babies and young children can overheat quickly in the car, so make sure that they are suitably dressed while travelling and that there is appropriate protection from the sun.

Car safety for children
- Ensure you choose a car restraint that's appropriate for your child's size and weight. Your child should always travel in the back seat. While it isn't against the law for children to travel in the front seat, it is the least safe seat in the car and, with the additional hazards that airbags pose, it is highly recommended that children sit in the back seat.
- Teach your child to get in and out of the car from the footpath - this will remove any risk associated with him being on the road.
- While there are some devices that can be used to prevent toddlers from undoing their seatbelts, these are not recommended, as seatbelts are meant to be easily undone in case of an accident.
- Don't allow your child to lie down across the back seat, as this is an unsafe position in the event of an accident.
- Your child's car seat should be used until he reaches the maximum allowable weight for the seat or he is too tall to use the shoulder straps.
- Booster seats can be used until your child reaches the maximum allowable weight for the seat - it isn't recommended that booster seats be used by a child who weighs less than the minimum weight recommendation, as his body weight partially secures the booster seat.
- A five-point child safety harness is recommended for children 14 - 32kg, as standard car seatbelts (which are designed for adult use) aren't safe to use until your child reaches a standing height of 148cm, a sitting height of 74cm and/or a weight of 37kg - this is the approximate size of an 11 year old.

It is against the law:
- To leave your child in a car unattended - even for a short time
- For anyone to be unrestrained in the car while it's moving
- To have a child on your knees while the car is moving - this includes having a seatbelt around you both
- To have two people share the same seatbelt
- To ride in the luggage compartment of a vehicle

The vehicle driver is responsible for ensuring that all passengers under the age of 16 are correctly restrained.

Tips for practical car safety
Having the correct restraints properly installed in your car goes a long way to ensuring that everyone stays safe in your car, but there are other ways you can develop good car safety.
- Lead by example. Your child is more likely to develop good car habits if he sees you following the same rules you set for him. Make sure that you always have your seatbelt on before you drive away, and always insist that your child does too.
- Insist on seatbelts at all times. If you have issues with getting your child to stay buckled up, stop your car and tell him that you won't continue driving until he sits in his seat with his seatbelt on.
- Don't allow siblings to 'help' you by buckling and unbuckling each other's seatbelts, as this gives them permission to move around in the car - and they may decide to do this while the car is moving.
- Don't be distracted by unruly car behaviour. If your child is refusing to co-operate or is being loud and having an impact on your ability to concentrate while driving, pull over and sort out the backseat issues before hitting the road again.
- Stop, revive, survive. If you're driving long distances, it's extremely important that you take regular breaks to avoid exhaustion. Regular breaks are a great way to let children blow off a little energy too.
- Ensure that your line of sight is always clear. Never put anything in the windows of your car that will block your vision - a nappy in a side window to keep the sun off your child will limit your view when driving.

Find more relevant articles and information about car safety:
Last revised: Tuesday, 21 April 2009
This article contains general information only and is not intended to replace advice from a qualified health professional.
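The size cutoffs scattered through the article can be collected into one rough decision rule. The function below is only an illustration of those numbers; it is not a substitute for the seat manufacturer's instructions or local law, and the category names are my own labels:

```python
def suggest_restraint(weight_kg, standing_height_cm):
    """Rough restraint suggestion from the size cutoffs quoted in the article.

    Illustrative only - always follow the restraint manufacturer's limits
    and the law in your jurisdiction.
    """
    if weight_kg < 8 or standing_height_cm < 70:
        return "rear-facing restraint"              # babies under 8kg or 70cm
    if 14 <= weight_kg <= 32:
        return "five-point child safety harness"    # recommended for 14-32kg
    if standing_height_cm >= 148 or weight_kg >= 37:
        return "standard adult seatbelt"            # approx. size of an 11 year old
    return "forward-facing car seat or booster (check the seat's weight limits)"

# A 6kg baby still needs a rear-facing restraint.
assert suggest_restraint(6, 62) == "rear-facing restraint"
# A 20kg child falls in the five-point harness range.
assert suggest_restraint(20, 110) == "five-point child safety harness"
```

Note the rule is deliberately conservative: a child who meets none of the explicit cutoffs falls through to a forward-facing seat or booster rather than an adult belt.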
Machine Intelligence: The First 80 Years
August 6, 2001 by Ray Kurzweil

A brief history of machine intelligence written for “The Futurecast,” a monthly column in the Library Journal. Originally published August, 1991. Published on KurzweilAI.net August 6, 2001.

A new form of intelligence has recently emerged on Earth. To assess the impact of this most recent branch of evolution, let’s take a quick journey through its first 80 years. Drawing upon a diversity of intellectual traditions and fueled by the exigencies of war, the first computers were developed independently and virtually at the same time in three different countries, one of which was at war with the other two. The first operational computer, developed by Alan Turing and his English colleagues in 1940, was named Robinson, after a popular cartoonist who drew Rube Goldberg machines. Turing’s computer was able to decipher the German “Enigma” code and is credited with enabling the Royal Air Force to win the Battle of Britain and withstand the Nazi war machine.

The similarity of computer logic to at least some aspects of our thinking process was not lost on Turing, and he is credited with having established much of the theoretical foundations of computation and the ability to apply this new technology to the emulation of intelligence. In his classic 1950 paper, “Computing Machinery and Intelligence,” Turing lays out an agenda that would in fact occupy the next century of advanced computer research: game playing, decision making, natural language understanding, translation, theorem proving, and, of course, the cracking of codes. Turing went on to predict that by early in the next century society will simply take for granted the pervasive intervention of intelligent machines in all phases of life, that people will speak routinely of machines making critical intelligent decisions without anyone thinking it strange.
If we think about the Gulf War of 1991, we saw perhaps the first dramatic example of the increasingly dominant role of machine intelligence. The cornerstones of military power from the beginning of recorded history through most of the 20th century–geography, manpower, fire power, and battle-station defenses–were largely replaced by the intelligence of software and electronics. Intelligent scanning by unstaffed airborne vehicles; weapons finding their way to their destinations through machine vision and pattern recognition; intelligent communications and coding protocols; and other manifestations of the information age began to rapidly transform the nature of war.

Infiltrated by machine intelligence
By the end of the 1980s, we also saw the pervasive infiltration of our financial institutions by machine intelligence. Not only were the stock, bond, currency, commodity, and other markets managed and maintained by computerized networks, but the majority of buy-and-sell decisions were initiated by software programs that contained increasingly sophisticated models of their markets. The 1987 stock market crash was blamed in large measure on the rapid interaction of trading programs. Trends that otherwise would have taken weeks to manifest themselves developed in minutes. Suitable modifications to these algorithms have managed to avoid a repeat performance.

Since 1990, your electrocardiogram (ECG) has come complete with the computer’s own diagnosis of your cardiac health. Intelligent image-processing programs enabled doctors to peer deep into your bodies and brains, and computerized bioengineering technology enabled drugs to be designed on biochemical simulators. The world of music had been transformed through intelligent software and electronics.
Not only were most sounds heard in recordings and sound tracks generated by intelligent signal processing algorithms, but the lines of music themselves were increasingly an assemblage of both human and computer-assisted improvisation. The handicapped have been a particularly fortunate beneficiary of the age of intelligent machines. Reading machines have been reading to blind and dyslexic persons since the 1970s, and speech recognition and robotic devices have been assisting the hands impaired since the 1980s.

Taking it all for granted
With the increasingly important role of intelligent machines in all phases of our lives–military, medical, economic, political–it was odd to keep reading articles with titles such as Whatever Happened to Artificial Intelligence? This was a phenomenon that Turing predicted, that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail to even notice it. It reminds me of people who walk in the rain forest and ask, “Where are all these species that are supposed to live here?” when there are a hundred species of ant alone within 50 feet of them. Our many species of machine intelligence have woven themselves so seamlessly into our modern rain forest that they are all but invisible.

Turing also offers an explanation of why we would fail to acknowledge intelligence in our machines. In 1947, he writes:

The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behavior we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behavior.
I am also reminded of Elaine Rich’s definition of artificial intelligence (AI) as the “study of how to make computers do things at which, at the moment, people are better.”

The 90s: paperless books
Now it was in the 1990s that things started to get interesting. The nature of books and other written documents underwent several transformations during this decade. In the early 1990s, written text began to be created on voice-activated word processors. By the mid-1990s, we saw the advent of paperless books with the introduction of portable and wireless displays that had the resolution and contrast qualities of paper. LJ ran a series of articles on the impact on libraries of books that no longer required a physical form. One of these articles pointed out that despite paperless publishing and the so-called paperless office, the use of paper continued nonetheless to increase. American use of paper for books and other documents grew from 850 billion pages in 1981 to 2.5 trillion pages in 1986 to six trillion pages in 1995.

The nature of a document also underwent substantial change and now routinely included voice, music, and other sound annotations. The graphic part of documents became more flexible: fixed illustrations turned into animated pictures. Documents included the underlying knowledge and flexibility to respond intelligently to the inputs and reactions of the reader. The “pages” of a document were no longer necessarily ordered sequentially; they became capable of forming intuitive patterns that reflected the complex web of relationships among ideas.

Communications also became transformed. Late in the decade, we saw the first effective translating telephones demonstrated, although the service was not routinely offered. Both the recognitions and translations were far from perfect, but they appeared to be usable.
We also saw the introduction of listening machines for the deaf, which converted human speech into a visual display of text–essentially the opposite of reading machines for the blind. Another handicapped population that was able to benefit from these early AI technologies was paraplegic individuals, who were now able to walk using exoskeletal robotic devices they controlled using a specialized cane.

As we entered the first decade of the 21st century, the translating telephones demonstrated late in the last century began to be offered by the telephone companies competing for international customers. The quality varied considerably from one pair of languages to another. Even though English and Japanese are very different in structure, this pair of languages appeared to offer the best performance, although translation among different European languages was close. Outside of English, Japanese, and a few European languages, performance fell off dramatically.

The Disabled Act of 2004
The output displays for the listening machines for the deaf were now built into the user’s eyeglasses, essentially providing subtitles on the world. Specific funding was included in the Omnibus Disabled Act of 2004 to provide these sensory aids for deaf persons who could not afford them, although complex regulations on verifying income levels slowed the implementation of this program. The standard personal computers of 2005 were now palmtop devices that combined unrestricted speech recognition with handprint and gesture recognition as primary input modalities. They also included knowledge navigators with two-way voice communication and customizable personalities.

TIM technology arrives
Everyone recalls the flap when TIM was first introduced. TIM, which stands for Turing’s IMage, was created at the University of Texas and was presented as the first computer to pass the Turing Test.
The Turing Test, first described by Turing in the same 1950 paper mentioned above, involves a computer that attempts to fool a human judge into thinking that it rather than a human “foil” is the real human. The researchers claimed that they had even exceeded Turing’s original challenge because you could converse with TIM by voice rather than through terminal lines as Turing had originally envisioned. In an issue of LJ devoted to the TIM controversy, Hubert Dreyfus, the persistent critic of the AI field, dismissed the original announcement of TIM as the usual hype we have come to expect from the AI community. Eventually, even AI leaders rejected the claim citing the selection of a human “judge” unfamiliar with the state of the art in AI and the fact that not enough time had been allowed for the judge to interview the computer and the human. TIM became, however, a big hit at Disney World, where 2000 TIMs were installed in the Microsoft Pavilion. The TIM technology was subsequently integrated into artificial reality systems (computerized systems with visual goggles and headphones that enable the wearer to enter and interact with an artificial world) that had already revolutionized the educational field, not to mention the game industry. Artificial reality with integrated conversational capabilities became quite controversial. One radical school of thought questioned the need for books on history when you could now go back and actually participate in historical events yourself. Rather than read about the Constitutional Convention, a student could now debate a simulated Ben Franklin on executive war powers, the role of the courts, or any other issue. An LJ editorial pointed out that books provided needed perspective rather than just experiences. It is hard now to recall that the medium they used to call television was itself controversial in its day, despite the fact that it was of low resolution, two dimensional, and noninteractive. 
Artificial reality was still a bit different from the real thing, though, in that the ability of the artificial people in artificial reality to really understand what you were saying still seemed a bit stilted.

The vision from 2020
So here we are in the year 2020. Translating telephones are now used routinely, and, while the languages available are still limited, there is now more choice, with reasonable performance for Chinese and Korean. The knowledge navigators available on today’s personal computers, unlike those of ten years ago, can now interview humans in their search for knowledge instead of just other computers. People use them as personal research assistants. Communications are quite a bit different from the days when phone calls went through wires and the old television medium went through the air. Now everyone is online all the time with low-bandwidth communication (like voice) through cellular radio. This has strained the conventions of phone courtesy, as it is now difficult to be “away” from your phone. High-resolution communication, such as moving three-dimensional holographic images, now goes through wires - fiber-optic wires, of course. Japan did beat us by six years in laying down a fiber-optic information highway, but the American system is now second to none. The listening systems in your eyeglasses, originally developed for the deaf, are now routinely used by almost everyone, listening impaired or not, as they also include language translation capabilities and optional commentaries on what you see through them. Artificial reality is now much more lifelike, and there has been a recent phenomenon of people who spend virtually all their time in artificial reality and do not want to come out. What to do about this is a topic of considerable recent debate. The University of Texas has announced a new version of TIM, which has received a more enthusiastic reception from AI experts.
Marvin Minsky, one of the fathers of AI, who was contacted at his retirement home in Florida, hailed the development as the realization of Turing’s original vision. Dreyfus, however, remains unconvinced and recently challenged the Texas researchers to use him as the human judge in their experiments. And in the New York Times this morning, there was a front-page article entitled, “Whatever Happened to Artificial Intelligence?” Reprinted with permission from Library Journal, August 1991. Copyright © 1991, Reed Elsevier, USA
Supplies you will need. Two three-ring binders, a small notebook such as a steno pad, several black ink pens, several sharp pencils, and a supply of Family Group Sheets and Four Generation Pedigree Charts. The staff at the Information Desk can provide you with one copy from which you can make additional copies. Always write your name, phone number and address on each notebook.

Start with yourself and work backward in time. Using the Four Generation Pedigree Chart, begin with yourself and fill in as much information as possible. Interview family members and examine all documents such as family Bibles, wills, property deeds, photographs, letters, birth certificates and military discharge papers. Never start with a supposed ancestor and work forward.

Read one or more basic guides to genealogical research. Some that we have in our collection are:
Unpuzzling Your Past: A Basic Guide to Genealogy by Emily Anne Croom (929.1 Gr88)
First Steps in Genealogy by Desmond Walls Allen (929 Al5)
Walking Through Your Past by Genealogy Society of Craighead County (929.3 Ge28)

Attend a basic workshop. Sign up for one of the basic genealogy workshops at the library, sponsored by the Genealogy Society of Craighead County. The charge for the workshops will vary.

Use the library resources. Use the library's online catalog to search for material located in the library or at the branches. Genealogy Databases offer a wide range of information including census records from around the country. Patrons must have a library card to access the databases.

Always try to find primary sources. Indexers, authors and abstracters inevitably make mistakes. Whenever possible, look at original wills, deeds, birth certificates and other documents, or copies of them on microfilm.

Read documents with caution. You will see old-fashioned terminology, handwriting, spelling and grammar. There are tools to help you decipher old documents.

Beware the common pitfalls of research.
Think of ways your surname could be misspelled, then search under those spellings. Remember that boundaries and names of counties sometimes change. In recalling where they lived years ago, relatives may name the nearest big city rather than the actual locale, or say the name of the county seat when they mean to name the county. Study the ways various documents are organized before you try to use them.

Keep careful records. Whenever possible, make photocopies of documents. Always record the titles and dates of your sources.

Expect to visit many libraries and archives and to use many types of tools. No single collection will hold every document that you need. Likewise, no single source will answer all your questions. You will eventually use most of the tools of the genealogist: census records, deeds, county histories, wills, death certificates, etc.

Please keep in mind that the library staff members are not genealogists. They can help you locate the published materials in the library, or suggest other sources, but they cannot do the research for you. If you need further assistance, the library staff can refer you to a professional genealogist who will help you for a fee.
“Predator” bacteria (green) surround “prey” bacteria (red) in this petri dish version of the Serengeti. Rather than eating their prey, however, predator cells release a chemical that activates a suicide gene in the prey. Prey cells also release a chemical, but one that promotes survival of the predators. Researchers genetically programmed the cells to “communicate” with each other in this way and function as a synthetic ecosystem. The artificial system acts as an experimental model and can help us understand behaviors in more complex, natural ecosystems. July 9, 2008 Courtesy of Hao Song, Duke University. Full Story: (http://publications.nigms.nih.gov/computinglife/predator.htm)
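The interaction scheme in the caption can be caricatured as a simple population model: the predator-secreted chemical removes prey, while the prey-secreted chemical rescues predators from their programmed die-off. The sketch below is a toy discrete-time version of that idea; every rate and starting value is invented for illustration and is not a measurement from the actual experiment.

```python
def step(predators, prey, kill=0.02, rescue=0.02,
         predator_death=0.5, prey_growth=0.4, capacity=100.0):
    """One time step of a toy version of the engineered circuit.

    All parameters are made-up illustrative rates, not experimental values.
    """
    # Prey grow logistically but are killed by the predator-secreted signal.
    new_prey = prey + prey_growth * prey * (1 - prey / capacity) \
        - kill * predators * prey
    # Predators die off unless rescued by the prey-secreted survival signal.
    new_predators = predators - predator_death * predators \
        + rescue * prey * predators
    return max(new_predators, 0.0), max(new_prey, 0.0)

# Without prey, predators decline toward extinction...
no_prey, _ = step(10.0, 0.0)
assert no_prey < 10.0
# ...while the prey-derived signal keeps them alive.
with_prey, _ = step(10.0, 50.0)
assert with_prey > no_prey
```

Sweeping the `kill` and `rescue` rates in a model like this is one way such a synthetic ecosystem can build intuition for the balancing acts in natural predator-prey systems.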
Solar Energy is Energy from the Sun

Solar Energy Can Be Used for Heat and Electricity

Solar thermal energy: There are many applications for the direct use of solar thermal energy: space heating and cooling, water heating, crop drying and solar cooking. It is a technology which is well understood and widely used in many countries throughout the world. Most solar thermal technologies have been in existence in one form or another for centuries and have a well-established manufacturing base in most sun-rich developed countries. The most common use for solar thermal technology is for domestic water heating.

There are two basic types of solar thermal power station. The first is the 'Power Tower' design, which uses thousands of sun-tracking reflectors or heliostats to direct and concentrate solar radiation onto a boiler located atop a tower. The temperature in the boiler rises to 500-700°C and the steam raised can be used to drive a turbine, which in turn drives an electricity-producing generator. The second type is the distributed collector system. This system uses a series of specially designed 'Trough' collectors which have an absorber tube running along their length. Large arrays of these collectors are coupled to provide high-temperature water for driving a steam turbine. Such power stations can produce many megawatts (MW) of electricity, but are confined to areas where there is ample solar insolation.

There are other uses of solar thermal energy such as solar cooking, crop drying, space heating, space cooling and day-lighting.

Photovoltaic modules or panels are made of semiconductors that allow sunlight to be converted directly into electricity. These modules can provide you with a safe, reliable, maintenance-free and environmentally friendly source of power for a very long time.
Most modules on the market today come with warranties exceeding 20 years, and will perform much longer.

How it works: PV cells convert sunlight directly into electricity without creating any air or water pollution. PV cells are made of at least two layers of semiconductor material. One layer has a positive charge, the other negative. When light enters the cell, some of the photons from the light are absorbed by the semiconductor atoms, freeing electrons from the cell’s negative layer to flow through an external circuit and back into the positive layer. This flow of electrons produces electric current.

Basic solar cell construction: Individual PV cells are interconnected in a sealed, weatherproof package called a module. When two modules are wired together in series, their voltage is doubled while the current stays constant. When two modules are wired in parallel, their current is doubled while the voltage stays constant. To achieve the desired voltage and current, modules are wired in series and parallel into what is called a PV array. The flexibility of the modular PV system allows designers to create solar power systems that can meet a wide variety of electrical needs, no matter how large or small.

Photovoltaic cells, modules and arrays

Using solar energy produces no air or water pollution and no greenhouse gases, but does have some indirect impacts on the environment. For example, there are some toxic materials and chemicals, and various solvents and alcohols, that are used in the manufacturing process of photovoltaic (PV) cells, which convert sunlight into electricity. Small amounts of these waste materials are produced. In addition, large solar thermal power plants can harm desert ecosystems if not properly managed.
Birds and insects can be killed if they fly into a concentrated beam of sunlight, such as that created by a “solar power tower.” Some solar thermal systems use potentially hazardous fluids (to transfer heat) that require proper handling and disposal. Concentrating solar systems may require water for regular cleaning of the concentrators and receivers and for cooling the turbine-generator. Using water from underground wells may affect the ecosystem in some arid locations.
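The series/parallel wiring rules described for PV modules above amount to simple arithmetic: series strings add voltages at constant current, and parallel strings add currents at constant voltage. The sketch below assumes a hypothetical 18 V, 5 A module rating (not a figure from this article) to show how an array reaches a target voltage and current:

```python
def array_output(module_voltage, module_current, in_series, in_parallel):
    """Return (voltage, current, power) for an array of identical modules.

    Series wiring multiplies voltage; parallel wiring multiplies current.
    Ignores real-world losses (mismatch, wiring resistance, temperature).
    """
    voltage = module_voltage * in_series
    current = module_current * in_parallel
    return voltage, current, voltage * current

# Two modules in series: voltage doubles, current stays constant.
v, i, p = array_output(18.0, 5.0, in_series=2, in_parallel=1)
assert (v, i) == (36.0, 5.0)

# Two modules in parallel: current doubles, voltage stays constant.
v, i, p = array_output(18.0, 5.0, in_series=1, in_parallel=2)
assert (v, i) == (18.0, 10.0)
```

Either wiring delivers the same total power (here 180 W); designers choose the series/parallel split to match the voltage and current their loads or inverters require.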
One of the most beloved of United States coin designs, James E. Fraser’s Indian Head/Buffalo Nickel was nearly abandoned at the outset due to protests from the vending industry that its bold design would not activate their machines properly. These concerns proved unfounded, though the coin did have some technical flaws. The value FIVE CENTS had to be set within an exergue midway through the first year’s production to protect it from wear. Sadly, the same was not done for the date, and countless pieces were rendered unidentifiable after just a couple of decades in circulation.

This is among the most popular series with date and mint collectors, as there are no rare issues to prevent its completion. Scarce coins include nearly all mintmarked examples before 1916, most prominently the 1913-S Type 2 and 1914-D. Still others are condition rarities, readily available in worn condition but quite scarce uncirculated and with sharp impressions. This series has a number of very dramatic varieties, the most desirable being the 1916 doubled-die obverse, the 1918/7-D overdate and the 1937-D three-legged bison. Numerous minor varieties are known, too, and these receive more attention than they would merit in series that are less popular than the Buffalo Nickel. NGC will attribute Buffalo Nickel varieties listed in VarietyPlus. Others may be found within the books listed below.
<urn:uuid:2563ed64-5450-42d8-b9cd-110003988cac>
CC-MAIN-2013-20
http://www.ngccoin.com/VPSubCategory.aspx?subid=4&category=nickels&cointype=buffalo-nickels
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96877
286
2.765625
3
Elk Island National Park of Canada
Phenotypic Differences Between the Bison Subspecies
Historic range of plains and wood bison. © Parks Canada
The differences between plains and wood bison can be separated into two groups: pelage characters and structural characters. Plains bison tend to have pelage characters that are larger and more obvious than those of the wood bison. Whereas plains bison have large chaps, a full beard and neck mane, and a well-demarcated cape, wood bison have no chaps, a thin pointy beard, a rudimentary neck mane and a cape that grades smoothly back to the loins. Structurally, the highest point of the hump on a plains bison is directly over the front legs, while the highest point on a wood bison is well forward of the front legs. There are differences in weight as well, with the wood bison being considerably larger than the plains bison. The Park maintains a bison weight database going back to 1956, and in all that time there is only one record of a plains bison bull weighing more than 2000 pounds (909 kg), while over one-third of wood bison bulls exceed this weight. There has been some discussion as to whether the subspecies are simply ecotypes: that a wood bison placed in plains bison habitat, or vice versa, would assume the traits of the host bison simply due to the environmental pressures under which it is placed. A large-scale phenotyping study conducted during the early 1990s examined almost every publicly managed bison herd (both plains and wood) for its external phenotypic expression, and it showed that, regardless of the habitat in which they reside, the animals maintain the traits that characterize each subspecies. Recent research at the University of Alberta has conclusively demonstrated a genetic difference between the two subspecies.
PLAINS BISON
Plains bison bull. © Parks Canada / EI9912310025, 1991/12/31
- The highest point of the hump is directly over the front legs.
- Large thick chaps on the front legs.
- Thick pendulous beard.
- Full neck mane which extends below the chest.
- Sharply demarcated cape line behind the shoulder.
- Thick bonnet of hair between the horns.
- Cape is usually lighter in color than the wood bison’s.
- About one-third smaller than a wood bison of similar age.
WOOD BISON
Wood bison bull. © Parks Canada / EI9912310026, 1991/12/31
- Highest point of the hump is well forward of the front legs.
- Virtually no chaps on the front legs.
- A thin scraggly beard.
- The neck mane is short and does not extend much below the chest.
- The cape grades smoothly back towards the loins with little if any demarcation.
- The forelock lies forward in long strands over the forehead.
- The hair is usually darker, especially on the head.
<urn:uuid:8a82cd00-a5ac-4c05-b134-516ce911263c>
CC-MAIN-2013-20
http://www.pc.gc.ca/pn-np/ab/elkisland/natcul/natcul1/b/iii.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93533
628
3.21875
3
A natural-gas rush in some of New York and Pennsylvania's most pristine habitats could have serious negative consequences for the water supply of New York City. The Marcellus Shale holds possibly the largest reservoir of natural gas discovered so far in the United States, as much as 500 trillion cubic feet. As gas companies send their "landmen" on prospecting hunts, armed with contracts that boast the possibility of making millions, a struggle has already emerged between those eager to strike it rich and those who are aware of the consequences of drilling - natural spaces transformed into loud industrial zones, drill pads, pipelines, access roads and all. While the rigs have already begun drilling in Pennsylvania, the land around the Catskills has yet to be tapped. Even so, Governor Paterson signed a bill this past summer streamlining the permit process, so that gas companies could begin operating in the spring. Greater than the risks of the drilling process itself is the concern over horizontal hydraulic fracturing, called "fracing," which shakes the ground like an earthquake. In Wyoming, where fracing has occurred since 2003, residents report spoiled drinking water and structural damage to their foundations. The chemical recipe for blasting open the shale and freeing the gas was developed by Halliburton and is a trade secret. Yet an independent study of fraced wells in Wyoming identified over 400 chemical toxins in contaminated soil and groundwater, some of which include carcinogens such as ethylbenzene, chromium, and arsenic. Meanwhile, the Energy Policy Act of 2005 creates a loophole for oil and gas companies to avoid accountability and thus prosecution. In fact, in the name of reducing dependence on foreign oil, such companies are exempt from major environmental-protection laws like the Safe Drinking Water Act and the Clean Water Act. If drilling is allowed in the Catskill watershed, the results could be disastrous.
Half of the state's population counts on the watershed to provide drinking water. Some 1.2 billion gallons of unfiltered water reach the city each day, driven almost entirely by gravity. In fact, it is the largest unfiltered surface-supply water system in America. Story suggested by Lewis Kofsky. Image: "My Water Supply" by CarbonNYC on Flickr courtesy of Creative Commons Licensing.
<urn:uuid:759af346-35ad-4336-b6fa-df6931ed2e5d>
CC-MAIN-2013-20
http://www.realitysandwich.com/toxic_runoff
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951963
462
3.28125
3
Cobscook Bay, Maine, is the site of a tidal energy pilot project led by Ocean Renewable Power Company. | Photo courtesy of Ocean Renewable Power Company. A pilot project that will generate electricity from Maine’s ocean tides could be a game-changer for America’s tidal energy industry at large. At the direction of the Maine Public Utilities Commission, three of the state’s electricity distributors will purchase electricity generated by Ocean Renewable Power Company (ORPC) -- the company leading the Maine pilot project. Once finalized, the contracts will be in place for 20 years -- making them the first long-term tidal energy power purchase agreements in the United States. The implications of these agreements are far-reaching -- helping to advance the commercialization of tidal energy technologies. The project, which has brought more than $14 million into Maine’s economy and created or helped retain more than 100 jobs, is supported by $10 million in funding from the Energy Department. For the pilot phase of the project, ORPC will deploy cross flow turbine devices in Cobscook Bay, at the mouth of the Bay of Fundy. These devices are designed to generate electricity over a range of water currents -- capturing energy on both ebb and flood tides without the need for repositioning. ORPC expects to complete installation of the first cross flow turbine by summer 2012, installing four additional turbine devices by fall 2013. Once complete, the Cobscook Bay Tidal Energy Project is expected to generate enough electricity to power 75-100 homes in surrounding Maine communities. After running and monitoring this initial project for a year, ORPC will expand its Maine project to nearby areas -- installing additional power systems over the course of three years. The goal is to increase the project’s capacity to 4 megawatts -- enough power to generate electricity for more than 1,000 Maine homes and businesses.
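The homes-powered figures above are back-of-envelope conversions from nameplate capacity. A minimal sketch of the arithmetic, assuming a ~30% capacity factor and a ~1.2 kW average household load (both are illustrative assumptions, not figures from the article):

```python
def homes_powered(capacity_mw: float, capacity_factor: float, avg_home_kw: float) -> float:
    """Estimate homes served: average output (kW) divided by average household demand (kW)."""
    avg_output_kw = capacity_mw * 1000 * capacity_factor
    return avg_output_kw / avg_home_kw

# Under these assumed inputs, a 4 MW project covers on the order of 1,000 homes,
# in line with the article's "more than 1,000 Maine homes and businesses."
print(round(homes_powered(4, 0.30, 1.2)))  # 1000
```

Real estimates depend heavily on the site's tidal resource and the capacity factor actually achieved, so this is an order-of-magnitude check only.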
The Office of Energy Efficiency and Renewable Energy’s Water Power Program supports the launch of innovative ocean power systems, funding cutting-edge prototype tests, demonstrations, and pilots that will lead to commercially viable renewable energy technologies. For more information, visit water.energy.gov.
<urn:uuid:e2d5a699-6aad-4cd5-9cd9-2f60b68a8985>
CC-MAIN-2013-20
http://www.rw.doe.gov/articles/maine-project-takes-historic-step-forward-us-tidal-energy-deployment
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.899412
445
2.640625
3
Egypt, officially the Arab Republic of Egypt, is a country mainly in North Africa, with the Sinai Peninsula forming a land bridge in Southwest Asia. Egypt is thereby a transcontinental country, and is considered a major power in North Africa, the Mediterranean region, the African continent, the Nile Basin, the Islamic world and the Red Sea. Covering an area of about 1,010,000 square kilometers (390,000 sq mi), Egypt is bordered by the Mediterranean Sea to the north, the Gaza Strip and Israel to the northeast, the Red Sea to the east, Sudan to the south and Libya to the west. Egypt is one of the most populous countries in Africa and the Middle East. The great majority of its estimated 79 million people live near the banks of the Nile River, in an area of about 40,000 square kilometers (15,000 sq mi), where the only arable agricultural land is found. The large areas of the Sahara Desert are sparsely inhabited. About half of Egypt's residents live in urban areas, with most spread across the densely populated centres of greater Cairo, Alexandria and other major cities in the Nile Delta. Egypt is famous for its ancient civilization and some of the world's most famous monuments, including the Giza pyramid complex and its Great Sphinx. Its ancient ruins, such as those of Memphis, Thebes, Karnak and the Valley of the Kings, are a significant focus of archaeological study, and artefacts from these sites are now displayed in major museums around the world. Egypt is widely regarded as an important political and cultural nation of the Middle East. Egypt possesses one of the most developed and diversified economies in the Middle East, with sectors such as tourism, agriculture, industry and services contributing at almost equal rates to national production. Consequently, the Egyptian economy is rapidly developing, due in part to legislation aimed at luring investments, coupled with internal and political stability, along with recent trade and market liberalization.
<urn:uuid:a5a98144-dc8b-4e3d-b3e5-a1e0d2d6dabb>
CC-MAIN-2013-20
http://www.scribd.com/doc/37189984/6/Buckingham-Palace
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901487
432
2.6875
3
Sedimentary rock covers 70% of the Earth. Erosion is constantly changing the face of the Earth. Weathering agents (wind, water, and ice) break rock into smaller pieces that flow down waterways until they settle to the bottom permanently. These sediments (pebbles, sand, clay, and gravel) pile up and form new layers. After hundreds or thousands of years, these layers become pressed together to form sedimentary rock. Sedimentary rock can form in two different ways. When layer after layer of sediment accumulates, the weight puts pressure on the lower layers, which then compact into a solid piece of rock. The other way is called cementing: certain minerals in the water interact to form a bond between rock particles, a process similar to making modern cement. Any animal carcasses or organisms caught in the layers of sediment will eventually turn into fossils. Sedimentary rock is the source of quite a few of our dinosaur findings. There are four common types of sedimentary rock: sandstone, limestone, shale, and conglomerate. Each is formed in a different way from different materials. Sandstone is formed when grains of sand are pressed together; it may be the most common type of rock on the planet. Limestone is formed from tiny pieces of shell that have been cemented together over the years. Conglomerate consists of sand and pebbles that have been cemented together. Shale forms under still waters like those found in bogs or swamps, where the mud and clay at the bottom are pressed together.
Sedimentary rock has the following general characteristics: - it is classified by texture and composition - it often contains fossils - occasionally reacts with acid - has layers that can be flat or curved - it is usually composed of material that is cemented or pressed together - a great variety of color - particle size varies - there are pores between pieces - can have cross bedding, worm holes, mud cracks, and raindrop impressions This is only meant to be a brief introduction to sedimentary rock. There are many more in-depth articles and entire books that have been written on the subject. Here is a link to a very interesting introduction to rocks. Here on Universe Today there is a great article on how sedimentary rocks show very old signs of life. Astronomy Cast has a good episode on the Earth’s formation.
<urn:uuid:805ab979-777a-4e6d-a60e-66a9c207b774>
CC-MAIN-2013-20
http://www.universetoday.com/38537/sedimentary-rock/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957616
512
4.125
4
They aren't in poverty, but they are just a step away from falling into its clutches. More than 30 million Americans are living just above the poverty line. These near poor, often defined as having incomes of up to 1.5 times the poverty threshold, were supporting a family of four on no more than $34,500 last year. They are more likely to be white than those in poverty, according to a CNNMoney analysis of Census Bureau data. They are more likely to be elderly. They are more than three times as likely to work full-time, year-round. And they are more likely not to receive help from the government. "People just above the poverty line are just one paycheck or health disaster away from poverty," said Katherine Newman, a dean at Johns Hopkins University. "They are still quite fragile." The near poor have grown by about 10 percent in number over the past five years, as the Great Recession sent many people falling down the income ladder. The ranks of those in poverty, on the other hand, swelled 24 percent in the same period. Half of the near poor are white, compared to just over two in five of those in poverty, according to Census figures. And only 16.7 percent are black, compared to 23.6 percent of those in poverty. The share of Latinos who are near poor is 27.8 percent, only slightly smaller than the share in poverty. The fact that there are more blacks in poverty than among the near poor likely stems from the fact that the unemployment rate among blacks is nearly double that of whites, said Robert Moffitt, professor of economics at Johns Hopkins. And they have much higher rates of single motherhood, he said. Whites, on the other hand, likely have enough earnings to put them just above the poverty line. Another large group among the ranks of the near-poor are senior citizens. Nearly 17 percent of the near poor are elderly, while only 7.8 percent of those in poverty are. Social Security keeps many of the elderly, particularly white seniors, above the poverty line ... 
but barely, said Arloc Sherman, senior researcher at the Center on Budget and Policy Priorities. "Social Security is not an exorbitant program," he said. "People end up above the poverty line, but not necessarily far above it."
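The income ceiling quoted above follows directly from the near-poor definition. A minimal sketch, assuming a round $23,000 Census poverty threshold for a family of four (the value the article's $34,500 figure implies, not a number stated in the article):

```python
# Assumed figure: the Census Bureau's poverty threshold for a family of four
# was roughly $23,000 in 2011 ($34,500 / 1.5 implies exactly that).
POVERTY_THRESHOLD_FAMILY_OF_4 = 23_000
NEAR_POOR_MULTIPLIER = 1.5  # "incomes of up to 1.5 times the poverty threshold"

near_poor_ceiling = POVERTY_THRESHOLD_FAMILY_OF_4 * NEAR_POOR_MULTIPLIER
print(f"Near-poor income ceiling: ${near_poor_ceiling:,.0f}")  # Near-poor income ceiling: $34,500
```

The actual thresholds vary by family size and composition, so this only reproduces the single family-of-four case the article cites.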
<urn:uuid:ee07280c-9875-4e7c-bc1b-e1df50ef4bcf>
CC-MAIN-2013-20
http://www.wtae.com/news/money/Near-poor-in-America-30M-and-struggling/-/9680890/17111150/-/f73avnz/-/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.974833
478
2.65625
3
What can be learned from the post-election crisis in Greece? The traditional political establishment in Greece buckled under the weight of crippling austerity and a mass people’s movement when the country went to the polls May 6. Now that the voting is over, and attempts to form a government have failed, another election must be held, scheduled for June 17, raising new questions about the way forward for the Greek working class. The crisis deepens as panicking Greeks withdraw their savings from banks on the brink of collapse. Since the fall of the U.S.-backed military dictatorship that ruled Greece from 1967-1974, two parties, PASOK and New Democracy, have dominated the political scene. However, both parties had their worst showing ever, and combined were only able to muster 32 percent of the vote, down from 77 percent in 2009. Instead, support grew for parties of both the left and the far right. With parliament deadlocked, unable to form a new governing coalition, a new election is pending and there is a distinct possibility of a protracted political crisis and a sharp polarization that provides an opportunity for the working class to decisively assert itself. Background to the elections Since the worldwide capitalist economic crisis began in 2007-2008, several countries in the eurozone, which all operate with the common euro currency, have experienced severe debt crises. These national economies are more intimately linked than ever before—which was supposed to be the benefit of the eurozone—so the problems of one immediately threatens the rest. Germany, the strongest capitalist economy of the eurozone, along with the imperialist U.S., have been working hard to force economic restructuring on the most indebted countries, offering bailouts in exchange for severe cuts to social welfare programs and other austerity measures. 
They have worked through three main entities: the U.S.-dominated International Monetary Fund, the European Union and the European Central Bank, collectively referred to as the “Troika.” Over the last two years, the Troika has arranged for around €240 billion ($305 billion) in bailout funds for Greece to service its massive debt. In exchange, the Greek ruling class forced through devastating cuts that have led to repeated strikes and militant popular mobilizations. The Troika has worked hand in hand with the Greek ruling class, which, while claiming to “understand” the opposition of the people, claims that austerity is a difficult but necessary step toward economic revival. The other option, they claim, is complete collapse. It is a story poor and working people across the world are familiar with, including in the United States. Narrowly winning first place in the elections was New Democracy, a center-right party that was part of the existing government led by Lucas Papademos. An unelected banker, Papademos was appointed to lead the government through its unpopular debt deal. New Democracy campaigned on a platform of supporting the extreme austerity measures imposed by the Troika, and promised only to try to renegotiate some of the more painful terms of the debt deal. The other pro-austerity party, the misnamed Pan-Hellenic Socialist Movement (PASOK), came in third for the first time in the party’s history. PASOK is led by Evangelos Venizelos, the finance minister under the two previous governments. Venizelos was one of the main architects of the austerity “memorandum” and offered only a pitiful pledge that he would ask the country’s creditors to give them three years, rather than two, to reach absurdly unrealistic economic benchmarks. Gains on the left The biggest surprise of the election was the second-place finish of the Coalition of the Radical Left (SYRIZA), with 16.8 percent of the vote. 
SYRIZA is a collection of small communist tendencies and a larger reformist party that split from the Communist Party of Greece after the fall of the Soviet Union. SYRIZA is led by Alexis Tsipras, a former Communist Youth leader who has received significant international press attention. SYRIZA calls for canceling the bailout deal but keeping Greece inside the eurozone and European Union. This, it says, can be achieved through negotiations with the Troika and through nationalization of the Greek banking sector. Although there are revolutionary forces within SYRIZA, the dominant line at present is fundamentally social democratic. The “peaceful revolution” they have declared mutes the questions of socialism and working-class power, and raises hopes in a radically reformed capitalism. For example, Tsipras stated in a letter to high-ranking European Union officials, “We must urgently protect the economic and social stability of our country. … It is our duty to re-examine the whole framework of the existing strategy, given that it not only threatens social cohesion and stability in Greece but is a source of instability for the European Union.” While SYRIZA’s leadership wants to reverse austerity, its appeal to “social cohesion and stability” means “stability” under a reformed capitalism. The second most popular party on the left was the Communist Party of Greece (KKE), which registered a modest increase of 1 percent from their previous election result, ending up with 8.5 percent. This was below the 10-12 percent that most opinion polls predicted. The KKE put forward a platform calling for the socialization of the means of production under a “working class-people’s power” government. Of the left groups in parliament, the Communist Party is the only one to call for Greece to leave the European Union, a bloc of the major imperialist and peripheral capitalist states of Europe. 
The KKE has played a major role in the massive fight-back movement waged by the Greek working class and especially in its advocacy for general strikes. It intervenes through mass organizations like the All-Workers Militant Front (PAME) in the labor movement, the Greek Women’s Federation and the Students Struggle Front, among others. The lowest scoring of the three left parties was Democratic Left, a split to the right from SYRIZA formed in 2010. It only received 6.1 percent of the vote, but its leader Fotis Kouvelis is often ranked as the most popular politician in Greece by opinion polls. Democratic Left rejects the memorandum, but makes sure to balance its criticism of austerity with pledges of absolute loyalty to the eurozone. Major gains for far right and fascists Far-right forces experienced major gains in the election as well. The semi-fascist Popular Orthodox Rally suffered as punishment for its participation in the previous government, but new forces emerged. Independent Greeks, a split from New Democracy, came in fourth with 10.6 percent. The party rejects the austerity memorandum on nationalistic grounds and relies on anti-German and anti-Turkish demagogy in place of a specific political program. The story that has perhaps gotten the most foreign press attention is the entrance of the neo-Nazi Golden Dawn party into parliament with 7 percent of the vote, more than 20 times their score in 2009. Its logo is an ancient Greek symbol similar to a swastika, and until recently Adolf Hitler’s manifesto Mein Kampf was displayed prominently at the party’s headquarters. Golden Dawn campaigned on a platform of expelling all immigrants from Greece and national chauvinist opposition to the Troika. While some voters were attracted to its racist rhetoric and acts of violence against immigrants, Golden Dawn bought the loyalty of others by operating food banks during a time of growing hunger. 
The fascists’ success is a serious threat to the working class and all democratic forces in Greece. While its 7 percent may appear small, it is precisely under these polarized economic and political conditions, when the capitalist class cannot achieve stable rule through democratic means, that fascism has historically grown and taken power. The main bourgeois parties cynically used the threat of Golden Dawn to present the false dilemma of austerity or a descent into fascism. But in reality, these mainstream parties’ promotion of anti-immigrant racism gave Golden Dawn political space to grow. Moreover, if it appeared that the working class could potentially become the ruling power in Greece, the bourgeoisie could accept, if not turn to, a fascist coup. That the Greek ruling class has operated under fascist military rule before makes such a scenario all the more plausible. It is up to the revolutionary left and the working class to develop a program and plan of action to smash fascism politically and in the streets. A ‘government of the left’? A central component of the SYRIZA campaign was its appeal for the formation of a “government of the left,” encompassing all the left forces opposed to the Troika. The formation of such a government was impossible given the election results and highly implausible given Greece’s undemocratic electoral laws governing coalitions. But SYRIZA’s call for a government of the left clearly resonated with much of the working class and contributed to its success. If SYRIZA were to emerge in first place in the June election, as presently projected, the left could achieve such a majority. SYRIZA leader Tsipras and other social-democratic proponents of a government of the left argue that it would be able to cancel the memorandum, reverse the wave of austerity measures, potentially nationalize the banks and rebuild the Greek economy in a way that strengthens the working class. 
SYRIZA makes the case that the European ruling class would never let Greece default and exit the eurozone because of the economic havoc this would create in other heavily indebted states like Spain and Italy. In short, Tsipras pledges to reverse the balance of forces inside the eurozone; instead of the Troika forcing Greece into deeper austerity, Greece would leverage its power against the Troika. While the Troika obviously wants to avoid a complete Greek default (lenders have already accepted a 53.5 percent write-down on the debt), they have had the last two years to prepare for this eventuality. The centerpiece of the European ruling class’ preparations is the European Financial Stability Facility, a $976 billion bailout fund, meant to act as a “firewall” to counter the immediate effects of a Greek bankruptcy. With this in place, there is a small but growing tendency of capitalist financiers who believe that if Greece were expelled, the eurozone would “end up stronger once the dust had settled.” Tsipras insists that Greece can out-negotiate the international capitalists, rather than calling for the socialist reorganization of society. He raises unrealistic expectations among the oppressed in electoral and bourgeois political gamesmanship, rather than raising the possibility of a new class power. Why revolution is necessary By contrast, the KKE has called the “government of the left” idea a false hope that will lead to disillusionment. The KKE rejects possible participation in a left government, insisting that such a government will leave the capitalist state and the for-profit economic system intact, keep Greece bound to the imperialist institutions of the EU and NATO, and thus cannot resolve the central contradictions at the heart of the political crisis. 
More broadly, they explain that the social-democratic program, which arose in the post-war period of capitalist expansion, cannot be achieved in the context of protracted capitalist crisis and neoliberal financial control. They have called the SYRIZA plan opportunist, betraying the long-term interests and political clarity of the working class in exchange for short-term gains for particular leftist parties. This raises the age-old but still pivotal question of reform and revolution: Does the working class have the capacity to come to power, and how so? Can the capitalist system be reformed to resolve the exploitation at its center? How far can revolutionary organizations go at this time? Several organizations in Greece—including the KKE—have made the case that the political crisis of the bourgeois class has matured to the point of a revolutionary situation, opening the possibility for the transfer of power to the working class in alliance with middle-class strata. Revolutionaries, of course, fight for reforms that improve the conditions of the working class and facilitate the political struggle against the ruling class. But a central responsibility is to assess whether the conditions for revolution are approaching, to hasten their development and prepare for such an opportunity. The basic contradiction in capitalist society is that the productive process is socialized, involving millions of workers, while ownership is private, concentrated in a tiny ruling class. The capitalists control the means of production and distribution, as well as countless financial mechanisms, to squeeze profits out of workers and maintain their political and economic power. The capitalist state (the police, military and courts) allows them to safeguard this system with force, while the government provides for its administration. A change in administration, like the ascent of a government of the left, will not alter the fundamental underlying character of the state. 
This can only be achieved by the overthrow of the capitalist state and its replacement by a worker’s state based on independent organs of working-class power—a socialist revolution. Elections and the revolutionary process Some communist tendencies support the formation of a government of the left for this reason—not because it would solve the crisis, but because it would further polarize the country and hasten the development of a revolutionary situation. There can be no doubt that the formation of a SYRIZA-KKE-Democratic Left coalition would cause considerable panic among the Greek and European ruling class, and new bouts of intense class struggle. History has shown that revolutions can take many paths and tactical turns. In Venezuela, the election of President Hugo Chávez, a socialist presiding over a fundamentally capitalist state, undoubtedly gave a boost to the class struggle and the regroupment of revolutionary forces in the country. In Nepal, Maoists waged a triumphant revolutionary war against the feudal king that resulted in a negotiated peace and the Maoists’ subsequent election to lead a bourgeois government. Their decision to dissolve their armed forces remains the subject of considerable debate among revolutionaries. Neither country, despite heroic advances, has established socialism. But for a Greek left government to be a vehicle of revolution, instead of demobilization, demoralization and disillusionment, a left-wing government would need to have clear programmatic unity around the socialization of the means of production, centralized planning, workers’ political power, and so on. It would need to organize the people to take on the police and the military that their own left-unity government would be associated with and nominally leading. Otherwise, when a revolutionary situation emerged it would only disorient the movement and contribute to the persistence of reformist illusions. SYRIZA has been silent or worse on these critical questions. 
In the run-up to the elections, Tsipras said: “A government of the left is in need of industrialists and investors. It needs a healthy business climate.” In other words, his version of a government of the left would not challenge the capitalists’ right to exploit labor. The capitalist establishment has reciprocated, and the Federation of Hellenic Enterprises (SEV), the Greek equivalent of the U.S. Chamber of Commerce, has called for the formation of a national unity government including SYRIZA. Tsipras called the election results a “peaceful revolution,” a slogan that misleadingly suggests the electoral realm, rather than continued mass struggle, can provide a way out of the crisis for working people. The revolutionary crisis and dual power While support for SYRIZA is likely to increase in the coming election, it is doubtful the new election will produce a clear winner or workable coalition. In the face of the increased likelihood of exiting the eurozone, which would deepen the economic and political crisis, the class struggle will intensify. With the bourgeoisie so thoroughly discredited, and the Greek masses so clearly calling for an alternative way, the revolutionary left has an opportunity to offer a program that provides not only short-term relief, but also a longer-term vision of a new economic and political system. The question is how to mobilize the working class and broadly unite the revolutionary forces in a struggle to achieve this. Historically, a key phase in any revolutionary crisis is that of dual power. By organizing what is essentially a second, rival state built on organs of mass struggle, revolutionaries can show concretely what working-class or people’s power looks like and offers. In the Russian Revolution, this took the form of councils of workers and soldiers (called soviets). In China, the Red Army itself functioned as a government in the areas that it liberated. 
Revolutionaries have also convened constituent assemblies—to rewrite the constitution—as a way to articulate, and establish the legitimacy of, a new political vision. Clearly, there are millions in Greece who are still holding out hope that the existing capitalist government and state, perhaps with left-wing leadership, can deliver the goods. To this end, a sophisticated political struggle, backed by a concrete plan of action, must be waged against Tsipras and the social-democratic fantasies he projects. In his “April Theses,” designed to guide the Bolsheviks through Russia’s revolutionary crisis, Lenin called for “patient, systematic, and persistent explanation … especially adapted to the practical needs of the masses.” Can the struggle in the streets break the deadlock in parliament? Will alternatives to bourgeois state power be built? The Greek working class has found itself on the frontline of the international struggle against capitalism, and the answers to these questions will resonate around the world. For revolutionaries in the United States, our main role is not to endorse this or that organization and its tactics from afar. Our chief responsibilities are 1) to explain that the Greek crisis is a result of the contradictions of capitalism, not reckless social spending, 2) to defend the unfolding Greek revolution, especially as it could escalate and be slanderously attacked in the imperialist media and even militarily assaulted by U.S.-NATO forces, and 3) to study and learn from the complex revolutionary process that our brothers and sisters are trying to navigate. While their process is far more advanced than our own, their struggle is ours—and we have much to learn from it.
Genital/perigenital infection caused by the yeast-like fungus Candida albicans or occasionally by other species of Candida. Vulvovaginitis due to Candida presents with erythema of the vaginal mucous membrane and the vulval skin, itching, soreness, and a thick, creamy-white discharge. There may be spread onto the perineum and into the groins. In Candida balanitis, tiny papules develop on the glans penis, evolve as white pustules or vesicles and rupture, leaving a peeling edge. Involvement of the groins sometimes coexists. Perianal candidiasis presents with erythema, soreness and irritation, and subsequent spread along the natal cleft is common.

Synonyms: Candidosis, Genital/Perigenital; Kolpitis Candidomycetica; Vulvitis Candidomycetica
We have always enjoyed units from Karen Caroe, and Clocks and Time looks like a fun way to learn about something that is so important to us each day. Subject areas covered include Bible, language arts, math, history, science, geography, art and music. There are many possibilities for rabbit trails, and a variety of related topics such as genealogy, leap years and historical calendars.

The unit study planning sheet recommended is no longer available, but you can get an idea of how it looked by visiting the Internet Archive.

Good general resource for teaching clocks and time. See left sidebar for a variety of information and activities.

Telling Time – Clocks
Explanation of how to tell time. Practice sheets are at the bottom.

24/12 Hour Time
Explanation of how to convert from military to 12-hour time.

Very handy tool for teaching analog and digital time.

Explanation for using Roman numerals with a Roman numeral converter.

Why We Have a Change in Seasons

Newton’s Apple: Clocks
Activities that go along with the show, but stand alone just fine. Includes making a water clock.

Stop the Clock
See how fast you can assign the digital clock readings to the correct analog time.
- Level One – 30 minute intervals.
- Level Two – 15 minute intervals.
- Level Three – 5 minute intervals.
- Level Four – 1 minute intervals.
- Level Five – Includes military time.

Finding Fractions of Times
Worksheet for becoming familiar with terms such as quarter-hour and half-past.

Important Words for Times and Dates
Worksheet on words we use discussing time such as century, leap year, and decade.

Add and Subtract Times
Explanation and worksheet from BBC.

Telling Time Worksheet Generator
Create your own worksheets.

Make Your Own Sundial
Instructions from Sky and Telescope.

Mathematical Geography by Willis E. Johnson
Covers longitude and time, circumnavigation and time, the earth’s revolution, time and the calendar, and seasons for older students.

The Real Mother Goose
Contains The Mouse and the Clock, Ten O’Clock Scholar and Thirty Days Hath September.

Units & Lesson Plans
A brief mini-unit here at DIYHomeschooler.
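Several of the resources above cover converting between 24-hour (“military”) and 12-hour time. For anyone who wants to show the rule itself, here is a minimal sketch in Python (the function name is ours, not from any of the linked resources):

```python
def to_12_hour(hours, minutes):
    """Convert a 24-hour ("military") time to a 12-hour string with AM/PM."""
    suffix = "AM" if hours < 12 else "PM"
    hour12 = hours % 12 or 12  # 0 and 12 both display as 12
    return f"{hour12}:{minutes:02d} {suffix}"

print(to_12_hour(13, 30))  # 1:30 PM
```

Going the other way is just as short: take the hour modulo 12 and add 12 when the suffix is PM, treating noon and midnight as the special cases.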
This material is published under the OGL.

This category includes abilities a creature has because of its physical nature. Natural abilities are those not otherwise designated as extraordinary, supernatural, or spell-like.

Extraordinary abilities, though they may break the laws of physics, are nonmagical: they don’t become ineffective in an antimagic field, are not subject to any effect that disrupts magic, and generally do not provoke attacks of opportunity. They are, however, not something that just anyone can do or even learn to do without extensive training. Using an extraordinary ability is a free action unless otherwise noted.

Spell-like abilities are magical and work just like spells (though they are not spells and so have no verbal, somatic, material, focus, or XP components). They go away in an antimagic field and are subject to dispel magic and spell resistance (if the spell the ability resembles or duplicates would be subject to spell resistance). Usually, a spell-like ability works just like the spell of that name. A few spell-like abilities are unique; these are explained in the text where they are described.

A spell-like ability usually has a limit on how often it can be used. A spell-like ability that can be used at will has no use limit. Using a spell-like ability is a standard action unless noted otherwise in the ability or spell description, and doing so while threatened provokes attacks of opportunity. It is possible to make a Concentration check to use a spell-like ability defensively and avoid provoking an attack of opportunity, just as when casting a spell. A spell-like ability can be disrupted just as a spell can be. Spell-like abilities cannot be used to counterspell, nor can they be counterspelled.

For creatures with spell-like abilities, a designated caster level defines how difficult it is to dispel their spell-like effects and defines any level-dependent variables (such as range and duration) the abilities might have.
The creature’s caster level never affects which spell-like abilities the creature has; sometimes the given caster level is lower than the level a spellcasting character would need to cast the spell of the same name. If no caster level is specified, the caster level is equal to the creature’s Hit Dice. The saving throw (if any) against a spell-like ability is 10 + the level of the spell the ability resembles or duplicates + the creature’s Cha modifier.

Some spell-like abilities duplicate spells that work differently when cast by characters of different classes. A monster’s spell-like abilities are presumed to be the sorcerer/wizard versions. If the spell in question is not a sorcerer/wizard spell, then default to cleric, druid, bard, paladin, and ranger, in that order.

Most psionic monsters have some number of psi-like abilities. These are very similar to spell-like abilities. Naturally, they are psionic and work just like powers. In some cases, a creature’s psi-like abilities (or abilities listed under a creature’s psionics entry) may include an effect that does not duplicate any listed power. For such abilities, simply use the existing spell description. Treat the creature’s manifester level as the caster level for the spell. The ability is still psionic in origin, so spells and powers that specifically affect psionic powers can negate or reduce its effects as they would any other psionic power. A few psi-like abilities are unique; these are explained in the text where they are described.

A creature with psi-like abilities does not pay for these abilities with power points, and the abilities have no verbal, somatic, or material components, nor do they require a focus or have an XP cost (even if the equivalent power has an XP cost). The user activates them mentally. Armor never affects a psi-like ability’s use.
Psi-like abilities do not work in a null psionics field and are subject to being dispelled by dispel psionics. They are subject to power resistance if the power or spell the ability duplicates would be subject to power resistance.

A psi-like ability usually has a limit on how often it can be used. A psi-like ability that can be used at will has no use limit. Using a psi-like ability is a standard action unless noted otherwise, and doing so while threatened provokes attacks of opportunity. It is possible to make a Concentration check to use a psi-like ability defensively and avoid provoking attacks of opportunity, just as when using a power or casting a spell. A psi-like ability can be interrupted just as a spell can be. Psi-like abilities cannot be used to counterspell, nor can they be counterspelled.

All creatures with psi-like abilities are assigned a manifester level, which indicates how difficult it is to dispel their psi-like effects and determines all level-dependent variables (such as range or duration) the abilities might have. When a creature uses a psi-like ability, the power is manifested as if the creature had spent a number of power points equal to its manifester level, which may augment the power to improve its damage or save DC. However, the creature does not actually spend power points for its psi-like abilities, even if it has a power point reserve due to racial abilities, class levels, or some other psionic ability.

The DC of a saving throw (if applicable) against a creature’s psi-like ability is 10 + the level of the power or spell the ability duplicates + the creature’s Cha modifier. Remember to check the power’s Augment entry to see if the creature’s manifester level (and thus the effective power point expenditure) increases the DC of the saving throw. Changes to the effect’s save DC, damage, and so on are noted in the psi-like ability entry.

By default, supernatural abilities are magical and go away in an antimagic field.
However, some creatures have psionic abilities that are considered supernatural. Psionic feats are also supernatural abilities. These abilities do not function in areas where psionics is suppressed. Supernatural abilities of either type are not subject to spell resistance or power resistance. Supernatural abilities cannot be dispelled and are not subject to counterspells. Using a supernatural ability is a standard action unless noted otherwise. Supernatural abilities may have a use limit or be usable at will, just like spell-like abilities. However, supernatural abilities do not provoke attacks of opportunity and never require Concentration checks. Unless otherwise noted, a supernatural ability has an effective caster level equal to the creature’s Hit Dice. The saving throw (if any) against a supernatural ability is 10 + 1/2 the creature’s HD + the creature’s ability modifier (usually Charisma).

See the tables below for a summary of the types of special abilities.

| | Extraordinary | Spell-Like | Supernatural |
|---|---|---|---|
| Dispel | No | Yes | No |
| Antimagic field | No | Yes | Yes |
| Attack of opportunity | No | Yes | No |

Dispel: Can dispel magic and similar spells dispel the effects of abilities of that type?
Antimagic Field: Does an antimagic field or similar magic suppress the ability?
Attack of Opportunity: Does using the ability provoke attacks of opportunity the way that casting a spell does?

| | Extraordinary | Psi-Like | Supernatural |
|---|---|---|---|
| Dispel | No | Yes | No |
| Power resistance | No | Yes | No |
| Null psionics field | No | Yes | Yes |
| Attack of opportunity | No | Yes | No |

Dispel: Can dispel psionics and similar powers dispel the effects of abilities of that type?
Power Resistance: Does power resistance protect a creature from these abilities?
Null Psionics Field: Does a null psionics field or similar effect suppress the ability?
Attack of Opportunity: Does using the ability provoke attacks of opportunity the way that manifesting a power does?
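The two save-DC formulas above differ only in what they add to the base 10. As a quick illustration (the function names are ours, not part of the SRD text, and the half-HD term assumes the usual round-down convention):

```python
# Save DC formulas summarized from the ability descriptions above:
#   spell-like / psi-like ability: 10 + level of duplicated spell/power + Cha modifier
#   supernatural ability:          10 + 1/2 the creature's Hit Dice + ability modifier

def spell_like_dc(spell_level: int, cha_mod: int) -> int:
    """Save DC against a spell-like or psi-like ability."""
    return 10 + spell_level + cha_mod

def supernatural_dc(hit_dice: int, ability_mod: int) -> int:
    """Save DC against a supernatural ability (half HD, rounded down)."""
    return 10 + hit_dice // 2 + ability_mod
```

For example, a 9-HD creature with a +4 Charisma modifier has a supernatural-ability save DC of 18, while a spell-like ability duplicating a 3rd-level spell from the same creature has a DC of 17.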
Posted by Christina B. Winge and Åse Dragland

Andy Booth, a SINTEF scientist and environmental chemist, is interested in what nanotechnology is doing to the marine environment. A couple of years ago, he began to be interested in whether nanoparticles could be hazardous. Now, Booth is leading a project called “The environmental fate and effects of SINTEF-produced nanoparticles”. The scientists will study both how the particles behave and how they affect organisms when they are released into the marine environment. One of the goals of the project is to find out whether nanoparticles are toxic to marine organisms such as small crustaceans and animal plankton. Further down the road, the ability of cod larvae and other large organisms to tolerate nanoparticles will also be studied. “Our experiments will tell us whether these tiny particles will be excreted or remain inside organisms, and if they do, how they will behave there,” explains Booth, who wants to make it clear that not all nanoparticles are necessarily dangerous. Many types of nanoparticles occur naturally in the environment, and have existed ever since the Earth was formed. For example, ash is a material that contains nanoparticles. “What is new is that we are now capable of designing nanoparticles with a wide range of different properties. Such particles can be different from those that already occur in nature, and they are intended to perform specific tasks at our command, so we do not know how they will behave in nature. “This could potentially – and I say “potentially” because this topic is so new to science – indicate that these particles could be toxic under certain conditions. However, this depends on a number of factors, including their concentration and the combination of particles,” emphasises Booth.
“Does industry have good enough tests to ensure that the nanoproducts it releases on the market are safe?”

“In the field of chemical analysis, we have standard tests that tell us whether or not a material is toxic. Today, there are no such tests of nanoparticles that are 100% accurate, so this is something that scientists are currently working on at international level,” says Booth, adding that he believes that it is extremely difficult to put products that are a danger to health on the market.

Survey of millions is essential

The nanoparticle concept is general and includes many more than one type. There are millions of potential variants. Today, it is impossible to obtain an overview of how many there actually are, and some of them will be toxic, while others are harmless, just like other chemicals. This is why Andy Booth and his 12-strong team at SINTEF have just launched their painstaking efforts. One of the biggest challenges they have faced so far is that of identifying scientific methods that will enable them to discover how these tiny particles behave in nature, and how they might affect natural processes.

Booth’s colleague Christian Simon and his research department at SINTEF Materials and Chemistry have recently made the most important industrial breakthrough ever in nanoparticle technology, and in this case it looks as though nanosubstances could be environmentally friendly alternatives to chemicals. One of Norway’s leading manufacturers of powders and paints has started production of a new type of paint containing nanoparticles, developed by SINTEF. The particles possess fluid characteristics that make the paint easy to apply. This means that a higher proportion of dry matter can be used, with correspondingly less solvent. Furthermore, the paint will dry rapidly and be more wear-resistant than normal paint.
“What is new is that we combine inorganic, tough, hard materials with organic, flexible, and formable materials when we create our nanoparticles. This gives us a new class of materials with improved properties; what are known as hybrid solutions. For example, we can make polymers with improved light stability that will also withstand scratches,” says Simon.

When a hollow nanoparticle is created, it is called a nanocapsule. The cavity can be filled with another material for subsequent release for any of a wide range of purposes. The SINTEF scientists have not come as far with nanocapsules as they have with nanoparticles, but they have developed a technology that can be used in several applications, and they can produce nanocapsules on a large scale. “For example, we can improve the durability of coatings for aircraft, ships and cars,” says Simon. “The components consist of substances that can close up cracks and scratches. Just think of vehicle bodywork. When gravel hits its surface, the enamel cracks and gets damaged. But simultaneously, the capsules inside the enamel burst and the material they contain will repair the damage.”

But what happens when materials painted with nanoparticles are demolished, chopped up or burnt? Will hazardous components escape to the environment?

“The particles have been produced in such a way that they create chemical bonds to the other components of the paint. When the paint is fully cured, therefore, the nanoparticles no longer exist, so they cannot separate from the polymer matrix when whatever has been painted is torn down, chopped up or burnt,” answers Christian Simon.

“Surgical” medical treatment

Hollow nanocapsules can also be used in medical treatments with almost “surgical” effects. They can be sent directly into the sick cells. Ruth Baumberger Schmidt and her team are working on this topic. The scientists fill nanocapsules with medication, and steer them to wherever they want their contents to end up.
They do this by binding special molecules to the coating. The capsule’s shell is broken when its immediate environment is right in terms of the selected trigger, such as temperature or acidity. According to how the capsule has been concocted, its contents can be allowed to leak out gradually over time, or at a higher rate at first and gradually less as time goes by. At the moment, Ruth Schmidt and a group of SINTEF chemists are concentrating on medicines to fight cancer, a long-term project that offers important challenges. The use of nanocapsules inside the body makes serious demands of the materials used. The particles that are being developed for medical purposes must be non-toxic and need to be broken down into non-hazardous components that the body can excrete, for example via the urine. The capsules also need to head for the right site of action and to liberate their contents, without being discovered by “watchdogs” such as T cells and natural killer cells. “In this case these capsules are a plus because here we want the capsules to pass through the cell membrane and do their work locally. Other types of nanoparticles can pass the membrane and become a danger to the body. The risk of nanotechnology is that sometimes they are not supposed to pass, or that they accumulate in large quantities over a period of time, instead of disappearing. We don’t use nanotubes or nanofibres, because we believe that they are less safe than particles. But a lot of research is being done in this field.” So there is great potential, but also a high degree of uncertainty, is the conclusion. Can it be that nanotechnology was oversold when the subject emerged during the nineties? Were we simply blinded by its potential, with the result that we forgot to look out for its potential disadvantages? Andy Booth and his colleagues carry on tirelessly with their experiments. 
“When nanoparticles are released into rivers and lakes, it is a rather complicated matter to study how they will behave. Chemistry is different at nanometre level, and nanoparticles do not behave like normal particles,” says Booth. “These particles also behave differently in fresh- and salt-water. Finding methods that will enable us to study their behaviour is essential,” says the environmental chemist. “We can add a fluorescent marker to the particles. When we test the sample in a spectroscopic camera, the marker will light up and distinguish such particles from other particles.” “The big question now is to find out what concentrations we need to test in order to be on the safe side. It is not worth taking chances with nature,” concludes Andy Booth.

Christina Benjaminsen Winge has been a regular contributor to the science magazine Gemini for 11 years. She was educated at Volda University College and the Norwegian University of Science and Technology, where she studied media and journalism. Åse Dragland is the editor of GEMINI magazine, and has been a science journalist for 20 years. She was educated at the universities of Tromsø and Trondheim, where she studied Nordic literature, pedagogics and social science.
These shrines most often house a statue of the Blessed Virgin Mary but sometimes hold the image of another Catholic saint or of Jesus. Infrequently, more than one figure is represented (as in this tableau of a juvenile Mary with her mother). While often constructed by upending an old bathtub and burying one end, similar designs have been factory produced. These factory produced enclosures sometimes have decorative features that their recycled counterparts lack, such as fluting reminiscent of a scallop shell. The grotto is sometimes embellished with brickwork or stonework, and framed with flowerbeds or other ornamental flora. The inside of the tub is frequently painted a light blue color, particularly if the statue is of Mary because of her association with this color. Over time, distinguishing characteristics of these shrines can become blurred. Instances occur of shrines whose statue is missing and conversely of grottoes being removed, leaving a statue in place. Bathtub Marys in actual bathtubs are frequently found in the Upper Mississippi River valley, including western Wisconsin and Minnesota, and are an important part of the visual folk culture of Roman Catholics in that region. Noteworthy concentrations of bathtub Madonnas can be found in Stearns County, Minnesota, an area heavily settled by German-American Catholics in the mid-19th century, the Holyland in eastern Wisconsin, and rural Bay City, Michigan. Bathtub Madonnas are also a common sight in north-central Kentucky and southern Indiana, an area that has historically been predominately Catholic. A drive down country roads in Nelson, Marion, and Washington counties will provide ample sightings of these small shrines.
In the Southern United States, bathtub Marys are a regular sight in the Cajun portion of South Louisiana, especially along the Bayou Teche. Breaux Bridge, St. Martinville, Port Barre, Cecilia, Baldwin and other communities along the bayou have examples of this type of shrine. They are also commonly found in the Baltimore, MD metropolitan area, and this prevalence was lampooned in the John Waters film Pecker. Google and magazine database searches reveal instances of bathtub shrines among other Catholic ethnic groups in other locations, e.g., Mexican Americans in Milwaukee, Italian Americans in Michigan, and Hispanic Americans in New Mexico, French Catholics in Quebec, and in the heavily Polish-Italian-Irish Catholic region of northeastern Pennsylvania, particularly around Scranton. In the northeastern United States, smaller shrines that do not make use of actual bathtubs are more common. Somerville, Massachusetts, a city which has traditionally had sizable Italian, Irish, Portuguese and (more recently) Brazilian populations, has a very large number of smaller shrines; well over 200 Catholic yard shrines in a town of about four square miles, with only one example using an actual bathtub.
This strange-sounding problem has nothing to do with the kind of tunnels you drive through. When someone has carpal (say: kar-pul) tunnel syndrome, or CTS, the "tunnel" of bones and ligaments in their wrist has narrowed. This narrowed tunnel pinches a nerve, causing a tingly feeling or numbness in a person's hand, especially in the thumb and first three fingers. Someone with carpal tunnel syndrome may have trouble typing on the computer or playing a video game. In fact, repetitive motions (doing the same thing again and again) from those activities may be to blame for causing the carpal tunnel syndrome in the first place. Where Is This Tunnel? Take a look at the palm of your hand. Under the skin at your wrist is the tunnel we're talking about. Nine tendons (tough bands of tissue that join a muscle with some other part of the body) and one nerve pass through this tunnel from the forearm to the hand. The bottom and sides of the carpal tunnel are formed by wrist bones, and the top of the tunnel is covered by a strong band of connective tissue called a ligament. The tendons that run through the tunnel connect muscles to bones and help you use your hand and bend your fingers and thumb. The nerve that passes through the carpal tunnel to reach the hand is the median (say: me-dee-un) nerve. It's pretty tight inside the carpal tunnel. In fact, there's barely enough room for the tendons and the nerve to pass through it. If anything takes up extra room in the canal, the median nerve gets pinched, which causes numbness and tingling in the area of the hand where the nerve spreads out. Swelling can occur when someone does the same thing over and over, like typing. This swelling can pinch the nerve. Millions of Americans have CTS. Kids can get it, too, but it's not as common. Most people who get CTS are over 30, and more women than men have it. In fact, three times as many women as men have CTS. 
Computer operators, assembly-line workers, and hair stylists are at risk because they repeat the same hand movements over and over again. What Causes It? Anything pressing on the median nerve can cause CTS. The tendons passing through the carpal tunnel can become swollen from doing the same movement over and over, like typing on a computer or playing video games or a musical instrument for long periods of time. It's more common in gymnasts, particularly those who do a lot of handstands, and in people who play racquet sports, like tennis. Did you ever wake up and your hand is still asleep — all numb and giving you pins and needles? Sometimes, with CTS this tingling starts in the palm of the hands and fingers, especially the thumb, and the index and middle fingers. A brace or splint can help mild cases of CTS. It is usually worn at night and keeps a person's wrists from bending. Keeping the wrist straight opens the carpal tunnel so the nerve has as much room as possible. Resting the wrist will allow the swollen tendons to shrink. Medicines like ibuprofen can also help reduce the swelling. In more severe cases, your doctor may recommend cortisone (say: kor-tih-zone) to reduce inflammation and swelling in the carpal tunnel. This medicine is given by a shot, or injection. When the symptoms of CTS have improved, the doctor may suggest the person do wrist exercises and make changes that can prevent further problems, such as repositioning the computer and keyboard. If none of these treatments help, the person may need surgery to release the pressure on the median nerve. This surgery takes less than an hour and usually doesn't require a stay overnight in the hospital. Very few people are permanently injured by CTS. Most can get better and take steps to prevent the symptoms from returning. Though not many kids get CTS, it's a good idea to develop good habits now that can prevent this problem in adulthood. 
When you spend a lot of time on the computer, be sure to take breaks and not overdo it. Just getting up to stretch or do something else for a while can help. You might even set an alarm clock or a kitchen timer to go off every hour or so to remind you to take your breaks.

At the computer, be sure your work area is comfortable. Use a chair that can be adjusted for your height so that you aren't sitting down too low or up too high. Your chair, computer screen, and keyboard should all be in line. And try to follow these rules while sitting:

- Hold your elbows at your sides with your wrists in front to set the keyboard height.
- Keep your forearms and wrists straight and don't bend your wrists up. If you use a wrist pad, don't press into it when you type.
- Place things you use a lot within close reach, with no item farther than an arm's length away.

When you take these steps, you're treating your wrists just right. And if you ever get CTS, remember that there's always light at the end of the carpal tunnel.
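The hourly-break advice above is easy to automate. Here is a minimal sketch in Python (the function name and messages are hypothetical, just to show the idea of a recurring reminder):

```python
import time

def break_reminder(interval_minutes=60, breaks=3, announce=print, sleep=time.sleep):
    """Announce a stretch break every `interval_minutes`, `breaks` times in total."""
    for n in range(1, breaks + 1):
        sleep(interval_minutes * 60)  # wait until the next break is due
        announce(f"Break {n}: stand up and stretch your wrists!")
```

Calling `break_reminder()` would nag you once an hour for three hours; the `announce` and `sleep` parameters are there so the loop can be tested, or hooked up to a desktop notifier instead of `print`.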
The development of a new type of deep-sea vehicle, sporting unique technologies and innovative methods to strike a balance between size, weight, materials cost and functionality, has made it possible to routinely reach the bottom of the ocean.

Last month, the Woods Hole Oceanographic Institution (WHOI) announced the successful dive of a new type of deep-sea robotic vehicle called Nereus to the deepest part of the world's ocean. Nereus reached a depth of 10,902 meters (6.8 miles) on May 31, 2009, at the Challenger Deep in the Mariana Trench in the western Pacific Ocean.

The Nereus engineering team knew that, to reach maximum depths, they needed to develop a new type of deep-sea vehicle. The greatest challenge was developing a tethering system that would not snap under its own weight. To solve this challenge, the team adapted fiber-optic technology developed by the Navy's Space and Naval Warfare Systems Center Pacific (SSC Pacific) to carry real-time video and other data between Nereus and the surface crew.

Andy Bowen, a WHOI engineer who led the effort to develop Nereus, spoke about the new technology in the cable that links it to the ship in an interview with Oceanus, the organization's magazine and website:

"The tether is a specially designed glass fiber that allows the transmission of digital information. We can transmit video signals and data from sensors up from Nereus, and then we send commands down to Nereus to 'turn left,' 'go up,' 'move your arm,' 'pick up this object' — those types of things as, unlike on land, there are no digital signals in the ocean. Outside the glass fiber is a very thin layer of plastic that protects the fiber from contact with seawater or air. All of that is about the diameter of a human hair. Nereus has its own batteries onboard to provide power, and it can swim back to the ship on its own.
So we can dispense with the copper for power and the steel for strength, and use only a light optical fiber cable to allow bi-directional passage of information to and from the vehicle."

Another weight-saving advance of the vehicle is its use of ceramic spheres for flotation, rather than the much heavier traditional syntactic foam, which only withstands the pressures found at depths down to about 6,000 meters. Don Peters, one of the five engineers who developed Nereus, spoke about the development of this new flotation system in an interview with Oceanus:

"Ceramics, when compressed, have about five times the strength of steel, but weigh about a third as much. They are a relatively inexpensive raw material. In looking at the options, we were aware of a fellow named Jerry Stachiw, who had done by far the most experimental work developing ceramic pressure housings for the Navy. It turned out that he had also been looking into making seamless ceramic spheres for flotation. Then we became aware that a company called DeepSea Power & Light had already done a lot of work testing and verifying 3½-inch ceramic spheres for use in the oil-production industry. This material is very strong when you push on it, but it's also very brittle and capable of breaking. The external pressure we're putting the sphere under is completely uniform—from the water around it. The spheres are as circular in all directions as can be manufactured, and that geometry distributes the load uniformly in all directions along the skin of the sphere. So even though the spheres are about the thickness of a tortilla chip, about 50/1,000ths of an inch thick, they can handle a lot of compression."

Each of Nereus's two hulls contains approximately 800 of the ~9-centimeter (3.5-inch) hollow spheres. WHOI engineers also modified a hydraulically operated robotic manipulator arm to operate under intense pressure and make effective use of the vehicle's limited battery power.
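The sphere dimensions quoted above (a 3.5-inch sphere with walls about 50/1,000ths of an inch thick) are enough for a rough back-of-envelope buoyancy check. A sketch with assumed material values (the alumina and seawater densities here are my assumptions, not from the article):

```python
import math

INCH = 0.0254  # metres per inch

# Dimensions from the article; densities are assumed typical values.
outer_r = 3.5 * INCH / 2        # 3.5-inch sphere, outer radius
wall = 0.050 * INCH             # ~50/1,000ths of an inch wall thickness
inner_r = outer_r - wall
rho_ceramic = 3900.0            # kg/m^3, assumed alumina-like ceramic
rho_seawater = 1025.0           # kg/m^3, assumed

shell_vol = 4 / 3 * math.pi * (outer_r**3 - inner_r**3)
total_vol = 4 / 3 * math.pi * outer_r**3
mass = rho_ceramic * shell_vol              # weight of the ceramic shell
displaced = rho_seawater * total_vol        # weight of seawater displaced

# Net lift per sphere, and for ~800 spheres per hull as in the article.
net_kg = displaced - mass
print(f"per sphere: {net_kg * 1000:.0f} g of lift; per hull: {800 * net_kg:.0f} kg")
```

Under these assumptions each thin-walled sphere displaces far more seawater than it weighs, which is the whole point of swapping syntactic foam for hollow ceramics.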
With its tandem hull design, Nereus weighs nearly 3 tons in air and is about 4.25 meters (14 feet) long and approximately 2.3 meters (nearly 8 feet) wide. It is powered by more than 4,000 lithium-ion batteries. Funding to develop Nereus was provided by the National Science Foundation, the Office of Naval Research, the National Oceanic and Atmospheric Administration, the Russell Family Foundation, and WHOI.

To read the full interviews, see "Miles Under the Sea, Hanging on by Hair-Thin Fiber" with Andy Bowen and "Floating without Imploding" with Don Peters.

Copyright © 2009 by Marine Science Today, a publication of OceanLines LLC
Flying Monsters 3D

About 220 million years ago dinosaurs were on the rise to dominating Earth. But another group of reptiles was about to make an extraordinary leap—control of the skies. They were the pterosaurs—after insects, the first animals ever to fly. The story of how and why these mysterious creatures took to the air is more fantastical than any fiction.

Dig for pterosaur fossils and pilot a glider alongside a Quetzalcoatlus in this fun Flying Monsters interactive.

Did You Know? Proceeds from the sale of film tickets help further National Geographic's nonprofit mission to increase global understanding through education, research, and conservation. Your support counts!
Basically, how do you find out which could be your worst or best case, and any other "edge" cases you might have, BEFORE hitting them, and so, how do you prepare your code for them?

migrated from stackoverflow.com May 1 '11 at 7:47

Based on the content of the algorithm you can identify what data structures/types/constructs are used. Then, you try to understand the (possible) weak points of those and try to come up with an execution plan that will make it run in those cases. For example, suppose the algorithm takes a string and an integer as input and does some sorting of the characters of the string. Here we have:
- a string, with some known special cases
- an integer, with known special cases
- a sort algorithm, which could fail in its own boundary cases

Then take all these cases and create a long list, trying to understand how they overlap. Now create test cases for them :)

Short summary: break the algorithm into basic blocks for which you know the boundary cases, and then reassemble them, creating global boundary cases.

I don't think there is any algorithm to determine edge conditions... just experience. Example: for a byte parameter you would want to test numbers like 0, 127, 128, 255, 256, -1: anything that can cause trouble.

An "edge" has two meanings, and both are relevant when it comes to edge cases. An edge is either an area where a small change in the input leads to a large change in the output, or the end of a range. So, to identify the edge cases of an algorithm, I first look at the input domain. Its edge values could lead to edge cases of the algorithm. Secondly, I look at the output domain, and look back at the input values that might create them. This is less commonly a problem with algorithms, but it helps find problems in algorithms that are designed to generate output which spans a given output domain. E.g. a random-number generator should be able to generate all intended output values.
Finally, I check the algorithm to see if there are input cases which are similar, yet lead to dissimilar outputs. Finding these edge cases is the hardest, because it involves both domains and a pair of inputs.

This is a very general question, so all I can do is throw out some general, vague ideas :)
- Examine boundary cases. E.g. if you're parsing a string, what happens if the string is empty or null? If you're counting from x to y, what happens at x and y?

Part of the skill of using algorithms is knowing their weaknesses and pathological cases. Victor's answer gives some good tips, but in general I would advise that you need to study the topic in more depth to get a feel for this; I don't think you can follow rules of thumb to answer this question fully. E.g. see Cormen, or Skiena (Skiena in particular has a very good section on where to use algorithms and what works well in certain cases; Cormen goes into more theory I think).
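The byte-parameter values suggested in one of the answers above translate directly into test code. A minimal Python sketch (the `clamp_to_byte` function is an invented example of code under test, not from the question):

```python
import itertools

def clamp_to_byte(n):
    """Hypothetical function under test: saturate an integer into 0..255."""
    return max(0, min(255, n))

# Boundary values for a byte parameter, as suggested above.
byte_edges = [-1, 0, 1, 127, 128, 254, 255, 256]

for n in byte_edges:
    result = clamp_to_byte(n)
    assert 0 <= result <= 255, f"out of range for input {n!r}"

# "Create a long list" of overlapping cases: combine the boundary
# values of each input to get candidate global boundary cases.
string_edges = ["", "a", "a" * 10_000]
combined_cases = list(itertools.product(string_edges, byte_edges))
print(len(combined_cases))  # → 24
```

The `itertools.product` step is the "reassemble the basic blocks" idea from the first answer: each pairing of one string edge value with one integer edge value is a candidate test case for the whole algorithm.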
Evolutionary theory suggests that depression is a protective mechanism: if an individual is involved in a lengthy fight for dominance of a social group and is clearly losing, depression causes the individual to back down and accept the submissive role. In doing so, the individual is protected from unnecessary harm. In this way, depression helps maintain a social hierarchy.
- Other evolutionary theories – Another evolutionary theory is that the cognitive response that produces modern-day depression evolved as a mechanism that allows people to assess whether they are in pursuit of an unreachable goal. Still others claim that depression can be linked to perfectionism: people who accept satisfactory outcomes in lieu of "the best" outcome tend to lead happier lives.
- Recently, some evolutionary biologists have begun to subscribe to the theory of "honest signalling". It has been pointed out that the incidence of major depression is much higher in persons born after 1945, which would seem to cast doubt on a possible disease model, and that such suffering is notable in persons of greater than average intellect and emotional complexity. This contradicts the submission thesis.
Linguistik-Klassifikation: Grammatikforschung / Grammar research

Semiotactics as a Van Wijngaarden grammar
Frederik H. H. Kortlandt

There are two classes of theories of Universal Grammar:
(1) Formalist theories, such as the widespread varieties of generative grammar. These theories start from the assumption that certain strings of linguistic forms are grammatical while other strings are ungrammatical. A grammar of this type produces grammatical strings and does not produce ungrammatical ones. All theories of this class fail in the same respect: they do not account for the meaning of the strings.
(2) Semiotactic theories, which describe the meaning of a string in terms of the meanings of its constituent forms and their interrelations. The only elaborate formalized theory of this class presently available is the one advanced by C.L. Ebeling (Syntax and Semantics, Leiden: Brill, 1978). I shall discuss some of its mathematical properties here.
Explaining Cremation to a Child

When a deceased family member or friend is cremated or already has been cremated, your child may want to know what cremation is. In answering your child's questions about cremation, keep your explanation of what cremation involves simple and easy to understand.

In explaining cremation to your child, avoid using words that may have a frightening connotation, such as "fire" and "burn." Instead, in a straightforward manner, tell your child that the deceased body, enclosed in a casket or container, is taken to a place called a crematory, where it goes through a special process that reduces it to small particles resembling fine gray or white sand. Be sure to point out that a dead body feels no pain. Let your child know that these cremated remains are placed in a container called an urn and returned to the family.

If cremation has already taken place and the container has been picked up, you may want to show it to the child. Because children are curious, your child may want to look at the contents. If your child makes such a request, look at them yourself first so that you can describe what they look like. Share this with your child. Then let the child decide whether to proceed further.

If possible, arrange for a time when you and your child can be with the body before cremation is carried out. If handled correctly, this time can be a positive experience for the child. It can provide an opportunity for the child to say "good-bye" and accept the reality of death. However, the viewing of the body should not be forced. Use your best judgment on whether or not this should be done.

Depending on the age of your child, you may wish to include him or her in the planning of what will be done with the cremated remains. Before you do this, familiarize yourself with the many types of cremation memorials available.
Some of the many options to consider include burying the remains in a family burial plot, interring them in an urn garden that many cemeteries have, or placing the urn in a columbarium niche. Defined as a recessed compartment, the niche may have an open front protected by glass or a closed front faced with bronze, marble, or granite. (An arrangement of niches is called a columbarium, which may be an entire building, a room, a bank along a corridor, or a series of special indoor alcoves. It also may be part of an outdoor setting such as a garden wall.)

Although your child may not completely understand these or other options for memorialization, being involved in the planning helps establish a sense of comfort and an understanding that life goes on even though someone loved has died. If you encounter any difficulties in explaining death or cremation to your child, you may wish to consult a child guidance counselor who specializes in these areas.
- Jan 16, 2010 1:51 PM EST

It's a cliché by now: the availability of such exploit code means two things. You can test with some accuracy whether you're vulnerable, and real-world black-hat exploits will be out there fairly soon, if not already. This vulnerability can be invoked simply by viewing a web page in Internet Explorer or some app that uses IE to render HTML.

But which versions are really vulnerable? Technically, all versions other than IE5 on Windows 2000 (a version due to be taken off life support in 6 months). But there are some important mitigations, which are displayed beautifully in a blog entry by Microsoft's SRD (Security Research and Defense) team.

Note that it's Windows 2000 and Windows XP which are basically vulnerable, and then only in some cases. IE8 turns on DEP (Data Execution Prevention) by default, and Protected Mode is standard in Vista and Windows 7 as well. DEP blocks the vulnerability itself; Protected Mode blocks the exploit. It may be possible to write an exploit that gets around Protected Mode, but this is an academic question, since all Protected Mode systems support DEP as well.

It's also true that, depending on how it's set up, Vista with IE7 may not have DEP turned on by default. Such users should be protected from the actual exploits by Protected Mode, but you can turn on DEP following the instructions in the SRD blog.

So putting Windows 2000 aside for the moment, the only vulnerable platform, as a practical matter, is Windows XP, and only for IE6 and IE7. IE7 is a bit of a wild card, as the exploits used in Aurora and the proof of concept are both sensitive to memory layouts and only work on IE6. Researchers insist that IE7 should be as exploitable, but they have to build a separate exploit for it and just haven't done it yet.

The primary moral of the story is that defense-in-depth has once again shown its value, as a serious vulnerability was blocked by a systemic defense.
Users who keep their software up to date are protected against attack, more often than not. If you were to run any of the non-vulnerable configurations and turn off DEP or Protected Mode you would once again be vulnerable, but whose fault would that be? The other moral of the story is that if you're using IE6, you really ought to move on: go to Firefox, go to Chrome (it's my primary browser now), or upgrade to IE8. Running IE6 is like putting a big "Hack Me!" sign on your back.
- More than twice as deep as the Grand Canyon - Greatest depth occurs at the Nevado Ampato extinct volcano Colca Canyon is located in southern Peru. It is more than twice as deep as the Grand Canyon but it is not as deep as its sister Cotahuasi Canyon. The canyon was created by the Colca River which starts high in the Andes mountains flowing through the canyon and changing names twice before flowing into the Pacific Ocean. The greatest depth of the canyon occurs at Nevado Ampato, an extinct volcano, with a vertical rise of 20,630 feet (6,288 m). The Colca Canyon is home to the Andean Condor which is a frequent attraction for visitors who watch the condors soar through the air hunting for food. Best way to see and experience the Colca Canyon More will follow on the Colca Canyon as it is declared an official or notable wonder of South America.
Last week we talked about sentience: the ability of an organism to feel, to direct its attention towards a thing and thus improve its neural model of that thing. This works in ways which are far too complicated for science to have yet explained. However, we do know more or less how neurons in the brain work, and can postulate that this system for filtering information, categorising it, directing attention to parts of it, attaching emotional significance to it and learning to operate in the world is built from some kind of neural network in the brain.

It seems likely that in a sufficiently advanced organism (and right now we basically have no way of determining how 'advanced' we mean) these neural circuits can do more than just focus awareness on the senses, more than just learn to trigger emotions and moods through association. They can also grow connections from part of their own network back into the 'input' parts of that network. By directing its attention not at the world, at its senses, but back at its own sentience network, the organism can begin to learn to model that network in the same way it has learned to model space through the visual field, sounds through the air-pressure sensors in its ears, and its own body through the network of neurons in the skin and muscle building the kinaesthetic senses. The organism can become aware not only of the world it lives in, but of its own processing of that world; it can learn to use those systems to model and understand itself. It can learn to see how it thinks.

This, then, is sapience: a trait usually attributed only to people, to you and I. With vision, hearing and feeling, you don't 'see' photons or feel the air pressure in your eardrums or even the stretch of a single neuron. Sentience is finding patterns and metaphors and similarities and generally building a simplified model of a thing, not trying to represent it all exactly at once. Just narrow down the salient parts. See what they remind it of.
Pattern recognition, simplification

Likewise, a train of thought, a moment of consciousness, an epiphany or understanding is actually an unimaginably complex cascade of excited neurons selectively exciting and inhibiting others in turn. A massive waterfall of cause and effect. Far too complex and chaotic for the brain to model precisely, just as a visual scene is too complex to be modeled precisely. It needs to be simplified, to be compared to other things through metaphor and simile.

When you look at a picture, your retinal neurons — the rods and cones in the back of your eye — start firing in some extraordinarily complex pattern. In order to see that picture rather than just look at it, your brain simplifies and codifies and interprets that scene. The insanely complex neural firing patterns are simplified hierarchically. The visual cortex has networks looking for patches of similar colour, which feed into networks looking for lines. Then these feed into networks looking for shapes, and these feed into networks looking at orientation, and so on, until eventually it gets to things simple enough to keep in working memory, in the consciousness, in the sentience.

Surely sapience, our awareness of that sentience, our model of our awareness, works in a similar way: hierarchically organizing the insane cascade of its own neurons, looking for patterns, comparing them.

Neuro Linguistic Programming practitioners teach that the brain works through 'Representational Systems': that each thought is strongly associated with a sensory system; that our thinking is done in 'modes', either visual thinking or auditory thinking or kinaesthetic thinking, or sometimes olfactory or gustatory thinking. Each of us will have more practice with some of these modes of thinking than others. Likely the ones we happened to try first will have been most practised, and so most useful, and so used more often.
Some people are strong visualisers; they have learned to take more conscious control over their visual system than others. Some people have amazing auditory and language skills; they "think in words" more deeply than others can. Some people have more often used kinaesthetic systems to model and understand their own thinking, and so have practised that more, and perhaps "grasp" ideas rather than "seeing what you mean".

These types of thinking may just be metaphors, interpretations of what the brain is doing using similar ideas as those used for seeing, hearing, etc. Alternatively, they may be the actual systems which the brain uses to do the thinking: the visual or auditory systems themselves diverted by understanding and control. Either way, practice using that control will lead to more refinement of those interpretive models, or more skill at diverting the brain's inherent systems.

All normal human beings have learned some ability in all of these skills. Some of these methods are, however, better at solving some types of problems than others. Thus you should endeavour to improve your abilities to notice, model and so direct them all.

You will concentrate on paying attention to one of the three main styles of thought, concentrating on them, on how they work, and thus improving your own model of these thinking styles. You will increase your awareness and your conscious control of your own thoughts. Though we will use words to direct you, you will be practising using, modeling and understanding your visual imagery and kinaesthetic senses as well as your auditory ones.

Note that if you're following along our meditations in order, you have already been practising those skills for some time. Just about every meditation has you imagining and visualising and paying attention to imaginary detail. As you listen to this meditation, however, you'll be deliberately focusing your attention on the fact that you are practising them.
Learning to direct your consciousness at itself more thoroughly. You’ll also be receiving suggestions that as you practice these things in future, you’ll remember to pay attention to all of your sensory systems rather than concentrating on just one.
In Access, the Max function returns the maximum of a set of values in a select query. The syntax for the Max function is:

    Max ( expression )

The expression argument represents a string expression identifying the field that contains the data you want to evaluate, or an expression that performs a calculation using the data in that field. Operands in expression can include the name of a table field, a constant, or a function (not one of the other SQL aggregate functions). You can use Max to determine the largest values in a field based on the specified aggregation, or grouping.

In Access, you can use the Max function in the query design grid, in an SQL statement in SQL view of the Query window, or in an SQL statement within Visual Basic code. It is used in conjunction with the Group By clause. For example:

    SELECT SellerID, Max(Price) AS MaxPrice
    FROM Antiques
    GROUP BY SellerID
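The same grouped-maximum pattern can be tried outside Access. A minimal sketch using Python's built-in sqlite3 module (the Antiques table and its rows here are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the Access Antiques table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Antiques (SellerID INTEGER, Price REAL)")
conn.executemany(
    "INSERT INTO Antiques (SellerID, Price) VALUES (?, ?)",
    [(1, 25.0), (1, 80.0), (2, 15.0), (2, 300.0), (2, 120.0)],
)

# MAX is an aggregate: with GROUP BY it returns one row per SellerID,
# holding that seller's highest price.
rows = conn.execute(
    "SELECT SellerID, MAX(Price) AS MaxPrice "
    "FROM Antiques GROUP BY SellerID ORDER BY SellerID"
).fetchall()
print(rows)  # → [(1, 80.0), (2, 300.0)]
```

Without the GROUP BY clause, the same MAX(Price) expression would collapse the whole table to a single row, which is why the article pairs Max with Group By.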
Diabetes prevention now closer: Aussie research

Australian scientists have developed a therapy that could help prevent type one diabetes in susceptible people. The scientists, at the Walter and Eliza Hall Institute of Medical Research (WEHI) in Melbourne, said today they had successfully used genetic engineering to prevent type one diabetes from developing in mice that were highly susceptible to the disease. The institute said further animal experiments were necessary, but that the goal was to develop a practical therapy for preventing type one diabetes in humans.

In type one diabetes, the body does not produce the insulin needed to take sugar from the blood and turn it into energy for cells, requiring sufferers to take insulin injections for life. The disease is usually diagnosed in children and young adults and was previously known as juvenile diabetes. The more common type two diabetes occurs when the body does not produce enough insulin or its cells ignore the insulin. Type one diabetes is caused by a malfunction in the body's immune system, which attacks and destroys insulin-producing cells.

Scientists Dr Raymond Steptoe and Professor Leonard Harrison set out to prevent the disease by "re-educating the immune system" so it would not attack the insulin cells. Their strategy consisted of genetically altering some of the body's own blood stem cells so they inactivate those immune system cells that are attacking the insulin-production system.

"In a clinical setting we would harvest these blood stem cells from individuals who have a demonstrated risk for type one diabetes, insert a small amount of genetic material and transfer these cells back to the patient," Steptoe said. "The advantage of this approach over non-specific therapies, such as general immunosuppressants, is that it specifically targets the detrimental immune cells, but leaves those that we need to fight infections such as colds and 'flu," he said.
While the new technique proved successful in mice, human trials could still be years away, he said. The scientists also cautioned that while the new technique might eventually be used to prevent type one diabetes, it would not cure those who already require insulin injections. "It may, however, be useful in conjunction with other approaches, such as insulin-producing cell replacement therapy," said a statement from WEHI. The research was published in the May 1 issue of the Journal of Clinical Investigation.
Australian Bureau of Statistics
6202.0 - Labour Force, Australia, Jun 2012
Released at 11:30 AM (Canberra time) 12/07/2012

UNDERSTANDING THE AUSTRALIAN LABOUR FORCE USING ABS STATISTICS

In order to understand what is happening in Australian society, or our economy, it is helpful to understand people's patterns of work, unemployment and retirement. ABS statistics can help to build this picture.

Fifty years ago, the majority of Australians who worked were men working full-time. Most worked well into their 60s, sometimes beyond, and if they were not working most were out looking for work until that age. The picture now is very different. Far more people work part-time, or in temporary or casual jobs. Retirement ages vary much more, with a greater proportion of men not participating in the labour force once they are older than 55. Nowadays, 45% of working Australians are women, compared with just 30% fifty years ago. These are profound changes that have helped shape 21st Century Australia.

This note explains some of the key labour force figures the ABS produces that can be used to obtain a better picture of the labour market.

Every month, the ABS runs a Labour Force Survey across Australia covering almost 30,000 homes as well as a selection of hotels, hospitals, boarding schools, colleges, prisons and Indigenous communities. Apart from the Census, the Labour Force Survey is the largest household collection undertaken by the ABS. Data are collected for about 60,000 people, and these people live in a broad range of areas and have diverse backgrounds - they are a very good representation of the Australian population. From this information, the ABS produces a wide variety of statistics that paint a picture of the labour market. Most statistics are produced using established international standards, to ensure they can be easily compared with the rest of the world.
The ABS has also introduced new statistics in recent years that bring to light further aspects of the labour market. It can be informative to look at all of these indicators to get a grasp of what is happening, particularly when the economy is changing quickly.

One thing to remember about the ABS labour force figures is that when a publication states that, for example, 11.4 million Australians are employed, the ABS has not actually checked with each and every one of these people. In common with most statistics produced, the ABS surveys a sample of people across Australia and then scales up the results - based on the latest population figures - to give a total for the whole country. Because the figures are from a sample, they are subject to possible error. The Labour Force Survey is a large one, so the error is minimised. The ABS provides information about the possible size of the error to help users understand how reliable the estimates are.

[Diagram: the breakdown of the civilian population into the different groups of labour force participation. Each pixel represents about 1,000 people as at September 2011.]

According to established international standards, everyone who works for one hour or more for pay or profit is considered to be employed. This includes everyone from teenagers who work part-time after school, to a partially retired grandparent helping out at the school canteen. While it is unreasonable to expect a family to survive on the income of an hour of work per week, one could also argue that all work, no matter how small, contributes to the economy. This definition of 'one hour or more' - which is an international standard - means that ABS' employment figures can be compared with the rest of the world.

Now it is, of course, easy to argue that someone who works 2 or 3 hours per week is not really "employed". But a definition is required, and any cut-off point is open to debate.
Imagine if ABS defined being 'employed' as working 15 hours a week. Would it be reasonable to argue that someone who works 14.5 hours is unemployed, but someone who works 15 hours is not?

It is also a mistake to assume that all persons who work low hours would prefer to work longer hours, and are therefore 'hidden' unemployment. Most people who work less than 15 hours a week are not seeking additional hours, although of course there are some who are. The issue of underemployment is further discussed below.

Rather than open up such discussions, the ABS prefers to use the international standard, and the ABS also encourages people to consider other indicators to form a better picture of what is happening. Alongside the total employed figures, full-time and part-time estimates are provided to better inform on the different kinds of employment, and a detailed breakdown by the number of hours worked is also provided to allow for customised definitions of 'employment'.

Commentators often refer to the rise in employment as the number of new jobs created each month. This can be misleading, because the ABS doesn't actually measure the number of jobs. This might sound like semantics, but if a person in the Labour Force Survey who is employed gains a second part-time job at the same time as their main job, this would have no impact on the employment estimate - the Labour Force Survey does not count jobs, it counts people.

It is also important to bear in mind that if the relative growth in population is greater than the number of new people in employment, there might actually be an increase in the employment figure, but a lower percentage of people with jobs. It is often informative to look at the proportion of people in employment. This measure, called the employment to population ratio, is the number of employed people expressed as a percentage of the civilian population aged over 15. This removes the impact of population growth to give a better picture of labour market dynamics over time.
AGGREGATE MONTHLY HOURS WORKED

Instead of counting how many people are working, another way of looking at how much Australians are working is to count the total number of hours worked by everyone. This is measured by a statistic produced by the ABS called Aggregate monthly hours worked, and it is measured in millions of hours. This can sometimes be more revealing of what is happening in the labour market, particularly in a weakening economy where a fall in hours worked can usually be seen before any fall in the number of people employed.

PEOPLE WHO ARE NOT WORKING: THE UNEMPLOYED AND OTHERS

There are many reasons why Australians do not work. Some have retired and are not interested in going back to work. Some are staying home to look after children and plan on going back to work once the kids have grown older. Some are out canvassing for work every day while others have given up looking.

The ABS separates all of these people into those who are unemployed and those who are not by asking two simple questions: If you were given a job today, could you start straight away? And have you taken active steps to look for work? Only those who are ready to get back into work, and are taking active steps to find a job, are classed as unemployed.

Some people might like to work, but are not currently available to work - such as a parent who is busy looking after small children. Other people might want to work but have given up actively looking for work - such as a discouraged job seeker who only half-heartedly glances at the job ads in the newspaper but doesn't call or submit any applications. These people are not considered to be unemployed, but are regarded as being marginally attached to the labour force. They can be thought of as 'potentially unemployed' when, or if, their circumstances change, but are regarded as being on the fringe of labour force participation until then.
It is important to note that the ABS unemployment figures are not the same as the data that Centrelink collects on the number of people receiving unemployment benefits. The ABS bases its figures on asking people directly about their availability and steps to find work. In this way, policy decisions about, for example, the criteria for the receipt of unemployment benefits have no impact on the way that the unemployment figures are measured.

LABOUR FORCE AND PARTICIPATION RATE

The size of the labour force is a measure of the total number of people in Australia who are willing and able to work. It includes everyone who is working or actively looking for work - that is, the number of employed and unemployed together as one group. The percentage of the total population who are in the labour force is known as the participation rate.

The unemployment rate is the percentage of people in the labour force who are unemployed. This is a popular measure around the world for tracking a country's economic health as it removes all the people who are not participating (such as those who are retired). Because the unemployment rate is expressed as a percentage, it is not directly influenced by population growth.

The underemployment rate is a useful companion to the unemployment rate. Instead of looking at the people who are unemployed, the underemployment rate captures those who are currently employed, but are willing and able to work more hours. It highlights the proportion of the labour force who work part-time but would prefer to work full-time. This is sometimes referred to as the 'hidden' potential in the labour force.

The underemployment rate can be an important indicator of changes in the economic cycle. During an economic slowdown, some people lose their jobs, become unemployed and contribute to a rising unemployment rate. But while this is happening, there might well be others who remain working but have their hours reduced; for example, from full-time to part-time.
As long as they want to work more hours, they are classed as underemployed, and contribute to the underemployment rate.

LABOUR FORCE UNDERUTILISATION RATE

The labour force underutilisation rate combines the unemployment rate and the underemployment rate into a single figure that represents the percentage of the labour force that is willing and able to do more work. It includes people who are not currently working and want to start, and those who are currently working but want to - and can - work more hours. It provides an alternative - and more complete - picture of labour market supply than the unemployment rate, as changes in the underutilisation rate capture both changes in unemployment and underemployment, indicating the spare capacity in the Australian labour force.

For any queries regarding these measures or any other queries regarding the Labour Force Survey estimates, contact Labour Force on Canberra 02 6252 6525, or via email at email@example.com.

This page last updated 8 August 2012
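All of the headline rates defined in this note are simple ratios of the same few counts. As a rough sketch only - the input figures below are invented for illustration, not ABS estimates - the definitions can be expressed like this:

```python
# Illustrative calculation of the headline labour force rates defined above.
# All input figures are made-up example numbers, not real ABS estimates.

def labour_force_rates(employed, unemployed, underemployed, civilian_pop_15_plus):
    """Return the headline rates as percentages.

    employed             -- people who worked one hour or more for pay or profit
    unemployed           -- available to start work and actively looking for it
    underemployed        -- employed people who want, and can, work more hours
    civilian_pop_15_plus -- civilian population aged 15 and over
    """
    labour_force = employed + unemployed  # employed + unemployed together
    return {
        "employment_to_population_ratio": 100 * employed / civilian_pop_15_plus,
        "participation_rate": 100 * labour_force / civilian_pop_15_plus,
        "unemployment_rate": 100 * unemployed / labour_force,
        "underemployment_rate": 100 * underemployed / labour_force,
        # Underutilisation combines unemployment and underemployment.
        "underutilisation_rate": 100 * (unemployed + underemployed) / labour_force,
    }

rates = labour_force_rates(employed=11_400_000, unemployed=630_000,
                           underemployed=870_000, civilian_pop_15_plus=18_600_000)
for name, value in rates.items():
    print(f"{name}: {value:.1f}%")
```

Note how the unemployment, underemployment and underutilisation rates all use the labour force - not the whole population - as the denominator, which is why they are not directly influenced by population growth.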
Correction: Douglas Kenney's name was misspelled in the original version of this article.

It's become increasingly clear that the demands put on the Colorado River by the seven thirsty states in its basin are not sustainable. A complex web of treaties, compacts, laws and court decisions governs who can use the once-mighty river's water and when. But over the last several decades, those rules have not kept the yearly demand for water from exceeding the average flow.

"People have known since the 1940s, if not earlier, that this river was over-allocated and that, at some point, it's going to be a major problem," said Douglas Kenney, senior research associate at the University of Colorado's Natural Resources Law Center.

"The demand on the river has grown slowly and steadily," he said. "That, combined with recent understanding of what climate change is going to do to this region, all of a sudden has opened people's eyes. Improvements need to be made to how we manage this river."

Kenney and two of his colleagues have now begun an ambitious, yearlong project called the Colorado River Governance Initiative to evaluate options for reforming the laws of the river.

"The initiative is designed to develop a blueprint for future management that will allow for managing the river basin's resources more holistically and in a manner that preserves wildlife resources and habitats while ensuring the availability of water supplies for humans," said Mark Squillace, director of the Natural Resources Law Center. David Getches, dean of CU's law school, is also working on the project.

While "almost everyone who looks at the Colorado River" realizes that changes are necessary, Kenney said, suggesting reforms to current management strategies can be politically toxic.
For example, when then-presidential candidate John McCain was quoted in the Pueblo Chieftain in August 2008 saying that the Colorado River Compact of 1922 -- which divvied up the water between seven Western states -- should be renegotiated, he touched off a firestorm. Within the week, he reversed his position.

So part of the project's goal is to do the background policy work, and ask the questions, that public officials are often afraid to, eventually creating a ready-made list of possible changes that may be easier for government leaders to handle.

"If you're an elected official and you talk about changing the management of the Colorado River, you have to tread very carefully," Kenney said. "We're going to study the options that they cannot safely talk about publicly. If we come up with some really good solutions, then they can think about supporting them."

Contact Camera Staff Writer Laura Snider at 303-473-1327 or firstname.lastname@example.org.
Daily Planet's Ingram discusses prion disease

Discovery Channel's Daily Planet co-host Jay Ingram visits Grande Prairie today to offer behind-the-scenes details of a mysterious and contagious series of diseases. The lecture takes place at the Grande Prairie Regional College at 7 p.m., where Ingram discusses fatal prion diseases.

"Here at this very microscopic level, strange things are happening and just now we are beginning to figure out what they are," he said.

The most well-known form of a prion disease is bovine spongiform encephalopathy (BSE), widely known as mad cow disease. Prion diseases spread when malformed proteins attach themselves to healthy tissue. Unlike other infectious ailments, they are incurable.

"It is a protein that has gone wrong," said Stefanie Czub, a scientist with the University of Calgary and the Canadian Food Inspection Agency who will be present at the lecture. "For other infectious diseases we have a cure or the body heals itself quite efficiently. Prion diseases, once infected, they are invariably fatal."

A prion disease of current concern to Western Canadians is chronic wasting disease (CWD), affecting deer and elk in southern Saskatchewan and Alberta. Unlike mad cow disease, which infects the animal's brain, spinal cord and central nervous system, chronic wasting disease spreads to several parts of the animal, and is even present in urine and saliva. Animals infected become very thin, and can carry the disease for two years before these signs become evident. Prion diseases can only be formally diagnosed by sampling infected tissue.

"With all these diseases, it takes quite a long time for the symptoms to show," Ingram said. "You could hunt and kill a deer and eat it, and it might have the chronic wasting disease prions in it."

"The ultimate diagnosis can only be done on a piece of tissue, not in blood," Czub said.

Twenty deer have been identified in Alberta with CWD since monitoring of the disease began in 2005.
While the cause of mad cow disease is generally believed to be the use of recycled beef and bone meal material in livestock feed, which became a common practice in the 1980s, CWD's cause remains unknown.

"Nobody really knows," Ingram said. "It could be that an infected deer goes to a salt lick, licks it, and the prions are in the saliva."

Ingram said that the disease is currently not an issue for the Peace Country, but infected deer and elk are bringing it into southern Alberta as they travel along the valleys of the South Saskatchewan and Red Deer Rivers.

"If chronic wasting disease spreads far enough north, especially in Saskatchewan, that it intersects with the caribou migration routes, and if caribou are susceptible, then you've got a huge problem on your hands," Ingram said.

The medical community is taking a close look at CWD due to the similarities prion diseases have with the degenerative Alzheimer's, Parkinson's and Lou Gehrig's diseases.

"The way that they spread is somewhat similar," Ingram said. "In Alzheimer's and Parkinson's and Lou Gehrig's disease, you get an accumulation in the brain of junk basically; they're called plaques, these sort of dark deposits if you look at brain tissue after autopsy."

"They are all part of the so-called protein-misfolding diseases," Czub said. "One might be a very good model for the other, so we need to keep that in mind. Especially with this enormous increase in Alzheimer's in the future to be expected. One in three over the age of 65 is going to develop Alzheimer's disease in the next 10 years."

Hosted by the Alberta Prion Research Institute, Ingram's lecture is open to the public free of charge in the Collins Recital Hall, room L106 today at 7 p.m.
Back in December, a US government panel took the highly controversial position of calling for the censoring of scientific work aimed at an understanding of how the H5N1 "bird flu" virus can change to become directly transmissible between humans. The virus is deadly to humans but cannot be spread from one person to another. Instead, close contact with infected birds is required for humans to be infected.

The work which the National Science Advisory Board for Biosecurity (which, as described in the Washington Post article linked above, "was created after the anthrax bioterrorism attacks of 2001") wanted to censor involved experiments aimed at understanding precisely what changes in the virus would be required for it to retain its lethality while also becoming directly transmissible between humans through processes such as the sneeze caught in the disgusting high-speed photo from CDC seen here.

After a very long delay, the first of the two delayed papers was published in Nature last month. Now, the second paper has been published in Science, where the journal has taken the unusual step of dedicating an entire issue to the single topic of the H5N1 virus and has removed the subscription requirements for access.

Scientists seeking to fight future pandemics have created a variety of "bird flu" potentially so dangerous that a federal advisory panel has for the first time asked two science journals to hold back on publishing details of research. In the experiments, university-based scientists in the Netherlands and Wisconsin created a version of the so-called H5N1 influenza virus that is highly lethal and easily transmissible between ferrets, the lab animals that most closely mirror human beings in flu research.
The problem is that once the details of the experiments and their results were released, it became clear that the viruses produced by both of the independent laboratories, by different processes, lost their lethality as they became transmissible between ferrets, which were used as a model of transmission among humans. It turns out, then, that the feared "supervirus" which the NSABB was assuming had been created did not even exist, so the "risk" from publishing details of how one could create it was totally unfounded.

From the New York Times:

As the virus became more contagious, it lost lethality. It did not kill the ferrets that caught it through airborne transmission, but it did kill when high doses were squirted into the animals' nostrils. Dr. Fouchier's work proved that H5N1 need not mix with a more contagious virus to become more contagious. By contrast, the lead author of the other bird flu paper, Dr. Yoshihiro Kawaoka, of the University of Wisconsin-Madison, took the H5N1 spike gene and grafted it onto the 2009 H1N1 swine flu. One four-mutation strain of the mongrel virus he produced infected ferrets that breathed in droplets, but did not kill any.

The editor of Science, Professor Bruce Alberts, says in commentary accompanying the publication of the special issue:

Breakthroughs in science often occur when a scientist with a unique perspective combines prior knowledge in novel ways to create new knowledge, and the publication of the two research Reports in this issue will hopefully help to stimulate the innovation needed, perhaps from unsuspected sources, to make the world safer.

It should be kept in mind that the whole point of this research has been that in understanding how a lethal virus could be spread, there likely will come an understanding of what approaches will be useful in counteracting its spread. That is what Alberts is talking about in his words about the innovation needed to make the world safer.
It also is what I was talking about when I called for full publication of the work back in December:

Full publication of the bird flu virus work is essential for us to have the best possible chance for effective treatment if and when such a pathogenic version evolves in the wild.

Ironically, because the details presented in these two papers do not create a lethal virus that can spread among humans, they do not constitute the "recipe" for a weapon of mass destruction that the fear-mongers cited in calling for the censorship and delay to publication of the work. That detail is less relevant to research in the world of prevention, though, so the net result of this exercise in moving the government's nanny state into supervision of the publication of scientific work has been to delay the publication of details that may be important in developing the next tool against a deadly virus pandemic.

Sadly, despite his welcome move in removing the subscription requirement for the special issue of Science and his good words on the unexpected nature of where breakthroughs arise, Alberts also endorses the NSABB model and the caste system it would develop for who can and who cannot be allowed access to certain scientific advances. From his commentary:

As described in News and Commentary pieces in this special section, the prolonged controversy has also provided a "stress test" of the systems that had been established to enable the biological sciences to deal with "dual-use research of concern" (DURC): biological research with legitimate scientific purposes that may be misused to pose a biologic threat to public health and/or national security. One centerpiece of this system is the U.S. National Science Advisory Board for Biosecurity (NSABB). Science strongly supports the NSABB mechanism, which clearly needs to be supplemented and further strengthened to deal with the inevitable future cases of publication of dual-use research, both before and after their submission to journals.
Still missing is a comprehensive international system for assessing and handling DURC - one that provides access, for those with a need to know, to any information deemed not to be freely publishable.

Establishing a "need to know" system for access to scientific work is anathema to the concept Alberts acknowledged in his comments about innovation from unsuspected sources. Although scientific freedom won out in the battle over the H5N1 virus, the movement to provide a mechanism for stifling publication of scientific work continues, and more scientists are likely to see their important work delayed by posturing regulators who wish to win favor with fearmongers in government.

Scientific work carried out at the basic level needs to be freely published. Detailed, applied work describing how to create a bioweapon of course should not be published, but such work is illegal anyway and should not be carried out. The work which the NSABB tried to censor in this case falls far short of such weapons-based work and never should have been subject to the delays created.
In many ways, our memories shape who we are. They make up our internal biographies—the stories we tell ourselves about what we've done with our lives. They tell us who we're connected to, who we've touched during our lives, and who has touched us. In short, our memories are crucial to the essence of who we are as human beings.

Age-related memory loss, then, can represent a loss of self. It also affects the practical side of life. Forgetting how to get from your house to the grocery store, how to do everyday tasks, or how you are connected to family members, friends, and other people can mean losing your ability to live independently. It's not surprising, then, that concerns about declining thinking and memory skills rank among the top fears people have as they age.

There's no getting around the fact that the ability to remember can slip with age. Many of these changes are normal, and not a sign of dementia. Improving Memory: Understanding age-related memory loss helps you understand the difference between normal, age-related changes in memory and changes caused by dementia. The report also offers tips on how to keep your brain healthy, and how to help improve your memory if you're living with age-related memory loss.

One of the key components of this memory-saving program is to keep the rest of your body healthy. Many medical conditions—from heart disease to depression—can affect your memory. Staying physically and mentally active turns out to be among the best prescriptions for maintaining a healthy brain and a resilient memory. Improving Memory: Understanding age-related memory loss also discusses the different types of dementia and the treatments available for them.

Prepared by the editors of Harvard Health Publications in consultation with Kirk R. Daffner, M.D., Director, Center for Brain-Mind Medicine and Chief, Division of Cognitive and Behavioral Neurology, Brigham and Women's Hospital, Associate Professor of Neurology, Harvard Medical School, Boston, MA.
49 pages. (2012)

Age can dull the ability to remember things. This is often the result of normal changes in the structure and function of the brain, and isn't dementia, Alzheimer's disease, or another brain problem. If you have normal age-related memory loss, several techniques can help you improve your ability to retain new information and skills.

You've probably heard stories about people with extraordinary memories and wondered how they do it. Many rely on mnemonic devices (nuh-MON-iks), which are basically learning techniques that aid memory. (The term comes from Mnemosyne, the Greek goddess of memory.) One mnemonic device is to think of a word that rhymes with a person's name so that you don't forget the name. Another is to come up with a sentence or phrase to help you remember something, such as "Every Good Boy Does Fine" for recalling E, G, B, D, and F, the notes that fall on the lines of the treble-clef musical staff.

When you learn something new, immediately relate it to something you already know. Making connections is essential for building long-term memories. What you're really doing is making the information meaningful, which helps the brain structure known as the hippocampus consolidate it. Making connections between new and old information also takes advantage of the older pattern of synaptic activation, piggybacking the new material onto a prefabricated network.

One way to help remember names is by making an association with the first letters. It's fairly easy to remember the National Aeronautics and Space Administration because of its familiar acronym — NASA. You might try this technique with people's names, too. Say you meet someone named Louise Anderson. Her initials are L.A., which immediately brings to mind Los Angeles. If you can somehow connect Louise to Los Angeles, you'll have an easier time remembering her name.
You can also make associations to remember numbers such as access codes or passwords that you need to use regularly but don't want to write down. Say you need to remember the number 221035 to get your voice mail: 22 could remind you of "Catch 22," 10 might be the number of cousins you have, and 35 was your age when your oldest child was born.

Another technique for remembering a long series of items is to regroup them. This is sometimes called chunking. You "chunk" when you turn a list of 15 things into three groups of five. You might do this when you go grocery shopping: think of the items you need by categories, such as dairy, produce, desserts, frozen foods, and so on. Chunking is also useful for remembering telephone numbers — which are naturally chunked into the area code, local exchange, and remaining four digits — and other numbers. Say your checking account number is 379852654. Instead of memorizing it as a string of nine single digits, try grouping the digits into three triple-digit numbers: 379, 852, and 654. That way, you'll reduce the number of chunks of information you need to remember from nine to three.

The SQ3R Method

SQ3R stands for Survey, Question, Read, Recite, and Review. This five-step method is particularly effective for mastering a large volume of technical information from a textbook or professional document.

1. Survey the material by reading through it quickly. Concentrate most on the chapter headings and subheadings, as well as the first sentence of each paragraph, to get an overview.
2. Question yourself about the main points of the text. The more provocative and interesting your questions, the better able you will be to mentally organize the material when you re-read it.
3. Read the text carefully for comprehension, keeping in mind your questions from the second step. Don't take notes or underline yet — doing so at this stage can actually interfere with your comprehension by interrupting the flow of information.
4. Recite what you have just read, either to yourself or to someone else. Speaking out loud helps deepen your understanding of the material. Now is also the time to take notes.
5. Review the text, as well as your notes, a day or two later. Now, think critically about the information: does it support or contradict other information you know about the subject? Go back to your questions from step two. Can you answer them? Do any questions remain? Review the text quickly several more times over the next several days or weeks to help your brain consolidate and store it.

The following reviews have been left for this report.

I bought this report because I had worries about a family member. The report was so helpful because it let me understand what was normal and what was outside the norm. We decided to get help for our family member but there was a side benefit -- the tips for ways to improve memory were really helpful to me. Gratefully, Joan S

I read the report and was hoping to find something that offered hope for a family member. It appears that the research has not progressed to the point that traditional medicine can offer something positive. It is too bad that alternative medicine has not been respected enough to receive support from the larger medical profession, particularly in cases where traditional medicine is still stumped. I have found more hope in the research from alternative (natural) medicine, but the larger medical profession would prefer to stand still and do and offer nothing. It is all about money.

I found it informative. It reinforced what I have been doing. Perhaps if people followed the recommendations in the report we would have fewer mental problems.

As an 88-year-old senior citizen, I was concerned about short-term memory loss. After reading the report, I realize I was not so abnormal after all. I feel that Improving Memory tallies well with the Health Report about Alzheimer's Disease.
I am enthralled by the fact you can explain in simple terms how this wonderful, complex organ works. I am sure you will update the knowledge of the brain as soon as it is available and I will be waiting to pounce at a copy of the Report!

While some of the article was review, I was impressed with the reasons why our memory works the way it does. Knowing why is very helpful in making decisions about my health. I will remember that I am exercising for my brain as well as my muscles, bones, and balance.

A useful document reinforcing known aspects and reminding me of others which had been half-forgotten. It would have been useful to have had the alternative/natural medical viewpoint alongside the traditional, not only for comparative purposes but also to see where the one is outstripping the other in developments in this area.

I found the publication to be very informative and a nice review of brain functions. I felt a lot better after reading it and realized that my issues were normal.
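The chunking technique described in this report - turning a long digit string into a few larger groups - amounts to a simple slicing rule. A minimal sketch (the group size of three mirrors the report's checking-account example; the function name is just illustrative):

```python
def chunk_digits(number: str, size: int = 3) -> list[str]:
    """Split a digit string into groups of `size` digits.

    Mirrors the report's example: the nine-digit account number
    379852654 becomes three memorable three-digit numbers.
    """
    # Slice the string at every `size` characters; the last chunk
    # may be shorter if the length isn't a multiple of `size`.
    return [number[i:i + size] for i in range(0, len(number), size)]

print(chunk_digits("379852654"))   # → ['379', '852', '654']
print(chunk_digits("5551234567"))  # → ['555', '123', '456', '7']
```

Either way, the memory load drops from nine separate items to three or four, which is the whole point of chunking.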
http://www.health.harvard.edu/special_health_reports/Improving_Memory
A trader works at Subic Fish Port at dawn in Subic Bay, Philippines. / Thomas Maresca

MASINLOC, Philippines -- Romeo Taneo, 39, has been going to Scarborough Shoal for as long as he can remember. The rich schools of fish such as tuna found along the chain of reefs and rocks 124 miles from shore have been fished by the people of this Philippine town for centuries. But the 2,000 fishermen of Masinloc haven't gone there in months, not since Chinese vessels arrived to claim the shoal for China even though its coastline is 500 miles away.

"We can't fish there anymore," Taneo said. "Whenever we go near, the Chinese chase us away."

China has essentially said it wants to chase every nation from the South China Sea. It has laid claim to 1 million square miles of the sea and in recent months has been dispatching ships and aircraft to enforce its ownership, infuriating Asian nations whose coastlines also approach the sea.

It's not just the rich fisheries that China and others are battling over. The World Bank has estimated that the seabed contains huge deposits of oil and natural gas. The sea is a major route for the world's cargo (50% of global oil tankers pass through it). As Asia's economies and populations grow, the food source and the energy resources of the South China Sea will become even more important.

Confrontations that have already taken place between China and its neighbors over the sea could escalate and lead to war, observers of the situation say.

"The situation is quite worrying and we're watching it closely," said Stephanie Kleine-Ahlbrandt, China and Northeast Asia project director for the International Crisis Group. "The continuing presence of claimants' law enforcement and fishing vessels in disputed waters are opportunities for skirmishes that may bring countries down a path they didn't intend."
The shoal, a triangle of rocks about 35 miles around, is one of a number of outcroppings and islands in the South China Sea that the People's Republic of China says the Chinese discovered and claimed long ago. Scarborough is named for a British tea ship wrecked on its rocks in 1784 with no survivors.

In July, China proclaimed the creation of Sansha, a new city on tiny Yongxing Island that would oversee jurisdiction of the Paracel, Spratly and Macclesfield Bank island groups scattered throughout the sea. In November, China issued passports with a map of China that included about 80% of the South China Sea. Today it continues to protect Chinese fishing boats that ply shoal waters, even though the shoal is well within the Philippines' 200-mile zone that all coastal nations can claim as exclusively theirs according to the United Nations Convention on the Law of the Sea.

Vietnam, Malaysia, Brunei, Taiwan and the Philippines have claims to parts of the sea, and some have appealed to the United Nations and the USA for help in dealing with China's ownership announcement.

Tensions were high in April when the Philippines tried to act against China. Chinese vessels prevented a Philippine naval warship from pushing out Chinese fishing boats accused of poaching protected species such as sea turtles. Eventually both fleets agreed to go home, but Chinese marine surveillance vessels soon returned and remain. The vessels went as far as to rope off the entrance to Scarborough lagoon.

Caught in the geopolitical standoff are fishermen up and down Zambales, a province on the west coast of Luzon, the largest island in the Philippines.

"We're afraid to go to Scarborough now," said Francis Alaras, who has been fishing for 15 years out of Subic Bay. "Even the Coast Guard is afraid to go there."

Taneo said he used to take in $250 to $500 in a good week catching grouper, Spanish mackerel and tropical aquarium fish around Scarborough. Now he might earn $50 in waters nearer the coast.
Some fishermen journey several extra hours to avoid the Chinese-occupied area, burning additional fuel and squeezing their ability to make a profit. What puzzles many in Masinloc is the suddenness of the change. Taneo said fishermen from several countries used to fish at Scarborough without incident, at times even boarding each other's vessels to swap local delicacies and liquor. "Why now?" he said.

Harry Roque Jr., a professor of law at the University of the Philippines, urged Manila to bring the Scarborough case before the U.N.'s International Tribunal for the Law of the Sea, which could issue a binding provisional decision. China and the Philippines are both signatories to the treaty. "It would be the perfect way to defuse the tension if there is in fact a provisional measure," Roque said. "Of course there's no guarantee China will comply with it, but I think it's very clear that in modern history no state wants to be branded a violator of international law."

Philippine Foreign Affairs Secretary Albert del Rosario has also called for international arbitration in the Scarborough standoff. "While we are at a disadvantage in terms of our resources and capabilities, it is our belief that international law is the great equalizer and that right is might," he said.

On Tuesday, the Philippine foreign secretary said that he has summoned China's ambassador in the Philippines to inform her that Manila is seeking arbitration at an international tribunal. Del Rosario said the Philippines has exhausted almost all political and diplomatic avenues for a peaceful negotiated settlement of maritime disputes with China, and hopes that the arbitral proceedings will bring results. China, however, has said it would not accept an international judgment and will only resolve the matter in one-on-one talks with individual countries, which its smaller neighbor the Philippines says puts it at a severe disadvantage.
The conflict has gotten the Philippines to turn for help to a former hated enemy, Japan, whose occupation of the Philippines during World War II is not forgotten here. Last week Japanese Foreign Minister Fumio Kishida pledged 10 patrol ships and communications equipment for the Philippines coast guard, according to media reports. Japan is fending off similar territorial claims that China is pressing over the Senkaku Islands in the East China Sea.

The United States has stayed neutral in the territorial disputes, saying only that they should be resolved through negotiation. The USA and the Philippines held discussions in December that del Rosario says should result in an increased naval rotational presence in the Philippines that "will serve to guarantee peace and stability in the region."

Murray Hiebert, deputy director at the Center for Strategic and International Studies, says U.S. interests lie most clearly in maintaining the unrestricted movement of trade in the South China Sea. "Freedom of navigation is absolutely critical," he said. "A whole lot of oil and iPads move through there."

China shows little sign of backing down, however. In November, the Chinese province of Hainan said its police vessels may board and search foreign ships that "illegally" enter Chinese waters.

"If China persists in its view that (the South China Sea) is a Chinese lake, then we're headed for conflict," Roque said. "And I think every single nation on earth that wants to use the seas will have an interest in it."

For now, solutions seem scarce. Some observers suggested that joint development of fishing and hydrocarbons in disputed areas is a reasonable way forward. But the charged environment is making cooperation increasingly difficult.

"If the political will were present, (joint development) would be possible," said Robert Beckman, director of the Center for International Law at National University of Singapore. "However, under the present political climate, it seems unlikely."
In Masinloc, the fishermen are looking to the future with a characteristically Filipino blend of fatalism and optimism. Masinloc's fishery officer, Jerry Escape, says people are looking at other ways to earn a living, such as establishing more fish hatcheries to increase fish stocks closer to shore and promoting tourism of the area's pristine waters.

"We will find a way," he said. "We are Filipinos. That is what we do."

Copyright 2013 USATODAY.com
Read the original story: Seabed a hotbed of controversy for Philippines, China
http://www.hometownlife.com/usatoday/article/1833467?odyssey=mod%7Cnewswell%7Cimg%7CFrontpage%7Cp
Although they are not a very robust predictor (even ACT, Inc., will say that a better predictor of college success is obtained by combining ACT scores with high school GPA), and they are only one piece of the admissions puzzle, the yearly numerical scores are easily analyzed, make for a good news story, and result in many diverse opinions. In looking at the ACT test scores, you can find many correlations, such as lower average scores for children of single parents versus married or divorced parents, regional differences, and differences by race. This leads to many claims of unfairness in the design of the test, intentional or not.

So the question is this: does the ACT provide a gateway to opportunity by identifying the best and the brightest to attend the "best" schools, or is it an artificial barrier to entry that can be used for political (or other) purposes by leaving the disadvantaged behind?

Clearly, the strongest correlation you will find in the ACT scores is increasing scores with increasing family income. Using data for all undergraduates from the 2008 National Postsecondary Student Aid Study (NPSAS), the distribution of ACT composite scores by family income for dependent students attending four-year schools demonstrates this (note that the data is incomplete for ACT scores less than 10).

To answer this, I looked at NPSAS data for net tuition and fees (after all grants, veterans benefits, and tax benefits) for dependent students, to see if the advantages of an increased ACT score (possibly a benefit of well-off parents) correlate to a reduced cost of college due to merit-based aid. The result shows that the higher the ACT score, the greater the net cost of college (which must be covered by savings, income, or loans). That is, even though they qualify for greater merit-based aid, these students tend to attend higher-cost schools, resulting in a greater net cost.
Also, since they tend to have higher-income families, they would receive less need-based aid. The net result is a tendency to segregate college choice by family income: children of high-income families tend to go to higher-cost schools (whether better academically or not, though they are typically better in terms of the quality of the facilities and surroundings), and children of lower-income families go to lower-cost schools (better academically or not). Often, this means that wealthy families send their children to private schools, and lower-income families send their children to public schools (another "DUH!" conclusion). And in those careers where the school you graduated from matters (business and law, for example, as compared to engineering or nursing), it reinforces the inter-generational advantages for graduates in those fields.
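The income-score relationship described above can be illustrated with a toy Pearson correlation. The numbers below are invented for demonstration and are NOT NPSAS figures; they just mimic the pattern of scores rising with family income:

```python
# Invented illustrative data: family income ($1000s) vs. mean ACT composite.
incomes = [20, 40, 60, 80, 100, 120, 140, 160]
scores = [18.1, 19.0, 20.2, 21.0, 21.9, 22.8, 23.5, 24.1]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(incomes, scores)
print(f"r = {r:.3f}")  # strongly positive for this toy data
```

A correlation near +1 for data like this is what the article's claim of "increasing scores with increasing family income" would look like numerically; real survey data would of course show far more scatter.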
http://www.howwillipayforcollege.com/2011/09/standardized-tests-gateway-to-college.html
Protect your investment

Protected cropping can be as beneficial for small fruit production as it is for vegetables. Yields of some berries may be two to three times greater in protected cultivation than outside in the field. Build customer loyalty by having fresh fruit first in the spring, long into fall, and by growing the highest quality berries.

Strawberry plants, which are damaged by temperatures below 12°F/-11°C, require winter mulch. Although hay and straw are the traditional mulching materials, many strawberry growers use row covers because they require less labor to install and remove. Heavier row covers, weighing 1.25 oz./sq.yd. or more, are recommended.

Fall-bearing raspberries and blackberries that normally stop producing at the first frost will continue fruiting for months longer in an unheated hoophouse. They will fruit again earlier in spring than those in the field, commanding a much higher sales price.

Strawberry transplants from runners produced over the summer can be planted in an unheated hoophouse in September. They will produce fruit in the fall, continuing until December, and then fruit again in early spring.

Strawberry plasticulture is supplanting the traditional matted row system on many farms. Plant on black plastic mulch from mid-July to September and cover with row cover in fall. Fruits are harvested the following spring.

Protect fruits from marauding birds with Johnny's bird netting, which won't damage fruit or bend branches.

Getting started with fruit
By Lynn Byczynski

For many market growers, fruits are the final frontier of horticultural expertise. Growing fruit is an interesting challenge for a vegetable grower because fruits require different systems for planting, cultivating, harvesting and post-harvest handling. But there are many reasons to take up the challenge.

- The primary reason is that people love fruit. Farmers market customers flock to vendors with berries and grapes for sale.
CSA members develop stronger ties to farms that can supply a wide range of fresh produce. Chefs who tout their connections to local farmers are delighted to be able to list local fruit on their dessert menus. At home, even the pickiest eaters are usually happy to snack on berries and grapes.

- Consumption of berries and grapes is rapidly increasing worldwide, thanks to recent discoveries about the health benefits of these fruits. The pigments that give berries and red or purple grapes their deep colors contain phytochemicals that help prevent cancer, cardiovascular disease, and age-related mental decline. People feel good about eating grapes and berries!

- From the farmer's and gardener's perspective, berries and grapes are easier to grow than ever before. New varieties, production practices, and products are increasing the options for growers in every region. Berries are popular crops for the hoophouse, for example, because protection from wind and rain produces extraordinary yields of high-quality fruits. Plastic and paper mulches reduce the need for year-round weeding of these perennial plants. And because Johnny's offers plants in small quantities, growers can trial numerous commercial varieties without spending a lot of money.

- Although most berry and grape plants won't produce fruit for 1 to 3 years after planting, the wait is worthwhile. Commercial growers can charge a premium for fresh, ripe fruits. And home gardeners can save money by growing their own.

- What do you need to get started with fruits? First, if you aren't sure about the suitability of your climate for small fruits, contact your state Extension service for recommendations. Some regions of the country may not have enough cold (chilling hours) for certain varieties, while others may be too cold for the plants or too hot for the fruits. Good soil preparation is essential for successful fruit production. So is an irrigation system.
Most small fruits don't compete well with weeds, so a mulch of hay, straw, or wood chips is beneficial. Grapes need a strong trellis, which should be erected when the vines are planted. A living mulch in the paths between rows will help reduce weed pressure and improve soil fertility. You'll find products and information about living mulches in the cover crops section on the web and in the catalog.

By Lynn Byczynski

Growing grapes may appear complicated to the beginner, and with good reason. Although grapes will grow anywhere, there are many kinds of training and trellising systems, and choosing the right one requires some study before planting. Training and trellising go hand-in-hand because the kind of structure you build to hold your grape vines will affect how you prune them. The structure, in turn, depends somewhat on the type of grapes you grow, because some are more vigorous and need stronger supports.

In general, a grape trellis needs to be able to support the weight of the crop and withstand high winds. It also should be designed to last 20 years, as that's how long you can expect your vines to produce. Home gardeners planting just a few vines can use a fence that fits into the landscape or, better still, an arbor that provides shade in summer as well as support for the grape vines. To get good fruit production from an arbor planting, pruning becomes the key. Texas Extension has a nicely illustrated manual on arbor training.

Commercial growers with larger aspirations need to set up a trellis in the field. The main ingredients for a vineyard trellis are strong end posts with braces, earth anchors, or deadmen; posts along the length of the trellis to support the wires; and high-tensile galvanized steel wire to support the vines. The most common type of trellis is the single curtain trellis with either one or two wires and posts 16 to 24 feet apart, depending on the training system. With this type of trellis, various training styles are possible.
Another popular type of trellis, especially in northern areas, is the double curtain, which allows the vines to spread horizontally across two wires. The recommended trellis and training system varies by climate. Northern growers with shorter growing seasons usually choose training systems that expose more leaf surface to the sun, but those can be inappropriate to warm climates. To learn more about the best training and trellising system for your location, check the list below of state viticulture guides and choose the state nearest your own. Or, contact your state Extension service for recommendations.

California: Viticulture and Enology Home Page
Colorado: Grape Growers Guide
Idaho, Oregon, Washington: Northwest Berry & Grape Information Network
Iowa: Viticulture Home Page
Kansas: Commercial Grape Production
Michigan: MSU Grape Information
Missouri: Home Fruit Production: Grape Training Systems
New York: Cornell Viticulture
Ohio: Midwest Grape Production Guide
Oklahoma: Viticulture and Enology
Pennsylvania: Wine Grape Network
South Dakota: Viticulture in South Dakota
Texas: Winegrape Network
Vermont: Cold Climate Grape Production
Wisconsin: Growing Grapes

By Lynn Byczynski

Strawberries are one of the most popular fruits in American gardens and market farms. They can be grown in many places, from hanging baskets to fields to hoophouses. The trick is to match the growing system to the type of strawberry you want to grow. Some varieties need plenty of space, whereas others can be grown in containers.

June-bearing varieties initiate fruit buds in fall and blossom the following spring. They are the earliest type to fruit. They produce one crop and then spend their energy sending out runners (also called daughter plants) that will fruit the following year. June-bearing strawberries are usually grown in a matted row system, in which the mother plants are planted in spring, spaced 18-24" apart in rows that are 3-4' apart.
The first year, flowers are pinched off to stimulate the plants to send out runners that fill in the spaces within the row and between the rows. Plants produce fruit the second spring. A variation of this system is to prune runners to one or two per plant so that they stay in a line and don't spread out between the rows. This obviously requires a lot more labor, but may result in better yields because of reduced competition. Matted-row systems can be renovated to keep plants producing for many years.

Another system is called the ribbon row system, in which strawberry crowns are planted in fall and allowed to bloom and fruit the following spring. As runners form, they are removed to increase fruit size. Once the crop is done, runners are allowed to develop and fill in the bed to a matted row system.

Day-neutral varieties produce fruit all summer. They can be grown as annuals: plant early in spring and pinch off flowers for two months to let the plants get established, and then let them fruit the rest of the summer. Day-neutral strawberries are good for container production on a deck or patio. Some varieties, including 'Seascape', will fruit on unrooted runners, so they make attractive hanging baskets, with the runner plants cascading over the sides of the basket. Day-neutral strawberries can also be grown in a hill system, with 12 inches between plants.

Alpine strawberries produce small but intensely flavorful berries. They do not send out runners and are usually grown from seed. They are a good choice for strawberry pots and other containers, or as edging in the vegetable garden. They also can be grown with less than full sun, so they are a good choice for many home gardeners. Region-specific growing information is available from most state Extension services. ATTRA has a publication on Organic Production of Strawberries.

By Lynn Byczynski

Strawberry quality, yield, and earliness are greatly improved in a hoophouse.
Penn State researchers found that in their climate, hoophouse strawberries produced fruit 3 weeks earlier in spring than those grown outside, with about a 25% yield increase. Most commercial hoophouse strawberries are grown using an annual plasticulture system that includes raised beds, drip irrigation, plastic mulch, and floating row cover. Plugs are planted in late summer on beds covered with plastic mulch, with drip tape beneath the mulch. As the weather gets cold, the young plants are covered with floating row cover to maintain the warmer soil temperatures needed for establishment. The plants grow slowly during winter in the protected environment of the hoophouse; then, as the weather warms, they flower and produce berries for several weeks. The crop is then finished for the year. Strawberry plants can either be removed to make way for other crops, or they can be left to produce a second year if berry prices or other factors justify tying up the space for a year.

Plugs are available from outside suppliers, or they can be produced on the farm in summer. To grow your own, detach unrooted daughter plants (runners) from the mother plant in July and stick them in potting mix in 72-cell flats under intermittent mist until roots protrude from the bottom of the cell. Then place on a greenhouse bench and grow until September, when they can be planted into the hoophouse. Plants that are rooted in July are likely to flower and fruit in fall in warmer climates, but that won't affect their yield the following spring.

For more information on hoophouse strawberries:
Growing Strawberries in High Tunnels in Missouri
Production of Vegetables, Strawberries, and Cut Flowers Using Plasticulture is a book about all aspects of horticultural plastics, and includes extensive information about hoophouse strawberries.
http://www.johnnyseeds.com/t-catalog_extras_fruits.aspx?source=BlogJSSAdv0212
2012 Hurricane Information (Date of Record: June 20, 2012)

The 2012 hurricane season began June 1st; are you prepared? Planning ahead can stop a lot of confusion and help eliminate extra stress at the time of an event. Weather forecasters start tracking storms and predicting their paths as soon as they form. A "5-day cone" and a "3-day cone" are created which show the forecast path for the center of the storm with as much as a 300-mile "cone of uncertainty."

Things to be done before the cone:
Know your evacuation zone.
If you or a member of your family will require special needs assistance or transportation to a shelter, register with the fire department by calling 727-587-6737: have the paperwork sent to you, fill it out and return it.

During a 5-day cone:
Review your family disaster plan.
Get your survival kit and important papers ready.
Begin work to prepare your home.
If you live in an evacuation zone, know where you will go and how you will get there.

During a 3-day cone:
Double-check your survival kit and make necessary purchases to avoid lines and traffic.
Gather special supplies for infants, children, seniors and pets.
Be sure you have all materials and tools necessary to shutter windows. Shop early.
If your plan is to evacuate, make arrangements, book reservations and pack what you can in your vehicle.

Hurricane watch actions:
Fill your vehicle's gas tank.
Get cash and secure important papers.
Fill containers and tubs with water, even if evacuating - you may need the water when you return.
Secure yard equipment.
Shutter your windows. Help neighbors with theirs.
If your plan is to evacuate out of the local area, make final preparations to secure your home so you can leave as soon as an evacuation order is issued.
If you are registered for transportation to a public shelter, be sure you have everything you need in your "go bag."

Hurricane warning actions:
Stay tuned to local news and get your weather radio ready.
Complete any final preparations to evacuate or to shelter in your home.
If your plan is to travel out of the local area and you can leave at this point, do so.
If you are registered for transportation to a public shelter, have your "go bag" ready. Rescue workers will begin pick-ups shortly after an evacuation order is issued.

Once an evacuation order is issued:
Determine if your residence is affected by the evacuation order (does it include your evacuation zone, or do you live in a mobile or manufactured home?).
If you are evacuating locally, get to your shelter location within a few hours of the evacuation order. Be sure to check which public shelters are open.
If you are traveling out of the local area, leave as quickly as possible to avoid traffic jams.
If you are not required to evacuate, prepare a safe room in your home and stay off the roads to enable evacuation traffic to clear the area.
http://www.largo.com/egov/docs/1340217512494.htm
January 11, 2012

Imagine this scenario. You have a great career and you are a star employee. You are talented, a problem solver, and go out of your way to help co-workers. You do your job and make it all look easy. Despite your dedication, one day a new person is hired. He has the same education, experience, and skills that you have. There is only one difference: he gets paid more.

In spite of all the progress our nation has made in civil rights over the past fifty years, men still get paid more than women for no other reason than gender. Now is the time to change that. The Paycheck Fairness Act, a long overdue amendment to the Equal Pay Act of 1963 (S. 182), was introduced in Congress in 2011 with the support of President Obama, who in a statement called the legislation "a common-sense bill that will help ensure that men and women who do equal work receive the equal pay that they and their families deserve."

Nearly fifty years ago, the Equal Pay Act of 1963 passed, promising that men and women would receive equal pay for equal work. However, the wage gap continues despite women's increased education, greater level of experience, and less time spent raising children. When President John F. Kennedy signed the Equal Pay Act into law, women earned, on average, 60 cents for every dollar earned by men. In the forty-eight years that have passed, the pay gap has closed by less than 20 cents. In 2010, women earned 77 cents on every dollar earned by men. Women of color fared even worse: African American women earned only 67 cents, and Latinas just 58.

Seventy-seven cents on the dollar is not fair. Now is the time to take action to close the pay gap. The Paycheck Fairness Act would close loopholes, strengthen business incentives to end pay discrimination, prohibit retaliation against workers who share wage information, and bring the Equal Pay Act in line with other civil rights laws. Passing this act does not just affect women; it affects everyone.
More families are depending on women as breadwinners. The share of couples in which both spouses work rose to 66 percent in 2010, according to U.S. Census Bureau data. The number of women who were the only working spouse also rose, with an estimated 4 million families depending on mom to bring home the bacon. The number of dads who were the only working spouse dropped, and the number of stay-at-home dads rose higher.

The wage gap has long-term effects on the economic security of women and families. In 2009 a typical college-educated woman earned $36,278 a year for full-time work, while a comparably educated man earned $47,127, a difference of $10,849. This amount of money can be a great help to a family struggling during a recession. With $10,849, you could buy a year's worth of groceries ($3,210), pay for a semester of college tuition ($6,548), pay three months of rent and utilities ($2,265), cover six months of health insurance ($1,697), or make six months of payments on a student loan ($1,602).

The current economic crisis is heavily affecting families, and the latest data shows that gender roles are becoming more flexible and egalitarian. Shouldn't pay reflect this shift? The answer is a resounding yes. We can't afford to wait another fifty years for paycheck equality. The future of our economy and the well-being of our families depend on it. When women are paid fairly, whole families win. It is our civic duty to take action by writing or calling our Senators and Representatives and urging them to pass the Paycheck Fairness Act now.
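The arithmetic above can be checked directly. This snippet uses only the dollar figures quoted in the article to verify the $10,849 gap and to show how far it stretches across each of the listed expenses:

```python
# Salaries quoted in the article (2009, full-time, college-educated workers).
woman_salary = 36_278
man_salary = 47_127

gap = man_salary - woman_salary
print(f"gap = ${gap:,}")  # gap = $10,849

# Expenses the article says the gap could cover, considered individually.
expenses = {
    "groceries (1 yr)": 3_210,
    "college tuition (1 semester)": 6_548,
    "rent + utilities (3 mo)": 2_265,
    "health insurance (6 mo)": 1_697,
    "student loan payments (6 mo)": 1_602,
}
for item, cost in expenses.items():
    print(f"{item}: ${cost:,} -> {gap / cost:.1f}x covered by the gap")
```

Each expense is well under the gap, which is the article's point: any one of them could be paid for out of the difference, with money left over.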
http://www.madewomanmag.com/index.php?option=com_k2&view=item&id=163:news-%7C-equal-pay-for-equal-work-can-we-afford-to-wait-another-50-years?Itemid=169
The current drawn by the macro, into the VCC supply pin of the circuit. For individual macros, the current drawn by that macro (TYPICAL). When computing the supply current for an array, it will be a function of the macros used in the array plus the overhead current for the I/O mode. It may also be a function of the number of TTL input macros.

Integration Competency Center. An ICC is typically a shared, centralized resource that defines uniform approaches to integration with reusable assets. There are a variety of ways to set up an ICC: from simply defining a series of best practices, to specifying the tools or architectures that must be used, to providing centralized developers and architects who can create and manage integrations. What's right for your company depends on such things as the corporate structure (centralized or decentralized), frequency of projects, your level of standardization, and your IT infrastructure.

Integrated circuit card. See chip card.

Integrated Circuit Card. A card into which one or more ICs have been embedded.

ISO Integrated Circuit Card, or Smart Card.

Integrated Circuit Card: see Smart Card.
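The array supply-current rule in the first definition (sum the typical currents of the macros used, then add the overhead current for the I/O mode) can be sketched as a small calculation. The macro names and all current values below are hypothetical placeholders, not figures from any datasheet:

```python
# Hypothetical typical supply currents per macro, in milliamps.
MACRO_ICC_MA = {"nand2": 0.05, "dff": 0.12, "ttl_input": 0.30}

# Hypothetical overhead current per I/O mode, in milliamps.
IO_OVERHEAD_MA = {"cmos": 1.0, "ttl": 2.5}

def array_supply_current(macro_counts, io_mode):
    """Estimate array ICC: sum of per-macro currents plus I/O-mode overhead."""
    macro_total = sum(MACRO_ICC_MA[name] * count
                      for name, count in macro_counts.items())
    return macro_total + IO_OVERHEAD_MA[io_mode]

# Example: 100 NAND gates, 20 flip-flops, 8 TTL inputs, TTL I/O mode.
icc = array_supply_current({"nand2": 100, "dff": 20, "ttl_input": 8}, "ttl")
print(f"ICC = {icc:.2f} mA")  # 100*0.05 + 20*0.12 + 8*0.30 + 2.5 = 12.30 mA
```

Note how the TTL input macros contribute both through their own per-macro current and, in a real device, potentially through the I/O-mode overhead, matching the definition's caveat that ICC "may also be a function of the number of TTL input macros."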
Posted: Wed Apr 02, 2008 3:08 pm    Post subject: Polymer opal films shed new kind of light on nature

24 July 2007, Nanowerk

Imagine cleaning out your refrigerator and being able to tell at a glance whether perishable food items have spoiled, because the packaging has changed its colour, or being able to tell if your dollar bill is counterfeit simply by stretching it to see if it changes hue. These are just two of the promising commercial applications for a new type of flexible plastic film developed by scientists at the University of Southampton in the United Kingdom and the Deutsches Kunststoff-Institut (DKI) in Darmstadt, Germany. Combining the best of natural and manmade optical effects, their films essentially represent a new way for objects to precisely change their colour. The researchers will publish their findings in the July 23 issue of Optics Express ("Nanoparticle-tuned structural color from polymer opals"), an open-access journal of the Optical Society of America.

These "polymer opal films" belong to a class of materials known as photonic crystals. Such crystals are built of many tiny repeating units and are usually associated with a large contrast in the components' optical properties, leading to a range of frequencies, called a "photonic bandgap," in which no light can propagate in any direction. These new opal films, by contrast, have only a small contrast in their optical properties. As with other artificial opal structures, they are "self-assembling," in that the small constituent particles arrange themselves into a regular structure. But this self-assembly is not perfect, and though the structure is meant to be periodic, it has significant irregularities.
In these materials, the interplay between the periodic order, the irregularities, and the scattering from small inclusions strongly affects the way light travels through the films, just as in natural opal gemstones, a distant cousin of these materials. For example, light may be reflected in unexpected directions that depend on the light's wavelength. Photonic crystals have been of interest for years for various practical applications, most notably in fibre optic telecommunications, but also as a potential replacement for toxic and expensive dyes used for colouring objects, from clothes to buildings. Yet much of their commercial potential has yet to be realized, because the colours in manmade films made from photonic crystals depend strongly on viewing angle. If you hold up a sheet of the opal film, Baumberg explains, "You'll only see milky white, unless you look at a light reflected in it, in which case certain colours from the light source will be preferentially reflected." In other words, change the angle, and the colour changes.

These photonic crystals appear in the natural world as well, but are more consistent in colour at varying angles. Opals, butterfly wings, certain species of beetle, and peacock feathers all feature arrays of tiny holes, neatly arranged into patterns. Even though these natural structures aren't nearly as precisely ordered as the manmade versions, the colours produced are unusually strong and depend less on the viewing angle. Until now, scientists believed that the same effect was at work in both manmade and natural photonic crystals: the lattice structure caused the light to reflect off the surface in such a way as to produce a colour that changes depending upon the angle of reflection. Baumberg, however, suspects that the natural structures selectively scatter rather than reflect the light, a result of a complex interplay between the order and the irregularity in these structures.
Given that hunch, Baumberg's team developed polymer opals to combine the precise structure of manmade photonic crystals with the robust colour of natural structures. The polymer opal films are made of arrays of spheres stacked in three dimensions, rather than layers. They also contain tiny carbon nanoparticles wedged between the spheres, so light doesn't just reflect at the interfaces between the plastic spheres and the surrounding materials; it also scatters off the nanoparticles embedded between the spheres. This makes the films intensely coloured, even though they are made from only transparent and black components, which are environmentally benign. Additionally, the material can be "tuned" to scatter only certain frequencies of light simply by making the spheres larger or smaller.

In collaboration with scientists at DKI in Darmstadt, Germany, Baumberg and his colleagues have addressed another factor that traditionally has limited the commercial potential of photonic crystals: the ability to mass-produce them. His Darmstadt colleagues have developed a manufacturing process that can be successfully applied to photonic crystals, and they can now produce very long rolls of polymer opal films. The films are "quite stretchy," according to Baumberg, and when they stretch, they change colour, since the act of stretching changes the distance between the spheres that make up the lattice structure. This, too, makes them ideal for a wide range of applications, including potential ones in food packaging, counterfeit identification, and even defence.

Sources: Nanowerk & Optical Society of America
http://www.nanowerk.com/news/newsid=2261.php
Story posted: 24th July 2007
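The tuning relationship mentioned above — larger spheres reflect longer wavelengths — can be sketched with the Bragg–Snell relation commonly used for artificial opals. The sphere diameters, refractive indices, and packing fraction below are illustrative assumptions for a generic polymer opal, not values from the article.

```python
import math

# Bragg-Snell estimate of the wavelength reflected by the (111) planes of an
# FCC opal film. Assumed illustrative values: polymer index ~1.49, voids of
# index 1.0, FCC filling fraction 0.74, normal incidence.

def opal_wavelength_nm(sphere_diameter_nm, n_sphere=1.49, n_medium=1.0,
                       fill=0.74, angle_deg=0.0):
    """Reflected wavelength (nm): lambda = 2 * d111 * sqrt(n_eff^2 - sin^2(theta))."""
    d111 = math.sqrt(2.0 / 3.0) * sphere_diameter_nm          # (111) spacing for FCC
    n_eff = math.sqrt(fill * n_sphere**2 + (1.0 - fill) * n_medium**2)
    theta = math.radians(angle_deg)
    return 2.0 * d111 * math.sqrt(n_eff**2 - math.sin(theta)**2)

# Larger spheres -> longer (redder) reflected wavelength:
for d in (200, 250, 300):
    print(d, "nm spheres ->", round(opal_wavelength_nm(d)), "nm reflected")
```

With these assumed numbers, 200 nm spheres reflect in the blue and 300 nm spheres in the red, which is the size-tuning behaviour the article describes.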
Little Bluestem, Big Bluestem, Indiangrass, and Switchgrass are the legendary grasses of the Tall Grass Prairie. These species are the backbone of the prairie grass ecosystem that once covered most of the central plains of North America. These native grasses are all excellent forage producers that make your grass selection "natural," requiring less fertilizer and other inputs. They are well adapted to both upland and lowland sites. You cannot go wrong with these native grass staples, which are heat and drought tolerant and will provide permanent cover and forage production.

This mixture contains:
- Little Bluestem - Schizachyrium scoparium
- Big Bluestem - Andropogon gerardii
- Indiangrass - Sorghastrum nutans
- Switchgrass - Panicum virgatum

Seeding rates:
- 1/2 lb./1,000 square feet
- 6 lbs./acre when planting with wildflowers
- 12 lbs./acre grass mix only

Planting times: late spring to early summer, with wildflowers; late spring to mid summer, grass mix only.

Planting range: can be planted in the central and midwestern U.S., Texas, Louisiana, Mississippi, Alabama, northern Georgia, western North and South Carolina, western Virginia, Pennsylvania, New York, southern New Hampshire, and western Massachusetts. For elevations below 6,000 feet, moderate to moist soils.
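The per-acre rates above translate directly into the pounds of seed needed for a given planting area. Here is a minimal sketch of that conversion; the 2-acre field size is a made-up example.

```python
# Pounds of seed needed for a planting area, using the per-acre rates above.
RATE_WITH_WILDFLOWERS = 6.0   # lbs/acre
RATE_GRASS_ONLY = 12.0        # lbs/acre
SQFT_PER_ACRE = 43_560        # square feet in one acre

def seed_needed_lbs(area_sqft, rate_lbs_per_acre):
    """Pounds of seed for an area, given a lbs/acre seeding rate."""
    return area_sqft / SQFT_PER_ACRE * rate_lbs_per_acre

# Example: a hypothetical 2-acre meadow planted grass-mix only.
print(round(seed_needed_lbs(2 * SQFT_PER_ACRE, RATE_GRASS_ONLY), 1))  # → 24.0
```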