Severe Acute Respiratory Syndrome (SARS) is a serious form of pneumonia, resulting in acute respiratory distress and sometimes death. It is a dramatic example of how quickly world travel can spread a disease. It is also an example of how quickly a networked health system can respond to an emerging threat. This contagious respiratory infection was first described on February 26, 2003. SARS was identified as a new disease by WHO physician Dr. Carlo Urbani. He diagnosed it in a 48-year-old businessman who had traveled from the Guangdong province of China, through Hong Kong, to Hanoi, Vietnam. The businessman died from the illness. Dr. Urbani subsequently died from SARS on March 29, 2003, at the age of 46. In the meantime, SARS was spreading, and within 6 weeks of its discovery, it had infected thousands of people around the world, including people in Asia, Australia, Europe, Africa, and North and South America. Schools had closed throughout Hong Kong and Singapore. National economies were affected. The WHO had identified SARS as a global health threat and issued an unprecedented travel advisory. Daily WHO updates tracked the spread of SARS seven days a week. It wasn’t clear whether SARS would become a global pandemic or would settle into a less aggressive pattern. The rapid, global public health response helped to stem the spread of the virus, and by June 2003 the epidemic had subsided to the degree that on June 7 the WHO backed off from its daily reports. Nevertheless, even as the number of new cases dwindled and travel advisories began to be lifted, the sober truth remained: every new case had the potential to spark another outbreak. SARS appears to be here to stay, and to have changed the way that the world responds to infectious diseases in the era of widespread international travel.

Causes And Risk Factors: SARS is caused by a new member of the coronavirus family (the same family that can cause the common cold).
The discovery of these viral particles represents some of the fastest identification of a new organism in history. SARS is clearly spread by droplet contact. When someone with SARS coughs or sneezes, infected droplets are sprayed into the air. Like other coronaviruses, the SARS virus may live on hands, tissues, and other surfaces for up to 6 hours in these droplets and up to 3 hours after the droplets have dried. While droplet transmission through close contact was responsible for most of the early cases of SARS, evidence began to mount that SARS might also spread by hands and other objects the droplets had touched. Airborne transmission was a real possibility in some cases. Live virus had even been found in the stool of people with SARS, where it has been shown to live for up to four days. And the virus may be able to live for months or years when the temperature is below freezing. With other coronaviruses, re-infection is common. Preliminary reports suggest that this may also be the case with SARS. Preliminary estimates are that the incubation period is usually between two and ten days, although there have been documented cases where the onset of illness was considerably faster or slower. People with active symptoms of illness are clearly contagious, but it is not known how long before symptoms appear a person may be contagious, or how long contagiousness might linger after the symptoms have disappeared. Reports of possible relapse in patients who have been treated and released from the hospital raise concerns about the length of time individuals can harbor the virus.

Prevention: Minimizing contact with people with SARS minimizes the risk of the disease. This might include minimizing travel to locations where there is an uncontrolled outbreak. Where possible, direct contact with people with SARS should be avoided until at least 10 days after the fever and other symptoms are gone. The CDC has identified hand hygiene as the cornerstone of SARS prevention.
This might include hand washing or cleaning hands with an alcohol-based instant hand sanitizer. People should be taught to cover the mouth and nose when sneezing or coughing. Respiratory secretions should be considered infectious, which means no sharing of food, drink, or utensils. Commonly touched surfaces can be cleaned with an EPA-approved disinfectant. In some situations, appropriate masks and goggles may be useful for preventing airborne or droplet spread. Gloves might be used in handling potentially infectious secretions.

Symptoms: The hallmark symptoms are fever greater than 100.4 F (38.0 C) and cough, difficulty breathing, or other respiratory symptoms. Symptoms found in more than half of the first 138 patients included (in the order of how commonly they appeared):
- chills and shaking
- muscle aches

Less common symptoms include (also in order): These symptoms are generally accompanied by findings on the chest X-ray and on laboratory tests.

Signs And Tests: Listening to the chest with a stethoscope (auscultation) may reveal abnormal lung sounds. In most people with SARS, progressive chest X-ray changes or chest CT changes demonstrate the presence of pneumonia or respiratory distress syndrome. Much attention was given early in the outbreak to developing a quick, sensitive test for SARS. Specific tests for the SARS virus include the PCR for SARS virus, antibody tests to SARS (such as ELISA or IFA), and direct SARS virus isolation. All current tests have some limitations. General tests used in the diagnosis of SARS might include:
- a chest X-ray or chest CT
- a CBC (people with SARS tend to have a low white blood cell count (leukopenia), a low lymphocyte count (lymphopenia), and/or a low platelet count (thrombocytopenia))
- clotting profiles (often prolonged clotting)
- blood chemistries (LDH levels are often elevated; ALT and CPK are sometimes elevated; sodium and potassium are sometimes low)
People suspected of having SARS should be evaluated immediately by a physician and hospitalized under isolation if they meet the definition of a suspect or probable case. Antibiotics are sometimes given in an attempt to treat bacterial causes of atypical pneumonia. Antiviral medications have also been used. High doses of steroids have been employed to reduce lung inflammation. In some serious cases, serum from people who have already gotten well from SARS (convalescent serum) has been given. Evidence of general benefit of these treatments has been inconclusive. Other supportive care such as supplemental oxygen, chest physiotherapy, or mechanical ventilation is sometimes needed. As the first wave of SARS began to subside, the death rate proved to have been about 14 or 15 percent of those diagnosed. In people over age 65, the death rate was higher than 50 percent. Many more were sick enough to require mechanical ventilation, and more still were sick enough to require ICU care. Intensive public health policies are proving to be effective in controlling outbreaks. Many nations have stopped the epidemic within their own countries. All nations must be vigilant, however, to keep this disease under control. Viruses in the coronavirus family are known for their ability to spawn new mutations in order to better spread among humans.

Complications may include:
- respiratory failure
- liver failure
- heart failure
- myelodysplastic syndromes

Call Health Care Provider: Call your health care provider if you suspect you or someone you have had close contact with has SARS.

More information on SARS:
- SARS – A Worldwide Threat
- Stop Respiratory Infections
- SARS – School’s Out
- SARS and Allergies
- Asthma and SARS
- Prepare for the Worst; Hope for the Best

Review date: 6/7/2003
Reviewer: Alan Greene, MD, Chief Medical Officer, A.D.A.M.
Radio-Collaring Elephants in Namibia with Keith Leggett

Keith Leggett radio-collars enormous elephants in the Namibian desert to find out where they range and roam, and gets help from a BBC film crew.

Attaching a radio-collar to a 5-ton animal is no easy task. Especially if that animal, say, an elephant, has no interest in cooperating and does not necessarily turn up where you expect it to. This is Keith Leggett's challenge as a researcher with the Northwestern Namibia Desert-dwelling Elephant and Giraffe Project in Namibia, Africa. With the help of Earthwatch volunteers since 2002, Leggett has been radio-collaring and tracking these enormous pachyderms in the Namibian desert to find out more about their home ranges and travel routes. Why? These elephants don't make very good neighbors: they drink upwards of 30 gallons of water per day, even in the dry season when water is scarce, and are extremely destructive eaters, pushing down and trampling trees and anything in their paths. Not surprisingly, elephants and people in this area have trouble coexisting. But Namibian elephants are of great interest to tourists, and this may be the key to their salvation in this country that has been described as "the land that God created in anger." "Probably a pretty fair description of the environment," says Leggett. Understanding the routines and ecology of elephants is the first step in helping them coexist with humans. Last February, Leggett got the chance to capture and collar an elephant in front of BBC cameras. This is his report on how it went:

"I went up to the bush two days before the collaring was due and met the BBC team and enjoyed them straight off. We found the mature bull (WKM-14) but the younger bull was nowhere to be seen. After searching for two days and not finding the younger bull, it was decided to go with the older mature male. Everyone had arrived in camp by the morning of the proposed collaring so we went straight out to collar the bull.
"The collaring was absolutely textbook, couldn't think of a more perfect one. The collar went straight under the bull without any hassles, the bull fell in an open area and he responded perfectly to the drugs... perfect!

"On top of all that he moved straight into the floodplains of the Hoanib River, a move none of the previously collared elephants had undertaken. It will be very interesting to see his movements when he comes into musth, especially in response to the other dominant bull in the area.

"The film crew themselves were great fun, the only drawback was doing some takes 3 or 4 times... don't know how actors do it. I simply don't have the patience for it. Though they were very good when we were doing the collaring and stayed in the background and out of the way. Mind you, it will probably work out to be about 2 minutes of airtime, but at least I have another collar."

In the last three years of leading Earthwatch volunteers into the Namibian desert, Leggett has tracked, observed, and collared numerous elephants, and sends our office emails from his trips, such as this report from May 18, 2005:

"The first night we were in Purros, 3 elephants walked straight past camp. It appeared that we were going to have a good trip after all, or so we thought. The next two days were spent in a fruitless search for elephants... not another hide nor hair was observed... it was decided to head to the Hoanib River.

"The first thing we observed on arriving in the Hoanib River was a herd of 5 elephants, with one of the cows having a calf of about 3 months of age. He is still totally uncoordinated and lurches from one misadventure to another. The previous calf in the west was born 12 months ago, and so a new calf is still a novelty, and most of the herd females take turns in guarding and guiding him around. The minders are very vigilant, and when the older calf came to play the older animals saw him off... quite amusing at times.
The mothers appear to play only a minor role in the overall rearing of the individual; though they usually do the nursing, I have seen other females nurse young periodically. The group takes responsibility for the offspring.

"Later that day we saw the rest of the herd of 14, so it was hog heaven for 2 days, and then the elephants disappeared again. It appears as though they are doing circuits at this time of year, wandering between feeding areas. They are always moving, never stopping for long in one area.

"Overall, the volunteers were excellent and put up with the vehicle breakdowns, the lack of elephants and then the total abundance, then absence again, with a resigned tolerance... they were also pretty good fun. The west has dried out significantly and the days were very hot, but the nights were cool. There has been significant grass growth this year with the good rains, and the animals are all looking in extremely good shape. Springbok, gemsbok, and ostrich were abundant, and while the elephants have spread pretty thin, the rest of the wildlife has collected in feeding aggregations.

"After a shower, a shave and some relaxation time, I feel almost human again..."

Leggett's study is one of the first to scientifically document the home range and movements of these massive animals. Preliminary findings recently published in African Zoology show that elephant movements range from 50 to 625 kilometers (31 to 388 miles), over a period of up to five months, in response to available water and vegetation. In June, July, and August of 2006, Earthwatch teams will help Leggett track this animal, as well as up to a dozen others that he has radio-collared. They will also identify individual elephants in the field, using distinguishing tusk characteristics, ear scars, and footprint patterns, and observe their behavior. This information will help conservation agencies better manage Namibia's unique desert elephants.
Jersey Language and Languages

Historically, the local spoken language in Jersey was Jèrriais. This is best understood as the Jersey branch (or branches) of the wider Norman language; the branches spoken in Guernsey, Sark and Alderney are recognisably of similar origin but differ considerably in detail. The Norman language is a curious fusion: the structure is that of a Romance language (derived from Latin), but to this was added considerable Nordic vocabulary. Bear in mind that the Normans were so called because they were by origin Norsemen: Vikings who had come south. Jèrriais has two broad dialects, western and eastern. It may come as a surprise to find that an island just nine miles across would have two recognisably different dialects, but travel across the island was for centuries a difficult exercise; it was said that in 1800 the island had the worst roads in Europe. Consequently, not only were there the two major dialect groups, but also small isolated pockets (such as La Moye, at the southwest corner of the island) where distinctive forms of pronunciation and vocabulary developed. There is a considerable corpus of written Jèrriais, including over 900 articles written for the Jersey Evening Post by George Le Feuvre (who wrote as George d'La Forge), and proceedings of L'Assembliée d'Jèrriais, a group founded in 1952 which gathered speakers of the language from across the island. Jèrriais was never the language of the Royal Court; documentation from there (and subsequently from the States) was always written in what might be called "proper French". Equally, the business of the church (and later the chapels) was done in French. But for most of the latter part of the 19th and early 20th century, most people in rural Jersey were trilingual: English was the language of commerce, French the language of church and law, and Jèrriais the language that did for everything else.
The rise of the school certificate and broadcast media changed this, and Jèrriais was largely squeezed out (although it had a brief renaissance during the Occupation, as the German forces could not understand what was being said!). Thanks to the efforts first of L'Assembliée d'Jèrriais and subsequently of L'Office du Jèrriais, the language has not yet become extinct, and it is thought that several thousand local inhabitants can speak at least a minimum amount of Jèrriais, with thousands more able to recognise it and grasp the gist of its meaning. However, the number of people who are first-language speakers continues to decline, and is now believed to be barely above one hundred.

The rise of English

English only became a significant language in Jersey after the beginning of the 19th century. As has already been mentioned, the roads in Jersey were very poor in 1800, but shortly thereafter General George Don was appointed as the Island's Lieutenant-Governor. General Don set in motion a substantial programme of infrastructure works designed to make defending Jersey from French invasion easier; this included various fortification works and a network of new roads. The island could not supply sufficient manpower; consequently a massive immigration of English people began. Between 1821 and 1851 the population doubled, from about 28,600 to just over 57,000. Until this point the English Parliament in Westminster had been largely content to let Jersey run its own affairs, not least because the practicalities of maintaining links across a stretch of water patrolled by the hostile French Navy were difficult, to say the least. However, as the threat of war with France receded and communications became easier, Parliament rapidly discovered that the English community in Jersey were deeply unhappy, and with good reason. Their most vocal representative was Abraham Jones Le Cras, a man of Jersey descent, but born and raised in England.
Le Cras wrote a book in 1839, entitled The Laws Customs and Privileges and their Administration in the Island of Jersey, which contained a 52-clause petition to Parliament. Royal Commissions were appointed in 1840, 1846 and 1853 to investigate matters, and step by step the States yielded to some of Le Cras' demands. Court proceedings were translated into English; elected Deputies came to the States, some of whom spoke neither French nor Jèrriais. Gradually the influence of English increased. But old habits died very hard. Property transactions continued to be recorded in French in the Public Registry right through the 19th century and nearly all the way through the 20th; only in about 1990 did contracts finally come to be written in English. Similarly, a large corpus of Jersey legislation still exists only in French. The use of French was not entirely limited to law and the church. There were a series of migrations to Jersey from France; the last and largest of these began in about 1850. The vast majority of immigrants were agricultural labourers who had left Brittany and western Normandy in search of better wages and working conditions. By working hard they began to acquire property and became farmers employing labourers rather than hired men and women, and naturally some of these were French also (it was noted many years ago, by a Jerseyman, that the French were prepared to work for each other, but the Jerseyman was only prepared to work for himself). At its peak, between 1890 and 1914, the French-born population numbered about one in ten. The population was significantly reduced in 1914, as many men were called up to the French Army and never returned, and in 1920 the States clamped down hard on further immigration, allowing only a strictly-controlled flow of seasonal labour. Gradually the remaining population integrated into the existing population of Jersey. French is no longer the island's second language.
When Britain joined the EEC (as it then was) in 1973, the traditional benefits in employing French contract labour disappeared. The gap was filled largely by Portuguese nationals escaping the last years of the Salazar dictatorship. Most came from Madeira, at that time an impoverished province neglected by the authorities. It is now thought that about one tenth of the population of Jersey is of Portuguese origin. More recently, in line with the rest of the UK, there has been a significant influx of "new Europeans", most of them originating in Poland and Romania.
America’s storied leadership in promoting liberty and individual rights began long before we became a nation. It began when the first persecuted immigrants came here to find religious freedom. Their belief in a natural, God-given right to practice religion freely grew out of centuries-old struggles of people to secure a right to life, liberty, and property under the rule of law, not the whim of rulers. How should Americans think about human rights today? Since the 13th century, people in England had fought for and won a number of agreements with their kings to secure certain liberties. In 1607, King James I granted the colony of Jamestown a royal charter assuring its residents of “all the liberties as if they had been abiding and born within this our realm of England or any other of our dominions.” Growing out of these historic liberties and the development of the rule of law is the Founders’ deeper recognition of inherent natural rights as the foundation of human freedom. Hence, the Virginia Bill of Rights, crafted by Founding Fathers George Mason and James Madison and adopted in June 1776, began with these familiar words:

That all men are by nature equally free and independent, and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety. That all power is vested in, and consequently derived from, the people; that magistrates are their trustees and servants, and at all times amenable to them.

This principle of “inherent” or inalienable rights outside of and despite government imbues our Declaration of Independence and invigorates our Constitution. Since our founding, these important documents have provided the basis for our social order and American jurisprudence.
They have guided our struggles to overcome slavery and discrimination by race, religion, sex, or birth. And they have guided our engagement abroad. Yet this principle of inalienable natural rights—fundamental rights that government neither creates nor can take away—isn’t the same as the thoroughly modern idea of “human rights.” Although both are universal, natural rights most emphatically do not come from government. Government only secures these rights, that is, creates the political conditions that allow one to exercise them. Human rights, as popularly understood, are bestowed by the state or governing body.

The sacred rights of mankind ... are written ... in the whole volume of human nature, by the hand of the divinity itself; and can never be erased or obscured by mortal power.
– Alexander Hamilton, February 23, 1775

In addition, natural rights, being natural, do not change over time. All men, at all times, have had the same right to life, liberty, and the pursuit of happiness. Human rights, on the other hand, constantly change. A whole cottage industry has sprung up to advance a bevy of new “economic and social rights” conceived of, defined by, and promoted by activists, governments, and international bureaucrats. Many Americans are unaware that these manufactured rights are not the same as the natural rights endowed by God or nature. What are often called “human rights” today are social constructs. They either sound like high-minded aspirations—equal rights for women and minorities—or like trivial and harmless concepts such as the “right to leisure.” These concepts are in fact neither high-minded nor harmless: they are fundamentally incompatible with the Founders’ understanding of natural rights. First, they are largely goals that government cannot guarantee. Take the “right to development.” Government can strive to level the playing field so everyone has an opportunity to improve their lives.
But the power it would need to guarantee that no one is poor would be so great it could crush the natural rights and liberty of individuals. That is the sad lesson of Communism in state-controlled societies, which limit individual freedom and civil liberties so as to provide a “guaranteed” level of income, or some other high-minded social goal, for everyone. Whereas natural rights (such as life, liberty, and property) are rights that government protects from infringement by others, invented rights (such as “housing” and “leisure”) are things that government is obligated to provide. And it does so by redistribution of private wealth. Second, they suffer from confusion over what a right really is or should be. Governments that pretend to give and safeguard rights to certain groups inevitably endanger individual rights held by everyone. If your social value is defined by your sex, class, or race, then your intrinsic value as a person is lost. Your natural right to freedom of speech or assembly is tangible and real. Government can protect it without infringing on someone else’s rights. But trying to guarantee a social group’s right to something inevitably puts them at odds with other groups, and both are reduced to petitioning political favors from government. A woman’s right to freedom of speech is no less important than a man’s, but that’s because she’s human, not because she’s a woman. The same confusion exists with “economic rights.” The U.N. and countries often define them as a guarantee to a certain wage or income. But governments don’t create wealth any more than they create natural rights. You indeed have a right to property, but it’s because of your natural right to keep what you gain through your efforts in the first place. When the U.N. or government mistakenly defines “economic rights” to things it cannot guarantee, it ends up creating conditions that deny people the very liberties and property rights it should protect. 
Lofty sounding aspirations can be seductive. Who would not want to eliminate poverty in the world? Who would not want women, children, and minorities to live full and complete lives in a free society? No one of conscience would object to any of these as outcomes. But they are not what motivate the modern human rights proponents. To understand why not, you only have to consider the United Nations. Its institutions, like the Human Rights Council, have become distractions from (or worse, obstacles to) advancing the kinds of liberalizing policies countries really need. Why? Because the U.N. is populated with nations that abuse the principles in its Charter. Socialist, Communist, and authoritarian regimes consider basic civil, political, and economic freedoms as real threats to their hold on power. They claim to promote collective rights to advance the “common good,” but they exploit these rights politically to maintain control. The one-nation, one-vote rule at the U.N. and other international forums affords them a legitimacy they do not deserve and a venue for waging their ideological battle with true democracies. Americans likewise should be wary of international human rights treaties. The goals of such treaties may be laudable. But all too often they fail to deliver on their promises: many nations sign them with no intention of changing their ways. Saudi Arabia is a perfect example; it signed the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), yet it treats women as second-class citizens who need their husband’s permission to travel and who are still forbidden to drive. The U.S. is far more committed to enforcing the terms and conditions of the treaties it ratifies. It has not signed CEDAW, in part, because America’s legal protections of individual rights are already well established. Democracy, freedom, human rights have come to have a definite meaning ... 
which we must not allow any nation to so change that they are made synonymous with suppression and dictatorship.
– Eleanor Roosevelt, September 28, 1948

Ratifying treaties like CEDAW would also reduce our ability to govern ourselves by undermining our national sovereignty. By ratifying CEDAW, the U.S. would subject its laws to a body of international “experts” who monitor the treaty’s implementation. Those experts would try to impose their interpretation of rights on Americans, in defiance of the rule of law and our Constitution. The CEDAW Committee, for example, has instructed some countries to legalize prostitution and others to give prostitutes full benefits like any other employee. And it discourages references to “motherhood” as “stereotypical.” For Americans, the protection of human rights is fundamental to liberty. We understand that our rights are not guaranteed simply because we join the U.N. or sign a treaty. Rather, they are guaranteed by our Constitution and our laws, which ensure that no one will be deprived of his rights without due process of law. The best way to ensure every generation enjoys the liberties and civil rights we fought so hard for throughout our history is to preserve the Constitution. The United States is uniquely situated to be the global leader on behalf of fundamental and traditional freedoms because it is the only nation of the world explicitly founded on the creed of individual liberty, natural rights, and constitutional government. It is an exceptional nation. But it will remain so only if succeeding generations are committed to this creed. The best way to promote human rights as we understand them is not through international treaties and institutions. It is through a properly balanced political system that ensures equal justice and limits the state’s role to only what is necessary to secure our rights.
It is by standing up for victims whose natural rights are violated around the world and assisting them when we can, and by pointing out other states’ failings to live up to their treaty commitments. It is by remaining the beacon of liberty for people everywhere—that “shining city on the hill” that Ronald Reagan described as he confronted the evils of Communist states. We should be proud of our record on rights. In our comparatively short history, we’ve brought more prosperity and equality to more people than any other nation in history. We back our words with our lives, our treasures, and our future. We have corrected the great flaw of our past—a bloody Civil War ended slavery in America—and don’t need to go around the world apologizing for our nation’s history. We should never allow the U.N. or anyone to abuse the mantra of human rights to undermine our sovereign constitutional system, which not only protects our God-given rights and the liberty to govern ourselves but also offers the best model for others to do the same. And so the question remains, as Reagan once asked, “If we are not to shoulder the burdens of leadership in the free world, who will?” For all that it represents and continues to defend in the world, America remains liberty’s last best hope.

Kim R. Holmes, Ph.D., is Vice President of Foreign and Defense Policy Studies and Director, The Kathryn and Shelby Cullom Davis Institute for International Studies at The Heritage Foundation.

Isaiah Berlin, Two Concepts of Liberty
The English philosopher Isaiah Berlin defined the concepts of negative and positive liberty. In this case, negative is good: as in the United States, it means freedom from government interference. Positive liberty, by contrast, means government intervention to guarantee outcomes. Berlin points out that dictators have often used the excuse of positive liberty to deprive the people of their freedom.
Thomas West and William Schambra, The Progressive Movement and the Transformation of American Politics The Founding Fathers believed that all men naturally possess inalienable rights. The Progressives of the late nineteenth century did not believe that natural rights exist. Instead, they believed that the government created and promoted freedoms. In Berlin’s terms, they believed in positive liberty. As West and Schambra explain, the Progressives therefore rejected the understanding of freedom on which America was founded. Kim R. Holmes, Economic Freedom as a Human Right Economic freedom offers people around the world the best hope for achieving healthier, safer, wealthier, and more productive lives. But economic freedom is not just a good idea in practice. As Holmes emphasizes, it is also a natural right, and indivisible from the broader idea of liberty. Natan Sharansky, Is Freedom For Everyone? In his inaugural Margaret Thatcher Center for Freedom Lecture, Sharansky, a Soviet dissident who found freedom in Israel, speaks eloquently to the legacy of Ronald Reagan and Margaret Thatcher, and proclaims his belief that “freedom is a cause for everybody.” Jeane J. Kirkpatrick, “Dictatorships and Double Standards,” Commentary, November 1979 This essay inspired Ronald Reagan, after he won the 1980 presidential election, to make Kirkpatrick the U.S. Ambassador to the United Nations. It remains timely today because it exposes the folly of pretending that totalitarian insurgents, or the United Nations, are on the right side of history and that American foreign policy must therefore seek to win their favor. UNITED NATIONS. Brett Schaefer and Steven Groves, “The U.S. Universal Periodic Review: Flawed from the Start,” August 26, 2010. When the Obama administration joined the U.N.’s Human Rights Council, it subjected the U.S. to the Council’s system of regular reviews. Schaefer and Groves explain why the U.S. 
should not have joined the Council, why the review process is flawed, and why it must be fundamentally reformed if it is to play any meaningful role in the advance of liberty. THE LANGUAGE OF FREEDOM. The Heritage Foundation, “Reclaiming the Language of Freedom at the United Nations: A Guide for U.S. Policymakers,” September 6, 2006. The American idea of freedom is more and more poorly understood, not just by foreign countries and diplomats, but by some Americans as well. At the U.N., and everywhere else, the U.S. must reinvigorate the American tradition of freedom, which is universal and indivisible. WAR AGAINST TERRORISM. Lisa Curtis, “Championing Liberty Abroad to Counter Islamist Extremism,” February 9, 2011. Promoting democracy and liberty around the world has long been a core component of U.S. foreign policy. Such efforts are particularly important in Muslim-majority countries because the principles of liberal democratic governance are a powerful antidote to Islamist extremists’ message of intolerance, hatred, and repression.
Special education teachers play a vital role within the education system as a whole. They are responsible for motivating students with a variety of special needs, and they foster an environment of acceptance among students with mild to severe emotional, cognitive or physical challenges. Teachers with special education backgrounds work to instill life skills and basic literacy in their students, and they provide remedial instruction for each individual child in preschool, elementary, middle school and secondary school settings. Some special education teachers also work with toddlers and infants. They draw on a variety of educational techniques to support the social, behavioral and academic development of all of their students. Special education teachers hold more than 470,000 positions within the United States alone. Salaries vary by position in public and private academic settings, and they also vary widely within individual and social assistance agencies, hospitals, residential facilities and homebound settings.* Special education teachers often earn more when they also coach sports or lead extracurricular activities, and pay may be higher for summer education or other seasonal work. According to the Bureau of Labor Statistics, the recent median special education teacher salary is around $50,000 for those who work in preschool, kindergarten and elementary settings. The middle 50% of special education teachers earn around $40,400 to more than $63,000.* According to the bureau, special education teachers in the lower 10% earn a little less than $33,770. 
However, those in the upper 10% earn more than $78,900.* *According to the BLS, http://www.bls.gov/oco/ According to CollegeGrad.com, the median special education teacher salary is around $45,699 in secondary education settings, with the middle 50% earning roughly $36,900 to $59,300 per year. Job Description and Outlook The special education teacher sets personal goals for each student in the form of an Individualized Education Plan, or IEP. Special education teachers increasingly work in inclusive settings, creating step-by-step goals that help students meet the demands of the next grade level. Although the rewards of establishing meaningful relationships with students are great, the workload of special education teachers can be heavy. Besides the emotional and physical demands of educating students with special needs, they must also spend many hours completing documents that detail each student's progress. Most special education teachers work on a 10-month schedule. The need for these educators is projected to rise, in part because parents are expected to demand more assistance in order to keep their children at the optimal level of education. Specifically, the need for qualified special education teachers is expected to increase by 17% between now and 2018, according to the Bureau of Labor Statistics. Replacement needs also create job openings: some special education teachers switch to general education, retire, or leave the field for entirely new careers. Even so, geographic location and specialty within the field may decrease or increase the need for special education teachers. 
For instance, the areas of greatest need for special education teachers include rural and inner-city locations. Teachers able to work with children with multiple or severe disabilities also have a greater chance of finding jobs in special education. Training and Education Requirements There are undergraduate, graduate, and doctoral levels of education for the special education teacher. The training for special education teachers is more extensive than that of regular teaching degree programs. Although most undergraduate programs require four years of general and specialized course work, some programs have added fifth-year or graduate-level requirements. In general, the last year of undergraduate course work is spent getting classroom training and supervision from a teacher with special education certification. Teachers with special education certification may work in a variety of fields, including music therapy, art therapy, para-education, speech and language therapy, audiology, administration of special education and many more. Teachers can obtain degrees with special certifications or specializations in general special education. In addition, board certification is available for teachers who wish to demonstrate a full level of commitment to employers, as well as dedication to peers, administrators and parents. Obtaining board certification requires five interdisciplinary subjects: - Special Education Eligibility - IEP Development Principles - Knowledge of Special Education Assessment Procedures - Review of Main Principles in Special Education - Knowledge of Response to Intervention (RTI) There is great benefit for special education teachers who get involved with professional associations, organizations and networks. 
These associations provide social and educational support: - National Association of Special Education Teachers - American Academy of Special Education Professionals - The Council for Exceptional Children (CEC) - National Association for Gifted Children (NAGC) - Technology, Reading, and Learning Difficulties (TRLD) Conference - Special Education-Learning Disabilities Association of America - CARS Plus – The Organization for Special Educators - National Center to Improve Recruitment and Retention of Qualified Personnel for Children with Disabilities.
First-Hand: Cryo CMOS and 40+ layer PC Boards - How Crazy is this? Contributed by: Tony Vacca How it started It was in the early 80's. Control Data (CDC) had just launched the CYBER - 205 with modest success and the team was now focused on the next generation machine, the 2XX as I recall. Speed, cost and meeting the schedule were all key objectives. Speed, because Cray Research under the guidance of Seymour Cray was setting milestones for Supercomputers with the Cray 1 and then the Cray 2. Cost, since Supercomputers were extremely expensive. Schedules, since the CYBER - 205 had established patience records as a machine that might never get out the door, and this must not be repeated. A conventional evolutionary approach for Integrated Circuit (IC) logic was initially selected. Motorola, with some prodding, agreed to launch an 8,000 gate equivalent ECL (emitter-coupled-logic - the circuitry of choice for high performance processing units) provided that Control Data do the actual circuit development. There were insufficient customers for Motorola to commit their resources to this lofty development. Motorola did, however, commit their advanced ECL processes to CDC and a joint team was formed with the two companies. Logic designers at the CDC Advanced Design Laboratory were given preliminary design rules based on computer device models and estimates of gate-per-chip densities. There was a natural follow-up of grumbling by the logic design team led by very experienced and innovative folks (Ray Kort, Maurice Hudson and Dave Hill to name three) but circuit designers had learned to accept this since logic designers always found the circuits to be too slow, with an insufficient quantity of gates and pins (I/O ports) per die. There was a lot of cooperation too. Basic building blocks were defined by the logic designers - gate functionality, register functionality, etc. 
From this set of preliminary rules, function blocks were defined and the capacity of reasonably-sized Printed Circuit (PC) boards was defined. The initial design using the CYBER - 205 based architecture was launched. In parallel with this effort, and in the same design group (i.e., circuit, packaging, PC board and the newly formed CAD group - tools for layout and design of chips and boards) - chief chip design engineer Randy Bach was assigned to develop an advanced CMOS chip for the Canadian Computer Development organization. At this time, the early 80's, CMOS was in its infancy, being used for memory devices, low performance peripherals and also for low performance microprocessors (5 to 10 MHz clock speeds). The design contained 5,000 gates plus appropriate input and output communication devices. Gate arrays for CMOS were also nearly non-existent, so Randy and his small team of two assistants developed a cell library and worked closely with the Canadian Development team to meet their objectives as well. This effort was completely separate from the ECL based gate array to be used for the next generation Supercomputer. The product was developed for a low cost application. It was customary for Neil Lincoln - chief architect, Dale Handy - manufacturing manager and me to go off to lunch every 8 to 10 days to discuss status at either Arthur Treacher's Fish & Chips or Zantigo's (high class - NOT) fast food restaurants. As a side note, both of these fast food places disappeared during ETA Systems' brief duration. Zantigo's has returned (I think because they know it is safe now that the three of us cannot visit together any longer - Neil unfortunately passed on a few years ago). At one of these meetings, Neil had "news" for me. Simply stated, the gate array in active co-development with Motorola had unacceptable goals. The chip had too few I/O pins, consumed too much power and contained insufficient gates. 
In addition, he completed a cost model which indicated an unacceptable cost figure for the CPU. He also determined that the CPU (some 3 Million gates) had to be assembled on a single board. "It was time for this goal to be reached". He also reached the conclusion that a proper logic design required at least 15,000 gates per chip to meet these goals. The logic designers had gotten to him I surmised. Schedules, Neil reminded us, could not be altered - and that was that. To soften the blow he bought lunch that day, three Cokes and three orders of fish and chips - Neil's was a large order. The trip back to the lab was pretty quiet, fortunately short since our eating places were all very close to the lab. That afternoon, I assembled the key folks - I might miss one or two but Randy Bach, Doug Carlson, Dave Resnick and John Ketzler were four that I recall now. Doug was a mechanical engineer that I assigned the Motorola project to because of his management skills - something he probably never forgave me for - John was the key circuit engineer on the Motorola project and Dave was and still is a very versatile and perceptive engineer. Doug and I would inform Motorola of the decision not to continue. The team would package up what was accomplished and turn it over to Motorola to carry the ball forward if they wished. As a side note, Motorola and Cray did continue the design. It was the circuit design used in the Cray C90, a very successful Cray Research Supercomputer. The meeting turned to what were the next steps. The key challenges that emerged were: - IC Technology that could meet the new lofty goals - The PC board technology required to meet a single board CPU - Packaging and interconnect technology required to support the two above requirements - Computer Aided Design (CAD) technology necessary to accurately design IC and PCB technologies - Suppliers for all - do they exist? 
- What additional internal resources were required to achieve objectives - System packaging beyond a single CPU (memory, peripherals, I/O, etc.) - Testing of complex IC technology and complex PCB technology Before getting into the details of how decisions were made and how the ETA Systems technology “kit” was selected and developed, a list of noteworthy accomplishments is in order: - First Industry competitive CMOS CPU Since 1995 to the present (beginning 12 years after the technology selection by ETA Systems, I might add) ALL HPC (High Performance Computers) have been developed and manufactured using CMOS IC technology. Until as late as 2000, bipolar technology (higher power, more costly to manufacture and lower gate count per chip) dominated high performance computers throughout the world. - First Industry Single Board CPU The chip density (gates per chip) allowed by advanced CMOS, the use of computer aided design tools for optimum layout and simulation, the successful design of a 45 layer advanced Printed Circuit board (you read it right - 45 layers) and innovative chip attachment and cooling permitted a single processor containing nearly 3 million gates to be packaged on a single board - First Industry system to be designed with self-test CPU Processing units (≈3 million gates each) were validated for functionality and performance in less than 4 hours. Any interconnect errors were recorded and allowed chip-to-chip replacement to occur in minimal time. Other CPU checkout during this same period required weeks to months to check out and validate a processing unit. Incoming testing of the logic IC chip (function and performance) also used the same self-test innovations. - First Industry production Liquid Nitrogen CPU The ETA Systems CPU was immersed in Liquid Nitrogen – 77 degrees Kelvin – to improve performance to greater than two times that of CMOS technology operated at room temperature – 300 degrees Kelvin. 
- First system at CDC to fully utilize Computer Design Software to design chips and boards, validate logic design and auto-diagnostic test the system with synergistic tools This permitted checkout of a CPU to be completed in less than 4 hours. Manufacturing costs were greatly reduced. This technique was also used at the IC supplier and greatly reduced probe test hardware and software. - First Industry system to have multiple cost designs from a single design effort The performance range of the ETA Systems products was greater than 24:1 (an 8-processor system operating at a 7 nanosecond clock period and a single processor system operating at 24 nanoseconds). Processors were manufactured, tested and validated from a single manufacturing line using identical components. (IC chips were performance sorted using auto self-test.) Product differences began at the system packaging level. Boring into details Any technology kit must be driven by a customer need. In the case of Supercomputers the craving for increased computer performance at a lower overall cost was the deciding factor. In any Supercomputer company a combination of marketing requirements, architecture innovations and logic design demands dictates the initial objectives of the hardware circuit and packaging organization. I state “initial” since once the objectives are digested and key technologies are evaluated for the time frame addressed, compromises are the norm. In the case of the ETA Systems technology selections in the early 1980’s, this was the strategy implemented. The following paragraphs sequence the thought process and the technology selection strategy utilized. Integrated Circuit selection The objectives, listed in earlier paragraphs, were first integrated into the architecture and logic design requirements. A market survey of key integrated circuit suppliers was conducted with emphasis on what was in development and planned for product introduction – not what was available at the time of the survey. 
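The "greater than 24:1" performance range claimed above is simple arithmetic worth checking. This is a minimal sketch assuming ideal scaling (work per clock proportional to processor count), which is the most straightforward reading of the claim; the function name and numbers are illustrative, not from ETA Systems documentation:

```python
# Relative performance of an n-processor machine at clock_ns versus a
# baseline single processor at 24 ns, assuming ideal scaling.
def relative_performance(n_procs, clock_ns, base_procs=1, base_clock_ns=24.0):
    return (n_procs / base_procs) * (base_clock_ns / clock_ns)

# 8 CPUs at a 7 ns clock vs 1 CPU at a 24 ns clock:
ratio = relative_performance(8, 7.0)
print(f"8 CPUs @ 7 ns vs 1 CPU @ 24 ns: {ratio:.1f}:1")
```

Under this idealization the ratio works out to about 27:1, comfortably supporting the "greater than 24:1" figure.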
A risk assessment was made. Primary focus was on the most dynamic technology, the IC logic technology. All decisions as to volume requirements, pins, packaging, etc. resulted from what was determined by this survey and risk analysis. Merging the logic design objectives (gates, bandwidth and performance of key functions) was next. An ECL (emitter coupled logic) high performance bipolar gate array using Motorola advanced IC technology was selected. Since Motorola was not fully staffed to begin the actual product development (application) but did have the process development underway, a cooperative development agreement was struck between the two companies (this occurred between Motorola and Control Data since ETA Systems had not yet been formed). The design called for basic logic cells to be incorporated into a larger version of their existing gate array, advancing the process for increased performance and chip size for increased gate capacity. The existing gate or function array utilized approximately 2,500 gates (and was used as the primary gate array for Cray Research's very popular Y-MP Supercomputer) and the planned gate array would contain in excess of 8,000 equivalent gates. Logic cell libraries were agreed to (acceptable to both Motorola for the general market and to CDC for the logic designs). Pin counts (for power, ground and input/output logic communications) were established and power consumption estimates were made. Once these parameters were established, board size, power systems and thermal control were evaluated in a trade-off give-and-take. Features of Printed Circuit Boards (line widths, spacing, interconnect vias and number of layers) were compared to board size capacities, laminating press capabilities, drill designs and printed PC board processing limits. IC packaging limits (i.e., minimum package size, pin spacing, thermal removal, etc.) were evaluated in parallel with PC Board limits. 
The chip design began, the cell library began and the packaging began once all parameters (pins, power consumption and die size objectives) were agreed to. Printed circuit board experiments also began. Once feasibility was established and practical limits were determined (original goals could be met as to physical design and performance, based on IC modeling and extrapolation from previously established functional systems), a preliminary specification was presented to the architects and logic designers for review. From initial design data, logic design based on the parameters provided established a physical size for the Central Processing Unit or CPU, the heart of the system. A multiple board processor was required. This placed additional constraints on packaging since within a single processor all distances between circuits are crucial. Three-dimensional packaging concepts were considered. Three-dimensional packaging effectively meant a “sandwich” of multiple boards, with board-to-board interconnects distributed throughout the area – not exclusively at the periphery of the board – such that chips on each of the boards would minimize distances between them. In addition, power consumption estimates were made; thermal removal paths and techniques were considered. A cost model was generated as well. All of these factors resulted in a preliminary estimate of the CPU volume. From the introduction portion of the document, you already know that this was rejected - more to follow for sure. In parallel with these efforts, memory design was underway. Less freedom was available to memory since the basic semiconductor device could not be altered to accommodate specific users. There were a few packaging alternatives, very few, and device configurations (word-bit architecture, pin numbering, power considerations, etc.) were dictated by the industry. 
Since memory design has its own objectives for cost, reliability and performance, this effort could continue quite independently with one exception: the packaging of the total system must be synergistic and compatible. A crucial parameter of this is the interconnect mechanism between processors and memory. A hardware system cost model was established – not only for current cost considerations but also for volume cost estimates, based on learning curves, over the life of the system. The chief architect, after careful review, rejected the design; this was covered in the introduction. Three key reasons were cited: performance would be impacted due to the 8,000 gate limit (worst case logic paths could not reside in a single chip and multiple chip distances would increase the clock period); power consumption per CPU, although lower on a performance-ratio basis than previous generations, was too high when the total system size (including the multiprocessor objectives) was considered; and system cost appeared prohibitive – always a subjective issue but nevertheless a key component of the design. Reliability was also a concern since the pin count per CPU, although much reduced from previous designs, remained high. The architecture was committed to four CPUs (max) per system so the interconnect "bar" was raised. Back to the drawing board Bipolar technology refers to conventional NPN and PNP transistors operating in a non-saturating mode (collector-base). By not saturating the operating transistors (not allowing the base voltage to rise above the collector voltage) the switching characteristics were improved and balanced (off and on logic levels had identical delays). In addition, the non-saturating circuitry – titled ECL for Emitter Coupled Logic – provided the TRUE and COMPLEMENT outputs for each logic function (i.e., AND & NAND, OR & NOR, etc.). 
This provided advantages to logicians designing complex Boolean functions (ADD units, MULTIPLY units, DIVIDE units, etc.). Under the category of “no free ride,” ECL circuitry consumed more power than the more popular but much slower saturating logic circuitry (TTL – transistor-transistor logic). Other performance improvements for integrated versions of ECL logic circuitry included replacing conventional junction isolation between circuit devices on a single die with oxide isolation between circuits (lower capacitance per circuit, so less charging and discharging when logic levels switched). CMOS (Complementary Metal Oxide Semiconductor) circuitry, especially at the time of ETA Systems, was a simpler and more efficient logic circuit. This form of logic also had a simpler process. Stacking P channel and N channel transistors in series between voltage bus rails defines a single complementary gate. Functionality of the logic devices is much more forgiving of process variations due to the larger voltage swing and the use of only active transistors to define the circuitry (no resistors, diodes, etc.). The physical size of a logic function, when compared to a bipolar equivalent, is significantly smaller, resulting in an increase in circuitry per equivalent die (chip) size. CMOS technology also consumed power ONLY when the circuit was switching (changing states), so power consumption was directly proportional to the frequency at which it was operating (P = CV²f). ECL circuitry, by contrast, consumed approximately the same power – while switching or in a quiescent state. (Later forms of CMOS – especially those designed in the early 2000s and beyond – had increased power consumption, primarily caused by increased bulk leakage currents resulting from lithography processes with features smaller than 90 nanometers.) Technology at the time of the development of the ETA Supercomputers had minimum features of 1,200 nanometers. 
(In 2009, by contrast, the production capability is 45 nanometers.) Advantages of CMOS were obvious: more circuits per given chip area, lower power consumption and higher functional yield. It is important to stress “functional yield”. The CMOS devices functioned over a much larger range of processing variations (> 50% vs. < 15% to 25% for ECL). Performance variations for a given process were approximately 2 to 3 times for CMOS and 20% to 30% for ECL. For this reason CMOS devices were sold at a much lower performance than any bipolar counterpart. (That is, if the product was specified to accommodate the entire functional lot – wafers processed at the same time – more IC devices yielded.) There is one other key difference in defining performance between Bipolar and CMOS devices. For ECL (or any other bipolar device) the maximum operating frequency is defined, in part, by the base width – the physical distance between the emitter and collector of the transistor. This is determined by the diffusion or implant of the emitter, is controlled in the vertical direction, and is limited by process control that is quite precise. This parameter is very thin, and the maximum frequency is inversely proportional to the base width. For CMOS the gate length defines the critical performance parameter. Gate length is defined by mask optic limitations for any generation of processes. Bipolar devices in the 1980’s and well into the latter half of the 1990’s, therefore, had higher maximum operating frequencies than their CMOS counterparts. As capital equipment – primarily the optics used to generate masking and etching capabilities – defined smaller and smaller geometries, CMOS technology improved dramatically in performance. This was a result of smaller gate lengths, but each generation also had smaller devices, resulting in lower capacitive loading and lower time constants to charge and discharge. 
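The dynamic-power relation quoted earlier (P = CV²f) is easy to explore numerically, and it captures exactly why CMOS power scales with frequency while ECL's does not. This is a minimal sketch; the capacitance and voltage values are illustrative assumptions of the era, not ETA Systems figures:

```python
# CMOS dynamic power: the switched capacitance is charged and
# discharged each cycle, so P = C * V^2 * f.
# All component values below are illustrative assumptions.

def cmos_dynamic_power(c_switched, v_supply, freq_hz):
    """Return dynamic power in watts given switched capacitance (F),
    supply voltage (V) and switching frequency (Hz)."""
    return c_switched * v_supply ** 2 * freq_hz

C = 2e-9   # assume ~2 nF of total switched capacitance per chip
V = 5.0    # a typical 5 V supply of the period
for f in (10e6, 40e6, 80e6):
    print(f"{f / 1e6:5.0f} MHz -> {cmos_dynamic_power(C, V, f):.1f} W")
```

The linear dependence on f and quadratic dependence on V also explain the trade-off described later in the account: raising the supply voltage buys speed, but the power bill grows as the square of that voltage.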
During the time of the ETA Systems Supercomputer development, CMOS technology had not seen the advantages that bipolar devices could realize – but the potential for future improvements was obvious, and projections clearly indicated that by the second half of the 1990’s (nearly 10 years after the first ETA Systems Supercomputer would be available), CMOS would overtake Bipolar in the last and most important parameter – performance. To restate this: the IC industry was transitioning to CMOS technology, and more funding at the device and equipment level was being expended to accommodate new markets focused on the potential of CMOS than was being expended on Bipolar devices. Bipolar technology was stretched to a practical limit for the time frame in question. The IC industry, therefore, had only one other technology candidate, CMOS, which was, in 1983, used exclusively for lower cost and considerably lower performance applications, and for memory device technology, where more bits per die could be fabricated at the expense of the performance of the Bipolar counterpart(s). The impressive characteristics of CMOS technology at this time were lower power consumption per function, smaller size per logic function and lower cost per die, the latter due to two key factors: smaller physical size per function meant more logical functions per unit of area, and higher chip yield – chip functionality per wafer manufactured – resulted from the reduced number of processing steps needed to generate CMOS devices. That was the good news. The concern was system performance. While bipolar technology had set the standard for clock periods of 10 nsec for Supercomputer architectures such as the ETA Systems projection, CMOS was at least 5 times slower – in most cases 10 to 20 times slower for equivalent architectures. Based on this parameter alone, CMOS was not a candidate for Supercomputers in the 1989-1990 time frame (the time frame in which the ETA Systems Supercomputer would be in high volume production). 
The next steps for CDC (recall that at this time CDC still had a Supercomputer Division) were dramatic and at times emotional. First, the team had to discard the ECL design and terminate the effort with Motorola. This was very difficult since, first, both companies depended on each other and, second, all objectives of the ECL product were being met within the specifications established. CDC (the team which later became ETA Systems) provided Motorola with all of the design details to date. Considerable effort was made to ensure that the program was successful at Motorola. A sidelight to this discussion – Motorola completed this product as an industry product. Cray Research Inc. (the key competition and leader of the Supercomputer market) engaged with Motorola to successfully complete this complex IC development for a product announced in the late 1980’s. The product (Cray C-90), under the leadership of Les Davis, Steve Nelson and other notable scientists (a key circuit designer was Mark Birrittella), became another very successful supercomputer product developed and manufactured by Cray Research Inc. Next, a full evaluation of all technology candidates occurred. CMOS futures were explored in depth. GaAs technology was also evaluated. Alternative ECL (bipolar) candidates were also considered. CMOS was viewed as the technology of the future, but the future was beyond the time frame necessary for product introduction. Key events that led to the decision to use CMOS technology: Moore’s law (coined by the great innovator and co-founder of Intel, Gordon Moore) stated that IC (CMOS) technology would double in performance and density every 18 months to two years. The actual Moore’s law may have been stated somewhat differently, but this captured all the project cared about. To achieve this predicted growth, several parameters had to occur: - The die size would increase (more gates per manufactured chip). 
- Features on the chip (metal widths and spaces to interconnect devices, and actual device parameters) would be reduced every 18 months to 2 years. Reducing feature sizes had two positive results for the goals of ETA Systems: increased performance and more gates per die. - The technology would gain broad industry popularity – this would mean that capital equipment would keep pace with the "law"; applications would increase, thus increasing volume, lowering cost and increasing performance; and more applications and industries would drive CMOS technology – the Supercomputer industry alone could not drive such a large industry. Key industry activities also emerged at this time: - CDC validated operational performance gains by operating CMOS technology in a cryogenic environment. Several ring oscillator configurations generated with the 5,000-gate chip discussed earlier were dipped in a liquid Nitrogen thermos jug, expecting to witness the shattering of the silicon and the detachment of the solder joints attached to the oscilloscope, only to find the frequency of the ring oscillators double and the system operate for weeks until we turned off the experiment. Analytical analysis applied to the silicon design validated the research done previously by others. - Key US Government agencies began a technology acceleration program based on CMOS technology – the Very High Speed Integrated Circuits (VHSIC) program under the direction of the Army, Navy and Air Force certainly captured our attention. - Honeywell, one of the participants in the VHSIC program, held a technology luncheon at an IEEE symposium in which they presented an 11,000-gate CMOS development effort. Attendees from CDC – especially the key designer, Randy Bach – were impressed with the effort. 
The chip was certainly larger than any that had been developed to date, and the performance was accelerated beyond what the conventional IC industry predicted for the 1988 time frame (the introduction date set for the ETA Systems Supercomputer – then the next-generation CDC Supercomputer). Honeywell was a recipient of one of the VHSIC contracts. - Logicians and architects back at CDC – led by Neil Lincoln (chief architect), Ray Kort, Maurice Hudson, Dave Hill and others – determined that a minimum gate density of 15,000 gates per die would allow them to achieve a key objective: having a worst-case register-to-register clock path residing within a single chip. Some additional explanation is required here. There were technical reasons that the logicians wanted more, beyond the knee-jerk reaction that asking for 50% more than offered was a standard mode of operation for these guys. Each architecture configuration has a method of achieving its goals of applying computational instructions to problems. The number of gates that are connected in serial fashion between the input and output registers (and this is truly simplifying the problem) determines the clock period that is allowed. For the ETA Systems Supercomputer, therefore, it was determined that a functional unit clock path could reside within the boundary of the chip if the chip could provide 15,000 gates of logic to the designer. - Research into technology experiments uncovered significant performance features of CMOS technology. First of all, the technology was functional across a wide range of voltages and temperatures, but performance was significantly altered. The higher the operating voltage (within semiconductor constraints, of course), the higher the resulting performance. Unfortunately, the power consumption, although significantly lower than that of any alternative technology, increased as the square of the operating voltage. The lower the operating temperature of CMOS, the higher the performance as well. 
This factor was studied by others and carefully documented from 400 degrees Kelvin (100 degrees above room temperature) to 77 degrees Kelvin. (77 degrees Kelvin is the boiling point of liquid Nitrogen.) Summary of what was learned from this evaluation: - IC chips currently (four years before the need for an ETA Systems product) had a capacity of 11,000 gates. - These gates, when operated at liquid Nitrogen temperatures, would perform at least two times faster than at room temperature – not yet validated at CDC. - 15,000 useable gates were required per chip to meet logic designer chip boundary requirements. - If Moore's law was applied to these parameters, within the time frame required it was possible to achieve both gates-per-chip density and performance goals (if the system operated in a liquid Nitrogen environment). - There were at least two IC suppliers (those having contracts with the US government) that were pursuing CMOS as a high-performance and high gate-per-chip density technology (the other known corporation was TRW). - Computer Aided Design (CAD) tools were, during the 80's, in their infancy if one compares them to today's capabilities. To design, place cells within the matrix of gates provided on the IC chip, and route the interconnections of these cells accurately to the logic or Boolean design required by the logicians – and to clock period constraints – was a challenge. This challenge applied to board layout designs as well. Control Data Corporation (CDC) recognized the challenges and established a small but efficient and dedicated organization to address them. The industry had established a metric that to use CAD tools for gate or cell arrays, an additional 20% to 30% of gates were required. This meant that if the ETA Supercomputer required at least 15,000 useable gates to accomplish the necessary designs based on its architecture, an 18,000 to ≈20,000-gate capacity was required. 
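A back-of-the-envelope projection shows why the gate-density target looked reachable on this schedule. This is only a sketch of the arithmetic, not the actual analysis performed at the time; the 11,000-gate starting point, the 18-24 month doubling period, and the 18,000 to 20,000-gate requirement are taken from the text, and everything else is illustrative:

```python
# Rough Moore's-law projection: gates per die doubling every 18-24 months,
# starting from the ~11,000-gate chips available around 1983.

def projected_gates(start_gates, years_elapsed, doubling_period_years):
    """Gate capacity after compounding doublings over the elapsed years."""
    return start_gates * 2 ** (years_elapsed / doubling_period_years)

START_YEAR = 1983
start = 11_000          # gates per die, ~1983 (the Honeywell VHSIC array)
target = 20_000         # usable gates plus CAD overhead plus self-test

for period in (1.5, 2.0):            # optimistic vs. conservative doubling
    for year in (1985, 1986, 1987):
        g = projected_gates(start, year - START_YEAR, period)
        flag = "meets target" if g >= target else "short of target"
        print(f"doubling every {period} yr -> {year}: ~{g:,.0f} gates ({flag})")
```

Even on the conservative two-year doubling period, capacity clears the 20,000-gate requirement well before the 1987-1988 need date, which is consistent with the decision described above.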
The technology organization set as its objective a design of 20,000 gates plus the necessary circuitry to self-test each gate or cell array. This, compared to the gate array in development at Honeywell, was nearly 2 times the capacity (11,000 total gates vs. 20,000 total gates plus circuitry for self-test). The task was to convince Honeywell to project the next-generation size and layout rules and to accept an R&D effort that would allow CDC / ETA Systems to achieve its objectives. Honeywell, an innovative organization, took on the task after considerable discussion, with key requirements: - ETA Systems (we were now ETA Systems by the time these discussions reached negotiations) accept costs based on wafers processed, not functional chips. Honeywell would provide the necessary processing data to show that wafers were processed within process parameter specifications. - ETA Systems provide test equipment for wafer testing and test parameters for chip acceptance prior to packaging. - Both companies would share facilities and key resources and work as a single team – as "open a kimono" a relationship as one could ever imagine during this dynamic period of complex process developments within the IC industry. David Frankel was assigned as the ETA Systems interface and energetically took on the challenging task. - Self-test circuitry was designed into the basic cell array periphery. The area consumed by this custom set of pseudo-random generated logic and registers was less than 15% of the total chip area. (David Resnick, resident do-it-all, reduced to practice concepts explored by ex-CDC scientist Nick Van Brunt, who had left the company a year prior to the formation of ETA Systems.) This was one of many extraordinary contributions David made to ETA Systems. 
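Pseudo-random test-pattern generation of the sort described here is conventionally built from a linear-feedback shift register (LFSR). The sketch below is a generic 16-bit Fibonacci LFSR with a standard maximal-length tap set; the actual polynomial, register widths, and initialization operands used by ETA Systems and Honeywell are not recorded in this account:

```python
# Generic 16-bit Fibonacci LFSR, the usual pattern source behind
# pseudo-random built-in self-test (BIST). The taps x^16 + x^14 + x^13
# + x^11 + 1 are a commonly used maximal-length choice (period 2^16 - 1);
# the ETA Systems polynomial itself is not given in this account.

def lfsr16(seed, steps):
    """Yield successive 16-bit pseudo-random states from a nonzero seed."""
    state = seed & 0xFFFF
    for _ in range(steps):
        # XOR the tap bits: positions 16, 14, 13, 11 -> bit indices 15, 13, 12, 10
        bit = ((state >> 15) ^ (state >> 13) ^ (state >> 12) ^ (state >> 10)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        yield state

# A chip-acceptance flow would clock patterns like these into the logic
# under test and compare a compacted response signature against a
# known-good value computed from the design.
patterns = list(lfsr16(seed=0xACE1, steps=5))
print([hex(p) for p in patterns])
```

The appeal of the approach, then and now, is that a few hundred gates of shift register and XOR logic exercise the array far more cheaply than externally stored test vectors.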
In addition to providing self-test capability to accept or reject the circuitry – both functionality and performance sorting – the circuitry included in each 20,000-gate array had the capability to test the interconnect between circuits on the final PC board as well as circuit-to-I/O connections. When the logic design team first heard of this area "waste" on test circuits that could have been used for logic design, they lobbied for it to be removed in favor of more logic gates for function designs. Fortunately this request was not honored. IC validation, both at the supplier in wafer form and at ETA Systems in packaged chip configuration, coupled with the use of the same circuitry in manufacturing checkout to detect board opens and shorts between assembled circuits in both room-temperature and cryogenic environments, proved to be well worth this "waste" of circuitry area. Small, relatively inexpensive testing systems were designed by ETA Systems and provided to the supplier. The operands for initialization of the pseudo-random logic were also supplied for each design (chip type). Chip types (array design options) were carefully managed so as not to proliferate the chip types in the system. This was a new constraint placed on logic designers and was dealt with most professionally and responsibly by all participants once understood. The resultant chip total for the CPU (processing unit) was fewer than 150, while the chip types, including clock chips and all logic design chips, numbered fewer than 20 as best recalled. During the development cycle of the ETA Systems Supercomputer, Honeywell moved the manufacturing capability from a local Minneapolis facility to a state-of-the-art manufacturing facility in Colorado Springs, CO. The transition was very transparent to ETA Systems (with the exception of the traveling budget, of course). 
To accomplish this, team members from both companies acted as one in all decisions addressing scheduling and the timing of needs for the various chips, testing, packaging, etc. The open-book relationship was very beneficial to both companies. On one milestone occasion – when Honeywell successfully completed an initial order – Dave Frankel and I visited Honeywell, some 30 miles from the ETA Systems facility, and served cake and coffee to all designers and operators – it was below zero when this milestone was reached and no one cared. One feature that was incorporated into the chip was to allow next-generation critical processing parameters to be added to the existing design (the present chip layout). Although this would not fully exploit a new process (all parameters were not considered), key performance enhancements could be and were added to the present design. A key example was gate length, which was shrunk transparently to the physical chip and offered appreciable performance enhancements to the design. Chip design summary: The decision to utilize CMOS technology for the ETA Systems Supercomputer in the 1985 – 1988 time frame (prematurely by all industry metrics) resulted in the following additional "technology kit" decisions: - Addition of chip self-test. 
- Functionality established at wafer test, with functionality and performance sorting at ETA Systems - Computer layout tools that validated logic prior to chip release for fabrication - Requirement to operate the chip at 77 degrees Kelvin, in liquid Nitrogen - Packaging, interconnect and assembly decisions based on liquid Nitrogen operation challenges - Remote testing of the CPU because of liquid Nitrogen operation challenges - Logic design partitioning challenges to design within 15,000-gate per chip boundaries and a minimum of IC chip types Printed Circuit Board Design Selection: In the 1980s, the time frame of the ETA Systems Supercomputer development, printed circuit boards had maximum dimensions of approximately a square foot and total layer counts fewer than 20. (Layers provide power and ground stability, interconnect capability for the circuits attached to the board, as well as inputs and outputs to and from the board.) If these total layers are allocated properly, approximately 50% are used for interconnect and the remainder for power and ground. Positioning of the power and ground layers also serves to give the interconnect layers transmission-line characteristics, ensuring signal integrity throughout the board. During this period, a state-of-the-art printed circuit board was approximately one square foot of active circuitry and, as stated earlier, 20 layers or fewer, usually restricted to a total thickness of 0.063 inches. It was determined that a maximum of 150 chips would be required to design the ETA Systems Supercomputer CPU. Packaging the IC and interconnecting the chip to a PC board with minimum spacing between chips (some spacing was required to allow interconnects to all of the necessary layers) resulted in a 1.2 x 1.2 sq. inch "footprint". Doing the simple math results in a PC board of a minimum of approximately 220 sq. inches. 
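The footprint arithmetic is easy to reproduce. This sketch uses only the numbers quoted in the text (150 chips, 1.2-inch square footprints); the exact product comes to 216 square inches, which the account rounds up to "a minimum of 220":

```python
# Reproducing the board-size arithmetic from the text: 150 chips,
# each occupying a 1.2" x 1.2" footprint including routing clearance.

chips = 150
footprint_in = 1.2                      # inches on a side, per packaged chip

chip_area = footprint_in ** 2           # 1.44 sq in per chip
board_area = chips * chip_area          # minimum active board area

print(f"per-chip footprint: {chip_area:.2f} sq in")
print(f"minimum board area: {board_area:.0f} sq in")
```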
The number of total layers required to interconnect the 150 chips and the necessary input and output at the board periphery was determined to be 45. Looking at the design parameters of the board layers in more depth, and ensuring transmission-line features to preserve signal integrity, set the board thickness at slightly greater than 0.25 inches. This thickness was approximately three times that of high-end printed circuit boards produced in this time frame. With a board area greater than 1.5 times what could be produced, a thickness 300% of what was produced, and a layer count 2.5 times what was produced in this time frame, it was clear that the printed circuit board industry was not ready for the ETA Systems design! The design had other limitations. A key factor when designing PC boards is to ensure proper connection of the layers, i.e., connecting the chip pins to the board and to the proper interconnect layer in the board, and back to the proper receiving chip. These connections are made by drilling holes through the layers and plating the walls of the holes with copper for conduction. These are called plated-through holes, or PTH. A key parameter for ensuring that plating occurs in these holes is the hole depth-to-diameter ratio. The industry standard in this period (not much better today) was 6:1, i.e., the thickness of the board could be no more than 6 times the diameter of the hole. This ratio would dominate the size of the board: if it were used to design the board, the board area would increase by a factor greater than 9. Talk about piling on! Since the design was deemed not feasible, issues like cost and time to fabricate the board were not even addressed. Nestled into the design laboratory of Control Data Corporation was a small but very innovative printed circuit board prototype facility. The leader of this group, LeRoy Beckman, never said "no" to challenges. He just bit his pipe a little harder and tried not to snicker out loud. 
LeRoy kept his eyes and ears open for innovative alternatives to conventional board fabrication techniques and had displayed innovation (evolutionary in nature) in previous generations: embedded termination resistors in layers was one invention he brought to CDC when resistor termination took up too much board area; finer features than the industry was producing was another; and higher plated-through hole (PTH) ratios than the industry a third. New technologies in the printed circuit board industry were few and far between. The industry was set in its ways: subtractive etching of circuit layers (removing unwanted copper from a pre-copper-clad layer), conventional wet etch processes, and relatively simple assembly, i.e., lamination of layers with pressure. One inventor, Mr. Peter P. Pellegrino, arrived on the scene to discuss innovative, revolutionary and proven PC board processing. At first the claims appeared to be too good to be true: board size relatively unconstrained, aspect ratios exceeding 20:1 for PTH, and an additive process that permitted finer lines to be fabricated on individual layers. The lines were also embedded into the laminate, offering the opportunity for higher yield with reduced features. An additional benefit of additive plating is a reduction in waste and water usage. A special plating cell was also introduced that permitted uniform deep-hole plating by forcing plating fluid into each of the thousands of PTH. The process, titled "Push-Pull™", also accelerated the plating manufacturing cycle by over an order of magnitude, reducing cost. A small plating cell was incorporated into the prototype facility at CDC and a controlled set of experiments conducted. The experiments were thorough and challenging, since no one in the industry could approach the lofty objectives of the ETA Systems Supercomputer CPU board nor the lofty claims of the inventor. The results were simply outstanding. 
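The aspect-ratio arithmetic behind these claims can be reproduced directly. This sketch uses only the board thickness and ratios quoted in the account; the area-penalty figure at the end is an illustrative scaling argument, not the original analysis:

```python
# Plated-through-hole (PTH) arithmetic: a ~0.26-inch-thick board under
# the industry's 6:1 depth-to-diameter plating limit, versus the 20:1
# ratio the Push-Pull plating process achieved.

board_thickness = 0.26                      # inches

def min_hole_diameter(thickness, aspect_ratio):
    """Smallest reliably platable hole for a given depth:diameter limit."""
    return thickness / aspect_ratio

d_conventional = min_hole_diameter(board_thickness, 6)     # ~0.043 in
d_push_pull = min_hole_diameter(board_thickness, 20)       # ~0.013 in

print(f"6:1 limit  -> minimum hole diameter {d_conventional:.3f} in")
print(f"20:1 limit -> minimum hole diameter {d_push_pull:.3f} in")

# With tens of thousands of PTH on the board, the area each hole consumes
# scales with diameter squared; this order-of-magnitude penalty is the
# source of the ">9x" board-area growth cited for the 6:1 case.
penalty = (d_conventional / d_push_pull) ** 2
print(f"hole-area penalty at 6:1 vs 20:1: ~{penalty:.0f}x")
```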
From these results, and a commitment to fabricate a larger manufacturing line of plating insert cells, the 45-layer 15" x 24" CPU board became a realistic, finalized goal of ETA Systems. Anyone told of this goal openly scoffed at it as too risky and unrealistic. This included some in the company as well. Later, when manufacturing of the systems was viable, a production capacity was developed. It is noted that hundreds of these boards were fabricated from 1987 through early 1989. The yield of finished boards was nearly perfect – only one finished board was scrapped. To this day (2009) few realize what a monumental accomplishment this was and still is. This is a tribute to LeRoy Beckman, Peter Pellegrino, the manufacturing facility at ETA Systems (now a banking building in St. Paul) and those who trusted that the lofty objectives could be realized. To accommodate routing and designing for minimum distance between IC chips, CAD tools were developed and the first use of diagonally routed layers was introduced. Prior to this, only x–y layers were permitted with manual and/or automated (CAD) tools. This enhancement permitted timing constraints between chips to be met. The final board had the following noteworthy characteristics: - Board size: 15 inches by 22 inches by 0.26 inches - PTH hole ratio ≈ 20:1 – plating time less than 20 minutes - 45 total layers per CPU panel - 150 IC chip locations (fewer were used in the final design) - More than 30,000 plated-through holes (PTH) used for interconnect In 2009 this board development and manufacturing stands out as one of the major technology developments by ETA Systems. The key challenge for packaging the ETA Supercomputer processing unit was the cryogenic chamber for the processor. The cryostat to contain the processor (two processor units) had a conventional (and quite heavy) circular design containing a vacuum chamber between the outside environment and the inner environment. 
Input of liquid Nitrogen was at the bottom of the chamber, and the escape of the gaseous Nitrogen was provided for near the top of the unit. The piping carrying the Nitrogen to and from the regeneration unit was also temperature-protected with vacuum lines. Dan Sullivan and his design team led this admirable effort. (Unfortunately, Dan passed on a few years ago.) It was felt that a lighter and equally efficient chamber (proposed by Carl Breske – a very innovative scientist) could be designed if time permitted, but the selection of the vacuum-based design was conservative, to accommodate the schedule and also to familiarize the team with the challenges of cryogenics. The compressor unit was a conventional liquid Nitrogen system (very large and bulky) used for generation of liquid Nitrogen for the commercial market. The system was not pretty. Marketing, led by Bobby Robertson (also now deceased), prohibited the engineers from showing this to prospective customers, fearful that it would scare them away. Thought was given to actually eliminating the need to regenerate the Nitrogen in a closed system, and instead purchasing liquid Nitrogen – readily available in tanks – and having the tanks periodically refilled, as is done in the IC and other industries using liquid Nitrogen. This was discarded for the initial design, since several customer sites did not easily accommodate external access to liquid Nitrogen tanks. It was to be an option for future systems and those customers that could easily accommodate it. The final design was then a closed, recycled liquid Nitrogen system with the compressor located remotely, much like the Freon compressors which many Supercomputer customers were already accommodating. The design challenge was at the surface (which looked much like a two-slice toaster) where the processing boards were inserted. This seal had to accommodate the connecting transmission lines to the external, room-temperature memory and I/O subsystems. 
A printed circuit board was designed to connect the processor to the outside world. Heaters were applied to the surface to prevent icing at the cryostat surface. Across a separation of only a few short inches, memory operated at 300 degrees Kelvin and the CPU at 77 degrees Kelvin. There were a few "frosty" events in this development cycle! The third challenge was to provide reliable soldering of the circuitry to the board amidst the severe temperature difference (greater than 250 degrees) that the solder joints would be subjected to during the cool-down and warm-up cycles. Studies at the National Bureau of Standards provided input that the temperature cycle should be profiled in a precise sequence as the board was cooled and heated. In addition, care had to be taken not to remove the board prematurely, and to manage the condensation that would occur if the board had not been warmed back to room temperature. The result was a 20-minute cycle to remove or insert a board, with a specifically prescribed sequence of temperature lowering and raising for both cycles. At the time of the unfortunate termination of ETA Systems, a more refined, lower-cost and lower-weight design, as stated earlier, was on the drawing boards. Although the cryostat and associated cooling were costly, an analysis clearly showed that, for the resulting performance, the cost was less than that of any bipolar IC system designed at the time. Once the connector was finalized and the process and assembly designed, the system operated flawlessly. Checkout of the system on the manufacturing floor utilized the "self-test" capability exhaustively, so specific interconnect flaws were clearly understood prior to removing a CPU from the cryostat, reducing checkout time considerably as well. These designs were well done, significant, and pushed the practical limits of thermodynamics and physics. 
Air-Cooled System As stated earlier in the document, an air-cooled processor would operate considerably slower (2x slower) when operated in normal or "room temperature" environments. ETA Systems, by sorting the devices for performance at incoming inspection, allowed a three-way performance differential to be realized. Only the highest-performance devices were reserved for the cryogenically cooled system. The remaining parts were then re-sorted into two categories for room temperature; the differential would be a 4-nanosecond clock period between the two room-temperature systems and 17 nanoseconds (24 vs. 7) across the total system product set. Sorting and using the entire distribution of integrated circuits was a significant cost-reduction factor for the entire product line. Bipolar devices, by contrast, had lower functional yield to begin with, coupled with additional loss of product due to performance yield. This was a definite cost asset for the ETA System. To cool the CPU, air was forced onto the processor chips using a plenum designed to cover each chip. Holes were designed into the plenum such that an equal operating temperature would result for each operating chip. Since the power consumption varied significantly among the part types, designing the appropriate number of holes above each chip location provided custom cooling. The plenum could then be molded for mass production of the processing unit. Large-volume cooling fans were designed for the system as well. Cost was the focus for the air-cooled systems, since the price tag was below $1M. Recall that the air-cooled design was identical in parts at the CPU and storage level. A single development was achieved for a wide range of products with one design team. 
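The three-way speed sort can be pictured as a simple binning pass over tested parts. The resulting clock periods (7 ns cryogenic, and two room-temperature grades 4 ns apart with 24 ns at the slow end) come from the text; the delay thresholds and measured values below are invented purely for the sketch:

```python
# Illustrative three-bin performance sort of incoming ICs. Per the text,
# the fastest parts went to the 7 ns cryogenic system and the remainder
# into two air-cooled grades whose clock periods differed by 4 ns
# (24 ns at the slow end). The test limits here are hypothetical.

def bin_part(delay_ns, fast_limit=1.0, mid_limit=1.3):
    """Grade a part by its measured room-temperature gate delay."""
    if delay_ns <= fast_limit:
        return "cryogenic (7 ns clock)"
    elif delay_ns <= mid_limit:
        return "air-cooled fast (20 ns clock)"
    else:
        return "air-cooled standard (24 ns clock)"

measured = [0.90, 1.10, 1.25, 1.40, 0.95, 1.35]   # hypothetical test data
for d in measured:
    print(f"{d:.2f} ns -> {bin_part(d)}")
```

The cost leverage described above comes from the fact that every functional part lands in some sellable bin, rather than only the fastest tail of the distribution.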
Stacks using three-dimensional packaging were designed under the leadership of Brent Doyle for the memory – both the static (high-performance) and dynamic (high-density, lower-performance) memories of the ETA Systems Supercomputer. These unique designs provided the highest density and optimum performance of the standard memory devices used. The ability to upgrade to future generations of memory (higher-capacity storage ICs) was built into the design as well. The design worked well, and stacking became commonplace in the computer industry for future designs – eventually eliminating the chip package entirely. The air-cooled system was named "Piper". An illustration of "Piper" is shown below. The design of the ETA Systems Supercomputer hardware had many unique features; these brief pages highlight some of them. It would be remiss not to briefly discuss the "team" concept used to design the hardware. By locating the CAD, packaging, memory, circuit and power expertise in close proximity and holding concise project reviews at all levels at periodic and timely phases, all were kept abreast of each other's progress and challenges. This permitted changes to be made to designs to properly accommodate challenges and opportunities in a timely fashion. Hardware was demonstrated on or near schedule despite the innovations required in each aspect of the design. The team was truly a "team". A missing link in the team was the logic design group. These folks were separate, and actually on another floor of the ETA Systems facility. It was strongly suggested, and accepted for future designs, that the logic team would be part of this common organization. I later had the opportunity to lead one additional hardware development that included the logic design team (at Cray Research, not ETA Systems). It was a smoother, more effective and more thorough team. 
As at ETA Systems, communications were open and included both manufacturing and software participation (the latter two were voluntary). Clearly, effective communications at all levels of the organization were key to this hardware design success.
22 May 2009 Cruising Hubbard Glacier, Alaska; Cruise Day 116 Although we were at sea today, we spent most of the time at Hubbard Glacier, taking in the beauty, as well as learning about this glacier and glaciers in general. Half of all the glaciers in the world are located in Alaska. Hubbard is the biggest tidewater glacier in the world, which means it ends in the sea. It is about 76 miles long and six miles wide. As you know, glaciers flow. The ice starts at the top and flows (gets pushed) down. The ice at the foot is about 400 years old. About 75% of the world’s fresh water supply is in glaciers. Washington and Alaska are the only two states that get any material drinking water from glaciers. For more information about glaciers, check these previous blogs from when we circled South America.
Tuesday, Dec. 4, 2012 Order of importance: Study finds prioritizing rather than canvassing entire plant genome may lead to improved crops MANHATTAN -- A new study may help scientists produce better climate-resistant corn and other food production plants by putting a spin on the notion that we are what we eat. Kansas State University geneticists and colleagues found that applying a genetic-analysis method used to study and prioritize genes in humans improved the likelihood of finding critical genes in food production plants. These genes control quantitative traits in plants, such as how the plants grow and when they flower. Additionally, this method can be used to study how food production plants respond to drought, heat and other factors -- giving scientists a greater chance at improving crops' resistances to harsh weather and environments. "Right now we know most of the genes that make up several of these food production plants, but finding the right genes to increase food yield or heat tolerance is like finding a needle in a haystack," said Jianming Yu, associate professor of agronomy at Kansas State University and the study's senior author. Yu made the finding with Xianran Li and Chengsong Zhu, both agronomy research associates at Kansas State University; Patrick Schnable, Baker professor of agronomy at Iowa State University, and colleagues at Cornell University; the Cold Spring Harbor Laboratory; the University of Minnesota; and the U.S. Department of Agriculture-Agricultural Research Service. Their study, "Genic and non-genic contributions to natural variation of quantitative traits in maize," was recently published in the journal Genome Research. The National Science Foundation funded the research. For the study, researchers looked at the sequenced genome of corn. A genome is the genetic blueprint of an organism and contains all of the DNA and genes that give the organism its traits, like height and color. 
Staple food crops like corn, wheat, barley and oats have comparable and sometimes larger, more complex genomes than humans and mammals. That poses a challenge for scientists attempting to modify the plant and improve aspects like production and heat tolerance. "Like humans, plants have complex traits and complex diseases," said Li, the study's first author. "In plants, those are things like drought tolerance and grain yield. Sometimes one specific gene can make a big change. Frequently, though, it involves multiple genes. Each gene has a small, modest effect on the trait and many genes are involved. This makes it really difficult to study." Historically, scientists have analyzed an isolated region of a plant genome -- often taking a trial-and-error approach at finding what genes control what traits. Instead, researchers approached the corn genome with a relatively new analysis method that is used to study the genome of humans. The method, called genome-wide association studies -- or GWAS -- searches the entire genome for small, frequent variations that may influence the risk of a certain disease. This helps researchers pinpoint genes that are potentially problematic and may be the key in abnormal traits and diseases. "Conducting routine, full-scale, genome-wide studies in crop plants remains challenging due to cost and genome complexity," said Schnable, the other senior author. "What we tried to get out of this study is a broad view of which regions of crop genomes should be examined in detail." Using the GWAS method for multiple analyses and complementary methods in identifying genetic variants, researchers were able to find that, on average, 79 percent of detectable genetic signals are concentrated at previously defined genes and their promoter regions. 
According to Yu, the percentage is a significant increase compared to looking at the gene regions alone. "We used to think that genes are the only search priority and there were just many other less important or useless DNA sequences," Yu said. "But now we are starting to see that these other regions harbor some important genetic codes in them. Canvassing without prioritizing can be cost prohibitive, however, and efficient GWAS in crops with complex genomes still need to be carried out by taking advantage of a combination of genome technologies available."
Other Neurological Conditions

Acoustic Neuroma
Acoustic neuroma is a noncancerous tumor that may develop from an overproduction of Schwann cells that press on the hearing and balance nerves in the inner ear.

Bell's Palsy
Bell's palsy is an unexplained weakness or paralysis of the facial muscles that begins suddenly and worsens over three to five days. It can strike at any age, but it occurs most often in pregnant women and in people who have diabetes, influenza, or another upper respiratory ailment.

Meniere's Disease
Meniere's disease is a balance disorder caused by an abnormality in a section of the inner ear called the labyrinth.

Multiple Cranial Neuropathies
Neuropathy is a disorder that affects the nerves. The cranial nerves are those that arise directly from your brain or brainstem and often affect areas like the face and eyes.

Neurocutaneous Syndrome
Neurocutaneous syndrome is a broad term for a group of disorders. These life-long conditions can cause tumors to grow inside the brain, spinal cord, organs, skin, and skeletal bones.

Neuromyelitis Optica
Neuromyelitis optica, sometimes called NMO, is a rare condition that affects the spinal cord and the nerves that carry signals from the eyes to the brain, causing paralysis and blindness.

Normal Pressure Hydrocephalus
The ventricles are chambers in the brain that normally contain cerebrospinal fluid. Sometimes, too much fluid builds up in the ventricles; this accumulation leads to a condition called normal pressure hydrocephalus (NPH).

Pseudotumor Cerebri
Pseudotumor cerebri is a disorder related to high pressure in the brain that causes the signs and symptoms of a brain tumor -- hence the term "pseudo," or false, tumor.

Transverse Myelitis
Transverse myelitis is a neurological condition that happens when both sides of the same section of the spinal cord become inflamed.

Trigeminal Neuralgia
Trigeminal neuralgia is a type of nerve pain that affects your face.
As a cure for our addiction to oil, ethanol turns out to have some nasty side effects. Pollution from gasoline engines accounts for 10,000 deaths in the US each year, along with thousands of cases of respiratory disease and even cancer. The widely touted ethanol-based fuel E85 (15 per cent gasoline, 85 per cent ethanol) could make matters worse.

Mark Jacobson of Stanford University in California modelled emissions for cars expected to be on the road in 2020. The model assumed that carbon emissions would be 60 per cent less than 2002 levels, so overall deaths would be halved. However, an E85-fuelled fleet would cause 185 more pollution-related deaths per year than a petrol one across the US, most of them in Los Angeles. The findings, to be published in Environmental Science & Technology, run counter to the idea that ethanol is a cleaner-burning fuel. While ethanol-burning cars will emit ...
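The comparison behind these figures is simple arithmetic. In the sketch below, only three numbers come from the article: the roughly 10,000 annual US deaths from gasoline-engine pollution, the assumed halving of that toll by 2020, and the modeled 185 extra deaths for an E85 fleet; the derived values are illustration only.

```python
# Sourced from the article: baseline deaths, the assumed halving by 2020,
# and the modeled E85 excess. Everything derived below is illustrative.
baseline_deaths_2002 = 10_000                        # annual US deaths, gasoline-engine pollution
projected_gasoline_2020 = baseline_deaths_2002 // 2  # deaths roughly halved by 2020
e85_extra_deaths = 185                               # modeled excess for an E85-fuelled fleet

projected_e85_2020 = projected_gasoline_2020 + e85_extra_deaths
pct_increase = 100 * e85_extra_deaths / projected_gasoline_2020

print(f"Gasoline fleet, 2020: ~{projected_gasoline_2020} deaths/yr")
print(f"E85 fleet, 2020:      ~{projected_e85_2020} deaths/yr (+{pct_increase:.1f}%)")
```

Framed this way, the modeled E85 penalty is a few per cent on top of the projected baseline, which is why the finding is presented as running counter to ethanol's cleaner-burning reputation rather than as a dramatic worsening.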
General Knowledge - Quiz
General facts on world affairs.

Athletics 01 - Warming Up
Lesson plan that introduces children to the concept of warming up. Children are provided with opportunities to practice running over distance, sprinting, and jumping. Their knowledge and understanding of athletics is promoted as they find out about sprinting.

How do I get to work by public transport?
At the end of this lesson, you will be able to look up the best way for a person to get to work by public transport.

The Senses: Hearing - by StudyJams
The ear is specially designed to change sound waves into signals that the brain can understand, allowing you to hear. That is not all your ears do, though: they also help you balance. Learn more about your sense of hearing with this cartoon animation from StudyJams. A short, self-checking quiz is also included with this link.

Dear 16-year-old Me
This professionally made video was made possible thanks to the generosity of real Canadians and Americans whose lives have been touched by melanoma. These are not actors. The video encourages early detection of skin cancer and provides suggestions for prevention. The "p.i.t.a." expression is used at counter 2:19, but it does not overshadow the powerful message directed towards teens about the serious matter of melanoma. (05:04)

"The Needs of a Plant" - A Song About What Plants Need to Thrive
This computer-animated video contains a lively song which teaches young children the needs of a plant: water, soil, space, sun, and air. (1:04)

"They Were the Pilgrims" - A Song About the Journey of the Pilgrims
This computer-animated video features a song which traces the journey of the Pilgrims to America. The selection includes maps and describes the challenges faced by the Pilgrims. (2:24)

Counting By Fives Song
This cute animated video will help students learn to count by 5's. The lyrics repeat themselves several times. The numbers appear on the screen as they are said.
(3:04)

Animal Life Cycles by StudyJams
Every animal's life goes through the same cycle: reproduction, growth, maturity, and finally, death. Learn more about the animal life cycle with this slide show from StudyJams. Vibrant images are set to music, with information written under each photo. A short, self-checking quiz is also included with this link.

These are the Maya, one of the many important civilizations of the Americas, and the topic of this 48-minute video. It is slow loading, so it should be downloaded before viewing online. There are images shown that may be offensive to some, so a preview is a must. Provides students with a better perspective of how progressive these people were and adds to the mystery of what happened to them.

How to Identify a White Pine
This 1:35 video explains what a White Pine looks like and ways to identify it. Its vitamin content is explained, and how to use its needles to make a tea is also shown.

The Spread of the Apple Tree from the Romans
This 12-minute video traces how the apple tree came to the New World and its uses. There is some mention of sex, but the video's subject matter is overwhelmingly fit for most classrooms. Sometimes the speaker wanders from the topic.

25 Years Later: Nuclear Expert Robert Alvarez Speaks on Chernobyl and Fukushima
On April 26, 1986, a reactor at the Chernobyl nuclear power plant in Ukraine exploded. The resulting meltdown became the world's worst nuclear disaster. 25 years later, as Japan struggles to contain its own nuclear disaster, Institute for Policy Studies nuclear expert Robert Alvarez discusses health effects from the Chernobyl disaster, what we can expect for Japan, and what we have learned about nuclear power.

A state-of-the-art relative navigation system called the Sensor Test for Orion Relative Navigation Risk Mitigation, or STORRM, will be demonstrated on the STS-134 mission to the International Space Station.
The goal of STORRM is to validate a new relative navigation sensor, based on advanced laser and detector technology, that will make docking and undocking to the International Space Station and other spacecraft easier and safer. The demonstration is a test run of the technology, and the STS-134 cr

STS-134 Daily Mission Recap - Flight Day 2
A video recap of flight day 2 of the STS-134 mission of space shuttle Endeavour to the International Space Station.

First spaceflight recreated
Read more: http://www.newscientist.com/blogs/nstv/2011/04/experience-the-first-spaceflight.html

Robot with corkscrew legs
Read more: http://www.newscientist.com/blogs/nstv/2011/05/corkscrew-legs-make-robot-more-versatile.html

Now at MoMA: Looking at Music 3.0 | Lee Quinones
Looking at Music 3.0: Lee Quinones on graffiti, February 16 - May 30, 2011. Images courtesy of Lee Quinones. Filmed by The People's DP, Inc.

Authors@Google: Mary Roach
Mary Roach spoke to Googlers in Mountain View on April 22, 2011 about her book Packing for Mars: The Curious Science of Life in the Void. About the book: Space is a world devoid of the things we need to live and thrive: air, gravity, hot showers, fresh produce, privacy, beer. How much can a person give up? How much weirdness can they take? What happens to you when you can't walk for a year? What happens if you vomit in your helmet during a space walk? Is it possible for the human body to survi

Authors@Google: Gary Taubes
Gary Taubes spoke to Googlers in Mountain View on May 2, 2011 about his book Why We Get Fat: And What to Do About It. About the book: An eye-opening, myth-shattering examination of what makes us fat, from acclaimed science writer Gary Taubes. Building upon his critical work in Good Calories, Bad Calories, Taubes revisits the urgent question of what's making us fat and how we can change in this exciting new book. Persuasive, straightforward, and practical, Why We Get Fat makes Taubes's crucial
STRATIGRAPHY OF PERMIAN ROCKS (continued)

BONE SPRING LIMESTONE

The Bone Spring limestone is the oldest formation exposed in the Guadalupe and Delaware Mountains. It forms a bench of varying height along the west-facing escarpment of the mountains, which is fringed on the west by alluvial deposits or outcrops of down-faulted rocks. (For views of typical exposures see pl. 5; for map relations, pl. 3.) The formation passes beneath the surface in the southern Delaware Mountains, south of the area described, but across the Salt Basin to the southwest is extensively exposed and forms the upper three-fourths of the east-facing escarpment of the Sierra Diablo.46

The formation was named by Blanchard and Davis,47 but it had previously been recognized by both Shumard48 and Girty49 as the "basal black limestone" (member 4 of Shumard's section). The type locality is in the lower course of Bone Canyon below Bone Spring, on the west side of the Guadalupe Mountains 1 mile northwest of El Capitan, where there are characteristic exposures of several hundred feet of its upper beds.

The formation is several thousand feet thick, as shown by the sections on plate 8. On the promontory of the Delaware Mountains 18 miles south of El Capitan, 1,500 feet of beds were measured (section 49), and at a point 2 miles north of Bone Spring 1,700 feet (section 7), but at neither place is the base exposed. In the Sierra Diablo, measured sections show a combined thickness for the Bone Spring and underlying Hueco of about 3,000 feet (section 45). This agrees closely with the 3,123 feet recorded in the Updike well near El Capitan (section 47). In the Delaware Mountains to the south, which in Permian time were a part of the Delaware Basin, the formation is evidently much thicker, for in the Anderson and Prichard well the combined thickness of Bone Spring and Hueco limestones, including the beds exposed above the top of the well, totals 4,540 feet (section 48).
According to Adams,50 in this part of the section several faults may have been drilled through, as "chunks of rocks showing slickensides were bailed from the hole." Judgment must be reserved as to whether the possible faults have materially altered the amount of thickness, but they should be kept in mind as a possible source of error.

The Bone Spring is composed almost entirely of limestone beds, as contrasted with the dominantly sandy strata of the Delaware Mountain group which overlies it (plate 7, A). In the Delaware Mountains, and extending as far north as Bone Canyon, the exposed parts of the formation are black, cherty limestone in thin beds, with partings and a few members of shaly limestone and siliceous shale. North of Bone Canyon in the Guadalupe Mountains, the upper part of the black limestone is replaced by a thick-bedded gray limestone, the Victorio Peak gray member, which also forms the capping stratum of the Sierra Diablo. Between the main mass of limestones and the sandstones of the Delaware Mountain group is a small thickness of interbedded limestone and shale, which forms the Cutoff shaly member and its probable equivalents.

SEQUENCE IN THE SOUTH

South of Bone Canyon, the black limestones of the Bone Spring crop out in a bench along the west base of the mountains, forming rounded slopes of a darker color than those carved from the sandstones above. Near United States Highway No. 62 the bench is discontinuous and low, but it rises to the north and south. At the top of the bench in the Delaware Mountains south of the area studied, two cliff-making members of black limestone form steep walls, in places unscalable.

Outcrops of the Bone Spring limestone in the south part of the area are shown on the geologic map, plate 3. A part of the outcrop can be seen in the panorama, plate 5, A, fringing the base of the escarpment below El Capitan and Pine Top Mountain.
Stratigraphic sections south of Bone Canyon appear on the right halves of plates 6 (numbers 15-44) and 8 (numbers 48 and 49). The two cliffs referred to form the 460-foot interval in the upper part of the formation in section 49.

BLACK LIMESTONES AND ASSOCIATED ROCKS

In the southern part of the area studied no more than the topmost 500 feet of black limestones is exposed, although more beds come to the surface farther south. These topmost beds are fine-textured, dense, black limestones, in beds a few inches to a foot or more thick. They are in part straight-bedded and in part have lumpy or undulatory bedding surfaces. Black, brown-weathering chert occurs in some of the beds as long, knobby lenses, nodules, and flat sheets. Chert is also common in the Anderson and Prichard well for more than 1,000 feet below the surface,51 suggesting that most of it is original with the deposit.

The black limestones are nearly barren of fossils. The known fauna has been collected from discontinuous lenses, generally more granular than the inclosing rock. Ammonoids in some of the lenses not far north of United States Highway No. 62 are filled with free oil, which spills over the rocks when the ammonoids are broken.

The black limestone in most exposures shows no stratification between the bedding planes, but in some exposures it is marked by finer laminations. Limestones marked by closely spaced, light and dark laminae similar to varves are common lower down in the formation (pl. 10, A); they have been observed on the promontory of the Delaware Mountains 18 miles south of El Capitan, in the Sierra Diablo, and in the cores from the Updike well. Some of the limestone beds are separated by partings of shaly black limestone. The strata for several hundred feet beneath the two cliff-making members south of the area studied consist of brown, platy siliceous shale and shaly limestone.

The following analyses of black limestone from the Bone Spring limestone were made.
These and subsequent analyses of carbonate rocks in this report were determined by methods described by Hillebrand.52 The only modification was that insoluble residues were caught on Jena glass filtering crucibles, and the organic insoluble determined by the Robinson53 method.

Analyses, in percent, of black limestone from the Bone Spring

Insoluble residues: 1, Dark brownish and carbonaceous, consisting of clay with finely divided quartz particles; 2, similar to No. 1; 3, light brown and of fine-grained particles.

At several places, layers as much as 10 feet thick of platy, fine-grained, calcareous sandstone are interbedded with the black limestones. Two specimens of the sandstone, one from a point 2-1/2 miles south-southeast of El Capitan and the other from the mouth of Black Canyon farther south, were studied under the microscope by Ward Smith. The grains have a maximum diameter of 0.2 millimeter and lie in a calcite matrix. They consist chiefly of quartz, with some microcline and plagioclase, and a small but noteworthy amount of zircon, tourmaline, and apatite. These are the more stable minerals of igneous and metamorphic rocks.

A mile south of Bone Canyon, several thin conglomerate layers containing black limestone pebbles are interbedded in the black limestone (sec. 17, pl. 13). One of these beds locally attains a thickness of 4 feet and contains boulders several feet across of light-gray, fossiliferous limestone similar to that of the Victorio Peak gray member as developed a few miles to the north. Apparently some erosion of this contemporaneous, light-gray limestone was taking place at the time the black limestones were being deposited.

Near Bone Spring, the upper part of the black limestone contains lenticular masses of poorly bedded, gray, granular limestone as much as 50 feet thick (secs. 15a and 16a, pl. 13). One such mass exposed on the escarpment face not far south of the mouth of Bone Canyon seems to lie in a channel in the underlying black limestone.
Other masses have a moundlike upper surface, against which the succeeding beds overlap. They contain the heads of massive bryozoans, and also numerous productids and other brachiopods like those in the Victorio Peak gray member nearby. At least some of these lenticular masses were small reef deposits.

STRUCTURAL FEATURES IN THE BLACK LIMESTONE

The black limestones are thinly and evenly bedded. In the vicinity of Bone Canyon and farther south, however, most of the exposures when viewed as a mass show a great irregularity of stratification, so much so that at nearby points the dip is quite different in direction and amount. This irregularity results from two types of structural features, described below.

The first type is found in the vicinity of Bone Canyon. Here, the black limestone is divided into numerous wedge-shaped and basin-shaped masses as much as 100 feet thick. The strata within each mass are parallel, but the masses themselves are separated by sloping planes of contact from other masses of similar lithologic character in which the strata are differently inclined.54 In some places, gently dipping strata overlie more steeply tilted strata, and in other places the overlying strata have the steeper dips. The upper beds are generally parallel to the plane of contact beneath, and the lower beds are cleanly truncated. None of the limestones near the planes of contact is contorted, and none contains any breccia or conglomerate; the overlying limestones rest directly on the underlying. At one or two places, however, the smoothness of the contact is broken by small pockets in the underlying beds, which are filled by limestone like that above and below.

A typical exposure of such features is shown in plate 11, A, in which a pocket like that noted above can be seen on one of the surfaces. The features are shown also on the sections accompanying plate 9, especially in the enlarged sketch on the left, and in figures A and B, accompanying plate 13.
The area in which they occur is shown on plate 7, A. These features are strikingly exposed in Bone Canyon, and in Shumard Canyon,55 the next valley to the north. They are found also for somewhat more than a mile south of Bone Canyon, but are absent beyond. They are absent also north of Shumard Canyon, where the bedding planes in the black limestone are straight and parallel. In Shumard Canyon, the lower part of the overlying thicker-bedded Victorio Peak gray member contains a few similar structural features, but the angle of divergence between the overlying and the truncated beds is less than that in the beds beneath. In this canyon, the Victorio Peak itself is truncated and overlain by basin-shaped remnants of the Cutoff shaly member (sec. CC', pl. 9). The second type of structural feature, a remarkable contortion of the black limestone beds, is known only in the area south of El Capitan, where it can be seen in the upper layers of the black limestone, the oldest beds exposed in the district. These features have not been described in previous publications, although they may have been seen by geologists, and confused with the features of the other type near Bone Canyon. A typical exposure of this second type of feature is shown in plate 11, B, and the area in which they occur on plate 7, A. In many places the canyons that drain across the black limestone bench cut through steep to overturned or recumbent folds, involving 10 to 20 feet of beds. Accompanying the folds are small thrust faults. In places the contorted rocks pass into masses of sheared, wrinkled, and rolled lenses of limestone. The general trend of the folds and thrusts is between east-northeast and west-northwest, but the direction of overturning is either northward or southward. Numerous furrows and slickensides of the same trend as the folds groove the bedding planes, both in the contorted rocks and in rocks not otherwise conspicuously disturbed. 
Wherever they are exposed the strata beneath any set of contorted beds are little disturbed. Many of the contorted beds are truncated, and overlain by gently dipping strata. Whether the upper strata lie unconformably on the lower or have been thrust over them cannot be determined with certainty. The contortion has not modified the broader features of the strata, for toward the south the contorted beds stand in cliff-making members that can be traced continuously for long distances. Both sets of structural features are relatively ancient, for the tilted beds, planes of contact, and thrusts are in many places cut cleanly through by vertical joints of probable Tertiary age, some of which are shown on plate 11, B. The features near Bone Canyon were interpreted by Baker56 as thrust slices. Darton and Reeside57 and later geologists, however, have regarded the truncated surfaces in this neighborhood as local unconformities, and the whole feature as a sort of gigantic cross-beddings58 formed during the time of deposition. This latter interpretation seems best to fit the facts, as the basinlike form of some of the masses and the pockets along some of the planes of contact more closely resemble sedimentary than tectonic features. Further, similar truncated surfaces higher up, which separate the Victorio Peak from the Cutoff member, seem clearly to be local unconformities. Such unconformities do not necessarily mean emergence of the sea bottom; they may have been caused by submarine currents. The features farther south are certainly the result of some sort of deformation, but I am inclined to believe that they also were formed during or shortly after the time of Bone Spring deposition. The intensity of the contortion and the small thickness of the beds involved suggests that they were deformed under a relatively thin overburden, and that the beds retained a certain plasticity at the time of deformation. 
They must have been sufficiently consolidated, however, to have been grooved and slickensided. The deformation might have been caused by a sliding of one part of the newly deposited beds over another, causing the beds between to crumple.59 Some of the flat-lying beds that truncate contorted beds may have slid in this manner. (See p. 27.)

CUTOFF SHALY MEMBER

South of El Capitan, the black limestone bench is separated from the first sandstone ledges of the Brushy Canyon formation above by a slope 50 to 150 feet high, carved from shales, sandstones, and thin limestones, of which a typical exposure is shown on plate 14, B. These beds are classed as an upper member of the Bone Spring limestone, and tentatively correlated with the Cutoff shaly member of the Bone Spring, which is found in the northern part of the area studied. Near El Capitan, however, the beds thin out and disappear, so that the actual connection to the north cannot be traced.

The Cutoff member of the southern area is well exposed in Brushy Canyon, not far south of United States Highway No. 62 (sec. 36, pl. 6). The member consists of black, platy, siliceous shale and shaly sandstone, with a few intercalated sandstone beds in the upper part, and many thin beds of compact gray or black limestone. At some localities, the various constituents are very irregularly interbedded. In Brushy Canyon, one of the limestone beds develops locally into a mass 15 feet thick and contains abundant brachiopods, mollusks, and other fossils. The thinner limestones contain little else than fusulinids, and many are unfossiliferous. In some exposures, the shales contain large, spherical, cannon-ball concretions of limestone. In the lower 25 feet of the member, and resting in places directly on the black limestones beneath, are lenticular beds of conglomerate a few feet thick, composed of round black limestone pebbles set in a calcareous matrix.
The upper surface of the black limestones is not channeled, however, and the limestones interbedded in the shales above the contact are identical in appearance with those below. The top of the member is drawn at the base of the lowest prominent sandstone ledge of the Brushy Canyon formation, but this is not a definite boundary, as some similar sandstone is interbedded in the shales below, and shales and platy sandstones are interbedded in the thicker sandstones above.

SEQUENCE IN THE NORTH

Near Bone Canyon the bench of Bone Spring limestone rises to a greater height than farther south. To the north it stands in an imposing line of cliffs that rise 1,000 feet or more above the foothill ridges of downfaulted rocks that flank it on the west. About 4 miles north of Bone Canyon, these downfaulted rocks rise so high that they conceal the Bone Spring beds on the main escarpment. Toward the northwest, however, in the lower ridges near Cutoff Mountain, the formation reappears in places. It and the overlying rocks are much faulted, and some of its limestones form dip slopes that are inclined steeply westward toward the Salt Basin.

Outcrops of the Bone Spring limestone in the northern part of the area are shown on the geologic maps, plates 3 and 9. The whole outcrop in the northern part of the area also can be seen on the panorama, plate 5, B. The part of the outcrop on the main escarpment extends from below El Capitan to the right, northward past points 5738 and 6402 to below the Blue Ridge on the left, where it comes to an end. The outcrops near Cutoff Mountain appear farther to the left, and form the cuestas below point 5443 and elsewhere. Stratigraphic sections of the formation north of Bone Canyon are shown on the left half of plate 6, numbers 1 to 14.

VICTORIO PEAK GRAY MEMBER

The black limestones are exposed for only a few miles north of Bone Canyon and pass from view beyond.
Most of the exposed part of the formation in this district belongs to the Victorio Peak gray member, a succession of thick-bedded, gray limestones 800 feet thick, which are the northward equivalent of the upper part of the black limestones. The member is named for Victorio Peak,60 a high point on the Sierra Diablo escarpment southwest of the Guadalupe Mountains. A correlation of the rocks assigned to the member in the two areas seems assured, because in addition to a similarity of the faunas, the member at the northwest end of the Sierra Diablo is divisible into three parts that are identical with its three divisions in the Guadalupe Mountains. (Compare secs. 46 and 7, pl. 8.) Here, as in the Guadalupe Mountains, it rests on black limestone and is overlain by the Cutoff shaly member. On the high ridge between Shumard and Shirttail Canyons,61 about a mile north of Bone Spring, two well-marked divisions in the member are recognized. (See sec. 10, pl. 6; for structure of the ridge, see sec. BB', pl. 9.) The lower division, resting with gradational contact on the black limestone, consists of 350 feet of gray-brown, fine-grained, dolomitic limestone in beds several feet thick. In Shumard Canyon, many layers of thin bedded, hackly limestone are interbedded. Here erosion has carved the limestone of the division into picturesque, serrated walls and pinnacles, which are shown in the lower left-hand part of plate 12, B. The division commonly contains widely spaced, large, subspherical chert nodules, and in many beds fragmental remains of fossils. In Shirttail Canyon, several layers of light-brown, fine-grained sandstone are interbedded in the lower part. The upper division of the Victorio Peak member on the ridge between Shumard and Shirttail Canyons is a light-gray, nondolomitic, noncherty, thick-bedded calcitic limestone 160 feet thick, which contains various productids and other brachiopods. 
The following analyses of limestones from the Victorio Peak gray member were made:

Analyses, in percent, of limestones from the Victorio Peak member

Insoluble residues: 1, Dark brownish, carbonaceous, consisting of clay and finely divided quartz, some of which is perhaps authigenic; 2, dark brown, carbonaceous, with large garnet particles, some of which are well-rounded, and also red tourmaline, quartz, and chalcedony; 3, brown, with quartz, chalcedony, microcline, and coarse garnet.

The two divisions of the Victorio Peak gray member disappear south of Shumard Canyon. The lower division extends as far as a ravine between Shumard and Bone Canyons, where it intergrades abruptly with black limestone, as shown in figure A, plate 13. The upper division is cut off southward by pre-Brushy Canyon (Delaware Mountain) erosion. In the northern branches of Shumard Canyon its beds are truncated by a smooth surface, sloping 15° southeast, against which the sandstones of the Brushy Canyon formation overlap (sec. BB', pl. 9). In the southern branches the upper division extends as a rapidly thinning wedge, which is locally overlain by basin-shaped remnants of the Cutoff shaly member.

The black limestone exposed in Bone Canyon is of the same age as the lower division of the Victorio Peak member a little to the north, and the lenticular masses of gray, granular limestone which it contains are considered as outliers of the Victorio Peak deposits. No equivalent of the upper division is present here. Crude tracing of the ledges suggests, however, that black limestone beds younger than any in Bone Canyon come in beneath the Brushy Canyon formation to the south, as indicated diagrammatically on plate 7, A. They are probably equivalent to the upper division of the Victorio Peak member to the north.
North of Shirttail Canyon, the lower division of the Victorio Peak member, which is not widely exposed, is separated from the upper division by a middle division 100 feet thick of slope-making, thin-bedded, light-gray or white limestone, with much buff, fine-grained, calcareous sandstone interbedded. (Shown on secs. 5 and 7, pl. 6.) The upper division is calcitic, light gray, noncherty, and thick-bedded. (See chemical analysis No. 3, above.) Its upper layers contain numerous poorly preserved fusulinids and productid shells.

CUTOFF SHALY MEMBER IN SHUMARD CANYON

In the southern branches of Shumard Canyon, resting unconformably on both the lower and upper divisions of the Victorio Peak member, and overlain unconformably by the Brushy Canyon formation, are small remnants of poorly fossiliferous beds which are probably equivalent to the Cutoff member to the north. Two divisions are present, separated by an unconformity. The older one, composed of thin-bedded, black, cherty limestone, is exposed at only one place, near the head of the south fork of the canyon. It lies in a steep-sided basin carved in the Victorio Peak limestone, which it fills to a thickness of 90 feet. The younger division crops out somewhat more widely in the branches of the canyon, and consists of thin-bedded black limestone, weathering to ashen-gray, hackly fragments, interbedded with platy siliceous shale. They closely resemble the limestones and shales of the Cutoff member as developed farther north. The younger division is well exposed on the ridge south of the mouth of Shumard Canyon, where it reaches a thickness of 60 feet.62

The outcrops of the two divisions of the Cutoff shaly member in Shumard Canyon are shown on the geologic map, plate 9, and their structure on the accompanying section CC'. The basin-shaped remnant of the lower division stands out prominently on the nearest ridge in the center of the panorama, plate 12, B.
The lower division is included in section 12a, and the upper in section 13a of plate 6.

CUTOFF SHALY MEMBER IN NORTH PART OF AREA

In the northern part of the area studied, the Victorio Peak gray member is overlain, apparently conformably, by 230 feet of shales and limestones which crop out on slopes above the limestone cliffs. They form the Cutoff member, which is named for exposures on the west slope of Cutoff Mountain about 1,000 feet below its summit (sec. 1, pl. 6).63 The member consists of thin-bedded, dense limestone of black, buff, or gray color, weathering to dove-gray or ashen, hackly, conchoidal fragments. Some of the lower beds contain irregular masses of black chert. In the upper part, much platy black siliceous shale, brown sandy shale, and soft sandstone is interbedded. The member contains few fossils; some pelecypod imprints were seen in the upper part west of Cutoff Mountain.

About half a mile north of Shirttail Canyon, the southeastward-extending outcrop of the Cutoff member comes to an end. At this place an erosion surface slopes southward across the truncated edges of the Cutoff beds, with sandstones of the Brushy Canyon formation overlapping northward against it, as shown diagrammatically on plate 7, A. To the south, the Brushy Canyon beds rest directly on the Victorio Peak member.

Correlation of the typical Cutoff shaly member of the north part of the area with the shales and limestones at the top of the Bone Spring limestone farther south is tentative because only the beds to the south contain fossils in any abundance. The rocks of the different areas are similar lithologically, however, and all are included in the Cutoff shaly member in this report.

BONE SPRING FLEXURE

A study of the region south of El Capitan reveals no unusual features near the Bone Spring-Brushy Canyon contact.
The black limestones, which project as a low bench at the base of the mountains, are overlain without apparent break by the interbedded shales, limestones, and sandstones of the Cutoff member. They are followed in turn by the sandstone ledges of the Brushy Canyon formation of the Delaware Mountain group, as in section 36, plate 6. A view to the north along the western side of the mountains, however, shows that the limestone bench rises to a much greater height in this direction, without a similar rise in the overlying sandstone ledges (as shown in pl. 5, A). At the Bone Spring-Brushy Canyon contact in Bone Canyon a few miles to the north, in the area of higher-standing limestone, the Cutoff member is not found. Instead, the upper surface of the black limestone is channeled and is overlain by coarse conglomerate, which contains fragments derived from the limestone.64 Besides these fragments the conglomerate contains cobbles and boulders of gray limestone unlike any rock exposed here or to the south. The conglomerate grades upward into typical sandstones of the Brushy Canyon formation, as shown in section 15, plate 13. A view of the relations farther north can be had from the crest of the succeeding ridge (pl. 12, B). Looking down into Shumard Canyon, the next large drainage beyond Bone Canyon, one can see the contact of the limestone and sandstone on the walls of the tributary gorges; it rises from a position beneath the observer to one several hundred feet above him on the farther wall. On the farther wall the black limestones are overlain by gray limestones which stand in a high projecting bench. These gray limestones constitute the Victorio Peak member and are the source of the boulders to the south.65 Brown sandstone ledges of the succeeding Brushy Canyon formation can be traced along the slopes above the limestone, rising less steeply northward than the limestone-sandstone contact. 
One group of them in middle distance, in the north fork of Shumard Canyon, is seen to overlap abruptly against the sloping surface. Near the point where the sandstones overlap, one can find innumerable ripple marks on their bedding surfaces, suggesting that the sandstones were laid down near a shore. The shore itself, the sloping surface of the gray limestones, is a smooth face, cut across the edges of gently tilted beds. The sandstones contain no embedded detritus derived from the shore as they do at Bone Canyon to the south. Perhaps this area stood higher on the sea bottom so that the detritus was swept away, and deposited lower down the slope, as at Bone Canyon. North of El Capitan the Bone Spring limestone is thus flexed into a position much higher than to the south. On the north side of Shumard Canyon the limestone stands 2,000 feet higher than it does south of El Capitan, and 1,000 feet higher than it does in Bone Canyon nearby. This uplift is only mildly shared by the overlying sandstones, and seems to have been largely completed before they were laid down. The upraised limestones were being eroded in early Delaware Mountain time, and the Brushy Canyon formation of that group overlaps their sloping surface. The overlap is so great that 1,000 feet of beds, the entire Brushy Canyon formation, is cut out between Bone Canyon and a point 2 miles to the north. The fold produced by this pre-Delaware Mountain uplift is known as the Bone Spring flexure. The feature was named by Blanchard and Davis,66 who called it the Bone Springs arch. It would seem from their paper that they considered the feature to be anticlinal, and to have a similar, opposing flank to the north. This view was contested at the time by De Ford.67 My work has failed to disclose a north flank to the feature and the term flexure is therefore used instead of arch. 
A good general view of the flexure can be seen in the panorama, plate 5, B, which shows the Bone Spring limestone rising from a low position below El Capitan to a high position below Shumard Peak, beyond which the beds flatten out northward. The structure of the beds shown in this view is given in section KK', plate 17. A closer view of the exposures in Shumard Canyon is shown on plate 12, B. The relations of the overlying and underlying beds to the flexure are shown on the map and sections of plate 9, and structure contours on the upraised surface of the Bone Spring limestone on the inset of figure 6.

SOME DETAILS NEAR BONE CANYON

The broader stratigraphic relations of the Bone Spring limestone and Delaware Mountain group are clear, but near Bone and Shumard Canyons local complexities tend to obscure them and deserve further explanation. The peculiar, cross-bedded structure of the black limestones, and the basins cut into the Victorio Peak gray member and filled by the Cutoff shaly member have already been described. To produce them, uplift and erosion must have taken place on the flexure before Bone Spring time came to an end. The conglomerates interbedded in the black limestone south of Bone Canyon, which are similar to those in the overlying Brushy Canyon formation, lend support to this idea, for they contain fragments not only of black, but also of gray limestone, and thus were not derived entirely from the break-up of the beds next beneath them.

Along the unconformity below the Cutoff member, the Victorio Peak member is deeply eroded, and the break seems more important than those in the black limestones below. In places along Shumard Canyon, this unconformity is more prominently exposed than that between the Cutoff and the sandstones above. This instance is local, however, and the general relations indicate that the younger unconformity is the major one.
The apparent trend of the Bone Spring flexure is east and west, at right angles to the northward trending outcrops, for most of the observable uplift and overlap take place in a northward direction, along the outcrop. Closer scrutiny of the rather narrow belt of outcrop, however, indicates that the actual trend of the flexure is north-northeast. The limestones on each west-projecting ridge rise higher than they do in the heads of the canyons to the east (inset, fig. 6), and a westward overlap of the overlying sandstones and conglomerates can be observed on the walls of Bone and other canyons (pl. 13, fig. B). Overlying the conglomerates near Bone Spring is a bed of gray-brown, dolomitic limestone which closely resembles the limestones of the lower division of the Victorio Peak member, which lies at about the same altitude to the north. This forms the 28-foot interval in section 15, plate 13. It might be mistaken for a tongue of the lower division projecting into and intergrading with the sandstones of the Brushy Canyon formation were it not that on the south side of the next ravine north of Bone Canyon it can be found overlapping the similar, older, gray-brown limestones (as shown at point 14b, pl. 9, and on fig. A, pl. 13) with the unconformable contact clearly exposed. Moreover, beneath the limestone bed in Bone Canyon, the conglomerate contains fragments of the upper division of the Victorio Peak member as well as of the lower division, thus proving that the bed is much younger than the lower division. Lloyd 68 considers that "the lower part of the sandstone series [Brushy Canyon] merges laterally with the gray limestone [Victorio Peak] just as the upper part merges into the lower part of the Capitan." His interpretation is based chiefly on the apparent relations of the limestone bed here referred to. This interpretation is not accepted in this report. 
RELATIONS NORTH AND SOUTH OF FLEXURE

South of the Bone Spring flexure there appears to be a continuous, gradational sequence from the black limestones of the Bone Spring, through the shales of the Cutoff member, into the sandstones of the Brushy Canyon formation. Deposition probably was nearly continuous from one formation to the other in this region. The gray limestones of the Victorio Peak member are not present between the black limestones and the Cutoff member, but they are not believed to be missing on account of erosion; instead, during Victorio Peak time, black limestone was probably being deposited south of the flexure while the gray limestone was being deposited north of it.

North of the flexure, the unconformity between the Bone Spring limestone and the Delaware Mountain group is not evident, and the strata of the two units lie parallel. The beds next beneath the contact belong to the Cutoff shaly member of the Bone Spring, and those next above the contact to the sandstone tongue of the Cherry Canyon formation. Near the north edge of the flexure, however, the Cutoff member below has been eroded away. Also, on the flexure, a great thickness of beds older than the sandstone tongue wedges in below the Cherry Canyon formation, and constitutes the Brushy Canyon formation (pl. 7, A). The absence of the latter north of the flexure indicates that a great, but nonevident break separates the Bone Spring limestone and Delaware Mountain group in that region.

Invertebrate fossils occur in various degrees of abundance in all the members of the Bone Spring limestone. In general, the faunas of all the members are similar, but there are some differences which appear to be related to differences in lithologic facies of the enclosing rocks. Considered as a whole, the fauna is closely related to that in the overlying Guadalupe series, although of slightly more primitive character.
It has few resemblances to that of the underlying Hueco, and still fewer resemblances to that of the Pennsylvanian beneath the Hueco. Some of the fossils from the black limestone beds of the formation were described by Girty69 in 1908, and the general aspect of the fauna of the Victorio Peak member was reviewed by him in 1926.70 Some brachiopods from the formation in the Delaware Mountains and the Sierra Diablo were described by King71 in 1931. The present investigation has furnished much additional information on the fauna, which is summarized below.

In this and succeeding discussions of the fossils of the Guadalupe Mountains section, information on the fusulinids is based on the work of Dunbar and Skinner,72 and that on the cephalopods on the work of Miller and Furnish.73 These studies, which to a great extent were based on collections made during the present survey, have already been published. Information on the other groups of fossils, particularly on the brachiopods, gastropods, and pelecypods, is based on the work of the late G. H. Girty, who was able to complete in manuscript a rather long summary of the collections shortly before his death in 1939. This summary, quoted in this report, is of particular value because it links the paleontological and stratigraphic ideas of his earlier work, in 1908, with the ideas obtained by other geologists from more detailed subsequent field work and collecting. Throughout his summary, Girty makes frequent comparisons between the faunas as he knew and described them in 1908 and faunas as they are revealed by the present larger collections.

Because this report is primarily a description of the physical stratigraphy of the southern Guadalupe Mountains, because of the large size of the available collections, and because of the preliminary nature of the ideas on many of the fossil groups, it does not seem desirable at this place to include the customary fossil lists. Instead, in the summary written by Dr.
Girty, the important features of each fauna are discussed, and only incidental reference is made to specific localities. A similar plan is followed in summarizing the results of Dunbar and Skinner and of Miller and Furnish, although the actual localities of their collections have been given in their publications. Although this method of presentation has some disadvantages, it is believed to have advantages for immediate purposes that outweigh the disadvantages. It is hoped that stratigraphers and paleontologists will find use for the material as it is given. Although the summary by Dr. Girty quoted herein was completed shortly before his death, he was unable to edit the manuscript in the manner he had contemplated; in its original state it was essentially a rough draft. In order to prepare it for publication, therefore, it was edited by P. B. King and J. S. Williams. King condensed and rearranged certain parts, so that as here given they are not exactly as written by Girty, although the original meaning and style are retained. Williams reviewed the terminology of the genera and species, which were not everywhere consistent in the several parts of the manuscript. Where discrepancies were found an attempt was made to determine the usage actually preferred by Girty at the time of writing. Most of his preferences could be determined from statements in the manuscript itself, but supplementary evidence was obtained by examination of other notes and manuscripts written by Girty that were available to Williams. Throughout the summary by Girty, the generic assignments given by him are retained, and no attempt has been made to incorporate generic changes that have appeared since Girty's death in 1939. In connection with the generic terminology as used, Girty comments as follows on that of the brachiopods. 
The generic names used for the fusulinids are those employed by Dunbar and Skinner in their publication of 1937, and those for the ammonoids are those employed by Miller and Furnish in their publication of 1940.

BLACK LIMESTONE BEDS

In most of the black limestone beds fossils are scarce, being represented by only occasional specimens. In a few layers, which are generally lenticular or nodular, and somewhat more granular than the rest of the rock, they are more abundant, and from these layers most of the known fauna has been obtained. Slight differences exist between the fossil assemblages in the different beds. In some, brachiopods predominate, in others gastropods, pelecypods, and cephalopods. According to Dr. Girty, the differences between the assemblages are not fundamental.

One of the most striking features of the black limestone fauna is the abundance of ammonoids at numerous localities. Nearly all the collections that have been studied, however, came from exposures near or a short distance north of the crossing of the outcrop by United States Highway No. 62 (localities 2920, 2967, 7413, 7691, 7720, and 8596). These ammonoids belong mainly to three species: Paraceltites elegans Girty, Texoceras texanum (Girty), and Peritrochia erebus Girty. At one locality (7720) there is also Agathiceras cf. A. girtyi Böse, and at another (7701), Perrinites hilli tardus Miller and Furnish.74 The genus Perrinites, although rare in the Guadalupe Mountains, is an abundant and characteristic fossil of the type Leonard series in the Glass Mountains, with which the Bone Spring limestone is correlated. According to Miller, a striking feature of the ammonoid specimens collected from the black limestone is that nearly all retain the living chamber, a fragile structure that is usually missing from specimens from other beds and other areas. This suggests that the shells were deposited in unusually quiet water.
Associated with the ammonoids are occasional nautiloids, which were represented in Girty's original collections by Metacoceras shumardianum (Girty). In the later collections Miller and Furnish75 have identified the same species, and in addition, Tainoceras sp., "Orthoceras" sp., and Stearoceras? sp.

By contrast with the ammonoids, fusulinids are nearly absent from the black limestone, although they are abundant in the gray Victorio Peak limestone to the north, where ammonoids are absent (compare fig. 11). Their rarity in the black limestone contrasts with their abundance in most other beds of the Guadalupe Mountains section. Within the area studied they have so far been observed at only one locality in the black limestone (7923), in a canyon a mile south of Bone Canyon. Here the black limestone contains Schwagerina setum Dunbar and Skinner, which is also found in the probably contemporaneous Victorio Peak limestones not far to the north.76 At the point of the Delaware Mountains, 18 miles south of El Capitan, R. E. King in 1928 collected the genus Parafusulina from the black limestone.

Regarding the remaining, much greater part of the fauna, Dr. Girty reports as follows:

VICTORIO PEAK GRAY MEMBER

Fossils are abundant in many beds of the Victorio Peak gray member, but are not always easy to collect, because of the hardness of the rock, and, in places, because of subsequent dolomitization or silicification. The material obtained during the present investigation therefore consists of a relatively small number of collections. Dr. Girty states that many of the specimens in these collections are so fragmentary that they can be identified only by careful comparisons, if at all. According to Dr. Girty, the faunas of the member closely resemble those of the black limestone beds, and are distinguished more by the absence of forms that are present in the black limestone, than by the introduction of novel or instructive elements.
Many of the collections consist entirely of brachiopods, and especially of the larger productids and spiriferoids. The fauna differs notably from that of the black limestone beds in the almost complete absence of cephalopods. No ammonoids have been found, and only one nautiloid (a Tainoceras, according to A. K. Miller). The fauna differs from that of the black limestone also in the rather great abundance of fusulinids in certain beds in the upper division (fig. 11, A). They belong to two species, Schwagerina setum Dunbar and Skinner, and Parafusulina fountaini Dunbar and Skinner.

The lower division of the Victorio Peak member is represented by only one collection, made on the south bank of Shumard Canyon at its entrance (locality 7725). For it Dr. Girty gives the following provisional list, with several indeterminate forms omitted.

The upper division of the Victorio Peak gray member is somewhat better represented by collections. The material from each locality is rather scanty, however, and the specific representations are mostly confined to two or three specimens. The largest collections were obtained on the crest of the ridge between Shumard and Shirttail Canyons, whose summit stands at 6,402 feet (locality 7690). Regarding the fauna of the upper division, Dr. Girty writes:

CUTOFF SHALY MEMBER

As will be recalled, the name Cutoff shaly member is given to discontinuous sets of beds at the top of the Bone Spring limestone, which are exposed in three general districts: the northwest part of the area, from which the name is derived; in Shumard Canyon, not far from Bone Spring, where it is separable into two divisions; and along the base of the Delaware Mountains in the southern part of the area. In all of these districts, the member contains some fossils, but the collections which have been made so far are too scanty to furnish much information on the correlation of the beds in the different districts.
Fossils are least abundant in the northwestern exposures, from which the member is named, and in the main part of the member only a poorly preserved imprint of a pelecypod was seen (locality 7650). Some of the black limestone beds near the base, however, contain many small brachiopod shells, but they have not been collected or studied. Several miles north of the New Mexico line, the member contains rather abundant specimens of Chonetes (locality 7727).

Only one collection was made in the member in the Shumard Canyon area. This collection was obtained from a lens of massive limestone interbedded in the black limestones of the lower division of the member on the south side of the south fork of Shumard Canyon (locality 7675). Regarding it, Dr. Girty writes:

Fossils are more numerous in the Cutoff shaly member in the southern part of the area, west of the Delaware Mountains. Here, many of the thin limestone beds contain fusulinids, which belong to an undetermined species of Parafusulina, and some contain brachiopods. The largest collection was made on the north side of Brushy Canyon in its lower course, from a limestone bed in the lower part of the member, which has here thickened to the rather unusual amount of 15 feet (locality 7666). On this collection, Dr. Girty reports as follows:

CONDITIONS OF DEPOSITION

The Permian rocks exposed in the Guadalupe and Delaware Mountains, and the Sierra Diablo, were laid down during a well-marked depositional cycle which formed the closing stages of the Paleozoic era. This cycle commenced with the Wolfcamp epoch of Carboniferous or Permian age. By the beginning of Wolfcamp time, the localized mountain-making and the still more widespread crustal unrest that had characterized the preceding Pennsylvanian time in the southwestern United States had largely ceased. Readjustments then began which brought into existence the depositional provinces of Permian time (shown on figure 3).
These provinces appear to have been broad, persistent tectonic features that had a marked influence on sedimentation. At the opening of the Wolfcamp epoch, deposition began in an advancing sea which spread over a deformed and eroded surface of Pennsylvanian and older rocks. From this epoch to the end of the Permian, a distinctive and characteristic set of deposits was laid down in the west Texas region, and sedimentation was interrupted by only minor pulsations which serve to divide one epoch from the next. In this report, Permian geologic history in west Texas is summarized at the end of the stratigraphic discussion, and on the maps of figures 13 and 14. Under the present heading, only those features that were directly related to the Guadalupe Mountains region are discussed.

In the Guadalupe Mountains region, the deposits of Wolfcamp and Leonard age are not completely revealed by exposures. Additional information is afforded, however, by the two wells already mentioned, and by exposures in the nearby Sierra Diablo. Judging by the thickness of sediments laid down (as suggested by plate 7, B), the Wolfcamp and Leonard epochs were fully as long and as important as the succeeding Guadalupe epoch, whose rocks are more completely exposed in the area of this report.

FACIES AND PROVINCES

During Leonard time (as represented by the Bone Spring limestone), and probably during Wolfcamp time (as represented by the Hueco limestone and other beds), two unlike facies were deposited in the Guadalupe Mountains region. Deposits of the one are black, petroliferous, shaly limestone, and of the other are light-gray, thick-bedded to massive limestone. The two facies tended to persist in separate areas, which correspond closely to the provinces of Permian time shown on figures 3 and 16, A.
Thus, the black limestone facies characterizes the southeast part of the Guadalupe Mountains region, or Delaware Basin of figure 16, A, and the gray limestone facies characterizes the northwest part, or Northwestern Shelf Area of that figure. The basin appears to have been a negative feature, with a marked tendency toward subsidence; the shelf was more positive, and either remained stable or did not subside as much. During Leonard time, the boundary between the provinces lay along the Bone Spring flexure of the Guadalupe Mountains which, it will be recalled, is bent down southeastward toward the basin area.

BLACK LIMESTONE FACIES

In the Delaware Basin conditions throughout the whole of Leonard time were nearly uniform, and the black limestones were laid down in successive beds without the admixture of much other material. Deposits representing this facies consist mainly of calcium carbonate, impregnated with bituminous material which imparts to them their characteristic color. There is also some argillaceous matter and a small amount of primary silica. Parts of the deposit are thinly laminated by light and dark bands in such a manner as to suggest that the amount of organic matter in the sea water fluctuated from seasonal or other causes, and that the water was sufficiently quiet for the material to be laid down in successive layers on the bottom.

Evidently the sea bottom during the time of deposition was not favorable to life, as great thicknesses of strata are nearly unfossiliferous. In many of the fossiliferous lenses, ammonoids are the chief fossils, and these animals were probably free-swimming organisms whose shells dropped to the bottom after death. The associated brachiopods and mollusks, which were certainly bottom-dwellers, are of relatively few species, and fusulinids are absent.
This general impoverishment, however, is not absolute, for some collections within the black limestone contain specimens of productids, spiriferoids, and other brachiopods that are abundant in the gray limestone facies. Further, the trilobites that have been found are not specialized forms but belong to the same species as those found elsewhere in the region in quite different types of deposits. Perhaps the less-specialized animals were occasional migrants into an environment that on the whole was not favorable to them. The black limestones were evidently laid down in quiet water. The bituminous material with which they were impregnated could not have been preserved unless there was little circulation of the water and such a lack of oxygen near the bottom that organic matter was deposited faster than it decayed. These assumed conditions are confirmed by the general poverty of bottom-dwelling organisms in the fauna, and the relative abundance of ammonoids, which swam nearer the surface. Quiet water conditions near the bottom are further indicated by the presence in the ammonoid specimens of the fragile living chamber, which would have been destroyed if the shells had accumulated in agitated water. The conditions just outlined closely resemble those under which the black shales of earlier Paleozoic systems presumably formed.78 Quiet-water conditions during deposition of black shale and limestone deposits do not necessarily indicate the depth of water under which the beds accumulated. There is, however, some evidence to indicate that the beds in the Bone Spring limestone were deposited in deep water. Relations at the Bone Spring flexure, outlined below, suggest that the water was deeper to the southeast, in the black limestone area, than to the northwest, in the gray limestone area. 
Moreover, the gray limestone deposits seem to have accumulated in agitated water, and it is difficult to see how such differences of deposition could have existed unless there had been also a difference in depth. Further, the Delaware Basin or area of black limestone deposits, received a greater thickness of sediments during Leonard time than the shelf area or area of gray limestone deposits. This greater thickness indicates that the basin area subsided more than the shelf area, and thereby entrapped more sediments. It is possible that subsidence was so rapid that sedimentation did not entirely keep pace with it, and the sea floor stood lower in the basin than on the shelf (sec. a, pl. 7, B). The black limestone deposits are notably poor in sand and other, coarser, clastics. The few thin, interbedded sandstone layers are very fine grained and consist of the more resistant minerals of igneous and metamorphic rocks. Evidently these sands were transported from a distant source. In its lack of coarser clastic material the black limestone contrasts markedly with the deposits of the Guadalupe series (Delaware Mountain group) that succeeded them, and also with contemporaneous deposits of the Leonard series in the Glass Mountains,79 on the southeast side of the Delaware Basin (fig. 13, B and C). In the Glass Mountains, the deposits include sandstones and conglomerates derived from the erosion of older Paleozoic rocks of the newly uplifted Marathon folded belt. Evidently they were not spread far northwestward into the basin. The few sandstone beds in the black limestone might have been derived from this source, but the fact that similar sandstones are interbedded in the gray limestone toward the northwest suggests that at least some of the sand also probably came from the opposite direction. In the marginal area, between the Delaware Basin and the northwestern shelf area, deposits of the black limestone and gray limestone facies interfinger. 
During the last half of Leonard (Bone Spring) time, the gray Victorio Peak member was spread out on the shelf area, extending as far southeastward as the edge of the Delaware Basin, where it apparently intergraded with black limestone. During the first half of Leonard time, black limestones extended for several miles farther northwestward toward the shelf, underneath the gray Victorio Peak beds. In the Guadalupe Mountains, exposures of the black limestone do not extend deeply enough to indicate their relations to the shelf area. In the Sierra Diablo, however, they are replaced near the shelf by limestone reefs, a part of the gray limestone facies. They overlap shelfwards on a surface of unconformity that separates the Leonard from the underlying Wolfcamp series.

In the Guadalupe Mountains, the southeastern edge of the gray Victorio Peak limestones follows the upper part of the Bone Spring flexure. This relation of depositional facies to a tectonic feature is probably more than accidental, and implies that the flexure was in existence at the time of deposition. The unconformities in the Bone Spring limestone in Bone and Shumard Canyons suggest contemporaneous movements on the flexure. Possibly also, the small-scale contortion in the black limestone farther southeast was caused by subaqueous gliding of the newly deposited sediments away from the upraised surface of the flexure.

On the Bone Spring flexure, the unconformity at the top of the Bone Spring limestone (between it and the Delaware Mountain group) is clearly much greater than the local unconformities within the Bone Spring. This condition might be taken to indicate that the main movement on the flexure came at the end of Leonard (Bone Spring) time, were it not for opposing evidence. During Leonard time, the water in the basin southeast of the flexure was deep. Further movement on the flexure would either deepen the water in the basin still more, or cause a marked uplift in the shelf area.
Neither of these events took place. Actually, as summarized in a later part of this report, the water in the basin during the first part of Guadalupe (Brushy Canyon) time was probably much shallower than during Leonard time. Also, the shaly, poorly resistant Cutoff member, the last deposit of the Bone Spring limestone, underwent almost no pre-Guadalupe erosion in the shelf area, and its beds lie parallel to those of the succeeding series. These conditions suggest that no uplift took place in the shelf area. The marked unconformity at the top of the Bone Spring limestone on the flexure thus probably resulted not so much from accentuation of tectonic movements along the edge of the Delaware Basin at the end of Leonard time as from some more widespread phenomenon, such as a general lowering of sea level in the basin, by regional uplift, eustatic change, or other causes. The Bone Spring flexure, although exposed in only a small area in the Guadalupe Mountains, probably had a wide extent along the northwest edge of the Delaware Basin (fig. 16, A). During late Leonard and early Guadalupe time, it certainly extended southwestward for some distance, as indicated by certain relations at the north end of the Sierra Diablo. Here outliers of the Cherry Canyon, or middle formation of the Delaware Mountain group, lie directly on the Bone Spring limestone, just as they do northwest of the flexure in the Guadalupe Mountains (pl. 7, A). The flexure is probably buried under the Salt Basin deposits east of the outliers, for farther east, in the Delaware Mountains, the Cherry Canyon is separated from the Bone Spring limestone by the full thickness of the Brushy Canyon or lower formation of the Delaware Mountain group.

GRAY LIMESTONE FACIES

The gray limestone deposits (Victorio Peak gray member) north of the Bone Spring flexure were probably laid down in shallower, clearer, better aerated water than the black limestones.
Their moderately thick beds include layers, traceable for relatively long distances, that were spread out in broad sheets. They are thus unlike the irregularly bedded, massive limestone deposits higher in the section, which have the form of reefs. The Victorio Peak deposits are better designated as limestone banks than as limestone reefs. The area of gray limestone deposition was a more favorable environment for life than the black limestone area. The many large, thick-shelled productids, spiriferoids, and other brachiopods found in the gray limestone probably found favorable living conditions in clear, shallow waters. The abundance of fusulinids in the gray limestones contrasts with their absence in the black limestones. Conversely, ammonoids which are abundant in the black facies are absent in the gray (fig. 11). It is possible that ammonoids originally lived in both areas, and in the gray limestone area their shells were largely destroyed in the agitated water and were not embedded in the sediments. Support for this suggestion is found in the fact that the nautiloids, whose life habits were similar to those of ammonoids but whose shells were stronger, are represented in the collections from both areas. Last Updated: 28-Dec-2007
Materials For Writing, And Forms Of Books

(Originally Published 1893)

PROBABLY the earliest efforts of the human race to record its thoughts and history were by scratching with some hard instrument on stone or bone. The permanence of the result has always made stone or metal a satisfactory substance to receive engraving, whether for sepulchral tablets, for some official records, such as State decrees, or for honorary inscriptions. Among obvious examples are the drawings of prehistoric man on the walls of caves, the Ten Commandments graven on stone, the Nicene Creed cut in silver by Pope Leo III's order (to fix the absolute form decreed by the second General Council), the Parian Chronicle, the Rosetta Stone, and tombs of all ages. It is on stone almost alone that we find in the early classical days of Rome the pure capital forms of letters, as on the tombs of the Scipios. And as material tends to act on style, and as curves are harder to grave than straight lines, writing on stone tends to discard the former and to encourage the latter, so that we find in such inscriptions a decided preference for angular forms of letters. But another very early material for writing was the wood or bark of trees. It was easy to obtain, soft, and fairly durable. Three of our common terms are derived from the custom of cutting or scratching on wooden boards or bark: the Latin liber (a book, properly the bark of a tree, whence such words as library, libretto), the Latin codex (or caudex, a tree-stump, then sawn boards, then a book, now narrowed to a manuscript book; compare codicil, a diminutive form), and perhaps the Teutonic word which appears in German as Buch and in English as book, meaning originally a beech tree and beechen boards. Next we come to the substance which has given us much of the terminology of books.
A common reed, chiefly found in Egypt, and known to the Greeks as papuros, and to the Romans as papyrus, was discovered to be, when properly prepared, a facile and cheap material for writing. The inner rind was cut lengthways into thin strips (bubloi), which were laid side by side in order. On this was glued, with the help of rich Nile water or other substance, another set of slips laid transversely across the former. This cross-formed substance, properly pressed, hammered and dried, presented a smooth but soft receptive surface for ink, and was most extensively used in classical times until parchment competed with it, or, more accurately, till the export of papyrus began to fail. The papyrus, however, was not used in the form of our books, but as a long roll, with the writing in broad columns set along its length. Birt, in his book Das antike Buchwesen (1882), has endeavoured to prove that there was a normal length of about thirty-eight letters in each line, but the length of the entire roll might be anything up to 150 feet. There are also a face and a back to papyri, a right and a wrong side for writing. In the British Museum there is a papyrus roll containing, in Greek, the funeral oration of Hyperides on Leosthenes, B.C. 323; on the other side of this is a horoscope of a person born in A.D. 95. Naturally, for some time it was believed that the horoscope was casually inscribed on the back of the Hyperides; but a closer examination has proved that the horoscope is on the face of the papyrus, and the Hyperides perhaps a school exercise accidentally entered on the back. So that A.D. 95 is not the terminus ad quem of the date, but the terminus a quo. Unfortunately, of all possible materials for permanent record, papyrus is among the worst.
Even when first written on, it must have seemed ominous that a heavy stroke was wont to pierce and scratch the smooth surface; so much so that in all papyrus records the writing is along the line of the uppermost layers or strips (not across them), and is also of necessity light, and hardly distinguishable into up and down strokes. This foreshadowed the time when, on the complete drying of the substance in course of years, the residuum would be fragile, friable, and almost as brittle as dead leaves. Every papyrus that comes into a library should therefore be at once placed between two sheets of glass, to prevent, as far as possible, any further disintegration. The terms used in connexion with writing in Greek, Latin and English are chiefly derived from the rolls of papyrus. Let us begin with two words which have had an interesting history. Our 'paper' is derived, through the Latin papyrus, from the Greek name of the Egyptian reed explained above. Thence it came to mean the papyrus as prepared to receive writing. How then has paper, which has always been made out of rags, usurped the name without taking over the material? Simply because the term came to signify whatever substance was commonly employed for writing; so when papyrus was disused (the latest date of its systematic use is the eleventh century), a material formed of rags was beginning to be known, and carried on, so to speak, the term. The Latin charta (paper) has had a partly similar history, for when first found it is applied to papyrus as distinguished from parchment. Still more interesting is the word Bible. Bubloi was the Greek term for the strips of the inner part of papyrus. Then the book formed of papyrus began to be called biblos and biblion (a diminutive form). The Romans took over the second word, but chiefly used it in the plural, biblia, which came later to be regarded as a feminine singular, as if its genitive were bibliae and not bibliorum.
Lastly, the word became specially and exclusively applied to The Book, the Bible, and as such has passed into English. Other terms which recall the days of papyrus are volume (Latin volumen, 'a thing rolled up,' from volvo, I roll; corresponding to the Greek kulindros), the long stretch of papyrus rolled up for putting away; the Latin term evolvere, to unroll, in the sense of 'to read' a book; and the common word explicit, equivalent to 'the end,' but properly meaning 'unrolled' ('explicitus'), the end of the roll having been reached. (It will be observed that 'explicit' is a vox nihili, and can only be properly explained as a contraction of 'explicit(us) est liber', 'the book is unrolled to the end.' The corresponding term is incipit, 'here begins,' which is a good Latin word.) So, too, the custom of writing on parchment with three or even four columns to a single page, as may be seen in our most ancient Greek MSS. of the New Testament, is probably a survival of the parallel columns of writing found on papyri. We next come to the most satisfactory material ever discovered for purposes of writing and illumination, tough enough for preservation to immemorial time, hard enough to bear thick strokes of pen or brush without the surface giving way, and yet fine enough for the most delicate ornamentation. Parchment is the prepared skin of animals, especially of the sheep and calf; the finer quality derived from the calf being properly vellum, and if from the skin of an abortive calf, uterine vellum, the whitest and thinnest kind known, employed chiefly for elaborate miniatures. Parchment has neither the fragile surface of papyrus nor the coarseness of medieval paper, and has therefore long enjoyed the favour of writers. Its only disadvantages in medieval times were its comparative costliness and its thickness and weight, but neither of these was a formidable obstacle to its use. The name of this substance contains its history.
In the first half of the second century before Christ, Eumenes II, King of Pergamum, found himself debarred, through some jealousy of the Ptolemies, from obtaining a sufficient supply of papyrus from Egypt. From necessity he had recourse to an ancient custom of preparing skins for the reception of writing by washing, dressing and rubbing them smooth; probably adding some new appliances, by which his process became so famous that the material itself took its name from the city; in Latin, Pergamena, 'stuff prepared at Pergamum,' whence the English word parchment. Both parchment and paper have had less effect than stone or papyrus on styles of writing, because both are adapted to receive almost any stroke of the pen. They have rather allowed styles to develop themselves naturally, and are specially favourable to flowing curves, which are as easy as they are graceful in human penmanship. Paper has for long been the common substance for miscellaneous purposes of ordinary writing, and has till recent times been formed solely from rags (chiefly of linen), reduced to a pulp, poured out on a frame in a thin watery sheet, and gradually dried and given consistence by the action of heat. It has been a popular belief, found in every book till 1886 (now entirely disproved, but probably destined to die hard), that the common yellowish thick paper, with rough fibrous edge, found especially in Greek MSS. till the fifteenth century, was paper of quite another sort, and made of cotton (charta bombycina, bombyx being usually silk, but also used of any fine fibre such as cotton). The microscope has at last conclusively shown that these two sorts are simply two different kinds of ordinary linen-rag paper. A few facts about the dates at which papyrus, parchment and paper are found may be inserted here. The use of papyrus in Egypt is of great antiquity, and the earliest Greek and Latin MSS. we possess are on papyrus; in the case of Greek of the fourth century B.C., in Latin of the first century A.D.
It was freely exported to Greece and Rome, and, though it gradually gave way before parchment for the finest books, from the first century B.C. onwards, it was not till the tenth century A.D. that in Egypt itself its use was abandoned. Practically in about A.D. 935 its fabrication ceased, although for Pontifical Bulls it was invariably used till A.D. 1022, and occasionally till 1050. Parchment has also been used from the earliest times; and its use was revived, as we have seen, in the second century before Christ, and lasted till the invention of printing, after which it was reserved for sumptuous editions, and for legal and other durable records. Paper was first manufactured (outside China) at Samarkand in Turkestan in about A.D. 750; and even in Spain, where first it obtained a footing in Europe (in the tenth century), it was imported from the East, not being manufactured in the West till the twelfth century; but from that time its use spread rapidly. In England there was a paper-mill owned by John Tate in 1495, when Bartholomaeus Glanville's De proprietatibus rerum was issued on native paper. Watermarks in paper (see p. 16) are entirely a Western invention, found first towards the end of the thirteenth century, and never found at all in Oriental paper. Besides stone, papyrus, parchment, and paper, the materials used for writing, though numerous, are rather curious than important. Tablets of wood, hinged like a book and covered with wax, on which letters were scratched with a small pointed metal rod (stilus, whence our words style, stiletto, etc.), were common at Rome in classical and later times, and are believed to have suggested the form of our ordinary books. For private accounts and notes these wax tablets are said to have been in use in Western Europe until the time of printing.
Various metals, especially lead, have been made use of to bear writing; and also bones (in prehistoric times), clay inscribed when soft and then baked (as in Assyria), potsherds (ostraka), leaves, and the like.

B.—Forms of Books

We now come to the forms of books—the way in which they are made up. In the case of papyrus, as has already been observed, we almost always find the roll-form. The long strip was, of course, rolled round a rod or two rods (one at each end) when not in use, much as a wall-map is at the present day. With parchment the case has been different. Though in classical times in Rome, so far as can be judged, the roll-form was still in ordinary use even when parchment was the material, and though, in the form of court-rolls, pedigrees, and many legal kinds of record, we are still familiar with the appearance of a roll, the tendency of writers on parchment has been to prefer and perpetuate the form of book best known at the present day, in which pages are turned over by the reader, and no membranes unrolled. The normal formation of a parchment book in the Middle Ages was this: four pieces of parchment, each roughly about 10 inches high and 18 inches broad, were taken and were folded once across, so that each piece formed four pages (two leaves) as a basis for making a quarto volume. These pieces were then fitted one inside another, so that the first piece formed the 1st and 8th leaves, the second the 2nd and 7th, the third the 3rd and 6th, and the fourth the two middle leaves of a complete section of eight leaves or sixteen pages, termed technically in Latin a quaternio, because made of four (quatuor) pieces of parchment. When a sufficient number of quaternions were thus formed to contain the projected book, they were sent in to the scribe for writing on, and were eventually bound. Many variations of form, both smaller and larger than quarto, are found, and often more or fewer pieces than four make up the section or quire.
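The nesting rule described above reduces to a simple formula: in a quire of n folded sheets, the i-th sheet (counting from the outside in) supplies leaves i and 2n − i + 1. A minimal sketch of that pairing (the function name is illustrative, not from the source):

```python
def quire_leaves(n_sheets=4):
    """Return, for each sheet in a quire of n_sheets nested folded
    sheets, the pair of leaves it forms. Sheet i (outside in) yields
    leaves i and 2*n_sheets - i + 1; a quaternio of 4 sheets thus
    gives 8 leaves, or 16 pages."""
    total_leaves = 2 * n_sheets
    return {sheet: (sheet, total_leaves - sheet + 1)
            for sheet in range(1, n_sheets + 1)}

# A quaternio: sheet 1 forms leaves 1 and 8, sheet 2 leaves 2 and 7,
# sheet 3 leaves 3 and 6, and sheet 4 the two middle leaves, 4 and 5.
for sheet, (front, back) in quire_leaves(4).items():
    print(f"sheet {sheet}: leaves {front} and {back}")
```

The same formula covers the "more or fewer pieces than four" variations noted above, e.g. a quinternion of five sheets gives ten leaves.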
Paper was essentially different from parchment, in that it could be made of larger size and folded smaller; whereas the cost of skins was almost prohibitive, if very large and fine pieces were required. As a fact, paper has almost always been used in book and not roll-form. The normal formation of paper-books has been this: a piece about 12 inches high by 16 inches wide was regarded as a standard size. This was folded once across, and if this singly-folded sheet was regarded as the basis of a section, and the whole book was made up of a set of these sections, it was called a folio book; if, however, the singly-folded sheet was folded again across, and this was treated as a section (containing four leaves or eight pages), the book made up of such sections was called a quarto. Once more, if the doubly-folded sheet was folded yet again, and this trebly-folded sheet was treated as a section (containing eight leaves or sixteen pages), the book was called an octavo. The methods of folding the sheet so as to produce a duodecimo, a 16mo, etc., and the use of half-sheets to form sections, are matters which concern printing rather than writing. But it should be clearly understood that, whereas we now mean by a folio a tall narrow book, by a quarto a shorter broad book, and by an octavo a short narrow book, judging by size and shape; in the earlier days of paper, these terms indicated, not size or even shape, but form, that is to say, the way in which the sheets of paper were folded up to form sections; and that it is only owing to the fact that a certain size of paper was generally adopted as a standard that the terms came to have their modern signification. So true is this, that some early folios are quite small, and many quartos larger or smaller than what we call quarto. But there is one infallible test of a true folio, quarto, or octavo. Observe the diamond on the figures on pp.
15-16, and the lines drawn across them. The diamond represents the watermark, a trade design (such as a jug, a unicorn, a pair of scissors, etc.) inserted by the maker in every sheet, and the lines are 'chain-lines,' the marks where the wire frames supported the half-liquid paper-sheet as it gathered consistency by being dried. The position of the watermark and the direction of the chain-lines were fortunately invariable, and therefore (as may be easily seen by a paper model) every true folio has the watermark in the centre of a page and the chain-lines perpendicular; every quarto has the watermark in the centre of the back, not easy to see, and the lines horizontal; and every octavo has a watermark at the top of the back at the inner edge, and the lines perpendicular. These points are not necessarily true of modern books.

C.—Instruments and Ink

On this subject few words are necessary. For hard substances and for wax and clay, a graving tool or pointed metal rod is necessary; for papyrus and parchment and paper, a pen. Pens have till modern times always been of one of two kinds, either made of a reed (calamus, arundo, a reed-pen), or made of a quill, usually from a bird's feather (penna, a quill-pen). The latter appears to be the later in invention, but is found as early as the sixth century of our era. Ink (atramentum) has hardly varied in composition from the earliest times, having been always formed in one of two ways: either, as was the common practice in classical times, by a mixture of soot with gum and water, which produces a black lustrous ink, but is without much difficulty removed with a sponge; or by galls (gallic acid) with sulphate of iron and gum, which is the modern method, though also so ancient as to be found on the Herculanean rolls. At Pompeii ink of this kind was found still liquid after seventeen centuries of quiescence.
The chief coloured inks known to antiquity were red, purple, green, and yellow; gold and silver liquids were sometimes used, especially when the parchment had been stained purple to enhance the effect. For the colours used in illumination, Chapter V. may be consulted. So far we have been concerned with passive substances prepared and presented to the scribe, to become instinct with life when the message of the author is consigned to the expectant page. Our next chapters will naturally treat of the writing itself, and of scribes and their ways, the living elements in a book.
Mastectomy (Removal of the Breast) for Breast Cancer

Mastectomy is removal of the breast. Other nearby tissue may also be removed if it appears that cancer may have spread to these areas. All mastectomies remove the whole breast. Because the size and location of tumors and where the cancer might have spread differ from one person to another, the amount of other tissue removed during surgery also varies. Mastectomy procedures include:
- Total or simple mastectomy, which is the removal of the whole breast.
- Modified radical mastectomy, which is the removal of the breast, some of the lymph nodes under the arm, and sometimes part of the chest wall muscles.
- Radical mastectomy, which is the removal of the breast, chest muscles, and all of the lymph nodes under the arm (axillary lymph node dissection). This surgery is rarely used now.
Depending on the location of the tumor in the breast or other factors, some women may be able to have a skin-sparing or nipple-sparing mastectomy. Skin-sparing mastectomy leaves most of the skin that was over the breast, except for the nipple and the areola. Nipple-sparing mastectomy saves the skin over the breast as well as the nipple and areola. Some women choose to have breast reconstruction after a mastectomy. Reconstruction can be done during the same surgery as the mastectomy, or it may be done later as a separate procedure.
- Breast Cancer: Should I Have Breast Reconstruction After a Mastectomy?
In addition to surgery, you may have radiation therapy, chemotherapy, hormone therapy, or a combination of these treatments.

By: Healthwise Staff | Last Revised: June 28, 2011
Medical Review: Sarah Marshall, MD - Family Medicine; Douglas A. Stewart, MD - Medical Oncology
If you were in medical school just 20 years ago you might have been taught that the human brain was incapable of producing new brain cells. Doctors and brain scientists believed this because the nervous system did not seem capable of repairing itself and no new brain cells had ever been demonstrated. New technologies that let us study the brain in more detail are changing many old beliefs about how the brain works, how to maintain brain fitness and memory, and what the brain is capable of. The brain is estimated to have 100 billion neurons and 100 trillion connections between neurons. However, what we still don't know about the brain is considerable. Think of the brain as an entire universe that we are just beginning to journey into. When brain scientists use the word plasticity they don't mean plastic—they are talking about the brain's ability to reorganize itself and adapt to new knowledge and experience. Research over the past 20 years shows that this ability to change and adapt is not lost with age. Neurogenesis is the brain's ability to actually make new nerve cells called neurons. It was once thought that we were stuck with our original 100 billion (a number you might think would be enough!), but we now know we are making new neurons, even as we get older, in an area of the brain that is responsible for memory and learning. One finding from the recent research: "Studies show that diverse, mentally stimulating tasks result in more brain cells, more robust connections among those cells, and a greater ability to bypass disease related trouble spots in the brain." —AARP Magazine As we have learned more and more about the brain's ability to add new connections and new neurons throughout life, a growing number of software companies have developed programs to help people find the right exercise for their brains.
Using terms like "brain fitness," "brain cross-training" and "cognitive stimulation," these programs seek to help you remember more, prevent brain decline, strengthen your attention, and even help you drive better. The logic behind keeping your brain active certainly seems convincing. Most of these brain game training programs are based on some solid brain science, and there are many studies to support the "use it or lose it" approach to brain health. A study in the New England Journal of Medicine even suggests that participating in brain stimulating activity may decrease your risk of Alzheimer's disease. The question is whether these new software programs are any better than a stimulating conversation or a crossword puzzle. The jury may still be out, but there are plenty of smart people working on designing brain exercises that strengthen brain functions. Another thing to consider is that these programs are actually fun to use and, unlike other types of "treatments," they are drug free. They also present a challenging alternative to crossword puzzles, Sudoku and Scrabble games and can be an enjoyable addition to your mix of diversions. "As scientists gain more knowledge about the relationship between sensory perception, memory, and cognition, they are learning to design brain exercises that strengthen brain function." —Michael Merzenich, PhD, neuroscientist The science behind brain games is solid. Research supports the concept of "use it or lose it" for brain health. You might find them to be both fun and effective. Remember that in addition to brain fitness games, research shows that there are other things you can do, such as practicing healthy brain habits, to keep your brain functioning at a high level. Brain training software is a growing market that targets seniors and baby boomers. There are new competitors in the market all the time, but the major manufacturers are:
Bold and beautiful, the mesmerizing Arctotis hirsuta is one of the most captivating species in an annual spring display. This robust annual species grows to 450 mm in height and diameter. It is slightly fleshy with a branched stem. The thinly hairy, lyrate (lyre-shaped) to pinnatifid (divided feather-like) leaves are up to 200 mm long and often auriculate (with ear-like lobes at the base). There are various flowerheads of about 40 mm in diameter, borne on leafy stalks. The ray florets are orange, yellow or cream-coloured, sometimes with purplish markings at the base. The disc florets are often black. The involucral bracts surrounding the flower heads are finely woolly and arranged in three rows. The innermost bracts have a transparent tip. Flowering is from July to October. Arctotis hirsuta is not threatened and has a status of Least Concern (LC) (Raimondo et al. 2009).

Distribution and habitat

The gousblom is found on sandy slopes and flats, often along the coast, in the Western Cape from Elandsbaai to the Agulhas Plain.

Derivation of name and historical aspects

The genus name Arctotis is derived from the Greek arktos, which means a bear, and otos, an ear. This implies that the scales of the pappus (appendages of the fruit) look like the ears of a bear. The specific epithet, hirsuta, means hairy in Latin. This refers to the hairs present on the leaves and stem. Arctotis species are often referred to as African daisies. Some species were previously placed under Venidium. There are currently more than fifty species known from southern Africa to Angola. As a spring annual species, the gousblom completes its entire lifecycle in a single season, from germination through flowering and seed production until it eventually dies off. The process starts in autumn with the first cool air and the season's first rain. These are favourable conditions and significant growth will take place.
From the middle of winter to the middle of spring, blooming occurs and the game is on to attract possible pollinators. Thus the large, colourful flower heads come into play. Bees and beetles have been noticed to regularly visit the flowers of Arctotis hirsuta. It is not certain, though, which ones are responsible for doing the pollination. Seeds are light in weight, which aids wind-dispersal and thus ensures that seeds get scattered over a larger area. This entire process is normally completed by the time summer arrives. This adaptation allows this species to cope with the harsh summer conditions. The next generation, the seed, slightly buried in the soil like its predecessors, is ready to germinate, but only under favourable conditions that will ensure the survival of the species.

Uses and cultural aspects

No medicinal or cultural uses of the gousblom have been recorded. This species is an ornamental winner with its bright display of colour and somewhat robust habit during late winter and early spring. The large yellow, orange or cream-coloured flower heads are simply bold and beautiful and it rightfully demands inclusion in the spring annual garden.

Growing Arctotis hirsuta

The gousblom grows easily. Sow seed during March in seed beds or seed trays using a light, well-draining medium which is placed in a sunny position. The medium could be a light, sandy soil or a mixture of bark, compost or river sand. There is no restriction on what materials to use for a perfect medium. The most important requirement is good drainage. The medium needs levelling and a good watering beforehand. If the medium has been used before, it is advisable to dig the soil over first. Sow the light seeds evenly on a windless day, water gently and cover with a thin layer of sand or bark. The seed will germinate within the first two weeks. Prick seedlings out from the beds or trays as soon as they are large enough to be handled.
Seed can also be sown directly into garden beds, but germination could be irregular. Fast-growing weeds could also provide fierce competition. The transplanting stage is vital and care needs to be taken that the young seedlings are regularly watered. Plant this sun-loving species close together. Preparing flowering beds often requires the loosening up and aeration of the compacted soil. Add organic fertilizer and well-rotted compost and use the rotovator to work it into the soil. Level the soil out by using an iron rake. To help suppress the development and growth of weeds or other unwanted plants, it could be beneficial to apply an additional mulch layer of compost. If using summer annuals in an area not receiving summer rainfall, this could also have the advantage of retaining soil moisture. The lack of compost for a season or two need not deter one from planting, as good well-drained soil will suffice. In time, though, it would be advisable to add compost. Flowering normally starts sometime during July and continues through to the end of September/October. Seed will be ready for harvesting from September onwards. Arctotis hirsuta is equally suitable for en masse mixed plantings of its various colour forms or for mixed plantings with other annual species in small or larger flower beds. There are various other annuals which either occur with this species in its natural habitat or which have been found to complement it rather well. These include: Ursinia anthemoides (marigold), U. speciosa (Namaqua ursinia) and U. cakilefolia, Heliophila coronopifolia (blue flax), Senecio elegans (wild cineraria), Dorotheanthus bellidiformis (Livingstone daisy), Dimorphotheca sinuata (Namaqualand daisy), D. pluvialis (white Namaqualand daisy), Felicia dubia (dwarf felicia) and blue F. heterophylla.
References and further reading
- Cowling, R. & Pierce, S. 1999. Namaqualand: a succulent desert. Fernwood Press, Cape Town.
- Goldblatt, P. & Manning, J. 2000. Cape plants. A conspectus of the Cape flora of South Africa. Strelitzia 9. National Botanical Institute, Pretoria & Missouri Botanical Garden, Missouri.
- Le Roux, A. 2005. Namaqualand. South African Wild Flower Guide 1. Botanical Society of South Africa, Cape Town.
- Manning, J. & Goldblatt, P. 1996. West Coast. South African Wild Flower Guide 7. Botanical Society of South Africa, Cape Town.
- Powrie, F. 1998. Grow South African plants. A gardener's companion to indigenous plants. National Botanical Institute, Cape Town.
- Raimondo, D., Von Staden, L., Foden, W., Victor, J.E., Helme, N.A., Turner, R.C., Kamundi, D.A. & Manyama, P.A. (eds) 2009. Red list of South African plants 2009. Strelitzia 25. South African National Biodiversity Institute, Pretoria.
- Stearn, W. 2002. Stearn's dictionary of plant names for gardeners. Timber Press, Portland, Oregon.

Kirstenbosch National Botanical Garden
Celebrate Princeton Invention: Craig Arnold
Posted December 21, 2009; 1:08 p.m.

Able to adjust its focus more than 100,000 times faster than the human eye, the TAG Lens invented by mechanical and aerospace engineering professor Craig Arnold and his colleagues has applications in materials processing and imaging. (Photo: Brian Wilson)

Name: Craig Arnold, associate professor of mechanical and aerospace engineering

Invention: Tunable Acoustic Gradient Index of Refraction Lens (TAG Lens)

What it does: The TAG Lens features a cylinder made of a special material that vibrates when electricity is passed through it, enclosed inside a fluid-filled chamber. Controlling the flow of electricity changes the vibrations that propagate through the fluid, changing the lens's focus more than 100,000 times faster than the human eye can refocus.

Inspiration: After developing a low-cost lens to shape laser beam output into different patterns, Arnold and his colleagues focused their attention on understanding how the device worked and its potential applications. Finding that the lens had the unique ability to focus rapidly at a wide range of focal lengths, they realized its potential went far beyond the original intended purpose, with numerous applications in materials processing and imaging.

Collaborators: Euan McLeod, a 2009 Ph.D. recipient, and Alexander Mermillod-Blondin, a former postdoctoral researcher in the Arnold lab
British historian and journalist Holland (Fortress Malta) vividly recalls the final year of World War II in Italy in this masterful narrative. The controversial decision to invade Sicily and Italy following the North African campaign was "purely opportunistic" and intended to draw German resources away from the main action in Normandy. As critics had feared, Italy, with its rugged mountains, was "a truly terrible place to fight," and the campaign became a bloody war of attrition. The final toll on combatants, civilians, and the Italian landscape was staggering; total casualties exceeded a million and entire cities were leveled. Cassino, the site of a decisive battle, was "utterly - 100 per cent - destroyed" and Benevento resembled "a post-apocalyptic ruin." Holland's balanced account of the savage fighting and wholesale destruction draws on the eyewitness testimony of Allied and German combatants, Italian partisans and Fascist loyalists. He concludes, echoing historian Rick Atkinson's excellent recent account of the campaign, The Day of Battle, that despite its terrible cost, the fight in Italy played a decisive role in defeating Germany. A complementary volume to Atkinson's account focusing on the earlier stages of the campaign, this is popular history at its very best: exhaustively researched, compellingly written and authoritative.
Thanksgiving Recommended Books
Grades: PreK–K, 1–2, 3–5, 6–8

The following titles for grades K-8 are available in The Teacher Store at Scholastic.com.

1621: A New Look at Thanksgiving by Catherine O'Neill Grace and Margaret M. Bruchac. Photographs by Cotton Coulson and Sisse Brimberg (Grades 3-5)
In October of 2000, Plimoth Plantation cooperated with the Wampanoag community to stage an historically accurate reenactment of the 1621 harvest celebration. 1621: A New Look at Thanksgiving exposes the myth that this event was the "first Thanksgiving" and is the basis for the Thanksgiving holiday that is celebrated today. This exciting book describes the actual events that took place during the three days that the Wampanoag people and the colonists came together.

The First Thanksgiving by Linda Hayward (Grades PreK-1)
Give young readers the familiar story behind our tradition of Thanksgiving Day, detailed in this easy-to-read history storybook. The Pilgrims' journey, the trials they endure while at sea, and all of their amazing adventures are conveyed with vibrant illustrations and simple words for utmost comprehension.

Pilgrims' First Thanksgiving by Ann McGovern (Grades PreK-1)
Full-color illustrations bring to life this historically accurate account of how the children of Plymouth Colony helped contribute to the first Thanksgiving celebration.

Clifford's Thanksgiving Visit by Norman Bridwell (Grades PreK-2)
What child wouldn't like to have a pet as special as Clifford the Big Red Dog? In this adventure, Clifford experiences an unusual Thanksgiving journey, ending with an appreciation of overcoming difficulties, celebrating tradition, and spending time with family.

Squanto's Journey: The Story of the First Thanksgiving by Joseph Bruchac (Grades K-3)
Travel back to 1620 as an English ship called the Mayflower lands on the shores inhabited by the Pokanoket people.
As Squanto welcomes the newcomers and teaches them how to survive in the rugged land they called Plymouth, young readers are treated to a story ending with the two peoples feasting together in the spirit of peace and brotherhood.

Molly's Pilgrim by Barbara Cohen (Grades K-3)
Molly nears her first Thanksgiving in America and her classmates giggle at her Yiddish accent and make fun of her unfamiliar ways. Now her mother embarrasses her with a doll that looks more Russian than Pilgrim. Will Molly discover something to be thankful for?

Gracias, el pavo de Thanksgiving by Joy Cowley (Grades PreK-2)
In this warm holiday story, a young Puerto Rican boy saves the life of his pet turkey with help from his close-knit New York City family and neighborhood. Beginning Spanish vocabulary is woven into the text, giving young readers a unique Thanksgiving story experience.

If You Were at the First Thanksgiving by Anne Kamma (Grades 1-4)
Told from a child's perspective and illustrated in full color, this book brings the first Thanksgiving to life. Details about daily life put young readers into the middle of the action.

If You Sailed on the Mayflower in 1620 by Ann McGovern (Grades 1-4)
Answer children's questions about the Pilgrims with an enlightening Thanksgiving story. With the beautiful illustrations, young readers can imagine being right on the ship, waiting to arrive in a new land. As a part of the If You series, this book helps bring history to life and nurture imagination.

The Journal of Jasper Jonathan Pierce: A Pilgrim Boy, Plymouth, 1620 by Ann Rinaldi (Grades 4-8)
By promising seven years of labor to a fellow traveler, Jasper earns passage aboard the Mayflower and closes the door on his troubled past. His account of the arduous ocean crossing and first year in the New World shows young readers his physical and spiritual growth as he learns the strengths and weaknesses in himself, his Puritan people, and his Native American neighbors.
See all our Thanksgiving resources in The Teacher Store.
Kojic acid is a fine white powdery substance composed of tiny crystals. The formal discovery of kojic acid occurred in 1989, and since then, the substance has been used widely in skin care products due to its numerous benefits. The ingredient is obtained from mushrooms that are native to Japan and is a by-product of the fermentation process used to produce the alcoholic beverage sake. In skin care products, kojic acid functions primarily as a skin-lightening agent. To understand kojic acid's effects, it is necessary to understand how the skin gets it color. The body naturally produces a pigment known as melanin through specialized cells known as melanocytes. A person's genes determine how much melanin the body naturally produces. In people with fair skin, only small amounts of melanin are manufactured by the melanocytes, while copious amounts of the pigment are made by the cells of those with dark complexions. The production of melanin in the skin does not occur in fixed amounts. Often, the cells produce more melanin in response to the environment or internal conditions in the body. When the skin is exposed to ultraviolet radiation from the sun, the melanocytes increase their production activities, causing the skin to tan. Repeated exposure to the sun can result in a permanent increase in melanin production in spots on the skin, causing small freckles and larger sun or age spots to form. Melanin production can also increase when the skin becomes chronically inflamed. This is a common problem among acne sufferers who have prolonged discoloration of their skin after their acne blemishes heal. Hormonal changes that occur in the body during pregnancy can also spur melanin production, leading to a discoloration on the face that is known as melasma or chloasma. The overproduction of melanin caused by inflammation and hormonal activity typically declines over time, resulting in a gradual fading of the darkened skin. 
When kojic acid is applied to the skin in concentrated amounts, the chemicals in the ingredient work on the melanocytes, interfering with the production of melanin. The exact way in which kojic acid lessens melanin production is not known, but many experts believe that the ingredient prevents an enzyme known as tyrosinase from beginning the reactions in the cells that are necessary for manufacturing the pigment. Prior to the discovery of kojic acid, the ingredient hydroquinone was largely the only ingredient used for skin whitening. Hydroquinone is known to cause skin irritation in many individuals, and for these people, dermatologists often recommend kojic acid as an alternative method for treating skin discoloration. Those with very sensitive skin may still develop redness or itching from the use of kojic acid, but overall, the ingredient is better tolerated than hydroquinone. The effects of kojic acid have been reported as being identical to those of hydroquinone or slightly less noticeable. In addition to its skin-lightening abilities, kojic acid is classified as an antioxidant. This class of nutrients has the ability to counteract the effects of unstable molecules called free radicals, which have the potential to cause oxidative damage to the skin cells. By limiting the effects of free radicals, kojic acid helps to prevent the formation of signs of aging that occur when the cells that produce the skin's vital structural proteins become damaged. Kojic acid is also an antibacterial agent, meaning that it interferes with the processes that bacteria cells must perform to thrive and reproduce. By disrupting these processes, kojic acid causes the death of bacteria. Some dermatologists recommend the use of mild concentrations of kojic acid for addressing acne blemishes, which are often caused by bacterial infections in the pores.
When used in skin care products, kojic acid is a largely unstable compound and has the potential to turn brown if it is not stabilized by additional ingredients. As a result, some skin care companies use a more stable derivative of kojic acid known as kojic dipalmitate in place of the ingredient. Consumers should be aware that studies have not found kojic dipalmitate to be as effective in lightening the skin as kojic acid, so products that contain this version may not be as beneficial for treating hyperpigmentation. Since the discovery of kojic acid, conflicting studies have been reported about the long-term safety of the ingredient. Results in some clinical trials have established a link between the ingredient and some forms of cancer, while others have found that kojic acid has no carcinogenic effects. Experts do generally agree that any cancer-causing properties of kojic acid would only be problematic if the body was exposed to large quantities of the ingredient. These levels greatly exceed the amount of kojic acid that is actually found in skin care products.
Discovered: Male fish who engage in same-sex flirting lure in female fish; cheese dates back 7.5 millennia; Americans love public transportation once they give it a chance; depressed mice cheer up after brain stimulation.

Gay fish attract more female mates. Nature has its own version of a pop culture archetype: the highly attractive man who, unfortunately, remains unavailable to the women swooning over him. A team of researchers led by the University of Frankfurt's David Bierbach has found a species of tropical fish in which males who flirt with other males are perceived as more attractive by potential female mates. They observed Poecilia mexicana, or Atlantic mollies, engaged in "mate copying," meaning that females will try to mate with a male fish they've seen interacting sexually with members of their own sex. "Males can increase their attractiveness towards females by homosexual interactions, which in turn increase the likelihood of a male's future heterosexual interactions," says Bierbach. "We do not know how widespread female mate choice copying is, but up to now it is reported in many species, including fruit flies, fishes, birds and mammals [including] humans." [BBC News]

Cheese is really old ... ahem ... well aged. Ancient pottery recovered on a dig in Poland reveals that cheese making could date back as far as 5,500 B.C. University of Bristol researchers led by Richard Evershed discovered fatty milk residue on the shards of sieves. That ruled out previous theories that they were used to make honey or beer. "It's almost inconceivable that the milk fat residues in the sieves were from anything else but cheese," comments University of Vermont nutrition professor Paul Kindstedt. We're glad Neolithic people discovered cheese, because their cuisine sounds really boring without it, consisting mostly of porridge. "They probably would not be the first choice for a lot of people today," Kindstedt says of the cheeses these sieves could have produced.
"But I would still love to try it." [AP] If you can convince Americans to take public transportation, they'll love it. It's hard to convince drivers to try public transportation, so Maya Abou-Zeid of the American University of Beirut and Moshe Ben-Akiva of M.I.T. cut a deal with their experimental subjects: they covered their fare for a brief trial period. They found that 30 percent of Boston car commuters were convinced to switch to public transportation, and 25 percent actually stuck with it for six months. So what was preventing them from switching before? Mostly our societal opinions on public transportation, the researchers found. "Because of a generally weaker public transportation culture in Boston than in Switzerland, M.I.T. participants who switched might not have seriously considered using public transportation until they experimented with it during the trial," they write. [Atlantic Cities] Brain stimulation cheers up depressed mice. Stanford University neuroscientist Karl Deisseroth has been able to quell depression in mice by stimulating and silencing certain parts of the rodents' brains with lasers. Using optogenetics, the researchers behind two new papers could control nerve cells by adjusting fiber-optic light beams. By better understanding the neural pathways that regulate depression in mice, Deisseroth hopes to develop treatments for humans suffering from depression. "In this way, bit by bit, we can piece together the circuitry," he says. "It’s a long process that’s just starting, but we have a foothold now." [Science News]
UC Davis study shows that the increase in obesity among California school children has slowed

Researchers recommend expanded fitness testing and outreach to reduce childhood obesity among the very youngest children

After years of increases in the rates of childhood obesity, a new UC Davis study shows that the increase slowed from 2003 to 2008 among California school children. While encouraged by the results, the authors expressed concern about a group of youngsters currently driving the increase in obesity: children under age 10. "Children who were obese entering the fifth grade remained obese in subsequent years as well, despite improvements in school nutrition and fitness standards," said William Bommer, professor of cardiovascular medicine at UC Davis and senior author of the study. "And we suspect that this trend begins before kindergarten."

Published in the February 2012 issue of the American Heart Journal, the results indicate a major turning point in efforts to reduce the impact of a chronic condition linked with a host of serious adult health issues that can begin in childhood, including heart disease, diabetes, breathing issues and some cancers. Bommer served on a state task force that recommended standards to help protect K-12 children and teens from diseases related to sedentary living and unhealthy eating. As a result, new laws in 2005 expanded fitness programs, nutrition education and alternatives to high-fat, high-sugar foods and beverages in California schools. Since 1996, California schools have reported to the state Department of Education the results of a variety of fitness and body composition evaluations for fifth, seventh and ninth graders. Body composition evaluations included body mass index (BMI) measures, which determine if a child has a healthy weight or is overweight or obese.
Data on all students from 2003 to 2008 were provided to Bommer to evaluate and gauge the success of the new standards. For the current study, he and his colleagues included data on a total of 6.3 million students for whom complete fitness test results and body composition evaluations were available. There were some encouraging results. While childhood obesity is still on the rise (2 percent more children were overweight and obese in 2008 than in 2003), the rate of increase is slowing. National studies in prior decades showed annual increases in obesity among children and teens between 0.8 percent and 1.7 percent each year. For the current study, the rate of increase in California was an average of 0.33 percent per year. In addition, while the results of fitness tests varied (abdominal strength and trunk extensor strength worsened overall, while upper body strength and flexibility improved overall), there was a significant increase in the percent of children with healthy aerobic capacity. "This was particularly heartening, because cardiovascular and respiratory endurance directly correlate with reduced risks of heart disease and diabetes later in life, especially if it is maintained over time," said Bommer. One concern, however, was that students with lower aerobic capacity and upper body strength fitness scores and higher BMIs tended to live in counties with lower median household incomes (less than $40,000 per year) or with higher unemployment. "We clearly need to do more to ensure that children, regardless of where they go to school, are benefiting from the recommended health standards," said study lead author Melanie Aryana, a UC Davis researcher in cardiovascular medicine. "Expanding efforts to ensure that all California schools have the resources they need to make healthy changes will help." The team's strongest recommendation related to reducing the trend toward early onset, persistent obesity among younger school children. 
This generation could eventually reverse recent advances in reducing heart disease risks and mortality, according to Bommer. He advises earlier fitness testing, including during preschool, to better monitor this increase together with interventions that specifically address unhealthy weight prior to age 10. "Our study proves that nutrition and physical activity standards can help fewer children become obese during a critical time in their lives for establishing long-term healthy habits," said Bommer. "But just imagine how much more we can do to reduce the impact of obesity if we are just as successful much earlier in children's lives." In addition to Bommer and Aryana, Zhongmin Li, UC Davis associate professor of internal medicine, was a study coauthor.
Network Articles & Tutorials

- AMD, the Berkeley Automount Daemon - a public domain automount daemon
- An Architectural Overview of UNIX Network Security - By Robert B. Reinhardt - The goal of this paper is to present my concept of a UNIX network security architecture based on the Internet connectivity model and Firewall approach to implementing security.
- ARP networking tricks
- BSD Sockets: A Quick And Dirty Primer - By Jim Frost, February 13, 1990
- Curing remote-access ailments with ssh
- DNS Operations Guide - Domain Name Server Operations Guide for BIND
- DNS Tips and Tricks - Generating cache file with dig; Recovering from an SOA typo; Ultrix needs primaries in host file; SLIP and BIND; What is a Lame Delegation?; Terminology: domain, zone, label; CNAMEs as RR targets; Local dummy zones; Legal characters in hostnames; Checking if a domain is registered already; Setting up a resolver; Given a choice of servers, which one is queried?
- The purpose of this tutorial is to provide basic information about FDDI, a networking protocol used in Local Area Networks (LANs). By Keith McGuigan - email@example.com
- Integrating Your Machine With the Network - Very comprehensive networking resource and reference from the Unix System Administration Independent Learning project. Ethernet, DNS, NFS... you name it, it's here.
- The ISDN Shop
- MBone How to guide - Dan's Quick and Dirty Guide to Getting Connected to the MBONE
- Network Administrator's Guide (NAG) - This is the reference on Linux networking. I'd advise you to save yourself toner and eyestrain and buy the book, but if you need a quick reference you can always hit this page.
- Network Computing Success
- Network Information Services Plus Tutorial - What it is and what it isn't. By Douglas W. Stevenson
- Starting NFS daemons, exporting/sharing file systems, mounting remote file systems, automount; includes sample files and man pages.
- Basic overview of NFS.
If you don't understand it, start here and read the last two paragraphs.
- Subnet Addressing - by Ron Cooney
- Securing your data and e-mail with PG
- Introduction to the Internet Protocols (TCP/IP) - By Scott Newton
- TCP/IP primer - by Hal Stern
- Using the CMU SNMP Library To Build an SNMP Manager. Thomas L. Georges (Harris Corp).
- Network Traffic Management paper (uses HP products as examples). By Peter Phaal (commissioned by HP). Download Instructions
- Trusted Hosts - by Rik Farrow
- This documentation covers the use of UUCP with smail instead of sendmail.

Related WWW Sites - Network Management Papers

Copyright © 1994-2005 Unix Guru Universe
Schools have banned cupcakes, issued obesity report cards and cleared space in cafeterias for salad bars. Just last month, Michelle Obama's campaign to end childhood obesity promised to get young people moving more and revamp school lunch, and beverage makers said they had cut the sheer number of liquid calories shipped to schools by almost 90 percent in the past five years. But new research suggests that interventions aimed at school-aged children may be, if not too little, too late. More and more evidence points to pivotal events very early in life -- during the toddler years, infancy and even before birth, in the womb -- that can set young children on an obesity trajectory that is hard to alter by the time they're in kindergarten. The evidence is not ironclad, but it suggests that prevention efforts should start very early. Among the findings are these:

- The chubby, cherub-like baby who is growing so nicely may be growing too much for his or her own good, research suggests.
- Babies who sleep less than 12 hours are at increased risk for obesity later. If they don't sleep enough and also watch two hours or more of TV a day, they are at even greater risk.

Some early interventions are already widely practiced. Doctors recommend that overweight women lose weight before pregnancy rather than after, to cut the risk of obesity and diabetes in their children; breast-feeding is also recommended to lower the obesity risk. But weight or diet restrictions on young children have been avoided. "It used to be kind of taboo to label a child under 5 as overweight or obese, even if the child was -- the thinking was that it was too stigmatizing," said Dr. Elsie M. Taveras of Harvard Medical School, lead author of a recent paper on racial disparities in early risk factors. The new evidence "raises the question whether our policies during the last 10 years have been enough," Dr. Taveras said.
"That's not to say they've been wrong -- obviously it's important to improve access to healthy food in schools and increase opportunities for exercise. But it might not be enough." Much of the evidence comes from an unusual long-term Harvard study led by Dr. Matthew Gillman that has been following more than 2,000 women and babies since early in pregnancy. Like children and teenagers, babies and toddlers have been getting fatter. One in 10 children under age 2 is overweight. The percentage of children ages 2 to 5 who are obese increased to 12.4 percent in 2006 from 5 percent in 1980. Yet most prevention programs have shied away from intervening at very young ages, partly because the school system offers an efficient way to reach large numbers of children, and partly because the rate of obese teenagers is even higher than that of younger children -- 18 percent. The Robert Wood Johnson Foundation, which helped finance Dr. Taveras's study, is spending $500 million by 2015 to fight childhood obesity, but only in children 3 and up. And a multimillion-dollar National Institutes of Health childhood obesity project that is giving out $8 million over eight years explicitly excludes pregnant women and infants under 1.
- A first improvement would be to simplify the assembler program to make it more efficient. The program already works, but it would run faster if some parts were removed (this would make the program harder for a third party to understand, but the display would be more fluid).
- It is also possible to write the program without using interrupts, which would make it both simpler and faster.
- You could also build a wooden structure around the tower, for example, and mount it more firmly (if you want to use it in a disco, say), because the tower is rather fragile. The microcontroller could also be incorporated into this wooden structure to make the whole unit transportable.
Not thinking about a problem for a while doesn’t just give you a fresh perspective when you come back to it. It also allows your more creative unconscious to get to work as it “…ventures out to the dark and dusty nooks and crannies of the mind”. That’s according to Ap Dijksterhuis and Teun Meurs at the University of Amsterdam, who were keen to show that the benefits of taking a break from thinking hard about a problem are not merely passive (for example, by freeing you from an incorrect line of thought), but that unconscious thought actually offers an alternative, active mode of thinking that is more divergent and creative. In one experiment Dijksterhuis and Meurs asked 87 students to think of as many new names for pasta as they could, giving them five examples of existing names that all began with the letter ‘i’. Those students who were engaged in a distracter task for three minutes before giving their suggestions thought of far more varied names than students who were given three minutes to concentrate on thinking of new names (they mostly thought of new names beginning with ‘i’). In another experiment, students were asked to think of places in Holland beginning with the letter ‘A’. Those students who were distracted before being asked to give their suggestions named a wide variety of cities, towns and villages, whereas students who were given time to think of places, and students who answered immediately, tended to just name the most obvious main cities in Holland. Finally, students were asked to name as many uses as they could for a brick. Again, students who were distracted by a different task before giving their suggestions, didn’t name more uses, but were judged by two independent raters to have proposed more creative and unusual uses than students who were given dedicated time to think, or than students who answered immediately. 
“Upon being confronted with a task that requires a certain degree of creativity, it pays off to delegate the labour of thinking to the unconscious mind”, the authors concluded. Dijksterhuis, A. & Meurs, T. (2006). Where creativity resides: The generative power of unconscious thought. Consciousness and Cognition, 15, 135-146. Link to recent research, also by Dijksterhuis, showing that it's best not to consciously deliberate when making big decisions like which house or car to buy. BBC coverage here. Link to article in The Psychologist celebrating psychology's rediscovery of 'the irrational' (BPS members only). Link to New Scientist special issue on creativity. Link to Scientific American Mind issue on creativity.
<urn:uuid:5b94a494-9a50-4605-987b-4a850c77e335>
CC-MAIN-2013-20
http://bps-research-digest.blogspot.com/2006/03/be-creative-dont-even-think-about-it.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.979841
552
2.765625
3
When a child is in pain and crying, a loving parent wants nothing more than to make the pain go away. Ear infections can be very painful, and often a parent will request antibiotics from their pediatrician or family doctor to treat the infection. The American Academy of Pediatrics (AAP) has issued new guidelines for identifying and treating childhood ear infections and would like to see fewer antibiotics prescribed. The guidelines more clearly define the signs and symptoms that indicate an infection that needs treatment. They also encourage more observation, with follow-ups, instead of antibiotics. This includes some children under the age of two. Most children with ear infections get well on their own and can be safely monitored for a few days. For children with recurrent infections, the guidelines advise physicians and parents on when it is time to see a specialist. "Between a more accurate diagnosis and the use of observation, we think we can greatly decrease the use of antibiotics," said the lead author of the new guidelines, Dr. Allan Lieberthal, a pediatrician at Kaiser Permanente Panorama City, in Los Angeles, and a clinical professor of pediatrics at the Keck School of Medicine at the University of Southern California. The guidelines say that there are definitely times when antibiotics should be prescribed, such as when children have a severe ear infection. Severe is defined as a fever of 102.2 degrees or higher, significant pain, a ruptured eardrum with drainage, or, for children two years or younger, an infection in both ears. These account for fewer cases, but studies have shown that such children benefit from antibiotics given right away. The last set of guidelines was issued in 2004. Those guidelines stimulated new research that has provided evidence for the new AAP guidelines, which will appear in the March issue of Pediatrics.
Lieberthal said the biggest change is the definition of the diagnosis itself. Experts say that the new definition is more precise. Because of the different stages of ear infections, diagnosis can be tricky. The AAP offers detailed treatment suggestions that encourage observation with close follow-ups as long as the child is not having severe symptoms, but leaves it up to the discretion of the physician whether or not to prescribe antibiotics. Previous guidelines recommended that antibiotics be prescribed for children under two with ear infections. Pain management is also an important component of the new guidelines. Antibiotics can take up to two days before they start to improve symptoms, so if a child has fever or pain they should be given pain-relieving or fever-reducing medications. The new guidelines also state that children, even those with recurrent infections, shouldn't be on long-term daily antibiotics to try to prevent infections from occurring. Long-term antibiotic use has its own downsides. Children can develop a rash and diarrhea (which can cause dehydration). The biggest concern is that the bacteria will develop resistance to the antibiotic, making it ineffective over time. When children have recurrent ear infections they should be referred to an ear, nose and throat specialist. Recurrent is defined as three or more ear infections in a six-month period, or four or more infections in a one-year period (with at least one infection occurring in the previous six months). The new guidelines also recommend staying current on your child's vaccine schedule, especially the pneumococcal conjugate vaccine (PCV) and the flu shot. "Studies show that anything that decreases viral infection will decrease the incidence of ear infections," Lieberthal said. Many parents are beginning to see the logic of not over-using antibiotics, but some are still unaware of the dangers.
Physicians may now be more assertive about watchful waiting and follow-ups when a child's ear infection isn't severe. That may not comfort the parent of a crying child in pain, but it may be the best approach for the child in the long run. Michelle Healy http://www.usatoday.com/story/news/nation/2013/02/25/ear-infections-new-guidelines/1935493/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+usatoday-NewsTopStories+(News+-+Top+Stories)
<urn:uuid:70c1fd84-f743-4503-a5fd-1a8c0a90dde0>
CC-MAIN-2013-20
http://cnyhomepage.com/fulltext?nxd_id=35252&d=1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950748
871
3.203125
3
What happens when you let two bots have a conversation? Cornell researchers Igor Labutov, Jason Yosinski and Hod Lipson find out. Follow the links at the bottom of this post for more about "AI vs. AI." Laughing babies, talking dogs and Rebecca Black may be Internet sensations, but if you want to add something more substantive to your viral video diet, turn your dial to dueling chatbots, dancing Ph.D. theses and other highlights from the past year's surfeit of science videos. Talking bots can be just as surprising and silly as talking dogs. Take "AI vs. AI," for example. Cornell researchers Igor Labutov, Jason Yosinski and Hod Lipson took two Cleverbot artificial-intelligence programs, hooked them up to each other, and typed in "Hi" as an ice-breaker. Hilarity ensues. "We just assembled the pieces, the audio and the avatars, and let the program run," Lipson, an associate professor at the Cornell Creative Machines Lab, told me today. The funniest line in the video comes when one AI program tells the other that they're chatting together as robots. The other bot replies, "I am not a robot, I am a unicorn." Where did that come from? "The conversations are based on millions of conversations that it had before," Lipson said. "Probably this term is something it had encountered in some conversation with a human." The best guess is that someone made a reference to the unicorn from Lewis Carroll's "Through the Looking Glass," and somehow that stuck in the Cleverbot's electronic brain. The takeaway is that artificially intelligent chatbots can become as petulant and irrational as the humans who made them. This Cleverbot conversation provides further evidence of that. ("I'm talking about you ... how you are a creep," one clone-bot tells another.) Here are 10 other clever and creepy science videos from 2011 to while away the minutes with.
I've added links to more information about each of them at the bottom of this item: Science educator James Drake put together 600 pictures from the International Space Station to create this video view of an orbital night flight. It's been viewed more than 6 million times on YouTube since September. Follow the links at the bottom for more night-flight videos. The top video in this year's "Dance Your Ph.D" contest was "Microstructure-Property Relationships in Ti2448 Components Produced by Selective Laser Melting: A Love Story" from Joel Miller on Vimeo. Follow the links at the bottom to watch more winners from the "Dance Your Ph.D" video file. One of the year's most trafficked videos is "A Day Made of Glass," which depicts Corning's vision for a glassy future. It's been viewed more than 16 million times on YouTube since February. Follow the links at the bottom of this story for more about the future of glass. An octopus rises from the deep at the Fitzgerald Marine Reserve in California ... and walks over land on its legs. It turns out this behavior is not all that uncommon. The video is among Txchnologist's top 10 science videos. Follow the links at the bottom for more about walking octopi and the Txchnologist list. Speaking of octopi, here's a soft robot that crawls along a surface like an octopus out of water. Follow the links at the bottom to see more videos from Chemical & Engineering News. Soft robots may look cute, but this hard-charging AlphaDog Proto looks downright creepy. It's being developed by Boston Dynamics with funding from DARPA and the U.S. Marine Corps. The first version of the complete robot will be ready in 2012. Follow the links at the bottom to learn more about AlphaDog. Minute Physics focuses on the faster-than-light neutrino research in its latest video. Follow the links listed below for more from Minute Physics.
Quantum levitation sounds like a science-fiction phenomenon, but the Superconductivity Group at the University of Tel Aviv shows that it really, really works. Watch this report from TODAY.com's Dara Brown, and follow the links at the bottom of this post to learn more. In one of a series of math-themed videos, Vi Hart takes potshots at pi and talks up tau instead. And she proves she can make a cherry pie. Follow the links at the bottom for more about Hart and Tau Day. The "Readers Choice" honors in the 2011 Labby Awards went to "Weaver Ants" by Mark Moffett and Melissa Wells. This video was posted by thescientistllc on Vimeo. Follow the links below for more about the Labbies. Update for 8:35 p.m. ET: For 10 more must-see, humorous science videos, check out this Tree of Life blog posting by UC-Davis biologist Jonathan A. Eisen. He says his No. 1 pick, the "Bad Project" Lady Gaga parody, is "simply awesome" — and I simply agree. More about the videos: - Cleverbots at Cornell: AI vs. AI - How the Cleverbot chats like a human - Night flights: Sleigh ride in orbit - Night flights: The best of NASA's night lights - Ph.D. dance-off makes science sexy - A Day Made of Glass: The story from Corning - Future of Tech: The evolution of glass - Scientific American explains the walking octopus - Txchnologist: Ten of 2011's top science videos - Top 10 videos of 2011 from C&EN, including soft robot - Four-legged battlefield robot evolves into 'AlphaDog' - Minute Physics' YouTube channel - Video wows with quantum levitation - Vi Hart's math blog | The Tau Manifesto - The Scientist's 2011 Labby Awards | Doctor Bugs More year-end reviews: - Cast your vote for the Weird Science Awards - 11 scientific twists from 2011 - The biggest ancient mysteries of 2011 - The year in space | 2011 slideshow - Who's on the A-list for bad celebrity science? Alan Boyle is msnbc.com's science editor. 
Connect with the Cosmic Log community by "liking" the log's Facebook page, following @b0yle on Twitter and adding the Cosmic Log page to your Google+ presence. You can also check out "The Case for Pluto," my book about the controversial dwarf planet and the search for new worlds.
<urn:uuid:8f334ce1-cd67-4c1b-b102-02226b501b74>
CC-MAIN-2013-20
http://cosmiclog.nbcnews.com/i-am-not-a-robot
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927418
1,362
2.59375
3
Carlos Andres Perez, twice Venezuela’s President, in 1974 and 1989, died yesterday at 88. A controversial figure, CAP, as he was known, was twice in exile as a young Adeco activist, in 1948 and the 1950′s, and was in charge of the fight against guerrillas during Romulo Betancourt’s presidency from 1959 to 1964, first as Director General of the Ministry of Interior and Justice and later as Minister. He developed an image of being tough during this time. When the 1973 Presidential campaign arrived, Romulo Betancourt quickly said he would not be a candidate, leaving the field open for CAP. It was the first multimedia electoral campaign in Venezuela’s history, with CAP projecting an energetic image (he was a tireless worker), visiting all corners of the country and defeating Lorenzo Fernandez of the incumbent COPEI party. Once elected, CAP was dramatic in the first few months of his presidency, nationalizing oil and iron on his first day in power and benefiting from the sharp rise in oil prices. But CAP, like most Venezuelan Presidents, had no economic knowledge, and his Government was a hodgepodge of Cepal-like recipes and the conception that the Government could do it all. But he dazzled the population: in his first month in power, he cleaned up Caracas, froze the price of arepas (which made areperas disappear in short order) and decreed that all elevators had to have an operator, as a way of creating employment (Pleno empleo, full employment, was his motto). The economy boomed, thanks to the oil windfall, but the same windfall hid all of the problems as CAP developed his vision of the “Gran Venezuela”. Money was thrown at steel, aluminum and technology projects in which the Government was the owner or provided the financing, but there was little control and/or know-how to make them successful.
He did try to protect some of the windfall, creating the Fondo de Inversiones de Venezuela, reduced oil production because so much money was not needed, and maintained the pre-nationalization structure of the oil industry, creating PDVSA and naming General Ravard to preside over it. The boom was so huge that everyone benefited: poverty reached the lowest levels in Venezuela’s history. He created the Mariscal de Ayacucho program that sent 10,000 Venezuelans abroad for mostly graduate degrees, protected wild areas in National Parks, created the oil research institute INTEVEP, and built important hydroelectric projects. He was a democrat and he was a populist, a bit of a megalomaniac, worried about his image and his legacy. He gave a boat to Bolivia, which has no ports, as a symbol of its fight to have access to the sea. He reached out to Fidel Castro while shunning the dictators of the South, and made it attractive and easy for thousands of highly educated people from those countries to move to Venezuela to help in his push to increase the number of university students. But his economic policies had as their central theme intervention by the State. He removed the independence of the Venezuelan Central Bank while increasing salaries periodically, which debased the currency, leading to inflation. Venezuela was not ready for the huge inflows, and there were lots of corrupt people ready to make a lot of money off the Government. By the end of his term, corruption charges, including the infamous Sierra Nevada refrigerated-ship scandal, tarnished his image. He was brought to trial because of that case; curiously, it was Jose Vicente Rangel who cast the deciding vote to exonerate CAP. That was CAP: he was capable of talking to everyone and anyone, and even his staunchest enemies felt that he was someone they could talk to.
In his last year in power, oil prices dropped, forcing CAP to cut the budget by 10%. Venezuelans had the feeling that things were worse for the first time in many years (little did they know!) and his party lost. CAP spent the ten years required by law between terms traveling around the world, involving himself with the South Commission and talking to world leaders. This changed his ideas, but still, he had little economic knowledge, and as he ran for President in 1988 he promised a return to the heyday years of his first term. But it was not to be. CAP reached out to a group of well-educated non-adecos, including some who had been involved in studies on how to change the state. It was not until they began talking to the people of the Lusinchi Government, after CAP was elected, that they realized how dire the situation was. International reserves were less than US$400 million. After a lavish “crowning” with all of the pomposity that was simply out of place, the CAP Government realized that it needed help from the IMF and imposed an adjustment program, a “shock” program that included increasing gasoline prices by 100%, interest rate increases, the increase of public tariffs, freeing of prices that had been frozen for years, eliminating tariffs and allowing the currency to float. One month after taking power, having won with 56% of the popular vote, riots started: the “Caracazo”, four days of rioting and protests against the gasoline price increase that cast a shadow over CAP’s Presidency. He believed people had the right to protest, so he did little the first two days, and the protests and the looting got out of hand. In the end an estimated 276 people died, and losses from the looting ran into the millions. His Government was a lame-duck Government even before he started. But he pressed on. His intuition was right; that, he was very good at.
He implemented or began to implement many of the reforms suggested by the Commission for the Reform of the State, including the election of Governors, tax reform and the general decentralization of the Government. He was changing things very fast. But his own party, AD, felt it had been replaced by these “technocrats”, and he had opposition from within. His cabinet was composed of very knowledgeable, very well prepared people, most of whom had no political experience. CAP was supposed to take care of the politics, but he did not; it was an ego thing, and that was what doomed him. Policies were working; the economy grew by over 9% in 1991 after all the adjustments, and CAP thought he had no worries. A group of people, the self-styled “Notables”, mostly intellectuals who had always opposed CAP and envied his popularity, began calling for his removal. Chavez followed this with his coup in February 1992 (which had been in the works for a decade!), weakening the Government further. When it was discovered that CAP had used funds from the secret slush fund to provide security to Violeta Chamorro in Nicaragua, and had exchanged the money at a preferential rate when the Government was ready to devalue, he was accused and impeached. He was later sentenced to 28 months in prison and charged with other crimes. He was elected Senator in 1998, which gave him immunity, but the 1999 Constitution eliminated the Senate and this rule, removing the protection he had. He never returned to Venezuela. He was, in the end, a true democrat, too ignorant on economic matters to have a coherent plan, but smart enough to follow his instincts with his collaborators. He allowed corruption to flourish around him; there was so much money to be made. But he did many positive things, implementing changes in his second Government that were very important. Some of them even took power away from him! He was willing to change, but sadly he did not sell the change the same way he sold himself.
On a relative scale, he was not that bad: better than Caldera, who would never change; better than Luis Herrera, who had no program for changing the country; better than Lusinchi, who had no clue. Betancourt was better, because he understood economics, oil and what the country needed; he had a program. Leoni simply followed Betancourt’s plans with honesty, surrounded by many of the same people. And of course, he was much better than Hugo, who is not a democrat and has failed at all of his economic initiatives, allowing the largest corruption levels in Venezuela’s history and failing to leverage the biggest oil boom in the country’s history for the benefit of the people. May Carlos Andres Perez rest in peace!
<urn:uuid:591c7c37-b085-4c96-b42f-e935c2e634a7>
CC-MAIN-2013-20
http://devilsexcrement.com/2010/12/26/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.98957
1,745
2.703125
3
Amplitude Modulation Synthesis is a type of sound synthesis where the gain of one signal is controlled, or modulated, by the gain of another signal. The signal whose gain is being modulated is called the "carrier", and the signal responsible for the modulation is called the "modulator". In classical Amplitude Modulation, or AM Synthesis, both the modulator and the carrier are oscillators. However, the carrier can also be another kind of signal, such as an instrument or vocal input. Amplitude Modulation using a very low frequency modulator is known as Tremolo, and the use of one audio signal to Amplitude Modulate another audio signal is known as Ring Modulation. Simple AM Synthesis Classical AM Synthesis is created by using one oscillator to modulate the gain of another oscillator. Because we are changing the gain of the carrier oscillator from 0 (no gain) to 1 (full gain), the modulating oscillator must output a signal which changes between 0 and 1. This is most often done at audio frequency rates, from 20 Hz and up. In this case, the sawtooth waveform of a [phasor~] is used as the modulator, and the sine waveform of an [osc~] is the carrier. Tremolo is a form of Amplitude Modulation where the gain of an audio signal is changed at a very slow rate, often at a frequency below the range of hearing (approximately 20 Hz). This effect is commonly used to alter the sound of organs or electric guitars. Since a sine wave is often used for a smooth-sounding tremolo effect, in this patch we have taken the output of an [osc~], which normally moves between -1 and 1, and scaled it so that its output is now from 0 to 1. This is known as adding a DC Offset to the signal. For more discussion on this, please see the chapter on DC Offset. You can also modulate one audio signal with another audio signal (i.e. a signal which has both positive and negative values). This effect is called Ring Modulation. If you have a microphone connected to your computer, try the following patch.
The sound of your voice will enter Pd through the Analog to Digital Converter [adc~] object (the line in from the soundcard), and be modulated by the sawtooth wave of a [phasor~] object. Notice that there is no sound when only one audio signal is present (i.e. when you are not speaking). This is because one audio signal multiplied by zero (no audio signal) will always be zero. And the louder the input signal is, the louder the output will be. The Ring Modulation effect was often used in science fiction movies to create alien voices. You may want to use headphones when running a microphone into Pd to prevent feedback (the output of the speakers going back into the microphone and making a howling sound).
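Outside Pd, the same signal math can be sketched in a few lines. The following Python sketch (not part of the original patches; sample rate and frequencies are arbitrary choices) illustrates classical AM with a 0-to-1 modulator, the DC-offset scaling used for tremolo, and ring modulation as a plain multiply of two bipolar signals:

```python
import math

SR = 44100  # sample rate in Hz (a common, arbitrary choice)

def sine(freq, n, sr=SR):
    """Sample n of a sine oscillator, output in -1..1 (like [osc~])."""
    return math.sin(2 * math.pi * freq * n / sr)

def am_sample(carrier_hz, mod_hz, n):
    """Classical AM / tremolo: scale the modulator from -1..1 to 0..1
    (the 'DC offset' trick), then use it to modulate the carrier's gain."""
    modulator = 0.5 * (1.0 + sine(mod_hz, n))   # now in 0..1
    return modulator * sine(carrier_hz, n)

def ring_mod_sample(carrier_hz, mod_hz, n):
    """Ring modulation: both signals keep their negative halves, so the
    output is silent whenever either input is silent."""
    return sine(carrier_hz, n) * sine(mod_hz, n)

# One second of a 440 Hz tone with a 5 Hz tremolo.
tremolo = [am_sample(440.0, 5.0, n) for n in range(SR)]
```

Because the tremolo modulator stays between 0 and 1, the output never exceeds the carrier's own range; in ring modulation the product of two bipolar signals can flip sign, which produces the characteristic sum-and-difference sidebands.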
<urn:uuid:e9baa87e-0f04-4c4f-9725-7d1675d7851f>
CC-MAIN-2013-20
http://en.flossmanuals.net/pure-data/ch021_amplitude-modulation/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94995
616
3.609375
4
The Sacramento Valley is the northern component of California's Central Valley (complementing the San Joaquin Valley). About 200 miles long, it extends from Shasta Dam in the north to the Sacramento River Delta in the south. On the west, the valley is bordered by the Coast Range, including the Mendocino National Forest. On the east, the valley is defined by the volcanic northern portion of the Sierra Nevada. Through the middle of the valley runs the Sacramento River, the largest river in California. The Feather River also runs along much of the length of the valley, and the American, Yuba, and Bear rivers enter it from the east. From the west, Stony Creek, Cache Creek, and Putah Creek enter the valley, rivers in their own right. Exactly in the center of the valley, the Sutter Buttes rise out of flat farmland. Touted as the 'world's smallest mountain range', they reach over 1000 feet in height but are only a few miles across and nearly perfectly round when viewed from above. The Sutter Buttes are a long-dead volcano, the southernmost of the Cascade chain. The main city in the Sacramento Valley is, not surprisingly, Sacramento, the state capital. Also found here are the cities of Redding, Red Bluff (the areas around the northern part of the valley have very red soil), Chico, and Davis. Once a vast series of wetlands, the Sacramento Valley is now mostly used for agriculture, as the soil is extremely fertile. Sadly, suburban sprawl is taking over much of this fertile land as the city of Sacramento expands. By far the most important industry here is agriculture. Hot summers, mild winters, fertile alluvial soil, and abundant water from the Sierras combine to make this one of the most productive areas in the world. Walnuts, corn, tomatoes, onions, wheat, rice, and many other crops are grown here. The city of Sacramento is largely supported by industries associated with the Capitol.
Tourism is minimal, although the area supports some excellent fishing and very productive waterfowl hunting. If you are going to visit this valley, avoid the summer. Summer temperatures commonly reach above 100 degrees, and the valley is more humid than most other areas west of the Rockies. The summer is very dry; summer rain falls only every one or two years. Winters may be rainy, but are often dominated by the notoriously dense and resilient tule fog. Spring and fall can be very windy. Although winter freezes aren't uncommon, snow is almost unknown. A little-known fact about the valley, especially its southern parts, is that tornadoes sometimes occur here, usually in the spring. They are usually quite weak, but damage has been recorded.
<urn:uuid:5f3e1eeb-a916-4f12-a3aa-c8a192ed2a5e>
CC-MAIN-2013-20
http://everything2.com/title/Sacramento+Valley
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944423
568
3.234375
3
- 10 to 15 minutes - Take turns introducing each other while pretending you are on a stage. - Stand up, bow, and tell all the good things you can about the other person, his hobbies, good qualities, etc. Be sure to over-dramatize. - Take turns introducing other family members, present or not. - Challenge your child to introduce a best friend or a favorite teacher. - Think of famous people and take turns introducing them. Copyright © 2004 by Susan Kettmann. Excerpted from The 2,000 Best Games & Activities with permission of its publisher, Sourcebooks, Inc. To order this book visit Amazon.com.
<urn:uuid:917d251d-f393-43ee-afe9-4b708927c6bd>
CC-MAIN-2013-20
http://fun.familyeducation.com/manners/activity/35581.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.893032
141
3.015625
3
Weekly Washington Words Friday, December 7 2012 This week’s Washington Words is about income tax rates. “Tax Rates” – Nothing gets people up in arms in the political arena like the mention of tax rates, but many don’t think of tax rates in the context of the complex environment that is the U.S. tax code. In general, when tax rates are being discussed in the news, what is being referred to are the marginal tax rates. Marginal tax rates exist for each tax bracket and change as income rises.

2012 Tax Year Federal Income Tax Brackets (single tax filer):
- $0 – $8,700: 10%
- $8,700 – $35,350: 15%
- $35,350 – $85,650: 25%
- $85,650 – $178,650: 28%
- $178,650 – $388,350: 33%
- Over $388,350: 35%

Marginal tax rates also differ depending on how one files a tax return (married, single with dependents, etc.), which is just the beginning of the confusion when it comes to our tax code. These rates apply only to earned income (wages, salaries, tips, and other taxable employee pay, for example) and exclude things such as carried interest, dividends, and capital gains, which are taxed at different rates. It is important to remember that with marginal tax rates only the earned income above the floor of a new tax bracket is taxed at that bracket's rate: all income up to $8,700 is taxed at the 10% rate, and if an individual makes $8,701, only the $1 above the floor is taxed at the 15% rate, and so on. Furthermore, these tax rates refer only to federal income tax, not to what citizens pay in payroll tax. For a substantial share of the taxpaying population, the payroll tax burden is greater than the federal income tax burden. The federal government has a number of taxes in place, so it is important to understand what our elected officials mean with these Washington Words.
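The bracket arithmetic described above is easy to make concrete. The sketch below is illustrative only: it covers just the single-filer earned-income brackets listed in this post, ignoring deductions, credits, filing status, and payroll tax. Each slice of income is taxed at its own bracket's rate:

```python
# 2012 single-filer brackets from the list above: (bracket ceiling, marginal rate).
BRACKETS = [
    (8_700, 0.10),
    (35_350, 0.15),
    (85_650, 0.25),
    (178_650, 0.28),
    (388_350, 0.33),
    (float("inf"), 0.35),
]

def federal_income_tax(income):
    """Only the dollars above each bracket floor are taxed at that
    bracket's rate -- moving into a new bracket never re-taxes the
    dollars below it at the higher rate."""
    tax, floor = 0.0, 0.0
    for ceiling, rate in BRACKETS:
        if income <= floor:
            break
        tax += (min(income, ceiling) - floor) * rate
        floor = ceiling
    return tax

# $8,701 of income: $8,700 taxed at 10% plus $1 taxed at 15% = $870.15.
print(round(federal_income_tax(8_701), 2))
```

Note how the marginal structure means the extra dollar earned costs only 15 cents in tax, not a re-rating of the whole income.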
<urn:uuid:2a3b73d8-cb0c-49d6-a839-99d038ff06e7>
CC-MAIN-2013-20
http://keepingamericagreat.org/weekly-washington-words-2/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964085
430
3.15625
3
(En español: Anestesia) Anesthesia is medicine that doctors and nurses give to make people feel comfortable when they're having surgery, stitches, or other things that might be painful. There are different types of anesthesia: general and local. General anesthesia is cool because it helps you fall asleep for a little bit so you don't feel any pain while the doctors are fixing something. A doctor can give you general anesthesia with a shot or by letting you breathe a special kind of air. The medicine wears off and you wake up a while later. Local anesthesia doesn't make you fall asleep, but it numbs the area so you won't feel pain while you get stitches or minor surgery to remove something like a wart.
<urn:uuid:1f91dee7-5073-438d-87c2-f8630cc4c352>
CC-MAIN-2013-20
http://kidshealth.org/PageManager.jsp?dn=RenownChildrens_Hospital&lic=408&cat_id=20190&article_set=30461&ps=309
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938444
150
3.0625
3
Religion in Africa is varied. There are many tribal religions in which there is more than one god, while in Northern Africa Islam is common. In Southern Africa, mixed religions are common due to immigration and multiculturalism. Many countries in Africa use democratic or republican systems of government. Some tribal communities have leaders chosen by birth or by skill, but most major countries hold elections, although bias has been discovered in some electoral proceedings in Africa, including just recently in Zimbabwe, where turmoil between the two political parties caused life expectancy to drop to an all-time low in the country. This is one negative outcome of African politics. One positive event in African politics was the election of Nelson Mandela in South Africa. The ‘ups and downs’ of politics in Africa are similar to those found around the world.
<urn:uuid:ccd078dc-e02a-42db-a66b-af048f818d21>
CC-MAIN-2013-20
http://library.thinkquest.org/08aug/02084/Africanculturereligion.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940564
177
3.015625
3
Detection of protandry and protogyny from infructescences |Sycamore index page| |Invasive Woody Plants|| As described in the Sex Expression section sycamore trees may be protandrous when their inflorescences start with a sequence of male flowers followed by a sequence of female flowers, or protogynous when the reverse sequence occurs. Protogynous individuals will produce inflorescences of Mode B and very rarely a few of Mode G. Protandrous individuals are far more variable as they have inflorescences of Mode C, D, or E, or a mixture of these. Male flowering trees are described as protandrous because in some years some or even all their inflorescences have female flowers. Similarly protandrous individuals will exhibit large annual variation in the proportion of inflorescences of Mode C, D and/or E. The existence of female flowering individuals has been reported. There is no evidence that trees will change their modes of sex expression with age. In sycamore certain characters such as fruit production, fruit dry weight, percentage of fertilized fruits and the number of carpels per fruit, may vary between morphs (see fruit set). In order to explain such variation it is then essential to know the sexual morph of the trees studied. Because sexing the flowers of trees in spring is rather time consuming and/or impractical on tall specimens, a method using the morphological characters of the infructescence was developed. Reliability of method The method for the identification of the sex expression of the inflorescences using infructescences was developed in 1983, and was tested with 95% success when comparing the flowering data of 240 trees obtained in spring and the determination of the sex expression using infructescences in the autumn. The test was repeated in 1984 with 100% success. 
Using fruiting material only, one can differentiate between protandrous and protogynous individuals, and also between infructescences of Mode B and Mode G of the latter group, using the characters listed in Table 1 and shown diagrammatically in Fig. 1. In Mode G the structure and position of the fruits on the first part of the stalk are similar to the normal protogynous infructescence (Mode B), but the stalk is longer and bears a few small parthenocarpic fruits at its end (Fig. 1C). It is, however, impossible to distinguish between infructescences of Mode C and Mode D. The very few male flowering individuals (Mode E; less than 1% in Ireland) will not be recognised with this method, and only the shoot morphology will provide indications of their existence. In these trees, soon after flowering, the inflorescences fall and the two growing terminal buds become closely appressed (Fig. 1G). In the other modes of flowering, by contrast, female flowers, even if unfertilized, will produce fruits because of the high parthenocarpic tendency in maples. Such infructescences remain on the trees for most of the summer, leading to two well separated terminal buds (Fig. 1H). It should be noted that some small flowering side shoots may not produce any buds.

Only practice allows one to determine the sex expression of an individual with accuracy from infructescence material; whilst the majority of the individuals examined fit easily into one or other of the two morphs, some trees have features which do not entirely fit the description given in Table 1 and Fig. 1. For instance, some infructescences of Mode B do have a terminal fruit, but this is never the case for protandrous modes of flowering. The size of fruits and infructescences, and the number of fruits per infructescence, given in Table 1 are applicable to sycamores encountered in most of the British Isles and the Alps. However, in areas with a very favourable climate (e.g. some parts of lowland Switzerland), fruit and infructescence size and the number of fruits per infructescence may be higher, and the values listed in Table 1 may therefore be misleading.

Table 1. Morphological data from infructescences differentiating between protandrous and protogynous individuals, and also between Mode B and Mode G of the latter group.

Copyright © 2000 Pierre Binggeli. All rights reserved.
2012-2013 Service Learning Courses

Hispanic Literature in Translation—"Defiant Acts: Spanish and Latin American Theatre"
Isabel de Sena

This course will explore the full spectrum of theatre from the early modern period in Spain and colonial Spanish America to contemporary theatre on both sides of the Atlantic, including U.S. Latino playwrights. We will read across periods to identify preoccupations and generic characteristics as theatre evolves and moves between the street and the salon, the college yard and the court, enclosed theatres and theatre for the enclosed. In the process we will address a wide swath of ideas on gender, class, freedom and totalitarianism, and the boundaries of identity. Students will be introduced to some basic concepts and figures, ranging from Lope de Vega's brilliant articulation of "comedia" to Augusto Boal's concept of an engaged theatre, and will investigate the work of FOMMA (Fortaleza de la Mujer Maya) and similar contemporary collectives. We will read plays as plays, as literature and as texts intended for performance on a stage. At the same time, students will have the opportunity to explore creative practices through engagement with different community organizations: schools, retirement homes, local theatre organizations, etc. Students are encouraged to apply concepts learned in class to their internships, and to bring their ideas and reflections on their weekly practices for discussion in class. Every other week, one hour will be devoted to discussing their work in the community. NO Spanish required, but students who are sufficiently fluent in the language may opt to work in a community where Spanish is the primary language of communication. NO expertise in theatre required, though theatre students are very welcome. Open to any interested student.

Fall & Spring First Year Studies
Umuntu ngumuntu ngabantu [Isizulu: A person is only a person through other persons]

How do the contexts in which we live influence our development?
And how do these contexts influence the questions we ask about development, and the ways in which we interpret our observations? How do local, national and international policies impact the contexts in which children live? Should we play a role in changing some of these contexts? What are the complications of doing this? In this course, we will discuss these and other key questions about child and adolescent development in varying cultural contexts, with a specific focus on the United States and sub-Saharan Africa. As we do so, we will discuss factors contributing to both opportunities and inequalities within and between these contexts. In particular, we will discuss how physical and psychosocial environments differ for poor and non-poor children and their families in rural Upstate New York, urban Yonkers, and rural and urban Malawi, Zimbabwe, South Africa, Kenya and Tanzania. We will also discuss individual and environmental protective factors that buffer some children from the adverse effects of poverty, as well as the impacts of public policy on poor children and their families. Topics will include health and educational disparities; environmental inequalities linked to race, class, ethnicity, gender, language and nationality; environmental chaos; children’s play and access to green space; cumulative risk and its relationship to chronic stress; and the HIV/AIDS pandemic and the growing orphan problem in sub-Saharan Africa. Readings will be drawn from both classic and contemporary research in psychology, human development, anthropology, sociology, and public health; memoirs and other first-hand accounts; and classic and contemporary African literature and film. This course will also serve as an introduction to the methodologies of community based and participatory action research within the context of a service-learning course. 
As a class, we will collaborate with local high school students in developing, implementing and evaluating effective community based work in partnership with organizations in urban Yonkers and rural Tanzania. As part of this work, all students will spend an afternoon a week working in a local after-school program. In addition, we will have monthly seminars with local high school students during our regular class time.

Environment, Race and the Psychology of Place

This service learning course will focus on the experience of humans living within physical, social and psychological spaces. We will use a constructivist, multidisciplinary, multilevel lens to examine the interrelationship between humans and the natural and built environment, to explore the impact of racial/ethnic group membership on person/environment interactions, and to provide for a critical analysis of social dynamics in the environmental movement. The community partnership/service learning component is an important part of this class - we will work with local agencies to promote adaptive person-environment interactions within our community.

Children's Health in a Multicultural Context

This course offers, within a cultural context, an overview of theoretical and research issues in the psychological study of health and illness in children. We will examine theoretical perspectives in the psychology of health, health cognition, illness prevention, stress, and coping with illness, and highlight research, methods, and applied issues. This class is appropriate for those interested in a variety of health careers. Conference work can range from empirical research to bibliographic research in this area. Community partnership/service learning work is encouraged in this class. A background in social sciences or education is recommended.
Topographic maps are digital, graphic maps that portray the horizontal position of planimetric features using lines and symbols. Contours are derived from a Digital Terrain Model (DTM) to represent the elevation of the ground. Ground control is set for this product. Aerotriangulation is performed to determine the position and orientation of the camera for each exposure so planimetric features are shown in their true relative coordinate position. A DTM is collected to represent the surface of the earth. Typical map scales are 1 inch = 100 feet and 1 inch = 200 feet. Topographic maps are used for preliminary design of transportation projects. Topographic maps, including contours, with associated X, Y coordinate system, and the DTM are delivered to the customer.
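Scales such as "1 inch = 100 feet" can also be expressed as unitless representative fractions, the form usually stored in digital map metadata. A small illustrative sketch (the function name is invented for this example; the only fact used is that a foot has 12 inches):

```python
# Convert an engineering map scale like "1 inch = 100 feet" into a
# representative fraction (1:N), the unitless form used in GIS metadata.
INCHES_PER_FOOT = 12

def representative_fraction(feet_per_inch: float) -> int:
    """One map inch covers `feet_per_inch` ground feet -> scale 1:N."""
    return int(feet_per_inch * INCHES_PER_FOOT)

print(representative_fraction(100))  # 1 in = 100 ft -> 1:1200
print(representative_fraction(200))  # 1 in = 200 ft -> 1:2400
```

So the two typical scales mentioned above correspond to 1:1200 and 1:2400 respectively.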
How are mental disorders diagnosed and treated? This session explores the two main categories of psychological treatment – behavioral therapies and medical (drug) therapies. For OCD, depression and ADHD, we'll look at the current scientific understanding of these disorders and compare methods of treatment.

Keywords: psychotherapy, psychoanalysis, diagnosis, medication, psychopharmacology, CBT, depression, OCD, ADHD, ADD, Ritalin

Image: "Self-Portrait with Bandaged Ear," Vincent Van Gogh (1889).

Read the following before watching the lecture video.
- [Sacks] Chapter 11, "Cupid's Disease" (pp. 102-107)
- Finish the chapter you started for the previous session: [K&R] Chapter 12, "Treatment: Healing Actions, Healing Words"
- [Stangor] Finish Chapter 12, "Defining Psychological Disorders," and Chapter 13, "Treating Psychological Disorders."

In this discussion, we'll talk about psychological disorders, or psychopathology. It's a really interesting question: what kinds of behavior exceed the normal range of behavior for human beings? Which behaviors are truly pathological, as opposed to simply uncommon or exceptional?

"Extra credit" writing assignment: Is it ethical to use cognition-enhancing drugs?

These optional resources are provided for students that wish to explore this topic more fully.
- The World of Abnormal Psychology. Annenberg Learner, 1992. Thirteen 1-hour videos on various psychopathology topics.
- NIMH.gov. "ADHD: Signs, Symptoms, Research." Sept. 10, 2010. YouTube. Accessed March 9, 2012. http://www.youtube.com/watch?v=IgCL79Jv0lc (video about ADHD by the U.S. National Institute of Mental Health).
- Study materials for Ch. 15, "Psychological Disorders: Healing Actions, Healing Words." In Kosslyn & Rosenberg, Psychology in Context, 3/e (Pearson, 2007). Practice test questions, flashcards, and media for this related textbook.
Multiplies x by 2^y. This is equivalent to shifting the binary representation of x to the left by y bits.

Bitwise logical operations on numbers. These forms compute the bitwise AND, inclusive OR, exclusive OR, and equivalence (a.k.a. exclusive NOR), respectively. These macros expand into calls of binary functions such as binary-logand, binary-logior, etc. The guards of these functions require that all inputs be integers. When passed one argument, these functions return the argument unchanged. When passed no arguments, logand and logeqv return -1, while logior and logxor return 0.

> (logand 1)
> (logand 10 6)
> (logior 10 5)
> (logxor 15 9)
> (logeqv 5 6)
> (logior "5")

Computes the bitwise logical NAND of the two given numbers.

> (lognand 10 6)

Computes the bitwise logical NOR of x and y.

> (lognor 10 6)

Computes the bitwise logical NOT of the given number.

> (lognot 5)

Returns the ith bit in the two's complement binary representation of j.

Returns the number of "on" bits in the binary representation of x.

Computes the bitwise logical Inclusive OR of y with the bitwise logical NOT of x.

> (logorc1 10 6)

Computes the bitwise logical Inclusive OR of x with the bitwise logical NOT of y.

> (logorc2 10 6)

Returns true if and only if x and y share a '1' bit somewhere in their binary representation (i.e. (logand x y) is not zero).
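The behaviour described above can be sketched in Python for readers who want to experiment outside the theorem prover. This is a hedged illustration, not the Dracula/ACL2 implementation: Python integers model the same unbounded two's-complement arithmetic, and the name `ash` for the multiply-by-2^y entry is an assumption on my part (that entry's name does not appear in the text above).

```python
from functools import reduce

# Variadic macros: logand/logeqv fold with identity -1, logior/logxor with 0,
# so the zero- and one-argument cases behave exactly as documented above.
def logand(*xs): return reduce(lambda a, b: a & b, xs, -1)
def logior(*xs): return reduce(lambda a, b: a | b, xs, 0)
def logxor(*xs): return reduce(lambda a, b: a ^ b, xs, 0)
def logeqv(*xs): return reduce(lambda a, b: ~(a ^ b), xs, -1)  # exclusive NOR

def lognand(x, y): return ~(x & y)           # NOT of AND
def lognor(x, y):  return ~(x | y)           # NOT of inclusive OR
def lognot(x):     return ~x                 # bitwise NOT
def logbitp(i, j): return (j >> i) & 1 == 1  # i-th bit of j (two's complement)
def logcount(x):   return bin(x).count("1")  # "on" bits (non-negative x only)
def logorc1(x, y): return ~x | y             # (NOT x) OR y
def logorc2(x, y): return x | ~y             # x OR (NOT y)
def logtest(x, y): return (x & y) != 0       # share a '1' bit somewhere?
def ash(x, y):     return x << y if y >= 0 else x >> -y  # multiply by 2^y
```

For example, `logand(10, 6)` is 2, `logeqv(5, 6)` is -4, and `logorc1(10, 6)` is -9; unlike the ACL2 functions, this sketch performs no guard checking, so passing a string raises a Python `TypeError` rather than a guard violation.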
This phenomenon has been explained by the Zetas and is thoroughly documented on this blog. While the "official" cause of such massive fish kills is often attributed to hypoxia (lack of oxygen), what is conveniently excluded in these opaque explanations is that high concentrations of dissolved methane essentially expel oxygen, rendering the water and air uninhabitable for the fish and birds encountering it.

"Dead fish and birds falling from the sky are being reported worldwide, suddenly. This is not a local affair, obviously. Dead birds have been reported in Sweden and N America, and dead fish in N America, Brazil, and New Zealand. Methane is known to cause bird death, and as methane rises when released during Earth shifting, will float upward through the flocks of birds above. But can this be the cause of dead fish? If birds are more sensitive than humans to methane release, fish are likewise sensitive to changes in the water, as anyone with an aquarium will attest. Those schools of fish caught in rising methane bubbles during shifting of rock layers beneath them will inevitably be affected. Fish cannot, for instance, hold their breath until the emergency passes! Nor do birds have such a mechanism." ZetaTalk

Click on the map below for the interactive version: yellow = 2011, blue = 2012, red = 2013

Some of the Evidence:

YouTube video up to Jan 30, 2011
5000+ Black Birds
500+ Black Birds
100,000 Drum Fish
Tens of Thousands - Fish
Thousands of Fish
Thousands of Fish
Dozens of fish in just 50 feet
50 - 100 Birds - Jackdaws
100 Tons of Fish
Hundreds of Snapper
10 Tons of fish
Hundreds of fish
Thousands of fish
Hundreds of Fish
Hundreds of Fish
Scores of Fish
Hundreds of Fish
150 Tons of Red Tilapias
Thousands of Fish
Scores of dead fish
Hundreds of Starfish, Jellyfish

Main source: http://maps.google.com/maps/ms?ie=UT...bca25af104a22b

DEAD FISH IN 36 LAKES IN CONNECTICUT!
MASS FISH DIE-OFF IN MICHIGAN!
HEAPS OF DEAD FISH AT BAY STATE PONDS!
DOZENS OF DEAD FISH FOUND IN MADISON POND!
RED SAND LAKE FISH DIE-OFF!
MELTING LAKES REVEAL HUNDREDS OF DEAD FISH!
HUNDREDS OF DEAD FISH IN MEADOWS RIVER
DEAD BIRDS FALL FROM THE SKY IN KANSAS!
TENS OF THOUSANDS OF DEAD FISH IN INDIA!
LAKE MAARDU WITHOUT FISH!
MASSIVE FISH MOR IN THE LIPETSK REGION!
100 TONNES OF DEAD FISH IN UKRAINE!
PENGUINS LOSING THEIR FEATHERS TO UNKNOWN ILLNESS!
DEAD TURTLES FOUND ON AUSTRALIAN BEACH!

Animal Death List

4th June 2011 - 800 Tons of fish dead in a lake near the Taal Volcano in the Philippines.
13th May 2011 - Dozens of Sharks washing up dead in California.
13th May 2011 - Thousands of fish wash up dead on shores of Lake Erie in Ohio.
6th May 2011 - Record number of wildlife die-offs in The Rockies during the winter.
1st May 2011 - Two giant Whales wash ashore and die on Waiinu Beach in New Zealand.
22nd April 2011 - Leopard Sharks dying in San Francisco Bay.
20th April 2011 - 6 Tons of dead Sardines found in Ventura Harbour in Southern California.
20th April 2011 - Hundreds of Dead Abalone and a Marlin wash up dead on Melkbos Beach near Cape Town.
18th April 2011 - Hundreds of dead fish found in Ventura Harbour in Southern California.
29th March 2011 - Over 1300 ducks die in Houston, Minnesota.
28th March 2011 - Sei Whale washes up dead on beach in Virginia.
26th March 2011 - Hundreds of fish dead in Gulf Shores.
8th March 2011 - Millions of dead fish in King Harbor Marina in California.
3rd March 2011 - 80 baby Dolphins now dead in Gulf Region.
25th February 2011 - Avian Flu - Hundreds of Chickens die suddenly in North Sumatra, Indonesia.
23rd February 2011 - 28 baby Dolphins wash up dead in Alabama and Mississippi.
21st February 2011 - Big Freeze kills hundreds of thousands of fish along coast in Texas.
21st February 2011 - Bird Flu? 16 Swans die over 6 weeks in Stratford-Upon-Avon, UK.
20th February 2011 - Over 100 whales dead in Mason Bay, New Zealand.
20th February 2011 - 120 Cows found dead in Banting, Malaysia.
19th February 2011 - Many Blackbirds found dead in Ukraine.
16th February 2011 - 5 Million dead fish in Mara River, Kenya.
16th February 2011 - Thousands of fish and several dozen ducks dead in Ontario, Canada.
16th February 2011 - Mass fish death in Black Sea Region in Turkey.
11th February 2011 - 20,000 Bees died suddenly in a biodiversity exhibit in Ontario, Canada.
11th February 2011 - Hundreds of dead birds found in Lake Charles, Louisiana.
9th February 2011 - Thousands of dead fish wash ashore in Florida.
8th February 2011 - Hundreds of Sparrows fall dead in Rotorua, New Zealand.
5th February 2011 - 14 Whales die after being beached in New Zealand.
4th February 2011 - Thousands of various fish float dead in Amazon River and in Florida.
2nd February 2011 - Hundreds of Pigeons dying in Geneva, Switzerland.
31st January 2011 - Hundreds of thousands of Horse Mussell Shells wash up dead on beaches in Waiheke Island, New Zealand.
27th January 2011 - 200 Pelicans wash up dead on Topsail Beach in North Carolina.
27th January 2011 - 2000 Fish dead in Bogota, Columbia.
23rd January 2011 - Hundreds of dead fish in Dublin, Ireland.
22nd January 2011 - Thousands of dead Herring wash ashore in Vancouver Island, Canada.
21st January 2011 - Thousands of fish dead in Detroit River, Michigan.
20th January 2011 - 55 dead Buffalo in Cayuga County, New York.
18th January 2011 - Thousands of Octopus wash up in Vila Nova de Gaia, Portugal.
17th January 2011 - 10,000 Buffalos and Cows died in Vietnam.
17th January 2011 - Hundreds of dead seals washing up on shore in Labrador, Canada.
15th January 2011 - 200 dead Cows found in Portage County, Wisconsin.
14th January 2011 - Massive fish death in Baku, Azerbaijan.
14th January 2011 - 300 Blackbirds found dead on highway I-65 south of Athens in Alabama.
7th January 2011 - 8,000 Turtle Doves rain down dead in Faenza, Italy.
6th January 2011 - Hundreds of dead Grackles, Sparrows & Pigeons were found dead in Upshur County, Texas.
5th January 2011 - Hundreds of Dead Snapper with no eyes washed up on Coromandel beaches in New Zealand.
5th January 2011 - 40,000+ crabs wash up dead in Kent, England.
4th January 2011 - 100 Tons of Sardines, Croaker & Catfish wash up dead on the Parana region shores in Brazil.
4th January 2011 - 3,000+ dead Blackbirds found in Louisville, Kentucky.
4th January 2011 - 500 Dead Red-winged blackbirds & Starlings in Louisiana.
4th January 2011 - Thousands of dead fish consisting of Mullet, Ladyfish, Catfish & Snook in Volusia County, Florida.
3rd January 2011 - 2,000,000 (2 Million) Dead fish consisting of Menhayden, spots & Croakers wash up in Chesapeake Bay, Maryland & Virginia.
1st January 2011 - 200,000+ Dead fish wash up on the shores of Arkansas River, Arkansas.
1st January 2011 - 5,000+ Red-winged blackbirds & Starlings fall out of the sky dead in Beebe, Arkansas.
20th December 2010 (est. date) - Thousands of Crows, Pigeons, Wattles & Honeyeaters fell out of the sky in Esperance, Western Australia.
2nd November 2010 - Thousands of sea birds found dead in Tasmania, Australia.
While our direct knowledge of black holes in the universe is limited to what we can observe from thousands or millions of light years away, a team of Chinese physicists has proposed a simple way to build an artificial electromagnetic (EM) black hole in the laboratory. In the Journal of Applied Physics, Huanyang Chen at Soochow University and colleagues present a design for an artificial EM black hole that uses five types of composite isotropic materials, layered so that their transverse magnetic modes capture the EM waves to which the object is subjected. The artificial EM black hole does not let EM waves escape, analogous to a black hole trapping light; in this case, the trapped EM waves are in the microwave region of the spectrum. The so-called metamaterials used in the design are artificially engineered materials with unusual properties not seen in nature. Metamaterials have also been used in studies of invisibility cloaking and negative-refraction superlenses. The group suggests the same method might be adaptable to higher frequencies, even those of visible light. 'Development of artificial black holes would enable us to measure how incident light is absorbed when passing through them,' says Chen. 'They can also be applied to harvesting light in a solar-cell system.'
Revision 1 as of 2005-11-07 20:34:54

WhiteSpace Handling in the XSL FO spec

Some thoughts about the concerns

The FO spec must address the following three concerns:

- What to do with linefeed characters in the input: consider them as spaces or as real linefeeds?
- What to do with XML white space characters other than linefeed in the input: preserve or collapse?

These two concerns are governed by the properties linefeed-treatment and white-space-collapse. Together these two items address the matter of pretty printing of XML documents (in this case FO documents).

- What to do with white space and other eligible characters around line breaks?

This concern is governed by the properties white-space-treatment and suppress-at-linebreak.

XML itself has a prescription for dealing with white space in the input XML file: the parser must report whether white space occurs in element content or not, allowing applications to ignore it in element content; in SAX terms, white space in element content is ignorable white space. Because FO does not have a DTD or schema, there is no element content, and all white space is passed on to the FO processor.

FO does have its own equivalent of element content. When white space occurs in flow objects which do not take PCDATA as children, it is ignored by the FO processor. White space in flow objects that take PCDATA children, however, must be taken into account. Its interpretation is governed by the first two items.

Pretty printing can also occur inside PCDATA. Editors commonly break long stretches of text into separate lines, substituting space characters with linefeed characters. They also commonly indent the lines to illustrate the nesting position of the element containing the PCDATA, replacing single spaces with sequences of spaces and tab characters. The above two concerns also undo those pretty printing effects on the output of the FO processor.

The first two items are concerned with input.
Therefore they can in principle be taken care of at the refinement stage. The third item is concerned with input characters whose representation depends on the layout, viz., which are suppressed when they occur before and/or after a line break. Therefore it can only be taken care of when the line breaks are known, i.e. at the layout or area building stage.

The formulation of this concern was flawed in version 1.0 of the FO spec. Instead of line breaks, it mentions linefeed characters. This is clearly not what is needed. Users expect white space to be suppressed around line breaks, and FO processors do this, even though the spec has no good prescription for this behaviour. Version 1.1 of the FO spec tries to correct this. But the result is a mixed behaviour of the property white-space-treatment: two of its values refer to input characters and can be taken care of at the refinement stage; the other three refer to suppression as a result of layout and must be taken care of at the layout or area building stage.

Remarks on white-space-collapse

white-space-collapse is formulated in terms of flow objects, so that it only applies to direct siblings. This can give rise to undesirable effects. Examples:

- Spaces before an fo:inline and spaces at the start of an fo:inline are not collapsed, perhaps contrary to the expectation of the user.
- fo:marker elements may have spaces at their start and end, which may become adjacent to spaces before and after the fo:retrieve-marker that inserted the fo:marker content. These spaces are not collapsed, again perhaps contrary to the expectation of the user.

The user would prefer to think in terms of collapsing of adjacent white space glyph areas. The comments of the XSL editors have made it clear, however, that white-space-collapse is strictly interpreted in terms of sibling flow objects. On the other hand, they do not make it clear why they place white-space-collapse handling at the area building stage.
As a result the user must be careful not to add extra white space to inline content.

Remarks on white-space-treatment and white-space-collapse

The values ignore and preserve of white-space-treatment would better be combined with white-space-collapse into a new property, called something like white-space-treatment, with three values ignore, collapse and preserve, as follows:

- white-space-treatment="ignore" and white-space-collapse="true": ignore
- white-space-treatment="ignore" and white-space-collapse="false": ignore
- white-space-treatment="preserve" and white-space-collapse="true": collapse
- white-space-treatment="preserve" and white-space-collapse="false": preserve

The property with the remaining values then could be called something like around-line-break. Unfortunately, the remaining three values have linefeed in their name, where linebreak is intended.
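The proposed merger of the two properties amounts to a small decision table. The sketch below (Python, purely illustrative; the function names are invented for this example, and the text transformation shown is a simplification that ignores linefeed-treatment and the around-line-break concern entirely):

```python
import re

def merged_value(white_space_treatment: str, white_space_collapse: str) -> str:
    """Combine the two XSL properties into the proposed three-valued one."""
    if white_space_treatment == "ignore":
        return "ignore"                       # collapse setting is irrelevant
    return "collapse" if white_space_collapse == "true" else "preserve"

def apply_white_space(text: str, mode: str) -> str:
    """Simplified effect on space/tab runs inside PCDATA."""
    if mode == "ignore":
        return re.sub(r"[ \t]+", "", text)    # drop white space characters
    if mode == "collapse":
        return re.sub(r"[ \t]+", " ", text)   # a run becomes a single space
    return text                               # preserve: leave untouched

print(merged_value("preserve", "true"))         # -> collapse
print(apply_white_space("a \t b", "collapse"))  # -> "a b"
```

The point of the table is visible in the code: once white-space-treatment is "ignore", the white-space-collapse setting no longer matters, which is why two of the four combinations map to the same merged value.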
BEMIDJI - The Governor's Task Force on the Prevention of School Bullying met with Bemidji students, parents and educators Wednesday evening, kicking off a series of sessions to help redefine the state's anti-bullying statute.

"The Governor has organized a task force of citizens from around the state to make recommendations to the Governor and to the Legislature regarding measures that might be put forth that would direct all the school districts in the state as to how they should address bullying," said Nancy Riestenberg, School Climate Specialist at the Department of Education.

Riestenberg joined members of the task force to speak with students from Bemidji High School and from Schoolcraft about their concerns about bullying and the ways in which they think it could be prevented.

For students like Thomas Caddy and Tia Siddens, 9th graders at Bemidji High School, bullying is present at the school, and it is seen both physically and verbally.

"Words hurt more than fists in a lot of situations because if the physical wound isn't there it could still leave a mark on the mind," Siddens said.

The students said the most common targets of bullying in school are people of a different race or sexual orientation, or people with a mental disability.

"People who don't have a lot of friends are targeted more, because you don't always have those friends there to help speak up," Schoolcraft student Katie Fgevje said. "With less friends you are more vulnerable in my eyes because you don't have that person to kind of help you get through it."

When asked what they would recommend the task force do to help with the bullying problem in schools, many students said there needs to be an effort to teach students from a young age why bullying is wrong, and also to teach staff how to resolve and prevent bullying situations.
"We have some programs that address bullying and I am happy we have them, but I don't think they are exactly effective, because if a person is doing it they are not going to be listening to the reasons why they are not supposed to be doing it," Thomas Caddy said.

Riestenberg said this was the first of many student sessions she and the task force will conduct, and she said she was impressed by how engaged the students were.

"They confirmed for me what I teach in my job as the school climate specialist at the Department of Education and gave insight to the task force about what students face and what they need," Riestenberg said.

Following the student session, the task force went to the middle school to meet with parents and educators to hear their concerns and recommendations. The common concern among the parents was the issue of cyberbullying, in addition to racial and sexual orientation bullying. The parents agreed that the school and the parents need to make sure students know their resources, and whom they can talk to when being bullied.

"We need to be held accountable for our own actions and we not only need to listen to our kids but we need to show them that we are trying to do something," said Marty Cobenais, a parent in attendance. "If we don't make an effort to do something our kids are not going to come talk to us when they have a problem."

Bemidji School Superintendent James Hess said the bullying issue is one that needs to be addressed not only at the school level but also at the community level.

"I think about bullying and I think about the school's role in bullying and I don't think the school is the place to lay all of the blame," Hess said. "I know that if you walk into any classroom in the district you won't find any teachers teaching bullying. I think we need to be a part of the search for a solution to bullying, but I don't think we are the stopping place, we are the starting place. We need to look at the greater community to find solutions that are going to be the lasting solutions."

The task force will be meeting with schools across the state and will pass along recommendations to the Legislature by August.
Evaluating the Relationship Between Physical Activity, Diet, Weight, and the Neighborhood Environment for Adolescents Many teenagers have unhealthy eating habits and do not get enough physical activity. This study will examine whether the neighborhood in which a teenager lives affects his/her quality of life, physical activity levels, and eating habits. Obesity is an increasingly important health problem in the United States, particularly among adolescents. Previous studies among adults have shown that people who live in neighborhoods with good "walkability" and recreational environments have increased physical activity levels, and some studies have suggested that there is a relationship between the neighborhood food environment and eating patterns. While these concepts have been studied in adults, more research is needed on the effect of the neighborhood environment on adolescents. In this study, adolescents who live in select neighborhoods in Seattle-King County, WA and Baltimore-Washington, DC will be enrolled. Forty-eight neighborhoods in these areas will be studied, with researchers taking into account the neighborhoods' walkability levels (e.g., combination of street connectivity, residential density, land use mix, retail floor area ratio) and median income levels. Study researchers will examine and create formulas to measure walkability, pedestrian infrastructure, public recreation space, and nutrition environment quality. Researchers will also examine crime and weather patterns; psychosocial variables; parent support; and perceived neighborhood, school, and home environments. Overall, this study will evaluate the ability of a research model to explain the variation in physical activity levels, sedentary behavior, dietary patterns, and weight among adolescents, with an emphasis on neighborhood environment. There will be no study visits for this study: participation will take place entirely through the mail, phone, or internet. 
Participants will include adolescents between the ages of 12 and 16 and their parents, all of whom live in the identified study neighborhoods. At the time of study entry, adolescents will complete a questionnaire on neighborhood and safety issues, diet, physical activity habits and places where activity occurs, grades, school policies and parental rules that affect physical activity and eating, and the support they get from people regarding healthy eating and physical activity. One parent of each adolescent will also complete a neighborhood information questionnaire. Adolescents will measure their height, weight, and waist circumference and send the measurements to study staff along with the questionnaire. Next, over a 4-week period, study staff will call adolescents on three random days and collect information on their diet in the previous 24 hours. During this period, adolescents will wear an activity meter and a GPS monitor for 7 consecutive days and will mail the devices to study staff for analysis.
Observational Model: Ecologic or Community; Time Perspective: Prospective
Sponsors: San Diego State University Foundation; National Heart, Lung, and Blood Institute (NHLBI)
Results (where available) - Source: http://clinicaltrials.gov/show/NCT00608036 - Information obtained from ClinicalTrials.gov on July 15, 2010
Medical and Biotech [MESH] Definitions
A condition of having excess fat in the abdomen. Abdominal obesity is typically defined as waist circumferences of 40 inches or more in men and 35 inches or more in women. Abdominal obesity raises the risk of developing disorders, such as diabetes, hypertension and METABOLIC SYNDROME X. The condition of weighing two, three, or more times the ideal weight, so called because it is associated with many serious and life-threatening disorders. In the BODY MASS INDEX, morbid obesity is defined as having a BMI greater than 40.0 kg/m2. Agents that increase energy expenditure and weight loss by neural and chemical regulation. 
Beta-adrenergic agents and serotoninergic drugs have been experimentally used in patients with non-insulin dependent diabetes mellitus (NIDDM) to treat obesity. A status with BODY WEIGHT that is grossly above the acceptable or desirable weight, usually due to accumulation of excess FATS in the body. The standards may vary with age, sex, genetic or cultural background. In the BODY MASS INDEX, a BMI greater than 30.0 kg/m2 is considered obese, and a BMI greater than 40.0 kg/m2 is considered morbidly obese (MORBID OBESITY). The discipline concerned with WEIGHT REDUCTION in patients with OBESITY. The purpose of this project is to establish a Center of Excellence in Research on Obesity that will focus on severe obesity. The prevalence of severe obesity (i.e., Class 2 and 3 obesity;... The objective of this study is to test and evaluate the effectiveness of a parent-only treatment for childhood obesity. This study provides state-of-the-art treatment for childhood obesit... The purpose of this study is to design and demonstrate the feasibility of implementing moderate and intensive environmental obesity prevention programs at major worksites. The purpose of this study is to evaluate the efficacy of a culturally-appropriate childhood obesity intervention with Hispanic families. The program aims at preventing childhood obesity b... The purpose of this study is to explore the pathogenesis and genetic susceptibility of obese subjects, providing a convincing argument for further treatment of obesity and metabolic syndrom... Obesity is one of the main health problems in the world with high societal and individual costs. To tackle the obesity epidemic, we need to collaborate across scientific borders to fundamentally broa... Objective: To explore the relationship between severity of obesity at age 7 and age 15, age at onset of obesity, and parental body mass index (BMI) in obese children and adolescents. Design: Longitudinal... 
Obesity has reached epidemic proportions in the United States, and obesity-related illnesses have become a leading preventable cause of death. Childhood obesity is also growing in frequency, and the i... The interactions between obesity and infectious diseases have recently received increasing recognition as emerging data have indicated an association between obesity and poor outcome in pandemic H1N1...
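The BMI cut-points quoted in the MeSH definitions above (obese above 30.0 kg/m2, morbidly obese above 40.0 kg/m2) can be encoded in a few lines. This is an illustrative sketch only, not a clinical tool; the function names, category labels, and example values are mine, not part of the study listing.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify a BMI value using the cut-points quoted in the definitions above."""
    if value > 40.0:
        return "morbidly obese"
    if value > 30.0:
        return "obese"
    return "not obese by these definitions"

# Example: 125 kg at 1.75 m gives a BMI of about 40.8, past the >40 cut-point
value = bmi(125, 1.75)
print(round(value, 1), bmi_category(value))
```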
Trading Standards - Product Safety Trading Standards is responsible for enforcing the laws relating to the safety of goods we buy in the shops. We conduct surveys on particular product categories throughout the year. We also respond to complaints brought to us by members of the public. We may arrange for products to be tested in a laboratory to see if they comply with safety standards. If they don't, we can bring this to the attention of the manufacturer and may even bring a prosecution in serious cases. Where we find unsafe goods on sale, officers have the power to seize them and remove them from sale. Advice is given to local producers and importers on safety matters. Examples of common safety problems are: - Sharp edges or points on toys - Toys with small detachable parts which could cause choking - Badly designed or constructed electrical items - Toys with toxic metals such as lead in the paint If you have a problem which you believe could be a safety issue, it is important to bring it to our attention. You can do this by letter, telephone, fax, or e-mail. It will be helpful, but not essential, to retain any packaging, instructions, receipt and other relevant information. If you have any queries, or would like to make a complaint, you can contact the Citizens Advice consumer helpline on 08454 04 05 06 or visit www.adviceguide.org.uk If you are buying toys as presents for children, the Royal Society for the Prevention of Accidents has some useful advice. - Only buy toys from recognised outlets and take extra care buying from car boots or jumble sales. - Look out for the CE symbol and the British Toy and Hobby Association 'Lion Mark' and details of who made the toy and where. - Check the age range and make sure it is suitable for the recipient. Take extra care with toys for children under three years, who are most at risk of choking. - Watch out for young children playing with older children's toys. 
- Look at the toy and check for loose hair, small parts, sharp edges and points. - Make sure that garden swings and slides are robust and secure and are not a strangulation hazard. - Check toys regularly for wear and tear and repair or dispose of them when necessary. - Keep areas where children play tidy and hazard free. - Follow the instructions and warnings provided with toys. - Supervise young children at play. For more information on age restricted products visit Underage Sales. Copyright: Copyright Restricted
Ricin: Control Measures Overview for Clinicians
Protecting Emergency/First Responders
- Always use Standard Precautions.
- Responders should be trained and attired in appropriate personal protective equipment (PPE) before entering the incident site (hot zone).
- If rescuers have not been trained in use of appropriate PPE, call for assistance in accordance with local Emergency Operational Guides.
- Sources include local HAZMAT teams, the closest Metropolitan Medical Response System (MMRS), and the U.S. Army Soldier and Biological Chemical Command (SBCCOM)-Edgewood Research Development and Engineering Center.
- The Incident Commander assigns personal protective equipment (PPE) levels based on a hazard assessment and site conditions, including the mechanism of dispersal and whether or not dispersal is continuing.
- Incident site (hot zone) PPE may include:
- Chemical protective clothing
- NIOSH-approved pressure-demand, self-contained breathing apparatus (SCBA CBRN, if available) is recommended in response to non-routine emergency situations
- In other situations, two types of full facepiece, tight-fitting masks may be used: 1) Powered Air-Purifying Respirator (PAPR) with HEPA filters; or 2) Air-Purifying Respirator (APR) with P100 filters
- For guidance on selection criteria, see: Interim Recommendations for the Selection and Use of Protective Clothing and Respirators Against Biological Agents
- Eyes should be protected when possible (a full facepiece respirator provides eye protection)
- Support Zone (Post-Decontamination)
- Use Standard Precautions.
- PPE disposal:
- Decontaminate any reusable PPE by thoroughly rinsing with soap and water, soaking in a 0.1% sodium hypochlorite solution for 15 minutes, and then rinsing with water and allowing it to air dry
- Dispose of single-use PPE as hazardous waste.
- Identify person(s) assigned to coordinate communication (e.g., with medical examiner, investigators, law enforcement). 
- Identify person(s) assigned to managing fatalities (e.g., to set up temporary morgue, provide security, provide victims’ identities, protect victims’ personal effects, and maintain and protect records).
- Heighten awareness of and be suspicious for injuries and exposures beyond a release of ricin (e.g., another biological or chemical agent, blast injury, and trauma).
- If a ricin release is suspected or known:
- Determine if evacuation or “shelter in place” inside a building to avoid further exposure is necessary.
- Sort victims by urgency, need for stabilization, need for decontamination, number of victims, and healthcare resources.
- Base triage on walking feasibility, respiratory status, and additional injuries.
- Category (Priority) for triage of casualties:
- Immediate (Priority 1): Unconscious, talking but not walking, or moderate to severe effects in two or more body organ systems; seizing, post-ictal, severe respiratory distress, apneic, recent cardiac arrest.
- Delayed (Priority 2): Recovering from agent exposure/improving respiration.
- Minimal (Priority 3): Walking and talking.
- Expectant (Priority 4): Unconscious; cardiac/respiratory arrest of long duration.
- Direct ambulatory victim(s) from incident site/hot zone to decontamination zone.
- Shift to doing the most good for the most people when resources are exceeded.
- Evaluate and support airway, breathing, and circulation.
- When assisted ventilation is required, use a bag-valve-mask device with canister or air filter, if available.
- Apply direct pressure to stop bleeding, if present.
- Remove from incident site/hot zone as quickly as possible.
- Persons suspected to be contaminated with ricin should receive gross decontamination to the extent possible at the site of release, prior to transport to the hospital, unless the medical condition of a victim dictates immediate transport to the hospital.
- Remove, bag, seal, and dispose of clothes, and wash body. 
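The four-tier triage scheme above can be sketched as a simple classifier. This is an illustrative simplification, not an official triage algorithm: real triage weighs many more findings (seizures, multi-organ effects, recency of cardiac arrest), and the function and field names here are mine.

```python
def triage_priority(walking: bool, talking: bool,
                    severe_respiratory_distress: bool,
                    prolonged_arrest: bool) -> tuple:
    """Map a few of the screening findings above to a (priority, category) pair."""
    if prolonged_arrest:
        return (4, "Expectant")   # cardiac/respiratory arrest of long duration
    if severe_respiratory_distress or (talking and not walking):
        return (1, "Immediate")   # talking but not walking, severe distress
    if walking and talking:
        return (3, "Minimal")     # walking and talking
    return (2, "Delayed")         # recovering / improving respiration

# A victim who is walking and talking sorts to Minimal (Priority 3)
print(triage_priority(walking=True, talking=True,
                      severe_respiratory_distress=False, prolonged_arrest=False))
```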
- See Healthcare Facility Management, Decontamination/Infection Control, Decontamination.
- Emergency response personnel and local or state health department representative(s) arrange for disposal of clothing.
- When responding to victims at the agent release site, depending upon timing, duration, and circumstances of exposure (e.g., suspected/known release, release of another chemical agent or a biological agent, etc.) and availability of resources (e.g., medical personnel, antidote):
- Identify potentially exposed persons and evaluate each for evidence of exposure and for ricin poisoning symptoms.
- Transport exposed persons to a temporary field location, or to a healthcare facility.
- Notify exposed persons who are not transported to a healthcare facility of ricin poisoning symptoms, and advise them to seek immediate medical attention if symptoms develop. Record names, addresses, and telephone numbers.
- Be prepared for victim(s) who may present to an emergency department without prior warning.
- Direct the emergency department ventilation exhaust away from the hospital’s main ventilation system to limit distant spread of any airborne biological or chemical agent contaminant through off-gassing vapor from victims who present to and enter the emergency department.
- Comply with the healthcare facility’s Emergency Response Plan.
- Prepare for mass casualties by establishing patient triage, registration, decontamination, treatment, transportation, and stabilization zones/areas for hospital admission(s).
- Perform a hazard vulnerability analysis to determine if the hospital can manage the anticipated number of victims.
- Determine if lockdown (shelter-in-place) is necessary, and secure the area to control access and contain contamination.
- Prepare for public health surge capacity and cooperate with other healthcare facilities and local, state, and federal authorities when:
- It is determined that the healthcare facility cannot manage the anticipated number of victims. 
- Services expand beyond normal capacity because of a large-scale event.
- Sort victims by urgency, need for stabilization, need for decontamination (e.g., simultaneous release of multiple agents), number of patients, and healthcare resources.
- Treat, or hold for observation, previously decontaminated patients.
- Shift to doing the most good for the most people when resources are exceeded.
- Decontaminate persons whose skin or clothing was suspected or known to be exposed to ricin.
- If it has not been done at the incident site, decontaminate the exposed person prior to entry into the healthcare facility, outside the main emergency department (Decontamination Area).
- For the comfort of the victim and to improve cooperation, attention should be given to explaining the procedure to the victim, and providing privacy, security of personal belongings, and water at a comfortable temperature, if possible.
- Remove clothing as quickly as possible.
- Any clothing that has to be pulled over the head should be cut off the body instead.
- Remove jewelry and watches.
- Double bag and seal contaminated clothing and all personal belongings in plastic bags:
- Wear gloves, use a plastic bag turned inside out, or use tongs or similar objects to avoid touching contaminated areas of clothing.
- Place clothing inside one plastic bag, then seal the bag.
- Place the sealed bag inside another plastic bag and seal it.
- Label the bag as contaminated and secure it in a safe location until it can be safely disposed of.
- Avoid touching any contaminated areas if assisting an exposed person to remove clothing.
- Prevent droplets from contacting broken skin or mucosal membranes when decontaminating someone or cleaning up body fluids that may contain ricin toxin. Airborne dispersal of ricin during decontamination is an unlikely hazard.
- Rapidly wash off any obvious contamination with soap and copious amounts of water. 
- Shower the entire body, including head and hair, with large amounts of liquid soap and warm water; this is the most effective and preferred method for removing remaining hazardous substances from skin.
- Irrigate exposed eyes with plain water for 10 to 15 minutes:
- Remove contact lenses if contact lenses are present and are easily removable without additional trauma to the eyes.
- Do not put contact lenses back in the eyes, even if they are not disposable contact lenses.
- Wash eyeglasses with soap and water.
- Eyeglasses may be put back on after they have been washed.
Isolation and Exposure Prevention
- Use Standard Precautions.
- Prior to decontamination, healthcare workers caring for chemically contaminated patients should:
- Put on a full chemical-resistant suit with gloves, surgical mask, and eye/face protection such as a face shield and goggles.
- If a person’s skin or clothes have been contaminated with ricin, and the victim has not already undergone decontamination, decontaminate the ricin-exposed victim(s) before entry into the healthcare facility.
- During and after decontamination tasks, healthcare personnel should refrain from any hand-to-mouth activities.
- After completing decontamination tasks, healthcare personnel should:
- Carefully remove all PPE and place it in sealed plastic bag(s) for either decontamination or disposal.
- Perform hand hygiene and shower.
- When caring for ricin-exposed victims who do not require decontamination, or victims post-decontamination, healthcare workers should follow Standard Precautions and perform hand hygiene.
- Standard laboratory precautions should be observed and precautions taken to avoid aerosolization and exposure of laboratory personnel (see Ricin: Diagnosis and Laboratory Guidance for Clinicians).
- Aerosol-generating sawing associated with surgery should be avoided.
- Use Standard Precautions when handling bodies of ricin-exposed patients who have died. 
Aerosol-generating procedures (e.g., bone-sawing associated with post-mortem examinations) should be avoided.
- Healthcare personnel or laboratory workers sustaining exposure via sharps injury, cuts, or abrasions should immediately wash the exposed site with soap and water.
- Potentially exposed healthcare personnel should be advised to remove all PPE carefully, wash hands thoroughly with soap and water, refrain from any hand-to-mouth activities, and shower.
- When exposure to the eyes occurs, flush the eyes with copious amounts of water or eye wash solution for at least 15 minutes.
- Follow standard facility policy regarding workplace exposure.
- Environmental surfaces or equipment, such as in a transport vehicle (e.g., ambulance), can be cleaned with soap and water, then disinfected with a 0.1% sodium hypochlorite solution, or cleaned and disinfected with an EPA-registered hospital disinfectant following conventional protocols.
- In the healthcare facility, disinfect environmental surfaces with an EPA-registered hospital disinfectant following conventional healthcare facility policies and procedures.
- In case of a spill of materials potentially contaminated with ricin, immediately cover the spill with absorbent materials, then disinfect the area with an EPA-registered hospital disinfectant or EPA-registered chlorine bleach solution following healthcare/laboratory facility policies and procedures.
Infection Control Professionals should:
- Maintain heightened awareness for evidence of ricin-exposed patients and collaborate with clinicians and the laboratory to ensure immediate notification of local and state public health department officials when ricin poisoning is suspected
- Ensure that telephone numbers for notification of appropriate healthcare facility and public health agencies are current and distributed to the appropriate healthcare facility departments and personnel
- Communicate with the laboratories that receive specimens for testing. 
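Several steps above call for a 0.1% sodium hypochlorite solution. Assuming a stock of household bleach at roughly 5% hypochlorite (an assumption; check the product label, and follow your facility's own protocol), the dilution arithmetic works out as:

```python
def parts_water_per_part_bleach(stock_pct: float, target_pct: float) -> float:
    """Parts of water to add per one part of stock to reach the target concentration."""
    return stock_pct / target_pct - 1

# 5% stock diluted to 0.1%: add 49 parts water per part of bleach (a 1:50 dilution)
print(parts_water_per_part_bleach(5.0, 0.1))
```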
- Page last reviewed February 29, 2008
- Page last updated April 17, 2006
When it comes to planting populations for corn silage, there’s no “one size fits all” recommendation. In general, silage yield and fiber content tend to improve as plant populations increase, but recommendations vary based on geography, soil type and the specific hybrid being planted. “Proper plant spacing is critical for top yield and quality potential,” says Terry Helms, customer agronomist for Mycogen Seeds. “Always consult with your trusted agronomic adviser to determine the ideal plant population for your specific situation.” Helms provides these tips to help determine proper plant populations:
• Generally, silage hybrids can be planted at populations 5 percent to 15 percent higher than grain corn hybrids, or approximately 2,000 more plants per acre.
• BMR hybrids that contain the bm3 gene do not need to be planted at extremely high populations (28,000 to 30,000 plants per acre is desired in most environments). Populations for these hybrids should not exceed 34,000, even with high levels of management.
• Population requirements depend on the productivity of both the hybrid and the soil. Highly productive soils can support higher plant populations. In lower-productivity soils, growers may not see the benefit of increased plant populations.
• Heavier, finer soils with better water-holding capacity can support higher populations than lighter, coarser-textured soils. This holds true only in non-irrigated situations; higher levels of available moisture are necessary to realize the advantage of increased plant populations.
• Silage hybrids perform better when planted on highly fertile soils under an optimum fertility and management program.
• As plant population increases, uniform plant spacing becomes more critical for plant development and yield potential. Prepare your planter early and check it often during planting for proper seed depth and spacing.
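The rules of thumb above can be combined into a rough calculator. The 5 to 15 percent uplift, the roughly 2,000-plant alternative, and the 34,000 cap for bm3 BMR hybrids all come from the text; the function itself is an illustrative sketch, and no substitute for advice from a local agronomist.

```python
def silage_population(grain_pop: int, uplift_pct: float = 10.0, bmr: bool = False) -> int:
    """Estimate a silage plant population (plants/acre) from a grain population.

    uplift_pct should fall in the 5-15% range suggested in the text;
    an alternative rule of thumb is simply grain_pop + 2000.
    """
    pop = round(grain_pop * (1 + uplift_pct / 100))
    if bmr:
        pop = min(pop, 34_000)  # bm3 BMR hybrids should not exceed 34,000
    return pop

# 32,000 grain population at a 10% uplift gives 35,200; capped at 34,000 for a BMR hybrid
print(silage_population(32_000, 10.0), silage_population(32_000, 10.0, bmr=True))
```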
Dental Care
Regularly examine your pet for signs of dental disease: bad breath, tartar, red, swollen, or painful gums, decreased appetite, difficulty eating, loose or missing teeth. If your pet has any of these signs, it has periodontal disease. Periodontal disease can start as early as 2 years of age. More than 85 percent of dogs and cats over four years of age have some form of periodontal disease, a progressive, painful inflammation and destruction of the normal tooth structure, leading to tooth loss. If periodontal disease is left unchecked, bacteria from the mouth can enter the bloodstream and travel to major organs, starting infections there. Damage to these organs caused by infection can shorten the lives of dogs and cats. Factors that affect the incidence of periodontal disease include:
- Breed: smaller dogs tend to have more dental disease than larger dogs. Cats can have resorptive enamel lesions exposing the root of the tooth.
- Extra or malpositioned teeth: retained baby teeth can force the permanent teeth into abnormal positions and cause a build-up of tartar. Brachycephalic breeds (e.g., Pugs) have malpositioned teeth causing crowding and a tartar trap. These teeth should be extracted to avoid problems later.
- Diet: soft food can predispose to an increased accumulation of plaque.
Professional cleaning is the best way to remove tartar on the teeth, and hopefully reverse the effects on the gums. Under anesthesia, the tartar is removed, the teeth are polished, fluoride is applied, and the teeth are probed for pockets. A full-mouth x-ray is performed to check for any unseen abnormalities under the gumline. Problem teeth are x-rayed to develop a treatment plan. A root canal or periodontal surgery may be performed to save a tooth. Any teeth that can not be saved are extracted. To help prevent dental tartar, start brushing your pet's teeth when your pet is young or after a teeth cleaning. We recommend daily brushing. 
Brushing will dramatically increase the interval between teeth cleaning appointments. Tips to get your pet to accept tooth brushing:
- Start with a healthy, comfortable mouth. Untreated problems can cause a painful mouth and a non-compliant patient.
- Choose a proper toothbrush. The toothbrush should have soft bristles which can reach under the gum line. Use the right size toothbrush to fit the patient, large or small. Each pet should have its own toothbrush to decrease cross-contamination of bacteria from one pet to another.
- Use special pet toothpaste. CET toothpaste is flavored to increase acceptance and, unlike human toothpaste, does not have detergent properties that can cause gastrointestinal upset if swallowed.
- Brush your pet's teeth when the pet is relaxed. Position the pet in a corner or on your lap so that it will be secure and more easily handled. Put a small amount of toothpaste on your finger and allow the pet to taste it. Carefully lift the lips up to expose the teeth. Apply a small amount of toothpaste to the brush. Place the brush bristles at a 45 degree angle to the gum line. Move the brush gently in circular patterns over the teeth as well as back and forth. Start by brushing a few teeth. As the brushing sessions continue, slowly include more teeth. Build up to 30 seconds on each side. When you sense the pet is anxious, give reassurance by talking, then try again. Expect progress, not perfection. Reward immediately with a play period and praise after each cleaning session. Take time. Each pet is different: some will be trained in one week, while others will take a month.
- Use other home care products to reduce plaque production. These include tartar control dry diets such as Hill's T/D, OraVet Plaque Prevention, rawhides or chew toys. Pets with periodontal disease may need antibiotics and disinfectant rinses or gels.
Most importantly, brush daily, have annual dental exams, and regularly have teeth cleaned professionally. 
PLAQUE INDEX (PI #)
- PI 0: no observable plaque
- PI 1: scattered plaque covering less than one third of the buccal tooth surface
- PI 2: plaque covering between one and two thirds of the buccal tooth surface
- PI 3: plaque covering greater than two thirds of the buccal tooth surface
CALCULUS INDEX (CI #) refers to the amount of calculus on a tooth.
- CI 0: no observable calculus
- CI 1: scattered calculus covering less than one third of the buccal tooth surface
- CI 2: calculus covering between one and two thirds of the buccal tooth surface with minimal subgingival deposition
- CI 3: calculus covering greater than two thirds of the buccal tooth surface and extending sub-gingivally
GINGIVAL INDEX (GI #) is the number assigned to designate the degree of gingival inflammation.
- GI 0: normal healthy gingiva with sharp, non-inflamed margins
- GI 1: marginal gingivitis with minimal inflammation and edema at the free gingiva; no bleeding on probing
- GI 2: moderate gingivitis with a wider band of inflammation and bleeding upon probing
- GI 3: advanced gingivitis with inflammation clinically reaching the mucogingival junction, usually with ulceration; periodontitis will usually be present
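The plaque and calculus indices above both grade buccal-surface coverage by thirds, so they share one scoring rule. A sketch of that rule (the function name and the fraction-based input are mine, for illustration only):

```python
def coverage_index(fraction_covered: float) -> int:
    """Score buccal-surface coverage by thirds, as in the PI/CI scales above.

    0 = none, 1 = less than 1/3, 2 = between 1/3 and 2/3, 3 = more than 2/3.
    """
    if fraction_covered <= 0:
        return 0
    if fraction_covered < 1 / 3:
        return 1
    if fraction_covered <= 2 / 3:
        return 2
    return 3

# Half the buccal surface covered scores index 2
print(coverage_index(0.5))
```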
The human body, like anything that gives off warmth as it works, wastes a large amount of energy as heat. Can this dissipating energy be harvested and used to run an electric device? A team of Wake Forest University scientists has made good progress in this regard. They have developed a thermoelectric device, called Power Felt, to gather power from heat, whether it originates from the human body, roof tiles or even a wound wrap. To generate a charge, Power Felt exploits the temperature difference between the body and its surroundings. In other words, if the device is set to generate power from body heat, it creates the charge from the difference between the temperature of the body and the temperature of the room. The fabric-like device is made by locking tiny carbon nanotubes in flexible plastic fibers, and this textile can be built into clothing or attached to anything that emits heat. According to Corey Hewitt, a Wake Forest graduate student, thermoelectrics has been an underdeveloped technology because of its high cost. Scientists often refrained from harnessing thermoelectrics because the materials were expensive, and the energy the technology can generate is meager. Even a Power Felt sample with 72 stacked layers of nanotubes produces only 140 nanowatts, nowhere near enough to charge an iPhone. Gizmodo notes that 140 nanowatts is only a millionth of the energy an idle iPhone requires. But these early limits won't discourage the researchers at the Center for Nanotechnology and Molecular Materials. They are working to enhance the thermoelectric material so it produces more power; they believe it could eventually generate enough to charge a smartphone, music player or medical equipment. The researchers aim to add more nanotube layers and to make the material even slimmer and more efficient. Thermoelectric technology is already in use in CPU coolers, car seats and mobile refrigerators. 
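The article's own numbers give a feel for the gap: 140 nanowatts from a 72-layer sample, versus an idle phone draw a million times larger. A quick sanity check of those figures (the idle-phone draw is inferred from Gizmodo's "a millionth" comparison, not a measured value):

```python
felt_output_w = 140e-9               # 140 nanowatts from the 72-layer sample
idle_phone_w = felt_output_w * 1e6   # "a millionth of the energy an idle iPhone requires"

# Per-layer output of the 72-layer stack, and the implied idle phone draw
print(f"per-layer output: {felt_output_w / 72 * 1e9:.2f} nW")
print(f"implied idle phone draw: {idle_phone_w * 1e3:.0f} mW")
```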
In these applications, the technology is used not to produce electricity but to pump heat away and cool the devices. Here, thermoelectric technology makes use of a more efficient compound, named bismuth telluride, which costs about $1,000 per kilogram. So the researchers think they can commercialize a thermoelectric cell phone cover for just $1. The fabric swatch could harvest the heat a cell phone gives off and feed it back to help recharge the device itself. Doesn’t it sound great? Now let us see what David Carroll, director of the Center for Nanotechnology and Molecular Materials, says of the groundbreaking invention: "Imagine it in an emergency kit, wrapped around a flashlight, powering a weather radio, charging a prepaid cell phone. Literally, just by sitting on your phone, Power Felt could provide relief during power outages or accidents."
Via: ScienceDaily
People ask exactly what is ale? Why do people go on about it so much? Why is it only drunk by men with huge beer guts, silly beards and daft pipes? Ale is simply this, the basic form of beer. Nothing more, nothing less. It's just another style of beer, as lager or stout are styles of drink.
The History of Ale
Throughout English history ale is mentioned, usually in the same sentence as the words 'quaffing' and 'amounts'. If ancient texts are to be believed, then the only drinks available throughout the Dark Ages were ale and mead. Ale itself hasn't changed much since then, when it was probably at its peak of popularity, except that it's a lot thinner, clearer and weaker and made from different ingredients. Ale is basically fermented maltose. People in the Dark Ages boiled up a lot of malt, strained it, let it cool down and then added some bread to the remaining liquid so that the yeast in it would start the fermentation. A nice, simple process, and everybody was doing it. However, men didn't brew ale; it was the women who did. Brewing beer was seen as just another one of the household chores for the wife to do while the husband was off slaughtering people in the Crusades. Wives weren't so much chosen for their good looks back then. They were chosen for their abilities to cook, clean, have babies and brew beer. The word Brewster, meaning a woman brewer, came into the English language after the word Brewer, meaning a man who brews. For a man brewing was a job, but for a woman it was part of the housework. The first public houses came into being around the Dark Ages. For when the men weren't off dismembering the infidels they sat around getting drunk. Soon the houses where the women brewed the best beer became gathering places. Skipping forwards in time a couple of hundred years, you'll find something that's actually recognisable as a pub, except it probably had a brewery attached to the back of it.
Most pubs used to brew their own beer, and if the beer was any good people travelled that little bit further to go there and the pub did a good trade. Brewers became more adventurous with what they threw into the pot, and when someone found out that hops were a great preservative as well as adding that bitter taste, they were soon added to every ale. So as you can see, there is a lot of history behind ale as a drink and people, being people, like to cling onto the past. CAMRA is a consumer group. A few blokes were sat in a pub one night in the 1970s having a few beers, as blokes do, and the conversation turned to how hard it was to get a decent pint. Now, unlike most ideas that are thought of as good after a couple of beers, this one still seemed a good idea the following morning and so they decided to do something about it. Landlords were talked to, letters were sent and other people became involved. The decline of ale in pubs slowed and turned. It wasn't till years later that some of the stereotypical people joined.
So Why is it Real Ale?
Quite simply because it's still alive. Keg bitter, as opposed to real ale, has been filtered and pasteurised. In plain English, it's had all the bits taken out. It's like the difference between full-fat and skimmed milk in the sense that they're both milk, but taste completely different. Whilst in the cask in the cellar, real ale still has yeast and hop leaves floating in it. It looks cloudy and not really very appealing. But left for a day or two and all those bits settle to the bottom naturally, leaving behind only their flavour. Real ale, as opposed to keg bitter, has more body. It tastes a bit fuller, a bit more complete, and in a way it is.
So What Goes into Ale?
You name it, it goes in - almost. The rumours that rats or joints of beef go in are unfounded. The basic ingredients in real ale are:
- Barley Malt
The not-so-basic ingredients that can be added in for flavour are:
- Sweet Gale
And others.
Basically if it's not poisonous you can stick it in a pint. And people do. Ale isn't for everyone, but as the saying goes, 'How do you know you don't like it until you've tried it?' With over 600 breweries producing in excess of 2000 different real ales in the UK alone, that's an awful lot of trying to do!
Strongly supported by western mining interests and farmers, the Bland-Allison Act—which provided for a return to the minting of silver coins—becomes the law of the land. The strife and controversy surrounding the coinage of silver is difficult for most modern Americans to understand, but in the late 19th century it was a topic of keen political and economic interest. Today, the value of American money is essentially secured by faith in the stability of the government, but during the 19th century, money was generally backed by actual deposits of silver and gold, the so-called "bimetallic standard." The U.S. also minted both gold and silver coins. In 1873, Congress decided to follow the lead of many European nations and cease buying silver and minting silver coins, because silver was relatively scarce and to simplify the monetary system. Exacerbated by a variety of other factors, this led to a financial panic. When the government stopped buying silver, prices naturally dropped, and many owners of primarily western silver mines were hurt. Likewise, farmers and others who carried substantial debt loads attacked the so-called "Crime of '73." They believed, somewhat simplistically, that it caused a tighter supply of money, which in turn made it more difficult for them to pay off their debts. A nationwide drive to return to the bimetallic standard gripped the nation, and many Americans came to place a near mystical faith in the ability of silver to solve their economic difficulties. The leader of the fight to remonetize silver was the Missouri Congressman Richard Bland. Having worked in mining and having witnessed the struggles of small farmers, Bland became a fervent believer in the silver cause, earning him the nickname "Silver Dick." With the backing of powerful western mining interests, Bland secured passage of the Bland-Allison Act, which became law on this day in 1878. 
Although the act did not provide for a return to the old policy of unlimited silver coinage, it did require the U.S. Treasury to resume purchasing silver and minting silver dollars as legal tender. Americans could once again use silver coins as legal tender, and this helped some struggling western mining operations. However, the act had little economic impact, and it failed to satisfy the more radical desires and dreams of the silver backers. The battle over the use of silver and gold continued to occupy Americans well into the 20th century.
Posted by Physics fail on Friday, January 25, 2013 at 5:28pm.
An electron experiences a 1.2 x 10^-3 force when it enters an external magnetic field, B, with a velocity v. What is the force experienced by the electron if the magnetic field is increased two times and the velocity is decreased to half? Any help would be greatly appreciated.
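For anyone working through this, the magnitude of the magnetic force on a moving charge is F = qvB sin θ, so rescaling B and v rescales F by the product of the two factors. Doubling B while halving v leaves the product vB, and hence the force, unchanged. A quick check of that proportionality (assuming, as such problems usually intend, that the velocity stays perpendicular to the field so sin θ = 1):

```python
# Magnetic force on a charge moving perpendicular to the field: F = q * v * B.
# Since F scales linearly in both v and B, only the combined scale factor matters.

F0 = 1.2e-3  # the stated force, in the units given in the question

# v -> v/2 contributes a factor 0.5; B -> 2B contributes a factor 2.
scale = 0.5 * 2.0
F_new = scale * F0
print(F_new)  # prints 0.0012 -- the force is unchanged
```

So the electron still experiences a force of 1.2 x 10^-3 in the same units.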
What is urticaria?
Urticaria, or hives, is a condition in which red, itchy, and swollen raised areas appear on the skin--usually as an allergic reaction from eating certain foods or taking certain medicines; however, sometimes the cause may be unknown. Hives can vary from one-half inch to several inches in size. Hives can appear all over the body or be limited to one part of the body.
What foods commonly cause hives?
What medicines commonly cause hives?
Other causes of hives
- Dermatographism. Hives caused by scratching the skin, continual stroking of the skin, or wearing tight-fitting clothes that rub the skin.
- Cold-induced. Hives caused by exposure to cold air or water.
- Solar hives. Hives caused by exposure to sunlight or light bulb light.
- Exercise-induced urticaria. Allergic symptoms brought on by physical activity.
- Chronic urticaria. Recurrent hives with no known cause.
What is angioedema?
Angioedema is an allergic reaction that causes swelling deeper in the layers of the skin. It most commonly occurs on the hands, feet, and face (lips, tongue, and eyes).
How Plants Are Named
Common or Trade Names
Plants can have many different common names. Depending on where it grows, the same plant may have many different regional names. It may also have other common names in other countries in which it grows. For example, the plant we know as serviceberry is also known as sarvisberry, shadbush, shadblow, saskatoon, Junebush, and Juneberry depending on what part of North America it is growing in. The common dandelion is also known as blowball, canker wort, Irish daisy, leotodon taraxacum, lion's tooth, puffball, and wild endive in various English-speaking countries. It is also known as dent de lion or pissenlit vulgaire in France, Löwenzahn in German, dente-de-leão in Portuguese, achicoria amarga, amargón or diente de león in Spanish, and there are many others. Making common names even more confusing is the fact that a single common name can be applied to many different kinds of plants which may not even be remotely related to it. For example, the plant we know as bluebells belongs to the group of plants known to the scientific community as Campanula. The common name bluebells has also been applied to plants belonging to Hyacinthoides (Europe), Endymion (Asia), Polemonium, Mertensia, Penstemon (North America), and Wahlenbergia (Australia). Trade names are special names with legal standing that are protected by laws. These are designated by the trademark™ and registered trademark® symbols. Examples of trade names are: Camelot® crabapple, Celebration® maple, Royal Heritage™ hellebore. The place to go for more information about trademarks is the US Patent and Trademark Office website: www.uspto.gov.
Botanical or Scientific Names
Unlike common names, botanical or scientific names are applied to only one kind of plant. They typically consist of two words; the first is called the genus name, the second the species name. Together they define a single unique type of plant.
This system of using binomials or two names to describe a specific plant was begun in the 18th century by the famous Swedish botanist Carl Linnaeus. Since that time, botanists and taxonomists (people who study plants and their classification and naming) have developed a system of international rules that determine how these names are created and used. This set of rules is called the International Code of Botanical Nomenclature. These rules set out how a scientific name is created, used and printed. A scientific name should be in italics or underlined. The genus name always begins with a capital letter, and the rest of the name is always in lower case letters. These names are in Latin, or are Latinized. A person’s name following the scientific name is the name of the person who first described the plant using that name. For example, our common white oak is known as Quercus alba L. The genus name for oaks is Quercus. The species name for white oak is alba, and the author of the combination describing white oak Quercus alba is Carl Linnaeus, which is abbreviated as the letter L. Occasionally you may find the letter x or the multiplication sign used in a scientific name. This signifies that the plant is a cross of hybrid derivation. Acer x freemanii is a cross between silver and red maple. Quercus x bebbiana is a cross between bur oak and white oak, and x Amelosorbus or ×Amelosorbus is a cross between Amelanchier and Sorbus. In some cases a sub-group of the name is created. In these cases scientists use a three-part name using a special abbreviation to show what kind of plant sub-group is being described: ssp. (subspecies), var. (variety) or f. (forma). Of these, subspecies and varieties pertain to different sub-groups of a plant that are tied to geography. A forma pertains to a variation that can occur anywhere in the range of a plant. For example, Cercis canadensis var. texensis refers to the sub-group of our common redbud tree that is found growing in Texas.
The name Cercis canadensis f. alba refers to a sub-group of our common redbud tree that blooms with white flowers rather than the more typical pink flowers. Horticulturists have created special names for individual plants with unique characteristics called cultivars. A cultivar name is always printed in normal type and is enclosed in single quotes. For example, Quercus alba ‘Fastigiata’ is the name of a narrow, upright-growing form of our white oak. The letters PP followed by a number signify that the plant has been patented. The place to go for more information about plant patents is the US Patent and Trademark Office website: www.uspto.gov. The words that make up the scientific name of a plant all mean something. They are Latin or Latinized words. Sometimes they are the old Roman name for a particular kind of plant (Acer, Cornus, Quercus). Latinized words of other languages are also used, especially Greek names (Scilla, Artemisia, Pyrethrum), descriptive names or terms (alba-white, laciniata-cut, sanguinea-blood-red), or names of people for whom the plant was named (Forsythia, Fothergilla, Magnolia). Finding out what the words of a scientific name mean can be fun and enlightening. A trip to the Sterling Morton Library will get you started on how and why plants are named. Even though scientific or botanical names may seem daunting, we use them every day without knowing it. Aster, chrysanthemum, forsythia, fothergilla, magnolia, narcissus, protea, rhododendron, sansevieria, scilla, and sorghum are a few examples of the scientific names for plants that we grow in our yards, gardens, or in our houses and use in our everyday language.
Students can learn how geologists use stratigraphy, the study of layered rock, to understand the sequence of geological events. As students watch baking soda-vinegar "lava" flow from their clay volcanoes, they will see that the lava follows different paths. They will also learn how to distinguish between older and newer layered flows. Lava Layering Activity [82KB PDF file] This activity is part of the Exploring the Moon Educator Guide
Poverty and Prosperity: A Longitudinal Study of Wealth Accumulation, 1850-1860 NBER Historical Working Paper No. 8 This paper depicts and analyzes the wealth distribution and wealth mobility in a national sample of nearly 1,600 households matched in the 1850 and 1860 manuscript schedules of the census. Gini coefficients, a transition matrix, the Shorrocks measure, and a regression model of wealth accumulation are estimated from these data. The findings shed light on theories of the wealth distribution, life-cycle behavior, regional economic performance, and the empirical basis for critiques of capitalism. Blacks accumulated slowly but the foreign born performed remarkably well. The distribution of wealth was relatively unequal on the frontier but the region performed well in reducing propertylessness. Residents of eastern cities were less fluid than other residents of the rural North. Blue collar workers and the unskilled declined relative to farmers and white-collar workers during the decade, which suggests that other aspects of wealth determination may have outweighed stretching of the wage structure as an explanation of growing inequality during industrialization. Comparisons with data on net family assets collected by the National Longitudinal Survey in the 1960s and 1970s show that mid-nineteenth century households were less mobile at the lower end but more mobile at the upper end of the wealth distribution. Published: Steckel, Richard H. "Poverty And Prosperity: A Longitudinal Study Of Wealth Accumulation, 1850-1860," Review of Economics and Statistics, 1990, v72(2), 275-285.
If you have a medical emergency, please contact your health care provider or go to the nearest emergency room. Children and adolescents may suffer psychological trauma as a result of disease, specifically the type of disease, epidemic, or pandemic that arises suddenly, spreads rapidly and widely, results in death, and may not have a known cure. As there is daily news coverage regarding potential health risks from occurrences of H1N1 (swine flu), SARS (severe acute respiratory syndrome), West Nile Virus, and others, the NCTSN will be compiling and providing information on several diseases to address the common questions of parents and families. The Psychological First Aid Field Operations Guide, 2nd edition (PFA), developed by the National Child Traumatic Stress Network and the National Center for PTSD, can assist mental health and other practitioners intervening with children and families exposed to disasters, including epidemics. PFA is available in English, Spanish, Japanese, and Chinese. The CDC web site features comprehensive information on other infectious diseases, including influenza, SARS, bird flu, H1N1 (swine flu) and West Nile Virus. The Department of Health and Human Services Pandemic Flu.gov site provides government-wide information on influenza outbreaks.
Competitive neutrality means that state-owned and private businesses compete on a level playing field. This is essential to using resources effectively within the economy and thus achieving growth and development. The principle of competitive neutrality is therefore gaining wide support around the world. But how to obtain it in practice is a much more difficult question. The purpose of this report is to help answer that question. The report identifies the most important issues that governments need to address in order to achieve competitive neutrality. It is framed around eight building blocks, including choosing the best corporate form, achieving a commercial rate of return, accounting for public service obligations, improving debt neutrality, and making public procurement open and transparent. It provides country examples of how to implement competitive neutrality policies in practice. The report is not about privatisation. Rather, it aims to provide guidance to policy makers who want to make sure that the presence of state-owned enterprises in the marketplace does not thwart private entrepreneurs, skew competition or lead to other inefficiencies. Understanding how to avoid unintended economic consequences that may follow from state ownership is particularly important for policy makers who face the challenge of balancing the commercial objectives of state-owned enterprises with other important policy objectives: a challenge that permeates all levels of government. The book may be read in conjunction with two publicly available stocktaking papers: Overview of OECD work on competitive neutrality
Parents in the Beaver State have higher labor force participation rates than the average across the nation. Nationally, 80.6 percent of parents with children under 18 were in the labor force in 2011, compared with 82.6 percent in Oregon. Men's participation doesn't differ much from the national norm; 92.9 percent of Oregon dads are in the labor force, just slightly below the national labor force participation rate of 93.5 percent. The participation rate of Oregon women with children under 18, however, is 3 percentage points above the national level (73.5% vs. the nation's 70.6%). Labor force participation of parents differs by gender and the age of children. For parents of children under six years of age, there's a big difference in the labor force experiences of men versus women. Of the men in this group, 93.4 percent are in the labor force, compared with 68.5 percent of Oregon mothers of children under age six. That female participation rate of 68.5 percent in Oregon is nearly 5 percentage points above the national participation rate of 63.9 percent for mothers of children under age six. The gender gap in labor force participation is reduced somewhat for parents of children ages six to 17. For men with children ages six to 17, the participation rate was 92.5 percent in 2011, and 77.6 percent of Oregon women with children in that age range were in the labor force. For people without children under 18, the genders behave far more similarly in their likelihood of labor force participation. Men in this group had a participation rate of 62.4 percent, just over 5 percentage points higher than the women's participation rate of 57.1 percent. Married parents show a wider variation in labor force participation between genders (Graph 2). Married mothers are less likely to work, and married fathers are more likely to work. Parents with any other marital status have more similar labor force participation between genders. 
Women with children under age six are the most likely to work part-time, with 39 percent of employed females in this group reporting part-time status (Graph 3). Men with children under age six were far less likely to work part time, with only 6 percent reporting such schedules. More than one-third of employed women and 8 percent of employed men with children ages six to 17 work part time. Once again, for people without children under the age of 18, the employment experiences of the genders are more similar. In this group, 22 percent of men and 31 percent of women work part time.
Yom Kippur, the Day of Atonement, falls ten days after Rosh Hashanah. When the Temple stood in Jerusalem, the High Priest effected atonement for the entire people through an elaborate ritual. Today, in the absence of the Temple, each of us stands, alone, together, naked as it were, before God. Some find the Yom Kippur liturgy, with its litany of sins, onerous, particularly for women. This text serves as a counterpoint to the traditional Al Chet (confession), affirming our goodness alongside our sins.
- The notion of parallel universes leapt out of the pages of fiction into scientific journals in the 1990s. Many scientists claim that mega-millions of other universes, each with its own laws of physics, lie out there, beyond our visual horizon. They are collectively known as the multiverse. - The trouble is that no possible astronomical observations can ever see those other universes. The arguments are indirect at best. And even if the multiverse exists, it leaves the deep mysteries of nature unexplained. In the past decade an extraordinary claim has captivated cosmologists: that the expanding universe we see around us is not the only one; that billions of other universes are out there, too. There is not one universe—there is a multiverse. In Scientific American articles and books such as Brian Greene’s latest, The Hidden Reality, leading scientists have spoken of a super-Copernican revolution. In this view, not only is our planet one among many, but even our entire universe is insignificant on the cosmic scale of things. It is just one of countless universes, each doing its own thing. The word “multiverse” has different meanings. Astronomers are able to see out to a distance of about 42 billion light-years, our cosmic visual horizon. We have no reason to suspect the universe stops there. Beyond it could be many—even infinitely many—domains much like the one we see. Each has a different initial distribution of matter, but the same laws of physics operate in all. Nearly all cosmologists today (including me) accept this type of multiverse, which Max Tegmark calls “level 1.” Yet some go further. They suggest completely different kinds of universes, with different physics, different histories, maybe different numbers of spatial dimensions. Most will be sterile, although some will be teeming with life. 
A chief proponent of this “level 2” multiverse is Alexander Vilenkin, who paints a dramatic picture of an infinite set of universes with an infinite number of galaxies, an infinite number of planets and an infinite number of people with your name who are reading this article. This article was originally published with the title Does the Multiverse Really Exist?.
A fine flower to start with
Written by George Ellison
One of the best pieces of advice I ever received in regard to learning wildflowers was to “concentrate on one family at a time.” The person advising me didn’t, of course, intend that I should devote my attention exclusively to the species in a given family and ignore any plants outside that group. But she rightly intuited that making real progress in a systematic manner required some sort of focus. My choice was the Lily Family (Liliaceae). In retrospect, I realize that picking this family was a rather grand first choice since it includes many genera and an array of species. I could have started with a less complicated group. But I was attracted by the showy — sometimes even gaudy — species represented in the Liliaceae: fly poison, wild hyacinth, lily-of-the-valley, trout lily, swamp pink, Indian cucumber root, grape hyacinth, bog asphodel, star-of-Bethlehem, Solomon’s and false Solomon’s seal, featherbells, rosy twisted stalk, the numerous trillium species, the bellworts, turkey beard, etc. The centerpiece genus of the Liliaceae is, of course, Lilium or the so-called true lilies. Here in the southern mountains this genus is comprised of five quite distinctive species: turk’s-cap lily (Lilium superbum), Canada lily (L. canadense), wood lily (L. philadelphicum), Michaux’s or Carolina lily (L. michauxii), and Gray’s lily (L. grayi). Of these, only the turk’s-cap and Michaux’s lilies are, in my experience, commonly encountered. The rarest species is Gray’s lily, also known as bell lily, orange-bell lily, roan lily, and roan mountain lily. It is, for me, not only the most beautiful species in the Liliaceae but also the most beautiful wildflower I have encountered in North America. The species is named for Asa Gray, America’s first great formal botanist. In 1840, Gray and several companions explored the high mountains of North Carolina.
Among the many exciting plants they located was the spectacular red and purple-spotted lily that would, in 1879, be described as a new species and named in Gray’s honor. Gray’s lily is a perennial, standing from two to four feet tall, with a smooth stem that bears three to eight whorls of narrow leaves. From June into early August, it displays from one to 10 bell shaped, slightly flared flowers on long stalks. The flowers are poised in an almost horizontal position. Each flower head is dark red or reddish-orange outside. Inside it is somewhat lighter in color and distinctively marked with numerous purple spots. It is a stately, almost regal plant. This rare and endangered species is limited in its natural state to high-elevation, moist, grassy open areas and woodland thickets. Its distribution is restricted to a handful of counties in western Virginia, east Tennessee, and western North Carolina. In an open, grassy plot alongside the creek on our property, Elizabeth and I once attempted as part of a horticultural experiment to grow several seedlings of Gray’s lily originally propagated from seeds by Kim Hawks, who was at that time the owner of Niche Wildflower Gardens near Chapel Hill. They flowered sparsely for several years and then disappeared. If we ever try to raise Gray’s lily again, we’ll create and place the plants in a moist peat bed in wooded shade.
A Soyuz rocket launched two Galileo satellites into orbit on Friday, marking a crucial step for Europe’s planned navigation system, operator Arianespace announced. The launch took place at the Kourou space base in French Guiana, at 3:15pm (6:15pm GMT). Three and three-quarter hours later, the 700kg satellites were placed into orbit. The new satellites add to the first two in the Galileo navigation system, which were launched on Oct. 21, last year. Together they create a “mini-constellation.” Four is the minimum number of satellites needed to gain a navigational fix on the ground, using signals from the satellite to get a position for latitude, longitude, altitude and a time reference. Galileo will ultimately consist of 30 satellites, six more than the US Global Positioning System. By 2015, 18 satellites should be in place, which is sufficient for launching services to the public, followed by the rest in 2020, according to the European Space Agency. It is claimed that the system will be accurate to within one meter. The US Global Positioning System, which became operational in 1995 and is currently being upgraded, is currently accurate to between three and eight meters. In May, the European Commission said the cost by 2015 would be 5 billion euros (US$6.45 billion). As a medium-sized launcher, Soyuz complements Europe’s heavyweight Ariane 5 and lightweight Vega rockets.
http://www.taipeitimes.com/News/world/print/2012/10/14/2003545181
How Rioters Act Like Shoppers

Shortly after the riots that spread across London in the summer of 2011, media outlets in the city began publishing maps trying to make sense of the event. They ran illustrations showing the sites of the worst rioting, as well as other maps cross-referencing the clusters of violence with the known home addresses – using court records – of people who'd been arrested in it.

At the time, these maps struck several researchers studying urban systems at University College London. "We thought, 'this is a spatial system, and it looks a bit like something we have looked at before,'" says Toby Davies, one of the academics. He and his colleagues were picturing, more specifically, spatial models of how shoppers behave in search of retail. And this got them thinking. "It looks like retail," Davies says, "and retail is something we know we can model." Why not try to mathematically model the movement of rioters?

Their research on this question, just published in the journal Scientific Reports, yields two curious insights: Rioters in search of retail to loot make rational decisions just like shoppers do about where to find the good stuff and how far they're willing to travel to get there. And this means that the spatial layout of a city may be just as important as its social dynamics in explaining the rise and spread of riots.

Most research about London's much-studied summer of 2011 has focused instead on the latter, on human behavior rather than urban space. "We're encouraging people to think in an explicitly geographic way," Davies says, "to really think about the places where these riots are taking place, to think about how rioters prioritize on that basis." Those 2011 riots were particularly characterized by massive looting, which makes the analogy to shopping particularly apt.
Any time you model a scenario where people have choices, Davies says, you have to first consider how attractive a given destination is in the perception of the shopper – or looter – considering a trip there. People are hindered by the cost of traveling, but they're also lured longer distances by prime targets. You buy your milk from around the corner. But you might drive several miles to a mall that has both an Apple Store and a Brookstone. Rioters make very similar calculations.

"Places where there are more goods to loot, in this context – or more shopping opportunities in the non-criminal case – attract people," Davies says. This is an obvious idea: Looters will congregate at retail hubs, and so you may want to pinpoint them on your police map.

But Davies and his colleagues have also looked at the proximity of potential rioters to destinations that might draw them. They examined areas that rank poorly on the U.K.'s index of multiple deprivation (a much more complex measure than the U.S. poverty rate of a given community). "We're very careful to say that deprivation isn't necessarily a cause of [rioting]," Davies says. "But there is a clear statistical relationship with deprivation. In more deprived areas, the rate of offending is higher."

This means that if you have a commercial hub but the populations nearby aren't particularly deprived, the likelihood of looting there is smaller. Likewise, a deprived community with no retail around looks in this mathematical model like a less likely source of rioting. In the model, all of this is also calibrated by one significant difference between looters and shoppers: looting can be contagious.

In its present state, Davies says, this model isn't ready just yet to be deployed by police in a live scenario (it doesn't take into account, for example, London's transportation system).
But the researchers hope to continue to refine it to where the system might be used in simulations by officers training or strategizing for how to respond to a future event. Such a tool could tell them where an initial outbreak of rioting might spread, the size it could reach, or even which neighborhoods deserve some preventive attention to head off future risk.

"One of the challenges that the police face in riots is that they are very rare events, so they don't get much chance to practice on how they react to them," Davies says. "If we can produce a way which simulates riots properly, then they can, as it were, 'set them off' in a controlled way and practice how they might respond to them."

In this spatial system – as in the London riots in 2011 – the worst offenses take place at retail sites. But the model could be transposed to other locations. A soccer riot, for instance, might spill out into nearby sports bars instead of shopping centers. The underlying idea is the same: If you're a police captain in the midst of such chaos, the spatial layout of your city will likely play a major role in determining what happens next.

Top image of riots in London in August of 2011. (Toby Melville/Reuters)
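The attractiveness-versus-travel-cost trade-off Davies describes is the core of classic gravity-style spatial choice models. A minimal sketch follows; the exponential distance-decay form, the parameter value, and the two targets are illustrative assumptions, not the published model's specification.

```python
import math

def choice_probabilities(origin, targets, beta=0.5):
    """Gravity-style spatial choice: a target's pull grows with its
    attractiveness and decays exponentially with distance from the origin.
    Returns one probability per target (illustrative sketch only)."""
    weights = []
    for t in targets:
        d = math.dist(origin, t["xy"])
        weights.append(t["attractiveness"] * math.exp(-beta * d))
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical targets: a corner shop nearby vs. a large retail hub farther away
targets = [
    {"name": "corner shop", "xy": (1.0, 0.0), "attractiveness": 1.0},
    {"name": "retail hub",  "xy": (5.0, 0.0), "attractiveness": 20.0},
]
probs = choice_probabilities((0.0, 0.0), targets)
for t, p in zip(targets, probs):
    print(t["name"], round(p, 3))  # the richer-but-farther hub dominates
```

Even with a fivefold distance penalty, the hub's higher attractiveness wins out — the same calculation behind driving several miles to the mall rather than looting the shop around the corner.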
http://www.theatlanticcities.com/neighborhoods/2013/02/how-rioters-act-shoppers/4797/
GLADE CREEK (Upshur County). Glade Creek rises two miles north of Little Mound in western Upshur County (at 32°45' N, 95°09' W) and runs south for nine miles to its mouth on Big Sandy Creek, two miles southwest of Pleasant Grove (at 32°38' N, 95°09' W). It traverses flat to rolling terrain surfaced by clay and sandy loams that support water-tolerant hardwoods, conifers, and grasses.

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: "GLADE CREEK (UPSHUR COUNTY)," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/rbglp), accessed May 22, 2013. Published by the Texas State Historical Association.
http://www.tshaonline.org/handbook/online/articles/rbglp
Vibration Response Imaging

Vibration response imaging, or VRI, is nominally a medical VR visualisation technology, although it has obvious uses in other fields. It was developed to image patients' lungs, and works by detecting vibrations within the target body using sensors deployed on either side, pressed against the surface.

Below, we offer a selection of links from our resource databases which may match this term.

Related Dictionary Entries for Vibration Response Imaging:

Resources in our database matching the Term Vibration Response Imaging:

As the demands for precise imaging in fields such as medicine, astronomy, and real-time machine vision in hostile environments continue to increase, so the demands placed on imaging equipment become ever more stringent. An imaging method based on Single Photon Avalanche Photodiodes (SPAD) offers the potential to ease this bottleneck greatly.

Technology Review's long and in-depth look at the rise of diffusion spectrum imaging, and how this new neural interface imaging technique is rapidly accelerating the study of both human and animal brains to an extent unparalleled by any previous imaging technique, even fMRI.

Diffusion spectrum imaging is a new technique at time of writing, which allows magnetic resonance brain imaging at a much higher level of fidelity than fMRI permits.

In mid 2012, Swiss researchers turned the world of Alzheimer's plaque imaging on its head: by combining a phased imaging source and an integral VR model generator, for the first time ever we can now track the formation of Alzheimer's plaques in real-time in living patients.

AR-based medical imaging technologies really began to take off in the early 2000s. There is a growing range of holographic, projective, interactive gesture recognition tools available, which can really make training and diagnosis so much easier.

VRIxp is a medical diagnosis device using what is perhaps a novel form of 3D visualisation.
It uses audio analysis of vibration deep inside the body to assemble precise structural detail.

fMRI, or functional magnetic resonance imaging, is one of the newest brain imaging technologies of the first decade of the 21st century. It is a basic form of Brain-Computer Interaction.

These are the proceedings of the fourth international medical imaging and augmented reality conference, held in Tokyo, Japan, August 1-2, 2008.

These are the proceedings of the third international medical imaging and augmented reality conference, held in Shanghai, China, August 17-18, 2006.

These are the proceedings of the first international medical imaging and augmented reality conference, held in Hong Kong, 10-12 June 2001.

Industry News containing the Term Vibration Response Imaging:

Currently, to keep track of a game, soccer fans have the option of reading textual information of the game's key events in near-real-time, or listening to audio of the text transferred to voice. However, these options require a user's full ...

Magnetic resonance imaging (MRI) can serve as a very sensitive technique for detecting small tumors in the body, but it is not as good at identifying the edges of a tumor. Photoacoustic imaging tomography (PAT) is not as sensitive as MRI, b...

A new 3D view of the body's response to infection -- and the ability to identify proteins involved in the response -- could point to novel biomarkers and therapeutic agents for infectious diseases. Vanderbilt University scie...

This week, researchers from Philips Electronics plan to describe a jacket they have lined with vibration motors to study the effects of touch on a movie viewer's emotional response to what the characters are experiencing.

New functional and imaging-based diagnostic tests that measure communication and signaling between different brain regions may provide valuable information about consciousness in patients unable to communicate. The new tests,...
http://www.virtualworldlets.net/Resources/Dictionary.php?Term=Vibration%20Response%20Imaging
Diabetes is commonly associated with a greater risk for blindness, infections and amputation. Those are serious problems. But adults with diabetes are also two to four times more likely than those without diabetes to die from heart problems or to have a stroke. In fact, cardiovascular disease is the leading cause of early death for people with diabetes.

Over time, high blood sugar levels are associated with more fatty deposits in the walls of blood vessels. Fatty materials can build up and form a plaque. This can narrow or block blood vessels. Plaque can make it more likely that a clot will form. This can restrict blood flow to your heart.

Controlling your blood sugar, blood pressure and cholesterol can help lower your risk of heart disease. You should get regular tests for all of these. If all of these levels are normal, that's fantastic. It's absolutely vital that you keep them in the normal range by maintaining a healthy lifestyle and following your diabetes management program. Others with diabetes may already have high blood pressure or high cholesterol. Either way, you can help keep those levels in check by taking important lifestyle steps.

Eat heart-healthy foods

The right mix of protein, fat and carbohydrates varies according to each person's needs. However, knowing the amount of carbohydrates you are taking in will help you control your blood sugar.

Include 14 grams of fiber for every 1,000 calories in your diet. High-fiber foods include oatmeal, whole grain breads and cereals, dried beans and peas and fruits and vegetables.

Cut down on foods with saturated fat. These include fatty meat, poultry skin, butter, dairy products with fat, lard and tropical oils. Limit your saturated fat intake to less than 7 percent of total calories.

Keep your cholesterol to 200 milligrams a day. Cholesterol is found in meat, eggs and dairy.
If you already have heart disease, your doctor may want you to eat even less cholesterol each day.

Look in the nutrition facts section of food labels to find out if a product has trans fat. It can be found in crackers, cookies, microwave popcorn, cake mixes and salad dressings. Trans fat can raise blood cholesterol. Try to minimize your intake of trans fat.

Think of small ways to increase your activity level, like taking the stairs instead of the elevator. But make sure you check with your doctor first to determine the safe level of exercise for you. Try to get at least 150 minutes of moderate-intensity exercise spread over at least three days a week. Don't go more than two straight days without exercise. Also aim to do muscle-strengthening exercises for the major muscle groups at least two days a week, provided you don't have other contraindications. Always get your doctor's approval before starting an exercise program.

If you choose to drink, limit your intake. Men should aim for two drinks or less per day. Women should aim for one drink or less a day.

Talk with your doctor about how much weight he or she wants you to lose if you are overweight. Ask a registered dietitian for help with meal planning. And go slowly. Aim to lose no more than 1 or 2 pounds each week.

Ditch the cigarettes

Smoking doubles your risk of getting heart disease. It cuts the amount of oxygen that goes to your organs, raises bad cholesterol and raises blood pressure. Make a quit plan. Set a quit date and tell people what it is. Write down your reasons for quitting. Toss your cigarettes, matches, lighters and ashtrays. Ask a friend who smokes to quit with you. Resources like smokefree.gov provide advice, information and encouragement.

Ask your doctor about aspirin

Studies have shown that low doses of aspirin each day can help cut the risk of a heart attack or stroke. But aspirin is not right for everyone. Be sure you get medical advice before taking it.
If you have diabetes as well as high blood pressure or high cholesterol, here are some additional steps you could take.

Follow your doctor's instructions for taking your blood pressure medications and incorporating lifestyle changes. Those include adopting a Dietary Approaches to Stop Hypertension (DASH)-style diet, losing weight, lowering your sodium intake, increasing your potassium intake, moderating your alcohol intake and getting more physical activity.

Eat less saturated fat, cholesterol and trans fat. Eat more omega-3 fatty acids, fiber and plant sterols/stanols. Those are substances that keep the body from absorbing cholesterol. Follow your doctor's guidance for losing weight and getting more exercise. Your doctor may also prescribe statins.

If you already have heart disease, it's crucial that you work with your doctor to prevent any further events. Your doctor may give you different (stricter) lifestyle recommendations. You can do it. Just be sure to work with your team, take your medications, live a healthy lifestyle and keep all of your medical appointments. Heart disease — and its prevention — should be taken seriously. But stay optimistic. You do have control over your diabetes and its threats.

This information is for your informational use only. It is not a substitute for professional medical advice. It may not represent your true individual medical situation. Do not use this information to diagnose or treat a health problem or disease without consulting a qualified health care provider. Also consult your healthcare provider before starting any medications or supplements or beginning or modifying any exercise program.

© 2013 OptumHealth, Inc. All rights reserved. No part of information on this page may be reproduced or transmitted in any form or by any means, without the written permission of OptumHealth, Inc.
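The numeric guidelines above (14 grams of fiber per 1,000 calories, saturated fat under 7 percent of calories) translate into simple arithmetic. A quick sketch; the 2,000-calorie figure is just an example, not a recommendation:

```python
def fiber_target_g(daily_calories):
    """14 grams of fiber per 1,000 calories, per the guideline above."""
    return 14 * daily_calories / 1000

def max_saturated_fat_g(daily_calories):
    """Saturated fat capped at 7% of total calories; fat carries ~9 kcal per gram."""
    return 0.07 * daily_calories / 9

calories = 2000  # illustrative daily intake
print(fiber_target_g(calories))                 # 28.0 grams of fiber
print(round(max_saturated_fat_g(calories), 1))  # 15.6 grams of saturated fat
```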
http://www.whig.com/story/21242069/diabetes-and-heart-disease-the-abcs-of-prevention
Fundamentals of Building Construction: Materials and Methods, 5th Edition

Now in its Fifth Edition, this essential textbook has been used by thousands of students annually in schools of architecture, engineering, and construction technology. The bestselling reference focuses on the basic materials and methods used in building construction, emphasizing common construction systems such as light wood frames, masonry bearing walls, steel frames, and reinforced concrete.

New introductory material on the processes, organization, constraints, and choices in construction offers a better look at the management of construction. New sections covering the building envelope uncover the secrets to designing enclosures for thermal insulation, vapor retarders, air barriers, and moisture control. The Fifth Edition also features more axonometric detail drawings and revised photographs for a thoroughly illustrated approach and the latest IBC 2006, CSI MasterFormat, ASTM references, and LEED information.
http://www.wiley.com/WileyCDA/WileyTitle/productCd-047007468X.html
3D: Landscape Photography

Unlike the human eye, which can convey depth, a digital camera is limited by its technology. So, how do you add depth to the landscape photos that need it the most? Here are a few suggestions. Follow these tips and you will be well on your way to creating perfect 3D images!

Now, you know objects that are closer to the lens appear larger than those further away, right? Well, take this cue and bring some perspective to your landscape shots. Your shot could be clouds nearing the horizon, water ripples close to you and further away, ocean waves, rivers, streams, trees, flowers, roads, etc. The point is to exaggerate what you want and minimize what you don't want. By doing that, you are also creating a 3D effect by showing the distance between two objects: the one near the camera and the one further away from it.

It's All in the Angle

Your landscape photos should have a point of view. This is where the lenses come into play. Wide-angle lenses increase the perceived distance between elements in the composition and promote a feeling of deep space. Telephoto lenses lend exactly the opposite of this: they compress the distance between elements in the scene. To accentuate these extreme effects, you should position the camera as close as possible to the nearest object in the composition.

Adjusting the Height of Your Camera

This can only come by experimentation. Since you own a digital camera, you can take multiple pictures and still not burn a hole in your pocket, unlike the old days. As a rule, landscape subjects that are closer to you should be positioned lower in your field of view than those more distant. Why? Because your eyes are more than five feet above the ground, and if you want to achieve the highest 3D effect, you should focus on the object at about a 45 degree angle above the ground. In other words, the focal length of your lens should be wide enough to include the horizon and a bit of sky.
If you place the camera too low, you will lose visual exposure of the spaces between size cues. If you set up too high, you will lose the horizon and the familiar eye-level configuration of the size cue. Either position results in a flattening of the scene.

Maximize Your Size Cues

Position the camera horizontally so that the number of size cues portrayed is maximized and the cues are kept separate and distinct. This step may require you to move the camera forward or backward, as well as sideways. In most situations, you should set depth of field to include both the closest size cue and features on the horizon (usually infinity).

Add Mood to Your Landscapes

Landscapes on hazy days can be great for photography. Due to particles suspended in the atmosphere, close objects appear more detailed than those further away. Aerial perspective is commonly encountered as fog, mist, snow, dust and haze. When shooting in these moody conditions, you can be assured of opportunities on the periphery of the atmospheric phenomenon, like the edge of storms or cloud banks. You can modulate the effect by changing position or waiting for a change or movement of the weather pattern.

Timing is Everything

The earlier in the day you shoot, the greater the effect. To flatten perspective and achieve an impression that is somewhat surreal, shoot early or late in the day, with the sun directly behind you for better illumination. Landscapes illuminated from the side fall into areas of highlight and shadow. This overlapping of objects or planes is emphasized and clarified, because the shadow portion of one is set against the highlight portion of another.

Give Some Space

If you are trying to include moving subjects in your landscape photos, like a passing cloud, rain drops or a dust storm, give the subject some space in your image to move into. If you do that, your landscapes will look that much more three dimensional.

Framing Does It

Frame your subject.
You can emphasize your subject by placing it into a frame of some sort. Things like an open window, tree branches or a doorway work very well.

Shallow Depth of Field

This is a great way to handle a busy background that would otherwise interfere with your subject. To get a shallow depth of field, use a long focal length, open the aperture as wide as possible and get as close as possible to your subject. This works best with DSLRs; it's an effect that is hard to achieve with a point-and-shoot camera.

And yes, use a polarizer to bring down the brightness of the skies. This works best with blue skies and when the sun is to your left or right. Polarizers also increase the saturation of the colors in your image.

Some Dos and Don'ts

1.) Use what is called "negative space" to your advantage. It is the part of an image that is not your subject. Don't be afraid to use a lot of it every now and then.

2.) Keep water lines horizontal. If you take a photo of a lake or the sea, make sure to keep the horizon level. Even a slight skew of half a degree will make the viewer feel uncomfortable with the picture.

3.) Don't be afraid to cut off certain things. Get closer, shoot only part of the face of a mountain or river, or select another detail, like a protruding rock or a patch of grass.

Have fun creating your own 3D landscape masterpieces!

~ Zahid H. Javali
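The shallow depth-of-field advice above can be quantified with the standard thin-lens approximation, valid when the subject is much closer than the hyperfocal distance. The focal lengths, apertures and circle of confusion below are illustrative values, not figures from the article:

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate total depth of field: DOF = 2 * N * c * u^2 / f^2,
    where N is the f-number, c the circle of confusion and u the
    subject distance (all lengths in millimeters)."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

# Same subject 2 m away: a long lens wide open vs. a short lens stopped down
shallow = depth_of_field_mm(focal_mm=100, f_number=2.8, subject_mm=2000)
deep    = depth_of_field_mm(focal_mm=35,  f_number=11,  subject_mm=2000)
print(round(shallow))  # 67 mm of sharp zone: the subject pops off the background
print(round(deep))     # 2155 mm: nearly everything stays in focus
```

Note how depth of field shrinks with the square of focal length but grows only linearly with f-number, which is why the long lens wide open isolates a subject so effectively.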
http://www.worldstart.com/3d-landscape-photography/
NASA's Hubble Breaks New Ground With Distant Supernova Discovery

WASHINGTON -- NASA's Hubble Space Telescope has looked deep into the distant universe and detected the feeble glow of a star that exploded more than 9 billion years ago. The sighting is the first finding of an ambitious survey that will help astronomers place better constraints on the nature of dark energy, the mysterious repulsive force that is causing the universe to fly apart ever faster.

"For decades, astronomers have harnessed the power of Hubble to unravel the mysteries of the universe," said John Grunsfeld, associate administrator for NASA's Science Mission Directorate in Washington. "This new observation builds upon the revolutionary research using Hubble that won astronomers the 2011 Nobel Prize in Physics, while bringing us a step closer to understanding the nature of dark energy which drives the cosmic acceleration." As an astronaut, Grunsfeld visited Hubble three times, performing a total of eight spacewalks to service and upgrade the observatory.

The stellar explosion, nicknamed SN Primo, belongs to a special class called Type Ia supernovae, which are bright beacons used as distance markers for studying the expansion rate of the universe. Type Ia supernovae likely arise when white dwarf stars, the burned-out cores of normal stars, siphon too much material from their companion stars and explode.

SN Primo is the farthest Type Ia supernova with its distance confirmed through spectroscopic observations. In these observations, the supernova's light is split into its constituent colors, a spectrum. By analyzing those colors, astronomers can confirm its distance by measuring how much the supernova's light has been stretched, or red-shifted, into near-infrared wavelengths because of the expansion of the universe.
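That stretch is quantified as the redshift z, the fractional change in wavelength. A worked sketch follows; the observed wavelength is a hypothetical number chosen to give a redshift in this general distance range, not a measured value from the survey:

```python
def redshift(observed_nm, emitted_nm):
    """z = (lambda_obs - lambda_emit) / lambda_emit: the fractional
    stretch of the light's wavelength caused by cosmic expansion."""
    return observed_nm / emitted_nm - 1

# H-alpha is emitted at 656.3 nm; suppose a feature is observed in the
# near-infrared at 1,640.8 nm (hypothetical reading for illustration)
z = redshift(1640.8, 656.3)
print(round(z, 2))            # 1.5

# Wavelengths stretch by a factor of (1 + z), so the universe was
# 1 / (1 + z) of its present scale when the light left the supernova
print(round(1 / (1 + z), 2))  # 0.4
```

A redshift around 1.5 means the light set out when the universe was roughly 40 percent of its present scale, consistent with the article's description of an epoch billions of years in the past.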
The supernova was discovered as part of a three-year Hubble program to survey faraway Type Ia supernovae, opening a new distance realm for searching for this special class of stellar explosion. The remote supernovae will help astronomers determine whether the exploding stars remain dependable cosmic yardsticks across vast distances of space in an epoch when the cosmos was only one-third its current age of 13.7 billion years.

Called the CANDELS+CLASH Supernova Project, the census uses the sharpness and versatility of Hubble's Wide Field Camera 3 (WFC3) to assist astronomers in the search for supernovae in near-infrared light and verify their distance with spectroscopy. CANDELS is the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey and CLASH is the Cluster Lensing and Supernova Survey.

"In our search for supernovae, we had gone as far as we could go in optical light," said Adam Riess, the project's lead investigator, at the Space Telescope Science Institute and The Johns Hopkins University in Baltimore, Md. "But it's only the beginning of what we can do in infrared light. This discovery demonstrates that we can use the Wide Field Camera 3 to search for supernovae in the distant universe." The new results were presented on Jan. 11 at the American Astronomical Society meeting in Austin, Texas.

The supernova team's search technique involved taking multiple near-infrared images over several months, looking for a supernova's faint glow. After the team spotted the stellar blast in October 2010, they used WFC3's spectrometer to verify SN Primo's distance and to decode its light, finding the unique signature of a Type Ia supernova. The team then re-imaged SN Primo periodically for eight months, measuring the slow dimming of its light. By taking the census, the astronomers hope to determine the frequency of Type Ia supernovae during the early universe and glean insights into the mechanisms that detonated them.
"If we look into the early universe and measure a drop in the number of supernovae, then it could be that it takes a long time to make a Type Ia supernova," said team member Steve Rodney of The Johns Hopkins University. "Like corn kernels in a pan waiting for the oil to heat up, the stars haven't had enough time at that epoch to evolve to the point of explosion. However, if supernovae form very quickly, like microwave popcorn, then they will be immediately visible, and we'll find many of them, even when the universe was very young. Each supernova is unique, so it's possible that there are multiple ways to make a supernova." If astronomers discover that Type Ia supernovae begin to depart from how they expect them to look, they might be able to gauge those changes and make the measurements of dark energy more precise. Riess and two other astronomers shared the 2011 Nobel Prize in Physics for discovering dark energy 13 years ago, using Type Ia supernova to plot the universe's expansion rate. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C. For images and more information about Hubble, visit: http://www.nasa.gov/hubble - end - text-only version of this release NASA press releases and other information are available automatically by sending a blank e-mail message to To unsubscribe from this mailing list, send a blank e-mail message to Back to NASA Newsroom | Back to NASA Homepage
<urn:uuid:0f86c830-d13f-4be2-a075-5e8b685a5d49>
CC-MAIN-2013-20
http://www1.nasa.gov/home/hqnews/2012/jan/HQ_12-011_Hubble_Farthest_Supernova.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.910339
1,144
3.453125
3
Jezreel Expedition Update July 2012

By Jennie Ebeling
Associate Professor of Archaeology
University of Evansville

By Norma Franklin
The Zinman Institute of Archaeology
University of Haifa

Members of the Jezreel Expedition headed by co-directors Norma Franklin (Zinman Institute of Archaeology, University of Haifa) and Jennie Ebeling (Department of Archaeology and Art History, University of Evansville) conducted an intensive landscape survey of greater Jezreel in June 2012. The main goal of the inaugural season was to record surface features in a three square kilometer area to the west, north, and east of Tel Jezreel in order to identify areas for excavation in summer 2013 (Figure 1). The Jezreel team documented more than 360 features, including cisterns, cave tombs, rock-cut tombs, agricultural and industrial installations, terrace and village walls, quarries and more; most of these features had never been systematically recorded before. The results of this survey shed light on the extent of different settlements at Jezreel from late prehistory through to the 20th century CE Palestinian village of Zerin.

In preparation for the 2012 season, maps, plans, aerial photographs, and various other unpublished materials relating to the site were collected from sources in Israel, Great Britain, and elsewhere. In February 2012, the Jezreel team commissioned a LiDAR (light detection and ranging) airborne laser scan of ca. 7.5 square kilometers of greater Jezreel from Ofek Aerial Photography, Ltd. The LiDAR scan provided us with accurate locational and height data that enabled the creation of a three-dimensional model of the land surface (Figure 2). The resultant model was examined to identify historic and natural features and georeferenced with aerial photographs (the earliest of which dates to 1918); this provided us with data concerning the landscape before mechanized farming and other modern surface-changing events took place.
This is the first time that aerial LiDAR has been used by an archaeological project in Israel.

Land to the west, north, and east of the tel was divided into survey areas based on the LiDAR model; when possible, natural and modern features delineated these large areas. Each identified feature was assigned a locus/feature number and information was recorded along with one or more sketch plans on a locus sheet. Coordinates and elevations were taken for each feature using a handheld GPS (Garmin eTrex HC), and a digital camera was used to document each feature from several angles (Figure 3).

Since the goal of this survey was to identify surface features, pottery and other artifacts were not systematically collected other than in Area T, located on the terrace that overlooks Ein Jezreel (the spring) (Figure 4). Pottery, lithic, and ground stone artifacts were collected on the terrace to the west and the east of the masha, or uncultivated area that was too overgrown to properly survey, designated Area S. To the west of the masha, the team collected pottery from the Early Bronze, Intermediate Bronze, Middle Bronze, Iron Age, Roman, Byzantine and modern periods; to its east, Wadi Rabah, Chalcolithic, Early Bronze, Intermediate Bronze, Iron, and Byzantine pottery was collected.

This information, along with the results of earlier surveys conducted by N. Zori and members of the Tel Aviv British School of Archaeology excavation team in the 1990s led by P. Croft, suggests that the site was inhabited continuously from as early as the late Neolithic period until the Byzantine period. The team plans to open an excavation area on the terrace in 2013 in order to investigate the occupational phases of this lower site. Other excavation areas are also planned in selected areas on the tel and on the north and east slopes.
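A land-surface model like the one derived from the LiDAR scan is typically built by gridding the classified ground returns into a digital elevation model. A minimal sketch of that gridding step follows, on synthetic points; real pipelines such as the commissioned scan also filter vegetation and buildings and interpolate empty cells:

```python
import numpy as np

def grid_dem(points, cell_size):
    """Bin LiDAR (x, y, z) returns into a regular grid, averaging the
    elevations that fall in each cell. Cells with no returns come out
    as NaN. A minimal DEM sketch, not a production workflow."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) // cell_size).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    total = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(total, (idx[:, 0], idx[:, 1]), points[:, 2])
    np.add.at(count, (idx[:, 0], idx[:, 1]), 1)
    with np.errstate(invalid="ignore"):
        return total / count

# Synthetic returns over a 10 m x 10 m plot with elevations near 105 m
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 100], [10, 10, 110], size=(1000, 3))
dem = grid_dem(pts, cell_size=1.0)
print(dem.shape)  # one elevation value per 1 m grid cell
```

Georeferencing against dated aerial photographs, as described above, is then a matter of aligning this grid's coordinate origin and cell size with the photographs' known ground coordinates.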
The 2012 Jezreel Expedition team included Norma Franklin (University of Haifa) and Jennie Ebeling (University of Evansville), Ian Cipin (University College London), Julye Bidmead (Chapman University), Noga Blockman (Tel Aviv University), and Deborah Appler (Moravian Theological Seminary) along with eight archaeology majors and alumni from the University of Evansville: Megan Anderson, Nathan Biondi, Sarah Carlton, Emma Dunleavy, Kelly Goodner, Michael Koletsos, Emily Mella, and Hilda Torres (Figure 5). John Egerton, Deborah Graf, and Rebekah Thomas of Moravian Theological Seminary participated for several days, as did Jeff Anderson (Wayland Baptist University), Willie Ondricek (University of the Holy Land), and Daria Trumbo. Technical support was provided by Peter Ostrin (University of Leicester) and Matthew Bradley in Oxford.

The team wishes to thank former co-director of the Tel Jezreel Excavations David Ussishkin (Tel Aviv University) as well as Mina Weinstein-Evron and Michael Eisenberg (University of Haifa) for their support of this project, and Deborah Cantrell (Vanderbilt University), Shimon Gibson (University of the Holy Land) and Eran Arie (Tel Aviv University) for their advice and assistance. We have benefited from cooperation with the Megiddo Expedition and the generosity of directors Israel Finkelstein and David Ussishkin (Tel Aviv University) and Eric Cline (George Washington University). We also wish to thank Sheila Bishop, President of The Foundation for Biblical Archaeology, for her generous assistance. The Jezreel team lodged in the Partnership House at Kibbutz Yizreel for the month of June and enjoyed the warm hospitality of kibbutz members.

For more information about the Jezreel Expedition, please visit our website: www.jezreel-expedition.com

Figures 1 and 4 courtesy of Todd Bolen.
Here is one method of front suspension for a bicycle that came out in 1889! This was patented by J. S. Copeland. When the front wheel hits a bump, it can travel up in relation to the frame. It also has a cool spoon brake, which was the norm before caliper brakes were invented. It is the same idea as shown in the Softride shock absorber stem above, which is also a parallelogram with a strong spring, to cushion some shock from hard bumps. But in the Softride version, the wheel doesn’t travel up; the handlebars travel down. My friend Kurt in Utah really likes his Softride stem, and has used it for years.

Here is a pretty well developed full suspension bike patented in 1890, a year before McGlinchey’s full suspension velocipede, and 32 years before the telescoping forks of Sage. As far as I know, Becker is the first inventor of the full suspension bicycle.

Here is a rear suspension bike from 1891 which used springs in a tube to give some give to the rear wheel. Here is another candidate for the first rear suspension bicycle design, from 1891. Its modern counterpart is shown below.

Here is a very early version of front suspension on a bike. In this patent from 1891 there is a spring in the headset, and the fork assembly can move back and forth to absorb road shock. This front suspension seems to be the precursor to early springer motorcycle forks. The beefy springs allowed the front wheel and forks to move upward and absorb some road shocks.

Those old bike designers tried a lot of ways to cushion the ride of the safety bike on the rough roads found at the end of the 19th century. Here is a different way to employ springs on the front forks to cushion the ride. This appears to be a front suspension bike, patented in 1891. The seat and cranks are attached solidly to the rear wheel, but if the front wheel hit a bump it would be allowed to raise up against the spring located near the crank. Interesting.
Many other early suspension designs are in the Bicycle Technology section of the Patent Pending blog.

In the top version of this bike, steering is by handles by the saddle, which are connected to the front wheel by cables. There is no traditional handlebar. I think the inventor was trying to allow the rider to sit upright and not have to lean forward to steer the front wheel. That might really relieve some back strain.

Here is a good way to have multiple speeds on a bike without using a derailleur. This bike has two gears on either side of the front sprocket, and a driveshaft for each of them. One driveshaft would be disengaged while the other was engaged. The driveshafts engage bevel gears on the rear wheel. This might be a little heavy, but should work just fine. Other driveshaft drives were patented in 1897 with a transmission and a gear shift knob, and in 1891 with a single drive shaft. Alexander Pope also patented a driveshaft bike.

Here is an interesting and early (1890) front suspension bike, using a spring in the fork assembly to soften the rough roads of the day.
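The two-driveshaft bike described above amounts to choosing between two fixed drive ratios, one per shaft. A toy calculation of the idea, with all tooth counts invented purely for illustration:

```python
# Two-speed driveshaft bike: each driveshaft pairs a front gear with a
# bevel gear on the rear wheel, giving one fixed ratio per shaft.
# Tooth counts below are invented for illustration only.
def wheel_rpm(crank_rpm, front_teeth, rear_bevel_teeth):
    """Rear wheel speed for a given cadence and gear pairing."""
    return crank_rpm * front_teeth / rear_bevel_teeth

low_gear = wheel_rpm(60, 24, 16)   # 90 rpm at the wheel
high_gear = wheel_rpm(60, 36, 16)  # 135 rpm at the wheel
```

Engaging one shaft or the other simply swaps which ratio is driving the wheel, which is why no derailleur is needed.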
High blood pressure, also known as hypertension, is a common yet serious health condition affecting about 1 out of every 3 adults in the U.S. It is often called the “silent killer” because it greatly increases the risk of heart attack and stroke, yet many people with hypertension do not have the classic symptoms of high blood pressure: sweating, nervousness, or trouble sleeping. With this being National High Blood Pressure Education Month, we thought we would help dispel harmful misconceptions surrounding this condition.

Myth 1: High blood pressure runs in my family so I will get it too.

While a family history of hypertension does increase your risk of developing it, that doesn’t mean you can’t avoid it. These healthy lifestyle factors can help prevent high blood pressure:
- Eat a healthy diet that consists of fruits and vegetables, lean protein and whole grains. Limit unhealthy saturated fats, sodium and fast carbs (sugar and processed flour)
- Get regular physical activity – about 30 minutes a day
- Maintain a healthy weight
- Manage stress
- Avoid tobacco and limit alcohol consumption to 1 drink per day for women and 2 for men.

Myth 2: I feel fine. I don’t have to worry about high blood pressure.

Even if you feel good and have no family history or other risk factors for high blood pressure, that doesn’t mean you are safe. Many people don’t have symptoms. Be sure to get your blood pressure checked at least once every two years.

Myth 3: If my blood pressure is 119/79 (considered normal) then I’m in good shape.

Not so fast. Normal blood pressure for a healthy person may be 119/79 (or below), but if you have other health conditions such as diabetes, excess body weight or high cholesterol, then your doctor may want your blood pressure even lower.

Myth 4: Kosher and sea salt are low sodium alternatives to table salt.

Like table salt, both kosher and sea salt contain 40% sodium and count the same toward total sodium consumption.
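Myth 4 is easy to check with arithmetic. A small sketch (the 40%-sodium-by-weight figure is from the article; the function name and the roughly 6 g teaspoon estimate are our own assumptions):

```python
# Salt of any kind (table, kosher, sea) is ~40% sodium by weight,
# per the article, so the conversion is the same for all three.
SODIUM_FRACTION = 0.40  # from the article

def sodium_mg(salt_grams):
    """Milligrams of sodium in a given mass of salt."""
    return salt_grams * 1000 * SODIUM_FRACTION

# A teaspoon of salt is roughly 6 g (our estimate) -> about 2,400 mg
# of sodium, regardless of which kind of salt it is.
teaspoon_sodium = sodium_mg(6)
```

The point of the calculation: switching from table salt to kosher or sea salt changes nothing about sodium intake per gram.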
Myth 5: I was diagnosed with high blood pressure but I have it under control now so I can stop taking medication. High blood pressure can be a life-long disease. Don’t stop taking your medication, but do speak with your doctor about your concerns and prognosis. For more ways to lower your risk of hypertension or keep it in check try these 10 Top Ways to Manage Blood Pressure Naturally. American Heart Association
With a timekeeper, sailors would be able to know the time back at their home port and calculate their longitude. But no one knew how to design such a clock. John Harrison (1693-1776), an Englishman without any scientific training, worked tirelessly for more than forty years to create a perfect clock. Together with beautifully detailed pictures by Erik Blegvad, Louise Borden's text takes the reader through the drama, disappointments, and successes that filled Harrison's quest to invent the perfect sea clock. 48 pages, HB/DJ.
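The principle behind Harrison's sea clock is simple arithmetic: the Earth rotates 360° in 24 hours, i.e. 15° per hour, so the difference between local solar time and the home-port time shown on the clock gives longitude. A small illustrative sketch (the function name and sign convention are ours, not from the book):

```python
DEGREES_PER_HOUR = 360 / 24  # Earth rotates 15 degrees per hour

def longitude_offset(home_time_h, local_solar_time_h):
    """Degrees of longitude west of the home port.

    home_time_h: time at the home port (read off the sea clock), in hours.
    local_solar_time_h: local solar time, e.g. 12.0 at local noon.
    A positive result means the ship is west of home (its sun lags behind).
    """
    return (home_time_h - local_solar_time_h) * DEGREES_PER_HOUR

# At local noon, the sea clock reads 15:00 home time:
# the ship is 45 degrees west of its home port.
offset = longitude_offset(15.0, 12.0)
```

This is why a clock accurate over a months-long voyage was so valuable: a timekeeping error of just four minutes corresponds to a full degree of longitude.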
Dependent personality disorder is characterized by dependent and submissive behavior. The person often defers the majority or all decision-making to someone else. People with this type of personality disorder are not aware that their thoughts and behaviors are inappropriate.

It is not clear what causes personality disorders, but it is likely a combination of genetic (inherited) factors and a person's environment. A risk factor is something that increases your chance of acquiring a disease or condition. Factors that increase the risk of dependent personality disorder include:

Symptoms of dependent personality disorder may include:
- Irrational fear
- Relying on others for guidance, decision-making, reassurance, and advice
- Excessive sensitivity to criticism
- A strong fear of rejection
- Perception of oneself as powerless

You will likely be referred to a psychiatrist or other mental health professional. You will be asked about your symptoms. A mental and medical health history will be taken. A diagnosis will be made after a complete psychiatric assessment that rules out other disorders.

Talk with your doctor about the best treatment plan for you. Treatment options include:

Counseling may be beneficial for people with dependent personality disorder. Counseling sessions focus on learning how to manage your anxiety and be more assertive. In some cases, medications, such as tricyclic antidepressants, monoamine oxidase inhibitors, or alprazolam, may help manage symptoms. For most patients, medications only provide a minimal amount of symptom relief. Other treatments, such as group therapy and social skills training, can help you manage symptoms.

Reviewer: Rimas Lukas, MD
Review Date: 09/2012
Editor's note: Colin Stuart is an astronomy and science writer, who also works as a Freelance Astronomer for the Royal Observatory Greenwich in London. His first book is due to be published by Carlton Books in September 2013. Follow @skyponderer on Twitter.

London (CNN) -- Reports coming from Russia suggest that hundreds of people have been injured by a meteor falling from space. The force of the fireball, which seems to have crashed into a lake near the town of Chebarkul in the Ural Mountains, roared through the sky early on Friday morning local time, blowing out windows and damaging buildings.

This comes on the same day that astronomers and news reporters alike were turning their attention to a 40-meter asteroid -- known as 2012 DA14 -- which is due for a close approach with Earth on Friday evening. The asteroid will skirt around our planet, however, missing by some 27,000 kilometers (16,777 miles). Based on early reports, there is no reason to believe the two events are connected. And yet it just goes to show how much space debris exists up there above our heads.

It is easy to think of a serene solar system, with the eight planets quietly orbiting around the Sun and only a few moons for company. The reality is that we also share our cosmic neighborhood with millions of other, much smaller bodies: asteroids. Made of rock and metal, they range in size from a few meters across, up to the largest -- Ceres -- which is about 1000 kilometers wide. They are leftover rubble from the chaotic birth of our solar system around 4,600 million years ago and, for the most part, are found in a "belt" between the orbits of Mars and Jupiter. But some are known to move away from this region, either due to collisions with other asteroids or the gravitational pull of a planet. And that can bring them into close proximity to the Earth. Once a piece of space-rock enters our atmosphere, it becomes known as a meteor.
As a meteor travels through the sky at tens of kilometers per second, friction with the air can cause it to break up into several pieces. Eyewitnesses have described seeing a burst of light and hearing loud, thunderous noises. This, too, is due to the object tearing through the gases above our heads. If any of the fragments make it to the ground, only then are they called meteorites.

Such events are rare, but not unprecedented. An object entered Earth's atmosphere in 1908 before breaking up over Siberia. The force of the explosion laid waste to a dense area of forest covering more than 2000 square kilometers. It is not hard to imagine the devastation of such an event over a more highly populated region. The Earth is sprinkled with around 170 craters also caused by debris falling from space. The largest is found near the town of Vredefort in South Africa. The impact of a much larger asteroid -- perhaps as big as 15 kilometers across -- is famously thought to have finished off the dinosaurs 65 million years ago.

It is easy to see, then, why astronomers are keen to discover the position and trajectory of as many asteroids as possible. That way they can work out where they are heading and when, if at all, they might pose a threat to us on Earth. It is precisely this sort of work that led to the discovery of asteroid 2012 DA14 last February by a team of Spanish astronomers. However, today's meteor strike shows that it is not currently possible to pick up everything. A non-profit foundation, led by former NASA astronaut Ed Lu, wants to send a dedicated asteroid-hunting telescope into space that can scan the solar system for any potential threats.

For now, astronomers will use Friday's fly-by to bounce radar beams off 2012 DA14's surface, hoping to learn more about its motion and structure. One day this information could be used to help move an asteroid out of an Earth-impacting orbit.
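For a rough sense of the energies involved, an asteroid's kinetic energy is ½mv², with the mass set by its size and density. A back-of-envelope sketch for a 40-meter body like 2012 DA14 (the diameter is from the article; the density and speed are our assumptions, typical textbook values for a stony asteroid):

```python
import math

# Assumed values, NOT from the article: a stony asteroid of density
# ~3000 kg/m^3 arriving at ~20 km/s.
def impact_energy_joules(diameter_m, density=3000.0, speed=20_000.0):
    """Kinetic energy of a spherical asteroid, in joules."""
    radius = diameter_m / 2.0
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3  # sphere
    return 0.5 * mass * speed ** 2

energy = impact_energy_joules(40)            # ~2e16 J
megatons = energy / 4.184e15                 # 1 Mt TNT ~ 4.184e15 J
```

Under these assumptions the result is on the order of a few megatons of TNT, which is why even "small" near-Earth asteroids are worth tracking.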
This latest meteor over Russia just goes to show how important such work is and how crucial it is that we keep our eye on the sky. The opinions expressed in this commentary are solely those of Colin Stuart.
Name as inscribed on the World Heritage List: Pre-Hispanic City of Chichen-Itza
Criteria: i, ii, iii
UNESCO region: Latin America and the Caribbean
Inscription: 1988 (12th Session)

Chichen Itza (Spanish: Chichén Itzá [tʃiˈtʃen iˈtsa], from Yucatec Maya: Chi'ch'èen Ìitsha' [tɕʰɨɪʼtɕʼeːn˧˩ iː˧˩tsʰaʲ]; "at the mouth of the well of the Itza") was a large pre-Columbian city built by the Maya civilization. The archaeological site is located in the municipality of Tinum, in the Mexican state of Yucatán.

Chichen Itza was a major focal point in the northern Maya lowlands from the Late Classic (c. AD 600–900) through the Terminal Classic (c. AD 800–900) and into the early portion of the Early Postclassic period (c. AD 900–1200). The site exhibits a multitude of architectural styles, reminiscent of styles seen in central Mexico and of the Puuc and Chenes styles of the northern Maya lowlands. The presence of central Mexican styles was once thought to have been representative of direct migration or even conquest from central Mexico, but most contemporary interpretations view the presence of these non-Maya styles more as the result of cultural diffusion.

Chichen Itza was one of the largest Maya cities and it was likely to have been one of the mythical great cities, or Tollans, referred to in later Mesoamerican literature. The city may have had the most diverse population in the Maya world, a factor that could have contributed to the variety of architectural styles at the site.

The ruins of Chichen Itza are federal property, and the site’s stewardship is maintained by Mexico’s Instituto Nacional de Antropología e Historia (National Institute of Anthropology and History). The land under the monuments had been privately owned until 29 March 2010, when it was purchased by the state of Yucatán.[nb 1] Chichen Itza is one of the most visited archaeological sites in Mexico; an estimated 1.2 million tourists visit the ruins every year.
Name and orthography

The Maya name "Chichen Itza" means "At the mouth of the well of the Itza." This derives from chi', meaning "mouth" or "edge", and ch'en or ch'e'en, meaning "well." Itzá is the name of an ethnic-lineage group that gained political and economic dominance of the northern peninsula. One possible translation for Itza is "enchanter (or enchantment) of the water", from its, "sorcerer", and ha, "water".

The name is spelled Chichén Itzá in Spanish, and the accents are sometimes maintained in other languages to show that both parts of the name are stressed on their final syllable. Other references prefer the Maya orthography, Chichen Itza' (pronounced [tʃitʃʼen itsáʔ]). This form preserves the phonemic distinction between ch' and ch, since the base word ch'e'en (which, however, is not stressed in Maya) begins with a postalveolar ejective affricate consonant. The word "Itza'" has a high tone on the "a" followed by a glottal stop (indicated by the apostrophe).

Evidence in the Chilam Balam books indicates another, earlier name for this city prior to the arrival of the Itza hegemony in northern Yucatán. While most sources agree the first word means seven, there is considerable debate as to the correct translation of the rest. This earlier name is difficult to define because of the absence of a single standard of orthography, but it is represented variously as Uuc Yabnal ("Seven Great House"), Uuc Hab Nal ("Seven Bushy Places"), Uucyabnal ("Seven Great Rulers") or Uc Abnal ("Seven Lines of Abnal").[nb 2] This name, dating to the Late Classic Period, is recorded both in the book of Chilam Balam de Chumayel and in hieroglyphic texts in the ruins.

Chichen Itza is located in the eastern portion of Yucatán state in Mexico. The northern Yucatán Peninsula is arid, and the rivers in the interior all run underground.
There are two large, natural sink holes, called cenotes, that could have provided plentiful water year round at Chichen, making it attractive for settlement. Of the two cenotes, the "Cenote Sagrado" or Sacred Cenote (also variously known as the Sacred Well or Well of Sacrifice) is the most famous. According to post-Conquest sources (Maya and Spanish), pre-Columbian Maya sacrificed objects and human beings into the cenote as a form of worship to the Maya rain god Chaac. Edward Herbert Thompson dredged the Cenote Sagrado from 1904 to 1910, and recovered artifacts of gold, jade, pottery and incense, as well as human remains. A study of human remains taken from the Cenote Sagrado found that they had wounds consistent with human sacrifice.

Political organization

Several archaeologists in the late 1980s suggested that unlike previous Maya polities of the Early Classic, Chichen Itza may not have been governed by an individual ruler or a single dynastic lineage. Instead, the city’s political organization could have been structured by a "multepal" system, which is characterized as rulership through a council composed of members of elite ruling lineages. This theory was popular in the 1990s, but in recent years, the research that supported the concept of the "multepal" system has been called into question, if not discredited. The current trend in Maya scholarship is toward the more traditional model of the Maya kingdoms of the Classic Period southern lowlands in Mexico.

Chichen Itza was a major economic power in the northern Maya lowlands during its apogee. Participating in the water-borne circum-peninsular trade route through its port site of Isla Cerritos on the north coast, Chichen Itza was able to obtain locally unavailable resources from distant areas such as obsidian from central Mexico and gold from southern Central America. Between AD 900 and 1050 Chichen Itza expanded to become a powerful regional capital controlling north and central Yucatán.
It established Isla Cerritos as a trading port.

The layout of the Chichen Itza site core developed during its earlier phase of occupation, between 750 and 900 AD. Its final layout was developed after 900 AD, and the 10th century saw the rise of the city as a regional capital controlling the area from central Yucatán to the north coast, with its power extending down the east and west coasts of the peninsula. The earliest hieroglyphic date discovered at Chichen Itza is equivalent to 832 AD, while the last known date was recorded in the Osario temple in 998.

The Late Classic city was centred upon the area to the southwest of the Xtoloc cenote, with the main architecture represented by the substructures now underlying the Las Monjas and Observatorio and the basal platform upon which they were built. Chichen Itza rose to regional prominence towards the end of the Early Classic period (roughly 600 AD). It was, however, towards the end of the Late Classic and into the early part of the Terminal Classic that the site became a major regional capital, centralizing and dominating political, sociocultural, economic, and ideological life in the northern Maya lowlands.

The ascension of Chichen Itza roughly correlates with the decline and fragmentation of the major centers of the southern Maya lowlands. As Chichen Itza rose to prominence, the cities of Yaxuna (to the south) and Coba (to the east) were suffering decline. These two cities had been mutual allies, with Yaxuna dependent upon Coba. At some point in the 10th century Coba lost a significant portion of its territory, isolating Yaxuna, and Chichen Itza may have directly contributed to the collapse of both cities.

According to Maya chronicles (e.g., the Book of Chilam Balam of Chumayel), Hunac Ceel, ruler of Mayapan, conquered Chichen Itza in the 13th century. Hunac Ceel supposedly prophesied his own rise to power.
According to custom at the time, individuals thrown into the Cenote Sagrado were believed to have the power of prophecy if they survived. During one such ceremony, the chronicles state, there were no survivors, so Hunac Ceel leaped into the Cenote Sagrado, and when removed, prophesied his own ascension.

While there is some archaeological evidence that indicates Chichén Itzá was at one time looted and sacked, there appears to be greater evidence that it could not have been by Mayapan, at least not when Chichén Itzá was an active urban center. Archaeological data now indicates that Chichen Itza declined as a regional center by 1250 CE, before the rise of Mayapan.[nb 3] Ongoing research at the site of Mayapan may help resolve this chronological conundrum.

While Chichén Itzá "collapsed" or fell (meaning elite activities ceased), it may not have been abandoned. When the Spanish arrived, they found a thriving local population, although it is not clear from Spanish sources if Maya were living in Chichen Itza or nearby. The relatively high density of population in the region was one of the factors behind the conquistadors' decision to locate a capital there. According to post-Conquest sources, both Spanish and Maya, the Cenote Sagrado remained a place of pilgrimage.

Spanish conquest

In 1526 the Spanish conquistador Francisco de Montejo (a veteran of the Grijalva and Cortés expeditions) successfully petitioned the King of Spain for a charter to conquer Yucatán. His first campaign in 1527, which covered much of the Yucatán peninsula, decimated his forces but ended with the establishment of a small fort at Xaman Ha', south of what is today Cancún. Montejo returned to Yucatán in 1531 with reinforcements and established his main base at Campeche on the west coast. He sent his son, Francisco Montejo The Younger, in late 1532 to conquer the interior of the Yucatán Peninsula from the north. The objective from the beginning was to go to Chichén Itzá and establish a capital.
Montejo the Younger eventually arrived at Chichen Itza, which he renamed Ciudad Real. At first he encountered no resistance, and set about dividing the lands around the city and awarding them to his soldiers. The Maya became more hostile over time, and eventually they laid siege to the Spanish, cutting off their supply line to the coast, and forcing them to barricade themselves among the ruins of the ancient city. Months passed, but no reinforcements arrived. Montejo the Younger attempted an all-out assault against the Maya and lost 150 of his remaining troops. He was forced to abandon Chichén Itzá in 1534 under cover of darkness. By 1535, all the Spanish had been driven from the Yucatán Peninsula.

Montejo eventually returned to Yucatán and, by recruiting Maya from Campeche and Champoton, built a large Indio-Spanish army and conquered the peninsula. The Spanish crown later issued a land grant that included Chichen Itza, and by 1588 it was a working cattle ranch.

Modern history

Chichen Itza entered the popular imagination in 1843 with the book Incidents of Travel in Yucatan by John Lloyd Stephens (with illustrations by Frederick Catherwood). The book recounted Stephens’ visit to Yucatán and his tour of Maya cities, including Chichén Itzá. The book prompted other explorations of the city. In 1860, Desire Charnay surveyed Chichén Itzá and took numerous photographs that he published in Cités et ruines américaines (1863).

In 1875, Augustus Le Plongeon and his wife Alice Dixon Le Plongeon visited Chichén, and excavated a statue of a figure on its back, knees drawn up, upper torso raised on its elbows, with a plate on its stomach. Augustus Le Plongeon called it “Chaacmol” (later renamed “Chac Mool”, which has been the term to describe all types of this statuary found in Mesoamerica). Teobert Maler and Alfred Maudslay explored Chichén in the 1880s; both spent several weeks at the site and took extensive photographs.
Maudslay published the first long-form description of Chichen Itza in his book, Biologia Centrali-Americana.

In 1894, the United States Consul to Yucatán, Edward Herbert Thompson, purchased the Hacienda Chichén, which included the ruins of Chichen Itza. For 30 years, Thompson explored the ancient city. His discoveries included the earliest dated carving upon a lintel in the Temple of the Initial Series and the excavation of several graves in the Osario (High Priest’s Temple). Thompson is most famous for dredging the Cenote Sagrado (Sacred Cenote) from 1904 to 1910, where he recovered artifacts of gold, copper and carved jade, as well as the first-ever examples of what were believed to be pre-Columbian Maya cloth and wooden weapons. Thompson shipped the bulk of the artifacts to the Peabody Museum at Harvard University.

In 1913, the Carnegie Institution accepted the proposal of archaeologist Sylvanus G. Morley and committed to conduct long-term archaeological research at Chichen Itza. The Mexican Revolution and the following government instability, as well as World War I, delayed the project by a decade. In 1923, the Mexican government awarded the Carnegie Institution a 10-year permit (later extended another 10 years) to allow U.S. archaeologists to conduct extensive excavation and restoration of Chichen Itza. Carnegie researchers excavated and restored the Temple of Warriors and the Caracol, among other major buildings. At the same time, the Mexican government excavated and restored El Castillo and the Great Ball Court.

In 1926, the Mexican government charged Edward Thompson with theft, claiming he stole the artifacts from the Cenote Sagrado and smuggled them out of the country. The government seized the Hacienda Chichén. Thompson, who was in the United States at the time, never returned to Yucatán. He wrote about his research and investigations of the Maya culture in a book, People of the Serpent, published in 1932. He died in New Jersey in 1935.
In 1944, the Mexican Supreme Court ruled that Thompson had broken no laws and returned Chichen Itza to his heirs. The Thompsons sold the hacienda to tourism pioneer Fernando Barbachano Peon.

There have been two later expeditions to recover artifacts from the Cenote Sagrado, in 1961 and 1967. The first was sponsored by the National Geographic, and the second by private interests. Both projects were supervised by Mexico's National Institute of Anthropology and History (INAH). INAH has conducted an ongoing effort to excavate and restore other monuments in the archaeological zone, including the Osario, Akab D’zib, and several buildings in Chichén Viejo (Old Chichen). In 2009, to investigate construction that predated El Castillo, Yucatec archaeologists began excavations adjacent to El Castillo under the direction of Rafael (Rach) Cobos.

Site description

Chichen Itza was one of the largest Maya cities, with the relatively densely clustered architecture of the site core covering an area of at least 5 square kilometres (1.9 sq mi). Smaller scale residential architecture extends for an unknown distance beyond this. The city was built upon broken terrain, which was artificially levelled in order to build the major architectural groups, with the greatest effort being expended in the levelling of the areas for the Castillo pyramid, and the Las Monjas, Osario and Main Southwest groups. The site contains many fine stone buildings in various states of preservation, and many have been restored. The buildings were connected by a dense network of paved causeways, called sacbeob.[nb 4] Archaeologists have identified over 80 sacbeob criss-crossing the site, and extending in all directions from the city. The architecture encompasses a number of styles, including the Puuc and Chenes styles of the northern Yucatán Peninsula.

The buildings of Chichen Itza are grouped in a series of architectonic sets, and each set was at one time separated from the other by a series of low walls.
The three best known of these complexes are the Great North Platform, which includes the monuments of El Castillo, the Temple of Warriors and the Great Ball Court; the Osario Group, which includes the pyramid of the same name as well as the Temple of Xtoloc; and the Central Group, which includes the Caracol, Las Monjas, and Akab Dzib. South of Las Monjas, in an area known as Chichén Viejo (Old Chichén) and only open to archaeologists, are several other complexes, such as the Group of the Initial Series, Group of the Lintels, and Group of the Old Castle.

Architectural styles

The Puuc-style architecture is concentrated in the Old Chichen area, and also in the earlier structures in the Nunnery Group (including the Las Monjas, Annex and La Iglesia buildings); it is also represented in the Akab Dzib structure. The Puuc-style buildings feature the usual mosaic-decorated upper façades characteristic of the style but differ from the architecture of the Puuc heartland in their block masonry walls, as opposed to the fine veneers of the Puuc region proper. At least one structure in the Las Monjas Group features an ornate façade and masked doorway that are typical examples of Chenes-style architecture, a style centred upon a region in the north of Campeche state, lying between the Puuc and Río Bec regions. Those structures with sculpted hieroglyphic script are concentrated in certain areas of the site, with the most important being the Las Monjas group.

Architectural groups

Great North Platform

El Castillo

Dominating the North Platform of Chichen Itza is the Temple of Kukulkan (a Maya feathered serpent deity similar to the Aztec Quetzalcoatl), usually referred to as El Castillo ("the castle"). This step pyramid stands about 30 metres (98 ft) high and consists of a series of nine square terraces, each approximately 2.57 metres (8.4 ft) high, with a 6-metre (20 ft) high temple upon the summit.
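As a quick consistency check of the dimensions quoted above, nine terraces of about 2.57 m plus the 6 m summit temple should account for the roughly 30 m total height:

```python
# Dimensions as given in the text: nine terraces of ~2.57 m each,
# topped by a 6 m temple.
terrace_height = 2.57
temple_height = 6.0

total_height = 9 * terrace_height + temple_height
# About 29.1 m, consistent with the "about 30 metres" figure.
```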
The sides of the pyramid are approximately 55.3 metres (181 ft) at the base and rise at an angle of 53°, although that varies slightly for each side. The four faces of the pyramid have protruding stairways that rise at an angle of 45°. The talud walls of each terrace slant at an angle of between 72° and 74°. At the base of the balustrades of the northeastern staircase are carved serpent heads. Mesoamerican cultures periodically superimposed larger structures over older ones, and El Castillo is one such example. In the mid-1930s, the Mexican government sponsored an excavation of El Castillo. After several false starts, the excavators discovered a staircase under the north side of the pyramid. By digging from the top, they found another temple buried below the current one. Inside the temple chamber was a Chac Mool statue and a throne in the shape of a jaguar, painted red and with spots made of inlaid jade. The Mexican government excavated a tunnel from the base of the north staircase, up the earlier pyramid's stairway to the hidden temple, and opened it to tourists. In 2006, INAH closed the throne room to the public. On the spring and autumn equinoxes, in the late afternoon, the northwest corner of the pyramid casts a series of triangular shadows against the western balustrade on the north side that evokes the appearance of a serpent wriggling down the staircase. Some have suggested that the effect was an intentional design by the Maya builders to represent the feathered-serpent god Kukulcan, but archaeologists have found no evidence to support the assertion. Great Ball Court Archaeologists have identified thirteen ballcourts for playing the Mesoamerican ballgame at Chichen Itza, but the Great Ball Court about 150 metres (490 ft) to the north-west of the Castillo is by far the most impressive. It is the largest and best preserved ball court in ancient Mesoamerica. It measures 168 by 70 metres (551 by 230 ft).
The parallel platforms flanking the main playing area are each 95 metres (312 ft) long. The walls of these platforms stand 8 metres (26 ft) high; set high up in the centre of each of these walls are rings carved with intertwined feathered serpents.[nb 5] At the base of the high interior walls are slanted benches with sculpted panels of teams of ball players. In one panel, one of the players has been decapitated; the wound emits streams of blood in the form of wriggling snakes. At one end of the Great Ball Court is the North Temple, also known as the Temple of the Bearded Man (Templo del Hombre Barbado). This small masonry building has detailed bas-relief carving on the inner walls, including a central figure with carving under its chin resembling facial hair. At the south end is another, much bigger temple, now in ruins. Built into the east wall are the Temples of the Jaguar. The Upper Temple of the Jaguar overlooks the ball court and has an entrance guarded by two large columns carved in the familiar feathered serpent motif. Inside there is a large mural, much destroyed, which depicts a battle scene. In the entrance to the Lower Temple of the Jaguar, which opens behind the ball court, is another jaguar throne, similar to the one in the inner temple of El Castillo, except that it is well worn and missing its paint or other decoration. The outer columns and the walls inside the temple are covered with elaborate bas-relief carvings. Additional structures The Tzompantli, or Skull Platform (Plataforma de los Cráneos), shows the clear cultural influence of the central Mexican Plateau. Unlike the tzompantli of the highlands, however, the skulls were impaled vertically rather than horizontally as at Tenochtitlan. The Platform of the Eagles and the Jaguars (Plataforma de Águilas y Jaguares) is immediately to the east of the Great Ballcourt. It is built in a combination of Maya and Toltec styles, with a staircase ascending each of its four sides.
The sides are decorated with panels depicting eagles and jaguars consuming human hearts. The Platform of Venus is dedicated to the planet Venus. In its interior archaeologists discovered a collection of large cones carved out of stone, the purpose of which is unknown. This platform is located north of El Castillo, between it and the Cenote Sagrado. The Temple of the Tables is the northernmost of a series of buildings to the east of El Castillo. Its name comes from a series of altars at the top of the structure that are supported by small carved figures of men with upraised arms, called "atlantes." The Steam Bath is a unique building with three parts: a waiting gallery, a water bath, and a steam chamber that operated by means of heated stones. Sacbe Number One, the causeway that leads to the Cenote Sagrado, is the largest and most elaborate at Chichen Itza. This "white road" is 270 metres (890 ft) long with an average width of 9 metres (30 ft). It begins at a low wall a few metres from the Platform of Venus. According to archaeologists there once was an extensive building with columns at the beginning of the road. Cenote Sagrado The Yucatán Peninsula is a limestone plain, with no rivers or streams. The region is pockmarked with natural sinkholes, called cenotes, which expose the water table to the surface. One of the most impressive of these is the Cenote Sagrado, which is 60 metres (200 ft) in diameter, with sheer cliffs that drop to the water table some 27 metres (89 ft) below. The Cenote Sagrado was a place of pilgrimage for ancient Maya people who, according to ethnohistoric sources, would conduct sacrifices during times of drought. Archaeological investigations support this: thousands of objects have been removed from the bottom of the cenote, including gold, carved jade, copal, pottery, flint, obsidian, shell, wood, rubber and cloth, as well as the skeletons of children and men.
Temple of the Warriors The Temple of the Warriors complex consists of a large stepped pyramid fronted and flanked by rows of carved columns depicting warriors. This complex is analogous to Temple B at the Toltec capital of Tula, and indicates some form of cultural contact between the two regions. The one at Chichen Itza, however, was constructed on a larger scale. At the top of the stairway on the pyramid's summit (and leading towards the entrance of the pyramid's temple) is a Chac Mool. This temple encases or entombs a former structure called the Temple of the Chac Mool. The archaeological excavation and restoration of this building was carried out by the Carnegie Institution of Washington from 1925 to 1928. A key member of this restoration was Earl H. Morris, who published the work from this expedition in two volumes entitled Temple of the Warriors. Group of a Thousand Columns Along the south wall of the Temple of Warriors are a series of what are today exposed columns, although when the city was inhabited these would have supported an extensive roof system. The columns are in three distinct sections: a west group, which extends the lines of the front of the Temple of Warriors; a north group, which runs along the south wall of the Temple of Warriors and contains pillars with carvings of soldiers in bas-relief; and a northeast group, which apparently formed a small temple at the southeast corner of the Temple of Warriors, and which contains a rectangular structure decorated with carvings of people or gods, as well as animals and serpents. The northeast column temple also covers a small marvel of engineering: a channel that funnels all the rainwater from the complex some 40 metres (130 ft) away to a rejollada, a former cenote. To the south of the Group of a Thousand Columns is a group of three smaller, interconnected buildings. The Temple of the Carved Columns is a small elegant building that consists of a front gallery with an inner corridor that leads to an altar with a Chac Mool.
There are also numerous columns with rich bas-relief carvings of some 40 personages. A section of the upper façade with a motif of x's and o's is displayed in front of the structure. The Temple of the Small Tables is an unrestored mound. Thompson's Temple (referred to in some sources as the Palace of Ahau Balam Kauil) is a small building with two levels that has friezes depicting jaguars (balam in Maya) as well as glyphs of the Maya god Kahuil. El Mercado This square structure anchors the southern end of the Temple of Warriors complex. It is so named for the shelf of stone that surrounds a large gallery and patio that early explorers theorized was used to display wares as in a marketplace. Today, archaeologists believe that its purpose was more ceremonial than commercial. Osario Group South of the North Group is a smaller platform that has many important structures, several of which appear to be oriented toward the second largest cenote at Chichen Itza, Xtoloc. The Osario itself, like El Castillo, is a step-pyramid temple dominating its platform, only on a smaller scale. Like its larger neighbor, it has four sides with staircases on each side. There is a temple on top, but unlike El Castillo, at the center is an opening into the pyramid which leads to a natural cave 12 metres (39 ft) below. Edward H. Thompson excavated this cave in the late 19th century, and because he found several skeletons and artifacts such as jade beads, he named the structure The High Priests' Temple. Archaeologists today believe that the structure was not a tomb, and that the personages buried in it were not priests. The Temple of Xtoloc is a recently restored temple just outside the Osario Platform. It overlooks the other large cenote at Chichen Itza, named after the Maya word for iguana, "Xtoloc." The temple contains a series of pilasters carved with images of people, as well as representations of plants, birds and mythological scenes.
Between the Xtoloc temple and the Osario are several aligned structures: the Platform of Venus (which is similar in design to the structure of the same name next to El Castillo), the Platform of the Tombs, and a small, round structure that is unnamed. These three structures were constructed in a row extending from the Osario. Beyond them the Osario platform terminates in a wall, which contains an opening to a sacbe that runs several hundred feet to the Xtoloc temple. South of the Osario, at the boundary of the platform, there are two small buildings that archaeologists believe were residences for important personages. These have been named the House of the Metates and the House of the Mestizas. Casa Colorada Group South of the Osario Group is another small platform that has several structures that are among the oldest in the Chichen Itza archaeological zone. The Casa Colorada (Spanish for "Red House") is one of the best preserved buildings at Chichen Itza. Its Maya name is Chichanchob, which according to INAH may mean "small holes". In one chamber there are extensive carved hieroglyphs that mention rulers of Chichen Itza and possibly of the nearby city of Ek Balam, and that contain an inscribed Maya date correlating to AD 869, one of the oldest such dates found in all of Chichen Itza. In 2009, INAH restored a small ball court that adjoined the back wall of the Casa Colorada. While the Casa Colorada is in a good state of preservation, other buildings in the group, with one exception, are decrepit mounds. The exception is a half-standing building named the Casa del Venado (House of the Deer). The origin of the name is unknown, as there are no representations of deer or other animals on the building. Central Group Las Monjas is one of the more notable structures at Chichen Itza. It is a complex of Terminal Classic buildings constructed in the Puuc architectural style.
The Spanish named this complex Las Monjas ("The Nuns" or "The Nunnery"), but it was actually a governmental palace. Just to the east is a small temple (known as La Iglesia, "The Church") decorated with elaborate masks. El Caracol ("The Snail") is located to the north of Las Monjas. It is a round building on a large square platform. It gets its name from the stone spiral staircase inside. The structure, with its unusual placement on the platform and its round shape (the others are rectangular, in keeping with Maya practice), is theorized to have been a proto-observatory, with doors and windows aligned to astronomical events, specifically around the path of Venus as it traverses the heavens. Akab Dzib is located to the east of the Caracol. The name means, in Yucatec Mayan, "Dark Writing"; "dark" in the sense of "mysterious". An earlier name of the building, according to a translation of glyphs in the Casa Colorada, is Wa(k)wak Puh Ak Na, "the flat house with the excessive number of chambers", and it was the home of the administrator of Chichén Itzá, kokom Yahawal Cho' K'ak'. INAH completed a restoration of the building in 2007. It is relatively short, only 6 metres (20 ft) high, and is 50 metres (160 ft) in length and 15 metres (49 ft) wide. The long, western-facing façade has seven doorways. The eastern façade has only four doorways, broken by a large staircase that leads to the roof. This apparently was the front of the structure, and looks out over what is today a steep, but dry, cenote. The southern end of the building has one entrance. The door opens into a small chamber, and on the opposite wall is another doorway, above which on the lintel are intricately carved glyphs: the "mysterious" or "obscure" writing that gives the building its name today. Under the lintel in the door jamb is another carved panel of a seated figure surrounded by more glyphs. Inside one of the chambers, near the ceiling, is a painted handprint.
Old Chichen Old Chichen (or Chichén Viejo in Spanish) is the name given to a group of structures to the south of the central site, where most of the Puuc-style architecture of the city is concentrated. It includes the Initial Series Group, the Phallic Temple, the Platform of the Great Turtle, the Temple of the Owls, and the Temple of the Monkeys. Other structures Chichen Itza also has a variety of other structures densely packed in the ceremonial center of about 5 square kilometres (1.9 sq mi), and several outlying subsidiary sites. Caves of Balankanche Approximately 4 km (2.5 mi) south east of the Chichen Itza archaeological zone is a network of sacred caves known as Balankanche (Spanish: Gruta de Balankanche; Balamka'anche' in Yucatec Maya). In the caves, a large selection of ancient pottery and idols may be seen still in the positions where they were left in pre-Columbian times. The location of the cave has been well known in modern times. Edward Thompson and Alfred Tozzer visited it in 1905. A.S. Pearse and a team of biologists explored the cave in 1932 and 1936. E. Wyllys Andrews IV also explored the cave in the 1930s. Edwin Shook and R.E. Smith explored the cave on behalf of the Carnegie Institution in 1954, and dug several trenches to recover potsherds and other artifacts. Shook determined that the cave had been inhabited over a long period, at least from the Preclassic to the post-conquest era. On 15 September 1959, José Humberto Gómez, a local guide, discovered a false wall in the cave. Behind it he found an extended network of caves with significant quantities of undisturbed archaeological remains, including pottery and stone-carved censers, stone implements and jewelry. INAH converted the cave into an underground museum, and the objects, after being catalogued, were returned to their original places so that visitors can see them in situ.
Chichen Itza is one of the most visited archaeological sites in Mexico; in 2007 it was estimated to receive an average of 1.2 million visitors every year. Tourism has been a factor at Chichen Itza for more than a century. John Lloyd Stephens, who popularized the Maya Yucatán in the public's imagination with his book Incidents of Travel in Yucatan, inspired many to make a pilgrimage to Chichén Itzá. Even before the book was published, Benjamin Norman and Baron Emanuel von Friedrichsthal traveled to Chichen after meeting Stephens, and both published the results of what they found. Friedrichsthal was the first to photograph Chichen Itza, using the recently invented daguerreotype. After Edward Thompson in 1894 purchased the Hacienda Chichén, which included Chichen Itza, he received a constant stream of visitors. In 1910 he announced his intention to construct a hotel on his property, but abandoned those plans, probably because of the Mexican Revolution. In the early 1920s, a group of Yucatecans, led by writer/photographer Francisco Gomez Rul, began working toward expanding tourism to Yucatán. They urged Governor Felipe Carrillo Puerto to build roads to the more famous monuments, including Chichen Itza. In 1923, Governor Carrillo Puerto officially opened the highway to Chichen Itza. Gomez Rul published one of the first guidebooks to Yucatán and the ruins. Gomez Rul's son-in-law, Fernando Barbachano Peon (a grandnephew of former Yucatán Governor Miguel Barbachano), started Yucatán's first official tourism business in the early 1920s. He began by meeting passengers who arrived by steamship at Progreso, the port north of Mérida, and persuading them to spend a week in Yucatán, after which they would catch the next steamship to their next destination. In his first year Barbachano Peon reportedly was only able to convince seven passengers to leave the ship and join him on a tour.
In the mid-1920s Barbachano Peon persuaded Edward Thompson to sell 5 acres (20,000 m2) next to Chichen for a hotel. In 1930, the Mayaland Hotel opened, just north of the Hacienda Chichén, which had been taken over by the Carnegie Institution. In 1944, Barbachano Peon purchased all of the Hacienda Chichén, including Chichen Itza, from the heirs of Edward Thompson. Around that same time the Carnegie Institution completed its work at Chichen Itza and abandoned the Hacienda Chichén, which Barbachano turned into another seasonal hotel. In 1972, Mexico enacted the Ley Federal Sobre Monumentos y Zonas Arqueológicas, Artísticas e Históricas (Federal Law over Monuments and Archeological, Artistic and Historic Sites), which put all the nation's pre-Columbian monuments, including those at Chichen Itza, under federal ownership. There were now hundreds, if not thousands, of visitors every year to Chichen Itza, and more were expected with the development of the Cancún resort area to the east. In the 1980s, Chichen Itza began to receive an influx of visitors on the day of the spring equinox. Today several thousand show up to see the light-and-shadow effect on the Temple of Kukulcan, in which the feathered serpent god supposedly can be seen to crawl down the side of the pyramid.[nb 6] Visitors are also struck by the acoustics at Chichen Itza. For instance, a handclap in front of the staircase of the El Castillo pyramid is followed by an echo that resembles the chirp of a quetzal, an effect investigated by the acoustician Declercq. Chichen Itza, a UNESCO World Heritage Site, is the second-most visited of Mexico's archaeological sites. The archaeological site draws many visitors from the popular tourist resort of Cancún, who make a day trip on tour buses. In 2007, Chichen Itza's El Castillo was named one of the New Seven Wonders of the World after a worldwide vote.
Despite the fact that the vote was sponsored by a commercial enterprise, and that its methodology was criticized, the vote was embraced by government and tourism officials in Mexico, who projected that the resulting publicity would double the number of tourists visiting Chichen by 2012.[nb 7] The ensuing publicity re-ignited debate in Mexico over the ownership of the site, which culminated on 29 March 2010 when the state of Yucatán purchased the land upon which the most recognized monuments rest from owner Hans Juergen Thies Barbachano. Over the past several years, INAH, which manages the site, has been closing monuments to public access. While visitors can walk around them, they can no longer climb them or go inside their chambers. The most recent was El Castillo, which was closed after a San Diego, California, woman fell to her death in 2006. Photo gallery The great limestone column in the Cave of Balankanche, surrounded by Tlaloc-themed incense burners. Notes - Concerning the legal basis of the ownership of Chichen and other sites of patrimony, see Breglia (2006), in particular Chapter 3, "Chichen Itza, a Century of Privatization". Regarding ongoing conflicts over the ownership of Chichen Itza, see Castañeda (2005). Regarding the purchase, see "Yucatán: paga gobierno 220 mdp por terrenos de Chichén Itzá," La Jornada, 30 March 2010, retrieved 30 March 2010 from jornada.unam.mx - Uuc Yabnal becomes Uc Abnal, meaning the "Seven Abnals" or "Seven Lines of Abnal", where Abnal is a family name, according to Ralph L. Roys (Roys 1967, p.133n7). - For a summation of this re-dating proposal, see in particular Andrews et al. 2003. - From Mayan: sakb'e, meaning "white way/road". The plural form is sacbeob (or in modern Maya orthography, sakb'eob'). - A popular explanation is that the objective of the game was to pass a ball through one of the rings; however, in other, smaller ball courts there is no ring, only a post.
- See Quetzil Castaneda (1996) In The Museum of Maya Culture (University of Minnesota Press) for a book-length study of tourism at Chichen, including a chapter on the equinox ritual. For a 90-minute ethnographic documentary of new age spiritualism at the Equinox see Jeff Himpele and Castaneda (1997) [Incidents of Travel in Chichen Itza] (Documentary Educational Resources). - Figure is attributed to Francisco López Mena, director of the Consejo de Promoción Turística de México (CPTM - Council for the Promotion of Mexican Tourism). - See also "Chichén Itzá". English Pronunciation Guide to the Names of People, Places, and Stuff. inogolo.com. Retrieved 21 November 2007. - Barrera Vásquez et al, 1980. - Gobierno del Estado de Yucatán 2007. - Miller 1999, p.26. - Boot 2005, p.37. - Piña Chan 1980, 1993, p.13. - Luxton 1996, p.141. - Koch 2006, p.19. - Osorio León 2006, p.458. - Osorio León 2006, p.456. - Coggins 1992. - Anda Alanís 2007. - Freidel, p.6. Sharer and Traxler 2006, p.581. - Schmidt 2007, pp.166–167. - Cobos Palma 2004, 2005, pp.539-540. - Cobos Palma 2004, 2005, p.540. - Cobos Palma 2004, 2005, pp.537-541. - Cobos Palma 2004, 2005, p.531. - Cobos Palma 2004, 2005, pp.531-533. - Osorio León 2006, p.457. - Osorio León 2006, p.461. - Cobos Palma 2004, 2005, p.541. - Thompson 1954, 1966, p.137. - Chamberlain 1948, pp.136, 138. - Restall 1998, pp.81, 149; Landa 1937, p.90. - Clendinnen 2003, p.23. - Chamberlain 1948, pp.19–20, 64, 97, 134–135. - Chamberlain 1948, pp.132–149. - Clendinnen 2003, p.41. - Breglia 2006, p.67. - Morley 1913, pp.61–91. - Brunhouse 1971, pp.74-75. - Brunhouse 1971, pp.195-196; Weeks and Hill 2006, p.111. - Brunhouse 1971, pp.195-196; Weeks and Hill 2006, pp.577–653. - Usborne (2007). - Sharer and Traxler 2006, pp.562-563. - Sharer and Traxler 2006, p.562. Coe 1999, pp.100, 139. - Cano 2002, p.84. - García Salgado 2010, p.118. - García Salgado 2010, pp.119, 122. - Phillips 2006, 2007, p.264. - Willard 1941.
- Diario de Yucatan, 3 March 2006. - García Salgado 2010, pp.121-122. - Kurjack et al 1991, p.150. - Piña Chan 1980, 1993, p.42. - Piña Chan 1980, 1993, p.44. - Cano 2002, p.83. - Cirerol Sansores 1948, pp.94–96. - Cano 2002, p.85. - Coggins 1984, pp. 26-7 - Fry 2009. - Cano 2002, pp.84, 87. - Osorio León 2006, pp.457, 460. - Aveni 1997, pp.135–138. - Voss and Kremer 2000. - Andrews 1961, pp.28–31. - Andrews 1970. - SECTUR 2007. - Palmquist and Kailbourn 2000, p.252. - Madeira 1931, pp. 108–109, - Breglia 2006, pp.45–46. - Ball, 14 December 2004. - SECTUR 2006. - EFE, 29 June 2007. - Boffil Gómez, 30 March 2010. - Andrews, Anthony P.; E. Wyllys Andrews V, and Fernando Robles Castellanos (January 2003). "The Northern Maya Collapse and its Aftermath". Ancient Mesoamerica (New York: Cambridge University Press) 14 (1): 151–156. doi:10.1017/S095653610314103X. ISSN 0956-5361. OCLC 88518111. - Andrews, E. Wyllys, IV (1961). "Excavations at the Gruta De Balankanche, 1959 (Appendix)". Preliminary Report on the 1959–60 Field Season National Geographic Society – Tulane University Dzibilchaltun Program: with grants in aid from National Science Foundation and American Philosophical Society. Middle American Research Institute Miscellaneous Series No 11. New Orleans: Middle American Research Institute, Tulane University. pp. 28–31. ISBN 0-939238-66-7. OCLC 5628735. - Andrews, E. Wyllys, IV (1970). Balancanche: Throne of the Tiger Priest. Middle American Research Institute Publication No 32. New Orleans: Middle American Research Institute, Tulane University. ISBN 0-939238-36-5. OCLC 639140. - Anda Alanís, Guillermo de (2007). "Sacrifice and Ritual Body Mutilation in Postclassical Maya Society: Taphonomy of the Human Remains from Chichén Itzá's Cenote Sagrado". In Vera Tiesler and Andrea Cucina (eds.). New Perspectives on Human Sacrifice and Ritual Body Treatments in Ancient Maya Society. Interdisciplinary Contributions to Archaeology. Michael Jochim (series ed.). 
New York: Springer Verlag. pp. 190–208. ISBN 978-0-387-48871-4. OCLC 81452956. ISSN 1568-2722. - Aveni, Anthony F. (1997). Stairways to the Stars: Skywatching in Three Great Ancient Cultures. New York: John Wiley & Sons. ISBN 0-471-15942-5. OCLC 35559005. - Ball, Philip (14 December 2004). "News: Mystery of 'chirping' pyramid decoded". nature.com. Nature Publishing Group. doi:10.1038/news041213-5. Retrieved 2011-12-14. - Barrera Vásquez, Alfredo; Juan Ramón Bastarrachea Manzano and William Brito Sansores (eds.) (1980). Diccionario maya Cordemex: maya-español, español-maya. with collaborations by Refugio Vermont Salas, David Dzul Góngora, and Domingo Dzul Poot. Mérida, Mexico: Ediciones Cordemex. OCLC 7550928. (Spanish) (Yukatek Maya) - Beyer, Hermann (1937). Studies on the Inscriptions of Chichen Itza (PDF Reprint). Contributions to American Archaeology, No.21. Washington D.C.: Carnegie Institution of Washington. OCLC 3143732. Retrieved 22 November 2007. - Boffil Gómez, Luis A. (30 March 2010). "Yucatán compra 80 has en la zona de Chichén Itzá" [Yucatán buys 80 hectares in the Chichen Itza zone]. La Jornada (Mexico City: DEMOS, Desarollo de Medios, S.A. de C.V.). Retrieved 2011-12-14. (Spanish) - Boot, Erik (2005). Continuity and Change in Text and Image at Chichen Itza, Yucatan, Mexico: A Study of the Inscriptions, Iconography, and Architecture at a Late Classic to Early Postclassic Maya Site. CNWS Publications no. 135. Leiden, The Netherlands: CNWS Publications. ISBN 90-5789-100-X. OCLC 60520421. - Breglia, Lisa (2006). Monumental Ambivalence: The Politics of Heritage. Austin: University of Texas Press. ISBN 978-0-292-71427-4. OCLC 68416845. - Brunhouse, Robert (1971). Sylvanus Morley and the World of the Ancient Mayas. Norman, Oklahoma: University of Oklahoma Press. ISBN 978-0-8061-0961-9. OCLC 208428. - Cano, Olga (2002). "Chichén Itzá, Yucatán (Guía de viajeros)". Arqueología Mexicana, Vol.
IX, número 53, January–February 2002, pp.80-87 (Mexico: Editorial Raíces). ISSN 0188-8218. OCLC 29789840. (Spanish) - Castañeda, Quetzil E. (1996). In the Museum of Maya Culture: Touring Chichén Itzá. Minneapolis: University of Minnesota Press. ISBN 0-8166-2672-3. OCLC 34191010. - Castañeda, Quetzil E. (May 2005). "On the Tourism Wars of Yucatán: Tíich’, the Maya Presentation of Heritage" (Reprinted online as "Tourism “Wars” in the Yucatán", AN Commentaries). Anthropology News (Arlington, VA: American Anthropological Association) 46 (5): pp.8–9. ISSN 1541-6151. OCLC 42453678. Retrieved 22 November 2007. - Chamberlain, Robert S. (1948). The Conquest and Colonization of Yucatán 1517–1550. Washington D.C.: Carnegie Institution of Washington. OCLC 42251506. - Charnay, Désiré (1886). "Reis naar Yucatán" (Project Gutenberg etext reproduction [#13346]). De Aarde en haar Volken, 1886. Haarlem, Netherlands: Kruseman & Tjeenk Willink. OCLC 12339106. Retrieved 23 November 2007. (Dutch) - Charnay, Désiré (1887). Ancient Cities of the New World: Being Voyages and Explorations in Mexico and Central America from 1857–1882. J. Gonino and Helen S. Conant (trans.). New York: Harper & Brothers. OCLC 2364125. - Cirerol Sansores, Manuel (1948). "Chi Cheen Itsa": Archaeological Paradise of America. Mérida, Mexico: Talleres Graficos del Sudeste. OCLC 18029834. - Clendinnen, Inga (2003). Ambivalent Conquests: Maya and Spaniard in Yucatán, 1517–1570. New York: Cambridge University Press. ISBN 0-521-37981-4. OCLC 50868309. - Cobos Palma, Rafael (2004, 2005). "Chichén Itzá: Settlement and Hegemony During the Terminal Classic Period". In Arthur A. Demarest, Prudence M. Rice and Don S. Rice. The Terminal Classic in the Maya Lowlands: Collapse, Transition, and Transformation (paperback ed.). Boulder, Colorado: University Press of Colorado. pp. 517–544. ISBN 0-87081-822-8. OCLC 61719499. - Coe, Michael D. (1987). The Maya (4th edition, revised ed.). London and New York: Thames & Hudson. 
ISBN 0-500-27455-X. OCLC 15895415. - Coe, Michael D. (1999). The Maya. Ancient peoples and places series (6th edition, fully revised and expanded ed.). London and New York: Thames & Hudson. ISBN 0-500-28066-5. OCLC 59432778. - Coggins, Clemency Chase (1984). Cenote of Sacrifice: Maya Treasures from the Sacred Well at Chichen Itza. Austin, TX: University of Texas Press. ISBN 0-292-71098-4. - Coggins, Clemency Chase (1992). Artifacts from the Cenote of Sacrifice, Chichén Itzá, Yucatán: Textiles, Basketry, Stone, Bone, Shell, Ceramics, Wood, Copal, Rubber, Other Organic Materials, and Mammalian Remains. Cambridge, MA: Peabody Museum of Archaeology and Ethnology, Harvard University; distributed by Harvard University Press. ISBN 0-87365-694-6. OCLC 26913402. - Colas, Pierre R.; and Alexander Voss (2006). "A Game of Life and Death – The Maya Ball Game". In Nikolai Grube (ed.). Maya: Divine Kings of the Rain Forest. Eva Eggebrecht and Matthias Seidel (assistant eds.). Cologne, Germany: Könemann. pp. 186–191. ISBN 978-3-8331-1957-6. OCLC 71165439. - Cucina, Andrea; and Vera Tiesler (2007). "New perspectives on human sacrifice and postsacrifical body treatments in ancient Maya society: Introduction". In Vera Tiesler and Andrea Cucina (eds.). New Perspectives on Human Sacrifice and Ritual Body Treatments in Ancient Maya Society. Interdisciplinary Contributions to Archaeology. Michael Jochim (series ed.). New York: Springer. pp. 1–13. ISBN 978-0-387-48871-4. OCLC 81452956. ISSN 1568-2722. - Demarest, Arthur (2004). Ancient Maya: The Rise and Fall of a Rainforest Civilization. Case Studies in Early Societies, No. 3. Cambridge: Cambridge University Press. ISBN 0-521-59224-0. OCLC 51438896. - Diario de Yucatán (2006-03-03). "Fin a una exención para los mexicanos: Pagarán el día del equinoccio en la zona arqueológica" [End to an exemption for Mexicans: They will have to pay entry to the archaeological zone on the equinox]. 
Diario de Yucatán (Mérida, Yucatán: Compañía Tipográfica Yucateca, S.A. de C.V.). OCLC 29098719. (Spanish) - EFE (29 June 2007). "Chichén Itzá podría duplicar visitantes en 5 años si es declarada maravilla" [Chichen Itza could double visitors in 5 years if declared wonder]. Madrid, Spain. Agencia EFE, S.A. (Spanish) - Freidel, David. "Yaxuna Archaeological Survey: A Report of the 1988 Field Season" (PDF). Foundation for the Advancement of Mesoamerican Studies, Inc. (FAMSI). Retrieved 2011-12-12. - Fry, Steven M. (2009). "The Casa Colorada Ball Court: INAH Turns Mounds into Monuments". www.americanegypt.com. Mystery Lane Press. Retrieved 2011-12-14. - García-Salgado, Tomás (2010). "The Sunlight Effect of the Kukulcán Pyramid or The History of a Line". Nexus Network Journal. Retrieved 27 July 2011. - Gobierno del Estado de Yucatán (2007). "Municipios de Yucatán: Tinum". Mérida, Yucatán: Gobierno del Estado de Yucatán. Retrieved 2012-01-30. (Spanish) - Himpele, Jeffrey D. and Quetzil E. Castañeda (Filmmakers and Producers) (1997). Incidents of Travel in Chichén Itzá: A Visual Ethnography (Documentary (VHS and DVD)). Watertown, MA: Documentary Educational Resources. OCLC 38165182. - INAH. "Almost a Hundred Sacbeob Led to Chichen Itza". Mexico City: Instituto Nacional de Antropología e Historia (INAH). Retrieved 2008-10-10. - Jacobs, James Q. (1999). "Mesoamerican Archaeoastronomy: A Review of Contemporary Understandings of Prehispanic Astronomic Knowledge". Mesoamerican Web Ring. jqjacobs.net. Retrieved 23 November 2007. - Koch, Peter O. (2006). The Aztecs, the Conquistadors, and the Making of Mexican Culture. Jefferson, North Carolina: McFarland & Co. ISBN 0-7864-2252-1. OCLC 61362780. - Kurjack, Edward B.; Ruben Maldonado C. and Merle Greene Robertson (1991). "Ballcourts of the Northern Maya Lowlands". In Vernon Scarborough and David R. Wilcox (eds.). The Mesoamerican Ballgame. Tucson: University of Arizona Press. pp. 145–159. ISBN 0-8165-1360-0.
OCLC 51873028. - Landa, Diego de (1937). William Gates (trans.), ed. Yucatan Before and After the Conquest. Baltimore, Maryland: The Maya Society. OCLC 253690044. - Luxton (trans.) (1996). The book of Chumayel : the counsel book of the Yucatec Maya, 1539-1638. Walnut Creek, California: Aegean Park Press. ISBN 0-89412-244-4. OCLC 33849348. - Madeira, Percy (1931). An Aerial Expedition to Central America (Reprint ed.). Philadelphia: University of Pennsylvania. OCLC 13437135. - Masson, Marilyn (2006). "The Dynamics of Maturing Statehood in Postclassic Maya Civilization". In Nikolai Grube (ed.). Maya: Divine Kings of the Rain Forest. Eva Eggebrecht and Matthias Seidel (assistant eds.). Cologne, Germany: Könemann. pp. 340–353. ISBN 978-3-8331-1957-6. OCLC 71165439. - Miller, Mary Ellen (1999). Maya Art and Architecture. London and New York: Thames & Hudson. ISBN 0-500-20327-X. OCLC 41659173. - Morley, Sylvanus Griswold (1913). W. H. R. Rivers, A. E. Jenks and S. G. Morley, ed. Archaeological Research at the Ruins of Chichen Itza, Yucatan. Reports upon the Present Condition and Future Needs of the Science of Anthropology. Washington, D.C.: Carnegie Institution of Washington. OCLC 562310877. - Osorio León, José (2006). "La presencia del Clásico Tardío en Chichen Itza (600-800/830 DC)" (PDF). XIX Simposio de Investigaciones Arqueológicas en Guatemala, 2005 (edited by J.P. Laporte, B. Arroyo y H. Mejía) (Guatemala City, Guatemala: Museo Nacional de Arqueología y Etnología): 455–462. Retrieved 2011-12-15. (Spanish) - Palmquist, Peter E.; and Thomas R. Kailbourn (2000). Pioneer Photographers of the Far West: A Biographical Dictionary, 1840–1865. Stanford, CA: Stanford University Press. ISBN 0-8047-3883-1. OCLC 44089346. - Pérez de Lara, Jorge (n.d.). "A Tour of Chichen Itza with a Brief History of the Site and its Archaeology". Mesoweb. Retrieved 23 November 2007. - Perry, Richard D. (ed.) (2001). Exploring Yucatan: A Traveler's Anthology. Santa Barbara, CA: Espadaña Press. 
ISBN 0-9620811-4-0. OCLC 48261466. - Phillips, Charles (2006, 2007). The Complete Illustrated History of the Aztecs & Maya: The definitive chronicle of the ancient peoples of Central America & Mexico - including the Aztec, Maya, Olmec, Mixtec, Toltec & Zapotec. London: Anness Publishing Ltd. ISBN 1-84681-197-X. OCLC 642211652. - Piña Chan, Román (1980, 1993). Chichén Itzá: La ciudad de los brujos del agua. Mexico City: Fondo de Cultura Económica. ISBN 968-16-0289-7. OCLC 7947748. (Spanish) - Restall, Matthew (1998). Maya Conquistador. Boston, Massachusetts: Beacon Press. ISBN 978-0-8070-5506-9. OCLC 38746810. - Roys, Ralph L. (trans.) (1967). The Book of Chilam Balam of Chumayel. Norman, Oklahoma: University of Oklahoma Press. OCLC 224990. - Schele, Linda; and David Freidel (1990). A Forest of Kings: The Untold Story of the Ancient Maya (Reprint ed.). New York: Harper Perennial. ISBN 0-688-11204-8. OCLC 145324300. - Schmidt, Peter J. (2007). "Birds, Ceramics, and Cacao: New Excavations at Chichén Itzá, Yucatan". In Jeff Karl Kowalski and Cynthia Kristan-Graham. Twin Tollans: Chichén Itzá, Tula, and the Epiclassic to Early Postclassic Mesoamerican World. Washington D.C.: Dumbarton Oaks Research Library & Collection : Distributed by Harvard University Press. ISBN 0-88402-323-0. OCLC 71243931. - SECTUR (2006). Compendio Estadístico del Turismo en México 2006. Mexico City: Secretaría de Turismo (SECTUR). - SECTUR (7 July 2007). "Boletín 069: Declaran a Chichén Itzá Nueva Maravilla del Mundo Moderno". Mexico City: Secretaría de Turismo. Retrieved 2011-12-16. (Spanish) - Sharer, Robert J.; with Loa P. Traxler (2006). The Ancient Maya (6th (fully revised) ed.). Stanford, CA: Stanford University Press. ISBN 0-8047-4817-9. OCLC 57577446. - Thompson, J. Eric S. (1954, 1966). The Rise and Fall of Maya Civilization. Norman, Oklahoma: University of Oklahoma Press. ISBN 0-8061-0301-9. OCLC 6611739. - Tozzer, Alfred Marston; and Glover Morrill Allen (1910). 
Animal figures in the Maya codices. IV, no.3 (Papers of the Peabody museum of American archaeology and ethnology, Harvard university ed.). Cambridge, Massachusetts: The Museum. OCLC 2199473. - Usborne, David (7 November 2007). "Mexican standoff: the battle of Chichen Itza". The Independent (Independent News & Media). Retrieved 9 November 2007. - Voss, Alexander W.; and H. Juergen Kremer (2000). "K'ak'-u-pakal, Hun-pik-tok' and the Kokom: The Political Organization of Chichén Itzá" (PDF). In Pierre Robert Colas (ed.). The Sacred and the Profane: Architecture and Identity in the Maya Lowlands (proceedings of the 3rd European Maya Conference). 3rd European Maya Conference, University of Hamburg, November 1998. Markt Schwaben, Germany: Verlag Anton Saurwein. ISBN 3-931419-04-5 OCLC 47871840. - Weeks, John M.; and Jane A. Hill (2006). The Carnegie Maya: the Carnegie Institution of Washington Maya Research Program, 1913–1957. Boulder, Colorado: University Press of Colorado. ISBN 978-0-87081-833-2. OCLC 470645719. - Willard, T.A. (1941). Kukulcan, the Bearded Conqueror : New Mayan Discoveries. Hollywood, California: Murray and Gee. OCLC 3491500. Further reading - Coggins & Shane, "Cenote Of Sacrifice", (U. of Texas, 1984). 
- Holmes, Archæological Studies in Ancient Cities of Mexico, (Chicago, 1895) - Spinden, Maya Art, (Cambridge, 1912) - Stephens, John Lloyd in Incidents of Travel in Yucatan, (two volumes, 1843) |Wikimedia Commons has media related to: Chichén Itzá| - Chichen Itza on Mesoweb.com - Chichen Itza Digital Media Archive (creative commons-licensed photos, laser scans, panoramas), with particularly detailed information on El Caracol and el Castillo, using data from a National Science Foundation/CyArk research partnership - UNESCO page about Chichen Itza World Heritage site - Ancient Observatories page on Chichen Itza - Chichen Itza reconstructed in 3D - Archaeological documentation for Chichen Itza created by non-profit group INSIGHT and funded by the National Science Foundation and Chabot Space and Science Center
How can the mid-point of a polyline be calculated using the geoprocessing framework, e.g. in a Python script? The polyline.centroid property returns "the true centroid if it is within or on the feature; otherwise, the label point is returned." The centroid is rarely located on non-straight lines, which makes it useless for my purposes. The Feature Vertices to Points tool has a midpoint option, but this requires an ArcInfo license, which I don't currently have. One option could be to add measures to the polylines and create a route event 50% along the line. Another workaround is to use the Calculate Geometry option in ArcMap, but ideally I need to automate this process in a script. Any better/faster suggestions? Thanks. [EDIT] I'm limited to ArcGIS 10.0 for the moment.
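The "50% along the line" idea can be done by hand at 10.0. Below is a minimal sketch in plain Python (no arcpy), assuming the geometry can be read out as a list of (x, y) vertex tuples, e.g. from a cursor: sum the segment lengths, then walk the segments until the halfway mark falls inside one and interpolate linearly. (From ArcGIS 10.1 onward, `Polyline.positionAlongLine` does this in a single call, but that method is not available at 10.0.)

```python
import math

def midpoint(vertices):
    """Return the point halfway along a polyline given as [(x, y), ...]."""
    # Length of each straight segment between consecutive vertices.
    seg_lengths = [math.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(vertices, vertices[1:])]
    half = sum(seg_lengths) / 2.0

    # Walk the segments until the halfway mark falls inside one,
    # then interpolate linearly within that segment.
    travelled = 0.0
    for (x1, y1), (x2, y2), length in zip(vertices, vertices[1:], seg_lengths):
        if length > 0 and travelled + length >= half:
            t = (half - travelled) / length
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += length
    return vertices[-1]  # degenerate polyline; fall back to the last vertex

print(midpoint([(0, 0), (10, 0), (10, 10)]))  # → (10.0, 0.0)
```

Note this measures along the 2-D planar path, so it matches the route-event approach for projected data; for geodesic lengths you would substitute the appropriate distance calculation.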
The village of Rincon ("Het dorp Rincon" in Dutch; "Rinon" in Papiamentu). Rincón is Spanish for corner or nook. A "rincón" can also be a place of activity, such as a restaurant ("the corner of good food") or a shop ("the corner of hot sales"), or a remote settlement, a group of houses. A rincón can be built inland or by the sea.

Is Rincon protected from pirate attacks? 1/ Rincon is sheltered in the north by the Brandaris (240 m) and Seru Mangel (144.6 m), in the west by Seru Dos Pos (125.6 m) and Montagne (143 m), and in the south by Seru Largu (133 m). Rincon is almost invisible from the sea and far less exposed than Palu di Lechi. 2/ Rincon is protected by a very shallow sand bank on the east side of Bonaire, right in front of the town. 3/ Even inland, sheltered and protected, Rincon has been raided several times, cf. the raid by Laurens Prins in 1665.

Rincon is the oldest village on Bonaire. It is also the oldest in continual existence within the Netherlands Antilles and Aruba. In 1527, the Spaniards returned from Hispaniola to Bonaire and founded the island's first settlement: Rincon. On their return they brought Indians and various types of livestock along. Rincon's location was chosen because it lies in a fertile valley with an ever-blowing trade wind, out of sight of passing (pirate) ships and beyond the reach of their raids. In 1636 the West India Company, which had long been established on Curaçao, decided to add Bonaire to its possessions. The Dutch exploited Bonaire mainly for its salt and dyewood: Brazil wood (Palu di Brasil) was used as a red pigment for paint. When the Dutch found they needed more labor, they imported slaves from Africa, who were settled in Rincon. The slaves worked on plantations in the area of Rincon and on salt pans on the opposite side of the island.
Because the walk from Rincon to the salt pans took about 10 hours, the slaves built themselves small huts to sleep in. They worked the salt pans and stayed in their huts during the week; on Saturday they were allowed to return to their homes in Rincon. In 1850, these shacks were replaced by stone houses, which still stand along the salt pans. After the abolition of slavery on July 1, 1863, the former slaves stayed in Rincon and maintained their culture, and to this day many of the current residents of Rincon are descendants of the slaves.

1 Altamira Unjo – Panorama Rincon
2 Cadushy Distillery & Heritage Center
3 Polar Bar (closed)
4 Catholic Church
5 Rose Inn
6 Kos Bon So Grill restaurant
7 Excellent Supermarket
8 Community Centre – Centro di Bario
9 Gas Station
10 Lourdes Grotto – Replica of Lourdes Cave
11 Mangazina di Rei – Culture Park
12 Protestant Church
13 Direction Playa Grandi
14 Direction Dos Pos
16 Plantation Onima
17 Watapana school
18 San Luis Bertran school
19 Windmill Park
20 Traditional cottage – Kas Krioyo (closed)
21 Kanta Orchidia
22 Soccer Stadium
23 Kunuku's (farming grounds) West
24 Kunuku's (farming grounds) East
25 Statue of Julio Abraham
26 Memorial stone founding of Rincon
27 Posada Para Mira
28 Market Square
29 Tropicana Bar
30 Orange Bar

©2012 Olivier Douvry/GlobeDivers
True clubfoot is characterized by abnormal bone formation in the foot. There are four variations of clubfoot: talipes varus, talipes valgus, talipes equinus, and talipes calcaneus. In talipes varus, the most common form of clubfoot, the foot generally turns inward so that the leg and foot look somewhat like the letter J. In talipes valgus, the foot rotates outward like the letter L. In talipes equinus, the foot points downward, similar to that of a toe dancer. In talipes calcaneus, the foot points upward, with the heel pointing down. Clubfoot can affect one foot or both. Sometimes an infant's feet appear abnormal at birth because of the intrauterine position of the fetus before birth. If there is no anatomic abnormality of the bone, this is not true clubfoot, and the problem can usually be corrected by applying special braces or casts to straighten the foot. Experts do not agree on the precise cause of clubfoot. The exact genetic mechanism of inheritance has been extensively investigated using family studies and other epidemiological methods. As of 1999, no definitive conclusions had been reached, although a Mendelian pattern of inheritance is suspected. This may be due to the interaction of several different inheritance patterns, different patterns of development appearing as the same condition, or a complex interaction between genetic and environmental factors. The MSX1 gene has been associated with clubfoot in animal studies, but as of 2001 these findings had not been replicated in humans. A family history of clubfoot has been reported in 24.4% of families in a single study. These findings suggest that one or more genes may be responsible for clubfoot. Several environmental causes have been proposed for clubfoot. Obstetricians feel that intrauterine crowding causes clubfoot.
This theory is supported by a significantly higher incidence of clubfoot among twins compared to singleton births. Intrauterine exposure to the drug misoprostol has been linked with clubfoot. Misoprostol is commonly used when trying, usually unsuccessfully, to induce abortion in Brazil and in other countries in South and Central America. Researchers in Norway have reported that males in the printing trades have significantly more offspring with clubfoot than men in other occupations. For unknown reasons, amniocentesis, a prenatal test, has also been associated with clubfoot. The infants of mothers who smoke during pregnancy have a greater chance of being born with clubfoot than the offspring of women who do not smoke.

Author Info: L. Fleming Fallon Jr., MD, DrPH, The Gale Group Inc., Gale, Detroit, Gale Encyclopedia of Genetic Disorders Part I, 2002. This feature is for informational purposes only and should not be used to replace the care and information received from your healthcare provider. Please consult a healthcare professional with any health concerns you may have.
On May 22, a Falcon 9 rocket carrying a Dragon space capsule lifted off the pad at Cape Canaveral Air Force Station. The mission itself was unremarkable: carry cargo to the International Space Station. But it was historic. On that day, SpaceX became the first private company to take on a mission to supply the ISS, ushering in a new era of space flight. More flights are planned by SpaceX and a host of other companies that hope to tap into what may be a lucrative new growth industry. The space race is on again, but this time involves private companies and regions that hope to get a piece of the action. The field is packed. AP science writer Seth Borenstein pointed out that there are more companies looking to make bucks in orbit than there are major U.S. airlines still flying. The Federal Aviation Administration has licensed eight spaceports, and Alabama in early 2012 said it might try to establish one, perhaps on the Gulf Coast. That Alabama is interested is no surprise. The state for years has been a player in the space industry, thanks to Huntsville, home of NASA’s Marshall Space Flight Center and the Army’s missile programs. Once the purview of nation states, space is a bold new playing field for private companies. The Aerospace Industries Association estimates space to be a $45.14 billion piece of the $217.65 billion aerospace industry in 2012. Fortunately for the Gulf Coast, it’s already a player in the wide-open field and part of the exclusive club of areas with technology-focused NASA centers. It has two NASA operations, Mississippi’s Stennis Space Center and New Orleans’ Michoud Assembly Facility. In addition, the Gulf Coast is home to an Air Force center at Eglin Air Force Base that for 40 years has operated a powerful space surveillance system which tracks more than 16,000 near and deep space objects. But the big question is, how much of the commercial sector can the region attract? 
Both Stennis and Michoud are offering thousands of acres to private companies. And with space flight costs so high, that could provide a savings hard to pass up. SSC is the most capable of the NASA sites where rocket engines are tested, the last place in the country where NASA can test full-scale engines or whole rocket stages 24/7. It will test engines for NASA’s Space Launch System. Forty miles away, Michoud has 43 acres under one roof that includes world-class advanced manufacturing equipment. It’s building some of the SLS, including the Orion crew vehicle. But both locations have room to do even more. NASA and Dixie It was the underdeveloped South that was the big winner in the space race. Spurred on by President Kennedy’s challenge to get a man on the moon before the end of the decade, NASA launched an ambitious program to establish the manufacturing, test and launch facilities needed to beat the Soviets. The South became the home to key NASA facilities in part because of the availability of large tracts of land and interconnected waterways needed to transport large space vehicles. Longer periods of fair weather flying, the same things that attracted the military, also played a role. In addition, powerful, senior Southern politicians embraced the space program and recognized the economic benefit it would bring. The Huntsville operation was joined by Houston, Cape Canaveral, Bay St. Louis, Miss., and New Orleans, and the term “Space Crescent” was used to describe the arc of centers in the South. “Way Station to Space” by Mack R. Herring pointed out a cover story in the July 20, 1964 issue of U.S. News & World Report that described the space program as a new industry in the South worth “billions.” The article said money for facilities was being spent at the rate of “one million dollars every two hours.” That the South benefited when NASA dominated the space program is clear. 
What is less clear is how well it will do in an age when private players may eventually dominate space. Some areas are already taking steps to ensure they get a piece of the growing field. It was big news in Florida late last year when a NASA facility at Kennedy Space Center, that faced an uncertain future with the end of the Space Shuttle program, got a new lease on life when Boeing decided to use it to build the company’s CST-100 spacecraft. Space Florida, an aerospace economic development agency, took over the Space Shuttle Main Engine Processing Facility and Processing Control Center and is leasing it to Boeing for 15 years to build its Crew Space Transportation spacecraft there, and move the program’s headquarters there as well. A November story in Time magazine likened the lease to an aristocrat selling off parts of the family estate. But with aerospace workers idled, Florida officials saw the buildings as a chance to attract the commercial space flight industry. The commercial industry’s growth appears inevitable, and it’s not just the number of spaceports that tell the tale. The Federal Aviation Administration said 21 percent of orbital launch attempts in 2011 were commercial, earning revenue of $1.9 billion. In addition, more than two dozen teams are competing for Google’s $30 million prize to be the first privately funded team to land a robot on the moon. The list of companies now pushing into space is impressive. It includes SpaceX, Bigelow Aerospace, Virgin Galactic, Orbital Sciences Corp., Alliant Techsystems, Boeing Co., Sierra Nevada Corp., and Blue Origin. The commercial interest is understandable. Companies have been part of the space program from the start. One reason having a NASA center was seen as an economic boon was the space program’s ties to the aerospace industry. 
NASA needed companies to develop systems, whether it was a Grumman-built lunar module or a Saturn V rocket with stages built by Boeing, North American Aviation, and Douglas Aircraft. In many cases they established operations close to NASA centers to be near the customer and the facilities it had established. But in the new age, NASA might wind up being simply one customer, and perhaps not even the largest. Space flight companies are cropping up in multiple places nationwide, including Washington and Colorado. Still, the South continues to have some of the most unique capabilities available in the world, and it's those capabilities that can be a lure for the new breed. The industry, whether a huge aerospace company that's worked in the field for years or one of the startups backed by the deep pockets of billionaires, still needs the same things NASA has built up over 60 years. For some companies it makes sense to tap into what's already there. Patrick Scheuermann, director of Stennis Space Center, said there are a lot of companies with great ideas that are in the laboratory or subscale version. Success with those smaller versions will force them to make an investment in their own back yard or search for a location to test the larger scale. "Rather than them duplicating infrastructure somewhere or putting their capital dollars somewhere, they're basically using resources that the taxpayers already paid for once," Scheuermann said. David Tortorano heads The Gulf Coast Reporters' League, an independent team of journalists that produces an annual report on the Gulf Coast Aerospace Corridor. To download a copy, visit gulfcoastaerospacecorridor.com/gulfcoastaerospacecorridorbook2012.html
<urn:uuid:225645e5-a2d5-4f0f-a5f9-d944204245ee>
CC-MAIN-2013-20
http://inweekly.net/wordpress/?p=10231
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950681
1,538
2.703125
3
Adam/Dave/Rod: September 17, 2009 Scripture text for this study: Revelation 8 A. Seventh Seal: Prelude to the Seven Trumpets (8:1-6) Verse 2: The seven angels who were given the seven trumpets are said to “stand before God.” Is it possible that one of these seven angels was Gabriel, as he testified to Zechariah (Luke 1:19) that he “stands in the presence of God”? Steve Gregg, editor of Revelation: Four Views (A Parallel Commentary), says, “For Israel, the trumpet was an instrument used to rally the troops for war or to warn of an enemy invasion. Likening the upcoming judgments to the sounding of trumpets suggests that God Himself is making war against His enemies in apostate Israel” (p. 146). Verses 3-5: The “prayers of all the saints” were offered together with “much incense” on the golden altar that was in front of the throne pictured in heaven. It seems clear that the judgments that followed were, in part, a direct result of these prayers. Sam Storms sees a direct link between the cries of the martyrs for vengeance (Rev. 6:10) and God’s response here in these verses. As a result of the censer filled “with fire from the altar” being thrown to the earth, there were “peals of thunder, rumblings, flashes of lightning, and an earthquake.” One application to take away from this passage, then, is that God hears the prayers of His people and acts in a sovereign way in His own timing and according to His will. The thunder, lightning, and rumblings are again reminiscent of the giving of the Old Covenant through Moses at Mount Sinai (Exodus 19:16), just as in Rev. 4:5. 
John Piper said in 1994 that this text “portrays the prayers of the saints as the instrument God uses to usher in the end of the world with great divine judgments… [It] is an explanation of what has happened to the millions upon millions of prayers over the last 2,000 years as the saints have cried out again and again, ‘Thy kingdom come…’” Sam Storms, a Historicist, agrees with Piper’s cause/effect premise, but disagrees with him regarding the timing of God’s actions: It may well be that the trumpets, no less than the sixth and seventh seals, are God’s answer to the prayers of his people in 6:9-11 for vindication against their persecutors. If so, this would strongly militate against the futurist interpretation which relegates the trumpets to the final few years of history just before the second coming. In other words, it seems unlikely that God would act in response to that prayer only at the end of history while passing by and leaving unscathed more than sixty generations of the wicked. Sam Storms goes on to say that most of the seal, trumpet, and bowl judgments “describe the commonplaces of history.” The Preterist response, of course, is that the prayers of the first-century martyrs who cried out “O Sovereign Lord, holy and true, how long before You will judge and avenge our blood on those who dwell on the earth?” (Rev. 6:10-11) were vindicated within one generation when God poured out His judgment on Jerusalem in 70 AD (cf. Matt. 23:29-38; Luke 13:33-35; Rev. 17:6, 18:20, 24). The Historicist idea that God has continued to act in various ways upon the prayers of His people throughout history certainly applies. B. First Trumpet: Vegetation Struck (8:7) Hail and fire, mixed with blood, is thrown to the earth. In our study of Revelation so far, we have suggested that many of the references to “the earth” in the book of Revelation are not meant to be taken as worldwide in scope, but as dealing instead with the land of Israel/Palestine. 
In a 3-part study on this subject, beginning with this post, I have outlined nearly 20 instances where this appears to be the case (see, for example, the post on Revelation 1, where we examined the phrase "tribes of the earth" in verse 7, which is often thought to be worldwide in scope. When this prophecy is compared, though, to its counterpart in Zechariah 12:10-14, it's clear that every one of those tribes belonged to the land of Israel). Steve Gregg notes that "[as] the first four seals [Rev. 6:1-8] were set off from the latter three, in that each of the first group revealed a horseman, so the first four trumpets are set off from the last three, in that the latter are referred to as 'Woes.' The entire series, however, is concerned with the Jewish War of A.D. 66-70, 'the Last Days' of the Jewish commonwealth" (p. 148). Steve quotes from Jay Adams, who notes that during this period "the land suffered terribly. The plagues are reminiscent of those in Egypt, at the birth of the Hebrew nation. Here they mark both the latter's cessation, and the birth of a new nation, the kingdom of God (I Pet. 2:9, 10)." We are told that a third of the earth (the land of Israel), the trees, and the green grass were burned up in this judgment. If meant to be taken literally, this account from Josephus points to a very plausible fulfillment during the five-month siege upon Jerusalem leading up to its destruction in 70 AD (Steve Gregg, pp. 151-152): And now the Romans, although they were greatly distressed in getting together their materials, raised their banks in days, after they had cut down all the trees that were in the country that adjoined to the city, and that for ninety furlongs round about, as I have already related.
And, truly, the very view itself of the country was a melancholy thing; for those places which were before adorned with trees and pleasant gardens were now become a desolate country every way, and its trees were all cut down: nor could any foreigner that had formerly seen Judea and the most beautiful suburbs of the city, and now saw it as a desert, but lament and mourn sadly at so great a change; for the war had laid all signs of beauty quite waste (Wars, VI:1:1). C. Second Trumpet: The Seas Struck (8:8-9) Verse 8: John was shown “something like a great mountain burning with fire …thrown into the sea.” Steve Gregg asserts that there is both a symbolic and a literal sense in which this trumpet can be applied to the destruction of Jerusalem and Israel in 66-70 AD: It’s symbolic, in that in Biblical prophecy a mountain often refers to a government or a kingdom, even as it did for Israel (e.g. Exodus 15:17). The sea is a frequent prophetic “symbol of the Gentile nations, in contrast to ‘the land,’ signifying Israel. The symbolism could predict the Jewish state collapsing and the resultant dispersion of the Jews throughout the Gentile world.” (97,000 Jews were sold into slavery by Rome in 70 AD.) It’s also literal in that Jerusalem was burned with fire by the Romans in 70 AD (pp. 154, 156). Verses 8-9: John was also shown a third of the sea becoming blood, with the result being that a third of the living creatures in the sea died and a third of the ships were also destroyed. For those open to the idea of Revelation having been written before 70 AD, this most definitely calls to mind some of the battles during the Jewish-Roman War (66-70 AD). The Roman Emperor Nero officially declared war on Israel in February 67 AD in response to the Jewish rebellion, and by the spring of that year his general Vespasian had marched into the land of Judea with 60,000 men. In the coming months more than 150,000 Jews were killed in Judea and Galilee. 
The Jewish historian Josephus described Galilee at one point as "filled with fire and blood." Steve Gregg (pp. 156, 158) highlights one battle in particular, recorded by Josephus, whose words, says Gregg, "seem almost as if they were calculated to present the fulfillment of this trumpet judgment." This battle took place on the Sea of Galilee (Tiberias): And for such [Jews] as were drowning in the sea, if they lifted their heads up above the water they were killed by darts [arrows], or caught by the [Roman] vessels; but if, in the desperate case they were in, they attempted to swim to their enemies, the Romans cut off either their heads or their hands; and indeed they were destroyed after various manners everywhere… one might then see the lake all bloody, and full of dead bodies, for not one of them escaped. And a terrible stink, and a very sad sight there was on the following days over that country; for as for the shores, they were full of shipwrecks, and of dead bodies all swelled; and as the dead bodies were inflamed by the sun, and putrefied, they corrupted the air [and the conditions were so miserable that even the Roman perpetrators felt pity]. With such carnage, it's easy to see how many creatures in the Sea of Galilee were poisoned and did not survive, and how a third of its ships could have been destroyed. In my recent term paper on Jerusalem's destruction in 70 AD, I also referenced a book written by George Peter Holford in 1805 ("The Destruction of Jerusalem"). The following is an excerpt from my paper, based on his writing, describing another battle in the port city of Joppa: One of the first towns Vespasian crushed was Joppa, because its inhabitants had provoked his men by their frequent piracies at sea. The Jews there tried to flee from Vespasian on their ships, but Vespasian was helped by a tremendous storm that blew in just as they began to flee.
Their vessels were crushed against each other and against the rocks, and when this slaughter was complete more than 4,200 bodies were strewn along the coast and a very long stretch of the coast was stained with blood. D. Third Trumpet: The Waters Struck (8:10-11) This passage speaks of a “great star” falling from heaven, “burning like a torch,” causing many deaths because a third of the rivers and springs of water become wormwood (bitter). Some futurists interpret this trumpet judgment symbolically, as referring either to a future Antichrist (e.g. Arno Gaebelein) or a future Pope (e.g. H.A. Ironside) who causes much corruption (Steve Gregg, pp. 161, 163). Other futurist interpreters (e.g. Henry Morris, Charles Ryrie, John Walvoord) see this as a literal reference to a burning meteorite or “a giant set of meteors” that will enter earth’s atmosphere “with contaminating influence upon the rivers and waters” of the entire planet (p. 165). Steve Gregg’s articulation of the Preterist understanding is helpful: The turning of fresh water sources bitter and toxic may be in part a literal result of the decaying corpses that lay in the Sea of Galilee and in the river as the result of war. However, this fouling of the waters has symbolic significance, occurring as it does here to the nation of Israel. There is probably an intentional allusion to the promise (and implied threat) God made to Israel when they first came out of Egypt. When they came to the bitter waters of Marah, in response to Moses’ casting a tree into the waters, God made the waters sweet and wholesome… However, God’s promise/warning implies that their disobedience to Him will result in His placing upon them the same plagues that He placed on the Egyptians—the waters can be made bitter again [cf. 
Exodus 15:25-26, Deuteronomy 28:59-60]… It is noteworthy that throughout the pages of Revelation, the plagues that come upon the apostates are comparable to those with which God afflicted the Egyptians in the days of Moses. The star which was burning like a torch (v. 10) is reminiscent of the tree cast into the waters by Moses, but has the opposite effect (pp. 160, 162).

David Chilton adds,

The name of this fallen star is Wormwood, a term used in the Law and the Prophets to warn Israel of its destruction as a punishment for apostasy (Deut. 29:18; Jer. 9:15; 23:15; Lam. 3:15, 19; Amos 5:7). Again, by combining these Old Testament allusions, St. John makes his point: Israel is apostate, and has become an Egypt; Jerusalem has become a Babylon; and the covenant-breakers will be destroyed, as surely as Egypt and Babylon were destroyed (Gregg, p. 164).

[The following information in blue font was edited into this post on October 26, 2009:]

To further illustrate this point, it’s also instructive to consider the test for adultery under the Law of Moses, as recorded in Numbers 5:11-31. This test was to be administered by a priest in cases where a married woman was suspected of defiling herself in an adulterous manner (Numbers 5:11-14). The priest would mix dust from the floor of the tabernacle into holy water contained in a vessel, to create “the water of bitterness that brings the curse” (vss. 16-18). The woman would then take the following oath:

If no man has lain with you, and if you have not turned aside to uncleanness while you were under your husband’s authority, be free from this water of bitterness that brings the curse.
But if you have gone astray, though you are under your husband’s authority, and if you have defiled yourself, and some man other than your husband has lain with you, then (let the priest make the woman take the oath of the curse, and say to the woman) the Lord make you a curse and an oath among your people, when the Lord makes your thigh fall away and your body swell. May this water that brings the curse pass into your bowels and make your womb swell and your thigh fall away (vss. 20-22).

The woman would then say, “Amen. Amen,” and the curses would be written into a book and washed off into the bitter water. The woman would then be made to drink the water (vss. 23-26), with the following possible results:

And when he has made her drink the water, then, if she has defiled herself and has broken faith with her husband, the water that brings the curse shall enter into her and cause bitter pain, and her womb shall swell, and her thigh shall fall away, and the woman shall become a curse among her people. But if the woman has not defiled herself and is clean, then she shall be free and shall conceive children (vss. 27-28).

If this imagery and this procedure are mirrored by the third trumpet judgment, then this is one more indication that Israel had been found to be apostate. Many people died from the bitter water because they were indeed guilty of spiritual adultery, and were found to be in a state of defilement.

E. Fourth Trumpet: The Heavens Struck (8:12-13)

Verse 12: A practical question is in order regarding the common contention of futurists that these judgments must literally take place in the future, i.e. that a third of the light of the sun, moon, and stars will cease to shine. Since this is not to be the final plague, and other judgments must follow this one, is it possible that any life would continue to survive for even a few days, let alone months, under those conditions?
We know that life exists on this planet because the sun basically maintains its present intensity. A significant increase or decrease in its intensity would either cause mankind to burn or freeze. Alternatively, David Chilton writes,

The imagery here was long used in the prophets to depict the fall of nations and national rulers (cf. Isa. 13:9-11, 19; 24:19-23; 34:4-5; Ezek. 32:7-8, 11-12; Joel 2:10, 28-32; Acts 2:16-21). [He quotes F.W. Farrar (1831-1903), who wrote that] “ruler after ruler, chieftain after chieftain of the Roman Empire and the Jewish nation was assassinated and ruined. Gaius, Claudius, Nero, Galba, Otho, Vitellius, all died by murder or suicide; Herod the Great, Herod Antipas, Herod Agrippa, and most of the Herodian Princes, together with not a few of the leading High Priests of Jerusalem, perished in disgrace, or in exile, or by violent hands. All these were quenched suns and darkened stars” (Gregg, pp. 166, 168).

Verse 13: As terrible as these plagues are, a flying eagle with a loud voice announces that the three remaining trumpet judgments are even more woeful. Their target again is “those who dwell on the earth,” another reference to the land of Israel, as discussed earlier. We will see these woes beginning in chapter 9. Steve Gregg quotes from Adam Clarke (1732-1815), who he says is a historicist but “accurately puts forth the preterist position”:

These woes are supposed by many learned men to refer to the destruction of Jerusalem: the first woe—the seditions among the Jews themselves; the second woe—the besieging of the city by the Romans; the third woe—the taking and the sacking of the city, and burning the Temple. This was the greatest of all the woes, as in it the city and Temple were destroyed, and nearly a million men lost their lives.

Our study of Revelation 9 can be found here. All of our Revelation chapter-by-chapter studies, and any other posts related to the book of Revelation, can be found here.
For more on the concept that 70 AD marked the birthing of God’s kingdom, in the exclusive sense that it was completely separated from the Judaic system, see here, and Matthew 21:33-45, Hebrews 8:13.

One of the fascinating things in Revelation is the way it portrays the experience of the people of God in terms very similar to what transpired for Israel in Egypt and the ten plagues of judgment. For example:

1) prominence of the Red Sea (Ex. 14:1-31) // prominence of glassy sea (Rev. 15:2)
2) song of deliverance (Ex. 15:1-18) // song of deliverance (Rev. 15:2-4)
3) God’s enemy: Pharaoh // God’s enemy: the Beast
4) court magicians of Egypt // the False Prophet
5) persecution of Israel // persecution of the Church
6) protected from plagues (Ex. 8:22; 9:4,26; 10:23; 11:7) // protected from wrath (Rev. 7:1-8; 9:4)
7) hardened/unrepentant (Ex. 8:15; 9:12-16) // hardened/unrepentant (Rev. 16:9,11,21)
8) the name of God (Ex. 3:14) // the name of God (Rev. 1:4-6)
9) Israel redeemed from bondage by blood // Church redeemed from sin by blood
10) Israel made a kingdom of priests (Ex. 19:6) // Church made a kingdom of priests (Rev. 1:6)
11) 7th plague (Ex. 9:22-25) // 1st trumpet
12) 6th plague (Ex. 9:8-12) // 1st bowl
13) 1st plague (Ex. 7:20-25) // 2nd/3rd trumpet & 2nd/3rd bowl
14) 9th plague (Ex. 10:21-23) // 4th trumpet & 4th bowl
15) 8th & 9th plagues (Ex. 10:1-20) // 5th trumpet & 5th bowl

See the chart of the Emperors here: http://kloposmasm.wordpress.com/2009/08/14/pp5-internal-evidence-for-an-early-date-revelation-part-2/.
Did You Know

Did you know that there is a town in Germany by the name of Reher?? That there is also one by the name of Stoltenberg?? Do you know where our Reher and Stoltenberg ancestors came from in Germany??

Johann Reher came from the town of Bebensee. His father Casper came from Dreggers, which was 6 miles east of there. Bad Segeberg is also associated with the family. It is about 7 miles north and the closest large town. Since the town of Reher is 40 miles west of Bebensee and Dreggers, I'm not sure if there is any connection to our name and family. Forty miles in the mid 1800's was quite a distance to travel.

Claus Stoltenberg came from Brodersdorf, which is 46 miles north of Bebensee and about 10 miles northeast of Kiel, a major city on the Baltic Sea. Brodersdorf is two to three miles south of the Baltic Sea. Other towns associated with the Stoltenberg side of the family are all located in close proximity. They are LaBoe, Stein, Fahren, Probsteierhagen and Wentdorf. The town of Stoltenberg is only seven miles to the southeast of Brodersdorf.

The above map of northern Germany will show you where the towns Reher, Stoltenberg, Bebensee, Dreggers, and Brodersdorf are located. I've marked these towns with a dark asterisk * to make them easier to find. Click on the map to make it bigger and easier to see the towns marked with an asterisk. After the larger map comes up, click on that again to zoom in. It is about 60 miles from Kiel at the top of the map to Hamburg, which is towards the bottom of this map. From Reher to Bebensee it is 40 miles, to Brodersdorf it is 46 miles, which is also the distance from Bebensee to Brodersdorf and Stoltenberg... almost a perfect equilateral triangle. See the map below... Reher is in the bottom left corner of the triangle, Bebensee in the bottom right and Brodersdorf in the top. Hamburg is the port that our Johann & Sophia Reher sailed out of with their two small children, Emma and Ernest, when they came to America in 1872.
The map above shows a closer view of the area where Bebensee & Dreggers are located... where Johann Reher and his father were born. Once again, click on the map to make it larger. The map above shows a closer view of the area where the Stoltenberg side of the family came from. You'll see all of the towns: Brodersdorf, LaBoe, Stein, Fahren, Probsteierhagen and Wentdorf. I've added the Stoltenberg name myself as the town didn't show up on this map. So Now You Know
Ann Arbor, MI (Scicasts) – How do stem cells preserve their ability to become any type of cell in the body? And how do they “decide” to give up that magical state and start specializing? If researchers could answer these questions, our ability to harness stem cells to treat disease could explode. Now, a University of Michigan Medical School team has published a key discovery that could help that goal become reality.

In the current issue of the journal Cell Stem Cell, researcher Dr. Yali Dou and her team show the crucial role of a protein called Mof in preserving the ‘stem-ness’ of stem cells, and priming them to become specialized cells in mice. Their results show that Mof plays a key role in the “epigenetics” of stem cells -- that is, helping stem cells read and use their DNA. One of the key questions in stem cell research is what keeps stem cells in a kind of eternal youth, and then allows them to start “growing up” to be a specific type of tissue.

Dou, an associate professor of pathology and biological chemistry, has studied Mof for several years, puzzling over the intricacies of its role in stem cell biology. She and her team have zeroed in on the factors that add temporary tags to DNA when it’s coiled around tiny spools called histones. In order to read their DNA, cells have to unwind it a bit from those spools, allowing the gene-reading mechanisms to get access to the genetic code and transcribe it. The temporary tags added by Mof act as tiny beacons, guiding the “reader” mechanism to the right place.

“Simply put, Mof regulates the core transcription mechanism – without it you can’t be a stem cell,” says Dou. “There are many such proteins, called histone acetyltransferases, in cells – but only MOF is important in undifferentiated cells.”

Dou and her team also have published on another protein involved in DNA transcription, called WDR5, that places tags that are important during transcription.
But Mof appears to control the process that actually allows cells to determine which genes they want to read – a crucial function for stem-ness. “Without Mof, embryonic stem cells lost their self-renewal capability and started to differentiate,” she explains.

The new findings may have particular importance for work on induced pluripotent stem cells – the kind of stem cells that don’t come from an embryo, but are made from “adult” tissue. iPSC research holds great promise for disease treatment because it could allow a patient to be treated with stem cells made from their own tissue. But the current way of making iPSCs from tissue involves a process that uses a cancer-causing gene – a step that might give doctors and patients pause. Dou says that further work on Mof might make it possible to stop using that potentially harmful approach.

But further research will be needed. What they will focus on is how Mof marks the DNA structures called chromatin to keep parts of the genome readily accessible. In stem cells, scientists have shown, many areas of DNA are kept open for access – probably because stem cells need to use their DNA to make many proteins that keep them from ‘growing up.’ Once a stem cell starts to differentiate, or become a certain specialized type of cell, parts of the DNA close up and aren’t as accessible.

Many scientific teams have studied this “selective silencing” and the factors that cause stem cells to start specializing by reading only certain genes. But few have looked at the factors that facilitate broad-range DNA transcription to preserve stem-ness. “Mof marks the areas that need to stay open and maintains the potential to become anything,” Dou explains. Its crucial role in many species is hinted at by the fact that the gene to make Mof has the same sequence in fruit flies and mice.

“If you think about stem cell biology, the self-renewal is one aspect that makes stem cells unique and powerful, and the differentiation is another,” says Dou.
“People have looked a lot at differentiation to make cells useful for therapy in the future – but the stem cell itself is actually pretty fascinating. So far, Mof is the only histone acetyltransferase found to support the stemness of embryonic stem cells.” In addition to Dou, the research team includes her former postdoctoral fellow Dr. Xiangzhi Li, now at Shandong University in China; colleagues from the Department of Biostatistics and Bioinformatics in the Rollins School of Public Health at Emory University; and colleagues from the Laboratory of Gene Expression at the National Institutes of Health.
Greenland ice a benchmark for warming

Greenland was about eight degrees warmer 130,000 years ago than it is today, an analysis of an almost three-kilometre-long ice core in Greenland has revealed.

The finding by an international team of 38 institutions from 14 nations provides an important benchmark for climate change modelling and gives an insight into how the natural world will respond to global warming in the future. The study, which involves CSIRO researchers, also suggests Antarctica's ice sheets may be more vulnerable to warming than previously thought.

Published in today's Nature journal, the results flow out of a four-year expedition known as the North Greenland Eemian Ice Drilling operation (NEEM). Dr David Etheridge, principal research scientist with CSIRO Marine and Atmospheric Research who has worked on the project, says the NEEM program is the first to successfully reach down into Greenland's ice core into the Eemian period, which stretched from 130,000 years to 115,000 years ago.

"It has been something of a holy grail for Greenland work to achieve this … we are getting to ice close to the bedrock where you get melting and mixing of the ice layers."

Etheridge says that, in a process similar to assembling a jigsaw puzzle, scientists used comparisons with gas elements in Antarctica's deep ice core records to re-assemble the layers in their original sequence. Deep ice drilling in the Antarctic has reached as far back as 800,000 years.

Past and future

It is important to understand what happened in Greenland during the Eemian period because the temperatures experienced then are "within the realms of where we are heading", says Etheridge. However, he says the previous warming was due to the Earth receiving more of the Sun's radiation due to its orbit at the time, while today's warming is being driven by increases in greenhouse gases in the atmosphere.
Nature paper co-author Dr Mauro Rubino, of CSIRO Marine and Atmospheric Research, says it had been previously estimated that Greenland's temperature was about 4°C warmer during the Eemian than now. But this latest work used analysis of water-stable isotopes to estimate "the temperature 130,000 years ago was up to 8°C warmer [in Greenland] than what it is today", says Rubino. It also shows sea levels were on average 6 metres higher. The results provide "important benchmarks for future climate change projections" in temperature and the contribution of the two main ice sheets to sea level rises, Rubino says. He says the study also reveals the Greenland ice sheet did not melt as much as previously thought so was not the major contributor to sea level at that time. "It shows the major contribution to sea level rises was not coming from the Greenland ice shelf," he says. "It was previously believed that Greenland melted entirely [during the Eemian], but in fact the ice sheet was not that much different from what it is now. "Most of the contribution to sea level rise comes from these two big ice reserves [in Greenland and the Antarctica] so one of the possible interpretations is Antarctica is more susceptible to climate change than we thought." Etheridge agrees. He says the work shows the Greenland ice sheet survived during the Eemian - although it was about 400 metres thinner. "From that figure you can deduce how much it contributed to the sea level rise and it is not as much as was thought. "That throws things back to Antarctica ... previously the thought was Antarctica was too cold and too stable to be impacted." Etheridge says CSIRO was invited by lead institution, the University of Copenhagen, to be involved in NEEM at its formation because of its expertise in analysing air composition in air bubbles trapped in deep ice. Rubino says their team began analysis of gas bubbles from the first 80 to 100 metres of ice core down to the final 2540 metre depth. 
This helped track changes in climate and temperature on a year-by-year basis. He says the concentration of greenhouse gases such as carbon dioxide, methane and nitrous oxide in the air bubbles from the Eemian was much lower than what it is today.
Multiple Myeloma (Plasma-Cell Myeloma)

Multiple myeloma is a rare cancer of the bone marrow. It results from the abnormal growth of plasma cells in the bone marrow. Plasma cells normally produce antibodies. As these abnormal or malignant plasma cells multiply, they produce large quantities of abnormal antibodies. These abnormal antibodies collect in the blood and urine. As the plasma cell tumor grows, it also destroys the bone around it. These events lead to bone pain, kidney damage, and a weak immune system.

Cancer occurs when cells in the body (in this case plasma cells) divide without control or order. Normally, cells divide in a regulated manner. If cells keep dividing uncontrollably, a mass of tissue forms, called a growth or tumor. The term cancer refers to malignant tumors, which can invade nearby tissue and spread to other parts of the body. A benign tumor does not invade or spread.

Bone Marrow in Adult

Risk factors that increase your chance of getting multiple myeloma include:
- Age: 50 or older
- Race: black

Symptoms of early stage multiple myeloma include:
- Persistent bone pain, often severe. It is most common in the back but also in the limbs or ribs.

When the disease progresses, symptoms may include:
- Broken bones
- Repeat infections
- Nausea and vomiting
- Difficulty urinating
- Abnormal bleeding
- Visual problems

The doctor will ask about your symptoms and medical history. A physical exam will be done. Your doctor may need pictures of your bones. This can be done with:
- Magnetic resonance imaging (MRI)
- Computed tomography scan (CT scan)
- Positron emission tomography/computed tomography scan (PET/CT scan)

Your doctor may order tests of your body fluids and tissues.
This can be done with:
- Blood tests
- Urine tests
- Bone marrow aspiration or biopsy

After cancer is found, staging tests are done to find out if the cancer has spread.

Treatment is sometimes able to slow the progress of multiple myeloma. Complete remission is rare. Treatment is also important to control symptoms. Treatment depends on your symptoms and the stage of your cancer. Options include:

Chemotherapy is the use of drugs to kill cancer cells. Chemotherapy may be given in many forms, including pill, injection, and via a catheter. The drugs enter the bloodstream and travel through the body. The drugs kill mostly cancer cells. Some healthy cells may be killed in the process. Chemotherapy drugs are used in combination and may also be given with other types of medicines, like immunomodulating agents.

Immunomodulating agents work by changing the way the myeloma cells live. This makes it difficult for them to survive, reproduce, and produce proteins that cause symptoms. These medicines are often paired with a corticosteroid.

Corticosteroids may be combined with other medicines or given alone. Corticosteroids can also help to treat the symptoms of chemotherapy, like nausea and vomiting.

A proteasome inhibitor is also available to treat multiple myeloma. Proteasomes are protein complexes that break down proteins. A proteasome inhibitor blocks this breakdown, so proteins build up in the cells. Because of these extra proteins, the cells eventually stop growing.

Biologic therapies repair, encourage, or raise the body’s response to cancer by affecting the immune system. Interferon is one biologic agent used to treat multiple myeloma. Interferon may be used with chemotherapy to help prolong remission, slowing the speed at which myeloma cells grow.

Radiation therapy is the use of radiation to kill cancer cells and shrink tumors. External beam radiation therapy may be given to relieve bone pain. It is not considered a cure.
Surgery is done to remove a tumor that causes pain or other disabling symptoms when radiation therapy is not considered a good option. Surgery is not a cure.

Peripheral stem cell transplant involves giving immature, healthy blood cells to replace bone marrow cells that are damaged by cancer.
Last reviewed November 2012 by Mohei Abouzied, MD
Adam Smith biography

Adam Smith was an economist and philosopher who wrote what is considered the "bible of capitalism," The Wealth of Nations, in which he details the first system of political economy.

While his exact date of birth isn’t known, Adam Smith’s baptism was recorded on June 5, 1723, in Kirkcaldy, Scotland. He attended the Burgh School, where he studied Latin, mathematics, history and writing. Smith entered the University of Glasgow when he was 14 and in 1740 went to Oxford.

In 1748, Adam Smith began giving a series of public lectures at the University of Edinburgh. Through these lectures, in 1750 he met and became lifelong friends with Scottish philosopher and economist David Hume. This relationship led to Smith's appointment to the Glasgow University faculty in 1751.

In 1759 Smith published The Theory of Moral Sentiments, a book whose main contention is that human morality depends on sympathy between the individual and other members of society. On the heels of the book, he became the tutor of the future Duke of Buccleuch (1763–1766) and traveled with him to France, where Smith met with other eminent thinkers of his day, such as Benjamin Franklin and French economist Turgot.

The Wealth of Nations

After toiling for nine years, in 1776 Smith published An Inquiry into the Nature and Causes of the Wealth of Nations (usually shortened to The Wealth of Nations), which is thought of as the first work dedicated to the study of political economy. Economic thought of the time was dominated by the idea that a country’s wealth was best measured by its store of gold and silver. Smith proposed that a nation’s wealth should be judged not by this metric but by the total of its production and commerce—today known as gross domestic product (GDP). He also explored theories of the division of labor, an idea dating back to Plato, through which specialization would lead to a qualitative increase in productivity.
Smith’s ideas are a reflection on economics in light of the beginning of the Industrial Revolution, and he states that free-market economies (i.e., capitalist ones) are the most productive and beneficial to their societies. He goes on to argue for an economic system based on individual self-interest led by an “invisible hand,” which would achieve the greatest good for all. In time, The Wealth of Nations won Smith a far-reaching reputation, and the work, considered a foundational work of classical economics, is one of the most influential books ever written. In 1787, Smith was named rector of the University of Glasgow, and he died just three years later, at the age of 67.
Technology transfer is a complex technical process that does not often receive the resources or considerations that it needs to ensure success of complex manufacturing processes. Ideally, technology transfer should be managed by a dedicated division or specialist group, which is one of the benefits gained by outsourcing manufacturing projects to an established contract manufacturing organization (CMO). A dedicated team knows that technology transfer programs are not always straightforward, quick, or easy to perform and that there will always be challenges to overcome and problems to resolve to achieve success. Their experience will also have taught them that clients often have very little idea as to what the transfer process involves, which means that the development of a trusting relationship between the client and the CMO is essential.

At many stages during biopharmaceutical product development, it will be necessary to transfer processes from one place to another. Whether this is internally between teams within a company (for example, from the process development team to the scale-up or manufacturing team) or externally, from one company to another, the key objective is to execute the transfer with minimal disruption or unnecessary cost. This can be an overwhelming proposition for small- to medium-sized start up companies (usually with an academic or research background). These companies will prefer to transfer early-stage processes to a CMO for the first time—for scale-up and manufacture of Phase 1/2 clinical trial materials—and have limited experience of scale-up or the regulations and expectations surrounding a current good manufacturing practices (cGMP) process.

Poor planning, unclear documentation, and bad communication all can lead to an inefficient transfer program, and more often than not will lead to the receiving team having to develop parts of the process that have already been established.
Any inefficiency in the technology transfer program will inevitably result in delays in production, with the related theoretical loss in revenue and additional increases in the cost of the transfer. It is therefore essential to put serious time and resources into the planning stage of a technology transfer program.

A Stitch in Time

There are no fool-proof strategies that can guarantee a smooth technology transfer, but certain factors have a large influence on success: meticulous planning, technical understanding of the manufacturing processes, and good communication. A strong emphasis needs to be placed on a risk-based approach. This involves the identification of scientific, technical, and logistical risks at an early stage, to develop avoidance or mitigation strategies far enough in advance to help keep the transfer program on course. This kind of strategy relies on a highly knowledgeable team of people who have the necessary experience to be able to anticipate the issues that may arise, and excellent communication between the sending and receiving units.

For a CMO, one of the biggest challenges in technology transfer can be dealing with client expectations. For first-timers, the idea of technology transfer can seem quite simple and straightforward: "We've developed this process at our facility—how hard can it be to do the same thing in a new place?" The truth is that even transferring the simplest process will be a complex progression of planning, testing, and optimizing, especially considering that most biotech products being transferred to a CMO will be moving into clinical trials with the intention to eventually manufacture commercially. This should mean that processes are, at this stage, being looked at critically to assess suitability for scale-up and regulatory compliance. Clear communication and the development of a good relationship between both parties is clearly of benefit to all from the very start of the project.
Three Famous Short Novels (eBook) “You cannot swim for new horizons until you have courage to lose sight of the shore.” —William Faulkner These short works offer three different approaches to Faulkner, each representative of his work as a whole. Spotted Horses is a hilarious account of a horse auction, and pits the “cold practicality” of women against the boyish folly of men. Old Man is something of an adventure story. When a flood ravages the countryside of the lower Mississippi, a convict finds himself adrift with a pregnant woman. And The Bear, perhaps his best-known shorter work, is the story of a boy’s coming to terms with the adult world. By learning how to hunt, the boy is taught the real meaning of pride, humility, and courage. From the Trade Paperback edition. About the Author William Cuthbert Faulkner was born in 1897 and raised in Oxford, Mississippi, where he spent most of his life. One of the towering figures of American literature, he is the author of The Sound and the Fury, Absalom, Absalom!, and As I Lay Dying, among many other remarkable books. Faulkner was awarded the Nobel Prize in 1950 and France’s Legion of Honor in 1951. He died in 1962. Praise for Three Famous Short Novels… “No man ever put more of his heart and soul into the written word than did William Faulkner. If you want to know all you can about that heart and soul, the fiction where he put it is still right there.” —Eudora Welty “Faulkner’s greatness resided primarily in his power to transpose the American scene as it exists in the Southern states, filter it through his sensibilities and finally define it with words.” —Richard Wright
Community Effort Can Decrease Teen Drinking and Smoking, Study Finds A new study finds a program designed to assist communities in preventing unhealthy behaviors in teens is effective in reducing adolescent smoking and drinking. The study found tenth graders in towns that used the program, called “Communities That Care,” were less likely to try drinking or smoking, compared with teens in communities not using the program. The program was also effective in reducing delinquent behavior including stealing, fights and vandalism, HealthDay reports. Communities participating in the program had 4,400 fifth graders in seven states complete surveys designed to identify factors that put them at risk for health and behavior problems. A group of community leaders, including parents, teachers and health workers, looked at ways to address the problems. They chose from a list of preventive interventions that have been shown to work, such as tutoring, educational sessions for parents of at-risk kids, and middle-school curricula about substance abuse. The children were followed for five years. The researchers then compared rates of substance abuse and violence in 12 towns where community leaders used Communities That Care with 12 communities that did not use the program. The researchers found teenagers in towns that participated in the program were half as likely to ever have smoked a cigarette by tenth grade, and 21 percent less likely to be a current smoker, compared with teens in non-participating communities. They were also 38 percent less likely to ever have tried alcohol, and 21 percent less likely to have engaged in delinquent behavior. The study did not find a difference between the two groups in rates of illegal or prescription drug use. “What’s exciting about this paper is that these decreases in alcohol use, smoking and violence were apparent even after outside support for the Communities That Care system ended. 
It shows that community coalitions can make a sustained difference in their youngsters’ health community-wide,” study author J. David Hawkins of the University of Washington said in a news release. The findings are published in the Archives of Pediatrics and Adolescent Medicine.
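Figures such as "half as likely" and "21 percent less likely" are relative reductions: the drop in a rate expressed as a fraction of the comparison group's rate. A minimal sketch of the arithmetic, using hypothetical baseline rates (the study's raw rates are not given in this article):

```python
def relative_reduction(control_rate, program_rate):
    """Reduction in a rate, as a percentage of the comparison (control) rate."""
    return 100 * (control_rate - program_rate) / control_rate

# Hypothetical example: if 30% of tenth graders in comparison towns had ever
# smoked, "half as likely" in program towns would mean a 15% rate:
print(round(relative_reduction(0.30, 0.15)))   # 50 (percent)

# A 21% relative reduction applied to a hypothetical 10% baseline rate:
print(round(0.10 * (1 - 0.21), 3))             # 0.079, i.e., a 7.9% rate
```

Note that a relative reduction says nothing about the absolute size of the baseline rate, which is why studies normally report both.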
Tostig Godwinson

Tostig Godwinson (c. 1026 – 25 September 1066), Earl of Northumbria, was one of the brothers of King Harold of England, its last Saxon king. Tostig married Judith (Fausta) of Flanders (1030 – 5 March 1094), the half-sister of Count Baldwin V of Flanders and, thus, the aunt of Matilda of Flanders, who was the wife of William the Conqueror. Popular (as opposed to scholarly) non-fiction books that cover Tostig's life and role in history include:
- 1066 The Year of the Conquest (©1977) by David Howarth (ISBN 0-88029-014-5)
- The Making of the King 1066 (©1966) by Alan Lloyd (ISBN 0-88029-473-6)
Negotiating devolution is complicated and has been going on since 2001. At times, it has made sense to make a record of discussions and decisions by entering into written agreements. These agreements are important steps that keep the negotiations moving forward. Past agreements of this kind have included the 2001 Memorandum of Intent and the 2004 Devolution Framework Agreement. These agreements were endorsed by Aboriginal governments, the Government of Canada and the GNWT. In 2007, the GNWT and four regional Aboriginal governments - the Inuvialuit Regional Corporation, the Gwich'in Tribal Council, the Sahtu Secretariat Incorporated and the Northwest Territory Métis Nation - also signed an Agreement-in-Principle on Resource Revenue Sharing, making provisions for the remaining Aboriginal governments to sign on when they were ready. Like these earlier agreements, the devolution Agreement-in-Principle (AiP) is another important step in the process. The AiP sets out the subjects that should be included in a final devolution agreement, stating the principles and financial parameters that will guide the negotiation of a final agreement. The AiP won't be legally binding and will not jeopardize existing or future rights. The AiP is simply an agreement to move to the next stage of negotiations. Approval of the AiP marks the beginning of negotiations leading up to a final devolution agreement. Many of the outstanding issues will be addressed as part of final negotiations. Signing the devolution AiP is a signal to the Government of Canada that we are ready to invest in advancing towards a final agreement.
Carbon With That Latte?
Sonia Narang 07.03.07, 6:00 AM ET
How Starbucks hopes to trim its emissions footprint.

In its shop in downtown San Mateo, Calif., for instance, baristas serve up about 40,000 cups of coffee drinks every month. Based on utility bills alone, that means Starbucks is serving up about 4,900 pounds of carbon with its drinks--or about two ounces per cup. Starbucks executives say they are looking for ways to trim those carbon emissions. But they are reluctant to say just how large Starbucks' worldwide carbon footprint is--and how it has changed over the past few years. Starbucks has calculated the carbon footprint of its North American locations only once, in 2003. Since then, its number of U.S. company-owned stores has almost doubled, to 6,281. Its international company-owned locations, also left out of the calculation, now number more than 1,500. "Although we have grown in size, the nature of our business remains the same--the operation of retail stores and roasting coffee," says Jim Hanna, environmental affairs manager at Starbucks in Seattle. While Starbucks chooses not to calculate its carbon footprint every year, the company does conduct annual progress checks, but these numbers are not publicly reported. Other eco-friendly companies are also surprisingly coy. Last month, for instance, Google led a group of 40 other companies (including Starbucks) in kicking off the "Climate Savers Computing Initiative," a project aimed at building and buying more energy-efficient PCs. Google is nonetheless keeping a watch on the size of its carbon footprint and hopes to achieve carbon neutrality by the end of this year by using non-carbon energy sources for much of its power needs and purchasing carbon offsets for the rest. Recently, Google flipped the switch on 1.6 megawatts of solar power modules on the roof of its Mountain View headquarters. Starbucks was early among eco-sensitive companies.
Executives became convinced early in this decade that atmospheric carbon could wreak havoc on the global climate--and so on the supply and price of coffee beans. "We're facing environmental risks posed by climate change that could negatively affect many aspects of our company, including our ability to procure coffee," Hanna says. Temperature and rainfall dictate how much coffee comes out of regions including Latin America and Asia. "As we hope to increase to 40,000 stores worldwide in the next 10 years, we're going to need a larger supply," Hanna says. In 2003, Starbucks hired Denver-based engineering firm CH2M Hill to calculate the carbon footprint of the approximately 3,700 stores it then had in North America. CH2M Hill began measuring corporate footprints in the late 1990s and has done comparable calculations for a few dozen companies, including Nike, 3M, SC Johnson and energy firm Kinder Morgan. Doing such calculations is still something of a black art. CH2M Hill's Lisa Grice, who worked on the coffee company's carbon footprint, says the final number primarily includes electricity used in retail stores. Carbon calculators take into account stores' geographic locations. That's because electricity generated at power plants in one state may come from a different source than a power plant in another state. Some stores may get electricity from coal-fired plants, which results in greater carbon emissions, while others may depend on hydroelectric power, which has a lower carbon byproduct.
It took about half a year of data collection and complex calculations to figure out that Starbucks emitted 295,000 tons of carbon into the atmosphere in 2003. Starbucks decided to leave out an additional 81,000 tons of carbon dioxide it emitted by transporting coffee materials and disposing of solid waste. According to Starbucks Environmental Affairs Manager Ben Packard, the company can only control and manage carbon emissions from energy used in retail stores and coffee-roasting plants. Starbucks attributes 81% of its greenhouse gas emissions to purchased electricity and 18% to coffee roasting at its three North American plants and natural gas usage in stores. That 295,000-ton figure gives Starbucks a small carbon footprint on a list of about 1,000 companies compiled by the Carbon Disclosure Project, a London-based nonprofit. Near the top of the list is energy giant American Electric Power, with 146.5 million tons of carbon emissions. Next in line are oil and gas companies Royal Dutch/Shell and British Petroleum, with 105 million tons and 92 million tons. By comparison, General Electric's 12.4-million-ton footprint makes it a medium-size emitter. The smallest carbon emitters weighed in at a few thousand tons. Most of the lower footprints belong to insurance companies, retailers and banks. Starbucks execs say that even as they've been growing the number of outlets, they've been trying to be more energy efficient. In 2005, Starbucks joined the World Resources Institute's Green Power Market Development Group, a consortium of 15 companies ranging from Staples to Google. The group helps its members purchase renewable energy at lower prices. Last year, the coffee company increased its wind power purchases to 20% of the total energy usage in North American stores. This offset 62,000 tons of carbon dioxide.
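The article's back-of-the-envelope numbers are easy to reproduce. A quick sketch, using only the figures quoted above:

```python
# Per-cup estimate for the San Mateo store: ~4,900 lb of CO2 per month
# spread over ~40,000 cups.
pounds_per_month = 4_900
cups_per_month = 40_000
oz_per_cup = pounds_per_month * 16 / cups_per_month   # 16 oz per pound
print(f"{oz_per_cup:.2f} oz CO2 per cup")             # 1.96 oz, "about two ounces"

# Company-wide 2003 footprint and its reported split.
footprint_tons = 295_000
print(f"purchased electricity: {0.81 * footprint_tons:,.0f} tons")   # 238,950
print(f"roasting + natural gas: {0.18 * footprint_tons:,.0f} tons")  # 53,100
```

Against those 295,000 tons, the 62,000-ton wind-power offset mentioned above amounts to roughly a fifth of the 2003 footprint.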
But to track progress in reducing carbon emissions accurately, companies need to update those footprints frequently, says Marcus Peacock of the U.S. Environmental Protection Agency. "We've asked companies to check their numbers annually," he says. A number of companies are doing just that. Both Intel and Sun Microsystems, which are also part of the Climate Savers Computing Initiative, report their carbon footprints annually. Intel's carbon footprint added up to 4 million tons in 2006, a number that includes worldwide operations. Sun first calculated its footprint at 255,000 tons last year, and used past data to figure out carbon emissions dating back four years. The company also reports up-to-date carbon numbers on its Web site. "We calculate this monthly so that we can make sure we're on track with improving emissions," says Sun's VP of Eco Responsibility Dave Douglas. Both Intel and Sun are part of the EPA's Climate Leaders Program, a group of companies that sets tangible carbon reduction goals. Climate Leaders began five years ago, when few companies even knew the meaning of "carbon footprint." Now, the program boasts 132 members. In the meantime, Starbucks executives insist they are looking for ways to improve energy efficiency and to encourage their customers to do the same. This summer, Starbucks told its customers to go green through a number of high-profile campaigns, including "Green Umbrellas for a Green Cause" and the online Planet Green Game (planetgreengame.com). Starbucks will also start monitoring the energy usage of specific equipment at some stores later this year. "We'll install individual meters on espresso machines, refrigerators, water filtration systems and other components," Hanna says. This doesn't necessarily mean you'll see a green espresso maker at a Starbucks near you anytime soon. "Quality and performance come first," Hanna says.
Sleep and Your Body Clock

What is the body clock?

Your body's "biological clock," or 24-hour cycle (circadian rhythm), can be affected by light or darkness, which can make the body think it is time to sleep or wake up. The 24-hour body clock controls functions such as:

- Sleeping and waking.
- The balance of body fluids.
- Other body functions, such as when you feel hungry.

How are body clock problems and sleep problems connected?

Body clock sleep problems have been linked to a hormone called melatonin. Light and dark affect how the body makes melatonin. Most melatonin is made at night. During the day, light tells your body to make less melatonin. If you work at night in artificial light, your body may be making less melatonin than it needs. Some people—such as those who can't sleep until very late and those who go to bed very early—have circadian (say "ser-KAY-dee-un") rhythms that are different from those of most people. Other people with sleep problems may have regular circadian rhythms but have to adjust them to new situations, such as working a night shift.

What sleep problems are related to problems with your body clock?

Things that may affect melatonin production and can cause sleep problems include:

- Jet lag. Crossing time zones disrupts your body clock. You have sleep problems because your body clock has not adjusted to the new time zone. Your body thinks that you're still in your old time zone. For example, if you fly from Chicago to Rome, you cross seven time zones. This means that Rome is 7 hours ahead of Chicago. When you land in Rome at 6:00 in the morning, your body thinks it's still in Chicago at 11:00 the previous night. Your body wants to sleep, but in Rome the day is just beginning.
- Changing your sleep schedule. When you work at night and sleep during the day, your body's internal clock needs to reset to let you sleep during the day. Sometimes that's hard to do.
People who work the night shift or rotate shifts may have trouble sleeping during the day and may feel tired at night when they need to be alert for work.
- Your sleep environment. Too much light or noise can make your body feel like it is not time to sleep.
- Illness. Certain illnesses and health problems can affect sleep patterns. These include dementia, a head injury, recovering from a coma, and severe depression. Some medicines that affect the central nervous system may also affect sleep patterns.
- Aftereffects of drugs and alcohol. Some drugs cause sleep problems. And you may fall asleep with no problems after drinking alcohol late in the evening, but drinking alcohol before bed can wake you up later in the night.

Other sleep problems related to the body clock include:

- Having a hard time falling asleep until very late at night or very early in the morning and then feeling tired and needing to sleep during the day. People who have this problem may be called "night owls." This is a common problem, and it usually starts in the early teen or young adult years. People who have a parent with this problem are more likely to have it themselves.
- Falling asleep early—at 8 p.m. or earlier—and waking up early—between 3 a.m. and 5 a.m. If you wake up early, you may be called an "early bird." This problem is not as common as staying up late and waking up late. Experts are not sure what causes it.

How can you treat sleep problems related to your body clock?

How you treat a sleep problem related to your body clock depends on what is causing the problem. Here are some tips for the most common problems. Taking melatonin supplements may help reset your body clock. Studies show that melatonin has reduced the symptoms of jet lag for people flying both east and west.1 Suggestions about times and dosages vary among researchers who have studied melatonin.
Doctors recommend that you:

- Take melatonin after dark on the day you travel and after dark for a few days after you arrive at your destination.
- Take melatonin in the evening for a few days before you fly if you will be flying east.

The safety and effectiveness of melatonin have not been thoroughly tested. Taking large doses of it may disrupt your sleep and make you very tired during the day. If you have epilepsy or are taking blood thinners such as warfarin (Coumadin), talk to your doctor before you use melatonin. The sleeping pills eszopiclone (Lunesta) and zolpidem (Ambien) have been studied for jet lag. They may help you sleep despite jet lag if you take them before bedtime after you arrive at your destination. Side effects include headaches, dizziness, confusion, and feeling sick to your stomach. For more information on jet lag, see:

- Sleep Problems: Dealing With Jet Lag.

If you work the night shift or rotate shifts, you can help yourself get good sleep by keeping your bedroom dark and quiet and by taking good care of yourself overall. In some cases, prescription medicine or over-the-counter supplements may help. Here are some tips on sleeping well when you do this type of shift work:

- Make sure that the room where you sleep is dark. Use blackout drapes, or wear a sleep eye mask.
- Wear earplugs to block sounds.
- Don't have alcohol or caffeine in the hours leading up to bedtime.
- Take a nap during a work break if you can.
- Ask your doctor if you should try a dietary supplement or medicine. Doctors usually advise people to use a supplement or medicine only for a short time.

For more information, see the topic Shift Work Sleep Disorder. Some people, no matter what they do, have trouble falling asleep at night and being up early during the day. This may or may not cause problems for them. It depends on their lifestyle and work or school schedule.
If you are one of those night owls, there are things you can try so that you fall asleep earlier and sleep through the night:

- Getting up at the same time every day no matter what time you go to sleep. On the weekends (or on days when you don't have to get up), don't let yourself sleep more than 1 hour longer than you do when you have to get up for work or school.

If that doesn't work, you can try the treatments listed below.

- Light therapy. In this case, light therapy means exposing yourself to bright light as soon as you wake up. You can use sunlight or a bright (10,000 lux) light box for 30 to 45 minutes each day.
- Melatonin. Ask your doctor about taking melatonin supplements in the evening to help you get to sleep.
- Chronotherapy. For night owls, this method involves creating a 27-hour day. During each sleep-wake cycle, you go to sleep 3 hours later until the time to go to sleep has cycled back around to the time you actually want to go to sleep. After you complete the cycle once, then you would keep going to bed at that desired time. This method can be hard to do because of the way it can disrupt your daily schedule and because you have to keep to a rigid schedule. Here is a sample schedule:
  - Day 1: If you normally go to bed at midnight, you would wait until 3 a.m. to go to sleep.
  - Day 2 and beyond: Go to sleep at 6 a.m., and then keep delaying sleep 3 hours each day until you are going to bed at the time you desire. This will probably take about a week.

People who fall asleep very early and wake up before dawn may try the following to stay up later at night and sleep later in the morning.

- Light therapy. In this case, light therapy means exposing yourself to bright light in the evening. Use a bright (10,000 lux) light box for 30 to 45 minutes each day.
- Antidepressant medicine. A doctor may prescribe antidepressants along with having you try to stay up 15 minutes later every few days. This treatment is usually for people who are depressed in addition to having sleep problems.
- Chronotherapy. For early birds, this method involves creating a 21-hour day. During each sleep-wake cycle, you go to bed 3 hours earlier until the time to go to sleep has cycled back around to the time you actually want to go to sleep. This method can be hard to do because of the way it can disrupt your daily schedule and because you have to keep to a rigid schedule. Here is a sample schedule:
  - Day 1: If you normally go to bed at 8 p.m., you would go to bed at 5 p.m.
  - Day 2 and beyond: Go to bed at 2 p.m., and then keep going to sleep 3 hours earlier each day until you are going to bed at the time you desire. This will probably take about a week. Then you would keep going to bed at that desired time.

After you get treatment for the illness or health problem that is causing your sleep problem, you will need to practice good sleep habits. This includes getting regular exercise (but not within 3 or 4 hours of your bedtime), going to bed at the same time each day, and using the bed only for sleep and sex. For more tips on improving sleep habits, see:

- Insomnia: Improving Your Sleep.

Health Tools help you make wise health decisions or take action to improve your health. Actionsets are designed to help people take an active role in managing a health condition:

- Insomnia: Improving Your Sleep
- Sleep Problems: Dealing With Jet Lag

References

Herxheimer A (2008). Jet lag, search date June 2008. Online version of BMJ Clinical Evidence: http://www.clinicalevidence.com.

Other Works Consulted

Reite M, et al. (2002). Insomnia complaints. In Concise Guide to Evaluation and Management of Sleep Disorders, 3rd ed., chap. 3. Washington, DC: American Psychiatric Publishing.

By: Healthwise Staff. Last Revised: December 1, 2011.
Medical Review: Anne C. Poinier, MD - Internal Medicine; Lisa S. Weinstock, MD - Psychiatry.

© 1995-2013 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
- Hesiod (c. 8th century BC)
- Orphic Communities (c. 540 BC - ?)
- Pythagoras (?580-?500 BC)
- Herodotus (484-425 BC)
- Herodotus (link to Google Books) - trans. Rev. William Beloe, 1831. P. 236: The neck of land which stretches from the country of the Gindanes towards the sea is possessed by the Lotophagi who live entirely upon the fruit of the lotos
- Empedokles (?480-430 BC)
- The Fragments of Empedocles (link to Questia.com) - trans. W. E. Leonard, Ph.D., Chicago, 1908. Part 2 says much about transmigration of souls and the Orphic/Pythagorean traditions.
- Socrates (?470-399 BC)
- Antisthenes (?445-365 BC)
- Plato (?427-?347 BC)
- Plato's Republic (link to archive.org) - trans. Lewis Campbell, M.A., LL.D., London, 1902. In Books II & III Plato (428-347 BC) develops the dietary ideas of Pythagoras.
- Diogenes (?412-?323 BC)
- Aristotle (384-322 BC)
- Theophrastus (?372-?287 BC)
- Epicurus (341-270 BC)
- Cicero (Roman) (106-43 BC)
- Ovid (Publius Ovidius Naso) (43 BC - AD 17)
- Ovid's Metamorphoses (link to archive.org) - by Publius Ovidius Naso (43 BC - AD 17). This edition pub. London, 1822. Book 15, p. 516 is a biography of Pythagoras. p. 519: 'He first forbid animal food to be served up at the tables'
- Seneca (c. 5 BC - AD 65)
- Seneca's Morals (link to archive.org) - by Lucius Annaeus Seneca (c. 5 BC - AD 65) - trans. Sir Roger L'Estrange, New York, c. 1870. "I gave over eating of flesh", p. 110
- Plutarch (Greek) (c. AD 46-c. 120)
- Plutarch's Morals Vol. 5 (link to archive.org) - by Plutarch (c. AD 46-c. 120) - edited by W. W. Goodwin, Ph.D., Harvard, 1878. Includes the essay 'Of Eating Flesh'
- Plotinus (Roman) (?205-?270)
- Porphyry (Greek) (233 - ?)
- Iamblichus (Greek) (c. 250-c. 325)
- Emperor Julianus (331-363 AD)
- Christian and Western Literature, 5th Century to 16th Century
- Animal Minds and Human Morals: The Origins of the Western Debate by Richard Sorabji - early Greek/Christian philosophy.
The two candidate machines, the IBM AP-101 (a derivative of the 4 Pi technology used in various military and NASA flight programs) and the Singer-Kearfott SKC-2000 (then a candidate for the B-1A program), were both judged to require extensive modification before being considered adequate. To understand the configuration and makeup of the Space Shuttle avionics system, it is necessary to understand the technological environment of the early seventies. In the approximately 16 years since the inception of the system, computers and the associated technology have undergone four generations of change. If the system designers were operating in today's environment, a much different set of design choices and options would be available and, quite possibly, a different configuration would have resulted. This section is intended to familiarize the reader with the designer's world during the formative stages of the system, with the technology available, and with the pressure of factors other than technology which influenced the result. Although the state of technology was a major factor (and limitation) in the design of the avionics system, the effect of other factors was also significant. These include influences arising from traditional, conservative attitudes, as well as those associated with the environment in which the system was to operate. In any development program, a new approach or technique is correctly perceived to have unknown risks with potential cost and schedule implications and is to be avoided whenever possible. In addition, the designers, the flightcrew, and other operational users of the system often have a mindset, established in a previous program or experience, which results in a bias against new or different, "unconventional" approaches. Finally, the environment in which the system is to function must be considered. For instance, a new technique proposed for a system may not be viable if it requires a major change in the associated ground support complex.
In the following paragraphs, a number of subsystem or functional areas are examined in the context of one or more of these factors. In the early seventies, only two avionics computers under development were considered potentially capable of performing the Space Shuttle task: the IBM AP-101 and the Singer-Kearfott SKC-2000, described above. No suitable off-the-shelf microcomputers were then available (no Z80's, 8086's, 68000's, etc.). Large-scale integrated-circuit technology was emerging but not considered mature enough for Space Shuttle use. Very little was known about the effects of lightning or radiation on high-density solid-state circuitry. Core memory was the only reasonably available choice for the Space Shuttle Orbiter computers; therefore, the memory size was limited by power, weight, and heat constraints. Data bus technology for real-time avionics systems was emerging but could not be considered operational. The U.S. Air Force (USAF) was developing MIL-STD-1553, the data bus standard, but it would not become official until 1975. All previous systems had used bundles of wires, each dedicated to a single signal or function. The use of tape units for software program mass storage in a dynamic environment was limited and suspect, especially for program overlays while in flight. Software design methodology was evolving rapidly with the emerging use of top-down, structured techniques. No high-order language tailored for aerospace applications existed, although NASA was in the process of developing a high-order software language for Shuttle (HAL/S), which subsequently became the Space Shuttle standard. In all manned space programs preceding the Space Shuttle (Mercury, Gemini, and Apollo), fly-by-wire control systems were used for vehicle attitude and translation control. Although digital autopilots were developed for Apollo spacecraft, analog control systems were also included and considered necessary for backup.
Aircraft flight control technology, however, had not advanced beyond the use of mechanical systems, augmented with hydraulic boost on large airplanes. Most aircraft applications of electronics in the flight control system used limited-authority analog stability-augmentation devices to improve aerodynamic handling qualities. Autopilots were also analog devices and also given limited authority. Neither the stability-augmentation function nor the autopilot was considered critical for safe flight when implemented in these configurations. The flight control hardware and subsystems were kept functionally and electrically separate from other electronic systems to the extent possible. Sophisticated guidance and navigation schemes and algorithms had been developed and used in the Apollo Program; therefore, the technology base appeared adequate for the Space Shuttle in these disciplines. Although a new guidance and navigation challenge was posed by the entry through landing phase, no state-of-the-art advances were deemed necessary. The pilot input devices in general use for aircraft control were a stick or a yoke/wheel for roll and pitch, and rudder pedals for yaw. When hydraulic boost was used, elaborate sensing devices were included to provide the correct feedback to the pilot. Hand controllers without feedback and with only electrical outputs had been used in previous manned space programs; however, the application did not involve aerodynamic flight. Switches, pushbuttons, and other input devices were typically hardwired to the function, the box, or the subsystem that required the input. Displays were also hardwired, were generally mechanical, and were dedicated to the function served. Off-the-shelf horizontal and vertical situation displays, although electronically driven, utilized a mechanical presentation. Electronic attitude and directional indicator (EADI) technology was emerging but not in common use. Heads-up displays (HUD's) were also just emerging. 
The concept of multifunctional displays was immature and had never been used in an aerospace application. Many of the display and control design issues associated with management of a redundant system had never been addressed. A very capable S-band communications system had been developed for use on the Apollo Program; however, it could not serve the data rate, link margins, and coverage requirements forecast for Space Shuttle operations and experiment support. The NASA had led research in digital voice and sophisticated encoding and decoding techniques, but these had never been proven in an operational system. Solid-state radiofrequency (RF) amplifiers capable of power output sufficient for skin-tracking radar were emerging but also not proven. The Federal Aviation Administration (FAA) was considering an upgrade of the Instrument Landing System (ILS) to one using microwave scanning beam techniques capable of meeting Orbiter landing performance requirements, but no realistic conversion schedule existed. The use of redundant systems to enable operation in the face of failures was common in both aircraft and space applications; however, all previous approaches used primary/backup, active/standby techniques which relied on manual recognition of faults and crew-initiated switchover to the alternate or backup system. Very little was known about the use and management of multiple sensors or other input devices and even less about multiple output devices such as hydraulic actuators. No aerospace project had even contemplated the automation of failure detection and recovery for large systems such as the reaction control system (RCS). The RCS required complex assessments of large numbers of temperature and pressure sensors, correlation with vehicle dynamic response to digital autopilot commands, and a variety of recovery options which depended on factors such as mission phase, propellant quantity, and available thruster configuration. 
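To make the scale of that automation concrete, the following is a minimal, hypothetical sketch of the kind of thruster-level fault assessment described above: compare each jet's commanded state against its chamber pressure and injector temperature sensors, then deselect faulty jets. All jet names, thresholds, and fault categories here are invented for illustration and are not actual Shuttle values or logic.

```python
# Hypothetical sketch of automated RCS failure detection: flag each
# thruster whose sensed pressure/temperature disagrees with its command.
# Thresholds and identifiers are illustrative only.

FAILED_OFF = "failed-off"   # commanded on, but no chamber pressure rise
FAILED_ON = "failed-on"     # commanded off, but pressure indicates firing
LEAK = "leak"               # low injector temperature suggests a propellant leak

def assess_thruster(commanded_on, chamber_pressure_psi, injector_temp_f):
    """Return a fault label for one thruster, or None if it appears healthy."""
    if commanded_on and chamber_pressure_psi < 20.0:
        return FAILED_OFF
    if not commanded_on and chamber_pressure_psi > 20.0:
        return FAILED_ON
    if injector_temp_f < 30.0:
        return LEAK
    return None

def manage_rcs(thrusters):
    """Scan all thrusters; deselect faulty ones and report their faults.

    `thrusters` maps a jet ID to (commanded_on, pressure_psi, temp_f).
    Returns (available_jets, faults).
    """
    available, faults = [], {}
    for jet_id, (cmd, press, temp) in thrusters.items():
        fault = assess_thruster(cmd, press, temp)
        if fault is None:
            available.append(jet_id)
        else:
            faults[jet_id] = fault
    return available, faults
```

Even this toy version hints at the combinatorics involved: the real system also had to correlate sensor readings with vehicle dynamic response and select recovery options by mission phase, propellant quantity, and remaining thruster configuration.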
The system which evolved required the use of techniques which rival those of expert systems being developed today.

NASA Office of Logic Design | Digital Engineering Institute | Last Revised: February 03, 2010 | Web Grunt: Richard Katz
University of Utah physicists invented a new "spintronic" organic light-emitting diode or OLED that promises to be brighter, cheaper and more environmentally friendly than the kinds of LEDs now used in television and computer displays, lighting, traffic lights and numerous electronic devices. "It's a completely different technology," says Z. Valy Vardeny, University of Utah distinguished professor of physics and senior author of a study of the new OLEDs in the July 13, 2012 issue of the journal Science. "These new organic LEDs can be brighter than regular organic LEDs." The Utah physicists made a prototype of the new kind of LED – known technically as a spin-polarized organic LED or spin OLED – that produces an orange color. But Vardeny expects it will be possible within two years to use the new technology to produce red and blue as well, and he eventually expects to make white spin OLEDs. However, it could be five years before the new LEDs hit the market because right now, they operate at temperatures no warmer than about minus 28 degrees Fahrenheit, and must be improved so they can run at room temperature, Vardeny adds. Vardeny developed the new kind of LED with Tho D. Nguyen, a research assistant professor of physics and first author of the study, and Eitan Ehrenfreund, a physicist at the Technion-Israel Institute of Technology in Haifa. The study was funded by the U.S. National Science Foundation, the U.S. Department of Energy, the Israel Science Foundation and U.S.-Israel Binational Science Foundation. The research was part of the University of Utah's new Materials Research Science and Engineering Center, funded by the National Science Foundation and the Utah Science Technology and Research initiative. The Evolution of LEDs and OLEDs The original kind of LEDs, introduced in the early 1960s, used a conventional semiconductor to generate colored light. 
Newer organic LEDs or OLEDs – with an organic polymer or "plastic" semiconductor to generate light – have become increasingly common in the last decade, particularly for displays in MP3 music players, cellular phones and digital cameras. OLEDs also are expected to be used increasingly for room lighting. Big-screen TVs with existing OLEDs will hit the market later this year. The new kind of OLED invented by the Utah physicists also uses an organic semiconductor, but isn't simply an electronic device that stores information based on the electrical charges of electrons. Instead, it is a "spintronic" device – meaning information also is stored using the "spins" of the electrons. Invention of the new spin OLED was made possible by another device – an "organic spin valve" – the invention of which Vardeny and colleagues reported in the journal Nature in 2004. The original spin-valve device could only regulate electrical current flow, but the researchers expected they eventually could modify it to also emit light, making the new organic spin valve a spin OLED. "It took us eight years to accomplish this feat," Vardeny says. Spin valves are electrical switches used in computers, TVs, cell phones and many other electrical devices. They are so named because they use a property of electrons called "spin" to transmit information. Spin is defined as the intrinsic angular momentum of a particle. Electron spins can have one of two possible directions, up or down. Up and down can correlate to the zeroes and ones in binary code. Organic spin valves consist of three layers: an organic layer that acts as a semiconductor and is sandwiched between two metal electrodes that are ferromagnets. In the new spin OLED, one of the ferromagnet metal electrodes is made of cobalt and the other one is made of a compound called lanthanum strontium manganese oxide.
The organic layer in the new OLED is a polymer known as deuterated-DOO-PPV, which is a semiconductor that emits orange-colored light. The whole device is 300 microns wide and long – or the width of three to six human hairs – and a mere 40 nanometers thick, which is roughly 1,000 to 2,000 times thinner than a human hair. A low voltage is used to inject negatively charged electrons and positively charged "electron holes" through the organic semiconductor. When a magnetic field is applied to the electrodes, the spins of the electrons and electron holes in the organic semiconductor can be manipulated to align either parallel or antiparallel. Two Advances Make New Kind of Organic LEDs Possible In the new study, the physicists report two crucial advances in the materials used to create "bipolar" organic spin valves that allow the new spin OLED to generate light, rather than just regulate electrical current. Previous organic spin valves could only adjust the flow of electrical current through the valves. The first big advance was the use of deuterium instead of normal hydrogen in the organic layer of the spin valve. Deuterium is "heavy hydrogen" or a hydrogen atom with a neutron added to regular hydrogen's proton and electron. Vardeny says the use of deuterium made the production of light by the new spin OLED more efficient. The second advance was the use of an extremely thin layer of lithium fluoride deposited on the cobalt electrode. This layer allows negatively charged electrons to be injected through one side of the spin valve at the same time as positively charged electron holes are injected through the opposite side. That makes the spin valve "bipolar," unlike older spin valves, into which only holes could be injected. It is the ability to inject electrons and holes at the same time that allows light to be generated. When an electron combines with a hole, the two cancel each other out and energy is released in the form of light.
"When they meet each other, they form 'excitons,' and these excitons give you light," Vardeny says. By injecting electrons and holes into the device, it supports more current and has the ability to emit light, he says, adding that the intensity of the new spintronic OLEDs can be controlled with a magnetic field, while older kinds require more electrical current to boost light intensity. Existing OLEDs each produce a particular color of light – such as red, green and blue – based on the semiconductor used. Vardeny says the beauty of the new spin OLEDs is that, in the future, a single device may produce different colors when controlled by changes in magnetic field. He also says devices using organic semiconductors are generally less expensive and are manufactured with less toxic waste than conventional silicon semiconductors. University of Utah: http://www.unews.utah.edu/
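As general background on why a spin valve's two magnetization states behave differently, the textbook two-current (Mott) model treats spin-up and spin-down electrons as parallel conduction channels whose scattering swaps when one electrode's magnetization flips. This is a generic giant-magnetoresistance illustration, not the specific physics of the Utah device, and the resistance values are arbitrary:

```python
# Textbook two-current model of a spin valve (generic illustration, not
# the device physics reported in this study). Each spin species conducts
# in its own channel; flipping one electrode's magnetization exchanges
# which channel sees majority vs. minority scattering.

def parallel(r1, r2):
    """Combined resistance of two channels conducting in parallel."""
    return r1 * r2 / (r1 + r2)

def spin_valve_resistances(r_majority, r_minority):
    """Return (R_parallel, R_antiparallel) for aligned vs. opposed electrodes."""
    # Parallel magnetizations: one fast channel (majority scattering in
    # both layers) and one slow channel (minority in both layers).
    r_p = parallel(2 * r_majority, 2 * r_minority)
    # Antiparallel: each channel sees one majority and one minority layer.
    r_ap = parallel(r_majority + r_minority, r_majority + r_minority)
    return r_p, r_ap

def magnetoresistance_ratio(r_majority, r_minority):
    """GMR ratio (R_AP - R_P) / R_P; larger means a stronger valve effect."""
    r_p, r_ap = spin_valve_resistances(r_majority, r_minority)
    return (r_ap - r_p) / r_p
```

The asymmetry between the two states is what lets a magnetic field act as a control knob; in the spin OLED the same parallel/antiparallel alignment of injected carriers influences light emission rather than just current flow.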
November Health Awareness American Diabetes Month® The vision of the American Diabetes Association is a life free of diabetes and all of its burdens. Raising awareness of this ever-growing disease is one of the main efforts behind the mission of the Association. American Diabetes Month® (ADM) is an important element in this effort, with programs designed to focus the nation's attention on the issues surrounding diabetes and the many people who are impacted by the disease. Here are just a few of the recent statistics on diabetes: - Nearly 26 million children and adults in the United States have diabetes. - Another 79 million Americans have pre-diabetes and are at risk for developing type 2 diabetes. - The American Diabetes Association estimates that the total national cost of diagnosed diabetes in the United States is $174 billion. American Diabetes Month takes place each November and is a time to come together as a community to Stop Diabetes®. Please visit this link to become a part of the movement. Lung Cancer Awareness Month November is lung cancer awareness month! This is a national campaign that brings the lung cancer community together to raise awareness and increase attention to this disease. Click on the following links to see how you can help in your community. National Healthy Skin Month National Healthy Skin Month is sponsored by the American Academy of Dermatology, which has as its mission the achievement of the highest quality dermatologic care for all Americans. For additional information, please click on the following link.
Releasing the study, Dr Javaid Rahi, National Secretary of TRCF, said that new research has proved that the Gujjar race had been one of the most vibrant identities of Central Asia in the BC era, and later ruled over many princely states in northern India for hundreds of years, and also left their imprints in the Himalayan ranges, inscribing them in such a way that they could not be destroyed even thousands of years later. The study further revealed that the 5000-year history of the Gujjars, unexpectedly, is similar to that of the tribes of Turkish origin, who left for Koh-e-Kaf during the era of Christ along with their camels and other domestic animals. The study said that in the state of Jammu and Kashmir, ‘Turk’ (Gotra) is one of the most important castes of the Gujjars, and hundreds of Turk Gujjars reside in different districts of the Kashmir Valley. The study said that in Gojri, there are a number of words which are Turkish in origin, thereby linking the history of the Gujjars with that of the Turks. What is also surprising is that the tribal folk art and costumes of nomadic Gujjars still resemble those of the Turkish tribes. The anthropological study said that, amazingly, the physical features and facial expressions of Gujjars resemble those of Turkish tribals. The study further said that one Ger (Gujjar) Khan of Turkistan was a commander in Babur’s army, and he had done remarkable work in binding various ethnic groups together. Dr Rahi further explained that the TRCF is in correspondence with, and seeks help from, Turkish missions in India with regard to conducting genetic surveys of Gujjars to establish their roots in Central Asia, from where they are believed to have migrated to different parts of the world, especially the sub-continent. The study further said that in Central Asia, places like Gurjarni, Gujari Pil, and Gujreti are named after the Gujjar clans or ‘Gots’, linking this ethnic group with its roots.
The study suggested that scholars, anthropologists, and historians from countries like Turkey, Georgia, Iran, Pakistan, and India should come forward and study this connection through anthropological, archaeological, and historical evidence. The conclusion of the study stressed the need for inter-disciplinary, in-depth research at the national and international level, with the help of these countries, to study the rich culture of these nomads.