Numerous works relate to the use of unconventional feed resources, and particularly of Mucuna spp., in poultry diets. This review aims to describe the context of their use, their nutritional value and the constraints on their upgrading, before considering the effects of the various treatment methods on reducing the toxic substances they may contain and on their chemical composition. The treatment methods are highly variable, and their standardisation should make it possible to use them in rural areas. These feeds could thus constitute an alternative to the costly conventional feeds usually used in poultry production. Administration générale de la Coopération au Développement - AGCD
<urn:uuid:7ae15ad2-9a9e-440d-921d-4db57cbe2002>
CC-MAIN-2013-20
http://orbi.ulg.ac.be/handle/2268/90677
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923702
135
2.59375
3
Kristensen, Hanne L. and Thorup-Kristensen, Kristian (2006) Roots below one meter depth are important for nitrate uptake by annual crops. [Rødder under 1 meters dybde er vigtige for et-årige afgrøders optagelse af nitrat.] In: CD-rom, The American Society of Agronomy - Crop Science Society of America - Soil Science Society of America International Annual Meetings, November 12-16, 2006, Indianapolis, USA, Abstract No. 205-9.

The root depths of annual crops vary from 0.2 m to more than 2 m depending on root growth rate and length of growing season. However, studies of root growth and N uptake are often restricted to a depth of 1 m or less, as root biomass is assumed to be negligible below this depth. We have studied the importance of root growth and N uptake to a depth of 2.5 m in fully grown field vegetables and cover crops by use of minirhizotrons and deep point placement of 15N. Deep-rooted cruciferous crops were found to have high root densities to a depth of 1.5-2 m and high 15N uptake to this depth. The work shows that knowledge of the interactions between root growth and soil N below a depth of 1 m is important to understand crop N uptake and nitrate leaching from agro-ecosystems.

EPrint Type: Conference paper, poster, etc.
Type of presentation: Paper
Subjects: Soil > Nutrient turnover; Crop husbandry > Crop combinations and interactions; Crop husbandry > Production systems > Vegetables
Research affiliation: Denmark > DARCOF III (2005-2010) > VEGQURE - Organic cropping Systems for Vegetable production; Denmark > AU - Aarhus University > AU, DJF - Faculty of Agricultural Sciences
Deposited By: Kristensen, Ph.D. Hanne L.
Deposited On: 28 Nov 2007
Last Modified: 12 Apr 2010 07:35
Refereed: Peer-reviewed and accepted
<urn:uuid:d2b3502d-1c86-4ed7-9c72-a55bc162b976>
CC-MAIN-2013-20
http://orgprints.org/11461/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.787057
527
2.90625
3
We had a running joke in science ed that kids get so overexposed to discrepant events involving density and air pressure that they tend to try to explain anything and everything they don't understand in science in terms of those two concepts. Why do we have seasons? Ummm... air pressure? Why did Dr. Smith use that particular research design? Ummm... density? I think we need another catch-all explanation. I suggest index of refraction.

To simplify greatly, the index of refraction describes the amount of bending a light ray will undergo as it passes from one medium to another (it's also related to the velocity of light in both media, but I do want to keep this simple). If the two media have significantly different indices, light passing from one to the other at an angle (not perpendicularly, in which case there is no bending) will be bent more than if the indices of the two are similar. The first four data points are from Hyperphysics, the final one from Wikipedia... glass has a wide range of compositions and thus indices of refraction.

Water at 20 C: 1.33
Typical soda-lime glass: close to 1.5

Since glycerine and glass have similar indices of refraction, light passing from one to the other isn't bent; as long as both are transparent and similarly colored, each will be effectively "invisible" against the other. So, why does it rain? Umm... index of refraction?
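To make the "amount of bending" concrete, here is a minimal sketch of Snell's law (n₁·sin θ₁ = n₂·sin θ₂) in Python, using the water and glass values quoted above. The function name, the 30° test angle, and the glycerine index (~1.47, an assumed literature value) are my own additions, not from the post:

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Return the refracted-ray angle in degrees, from Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Air (n ~ 1.0) into soda-lime glass (n ~ 1.5): strong bending toward the normal.
print(refraction_angle(1.0, 1.5, 30))
# Glycerine (n ~ 1.47, assumed value) into glass (n ~ 1.5): the ray emerges at
# nearly the same 30 degrees, which is why each is "invisible" inside the other.
print(refraction_angle(1.47, 1.5, 30))
```

Running this shows roughly 19.5° for air-to-glass versus about 29.3° for glycerine-to-glass, matching the "similar indices, little bending" point.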
<urn:uuid:7eeb7ef3-3122-42f0-86c8-01da8f3d7396>
CC-MAIN-2013-20
http://outsidetheinterzone.blogspot.com/2009/12/index-of-refraction.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944247
313
3.1875
3
By Justin Moyer, The Washington Post — Defense Secretary Leon Panetta signed an order Thursday allowing women the same opportunities as men to serve in combat, including formerly off-limits assignments on attack submarines and in the Navy SEALs. Just two weeks before the announcement, researchers from San Diego's Naval Health Research Center published a study suggesting that some recent mothers deployed on the battlefield may be more prone to depression after seeing action. "Women who deploy and report combat-associated exposures after childbirth are significantly more likely to screen positive for maternal depression than are women who did not deploy after childbirth," concluded the study, titled "Is Military Deployment a Risk Factor for Maternal Depression?" and appearing in the Journal of Women's Health. "It is also possible," the report noted, "that giving birth and leaving a young child, in addition to the experience of combat, contribute to postdeployment depression." The study had eight co-authors, five of them associated with the Naval Health Research Center, a research and development laboratory within the Department of Defense. It was based on surveys of more than 1,600 women who "gave birth during active duty service." Not all branches of the armed forces showed the same results. "Participants who served in the Army had an increased risk of maternal depression; Army service members tend to be deployed longer and more frequently than personnel serving in the Navy and Air Force," the study found. Of course, you don't have to be a mom to experience depression on the front line. The report points out that "the increased rate of depression is primarily attributed to experiencing combat while deployed," not just to whether a soldier is also a parent.
<urn:uuid:5a5aeb9f-2d6c-45df-aca0-8ea28cde7f62>
CC-MAIN-2013-20
http://paulsvalleydailydemocrat.com/community-news-network/x1633465595/Are-mothers-in-combat-more-prone-to-depression/print
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.970372
341
2.515625
3
An Introduction to 127.0.0.1

127.0.0.1 is an IP address used for a loopback network connection. What does this mean? If a user tries to connect to this IP address, they will be sent back to their own computer. The address is also known as the localhost: the localhost is the computer itself.

How the Localhost Works

If a command is relayed to the localhost, you are connected to the very system from which the commands were sent. For instance, suppose the computer is called "Joker". If you telnet from the Joker computer to localhost, it will attempt to connect back to Joker itself; "localhost" is simply used in lieu of the computer's hostname. 127.0.0.1 is the most widely used localhost address, but you can actually use any IP address that starts with 127: the whole 127.*.*.* range can serve as a localhost.

Establishing a connection with the loopback address is similar to establishing a connection with a remote network computer. The only difference is that you don't have to deal with the physical network. For this reason it is widely used by software developers and system administrators, often for testing programs and apps.

On an IPv4 connection, the computer's loopback addresses are 127.*.*.*, typically with a subnet mask of 255.0.0.0. These addresses are defined in RFC 3330, "Special-Use IPv4 Addresses", which designates the 127.0.0.0/8 block as the Internet host loopback address: if a higher-level protocol sends a datagram anywhere in the block, it is looped back inside the host. This is typically implemented using 127.0.0.1/32 for loopback. Addresses in the block must not be visible anywhere else in the network. There is also an IPv6 version of the localhost, ::1/128, defined in RFC 3513, "Internet Protocol Version 6 (IPv6) Addressing Architecture".

More Information about the Localhost

In simple terms, the localhost means the computer. It is the hostname allocated to the loopback network interface address. The name is likewise a reserved domain name, which helps prevent confusion with other hostname definitions. In IPv6, the loopback IP address is ::1.

"localhost" is stated where one would usually use the computer's hostname. For instance, pointing a browser at a local HTTP server via http://localhost will show the local website's home page, provided the server is set up properly to listen on the loopback interface. The loopback address can also be used for connecting to a locally hosted game server, and for various forms of inter-process communication. These facts about 127.0.0.1 show how fundamental and basic the localhost is to a system, and why it is so crucial to networking.
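As a minimal illustration of "connecting to yourself" over the loopback interface, the sketch below binds a TCP server to 127.0.0.1 and connects a client to it from the same process. The message text and variable names are my own; port 0 is a standard way to let the OS pick any free port:

```python
import socket
import threading

# Bind a listening socket to the loopback address; port 0 asks the OS
# to assign any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    # Accept one connection and send a greeting; the traffic never
    # leaves the host.
    conn, _addr = server.accept()
    conn.sendall(b"hello from localhost")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Connect to our own machine exactly as we would to a remote host.
client = socket.create_connection(("127.0.0.1", port))
data = b""
while True:
    chunk = client.recv(1024)
    if not chunk:
        break
    data += chunk
print(data.decode())
client.close()
t.join()
server.close()
```

This is exactly the pattern developers use to test network code without touching real network hardware.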
<urn:uuid:0cfb12fb-ebfd-4e6a-8720-c551d7e97801>
CC-MAIN-2013-20
http://pdfcast.org/pdf/a-guide-to-localhost
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.893844
644
3.8125
4
Municipal incorporation occurs when a municipality becomes a self-governing entity under the laws of the state or province in which it is located. Often, this event is marked by the award or declaration of a municipal charter. With the notable exception of the City of London Corporation, the term has fallen out of favour in the United Kingdom, but the concept remains central to local government there, as well as in former British colonies such as India and Canada.

Municipal charters

A city charter or town charter (generically, municipal charter) is a legal document establishing a municipality such as a city or town. The concept developed in Europe during the Middle Ages and is considered to be a municipal version of a constitution. Traditionally, the granting of a charter gave a settlement and its inhabitants the right to town privileges under the feudal system. Townspeople who lived in chartered towns were burghers, as opposed to serfs, who lived in villages. Towns were often "free", in the sense that they were directly protected by the king or emperor and were not part of a feudal fief.

Today the process for granting charters is determined by the type of government of the state in question. In monarchies, charters are often still royal charters, given by the Crown or by state authorities acting on behalf of the Crown. In federations, the granting of charters may be within the jurisdiction of a lower level of government, such as a state or province.

By country

In Brazil, municipal corporations are called municípios and are created by means of local legislation at state level, or after a referendum vote of the affected population. All municipal corporations must also abide by an Organic Municipal Law, which is passed and amended (when needed) at municipal level.

In Canada, charters are granted by provincial authorities.
In Germany, municipal corporations have existed since antiquity and through medieval times, until they fell out of favour during the era of absolutism. To strengthen public spirit, the Prussian city law of 19 November 1808 revived the concept, and it is the basis of today's municipal law.

In India, a Municipal Corporation is a local government body that administers a city with a population of 1,000,000 or more. Under the panchayati raj system, it interacts directly with the state government, though it is administratively part of the district in which it is located. The largest Municipal Corporations in India are currently Mumbai, followed by Delhi, Kolkata, Bangalore, Chennai, Hyderabad, Ahmedabad, Surat and Pune. The Corporation of Chennai is the oldest Municipal Corporation in the world outside the UK. A Municipal Corporation consists of members elected from the wards of the city; the Mayor and Deputy Mayor are elected by the public. A Municipal Commissioner, drawn from the Indian Administrative Service, is appointed to head the administrative staff of the Municipal Corporation, implement the decisions of the Corporation and prepare its annual budget. The Municipal Corporation is responsible for roads, public transportation, water supply, records of births and deaths (delegated from the central government under the Births and Deaths Registration Act), sanitation (including waste management, sewage, drainage and flood control), public safety services such as fire and ambulance services, gardens and the maintenance of buildings. The Corporation's sources of income are property tax, entertainment tax, octroi (now abolished in many cities) and usage fees for utilities.

Republic of Ireland

In Ireland, municipal corporations existed in boroughs from medieval times. The Corporation of Dublin, officially styled the Right Honourable the Lord Mayor, Aldermen, and Burgesses of the City of Dublin, had existed since the 13th century. Corporations were established under the royal charter establishing the city or borough. The Municipal Corporations (Ireland) Act 1840 abolished all but ten of the boroughs and their corporations. The Local Government (Ireland) Act 1898 created two different types of borough: county boroughs, with essentially equal status to counties (Dublin, Cork, Limerick and Waterford, as well as Belfast and Derry, which are now in Northern Ireland), and non-county boroughs. The Local Government Act 2001 abolished the title of municipal corporation. Corporations of county boroughs (renamed cities) were renamed City Councils. Non-county boroughs were abolished, but towns that were previously non-county boroughs were allowed to use the title of Borough Council. Royal charters remain in force for ceremonial and civic purposes only.

South Africa

Philippines

From the beginning of American colonial rule, Philippine cities were formally established through laws enacted by the various national legislatures in the country. The Philippine Commission gave the city of Manila its charter in 1901, while the city of Baguio was established by the Philippine Assembly, which was composed of elected members rather than appointed ones. During the Commonwealth era, the National Assembly established an additional ten cities. Since achieving independence from the United States in 1946, the Philippine Congress has established 124 more cities (as of September 2007), the majority of which required a plebiscite within the proposed city's jurisdiction to ratify the city's charter.

United Kingdom

United States

In the United States, such municipal corporations are established by charters that are granted either directly by a state legislature by means of local legislation, or indirectly under a general municipal corporation law, usually after the proposed charter has passed a referendum vote of the affected population.
<urn:uuid:ff2d2b6b-aa78-4bda-ba69-e90f18d28bfb>
CC-MAIN-2013-20
http://pediaview.com/openpedia/Municipal_corporation
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969311
1,116
4.09375
4
Gallium metal is silver-white and melts at approximately body temperature (Wikipedia image).

Atomic Number: 31
Atomic Symbol: Ga
Atomic Weight: 69.72
Electron Configuration: [Ar] 4s² 3d¹⁰ 4p¹
Atomic Radius: 187 pm (Van der Waals)
Melting Point: 29.76 °C
Boiling Point: 2204 °C
Oxidation States: 3

From the Latin word Gallia, France; also from the Latin gallus, a translation of "Lecoq", a cock. Predicted and described by Mendeleev as eka-aluminum, and discovered spectroscopically by Lecoq de Boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in KOH.

Gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. Some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium. It is one of four metals -- the others being mercury, cesium, and rubidium -- that can be liquid near room temperature and thus can be used in high-temperature thermometers. It has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures. There is a strong tendency for gallium to supercool below its freezing point, so seeding may be necessary to initiate solidification.

Ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. The metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies. High-purity gallium is attacked only slowly by mineral acids.

Gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. It is widely used in doping semiconductors and producing solid-state devices such as transistors. Magnesium gallate containing divalent impurities, such as Mn²⁺, is finding use in commercial ultraviolet-activated powder phosphors. Gallium arsenide is capable of converting electricity directly into coherent light. Gallium readily alloys with most metals and has been used as a component in low-melting alloys. Its toxicity appears to be of a low order, but it should be handled with care until more data are available.
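The quoted 3.1 percent expansion on solidifying can be sanity-checked from density data. The liquid and solid densities below (roughly 6.095 and 5.91 g/cm³ near the melting point) are approximate literature values added here for illustration; they are not from the original text:

```python
# Approximate densities of gallium near its melting point, in g/cm^3
# (assumed literature values, added for illustration only).
rho_liquid = 6.095
rho_solid = 5.91

# For a fixed mass m, volume V = m / rho, so the fractional volume
# change on freezing is (V_solid - V_liquid) / V_liquid.
expansion_pct = (1 / rho_solid - 1 / rho_liquid) / (1 / rho_liquid) * 100
print(f"volume increase on solidifying: {expansion_pct:.1f}%")
```

The computed figure comes out close to the 3.1 percent quoted above, which is exactly why a rigid, completely filled container risks cracking as the metal freezes.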
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
CC-MAIN-2013-20
http://periodic.lanl.gov/31.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.892846
546
3.46875
3
From Oxford University Press: There is a broad consensus among scholars that the idea of human rights was a product of the Enlightenment, but that a self-conscious and broad-based human rights movement focused on international law only began after World War II. In this narrative, the nineteenth century's absence is conspicuous: few have considered that era seriously, much less written books on it. But as Jenny Martinez shows in this novel interpretation of the roots of human rights law, the foundation of the movement that we know today was a product of one of the nineteenth century's central moral causes: the movement to ban the international slave trade. Originating in England in the late eighteenth century, abolitionism achieved remarkable success over the course of the nineteenth century. Martinez focuses in particular on the international admiralty courts, which tried the crews of captured slave ships. The courts, which were based in the Caribbean, West Africa, Cape Town, and Brazil, helped free at least 80,000 Africans from captured slavers between 1807 and 1871. Here then, buried in the dusty archives of admiralty courts, ships' logs, and the British foreign office, are the foundations of contemporary human rights law: international courts targeting states and non-state transnational actors while working on behalf of the world's most persecuted peoples, captured West Africans bound for the slave plantations of the Americas. Fueled by a powerful thesis and novel evidence, Martinez's work will reshape the fields of human rights history and international human rights law.

- Forces us to fundamentally rethink the origins of human rights activism
- Filled with fascinating stories of captured slave ship crews brought to trial across the Atlantic world in the nineteenth century
- Shows how the prosecution of the international slave trade was crucial to the development of modern international law
<urn:uuid:ec191471-6e59-4d54-afce-3997eab364e0>
CC-MAIN-2013-20
http://pesd.stanford.edu/publications/slavetrade_humanrightslaw/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945528
358
3.046875
3
Phantom Phone Calls

ospri.net - Alleged contact with the dead has occurred universally throughout history, taking various forms: as dreams, waking visions and auditory hallucinations, either spontaneous or induced through trance. In many cultures, the spirits of the dead have been sought for their wisdom, advice and knowledge of the future. The dead also seem to initiate their own communication, using whatever means seem to be most effective. With the advent of electromagnetic technology, mysterious messages have been communicated by telegraph, wireless, phonograph and radio. A curious phenomenon of modern times is communication via the telephone. Phone calls from the dead seem to be random and occasional occurrences that happen without explanation. The great majority are exchanges between persons who shared a close emotional tie while both were living: spouses, parents and children, siblings, and occasionally friends and other relatives. Most communications are "intention" calls, initiated by the deceased to impart a message, such as a farewell upon death, a warning of impending danger, or information the living need to carry out a task. For example, actress Ida Lupino's father, Stanley, who died intestate in London during World War II, called Lupino six months after his death to relate information concerning his estate: the location of some unknown but important papers. Some calls appear to have no other purpose than to make contact with the living; many of these occur on emotionally charged "anniversary" days, such as Mother's Day or Father's Day, a birthday or a holiday. In a typical "anniversary" call, the dead may do nothing more than repeat a phrase over and over, such as "Hello, Mom, is that you?" Persons who have received phone calls from the dead report that the voices are exactly the same as when the deceased was living; furthermore, the voice often uses pet names and words.
The telephone usually rings normally, although some recipients say that the ring sounded flat and abnormal. In many cases, the connection is bad, with a great deal of static and line noise, and occasionally the faint voices of other persons are heard, as though lines have been crossed. In many cases, the voice of the dead one is difficult to hear and grows fainter as the call goes on. Sometimes the voice just fades away but the line remains open, and the recipient hangs up after giving up on further communication. Sometimes the call is terminated by the dead and the recipient hears the click of disengagement; other times, the line simply goes dead. The phantom phone calls typically occur when the recipient is in a passive state of mind. If the recipient knows the caller is dead, the shock is great and the phone call very brief; invariably, the caller terminates the call after a few seconds or minutes, or the line goes dead. If the recipient does not know the caller is dead, a lengthy conversation of up to 30 minutes or so may take place, during which the recipient is not aware of anything amiss. In a minority of cases, the call is placed person-to-person, long-distance, with the assistance of a mysterious operator. Checks with the telephone company later turn up no evidence of a call being placed. Similar to phone calls from the dead are "intention" phone calls occurring between two living persons. Such calls are much rarer than calls from the dead. In a typical "intention" call, the caller thinks about making the call but never does; the recipient nevertheless receives a call. In some cases, emergencies precipitate phantom calls: a surgeon is summoned by a nurse to the hospital to perform an emergency operation, a priest is called by a "relative" to give last rites to a dying man, and so forth. Some persons who claim to have had UFO encounters report receiving harassing phantom phone calls.
The calls are received soon after the witness returns home, or within a day or two of the encounter; in many cases, the calls come before the witness has shared the experience with anyone, and, stranger still, they are often placed to unlisted phone numbers. The unidentified caller warns the witness not to talk and to "forget" what he or she saw. Phone calls allegedly may be placed to the dead as well. The caller does not find out until sometime after the call that the person on the other end has died. In one such case, a woman dreamed of a female friend she had not seen for several years. In the disturbing dream, she witnessed the friend sliding down into a pool of blood. Upon awakening, she worried that the dream was a portent of trouble, and called the friend. She was relieved when the friend answered. The friend explained that she had been in the hospital, had been released and was due to be readmitted in a few days. She demurred when the woman offered to visit, saying she would call later. The return call never came. The woman called her friend again, only to be told by a relative that the friend had been dead for six months at the time the conversation took place. In several cases studied by researchers, the deceased callers make reference to an anonymous "they" and caution that there is little time to talk. The remarks imply that communication between the living and the dead is not only difficult but not necessarily desirable. Most phone calls from the dead occur within 24 hours of the death of the caller. Most short calls come from those who have been dead seven days or less; most lengthy calls come from those who have been dead several months. One of the longest death-intervals on record is two years. In a small number of cases, the callers are strangers who say they are calling on behalf of a third party, whom the recipient later discovers is dead. Several theories exist as to the origin of phantom phone calls.
(1) They are indeed placed by the dead, who somehow manipulate the telephone mechanisms and circuitry; (2) they are deceptions of elemental-type spirits who enjoy playing tricks on the living; (3) they are psychokinetic acts caused subconsciously by the recipient, whose intense desire to communicate with the dead creates a type of hallucinatory experience; (4) they are entirely fantasies created by the recipient. For the most part, phantom phone calls are not seriously regarded by parapsychologists. In the early 20th century, numerous devices were built by investigators in hopes of capturing ghostly voices; many of them were modifications of the telegraph and wireless. Thomas Alva Edison, whose parents were Spiritualists, believed that a telephone could be invented that would connect the living to the dead. He verified that he was working on such a device, but apparently it was never completed before his death. "Psychic telephone" experiments were conducted in the 1940s in England and America. Interest in the phenomenon waned until the 1960s, following the findings of Konstantin Raudive that ghostly voices could be captured on electromagnetic tape.
<urn:uuid:c6d4bada-2535-41ff-b2c6-f0357a52e392>
CC-MAIN-2013-20
http://phantomuniverse.blogspot.com/2010/02/phone-calls-from-beyond.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.975881
1,417
2.59375
3
Researchers at UT Southwestern Medical Center have found that fluctuations in internal body temperature regulate the body's circadian rhythm, the 24-hour cycle that controls metabolism, sleep and other bodily functions. A light-sensitive portion of the brain called the suprachiasmatic nucleus (SCN) remains the body's "master clock" that coordinates the daily cycle, but it does so indirectly, according to a study published by UT Southwestern researchers in the Oct. 15 issue of Science. The SCN responds to light entering the eye, and so is sensitive to cycles of day and night. While light may be the trigger, the UT Southwestern researchers determined that the SCN transforms that information into neural signals that set the body's temperature. These cyclic fluctuations in temperature then set the timing of cells, and ultimately tissues and organs, to be active or inactive, the study showed. Scientists have long known that body temperature fluctuates in warm-blooded animals throughout the day on a 24-hour, or circadian, rhythm, but the new study shows that temperature actually controls body cycles, said Dr. Joseph Takahashi, chairman of neuroscience at UT Southwestern and senior author of the study. "Small changes in body temperature can send a powerful signal to the clocks in our bodies," said Dr. Takahashi, an investigator with the Howard Hughes Medical Institute. "It takes only a small change in internal body temperature to synchronize cellular 'clocks' throughout the body." Daily changes in temperature span only a few degrees and stay within normal healthy ranges. This mechanism has nothing to do with fever or environmental temperature, Dr. Takahashi said. This system might be a modification of an ancient circadian control system that first developed in other organisms, including cold-blooded animals, whose daily biological cycles are affected by external temperature changes, Dr. Takahashi said. 
"Circadian rhythms in plants, simple organisms and cold-blooded animals are very sensitive to temperature, so it makes sense that over the course of evolution, this primordial mechanism could have been modified in warm-blooded animals," he said. In the current study, the researchers focused on cultured mouse cells and tissues, and found that genes related to circadian functions were controlled by temperature fluctuations. SCN cells were not temperature-sensitive, however. This finding makes sense, Dr. Takahashi said, because if the SCN, as the master control mechanism, responded to temperature cues, a disruptive feedback loop could result, he said.
<urn:uuid:896eff09-96fc-4a88-806f-0afe2beec059>
CC-MAIN-2013-20
http://phys.org/news/2010-10-temperature-rhythms-body-clocks-sync.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936214
528
3.671875
4
If superparticles were to exist, the decay would happen far more often. This test is one of the "golden" tests for supersymmetry, and it is one that, on the face of it, this hugely popular theory among physicists has failed. Prof Val Gibson, leader of the Cambridge LHCb team, said that the new result was "putting our supersymmetry theory colleagues in a spin". The results are in fact completely in line with what one would expect from the Standard Model. There is already concern that the LHCb's sister detectors might have been expected to detect superparticles by now, yet none have been found so far. This certainly does not rule out SUSY, but it is approaching the same level as cold fusion if a positive experimental result does not come soon.
<urn:uuid:72def0d3-296d-49d8-bdf5-73c351dd6672>
CC-MAIN-2013-20
http://physicsandphysicists.blogspot.com/2012/11/more-results-not-in-favor-of-susy.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.973009
163
2.6875
3
When the world was still young, some 3 500 million years ago, molten rock forced its way through the earth's crust and solidified to form the spectacular granite outcrops where Pretoriuskop Rest Camp is now nestled. The impressive granite dome known as "Shabeni Hill" is not far from the camp, which is found in the south-western corner of the Kruger National Park. It is immediately apparent to any visitor that Pretoriuskop is unique as brilliant red trees adorn the camp, pre-dating the decision to make exclusive use of indigenous plants in laying out rest camp gardens. Nostalgia prompted an exception to the rule for Pretoriuskop, the Kruger National Park's oldest rest camp, and exotic flowering plants were allowed to stay, enhancing the strong sense of the past that is so pervasive.
<urn:uuid:ef5e50af-1d85-4468-9975-00dcebbaa0ff>
CC-MAIN-2013-20
http://plak.co.za/moreinfo/25232/wesfleur-hospital
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942431
180
2.90625
3
Major Section: BREAK-REWRITE

Example:
(brr@ :target)      ; the term being rewritten
(brr@ :unify-subst) ; the unifying substitution

General Form:
(brr@ :symbol)

where :symbol is one of the following keywords. Those marked with * probably require an implementor's knowledge of the system to use effectively. They are supported but not well documented. More is said on this topic following the table.

:target
    the term to be rewritten. This term is an instantiation of the left-hand side of the conclusion of the rewrite-rule being broken. This term is in translated form! Thus, if you are expecting (equal x nil) -- and your expectation is almost right -- you will see (equal x 'nil); similarly, instead of (cadr a) you will see (car (cdr a)). In translated forms, all constants are quoted (even nil, t, strings and numbers) and all macros are expanded.

:unify-subst
    the substitution that, when applied to :target, produces the left-hand side of the rule being broken. This substitution is an alist pairing variable symbols to translated (!) terms.

:wonp
    t or nil indicating whether the rune was successfully applied. (brr@ :wonp) returns nil if evaluated before :EVALing the rule.

:rewritten-rhs
    the result of successfully applying the rule, or else nil if (brr@ :wonp) is nil. The result of successfully applying the rule is always a translated (!) term and is never nil.

:failure-reason
    some non-nil Lisp object indicating why the rule was not applied, or else nil. Before the rule is :EVALed, (brr@ :failure-reason) is nil. After :EVALing the rule, (brr@ :failure-reason) is nil if (brr@ :wonp) is t. Rather than document the various non-nil objects returned as the failure reason, we encourage you simply to evaluate (brr@ :failure-reason) in the contexts of interest. Alternatively, study the ACL2 function tilde-@-failure-reason-phrase.

:lemma *
    the rewrite rule being broken.
For example, (access rewrite-rule (brr@ :lemma) :lhs) will return the left-hand side of the conclusion of the rule.

:type-alist *
    a display of the type-alist governing :target. Elements on the displayed list are of the form (term type), where term is a term and type describes information about term assumed to hold in the current context. The type-alist may be used to determine the current assumptions, e.g., whether A is a CONSP.

:ancestors *
    a stack of frames indicating the backchain history of the current context. The theorem prover is in the process of trying to establish each hypothesis in this stack. Thus, the negation of each hypothesis can be assumed false. Each frame also records the rules on behalf of which this backchaining is being done and the weight (function symbol count) of the hypothesis. All three items are involved in the heuristic for preventing infinite backchaining. Exception: some frames are ``binding hypotheses'' (equal var term) or (equiv var (double-rewrite term)) that bind variable var to the result of rewriting term.

:gstack *
    the current goal stack. The gstack is maintained by rewrite and is the data structure printed as the current ``path.'' Thus, any information derivable from the :path brr command is derivable from gstack. For example, from gstack one might determine that the current term is the second hypothesis of a certain rewrite rule.

In general, brr@-expressions are used in break conditions, the expressions that determine whether interactive breaks occur when monitored runes are applied. See monitor. For example, you might want to break only those attempts in which one particular term is being rewritten, or only those attempts in which the binding for the variable a is known to be a consp. Such conditions can be expressed using ACL2 system functions and the information provided by brr@.
Unfortunately, digging some of this information out of the internal data structures may be awkward or may, at least, require intimate knowledge of the system functions. But since conditional expressions may employ arbitrary functions and macros, we anticipate that a set of convenient primitives will gradually evolve within the ACL2 community. It is to encourage this evolution that brr@ provides access to the internal data structures.
<urn:uuid:460fe123-8906-4320-9cc8-f581b79ced1f>
CC-MAIN-2013-20
http://planet.plt-scheme.org/package-source/cce/dracula.plt/4/0/docs/BRR_at_.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.868059
976
2.6875
3
Hibiscus rosa-sinensis 'The Path'
Chinese Hibiscus, Tropical Hibiscus

Among the showiest flowering plants. Plants typically bear funnel-shaped blossoms, often with prominent stamens. The many species offer a wide range of flower colors. Probably from tropical Asia; tropical hibiscus has been in cultivation for centuries, and is among the most flamboyant flowering shrubs. It reaches 30 ft. tall and 15 to 20 ft. wide in Hawaii, but more typical size on the mainland is 8 to 15 ft. tall, 5 to 8 ft. wide. Glossy leaves vary somewhat in size and texture depending on variety. Growth habit may be dense and dwarfish or loose and open. Summer flowers are single or double, 4 to 8 in. wide. Colors range from white through pink to red, from yellow and apricot to orange. Individual flowers last only a day, but the plant blooms continuously.

Provide overhead protection where winter lows frequently drop below 30°F/-1°C. Where temperatures go much lower, grow in containers and shelter indoors over winter; or treat as an annual, setting out fresh plants each spring. Hibiscus also makes a good houseplant. This shrub requires excellent drainage; if necessary, improve soil for best drainage or set plants in raised beds or containers. Can be used as screen, espalier, or specimen. To develop good branch structure, prune poorly shaped young plants when you set them out in spring. To keep a mature plant growing vigorously, prune out about a third of old wood in early spring. Pinching out tips of stems in spring and summer increases flower production. All varieties are susceptible to aphids.

There are thousands of selections.

'The Path': Gorgeous, ruffled, single, buttercup yellow flowers with a bright pink center on a bushy, upright shrub that grows 6–8 ft. tall, 4–5 ft. wide.

Large, frilly, single, bright orange flowers with white central eye edged in red. Strong-growing, erec...

Double golden flowers with petals that shade to carmine orange toward base. Plant is bushy and upright...
This 6–8 ft.-tall variety has big, single, soft pink flowers.
<urn:uuid:c5277efa-9a67-4a7c-b0ba-6e91839cf993>
CC-MAIN-2013-20
http://plantfinder.sunset.com/plant-details.jsp?id=1463
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.904095
475
2.796875
3
Outside of the academic environment, a harsh and seemingly ever-growing debate has appeared concerning how the mass media distort the political agenda.

Few would argue with the notion that the institutions of the mass media are important to contemporary politics. In the transition to liberal democratic politics in the Soviet Union and Eastern Europe, the media were a key battleground. In the West, elections increasingly focus on television, with the emphasis on spin and marketing. Democratic politics places emphasis on the mass media as a site for democratic demand and the formation of "public opinion". The media are seen to empower citizens and subject government to restraint and redress. Yet the media are not just neutral observers but are political actors themselves.

The interaction of mass communication and political actors (politicians, interest groups, strategists, and others who play important roles) in the political process is apparent. Under this framework, the American political arena can be characterized as a dynamic environment in which communication, particularly journalism in all its forms, substantially influences the political process and is influenced by it.

According to the theory of democracy, the people rule. The pluralism of different political parties provides the people with "alternatives," and if and when one party loses their confidence, they can support another. The democratic principle of "government of the people, by the people, and for the people" would be nice if it were all so simple. But in a medium-to-large modern state things are not quite like that. Today, several elements contribute to the shaping of the public's political discourse, including the goals and success of public relations and advertising strategies used by politically engaged individuals and the rising influence of new media technologies such as the Internet. A naive assumption of liberal democracy is that citizens have adequate knowledge of political events.
But how do citizens acquire the information and knowledge necessary for them to use their votes other than by blind guesswork? They cannot possibly witness everything that is happening on the national scene, still less at the level of world events. The vast majority are not students of politics. They don't really know what is happening, and even if they did, they would need guidance as to how to interpret what they knew. Since the early twentieth century this role has been fulfilled by the mass media.

Few today in the United States can say that they do not have access to at least one form of the mass media, yet political knowledge is remarkably low. Although political information is available through the proliferation of mass media, various critics argue that events are shaped and packaged, that frames are constructed by politicians and newscasters, and that ownership ties between political actors and the media provide important shorthand cues as to how to interpret and understand the news.

One must not forget another interesting fact about the media. Their political influence extends far beyond newspaper reports and articles of a direct political nature, or television programs connected with current affairs that bear upon politics. In a much more subtle way, they can influence people's thought patterns by other means: "goodwill" stories, pages dealing with entertainment and popular culture, movies, TV "soaps", and "educational" programs. All these types of information form human values, concepts of good and evil, right and wrong, sense and nonsense, what is "fashionable" and "unfashionable," and what is "acceptable" and "unacceptable". These value systems, in turn, shape people's attitudes to political issues, influence how they vote, and therefore determine who holds political power.
<urn:uuid:e07102a9-36b0-40bd-b081-74ec346f923a>
CC-MAIN-2013-20
http://politicstoday.biz/10851/do-mass-media-influence-the-political-behavior-of-citizens/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964322
723
3.203125
3
For many years, UNESCO and China have collaborated closely in the field of world heritage. Among the 35 Chinese properties on the World Heritage List, there are 25 cultural, 6 natural and 4 mixed sites. China is working with the countries of Central Asia (Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan) on a Serial World Heritage Nomination of the Silk Roads.

Like the country itself, China's intangible cultural heritage is extremely vast. The Kun Qu Opera was proclaimed a Masterpiece of the Oral and Intangible Heritage of Humanity in 2001, and the Guqin and its Music in 2003. The Uyghur Muqam of Xinjiang and the Urtiin Duu – Traditional Folk Long Song (the latter submitted together with Mongolia) were awarded this distinction in 2005. A number of field projects have been devoted to endangered languages.

With regard to cultural diversity, the cultural approach to the prevention and treatment of HIV and AIDS is being studied by officials. Crafts that make it possible to maintain traditional techniques - frequently the preserve of women - as well as community economic development are being promoted in some regions. China also collaborates with UNESCO in the area of dialogue through the programme on Intercultural Dialogue in Central Asia. In the framework of this programme, China is a member of the International Institute for Central Asian Studies, which was created to encourage intellectual cooperation among the Member States of the region.
<urn:uuid:fc55b8b0-4ef7-4eb4-bd14-4eaf3e257462>
CC-MAIN-2013-20
http://portal.unesco.org/geography/en/ev.php-URL_ID=2988&URL_DO=DO_TOPIC&URL_SECTION=201.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95087
297
2.9375
3
May 16, 2011

If you fuel your truck with biodiesel made from palm oil grown on a patch of cleared rainforest, you could be putting into the atmosphere 10 times more greenhouse gases than if you'd used conventional fossil fuels. It's a scenario so ugly that, in its worst case, it makes even diesel created from coal (the "coal to liquids" fuel dreaded by climate campaigners the world over) look "green."

The biggest factor determining whether or not a biofuel ultimately leads to more greenhouse-gas emissions than conventional fossil fuels is the type of land used to grow it, says a new study from researchers at MIT. The carbon released when you clear a patch of rainforest is the reason that palm oil grown on that patch of land leads to 55 times the greenhouse-gas emissions of palm oil grown on land that had already been cleared or was not located in a rainforest, said the study's lead author.

The solution to this biofuels dilemma is more research. Unlike solar and wind, it's truly an area in which the world is desperate for scientific breakthroughs, such as biofuels from algae or salt-tolerant salicornia.
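The land-use argument is at bottom a payback calculation: clearing land releases a one-time pulse of carbon that the biofuel's annual fossil-fuel savings must repay before the fuel is a net win. A back-of-the-envelope sketch, with every figure invented for illustration (none taken from the MIT study):

```python
# Back-of-the-envelope carbon-payback sketch. All numbers below are
# illustrative assumptions, not values from the MIT study.

def payback_years(land_carbon_debt_tco2_per_ha, annual_saving_tco2_per_ha):
    """Years of biofuel production needed to repay the carbon released
    when the land was cleared for the crop."""
    return land_carbon_debt_tco2_per_ha / annual_saving_tco2_per_ha

# Clearing tropical forest creates a large one-time carbon debt, so the same
# crop looks radically different depending on where it is planted:
cleared_rainforest = payback_years(700, 7)   # assumed 700 t CO2/ha debt
already_cleared    = payback_years(0, 7)     # no debt: immediate net benefit
print(cleared_rainforest, already_cleared)
```

With these made-up numbers the rainforest-grown fuel takes a century of use before it breaks even, which is the shape of the result the article describes.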
<urn:uuid:15d19448-aa73-495a-802e-5b1e68a460f3>
CC-MAIN-2013-20
http://prn.fm/2011/05/17/christopher-mims-some-biofuels-worse-than-dirtiest-fossil-fuels/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95875
253
3.484375
3
The Bible gives us a clear picture of foolish behavior and its consequences. It's important for us to recognize these traits in others—and in ourselves. Dealing appropriately with people who behave foolishly requires prayer and wisdom. But remember, that foolish person is not in your life by accident, and you can by God's grace respond to him or her in a Christ-like manner.

Characteristics of Foolish Behavior

1. Denying, Disregarding, or Rebelling Against God. The fool says in his heart "There is no God" (Psalm 14:1).

2. Slandering, Lying, Deceiving. The one who conceals hatred has lying lips, and whoever utters slander is a fool.

3. Quick-Tempered. A fool shows his annoyance at once, but a prudent man overlooks an insult (Proverbs 12:16).

4. Acts Impetuously and Without Regard for Consequences. In everything the prudent acts with knowledge, but a fool flaunts his folly (Proverbs 13:16). One who is wise is cautious and turns away from evil, but a fool is reckless and careless.

5. Talks Endlessly, Brags, Spouts Off Frequently. A fool takes no pleasure in understanding, but only in expressing his opinion. The wise lay up knowledge, but the mouth of a fool brings ruin near (Proverbs 10:14). A fool's mouth is his ruin, and his lips are a snare to his soul (Proverbs 18:7).

6. Refuses Advice, Accountability and/or Discipline. A fool despises his father's instruction, but whoever heeds reproof is prudent. A rebuke goes deeper into a man of understanding than a hundred blows into a fool.

7. Handles Money Recklessly. Of what use is money in the hand of a fool, since he has no desire to get wisdom? In the house of the wise are stores of choice food and oil, but a foolish man devours all he has (Proverbs 21:20).

8. Quarrels Frequently, Picks Fights, Is Contentious. Fools get into constant quarrels; they are asking for a beating (Proverbs 18:6 NLT). A fool gives full vent to his anger, but a wise man keeps himself under control.

9.
Lazy, Lacks Focus and Ambition. Foolish people refuse to work and almost starve (Ecclesiastes 4:5). A wise person thinks much about death, while the fool thinks only about having a good time now (Ecclesiastes 7:4). Fools are so exhausted by a little work that they have no strength for even the simplest tasks (Ecclesiastes 10:15).

10. Never Learns from Past Experience. As a dog returns to his vomit, so a fool repeats his folly (Proverbs 26:11). You cannot separate fools from their foolishness, even though you grind them like grain with mortar and pestle (Proverbs 27:22).

How are we to respond to foolish behavior?

1. First and most importantly, we pray for them.

2. Second, watch your attitude and motivation toward these foolish people:

Principle #1 – Don't be surprised if they refuse good advice. Don't waste your breath on fools, for they will despise the wisest advice (Proverbs 23:9).

Principle #2 – Don't give them honor or luxury. It is not fitting for a fool to live in luxury – how much worse for a slave to rule over princes! Like snow in summer or rain in harvest, honor is not fitting for a fool (Proverbs 26:1).

Principle #3 – Don't argue with foolish people. Don't have anything to do with foolish and stupid arguments, because you know they produce quarrels. And the Lord's servant must not quarrel; instead, he must be kind to everyone, able to teach, not resentful (2 Tim. 2:23-24).

Principle #4 – Protect yourself from the resentment and anger caused by foolish people. A stone is heavy and sand is weighty, but the resentment caused by a fool is heavier than both (Proverbs 27:3). Stay away from a foolish man, for you will not find knowledge on his lips (Proverbs 14:7).
<urn:uuid:838d869f-2d2f-432a-b380-608aa039f4f0>
CC-MAIN-2013-20
http://proverbs14verse1.blogspot.com/2012/02/characteristics-of-foolish-behavior.html?showComment=1330368347803
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93071
1,040
2.625
3
A large-scale chemical modification screen identifies design rules to generate siRNAs with high activity, high stability and low toxicity (2009)

The use of chemically synthesized short interfering RNAs (siRNAs) is currently the method of choice to manipulate gene expression in mammalian cell culture, yet improvements in siRNA design are likely to be required for successful application in vivo. Several studies have aimed at improving siRNA performance through the introduction of chemical modifications, but a direct comparison of these results is difficult. We have directly compared the effect of 21 types of chemical modifications on siRNA activity and toxicity in a total of 2160 siRNA duplexes. We demonstrate that siRNA activity is primarily enhanced by favouring the incorporation of the intended antisense strand during RNA-induced silencing complex (RISC) loading by modulation of siRNA thermodynamic asymmetry and engineering of siRNA 3'-overhangs. Collectively, our results provide unique insights into the tolerance for chemical modifications and provide a simple guide to successful chemical modification of siRNAs with improved activity, stability and low toxicity.
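The "thermodynamic asymmetry" rule the abstract refers to can be illustrated with a toy strand-selection check. This is a deliberate simplification I wrote for illustration: the strand whose 5' end sits in the less stably paired duplex end tends to be loaded into RISC as the guide, and here duplex-end stability is crudely proxied by a G/C count over the four terminal base pairs (real design tools use nearest-neighbor free energies). The function name and sequences are invented.

```python
# Toy sketch of the thermodynamic-asymmetry design rule (illustrative only):
# the strand whose 5' end is in the LESS stable duplex end is preferentially
# loaded into RISC as the guide strand. G/C pairs are more stable than A/U,
# so we use a GC count over the four terminal base pairs as a crude proxy.

def gc_stability(pairs: str) -> int:
    """Crude duplex-end stability proxy: number of G/C bases in the segment."""
    return sum(1 for b in pairs if b in "GC")

def favored_guide_strand(sense_5p_to_3p: str) -> str:
    """Which strand a simple asymmetry rule favors as the RISC guide."""
    left_end = gc_stability(sense_5p_to_3p[:4])    # sense 5' / antisense 3' end
    right_end = gc_stability(sense_5p_to_3p[-4:])  # sense 3' / antisense 5' end
    if right_end < left_end:
        return "antisense"   # desired outcome for silencing the target
    if right_end > left_end:
        return "sense"
    return "either"

# stable (GC-rich) left end, weak (AU-rich) right end -> antisense favored
print(favored_guide_strand("GGCGCAAUUAGCAUAUAAA"))
```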
<urn:uuid:b5654d8e-0f08-4bf6-9a7c-6fd03e5fe0e7>
CC-MAIN-2013-20
http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Dalibor+Odad%C5%BEi%C4%87%22/start/0/rows/10/subjectfq/PASSENGER-STRAND+
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.893895
233
2.734375
3
Each year more than 4 million homeless pets are killed as a result of overpopulation, but families who adopt from animal shelters or rescue groups can help preserve these lives and support the growing trend of socially responsible holiday shopping. Best Friends Animal Society encourages families this holiday season to give the precious gift of life by adopting homeless pets rather than buying from breeders, pet stores or online retailers. Also, resist the urge to surprise a friend or family member with a living gift. Choosing the right pet is an extremely personal decision, one that should be made carefully by the adults who will be caring for the animal for its 15- to 20-year lifetime. Instead, offer an adoption gift certificate paired with a basket of pet care items or stuffed animal for the holiday itself, and then let the person or family choose the actual pet that feels right to them. Once you’ve decided to adopt, keep in mind that welcoming a pet into your life is a big decision and requires important preparation. Best Friends offers tips and advice to help make a smooth transition at home: * Determine roles and responsibilities – Before bringing home a new pet, discuss what roles and responsibilities each family member will take on. Who will be in charge of feeding, walks, changing the litter box and taking your pet for regular visits to the vet? Giving each family member a specific task will help everyone feel involved, especially young children. * Prep the house – Adding a pet to the house means adding new items to your shopping lists. For dogs, the basics are a collar and leash, chew toys, a kennel and dog bed. Cats need a litter box and litter, a scratching post and a carrying crate for transportation. Also don’t forget food and toys. * Have your pet spayed/neutered – Spaying or neutering is one of the greatest gifts you can provide your pet and community. 
It not only helps control the overabundance of pets, but can also help prevent medical and behavioral problems from developing. Most shelters include this with the adoption package or can recommend a local veterinarian in your area, so check with the staff at the shelter before you leave. * Research community rules and resources – Do a little research on what identification (tags, microchips, etc.) you might need for your pet. Scout out the local dog parks and runs for future outdoor fun, and make sure you know where emergency vet clinics or animal hospitals are located. * Set limits – Having pre-determined rules will create consistency in training and help make the home a pleasant environment for you and your pet. Will your pet be allowed to snuggle with you in bed or curl up with you on your furniture? Will treats be limited to one a day? It’s important to discuss these questions as a family before your new family member arrives. An estimated 17 million people will be adding pets to their families this year, so this season, help bring some holiday cheer to a homeless pet by adopting your newest companion.
<urn:uuid:cefe5610-f57c-40d2-938e-5283619dbcb4>
CC-MAIN-2013-20
http://queensledger.com/view/full_story/20992104/article-The-holiday-gift-that-keeps-on-giving--opt-to-adopt-a-pet--save-a-life?instance=pets
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934479
614
2.59375
3
Taking Play Seriously

By ROBIN MARANTZ HENIG

Published: February 17, 2008

On a drizzly Tuesday night in late January, 200 people came out to hear a psychiatrist talk rhapsodically about play -- not just the intense, joyous play of children, but play for all people, at all ages, at all times. (All species too; the lecture featured touching photos of a polar bear and a husky engaging playfully at a snowy outpost in northern Canada.)

Stuart Brown, president of the National Institute for Play, was speaking at the New York Public Library's main branch on 42nd Street. He created the institute in 1996, after more than 20 years of psychiatric practice and research persuaded him of the dangerous long-term consequences of play deprivation. In a sold-out talk at the library, he and Krista Tippett, host of the public-radio program "Speaking of Faith," discussed the biological and spiritual underpinnings of play. Brown called play part of the "developmental sequencing of becoming a human primate. If you look at what produces learning and memory and well-being, play is as fundamental as any other aspect of life, including sleep and dreams."

The message seemed to resonate with audience members, who asked anxious questions about what seemed to be the loss of play in their children's lives. Their concern came, no doubt, from the recent deluge of eulogies to play. Educators fret that school officials are hacking away at recess to make room for an increasingly crammed curriculum. Psychologists complain that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured free play. Public health officials link insufficient playtime to a rise in childhood obesity. Parents bemoan the fact that kids don't play the way they themselves did -- or think they did.
And everyone seems to worry that without the chance to play stickball or hopscotch out on the street, to play with dolls on the kitchen floor or climb trees in the woods, today's children are missing out on something essential. The success of "The Dangerous Book for Boys" -- which has been on the best-seller list for the last nine months -- and its step-by-step instructions for activities like folding paper airplanes is testament to the generalized longing for play's good old days. So were the questions after Stuart Brown's library talk; one woman asked how her children will learn trust, empathy and social skills when their most frequent playing is done online. Brown told her that while video games do have some play value, a true sense of "interpersonal nuance" can be achieved only by a child who is engaging all five senses by playing in the three-dimensional world.

This is part of a larger conversation Americans are having about play. Parents bobble between a nostalgia-infused yearning for their children to play and fear that time spent playing is time lost to more practical pursuits. Alarming headlines about U.S. students falling behind other countries in science and math, combined with the ever-more-intense competition to get kids into college, make parents rush to sign up their children for piano lessons and test-prep courses instead of just leaving them to improvise on their own; playtime versus résumé building.

Discussions about play force us to reckon with our underlying ideas about childhood, sex differences, creativity and success. Do boys play differently than girls? Are children being damaged by staring at computer screens and video games? Are they missing something when fantasy play is populated with characters from Hollywood's imagination and not their own? Most of these issues are too vast to be addressed by a single field of study (let alone a magazine article). But the growing science of play does have much to add to the conversation.
Armed with research grounded in evolutionary biology and experimental neuroscience, some scientists have shown themselves eager -- at times perhaps a little too eager -- to promote a scientific argument for play. They have spent the past few decades learning how and why play evolved in animals, generating insights that can inform our understanding of its evolution in humans too. They are studying, from an evolutionary perspective, to what extent play is a luxury that can be dispensed with when there are too many other competing claims on the growing brain, and to what extent it is central to how that brain grows in the first place. Scientists who study play, in animals and humans alike, are developing a consensus view that play is something more than a way for restless kids to work off steam; more than a way for chubby kids to burn off calories; more than a frivolous luxury. Play, in their view, is a central part of neurological growth and development -- one important way that children build complex, skilled, responsive, socially adept and cognitively flexible brains. Their work still leaves some questions unanswered, including questions about play's darker, more ambiguous side: is there really an evolutionary or developmental need for dangerous games, say, or for the meanness and hurt feelings that seem to attend so much child's play? Answering these and other questions could help us understand what might be lost if children play less.
<urn:uuid:316c7af5-14e1-4d0b-9576-753e17ef2cc5>
CC-MAIN-2013-20
http://query.nytimes.com/gst/fullpage.html?res=9404E7DA1339F934A25751C0A96E9C8B63&scp=2&sq=taking%20play%20seriously&st=cse
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961459
1,055
2.5625
3
Financial Accounting - CH 1 & 2

|Four Principal Activities of Business Firms:|| 1. Establishing goals and strategies|
|What are the 2 sources Financing comes from?|| 1. Owners|
|Investments are made in the following:|| 1. Land, buildings, equipment 2. Patents, licenses, contractual rights 3. Stock and bonds of other organizations 5. Accounts Receivable|
|What are the 4 areas for conducting operations?|| 1. Purchasing|
|What are the 4 commonly used conventions in financial statements?|| 1. The accounting period 2. The number of reporting periods 3. The monetary amounts 4. The terminology and level of detail in the financial statements|
|Common Financial Reporting Conventions, Accounting Period||The length of time covered by the financial statements. (The most common interval for external reporting is the fiscal year.)|
|Common Financial Reporting Conventions, Number of reporting periods||The number of reporting periods included in a given financial statement presentation. Both U.S. GAAP and IFRS require firms to include results for multiple reporting periods in each report.|
|Common Financial Reporting Conventions, Monetary amounts||This includes measuring units, like thousands, millions, or billions, and the currency, such as dollars ($), euros (€), or Swedish kronor (SEK).|
|Common Financial Reporting Conventions, Terminology and level of detail in the financial statements||U.S. GAAP and IFRS contain broad guidance on what the financial statements must contain, but neither system completely specifies the level of detail or the names of accounts. Therefore, some variation occurs.|
|Characteristics of a Balance Sheet||A Balance Sheet: 1. is also known as a statement of financial position; 2. provides information at a point in time; 3. lists the firm's assets, liabilities, and shareholders' equity and provides totals and subtotals; and 4. can be represented as the Basic Accounting Equation.
Assets = Liabilities + Shareholders' Equity |Accounting Equation Components|| 1. Assets 2. Liabilities 3. Shareholders' Equity| |Assets|| Assets are economic resources with the potential to provide future economic benefits to a firm. | Examples: Cash, Accounts Receivable, Inventories, Buildings, Equipment, intangible assets (like Patents) |Liabilities|| Liabilities are creditors' claims for funds, usually because they have provided funds, or goods and services, to the firm.| Examples: Accounts Payable, Unearned Income, Notes Payable, Accrued Salaries |Shareholders' Equity|| Shareholders' Equity shows the amounts of funds owners have provided and, in parallel, their claims on the assets of a firm. | Examples: Common Stock, Contributed Capital, Retained Earnings |What are the separate sections on a Balance Sheet (Balance sheet classification)||1. Current assets represent assets that a firm expects to turn into cash, or sell, or consume within approximately one year from the date of the balance sheet (e.g., accounts receivable and inventory).| 2. Current liabilities represent obligations a firm expects to pay within one year (e.g., accounts payable and salaries payable). 3. Non-current assets are typically held and used for several years (e.g., land, buildings, equipment, patents, long-term security investments). 4. Noncurrent liabilities and shareholders' equity are sources of funds where the supplier of funds does not expect to receive them all back within the next year. |Income Statement||1. Sometimes called the statement of profit and loss by firms applying IFRS| 2. Provides information on profitability 3. May use the terms net income, earnings, and profit interchangeably 4. Reports amounts for a period of time 5. Typically one year 6. 
Is represented by the Basic Income Equation: Net Income = Revenues - Expenses |Revenues||(also known as sales, sales revenue, or turnover, a term used by some firms reporting under IFRS) measure the inflows of assets (or reductions in liabilities) from selling goods and providing services to customers.| |Expenses||measure the outflow of assets (or increases in liabilities) used in generating revenues.| |Relationship between the Balance Sheet and the Income Statement|| 1. The income statement links the balance sheet at the beginning of the period with the balance sheet at the end of the period.| 2. Retained Earnings is increased by net income and decreased by dividends. |Statement of Cash Flows|| The statement of cash flows (also called the cash flow statement) reports information about cash generated from or used by: 1. operating, 2. investing, and 3. financing activities during specified time periods. The statement of cash flows shows where the firm obtains or generates cash and where it spends or uses cash. |Classification of Cash Flows|| 1. Operations: cash from customers less cash paid in carrying out the firm's operating activities 2. Investing: cash paid to acquire noncurrent assets less amounts from any sale of noncurrent assets 3. Financing: cash from issues of long-term debt or new capital less dividends |Inflows and Outflows of Cash| |The Relationship of the Statement of Cash Flows to the Balance Sheet and Income Statement||-The statement of cash flows explains the change in cash between the beginning and the end of the period, and separately displays the changes in cash from operating, investing, and financing activities.| -In addition to sources and uses of cash, the statement of cash flows shows the relationship between net income and cash flow from operations. 
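The balance-sheet and income-statement relationships above can be sketched in a few lines of Python. All figures here are hypothetical, chosen only to illustrate the two equations and the retained-earnings link:

```python
# Illustrative sketch of the basic accounting relationships (hypothetical figures).

assets = 500_000
liabilities = 300_000
shareholders_equity = assets - liabilities  # Assets = Liabilities + Shareholders' Equity

revenues = 120_000
expenses = 90_000
net_income = revenues - expenses  # Basic Income Equation: Net Income = Revenues - Expenses

# Retained Earnings is increased by net income and decreased by dividends.
beginning_retained_earnings = 40_000
dividends = 10_000
ending_retained_earnings = beginning_retained_earnings + net_income - dividends

print(shareholders_equity)       # 200000
print(net_income)                # 30000
print(ending_retained_earnings)  # 60000
```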
|Statement of Shareholders' Equity||This statement displays components of shareholders' equity, including common shares and retained earnings, and changes in those components.| |Other Items in Annual Reports||Financial reports provide additional explanatory material in the schedules and notes to the financial statements.| |Who are the 4 main groups of people involved with the Financial Reporting Process?|| 1. Managers and governing boards of reporting entities.| 2. Accounting standard setters and regulatory bodies. 3. Independent external auditors. 4. Users of financial statements. |What is the Securities and Exchange Commission (SEC)?||An agency of the federal government that has the legal authority to set acceptable accounting standards and enforce securities laws.| |What is the Financial Accounting Standards Board (FASB)?||A private-sector body comprising five voting members, to whom the SEC has delegated most tasks of U.S. financial accounting standard-setting.| |GAAP||1. Common terminology includes the pronouncements of the FASB (and its predecessors) in the compilation of accounting rules, procedures, and practices known as generally accepted accounting principles (GAAP).| 2. Recently, the FASB launched its codification project, which organizes all of U.S. GAAP by topic (for example, revenues), eliminates duplications, and corrects inconsistencies. |FASB board members make standard-setting decisions guided by a conceptual framework that addresses:|| 1. Objectives of financial reporting.| 2. Qualitative characteristics of accounting information including the relevance, reliability, and comparability of data. 3. Elements of the financial statements. 4. Recognition and measurement issues. 
|Sarbanes-Oxley Act of 2002.|| Concerns over the quality of financial reporting have led, and continue to lead, to government initiatives in the United States.| Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board (PCAOB), which is responsible for monitoring the quality of audits of SEC registrants. |International Financial Reporting Standards (IFRS)||-The International Accounting Standards Board (IASB) is an independent accounting standard-setting entity with 14 voting members from a number of countries. Standards set by the IASB are International Financial Reporting Standards (IFRS).| -The FASB and IASB Boards are working toward converging their standards, based on an agreement reached in 2002 and updated since then. |Auditor's Opinion||Firms whose common stock is publicly traded are required to get an opinion by an independent auditor who:| 1.Assesses the effectiveness of the firm's internal control system for measuring and reporting business transactions 2.Assesses whether the financial statements and notes present fairly a firm's financial position, results of operations, and cash flows in accordance with generally accepted accounting principles |Basic Accounting Conventions and Concepts||1. Materiality is the qualitative concept that financial reports need not include items that are so small as to be meaningless to users of the reports.| 2. The accounting period convention refers to the uniform length of accounting reporting periods. 3. Interim reports are often prepared for periods shorter than a year. However, preparing interim reports does not eliminate the need to prepare an annual report. |Cash vs. Accrual Accounting||Cash basis| A firm measures performance from selling goods and providing services as it receives cash from customers and makes cash expenditures to providers of goods and services. 
Accrual basis: A firm recognizes revenue when it sells goods or renders services and recognizes expenses in the period when the firm recognizes the revenues that the costs helped produce. |What Is an Account? How Do You Name Accounts?||-An account represents an amount on a line of a balance sheet or income statement (i.e., cash, accounts receivable, etc.).| -There is not a master list to define these accounts since they are customized to fit each specific business's needs. -Accountants typically follow a conventional naming system for accounts, which increases communication. |What Accounts Make up the Typical Balance Sheet?| |Current assets and current liabilities (Balance Sheet Classifications)||Receipt or payment of assets that the firm expects will occur within one year or one operating cycle.| |Noncurrent assets and noncurrent liabilities (Balance Sheet Classifications)||Firm expects to collect or pay these more than one year after the balance sheet date.| |Duality Effects of the Balance Sheet Equation (Assets = Liabilities + Shareholders' Equity)||Any single event or transaction will have one of the following four effects or some combination of these effects:| 1. INCREASE an asset and INCREASE either a liability or shareholders' equity. 2. DECREASE an asset and DECREASE either a liability or shareholders' equity. 3. INCREASE one asset and DECREASE another asset. 4. INCREASE one liability or shareholders' equity and DECREASE another liability or shareholders' equity. A T-account is a device or convention for organizing and accumulating the accounting entries of transactions that affect an individual account, such as Cash, Accounts Receivable, Bonds Payable, or Additional Paid-in Capital. |T-Account Conventions: Assets| |T-Account Conventions: Liabilities| |T-Account Conventions: Shareholders' Equity| |Debit vs. 
Credit| While T-accounts are useful to help analyze how individual transactions flow and accumulate within various accounts, journal entries formalize the reasoning that supports the transaction. The attached standardized format indicates the accounts and amounts, with debits on the first line and credits (indented) on the second line: | Revenue or Sales:| (Common Income Statement Terms) |Assets received in exchange for goods sold and services rendered.| | Cost of Goods Sold:| (Common Income Statement Terms) |The cost of products sold.| | Selling, General, and Administrative (SG&A):| (Common Income Statement Terms) |Costs incurred to sell products/services as well as costs of administration.| | Research and Development (R&D) Expense:| (Common Income Statement Terms) |Costs incurred to create/develop new products, processes, and services.| | Interest Income:| (Common Income Statement Terms) |Income earned on amounts lent to others or from investments in interest-yielding securities.| |Unique Relationships Exist Between the Balance Sheet and the Income Statement| |Important Account Differences||1. Balance sheet accounts are permanent accounts in the sense that they remain open, with nonzero balances, at the end of the reporting period.| 2. In contrast, income statement accounts are temporary accounts in the sense that they start a period with a zero balance, accumulate information during the reporting period, and have a zero balance at the end of the reporting period. |The Financial Statement Relationships can be summarized as:| -After preparing the end-of-period income statement, the accountant transfers the balance in each temporary revenue and expense account to the Retained Earnings account. -This procedure is called closing the revenue and expense accounts. After transferring to Retained Earnings, each revenue and expense account is ready to begin the next period with a zero balance. 
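The closing procedure just described can be sketched in a short script. Account names and balances here are hypothetical; the point is only the mechanics: temporary accounts are transferred to Retained Earnings and reset to zero.

```python
# Hypothetical temporary-account balances at period end.
revenues = {"Sales Revenue": 80_000, "Interest Income": 2_000}
expenses = {"Cost of Goods Sold": 45_000, "SG&A Expense": 20_000}

retained_earnings = 15_000  # hypothetical beginning balance

# Closing: the net of total revenues less total expenses flows into Retained Earnings...
retained_earnings += sum(revenues.values()) - sum(expenses.values())

# ...and each temporary account starts the next period with a zero balance.
revenues = {name: 0 for name in revenues}
expenses = {name: 0 for name in expenses}

print(retained_earnings)  # 32000
```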
|Expense and Revenue Transactions| |Dividend Declaration and Payment| |Issues of Capital Stock| |Posting||1. After each transaction is recognized by a journal entry, the information is transferred in the accounting system via an activity known as posting.| 2. The balance sheet ledger accounts (or permanent accounts) where these are posted begin each period with a balance equal to the ending balance of the previous period. 3. The income statement ledger accounts (or temporary accounts) have zero beginning balances. |Adjusting Entries|| There are some journal entries that are not triggered by a transaction or exchange.| -Rather, journal entries, known as adjusting entries, result from the passage of time at the end of an accounting period or are used to correct errors (more commonly known as correcting entries). |Four Basic Types of Adjusting Entries|| 1. Unearned Revenues 2. Prepaid Expenses 3. Accrued Revenues 4. Accrued Expenses| |Closing Process||1. After adjusting and correcting entries are made, the income statement can be prepared.| 2. Once completed, it is time to transfer the balance in each temporary revenue and expense account to the Retained Earnings account. This is known as the closing process. 3. Each revenue account is reduced to zero by debiting it and each expense account is reduced to zero by crediting it. 4. The offset account—Retained Earnings—is credited for the amount of total revenues and debited for the amount of total expenses. 5. Thus, the amount transferred to Retained Earnings for a period shows the difference between total revenues and total expenses. 
|Final Step in Preparing Financial Statements: The Cash Flow Statement||1. The statement of cash flows describes the sources and uses of cash during a period and classifies them into operating, investing, and financing activities.| 2. It provides a detailed explanation for the change in the balance of the Cash account during that period. 3. Two approaches can be used to prepare this statement: Direct and Indirect
http://quizlet.com/12638820/financial-accounting-ch-1-2-flash-cards/
Intel demonstrated a wireless electric power system that could revolutionize modern life by eliminating chargers, wall outlets and eventually batteries altogether by 2050. Intel chief technology officer Justin Rattner demonstrated a Wireless Energy Resonant Link at Intel’s 2008 developer’s forum. During the demo electricity was sent wirelessly to a lamp on stage, lighting a 60 watt bulb that uses more power than a typical laptop computer. Most importantly, the electricity was transmitted without zapping anything or anyone that got between the sending and receiving units. “The trick with wireless power is not can you do it; it’s can you do it safely and efficiently,” according to Intel researcher Josh Smith. “It turns out the human body is not affected by magnetic fields; it is affected by electric fields. So what we are doing is transmitting energy using the magnetic field, not the electric field.” Examples of potential applications include airports, offices or other buildings that could be rigged to supply power to laptops, mobile telephones or other devices toted into them. The technology could also be built into plugged-in computer components, such as monitors, to enable them to broadcast power to devices left on desks or carried into rooms, according to Mr. Smith. - Duracell, Energizer, Texas Instruments and Motorola Mobility in Attendance at the International Wireless Power Summit (prweb.com) - British Start-Up Working to Bring Wireless Charging to the Racetrack (wheels.blogs.nytimes.com)
http://rbach.net/blog/index.php/wireless-electricity/
What is an estimate? “An 'Estimate' is a computer-generated approximation of a property's market value calculated by means of the Automated Value Model (AVM). As such, an Estimate is calculated on the basis of: - Publicly available tax assessment records for the property - Recent sale prices of comparable properties in the same area There are many additional factors that determine a property's actual market value, including its condition, house style, layout, special features, quality of workmanship, and so on. For this reason, an Estimate should not be viewed as an appraisal, but rather as an approximate basis for making comparisons, and as a starting point for further inquiry. A REALTOR® who specializes in the given area will be able to provide a more accurate valuation based upon current market trends, as well as specific property and neighborhood characteristics.” In some parts of the country, Realtor.com does not have access to public records data or the available estimates are not considered accurate. In these instances, the company does not display an estimated value.
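The AVM described above is proprietary, but the general idea — combining public tax-assessment records with recent comparable sales — can be illustrated with a deliberately naive sketch. All figures and the 50/50 blending rule are assumptions for illustration, not Realtor.com's actual model:

```python
# Naive illustration of an automated value estimate.
# Real AVMs are far more sophisticated; this is only a sketch with made-up numbers.

tax_assessed_value = 250_000                     # hypothetical public-record assessment
comparable_sales = [265_000, 280_000, 272_000]   # hypothetical recent nearby sale prices

# Blend the assessment with the average of comparable sales (arbitrary 50/50 weights).
avg_comp = sum(comparable_sales) / len(comparable_sales)
estimate = 0.5 * tax_assessed_value + 0.5 * avg_comp

print(round(estimate))  # 261167
```

As the text notes, such a figure is only a starting point for comparison; condition, layout, and workmanship are invisible to this kind of calculation.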
http://realestate.aol.com/homes-for-sale-detail/2202-Lee-Ln_Sarasota_FL_34231_M62531-69124
By Roger Fox
I doubt the Keystone project is even a real long-term goal for TransCanada. Certainly, in the big picture, Keystone is only a single chapter in a much larger book. If you read this diary you will risk information overload: you will be offered numerous disparate data points that at first glance may seem unconnected. You will need to digest all the information offered, and then analyze. Crude is classified by the American Petroleum Institute (API) into light, medium, heavy and extra heavy crudes, by API gravity. If its API gravity is greater than 10, it is lighter than water and floats; if less than 10, it is heavier and sinks. The Alberta Tar Sands contain crudes of API 10 or less, called extra heavy crude or bitumen. Heavy oil is defined as having an API gravity below 22.3, medium oil as having an API gravity between 22.3 °API and 31.1 °API, and light crude oil as having an API gravity higher than 31.1. At a production rate of 3 million barrels a day the tar sands can last for 170 years. This would also mean a hole in the ground visible from orbit. The Keystone pipeline is only one of a couple of handfuls of pipeline proposals over the last decade in the Western US, Canada and Alaska. Alaskan nat gas is largely unexploited, and is used locally on the North Slope. It's estimated that 70 trillion cubic feet of nat gas can be found in Alaska, a lot of it in the North Slope area. There are at least 3 major proposals for nat gas pipelines from the North Slope area and the adjacent Mackenzie River Delta in Canada. 2 of these projects point right at Alberta. TransCanada and Exxon Mobil are partnered in the Alaska gas pipeline proposal that will directly link nat gas production in the North Slope of Alaska through Alberta to the US Midwest. This project may be the same as the Denali proposal, and was reintroduced to the Senate in February 2011. There are also at least 2 variations. Additionally there is the Dempster Lateral. 
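The API-gravity cutoffs above translate directly into a small classifier. This is just a sketch of the thresholds quoted in the text (10, 22.3, and 31.1 °API):

```python
def classify_crude(api_gravity: float) -> str:
    """Classify crude oil by API gravity, using the cutoffs quoted in the text."""
    if api_gravity <= 10:
        return "extra heavy (bitumen)"  # heavier than water; sinks
    elif api_gravity < 22.3:
        return "heavy"
    elif api_gravity <= 31.1:
        return "medium"
    else:
        return "light"

print(classify_crude(8))   # extra heavy (bitumen) -- typical of the Alberta tar sands
print(classify_crude(35))  # light
```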
http://redgreenandblue.org/2011/08/24/keystone-xl-tar-sands-pipeline-a-small-part-of-a-bigger-strategy/
|Easton's Bible Dictionary| Baalah of the well (Joshua 19:8); probably the same as Baal, mentioned in 1 Chronicles 4:33, a city of Simeon. Int. Standard Bible Encyclopedia ba'-a-lath-be'-er ba`alath be'er "lady (mistress) of the well"; (Joshua 19:8 (in 1 Chronicles 4:33, Baal)): In Jos this place is designated "Ramah of the South," i.e. of the Negeb, while in 1 Samuel 30:27 it is described as Ramoth of the Negeb. It must have been a prominent hill (ramah = "height") in the far south of the Negeb and near a well (be'er). The site is unknown, though Conder suggests that the shrine Kubbet el Baul may retain the old name. Baalath-beer (2 Occurrences) Joshua 19:8 and all the villages that were round about these cities to Baalath-beer, Ramah of the South. This is the inheritance of the tribe of the children of Simeon according to their families. (ASV BBE DBY JPS WBS YLT NAS) 1 Chronicles 4:33 And all the small places round these towns, as far as Baalath-beer, the high place of the South. These were their living-places, and they have lists of their generations. (BBE)
http://refbible.com/b/baalath-beer.htm
OBSOLETE UNITS PACKAGE SYMBOL As of version 9.0, unit functionality is built into Mathematica. Gram is the fundamental CGS unit of mass. - To use Gram, you first need to load the Units Package using Needs["Units`"]. - Gram is equivalent to Kilogram/1000 (SI units). - Convert[n Gram, newunits] converts n Gram to a form involving units newunits. - Gram is typically abbreviated as g.
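The Kilogram/1000 relationship can be checked with a trivial sketch outside Mathematica. This Python stand-in mirrors what `Convert[n Gram, Kilogram]` would produce for this one conversion; it is not the Units package itself:

```python
# Gram is Kilogram/1000, so converting n grams to kilograms is a division by 1000.
def grams_to_kilograms(n: float) -> float:
    return n / 1000

print(grams_to_kilograms(1500))  # 1.5
print(grams_to_kilograms(1))     # 0.001
```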
http://reference.wolfram.com/mathematica/Units/ref/Gram.en.html
6 Series with Tags: Recession Indicators Series These time series are an interpretation of US Business Cycle Expansions and Contractions data provided by The National Bureau of Economic Research (NBER) at http://www.nber.org/cycles/cyclesmain.html and Organisation for Economic Co-operation and Development (OECD) Composite Leading Indicators: Reference Turning Points and Component Series data provided by the OECD at http://www.oecd.org/document/6/0,3746,en_2649_34349_35726918_1_1_1_1,00.html. Our time series are composed of dummy variables that represent periods of expansion and recession. The NBER identifies months and quarters, while the OECD identifies months, of turning points without designating a date within the period that turning points occurred. The dummy variable adopts an arbitrary convention that the turning point occurred at a specific date within the period. The arbitrary convention does not reflect any judgment on this issue by the NBER's Business Cycle Dating Committee or the OECD. A value of 1 is a recessionary period, while a value of 0 is an expansionary period.
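The dummy-variable convention described above can be illustrated with a short sketch. The months and turning points below are made up for illustration; they are not actual NBER or OECD dates:

```python
# 1 = recessionary period, 0 = expansionary period (hypothetical turning points).
months = ["2008-10", "2008-11", "2008-12", "2009-01"]
recession_months = {"2008-11", "2008-12"}  # assumed recession span for illustration

dummies = [1 if m in recession_months else 0 for m in months]
print(dummies)  # [0, 1, 1, 0]
```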
http://research.stlouisfed.org/fred2/release?rid=242&t=japan%3Boecd&at=nsa&ob=pv&od=desc
A risk factor is something that increases your likelihood of getting a disease or condition. It is possible to develop melanoma with or without the risk factors listed below. However, the more risk factors you have, the greater your likelihood of developing melanoma. If you have a number of risk factors, ask your doctor what you can do to reduce your risk. Risk factors for melanoma include: The occurrence of melanoma has been linked with exposure to ultraviolet (UV) radiation. Therefore, exposing your skin to UV rays from the sun or tanning lamps increases your odds of developing melanoma. People who live in sunny climates are exposed to more sunlight. People who live at high altitudes, where the sunlight is strongest, are exposed to more UV radiation. Blistering sunburns, even as a child, also increase the risk of developing melanoma. Having melanoma once increases your risk of developing it again. Having many moles or large moles increases your risk of melanoma. Also, irregular moles are more likely to turn into melanoma than normal moles. Irregular moles are characterized by: - Being larger than normal moles - Being variable in color - Having irregular borders - Any pigmented spot in the nail beds - Changing in size and/or shape Most melanomas are diagnosed in young adults and older adults. Family members of people with melanoma are at greater risk of developing the disease than people with no family history of the disease. People with a disease called xeroderma pigmentosum (XP) are at a greatly increased risk of developing melanoma. This rare disease does not allow patients to repair sun-damaged DNA; therefore, any sun exposure will result in damage and mutations that become melanomatous. It is not unusual for these people to develop hundreds of melanomas on their skin. Similarly, people with hereditary dysplastic nevus syndrome or familial atypical multiple mole melanoma (FAMMM) syndrome are also at increased risk for developing melanoma. 
Caucasians are more likely than black, Hispanic and Asian people to develop melanoma. Most people who develop melanoma tend to burn rather than tan when exposed to sunlight. These people tend to have fair skin, freckles, red or blonde hair, or blue-colored eyes. - Reviewer: Brian Randall, MD - Review Date: 04/2013 - Update Date: 04/08/2013
http://restonhospital.com/your-health/?/19822/Lifestyle-Changes-to-Manage-Melanoma~Risk-Factors
Fully revised and updated for the 21st century, 365 Manners Kids Should Know tackles one manner a day. It suggests many games, exercises, and activities that parents, teachers, and grandparents can use to teach children and teens essential etiquette and at what age to present them. Some of the manners covered are when and where to text, how to handle an online bully, how to write a thank-you note, and proper behavior and dress for special events such as weddings, birthday parties, and religious services.
http://reviews.christianbook.com/2016/88825X/three-rivers-press-365-manners-kids-should-know-revised-and-updated-reviews/reviews.htm
This work is licensed under the GPLv2 license. See License.txt for details. Autobuild imports, configures, builds and installs various kinds of software packages. It can be used in software development to make sure that nothing is broken in the build process of a set of packages, or can be used as an automated installation tool. Autobuild config files are Ruby scripts which configure rake to:
- import the package from a SCM, or (optionally) update it
- configure it. This phase can handle code generation, configuration (for instance for autotools-based packages), …
It takes the dependencies between packages into account in its build process, and updates the needed environment variables.
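The dependency-aware build order mentioned above amounts to a topological sort of the package graph. This is an illustrative sketch of that idea (package names are hypothetical; this is not autobuild's actual code, which is Ruby):

```python
# Sketch of dependency-ordered building: each package is built only after
# all of its dependencies. Hypothetical package graph for illustration.
deps = {
    "app": ["libfoo", "libbar"],
    "libfoo": ["libbase"],
    "libbar": ["libbase"],
    "libbase": [],
}

def build_order(deps):
    """Return packages in an order that respects dependencies (depth-first)."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps[pkg]:
            visit(dep)          # build dependencies first
        order.append(pkg)       # then the package itself

    for pkg in deps:
        visit(pkg)
    return order

order = build_order(deps)
print(order)  # ['libbase', 'libfoo', 'libbar', 'app']
```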
http://rubygems.org/gems/autobuild/versions/1.2.15
Why is it important for scientists to contribute to science education? Our nation has failed to meet important educational challenges, and our children are ill-prepared to respond to the demands of today's world. Results of the Third International Mathematics and Science Study (TIMSS)--and its successor, TIMSS-R--show that the relatively strong international performance of U.S. 4th graders successively deteriorates across 8th- and 12th-grade cohorts. Related studies indicate that U.S. PreK-12 curricula lack coherence, depth, and continuity and cover too many topics superficially. By high school, unacceptably low numbers of students show motivation or interest in enrolling in physics (only one-quarter of all students) or chemistry (only one-half). We are rapidly approaching universal participation at the postsecondary level, but we still have critical science, technology, engineering, and mathematics (STEM) workforce needs and too few teachers who have studied science or mathematics. Science and engineering degrees as a percentage of the degrees conferred each year have remained relatively constant at about 5%. In this group, women and minorities are gravely underrepresented. The consequences of these conditions are serious. The U.S. Department of Labor estimates that 60% of the new jobs being created in our economy today will require technological literacy, yet only 22% of the young people entering the job market now actually possess those skills. By 2010, all jobs will require some form of technological literacy, and 80% of those jobs haven't even been created yet. We must prepare our students for a world that we ourselves cannot completely anticipate. This will require the active involvement of scientists and engineers. How is NSF seeking to encourage scientists to work on educational issues? 
The NSF Strategic Plan includes two relevant goals: to develop "a diverse, internationally competitive, and globally engaged workforce of scientists, engineers, and well-prepared citizens" and to support "discovery across the frontiers of science and engineering, connected to learning, innovation, and service to society." To realize both of these goals, our nation's scientists and engineers must care about the educational implications of their work and explore educational issues as seriously and knowledgeably as they do their research questions. The phrase "integration of research and education" conveys two ideas. First, good research generates an educational asset, and we must effectively use that asset. Second, we need to encourage more scientists and engineers to pursue research careers that focus on teaching and learning within their own disciplines. All proposals submitted to NSF for funding must address two merit criteria: intellectual merit and broader impacts. In everyday terms, our approach to evaluating the broader impact of proposals is built on the philosophy that scientists and engineers should pay attention to teaching and value it, and that their institutions should recognize, support, and reward faculty, as well as researchers in government and industry, who take their role as educators seriously and approach instruction as a scholarly act. We think of education very broadly, including formal education (K-graduate and postdoctoral study) and informal education (efforts to promote public understanding of science and research outside the traditional educational environment). What does it mean to take education seriously and explore it knowledgeably? Any scholarly approach to education must be intentional, be based on a valid body of knowledge, and be rigorously assessed. That is, our approach to educational questions must be a scholarly act. 
NSF actively invests in educational reform and models that encourage scientists and engineers to improve curriculum, teaching, and learning in science and mathematics at all levels of the educational system from elementary school to graduate study and postdoctoral work. We recognize that to interest faculty and practicing scientists and engineers in education, we must support research that generates convincing evidence that changing how we approach the teaching of science and mathematics will pay off in better learning and deeper interest in these fields. Here are a few of the most recent efforts to stimulate interest in education that might be of interest to Next Wave readers. (For more information, go to the NSF Education and Human Resources directorate's Web site.) The GK-12 program supports fellowships and training to enable STEM graduate students and advanced undergraduates to serve in K-12 schools as resources in STEM content and applications. Outcomes include improved communication and teaching skills for the Fellows, increased content knowledge for preK-12 teachers, enriched preK-12 student learning, and stronger partnerships between higher education and local schools. The Centers for Learning and Teaching (CLT) program is a "comprehensive, research-based effort that addresses critical issues and national needs of the STEM instructional workforce across the entire spectrum of formal and informal education." The goal of the CLT program is to support the development of new approaches to the assessment of learning, research on learning within the disciplines, the design and development of effective curricular materials, and research-based approaches to instruction--and through this work to increase the number of people who do research on education in the STEM fields. 
This year (FY 02) we are launching some prototype higher education centers to reform teaching and learning in our nation's colleges and universities through a mix of research, faculty development, and exploration of instructional practices that can promote learning. Like other NSF efforts, the Centers incorporate a balanced strategy of attention to people, ideas, and tools. We hope to encourage more science and engineering faculty to work on educational issues in both K-12 and postsecondary education. If you are interested in these issues and want to pursue graduate or postdoctoral study, or want to develop a research agenda on learning in STEM fields, find the location and goals of the currently funded centers, and check back later this summer to find out which higher education CLT prototypes are funded.

The following solicitations all involve the integration of research and education as well as attention to broadening participation in STEM careers:

The Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP) seeks to increase the number of students (U.S. citizens or permanent residents) pursuing and receiving associate or baccalaureate degrees in established or emerging fields within STEM.

The Faculty Early Career Development (CAREER) program recognizes and supports the early career development activities of those teacher-scholars who are most likely to become the academic leaders of the 21st century.

The Course, Curriculum, and Laboratory Improvement (CCLI) program seeks to improve the quality of STEM education for all students and targets activities affecting learning environments, course content, curricula, and educational practices. CCLI offers three tracks: educational materials development, national dissemination, and adaptation and implementation.

The Integrative Graduate Education and Research Training (IGERT) program addresses the challenges of preparing Ph.D. scientists and engineers with the multidisciplinary backgrounds and the technical, professional, and personal skills needed for the career demands of the future.

The Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE) program supports institutions with Ph.D.-granting departments in the mathematical sciences in carrying out innovative educational programs, at all levels, that are integrated with the department's research activities.

The Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers (ADVANCE) program seeks to increase the participation of women in the scientific and engineering workforce through the increased representation and advancement of women in academic science and engineering careers.

The Science, Technology, Engineering and Mathematics Teacher Preparation (STEMTP) program involves partnerships among STEM and education faculty working with preK-12 schools to develop exemplary preK-12 teacher education models that will improve the science and mathematics preparation of future teachers.

The Noyce Scholarship Supplements program supports scholarships and stipends for STEM majors and STEM professionals seeking to become preK-12 teachers.

The views expressed are those of the authors and do not necessarily reflect those of the National Science Foundation.
http://sciencecareers.sciencemag.org/print/career_magazine/previous_issues/articles/2002_07_12/nodoi.4298361476632626608
File compression applies an algorithm to a file that reduces its size; running the reverse of the algorithm returns the file to its original form. For data files, the compression and decompression must be lossless, which means that the data must be restored to its exact original form. There are various methods to do this: some hardware implementations and some software. The most popular hardware implementations usually use a Lempel-Ziv algorithm to look for repeating sequences over a set span of data (the run) and replace them with special identifying information. Compression saves space but may add extra time (latency). Video and music data are typically already compressed. Their compression rates are usually very high because of the nature of the data and the fact that a lossy compression algorithm is used. It can be lossy (meaning that all bits may not be decompressed exactly) because the loss won't be noticeable with video or music. Zip files are the result of software compression. Another compression round on already compressed data will probably not yield any substantial gain. Evaluator Group, Inc. Editor's note: Do you agree with this expert's response? If you have more to share, post it in our Storage Networking forum at http://searchstorage.discussions.techtarget.com/WebX?50@@.ee83ce4 or e-mail us directly at firstname.lastname@example.org. This was first published in December 2001
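These points can be demonstrated with Python's standard-library `zlib` module, which implements DEFLATE, a Lempel-Ziv derivative. This is an illustrative software sketch (not the hardware method described above): repetitive data shrinks dramatically, the round trip is lossless, and a second compression pass over already-compressed data gains little or nothing.

```python
import zlib

# Repetitive data compresses well: the LZ-style algorithm replaces
# repeating sequences with short back-references.
original = b"ABCD" * 10_000            # 40,000 bytes of repeating data
compressed = zlib.compress(original)
print(len(original), len(compressed))  # compressed is far smaller

# Lossless: decompression restores the exact original bytes.
assert zlib.decompress(compressed) == original

# Recompressing already-compressed data yields no substantial gain;
# the compressed stream looks nearly random, so there is little
# redundancy left for the algorithm to exploit.
twice = zlib.compress(compressed)
print(len(twice) / len(compressed))    # ratio near (or even above) 1.0
```

The same effect explains why zipping an MP3 or MP4 file barely changes its size: the lossy codec has already removed most of the redundancy.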
http://searchstorage.techtarget.com/answer/What-is-compression
Hershey, PA : Information Science Reference, c2009. xxi, 417 p. : ill. ; 29 cm. "Premier reference source"--Cover. Includes bibliographical references (p. 362-407) and index. Now established as an effective tool in the instructional process, multimedia has penetrated educational systems at almost every level of study. In their quest to maximize educational outcomes and identify best practices, multimedia researchers are now expanding their examinations toward the cognitive functionality of multimedia. "Cognitive Effects of Multimedia Learning" identifies the role and function of multimedia in learning through a collection of research studies focusing on cognitive functionality. An advanced collection of critical theories and practices, this much-needed contribution to the research is an essential holding for academic libraries, and will benefit researchers, practitioners and students in basic and applied fields ranging from education to cognitive sciences. (source: Nielsen Book Data)
http://searchworks.stanford.edu/view/7815633
May 20, 2009 The Cook Islands are closely associated with New Zealand. Air New Zealand is the only air carrier that flies directly from the U.S. to the Cook Islands. As you will see below, the Cook Islands use the NZD as their currency. Despite some 90,000 visitors a year to the capital island, Rarotonga, the Cook Islands are largely unspoiled by tourism. There are no high-rise hotels, only four beach buggies and very little hype. The Cook Islands offer a rare opportunity for an authentic island holiday. There are a total of 15 islands in the heart of the South Pacific spread over 850,000 square miles with a population of approximately 15,000. The islands most visited are Rarotonga and Aitutaki, which are only 140 miles apart.

Cook Island History
Ru, from Tupua’i in French Polynesia, is believed to have landed on Aitutaki, and Tangiia, also from French Polynesia, is believed to have arrived on Rarotonga around 800 AD. Similarly, the northern islands were probably settled by expeditions from Samoa and Tonga.

Cook Island Climate
Cooled by the gentle breezes of the Pacific, the climate of these islands is sunny and pleasant. Roughly speaking, there are two seasons: from November through May the climate is hot and humid, and from June through October the climate is warm and dry. Most of the rain falls during the hot season, but there are also many lovely sunny days during these months, with refreshing trade-winds.

Cook Island Geography
The Cook Islands consist of two main groups, one in the north and one in the south. The southern group is nine "high" islands, mainly of volcanic origin, although some are virtually atolls. The majority of the population lives in the southern group. The northern group comprises six true atolls.

Cook Island Southern Group
Aitutaki, Atiu, Mangaia, Manuae, Mauke, Mitiaro, Palmerston, Rarotonga (the capital island), Takutea.
Cook Island Northern Group
Manihiki, Nassau, Tongareva (Penrhyn), also known as Mangarongaro, Pukapuka, Rakahanga, Suwarrow

Cook Island Time Zones
Rarotonga and Aitutaki are in the same time zone.

Cook Island Currency
New Zealand dollar.

Cook Island Language
English and Cook Island Maori.

Call the "Island Travel Gal" at 800 644-6659 or email email@example.com to secure your seats to the idyllic Cook Islands. If you enjoyed this post, make sure you subscribe to my RSS feed!
http://seethesouthpacific.com/tag/cook-island-geography/
Classroom Activities for Teaching Sedimentary GeologyThis collection of teaching materials allows for the sharing of ideas and activities within the community of geoscience teachers. Do you have a favorite teaching activity you'd like to share? Please help us expand this collection by contributing your own teaching materials. Subject: Sedimentary Geology Results 1 - 4 of 4 matches Chemical and Physical Weathering Field and Lab Experiment: Development and Testing of Hypotheses part of Activities Lisa Greer, Washington and Lee University This exercise combines an integrated field and laboratory experiment with a significant scientific writing assignment to address chemical and physical weathering processes via hypothesis development, experimental ... Demystifying the Equations of Sedimentary Geology part of Activities Larry Lemke, Wayne State University This activity includes three strategies to help students develop a deeper comfort level and stronger intuitive sense for understanding mathematical expressions commonly encountered in sedimentary geology. Each can ... Digital Sandstone Tutorial part of Activities Kitty Milliken, The University of Texas at Austin The Tutorial Petrographic Image Atlas is designed to give students more exposure to petrographic features than they can get during organized laboratory periods. Red rock and concretion models from Earth to Mars: Teaching diagenesis part of Activities Margie Chan, University of Utah This activity teaches students concepts of terrestrial diagenesis (cementation, fluid flow, porosity and permeability, concretions) and encourages them to apply those concepts to new or unknown settings, including ...
http://serc.carleton.edu/NAGTWorkshops/sedimentary/activities.html?q1=sercvocabs__43%253A206
- Exam wrappers. As David Thompson describes the process, "exam wrappers required students to reflect on their performance before and after seeing their graded tests." The first four questions, completed just prior to receiving their graded test, asked students to report the time they spent preparing for the test, their methods of preparation, and their predicted test grade. After reviewing their graded test, students completed the final three reflection questions, including a categorization of test mistakes and a list of changes to implement in preparation for the next test. Thompson then collected the wrappers, made copies, and returned them to the students several days later, reminding them to consider what they planned to do differently or the same in preparation for the upcoming test. Thompson reports that each reflection exercise required only 8-10 minutes of class time. Clara Hardy and others also describe uses of exam wrappers.
- Reading Reflections. As Karl Wirth writes, reading reflections, effectively outlined by David Bressoud (2008), are designed to address some of the challenges students face with college-level reading assignments. Students submit online reading reflections (e.g., using Moodle or Blackboard) after completing each reading assignment and before coming to class. In each reflection, students summarize the important concepts of the reading and describe what was interesting, surprising, or confusing to them. The reading reflections not only encourage students to read regularly before class, but they also promote content mastery and foster student development of monitoring, self-evaluation, and reflection skills. For the instructor, reading reflections facilitate "just-in-time" teaching and provide invaluable insights into student thinking and learning.
According to Wirth, expert readers are skilled at using a wide range of strategies during all phases of reading (e.g., setting goals for learning, monitoring comprehension during reading, checking comprehension, and self-reflection), but most college instruction simply assumes the mastery of such metacognitive skills. - Knowledge surveys. Many members of the group were influenced by Karl Wirth's work on "knowledge surveys" as a central strategy for helping students think about their thinking. Knowledge surveys involve simple self-reports from students about their knowledge of course concepts and content. In knowledge surveys, students are presented with different facets of course content and are asked to indicate whether they know the answer, know some of the answer, or don't know the answer. Faculty can use these reports to gauge how confident students feel in their understanding of course material at the beginning or end of a course, before exams or papers, or even as graduating seniors or alumni. Kristin Bonnie's report relates how her students completed a short knowledge survey (6-12 questions) online (via Google forms) on the material covered in class that week. Rather than providing the answer to each question, students indicated their confidence in their ability to answer the question correctly (I know; I think I know; I don't know). Students received a small amount of credit for completing the knowledge survey. She used the information to review material that students seemed to struggle with. In addition, a subset of these questions appeared on their exam – the knowledge survey therefore served as a review sheet. Wirth notes that the surveys need not take much class time and can be administered via paper or the web. The surveys can be significant for clarifying course objectives, structure, and design.
For students, knowledge surveys achieve several purposes: they help make clear course objectives and expectations, are useful as study guides, can serve as a formative assessment tool, and, perhaps most critically, aid in their development of self-assessment and metacognitive skills. For instructors, the surveys help them assess learning gains, instructional practices, and course design.
http://serc.carleton.edu/acm_teagle/interventions
Free the Cans! Working Together to Reduce Waste

In a blog about how people share, it’s worth the occasional reference to the bizarre ways that people DON’T SHARE. Is it safe to say we live in a society that places great value on independence, private property, personal space, and privacy? Even sometimes extreme value? Is that why people at an 8-unit apartment building in Oakland, CA have separate caged stalls for eight separate trash cans? I know it’s not nice to stare, but I walked by these incarcerated cans and could not help myself. I returned with my camera, so that I could share my question with the world: Why can’t people share trash cans or a single dumpster? Or, at the very least, why can’t the cans share driveway space? The Zero Waste Movement has come to the Bay Area and it calls for a new use for these eight cages. Here are my suggestions:
- Turn two of those cages into compost bins. Fill one with grass, leaves, and vegetable scraps, let it decompose for six months, then start filling the second bin in the meantime.
- Put in a green can, which is what Oakland uses to collect milk cartons, pizza boxes, yard trimmings, and all food to send it to the municipal composting facility. If your city doesn’t do this yet, tell them it’s a great idea and they could be as cool and cutting edge as Oakland.
- Put in one or two recycling cans for glass, plastic, cardboard, paper, aluminum, etc.
- Put out a FREE STUFF box for unwanted clothing and household items. The neighbors could sort through it each week, and later put it out on the curb for passers-by to explore. Take what’s left to Goodwill or a comparable donation spot.
- Put in a few small bins for various items that can be recycled, such as batteries and electronics, which can then be taken to an electronics recycling center every month or two. Styrofoam can be brought to a local packaging store or ceramics business that accepts used packaging material.
Or, if you accumulate a bunch of plastic bags, take them to a store or to some other place that accepts used ones.
- Put in ONE trash can. By the time you compost, recycle, re-use, redistribute, and take a few other measures to reduce your waste, you'll have almost no trash each week.
- Install a bicycle rack or locked bicycle cage.
- With the leftover space, put in a container garden and a bench where neighbors can gather and chat. A much more pleasant alternative to the garbage can jailhouse ambiance, wouldn't you agree?
<urn:uuid:c970d9a2-a5ce-4050-9ea3-58d7bbd609a8>
CC-MAIN-2013-20
http://sharingsolution.com/2009/05/23/free-the-cans-working-together-to-reduce-waste/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93236
575
2.8125
3
Excerpts for Thames: The Biography

The River as Fact

It has a length of 215 miles, and is navigable for 191 miles. It is the longest river in England but not in Britain, where the Severn is longer by approximately 5 miles. Nevertheless it must be the shortest river in the world to acquire such a famous history. The Amazon and the Mississippi cover almost 4,000 miles, and the Yangtze almost 3,500 miles; but none of them has arrested the attention of the world in the manner of the Thames. It runs along the borders of nine English counties, thus reaffirming its identity as a boundary and as a defence. It divides Wiltshire from Gloucestershire, and Oxfordshire from Berkshire; as it pursues its way it divides Surrey from Middlesex (or Greater London as it is inelegantly known) and Kent from Essex. It is also a border of Buckinghamshire. It guarded these once tribal lands in the distant past, and will preserve them into the imaginable future. There are 134 bridges along the length of the Thames, and forty-four locks above Teddington. There are approximately twenty major tributaries still flowing into the main river, while others such as the Fleet have now disappeared under the ground. Its "basin," the area from which it derives its water from rain and other natural forces, covers an area of some 5,264 square miles. And then there are the springs, many of them in the woods or close to the streams beside the Thames. There is one in the wood below Sinodun Hills in Oxfordshire, for example, which has been described as an "everlasting spring" always fresh and always renewed. The average flow of the river at Teddington, chosen because it marks the place where the tidal and non-tidal waters touch, has been calculated at 1,145 millions of gallons (5,205 millions of litres) each day or approximately 2,000 cubic feet (56.6 cubic metres) per second. The current moves at a velocity between ½ and 2¾ miles per hour.
The main thrust of the river flow is known to hydrologists as the "thalweg"; it does not move in a straight and forward line but, mingling with the inner flow and the variegated flow of the surface and bottom waters, takes the form of a spiral or helix. More than 95 per cent of the river's energy is lost in turbulence and friction. The direction of the flow of the Thames is therefore quixotic. It might be assumed that it would move eastwards, but it defies any simple prediction. It flows north-west above Henley and at Teddington, west above Abingdon, south from Cookham and north above Marlow and Kingston. This has to do with the variegated curves of the river. It does not meander like the Euphrates, where according to Herodotus the voyager came upon the same village three times on three separate days, but it is circuitous. It specialises in loops. It will take the riparian traveller two or three times as long to cover the same distance as a companion on the high road. So the Thames teaches you to take time, and to view the world from a different vantage. The average "fall" or decline of the river from its beginning to its end is approximately 17 to 21 inches (432 to 533 mm) per mile. It follows gravity, and seeks out perpetually the simplest way to the sea. It falls some 600 feet (183 m) from source to sea, with a relatively precipitous decline of 300 feet (91.5 m) in the first 9 miles; it falls 100 feet (30.4 m) more in the next 11 miles, with a lower average for the rest of its course. Yet averages may not be so important. They mask the changeability and idiosyncrasy of the Thames. The mean width of the river is given as 1,000 feet (305 m), and a mean depth of 30 feet (9 m); but the width varies from 1 or 2 feet (0.3 to 0.6 m) at Trewsbury to 5½ miles at the Nore. The tide, in the words of Tennyson, is that which "moving seems asleep, too full for sound and foam."
On its flood inward it can promise benefit or danger; on its ebb seaward it suggests separation or adventure. It is one general movement but it comprises a thousand different streams and eddies; there are opposing streams, and high water is not necessarily the same thing as high tide. The water will sometimes begin to fall before the tide is over. The average speed of the tide lies between 1 and 3 knots (1.15 and 3.45 miles per hour), but at times of very high flow it can reach 7 knots (8 miles per hour). At London Bridge the flood tide runs for almost six hours, while the ebb tide endures for six hours and thirty minutes. The tides are much higher now than at other times in the history of the Thames. There can now be a difference of some 24 feet (7.3 m) between high and low tides, although the average rise in the area of London Bridge is between 15 and 22 feet (4.5 and 6.7 m). In the period of the Roman occupation, it was a little over 3 feet (0.9 m). The high tide, in other words, has risen greatly over a period of two thousand years. The reason is simple. The south-east of England is sinking slowly into the water at the rate of approximately 12 inches (305 mm) per century. In 4000 BC the land beside the Thames was 46 feet (14 m) higher than it is now, and in 3000 BC it was some 31 feet (9.4 m) higher. When this is combined with the water issuing from the dissolution of the polar ice-caps, the tides moving up the lower reaches of the Thames are increasing at a rate of 2 feet (0.6 m) per century. That is why the recently erected Thames Barrier will not provide protection enough, and another barrier is being proposed. The tide of course changes in relation to the alignment of earth, moon and sun. Every two weeks the high "spring" tides reach their maximum two days after a full moon, while the low "neap" tides occur at the time of the half-moon. 
The highest tides occur at the times of equinox; this is the period of maximum danger for those who live and work by the river. The spring tides of late autumn and early spring are also hazardous. It is no wonder that the earliest people by the Thames venerated and propitiated the river. The general riverscape of the Thames is varied without being in any sense spectacular, the paraphernalia of life ancient and modern clustering around its banks. It is in large part now a domesticated river, having been tamed and controlled by many generations. It is in that sense a piece of artifice, with some of its landscape deliberately planned to blend with the course of the water. It would be possible to write the history of the Thames as a history of a work of art. It is a work still in slow progress. The Thames has taken the same course for ten thousand years, after it had been nudged southward by the glaciation of the last ice age. The British and Roman earthworks by the Sinodun Hills still border the river, as they did two thousand years before. Given the destructive power of the moving waters, this is a remarkable fact. Its level has varied over the millennia--there is a sudden and unexpected rise at the time of the Anglo-Saxon settlement, for example--and the discovery of submerged forests testifies to incidents of overwhelming flood. Its appearance has of course also altered, having only recently taken the form of a relatively deep and narrow channel, but its persistence and identity through time are an aspect of its power. Yet of course every stretch has its own character and atmosphere, and every zone has its own history. Out of oppositions comes energy, out of contrasts beauty. There is the overwhelming difference of water within it, varying from the pure freshwater of the source through the brackish zone of estuarial water to the salty water in proximity to the sea. 
Given the eddies of the current, in fact, there is rather more salt by the Essex shore than by the Kentish shore. There are manifest differences between the riverine landscapes of Lechlade and of Battersea, of Henley and of Gravesend; the upriver calm is in marked contrast to the turbulence of the long stretches known as River of London and then London River. After New Bridge the river becomes wider and deeper, in anticipation of its change. The rural landscape itself changes from flat to wooded in rapid succession, and there is a great alteration in the nature of the river from the cultivated fields of Dorchester to the thick woods of Cliveden. From Godstow the river becomes a place of recreation, breezy and jaunty with the skiffs and the punts, the sports in Port Meadow and the picnic parties on the banks by Binsey. But then by some change of light it becomes dark green, surrounded by vegetation like a jungle river; and then the traveller begins to see the dwellings of Oxford, and the river changes again. Oxford is a pivotal point. From there you can look upward and consider the quiet source; or you can look downstream and contemplate the coming immensity of London. In the reaches before Lechlade the water makes its way through isolated pastures; at Wapping and Rotherhithe the dwellings seem to drop into it, as if overwhelmed by numbers. The elements of rusticity and urbanity are nourished equally by the Thames. That is why parts of the river induce calm and forgetfulness, and others provoke anxiety and despair. It is the river of dreams, but it is also the river of suicide. It has been called liquid history because within itself it dissolves and carries all epochs and generations. They ebb and flow like water.

The River as Metaphor

The river runs through the language, and we speak of its influence in every conceivable context.
It is employed to characterise life and death, time and destiny; it is used as a metaphor for continuity and dissolution, for intimacy and transitoriness, for art and history, for poetry itself. In The Principles of Psychology (1890) William James first coined the phrase "stream of consciousness" in which "every definite image of the mind is steeped . . . in the free water that flows around it." Thus "it flows" like the river itself. Yet the river is also a token of the unconscious, with its suggestion of depth and invisible life. The river is a symbol of eternity, in its unending cycle of movement and change. It is one of the few such symbols that can readily be understood, or appreciated, and in the continuing stream the mind or soul can begin to contemplate its own possible immortality. In the poetry of John Denham's "Cooper's Hill" (1642), the Thames is a metaphor for human life. How slight its beginning, how confident its continuing course, how ineluctable its destination within the great ocean:

Hasting to pay his tribute to the sea,
Like mortal life to meet eternity.

The poetry of the Thames has always emphasised its affiliations with human purpose and with human realities. So the personality of the river changes in the course of its journey from the purity of its origins to the broad reaches of the commercial world. The river in its infancy is undefiled, innocent and clear. By the time it is closely pent in by the city, it has become dank and foul, defiled by greed and speculation. In this regress it is the paradigm of human life and of human history. Yet the river has one great advantage over its metaphoric companions. It returns to its source, and its corruption can be reversed. That is why baptism was once instinctively associated with the river. The Thames has been an emblem of redemption and of renewal, of the hope of escaping from time itself.
When Wordsworth observed the river at low tide, with the vista of the "mighty heart" of London "lying still," he used the imagery of human circulation. It is the image of the river as blood, pulsing through the veins and arteries of its terrain, without which the life of London would seize up. Sir Walter Raleigh, contemplating the Thames from the walk by his cell in the Tower, remarked that the "blood which disperseth itself by the branches or veins through all the body, may be resembled to these waters which are carried by brooks and rivers over all the earth." He wrote his History of the World (1610) from his prison cell, and was deeply imbued with the current of the Thames as a model of human destiny. It has been used as the symbol for the unfolding of events in time, and carries the burden of past events upon its back. For Raleigh the freight of time grew ever more complex and wearisome as it proceeded from its source; human life had become darker and deeper, less pure and more susceptible to the tides of affairs. There was one difference Raleigh noticed in his history, when he declared that "for this tide of man's life, after it once turneth and declineth, ever runneth with a perpetual ebb and falling stream, but never floweth again." The Thames has also been understood as a mirror of morality. The bending rushes and the yielding willows afford lessons in humility and forbearance; the humble weeds along its banks have been praised for their lowliness and absence of ostentation. And who has ventured upon the river without learning the value of patience, of endurance, and of vigilance? John Denham makes the Thames the subject of native discourse in a further sense:

Though deep, yet clear; though gentle, yet not dull;
Strong without rage; without o'erflowing, full.

This suggests that the river represents an English measure, an aesthetic harmony to be sought or wished for, but in the same breath Denham seems to be adverting to some emblem of Englishness itself.
The Thames is a metaphor for the country through which it runs. It is modest and moderate, calm and resourceful; it is powerful without being fierce. It is not flamboyantly impressive. It is large without being too vast. It eschews extremes. It weaves its own course without artificial diversions or interventions. It is useful for all manner of purposes. It is a practical river. When Robert Menzies, an erstwhile Australian prime minister, was taken to Runnymede he was moved to comment upon the "secret springs" of the "slow English character." This identification of the land with the people, the characteristics of the earth and water with the temperament of their inhabitants, remains a poignant one. There is an inward and intimate association between the river and those who live beside it, even if that association cannot readily be understood. From the Hardcover edition.
Teach your child the importance of good sportsmanship.

Not too long ago, my 10-year-old daughter's indoor soccer team finished their game and lined up to do the traditional end-of-game walk with the other team. If your own child has ever played in a team sport, you have likely seen this walk a hundred times before. Win or lose, each member of the team is expected to tell the other players "good game." This is a classic way to end a game on a positive note and to exhibit good sportsmanship, win or lose.

The opposing team in this case, however, had a unique way of showing their good sportsmanship. They all licked their hands before holding them out for our own girls to "low-five" as they walked down the line. Our girls saw this, and they refused to touch the other girls' slimy, slobbery, germ-ridden hands.

You may be wondering if our girls' team beat this other team. The truth is that they beat the other team soundly, but there is no score that would justify the level of poor sportsmanship the other team exhibited. As a parent, I can only hope the parents or coach on the other team reprimanded their girls for this unsportsmanlike behavior. This is not the kind of behavior any parent would be proud to see in their own child.

However, this is just one of many ways unsportsmanlike behavior is exhibited. From tears on the field to pushing, shoving, "trash talking" and more, many different behaviors are associated with poor sportsmanship. The fact is that good sportsmanship is a quality that can play a role in your child's ability to react to other situations throughout life. Competition may occur on the field, but it also plays a part in the college admission process, a run for a place on the school board, the job application process and so much more. Teaching your child how to be a good sport now can help him or her handle wins and losses throughout life with grace.
So how can you help your child build a healthy "win-or-lose" attitude?

A Positive Parental Role Model

No parent takes pride in seeing other players, whether from their child's own team or the opposing team, outperform their own child. Parents simply want their child to be the best. However, somewhere between the desire to see your kid aim for the stars and the truth of reality is the fact that there will always be someone or some team that is better. As a parent, you can talk negatively about these better players or better teams, or you can talk positively about them. You can use these interactions with better competition to point out areas where your own child can improve and to teach your child to respect those with skills and talents worthy of respect. This is a great opportunity to teach your child to turn lemons into lemonade.

You Win Some, You Lose Some

Very few children really are the best at what they do. There is always someone who either is better now or who is working hard to be better in the near future. A team that was on top this season may not be the top team the next season. While you want your child to work hard and strive to win, it is unrealistic to expect a child or his or her team to win all of the time. Children will inevitably be disappointed after a loss. This is understandable and justified, especially if he or she has been working hard and did his or her personal best. As a parent, your response to a loss is every bit as important as your response to a win. The fact is that an entire team can play their best and simply be outmatched. Teaching kids that losses do happen, even when they try their hardest, can help them cope with defeat. Show them that you are proud of their performance and effort at each game rather than letting the tally mark under the "W" column dictate this.
A Lesson Learned

The fact is that a child or a team simply will not improve very quickly when they are blowing out the competition on a regular basis. To be the best, you have to play the best. You have to be challenged by the best, and sometimes this means a loss will occur. Within each game, whether a win or a loss, lies an opportunity for growth, development and improvement. After each game, regardless of the outcome, talk to your child about what he or she did well and what he or she thinks could have been done better. Rather than tell your child what you think, ask your child his or her personal opinion on the matter and what the coach said. Then, remind your child that these are areas that he or she can work on for the next game.

Nobody likes to lose, but challenge and loss are the motivators that make us all better. Whether on the field, in the workplace or any number of other environments, challenge and loss are vital to developing that ever-important trait that true winners in life have. That trait is perseverance.

Content by Kim Daugherty.
View Sample Pages

Provides a detailed curricular calendar that's tied to a developmental continuum and the standards, so you'll know not only what you should be teaching, but what your students are ready to embrace and what you can reasonably expect of them as successful readers and writers. Additionally, you'll find monthly units of study that integrate reading and writing so both work together to provide maximum support for your students. The units are organized around four essential components (process, genre, strategy, and conventions), so you're reassured you're addressing everything your students need to know about reading and writing. What's more, you'll find ready-to-use lessons that offer exemplary teaching and continuous assessment, and a flexible framework that shows you how to frame a year of teaching, a unit, and a lesson, all of which you can easily adapt to fit the unique needs and interests of your own students.

240 pages + DVD (17 minutes) & fold-out color year-long planner.
Groundhogs, as a species, have a large range in size. There are the medium-sized rodents I grew up with, averaging around 4 kg, and groundhogs, like a certain Phil, that are probably more like 14 kg. This is the likely source of my earlier confusion, as that's a huge discrepancy in size. Evidently, it's all in the diet, much like humans. Where I grew up, in rural Northern Minnesota, we called the groundhog a woodchuck; I thought the groundhog was some fat cat, East Coast, liberal rodent. As it turns out, they are one and the same creature: Marmota monax, a member of the squirrel family.

Woodchucks spend a lot of their time in burrows. A burrow is their safe haven from their many predators, and they are quick to flee to it at the first sign of danger. They will sometimes emit a loud whistle on the way to alert others in the area that something is awry. Groundhogs enjoy raiding our gardens and digging up sod, thereby destroying what we've spent countless hours toiling upon.

Look for groundhog signs. You might not even know there is a groundhog around until your garden has been devoured or your tractor damaged by a collapsed groundhog den. Things to look for are large nibble marks on your prized veggies, gnaw marks on the bark of young fruit trees, root vegetables pulled up (or their tops trimmed off), groundhog-sized holes (25–30 cm) anywhere near your garden, or mounds of dirt near said holes. If you see these signs, take action. Don't wait or it will be too late! If you know it will be a problem and do nothing, you can't blame the animal.

Set groundhog traps. This technique takes some skill, as you need to pick a spot in the path of the animal, camouflage the trap, and mask your strong human scent. Setting a spring trap, whether coil or long-spring, is usually just a matter of compressing the springs and setting a pin that keeps the jaws open into the pan or trigger. Make sure your trap is anchored securely with a stake.
Check your traps often, and dispatch the animal quickly and humanely. A shot to the head or a hearty whack with a club will do the trick. If you can't deal with this, you have no business setting traps. Call a professional.

Guns kill groundhogs. I have never shot a groundhog. I rarely have problems with them, and they move so damned fast it is difficult to get a shot off. If I had to, though, I know how I would do it. First, be sure it is legal in your area, and be sure to follow gun safety protocols. After that, it's just a matter of learning where your target is going to be comfortable and let its guard down. I would follow the tracks back to the den, find a spot downwind to sit with a clear shooting lane, and make sure nothing I shouldn't hit with a bullet is downrange. Then I would wait, my sights set on the den, until the groundhog stuck its head up: quick and easy.

Demolish the groundhog burrows. If you find a couple of holes around your yard, they are likely the entrances to an elaborate tunnel maze carved into the earth beneath you. About all you can do, short of digging the whole mess up, is to try to fill it in from the top side. First, fill the hole with a bunch of rocks and then soil, and make sure to really pack it in. This will make it difficult for the groundhog to reclaim its hole without a lot of work. You probably want to do this in tandem with other control methods such as trapping, shooting, or fumigating to prevent the groundhog from simply digging a new hole.

Do some landscaping and build barriers. As with the control of many pests, it is advisable to keep a yard free of brush, undercover, and dead trees. These features are attractive to groundhogs as cover; without them, groundhogs are less likely to want to spend time there. If you want to keep a groundhog out of an area, consider a partially buried fence. This will require a lot of work, but it is going to help a lot.
Make sure the fence extends up at least a meter and is buried somewhere around 30 cm deep. Angle the fencing outward 90 degrees when you bury it, and it will make digging under it a very daunting task for your furry friend.

Try using fumigants to kill groundhogs. What is nice about this product is that you can kill the animal and bury it all in one stroke. The best time to do this is in the spring, when the mother will be in the den with her still-helpless young. Also, the soil will likely be damp, which helps a lot. You should definitely follow the directions on the package, but the way fumigants usually work is that you cover all but one exit, set off the smoke bomb, shove it down the hole, and quickly cover it up. Check back in a day or two to see if there is any sign of activity, and if so, do it again or consider a different control method. It is important that you don't do this if the hole is next to your house or if there is any risk of fire.

Poisons are a last resort. I am not a fan of poisons because it is difficult to control what will eat said poison in the wild. Also, you are left with the issue of where the groundhog will die and how bad it will smell if it is somewhere under your house. Or, if it is outside somewhere, who will be affected by eating the dead animal? Where does it end? If you want to use poison, you're on your own.

Use live traps. This is a good option for those of you not too keen on killing things. Try jamming the door open and leaving bait inside for the taking a couple of times so the groundhog gets used to it. Then set the trap normally and you've got your groundhog (or a neighborhood cat). Now what? The relocation is just as important; you need to choose a place that is far away from other humans and can likely support a groundhog. Good luck.

Predator urine. The idea is simple: form a perimeter around the area you want to protect. If the groundhog doesn't recognize the smell as a natural predator, it is probably not going to work too well.
Look for brands that offer wolf and bobcat urine. Apply regularly, or as the manufacturer recommends. Remember, if it rains, the urine has probably washed away.

Repellents. Another popular method involves pepper-based repellents. These deter groundhogs by tasting horrible and burning their mucous membranes. You can lay down a perimeter of powdered cayenne pepper or just apply it to the things you want spared in your garden. Be sure to wash your vegetables before eating them (which you should be doing anyway).
In my next few blogs, I will provide an overview of Voltage Source Converter (VSC) HVDC technology and discuss its suitability for Smart Grid operation and control. VSC HVDC is based upon transistor technology and was developed in the 1990s. The switching element is the Insulated Gate Bipolar Transistor (IGBT), which can be switched on and off by applying a suitable voltage to the gate (steering electrode). Because of the greater number of switching operations, and the nature of the semiconductor devices themselves, the converter losses are generally higher than those of HVDC classic converters.

VSC HVDC is commonly used with underground or submarine cables with a transfer capacity in the range of 10–1000 MW, and is suitable to serve as a connection to a wind farm or to supply a remote load. VSC HVDC technology has very fast steering and control functionality and is suitable for meshed networks. It is characterised by the compactness of the converter stations, due to the reduced need for AC harmonic filters and reactive power compensation. Power flow reversal in VSC systems is achieved by reversal of the current, whereas in HVDC classic systems the voltage polarity has to change. An important consequence of this voltage source behaviour is the ability to use cheaper and easier-to-install XLPE cables, instead of the mass-impregnated cables that are needed for HVDC classic.

Currently, only twelve VSC HVDC projects are in service. A few examples include Estlink, which connects Estonia to Finland (350 MW), and BorWin1, connecting an offshore wind farm to Northern Germany (400 MW), both equipped with ±150 kV submarine cables, as well as the Trans Bay project in California (400 MW), which consists of 90 km of ±200 kV submarine cable. Most projects use submarine cable, but some include long lengths of underground cable, such as Murraylink (220 MW, 177 km of underground cable) and Nord E.ON 1 (400 MW, 75 km of underground cable).
The 500 MW East-West interconnector between Ireland and Great Britain, operating at ±200 kV, is scheduled to go into service in 2012. A 2000 MW, 65 km cable interconnector at ±320 kV between Spain and France, part of the Trans-European Network, is scheduled for commissioning in 2013 and will represent the highest power rating for a VSC HVDC system installed to date.

Make sure to check back next Tuesday for my next blog on the comparison between HVDC classic and VSC HVDC.

By: Peter Vaessen
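As a rough illustration of what these ratings imply (my own back-of-envelope arithmetic, not figures from the project specifications), the DC current needed for a given transfer capacity follows directly from P = U · I across the two poles of a symmetric ±320 kV link:

```python
# Back-of-envelope sizing for a symmetric VSC HVDC link.
# Assumption (illustrative only): the full power flows across the
# pole-to-pole voltage, i.e. P = (U_pos - U_neg) * I.

def dc_current(power_mw: float, pole_kv: float) -> float:
    """Return the DC current in kA for a symmetric +/-pole_kv link."""
    pole_to_pole_kv = 2 * pole_kv          # e.g. +/-320 kV -> 640 kV
    return power_mw / pole_to_pole_kv      # MW / kV = kA

# The 2000 MW, +/-320 kV Spain-France interconnector:
print(f"{dc_current(2000, 320):.2f} kA")   # 3.13 kA
```

At roughly 3 kA per circuit, the current stays within what extruded XLPE cable conductors can plausibly carry, which is one reason the higher ±320 kV voltage class matters for this power level.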
The Operations Layer defines the operational processes and procedures necessary to deliver Information Technology (IT) as a Service. This layer leverages IT Service Management concepts found in prevailing best practices such as ITIL and MOF. The main focus of the Operations Layer is to execute the business requirements defined at the Service Delivery Layer. Cloud-like service attributes cannot be achieved through technology alone; they require a high level of IT Service Management maturity.

The Change Management process is responsible for controlling the life cycle of all changes. The primary objective of Change Management is to eliminate, or at least minimize, disruption while desired changes are made to services. Change Management focuses on understanding and balancing the cost and risk of making the change against the benefit of the change to either the business or the service. Driving predictability and minimizing human involvement are the core principles for achieving a mature Service Management process and ensuring changes can be made without impacting the perception of continuous availability. Changes fall into two classes:

- Standard (Automated) Change
- Non-Standard (Mechanized) Change

It is important to note that a record of all changes must be maintained, including Standard Changes that have been automated. The automated process for Standard Changes should include the creation and population of the change record per standard policy in order to ensure auditability. Automating changes also enables other key principles, such as predictability and minimal human involvement.
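The point that even a fully automated Standard Change must still produce an auditable change record can be sketched as follows. This is a minimal illustration; the function and field names are my own, not from any particular Service Management product:

```python
import datetime
import uuid

CHANGE_LOG = []  # stand-in for the Service Management system's change records

def record_change(summary: str, change_type: str) -> dict:
    """Create and store a change record, as policy requires even for automation."""
    record = {
        "id": str(uuid.uuid4()),
        "summary": summary,
        "type": change_type,              # "standard" changes are pre-approved
        "logged_at": datetime.datetime.utcnow().isoformat(),
    }
    CHANGE_LOG.append(record)
    return record

def apply_standard_change(summary: str, action) -> None:
    """A Standard Change is pre-approved: log the record, then execute automatically."""
    record = record_change(summary, "standard")
    action()                              # the automated implementation step
    record["status"] = "completed"

# Example: an automated, pre-approved patch deployment still leaves a record.
apply_standard_change("Apply OS template patch v2", lambda: None)
```

The key design point is simply that the record creation sits inside the automated path itself, so no change, however routine, bypasses the audit trail.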
Mature Service Asset and Configuration Management processes are necessary for achieving predictability. A virtualized infrastructure adds complexity to the management of Configuration Items (CIs) due to the transient nature of the relationship between guests and hosts in the infrastructure. How is the relationship between CIs maintained in an environment that is potentially changing very frequently?

A service comprises software, platform, and infrastructure layers. Each layer provides a level of abstraction that is dependent on the layer beneath it. This abstraction hides the implementation and composition details of the layer. Access to the layer is provided through an interface, and as long as the fabric is available, the actual physical location of a hosted VM is irrelevant. To provide Infrastructure as a Service (IaaS), the configuration and relationships of the components within the fabric must be understood, whereas the details of the configuration within the VMs hosted by the fabric are irrelevant.

The Configuration Management System (CMS) will need to be partitioned, at a minimum, into physical and logical CI layers. Two Configuration Management Databases (CMDBs) might be used: one to manage the physical CIs of the fabric (facilities, network, storage, hardware, and hypervisor) and the other to manage the logical CIs (everything else). The CMS can be further partitioned by layer, with separate management of the infrastructure, platform, and software layers. The benefits and trade-offs of each approach are summarized below:

- CMS Partitioned by Layer
- CMS Partitioned into Physical and Logical

Table 2: Configuration Management System Options

Partitioning logical and physical CI information allows for greater stability within the CMS, because CIs will need to be changed less frequently. This means less effort will need to be expended to accurately maintain the information. During normal operations, mapping a VM to its physical host is irrelevant.
If historical records of a VM's location are needed (for example, for auditing or Root Cause Analysis), they can be traced through change logs. The physical, or fabric, CMDB will need to include a mapping of fault domains, upgrade domains, and Live Migration domains. The relationship of these patterns to the infrastructure CIs provides critical information to the Fabric Management System.

The Release and Deployment Management processes are responsible for ensuring that approved changes to a service can be built, tested, and deployed to meet specifications with minimal disruption to the service and the production environment. Where Change Management is based on the approval mechanism (determining what will be changed and why), Release and Deployment Management determines how those changes will be implemented.

The primary focus of Release and Deployment Management is to protect the production environment. The less variation there is in the environment, the greater the level of predictability, and therefore the lower the risk of causing harm when new elements are introduced. The concept of homogenization of physical infrastructure is derived from this predictability principle. If the physical infrastructure is completely homogenized, there is much greater predictability in the release and deployment process. While complete homogenization is the ideal, it may not be achievable in the real world. Homogenization is a continuum: the closer an environment gets to complete homogeneity, the more predictable it becomes and the fewer the risks. Full homogeneity means not only that identical hardware models are used, but that all hardware configuration is identical as well. When complete hardware homogeneity is not feasible, strive for configuration homogeneity wherever possible.

Figure 2: Homogenization Continuum

The Scale Unit concept drives predictability in Capacity Planning and agility in the release and deployment of physical infrastructure.
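The two-partition CMS described above can be sketched in miniature. The structure and names here are illustrative only: the fabric CMDB tracks hosts and their domain memberships, the logical CMDB tracks VMs, and the transient VM-to-host mapping is recovered from change logs rather than stored as a static CI relationship:

```python
# Illustrative two-partition CMS: physical (fabric) CIs vs. logical CIs.
fabric_cmdb = {
    "host-01": {"type": "hypervisor", "fault_domain": "FD1", "upgrade_domain": "UD1"},
    "host-02": {"type": "hypervisor", "fault_domain": "FD2", "upgrade_domain": "UD1"},
}

logical_cmdb = {
    "vm-web-01": {"type": "vm", "service": "storefront"},
}

# VM placement is transient, so it is logged as change events rather than
# maintained as a fixed relationship between the two CMDBs.
change_log = [
    {"ci": "vm-web-01", "event": "placed",   "host": "host-01", "t": 1},
    {"ci": "vm-web-01", "event": "migrated", "host": "host-02", "t": 2},
]

def host_of(vm: str, at_time: int) -> str:
    """Trace a VM's physical host at a point in time from the change log."""
    placements = [e for e in change_log if e["ci"] == vm and e["t"] <= at_time]
    return placements[-1]["host"] if placements else "unknown"

print(host_of("vm-web-01", at_time=1))  # host-01
print(host_of("vm-web-01", at_time=2))  # host-02
```

This keeps both partitions stable: neither CMDB changes when a VM migrates; only the append-only log grows.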
The hardware specifications and configurations have been pre-defined and tested, allowing for a more rapid deployment cycle than in a traditional data center. Similarly, known quantities of resources are added to the data center when the Capacity Plan is triggered. However, when the Scale Unit itself must change (for example, when a vendor retires a hardware model), a new risk is introduced to the private cloud. There will likely be a period where both the n and n-1 versions of the Scale Unit exist in the infrastructure, but steps can be taken to minimize the risk this creates. Work with hardware vendors to understand the life cycle of their products, and coordinate changes from multiple vendors to minimize iterations of the Scale Unit change. Also, upgrading to the new version of the Scale Unit should take place one Fault Domain at a time wherever possible. This will ensure that if an incident occurs with the new version, it can be isolated to a single Fault Domain.

Homogenization of the physical infrastructure means consistency and predictability for the VMs regardless of which physical host they reside on. This concept can be extended beyond the production environment: the fabric can be partitioned into development, test, and pre-production environments as well. Eliminating variability between environments enables developers to more easily optimize applications for a private cloud and gives testers more confidence that their results reflect the realities of production, which in turn should greatly improve testing efficiency. The virtualized infrastructure enables workloads to be transferred more easily between environments.

All VMs should be built from a common set of component templates housed in a library, which is used across all environments. This shared library includes templates for all components approved for production, such as VM images, the gold OS image, server role templates, and platform templates.
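The one-Fault-Domain-at-a-time rollout suggested above can be sketched as a loop that halts as soon as an upgraded domain comes up unhealthy, so any incident stays isolated. The names and health-check mechanism are invented for illustration:

```python
# Illustrative fault-domain-at-a-time rollout of a new Scale Unit version.
fault_domains = {"FD1": "v1", "FD2": "v1", "FD3": "v1"}

def upgrade_next_domain(domains: dict, new_version: str, health_check):
    """Upgrade one fault domain; stop the rollout if it comes up unhealthy."""
    for fd, version in domains.items():
        if version != new_version:
            domains[fd] = new_version
            if not health_check(fd):
                return None          # incident isolated to this single domain
            return fd                # this domain upgraded; resume later
    return "complete"

# A healthy rollout: three calls upgrade FD1, FD2, FD3 in turn.
for _ in range(3):
    upgrade_next_domain(fault_domains, "v2", health_check=lambda fd: True)
print(fault_domains)  # {'FD1': 'v2', 'FD2': 'v2', 'FD3': 'v2'}
```

Pausing between calls is where the operational judgment lives: the next domain is only touched once the previous one has been observed healthy for an acceptable soak period.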
These component templates are downloaded from the shared library and become the building blocks of the development environment. From development, these components are packaged together to create a test candidate package, in the form of a virtual hard disk (VHD), that is uploaded to the library. This test candidate package can then be deployed by booting the VHD in the test environment. When testing is complete, the package can again be uploaded to the library as a release candidate package for deployment into the pre-production environment, and ultimately into the production environment. Since workloads are deployed by booting a VM from a VHD, the Release Management process occurs very quickly through the transfer of VHD packages between environments. This also allows for rapid rollback should a deployment fail: the current release can be deleted and the VM booted off the previous VHD.

Virtualization and the use of standard VM templates allow us to rethink software updates and patch management. As there is minimal variation in the production environment and all services in production are built from a common set of component templates, patches need not be applied in production. Instead, they should be applied to the templates in the shared library. Any service in production using a patched template will require a new version release. The release package is then rebuilt, tested, and redeployed, as shown below.

Figure 3: The Release Process

This may seem counter-intuitive for a critical patch scenario, such as when an exploitable vulnerability is exposed. But with virtualization technologies and automated test scripts, a new version of a service can be built, tested, and deployed quite rapidly. Variation can also be reduced through standardized, automated test scenarios. While not every test scenario can or should be automated, tests that are automated will improve predictability and facilitate more rapid test and deployment timelines.
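The library-based promotion flow above amounts to a simple state machine: a VHD package moves from development through test and pre-production into production, with rollback meaning "boot the previous VHD." The stage names and class shape here are my own illustration of the pattern, not an actual Fabric Management API:

```python
# Illustrative promotion pipeline for a VHD-based release package.
STAGES = ["development", "test", "pre-production", "production"]

class ReleasePackage:
    """A VHD package that moves through environments via the shared library."""
    def __init__(self, name: str, vhd_version: int):
        self.name = name
        self.vhd_version = vhd_version
        self.stage = "development"

    def promote(self) -> str:
        """Upload to the library and deploy into the next environment."""
        nxt = STAGES.index(self.stage) + 1
        if nxt >= len(STAGES):
            raise ValueError("already in production")
        self.stage = STAGES[nxt]
        return self.stage

    def rollback(self, previous: "ReleasePackage") -> "ReleasePackage":
        """Rollback = delete this release and boot the previous VHD."""
        previous.stage = self.stage
        return previous

pkg = ReleasePackage("storefront", vhd_version=2)
pkg.promote()     # development -> test
pkg.promote()     # test -> pre-production
pkg.promote()     # pre-production -> production
print(pkg.stage)  # production
```

Because promotion is only a library transfer plus a boot, both forward movement and rollback are fast, which is what makes the "patch the template, rebuild, redeploy" approach practical even for urgent fixes.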
Test scenarios that are common to all applications, or that are shared by certain application patterns, are key candidates for automation. These automated test scripts may be required for all release candidates prior to deployment and would further reduce variation in the production environment.

Knowledge Management is the process of gathering, analyzing, storing, and sharing knowledge and information within an organization. The goal of Knowledge Management is to ensure that the right people have access to the information they need to maintain a private cloud. As operational knowledge expands and matures, the ability to intelligently automate operational tasks improves, providing for an increasingly dynamic environment.

An immature approach to Knowledge Management costs organizations in terms of slower, less efficient problem solving. Every problem or new situation that arises becomes a crisis that must be solved. A few people may have the prior experience to resolve the problem quickly and calmly, but their knowledge is not shared. Immature Knowledge Management creates greater stress for the operations staff and usually results in user dissatisfaction with frequent and lengthy unexpected outages.

Mature Knowledge Management processes are necessary for achieving a service provider's approach to delivering infrastructure. Past knowledge and experience is documented, communicated, and readily available when needed. Operating teams are no longer crisis-driven, as service-impacting events grow less frequent and are quickly resolved when they do occur.

When designing a private cloud, development of the Health Model will drive much of the information needed for Knowledge Management. The Health Model defines the ideal states for each infrastructure component and the daily, weekly, monthly, and as-needed tasks required to maintain this state.
The Health Model also defines unhealthy states for each infrastructure component and the actions to be taken to restore their health. This information will form the foundation of the Knowledge Management database. Aligning the Health Model with alerts allows each alert to contain a link to the Knowledge Management database entry describing the specific steps to be taken in response. This will help drive predictability, as a consistent, proven set of actions will be taken in response to each alert.

The final step toward achieving a private cloud is the automation of responses to each alert as defined in the Knowledge Management database. Once these responses are proven successful, they should be automated to the fullest extent possible. It is important to note, though, that automating responses to alerts does not make them invisible and forgotten. Even when alerts generate a fully automated response, they must be captured in the Service Management system. If the alert indicates the need for a change, a change record should be logged. Similarly, if the alert is in response to an incident, an incident record should be created. These automated workflows must be reviewed regularly by Operations staff to ensure the automated action achieves the expected result. Finally, as the environment changes over time, or as new knowledge is gained, the Knowledge Management database must be updated along with the automated workflows that are based on that knowledge.

The goal of Incident Management is to resolve events that are impacting, or threaten to impact, services as quickly as possible and with minimal disruption. The goal of Problem Management is to identify and resolve the root causes of incidents that have occurred, as well as to identify and prevent or minimize the impact of incidents that may occur. Pinpointing the root cause of an incident can become more challenging when workloads are abstracted from the infrastructure and their physical location changes frequently.
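The alert-to-knowledge-base linkage, and the rule that even automated responses leave a Service Management record, can be sketched as a toy handler. The knowledge base entries and names are invented for the example:

```python
# Toy alert handler: look up the Health Model / knowledge base entry for an
# alert, run the automated remediation, and always log a service record.
KNOWLEDGE_BASE = {
    "disk_space_low": {
        "steps": "Expand the volume from the storage pool.",
        "automated_response": lambda: "volume expanded",
    },
}

SERVICE_RECORDS = []  # incident/change records: never skipped, even when automated

def handle_alert(alert_type: str) -> str:
    entry = KNOWLEDGE_BASE.get(alert_type)
    if entry is None:
        # No proven response yet: record it and escalate to a human.
        SERVICE_RECORDS.append({"alert": alert_type, "status": "needs human analysis"})
        return "escalated"
    result = entry["automated_response"]()      # the proven, automated action
    SERVICE_RECORDS.append({"alert": alert_type, "status": "auto-resolved",
                            "action": result})  # captured for regular review
    return result

print(handle_alert("disk_space_low"))   # volume expanded
print(handle_alert("unknown_fault"))    # escalated
```

The review loop described in the text corresponds to periodically reading `SERVICE_RECORDS` and updating `KNOWLEDGE_BASE` as the environment or the proven responses change.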
Additionally, incident response teams may be unfamiliar with virtualization technologies (at least initially), which could also lead to delays in incident resolution. Finally, applications may neither have a robust Health Model nor expose all of the health information required for a proactive response. All of this may lead to an increase in reactive (user-initiated) incidents, which will likely increase the Mean Time to Restore Service (MTRS) and customer dissatisfaction. This may seem to go against the resiliency principle, but note that virtualization alone will not achieve the desired resiliency unless accompanied by a high level of IT Service Management (ITSM) maturity and a robust automated health monitoring system.

The drive for resiliency requires a different approach to troubleshooting incidents. Extensive troubleshooting of incidents in production negatively impacts resiliency. Therefore, if an incident cannot be quickly resolved, the service can be rolled back to the previous version, as described under Release and Deployment. Further troubleshooting can then be done in a test environment without impacting production. Troubleshooting in the production environment may be limited to moving the service to different hosts (ruling out infrastructure as the cause) and rebooting the VMs. If these steps do not resolve the issue, the rollback scenario can be initiated.

Minimizing human involvement in incident management is critical for achieving resiliency. The troubleshooting scenarios described earlier could be automated, which would allow for identification, and possibly resolution, of the root cause much more quickly than non-automated processes. But automation may mask the root cause of the incident. Careful consideration should be given to determining which troubleshooting steps should be automated and which require human analysis.
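The limited production troubleshooting sequence (migrate to rule out infrastructure, reboot, then roll back) lends itself to a simple automated runbook. This sketch uses invented method names purely to show the ordering and stop condition:

```python
# Illustrative automated triage for a failing service in production.
# Each step is tried in order; troubleshooting stops as soon as health returns.

def triage(service) -> str:
    """Run limited production troubleshooting; fall back to rollback."""
    steps = [
        ("migrate_to_new_host", service.migrate),   # rule out the infrastructure
        ("reboot_vms", service.reboot),             # rule out transient VM state
    ]
    for name, action in steps:
        action()
        if service.healthy():
            return name                             # resolved at this step
    service.rollback_to_previous_vhd()              # last resort: prior release
    return "rollback"

class FakeService:
    """Test double: becomes healthy only after a reboot."""
    def __init__(self): self._ok = False
    def migrate(self): pass
    def reboot(self): self._ok = True
    def healthy(self): return self._ok
    def rollback_to_previous_vhd(self): self._ok = True

print(triage(FakeService()))  # reboot_vms
```

Note the trade-off the text warns about: the returned step name records *where* health returned, but not *why* it was lost, so the automation's output should feed Problem Management rather than replace it.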
Human Analysis of Troubleshooting

If a compute resource fails, it is no longer necessary to treat the failure as an incident that must be fixed immediately. It may be more efficient and cost effective to treat the failure as part of the decay of the Resource Pool. Rather than treat a failed server as an incident that requires immediate resolution, treat it as a natural candidate for replacement on a regular maintenance schedule, or when the Resource Pool reaches a certain threshold of decay. Each organization must balance cost, efficiency, and risk as it determines an acceptable decay threshold – and choose among these courses of action:

The benefits and trade-offs of each of the options are listed below: Option 4 is the least desirable, as it does not take advantage of the resiliency and cost reduction benefits of a private cloud. A well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay. Option 1 is the most recommended approach. A predictable maintenance schedule allows for better procurement planning and can help avoid conflicts with other maintenance activities, such as software upgrades. Again, a well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay and minimize the risk of exceeding critical thresholds before the scheduled maintenance. Option 3 will likely be the only option for self-contained Scale Unit scenarios, as the container must be replaced as a single Scale Unit when the decay threshold is reached.

The goal of Request Fulfillment is to manage requests for service from users. Users should have a clear understanding of the process they need to initiate to request service and IT should have a consistent approach for managing these requests. Much like any service provider, IT should clearly define the types of requests available to users in the service catalog.
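A service catalog entry, as described above, pairs each request type with a fulfillment SLA and a visible cost. A minimal sketch, with hypothetical entries and figures:

```python
# Hypothetical sketch of a service catalog: each request type carries a
# fulfillment SLA and a daily cost, both exposed to the requester and to
# whoever pays the bill. Entries and prices are illustrative only.

CATALOG = {
    "new_vm":        {"sla_days": 1, "daily_cost": 2.50},
    "extra_storage": {"sla_days": 2, "daily_cost": 0.10},
}

def describe(request_type: str) -> str:
    """Render the SLA and cost line shown on the request form."""
    entry = CATALOG[request_type]
    return (f"{request_type}: fulfilled within {entry['sla_days']} day(s), "
            f"{entry['daily_cost']:.2f}/day")
```

Keeping the cost on the request form itself, rather than buried in a back-office system, is what makes the cost "easily understood" in the sense the guide intends.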
The service catalog should include an SLA on when the request will be completed, as well as the cost of fulfilling the request, if any. The types of requests available and their associated costs should reflect the actual cost of completing the request, and this cost should be easily understood. For example, if a user requests an additional VM, its daily cost should be noted on the request form, which should also be exposed to the organization or person responsible for paying the bill.

It is relatively easy to see the need for adding resources, but more difficult to see when a resource is no longer needed. A process for identifying and removing unused VMs should be put into place. There are a number of strategies to do this, depending on the needs of a given organization, such as: The benefits and trade-offs of each of these approaches are detailed below: Option 4 affords the greatest flexibility, while still working to minimize server sprawl. When a user requests a VM, they have the option of setting an expiration date with no reminder (for example, if they know they will only be using the workload for one week). They could set an expiration deadline with a reminder (for example, a reminder that the VM will expire after 90 days unless they wish to renew). Lastly, the user may request no expiration date if they expect the workload will always be needed. If the last option is chosen, it is likely that underutilized VMs will still be monitored and owners notified.

Finally, self-provisioning should be considered, if appropriate, when evaluating request fulfillment options to drive towards minimal human involvement. Self-provisioning allows great agility and user empowerment, but it can also introduce risks depending on the nature of the environment in which these VMs are introduced. For an enterprise organization, the risk of bypassing formal build, stabilize, and deploy processes may or may not outweigh the agility benefits gained from the self-provisioning option.
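The expiration options described above — expire with no reminder, expire with a reminder, or no expiration with utilization monitoring — can be expressed as a simple policy check. The seven-day reminder window and the record fields are assumptions for illustration:

```python
# Hypothetical sketch of a VM expiration policy check. A VM record may
# carry an "expires" date and an optional "remind" flag; a VM with no
# expiration falls back to utilization monitoring instead.
from datetime import date, timedelta

def vm_status(vm: dict, today: date) -> str:
    """Classify a VM record against its expiration policy."""
    expires = vm.get("expires")
    if expires is None:
        return "monitor_utilization"          # no expiry: watch for idleness
    if today >= expires:
        return "expire"                       # candidate for decommissioning
    if vm.get("remind") and today >= expires - timedelta(days=7):
        return "remind_owner"                 # 7-day window is an assumption
    return "active"
```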
Without strong governance to make sure each VM has an end-of-life strategy, the fabric may become congested with VM server sprawl. The pros and cons of self-provisioning options are listed in the next diagram: The primary decision point for determining whether to use self-provisioning is the nature of the environment. Allowing developers to self-provision into the development environment greatly facilitates agile development, and allows the enterprise to maintain release management controls as these workloads are moved out of development and into test and production environments. A user-led community environment isolated from enterprise mission-critical applications may also be a good candidate for self-provisioning. As long as user actions are isolated and cannot impact mission critical applications, the agility and user empowerment may justify the risk of giving up control of release management. Again, it is essential that in such a scenario, expiration timers are included to prevent server sprawl.

The goal of Access Management is to make sure authorized users have access to the services they need while preventing access by unauthorized users. Access Management is the implementation of security policies defined by Information Security Management at the Service Delivery Layer. Maintaining access for authorized users is critical for achieving the perception of continuous availability. Besides allowing access, Access Management defines users who are allowed to use, configure, or administer objects in the Management Layer. From a provider’s perspective, it answers questions like: From a consumer’s perspective, it answers questions such as: Access Management is implemented at several levels and can include physical barriers to systems such as requiring access smartcards at the data center, or virtual barriers such as network and Virtual Local Area Network (VLAN) separation, firewalling, and access to storage and applications.
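Logical access control of the kind described — who may use, configure, or administer which objects — reduces at its core to a permission lookup. A sketch with hypothetical tenants and roles:

```python
# Hypothetical sketch of logical access control for shared infrastructure:
# a (tenant, role) pair maps to the set of operations it may perform on
# Management Layer objects. Tenants, roles, and operations are illustrative.

PERMISSIONS = {
    ("tenant_a", "admin"):    {"use", "configure", "administer"},
    ("tenant_a", "operator"): {"use", "configure"},
    ("tenant_b", "user"):     {"use"},
}

def is_allowed(tenant: str, role: str, operation: str) -> bool:
    """Deny by default: unknown tenants and roles get no permissions."""
    return operation in PERMISSIONS.get((tenant, role), set())
```

Deny-by-default is the design choice worth noting: an unrecognized tenant or role receives the empty permission set rather than an error path that might be mishandled.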
Taking a service provider’s approach to Access Management will also make sure that resource segmentation and multi-tenancy are addressed. Resource Pools may need to be segmented to address security concerns around confidentiality, integrity, and availability. Some tenants may not wish to share infrastructure resources to keep their environment isolated from others. Access Management of shared infrastructure requires logical access control mechanisms such as encryption, access control rights, user groupings, and permissions. Dedicated infrastructure also relies on physical access control mechanisms, where infrastructure is not physically connected, but is effectively isolated through a firewall or other mechanisms.

The goal of systems administration is to make sure that the daily, weekly, monthly, and as-needed tasks required to keep a system healthy are being performed. Regularly performing ongoing systems administration tasks is critical for achieving predictability. As the organization matures and the Knowledge Management database becomes more robust and increasingly automated, systems administration tasks cease to be a distinct job role. It is important to keep this in mind as an organization moves to a private cloud. Staff once responsible for systems administration should refocus on automation and scripting skills – and on monitoring the fabric to identify patterns that indicate possibilities for ongoing improvement of existing automated workflows.
Let f and g be two differentiable functions. We will say that f and g are proportional if and only if there exists a constant C such that f = C·g. Clearly any function is proportional to the zero-function. If the constant C is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. The following statements are equivalent:
(1) f and g are proportional;
(2) f(x)g'(x) − f'(x)g(x) = 0 for every x.
Therefore, we have the following: define the Wronskian of f and g to be W(f,g), that is
W(f,g)(x) = f(x)g'(x) − f'(x)g(x).
The following formula is very useful (see reduction of order technique): wherever g(x) ≠ 0,
(f/g)'(x) = −W(f,g)(x) / g(x)².
Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions, while the Wronskian does generalize: for n functions it is the determinant of the n×n matrix whose rows are the functions and their derivatives up to order n−1.
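The criterion can be checked numerically. A small sketch in plain Python (no symbolic algebra): the Wronskian of a proportional pair vanishes, while the independent pair sin and cos gives W = −sin² − cos² = −1:

```python
# Numeric check of W(f, g)(x) = f(x) g'(x) - f'(x) g(x).
# The derivatives are supplied by hand rather than computed symbolically.
import math

def wronskian(f, fp, g, gp, x):
    """W(f, g)(x) = f(x) g'(x) - f'(x) g(x)."""
    return f(x) * gp(x) - fp(x) * g(x)

# Proportional pair: g = 3 f, so the Wronskian vanishes at every x.
w_prop = wronskian(math.sin, math.cos,
                   lambda x: 3 * math.sin(x), lambda x: 3 * math.cos(x), 1.2)

# Independent pair: f = sin, g = cos gives W = -sin^2 - cos^2 = -1.
w_indep = wronskian(math.sin, math.cos,
                    math.cos, lambda x: -math.sin(x), 1.2)
```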
Published May 2008

Properly located digital signage in high traffic areas on school campuses provides students and faculty with a convenient resource to stay up to date about the latest school news and activities.

Signage in Education
By Anthony D. Coppedge

Technology gets high marks. Digital media and communications have come to play a vital role in people’s everyday lives, and a visit to the local K-12 school, college or university campus quickly illustrates the many ways in which individuals rely on audio and visual technologies each day. The shift from analog media to digital, represented by milestones ranging from the replacement of the Walkman by the MP3 player to the DTV transition currently enabling broadcasts beyond the home to mobile devices, has redefined the options that larger institutions, including those in our educational system, have for sharing information across the campus and facilities.

Flexible And Efficient

Digital signage, in particular, is proving to be a flexible and efficient tool for delivering specific and up-to-date information within the educational environment. As a high-resolution, high-impact medium, it lives up to the now-widespread expectation that visual media be crisp and clear, displayed on a large screen. Although the appeal of implementing digital signage networks does stem, in part, from plummeting screen prices and sophisticated content delivery systems, what’s equally or more important is that digital signage provides valuable information to the people who need it, when and where they need it. On school campuses—whether preschool, elementary, high school or post-secondary institutions—it does so effectively, for both educational purposes and for the security and safety of staff, administration and the student body as a whole.
School campuses have begun leveraging digital signage technology in addition to, or in place of, printed material, such as course schedules, content and location; time-sensitive school news and updates; maps and directions; welcome messages for visitors and applicants; and event schedules. Digital signage simplifies creation and delivery of multiple channels of targeted content to different displays on the network. Although a display in the college admissions office might provide prospective students with a glimpse into student life, for example, another display outside a lab or seminar room might present the courses or lectures scheduled for that space throughout the day.

This model of a distribution concept illustrates a school distributing educational content over a public TV broadcast network.

At the K-12 level, digital signage makes it easy to deliver information such as team or band practice schedules, or to post the cafeteria menu and give students information encouraging sound food choices. Digital signage in the preschool and daycare setting makes it easy for teachers and caregivers to share targeted educational programming with their classes. Among the most striking benefits of communicating through digital signage is the quality of the pictures and the flexibility with which images, text and video can be combined in one or more windows to convey information. Studies have shown that dynamic signage is noticed significantly more often than are static displays and, furthermore, that viewers are more likely to remember that dynamic content. Though most regularly updated digital signage content tends to be text-based, digital signage networks also have the capacity to enable the live campus-wide broadcast of key events: a speech by a visiting dignitary, the basketball team’s first trip to the state or national tournament, or even the proceedings at commencement and graduation.
Live broadcast is especially valuable when time is short, when it’s impractical to gather the entire student body in one place, or when there simply isn’t the time or means to deliver the live message in any other way. The ability to share critical information with the entire school community, clearly and without delay, has made digital signage valuable as a tool for emergency response and communications. Parents, administrators, teachers and students today can’t help but be concerned about the school’s ability to respond quickly and effectively to a dangerous situation, whether the threat be from another person, an environmental hazard, an unpredictable weather system or some other menace. Digital signage screens installed across a school campus can be updated immediately to warn students and staff of the danger, and to provide unambiguous instructions for seeking shelter or safety: where to go and what to do.

Although early digital signage systems relied on IP-based networks and point-to-point connections between a player and each display, current solutions operate on far less costly and much more scalable platforms. Broadcast-based digital signage models allow content to be distributed remotely from a single data source via transport media, such as digital television broadcast, satellite, broadband and WiMAX. The staff member responsible for maintaining the digital signage network can use popular content creation toolsets to populate both dynamic and static displays. This content is uploaded to a server that, in turn, feeds the digital signage network via broadcast, much like datacasting, to the receive site for playout. By slotting specific content into predefined display templates, each section with its own playlist, the administrator can schedule display of multiple elements simultaneously or a single-window static, video or animated display. The playlist enables delivery of the correct elements to the targeted display both at the scheduled time and in the appropriate layout.
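The template-and-playlist targeting just described amounts to looking up which content is scheduled for a given display at a given time. A sketch with a hypothetical schedule format; real signage systems use their own data models:

```python
# Hypothetical sketch of playlist targeting: each display has time slots
# (start hour, end hour, content id); anything unscheduled falls back to
# a default channel. Display names and slots are illustrative.

SCHEDULE = {
    "admissions_lobby": [(8, 18, "campus_life_loop")],
    "seminar_room_2":   [(8, 12, "morning_lectures"),
                         (12, 18, "afternoon_lectures")],
}

def content_for(display: str, hour: int, default: str = "school_news") -> str:
    """Return the content scheduled for this display at this hour."""
    for start, end, content in SCHEDULE.get(display, []):
        if start <= hour < end:
            return content
    return default
```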
In networks with multicast-enabled routers, the administrator can schedule unique content for displays in different locations. In the case of delivering emergency preparedness or response information across a campus, content can be created through the same back-office software used for day-to-day digital signage displays. Within the broadcast-based model, three components ensure the smooth delivery of content to each display. A transmission component serves as a content hub, allocating bandwidth and inserting content into the broadcast stream based on the schedule dictated by the network’s content management component. Content is encapsulated into IP packets that, in turn, are encapsulated into MPEG2 packets for delivery.

Generic content distribution model for digital signage solution.

The content management component of the digital signage network provides for organization and scheduling of content, as well as targeting of that content to specific receivers. Flexibility in managing the digital signage system enables distribution of the same emergency message across all receivers and associated displays, or the delivery of select messages to particular displays within the larger network. With tight control over the message being distributed, school administrators can immediately provide the information that students and staff in different parts of the campus need to maintain the safest possible environment. Receivers can be set to confirm receipt of content, in turn assuring administrative and emergency personnel that their communications are, in fact, being conveyed as intended. On the receiving end, the third component of the system—the receiver—extracts content from the digital broadcast stream and feeds it to the display screen. The relationships that many colleges and universities share with public TV stations provide an excellent opportunity for establishing a digital signage network.
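The encapsulation step mentioned above — content into IP packets, then into MPEG-2 packets — can be illustrated with the fixed 188-byte transport-stream packet size (a 4-byte header and 184-byte payload, led by the 0x47 sync byte). The header here is simplified to the sync byte plus padding; a real TS header also carries PID and continuity fields:

```python
# Simplified sketch of MPEG-2 transport-stream style packetization:
# split a content payload into 184-byte chunks and prepend a 4-byte
# header to each, producing fixed 188-byte packets.

TS_PACKET_SIZE = 188
TS_PAYLOAD_SIZE = 184  # 188 minus the 4-byte header

def packetize(content: bytes) -> list:
    packets = []
    for i in range(0, len(content), TS_PAYLOAD_SIZE):
        chunk = content[i:i + TS_PAYLOAD_SIZE]
        chunk = chunk.ljust(TS_PAYLOAD_SIZE, b"\xff")  # pad the final packet
        header = b"\x47" + bytes(3)                    # 0x47 sync byte only
        packets.append(header + chunk)
    return packets
```

The fixed packet size is what lets receivers resynchronize mid-stream: every 188th byte is a candidate sync byte.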
Today, the deployed base of broadcast-based content distribution systems in public TV stations is capable of reaching 50% of the US population. These stations’ DTV bandwidth is used not only for television programming, but also to generate new revenues and aggressively support public charters by providing efficient delivery of multimedia content for education, homeland security and other public services. Educational institutions affiliated with such broadcasters already have the technology, and much of the necessary infrastructure, in place to launch a digital signage network. In taking advantage of the public broadcaster’s content delivery system, the college or university also can tap into the station’s existing links with area emergency response agencies. As digital signage technology continues to evolve, educational institutions will be able to extend both urgent alerts and more mundane daily communications over text and email messaging. Smart content distribution systems will push consistent information to screens of all sizes, providing messages not only to displays, but also to the cell phones and PDAs so ubiquitous in US schools. The continued evolution of MPH technology will support this enhancement in delivery of messages directly to each student. MPH in-band mobile DTV technology leverages ATSC DTV broadcasts to enable extensions of digital signage and broadcast content directly to personal devices, whether stationary or on the move. Rather than rely on numerous unrelated systems, such as ringing bells, written memos and intercom announcements, schools can unify messaging and its delivery, in turn reducing the redundancy involved in maintaining communications with the student body. 
An effective digital signage network provides day-to-day benefits for an elementary school, high school, college or university while providing invaluable emergency communications capabilities that increasingly are considered a necessity, irrespective of whether they get put to the test. The selection of an appropriate digital signage model depends, of course, on the needs of the organization. Educational institutions share many of the same concerns held by counterparts in the corporate world, and key among those concerns is the simple matter of getting long-term value and use out of their technical investments. However, before even addressing the type of content the school wishes to create and distribute, the systems integrator, consultant or other AV and media professional should work with the eventual operators of the digital signage network to identify and map out the existing workflow. Once the system designer, integrator or installer has evaluated how staff currently work in an emergency to distribute information, he then can adjust established processes and adapt them to the digital signage model. The administrative staff who will be expected to update or import schedules to the digital signage system will have a much lower threshold of acceptance for a workflow that is completely unfamiliar or at odds with all their previous experience. An intuitive, easy-to-use system is more likely to be used in an emergency if it has become familiar in everyday practice. Turnkey digital signage solutions provide end-to-end functionality without forcing users and integrators to work with multiple systems and interfaces. The key in selecting a vendor lies in ensuring that they share the same vision and are moving in the same direction as the end user. 
In addition to providing ease of use, digital signage solutions for the education market also must provide a high level of built-in security, preventing abuse or misuse by hackers, or by those without the knowledge, experience or authority to distribute content over the network. Because the network is a conduit for emergency messaging, its integrity must be protected. So, the installer must not only identify the number of screens to be used and where, but also determine who gets access to the system and how that access remains secure. Scalable systems that can grow in number of displays or accommodate infrastructure improvements and distribution of higher-bandwidth content will provide the long-term utility that makes the investment worthwhile. By going into the project with an understanding of existing infrastructure, such as cabling, firewalls, etc., and the client’s goals, the professional is equipped to advise the customer as to the necessity, options and costs for enhancing or improving on that infrastructure. As with any other significant deployment of AV technology, the installation of a digital signage network also requires knowledge of the site, local building codes, the availability of power and so forth.

Ralph Bachofen, senior director of Product Management and Marketing, Triveni Digital, has more than 15 years of experience in voice and multimedia over Internet Protocol (IP), telecommunications and the semiconductor business.

The infrastructure requirements of a school in deploying a digital signage network will vary, depending on the type of content being delivered through the system. HD and streaming content clearly are bandwidth hogs, whereas tickers and other text-based messages put a low demand on bandwidth. Most facilities today are equipped with Gigabit Ethernet networks that can handle the demands of live video delivery and lighter content.
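A back-of-the-envelope calculation makes the bandwidth contrast concrete. The 30% headroom and the per-stream bit rates below are illustrative assumptions, not measured requirements:

```python
# Rough sketch: how many simultaneous streams fit on a Gigabit Ethernet
# backbone if 70% of the link is budgeted for signage traffic? All
# figures are illustrative assumptions.

LINK_BPS = 10**9                      # Gigabit Ethernet
USABLE_BPS = LINK_BPS * 7 // 10       # keep 30% headroom for other traffic

def max_streams(stream_bps: int) -> int:
    return USABLE_BPS // stream_bps

hd_streams = max_streams(8 * 10**6)       # ~8 Mb/s HD video stream
ticker_streams = max_streams(10 * 10**3)  # ~10 kb/s text ticker
```

On these assumptions, the same backbone that carries a few dozen HD feeds could serve tens of thousands of text tickers, which is why text-heavy networks tolerate far more modest infrastructure.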
However, even bandwidth-heavy video can be delivered by less robust networks, as larger clips can be “trickled” over time to the site, as long as storage on the unit is adequate. There is no set standard for the bandwidth required, just as there is no single way to use a digital signage solution. It all depends on how the system will be used, and that’s an important detail to address up front. Most digital signage solutions feature built-in content-creation tools and accept content from third-party applications, as well. Staff members who oversee the system thus can use familiar applications to create up-to-date content for the school’s digital signage network. This continuity in workflow adds to the value and efficiency of the network in everyday use, reducing the administrative burden while serving as a safeguard in the event of an emergency. For educational institutions, the enormous potential of the digital signage network can open new doors for communicating with students and staff, but only if it is put to use effectively. Comprehensive digital signage solutions offer ease of use to administration, deliver clear and useful messaging on ordinary days and during crises, and feature robust design and underlying technology that supports continual use well into the future.
How We Found the Missing Memristor

The memristor--the functional equivalent of a synapse--could revolutionize circuit design

Image: Bryan Christie Design

THINKING MACHINE: This artist's conception of a memristor shows a stack of multiple crossbar arrays, the fundamental structure of R. Stanley Williams's device. Because memristors behave functionally like synapses, replacing a few transistors in a circuit with memristors could lead to analog circuits that can think like a human brain.

It’s time to stop shrinking. Moore’s Law, the semiconductor industry’s obsession with the shrinking of transistors and their commensurate steady doubling on a chip about every two years, has been the source of a 50-year technical and economic revolution. Whether this scaling paradigm lasts for five more years or 15, it will eventually come to an end. The emphasis in electronics design will have to shift to devices that are not just increasingly infinitesimal but increasingly capable. Earlier this year, I and my colleagues at Hewlett-Packard Labs, in Palo Alto, Calif., surprised the electronics community with a fascinating candidate for such a device: the memristor. It had been theorized nearly 40 years ago, but because no one had managed to build one, it had long since become an esoteric curiosity. That all changed on 1 May, when my group published the details of the memristor in Nature. Combined with transistors in a hybrid chip, memristors could radically improve the performance of digital circuits without shrinking transistors. Using transistors more efficiently could in turn give us another decade, at least, of Moore’s Law performance improvement, without requiring the costly and increasingly difficult doublings of transistor density on chips. In the end, memristors might even become the cornerstone of new analog circuits that compute using an architecture much like that of the brain.
For nearly 150 years, the known fundamental passive circuit elements were limited to the capacitor (discovered in 1745), the resistor (1827), and the inductor (1831). Then, in a brilliant but underappreciated 1971 paper, Leon Chua, a professor of electrical engineering at the University of California, Berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. He proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental.

Memristor is a contraction of "memory resistor," because that is exactly its function: to remember its history. A memristor is a two-terminal device whose resistance depends on the magnitude and polarity of the voltage applied to it and the length of time that voltage has been applied. When you turn off the voltage, the memristor remembers its most recent resistance until the next time you turn it on, whether that happens a day later or a year later.

Think of a resistor as a pipe through which water flows. The water is electric charge. The resistor’s obstruction of the flow of charge is comparable to the diameter of the pipe: the narrower the pipe, the greater the resistance. For the history of circuit design, resistors have had a fixed pipe diameter. But a memristor is a pipe that changes diameter with the amount and direction of water that flows through it. If water flows through this pipe in one direction, it expands (becoming less resistive). But send the water in the opposite direction and the pipe shrinks (becoming more resistive). Further, the memristor remembers its diameter when water last went through. Turn off the flow and the diameter of the pipe "freezes" until the water is turned back on. That freezing property suits memristors brilliantly for computer memory. The ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory.
That might not sound like very much, but go ahead and pop the battery out of your laptop, right now—no saving, no quitting, nothing. You’d lose your work, of course. But if your laptop were built using a memory based on memristors, when you popped the battery back in, your screen would return to life with everything exactly as you left it: no lengthy reboot, no half-dozen auto-recovered files. But the memristor’s potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. Within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. Many research groups have been working toward a brain in silico: IBM’s Blue Brain project, Howard Hughes Medical Institute’s Janelia Farm, and Harvard’s Center for Brain Science are just three. However, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. A digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants. Memristors can be made extremely small, and they function like synapses. Using them, we will be able to build analog electronic circuits that could fit in a shoebox and function according to the same physical principles as a brain. A hybrid circuit—containing many connected memristors and transistors—could help us research actual brain function and disorders. Such a circuit might even lead to machines that can recognize patterns the way humans can, in those critical ways computers can’t—for example, picking a particular face out of a crowd even if it has changed significantly since our last memory of it. The story of the memristor is truly one for the history books. 
When Leon Chua, now an IEEE Fellow, wrote his seminal paper predicting the memristor, he was a newly minted and rapidly rising professor at UC Berkeley. Chua had been fighting for years against what he considered the arbitrary restriction of electronic circuit theory to linear systems. He was convinced that nonlinear electronics had much more potential than the linear circuits that dominate electronics technology to this day.

Chua discovered a missing link in the pairwise mathematical equations that relate the four circuit quantities—charge, current, voltage, and magnetic flux—to one another. These can be related in six ways. Two are connected through the basic physical laws of electricity and magnetism, and three are related by the known circuit elements: resistors connect voltage and current, inductors connect flux and current, and capacitors connect voltage and charge. But one equation is missing from this group: the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit—or more subtly, a mathematical doppelgänger defined by Faraday’s Law as the time integral of the voltage across the circuit. This distinction is the crux of a raging Internet debate about the legitimacy of our memristor [see sidebar, "Resistance to Memristance"].

Chua’s memristor was a purely mathematical construct that had more than one physical realization. What does that mean? Consider a battery and a transformer. Both provide identical voltages—for example, 12 volts of direct current—but they do so by entirely different mechanisms: the battery by a chemical reaction going on inside the cell and the transformer by taking a 110 V ac input, stepping that down to 12 V ac, and then transforming that into 12 V dc. The end result is mathematically identical—both will run an electric shaver or a cellphone, but the physical source of that 12 V is completely different.
Conceptually, it was easy to grasp how electric charge could couple to magnetic flux, but there was no obvious physical interaction between charge and the integral over the voltage. Chua demonstrated mathematically that his hypothetical device would provide a relationship between flux and charge similar to what a nonlinear resistor provides between voltage and current. In practice, that would mean the device’s resistance would vary according to the amount of charge that passed through it. And it would remember that resistance value even after the current was turned off. He also noticed something else—that this behavior reminded him of the way synapses function in a brain.

Even before Chua had his eureka moment, however, many researchers were reporting what they called "anomalous" current-voltage behavior in the micrometer-scale devices they had built out of unconventional materials, like polymers and metal oxides. But the idiosyncrasies were usually ascribed to some mystery electrochemical reaction, electrical breakdown, or other spurious phenomenon attributed to the high voltages that researchers were applying to their devices. As it turns out, a great many of these reports were unrecognized examples of memristance. After Chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at HP Labs, and we only really understood the device about two years ago. So what took us so long? It’s all about scale. We now know that memristance is an intrinsic property of any electronic circuit. Its existence could have been deduced by Gustav Kirchhoff or by James Clerk Maxwell, if either had considered nonlinear circuits in the 1800s. But the scales at which electronic devices have been built for most of the past two centuries have prevented experimental observation of the effect.
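A charge-controlled resistance of the kind Chua described can be sketched with a simple linear state model: the memristance moves between a low and a high value in proportion to the accumulated charge. The model form and all parameter values below are illustrative assumptions, not measurements of any real device:

```python
# Sketch of a charge-controlled memristor with a linear state variable:
#   M(q) = R_off - (R_off - R_on) * w(q),  w(q) = clamp(q / Q_MAX, 0, 1)
# Driving charge through in one direction "opens the pipe" toward R_on;
# zero accumulated charge leaves it at its high-resistance state R_off.

R_ON, R_OFF = 100.0, 16000.0   # ohms (illustrative values)
Q_MAX = 1e-4                   # charge (C) that fully switches the device

def memristance(q: float) -> float:
    w = min(max(q / Q_MAX, 0.0), 1.0)   # clamp the state variable to [0, 1]
    return R_OFF - (R_OFF - R_ON) * w

low = memristance(Q_MAX)    # fully switched: resistance is R_ON
high = memristance(0.0)     # no charge passed: resistance stays at R_OFF
```

The memory effect is exactly the dependence on q: turn the current off and q stops changing, so the device "freezes" at its last resistance, as the pipe analogy earlier in the article describes.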
It turns out that the influence of memristance obeys an inverse square law: memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it’s essentially unobservable at the millimeter scale and larger. As we build smaller and smaller devices, memristance is becoming more noticeable and in some cases dominant. That’s what accounts for all those strange results researchers have described. Memristance has been hidden in plain sight all along. But in spite of all the clues, our finding the memristor was completely serendipitous. In 1995, I was recruited to HP Labs to start up a fundamental research group that had been proposed by David Packard. He decided that the company had become large enough to dedicate a research group to long-term projects that would be protected from the immediate needs of the business units. Packard had an altruistic vision that HP should ”return knowledge to the well of fundamental science from which HP had been withdrawing for so long.” At the same time, he understood that long-term research could be the strategic basis for technologies and inventions that would directly benefit HP in the future. HP gave me a budget and four researchers. But beyond the comment that ”molecular-scale electronics” would be interesting and that we should try to have something useful in about 10 years, I was given carte blanche to pursue any topic we wanted. We decided to take on Moore’s Law. At the time, the dot-com bubble was still rapidly inflating its way toward a resounding pop, and the existing semiconductor road map didn’t extend past 2010. The critical feature size for the transistors on an integrated circuit was 350 nanometers; we had a long way to go before atomic sizes would become a limitation. And yet, the eventual end of Moore’s Law was obvious. 
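The quoted factor of a million checks out arithmetically; a quick sketch of the inverse-square scaling (this takes the scaling law from the text at face value rather than deriving it from device physics):

```python
def memristance_scale_factor(size_from_m, size_to_m):
    """Relative importance of memristance when feature size shrinks,
    assuming the inverse-square scaling described in the text."""
    return (size_from_m / size_to_m) ** 2

micro_to_nano = memristance_scale_factor(1e-6, 1e-9)  # ~1e6: "a million times"
milli_to_nano = memristance_scale_factor(1e-3, 1e-9)  # why it hides at mm scale
```

Shrinking from a micrometer to a nanometer is a factor of 1000 in length, hence a factor of a million in memristance; at the millimeter scale the same arithmetic makes it a billion times weaker still.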
Someday semiconductor researchers would have to confront physics-based limits to their relentless descent into the infinitesimal, if for no other reason than that a transistor cannot be smaller than an atom. (Today the smallest components of transistors on integrated circuits are roughly 45 nm wide, or about 220 silicon atoms.) That’s when we started to hang out with Phil Kuekes, the creative force behind the Teramac (tera-operation-per-second multiarchitecture computer)—an experimental supercomputer built at HP Labs primarily from defective parts, just to show it could be done. He gave us the idea to build an architecture that would work even if a substantial number of the individual devices in the circuit were dead on arrival. We didn’t know what those devices would be, but our goal was electronics that would keep improving even after the devices got so small that defective ones would become common. We ate a lot of pizza washed down with appropriate amounts of beer and speculated about what this mystery nanodevice would be. We were designing something that wouldn’t even be relevant for another 10 to 15 years. It was possible that by then devices would have shrunk down to the molecular scale envisioned by David Packard or perhaps even be molecules. We could think of no better way to anticipate this than by mimicking the Teramac at the nanoscale. We decided that the simplest abstraction of the Teramac architecture was the crossbar, which has since become the de facto standard for nanoscale circuits because of its simplicity, adaptability, and redundancy. The crossbar is an array of perpendicular wires. Anywhere two wires cross, they are connected by a switch. To connect a horizontal wire to a vertical wire at any point on the grid, you must close the switch between them. Our idea was to open and close these switches by applying voltages to the ends of the wires. 
Note that a crossbar array is basically a storage system, with an open switch representing a zero and a closed switch representing a one. You read the data by probing the switch with a small voltage. Like everything else at the nanoscale, the switches and wires of a crossbar are bound to be plagued by at least some nonfunctional components. These components will be only a few atoms wide, and the second law of thermodynamics ensures that we will not be able to completely specify the position of every atom. However, a crossbar architecture builds in redundancy by allowing you to route around any parts of the circuit that don’t work. Because of their simplicity, crossbar arrays have a much higher density of switches than a comparable integrated circuit based on transistors. But implementing such a storage system was easier said than done. Many research groups were working on such a cross-point memory—and had been since the 1950s. Even after 40 years of research, they had no product on the market. Still, that didn’t stop them from trying. That’s because the potential for a truly nanoscale crossbar memory is staggering; picture carrying around the entire Library of Congress on a thumb drive. One of the major impediments for prior crossbar memory research was the small off-to-on resistance ratio of the switches (40 years of research had never produced anything surpassing a factor of 2 or 3). By comparison, modern transistors have an off-to-on resistance ratio of 10 000 to 1. We calculated that to get a high-performance memory, we had to make switches with a resistance ratio of at least 1000 to 1. In other words, in its off state, a switch had to be 1000 times as resistive to the flow of current as it was in its on state. What mechanism could possibly give a nanometer-scale device a three-orders-of-magnitude resistance ratio? We found the answer in scanning tunneling microscopy (STM), an area of research I had been pursuing for a decade. 
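To make the storage idea concrete, here is a toy sketch of reading bits out of a crossbar of resistive switches (all numbers are illustrative; a real array must also contend with sneak-path currents and device variation):

```python
READ_VOLTAGE = 0.1      # volts: small enough not to disturb a switch's state
R_ON, R_OFF = 1e3, 1e6  # ohms: the ~1000:1 off-to-on ratio the text calls for

def read_bit(resistance, v=READ_VOLTAGE):
    """Probe one cross-point with a small voltage and threshold the current."""
    current = v / resistance
    threshold = v / (30 * R_ON)  # sits comfortably between on and off currents
    return 1 if current > threshold else 0

# A 2x2 array storing the pattern 0 1 / 1 0 (closed switch = low resistance = 1).
crossbar = [[R_OFF, R_ON],
            [R_ON, R_OFF]]
bits = [[read_bit(r) for r in row] for row in crossbar]
```

With only the 2:1 or 3:1 ratios of the earlier research, the on and off currents would nearly overlap and no clean threshold would exist; that is why the 1000:1 target mattered.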
A tunneling microscope generates atomic-resolution images by scanning a very sharp needle across a surface and measuring the electric current that flows between the atoms at the tip of the needle and the surface the needle is probing. The general rule of thumb in STM is that moving that tip 0.1 nm closer to a surface increases the tunneling current by one order of magnitude. We needed some similar mechanism by which we could change the effective spacing between two wires in our crossbar by 0.3 nm. If we could do that, we would have the 1000:1 electrical switching ratio we needed. Our constraints were getting ridiculous. Where would we find a material that could change its physical dimensions like that? That is how we found ourselves in the realm of molecular electronics. Conceptually, our device was like a tiny sandwich. Two platinum electrodes (the intersecting wires of the crossbar junction) functioned as the ”bread” on either end of the device. We oxidized the surface of the bottom platinum wire to make an extremely thin layer of platinum dioxide, which is highly conducting. Next, we assembled a dense film, only one molecule thick, of specially designed switching molecules. Over this ”monolayer” we deposited a 2- to 3-nm layer of titanium metal, which bonds strongly to the molecules and was intended to glue them together. The final layer was the top platinum electrode. The molecules were supposed to be the actual switches. We built an enormous number of these devices, experimenting with a wide variety of exotic molecules and configurations, including rotaxanes, special switching molecules designed by James Heath and Fraser Stoddart at the University of California, Los Angeles. The rotaxane is like a bead on a string, and with the right voltage, the bead slides from one end of the string to the other, causing the electrical resistance of the molecule to rise or fall, depending on the direction it moves. 
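The 0.3 nm figure above follows directly from the STM rule of thumb quoted earlier, one decade of current per 0.1 nm of tip travel; a back-of-the-envelope check (the rule of thumb is the article's, treated here as exact):

```python
NM_PER_DECADE = 0.1  # rule of thumb: tunneling current changes 10x per 0.1 nm

def tunneling_current_ratio(gap_change_nm):
    """Factor by which tunneling current changes for a given change in gap."""
    return 10.0 ** (gap_change_nm / NM_PER_DECADE)

ratio = tunneling_current_ratio(0.3)  # the 0.3 nm spacing change sought in the text
```

Three tenths of a nanometer is three decades of current, hence the 1000:1 electrical switching ratio.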
Heath and Stoddart’s devices used silicon electrodes, and they worked, but not well enough for technological applications: the off-to-on resistance ratio was only a factor of 10, the switching was slow, and the devices tended to switch themselves off after 15 minutes. Our platinum devices yielded results that were nothing less than frustrating. When a switch worked, it was spectacular: our off-to-on resistance ratios shot past the 1000 mark, the devices switched too fast for us to even measure, and having switched, the device’s resistance state remained stable for years (we still have some early devices we test every now and then, and we have never seen a significant change in resistance). But our fantastic results were inconsistent. Worse yet, the success or failure of a device never seemed to depend on the same thing. We had no physical model for how these devices worked. Instead of rational engineering, we were reduced to performing huge numbers of Edisonian experiments, varying one parameter at a time and attempting to hold all the rest constant. Even our switching molecules were betraying us; it seemed like we could use anything at all. In our desperation, we even turned to long-chain fatty acids—essentially soap—as the molecules in our devices. There’s nothing in soap that should switch, and yet some of the soap devices switched phenomenally. We also made control devices with no molecule monolayers at all. None of them switched. We were frustrated and burned out. Here we were, in late 2002, six years into our research. We had something that worked, but we couldn’t figure out why, we couldn’t model it, and we sure couldn’t engineer it. That’s when Greg Snider, who had worked with Kuekes on the Teramac, brought me the Chua memristor paper from the September 1971 IEEE Transactions on Circuit Theory. ”I don’t know what you guys are building,” he told me, ”but this is what I want.” To this day, I have no idea how Greg happened to come across that paper. 
Few people had read it, fewer had understood it, and fewer still had cited it. At that point, the paper was 31 years old and apparently headed for the proverbial dustbin of history. I wish I could say I took one look and yelled, ”Eureka!” But in fact, the paper sat on my desk for months before I even tried to read it. When I did study it, I found the concepts and the equations unfamiliar and hard to follow. But I kept at it because something had caught my eye, as it had Greg’s: Chua had included a graph that looked suspiciously similar to the experimental data we were collecting. The graph described the current-voltage (I-V) characteristics that Chua had plotted for his memristor. Chua had called them ”pinched-hysteresis loops”; we called our I-V characteristics ”bow ties.” A pinched hysteresis loop looks like a diagonal infinity symbol with the center at the zero axis, when plotted on a graph of current against voltage. The voltage is first increased from zero to a positive maximum value, then decreased to a minimum negative value and finally returned to zero. The bow ties on our graphs were nearly identical [see graphic, ”Bow Ties”]. That’s not all. The total change in the resistance we had measured in our devices also depended on how long we applied the voltage: the longer we applied a positive voltage, the lower the resistance until it reached a minimum value. And the longer we applied a negative voltage, the higher the resistance became until it reached a maximum limiting value. When we stopped applying the voltage, whatever resistance characterized the device was frozen in place, until we reset it by once again applying a voltage. The loop in the I-V curve is called hysteresis, and this behavior is startlingly similar to how synapses operate: synaptic connections between neurons can be made stronger or weaker depending on the polarity, strength, and length of a chemical or electrical signal. That’s not the kind of behavior you find in today’s circuits. 
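That bow-tie behavior is easy to reproduce numerically. The sketch below uses the linear ion-drift model that HP later published, with made-up parameter values; it is an illustration of the pinched-hysteresis shape, not a model of the 2002 devices:

```python
import math

R_ON, R_OFF = 100.0, 16_000.0  # ohms (illustrative)
D = 10e-9                      # oxide film thickness, m (illustrative)
MU = 1e-14                     # dopant mobility, m^2/(V*s) (illustrative)

def sweep(v_amp=1.0, freq=1.0, steps=20_000):
    """Drive the model with one sinusoidal voltage cycle; return (v, i) pairs."""
    w = 0.1 * D                # initial width of the conducting region
    dt = 1.0 / (freq * steps)
    trace = []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * n * dt)
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)  # state-dependent resistance
        i = v / m
        w += MU * (R_ON / D) * i * dt               # drift of the doped boundary
        w = min(max(w, 0.0), D)                     # boundary stays in the film
        trace.append((v, i))
    return trace

trace = sweep()
```

Plotting i against v for `trace` gives the diagonal-infinity shape: the curve always passes through the origin (i is exactly zero whenever v is zero), yet the up-sweep and down-sweep branches do not coincide, because the resistance depends on how much charge has already flowed.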
Looking at Chua’s graphs was maddening. We now had a big clue that memristance had something to do with our switches. But how? Why should our molecular junctions have anything to do with the relationship between charge and magnetic flux? I couldn’t make the connection. Two years went by. Every once in a while I would idly pick up Chua’s paper, read it, and each time I understood the concepts a little more. But our experiments were still pretty much trial and error. The best we could do was to make a lot of devices and find the ones that worked. But our frustration wasn’t for nothing: by 2004, we had figured out how to do a little surgery on our little sandwiches. We built a gadget that ripped the tiny devices open so that we could peer inside them and do some forensics. When we pried them apart, the little sandwiches separated at their weakest point: the molecule layer. For the first time, we could get a good look at what was going on inside. We were in for a shock. What we had was not what we had built. Recall that we had built a sandwich with two platinum electrodes as the bread and filled with three layers: the platinum dioxide, the monolayer film of switching molecules, and the film of titanium. But that’s not what we found. Under the molecular layer, instead of platinum dioxide, there was only pure platinum. Above the molecular layer, instead of titanium, we found an unexpected and unusual layer of titanium dioxide. The titanium had sucked the oxygen right out of the platinum dioxide! The oxygen atoms had somehow migrated through the molecules and been consumed by the titanium. This was especially surprising because the switching molecules had not been significantly perturbed by this event—they were intact and well ordered, which convinced us that they must be doing something important in the device. The chemical structure of our devices was not at all what we had thought it was. 
The titanium dioxide—a stable compound found in sunscreen and white paint—was not just regular titanium dioxide. It had split itself up into two chemically different layers. Adjacent to the molecules, the oxide was stoichiometric TiO2, meaning the ratio of oxygen to titanium was perfect, exactly 2 to 1. But closer to the top platinum electrode, the titanium dioxide was missing a tiny amount of its oxygen, between 2 and 3 percent. We called this oxygen-deficient titanium dioxide TiO2-x, where x is about 0.05. Because of this misunderstanding, we had been performing the experiment backward. Every time I had tried to create a switching model, I had reversed the switching polarity. In other words, I had predicted that a positive voltage would switch the device off and a negative voltage would switch it on. In fact, exactly the opposite was true. It was time to get to know titanium dioxide a lot better. They say three weeks in the lab will save you a day in the library every time. In August of 2006 I did a literature search and found about 300 relevant papers on titanium dioxide. I saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. By the end of the month, the pieces had fallen into place. I finally knew how our device worked. I knew why we had a memristor. The exotic molecule monolayer in the middle of our sandwich had nothing to do with the actual switching. Instead, what it did was control the flow of oxygen from the platinum dioxide into the titanium to produce the fairly uniform layers of TiO2 and TiO2-x. The key to the switching was this bilayer of the two different titanium dioxide species [see diagram, ”How Memristance Works”]. The TiO2 is electrically insulating (actually a semiconductor), but the TiO2-x is conductive, because its oxygen vacancies are donors of electrons, which makes the vacancies themselves positively charged. 
The vacancies can be thought of like bubbles in a glass of beer, except that they don’t pop—they can be pushed up and down at will in the titanium dioxide material because they are electrically charged. Now I was able to predict the switching polarity of the device. If a positive voltage is applied to the top electrode of the device, it will repel the (also positive) oxygen vacancies in the TiO2-x layer down into the pure TiO2 layer. That turns the TiO2 layer into TiO2-x and makes it conductive, thus turning the device on. A negative voltage has the opposite effect: the vacancies are attracted upward and back out of the TiO2, and thus the thickness of the TiO2 layer increases and the device turns off. This switching polarity is what we had been seeing for years but had been unable to explain. On 20 August 2006, I solved the two most important equations of my career—one equation detailing the relationship between current and voltage for this equivalent circuit, and another equation describing how the application of the voltage causes the vacancies to move—thereby writing down, for the first time, an equation for memristance in terms of the physical properties of a material. This provided a unique insight. Memristance arises in a semiconductor when both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system. The memristance did not actually involve magnetism in this case; the integral over the voltage reflected how far the dopants had moved and thus how much the resistance of the device had changed. We finally had a model we could use to engineer our switches, which we had by now positively identified as memristors. Now we could use all the theoretical machinery Chua had created to help us design new circuits with our devices. Triumphantly, I showed the group my results and immediately declared that we had to take the molecule monolayers out of our devices. 
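The article does not reproduce the two equations, but the form HP later published for this linear ion-drift picture is the following (a sketch of the published model, with w the thickness of the conducting TiO2-x region, D the total film thickness, and μv the vacancy mobility):

```latex
v(t) \;=\; \left( R_{\text{ON}}\,\frac{w(t)}{D} \;+\; R_{\text{OFF}}\left(1 - \frac{w(t)}{D}\right) \right) i(t)
\qquad\qquad
\frac{dw}{dt} \;=\; \mu_{v}\,\frac{R_{\text{ON}}}{D}\,i(t)
```

Integrating the second equation makes w proportional to the total charge that has passed, so the resistance in the first equation is a function of charge; that is exactly Chua's memristance, with no magnetism required.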
Skeptical after years of false starts and failed hypotheses, my team reminded me that we had run control samples without molecule layers for every device we had ever made and that those devices had never switched. But the new model had an answer for that, too: without the molecule layer regulating the flow of oxygen, those controls had never formed the conducting TiO2-x layer, so this time we would have to engineer the oxide bilayer directly. And getting the recipe right turned out to be tricky indeed. We needed to find the exact amounts of titanium and oxygen to get the two layers to do their respective jobs. By that point we were all getting impatient. In fact, it took so long to get the first working device that in my discouragement I nearly decided to put the molecule layers back in. A month later, it worked. We not only had working devices, but we were also able to improve and change their characteristics at will. But here is the real triumph. The resistance of these devices stayed constant whether we turned off the voltage or just read their states (interrogating them with a voltage so small it left the resistance unchanged). The oxygen vacancies didn’t roam around; they remained absolutely immobile until we again applied a positive or negative voltage. That’s memristance: the devices remembered their current history. We had coaxed Chua’s mythical memristor off the page and into being. Emulating the behavior of a single memristor, Chua showed, requires a circuit with at least 15 transistors and other passive elements. The implications are extraordinary: just imagine how many kinds of circuits could be supercharged by replacing a handful of transistors with one single memristor. The most obvious benefit is to memories. In its initial state, a crossbar memory has only open switches, and no information is stored. But once you start closing switches, you can store vast amounts of information compactly and efficiently. Because memristors remember their state, they can store data indefinitely, using energy only when you toggle or read the state of a switch, unlike the capacitors in conventional DRAM, which will lose their stored charge if the power to the chip is turned off. 
Furthermore, the wires and switches can be made very small: we should eventually get down to a width of around 4 nm, and then multiple crossbars could be stacked on top of each other to create a ridiculously high density of stored bits. Greg Snider and I published a paper last year showing that memristors could vastly improve one type of processing circuit, called a field-programmable gate array, or FPGA. By replacing several specific transistors with a crossbar of memristors, we showed that the circuit could be shrunk by nearly a factor of 10 in area and improved in terms of its speed relative to power-consumption performance. Right now, we are testing a prototype of this circuit in our lab. And memristors are by no means hard to fabricate. The titanium dioxide structure can be made in any semiconductor fab currently in existence. (In fact, our hybrid circuit was built in an HP fab used for making inkjet cartridges.) The primary limitation to manufacturing hybrid chips with memristors is that today only a small number of people on Earth have any idea of how to design circuits containing memristors. I must emphasize here that memristors will never eliminate the need for transistors: passive devices and circuits require active devices like transistors to supply energy. The potential of the memristor goes far beyond juicing a few FPGAs. I have referred several times to the similarity of memristor behavior to that of synapses. Right now, Greg is designing new circuits that mimic aspects of the brain. The neurons are implemented with transistors, the axons are the nanowires in the crossbar, and the synapses are the memristors at the cross points. A circuit like this could perform real-time data analysis for multiple sensors. Think about it: an intelligent physical infrastructure that could provide structural assessment monitoring for bridges. How much money—and how many lives—could be saved? 
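The synapse analogy in the paragraph above can be caricatured in a few lines; this is a cartoon of the behavior described in the text (a stored conductance that strengthens or weakens with pulse polarity and duration, and holds its value in between), not a circuit from the article:

```python
class MemristiveSynapse:
    """Toy synaptic weight: moves with pulse polarity and duration, then holds."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, polarity, duration):
        """polarity +1 strengthens the connection, -1 weakens it."""
        self.g += polarity * self.rate * duration
        self.g = min(max(self.g, self.g_min), self.g_max)  # conductance saturates
        return self.g

s = MemristiveSynapse()
s.pulse(+1, 2.0)  # a long positive pulse strengthens the weight
s.pulse(-1, 1.0)  # a shorter negative pulse weakens it, but not all the way back
```

A crossbar of such elements at the cross points, driven by transistor "neurons," is the kind of brain-mimicking circuit the text describes.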
I’m convinced that eventually the memristor will change circuit design in the 21st century as radically as the transistor changed it in the 20th. Don’t forget that the transistor was lounging around as a mainly academic curiosity for a decade until 1956, when a killer app—the hearing aid—brought it into the marketplace. My guess is that the real killer app for memristors will be invented by a curious student who is now just deciding what EE courses to take next year. About the Author R. STANLEY WILLIAMS, a senior fellow at Hewlett-Packard Labs, wrote this month’s cover story, ”How We Found the Missing Memristor.” Earlier this year, he and his colleagues shook up the electrical engineering community by introducing a fourth fundamental circuit design element. The existence of this element, the memristor, was first predicted in 1971 by IEEE Fellow Leon Chua, of the University of California, Berkeley, but it took Williams 12 years to build an actual device.
<urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637>
CC-MAIN-2013-20
http://spectrum.ieee.org/semiconductors/processors/how-we-found-the-missing-memristor/5
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961854
6,717
3.171875
3
Mercury in the Morning The planet Mercury -- the planet closest to the Sun -- is just peeking into view in the east at dawn the next few days. It looks like a fairly bright star. It's so low in the sky, though, that you need a clear horizon to spot it, and binoculars wouldn't hurt. Mercury is a bit of a puzzle. It has a big core that's made mainly of iron, so it's quite dense. Because Mercury is so small, the core long ago should've cooled enough to form a solid ball. Yet the planet generates a weak magnetic field, hinting that the core is still at least partially molten. The solution to this puzzle may involve an iron "snow" deep within the core. The iron in the core is probably mixed with sulfur, which has a lower melting temperature than iron. Recent models suggest that the sulfur may have kept the outer part of the core from solidifying -- it's still a hot, thick liquid. As this mixture cools, though, the iron "freezes" before the sulfur does. Small bits of solid iron fall toward the center of the planet. This creates convection currents -- like a pot of boiling water. The motion is enough to create a "dynamo" effect. Like a generator, it produces electrical currents, which in turn create a magnetic field around the planet. Observations earlier this year by the Messenger spacecraft seem to support that idea. But Messenger will provide much better readings of what's going on inside Mercury when it enters orbit around the planet in 2011. Script by Damond Benningfield, Copyright 2008 For more skywatching tips, astronomy news, and much more, read StarDate magazine.
<urn:uuid:d0a1999f-a775-4afc-bcfd-ee6ff6243a0b>
CC-MAIN-2013-20
http://stardate.org/radio/program/2008-10-20
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943661
357
4
4
Teacher Tip: instead of addressing the class as... Maps: Behold ORBIS, a Google Maps for the Roman... → Have you ever wondered how much it would cost to travel from Londinium to Jerusalem in February during the heyday of the Roman Empire? Thanks to a project helmed by historian Walter Scheidel and developer Elijah Meeks of Stanford University, all of your pressing queries about Roman roadways can be answered! This is ORBIS, an online simulation (and thoroughly brainy time sink) that allows you to... "Telling the Time" presentation → Don't Insist on English → chris-english-daily: A TED talk presentation about the spread and usage of the English language - some interesting ideas here kids ….
<urn:uuid:848c99d1-4b57-4148-90f9-66ad418ba789>
CC-MAIN-2013-20
http://stepladderteaching.tumblr.com/archive
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.855205
159
2.546875
3
Requirements for proficiency in Norwegian The Norwegian language is the primary language of instruction at Norwegian institutions of higher education. Some foreign students learn Norwegian before they continue with further studies in Norway. Below is an overview of the language requirements for foreign students applying for courses where the language of instruction is Norwegian. If applying for a course taught in Norwegian, or for general acceptance into an institution, applicants outside of the Nordic countries must meet one of the following requirements: - Successfully passed 'Norwegian as a second language' from upper secondary school. - Successfully passed Level 3 in Norwegian at a university. - Successfully passed one-year study in Norwegian language and society for foreign students from a university college. - Successfully passed test in Norwegian at higher level, 'Bergenstesten', with a minimum score of 450. In certain cases, institutions may accept other types of documentation. Please contact the institutions directly for details.
<urn:uuid:de1e1761-77e7-432f-856b-39b8c69b3443>
CC-MAIN-2013-20
http://studyinnorway.no/Study-in-Norway/Admission-Application/Requirements-for-proficiency-in-Norwegian
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926042
191
2.90625
3
Monitor drivers vs. video adapter drivers: How are they different and which do I need? Monitor drivers are specific to the monitor. They are usually text files that tell the operating system what the monitor is and what it is capable of. They are not required for the monitor to function. Video adapter drivers Your video adapter lets your computer communicate with a monitor by sending images, text, graphics, and other information. Better video adapters provide higher-quality images on your screen, but the quality of your monitor plays a large role as well. For example, a monochrome monitor cannot display colors no matter how powerful the video adapter is. A video driver is a file that allows your operating system to work with your video adapter. Each video adapter requires a specific video driver. When you update your video adapter, your operating system will provide a list and let you pick the appropriate video driver for it. If you do not see the video driver for your adapter in the list, contact the manufacturer of your video adapter to get the necessary video driver.
<urn:uuid:73cae941-2fef-484a-8887-1a20cde01d2f>
CC-MAIN-2013-20
http://support.lenovo.com/en_GB/research/hints-or-tips/detail.page?LegacyDocID=MIGR-4MVU4G
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92526
212
2.75
3
In 1962 President John F. Kennedy’s administration narrowly averted possible nuclear war with the USSR, when CIA operatives spotted Soviet surface-to-surface missiles in Cuba, after a six-week gap in intelligence-gathering flights. In their forthcoming book Blind over Cuba: The Photo Gap and the Missile Crisis, co-authors David Barrett and Max Holland make the case that the affair was a close call stemming directly from a decision made in a climate of deep distrust between key administration officials and the intelligence community. Using recently declassified documents, secondary materials, and interviews with several key participants, the authors weave a story of intra-agency conflict, suspicion, and discord that undermined intelligence-gathering, adversely affected internal postmortems conducted after the crisis peaked, and resulted in keeping Congress and the public in the dark about what really happened. We asked Barrett, a professor of political science at Villanova University, to discuss the actual series of events and what might have happened had the CIA not detected Soviet missiles on Cuba. The Actual Sequence of Events . . . “Some months after the Cuban Missile Crisis, an angry member of the Armed Services Committee of the House of Representatives criticized leaders of the Kennedy administration for having let weeks go by in September and early October 1962, without detecting Soviet construction of missile sites in Cuba. It was an intelligence failure as serious as the U.S. ignorance that preceded the Japanese attack on Pearl Harbor in 1941, he said. Secretary of Defense Robert McNamara aggressively denied that there had been an American intelligence failure or ineptitude with regard to Cuba in late summer 1962. 
McNamara and others persuaded most observers the administration’s performance in the lead-up to the Crisis had been almost flawless, but the legislator was right: The CIA had not sent a U-2 spy aircraft over western Cuba for about a six week period. There were varying reasons for this, but the most important was that the Kennedy administration did not wish to have a U-2 “incident.” Sending that aircraft over Cuba raised the possibility that Soviet surface-to-air missiles might shoot one down. Since it was arguably against international law for the U.S. to send spy aircraft over another country, should one be shot down, there would probably be the same sort of uproar as happened in May 1960, when the Soviet Union shot down an American U-2 flying over its territory. Furthermore, most State Department and CIA authorities did not believe that the USSR would put nuclear-armed missiles into Cuba that could strike the U.S. Therefore, the CIA was told, in effect, not even to request permission to send U-2s over western Cuba. This, at a time when there were growing numbers of reports from Cuban exiles and other sources about suspicious Soviet equipment being brought into the country. As we now know, the Soviets WERE constructing missile sites on what CIA deputy director Richard Helms would call “the business end of Cuba,” i.e., the western end, in the summer/autumn of 1962. Fortunately, by mid-October, the CIA’s director, John McCone, succeeded in persuading President John F. Kennedy to authorize one U-2 flight over that part of Cuba and so it was that Agency representatives could authoritatively inform JFK on October 16th that the construction was underway. The CIA had faced White House and State Department resistance for many weeks about this U-2 matter." What Could Have Happened . . . “What if McCone had not succeeded in persuading the President that the U.S. needed to step up aerial surveillance of Cuba in mid-October? 
What if a few more weeks had passed without that crucial October 14 U-2 flight and its definitive photography of Soviet missile site construction? If McCone had been told “no” in the second week of October, perhaps it would have taken more human intelligence, trickling in from Cuba, about such Soviet activity before the President would have approved a risky U-2 flight. The problem JFK would have faced then is that there would have been a significant number of operational medium-range missile launch sites. Those nuclear-equipped missiles could have hit the southern part of the U.S. Meanwhile, the Soviets would also have progressed further in construction of intermediate missile sites; such missiles could have hit most of the continental United States. If JFK had not learned about Soviet nuclear-armed missiles until, say, November 1st, what would the U.S. have done? There is no definitive answer to that question, but I think it’s fair to say that the President would have been under enormous pressure to authorize—quickly—a huge U.S. air strike against Cuba, followed by an American invasion. One thing which discovery of the missile sites in mid-October gave JFK was some time to negotiate effectively with the Soviet Union during the “Thirteen Days” of the crisis. I don’t think there would have been such a luxury if numerous operational missiles were discovered a couple weeks later. No wonder President Kennedy felt great admiration and gratitude toward those at the CIA (with its photo interpreters) and the Air Force (which piloted the key U-2 flight). The intelligence he received on October 16th was invaluable. I think he knew that if that intelligence had not come until some weeks later, there would have been a much greater chance of nuclear war between the U.S. and the Soviet Union.” Remember to check out Blind over Cuba: The Photo Gap and the Missile Crisis, which is being published this fall!
<urn:uuid:7da5e687-07e2-4c8f-9fac-fe3f58c7017a>
CC-MAIN-2013-20
http://tamupress.blogspot.com/2012/07/close-call-what-if-cia-had-not-spotted.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.977158
1,150
2.828125
3
If you have ever used the Windows Copy (Ctrl+C) to copy objects to the clipboard and then the Windows Paste (Ctrl+V) to copy/paste AutoCAD object(s), then you know that those clipboard object(s) will have the lower left-hand corner of their extents as the base point (not very precise)... and this always reminds me of some of the graphic editing applets (e.g.: Paint or even the wonderful AutoCAD Button Editor!) that have you draw a circle like a rectangle. (annoying to say the least!)

With AutoCAD you can use the keyboard shortcut of (Ctrl+Shift+C) to pick a base point for your clipboard object(s). COPYBASE is the actual command, and then you can paste to a precise point in the destination AutoCAD DWG file using the keyboard shortcut of (Ctrl+Shift+V). This is the PASTEBLOCK command, or you can also use the PASTEORIG command if the COPYBASEd object(s) go in the same exact spot in the receiving DWG file.

Also it is important to note: If you do use the Ctrl+Shift+V PASTEBLOCK method and want to leave it as a block, AutoCAD will assign a name for the block, which is something like "A$C11A06AFD" or "A$C1F7A5022" ... Either use the RENAME command, or use EXPLODE or XPLODE. Also watch your layers, with regards to the object(s) original layers and where this new "block" is being INSERTed... or where they go if they are EXPLODEd vs. XPLODEd. (I will save that for a whole different post).
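As a sketch, the whole round trip looks something like this at the AutoCAD command line (prompts are abbreviated and paraphrased here; the exact prompt wording varies by release):

```
Command: COPYBASE
Specify base point:        (snap to a precise point, e.g. an endpoint or insertion point)
Select objects:            (pick the objects, then Enter)

  ... switch to the destination drawing ...

Command: PASTEBLOCK
Specify insertion point:   (snap to the matching point in the new drawing)

  ... or, if the geometry belongs at the same coordinates as the source:

Command: PASTEORIG
```

If you keep the pasted result as a block, follow up with RENAME to replace the auto-generated "A$..." block name with something meaningful.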
<urn:uuid:ab6efc59-895c-4b89-90f1-13b1d77a46de>
CC-MAIN-2013-20
http://tlconsulting.blogspot.com/2005_07_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.854439
381
2.859375
3
On January 16, 1863, Walt Whitman wrote a pained letter to his brother, Thomas Jefferson Whitman, in which he bemoaned the Union’s recent defeat at Fredericksburg as the most “complete piece of mismanagement perhaps ever yet known in the earth's wars.” While Whitman today is celebrated as one of America’s greatest poets, works like Leaves of Grass, penned in the 1850s, were seen as scandalous by an American reading public unready for Whitman’s unconventional lifestyle. An opponent of slavery, Whitman supported the Union with the poem Beat! Beat! Drums! and volunteered as a nurse in army hospitals. After Lincoln’s assassination in 1865, Whitman penned O Captain! My Captain!, eulogizing the President for having navigated the ship of state through the storm of war, only to meet a violent end.
<urn:uuid:1c1ea546-62ba-4fed-b94c-3c8787377d6e>
CC-MAIN-2013-20
http://tpr.org/post/week-civil-war-485?ft=1&f=168896410,168978447,169074256,169170371,169367809,169981090,169981167,169981640,169982186,169983052,169983689,170665935,170667035,170667818,173159441
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976947
180
2.59375
3
The Neighbor Squirrel

These busy fluffballs have lost their fear of most predators - and they help plant pecan trees.

By Sheryl Smith-Rodgers

Have you ever watched an eastern fox squirrel (Sciurus niger) bury an acorn or pecan? A nuzzle here, another there, then he hurriedly pushes the leaves and grass over the site before scampering up the closest tree. Minutes later, he's back with another nut. Over the course of three months, that industrious squirrel can bury several thousand pecans. Come winter, when food's scarce, he'll find them again with his excellent sense of smell. Some will escape his appetite, though, and sprout into saplings, which is how many native nut trees get planted.

Eastern fox squirrels - the state's most common and wide-ranging squirrel and a popular game animal, too - occur in forests and riparian habitats. They also easily adapt to cities and neighborhoods, where they've lost most of their fear of natural predators. "Playing the call of a red-tailed hawk didn't faze squirrels on campus," reports Bob McCleery, a wildlife lecturer at Texas A&M University, who has studied urban squirrels in College Station. "When we played a coyote call in the Navasota river bottom, a squirrel immediately flattened itself in the crotch of a tree for a good five minutes."

When agitated, fox squirrels - whose fur closely resembles that of a gray fox - bark and jerk their long, bushy tails, which they use for balance when scampering on utility lines and other high places. Tails provide warmth and protection, too. "In the summer, I've seen them lying down with their tails over their heads to block the sun," McCleery says.
<urn:uuid:3cb858ec-4357-48a5-9912-c7929ec225af>
CC-MAIN-2013-20
http://tpwmagazine.com/archive/2008/jan/scout3/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964519
370
3.296875
3
Ragtime and blues fused ‘All That Jazz’

By Laura Szepesi
Published: Sunday, March 17, 2013, 7:09 p.m.
Updated: Monday, March 18, 2013

EDITOR'S NOTE: Thursday marks the 85th birthday of well-known Connellsville jazz trombonist Harold Betters. We salute him with this four-part series, starting today with a brief history of jazz music.

In 1979, actor Roy Scheider brought the life of Broadway dancer / director Bob Fosse to the big screen in the film “All That Jazz.” “All” is the perfect way to describe jazz music.

Jazz was born around 1900 in New Orleans — about the same time as the earliest music recordings became available to the public. It grew out of ragtime, which many sources claim is the first true American music. Like jazz, ragtime has Southern roots, but was also flavored by the southern Midwest. It was popular from the late 1800s to around 1920. It developed in African American communities, a mix of march music (from composers such as John Philip Sousa), black songs and dances including the cakewalk.

Ragtime: Dance on

Eventually, ragtime spread across the United States via printed sheet music, but its roots were as live dance music in the red light districts of large cities such as St. Louis and New Orleans. Ernest Hogan is considered ragtime's father. He named it ragtime because of the music's lively ragged syncopation. Ragtime faded as jazz's following grew. However, composers enjoyed major success in ragtime's early years. Scott Joplin's 1899 “Maple Leaf Rag” was a hit, as was his “The Entertainer,” which was resurrected as a Top 5 hit when it was featured in the 1973 movie “The Sting” starring Robert Redford and Paul Newman.

Born of ragtime, jazz was also heavily influenced by the blues. Blues originated in the late 1800s, but in the deep South. It is an amalgam of Negro spirituals, work songs, shouts, chants and narrative lyrics.

Fused with blues

Like jazz, the blues comes in many forms: delta, piedmont, jump and Chicago blues.
Its popularity grew after World War II when electric guitars — rather than acoustic guitars — became popular. By the early 1970s, blues had formed another hybrid: blues rock. While ragtime is jangly and spirited, the blues takes after its name: blue, or melancholy. Its name is traced to 1912, when Hart Wand copyrighted the first blues song, “Dallas Blues.”

Jazz — as a mix of ragtime and blues — has fused into many styles since its emergence. In the 1910s, New Orleans jazz was the first to take off. In the 1930s and 1940s, Big Band swing, Kansas City jazz and bebop prevailed. Other forms include cool jazz and jazz rock; today, there's even cyber jazz.

Jazz: Always changing

The late jazz trombone player J.J. Johnson summed jazz up as restless. “It won't stay put ... and never will,” he was quoted as saying, according to various sources. Johnson's sentiment is heartily endorsed by Connellsville jazz trombonist Harold Betters. Betters turns 85 years old this week. He will share decades of his memories about music and growing up in Connellsville as his March 21 birthday approaches.

Laura Szepesi is a freelance writer.

Tuesday: Just how did Harold Betters decide to play the trombone?
<urn:uuid:9adf6dc2-8439-48ef-addd-49274751b0af>
CC-MAIN-2013-20
http://triblive.com/news/fayette/3678122-74/jazz-blues-ragtime
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966445
959
2.75
3
Welcome to BSA Troop 51 based out of Waterford, MI. Please take a look around and have a great day.

How Scouting Started in the United States

One day in 1909 in London, England, an American visitor, William D. Boyce, lost his way in a dense fog. He stopped under a street lamp and tried to figure out where he was. A boy approached him and asked if he could be of help.

"You certainly can," said Boyce. He told the boy that he wanted to find a certain business office in the center of the city.

"I'll take you there," said the boy.

When they got to the destination, Mr. Boyce reached into his pocket for a tip. But the boy stopped him. "No thank you, sir. I am a Scout. I won't take anything for helping."

"A Scout? And what might that be?" asked Boyce.

The boy told the American about himself and about his brother Scouts. Boyce became very interested. After finishing his errand, he had the boy take him to the British Scouting office. At the office, Boyce met Lord Robert Baden-Powell, the famous British general who had founded the Scouting movement in Great Britain. Boyce was so impressed with what he learned that he decided to bring Scouting home with him. On February 8, 1910, Boyce and a group of outstanding leaders founded the Boy Scouts of America. From that day forth, Scouts have celebrated February 8 as the birthday of Scouting in the United States.

What happened to the boy who helped Mr. Boyce find his way in the fog? No one knows. He had neither asked for money nor given his name, but he will never be forgotten. His good turn helped bring the Scouting movement to our country. In the British Scout Training Center at Gilwell Park, England, Scouts from the United States erected a statue of an American Buffalo in honor of this unknown Scout. One good turn to one man became a good turn to millions of American boys. Such is the power of a good turn. Hence the Scout Slogan: DO A GOOD TURN DAILY
<urn:uuid:9318ce26-85fd-4349-bf3d-da8b89c27837>
CC-MAIN-2013-20
http://troop51-bsa.com/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976728
451
2.90625
3
The diagnosis of Trichotillomania (TM) is synonymous with the act of recurrently pulling one’s own body hair resulting in noticeable thinning or baldness. (American Psychiatric Association, Diagnostic and statistical manual of mental disorders, 2000, p. 674) Sites of hair pulling can include any area of the body in which hair is found, but the most common sites are the scalp, eyelashes, eyebrows, and the pubic area. (Kraemer, 1999, p. 298) The disorder itself is categorized in the DSM-IV-TR as an “Impulse Control Disorder Not Elsewhere Classified” along with disorders like Pathological Gambling, Pyromania, Kleptomania, and Intermittent Explosive Disorder. Although TM was previously considered to be a rare disorder, more recent research indicates that prevalence rates of TM may be as high as 2% of the general population. (Kraemer, 1999, p. 298) This prevalence rate is significantly higher than the lifetime prevalence rate of .6% that is cited as a potential baseline among college students in the DSM-IV-TR. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) The condition appears to be more common among women and the period of onset is typically in childhood or adolescence. (Kraemer, 1999, p. 298) As is customary with most DSM-IV-TR diagnoses, the act of hair pulling cannot be better accounted for by another mental disorder (like delusions, for example) or a general medical condition. Like every disorder in the DSM-IV-TR, the disturbance must cause significant distress or impairment in functioning. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 675) Alopecia is a key concept that must be understood in order to complete the differential diagnosis of TM. Alopecia is a condition of baldness in the most general sense. (Shiel, Jr. & Stoppler, 2008, p.
14) Other medically related causes of alopecia should be considered in the differential diagnosis of TM, especially when working with individuals who deny pulling their hair. The common suspects include male-pattern baldness, Discoid Lupus Erythematosus (DLE), Lichen Planopilaris (also known as Acuminatus), Folliculitis Decalvans, Pseudopelade of Brocq, and Alopecia Mucinosa (Follicular Mucinosis). (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) Comprehensive coverage of these medical conditions is beyond the scope of this article – all of the aforementioned confounding variables can be eliminated by a general practitioner. There are a number of idiosyncratic features associated with TM that bear mentioning. Although the constellation of features covered here is not sufficient to warrant a diagnosis in isolation, they can aid in the differential diagnosis process. Alopecia, regardless of the cause, has been known to lead sufferers to tremendous feats of avoidance so that the hair loss remains undetected. Simply avoiding social functions or other events where the individual (and their attendant hair loss) might be uncovered is a common occurrence. In cases where the individual’s focus of attention is on the head or scalp, it is not uncommon for affected individuals to attempt to hide hair loss by adopting complementary hairstyles or wearing other headwear (e.g., hats, wigs, etc.). These avoidance behaviors will be the target of exposure and response prevention later in this article. In addition to avoidant behavior and elaborate attempts to “cover it up,” individuals with TM frequently present with clinically significant difficulty in areas such as self-esteem and mood. Comorbidity, or the presence of one or more disorders in addition to a primary diagnosis, is the rule, not the exception, in the stereotypical presentation of TM.
Mood disorders (like depression) are the most common (65%) – anxiety (57%), chemical use (22%), and eating disorders (20%) round out the top four most likely candidates for comorbidity. (Kraemer, 1999, p. 298) These comorbidity rates are not overly surprising since they parallel prevalence rates across the wider population – perhaps with the notable exception of the high rate of comorbid eating disorders. We can speculate about the source of comorbidity – one possible hypothesis is that a few people who suffer from TM also suffer from a persistent cognitive dissonance associated with having a happy-go-lucky personality trait that leads them to “let the chips fall where they may.” They are individuals prone to impulsivity, but they are subdued and controlled by the shame, guilt, frustration, fear, rage, and helplessness associated with the social limitations placed on them by the disorder. (Ingram, 2012, p. 269) On the topic of personality, surprisingly enough, research suggests that personality disorders do not share significant overlap with TM. This includes Borderline Personality Disorder (BPD) despite the fact that BPD is often associated with self-harming behavior. (Kraemer, 1999, p. 299) Differentiating TM from Obsessive-Compulsive Disorder (OCD) can be challenging in some cases. TM is similar to OCD because there is a “sense of gratification” or “relief” when pulling the hair out. Unlike individuals with OCD, individuals with TM do not perform their compulsions in direct response to an obsession and/or according to rules that must be rigidly adhered to. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) There are, however, observed similarities between OCD and TM regarding phenomenology, neurological test performance, response to SSRIs, and contributing elements of familial and/or genetic factors. (Kraemer, 1999, p.
299) Due to the large genetic component contributions of both disorders, obtaining a family history (vis-à-vis a detailed genogram) is highly recommended. The comprehensive genogram covering all mental illness can be helpful in the discovery of the comorbid conditions identified above as well. There is some suggestion that knowledge of events associated with onset is “intriguing, but unnecessary for successful treatment.” (Kraemer, 1999, p. 299) I call shenanigans. There is a significant connection between the onset of TM and the patient enduring loss, perceived loss, and/or trauma. Time is well spent exploring the specific environmental stressors that precipitated the disorder. Although ignoring circumstances surrounding onset might be prudent when employing strict behavioral treatment paradigms, it seems like a terrible waste of time to endure suffering without identifying some underlying meaning or purpose that would otherwise be missed if we overlook onset specifics. “Everything can be taken from a man but one thing: the last of human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” (Frankl, 1997, p. 86) If we acknowledge that all behavior is purposeful, then we must know and understand the circumstances around onset if we are ever to understand the purpose of said behavior. I liken this to a difference in professional opinion and personal preference because either position can be reasonably justified, but in the end the patient should make the ultimate decision about whether or not to explore onset contributions vis-à-vis “imagery dialogue” or a similar technique. (Young, Klosko, & Weishaar, 2003, p. 123) If such imagery techniques are unsuccessful or undesired by the client, a psychodynamic conversation between “internal parts of oneself” can add clarity to the persistent inability of the client to delay gratification. (Ingram, 2012, p.
292) Such explorations are likely to be time-consuming, comparatively speaking, and should not be undertaken with patients who are bound by strict EAP requirements or managed care restrictions on the type and length of treatment. Comorbid developmental disabilities and cognitive deficits may preclude this existential exploration. I employ the exploration of existential issues of origin in the interest of increasing treatment motivation, promoting adherence, enhancing the therapeutic milieu, and thwarting subsequent lapses by anchoring cognitive dissonance to a concrete event. TM represents a behavioral manifestation of fixed action patterns (FAPs) that are rigid, consistent, and predictable. FAPs are generally thought to have evolved from our most primal instincts as animals – they are believed to contain fundamental behavioral ‘switches’ that enhance the survivability of the human species. (Lambert & Kinsley, 2011, p. 232) The nature of FAPs that leads some researchers to draw parallels to TM is that FAPs appear to be qualitatively “ballistic.” It’s an “all or nothing” reaction that is comparable to an action potential traveling down the axon of a neuron. Once they are triggered they are very difficult to suppress and may have a tendency to “kindle” other effects. (Lambert & Kinsley, 2011, p. 233) There are some unique considerations when it comes to assessing a new patient with TM. Because chewing on or ingesting the hair is reported in nearly half of TM cases, the attending clinician should always inquire about oral manipulation and associated gastrointestinal pain associated with a connected hair mass in the stomach or bowel (trichobezoar). Motivation for change should be assessed and measured because behavioral interventions inherently require a great deal of effort.
Family and social systems should not be ignored since family dynamics can exacerbate symptomatology vis-à-vis pressure to change (negative reinforcement), excessive attention (positive reinforcement), or both. (Kraemer, 1999, p. 299) What remains to be seen is the role of stress in the process of “triggering” a TM episode. Some individuals experience an “itch like” sensation as a physical antecedent that remits once the hair is pulled. This “itch like” sensation is far from universal. Some clinicians and researchers believe that the abnormal grooming behavior found in TM is “elicited in response to stress” with the necessary but not sufficient condition of “limited options for motoric behavior and tension release.” (Kraemer, 1999, p. 299) Although this stress hypothesis may materialize as a tenable hypothesis in some cases, it’s by no means typical. Most people diagnosed with TM report that the act of pulling typically occurs during affective states of relaxation and distraction. Most individuals who suffer from TM do not report clinically significant levels of anxiety as the “trigger” of bouts of hair pulling. We could attribute this to an absence of insight regarding anxiety-related triggers, or perhaps anxiety simply does not play a significant role in the onset and maintenance of hair pulling episodes. Regardless of the factors that trigger episodes, a comprehensive biopsychosocial assessment that includes environmental stressors (past, present and anticipated) should be explored.
I would consider recommending a referral to a psychiatrist (not a general practitioner) for a medication review due in part to the favorable risk profile of the most recent round of SSRIs. Given the high rate of comorbidity with mood and anxiety disorders – if either anxiety or depression is comorbid, SSRIs will likely be recommended regardless. Killing two birds with one stone is the order of the day, but be mindful that some medications can interfere with certain treatment techniques like imaginal or in vivo exposure. (Ledley, Marx, & Heimberg, 2010, p. 141) Additional research is needed before anxiolytic medications can be recommended in the absence of comorbid anxiety disorders (especially with children). Hypnosis and hypnotic suggestion in combination with other behavioral interventions may be helpful for some individuals, but I don’t know enough about it at this time to recommend it. Call me skeptical, or ignorant, but I prefer to save the parlor tricks for the circus… Habit reversal is no parlor trick. My goal isn’t to heal the patient; that would create a level of dependence I am not comfortable with… my goal is to teach clients how to heal themselves. Okay, but how? The combination of Competing Response Training, Awareness/Mindfulness Training, Relaxation Training, Contingency Management, Cognitive Restructuring, and Generalization Training is the best hope for someone who seeks some relief from TM. Collectively I will refer to this collection of techniques as Habit Reversal. Competing Response Training is employed in direct response to hair pulling or in situations where hair pulling might be likely. In the absence of “internal restraints to impulsive behavior,” artificial circumstances are created by identifying substitute behaviors that are totally incompatible with pulling hair. (Ingram, 2012, p.
292) Just like a compulsive gambling addict isn’t in any danger if he spends all his money on rent, someone with TM is much less likely to pull hair if they are doing something else with their hands. Antecedents, or triggers, are sometimes referred to as discriminative stimuli. (Ingram, 2012, p. 230) “We sense objects in a certain way because of our application of a priori intuitions…” (Pirsig, 1999, p. 133) Altering the underlying assumptions entrenched in maladaptive a priori intuitions is the core purpose of Awareness and Mindfulness Training. “There is a lack of constructive self-talk mediating between the trigger event and the behavior. The therapist helps the client build intervening self-messages: Slow down and think it over; think about the consequences.” (Ingram, 2012, p. 221) The connection to contingency management should be self-evident. Utilizing a customized self-monitoring record, the patient begins to acquire the necessary insight to “spot” maladaptive self-talk. “Spotting” is not a new or novel concept – it is a central component of Abraham Low’s revolutionary self-help system Recovery International. (Abraham Low Self-Help Systems, n.d.) The customized self-monitoring record should invariably include various data elements such as precursors, length of episode, number of hairs pulled, and a subjective unit of distress representing the level of “urge” or desire to pull hair. (Kraemer, 1999) The act of recording behavior (even in the absence of other techniques) is likely to produce significant reductions in TM symptomatology. (Persons, 2008, pp. 182-201) Perhaps more importantly, associated activities, thoughts, and emotions that may be contributing to the urge to pull should be codified. (Kraemer, 1999, p. 300) In session, this record can be reviewed and subsequently tied to “high risk circumstances” and “a priori intuitions” involving constructs such as anger, frustration, depression, and boredom.
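For clinicians who keep the self-monitoring record electronically, the data elements described above can be captured as a simple structured log. This is only an illustrative sketch: the field names and example values are mine, not Kraemer's.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class PullingEpisode:
    """One row of a hypothetical self-monitoring record for habit reversal."""
    started: datetime         # when the episode began
    minutes: float            # length of episode
    hairs_pulled: int         # number of hairs pulled
    urge_suds: int            # subjective units of distress for the urge, 0-100
    precursors: list = field(default_factory=list)  # activities, thoughts, emotions


# An example entry a client might record after an episode:
episode = PullingEpisode(
    started=datetime(2012, 8, 2, 21, 15),
    minutes=12.0,
    hairs_pulled=34,
    urge_suds=70,
    precursors=["watching TV", "bored", "hand resting near face"],
)
```

In session, rows like these can be sorted or filtered by precursor to surface the "high risk circumstances" the text refers to.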
Relaxation training is a critical component if we subscribe to the “kindling” hypothesis explained previously. Relaxation is intended to reduce the urges that inevitably trigger the habit. Examples abound, but diaphragmatic breathing, progressive relaxation, and visualization are all techniques that can be employed in isolation or in conjunction with each other. Contingency Management is inexorably tied to the existential anchor of cognitive dissonance described above. My emphasis on this element is where my approach might differ from some other clinicians. “You are free to do whatever you want, but you are responsible for the consequences of everything that you do.” (Ingram, 2012, p. 270) This might include the client writing down sources of embarrassment, advantages of controlling the symptomatology of TM, etc. (Kraemer, 1999) The moment someone with pyromania decides that no fire is worth being imprisoned for, they will stop starting fires. The same holds true with someone who acknowledges the consequences of pulling their hair. How do we define success? Once habit reversal is successfully accomplished in one setting or situation, the client needs to be taught how to generalize that skill to other contexts. A hierarchical ranking of anxiety-provoking situations can be helpful in this process since self-paced graduated exposure is likely to increase tolerability for the anxious client. (Ingram, 2012, p. 240) If skills are acquired and generalization occurs, we can reasonably expect a significant reduction in TM symptomatology. The challenges are significant; cognitive behavioral therapy is much easier said than done. High levels of treatment motivation are required for the behavioral elements, and moderate to high levels of insight are exceptionally helpful for the cognitive elements. In addition, this is an impulse control disorder… impulsivity leads to treatment noncompliance and termination.
The combination of all the above, in addition to the fact that TM is generally acknowledged as one of the more persistent and difficult-to-treat disorders, prevents me from providing any prognosis other than “this treatment will work as well as the client allows it to work.”

Abraham Low Self-Help Systems. (n.d.). Recovery international terms and definitions. Retrieved August 2, 2012, from http://www.lowselfhelpsystems.org/system/recovery-international-language.asp
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
Frankl, V. E. (1997). Man’s search for meaning (rev. ed.). New York, NY: Pocket Books.
Ingram, B. L. (2012). Clinical case formulations: Matching the integrative treatment plan to the client (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Kraemer, P. A. (1999). The application of habit reversal in treating trichotillomania. Psychotherapy: Theory, Research, Practice, Training, 36(3), 298-304. doi: 10.1037/h0092314
Lambert, K. G., & Kinsley, C. H. (2011). Clinical neuroscience: Psychopathology and the brain (2nd ed.). New York: Oxford University Press.
Ledley, D. R., Marx, B. P., & Heimberg, R. G. (2010). Making cognitive-behavioral therapy work: Clinical process for new practitioners (2nd ed.). New York, NY: Guilford Press.
Persons, J. B. (2008). The case formulation approach to cognitive-behavior therapy. New York, NY: Guilford Press.
Pirsig, R. M. (1999). Zen and the art of motorcycle maintenance: An inquiry into values (25th Anniversary ed.). New York: Quill.
Shiel, W. C., Jr., & Stoppler, M. C. (Eds.). (2008). Webster’s new world medical dictionary (3rd ed.). Hoboken, NJ: Wiley Publishing.
Young, J. E., Klosko, J. S., & Weishaar, M. E. (2003). Schema therapy: A practitioner’s guide. New York: Guilford Press.
<urn:uuid:7947504d-63b5-4a37-bd58-19d265d90077>
CC-MAIN-2013-20
http://try-therapy.com/2012/08/02/trichotillomania/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.912184
4,098
2.796875
3
SCIENCE announced that it is taking viewers further inside NASA's latest mission to Mars with the exclusive world premiere of i.am.mars: REACH FOR THE STARS tonight, September 19, 2012, at 10 PM ET/PT. The special documents the artistic and technical process behind "Reach for the Stars," will.i.am's newest single that became the first song ever to be broadcast from another planet to Earth. In what is being hailed as "the most complex Mars mission to date," NASA's Curiosity spacecraft successfully landed on the red planet on August 6, 2012. Since then the Curiosity rover has returned stunning photographs and valuable information about the Martian surface that is helping scientists determine if it has the ability to support life. Recently, Curiosity also returned will.i.am's new song "Reach for the Stars" as - for the first time in history - recorded music was broadcast from a planet to Earth. i.am.mars: REACH FOR THE STARS profiles will.i.am's passion for science and his belief in inspiring the next generation of scientists through STEM (Science, Technology, Engineering and Math) education. i.am.mars: REACH FOR THE STARS also gives viewers a window into his creative process, as well as the recording of the song with a full children's choir and orchestra. In addition, viewers also go inside the engineering challenges NASA faced in uploading the song to Curiosity, and the hard work required to make the historic 700 million mile interplanetary broadcast a reality. "Between MARS LANDING 2012: THE NEW SEARCH FOR LIFE and i.am.mars: REACH FOR THE STARS, SCIENCE is consumed with the bold exploration of the red planet," said Debbie Myers, general manager and executive vice president of SCIENCE. "We hope our viewers are as inspired as we are by the creativity, imagination and daring of both will.i.am and NASA." i.am.mars will be distributed to schools nationwide through Discovery Education's digital streaming services. 
SCIENCE and Discovery Education will also work with Affiliates to promote i.am.mars' educational resources for use in schools and with community organizations, bringing the magic of Mars to life.
<urn:uuid:cc8f8d73-112d-4456-ae20-1bdc4072b3b4>
CC-MAIN-2013-20
http://tv.broadwayworld.com/article/william-Featured-in-Sciences-iammars-REACH-FOR-THE-STARS-919-20120918
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950739
463
2.5625
3
Forecast Texas Fire Danger (TFD) The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map. Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE
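A five-day running average like the one mentioned above is straightforward to compute. The sketch below is only an illustration with made-up daily index values, not the actual NFDRS/WIMS computation:

```python
import numpy as np

# Hypothetical daily fire-danger indices for one station (not real NFDRS output)
daily = np.array([12.0, 15.0, 20.0, 18.0, 25.0, 30.0, 28.0])

# 5-day running average: each value averages the current day and the 4 before it
window = 5
running = np.convolve(daily, np.ones(window) / window, mode="valid")
# one averaged value per day once 5 days of data exist
```

With `mode="valid"`, the average is only emitted once a full five-day window is available, which matches how a running-average map would lag the daily product.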
<urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d>
CC-MAIN-2013-20
http://twc.tamu.edu/drought/tfdforecast?date=2/29/2012&type=tfdforecast
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.887535
136
3.015625
3
The Florida Geological Survey Digital Collection includes historic resources from the Florida Geological Survey (FGS). FGS is an Office which reports directly to the Deputy Secretary for Land & Recreation in the Florida Department of Environmental Protection. The mission of the FGS is to collect, interpret, disseminate, store and maintain geologic data, thereby contributing to the responsible use and understanding of Florida’s natural resources, and to conserve the State of Florida’s oil and gas resources and minimize environmental impacts from exploration and production operations. The collection includes historic FGS publications; for a list of all publications, historic through current, see the FGS website. Florida Geological Survey Fossil Collection in the Florida Museum of Natural History The Florida Geological Survey fossil vertebrate collection (FGS) was started during the 1910s and was originally housed in Tallahassee. Under the direction of E. H. Sellards, Herman Gunter, and S. J. Olsen, the FGS collection was the primary source of fossil vertebrate descriptions from Florida until the early 1960s. World-renowned paleontologists such as George G. Simpson, Edwin H. Colbert, and Henry F. Osborn wrote scientific papers about specimens in the FGS collection in addition to Sellards and Olsen. In 1976 the entire FGS fossil vertebrate collection was transferred to the Florida Museum of Natural History with support from a National Science Foundation grant. The UF/FGS collection is composed of about 22,000 specimens assigned to about 10,000 catalogue numbers, and almost all of them were collected in Florida. The majority of specimens in the UF/FGS collection are mammals, followed by reptiles, birds, and a relatively small number of amphibians and fish. Although there are some sites that are unique to the UF/FGS collection, many of the sites overlap with holdings in the main UF and UF/PB collections.
The major strengths of the UF/FGS collection are historically important samples from the early Miocene Thomas Farm locality, the middle Miocene and early Pliocene deposits of the Bone Valley Region, Polk County, and from the late Pleistocene Vero locality, Indian River County. Researchers using the UF/FGS database should be aware that when the catalogue data for the FGS collection was first transferred from the original file cards to a computerized database in the late 1980s, relatively little effort was made to correct or improve entries. The nature of the specimen was not indicated on many of the cards, locality information was sometimes vague, and many employed taxonomic names that are no longer in use. While some corrections have subsequently been made to this database, limitations of time and resources have prevented an exhaustive clean-up. Also, when Sellards left Florida for Texas in the 1920s, he transferred some, but not all, of the holotypes in the FGS collection that he had named to the USNM collection, Smithsonian Institution, Washington, D.C. For information about the Florida Geological Survey: Dr. Jon Arthur Florida Geological Survey 903 West Tennessee Street Tallahassee, FL 32304-7000 Phone: (850) 488-4191 Fax: (850) 488-8086 Acknowledging or Crediting the Florida Geological Survey As Creative Entity or Information Source The Florida Geological Survey is providing many of its publications (State documents) for the purpose of digitization and Internet distribution. If you cite or use portions of these electronic documents, which the Florida Geological Survey (an office of the Florida Department of Environmental Protection) is making available to the public with the kind assistance of the University of Florida’s Digital Library Center, we ask that you acknowledge or credit the Florida Geological Survey as the information source: i.e.
“Courtesy of the Florida Department of Environmental Protection’s Florida Geological Survey” Further, since Florida Geological Survey publications were developed using public funds, no proprietary rights may be attached to FGS publications wholly or in part, nor may FGS publications be sold to the U.S. Government or the Florida State Government as part of any procurement of products or services. Our publications are disseminated to citizens “as is” for general public information purposes; many of them reflect the state of knowledge at the time of their publication and they may or may not have been updated by more recent publications. Our electronic documents should not be altered or manipulated (wholly or in part) and then republished or reposted on websites for commercial resale. FGS Publications Committee
<urn:uuid:c55bd2af-b611-49c7-a047-f008d4bbc8b3>
CC-MAIN-2013-20
http://ufdcweb1.uflib.ufl.edu/fgs
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.924949
962
2.890625
3
Welcome to the Jane Addams Hull-House Museum The Jane Addams Hull-House Museum serves as a dynamic memorial to social reformer Jane Addams, the first American woman to receive the Nobel Peace Prize, and her colleagues whose work changed the lives of their immigrant neighbors as well as national and international public policy. The Museum preserves and develops the original Hull-House site for the interpretation and continuation of the historic settlement house vision, linking research, education, and social engagement. The Museum is located in two of the original settlement house buildings: the Hull Home, a National Historic Landmark, and the Residents' Dining Hall, a beautiful Arts and Crafts building that has welcomed some of the world's most important thinkers, artists and activists. The Museum and its many vibrant programs make connections between the work of Hull-House residents and important contemporary social issues. Founded in 1889 as a social settlement, Hull-House played a vital role in redefining American democracy in the modern age. Addams and the residents of Hull-House helped pass critical legislation and influenced public policy on public health and education, free speech, fair labor practices, immigrants’ rights, recreation and public space, arts, and philanthropy. Hull-House has long been a center of Chicago’s political and cultural life, establishing Chicago’s first public playground and public art gallery, helping to desegregate the Chicago Public Schools, and influencing philanthropy and culture.
<urn:uuid:f01a8af6-2422-47f6-a2f8-477c864a7d08>
CC-MAIN-2013-20
http://uic.edu/jaddams/hull/_museum/visitors.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945106
294
3.078125
3
Introduction to principles of chemistry and fundamentals of inorganic and biochemistry. Structure and chemistry of carbohydrates, lipids, proteins, biochemistry of enzymes, metabolism, body fluids and radiation effects. On-line materials include the course syllabus, copies of the lecture slides and animations, interactive Periodic Table, chapter summaries and practice exams. This course is targeted towards Health Science Majors. Introduction to principles of chemistry. This course is targeted towards Chemistry Majors. Laboratory experiments to develop techniques in organic chemistry and illustrate principles. On-line materials include step-by-step prelabs for many of the experiments that students will be conducting. Theoretical principles of quantitative and instrumental analysis. Emphasis is placed on newer analytical tools and equipment. Intermediate level course. Includes a discussion of the structure, function and metabolism of proteins, carbohydrates and lipids. In addition, there is a review of enzymes, DNA and RNA. This course stresses theory and application of modern chromatographic methods. On-line materials include the course syllabus, copies of course lecture slides and animations. A 'short course' covering the use of a mass spectrometer as a GC detector. Basic instrumentation, data treatment and spectral interpretation methods will be discussed. On-line materials include copies of course lecture slides and tables to assist in the interpretation of mass spectra. Coverage of statistical methods in Analytical Chemistry. Course includes basic statistics, experimental design, modeling, exploratory data analysis and other multivariate techniques. On-line materials include the course syllabus, homework problems and copies of the lecture slides. A survey of the basic equipment, data and methodology of Analytical methods that rely on radioisotopic materials. On-line materials include the course syllabus, homework problems, copies of the lecture slides and animations.
<urn:uuid:841e4baa-add2-400d-b3cc-719e93276b6c>
CC-MAIN-2013-20
http://ull.chemistry.uakron.edu/classroom.html/genchem/gcms/genobc/periodic/excuses/organic_lab/analytical/chemsep/genobc/chemometrics/gcms/chemometrics/biochem/gcms/analytical/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.871156
381
2.609375
3
Now that we’ve said a lot about individual operators on vector spaces, I want to go back and consider some other sorts of structures we can put on the space itself. Foremost among these is the idea of a bilinear form. This is really nothing but a bilinear function to the base field: $B: V \times V \to \mathbb{F}$. Of course, this means that it's equivalent to a linear function from the tensor square: $B: V \otimes V \to \mathbb{F}$. Instead of writing this as a function, we will often use a slightly different notation. We write a bracket $\langle v, w \rangle$, or sometimes $\langle v, w \rangle_B$, if we need to specify which of multiple different inner products is under consideration.

Another viewpoint comes from recognizing that we've got a duality for vector spaces. This lets us rewrite our bilinear form as a linear transformation $B_1: V \to V^*$. We can view this as saying that once we pick one of the vectors $v$, the bilinear form reduces to a linear functional $B_1(v) = \langle v, \cdot \rangle$, which is a vector in the dual space $V^*$. Or we could focus on the other slot and define $B_2(w) = \langle \cdot, w \rangle$.

We know that the dual space of a finite-dimensional vector space has the same dimension as the space itself, which raises the possibility that $B_1$ or $B_2$ is an isomorphism from $V$ to $V^*$. If either one is, then both are, and we say that the bilinear form is nondegenerate.

We can also note that there is a symmetry on the category of vector spaces. That is, we have a linear transformation $\tau: V \otimes V \to V \otimes V$ defined by $\tau(v \otimes w) = w \otimes v$. This makes it natural to ask what effect this has on our form. Two obvious possibilities are that $\langle v, w \rangle = \langle w, v \rangle$ and that $\langle v, w \rangle = -\langle w, v \rangle$. In the first case we'll call the bilinear form "symmetric", and in the second we'll call it "antisymmetric". In terms of the maps $B_1$ and $B_2$, we see that composing with the symmetry swaps the roles of these two functions. For symmetric bilinear forms, $B_1 = B_2$, while for antisymmetric bilinear forms we have $B_1 = -B_2$.

This leads us to consider nondegenerate bilinear forms a little more. If $B_1$ is an isomorphism it has an inverse $B_1^{-1}$. Then we can form the composite $B_1^{-1} \circ B_2: V \to V$. If $B$ is symmetric then this composition is the identity transformation on $V$. On the other hand, if $B$ is antisymmetric then this composition is the negative of the identity transformation. Thus, the composite transformation measures how much the bilinear transformation diverges from symmetry. Accordingly, we call it the asymmetry of the form $B$.

Finally, if we're working over a finite-dimensional vector space we can pick a basis $\{e_i\}$ for $V$, and get a matrix for $B$. We define the matrix entry $B_{ij} = \langle e_i, e_j \rangle$. Then if we have vectors $v = v^i e_i$ and $w = w^j e_j$ we can calculate

$\langle v, w \rangle = B_{ij} v^i w^j$

In terms of this basis and its dual basis $\{\epsilon^j\}$ of $V^*$, we find the image of the linear transformation: $B_1(e_i) = B_{ij} \epsilon^j$. That is, the matrix also can be used to represent the partial maps $B_1$ and $B_2$. If $B$ is symmetric, then the matrix is symmetric $B_{ij} = B_{ji}$, while if it's antisymmetric then $B_{ij} = -B_{ji}$.
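Over the reals this is all very concrete in matrix terms. Below is a minimal NumPy sketch (the function names, and the convention that $B_1$ has matrix $B$ while $B_2$ has matrix $B^T$, are my own assumptions, not from the post): the form evaluates as $v^T B w$, and the asymmetry $B_1^{-1} \circ B_2$ becomes the matrix $B^{-1} B^T$, which comes out to the identity for a symmetric form and minus the identity for an antisymmetric one.

```python
import numpy as np

def form(B, v, w):
    # <v, w> = B_ij v^i w^j
    return v @ B @ w

def asymmetry(B):
    # Matrix of the composite B_1^{-1} o B_2, under the convention
    # (an assumption here) that B_1 has matrix B and B_2 has matrix B^T.
    return np.linalg.inv(B) @ B.T

S = np.array([[2.0, 1.0], [1.0, 3.0]])    # symmetric, nondegenerate
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # antisymmetric, nondegenerate

print(np.allclose(asymmetry(S), np.eye(2)))   # True: asymmetry is the identity
print(np.allclose(asymmetry(A), -np.eye(2)))  # True: asymmetry is minus the identity
```

For any other nondegenerate $B$, `asymmetry(B)` lands somewhere between these two extremes, which is exactly the sense in which it measures the divergence from symmetry.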
<urn:uuid:3bf09a24-c60d-45a0-b8e6-cc02ddac7ed6>
CC-MAIN-2013-20
http://unapologetic.wordpress.com/2009/04/14/bilinear-forms/?like=1&source=post_flair&_wpnonce=8cb43e0c56
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.919104
608
2.546875
3
The Gram-Schmidt Process Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be "orthonormal": each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection $\{e_i\}$ is orthonormal if $\langle e_i, e_j \rangle = \delta_{ij}$. These can be useful things to have, but how do we get our hands on them? It turns out that if we have a linearly independent collection of vectors $\{v_1, \dots, v_n\}$ then we can come up with an orthonormal collection $\{e_1, \dots, e_n\}$ spanning the same subspace of $V$. Even better, we can pick it so that the first $k$ vectors $\{e_1, \dots, e_k\}$ span the same subspace as $\{v_1, \dots, v_k\}$. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt.

We proceed by induction on the number of vectors in the collection. If $n = 1$, then we simply set

$e_1 = \dfrac{v_1}{\lVert v_1 \rVert}$

This "normalizes" the vector to have unit length, but doesn't change its direction. It spans the same one-dimensional subspace, and since it's alone it forms an orthonormal collection.

Now, let's assume the procedure works for collections of size $n-1$ and start out with a linearly independent collection of $n$ vectors. First, we can orthonormalize the first $n-1$ vectors using our inductive hypothesis. This gives a collection $\{e_1, \dots, e_{n-1}\}$ which spans the same subspace as $\{v_1, \dots, v_{n-1}\}$ (and so on down, as noted above). But $v_n$ isn't in the subspace spanned by the first $n-1$ vectors (or else the original collection wouldn't have been linearly independent). So it points at least somewhat in a new direction.

To find this new direction, we define

$w = v_n - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle e_i$

This vector will be orthogonal to all the vectors from $e_1$ to $e_{n-1}$, since for any such $e_j$ we can check

$\langle e_j, w \rangle = \langle e_j, v_n \rangle - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle \langle e_j, e_i \rangle = \langle e_j, v_n \rangle - \langle e_j, v_n \rangle = 0$

where we use the orthonormality of the collection to show that most of these inner products come out to be zero. So we've got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it:

$e_n = \dfrac{w}{\lVert w \rVert}$

and we're done.
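The inductive procedure above translates almost line-for-line into code. Here is a minimal NumPy sketch (the function name is mine); `np.vdot` conjugates its first argument, so the projection coefficient matches $\langle e_i, v_n \rangle$ in the complex case as well.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors.

    For each new vector, subtract its components along the
    already-built orthonormal vectors, then normalize what is left.
    """
    basis = []
    for v in vectors:
        w = v - sum(np.vdot(e, v) * e for e in basis)  # the new direction
        basis.append(w / np.linalg.norm(w))            # scale to unit length
    return basis

es = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
print(np.allclose(np.vdot(es[0], es[1]), 0.0))  # True: the pair is orthogonal
print(np.allclose([np.linalg.norm(e) for e in es], 1.0))  # True: each has unit length
```

Note that `es[0]` points along the first input vector, so the first $k$ outputs span the same subspace as the first $k$ inputs, just as the post promises.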
<urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6>
CC-MAIN-2013-20
http://unapologetic.wordpress.com/2009/04/28/the-gram-schmidt-process/?like=1&source=post_flair&_wpnonce=fe7f791e1e
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.897189
447
3.625
4
Sarin was developed in 1938 in Germany as a pesticide. Its name is derived from the names of the chemists involved in its creation: Schrader, Ambros, Rudriger and van der Linde. Sarin is a colorless non-persistent liquid. The vapor is slightly heavier than air, so it hovers close to the ground. Under wet and humid weather conditions, Sarin degrades swiftly, but as the temperature rises up to a certain point, Sarin’s lethal duration increases, despite the humidity. Sarin is a lethal cholinesterase inhibitor. Doses which are potentially life threatening may be only slightly larger than those producing least effects. Signs and symptoms of overexposure may occur within minutes or hours, depending upon the dose. They include: miosis (constriction of pupils) and visual effects, headaches and pressure sensation, runny nose and nasal congestion, salivation, tightness in the chest, nausea, vomiting, giddiness, anxiety, difficulty in thinking, difficulty sleeping, nightmares, muscle twitches, tremors, weakness, abdominal cramps, diarrhea, involuntary urination and defecation, with severe exposure symptoms progressing to convulsions and respiratory failure. Inhalation: Hold breath until respiratory protective mask is donned. If severe signs of agent exposure appear (chest tightens, pupil constriction, incoordination, etc.), immediately administer, in rapid succession, all three Nerve Agent Antidote Kit(s), Mark I injectors (or atropine if directed by a physician). Injections using the Mark I kit injectors may be repeated at 5 to 20 minute intervals if signs and symptoms are progressing until three series of injections have been administered. No more injections will be given unless directed by medical personnel. In addition, a record will be maintained of all injections given. If breathing has stopped, give artificial respiration. Mouth-to-mouth resuscitation should be used when mask-bag or oxygen delivery systems are not available.
Do not use mouth-to-mouth resuscitation when facial contamination exists. If breathing is difficult, administer oxygen. Seek medical attention immediately. Eye Contact: Immediately flush eyes with water for 10-15 minutes, then don respiratory protective mask. Although miosis (pinpointing of the pupils) may be an early sign of agent exposure, an injection will not be administered when miosis is the only sign present. Instead, the individual will be taken immediately to a medical treatment facility for observation. Skin Contact: Don respiratory protective mask and remove contaminated clothing. Immediately wash contaminated skin with copious amounts of soap and water, 10% sodium carbonate solution, or 5% liquid household bleach. Rinse well with water to remove excess decontaminant. Administer nerve agent antidote kit, Mark I, only if local sweating and muscular twitching symptoms are observed. Seek medical attention immediately. Ingestion: Do not induce vomiting. First symptoms are likely to be gastrointestinal. Immediately administer Nerve Agent Antidote Kit, Mark I. Seek medical attention immediately. Above information courtesy of the United States Army
<urn:uuid:7ba4236f-2dbb-4dce-b113-18fc0fa8af10>
CC-MAIN-2013-20
http://usmilitary.about.com/library/milinfo/blchemical-4.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.863795
680
3.328125
3
Interagency Coordinating Council "The mission of the Utah Interagency Coordinating Council for Infants and Toddlers with Special Needs is to assure that each infant and young child with special needs will have the opportunity to achieve optimal health and development within the context of the family." Introduction to ICC: Interagency Coordinating Council for Infants and Toddlers with Disabilities and their Families What is Early Intervention? Baby Watch Early Intervention is a statewide, comprehensive, coordinated, interagency, multidisciplinary system, which provides early intervention services to infants and toddlers, younger than three years of age, with developmental delay or disability, and their families. Early intervention is the "baby" piece of Special Education. The program is authorized through the Individuals with Disabilities Education Act (IDEA), Part C, (Early Intervention Program for Infants and Toddlers with Disabilities). In 1987, Utah's Governor designated the Department of Health (DOH) as the "Lead Agency" for the early intervention program. Utah was one of the very first states in the nation to fully implement its early intervention program after securing the approval of the State Legislature. At present, there are 16 early intervention programs that serve more than 2,000 children per month in the state. It is anticipated that the demand for these services will continually increase. What is an Interagency Coordinating Council (ICC)? The requirement for an ICC was established with the passage of federal law P.L. 99-457 in October 1986. Developers of the legislation recognized the need for a group outside of the Lead Agency to "advise and assist" in the development of such a system. The independent nature of the ICC is one feature that gives the group the potential for making a contribution to the development of the service system. Another feature of the regulations is the multidisciplinary and the multi-constituency representation on the ICC.
By specifying what types of members should be included on the ICC, the legislation enables states to bring together consumer, clinical, political, and administrative communities. This merging of a variety of communities facilitates the building of bridges between the involved agencies. In addition, the committee has provided a broader vision of the service system based upon the participation and contributions of all relevant providers and consumers. The ICC, a body required by statute to be appointed by each state's Governor, is to be an important participant in the development of a well-coordinated service system (Federal Interagency Coordinating Council, June, 1989). Each state ICC determines, in conjunction with the Lead Agency, the nature of the roles and tasks it chooses to perform at various policy stages. The Utah ICC is an interagency group whose membership represents the statewide early childhood services community. It comprises up to 25 members. The purpose of the Utah ICC is to advise and assist the lead agency in the Division of Community and Family Health Services, Bureau of Children with Special Health Care Needs in the UDOH. Much of the work of the ICC is accomplished in standing committees and ad hoc task force meetings that perform long-range planning, study specific issues and take appropriate action. A member of the ICC chairs each committee. What role does the ICC play?
The federal law defines the Council membership and the program in order to give it a unique view of the "service systems". The parent component of the Council gives it a perspective which may be different from that presented by state agencies which are represented on the Council. The Council can use its special vantage point to be recognized as a source of information for the Lead Agency, Governor, and legislators, as well as other key decision makers in the state. (2) NEGOTIATOR: Working as an advocate to encourage a particular course of action by the state. A major activity of the Council is to "review and comment on the annual state plan for services for children birth to three years" as part of its overall responsibility to assess the service system as it exists in the state. This review, together with interagency coordination (another important goal of the program), puts the Council in a position to be effective in making changes in how services are provided in the state. With agency and provider representatives on the Council, communication can more easily be effected and gaps between agencies can hopefully be bridged. (3) CAPACITY BUILDER: Enhancing the ability of the overall service system to address service needs. In this role, the Council works to increase the quality and quantity of desired supports and services from the public and private sectors, to ensure that all needy children and families will be provided early intervention services.
<urn:uuid:ca8c9151-949c-43e8-9c9b-d2e43029f3ed>
CC-MAIN-2013-20
http://utahbabywatch.org/icc/index.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954089
1,031
2.734375
3
On August 9, 2011, the Canadian Ice Service (CIS) reported that the Petermann Ice Island-A (PII-A) appeared to be grounded off the east coast of Newfoundland, east of the city of St. Anthony. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite captured this natural-color image of the ice island and its surroundings on August 14, 2011. Clouds hide much of the region, and white lines delineate coasts and borders. PII-A appears as an irregularly shaped white body east of St. Anthony. What look like small fragments of ice appear immediately west and north of the ice island. The CIS had reported for weeks that the ice island was losing mass due to melting and calving, so a continued loss of ice is consistent with CIS reports. PII-A is a remnant of a much larger ice island that calved off the Petermann Glacier in northwestern Greenland on August 5, 2010. Over the course of the following year, that ice island fragmented into smaller pieces, which continued drifting. Other fragments of the original ice island were in Baffin Bay and Lancaster Sound as of August 9, according to the CIS. - Canadian Ice Service (2011, August 9). Petermann Ice Island Updates. Accessed August 15, 2011.
<urn:uuid:c058ae56-08dd-48a3-bf10-e5d80e3b6477>
CC-MAIN-2013-20
http://visibleearth.nasa.gov/view.php?id=51737
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957788
275
2.90625
3
The press release doesn’t contain any pictures, and really doesn’t do this new web tool justice, so I’ve added some screencaps. In a nutshell, the new iSWA site lets you arrange graphical packages of solar images and plots onscreen for simultaneous evaluation. Stuff that had been scattered over several solar related websites is now in one interface. Pretty cool. – Anthony When NASA’s satellite operators need accurate, real-time space-weather information, they turn to the Community Coordinated Modeling Center (CCMC) of the Space Weather Laboratory at NASA’s Goddard Space Flight Center in Greenbelt, Md. The CCMC’s newest and most advanced space-weather science tool is the Integrated Space Weather Analysis (iSWA) system. The iSWA is a robust, integrated system that provides information about space weather conditions past, present, and future and, unlike many other programs currently in use, has an interface that the user can customize to suit a unique set of data requirements. “The iSWA space-weather data analysis system offers a unique level of customization and flexibility to maintain, modify, and add new tools and data products as they become available,” says Marlo Maddox, iSWA system chief developer at NASA Goddard. iSWA draws together information about conditions from the sun to the boundary of the sun’s influence, known as the heliosphere. The iSWA system digests information from spacecraft including the National Oceanic and Atmospheric Administration’s (NOAA) Geostationary Operational Environmental Satellites (GOES), NASA’s Solar Terrestrial Relations Observatory (STEREO), the joint European Space Agency and NASA mission Solar and Heliospheric Observatory (SOHO), and NASA’s Advanced Composition Explorer (ACE). Citizen scientists and science enthusiasts can also use the data, models, and tools of the iSWA system.
Similar to the way in which armchair astronomers have used SOHO data to discover comets, enthusiasts will find the iSWA system a wonderful resource for increasing their familiarity with the concept of space weather. “We are continuously evolving the iSWA system, and we hope that it will benefit not only NASA satellite operators, but also that it may also help space-weather forecasting at other agencies such as the Air Force Weather Agency and NOAA,” says Michael Hesse, chief of the Space Weather Laboratory at NASA Goddard. Space-weather information tends to be scattered over various Web sites. NASA Goddard space physicist Antti Pulkkinen says the iSWA system represents “the most comprehensive single interface for general space-weather-related information,” providing data on past and current space-weather events. The system allows the user to configure or design custom displays of the information. The system compiles data about conditions on the sun, in Earth’s magnetosphere — the protective magnetic field that envelops our planet — and down to Earth’s surface. It provides a user interface to provide NASA’s satellite operators with a real-time view of space weather. In addition to NASA, the iSWA system is used by the Air Force Weather Agency. Access to space-weather information that combines data from state-of-the-art space-weather models with concurrent observations of the space environment provides a powerful tool for users to obtain a personalized “quick look” at space-weather information, detailed insight into space-weather conditions, as well as tools for historical analysis of the space-weather’s impact. Development of the iSWA system has been a joint activity between the Office of the Chief Engineer at NASA Headquarters and the Applied Engineering and Technology Directorate and the Science and Exploration Directorate at NASA Goddard. The iSWA system is located at NASA Goddard.
The Community Coordinated Modeling Center is funded by the Heliophysics Division in the Science Mission Directorate at NASA Headquarters, and the National Science Foundation. Layout selector tool:
<urn:uuid:ee290d5b-d6ca-45b4-9106-2c14f262df50>
CC-MAIN-2013-20
http://wattsupwiththat.com/2010/02/24/new-all-in-one-space-weather-tool-from-nasa/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.882555
837
2.640625
3
By JOHN CARTER When Abraham Lincoln died from an assassin’s bullet on April 15, 1865, Edwin Stanton remarked to those gathered around his bedside, “Now he belongs to the ages.” One of the meanings implied in Stanton’s famous statement is that Lincoln would not only be remembered as an iconic figure of the past, but that his spirit would also play a significant role in ages to come. The Oscar-nominated movie “Lincoln,” which chronicles the struggle to pass the 13th amendment abolishing slavery, has turned our attention again to Lincoln’s legacy and his relevance amid our nation’s present divisions and growing pains. Here is some of the wit and wisdom of Abraham Lincoln worth pondering: “As for being president, I feel like the man who was tarred and feathered and ridden out of town on a rail. To the man who asked him how he liked it, he said, ‘If it wasn’t for the honor of the thing, I’d rather walk.’” “I desire so to conduct the affairs of this administration that if at the end, when I come to lay down the reins of power, I have lost every other friend on earth, I shall at least have one friend left, and that friend shall be down inside of me.” “Should my administration prove to be a very wicked one, or what is more probable, a very foolish one, if you the people are true to yourselves and the Constitution, there is but little harm I can do, thank God.” “Bad promises are better broken than kept.” “I am not at all concerned that the Lord is on our side in this great struggle, for I know that the Lord is always on the side of the right; but it is my constant anxiety and prayer that I and this nation may be on the Lord’s side.” “I have never had a feeling, politically, that did not spring from the sentiments embodied in the Declaration of Independence.” “Those who deny freedom to others deserve it not for themselves; and, under a just God, cannot long retain it.” “As I would not be a slave, so I would not be a master. 
This expresses my idea of democracy.” “The probability that we may fail in the struggle ought not to deter us from the support of a cause we believe to be just.” “The true rule, in determining to embrace or reject anything, is not whether it have any evil in it, but whether it have more evil than good. There are few things wholly evil or wholly good.” “Some of our generals complain that I impair discipline and subordination in the army by my pardons and respites, but it makes me rested, after a hard day’s work, if I can find some good excuse for saving a man’s life, and I go to bed happy as I think how joyful the signing of my name will make him (a deserter) and his family.” “I have been driven many times to my knees by the overwhelming conviction that I had nowhere else to go.” In addition, Lincoln’s Gettysburg Address and his second inaugural speech are ever relevant. And you may wish to add your own favorites to these. Paul’s advice to us in Philippians 4:8 is to “fill your minds with those things that are good and deserve praise: things that are true, noble, right, pure, lovely, and honorable.” As we celebrate his birthday on the 12th, Lincoln’s words more than meet this standard! John Carter is a Weatherford resident whose column, “Notes From the Journey,” is published weekly in the Weatherford Democrat.
<urn:uuid:d53f9812-f42b-4039-a509-209a2d5aac9b>
CC-MAIN-2013-20
http://weatherforddemocrat.com/opinion/x1303543173/NOTES-FROM-THE-JOURNEY-Lincoln-is-still-one-for-the-ages
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.968393
821
3.390625
3
TAKING EVERY PRECAUTION Japan Takes Measures to Prevent SARS (June 9, 2003) As severe acute respiratory syndrome (SARS), a new type of pneumonia, rages in wide areas of Asia and other places, the Japanese government has been busy taking measures to prevent an outbreak from occurring in Japan. The government has urged people to take caution in traveling to affected areas, and it has been making every effort to prevent SARS from entering Japan. In addition, work is progressing on a system in which medical institutions, national and local governments, and corporations will act together to prevent the spread of SARS in the event of an outbreak in Japan. As a result of these efforts, as of June 9, there have been no confirmed or probable cases of SARS in Japan. [Photo: Medical staff practice using an isolator. (Jiji)] Plans Already Developed for Dealing with Patients On May 1 the government brought the heads of the relevant ministries and agencies together for a first-ever meeting devoted to SARS in order to decide what measures should be taken in the event that someone in Japan is found to be infected with the virus. The group decided to call on people returning from China to stay at home for 10 days, which is believed to be the incubation period for the disease. Taking this into consideration, the Ministry of Health, Labor, and Welfare made plans for taking action in the event of an outbreak. It decided to give local governments the authority to direct people believed likely to be infected, or "probable patients," to hospitalize themselves. In the event that a patient refuses, the local governments are empowered to forcibly hospitalize the person. Local governments are readying themselves to accept patients. According to a survey conducted by the Nihon Keizai Shimbun in early May, all of the nation's 47 prefectures had already completed action plans spelling out what measures would be taken in the event of an outbreak.
In addition, some 250 medical institutions around the country have made such preparations as setting up "negative air-pressure rooms" to prevent the virus from spreading within the hospital or to the outside. Local governments in such places as Kitakyushu City, Hokkaido, and Mie Prefecture have been purchasing capsules called isolators to be used when suspected SARS patients are moved, and they have conducted drills on how to use them with volunteers playing the role of patients. In May a foreign traveler who had been to Japan was found to be infected with SARS. When this was discovered, the government and local authorities quickly implemented emergency measures, as a result of which no secondary infections occurred. According to a survey conducted by the Asahi Shimbun, 28 local governments out of the 47 prefectures and 13 major cities in Japan, nearly half the total, were rethinking their plans to cope with a potential SARS outbreak in light of this news. Fukushima Prefecture decided to check whether visitors from abroad have come from an area to which the World Health Organization recommends postponing travel. It will also make use of the local hotels association to determine the previous whereabouts of such guests. Kagawa Prefecture, meanwhile, which had previously only planned for people who had come in close contact with SARS patients, defined as having been within 2 meters, has created an action plan for checking on people who have had even a low possibility of coming in contact with a carrier. Public and Private Sectors Taking Action The Japanese government is stepping up its efforts to take rapid, nationwide measures to prevent SARS infection. The Ministry of Health, Labor, and Welfare has accelerated revision of the Infectious Disease Law, for example. 
And while local governments are the first line of defense in tracking the path of infection and following up on people who may have been exposed, the national government will become directly involved in the event that infection spreads outside of a local area. Japan is also actively engaged in international cooperation aimed at preventing the spread of SARS. The private sector has also been taking action to prevent the spread of SARS and to reassure travelers. West Japan Railway Co. (JR West) has set up a SARS-response headquarters and is considering disinfecting affected carriages in the event that an infected person is found to have been onboard a certain train at a certain time. The company also decided to publicly release information on the time and route traveled by any SARS patients. Orient Ferry, which runs a ferry route from Shimonoseki to China's Qingdao, has since late April requested that all passengers and crew fill out health questionnaires, and the company has trained staff for what to do in the event that a passenger falls ill with SARS while onboard. The terminal in Qingdao, the shuttle bus, and the inside of the ship are all disinfected every day. Meanwhile, some companies have taken the step of postponing scheduled business trips to affected areas, and, in response to requests by the government, airlines and ship operators whose vessels operate in Japan are distributing health questionnaires to their staff and passengers. Japan has avoided SARS so far, and there is every reason to be confident that the country will remain free of the disease. Even if an outbreak did occur, the concerted efforts of local and national governments and private enterprises to prepare for such an eventuality suggest that it would be handled quickly and efficiently. Note: The government's "Measures upon Entry/Return to Japan" for travelers heading to Japan can be found here.
(http://www.mofa.go.jp/policy/health_c/sars/measure0521.html) Copyright (c) 2004 Web Japan. Edited by Japan Echo Inc. based on domestic Japanese news sources. Articles presented here are offered for reference purposes and do not necessarily represent the policy or views of the Japanese Government.
<urn:uuid:b366d573-5d03-4927-bd6a-353a5c4ca06f>
CC-MAIN-2013-20
http://web-japan.org/trends/lifestyle/lif030609.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95093
1,318
2.96875
3
Protecting your skin and checking it for changes are keys to preventing another melanoma or catching one in an early, treatable stage. Exposure to ultraviolet (UV) rays produced by the sun increases your risk of melanoma. Here's how to protect your skin from the sun's UV rays:
- Cover your skin with clothing, including a shirt and a hat with a broad brim.
- When outside, try to sit in shady areas.
- Avoid exposing your skin to the sun between 10:00 a.m. and 2:00 p.m. standard time or 11:00 a.m. and 3:00 p.m. daylight saving time.
- Use sunscreens with a sun protection factor (SPF) of 15 or more on skin that will be exposed to the sun.
- Wear sunglasses with 99% or 100% UV absorption to protect your eyes.
- Don't use sun lamps or tanning booths.
Check your skin regularly and have someone help you check areas you can't see, such as your back and buttocks, scalp, underneath the breasts of women, and the backs of the legs. If you notice a new, changing, or irregular-looking mole, show it to a doctor experienced in recognizing skin cancers, such as a dermatologist. Warning signs may include a large or irregular shape, a border that is not smooth and even, more than one color, or an irregular texture. Your doctor may monitor the mole or recommend removing it. Contact your doctor if you discover a mole that is new, has changed, or looks suspicious: large or of irregular shape, color, or texture.
- Reviewer: Brian Randall, MD
- Review Date: 04/2013
- Update Date: 04/09/2013
<urn:uuid:eb7c4ed1-acfb-48af-a77a-40cd6a725c8e>
CC-MAIN-2013-20
http://westfloridahospital.com/your-health/?/2010814049/Lifestyle-Changes-to-Manage-Melanoma
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.89927
367
3.03125
3
Archaeological Site of Rehman Dheri Department of Archaeology and Museums Property names are listed in the language in which they have been submitted by the State Party. The archaeological site of Rehman Dheri consists of a rectangular-shaped mound covering some twenty-two hectares and standing 4.5 metres above the surrounding field. The final occupational phase of the site is clearly visible on the surface of the mound by eye and also through air photographs. It consisted of a large walled rectangular area with a grid-iron network of streets and lanes dividing the settlement into regular blocks. Walls delineating individual buildings and street frontages are clearly visible in the early morning dew or after rain, and it is also possible to identify the location of a number of small-scale industrial areas within the site, marked, as they are, by eroding kilns and scatters of slag. The surface of the mound is littered with thousands of sherds and artefacts, slowly eroding out of room fills. The archaeological sequence at the site of Rehman Dheri is over 4.5 metres deep, and covers a sequence of over 1,400 years beginning at c.3300 BC. The site represents the following periods:
Period I: c.3300-2850 BC
Period II: c.2850-2500 BC
Period III: c.2500-1900 BC
It is generally accepted that the settlement received its formal plan in its earliest phases and that subsequent phases replicated the plan over time. Although its excavators have cut a number of deep trenches or soundings into the lower levels, the areas exposed have been too limited to undertake a study of change in layout and the spatial distribution of craft activities. It was abandoned at the beginning of the mature Indus phase by the middle of the third millennium BC, and subsequent activities, greatly reduced, are only recorded on the neighbouring archaeological mound, Hisam Dheri.
The plan of the Early Harappan settlement is therefore undisturbed by later developments and, as such, represents an exceptionally well-preserved example of the beginnings of urbanisation in South Asia.
<urn:uuid:113e3986-b949-4542-97da-d6842557b2f6>
CC-MAIN-2013-20
http://whc.unesco.org/pg_friendly_print.cfm?cid=326&id=1877
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960351
426
2.765625
3
- weak drug regulatory control and enforcement;
- scarcity and/or erratic supply of basic medicines;
- unregulated markets and distribution chains;
- high drug prices; and/or
- significant price differentials.
At the national level, governments, law enforcement agencies, health professionals, the pharmaceutical industry, importers, distributors, and consumer organizations should adopt a shared responsibility in the fight against counterfeit drugs. Cooperation between countries, especially trading partners, is very useful for combating counterfeiting. Cooperation should include the timely and appropriate exchange of information and the harmonization of measures to prevent the spread of counterfeit medicines. The World Health Organization has developed and published guidelines, Guidelines for the development of measures to combat counterfeit medicines. These guidelines provide advice on measures that should be taken by the various stakeholders and interested parties to combat counterfeiting of medicines. Governments and all stakeholders are encouraged to adapt or adopt these guidelines in their fight against counterfeiting of medicines.
- Guidelines for the development of measures to combat counterfeit medicines
- Rapid Alert System for counterfeit medicines
Communication and advocacy - creating public awareness
Patients and consumers are the primary victims of counterfeit medicines. In order to protect them from the harmful effects of counterfeit medicines it is necessary to provide them with appropriate information and education on the consequences of counterfeit medicines. Patients and consumers expect to get advice from national authorities, health-care providers, health professionals and others on where they should buy or get their medicines, and what measures they should take in case they come across such medicines or are affected by the use of such medicines.
Ministries of health, national medicines regulators, health professional associations, nongovernmental organizations and other stakeholders have the responsibility to participate in campaign activities targeting patients and consumers to promote awareness of the problem of counterfeit medicines. Posters, brochures, radio and television programmes are useful means for disseminating messages and advice.
<urn:uuid:3ffdac17-ada1-42bf-b987-66bc26ca97f6>
CC-MAIN-2013-20
http://who.int/impact/activities/en/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.929692
378
3.21875
3
Detailed Distribution Map Information This map reflects the specimen location information from the Wisconsin Botanical Information System database and attempts to line up the original Town-Range Survey map from 1833 to 1866 with a computer-generated table grid over the map of Wisconsin. Because the original Town-Range lines are inexact, these "dots" might be somewhat skewed. Also, townships near the borders of the state might only be partial, so the "dot" might center outside the state's boundary. Holding the mouse over the "dot" identifies the Town-Range. Clicking (new window) on the "dot" will link to a list of all specimen accession numbers for this location. You can then link to the individual specimen's label data. Arrange this window side-by-side with the specimen-list window so you can easily go back and forth between this map and the specimen's data.
<urn:uuid:40292269-3406-46cf-84aa-4c4efc553ecc>
CC-MAIN-2013-20
http://wisplants.uwsp.edu/scripts/maps.asp?SpCode=BARVUL&bkg=s
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.850816
185
2.828125
3
Lipman Bers, always known as Lipa, was born into a Jewish family. His parents Isaac Bers and Bertha Weinberg were teachers, his mother being head of an elementary school in Riga where teaching was in Yiddish, while his father was head of the Yiddish high school in Riga. Born in 1914, Lipa's early years were much affected by the political and military events taking place in Russia. Latvia had been under Russian imperial rule since the 18th century, so World War I meant that there were evacuations from Riga. The Russian Revolution, which began in October 1917, caused fighting between the Red Army and the White Army, and for the next couple of years various parts of Russia came first under the control of one faction and then of the other. Lipa's family went to Petrograd, the name that St Petersburg had been given in 1914 when there was strong anti-German feeling in Russia, but Lipa was too young to understand the difficulties that his parents went through at this time. At the end of World War I in 1918, Latvia regained its independence, although this was to be short-lived. Lipa spent some time back in Riga, but he also spent time in Berlin. His mother took him to Berlin while she was training at the Psychoanalytic Institute. During his schooling mathematics became his favourite subject and he decided that it was the subject he wanted to study at university. He studied at the University of Zurich, then returned to Riga and studied at the university there. At this time Europe was a place of extreme politics and, in 1934, Latvia became ruled by a dictator. Lipa was a political activist, a social democrat who argued strongly for human rights. He was at this time a soap-box orator, putting his views across strongly both in speeches and in writing for an underground newspaper.
Strongly opposed to dictators and strongly advocating democracy it was clear that his criticism of the Latvian dictator could not be ignored by the authorities. A warrant was issued for his arrest and, just in time, he escaped to Prague. His girl friend Mary Kagan followed him to Prague where they married on 15 May 1938. There were a number of reasons why Bers chose to go to Prague at this time. Firstly he had to escape from Latvia, secondly Prague was in a democratic country, and thirdly his aunt lived there so he could obtain permission to study at the Charles University without having to find a job to support himself. One should also not underestimate the fact that by this stage his mathematical preferences were very much in place and Karl Loewner in Prague looked the ideal supervisor. Indeed Bers did obtain his doctorate which was awarded in 1938 from the Charles University of Prague where he wrote a thesis on potential theory under Karl Loewner's supervision. At the time Bers was rather unhappy with Loewner :- Lipa spoke of feeling neglected, perhaps even not encouraged, by Loewner and said that only in retrospect did he understand Loewner's teaching method. He gave to each of his students the amount of support needed ... It is obvious that Lipa did not appear too needy to Loewner. In 1938 Czechoslovakia became an impossible country for someone of Jewish background. Equally dangerous was the fact that Bers had no homeland since he was a wanted man in Latvia, and was a left wing academic. With little choice but to escape again, Bers fled to Paris where his daughter Ruth was born. However, the war followed him and soon the Nazi armies began occupying France. Bers applied for a visa to the USA and, while waiting to obtain permission, he wrote two papers on Green's functions and integral representations. 
Just days before Paris surrendered to the advancing armies, Bers and his family moved from Paris to a part of France not yet under attack from the advancing German armies. At last he received the news that he was waiting for, the issue of American visas for his family. In 1940 Bers and his family arrived in the United States and joined his mother who was already in New York. There was of course a flood of well qualified academics arriving in the United States fleeing from the Nazis and there was a great scarcity of posts, even for the most brilliant, so he was unemployed until 1942, living with other unemployed refugees in New York. During this time he continued his mathematical researches. After this he was appointed Research Instructor at Brown University where, as part of work relevant to the war effort, he studied two-dimensional subsonic fluid flow. This was important at that time since aircraft wings were being designed for planes with jet engines capable of high speeds. Between 1945 and 1949 Bers worked at Syracuse University, first at Assistant Professor, later as Associate Professor. Gelbart wanted to build up the department at Syracuse and attracting both Bers and Loewner was an excellent move. Here Bers began work on the problem of removability of singularities of non-linear elliptic equations. His major results in this area were announced by him at the International Congress of Mathematicians in 1950 and his paper Isolated singularities of minimal surfaces was published in the Annals of Mathematics in 1951. Courant writes:- The nonparametric differential equation of minimal surfaces may be considered the most accessible significant example revealing typical qualities of solutions of non-linear partial differential equations. With a view to such a general objective, [Bers] has studied singularities, branch-points and behaviour in the large of minimal surfaces. Abikoff writes in that this paper is:- ... 
a magnificent synthesis of complex analytic techniques which relate the different parameterisations of minimal surfaces to the representations of the potential function for subsonic flow and thereby achieves the extension across the singularity. Bers then became a member of the Institute for Advanced Study at Princeton where he began work on Teichmüller theory, pseudoanalytic functions, quasiconformal mappings and Kleinian groups. He was set in the right direction by an inequality he found in a paper of Lavrentev who attributed the inequality to Ahlfors. In a lecture he gave in 1986 Bers explained what happened next:- I was in Princeton at the time. Ahlfors came to Princeton and announced a talk on quasiconformal mappings. He spoke at the University so I went there and sure enough, he proved this theorem. So I came up to him after the talk and asked him "Where did you publish it?", and he said "I didn't". "So why did Lavrentev credit you with it?" Ahlfors said "He probably thought I must know it and was too lazy to look it up in the literature". When Bers met Lavrentev three years later he asked him the same questions and, indeed, Ahlfors had been correct in guessing why Lavrentev had credited him. Bers continued in his 1986 lecture:- I immediately decided that, first of all, if quasiconformal mappings lead to such powerful and beautiful results and, secondly, if it is done in this gentlemanly spirit - where you don't fight over priority - this is something that I should spend the rest of my life studying. It is ironic, given Bers strong political views on human rights, that he should find that Teichmüller, a fervent Nazi, had already made stunning contributions. In one of his papers on Teichmüller theory, Bers quotes Plutarch:- It does not of necessity follow that, if the work delights you with its grace, the one who wrought it is worthy of your esteem. 
In 1951 Bers went to the Courant Institute in New York, where he was a full professor, and remained there for 13 years. During this time he wrote a number of important books and surveys on his work. He published Theory of pseudo-analytic functions in 1953, which Protter, in a review, described as follows:- The theory of pseudo-analytic functions was first announced by [Bers] in two notes. These lecture notes not only contain proofs and extensions of the results previously announced but give a self-contained and comprehensive treatment of the subject. The author sets as his goal the development of a function theory for solutions of linear, elliptic, second order partial differential equations in two independent variables (or systems of two first-order equations). One of the chief stumbling blocks in such a task is the fact that the notion of derivative is a hereditary property for analytic functions while this is clearly not the case for solutions of general second order elliptic equations. Another classic text was Mathematical aspects of subsonic and transonic gas dynamics, published in 1958:- It should be said, even though this is taken for granted by everybody in the case of Professor Bers, that the survey is masterly in its elegance and clarity. In 1958 Bers addressed the International Congress of Mathematicians in Edinburgh, Scotland, where he lectured on Spaces of Riemann surfaces and announced a new proof of the measurable Riemann mapping theorem. In his talk Bers summarised recent work on the classical problem of moduli for compact Riemann surfaces and sketched a proof of the Teichmüller theorem characterizing extremal quasiconformal mappings. He showed that the Teichmüller space for surfaces of genus g is a (6g-6)-cell, and showed how to construct the natural complex analytic structure for the Teichmüller space. Bers was a Guggenheim Fellow in 1959-60, and a Fulbright Fellow in the same academic year.
From 1959 until he left the Courant Institute in 1964, Bers was Chairman of the Graduate Department of Mathematics. In 1964 Bers went to Columbia University where he was to remain until he retired in 1984. He was chairman of the department from 1972 to 1975. He was appointed Davies Professor of Mathematics in 1972, becoming Emeritus Davies Professor of Mathematics in 1982. During this period Bers was Visiting Miller Research Professor at the University of California at Berkeley in 1968. Tilla Weinstein describes Bers as a lecturer:- Lipa's courses were irresistible. He laced his lectures with humorous asides and tasty tidbits of mathematical gossip. He presented intricate proofs with impeccable clarity, pausing dramatically at the few most critical steps, giving us a chance to think for ourselves and to worry that he might not know what to do next. Then, just as the silence got uncomfortable, he would describe the single most elegant way to complete the argument. Jane Gilman describes Bers' character:- Underneath the force of Bers' personality and vivacity was the force of his mathematics. His mathematics had a clarity and beauty that went beyond the actual results. He had a special gift for conceptualising things and placing them in the larger context. Bers' life is summed up by Abikoff as follows:- Lipa possessed a joy of life and an optimism that is difficult to find at this time and that is sorely missed. Those of us who experienced it directly have felt an obligation to pass it on. That, in addition to the beauty of his own work, is Lipa's enduring gift to us. We have yet to say something about Bers' great passion for human rights. In fact this was anything but a sideline in his life, and one could consider that he devoted himself full-time to both his mathematical work and to his work as a social reformer.
Perhaps his views are most clearly expressed by quoting from an address he gave in 1984 when awarded an honorary degree by the State University of New York at Stony Brook:- By becoming a human rights activist ... you do take upon yourself certain difficult obligations. ... I believe that only a truly even-handed approach can lead to an honest, morally convincing, and effective human rights policy. A human rights activist who hates and fears communism must also care about the human rights of Latin American leftists. A human rights activist who sympathises with the revolutionary movement in Latin America must also be concerned about human rights abuses in Cuba and Nicaragua. A devout Muslim must also care about human rights of the Bahai in Iran and of the small Jewish community in Syria, while a Jew devoted to Israel must also worry about the human rights of Palestinian Arabs. And we American citizens must be particularly sensitive to human rights violations for which our government is directly or indirectly responsible, as well as to the human rights violations that occur in our own country, as they do. Bers received many honours for his contributions in addition to those we have mentioned above. He was elected to the American Academy of Arts and Sciences, to the Finnish Academy of Sciences, and to the American Philosophical Society. He served the American Mathematical Society in several capacities, particularly as Vice-President (1963-65) and as President (1975-77). The American Mathematical Society awarded him their Steele Prize in 1975. He received the New York Mayor's award in Science and Technology in 1985. He was an honorary life member of the New York Academy of Sciences, and of the London Mathematical Society. 
Article by: J J O'Connor and E F Robertson
Honours awarded to Lipman Bers:
- AMS Colloquium Lecturer, 1971
- AMS Steele Prize, 1975
- American Mathematical Society President, 1975-1976
- LMS Honorary Member, 1984
JOC/EFR © April 2002, School of Mathematics and Statistics, University of St Andrews, Scotland
<urn:uuid:8e3ce5d6-a76b-4e22-8892-e1280e29f3f7>
CC-MAIN-2013-20
http://www-history.mcs.st-and.ac.uk/~history/Biographies/Bers.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.980487
2,948
3
3
x^(2/3) + y^(2/3) = a^(2/3)
x = a cos^3(t), y = a sin^3(t)
Associated curves of the astroid include its evolute, its involutes, its inverse and pedal curves (with respect to the origin or another point), its negative pedal curves, and its caustics.
The astroid only acquired its present name in 1836, in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle.
The length of the astroid is 6a and its area is 3πa^2/8.
The gradient of the tangent T at the point with parameter p is -tan(p). The equation of this tangent T is x sin(p) + y cos(p) = a sin(2p)/2. Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a.
The astroid can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a. It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes; it is therefore a glissette.
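The arc length (6a), the area (3πa^2/8), and the constant tangent-intercept length can all be checked numerically from the parametric equations. The sketch below is an illustration added here, not part of the original page; the function names are my own, and it uses only the Python standard library with a midpoint Riemann sum.

```python
import math

def astroid_length(a, n=100000):
    # Speed |r'(t)| = 3a|sin t cos t| = (3a/2)|sin 2t|; integrate over [0, 2*pi].
    h = 2 * math.pi / n
    return sum((3 * a / 2) * abs(math.sin(2 * (i + 0.5) * h)) * h for i in range(n))

def astroid_area(a, n=100000):
    # Green's theorem: enclosed area = (1/2) * closed integral of (x dy - y dx).
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = a * math.cos(t) ** 3, a * math.sin(t) ** 3
        dx = -3 * a * math.cos(t) ** 2 * math.sin(t)
        dy = 3 * a * math.sin(t) ** 2 * math.cos(t)
        total += 0.5 * (x * dy - y * dx) * h
    return total

a = 2.0
print(astroid_length(a))  # close to 6a = 12
print(astroid_area(a))    # close to 3*pi*a**2/8

# Tangent at parameter p meets the axes at X = (a cos p, 0) and Y = (0, a sin p),
# so the segment XY has length a regardless of p.
p = 0.7
print(math.hypot(a * math.cos(p), a * math.sin(p)))  # equals a (up to rounding)
```

Green's theorem is used for the area because the curve is closed and traversed counterclockwise for t in [0, 2π], so no piecewise handling of the cusps is needed.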
<urn:uuid:367a0525-d005-4467-93f1-a7ac123614d1>
CC-MAIN-2013-20
http://www-history.mcs.st-andrews.ac.uk/history/Curves/Astroid.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.859534
409
2.71875
3
The machete blades turned red with heat in the fire that the rubber workers built on a Liberia plantation, Thomas Unnasch remembers from a visit in the 1980s. This was how the men tried to quell the intense itchiness that comes with river blindness, a rare tropical disease. "You can imagine how bad the itching must be, that running a red-hot machete up and down your back would be a relief, but it was," said Unnasch, whose laboratory works on diagnostic tests for the disease. About 18 million people have river blindness worldwide, according to the World Health Organization, but more than 99% of cases of this disease are found in Africa. It goes by the technical name "onchocerciasis," and it spreads through small black flies that breed in fast-flowing, highly oxygenated waters. When an infected fly bites a person, it drops worm larvae in the skin, which can then grow and reproduce in the body. Unlike malaria, river blindness is not fatal, but it causes a "miserable life," said Moses Katabarwa, senior epidemiologist for the Atlanta-based Carter Center's River Blindness Program, which has been leading an effort to eliminate the disease in the Americas and several African countries. Some strains cause blindness, while others come with more severe skin disease. With time, generally all strains of the disease can lead to rough "lizard" skin, depigmented "leopard skin" and hanging groins. Another big problem among patients is itching, which happens when the worms die inside a person. In southwest Uganda, the locals call the disease "Obukamba," referring to the symptoms of distorted skin appearance and itchiness, Katabarwa said. In western Uganda, he said, "the fly is called 'Embwa fly' or dog fly, for it bites like a dog!" There is no vaccine for river blindness, but there is a drug, called ivermectin that paralyzes and kills the offspring of adult worms, according to the Mayo Clinic. 
It may also slow the reproduction of adult female worms, so there are fewer of them in the skin, blood and eyes. The pharmaceutical company Merck has been donating the treatment, under the brand name Mectizan, since 1985. Great strides have been made against this disease. In the Americas, it was eliminated in Colombia in 2007 and in Ecuador in 2009.
<urn:uuid:415cd5cc-0228-4449-a777-4a0bb2194449>
CC-MAIN-2013-20
http://www.4029tv.com/news/health/Fighting-river-blindness/-/8897344/18384038/-/item/0/-/o7f20pz/-/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959309
502
3.15625
3
Attention Deficit Hyperactivity Disorder, or ADHD, is a common childhood illness. People who are affected can have trouble with paying attention, sitting still and controlling their impulses. There are three types of ADHD. The most common type of ADHD is when people have difficulties with both attention and hyperactivity. This is called ADHD combined type. Some people only have difficulty with attention and organization. This is ADHD inattentive subtype, or Attention Deficit Disorder (ADD). Other people have only the hyperactive and impulsive symptoms. This is ADHD hyperactive subtype. It is a health condition involving biologically active substances in the brain. Studies show that ADHD may affect certain areas of the brain that allow us to solve problems, plan ahead, understand others' actions, and control our impulses. Many children and adults are easily distracted at times or have trouble finishing tasks. If you suspect that your child has ADHD, it is important to have your child evaluated by his or her doctor. In order for your child's doctor to diagnose your child with ADHD, the behaviors must appear before age 7 and continue for at least six months. The symptoms must also create impairment in at least two areas of the child's life: in the classroom, on the playground, at home, in the community, or in social settings. Many children have difficulties with their attention, but attention problems are not always due to ADHD. For example, stressful life events and other childhood conditions, such as problems with schoolwork caused by a learning disability or anxiety and depression, can interfere with attention. According to the National Institute of Mental Health, ADHD occurs in an estimated 3 to 5 percent of preschool and school-age children. Therefore, in a class of 25 to 30 children, it is likely that at least one student will have this condition. ADHD begins in childhood, but it often lasts into adulthood.
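The classroom figure quoted from the NIMH can be sanity-checked with simple arithmetic; the sketch below just applies the 3 to 5 percent prevalence range to typical class sizes:

```python
# Expected number of students with ADHD per class, for the quoted
# 3-5% prevalence range and class sizes of 25-30.
for prevalence in (0.03, 0.05):
    for class_size in (25, 30):
        expected = prevalence * class_size
        print(f"{prevalence:.0%} of {class_size} students -> {expected:.2f} expected cases")
```

At the upper end (5 percent of 30 students) the expected count is 1.5, which is where the "at least one student per class" rule of thumb comes from; at the lower end it is closer to one student in every other class.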
Several studies done in recent years estimate that 30 to 65 percent of children with ADHD continue to have symptoms into adolescence and adulthood. No one knows exactly what causes ADHD. There appears to be a combination of causes, including genetics and environmental influences. Several different factors could increase a child's likelihood of having the disorder, such as gender, family history, prenatal risks, environmental toxins and physical differences in the brain. A child with ADHD often shows some of the following:
Difficulties with attention:
- trouble paying attention
- inattention to details and makes careless mistakes
- easily distracted
- losing things such as school supplies
- forgetting to turn in homework
- trouble finishing class work and homework
- trouble listening
- trouble following multiple adult commands
Hyperactivity and impulsivity:
- difficulty playing quietly
- inability to stay seated
- running or climbing excessively
- always "on the go"
- talks too much and interrupts or intrudes on others
- blurts out answers
The good news is that effective treatment is available. The first step is to have a careful and thorough evaluation with your child's primary care doctor or with a qualified mental health professional. With the right treatment, children with ADHD can improve their ability to pay attention and control their behavior. The right care can help them grow, learn, and feel better about themselves. Medications: Most children with ADHD benefit from taking medication. Medications do not cure ADHD. Medications can help a child control his or her symptoms on the day that the pills are taken. Medications for ADHD are well established and effective. There are two main types: stimulant and non-stimulant medications. Stimulants include methylphenidate and amphetamine salts. Non-stimulant medications include atomoxetine. For more information about the medications used to treat ADHD, please see the Parent Med Guide.
Before medication treatment begins, your child's doctor should discuss the benefits and the possible side effects of these medications. Your child's doctor should continue to monitor your child for improvement and side effects. A majority of children who benefit from medication for ADHD will continue to benefit from it as teenagers. In fact, many adults with ADHD also find that medication can be helpful. Therapy and Other Support: A psychiatrist or other qualified mental health professional can help a child with ADHD. The psychotherapy should focus on helping parents provide structure and positive reinforcement for good behavior. In addition, individual therapy can help children gain a better self-image. The therapist can help the child identify his or her strengths and build on them. Therapy can also help a child with ADHD cope with daily problems, pay better attention, and learn to control aggression. A therapist may use one or more of the following approaches: behavior therapy, talk therapy, social skills training, or family support groups. Sometimes children and parents wonder when children can stop taking ADHD medication. If you have questions about stopping ADHD medication, consult your doctor. Many children diagnosed with ADHD will continue to have problems with one or more symptoms of this condition later in life. In these cases, ADHD medication can be taken into adulthood to help control their symptoms. For others, the symptoms of ADHD lessen over time as they begin to "outgrow" ADHD or learn to compensate for their behavioral symptoms. The symptom most apt to lessen over time is hyperactivity. Some signs that your child may be ready to reduce or stop ADHD medication are:
- your child has been symptom-free for more than a year while on medication
- your child is doing better and better, but the dosage has stayed the same
- your child's behavior is appropriate despite missing a dose or two
- your child has developed a newfound ability to concentrate
The choice to stop taking ADHD medication should be discussed with the prescribing doctor, teachers, family members, and your child. You may find that your child needs extra support from teachers and family members to reinforce good behavior once the medication is stopped. Without treatment, a child with ADHD may fall behind in school and have trouble with friendships. Family life may also suffer. Untreated ADHD can increase strain between parents and children. Parents often blame themselves when they can't communicate with their child. The sense of losing control can be very frustrating. Teenagers with ADHD are at increased risk for driving accidents. Adults with untreated ADHD have higher rates of divorce and job loss, compared with the general population. Luckily, safe and effective treatments are available that can help children and adults control the symptoms of ADHD and prevent its unwanted consequences.
<urn:uuid:40aae48c-b422-4ff3-a8dc-88f9431d1a4e>
CC-MAIN-2013-20
http://www.aacap.org/cs/ADHD.ResourceCenter/adhd_faqs
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959207
1,307
3.71875
4
Weights linked to lower diabetes risk

Weight training, and not just cardio workouts, is linked to a lower risk of developing type 2 diabetes, according to a US study. "We all know that aerobic exercise is beneficial for diabetes - many studies have looked at that - but no studies have looked at weight training," says study leader Frank Hu, at the Harvard School of Public Health. "This study suggests that weight training is important for diabetes, and probably as important as aerobic training." Hu and his colleagues, whose report was published in the Archives of Internal Medicine, used data on more than 32,000 male health professionals, who answered questionnaires every two years from 1990 to 2008. On average, four out of 1000 men developed type 2 diabetes every year, the researchers found. The risk of getting the blood sugar disorder was only half as high for men who did cardio, or aerobic, workouts - say brisk walking, jogging or playing tennis - at least 150 minutes a week, as for those who didn't do any cardio exercise. Men who did weight training for 150 minutes or more had a risk reduction of a third compared to those who never lifted weights, independently of whether or not they did aerobic exercise.

Exercise is beneficial

Whereas weight training increases muscle mass and can reduce abdominal obesity, it tends not to cut overall body mass, says Hu. The results don't prove that working out staves off diabetes, because many men who stay fit may also be healthier in other ways, but the researchers did their best to account for such potential differences, including age, smoking and diet. "I think the benefits of weight training are real," says Hu. "Any type of exercise is beneficial for diabetes prevention, but weight training can be incorporated with aerobic exercise to get the best results." Along with an appropriate diet, exercise is also important for people who already have type 2 diabetes and can help control high blood sugar, he adds.
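The quoted relative risks can be turned into absolute annual rates. A minimal sketch, assuming the four-cases-per-1,000 baseline applies to men who did neither form of exercise:

```python
# Convert the study's quoted relative risks into absolute annual risks.
baseline = 4.0            # new type 2 diabetes cases per 1,000 men per year
cardio_rr = 0.5           # risk "only half as high" with 150+ min/week of cardio
weights_rr = 2.0 / 3.0    # "a risk reduction of a third" with 150+ min/week of weights

print(f"no exercise:  {baseline:.2f} per 1,000 per year")
print(f"cardio only:  {baseline * cardio_rr:.2f} per 1,000 per year")   # 2.00
print(f"weights only: {baseline * weights_rr:.2f} per 1,000 per year")  # 2.67
```

In absolute terms, then, both forms of exercise move the annual rate by only one or two cases per 1,000 men, which is why such studies need tens of thousands of participants followed for many years.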
<urn:uuid:fa576bb7-fea9-461f-8ee3-962a8a33a7f9>
CC-MAIN-2013-20
http://www.abc.net.au/science/articles/2012/08/07/3562561.htm?topic=health
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976155
400
2.734375
3
Arctic meltdown not caused by nature Rapid loss of Arctic sea ice - 80 per cent has disappeared since 1980 - is not caused by natural cycles such as changes in the Earth's orbit around the Sun, says Dr Karl. The situation is getting rather messy with regard to the ice melting in the Arctic. Now the volume of the ice varies throughout the year, rising to its peak after midwinter, and falling to its minimum after midsummer, usually in the month of September. Over most of the last 1,400 years, the volume of ice remaining each September has stayed pretty constant. But since 1980, we have lost 80 per cent of that ice. Now one thing to appreciate is that over the last 4.7 billion years, there have been many natural cycles in the climate — both heating and cooling. What's happening today in the Arctic is not a cycle caused by nature, but something that we humans did by burning fossil fuels and dumping slightly over one trillion tonnes of carbon into the atmosphere over the last century. So what are these natural cycles? There are many many of them, but let's just look at the Milankovitch cycles. These cycles relate to the Earth and its orbit around the Sun. There are three main Milankovitch cycles. They each affect how much solar radiation lands on the Earth, and whether it lands on ice, land or water, and when it lands. The first Milankovitch cycle is that the orbit of the Earth changes from mostly circular to slightly elliptical. It does this on a predominantly 100,000-year cycle. When the Earth is close to the Sun it receives more heat energy, and when it is further away it gets less. At the moment the orbit of the Earth is about halfway between "nearly circular" and "slightly elliptical". So the change in the distance to the Sun in each calendar year is currently about 5.1 million kilometres, which translates to about 6.8 per cent difference in incoming solar radiation. 
But when the orbit of the Earth is at its most elliptical, there will be a 23 per cent difference in how much solar radiation lands on the Earth. The second Milankovitch cycle affecting the solar radiation landing on our planet is the tilt of the north-south spin axis compared to the plane of the orbit of the Earth around the Sun. This tilt rocks gently between 22.1 degrees and 24.5 degrees from the vertical. This cycle has a period of about 41,000 years. At the moment we are roughly halfway in the middle — we're about 23.44 degrees from the vertical and heading down to 22.1 degrees. As we head to the minimum around the year 11,800, the trend is that the summers in each hemisphere will get less solar radiation, while the winters will get more, and there will be a slight overall cooling. The third Milankovitch cycle that affects how much solar radiation lands on our planet is a little more tricky to understand. It's called 'precession'. As our Earth orbits the Sun, the north-south spin axis does more than just rock gently between 22.1 degrees and 24.5 degrees. It also — very slowly, just like a giant spinning top — sweeps out a complete 360 degrees circle, and it takes about 26,000 years to do this. So on January 4, when the Earth is at its closest to the Sun, it's the South Pole (yep, the Antarctic) that points towards the Sun. So at the moment, everything else being equal, it's the southern hemisphere that has a warmer summer because it's getting more solar radiation, but six months later it will have a colder winter. And correspondingly, the northern hemisphere will have a warmer winter and a cooler summer. But of course, "everything else" is not equal. There's more land in the northern hemisphere but more ocean in a southern hemisphere. The Arctic is ice that is floating on water and surrounded by land. The Antarctic is the opposite — ice that is sitting on land and surrounded by water. You begin to see how complicated it all is. 
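The roughly 6.8 per cent figure for the first cycle can be checked with the inverse-square law. A sketch, using standard present-day perihelion and aphelion distances (textbook approximations, not values from the article):

```python
# Sunlight received scales as 1/r^2, so the perihelion/aphelion flux
# difference follows from the two distances alone.
perihelion = 147.1e6  # km, closest approach (early January)
aphelion = 152.1e6    # km, farthest point (early July)

flux_ratio = (aphelion / perihelion) ** 2
pct = (flux_ratio - 1.0) * 100.0
print(f"{pct:.1f}% more solar radiation at perihelion than at aphelion")
```

This gives about 6.9 per cent, in line with the figure quoted for today's mildly elliptical orbit; running the same calculation with the distances at maximum eccentricity yields the much larger difference mentioned above.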
We have had, in this current cycle, repeated ice ages on Earth over the last three-million years. During an ice age, the ice can be three kilometres thick and cover practically all of Canada. It can spread through most of Siberia and Europe and reach almost to where London is today. Of course, the water to make this ice comes out of the ocean, and so in the past, the ocean level has dropped by some 125 metres. From three million years ago to one million years ago, the ice advanced and retreated on a 41,000-year cycle. But from one million years ago until the present, the ice has advanced and retreated on a 100,000-year cycle. What we are seeing in the Arctic today — the 80 per cent loss in the volume of the ice since 1980 — is an amazingly huge change in an amazingly short period of time. But it seems as though the rate of climate change is accelerating, and I'll talk more about that, next time … Published 27 November 2012 © 2013 Karl S. Kruszelnicki Pty Ltd
<urn:uuid:3a4ac59c-d59d-470b-adad-88e5e1c8a45a>
CC-MAIN-2013-20
http://www.abc.net.au/science/articles/2012/11/27/3640992.htm?topic=latest
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955824
1,065
3.5625
4
Black holes growing faster than expected

Existing theories on the relationship between the size of a galaxy and its central black hole are wrong, according to a new Australian study. The discovery by Dr Nicholas Scott and Professor Alister Graham, from Melbourne's Swinburne University of Technology, found smaller galaxies have far smaller black holes than previously estimated. Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution. However, astronomers are still trying to understand this relationship. Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope, to develop a database listing the masses of 77 galaxies and their central supermassive black holes. The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it. Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole. "This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott. In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass. "That was a surprising result which we hadn't been anticipating," says Scott. The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies. According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts. Black holes grow by merging with other black holes when their galaxies collide. "When large galaxies merge they double in size and so do their central black holes," says Scott.
"But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on."

Somewhere in between

The findings also solve the long-standing problem of missing intermediate mass black holes. For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies. "If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham. "Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates." "These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham.
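The reported scaling (a ten-fold drop in galaxy mass paired with a hundred-fold drop in black-hole mass) is a power law with a log-log slope of about two. A minimal sketch, where the slope comes from the article and everything else is illustration:

```python
def bh_mass_change(galaxy_mass_factor, slope=2.0):
    """Factor by which the central black-hole mass changes when the galaxy
    mass changes by galaxy_mass_factor, for M_bh proportional to M_gal**slope."""
    return galaxy_mass_factor ** slope

# Ten-fold smaller galaxy -> hundred-fold smaller black hole:
print(f"{bh_mass_change(0.1):g}")  # 0.01
# A merger that doubles the galaxy quadruples its black hole:
print(f"{bh_mass_change(2.0):g}")  # 4
```

The same exponent reproduces the merger behaviour quoted above: doubling the galaxy gives 2**2 = 4, a quadrupled black hole, while for the large galaxies described earlier the effective slope is closer to one, so their black holes merely double.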
<urn:uuid:e617c5fd-d556-4d43-be1f-042e7e7f2c60>
CC-MAIN-2013-20
http://www.abc.net.au/science/articles/2013/01/17/3671551.htm?topic=enviro
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948663
552
4.25
4
Hoodoos may be seismic gurus

Towering chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity. The research by scientists including Dr Rasool Anooshehpoor, from the United States Nuclear Regulatory Commission, may give scientists a new tool to test the accuracy of current hazard models. Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa. They are caused by the uneven weathering of different layers of sedimentary rocks, which leaves boulders or thin caps of hard rock perched on softer rock. By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture. The United States Geological Survey (USGS) uses seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long-term data. "Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against." The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, an active strike-slip fault zone in California's Red Rock Canyon. Their findings are reported in the Bulletin of the Seismological Society of America. "Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor. The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake.
They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data. USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing. This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor. "If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says. "Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen." Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development. "In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso. "You need lots of instruments, so it's great if you can rely on nature and natural objects to help you." He says while the work is still very new and needs to be proven, the physics seems sound.
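The kind of bound an unbroken spire provides can be illustrated with a toy model. The sketch below treats a hoodoo as a uniform cylindrical column loaded quasi-statically by horizontal ground acceleration; the model and every number in it are assumptions for illustration, not values from the paper:

```python
def critical_acceleration(tensile_strength, density, height, radius):
    """Horizontal acceleration (m/s^2) at which bending stress at the base
    of a uniform cylindrical column reaches the rock's tensile strength."""
    # Bending moment at the base under horizontal acceleration a:
    #   M = (rho * pi * r**2 * h) * a * (h / 2)
    # Section modulus of a circular cross-section:
    #   S = pi * r**3 / 4
    # Setting M / S equal to the tensile strength and solving for a:
    return tensile_strength * radius / (2.0 * density * height ** 2)

a = critical_acceleration(tensile_strength=1e6,  # 1 MPa, a weak sandstone
                          density=2000.0,        # kg/m^3
                          height=5.0,            # m
                          radius=0.5)            # m
print(f"{a:.1f} m/s^2 (~{a / 9.81:.2f} g)")  # roughly half a g for these made-up numbers
```

A standing, unfractured spire then implies past shaking stayed below its critical acceleration, which is exactly the style of physical upper bound the researchers compare against the hazard models.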
<urn:uuid:85a979cb-9571-4e06-b38a-2f79912abb44>
CC-MAIN-2013-20
http://www.abc.net.au/science/articles/2013/02/05/3682324.htm?site=science&topic=enviro
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955619
644
4.3125
4
Now, it is common knowledge these days that Hitler's final great offensive in the last years of WWII was the Ardennes Offensive of 1944/45, also known as the Battle of the Bulge. What was not appreciated at the time by the Allied high command was just how desperately short of vital supplies the Third Reich armies actually were. The Ardennes Offensive was Hitler's bold attempt to capture and hold the Allied army's massive supply of Brussels sprouts, vital - of course - for the full functioning of any army. German intelligence were aware that the American army was - in particular - massing huge quantities of the vital Brussels sprouts just behind their frontlines in preparedness for their own massive push - and - of course - in time for Christmas. The Germans' audacious plan would have succeeded if the Allies had not quickly worked out that it was their stockpiles of Brussels sprouts that were under immediate threat. The bold plan put forward by the Allied Generals was a heavy gamble, but it paid off. They ordered their front-line chefs to begin boiling their entire stocks of Brussels sprouts, and - most importantly - to keep them boiling well past a state of full preparedness. So, when the weather altered and the wind direction changed, it blew the smell of over-cooked Brussels sprouts straight into the faces of the advancing Germans. Then the Reich troops knew that they would not be able to replenish their stocks of Brussels sprouts and any sprouts that they did capture from the Allied frontline kitchens would be overcooked to the point of inedibility. Later in this series, we will discuss the major strategic role that Brussels sprouts have played in world history, such as Hadrian building a wall to protect the Roman Empire's most northern supplies of Brussels sprouts from the northern barbarians, thus thwarting the barbarians' fiendish plan to deep-fry the Romans' entire stockpiles of sprouts.
Then there was, also, Napoleon's retreat from Moscow when his over-long supply line of Brussels sprouts direct from France broke down. Even when his troops could get sprouts, they were of poor quality - dry, wizened and frozen solid. Of course, this led to a massive collapse of morale. Eventually, the lack of good quality sprouts forced a massive retreat where thousands of French troops died from a pitiful lack of sprouts. And, of course, not forgetting - of course - how the Spanish conquest of the Americas was a result of the Spaniards' overwhelming sprout superiority.
<urn:uuid:1e42f564-b487-459a-9400-a0404ff31bff>
CC-MAIN-2013-20
http://www.abctales.com/story/hadley/brussels-sprouts-and-their-role-history
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.974233
518
2.859375
3
Books Yellow, Red, and Green and Blue,
All true, or just as good as true,
And here's the Blue Book just for YOU!

Hard is the path from A to Z,
And puzzling to a curly head,
Yet leads to Books Green, Yellow and Red.

For every child should understand
That letters from the first were planned
To guide us into Fairy Land

So labour at your Alphabet,
For by that learning shall you get
To lands where Fairies may be met.

And going where this pathway goes,
You too, at last, may find, who knows?
The Garden of the Singing Rose.

As to whether there are really any fairies or not, that is a difficult question. The Editor never saw any himself, but he knew several people who have seen them - in the Highlands - and heard their music. If ever you are in Nether Lochaber, go to the Fairy Hill, and you may hear the music yourself, as grown-up people have done, but you must go on a fine day.

This book has been especially re-published to raise funds for: The Great Ormond Street Hospital Children's Charity. By buying this book you will be donating to this great charity that does so much good for ill children and which also enables families to stay together in times of crisis. And what better way to help children than to buy a book of fairy tales. Some have not been seen in print or heard for over a century. 33% of the Publisher's profit from the sale of this book will be donated to the GOSH Children's Charity.

YESTERDAYS BOOKS for TODAYS CHARITIES

LITTLE RED RIDING HOOD

Once upon a time there lived in a certain village a little country girl, the prettiest creature was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had made for her a little red riding-hood, which became the girl so extremely well that everybody called her Little Red Riding-Hood.
One day her mother, having made some custards, said to her: "Go, my dear, and see how thy grandmamma does, for I hear she has been very ill; carry her a custard, and this little pot of butter." Little Red Riding-Hood set out immediately to go to her grandmother, who lived in another village. As she was going through the wood, she met with Gaffer Wolf, who had a very great mind to eat her up, but he dared not, because of some faggot-makers hard by in the forest. He asked her whither she was going. The poor child, who did not know that it was dangerous to stay and hear a wolf talk, said to him: "I am going to see my grandmamma and carry her a custard and a little pot of butter from my mamma." "Does she live far off?" said the Wolf. "Oh! aye," answered Little Red Riding-Hood; "it is beyond that mill you see there, at the first house in the village." "Well," said the Wolf, "and I'll go and see her too. I'll go this way and you go that, and we shall see who will be there soonest." The Wolf began to run as fast as he could, taking the nearest way, and the little girl went by that farthest about, diverting herself in gathering nuts, running after butterflies, and making nosegays of such little flowers as she met with. The Wolf was not long before he got to the old woman's house. He knocked at the door—tap, tap. "Who's there?" "Your grandchild, Little Red Riding-Hood," replied the Wolf, counterfeiting her voice; "who has brought you a custard and a little pot of butter sent you by mamma." The good grandmother, who was in bed, because she was somewhat ill, cried out: "Pull the bobbin, and the latch will go up." The Wolf pulled the bobbin, and the door opened, and then presently he fell upon the good woman and ate her up in a moment, for it was above three days that he had not touched a bit. He then shut the door and went into the grandmother's bed, expecting Little Red Riding-Hood, who came some time afterward and knocked at the door—tap, tap.
"Who's there?" Little Red Riding-Hood, hearing the big voice of the Wolf, was at first afraid; but believing her grandmother had got a cold and was hoarse, answered: "'Tis your grandchild, Little Red Riding-Hood, who has brought you a custard and a little pot of butter mamma sends you." The Wolf cried out to her, softening his voice as much as he could: "Pull the bobbin, and the latch will go up." Little Red Riding-Hood pulled the bobbin, and the door opened. The Wolf, seeing her come in, said to her, hiding himself under the bed-clothes: "Put the custard and the little pot of butter upon the stool, and come and lie down with me." Little Red Riding-Hood undressed herself and went into bed, where, being greatly amazed to see how her grandmother looked in her night-clothes, she said to her: "Grandmamma, what great arms you have got!" "That is the better to hug thee, my dear." "Grandmamma, what great legs you have got!" "That is to run the better, my child." "Grandmamma, what great ears you have got!" "That is to hear the better, my child." "Grandmamma, what great eyes you have got!" "It is to see the better, my child." "Grandmamma, what great teeth you have got!" "That is to eat thee up." And, saying these words, this wicked wolf fell upon Little Red Riding-Hood, and tried to start eating her. Red Riding Hood screamed "Someone help me!" over and over again. The woodcutter, who was felling trees nearby, heard Red Riding Hood's screams for help and ran to the cottage. He burst in to find the wolf trying to eat Red Riding Hood. He swung his axe, and with one blow killed the bad wolf, for which Red Riding Hood was ever so grateful.
We deliver to destinations all over the world, and here at Abela, we have some of the best rates in the book industry. We charge shipping dependent on the book you have ordered and where in the world you are ordering from. This will be shown below the price of the book. The delivery time is typically dependent on where in the world you are ordering from. Should you need an estimated delivery time, please do not hesitate to contact us. We pride ourselves on the quality of our packaging and damage rates are very low. In the unlikely event there is damage, please contact us before returning your item, as you may have to pay for return shipping if you have not let us know. Due to the nature of books being read and then returned for a refund, unfortunately we do not accept returns unless the item is damaged and we are notified ON THE DAY OF DELIVERY.
<urn:uuid:417be69e-3827-4c17-971c-f3410cf2c856>
CC-MAIN-2013-20
http://www.abelapublishing.com/the-blue-fairy-book_p23349351.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976676
1,657
2.5625
3
What exactly does "desecration" mean? Is it just flag burning — or does it also include smearing the flag with dirt? How about dropping it on the ground? And why should law enforcement get to decide who to arrest for such desecration? Free expression and the right to dissent are among the core principles which the American flag represents. The First Amendment must be protected most when it comes to unpopular speech. Failure to do so fails the very notion of freedom of expression. Our democracy is strong because we tolerate all peaceful forms of expression, no matter how uncomfortable they make us feel, or how much we disagree. If we take away the right to dissent - no matter how unpopular - what freedom will be sacrificed next?

Make a Difference

Your support helps the ACLU defend free speech and a broad range of civil liberties.

Burn the Flag or Burn the Constitution? (2011 blog): Sadly, Congress is once again considering an amendment to the U.S. Constitution banning desecration of the American flag and, in doing so, testing our political leaders' willingness to defend what is arguably one of America's most sacred principles — protecting political speech.

Flag Amendment Defeated, First Amendment Stands Unscathed (2006): On June 27, 2006, the Senate voted down the proposed Flag Desecration Amendment by the slimmest margin ever. The vote was 66-34, just one vote short of the two-thirds needed to approve a constitutional amendment.

Reasons to Oppose the Flag Desecration Amendment (2004 resource): Talking Points on Opposing the Flag Desecration Amendment

Background on the Flag Desecration Amendment (2004 resource)

Fight for the Flag - Resources (2006 resource)
<urn:uuid:cbe1a6ba-f0ca-4f88-86c8-7605d31dcf07>
CC-MAIN-2013-20
http://www.aclu.org/free-speech/flag-desecration
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.90417
351
3.09375
3
First, an object is placed on the platform of the printer – a Petri dish, for example. Then the printer must check the height of the object to make sure everything is calibrated correctly. Mr. Carvalho placed a paper card on the platform of the 3D-Bioplotter to demonstrate how the machine works. Mr. Carvalho then talked us through the printing process. To begin, a liquefied material – in this case a silicone paste – is pressed through a needle-like tip by applying air pressure. The needle moves in all three dimensions, which means it is able to create a three-dimensional object. The printer is called 'Bioplotter' because the unique aspect of this machine is its use of biomaterials to make implants or other objects for biomedical application. Some of the implants which are made using the 3D-Bioplotter are intended to dissolve in the body. The materials used in this application include PLLA, PLGA, and silicone. Implants made with thermoplastics – as they break down into mostly water and CO2 – are removed by the body naturally in around a week or two. Other materials, such as ceramic paste, may also be used to print implants. The implants printed using ceramic paste do not dissolve. Instead, the body uses this material to create new bone, which actually speeds up the process of the body's regeneration. The 3D-Bioplotter also prints hydrogels – such as collagen or alginate. Human cells can actually be added to these materials, so human cells may be printed directly with this machine. Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has thrilled us at Adafruit with its passion and dedication to making solid objects from digital models. Recently, we have noticed our community integrating electronics projects into 3D printed enclosures, brackets, and sculptures, so each Thursday we celebrate and highlight these bold pioneers! Have you considered building a 3D project around an Arduino or other microcontroller? 
How about printing a bracket to mount your Raspberry Pi to the back of your HD monitor? And don’t forget the countless EL Wire and LED projects that are possible when you are modeling your projects! The Adafruit Learning System has dozens of great tools to get you well on your way to creating incredible works of engineering, interactive art, and design with your 3D printer! If you have a cool project you’ve made that joins the traditions of 3D printing and electronics, be sure to send it in to be featured here!
<urn:uuid:73055da5-4336-4490-8edc-b8121ae20961>
CC-MAIN-2013-20
http://www.adafruit.com/blog/2012/11/29/the-3d-bioplotter-from-envisiontec-3dthursday/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93519
539
3.421875
3
Re-inventing the Planned City Monday, March 12, 2012 TAU and MIT launch pilot project to re-think 50's era "New Towns" A bird's-eye view of Kiryat Gat In response to population growth, many "new towns" or planned cities were built around the world in the 1950s. But according to Dr. Tali Hatuka, head of Tel Aviv University's Laboratory for Contemporary Urban Design (LCUD) at the Department of Geography and the Human Environment, these cities are a poor fit for modern lifestyles — and it's time to innovate. TAU has launched a pilot project, in collaboration with a team from the Massachusetts Institute of Technology led by Prof. Eran Ben-Joseph, to revitalize this aging model. Last month, a team of five TAU and 11 MIT graduate students visited Kiryat Gat, a mid-sized town in the south of Israel. Home to branches of industrial giants Hewlett-Packard Company and Intel, Kiryat Gat was chosen as a "laboratory" for re-designing outmoded planned civic spaces. Based on smart technologies, improved transportation, use of the city's natural surroundings, and a reconsideration of the current use of city space, the team's action plan is designed to help Kiryat Gat emerge as a new, technologically-advanced planned city — a prototype that could be applied to similar urban communities. Planning a future for the mid-sized city The project, jointly funded by TAU's Vice President for Research and MIT's MISTI Global Seed Funds, will create a new planning model that could reshape the future of Kiryat Gat and similar cities across the world which are often overlooked in academia and practical planning. "Our goal is to put a spotlight on these kinds of towns and suggest innovative ways of dealing with their problems," says TAU student Roni Bar. MIT's Alice Shay, who visited Israel for the first time for the project, believes that Kiryat Gat, a city that massive urbanization has left behind, is an ideal place for the team to make a change. 
"The city is at a catalyst point — an exciting moment where good governance and energy will give it the capacity to implement some of these new projects." To tackle the design and planning challenges of the city, the team of students focused on four themes: the "mobile city," which looked at transport and accessibility; the "mediated city," dealing with technological infrastructure; the "compact city," which reconsidered the use of urban space and population growth; and the "natural city," which integrated environmental features into the urban landscape. Finding common ground Ultimately, the team’s goal is to create a more flexible city model that encourages residents and workers to be a more active part of the urban fabric of the city, said Dr. Hatuka. The current arrangement of dedicated industrial, residential, and core zones is out of step with a 21st century lifestyle, in which people work, live, and spend their leisure time in the same environment. "Much of the past discourse about the design of sustainable communities and 'eco-cities' has been premised on using previously undeveloped land," says Prof. Ben-Joseph. "In contrast, this project focuses on the 'retrofitting' of an existing environment — a more likely approach, given the extent of the world's already-built infrastructure." The students from TAU and MIT have become a truly cohesive team, and their diversity of background helps challenge cultural preconceptions, Bar says. "They ask many questions that help us to rethink things we took for granted." Shay agrees. "Tali and Eran have created an incredible collaboration, encouraging us all to exchange ideas. Our contexts are different but there is a common urban design language." The team estimates that they will be able to present the updated model of the city early next year. The next step is further exploring the project's key themes at a March meeting at MIT. 
And while the project has provided an exceptional educational experience for all involved, ideas are already leaping off the page and into the city's urban fabric. "In the next two months, the Mayor of Kiryat Gat would like to push this model forward and implement the initial steps that we have offered," says an enthusiastic Dr. Hatuka.
<urn:uuid:08c12af3-3a0e-45eb-852d-5a9c514658cb>
CC-MAIN-2013-20
http://www.aftau.org/site/News2/596546752?page=NewsArticle&id=16181&news_iv_ctrl=-1
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95867
895
2.609375
3
Elderly people are at increased risk of food-borne illness because, as they age, their immune systems become weaker. In fact, the website for the Centers for Disease Control estimates that each year about 48 million people get sick, 128,000 are hospitalized and 3,000 die from food-borne diseases. The most severe cases tend to occur in the very old. The good news is that food poisoning can be prevented if you follow proper home food safety practices. Ruth Frechman, a registered dietitian and spokesperson for the American Dietetic Association, spoke with AgingCare.com about home food safety for elderly people. "Since older adults are at particular risk for food-borne illness, good food safety habits are extremely crucial." Ms. Frechman says three common cooking and food preparation mistakes can result in unsafe food and potential food poisoning. Bacteria in raw meat and poultry juices can be spread to other foods, utensils and surfaces. "To prevent cross-contamination, keep raw foods separate from ready-to-eat foods and fresh vegetables," she says. "For example, use two cutting boards: one strictly for raw meat, poultry and seafood; the other for ready-to-eat foods like breads and vegetables." She recommends washing cutting boards thoroughly in hot soapy water after each use or placing them in the dishwasher. Use a bleach solution or other sanitizing solution and rinse with clean water. Always wash your hands after handling raw meat. Leaving food out too long Leaving food out too long at room temperature can cause bacteria to grow to dangerous levels that can cause illness. "Many people think it's okay to leave food sitting out for a few hours," Ms. Frechman says. "But that's a dangerous habit. Food should not be left out for more than two hours. And if it's over 90 degrees, like at an outdoor summer barbecue, food should not be out for more than one hour." It's common knowledge that meat should be cooked to proper temperatures. 
However, most people don't know that even leftovers that were previously cooked should be re-heated to a certain temperature. Ms. Frechman says re-heating foods to the proper temperature can kill many harmful bacteria. Leftovers should be re-heated to at least 165 degrees Fahrenheit. "Harmful bacteria are destroyed when food is cooked to proper temperatures," she says. "That's why a food thermometer comes in handy not only for preparing food, but also for re-heating." How long is it safe to eat leftovers? Not as long as you would think, Ms. Frechman says. Chicken, fish and beef expire after three to four days in the refrigerator. To help seniors track whether leftovers are still good, she recommends writing the date on the package of leftovers. Seniors and their caregivers should take these preventive measures to avoid germs in food and contracting food poisoning. Pay attention to the foods that are eaten and how food is prepared, and properly maintain the food in the refrigerator, and you may avoid an illness that could cause great discomfort, weakening of the body or even death.
<urn:uuid:fa1546c3-bbdd-441b-a8ad-eb263ea80d04>
CC-MAIN-2013-20
http://www.agingcare.com/Articles/Top-3-Food-Preparation-Mistakes-That-Cause-Food-borne-Illness-147181.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959251
656
3.25
3
What is HIV? And what is AIDS? Find answers to some common questions in this section. How is HIV transmitted - and how is it not transmitted? Find out the answers in this section. Worried you might have HIV? Have an HIV test - it's the only way to know for sure. HIV treatment is not a cure, but it is keeping millions of people well. Start learning about it in this section. In this section we have answered some of the questions you might have if you have just found out you have HIV. Find healthcare services and support. A series of illustrated leaflets designed to support conversations between professionals and people with HIV. Our award-winning series of patient information booklets. Each title provides a comprehensive overview of one aspect of living with HIV. Twice-monthly email newsletter on the practical aspects of delivering HIV treatment in resource-limited settings. Our regular newsletter, providing in-depth discussion of the latest research across the HIV sector. Free to people personally affected by HIV. Find contact details for over 3000 key organisations in more than 190 countries An instant guide to HIV & AIDS in countries and regions around the world The most comprehensive listing of HIV-related services in the UK Pre-exposure prophylaxis (PrEP) – free webinar 18 April 2013 As part of its European HIV prevention work, NAM is collaborating... Learning the basics about hepatitis C 05 April 2013 If you are familiar with NAM’s patient information materials, hopefully you... Treatment as prevention – free webinar 20 March 2013 As part of its European HIV prevention work, NAM is collaborating...
<urn:uuid:77e4b735-9163-48f4-84c1-c755c8c2ff0e>
CC-MAIN-2013-20
http://www.aidsmap.com/resources/treatmentsdirectory/drugs/iCombiviri-AZT3TC/page/1730921/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935643
342
2.875
3
Select Preservation Resources - Shocking Statistics: Reasons for Preservation Week: Facts that illustrate the need for national preservation awareness. - PW Fact Sheet: More facts that discuss how items become damaged and simple steps to keep them safe. - Preserving Your Memories: Organized by material type, these web sites, books, and other sources give useful information on caring for any kind of collection. - Disaster Recovery: Information for before and after a disaster has damaged precious collections. - Bibliographies & Indexes: A list of links to resources collected by professional preservation organizations - Videos: Video resources depict ways and reasons to preserve collections - Preservation for Children: Tools to help children understand the importance of preservation. - Comprehensive Resources - Resources in Other Languages: Spanish, French, Chinese, Italian, and Arabic resources for spreading the preservation message. - Books of Fiction about Conservation - Books of Fiction about Books for Book Groups
<urn:uuid:2ca2992e-4394-4461-91b7-99387774cf64>
CC-MAIN-2013-20
http://www.ala.org/alcts/confevents/preswk/tools/select
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.865597
190
2.765625
3
In preparation for Christmas, I read Stephen Nissenbaum's 1998 "The Battle for Christmas," a thorough exploration of this season. The book's title will be deceiving, because it has nothing to do with the recent sacred-vs.-secular Christmas quarrels. Nissenbaum explores the myriad ways that Christmas has evolved in our nation. It turns out we've been jockeying for more than 300 years over what this holiday means. In Colonial America our faith-filled ancestors banned Christmas altogether, outlawing it in some colonies. Until the 1760s, one could not even find an almanac that would print the word "Christmas" on the date Dec. 25. This opposition was because Christmas had become a drunken spectacle where gangs of poor young men roamed the streets, making merry and engaging in acts of petty rowdyism, vaguely like today's New Year's Eve. It was customary and permissible for these gangs to knock on doors of strangers to demand gifts. ("So give us some figgy pudding....") Our nation's first "battle" for Christmas was the movement to domesticate the holiday, a battle that Nissenbaum suggests involved merchants, the middle and upper classes and the church. Merchants began linking Christmas and the purchase of manufactured gifts as early as the 1830s as society began to stress family celebrations in front of a tree and with Santa visiting every home. In case you think that your complaining will reverse the commercialism of this holiday, according to Nissenbaum that complaint first emerged in the 1830s. Complain if you must, but don't expect results. Nissenbaum so thoroughly explores Clement Moore's "'Twas the Night before Christmas" that one learns why Saint Nick touches the side of his nose and why his pipe is a short one. Nissenbaum contends that the ascendance of Santa Claus, the emergence of the Christmas tree and even the giving of gifts contribute to this gradual process of making Christmas a less revolutionary, more predictable holiday. 
He explores Dickens and Scrooge, Christmas parties for poor children and even the complicated master-slave relationship at Christmas leading up to and immediately following the Civil War. If you prefer to maintain that Christmas was a pure season of private devotion and public worship until Sears, Roebuck, Wal-Mart and the Supreme Court got involved, don't read this book. Ditto if you enjoy lamenting that "they've taken Christmas away from us"; Nissenbaum might say that a pure, simple Christmas never existed. Rather, it has evolved since the first day the Colonists set foot on our shore, an evolution showing no sign of abating. Nissenbaum's scholarly, heavily footnoted book is enlightening and readable. But his analysis of Christmas reminds me of a scientist who thoroughly explains the rainbow but never grasps its beauty. And so as this season continues to evolve, I'll enjoy my Christmas tree, sing both "White Christmas" and "Joy to the World," and be grateful again for the mystery of Bethlehem, which, properly understood, is the most revolutionary act of history. Contact columnist minister Creede Hinshaw at Wesley Monumental United Methodist Church in Savannah at email@example.com.
<urn:uuid:fc291606-3625-4033-97fb-f0d3e531c2bc>
CC-MAIN-2013-20
http://www.albanyherald.com/news/2009/dec/11/christmas-has-been-evolving-for-centuries/?features
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95191
669
2.640625
3
Heal Our Planet Earth Secondary and Universities Educational Outreach: Secondary (High) Schools and Universities Anthony Marr of the HOPE Foundation's main point was the plight of the wild tigers. He believes that something must be done to keep these animals alive. "If we let this go, life will be less beautiful and less worth living," Anthony Marr said. The money that he receives as a conservationist is donated to help the endangered species. He makes many trips to India to help people there find other solutions to their problems. If the people living in India keep living the way they have, India will soon become a desert. Changes need to be made and people need to adapt to these changes. I agree with Anthony's beliefs. Even if tigers are bred, it does not make a difference, because they cannot survive on their own. No matter what humans do, it still will not change the fact that one of God's creations is being destroyed. No animals should be killed for the purpose of human needs. It is not necessary to kill tigers to sell products and make money because of silly beliefs that if they eat this, then something will happen. There are so many alternatives. Humans need food, but they do not have to consume so much meat. Every time they eat meat, a precious animal is killed. Animals do not kill and eat us, so why should we do the same? More solutions need to be found and more people need to become involved in saving the beauty of the world. Go on to Student - 10
<urn:uuid:285c3b9d-ca3c-48fd-8ece-77b958f31b4b>
CC-MAIN-2013-20
http://www.all-creatures.org/hope/edout-hs-20060928-09.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960507
335
2.6875
3
Science Fair Project Encyclopedia The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid (HCl) contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions. The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride. Other examples of inorganic covalently bonded chlorides which are used as reactants are: - phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in the laboratory. - Disulfur dichloride (S2Cl2) - used for vulcanization of rubber. Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb>
CC-MAIN-2013-20
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Chloride
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.896893
320
4.59375
5
Science Fair Project Encyclopedia Industrial Design is an applied art whereby the aesthetics and usability of products may be improved. Design aspects specified by the industrial designer may include the overall shape of the object, the location of details with respect to one another, colors, texture, sounds, and aspects concerning the ergonomics of the product. Additionally the industrial designer may specify aspects concerning the production process, choice of materials and the way the product is presented to the consumer at the point of sale. The use of industrial designers in a product development process may lead to added value through improved usability, lowered production costs and more appealing products. Product Design is focused on products only, while Industrial Design has a broader focus on concepts, products and processes. In addition to considering aesthetics, usability, and ergonomics, it can also encompass the engineering of objects, usefulness as well as usability, market placement, and other concerns. Product Design and Industrial Design can overlap into the fields of user interface design, information design and interaction design. Various schools of Industrial Design and/or Product Design may specialize in one of these aspects, ranging from pure art colleges (product styling) to mixed programs of engineering and design, to related disciplines like exhibit design and interior design. In the US, the field of industrial design hit a high-water mark of popularity in the late 1930s and early 1940s, with several industrial designers becoming minor celebrities. Raymond Loewy, Norman Bel Geddes, and Henry Dreyfuss remain the best known. In the UK, the term "Industrial Design" increasingly implies design with considerable engineering and technology awareness alongside human factors - a "Total Design" approach, promoted by the late Stuart Pugh (University of Strathclyde) and others. 
Famous industrial designers - Egmont Arens (1888-1966) - Norman Bel Geddes (1893-1958) - Henry Dreyfuss (1904-1972) - Charles and Ray Eames (1907-1978) and (1912-1988) - Harley J. Earl (1893-1969) - Virgil Exner (1909-1973) - Buckminster Fuller (1895-1983) - Kenneth Grange (1929- ) - Michael Graves (1934- ) - Walter Adolph Gropius (1883-1969) - Jonathan Ive (1967- ) - Arne Jacobsen (1902-1971) - Raymond Loewy (1893-1986) - Ludwig Mies van der Rohe (1886-1969) - László Moholy-Nagy (1895-1946) - Victor Papanek (1927-1999) - Philippe Starck (1949- ) - Brooks Stevens (1911-1995) - Walter Dorwin Teague (1883-1960) - Eva Zeisel (1906- ) - Industrial design rights - Design classics - Interaction Design - Automobile design - Six Sigma - Famous Industrial Designers - Design Council on Product Design Design Council one stop shop information resource on Product Design by Dick Powell. - Industrial Designers Society of America - The Centre for Sustainable Design - International Council of Societies of Industrial Designers - U.S. Occupational Outlook Handbook: Designers - Core77: Industrial Designers' Online Community The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:a466a758-3d7d-477a-8ae7-30c1404a9da8>
CC-MAIN-2013-20
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Industrial_design
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.855773
743
3.203125
3
Help kids practice their counting skills with this printable counting to eight (8) worksheet that has a fun birds theme. This worksheet will be a great addition to any numbers or counting lesson plan as well as any birds themed lesson plan. On this worksheet, kids are asked to count the number of cardinals and circle the correct number (eight) at the bottom of the page. View and Print Your Birds Themed Counting Worksheet All worksheets on this site were done personally by our family. Please do not reproduce any of our content on your own site without direct permission. We welcome you to link directly to any pages on our site without specific permission. We also welcome any feedback, ideas or anything you want to share with us - just email us at firstname.lastname@example.org.
<urn:uuid:62414080-4b8a-49db-816c-5a3eca396b17>
CC-MAIN-2013-20
http://www.allkidsnetwork.com/worksheets/animals/birds/birds-worksheet-counting8.asp
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931344
166
3.515625
4
Dr. Carl Auer von Welsbach (1858-1929) had a rare double talent: he understood how to pursue fundamental science and, at the same time, how to market himself successfully as an inventor and discoverer. He discovered 4 elements (Neodymium, Praseodymium, Ytterbium, and Lutetium). He invented the incandescent mantle, which helped gas lighting to a renaissance at the end of the 19th century. He developed ferrocerium - it's still used as the flint in every disposable lighter. He was an eminent authority and a great expert in the field of the rare earths (lanthanoides). He invented the electric metal-filament light bulb, which is used billions of times today. Additionally, all his life he took an active part in different fields, from photography to ornithology. His personal qualities are remembered highly by the people of Althofen; he not only had an excellent mind but also a big heart. These qualities ensured him a prominent and lasting place in Austria's scientific and industrial history, and beyond. 9th of Sept. 1858: Born in Vienna, son of Therese and Alois Ritter Auer von Welsbach (his father was director of the Imperial printing office, the "Staatsdruckerei"). 1869-73: Attended the secondary school in Mariahilf, then changed to the secondary school in Josefstadt. 1873-77: Attended the secondary school in Josefstadt; graduation. 1877-78: Military service; became a second lieutenant. 1878-80: Enrolled at the Technical University of Vienna; studies in mathematics, general organic and inorganic chemistry, technical physics and thermodynamics with Professors Winkler, Bauer, Reitlinger and Pierre. 1880-82: Changed to the University of Heidelberg; lectures on inorganic experimental chemistry and laboratory experiments with Prof. Bunsen; introduction to spectral analysis and to the history of chemistry, mineralogy and physics. 5th of Feb. 1882: Promotion to Doctor of Philosophy at the Ruperta-Carola University in Heidelberg. 1882: Return to Vienna as an unpaid assistant in Prof. 
Lieben's laboratory; work on chemical separation methods for investigations of the rare earth elements. 1882-1884: Publications: "Ueber die Erden des Gadolinits von Ytterby", "Ueber die Seltenen Erden". 1885: The first separation of the element "Didymium", using a separation method he had newly developed, based on the fractional crystallization of a didymium ammonium nitrate solution. Based on their characteristic colouring, Auer named the green component Praseodymium and the pink component Neodidymium. In time the latter element became more commonly known as Neodymium. 1885-1892: Work on gas mantles for incandescent lighting. Development of a method to produce gas mantles ("Auerlicht") based on the impregnation of cotton tissue with liquids in which rare earths had been dissolved, and the ashing of the material in a subsequent glowing process. Production of the first incandescent mantle out of lanthanum oxide, in which the gas flame is surrounded by a stocking; a definite improvement in light emission, but lacking stability in humidity. Continuous improvements in the chemical composition of the incandescent mantle "Auerlicht"; experiments with lanthanum oxide-magnesium oxide variations. 18th of Sept. 1885: The patenting of a gas burner with an "Actinophor" incandescent mantle made up of 60% magnesium oxide, 20% lanthanum oxide and 20% yttrium oxide; in the same year, the magnesium oxide part was replaced with zirconium oxide, and a second patent was filed covering the additional use of the light body in a spirit flame. 9th of April 1886: Introduction of the name "Gasgluehlicht" by the journalist Moritz Szeps after the successful presentation of the Actinophor at the Lower Austrian trade association; regular production of the impregnation liquid, called "Fluid", at the Chemical Institute. 1887: The acquisition of the factory Würth & Co. 
for chemical-pharmaceutical products in Atzgersdorf, and the industrial production of the light bodies. 1889: The beginning of sales problems because of the defects of the early incandescent mantle: its fragility, its short service life, its unpleasant, cold, green-tinged light, and its relatively high price. The factory in Atzgersdorf closes. The development of fractional crystallization methods for the preparation of pure thorium oxide from the plentiful and therefore cheap monazite sand. The analysis of the connection between the purity of the thorium oxide and its light emission. The ascertainment of the optimal composition of the incandescent mantle in a long series of tests. 1891: Patenting of the incandescent mantle of 99% thorium oxide and 1% cerium oxide; because of its light emission it was, at that period of time, direct competition for the electric carbon-filament lamp. The resumption of production in Atzgersdorf near Vienna and the quick spread of the incandescent mantle because of its long service life. The beginning of the competition with electric lighting. Work with high-melting heavy metals to raise the filament temperature, and therefore the light emission as well. The development of the production of thin filaments. The making of filaments from platinum threads covered with high-melting thorium oxide, whereby it was possible to use the lamps above the melting temperature of platinum. This variant was discarded because on melting of the platinum threads the cover would burst, or on solidifying it would rip apart. The taking out of a patent for two manufacturing methods for filaments. In the patent specification Carl Auer von Welsbach described the manufacturing of filaments through deposition of the high-melting element osmium onto a metal filament. 
The development and testing of further production methods, such as the paste method for the manufacturing of suitable high-melting metal filaments. With this method, osmium powder is mixed with a binder of rubber or sugar and kneaded into a paste. In manufacturing, the paste is pressed through a fine nozzle at the end of a cylinder, and the extruded filament is subsequently dried and sintered. This was the first commercial and industrial powder-metallurgy process for very high-melting metals. 1898: The acquisition of an industrial property in Treibach and the beginning of the experimental and development work at this location. The taking out of a patent for the metal-filament lamp with an osmium filament. 1899: Married Marie Nimpfer in Helgoland. 1902: Market introduction of the "Auer-Oslight", the first industrially finished osmium metal-filament lamp, made using the paste method. The advantages of this metal-filament lamp over the carbon-filament lamp, widely used at that period of time, were: 57% less electricity consumption; less blackening of the glass; a "whiter" light because of the higher filament temperature; and a longer life span, making it more economical. The beginning of the investigation of spark-giving metals, with the aim of ignition mechanisms for lighters, gas lighters and gas lamps as well as projectile and mine ignition. Carl Auer von Welsbach knew of the possibility of producing sparks from cerium by mechanical means from his teacher Prof. Bunsen. The ascertainment of the optimal compound among cerium-iron alloys for spark production. 1903: The taking out of a patent for his pyrophoric alloys (scratching them with a hard, sharp surface detaches splinters which ignite spontaneously). In the patent specification 70% cerium and 30% iron was given as the optimal compound. Further development of a method to produce this alloy cheaply. 
The optimisation of the procedure of Bunsen, Hillebrand and Norton, at that time used mainly for producing cerium, which was based on fusion electrolysis of molten rare earth chlorides. The problem at that time lay in conducting the electrolysis so as to deposit a pore-free, durable metal. This was the first industrial process and commercial utilisation of the rare earth metals.

30th of March 1905: A report to the Akademie der Wissenschaften in Vienna that the results of spectroscopic analysis showed that ytterbium is made up of two elements. Auer named the elements Aldebaranium and Cassiopeium after the stars. He omitted to publish the spectra obtained and the atomic weights determined.

1907: The founding of the "Treibacher Chemische Werke GesmbH" in Treibach-Althofen for the production of ferrocerium lighter flints under the trade name "Original Auermetall". The publication of the spectra and atomic weights of the two new elements separated from ytterbium, completing his report to the Akademie der Wissenschaften. Priority dispute with the French chemist Urbain concerning the analysis of ytterbium.

1908: The solution of the problem of the electrolysis of fused salts (cerium chloride), for which the minerals cerite and allanite are used as source materials.

1909: The adaptation, by his collaborator Dr. Fattinger, of the procedure so that the monazite sand residue from incandescent mantle production could be used to produce cerium metal for lighter flints. The production of three different pyrophoric alloys:

"Cer" or Auermetall I: an alloy of fairly pure cerium and iron, used for ignition purposes.
"Lanthan" or Auermetall II: the cerium-iron alloy enriched with the element lanthanum, used for light signals because of its particularly bright sparks.
"Erdmetall" or Auermetall III: an alloy of iron and "natural" Cermischmetall, a rare earth metal alloy of the corresponding natural deposits.
Neither of the first two alloys was able to win its way into the market; only the easy-to-produce Erdmetall, after being renamed Auermetall I, achieved worldwide status as the flint of the lighter industry.

1909: The International Atomic Weight Commission decided in favour of Urbain's publication instead of Auer's because Urbain had handed his in earlier. The Commission adopted Urbain's names, Neoytterbium (known today as ytterbium) and Lutetium, for the new elements.

The carrying out of large-scale chemical separations in the field of radioactive substances. The production of various preparations of uranium, ionium (known today as the isotope Th-230, a decay product in the uranium-radium series), polonium and actinium, which Auer made available for research purposes to such renowned institutions and scientists as F. W. Aston and Ernest Rutherford at the Cavendish Laboratory in Cambridge (1921) and the Radiuminstitut der Akademie der Wissenschaften in Vienna.

1922: A report on his spectroscopic discoveries to the Akademie der Wissenschaften in Vienna.

1929: Worldwide production of lighter flints reached 100,000 kg.

8th of April 1929: Carl Auer von Welsbach died at the age of 70.
Ethics of dementia research

What are clinical trials and how are they controlled/governed?

A clinical trial is a biomedical/health-related study into the effects on humans of a new medical treatment (medicine/drug, medical device, vaccine or new therapy), sometimes called an investigational medicinal product (IMP). Before a new drug is authorised and can be marketed, it must pass through several phases of development, including trial phases in which its safety, efficacy, risks, optimal use and/or benefits are tested on human beings. Existing drugs must also undergo clinical testing before they can be used to treat conditions other than those for which they were originally intended.

Organisations conducting clinical trials in the European Union must, if they wish to obtain marketing authorisation, respect the requirements for the conduct of clinical trials. These can be found in the Clinical Trials Directive ("Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the Member States relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use").

There are also guidelines to ensure that clinical trials are carried out in accordance with good clinical practice. These are contained in "Commission Directive 2005/28/EC of 8 April 2005 laying down principles and detailed guidelines for good clinical practice as regards investigational medicinal products for human use, as well as the requirements for authorisation of the manufacturing or importation of such products" (also known as Good Clinical Practice, or GCP for short). This document provides more concrete guidelines and lends further support to the Clinical Trials Directive. The London-based European Medicines Agency (EMA) has published additional, more specific guidelines which must also be respected.
These include guidelines on inspection procedures and requirements related to quality, safety and efficacy. Copies of the above-mentioned documents in 22 languages can be found at: http://ec.europa.eu/enterprise/pharmaceuticals/clinicaltrials/clinicaltrials_en.htm

The protection of people participating in clinical trials (and in most cases in other types of research) is further promoted by the provisions of:
- the European Convention on Human Rights and Biomedicine (Oviedo Convention, Act 2619/1998),
- the Additional Protocol to the Oviedo Convention concerning Biomedical Research,
- the Nuremberg Code of 1949,
- the revised Helsinki Declaration of the World Medical Association regarding Ethical Principles for Medical Research Involving Human Subjects,
- the Belmont Report of 18 April 1979 on the Ethical Principles and Guidelines for the Protection of Human Subjects of Research.

What are the different phases of trials?

Testing an experimental drug or medical procedure is usually an extremely lengthy process, sometimes lasting several years. The overall procedure is divided into a series of stages (known as phases), which are described below. Clinical testing on humans can only begin after a pre-clinical phase, involving laboratory studies (in vitro) and tests on animals, which has shown that the experimental drug is considered safe and effective.

Whilst a certain amount of testing can be carried out by means of computer modelling and by isolating cells and tissue, it becomes necessary at some point to test the drug on a living creature. Animal testing is an obligatory stage in the process of obtaining regulatory approval for new drugs and medicines, and hence a legal requirement (EU Directive 2001/83/EC relating to Medicinal Products for Human Use). The necessity of carrying out prior testing on animals is also stated in the World Medical Association's "Ethical Principles for Medical Research Involving Human Subjects".
In order to protect the well-being of research animals, researchers are guided by three principles, known as the 3Rs:
- Reduce the number of animals used to a minimum.
- Refine the way that experiments are carried out so that the effect on the animal is minimised and animal welfare is improved.
- Replace animal experiments with alternative (non-animal) techniques wherever possible.

In addition, most countries have official regulatory bodies which control animal research. Most animals involved in research are mice. However, no animal is sufficiently similar to humans (even genetically modified ones) to make human testing unnecessary. For this reason, the experimental drug must also be tested on humans.

The main phases of clinical trials

Clinical trials on humans can be divided into three main phases (literally, phases I, II and III). Each phase has specific objectives (please see below), and the number of people involved increases as the trial progresses from one phase to the next.

Phase I trials

Phase I trials are usually the first step in testing a new drug or treatment on humans after successful laboratory and animal testing. They are usually quite small in scale and usually involve healthy subjects or sub-groups of patients who share a particular characteristic. The aims of these trials are:
- to assess the safety of experimental drugs,
- to evaluate any possible side effects,
- to determine a safe dose range,
- to see how the body reacts to the drug (how it is absorbed, distributed and eliminated from the body, the effects that it has on the body and the effects it has on biomarkers).

Dose-ranging studies, sometimes called dose-escalation studies, may be used as a means to determine the most appropriate dosage, but the doses administered to the subjects should only be a fraction of those which were found to cause harm to animals in the pre-clinical studies.
The process of determining an optimal dose in Phase I involves quite a high degree of risk because this is the first time that the experimental treatment or drug has been administered to humans. Moreover, healthy people's reactions to drugs may differ from those of the target patient group. For this reason, drugs which are considered to have a potentially high toxicity are usually tested on people from the target patient group. There are a few sequential approaches to Phase I trials, e.g. single ascending dose studies, multiple ascending dose studies and food effect studies.

In single ascending dose (SAD) studies, a small group of subjects receives a very low dose of the experimental drug and is then observed in order to see whether that dose results in side effects. For this reason, trials are usually conducted in hospital settings. If no adverse side effects are observed, a second group of subjects is given a slightly higher dose of the same drug and is also monitored for side effects. This process is repeated until a dose is reached which results in intolerable side effects. This is defined as the maximum tolerated dose (MTD).

Multiple ascending dose (MAD) studies are designed to test the pharmacokinetics and pharmacodynamics of multiple doses of the experimental drug. A group of subjects receives multiple doses of the drug, starting at the lowest dose and working up to a pre-determined level. At various times during the period of administration of the drug, and particularly whenever the dose is increased, samples of blood and other bodily fluids are taken. These samples are analysed in order to determine how the drug is processed within the body and how well it is tolerated.

Food effect studies are investigations into the effect of food intake on the absorption of the drug into the body. They involve two groups of subjects being given the same dose of the experimental drug, one group when fasting and the other after a meal.
Alternatively, this could be done in a cross-over design whereby both groups receive the experimental drug in both conditions in sequence (e.g. when fasting and, on another occasion, after a meal). Food effect studies allow researchers to see whether eating before the drug is given has any effect on its absorption by the body.

Phase II trials

Having demonstrated the initial safety of the drug (often on a relatively small sample of healthy individuals), Phase II clinical trials can begin. Phase II studies are designed to explore the therapeutic efficacy of a treatment or drug in people who have the condition that the drug is intended to treat. They are sometimes called therapeutic exploratory trials and tend to be larger in scale than Phase I trials. Phase II trials can be divided into Phase IIA and Phase IIB, although sometimes they are combined.

Phase IIA is designed to assess dosing requirements, i.e. how much of the drug patients should receive and up to what dose is considered safe. The safety assessments carried out in Phase I can be repeated on a larger subject group. As more subjects are involved, some may experience side effects which none of the subjects in Phase I experienced. The researchers aim to find out more about safety, side effects and how to manage them.

Phase IIB studies focus on the efficacy of the drug, i.e. how well it works at the prescribed doses. Researchers may also be interested in finding out which forms of a specific disease or condition would be most suitable for treatment.

Phase II trials can be randomised clinical trials, which involve one group of subjects being given the experimental drug while others receive a placebo and/or standard treatment. Alternatively, they may be case series, which means that the drug's safety and efficacy is tested in a selected group of patients.
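As an aside, the single ascending dose procedure described under Phase I can be caricatured as a simple escalation loop. The sketch below is purely illustrative: the function name and dose levels are invented, and the tolerability check is a stand-in for observing and reviewing a whole cohort, not part of any real trial protocol.

```python
def find_mtd(dose_levels, cohort_tolerates):
    """Walk up pre-defined dose levels until a cohort shows intolerable
    side effects. The maximum tolerated dose (MTD) is the highest dose
    that was still tolerated (None if even the lowest dose was not)."""
    mtd = None
    for dose in dose_levels:
        if cohort_tolerates(dose):
            mtd = dose   # this cohort tolerated the dose: escalate further
        else:
            break        # intolerable side effects observed: stop escalating
    return mtd

# Invented example: suppose side effects first appear at 40 mg
mtd = find_mtd([5, 10, 20, 40, 80], cohort_tolerates=lambda dose: dose < 40)
# mtd is 20, the highest level that was still tolerated
```

The point of the sketch is only the stopping rule: escalation is strictly sequential, and everything above the first intolerable dose is never administered.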
If the researchers have adequately demonstrated that the experimental drug (or device) is effective against the condition for which it is being tested, they can proceed to Phase III.

Phase III trials

Phase III trials are the last stage before clinical approval of a new drug or device. By this stage, there will be convincing evidence of the safety of the drug or device and of its efficacy in treating people who have the condition for which it was developed. Such studies are carried out on a much larger scale than the two previous phases and are often multinational. Several years may have passed since the original laboratory and animal testing. The main aims of Phase III trials are:
- to demonstrate that the treatment or drug is safe and effective for use in patients in the target group (i.e. in people for whom it is intended),
- to monitor side effects,
- to test different doses or different ways of administering the drug,
- to determine whether the drug could be used at different stages of the disease,
- to provide sufficient information as a basis for marketing approval.

Researchers may also be interested in showing that the experimental drug works for additional groups of people with conditions other than that for which the drug was initially developed. For example, they may be interested in testing a drug for inflammation on people with Alzheimer's disease. The drug would already have proven safe and obtained marketing approval, but for a different condition, hence the need for additional clinical testing.

Open label extension trials

Open label extension studies are often carried out immediately after a double-blind randomised clinical trial of an unlicensed drug. The aim of the extended study is to determine the safety and tolerability of the experimental drug over a longer period of time, which is generally longer than the initial trial and may extend up until the drug is licensed.
Participants all receive the experimental drug irrespective of which arm of the previous trial they were in. Consequently, the study is no longer blind in the sense that everybody knows that each participant is receiving the experimental drug, but the participants and researchers still do not know which group participants were in during the initial trial.

Post-marketing surveillance studies (Phase IV)

After the three phases of clinical testing and after the treatment has been approved for marketing, there may be a fourth phase to study the long-term effects of drugs or treatment, or to study the impact of another factor in combination with the treatment (e.g. whether a particular drug reduces agitation). Usually, such trials are sponsored by pharmaceutical companies and described as pharmacovigilance. They are not as common as the other types of trials (as they are not necessary for marketing permission). However, in some cases the EMA grants restricted or provisional marketing authorisation, which is dependent on additional Phase IV trials being conducted.

Expanded access to a trial

Sometimes a person might be likely to benefit from a drug which is at some stage of testing but does not fulfil the conditions necessary for participation in the trial (e.g. s/he may have other health problems). In such cases, and if the person has a life-threatening or serious condition for which there is no effective treatment, s/he may benefit from "expanded access" use of the drug. There must, however, be evidence that the drug under investigation has some likelihood of being effective for that patient and that taking it would not constitute an unreasonable risk.

The use of placebo and other forms of comparison

The main purpose of clinical drug studies is to distinguish the effect of the trial drug from other influences such as spontaneous change in the course of the disease, the placebo effect, or biased observation. A valid comparison must be made with a control.
The American Food and Drug Administration recognises different types of control, namely:
- active treatment with a known effective therapy,
- no treatment,
- historical control (which could be an adequately documented natural history of the disease or condition, or the results of active treatment in comparable patients or populations).

The EMA considers three-armed trials (including the experimental medicine, a placebo and an active control) a scientific gold standard, and holds that there are multiple reasons to support their use in drug development.

Participants in clinical trials are usually divided into two or more groups. One group receives the active treatment with the experimental substance and the other group receives a placebo, a different drug or another intervention. The active treatment is expected to have a positive curative effect, whereas the placebo is expected to have no effect. With regard to the aim of developing more effective treatments, there are two possibilities: 1. the experimental substance is more effective than the current treatment, or 2. it is more effective than no treatment at all.

According to article 11 of the International Ethical Guidelines for Biomedical Research (IEGBR) of 2002, participants allocated to the control group in a trial of a diagnostic, therapeutic or preventive intervention should receive an established effective intervention, but it may in some circumstances be considered ethically acceptable to use a placebo (i.e. no treatment). In article 11 of the IEGBR, the reasons given for the use of placebo are:
1. that there is no established intervention;
2. that withholding an established effective intervention would expose subjects to, at most, temporary discomfort or delay in relief of symptoms;
3. that use of an established effective intervention as comparator would not yield scientifically reliable results, and use of placebo would not add any risk of serious or irreversible harm to the subjects.
(November 2010, EMA/759784/2010, Committee for Medicinal Products for Human Use)

The use of placebo and the issue of irreversible harm

It has been suggested that clinical trials are only ethically acceptable if there is uncertainty within the medical community as to which treatment is most suitable to cure or treat a disease (National Bioethics Commission of Greece, 2005). In the case of dementia, whilst there is no cure, there are a few drugs for the symptomatic treatment of dementia. Consequently, one could ask whether it is ethical to deprive a group of participants of a treatment which would most likely have improved their condition, for the purpose of testing a potentially better drug (National Bioethics Commission of Greece, 2005). Can they be expected to sacrifice their own best interests for those of other people in the future? It is also important to ask whether withholding an established effective intervention is likely to result in serious or irreversible harm.

In the 2008 amended version of the Helsinki Declaration (World Medical Association, 1964), the possible legitimate use of placebo and the need to protect subjects from harm are addressed:

"32. The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention, except in the following circumstances: The use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists; or where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy or safety of an intervention and the patients who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm. Extreme care must be taken to avoid abuse of this option." (WMA, 1964, with amendments up to 2008)

The above is also quite similar to the position supported by the Presidential Commission for the Study of Bioethical Issues (PCSBI) (2011).
In its recently published report entitled "Moral science: protecting participants in human subjects research", the Presidential Commission argues largely in favour of a "middle ground" for ethical research, citing the work of Emanuel and Miller (2001), who state:

"A placebo-controlled trial can sometimes be considered ethical if certain methodological and ethical standards are met. If these standards cannot be met, then the use of placebos in a clinical trial is unethical." (Emanuel and Miller, 2001, cited in PCSBI, 2011, p. 89)

One of the standards mentioned is the condition that withholding proven effective treatment will not cause more than minimal harm.

The importance of placebo groups for drug development

The ethical necessity of including a placebo arm in a clinical trial may differ depending on the type of drug being developed and whether other comparable drugs exist. For example, a placebo arm would be absolutely necessary in the testing of a new compound for which no drug has yet been developed. This would be combined with comparative arms involving other alternative drugs which have already been proven effective. For studies involving the development of a drug based on an existing compound, a comparative trial would be necessary, but not necessarily with a placebo arm, or at least with a smaller placebo arm.

Nevertheless, the EMA emphasises the value of placebo-controlled trials in the development of new medicinal products, even in cases where a proven effective drug exists: "forbidding placebo-controlled trials in therapeutic areas where there are proven, therapeutic methods would preclude obtaining reliable scientific evidence for the evaluation of new medicinal products, and be contrary to public health interest as there is a need for both new products and alternatives to existing medicinal products." (EMA, 2001)
In 2001, concerns were raised about the interpretation of paragraph 29 of the 2000 version of the Helsinki Declaration, in which prudence was called for in the use of placebo in research trials and it was advised that placebo should only be used where there was no proven therapy for the condition under investigation. A document clarifying the position of the WMA regarding the use of placebo was issued by the WMA in 2001, making clear that the use of placebo might be ethically acceptable even if a proven therapy was available. The current version of this statement is article 32 of the 2008 revised Helsinki Declaration (quoted in sub-section 7.2.1).

The PCSBI (2011) highlights the importance of ensuring that the design of clinical trials enables the researchers to resolve controversy and uncertainty over the merits of the trial drug, and over whether the trial drug is better than an existing drug if there is one. It suggests that studies which cannot resolve such questions or uncertainty are likely to be ignored by the scientific community, and this would be unethical as it would mean that people had been unnecessarily exposed to risk without any social benefit.

Reasons for participation

People with dementia who take part in clinical trials may do so for a variety of reasons. One possible reason is that they hope to receive some form of treatment that will improve their condition or even result in a cure. This is sometimes called the "therapeutic misconception". In such cases, clinical trials may seem unethical in that advantage is being taken of the vulnerability of some of the participants. On the other hand, the possibility of participating in such a trial may help foster hope, which may even enable a person to maintain their morale. A review of 61 studies on attitudes to trials has shed some light on why people participate in clinical trials (Edwards, Lilford and Hewison, 1998).
In this review, it was found that over 60% of participants in seven studies stated that they did or would participate in clinical trials for altruistic reasons. However, in four studies over 70% of people stated that they participated out of self-interest, and in two studies over 50% of people stated that they would participate in such a study out of self-interest. As far as informed consent is concerned, in two studies (which were also part of this review) 47% of responding doctors thought that few patients were actually aware that they were taking part in a clinical trial. On the other hand, an audit of four further studies revealed that at least 80% of participants felt that they had made an autonomous decision. There is no proof of whether such perceptions were accurate or not. The authors conclude that self-interest was more common than altruism amongst the reasons given for participating in clinical trials, but draw attention to the poor quality of some of the studies reviewed, thereby suggesting the need for further research.

It should not be necessary for people to justify why they are willing to participate in clinical trials. Reasons for participating in research are further discussed in section 3.2.4 insofar as they relate to end-of-life research. In a series of focus groups organised in 8 European countries plus Israel, covering six conditions including dementia, helping others was seen as the main reason why people wanted to take part in clinical trials (Bartlam et al., 2010). In a US trial of anti-inflammatory medication in Alzheimer's disease, 402 people were considered eligible and 359 accepted; their main reasons for wanting to participate were altruism, personal benefit and a family history of Alzheimer's disease.
Random assignment to study groups

As people are randomly assigned to the placebo or the active treatment group, everyone has an equal chance of receiving the active ingredient or of being in whichever other control groups are included in the study. There are possible advantages and drawbacks to being in each group, and people are likely to have preferences for a particular study group, but randomisation means that allocation is not in any way linked to the best interests of each participant from a medical perspective. This is not an ethical issue provided that each participant fully understands that the purpose of research is not to provide a tailor-made response to an individual's medical condition, and that while some participants benefit from participation, others do not.

There are, however, medical issues to consider. In double-blind studies, neither the participant nor the investigator knows to which group a participant has been allocated. Consequently, if a participant encounters medical problems during the study, it is not immediately known whether this is linked to the trial drug or to another, unrelated factor; the problems must nevertheless be addressed and possible contraindications avoided, which may necessitate "de-blinding" (DuBois, 2008).

Although many people would perhaps like to benefit from a new drug which is more effective than existing drugs, people have different ideas about what constitutes an acceptable risk and different reasons for taking part in clinical trials. People who receive the placebo are not exposed to the same potential risks as those given the experimental drug. On the other hand, they have no possibility of benefiting from the advantages the drug may offer. Those receiving a drug commonly considered the standard therapy are not necessarily better off than those receiving a placebo, as some participants may already know that they do not respond well to the accepted treatment (DuBois, 2008).
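The allocation mechanics described above can be illustrated with a small sketch of permuted-block randomisation, one common way of assigning each participant at random while keeping the arms balanced in size. The function name, arm labels and block size below are all invented for illustration and do not come from any named trial protocol.

```python
import random

def block_randomise(participant_ids, arms=("active", "placebo"),
                    block_size=4, seed=None):
    """Assign participants to trial arms in shuffled, balanced blocks,
    so the arms stay (nearly) equal in size throughout recruitment."""
    if block_size % len(arms) != 0:
        raise ValueError("block_size must be a multiple of the number of arms")
    rng = random.Random(seed)
    allocation, block = {}, []
    for pid in participant_ids:
        if not block:
            # Refill with an equal number of each arm, then shuffle,
            # e.g. [active, active, placebo, placebo] -> random order.
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
        allocation[pid] = block.pop()
    return allocation

# Hypothetical example: eight participants, 1:1 active/placebo.
# Two full blocks of four guarantee exactly 4 in each arm.
allocation = block_randomise([f"P{i:03d}" for i in range(1, 9)], seed=7)
```

In a double-blind trial the resulting allocation table would be held by an independent party (for instance the trial pharmacist), so that neither participants nor investigators can see it until de-blinding is justified.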
If people who participated in a clinical trial are not informed which arm of the trial they were in, valuable information is lost which might otherwise have contributed to treatment decisions made after the clinical trial. Taylor and Wainwright (2005) suggest that "unblinding" should occur at the end of all studies; so as not to interfere with the analysis of data, this could be done by a person who is totally independent of the analysis. This would, however, have implications for open label extension trials, as in that case participants, whilst better equipped to give informed consent, would have more information than the researchers, and this might be conveyed to researchers in an ad hoc manner.

Open label extension trials

Open label extension studies (mentioned in sub-section 7.1.8) seem quite fair, as they give each participant the opportunity to freely consent to continuing with the study in the full knowledge that s/he will receive the experimental drug. However, Taylor and Wainwright (2005) have highlighted some ethical concerns linked to the consent process, the scientific value of such studies, and issues linked to access to drugs at the end of the prior study.

With regard to consent, they argue that people may have had a positive or negative experience of the trial but do not know whether this was due to the experimental drug, another drug or a placebo. They may nevertheless base their decision whether to continue on their experience so far. For those who were not taking the experimental drug, their experience in the follow-up trial may turn out to be very different. Also, if they are told about the possibility of the open label extension trial when deciding whether or not to take part in the initial trial (i.e.
with the implication that, whichever group they are assigned to, in the follow-up study they will be guaranteed the experimental drug), this might induce them to participate in the initial study, which could be considered a form of subtle coercion. Finally, researchers may be under pressure to recruit, as they can only recruit people into an open label extension trial who took part in the initial study. This may in turn lead them to put pressure (even inadvertently) on participants to continue with the study.

The scientific validity of open label extension trials is questioned by Taylor and Wainwright (2005) on the grounds that people from the experimental arm of the first study who did not tolerate the drug would be unlikely to participate in the extension trial, and this would bias the results. In addition, open label trials often lack a precise duration other than "until the drug is licensed", which casts doubt on there being a valid research purpose. The above authors suggest that open label extension studies are dressed-up marketing activities which lack the ethical justification for biomedical research, namely the prospect of finding new ways of benefiting people's health. However, it could be argued that the aim of assessing the long-term tolerability of a new drug is a worthwhile pursuit and, if conducted in a scientific manner, could be considered research. Moreover, not all open label extension trials are open-ended with regard to their duration. The main problem in interpreting open label extension studies is that little is known about the natural course of the disease.

Protecting participants' well-being at the end of the clinical trial

Some people who participate in a clinical trial and who receive the experimental drug experience an improvement in their condition. This is to be hoped for, even if benefit to the health of individuals is not the aim of the study.
However, at the end of the study, the drug is not yet licensed and there is no legal right to continue taking it. This could be psychologically disturbing to the participants in the trial and also to their families, who may have seen a marked improvement in their condition. Taylor and Wainwright (2005) suggest that open label trials may serve the purpose of prescribing an unlicensed drug on compassionate grounds, which, whilst laudable, should not be camouflaged as scientific research. Rather, governments should take responsibility and set up the appropriate legal mechanisms to make it possible for participants whose medical condition merits prolonged treatment with the experimental drug to have access to it.

Minimising pain and discomfort

Certain procedures to which people with dementia or their representatives consent may be burdensome, painful or simply worrying, but in accordance with the principles of autonomy and justice/equity, people with dementia have the right to participate. The fact that they have made an informed decision to participate and are willing to tolerate such pain or burden does not release researchers from the obligation to try to minimise it. For example, if repeated blood samples are going to be necessary, an indwelling catheter could be inserted under local anaesthetic to make this easier; medical staff could provide reassurance about the use of various scanning equipment which might be worrying, or enable the person's carer to be present. In order to minimise fear, trained personnel are needed who have experience of dealing with people with dementia. The advice of the carer, if there is one, could also be sought.

Drug trials in countries with less developed safeguards

Clinical trials are sometimes carried out in countries where safeguards are not well developed and where the participants and even the general population are likely to have less possibility to benefit from the results of successful trials.
For example, some countries have not signed the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (1997) (referred to in an earlier section). The participants in those countries may be exposed to possible risks but have little chance of future medical benefit if the trial is successful. Yet people in countries with stricter safeguards for participants (which are often richer countries) stand to benefit from their efforts and from the risks they take, as they are more likely to be able to afford the drugs once developed. This raises ethical issues linked to voluntariness because there may be, in addition to the less developed safeguards, factors which make participation in such trials more attractive to potential participants. Such practices also represent a lack of equity in the distribution of risk, burden and possible benefit within society and could be interpreted as using people as a means to an end. Parallels can also be drawn to the situation whereby people in countries where stem cell research is banned profit from the results of studies carried out in countries where it is permitted, or to the results of studies carried out in countries where research ethics are lax or non-existent.

For a detailed discussion of the ethical issues linked to the involvement in research of people in other countries, particularly lower and middle income countries where standards of protection may be lower, please refer to the afore-mentioned report by the Presidential Commission for the Study of Bioethical Issues.

- Researchers should consider including a placebo arm in clinical trials when there are compelling and sound methodological reasons for doing so.
- Researchers should ensure that patients are aware that the aim of a randomised controlled trial is to test a hypothesis and provide generalizable knowledge leading to the development of a medical drug or procedure.
They should explain how this differs from medical treatment and care, which are aimed at enhancing the health and wellbeing of individual patients and where there is a reasonable expectation that this will be successful.
- Researchers should ensure that potential participants understand that they may be allocated to the placebo group.
- It should not be presumed that the treating doctor or contact person having proposed the participant for a trial has been successful in communicating the above information.
- Researchers conducting clinical trials may need training in how to ensure effective communication with people with dementia.
- Appropriate measures should be taken by researchers to minimise fear, pain and discomfort of participants.
- All participants should, where possible, have the option of receiving the experimental drug (if proven safe) after completion of the study.
- Pharmaceutical companies should not be discouraged from carrying out open label extension studies, but this should not be the sole possibility for participants to access the trial drug after the end of the study if it is proving beneficial to them.
- In multi-centre clinical trials, where data is transferred to another country in which data protection laws are perhaps less strict, the data should be treated as stated in the consent form signed by the participant.

Last Updated: Thursday, 29 March 2012
United Kingdom - Scotland

Restrictions of freedom

Mental Health (Care and Treatment) (Scotland) Act 2003

The Act was designed to modernise and improve the use of compulsory measures in mental health care. It reflects the general move over the last two decades towards care and treatment in the community rather than in hospitals or other residential settings. The title reflects the philosophy of the legislation, with the focus on 'care' and 'treatment'. In basic terms, the Act provides for the protection of people with a mental disorder in a hospital or community setting. It contains mechanisms for dealing with offenders who have a mental disorder and so interacts with the criminal justice system.

The Act covers individuals who are defined as having a 'mental disorder'. The term includes mental illness, personality disorder and learning disability. The majority of cases involving compulsory measures have been in relation to people diagnosed with a mental illness. However, the Mental Welfare Commission for Scotland monitors the use of compulsory measures and has found increasing use of emergency or short term measures for people aged over 75 years with a diagnosis of dementia.

Detention (Involuntary internment)

The Act deals with several forms of compulsion in relation to a person with mental disorder where:

- There is a significant risk to the person's health, safety or welfare or the safety of any other person (what is a significant risk is a question of judgement for health and social care professionals; the tribunal will test this assessment during an appeal or on an application for a compulsory treatment order);
- Treatment is available to prevent the person's condition from deteriorating or to relieve its symptoms or effects;
- Compulsory admission is necessary because the person will not agree to admission and/or treatment; and
- The person's ability to make decisions about the provision of medical treatment is significantly impaired because of mental disorder.

Types of order

- Emergency Detention (72 hours)
- Short Term Detention (28 days and can be extended)
- Compulsory Treatment Order (6 months – can be extended)

Mental Health Tribunals

The Act introduced a new system of mental health tribunals with a number of functions, including considering applications for orders and appeals against orders.

Emergency detention

This is detention in a psychiatric hospital for up to 72 hours if necessary. It does not authorise any medical treatment. In an emergency, common law powers might be used. A registered medical practitioner can sign an emergency detention certificate if s/he believes that a person's ability to make decisions about medical treatment is significantly impaired because of mental disorder. This authorises the removal of the individual to a specific hospital. Before signing the certificate the medical practitioner must be satisfied that:

- There is an urgent need to detain the person in hospital to access the medical treatment s/he needs;
- If the person was not detained, there would be a significant risk to his or her health, safety, or welfare or the safety of another person; and
- Any delay caused by starting the short term detention procedure is undesirable.

If any treatment is needed, the short-term detention procedure must generally be used.

Short term detention

This may be used where it is necessary to detain an individual with mental disorder who cannot be treated voluntarily and who, without the treatment, would be at risk of significant harm. To obtain a certificate the approved medical practitioner must consult and gain the approval of a Mental Health Officer whatever the circumstances.
Compulsory Treatment Order

Compulsory Treatment Orders (CTOs) are granted by the Mental Health Tribunal. They last for 6 months, can be extended by the responsible medical officer for a further six months and then extended annually. The Tribunal reviews them at least every two years. They can therefore restrict or deprive liberty for long periods of time. The Mental Welfare Commission for Scotland looks at how these orders are used for people of different ages and genders to see if there are any trends. Over recent years, the number of new orders has come down. The use of CTOs for people with dementia aged 65 and over has, however, increased in recent years.

'De facto detention'

Practitioners must be careful that they are not using excessive coercion to prevent people from leaving hospital when they wish to. They must take care to document situations where they have concerns if an informal patient wishes to leave. The Tribunal can, under section 291 of the 2003 Act, rule that an informal patient is being unlawfully detained. People with dementia pose a difficult problem. The Tribunal has ruled that a person with dementia was unlawfully detained in a general hospital when prevented from leaving. It can be appropriate to redirect someone and dissuade him/her from leaving, but repeatedly thwarting a determined effort to leave is likely to amount to a significant deprivation of liberty, and the patient should be formally detained.

Adults with Incapacity (Scotland) Act 2000

Scottish incapacity laws were reformed with the introduction of the Adults with Incapacity (Scotland) Act in 2000. This Act covers people with a mental disorder who lack some or all capacity to make decisions or act in their own interests. It recognises that capacity is not all or nothing but is 'decision specific'. The Act introduced a number of measures to authorise someone else to make decisions on behalf of the person with incapacity, on the basis of a set of principles on the face of the Act.
These principles are fundamental. Any action or decision:

- Must benefit the person
- Must be the least restrictive of the person's liberty in order to gain that benefit
- Must take account of the person's past and present wishes (s/he must be given assistance to communicate by whatever means is appropriate to the individual)
- Must follow consultation with relevant others as far as practicable
- Must encourage and support the person to maintain existing skills and develop new skills.

The individual may, whilst competent, appoint one or more persons to act as their financial (continuing) and/or welfare attorney. This must be registered with the Office of the Public Guardian. It does not allow the attorney to detain the grantor in a psychiatric hospital. If the person refuses to comply with the attorney, the attorney has no compulsory powers to detain.

Where there is concern for the person's safety, the attorney can apply to the court for a welfare guardianship order. Powers can be granted to allow the guardian to decide on the accommodation of the person, and other powers such as deciding who the person can consort with. Where the welfare guardian has powers over accommodation, s/he is able to restrict the freedom of the person by placing them in a care home against their will. However, whether this amounts to deprivation of liberty under the European Court of Human Rights ruling will depend on a number of other circumstances, the cumulative impact of which would need to be considered (Patrick and Smith, 2009; Mental Welfare Commission for Scotland, 2011).

With regard to the issue of non-compliance, if the person on guardianship, for example, runs away, the guardian can apply to the Court under s70 for an order to require the person to return. Because there is no automatic review of welfare guardianship orders, there is concern that the Adults with Incapacity (Scotland) Act 2000 may not be compliant with the European Convention on Human Rights.
The Act states that the order should be for a standard 3 years but can be more or less at the discretion of the Court. However, there has been a practice of orders being granted for indefinite periods and this has given rise to concern in relation to certain groups. For people with dementia, who have a progressive brain disorder, an indefinite order may nevertheless be deemed appropriate. The Scottish Law Commission is currently undertaking a review of the Adults with Incapacity (Scotland) Act 2000 in relation to deprivation of liberty issues. It has established an advisory group of key stakeholders, including Alzheimer Scotland, and will be reporting in due course.

Driving

The Road Traffic Act of 1991 contains a few articles relating to offences involving driving when unfit to do so, e.g.:

- A person who causes the death of another person by driving a mechanically propelled vehicle dangerously on a road or other public place is guilty of an offence.
- A person who drives a mechanically propelled vehicle dangerously on a road or other public place is guilty of an offence.
- If a person drives a mechanically propelled vehicle on a road or other public place without due care and attention, or without reasonable consideration for other persons using the road or place, he (or she) is guilty of an offence.
- According to the provisions of this act, a person is regarded as driving dangerously if the way s/he drives falls far below what would be expected of a competent and careful driver and it would be obvious to a competent and careful driver that driving in that way would be dangerous.

A person who has been diagnosed with dementia must inform the Driver and Vehicle Licensing Agency (DVLA). Failure to do so could lead to a fine of up to £1,000. Moreover, a person who had an accident but did not previously inform the DVLA of his/her dementia might not be covered by his/her insurance company.
Once the DVLA has been informed that someone has dementia, they send a questionnaire to the person and request a medical report. A driving assessment may also be required. The Medical Advisers at the DVLA then decide whether the person can continue driving (Alzheimer Scotland, 2003).

References

Patrick, H. and Smith, N. (2009), Adult Protection and the Law in Scotland, Bloomsbury Professional.

Mental Welfare Commission for Scotland (2011), Annual Report 2010–2011, www.mwcscot.org.uk

Last Updated: Wednesday, 14 March 2012
Our opinion on ...

- Executive Summary
- Necessity for a response
- Genetic testing
- General principles
- Other considerations

The present paper constitutes the input of Alzheimer Europe and its member organisations to the ongoing discussions within Europe about genetic testing (in the context of Alzheimer's disease and other forms of dementia).

Alzheimer Europe would like to recall some general principles which guide this present response:

- Having a gene associated with Alzheimer's disease or another form of dementia does not mean that a person has the disease.
- People who have a gene linked to Alzheimer's disease or another form of dementia have the same rights as anyone else.
- Genetic testing does not only affect the person taking the test. It may also reveal information about other relatives who might not want to know.
- No genetic test is 100% accurate.
- The extent to which health cover is provided to citizens by the State social security system and/or privately contracted by individuals differs from one country to the next.

On the basis of these principles, Alzheimer Europe has developed the following position with regard to genetic testing:

- Alzheimer Europe firmly believes that the use and/or possession of genetic information by insurance companies should be prohibited.
- Alzheimer Europe strongly supports research into the genetic factors linked to dementia which might further our understanding of the cause and development of the disease and possibly contribute to future treatment.
- Based on its current information, Alzheimer Europe does not encourage the use of any genetic test for dementia UNLESS such a test has a high and proven success rate either in assessing the risk of developing the disease (or not, as the case may be) or in detecting its existence in a particular individual.
- Alzheimer Europe requests further information on the accuracy, reliability and predictive value of any genetic tests for dementia.
- Genetic testing should always be accompanied by adequate pre- and post-test counselling.
- Anonymous testing should be possible so that individuals can ensure that such information does not remain in their medical files against their will.

It is extremely important for people with dementia to be diagnosed as soon as possible. In the case of Alzheimer's disease, an early diagnosis may enable the person concerned to benefit from medication, which treats the global symptoms of the disease and is most effective in the early to mid stages of the disease. Most forms of dementia involve the gradual deterioration of mental faculties (e.g. memory, language and thinking etc.) but in the early stages, it is still possible for the person affected to make decisions concerning his/her finances and care etc. – hence the importance of an early diagnosis.

If it were possible to detect dementia before the first symptoms became obvious, this would give people a greater opportunity to make informed decisions about their future lives. This is one of the potential benefits of genetic testing. On the other hand, such information could clearly be used in ways which would be contrary to their personal interests, perhaps resulting in employment discrimination, loss of opportunities, stigmatisation, increased health insurance costs or even loss of health insurance, to name but a few examples. The present discussion paper outlines some of the recommendations of Alzheimer Europe and its member organisations and raises a few points which deserve further clarification and discussion.

The necessity for a response by Alzheimer Europe

In the last few years, the issue of genetic testing has been increasingly debated. In certain European countries there are already companies offering such tests. Unfortunately, the general public do not always fully understand what the results of such tests imply and there are no regulations governing how they are carried out, i.e.
what kind of information people receive, how the results are presented, whether there is any kind of counselling afterwards and the issue of confidentiality etc. In order to provide information to people with dementia and other people interested in knowing about their own state of health, and in order to protect them from the unscrupulous use of the results of genetic tests, Alzheimer Europe has developed the present Position Paper. These general principles, as well as the Convention on Human Rights and Biomedicine and the Universal Declaration on the Human Genome and Human Rights, dictate Alzheimer Europe's position with regard to genetic testing.

Alzheimer Europe would like to draw a distinction between tests which detect existing Alzheimer's disease and tests which assess the risk of developing Alzheimer's disease at some time in the future:

- Diagnostic testing: Familial early onset Alzheimer's disease (FAD) is associated with 3 genes. These are the amyloid precursor protein (APP), presenilin-1 and presenilin-2. These genetic mutations can be detected by genetic testing. However, it is important to note that the test only relates to those people with FAD (i.e. about 1% of all people with Alzheimer's disease). In the extremely limited number of families with this dominant genetic disorder, family members inherit from one of their parents the part of the DNA (the genetic make-up) which causes the disease. On average, half the children of an affected parent will develop the disease. For those who do, the age of onset tends to be relatively low, usually between 35 and 60.
- Assessment of risk testing: Whether or not members of one's family have Alzheimer's disease, everyone risks developing the disease at some time. However, it is now known that there is a gene which can affect this risk. This gene is found on chromosome 19 and it is responsible for the production of a protein called apolipoprotein E (ApoE).
There are three main types of this protein, one of which (ApoE4), although uncommon, makes it more likely that Alzheimer's disease will occur. However, it does not cause the disease, but merely increases the likelihood. For example, a person of 50 would have a 2 in 1,000 chance of developing Alzheimer's disease instead of the usual 1 in 1,000, but might never actually develop it. Only 50% of people with Alzheimer's disease have ApoE4 and not everyone with ApoE4 suffers from it. There is no way to accurately predict whether a particular person will develop the disease.

It is possible to test for the ApoE4 gene mentioned above, but strictly speaking such a test does not predict whether a particular person will develop Alzheimer's disease or not. It merely indicates that he or she is at greater risk. There are in fact people who have had the ApoE4 gene, lived well into old age and never developed Alzheimer's disease, just as there are people who did not have ApoE4 who did develop the disease. Therefore, taking such a test carries the risk of unduly alarming or comforting somebody.

Alzheimer Europe agrees with diagnostic genetic testing provided that pre- and post-test counselling is provided, including a full discussion of the implications of the test, and that the results remain confidential. We do not actually encourage the use of genetic testing for assessing the risk of developing Alzheimer's disease. We feel that it is somewhat unethical as it does not entail any health benefit and the results cannot actually predict whether a person will develop dementia (irrespective of the particular form of ApoE s/he may have).
We are totally opposed to insurance companies having access to results from genetic tests for the following reasons:

- This would be in clear opposition to the fundamental principle of insurance, which is the mutualisation of risk through large numbers (a kind of solidarity whereby the vast majority who have relatively good health share the cost with those who are less fortunate).
- Failure to respect this principle would create an uninsurable underclass and lead to a genetically inferior group.
- This in turn could entail the further stigmatisation of people with dementia and their carers.
- In some countries, insurance companies manage to reach decisions on risk and coverage without access to genetic data.
- We therefore urge governments and the relevant European bodies to take the necessary action to prohibit the use or possession of genetic data by insurance companies.

Alzheimer Europe recognises the importance of research into the genetic determinants of Alzheimer's disease and other forms of dementia. Consequently:

- we support the use of genetic testing for the purposes of research provided that the person concerned has given informed consent and that the data is treated with utmost confidentiality; and
- we would also welcome further discussion about the problem of data management.

In our opinion, any individual who wishes to take a genetic test should be able to choose to do so anonymously in order to ensure that such information does not remain in his/her medical file.

At its Annual General Meeting in Munich on 15 October 2000, Alzheimer Europe adopted recommendations on how to improve the legal rights and protection of adults with incapacity due to dementia. This included a section on bioethical issues. These recommendations obviously need to guide any response of the organisation regarding genetic testing for people who suspect or fear they may have dementia and also those who have taken the test and did develop dementia.
- The adult with incapacity has the right to be informed about his/her state of health.
- Information should, where appropriate, cover the following: the diagnosis, the person's general state of health, treatment possibilities, potential risks and consequences of having or not having a particular treatment, side-effects, prognosis and alternative treatments.
- Such information should not be withheld solely on the grounds that the adult is suffering from dementia and/or has communication difficulties. Attempts should be made to provide information in such a way as to maximise his/her ability to understand, making use of technology and other available techniques to enhance communication. Attention should be paid to any possible difficulty in understanding, retaining information and communicating, as well as his/her level of education, reasoning capacity and cultural background. Care should be taken to avoid causing unnecessary anxiety and suffering.
- Written as well as verbal information should always be provided as a back-up. The adult should be granted access to his/her medical file(s). S/he should also have the opportunity to discuss the contents of the medical file(s) with a person of his/her choice (e.g. a doctor) and/or to appoint someone to receive information on his/her behalf.
- Information should not be given against the will of the adult with incapacity.
- The confidentiality of information should extend beyond the lifetime of the adult with incapacity. If any information is used for research or statistical purposes, the identity of the adult with incapacity should remain anonymous and the information should not be traceable back to him/her (in accordance with the provisions of national laws on respect for the confidentiality of personal information). Consideration should be given to access to information where abuse is suspected.
- A clear refusal by the adult with incapacity to grant access to information to any third party should be respected regardless of the extent of his/her incapacity, unless this would be clearly against his/her best interests, e.g. carers should be provided with information on a need-to-know basis to enable them to care effectively for the adult with incapacity.
- People who receive information about an adult with incapacity in connection with their work (either voluntary or paid) should be obliged to treat such information with confidentiality.

People who take genetic tests and do not receive adequate pre- and post-test counselling may suffer adverse effects. Fear of discrimination based on genetic information may deter people from taking genetic tests which could be useful for research into the role of genes in the development of dementia. Certain tests may be relevant for more than one medical condition. For example, the ApoE test is used in certain countries as part of the diagnosis and treatment of heart disease. There is therefore a risk that a person might consent to one type of medical test and have the results used for a different reason.

Last Updated: Thursday, 6 August 2009