Dataset columns: text (string, 198 to 621k chars), id (string, 47 chars), dump (string, 95 distinct values), url (string, 15 to 1.73k chars), file_path (string, 110 to 155 chars), language (string, 1 value), language_score (float64, 0.65 to 1), token_count (int64, 49 to 160k), score (float64, 2.52 to 5.03), int_score (int64, 3 to 5)
Andromeda downsized in massive hit list
Tuesday, 10 October 2006

University of Cambridge astronomer Dr Mark Wilkinson and colleagues looked at several small dwarf galaxies in the Local Group, which includes Andromeda, the Milky Way and dozens of smaller galaxies. When they estimated Andromeda's mass, they found it to be less than previously thought, making the Milky Way the heftiest.

Astronomers determine the mass of galaxies by looking at how fast the stars and the gas in the inner disk are rotating: the faster they move, the more massive the galaxy. Until now, precise measurements of the rotation of a galaxy's dark halo, which extends for thousands of light-years beyond the galaxy's visible starry disk, have not been possible because there are few visible objects out there that can be measured. But with the high resolution of the European Southern Observatory's Very Large Telescope, Wilkinson measured the velocities of stars in several extremely faint dwarf galaxies orbiting Andromeda to determine the mass of its dark halo.

"Although Andromeda's inner disk is moving faster than the Milky Way's, further out in the halo, which is almost exclusively made up of dark matter, the opposite is true," he says. "The best fit for the data showed that the Milky Way is about one and a half times more massive than Andromeda."

Wilkinson and colleagues posted their findings recently on the arXiv physics website and have submitted the data to the journal Nuclear Physics B.

Largest but not heaviest

Last year astronomers suggested that Andromeda might be three times larger than previously thought, but the results are not necessarily contradictory: the earlier study, based on data from the WM Keck Observatory in Hawaii, indicated the extent of the visible disk rather than the mass of the whole galaxy.

"We need more data to work out the point at which the mass starts falling off," says Wilkinson. "That could tell us more about the profile of dark matter. However this [latest research] is a very small sample, only two out of billions of galaxies, so we need to reduce the error bars."

Australian researcher Dr Jeremy Bailin, from Swinburne University of Technology, agrees. "It's notoriously difficult to measure the mass of astronomical objects. You can't put a galaxy on a scale," he says. "But reducing the uncertainties on mass is important for understanding the history of the Local Group and will help us better understand how it compares with other groups of galaxies."
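As a rough illustration of the velocity-to-mass logic described above (this worked relation is a standard textbook result, not something taken from the article, and the study itself fits a full dark-halo model rather than this simple formula), the mass enclosed within a radius r that is implied by a measured circular orbital speed v follows from balancing gravitational and centripetal acceleration:

```latex
% Enclosed mass implied by a circular orbital speed v at radius r
% (illustrative relation only; actual halo-mass fits are more elaborate)
\frac{G\,M(r)}{r^{2}} \;=\; \frac{v^{2}}{r}
\qquad\Longrightarrow\qquad
M(r) \;=\; \frac{v^{2}\,r}{G}
```

Plugging in purely illustrative numbers (assumed here, not measured values from the study), v of about 200 km/s at r of about 30 kpc gives M(r) of roughly (2.0 x 10^5 m/s)^2 x 9.3 x 10^20 m / 6.67 x 10^-11 m^3 kg^-1 s^-2, or about 5.6 x 10^41 kg, close to 3 x 10^11 solar masses. This is why faster motion at a given radius implies a heavier galaxy.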
<urn:uuid:0a7ff0c3-d877-463d-a262-9382522937a7>
CC-MAIN-2018-17
http://www.abc.net.au/science/news/stories/2006/1759866.htm
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945660.53/warc/CC-MAIN-20180422212935-20180422232935-00301.warc.gz
en
0.933652
499
3.125
3
Third part of ‘Monsanto’s Caribbean Experiment’

American John Francis Queeny was inspired by a Puerto Rican woman when he named the Monsanto company, founded in Missouri in 1901 as a pharmaceutical firm. Queeny named the company after his wife Olga, daughter of Emmanuel Mendes de Monsanto, who in turn funded the corporation's first steps. The company went on to manufacture Agent Orange, the defoliant and herbicide that was tested on farms in Aguadilla in the 1950s and later used on a large scale to strip the jungle under which the enemy of the United States hid during the Vietnam War.

Today, Monsanto is the largest producer of transgenic seeds in the world, and it uses Puerto Rico as a huge laboratory to develop genetically modified corn, soybeans, sorghum and cotton. As an agricultural corporation, it occupies more than the 500 acres allowed under the Constitution, whose Article VI was intended to prevent monopolies and the displacement of small local farmers, as happened early last century under the reign of the sugarcane empire that Emmanuel Mendes de Monsanto financed in Vieques and in St. Thomas in the U.S. Virgin Islands.

Monsanto. Transgenics. Among the scientific community and consumer advocacy groups, those two words incite passions. Some argue that genetically modified seeds can increase food production in places where pests and drought abound, which could alleviate famine in third world countries. However, the Union of Concerned Scientists, an independent scientific organization in the United States dedicated to environmental protection, argues that transgenics do not reliably increase production and require more pesticide than conventional crops. Consumer groups claim that the impact of transgenics on human health has not been studied in depth and warn that federal law does not require food manufacturers to state on labels that a product contains transgenic components.

In practice, it is known that the pollen of “improved” seeds may, by accident or on purpose, reach crops that are not genetically modified. That means that if Puerto Ricans were to jumpstart food production and begin to harvest their best land in the south, where the seed producers operate, potential corn crops, for example, would be at risk of contamination. Corporations may also argue that the farmer “stole” the patented genetic material. This happened in the famous case of Canadian canola farmer Percy Schmeiser, whose crops were contaminated in 1997 on his farm in the province of Saskatchewan; the Supreme Court of Canada decided that this constituted a “use” of the invention patented by Monsanto.

The company makes farmers sign a contract stating that they waive the ancient practice of saving seeds for the next year, so they have to go back and buy more. There are not many options: 90% of corn sold in the United States carries genetic material produced by Monsanto.

Much of the company’s revenue stems from the fact that it also manufactures Roundup, a “total herbicide” that is one of the world’s most popular, sold even in department store gardening sections. The corporation produces “Roundup Ready” seeds, genetically engineered to resist the herbicide so that farmers can apply it to kill every plant except the genetically modified crop. In other words, Monsanto genetically modifies a plant not so that it produces more food or survives pests, but so that it resists the other agricultural products the company sells.

The international organization Earth Open Source, which brings together farmers, corporations and academic institutions, argued in a report last June that the herbicide, whose active ingredient is glyphosate, causes birth defects in laboratory animals even at concentrations lower than those applied in agriculture. According to the report, that information has been withheld by European regulators since the 1980s. Glyphosate is an herbicide developed to remove weeds and shrubs; plants absorb it through their leaves and die because it blocks their ability to generate the amino acids necessary for life. The courts of France were an exception: between 2007 and 2009 they found Monsanto guilty of falsely marketing this herbicide as “biodegradable.”

And this is part of the profile of the company that represents the type of investment the Government of Puerto Rico is encouraging in order to consolidate the island as a destination for life sciences, according to Economic Development and Commerce Department Secretary José Pérez-Riera. The Puerto Rico Promotion and Development of Agricultural Biotechnology Companies Law, signed in 2009, establishes as public policy the goal of turning the island into a mecca for that agricultural sector.

“Certainly, an industry that experiments on Puerto Rican soil is subject to the requirement of preparing an environmental impact statement required by the Puerto Rico Environmental Public Policy Law,” said attorney Jessica Rodríguez, a professor at the InterAmerican University Law School. “Transgenic experimentation affects the health of soils and groundwater, and the chemicals used may also pose health risks to the population. Therefore, before any government approval is given, an environmental impact statement has to be prepared and made public so that people can learn about and participate in the proceedings.”

The Environmental Quality Board could not tell the Center for Investigative Journalism whether these companies comply with this requirement. Monsanto itself, meanwhile, acknowledged in the lease contract for Land Authority property in Juana Díaz that its activities “may generate material and substances that are toxic, both for humans and the environment in general, if adequate management practices are not carried out.”
<urn:uuid:5e05b590-1a2e-4b0d-9ae4-8559de0921dd>
CC-MAIN-2021-04
https://periodismoinvestigativo.com/2011/11/monsantos-environmental-impact-part-three/
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00663.warc.gz
en
0.949534
1,123
2.9375
3
This exercise carries on from Exercise 4.1 and the scrunched-up zig-zag sketchbook I used at Brighton. My first step towards producing a ‘labour intensive’ piece was researching some artists that I could base my ideas on and that could give me some direction, as I was a little lost about how I was going to approach the drawing using the creases in the paper.

Vitamin D: New Perspectives in Drawing was a good point to start and I found an artist named Vik Muniz. I took some notes:
- Muniz works from imagery he has collated from memory, photographs, art history and his surroundings, using various mediums from chocolate to dust, clouds, lint (the common name for visible accumulations of textile fibres and other materials, usually found on and around clothing; materials used in the manufacture of clothing, such as cotton, linen and wool, contain numerous very short fibres bundled together), toys, string, dirt and holes punched from paper. He then photographs the result and discards the original.
- He has reproduced historical artworks in both 2 & 3D – drawing in wire or thread
- Cathedral drawn in chocolate
- The artist’s images are related to the process of psychoanalysis – using chocolate to sketch from memory, then photographing and finally discarding the chocolate composition. He gives a physicality to childhood associations and then wipes them away.
- The artist’s works are also about translation – from one socio-cultural viewpoint to another, from one material to another, from one person to another – and what is lost and gained in the process.
- His materials carry associations within our culture and with his subject.
- His choice of materials often relates to the subjects that he draws. For example, when he draws with sugar the image is of children working on sugar plantations.
- Muniz’s work forces us to detach ourselves from our initial perception. We realise that what we think is a true-to-life representation is imagery intercepted and interpreted by Muniz, a vision that requires our participation to comprehend as we bring to bear our own experiences, memories and misapprehensions.

The second artist I looked at was Russell Crotty:
- As a caretaker in the early 1990s he reconnected with a childhood hobby – stargazing
- The works made from there on fuse a private obsession with something that is public property: the cosmic sublime
- His works began as ‘detailed hobbyist books memorialising his observations of celestial bodies (an astronomical object or celestial object is a naturally occurring physical entity, association or structure that exists in the observable universe. In astronomy the terms object and body are used interchangeably)’.
- After expanding his works to ‘point-of-view’ drawings, in which his interstellar imagery was vignetted (a decorative design placed at the beginning or end of a book or at the border of a page) as if seen through a telescope, Crotty began to draw on paper-covered spheres. I am thinking this could fit in with my ideas linked to ‘edges’ in my drawings. These works by Crotty are very fitting with my ideas of the physicality of drawing, producing a 3D object.
- The expansion of media suggests an attempt to get closer to the intimate experience of hands-on astronomy.
- Links to minimalist practice due to the repetition of marks – this is something that interests me
- His tiny, scratchy ink marks create images that are both realistic and unfathomable, signalling the artist’s own pleasure in sending his eye out into space – scratching the surface to make the marks.
- Crotty has considered making art out in the local landscape on a cloudy night
- I really like his work, and the final product of his drawings is clear evidence of intense labour. His compositions are fairly simple but his marks invite you to look closer and gaze for a length of time at his works – this is something I would like to try and produce

The next artist I looked at was Cai Guo-Qiang:
- In 1984 he used gunpowder as a drawing medium (how exciting!!!!)
- His experimental work combines drawing, painting and performance works
- His hometown has long been engaged in war and the ongoing conflict has clearly left an impression on the artist
- In 1989 he started a series of gunpowder-related pieces
- In these works art becomes a dynamic form that embodies the human desire to harness nature’s energy and power, as well as the human aspiration to rise freely above and beyond the earth
- In his project ‘Extraterrestrials’ (1989) Cai created an image that turned the viewer’s attention not only to the sky, but also to the realm beyond our planet, suggesting our place within a greater universe and the possibility of other life forms in outer space. First Look: Cai Guo-Qiang’s Extraterrestrial Vision (2018) is a video showing the work. I think the combination of the materials used and the visual power of the work is quite breathtaking.
- As part of this body of work, the artist also produced eight large-scale (13-foot) drawings on rice paper. He lays down trails and lines of gunpowder over large sheets of white paper, then ignites one end to produce a series of small explosions that leave behind marks and lines
- These spontaneously produced images recall both the abstraction of western modernism and the lyrical form of Chinese ink painting
- For Cai drawing first assumed a performative dimension
- He also began to explore land as a form of draftsman’s ground, comparable to a sheet of paper – a link with my ideas of using the landscape/ground as a basis of my drawing.
- Process and land art
- I like that alongside his drawings he includes a diagram of his plan for executing his performative drawing

Drawing Now: Eight Propositions:
- Russell Crotty, an avid surfer, used stick figures and pen strokes to record observations of surfers
- He arranged his series of drawings like film strips on a large sheet of paper gridded out like a minimalist painting
- Crotty drew not just to understand the waves but, in a romantic sense, to understand them viscerally and experimentally
- Crotty records planetary activity and phenomena such as comets in a chiaroscuro drawing style of pen-and-ink crosshatches
- When drawing in his book he sometimes divides his page into individual panels (like in a comic book) and spreads entire nightscapes luxuriously across double pages, creating a panoramic view
- Since virtually all astronomical data today is collected by computer, Crotty’s stargazing is fundamentally a 19th-century pastime.
- For Crotty the telescope is a tool to bring him as close as possible to the stars, and the astronomical observations he records are not merely data but evidence of his engagement with nature
- Crotty recently said, squinting at the night sky, ‘what is this? This is not an intellectual construct. This is actual.’
- The artist Ugo Rondinone sketches directly from nature. In 1989 he started taking a sketchbook on alpine walks
- 19th-century German Romantic philosophers wrote of the artist taking to the hills and experiencing nature first hand by sketching it
- These artists became known as ‘wandermaler’ (wandering painters).
They produced a distinctive kind of landscape drawing: eschewing awesome vistas for more picturesque scenes of craggy knolls and tumbledown shacks, they drew highly finished, detailed compositions in pen and ink, outlining forms and inking in dramatic contrasts of light and dark.
- Rondinone’s larger finished drawings are careful enlargements of sketches he has made on his mountain walks (I really like this idea of bringing his first-hand research back to the studio to develop further)
- The book mentions Ruskin and his term ‘local association and historical memory’.
- According to Ruskin’s Elements of Drawing, nature could not be recorded exactly, but the light and dark shading of masses in space produced the effects closest to it and thus offered the only way to depict it in drawing.
- Rondinone adopts the light/dark method because it signifies how nature has been drawn art-historically.

After conducting this research I felt better equipped to move forward with this exercise. This was the kind of thing I was hoping to produce in my sketchbook, using the creases in the book to give me a starting point for shadow and light. (Relief Shading, 2015)

I wanted to use cross-hatching and marks as my drawing technique. I was hoping the creases in the page would act as a relief, so that when light was applied to the page it would have areas of dark and light which I would simply draw into. I started to notice that the creases in the page weren’t big enough to create the shadows I needed. I felt the drawing was going to be flat and not produce that emphasis of physicality I wanted to portray. At this point I decided to cover the whole book in pencil marks and then I thought about adding the finer details – shadow and highlights – on top. By this time I was getting frustrated with the piece. I decided to leave it as it was and come back to it with a fresh mind.

On returning to the piece I thought about using charcoal to rub gently over the pages, picking up the parts of the drawing that were raised. I actually thought this made the drawing look better; however, I felt that it was simply completed too quickly and I really didn’t feel that I had produced a labour-intensive drawing.

After some time reflecting on the piece (and considering what I could do on the opposite side) I thought about taking a step back from the concept of hills/nature and focusing on my use of line. I wondered if I could simply repeat a line that followed the creases of the page. To be honest, I didn’t think much more about it than that and I went for it. The drawing was starting to form an interesting pattern, manipulating the surface into something almost mesmerising. Normally, at this point I would start to think about what it meant, what I could do next and how I could develop the idea, but I forced myself to ‘go with it’.

From there, I decided to fill the book with these marks. Each A5 sheet took me 2.5 hours. In total I spent 60 hours on this drawing. For me, that is an achievement in itself as I always rush through ideas and drawings, wanting to make them better or move on to the next thing. The process was both calming and frustrating. All I kept doing was adding up how many pages I had left before I could finish, but it started to become a time for relaxing and ‘switching off’. I just had to focus on the line I was following. I could take the drawing in any direction I wanted and allowed myself not to be too precious. I didn’t rub anything out. If there were any mistakes I left them.
Although I found it frustrating it was also addictive. I would always think, I’ll just finish this part, but then a new line would come into focus and I would have to start that part. I enjoyed the inevitable change in my pencil tip: the thin, newly sharpened tip with its fine, neat mark compared to the blunt edge creating a wider, less controlled mark. The exercise has really taught me to slow down and focus on each individual mark that I made during this process. I think I would like to develop the idea into other mediums and perhaps produce a similar drawing for the assignment piece.

Hoptman, L. (2003) Drawing Now: Eight Propositions. New York: The Museum of Modern Art
Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 1. Guo-Qiang, C. (2004) Tide Watching on West Lake [gunpowder on paper] In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 2. Guo-Qiang, C. (2002) APEC: Ode to Joy [gunpowder on paper] In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 3. Crotty, R. (2002) NGC 5466 “The Ghost” Globular Cluster in Bootes [ink and watercolour on paper] In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 4. Crotty, R. (2004) View of exhibition “Globe Drawings”, Miami Art Museum. In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 5. Muniz, V. (2003) Catedral de Leon [Cibachrome] In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 6. Muniz, V. (2002) Prison XIII, The Well, After Piranesi [Cibachrome] In: Dexter, E. (2005) Vitamin D: New Perspectives in Drawing. London: Phaidon Press Limited
Figure 7. Crotty, R. (1996) Five Nocturnes [ink on paper in bound book] In: Hoptman, L. (2003) Drawing Now: Eight Propositions. New York: The Museum of Modern Art
Figure 8. Rondinone, U. (1999) No. 135 Vierterjunineunzehnhundertneunundneunzig [ink on paper] In: Hoptman, L. (2003) Drawing Now: Eight Propositions. New York: The Museum of Modern Art
First Look: Cai Guo-Qiang’s Extraterrestrial Vision (2018) Pres. Sotheby’s. At: https://www.sothebys.com/en/videos/first-look-cai-guo-qiangs-extraterrestrial-vision (Accessed on 25 January 2019)
Relief Shading (2015) Drawing. At: http://www.reliefshading.com/techniques/drawing/ (Accessed on 25 January 2019)
<urn:uuid:05173f0d-5894-4dc6-b942-44e8b5605cd8>
CC-MAIN-2019-30
https://apowelledm.wordpress.com/exercise-4-2-labour-and-time/
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195532251.99/warc/CC-MAIN-20190724082321-20190724104321-00134.warc.gz
en
0.95475
2,983
2.578125
3
- The editors of the American Heritage® dictionaries have compiled a list of 100 words they recommend every high school graduate should know.
- Dr. Anita Archer presents research on the urgent need for vocabulary instruction and a wealth of strategies to explicitly teach Tier II and III vocabulary to students of any age.
- The world's largest flashcard library.
- A way to improve your vocabulary and to feed the hungry. Rice is donated for every right answer.
- SAT/ACT vocabulary lists for many major novels are listed. On the opening page, go to Resources For Vocabulary Study. On that page, go to Vocabulary for Novels, then pick your novel.
- A dictionary and thesaurus are both available. There is an option to hear each word pronounced.
- Podcasts of songs that help students remember challenging vocabulary.
- A dictionary with a new point of view that catches the eye and enriches the mind.
<urn:uuid:93855c28-176a-43a3-8d15-26eca7376818>
CC-MAIN-2016-50
http://www.resa.net/curriculum/curriculum/english/vocabulary/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541324.73/warc/CC-MAIN-20161202170901-00499-ip-10-31-129-80.ec2.internal.warc.gz
en
0.903164
186
2.625
3
Hardness is a very old measure used by engineers and geologists, first utilised to establish an order for the mechanical properties of minerals. Barba defined it as early as 1640: „... hardness is such a property of precious stones that those which a file can scratch are not so classed.” This describes hardness as a measure of the plastic properties that are responsible for the durability or failure of machines and tools. Consequently, the scratch hardness of Mohs (1822) was the first hardness scale that allowed a ranking of minerals between 10 reference materials.

Since that time many different hardness definitions and units have been introduced. The most widely accepted definition today is that of Martens from 1912: hardness is the resistance of a body against the penetration of another (harder) body. This definition implies a permanent penetration due to plastic deformation, because otherwise both bodies would be left unchanged after unloading and no measure could be derived. All common hardness tests use only normal loading and exclude the lateral forces required for a scratch, in order to simplify the test conditions.

The diversity of hardness definitions nevertheless makes it complicated to compare hardness values. The ASM Metals Handbook, Vol. 8, Mechanical Testing makes the following note: „The definition of hardness varies depending on the experience or background of the person conducting the test or interpreting the test data. To the metallurgist, hardness is the resistance to indentation; to the design engineer, a measure of flow stress; to the lubrication engineer, the resistance to wear; to the mineralogist, the resistance to scratching; and to the machinist, the resistance to cutting.” A clear description of the conditions under which a hardness value was obtained is therefore absolutely necessary. The various hardness standards prescribe clear rules here, which unfortunately are not always followed.

1 H. O'Neill, Hardness Measurement of Metals and Alloys, Chapman and Hall, London (1967)
2 F. Mohs, Grundriß der Mineralogie, Dresden, 1822
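To make the ordinal nature of scratch hardness concrete, here is a minimal illustrative sketch (not part of the original text): a Mohs-style test only brackets a sample between two reference minerals, it does not yield a continuous physical quantity like Martens hardness. The mineral list follows the standard Mohs scale; the function and the scratch-test callable are hypothetical names introduced only for this example.

```python
# Minimal sketch of an ordinal (Mohs-style) scratch-hardness ranking.
# Illustrative only: the scale places a sample between two reference
# minerals rather than measuring a continuous quantity.

MOHS_REFERENCES = [
    (1, "talc"), (2, "gypsum"), (3, "calcite"), (4, "fluorite"),
    (5, "apatite"), (6, "orthoclase"), (7, "quartz"),
    (8, "topaz"), (9, "corundum"), (10, "diamond"),
]

def bracket_sample(scratched_by_reference):
    """Return (lower, upper) Mohs bounds for a sample.

    `scratched_by_reference` is a hypothetical callable that reports
    whether a given reference mineral visibly scratches the sample.
    """
    lower, upper = 1, 10
    for value, name in MOHS_REFERENCES:
        if scratched_by_reference(name):
            upper = value   # a harder reference scratches the sample
            break
        lower = value       # the sample resists this reference
    return lower, upper

# Example: a sample scratched by quartz (7) but not by orthoclase (6)
# is bracketed as (6, 7), i.e. its Mohs hardness lies between 6 and 7.
```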
<urn:uuid:f5bf80de-8406-4a46-b538-e4f3f4ec435d>
CC-MAIN-2020-29
https://asmec.de/en/information.php
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00233.warc.gz
en
0.903871
422
3.765625
4
History of the European Union – Government (200 Level Course)

The European Union consists of 15 member states, and from 2004 on there will most probably be 25 member states. But what were its beginnings, and why was it founded? The EU had its beginnings in the economic sector, but the integration of Europe was a main aim as well. Through the experience of two world wars it was clear that the European states had to develop a kind of connection with each other so intensive and interconnected that future conflicts would be solved by peaceful means.

Six main motivations for European integration:
1. Peacekeeping
2. Belonging to a special system of values
3. Increasing economic prosperity
4. More influence in foreign and security policy
5. More success in solving Europe-wide problems
6. Strengthening of the national economies

So in September 1946 Winston Churchill, in his well-known Zürich speech, suggested an intergovernmental solution with a European court of justice. Several new international institutions were also founded to help regulate relationships between states at the political and economic level, for example the United Nations (October 1945), the International Monetary Fund (1945) and the General Agreement on Tariffs and Trade (1948). At The Hague in May 1948 there were general agreements for closer relationships and either a federal state of Europe or a closer union, which led to the founding of the Council of Europe in May 1949.

In 1950 a new plan was put forward, the so-called "Schuman Plan", created to control the production of coal and steel in West Germany and France; in the end it was signed by six countries, with Italy, Belgium, Luxembourg and the Netherlands included as additional partners. The treaty was signed in 1951 and established the European Coal and Steel Community (ECSC/Europäische Gemeinschaft für Kohle und Stahl). With this step a common market was established, which was completely new, because the states that signed the treaty were surrendering a substantial proportion of their national sovereignty to the newly created supranational institution.

In 1955 the Benelux states made a proposal at the Messina Conference, which was led by the Belgian foreign minister Paul-Henri Spaak, to work together in the nuclear energy sector; there were also proposals for a customs union (Zollunion). So they created a commission, which worked out the details for the customs union and for an organisation for the development and use of nuclear energy. On this basis the Treaties of Rome were signed in 1957, establishing the European Economic Community (EEC/Europäische Wirtschaftsgemeinschaft) and the European Atomic Energy Community (EAEC/Euratom). Both treaties came into force on 1 January 1958.

The EEC was a huge success and exceeded all expectations: it became one of the most important trading partners in the world, GDP in the member states grew by 21.5% in the years 1958-1962, and industrial production grew by about 37% over the same period.

In the 1960s the integration process slowed down: because of the nuclear balance between the USA and the USSR, the conflict lost its integrative effect, and national interests became more important, leading to doubts about the necessity of further integration steps. For example, there were attempts in 1966 to change the decision-making process in the Council of Ministers from unanimity to the majority principle, but the French government was against it.
In 1965 there had also been proposals for a new financing system for agriculture, but France blocked them with its so-called "empty chair policy" (meaning it withdrew all its ministers from the Council for over half a year). The result was that the principle of unanimous decision-making continued de facto.

The stagnation ended in the early 1970s, when Great Britain, Ireland and Denmark joined the EEC; only Norway's people said no to integration in a referendum. Afterwards, at the end of the 1970s, negotiations on the accession of Greece, Spain and Portugal began, not only for economic reasons but also to stabilise democracy in these countries. In January 1981 Greece joined, followed by Spain and Portugal in January 1986.

The next step was the "Single European Act" (SEA/Einheitliche Europäische Akte), intended to give the European states an economic impulse and to stop the European economy from falling behind the USA and Japan. The main aim of the SEA was therefore the internal market programme (Binnenmarkt) with its four freedoms: the free movement of goods, persons, services and capital. The scope of integration policy was also extended: new areas were development and technology policy, environmental policy, economic and monetary policy, as well as labour protection and social policy.

For further integration and deepening, two intergovernmental conferences led to monetary and political union in the 1992 Treaty of Maastricht, which came into effect at the end of 1993. The next treaty was signed in Amsterdam in 1997 and concerned, among other things, asylum in the EU and external border controls. Finally, I want to mention the Treaty of Nice, which addressed foreign and security policy in the union and a list of basic and human rights in the EU.
<urn:uuid:d3837428-9565-449d-b353-ee50da91fcee>
CC-MAIN-2023-50
https://freeonlineresearchpapers.com/history-european-union/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101195.85/warc/CC-MAIN-20231210025335-20231210055335-00622.warc.gz
en
0.969108
1,116
3.546875
4
From Cornell International Affairs Review, Vol. 7, No. 1
Soft Power Deployment on the Korean Peninsula

South Korea, also known as the Republic of Korea (ROK), is a remarkable country in many ways. It survived the Korean War, supported by American military assistance. It successfully transitioned to democracy after nearly 40 years of authoritarian government. South Korea now boasts a strong economy that joined the trillion-dollar club of world economies in 2004.i The South Korean wave (hallyu) looks unstoppable with its success in Asia and around the globe. But in spite of its impressive résumé, the ROK has been criticized for its handling of diplomatic relations with the Democratic People's Republic of Korea (DPRK), after its northern neighbor's nuclear tests caused a shutdown of the Kaesong economic zone in April 2013. Debates about an appropriate solution to the diplomatic impasse became a common topic in academia and the media. Many pundits pointed to the ROK's soft power as a potential response to Northern aggression, while ignoring the full possibilities and operational mechanisms of this soft power.

This paper argues that South Korea has failed to effectively exercise its soft power capability on North Korean territory due to the DPRK's profoundly isolationist nature, manifested in the regime's draconian political and economic policies. It claims that the ROK's ability to exercise its soft power in dealing with North Korea is undercut by China, because of the DPRK's central geopolitical significance for Beijing. The paper is structured in five sections. The first section briefly discusses soft power as a concept and presents the theoretical framework for the analysis. The second section discusses the ROK's soft power resources. The third and fourth sections discuss the ROK's approaches with regard to North Korea, and explain why soft power failed to effectively handle the North Korean threat. Finally, the paper concludes that a soft power approach will not contain the North Korean threat due to the ROK's failure to penetrate the DPRK's isolated system, and because of the DPRK's geostrategic importance to China as a buffer zone.

Harvard academic Joseph Nye coined the term soft power in his book Bound to Lead in 1990. While the concept has generated extensive academic discussions, it has also resulted in the misuse and misinterpretation of the term. Establishing a solid framework for the concept of soft power and its operational mechanisms is crucial for this paper's argument. Nye defined soft power as an "ability to affect others through the co-optive means of framing the agenda, persuading, and eliciting positive attraction in order to obtain preferred outcomes".1 Before Nye elaborated his concept, E.H. Carr discussed a similar idea – power over opinion – in his distinguished work The Twenty Years' Crisis. He states that power over opinion is "not less essential for political purposes than military and economic power".2 This can be seen as an initial version of the concept of soft power, later advanced and brought to academia's attention by Joseph Nye.ii Conversely, other scholars see it as simple marketing, or worse, propaganda. For example, Christopher Layne claims that "soft power is a means of marketing" a certain state's brand, which can be measured by opinion polls.3 The term has been misused and misinterpreted to the extent that it has started to be used to refer even to culture and humanitarian aid.
Critics claimed that "soft power now seems to mean everything," as it can refer to concepts as disparate as multilateralism, democratic values, and markets.4 However, such a simplistic definition overlooks soft power's significance and mechanisms. Nye addresses this criticism by stating that the authors confuse "the actions of a state seeking to achieve desired outcomes with the resources used to produce those outcomes".5 Nye suggests, for example, that attractive culture and democratic values increase soft power resources, but the fact that a certain state's culture and values are appealing does not mean the state automatically projects its soft power. Furthermore, Nye states that "whether one or another type of resource produces power in the sense of desired behavior depends upon the context".6 Both context and actions matter: the failure to make a judicious decision with regard to the Vietnam War resulted in a military disaster for the United States that damaged the American economy and harmed the nation's image abroad. It is important to remember that soft power is an important option in the foreign policy toolkit that should be utilized when the context is appropriate. Nye admits that "soft power is not the solution to all problems".7

Other forms of power are hard and economic power. Hard power refers to the use or threat of military force. Hard power resources are relatively easy to estimate by calculating the size of a country's conventional military forces. It can operate through inducements ("carrots") or threats ("sticks").8 Specific examples of the exercise of hard power include: conducting military operations, backing up coercive threats, protecting allies, conducting peacekeeping missions, or providing different forms of assistance, such as training military personnel in other countries.9 Economic power, on the other hand, is more complicated and intricate because it can function as both hard and soft power. In the simplest terms, it means rewarding states with economic benefits for good behavior, and punishing them with sanctions for not complying. When economic sanctions undermine the livelihoods of people on the receiving end, they can be considered an exercise of hard power. Yet economic success that attracts other states to a particular type of economic model may boost that state or states' soft power. For example, the European Union model of economic integration has attracted states from the former Eastern Bloc to change their domestic policies and structures in order to join the union.10

Thus, the context for the deployment of a certain type of power must be carefully examined. In Libya, for example, diplomatic resources and economic sanctions had only limited results. Gaddafi's 40-year-old regime was not eliminated and replaced with the Transitional National Council until NATO executed an air campaign in 2011.iii Failure to understand the context and to act accordingly using the appropriate tools can result in a strategic fiasco.

This paper defines soft power as the ability of state A, by projecting the "three B's" (benignity, beauty, and brilliance), to persuade state B to do something, dependent on successful execution during the transmission and reception stages. I will use Alexander Vuving's and Kondo Seichi's conceptual frameworks in order to illustrate the necessary features for successful soft power deployment. According to Kondo Seichi, the deployment of soft power consists of four stages: resources, transmission, reception, and outcome.
He compares these stages of deployment to those of missiles: one needs to first possess missiles (resources) with the appropriate delivery mechanism (transmission), able to penetrate an enemy's territory (reception), and destroy targets on the enemy's terrain (outcome).11 In addition, Kondo states that, "With malfunction at any stage, power cannot determine the outcome".12 In other words, in order for soft power to work, every stage, particularly transmission and reception, should be executed with caution and foresight.

Alexander Vuving further explains that "softness" is attained by the three "power currencies" of "beauty, benignity and brilliance".13 Benignity describes an unselfish non-threatening behavior toward other actors which "produces gratitude and sympathy" in response.14 Switzerland is a classic example: much of its soft power derives from its diplomacy's emphasis on neutrality rather than alliance building. Brilliance, in turn, produces admiration due to high performance and one's success.15 China's economic miracle, for example, has been one of the biggest sources of its soft power.iv Many states strive to emulate Chinese economic success by following the "Beijing Consensus" model.v Beauty refers to the "resonance that is evoked when you represent ideals, values, causes, or visions".16 For instance, the European Union's commitment to peace and democratic values has made it one of the biggest soft power "heavyweights." In sum, states attain soft power resources by projecting beauty, benignity, and brilliance. Once these resources are attained, soft power can be projected following Kondo's stages of transmission and reception, with the outcome dependent on the success or failure of these stages.17

ROK's Soft Power Resources

South Korea's miraculous economic development and rapid democratization gave it crucial soft power resources. Indeed, this economic and democratic success has given the ROK the ability to attract other states through Vuving's concept of brilliance. Embracing democracy and democratic values secured it respect and prestige from like-minded states. Culturally, the Korean wave has influenced dozens of countries from Asia to South America. In many respects, Gangnam Style's "infiltration" into western markets brought K-pop to light and boosted its popularity. South Korean dramas and movies have swept the globe, and have given South Korea an upper hand in its soft power rivalry with Japan and China despite its late start.vi Two crucial features that contribute to the Korean wave's attractiveness and potential expansion are its cultural proximity to China and Japan and, more importantly, the fact that Korean products are free from historical antagonism.vii The ROK's assistance to Haiti in 2010 further demonstrated how the combination of military and economic resources can increase a state's soft power and international prestige.viii In addition, by 2010 the ROK had established 35 King Sejong Institutes for Korean language and culture, and has plans to increase their number to 150 worldwide by 2015.ix Co-hosting the 2002 World Cup with Japan and reaching the semi-finals reinforced South Korea's position in the world.x

It is important, however, to keep in mind that the above activities and attributes only cultivate soft power resources; they cannot by themselves deliver desired outcomes.
Having many resources does not necessarily mean that a state can proceed to the further stages in Kondo's sequence of soft power deployment – namely, transmission and reception – to produce a desired outcome. Moreover, having an abundance of soft power resources does not make its usage necessary. Policymakers need to calculate whether the "deployment" of soft power can bring the desired results in a specific context.

Deploying Soft Power in North Korea

From the onset of the Korean War until the election in 1988 of the first civilian president, Roh Tae-woo, the ROK's foreign policy was influenced by Cold War logic. The United States supported South Korea militarily and economically while the Soviet Union propped up the North Korean regime. In 1988, Roh Tae-woo articulated the Nordpolitik policies that had important domestic and international implications. The policy included warming relations with eastern-bloc states, China, and the Soviet Union while easing tensions with North Korea. As a result of these policies, China became one of South Korea's most important trade partners. It is noteworthy that, although Nordpolitik was framed as a foreign policy initiative to improve relations with North Korea and move towards reunification, it also enormously boosted the legitimacy and popularity of the new, democratically elected president. Economically, President Kim Young-Sam (1993-1998) undertook a series of labor and chaebol reforms, and other financial reforms that significantly improved the economy.xi By 1993, South Korea's economic miracle was fast being realized through development and industrialization.xii

Subsequently, President Kim Dae-jung articulated the Sunshine policy in 1998, which lasted until the 2008 election of conservative President Lee Myung-bak. Under the Sunshine policy, the ROK shifted its approach from a hostile hardline policy to reconciliation and cooperation with the North.xiii Although the initiative was successful in securing a Nobel Peace Prize for Kim in 2000, and getting North Korea to the negotiation table during the Korean summit meeting in Pyongyang in 2000, it failed to achieve the desired concrete result – North Korea's rejoining the Nuclear Nonproliferation Treaty (NPT).xiv

Many experts have voiced critical views about the manner in which South Korea utilized its soft power. For example, Sarah K. Yun, the Director of Public Affairs and Regional Issues for the Korea Economic Institute, claims that "in order to effectively employ soft power, Korea should specify its goals and desired outcome". Similarly, Geun Lee states that "[South] Korea's soft power capacity is still very limited, […], because Korea has not been interested in developing and applying soft resources to produce influence in the region and on the global stage".18 Such criticisms, however, ignore the mechanism of soft power influence. A country can specify its goals for soft power, but that does not mean it can project it. The American goal during the Vietnam War was clearly stated and specified: winning the war against communist North Vietnam. However, this did not make the projection of soft power more effective. South Korea should indeed deploy soft power in dealing with the North, but it cannot simply transmit it to Pyongyang. The DPRK's dedication to isolating itself from the world impairs the transmission and reception phases of soft power, making the effective exercise of a soft power approach improbable.
The issue is not that South Korea does not have enough soft power resources (it has them in abundance); rather, it is the transmission phase that remains a stumbling block. The ROK has been active and aware in enlarging its soft power resources, and employing them wherever possible. However, the DPRK's regime has sought to seal its borders from possible penetration of K-pop and Korean dramas that would reveal the ROK's economic and political success. Consider the fact that possession of a tunable radio has been a crime in North Korea since the late 1950s. All legally sold radios must be purchased from authorities and tuned to the official channels.19 The right to use the Internet—a distorted and tightly controlled official version—is reserved for elites and is unavailable to average citizens. The biggest deterrence mechanism for possession of any forbidden materials is sentencing to labor in gulags, a Stalinist relic that the DPRK has preserved in the 21st century.

For better or worse, non-state actors and individuals can also seek to exercise soft power. Non-state actors' involvement had destabilizing effects on the Korean Peninsula in 2012, when South Korean activists launched "balloon attacks" on the North. These balloons contained propaganda messages, as well as more tangible items like socks, cash, and medicine. Despite these South Korean activists' expressed desire to help their semi-compatriots, and to reveal the lavish lifestyles of North Korea's elite, their attempts to project soft power with balloons have ultimately been counterproductive. President Lee Myung-bak sent police forces to prevent the unauthorized soft power projection. At the same time, another group of activists criticized the balloon campaign for instigating conflict between the Koreas.xv In response, the DPRK launched a counterattack with its propaganda. But North Korea also took it one step further and threatened to substitute bombs for the leaflets in the future.xvi Such soft power deployment by non-state actors highlights the significance of the hostile territory and the context in which it is exercised.

The unwillingness of North Korean leaders to reform the economy, even on a limited scale, demonstrates the nature of the regime's commitment to isolate itself and prevent information leakages that undermine the ROK's ability to transmit soft power. For the DPRK's leadership, the main concern remains its political survival rather than liberalizing its economy or relaxing its laws. Therefore, economic cooperation projects like the Kaesong Industrial Complex and the rule of law are primarily political tools, and only secondarily economic and judicial. Kaesong was designed under the Sunshine policy to foster cooperation and decrease tensions between the two Koreas. Therefore, not only did it provide economic incentives for both states, but for South Koreans it also contained an optimistic political aspiration of future reunification.xvii However, the DPRK government closed the complex in April 2013 amid an atmosphere of high political tension, and only reopened it on September 16, 2013. At the time the Kaesong complex was shut down, it hosted 123 South Korean firms and employed nearly 53,000 people, mostly from North Korea.
Currently, only 70 percent of factories have resumed their production.xviii Kaesong, for many pundits, represents a hope for a gradual transformation and economic integration of the DPRK in the region. Naturally, the pecuniary side of the project has benefited both parties, but North Korea was the main beneficiary since the project provides it with scarce foreign currency. Total output reached $470 million in 2012; before the shutdown the project as a whole generated an estimated $2 billion in trade.xix Sang-Young Rhyu describes Kaesong as an "experiment, testing whether North-South economic cooperation can contribute to the enhancement of political and military peace on the Korean Peninsula".20 If Kaesong serves as an experiment, then North Korea has sought to demonstrate that it will be in charge of it. When the DPRK refused an early negotiation offer from the South about reopening Kaesong, this signaled that the North would dictate the conditions to suit its economic and – more importantly – political interests.xx The recent Kaesong affair portrayed new leader Kim Jong-un as a powerful man who indeed ruled the country. Formally nullifying the 1953 armistice on March 10, 2013 and intensifying North Korean rhetoric were already alarming, but closing down the Kaesong project gave the words substance.

North Korea's commitment to political and economic isolation undermines the ROK's ability to employ soft power, especially during the transmission and reception stages. Closure of the Kaesong project demonstrated that the cash-starved DPRK regime is willing to lose one of its most significant sources of hard currency in order to give its threats credibility. North Korea's agreement to establish the Kaesong zone can be viewed as an effort to break away from isolation; however, a five-month abeyance of the project reveals the DPRK will maintain economic relations with the ROK only as long as they serve the regime's objectives. And when they do not, the regime is quick to react.

Similarly, the DPRK's inconsistent legal framework and rejection of the rule of law reveal its priority of regime power over economic development. The Sinuiju Economic Zone is governed by the Basic Law of the Sinuiju Special Administrative Region. Nonetheless, the whole legal structure of the zone is sabotaged by Article 4, which declares, "The Presidium of the Supreme People's Assembly will interpret this law".21 The regime applies the same jurisdictional power to the Kaesong complex – the DPRK will close and open the project when it pleases. Darren Zook notes, however, that the DPRK's failure to establish a cohesive legal framework undermines regime stability and its ability to improve the state's economy. Consequently, South Korea's soft power cannot break into such a tightly controlled environment during the transmission phase.

Many scholars voice their frustration about the efficiency of soft power with regard to South Korea's failure to resolve the security threat from the North. For example, Shin Wha Lee argues that, in order to achieve desired outcomes in dealing with North Korea, Seoul needs to be firm "in demanding greater openness and reciprocity from Pyongyang".22 However, such openness and reciprocity from the DPRK is unlikely, judging from its past behavior.
Even when the now-defunct Six-Party Talks (between the two Koreas, the United States, China, Russia, and Japan) addressing nuclear disarmament were active, most observers agree that the DPRK failed to honor its promises. Even after the other states provided food and energy, albeit in a delayed manner, North Korea still refused to join the NPT and allow International Atomic Energy Agency (IAEA) observers back into the country.

Nevertheless, some scholars and policymakers believe that South Korea should continue its efforts to influence the North using soft power. For example, Andrei Lankov suggests that South Korea should initiate student exchange programs and spread information about prosperous lifestyles in South Korea through radio channels, documentaries, and Korean dramas, drawing a parallel with the American Cold War approach towards the USSR.23 However, comparing the DPRK to the Soviet Union and attempting to apply similar educational and informational solutions ignores and misrepresents the realities of the two states. Socioeconomic conditions and the state's ability to influence people's mentality through propaganda are incomparable. In 2008, a survey of refugees living in Seoul concluded that 75 percent of North Korean defectors do not express negative sentiment for Kim Jong-il. Another set of interviews revealed that only 9 percent of North Korean refugees mention political reasons for fleeing the country, while 55 percent explain that their decision to leave the DPRK was due to a lack of sustenance.24 Consequently, the North Korean ability to influence citizens' mentality appears far greater than that of the former USSR.
<urn:uuid:23a6ce6c-56fe-43e8-af1a-1fe12a01df75>
CC-MAIN-2022-05
http://www.inquiriesjournal.com/articles/1482/soft-power-deployment-on-the-korean-peninsula
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00023.warc.gz
en
0.948364
4,328
2.953125
3
From the Internet of Medical Things and telehealth to smart medical technology, such as predictive analytics and conversational AI, there are certainly major technology trends disrupting the healthcare industry worldwide. But is digital disruption enough for people to enjoy healthier and longer lives?

As a regulated sector, healthcare is prone to the effects of developing technology, and lately we have seen the pace of disruption accelerate as more and more large tech companies merge and invest, making their way into the healthcare sector. In this respect, the newest trends involve, among other things, predictive analytics and conversational artificial intelligence that can reduce tedious work for hospital administrators and enable smooth functioning, as well as the Internet of Medical Things (IoMT). The IoMT is the incorporation of applications and medical devices that are connected to healthcare IT systems. Using networking technologies, the IoMT enables the transfer of medical information over secure networks. Telehealth is also getting a lot of attention. By combining mobile technology and document sharing, telehealth provides better healthcare access and is rapidly making its way into the ICU.

But what does this all mean for patients? There is certainly an upside to digital disruption for healthcare organizations and the healthcare sector in general, healthcare providers and patients alike. More tools and advanced technology means better care, right? Well, almost. Digital disruption in the healthcare sector should not only mean technological evolution but should include many other necessary factors and improvements across the healthcare ecosystem. Unless we also invest in patient education and involvement, as well as patient access to the new technology, all these new tools could be only partially utilized.

There's no better example to illustrate this point than vaccines. Vaccines have been around much longer than the technological advances described above. They were nevertheless cutting-edge technology when they were first introduced to the market. Since then their beneficial effects in providing artificial immunity to diseases and shielding the human organism have been unequivocally proven and well documented. Yet targeted misinformation has created doubt, which has spread like wildfire, resulting in epidemics caused by diseases that we thought had been eradicated decades ago.

By investing in patient education, healthcare organizations and, more directly, healthcare professionals can impart information to patients and their caregivers that will guide and perhaps alter their behavior toward the healthcare system, helping them shed misconceptions, build trust in new technology, and improve their overall health status. Patient participation is a key component of the healthcare process as a means to improve patient safety, as well as to optimize the ethics, relevance, accountability and transparency, communication, promotion and implementation of new technology. The effects on the real economy should also be considered, as primary and preventive care driven by patient involvement greatly reduces future healthcare costs.
<urn:uuid:3516befd-87f3-4156-85a4-7c770ca25391>
CC-MAIN-2022-05
https://www.amcham.gr/business-partners/viewpoint/got-health/
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00685.warc.gz
en
0.962966
538
2.703125
3
Carbon capture and storage (CCS) is one of the few things which unites environmentalists and the fossil fuel industry. For environmentalists, it's a way to clean up polluting industries like steel and cement. For coal, oil and gas execs, it's a way to shore up business models without busting carbon budgets. Yet until now, no one was certain that it would work, at least not in the long run. Researchers with Pacific Northwest National Laboratory have proved that carbon dioxide injected into basalt can be converted into rock within just two years, hundreds of years faster than originally expected. Once the carbon dioxide has solidified into ankerite, it should remain locked up forever, proving that the technology works with a rock found around the world. But CCS isn't a cure-all. Earth only has a limited supply of basalt, and some experts fear the tech could slow down the transition to renewable energies. Nevertheless, the Paris Climate Agreement requires that dirty industries are cleaned up fast, and CCS may prove the best solution yet.
<urn:uuid:caf33e85-ee65-45de-bbaa-943524ec1df6>
CC-MAIN-2017-34
http://www.huffingtonpost.co.uk/entry/carbon-dioxide-can-be-turned-into-rock-in-just-two-years_uk_583320dfe4b09025ba330d12
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00112.warc.gz
en
0.935502
225
3.578125
4
George Gordon, Lord Byron (1788-1824)
English Romantic poet and satirist, Byron was brought up in poverty in Scotland. At the age of 10 he inherited his great-uncle's title and property, and moved to Newstead Abbey, England. Byron was educated at Harrow and later Cambridge. Travels in Greece resulted in the sardonic poem Childe Harold's Pilgrimage. In January 1815 he married Annabella Milbanke, who bore him a daughter, Augusta Ada, and then left him. During 1818-23, years spent with Teresa Guiccioli, he wrote three cantos of Don Juan, a satirical romance, the Prophecy of Dante, and four poetic dramas. Longing to help Greece obtain independence from Turkey, he joined their fight in December 1823, but died of fever on April 19, 1824. Refused burial in Westminster Abbey, he is buried with his ancestors near Newstead Abbey.
Bologna, 25 August, 1819
My dearest Teresa,
I have read this book in your garden;–my love, you were absent, or else I could not have read it. It is a favourite book of yours, and the writer was a friend of mine. You will not understand these English words, and others will not understand them,–which is the reason I have not scrawled them in Italian. But you will recognize the handwriting of him who passionately loved you, and you will divine that, over a book which was yours, he could only think of love. In that word, beautiful in all languages, but most so in yours–Amor mio–is comprised my existence here and hereafter. I feel I exist here, and I feel I shall exist hereafter,–to what purpose you will decide; my destiny rests with you, and you are a woman, eighteen years of age, and two out of a convent. I love you, and you love me,–at least, you say so, and act as if you did so, which last is a great consolation in all events. But I more than love you, and cannot cease to love you. Think of me, sometimes, when the Alps and ocean divide us, –but they never will, unless you wish it.
<urn:uuid:e0e17155-53fe-45d1-8331-9f0ecf23962d>
CC-MAIN-2018-47
http://theromantic.com/LoveLetters/lordbyron2.htm
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744750.80/warc/CC-MAIN-20181118221818-20181119003818-00447.warc.gz
en
0.975094
498
2.53125
3
Small-scale fisheries (SSF) make important but undervalued contributions to the economies of some of the world's poorest countries. They also provide much of the animal protein needed by societies in which food security remains a pressing issue. Assessment and management of these fisheries is usually inadequate or absent and they continue to fall short of their potential as engines for development and social change. In this study, we bring together existing theory and methods to suggest a general scheme for diagnosing and managing SSF. This approach can be adapted to accommodate the diversity of these fisheries in the developing world. Many threats and solutions to the problems that beset SSF come from outside the domain of the fishery. Significant improvements in prospects for fisheries will require major changes in societal priorities and values, with consequent improvements in policy and governance. Changes in development policy and science reflect these imperatives but there remains a need for intra-sectoral management that builds resilience and reduces vulnerability to those forces beyond the influence of small-scale fishers.
<urn:uuid:6c21edbf-473e-45a0-8b3e-8b88b0eba831>
CC-MAIN-2018-30
https://www.mendeley.com/research-papers/diagnosis-management-small-scale-fisheries-developing-countries/
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591683.78/warc/CC-MAIN-20180720135213-20180720155213-00339.warc.gz
en
0.93409
224
2.546875
3
New Global Fund Report: Funding Needed to Mitigate The Impact Of COVID-19 as HIV, TB And Malaria Deaths Could Nearly Double
As the global effort to end HIV, tuberculosis and malaria around the world faces new challenges due to COVID-19, the Global Fund to Fight AIDS, Tuberculosis and Malaria has published a new report titled "Mitigating The Impact Of COVID-19 On Countries Affected By HIV, Tuberculosis And Malaria."
Focusing on the countries the Global Fund invests in to fight HIV, TB and malaria, the Global Fund estimates that at least US$28.5 billion is required for the next 12 months to adapt HIV, TB and malaria programs to mitigate the impact of COVID-19, to train and protect health workers, to reinforce systems for health so they don't collapse, and to respond to COVID-19 itself, particularly through testing, tracing and isolation and by providing treatments as they become available.
The Global Fund is a founding partner of the Access to COVID-19 Tools Accelerator (ACT-Accelerator) – a global collaboration to accelerate development, production and equitable access to new COVID-19 technologies. For its portion of the $28.5 billion, if the Global Fund secures a further $5 billion for the next year, it could:
- Adapt HIV, TB and malaria programs to mitigate the impact of COVID-19 and safeguard progress
- Protect front-line health workers through training and provision of PPE
- Reinforce critical aspects of systems for health to avoid collapse and to sustain the response
- Fight COVID-19, particularly through testing, tracing and supporting isolation, and through treatment services (as therapeutics become available).
In 2018, combined deaths from HIV, TB and malaria amounted to 2.4 million people. Without decisive action, COVID-19 could double that annual death toll across the three diseases, a setback to levels not seen since the peak of the epidemics, wiping out nearly two decades of progress.
The world faces potentially 534,000 additional AIDS-related deaths in 12 months over 2020-2021 compared to 2018 as a result of the COVID-19 pandemic. HIV prevention programs are seeing significant disruption, often depending on community and face-to-face interventions rendered impossible during lockdowns. Similarly, access to lifesaving antiretrovirals has been made more difficult for some by restrictions on movement, local stockouts, and in some cases, increased stigma and discrimination.
There could be 525,000 additional TB deaths in 2020 compared to 2019 as a result of the COVID-19 pandemic. TB's potential for confusion with COVID-19, given the similarity of initial symptoms and the diversion of diagnostic resources, risks fueling stigma and hindering case finding. As with HIV, some people with TB have encountered difficulties in sustaining treatment.
The world could see 382,000 additional malaria deaths in 2020 compared to 2018 as a result of the COVID-19 pandemic. Delays in mosquito net distribution and indoor spraying programs have threatened to undermine vector control for malaria. Meanwhile the testing and treatment of people with fevers, particularly children, depends critically on the availability of health workers, who might be unable to travel, sick or scared to expose themselves without protective equipment.
A recent Global Fund survey found that, across 106 countries, three-quarters of HIV, TB and malaria programs are facing disruptions due to COVID-19.
<urn:uuid:1d9087ea-4be9-4102-8248-aa424f63765b>
CC-MAIN-2020-40
https://www.theglobalfight.org/global-fund-covid-funding-needed/
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00334.warc.gz
en
0.931229
728
2.515625
3
Many people end up frustrated and confused when trying to set up a wireless router themselves. They usually attempt to follow the setup CD or instructions that came with the router and end up giving up or paying someone else to do it for them. This article will outline the three basic concepts you need to understand to set up a wireless router yourself.

I'm going to show you how to do it without using the setup CD that comes with your router and without any of the fancy gadgets or push buttons designed to make setting up a wireless router easy. While these methods may seem easy on the surface, they don't always work. They also keep you isolated from any understanding of what is actually going on. If a simple mistake is made you may get stuck and be forced to turn to someone else for help.

Once you understand how to set up a wireless router you'll also understand how to set up just about any wireless device on the market, including printers, game consoles, iPads etc. The three basic concepts you need to understand about wireless routers and wireless networking security are:

1. Your SSID - Service Set Identifier. This is a big-sounding word that simply means the name of your wireless network. It's best to change this from the default and give it a name that means something to you but means little to someone else. Something like ILHMAP for "I Love Home Made Apple Pie" is good.

2. Your Encryption Type - You need to understand the hierarchy of wireless encryption. It all started with WEP, or Wired Equivalent Privacy. This came standard with most B and G routers. As computer processor speeds increased, WEP became easier and easier to crack, so a new standard came out called WPA. WPA uses TKIP as its encryption. Soon after WPA came out, WPA2 was introduced. WPA2 uses an even stronger form of encryption called AES. Some older operating systems and game consoles will only work with WEP. When you can, you want to use WPA or WPA2. Many times you can choose WPA/WPA2, which allows you to use both types of encryption with the same password. This is a very popular choice when setting up security on a wireless router.

3. Your Pass Phrase - This is also known as the password or "encryption key". It's often confused with the router password. The router password is simply the password you use to log into the router. The encryption key is what allows a computer, printer or other network device to connect or "associate" with the wireless router. WEP passwords are generated by typing in a word or phrase. The result is usually scrambled into something like "17B295FcA8". You then have to type these hexadecimal characters into each of your devices. Not very user-friendly. WPA and WPA2 do not generate difficult-to-remember hex numbers like WEP. You can simply type in 8-63 characters such as "My dog barks 2 loud". In this example spaces count as characters and the "M" in "My" MUST be capitalized.

Now that you understand the basic concepts involved with wireless networking, let's put them to use. All that's really left to do now is access the router's web interface and enter the parameters mentioned above. In order to access the router's web interface you need to know three things:

1. The router's IP address.
2. The router's username and password.
3. Whether your computer is on the same network as the router's IP address.

The router's IP address is usually something like 192.168.0.1, 192.168.2.1 or even 10.0.0.1. The documentation that came with the router should provide this.
If you don't have the documentation, simply do a search engine search for "router make and model default IP."

Username And Password

The router's username and password are usually along the lines of "admin" and "password". If these don't work, simply do a search engine search on "router make and model default password". If it's a second-hand router you may need to hard reset it to get it back to its defaults. Hard resetting usually involves poking a paper clip into a tiny hole in the back of the router and holding it for 15 to 30 seconds and releasing. Once you release it, the router will reboot and return to its factory default settings, and you'll be able to use its default username and password to log on.

If your computer is on the same network as the router's IP address, you'll be able to connect. If not, you won't be able to connect. Network devices need to be on the same network to communicate with each other unless they're using a specially configured router to join their separate networks. Once you know your router's default IP address, simply go to your computer's command prompt and type in IPCONFIG. This will return your computer's IP address. If the first three "octets" of your router's IP and your computer's IP line up, you're on the same network. If they don't, you won't be able to connect.

Let's look at an example of this: if your router's default IP address is 192.168.1.1 and your computer's IP address is 192.168.0.4, you won't be able to connect.

So if your PC is not on the same network as the router, what do you do? Simply connect one end of an ethernet cable to one of the four ports in the back of your router and the other to the network port in your PC and reboot. When the computer reboots it will automatically pick up an IP from your router's built-in DHCP server that will allow you to connect to the router.

Now that all the stars are properly aligned it's time to connect to the router. To do this simply:

1. Enter your router's IP address into your favorite browser (IE, Chrome, Firefox etc.) and hit "enter".
2. Type in the username and password to access the router.
3. Find the "Wireless" or "Wireless Security" section in your router and enter the SSID, Encryption and Pass Phrase parameters as discussed above.

Once you're done, you simply need to enter the pass phrase you created into the wireless utilities in each of your network devices. This is a simple matter of clicking or tapping on your SSID, entering your pass phrase and clicking on "connect!"
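As a rough sanity check of the "first three octets" rule described above, here is a small illustrative script. It is not part of the original article, the addresses in it are hypothetical examples, and it assumes the common home-network case of a 255.255.255.0 subnet mask (a /24 network):

```python
# Compare the first three octets of two IPv4 addresses to guess whether a
# computer and a router are on the same /24 home network.
# Assumes a 255.255.255.0 subnet mask; the addresses below are made-up examples.

def same_home_network(router_ip: str, computer_ip: str) -> bool:
    """Return True if both addresses share their first three octets."""
    return router_ip.split(".")[:3] == computer_ip.split(".")[:3]

examples = [
    ("192.168.1.1", "192.168.1.12"),  # first three octets match -> same network
    ("192.168.1.1", "192.168.0.4"),   # third octet differs -> different networks
    ("10.0.0.1", "10.0.0.7"),         # first three octets match -> same network
]

for router, computer in examples:
    verdict = "same network" if same_home_network(router, computer) else "different networks"
    print(f"router {router} / computer {computer}: {verdict}")
```

If your network uses a subnet mask other than 255.255.255.0, the octet shortcut no longer holds; Python's built-in ipaddress module can perform the more general check.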
<urn:uuid:597aebe2-b14b-42e4-bb20-45b63fc2cc85>
CC-MAIN-2015-35
http://netgearwirelessnrouter.blogspot.com/2011/10/how-to-set-up-wireless-router.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646249598.96/warc/CC-MAIN-20150827033049-00289-ip-10-171-96-226.ec2.internal.warc.gz
en
0.937606
1,359
2.5625
3
On-line version ISSN 2078-5135
SAMJ, S. Afr. med. j. vol.104 no.10 Cape Town oct. 2014
ISSUES IN PUBLIC HEALTH
M N Mokabane I; M M Mashao II; M van Staden III; M J Potgieter IV; A Potgieter V
I Nelly Mokabane is an MSc student in the Department of Physiology and Environmental Health, School of Molecular and Life Sciences, University of Limpopo, South Africa
II Mercy Mashao is a junior lecturer in the Department, and currently registered for an MSc degree in Physiology
III Marlise van Staden, PhD, is a senior lecturer in the Department, with a keen interest in obesity, and its causes and effects
IV Martin Potgieter, PhD, is an associate professor in the Department of Biodiversity, School of Molecular and Life Sciences, University of Limpopo
V Annelize Potgieter is manager of the Science Centre at the University of Limpopo

The increasing prevalence of overweight and obesity among female adolescents is a global health problem. In developing countries such as South Africa, this increase is often associated with urbanisation and the adoption of a Western lifestyle. Two aspects of the Western lifestyle that contribute to the development of overweight and obesity are a decrease in physical activity levels and an increase in the consumption of energy-dense food, high in fats and refined sugar. Information on the prevalence of increased body fatness in populations in transition is scarce, but necessary for effective planning and intervention. Current indications are that there is a trend towards unhealthy behaviour among high-school girls, globally and in South Africa. Schools can play an important role in the prevention of overweight and obesity among schoolgirls. It is recommended that school governing bodies institute remedial action to prevent weight gain in children, especially girls.

Obesity develops rapidly during adolescence. According to the World Health Organization (WHO), childhood obesity is a major global public health problem. The prevalence of overweight and obesity is increasing rapidly in Africa; between 1990 and 2010, the number of overweight or obese children doubled. Armstrong et al. noted that there was an increase in overweight and obesity in South Africa (SA) from 1994 to 2004. According to the WHO, females are more likely to be obese than males. This is especially true for black females in SA, who, according to Martorell et al., are often overweight or obese, or have abdominal obesity. In African culture increased body fatness is viewed as a sign of health and wealth. Mothers therefore tend to over-feed their infants, thereby increasing the risk of developing obesity. Although traditional culture remains strong in many parts of SA, a transition is currently occurring between traditional and Western-orientated lifestyles. This has potential consequences for overweight and obesity among black schoolchildren, especially girls, who live in periurban and urban environments. Their increase in body fatness increases the risk of development of chronic diseases of lifestyle such as hypertension, stroke, coronary heart disease and type 2 diabetes mellitus.

Possible causes for increased body fatness in schoolgirls

Since increased body fatness is a consequence of a positive energy balance, schoolgirls who use less energy than they consume will gradually become overweight and eventually obese.
The two most obvious causes of increased body fatness in schoolgirls are probably a decrease in physical activity and an increased consumption of energy, both of which are associated with urbanisation and westernisation. Life in an urban setting has a severe negative impact on the amount of physical activity a child has the chance to enjoy. In urban and periurban regions, safety concerns contribute to the decrease in physical activity, e.g. it is no longer safe for children to walk to school or to play in parks. Finances also contribute to the decreased physical activity level in children, many parents lacking the money to allow their children to participate in organised sport activities. Periurban communities often lack access to sports facilities in any case, and few schools in these environments have such facilities. The result is that children grow up in an environment that is conducive to sedentary rather than physical activities. The typical Western diet is high in energy-dense foods, fat and refined sugar that easily lead to a positive energy balance. This is especially true in the presence of decreased physical activity. During the process of urbanisation and westernisation, children (and adults) consume more snacks and convenience foods, which are very high in salt, fat, refined sugar and energy. Part of the problem may be that the SA public is continually confronted with misleading and confusing dietary information. Parents may not know what constitutes a healthy diet, or of the dangers associated with an unhealthy diet and the resultant increase in body fatness. Many public schools have feeding or nutrition programmes, where children receive at least one meal per day at school. The meals are prepared at the school and the menu is often determined by what is available, rather than what constitutes a healthy diet. Another factor contributing to childhood obesity is the foods sold at the school snack shop, which for many schools is an opportunity to increase income. The selection of snacks tends to be based on popularity rather than dietary benefit. Urbanisation and westernisation in SA are therefore setting the stage for an increasing prevalence of overweight and obesity in schoolchildren, with the associated risks of development of a variety of chronic diseases of lifestyle. We recently undertook a study at a high school in a periurban area of the Polokwane Local Municipality, Limpopo Province. The study population comprised 56 black girls aged 13 - 19 years. They were not specifically selected to participate in the study, any girl who brought back a consent form signed by her parents/guardian being allowed to take part. Each girl also received a questionnaire (with questions on physical activity and snacking behaviour) that the parents/guardian completed. The weight and height of each girl was measured according to the internationally accepted methods described by Marfell-Jones et al. The body mass index (BMI) was calculated using the formula kg/m2. Table 1 shows that the girls consumed significant quantities of snacks (such as sweets, biscuits and cake), between one and seven times per week, and beverages (e.g. cold drink or fruit juice), between two and six times per week. In addition, on average they spent a significant amount of time each day performing sedentary activities (refer to Table 1 for a list), and spent very little time being physically active (Table 1). Fig. 1 shows the percentages of girls who fell into each of the BMI categories (as suggested by Reilly). 
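To make the BMI arithmetic concrete, the sketch below shows the calculation for a few made-up cases. This is not the authors' analysis code: the weights, heights and the simple adult-style cut-offs are assumptions for illustration only, whereas the study itself classified the girls using Reilly's adolescent-specific categories.

```python
# Illustrative BMI calculation: weight in kilograms divided by the square of
# height in metres. The example data and the adult-style cut-offs below are
# assumptions; the study used adolescent-specific categories (Reilly).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def adult_style_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

for weight_kg, height_m in [(45.0, 1.58), (62.0, 1.60), (80.0, 1.62)]:
    value = bmi(weight_kg, height_m)
    print(f"{weight_kg:.0f} kg, {height_m:.2f} m -> BMI {value:.1f} ({adult_style_category(value)})")
```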
Of the girls, 67.8% had a normal BMI (<25 kg/m2), 12.5% were overweight and 3.6% were obese. The prevalence of overweight and obesity in this young female population is a cause for concern, because overweight in adolescents frequently continues into adulthood. The longer these girls are exposed to the increased body fat, the higher their risk of developing complications later in life.[16] Just over 16.1% of the girls were underweight/lean. The causes of underweight are usually poor nutrition or malnutrition and infections.[8,17] We did not investigate the causes for underweight in this study.

There was a negative but weak correlation (p=0.017) between age and being physically active at home, indicating that as these girls grow older, their physical activity levels decrease. This may be one of the causes for the strong positive correlation between age and BMI (p=0.001), as well as the weak negative correlation between BMI and time spent being physically active at home (p=0.41), and between BMI and participation in sport (p=0.009). There were also weak positive correlations between watching television and frequency of consuming sweets (p=0.13) and soft drinks (p=0.20), indicating that watching television is associated with snacking behaviour. Furthermore, there was a weak positive correlation between time spent playing electronic games and watching television (p=0.009) and frequency of consuming sweets, biscuits and cakes (p=0.009), once again indicating the occurrence of unhealthy behaviours in the same girls.

What do we need to do?

The girls in our study were not selected on the grounds of their unhealthy behaviour. The behavioural trends they displayed are probably typical of girls in this age group. Taking into consideration that children spend a considerable amount of time at school, schools can play a very important role in promoting regular physical activity and the consumption of a healthy diet. It is recommended that schools introduce and promote sustained healthy physical activities during and after school hours via sports activities to counteract overweight and obesity, especially in girls. The establishment of safe community playgrounds would go a long way towards encouraging physical activity. Ideally, the above should be supplemented by counselling for weight loss and weight management in overweight and obese children. Furthermore, emphasis in the school curriculum must be placed on the health benefits of physical activity and a prudent diet. School snack shops should provide healthy alternatives to sweets and biscuits or cake.

Acknowledgements. The principal, teachers and children at the secondary school are acknowledged for participating in the study. We also thank Dr V O Onywera for permission to use his questionnaire.

1. Dietz WH. Critical periods in childhood for the development of obesity. Am J Clin Nutr 1994;59(5):955-959.
2. Anrig CDC. The obese child. Dynamic Chiropractic 2003;21(22):27-31.
3. De Onis M, Blossner M. Prevalence and trends of overweight among preschool children in developing countries. Am J Clin Nutr 2000;72(4):1032-1039.
4. Armstrong MEG, Lambert MI, Lambert EV. Secular trends in the prevalence of stunting, overweight and obesity among South African children (1994-2004). Eur J Clin Nutr 2011;65(7):835-840. [http://dx.doi.org/10.1038/ejcn.2011.46]
5. World Health Organization. Obesity: Preventing and Managing the Global Epidemic. Report of a WHO Consultation. Geneva: WHO, 2000:252.
7. Mamabolo RL, Alberts M, Steyn NP. Prevalence and determinants of stunting and overweight in 3 year old black South African children residing in the central region of Limpopo Province, South Africa. Public Health Nutr 2005;8(5):501-508. [http://dx.doi.org/10.1079/PHN2005786]
8. Tathiah N, Moodley I, Mubaiwa V, Denny L, Taylor M. South Africa's nutritional transition: Overweight, obesity, underweight and stunting in female primary school learners in rural KwaZulu-Natal, South Africa. S Afr Med J 2013;103(10):718-723. [http://dx.doi.org/10.7196/SAMJ.6922]
9. Kruger HS, Venter CS, Vorster HH. Obesity in African women in the North West province, South Africa is associated with an increased risk of non-communicable diseases: The THUSA study. Br J Nutr 2001;86(6):733-740. [http://dx.doi.org/10.1079/BJN2001469]
10. Vorster HH, Badhan JB, Venter CS. An introduction to the revised food-based dietary guidelines for South Africa. South African Journal of Clinical Nutrition 2013;26(3, Suppl):S5-S12.
11. Botha CR, Wright HH, Moss SJ, Kolbe-Alexander TL. 'Be active!' Revising the South African food based dietary guideline for activity. South African Journal of Clinical Nutrition 2013;26(3, Suppl):S18-S27.
12. Onywera VO, Adamo KB, Sheel AW, et al. Emerging evidence of the physical activity transition in Kenya. Journal of Physical Activity and Health 2012;9(4):554-562.
13. Story M, Nanney MS, Schwartz MB. Schools and obesity prevention: Creating school environments and policies to promote healthy eating and physical activity. Milbank Q 2009;87(1):71-100. [http://dx.doi.org/10.1111/j.1468-0009.2009.00548.x]
14. Marfell-Jones M, Olds TS, Carter JEL. International Standards for Anthropometric Assessment. Underdale, Australia: International Society for the Advancement of Anthropometry, 2006:57-59.
15. Reilly JJ. Assessment of obesity in children and adolescents: Syntheses of recent systematic reviews and clinical guidelines. J Hum Nutr Diet 2010;23(3):205-211. [http://dx.doi.org/10.1111/j.1365-277X.2010.01054.x]
16. Whitaker RC, Wright JA, Pepe MS, Seidel KD, Dietz WH. Predicting obesity in young adulthood from childhood and parental obesity. N Engl J Med 1997;337(13):869-873. [http://dx.doi.org/10.1056/NEJM199709253371301]
17. Jinabhai CC, Taylor M, Sullivan KR. Implications of the prevalence of stunting, overweight and obesity amongst South African primary school children: A possible transition. Eur J Clin Nutr 2003;57(2):358-365. [http://dx.doi.org/10.1038/sj.ejcn.1601534]

Accepted 22 August 2014
<urn:uuid:86b6075a-17c5-4b0b-8088-64f3f712d27e>
CC-MAIN-2016-22
http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S0256-95742014001000013&lng=es&nrm=iso&tlng=en
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275981.56/warc/CC-MAIN-20160524002115-00021-ip-10-185-217-139.ec2.internal.warc.gz
en
0.928309
2,914
3.03125
3
Many terms are used in the plant material section of this website. We have provided definitions and explanations below for your reference.

Scientific names and common names are given for each species described. Common names are typically the most used of the two, but sometimes they can be confusing because multiple plants may have the same common name and nearly every plant has multiple common names. Scientific names are unique to individual plants. There are two parts to a scientific name - the genus and the species. Both are displayed in italics with the first letter of the genus capitalized. In the hierarchical organization of plant relationships, the family comes just above genus. Families are made up of related genera (plural of genus) as genera are made up of related species.

Rooted or Floating: Wetland plants can be divided into two groups - rooted and floating. Rooted (or emergent) plants are rooted into the substrate at the bottom of the wetland. Thus, they grow up through the water column and into the air. Floating plants are not necessarily attached to the bottom of the wetland. They are less affected by fluctuations in water levels. There are a few plants such as pickerelweed that are generally rooted but can sometimes float.

Nutrient Removal Rating: Plants differ greatly in their ability to remove nutrients from water. It is important to be aware of a plant's ability to foster nutrient removal as you move through the constructed wetland planning process. This rating is based on research that has been conducted on the plant in question.

Animals are a critical component of wetlands. Fortunately, many wetland plants are capable of providing food and cover for mammals, birds, and invertebrates that inhabit wetlands. Some plants are better for wildlife than others due to characteristics such as nectar, fruit and edible storage organ production.

Some wetland plants are known to be invasive - they have previously escaped from cultivation. These species have caused major damage to natural ecosystems. As a result, precautions must be taken with plants that have the potential to be invasive.

Native or Introduced: Many wetland plants of the southeastern US are also native to the area. However, some species have been introduced unintentionally from other countries or through the ornamental trade.

Maximum Water Depth: Because wetland size, shape, and depth are variable, many different plants will be required to fill each niche. Plants can be selected based on their ability to cope with the depths of water that will likely be present in specific areas of the wetland.

Here we focus on sun/shade requirements for plants. Most wetland plants are poorly adapted to shade. Typically they prefer full sun or part shade. There are a few, however, that can tolerate shaded areas of wetlands.
<urn:uuid:3478e7f8-39e2-4f33-90a7-d5703067f5e7>
CC-MAIN-2014-23
http://www.clemson.edu/extension/horticulture/nursery/remediation_technology/constructed_wetlands/plant_material/terminology.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00472-ip-10-146-231-18.ec2.internal.warc.gz
en
0.963558
564
4.0625
4
The US Navy SEALs now have a special place on their submarines to conduct undersea operations without making a sound. In 2017, Daniel Brown from Business Insider wrote a report on the USS John Warner, a Virginia-class attack submarine, in which he mentioned a ‘Lockout trunk’. A Lockout Trunk is a specially developed compartment near the top of a submarine, which fills up with water and then empties, allowing special mission divers such as Navy SEALs to put on scuba gear, wait for the trunk to fully submerge and then exit the submarine quietly. The National Interest reported that such a compartment was added during the construction of the Virginia-class attack submarine’s Block III variant. Another Virginia-class submarine, USS New Mexico (SSN 779), recently conducted a training in the Mediterranean sea in which US Navy SEALs “rehearsed command and control architecture, staged equipment, and conducted diving operations utilizing the submarine’s large lock-in/lock-out chamber,” according to a US Navy 6th Fleet press release. “This training demonstrated how the submarine force can adapt mission sets for theater commanders, providing a variety of options to address multi-domain challenges,” said Rear Adm. Anthony Carullo, Director of Maritime Operations, US Sixth Fleet and Commander of Submarine Group 8. Among other advantages of Lockout Trunks, the primary benefit is the ability to carry out underwater warfare missions without enemy detection. Underwater special operations include intelligence-gathering and surveillance tasks. Military reconnaissance refers to scouting or exploration of an area to obtain information about enemy forces, terrain, and other activities. Diving sabotage is another underwater operation in which unauthorized divers or “frogmen” are sent to hostile territory to gather information or cause damage to the enemy naval assets. Popular Mechanics reported on such “ghost” divers that many countries including the US and China are scared of. Other underwater missions include targeted attacks, secret rescue missions, or hostage recovery operations which can be performed with a much smaller, much less detectable acoustic signature, as reported by National Interest. This helps divers or in this case, Special Operation Forces, avoid detection by highly sensitive sonars. Although they would emit an acoustic signature, it would be non-detectable similar to an acoustic signature of a large fish, dolphin, or shark. The Lockout Trunks are also important for escape missions during a disaster or a sunken submarine. In case of an enemy attack, the Navy SEALs can escape using the trunk and then swim to retrieve what is known as a special-forces operations box, which would be filled with weapons and needed gear, from the tower, as per Brown’s report. Throughout the history of naval warfare, submarine innovation has provided solutions to rescue the crew of a sunken submarine. One of the earliest solutions was called “Escape Lungs”. The lungs were devices that recycled an escapee’s breath, using a chemical reaction to remove carbon dioxide and adding more gas as needed. The lungs were routinely stashed onboard submarines like the UK Royal Navy HMS Thetis, which sank during sea trials in 1939. Modern-day submariners are equipped with full-body waterproof suits called Submarine Escape Immersion Equipment or the SEIE suits that allow the crew to escape from depths of 600 feet. Once in the open sea, the suits essentially become a life raft that protects the crew from drowning and hypothermia. 
UK-based company Suvitech has specialized in designing, manufacturing, and supplying the most advanced safety solutions since the 1930s. The newest suit called MK11 is a full body garment designed for pressurized tower escape with a fully integrated single-seat liferaft. The US Navy is reportedly working on a submarine Deep Escape System which comprises a flood valve, auto-vent valve, single-man escape suit, and an escape suit hood inflation system among other components. The need for an effective and efficient escape trunk has risen given the threat posed by unmanned underwater combat drones.
<urn:uuid:f1d089ce-6238-4f54-97a4-bda33e725932>
CC-MAIN-2022-21
https://eurasiantimes.com/us-navy-seals-seal-a-dedicated-submarine-slot-can-now-conduct-underwater-ops-in-stealth-mode-watch/
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00403.warc.gz
en
0.938774
885
2.546875
3
This methane gas sensor detects the concentration of methane gas in the air and ouputs its reading as an analog voltage. The concentration sensing range of 300 ppm to 10,000 ppm is suitable for leak detection. For example, the sensor could detect if someone left a gas stove on but not lit. The sensor can operate at temperatures from -10 to 50°C and consumes less than 150 mA at 5 V. Please read the MQ4 datasheet (161k pdf) for more information about the sensor. Gas sensor with metal case bottom view. Connecting five volts across the heating (H) pins keeps the sensor hot enough to function correctly. Connecting five volts at either the A or B pins causes the sensor to emit an analog voltage on the other pins. A resistive load between the output pins and ground sets the sensitivity of the detector. Please note that Both configurations have the same pinout consistent with the bottom configuration.The resistive load should be calibrated for your particular application using the equations in the datasheet, but a good starting value for the resistor is 20 kΩ. We offer two breakout boards that make it easier to interface with these sensors: a Pololu carrier board and a SparkFun carrier board. The Pololu version is shown below. Pololu MQ gas sensor carrier with sensitivity-setting resistor soldered in the vertical orientation. People often buy this product together with:
<urn:uuid:6a469211-aa67-40a4-be4d-0363e3fff17c>
CC-MAIN-2021-17
https://www.pololu.com/product/1633
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00079.warc.gz
en
0.870629
292
2.515625
3
September 25, 2008 Who Gets What and When in the Feed Line By WICKHAM, Ian IT'S all about the price of feed. Historically, dairy farmers have always faced a conundrum -- especially in New Zealand.That challenge is: "With a given amount of feed available on my farm, how do I prioritise which class of my animals to feed it to?" In the past when dairy farms were 'self-contained' (how long since you have heard that expression?), it was easy. The milking herd got all the best stuff and the replacements got the back paddock, the steep gully and anything that was left after the herd had their share. It all seemed to make sense. Because it was the herd that produced the income, they should be fed more to produce more. The replacements didn't produce any income, so they didn't get the pick of feed. 'Heifer Hay' was the hay not good enough to feed to the herd. It didn't seem to matter that the heifers were small at first calving, the researchers and academics talked about weights of 280 kilograms at 24 months of age, and researched how Californian thistle could be controlled by hard grazing with dairy replacements. Then, over the past 30 years, a quiet revolution took place. Dairy farmers discovered a cheap source of feed. They could send their dairy replacements off-farm to a grazier who would provide feed at a price low enough for them to profitably use all the on- farm feed for the milking herd. And they could demand that the grazier provide the day-to-day husbandry (at no cost) plus enough feed to achieve better grown, more profitable heifers. During this 30 years, relative to dairy payout, the price of grazing has declined substantially. But now -- another revolution has started. Dairy cow numbers are increasing rapidly. Farms that were providing the off-farm grazing and feed are being converted to dairy as a more profitable land use. Good quality land prices have increased dramatically. And -- dairy farmers are discovering that they can purchase off- farm feed and transport it to their dairy farm and make a profit from incremental dairy production. BUT -- Internationally feed has become in short supply because of bio- fuel demand and water shortages and this has rapidly increased food prices with dairy products being one of the first to rise. This fact, and a widespread drought in NZ, resulted in dairy farmers seeking feed wherever it can be found and suddenly the worldwide increase in feed costs has arrived locally. This has put a huge amount of pressure on those supplying dairy heifer grazing, especially when it comes to pricing. It does not help that pricing is normally for a period of up to 18 months ahead, which makes it something of a 'Future Contract' and the graziers input costs (such as fertiliser) may vary considerably during this time. The grazier also has the opportunity to sell his feed directly to other farmers, and it is the dairy sector who are able to make best use of this feed. In practice what is emerging is that feed, no matter what it is, or where, is becoming a standard price (with appropriate correction for food value and transport). So the conundrum returns! If all available feed is a similar price, which animals have the priority for full feeding and how does one ensure that the replacements are going to get their fair share? We may learn something from the USA where farmers are very experienced in fully feeding their herds. In that country, it is not normal for pasture to be the main feed ration. 
Most likely it will be a TMR (Total Mixed Ration) and will contain a formulated mixture of forage based fibre, grains, commodities, vitamins and minerals. The price/value is NOT dependent on which animals the feed goes to and it can fluctuate a considerable amount. It is normal practice for a dairy farmer to pay a heifer grower three main fees: A daily fee for the feed ration which may vary depending on current feed costs, a fee on a daily basis for husbandry and management, and a third fee for mating, health and vaccination programs. (c) 2008 Daily News; New Plymouth, New Zealand. Provided by ProQuest LLC. All rights Reserved.
<urn:uuid:3a3f94d7-48d0-4a5a-bec6-107075ff388c>
CC-MAIN-2017-22
http://www.redorbit.com/news/business/1568178/who_gets_what_and_when_in_the_feed_line/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612018.97/warc/CC-MAIN-20170529053338-20170529073338-00016.warc.gz
en
0.972851
878
3.125
3
Brand new: A new, unread, unused book in perfect condition with no missing or damaged pages.
|Exam Board: Edexcel Level: GCSE Subject: Maths / Statistics First teaching: September 2015 First exams: June 2017 Revise smart and save! * This Revision Workbook delivers hassle-free question practice, covering one topic per page and avoiding lengthy set up time. * Build your confidence with guided practice questions, before moving onto unguided questions and practice exam papers. * Target grades on the page allow you to progress at the right speed. * With one-to-one page correspondence between this Workbook and the companion Guide, this hugely popular Revision series offers the best value available for Key Stage 4 students.|
|Publisher||Pearson Education Limited|
|eBay Product ID (ePID)||213740963|
|Product Key Features|
|Additional Product Features|
|Place of Publication||Harlow|
|Series Title||Revise Edexcel Gcse Statistics|
|Country of Publication||United Kingdom|
|Educational Level||United Kingdom School Key Stage 4|
|Subject||Science & Mathematics: Textbooks & Study Guides|
|Imprint||Pearson Education Limited|
|Date of Publication||09/10/2015|
<urn:uuid:9572dd7c-9d27-42c0-be21-fef3b762db8c>
CC-MAIN-2019-43
https://www.ebay.com.au/itm/NEW-REVISE-Edexcel-GCSE-Statistics-Revision-Workbook-By-Rob-Summerson-Paperback-/293273684378
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987781397.63/warc/CC-MAIN-20191021171509-20191021195009-00113.warc.gz
en
0.684218
381
2.8125
3
From nine to eleven months, babies are in basic training for walking. They’re taking it one step at a time to gain the physical control and balance required to walk. They do it by building the muscles and motor skills that prepare them for those exciting first steps. Don’t stress if your baby does this a little later, as baby will develop in their own time. Right around when babies can sit without wobbling and manoeuvre on all fours, they soon discover the stairs. You may find your child wants to spend hours on them. Crawling up is no sweat. The trouble is in crawling down. With a little help from their personal trainer (you!), they’ll eventually get the hang of it. But remember; even after they’ve graduated to “stair master” don’t leave your baby alone on or near the stairs. When you can’t be there to supervise, always put a safety gate in place! Soon your baby will be pulling themselves up on crib bars, chair legs or anything else that can bring them to a standing position. Once up, they may want to stand all the time even when they’re being dressed or changed. They’ll learn to find ways of moving themselves along one small step for baby, one giant leap to becoming a full-fledged walker. Many babies are eager to walk, even though they can’t keep their balance on their own. You might see your baby taking sideways steps while holding onto the crib rail or table edge. Many babies also love to practice steps while holding onto your two index fingers. The father of one such enthusiastic “walker” joked, “I’m afraid I’ll get stuck in a permanently bent-over position!” It’s not only the so-called “motor movements” like walking that are progressing at this stage. In the next few months, you’ll see your baby’s motor skills, like eye-hand coordination, improving tremendously. Your baby will soon be picking up small pieces of food like cereal with their thumb and forefinger, instead of using the “mitten grip” of earlier months. Babies enjoy putting objects into containers such as empty coffee cans and then dumping them out. Turn these activities into fun games for your baby to encourage these new skills.
<urn:uuid:5b422fc5-28f5-495b-be56-ace269eb80f3>
CC-MAIN-2021-17
https://www.huggies.co.nz/baby-care/milestones/development/first-steps
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00509.warc.gz
en
0.948025
495
3.09375
3
The spoken language has mostly developed naturally from earlier forms of Finnish, and spread from main cultural and political centers. The standard language, however, has always been a consciously constructed medium for literature. It preserves grammatical patterns that have mostly vanished from the colloquial varieties and, as its main application is writing, it features complex syntactic patterns that are not easy to handle when used in speech. The spoken language develops significantly faster, and the grammatical and phonological simplifications include also the most common pronouns and suffixes, which sum up to frequent but modest differences. Some sound changes have been left out of the formal language, such as the regularization of some common verbs by assimilation, e.g. tule- → tuu- (although tule can be used in spoken language as well). Written language certainly still exerts a considerable influence upon the spoken word, because illiteracy is nonexistent and many Finns are avid readers. In fact, it is still not entirely uncommon to meet people who “talk book-ish” (puhuvat kirjakieltä); it may have a connotation of pedantry, exaggeration, moderation, weaseling or sarcasm. More common is the intrusion of typically literary constructions into a colloquial discourse, as a kind of quote from written Finnish. It should also be noted that it is quite common to hear book-like and polished speech on radio or TV, and the constant exposure to such language tends to lead to the adoption of such constructions even in everyday language. A prominent example of the effect of the standard language is the development of the consonant gradation form /ts : ts/ as in metsä : metsän, as this pattern was originally (1940) found natively only in the dialects of southern Karelian isthmus and Ingria. In fact, it has arisen from the spelling ‘ts’ for the dental fricative [θː], which has disappeared. In spoken language, a fusion of Western /tt : tt/ (mettä : mettän) and Eastern /ht : t/ (mehtä : metän) has been created: /tt : t/ (mettä : metän). It is notable that neither of these forms are identifiable as, or originate from, a specific dialect. The orthography of the informal language follows that of the formal language. However, sometimes sandhi may be transcribed, especially the internal ones, e.g. menenpä → menempä. This never takes place in formal language.
<urn:uuid:ce87cc3e-359b-4d58-b95e-2c0e2e6a3a0e>
CC-MAIN-2017-04
http://www.ccjk.com/language-translation/finnish-translation-services/spoken-finnish/
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943926
541
3.15625
3
In the good old days we could joke about the Loch Ness monster or the Sasquatch. Now we have to pick our channels carefully so we don’t get a “science” lesson about a mermaid or Bigfoot. Did History Channel start this trend when it included aliens on its programming? Last week Discovery Channel gave us a “marine biology” lesson and told us paleontologists are wrong: the Megalodon, a 50-foot long extinct prehistoric shark, is still alive and eating people. The first time I held out a Megalodon tooth in my hand I knew I had a powerful teaching tool. Two years ago, as a volunteer interpreter at the Seattle Aquarium, I showed visitors the teeth that belonged to the gigantic sharks. It fit comfortably in the palm of my hand, a single tooth out of hundreds that sharks shed naturally like a conveyor belt—when one tooth falls off another is ready to replace it. Visitors were mesmerized by the tooth, specially compared to a modern shark’s tooth, and liked to call Megalodon a “dinosaur shark.” Well, the Megalodon was definitely not a dinosaur, and is much younger than dinosaurs. It was still around 2 million years ago. The teeth are the only evidence left behind by a cartilaginous fish that has no other solid bone in its body. The size it could have reached is extrapolated based on a jaw that could fit the palm-size teeth. But then Discovery Channel, as part of this year’s Shark Week, broadcasts a faux “documentary” (mockumentary?) called Megalodon: The Monster Shark That Lives. Yes, they said “That Lives.” The show implies that this predator is still alive and well, and finding meals out of fishing boats. With movie techniques straight out of The Blair Witch Project, it uses shaky cam footage and bad editing to simulate non-professional footage. Perhaps it aims to be an aquatic Cloverfield, showing the characters facing a monster. Except… it does not clearly state that its actors are characters. In regards to the “scientists” shown on the movie, a small print 2-second disclaimer is shown that says: “None of the institutions or agencies that appear in the film are affiliated with it in any way, nor have approved its contents.” A somewhat convoluted way of saying “our scientists are actors.” It also said: “Megalodon was a real shark. Legends of giant sharks persist all over the world. There is still debate about what they may be.” Which is almost to say, “legends exist, therefore they are true” (?!) Unfortunately some people prefer to believe in prettier stories. The twittersphere, barely recovered from Sharknado (which was clearly not a documentary, but an equally atrocious shark film), was baffled. Blogosphere outrage ensued: - Wil Wheaton demands an apology from Discovery Channel - Deep Sea News mourns the loss of educational channels - Scientists are getting calls: is the Megalodon alive? - An open letter to Discovery Channel - Shark scientists unite - Ed Yong shows actual shark facts John Platt covered it for Scientific American blogs, using the support of one of our favorite nature filmmakers here at Sci-Ed, Chris Palmer. 
Palmer was interviewed before for Sci-Ed in which we discussed the nature documentary’s role in science education and promoting conservation: “Nature films have the potential to educate and to bridge the knowledge gap between the general public and the scientific community (…) Rebecca Wexler reports that ‘viewers regard film sequences as realistic because of cultural tendencies resulting from 19th century understandings of photography and film as mechanically accurate reproductions of the visual world.’ This also happens because movies are labeled as scientifically correct and factual.. [The] status of scientific authority is given to nature films even in cases of scripted dramas: footage that has been twisted to accommodate a sequence of edited scenes closely following a script.” A Deep Sea News writer quotes her 9-year old cousin as an evidence of the perceived scientific authority of the show: “They spread that information out there, and then people start thinking it’s real. Then they start getting afraid of sharks, and then they start killing them…and that’s a problem.” And also a problem that a chunk of Shark Week’s target audience is made of (easily influenced) children. This may seem extreme, but isn’t outside the realm of possibility – people started killing stingrays following the death of environmentalist Steve Irwin. Is Discovery Channel doing a disservice to science by telling the public that Megalodon still exists? Or is it creating excitement and popularizing the ancient creature? What do you think? How important is young students’ trust in the scientific authority of Shark Week for science education outcomes? Can educators use the mistakes of Discovery Channel to give students experience developing skeptical habits of mind? Learning to be a scientist is not just about reading trusted textbooks and watching trusted channels — it’s about learning to ask questions for yourself and evaluate evidence. The Shark Week’s mistake: authority and science education by Sci-Ed, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.
<urn:uuid:1e0e3040-273b-4d6d-a832-ef7514d873ae>
CC-MAIN-2013-48
http://blogs.plos.org/scied/2013/08/12/shark-weeks-mistake-authority-and-science-education/
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164561235/warc/CC-MAIN-20131204134241-00057-ip-10-33-133-15.ec2.internal.warc.gz
en
0.948615
1,126
2.796875
3
This image, just released from NASA's Earth Observatory, is both scary and beautiful
This is – or was – the Aral Sea*. 50 years ago, it was a substantial body of water. Then, the rivers that fed it were diverted for irrigation, meaning that the amount of water flowing into the lake fell below the amount of water being lost by evaporation. As a result of this imbalance, the Aral Sea began to dry up, and is now but a shadow of its former self. If you're looking for a powerful illustration of how quickly – and visibly – human activity can change the face of the planet, look no further.
*even though, as my hydro-coblogger** would be quick to point out, it's actually a lake (or not – see comments below).
**who will hopefully forgive me for invading her disciplinary territory.
<urn:uuid:7e48fb9d-e768-4f27-b29b-fcbc886a120b>
CC-MAIN-2015-48
http://all-geo.org/highlyallochthonous/2009/08/the-puddle-that-was-once-a-sea/
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398457697.46/warc/CC-MAIN-20151124205417-00347-ip-10-71-132-137.ec2.internal.warc.gz
en
0.968475
204
2.859375
3
Smartphones' proximity to people's ears, nose and mouth makes them a good vector for transferring microbes. Bacteria and other infectious agents on smartphones can cause the flu, pinkeye, or diarrhea. Lab tests show that most phones have abnormally high levels of coliforms, bacteria stemming from fecal contamination. For people who want to keep a clean phone, it can get confusing since there is a lot of disconnect between medical research and what device makers suggest for cleaning and sanitizing. Many of the products made for mobile phone cleaning can sometimes damage the phone's coating or fail to remove all of the germs.
The HML lab tested four different cleaning methods, including water, alcohol, Windex, and Nice 'N Clean electronic cleaning wipes. Of those four, alcohol performed best, removing 100% of the bacteria. Water was the least effective. People are just as likely to get sick from their phones as from handles in bathrooms, states Dr. Cain of the American Academy of Family Physicians. Most phones' coating will be damaged by the use of window cleaners, household cleaners, aerosol sprays, solvents, alcohol, ammonia or abrasives. Corning Gorilla Glass can be cleaned with standard off-the-shelf cleaning products like alcohol wipes, and the performance of the glass won't be damaged, but it could affect the smartphones' performance. Microfiber cloths remove most organisms, as well as oil and dirt, but that's not enough, as for some bacteria, humans need to ingest as few as 10 organisms to be affected adversely.
A 2011 study from the University of Cape Coast in Ghana that sampled 100 college students' cellphones noted high concentrations and diversity of bacteria on the phones. Another published study found that 20 to 30% of viruses can be easily transferred from fingertips to a glass surface, like that on a touch screen. UV disinfectant wands may be the best cleaning solution because the UV light kills germs without any need to touch the phone. A new product called PhoneSoap, which uses UV-C light to clean the phone while charging it, will begin shipping to consumers in January 2013.
[via Wall Street Journal]
<urn:uuid:ee40512c-4499-452e-807b-de5d08db6af2>
CC-MAIN-2015-35
http://scitechdaily.com/smartphones-are-great-for-sharing-bacteria/
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00122-ip-10-171-96-226.ec2.internal.warc.gz
en
0.950966
463
3.046875
3
About Pediatric Liver Transplants About the liver The liver is your body’s filter and the largest internal organ in the body. The liver is essential for survival and regulates most chemical levels in the blood (such as glucose), produces important proteins (such as clotting factors) and excretes a product called bile, which helps absorption of fats and fat-soluble vitamins. All of the blood leaving the stomach and intestines passes through the liver. The liver processes this blood and breaks down the nutrients and most foreign substances (such as medications) in the blood into forms that are easier for the rest of the body to use. More than 500 vital functions have been identified as being dependent on the liver. About liver transplants A liver transplant is a surgical procedure performed to replace a diseased liver with a healthy liver from another person. The liver may come from a deceased organ donor or from a living donor. Family members or individuals who are unrelated, but make a good match, may be able to donate a portion of their liver. This type of transplant is called a living donor transplant. For people who donate a portion of their liver, the remaining portion is capable of regenerating and resuming normal function. An entire liver may be transplanted, or just a section. Because the liver is the only organ in the body able to regenerate, a transplanted portion of a liver can rebuild to normal capacity within weeks. When a liver transplant is needed A liver transplant is recommended for children who have serious liver dysfunction and will not be able to live without having their liver replaced. The most common liver disease in children for which transplants are done is biliary atresia. Liver dysfunction can be acute or chronic. Causes of liver dysfunction include infections, genetic, metabolic or immunologic disorders, tumors, toxins, or unknown factors (idiopathic). There are many more conditions that may require a liver transplant. Children who receive liver transplants tend to do very well. Liver transplants are performed on the sickest children with liver disease, yet most kids who receive liver transplants go on to lead productive lives. Unlike in adults, most liver diseases requiring transplantation in children are congenital, meaning the child was born with a defect that will not recur in the transplanted liver. In these cases, the liver transplant is curative, although stringent lifetime aftercare is required. Types of liver failure There are many specific types of liver disease in children that may require a transplant. They are grouped into two categories: - Acute liver failure (ALF), or fulminant liver failure, occurs when many of the cells in the liver die or become severely damaged in a short period of time, causing the liver to stop working normally. - Chronic liver failure – scarring of the liver causes problems with blood flow, which can lead to engorged veins (varices) at risk of bleeding, an enlarged spleen leading to low blood counts, and fluid accumulation in the abdomen (ascites). Chronic liver dysfunction increases the risk of bleeding, malnutrition and other complications.
<urn:uuid:c1b50a65-9306-4980-9ae7-9766394f10ff>
CC-MAIN-2015-14
http://www.phoenixchildrens.org/medical-specialties/liver-transplant/pediatric-liver-transplants
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298819.92/warc/CC-MAIN-20150323172138-00153-ip-10-168-14-71.ec2.internal.warc.gz
en
0.935282
638
3.4375
3
Speaking of Health: Assessing Health Communication Strategies for Diverse Populations This view of culture deflects attention from a richer understanding of the diverse social and cultural processes that are meaningful for the development of effective health communication strategies. Cultural processes are dynamic, embedded in social context, and therefore influenced by social factors such as immigration or discrimination as well as by interactive social processes and cultural products like music, food, and language. The consideration of the proximal social and cultural processes, as opposed to group categories, facilitates the translation of theory-based strategies to reflect the life experiences of targeted communities. For example, developing a health communication strategy for the population aged 80 and older requires identifying factors in their daily lives that are related to the message and specific health behavior under consideration. Their lives may be impacted by a lack of economic resources, limited accessibility of health care services, deaths of significant others, or decreasing physical and cognitive capabilities. The concept of self-identity allows us to look at individuals in the context of their life experiences and realities. Knowledge of the relevant experiences that individuals are likely to share allows health communicators to package the theoretical constructs of attitudes, norms, and efficacy beliefs in ways that are meaningful to the targeted group—and therefore will result in more effective health communication. Based on its consideration of the various issues related to diversity, the committee offers the following recommendations: Demographic factors are useful in epidemiological studies to understand whether health benefits are distributed equally and to identify intergroup differences. Policy makers and program planners should continue to use demographic factors to understand whether health benefits are equally distributed and to identify intergroup differences. Where there are existing disparities, it will
<urn:uuid:a6ed24f6-b50e-49ed-af5e-7ae0d7d49215>
CC-MAIN-2013-20
http://books.nap.edu/openbook.php?record_id=10018&page=253
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696384213/warc/CC-MAIN-20130516092624-00071-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927997
379
3.296875
3
Introduction to Wheel (Ages 8-12) There are few things more amazing than creating a useful object out of a lump of spinning clay. Learn to use the potter’s wheel in this beginning class that will cover bowls and cylinders. We will also explore ways of altering the clay surface and adding hand built items to wheel thrown vessels. Sign up for Introduction to the Wheel with Alexa on either Monday OR Tuesday!
<urn:uuid:5fd95aa2-5308-4685-909d-d94f27bbbabd>
CC-MAIN-2019-09
https://theumbrellaarts.org/class/introduction-wheel-ages-8-12
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247511573.67/warc/CC-MAIN-20190221233437-20190222015437-00261.warc.gz
en
0.902682
87
2.859375
3
Friday, June 29, 2012 The Belgian-Greek Royal Connection King Leopold I was the first King of the Belgians and a greatly admired and respected national leader around the world, but the Belgian throne was not the first he was offered. The Greeks had claimed independence from the Ottoman Empire in 1830 and were in need of a monarch. A prince from a powerful royal family, or at least one with family ties to powerful countries, was preferred to help secure Greek independence, as insurance against efforts by the Turks to retake Greece. Prince Leopold of Saxe-Coburg-Gotha was considered and was asked to come and reign over the new Kingdom of Greece. The Greek cause had been a popular one in Europe, seen by many as a great romantic adventure, and there was much sympathy for the Greeks against the Turks. Prince Leopold was not unaffected by this, and he seriously considered accepting the offer to become the first King of Greece, or King of the Greeks.
<urn:uuid:5672e985-19ba-4774-9fdc-f835cc9a8dc7>
CC-MAIN-2015-18
http://belgieroyalist.blogspot.com/2012/06/belgian-greek-royal-connection.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246650195.9/warc/CC-MAIN-20150417045730-00227-ip-10-235-10-82.ec2.internal.warc.gz
en
0.987943
216
3.171875
3
Over the last 10–15 years, our understanding of the composition and functions of the human gut microbiota has increased exponentially. To a large extent, this has been due to new ‘omic’ technologies that have facilitated large-scale analysis of the genetic and metabolic profile of this microbial community, revealing it to be comparable in influence to a new organ in the body and offering the possibility of a new route for therapeutic intervention. Moreover, it might be more accurate to think of it like an immune system: a collection of cells that work in unison with the host and that can promote health but sometimes initiate disease. This review gives an update on the current knowledge in the area of gut disorders, in particular metabolic syndrome and obesity-related disease, liver disease, IBD and colorectal cancer. The potential of manipulating the gut microbiota in these disorders is assessed, with an examination of the latest and most relevant evidence relating to antibiotics, probiotics, prebiotics, polyphenols and faecal microbiota transplantation. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ Imagine the scenario: a scientist at a conference claims to have found a new organ in the human body. It is comparable to the immune system in as much as it is made up of a collection of cells, it contains 100 times more genes than the host, is host-specific, contains heritable components, can be modified by diet, surgery or antibiotics, and in its absence nearly all aspects of host physiology are affected. While this may seem far-fetched, it is the current situation in which we find ourselves. We now realise that the human microbiota is an overlooked system that makes a significant contribution to human biology and development. Moreover, there is good evidence that humans co-evolved a requirement for their microbiota.1 In the past decade, partly because of high resolution observational studies using next-generation sequencing technologies and metabolite profiling (see box 1), the gut microbiota has become associated with promotion of health and the initiation or maintenance of different GI and non-GI diseases. As we enter the postmetagenomic era, we need to move away from simple observations to determine what are merely correlations and what are causal links—and focus efforts and resources on the latter. This postmetagenomic era is starting to provide new therapeutic targets based on a better understanding of how the microbiota interacts with the host's physiology. Ultimately, we aim to integrate an individual's microbiota into some form of personalised healthcare and, by better understanding its role, treat an individual's diseases more efficiently and in a more targeted fashion. With a more complete understanding of the disease process, we will be able to more accurately stratify different disease states and determine whether or not the gut microbiota is a potential therapeutic target which we can modulate in order to treat specific diseases.
A short primer of microbiology (see also Lepage et al)4 Dysbiosis: A disturbance or imbalance in a biological system, for example, changes in the types and numbers of bacteria in the gut which may lead to developing different diseases, such as IBD. Inflammasomes: Recently discovered multiprotein complexes that are involved in a wide range of inflammatory processes including programmed cell death (pyroptosis), in response to the recognition of microbial and danger signals. Lipopolysaccharide: A major component of the outer membrane of Gram-negative bacteria; an endotoxin. Now implicated as a driver of inflammation and associated with onset of certain diseases. Lipoteichoic acid: A major component of the cell wall of Gram-positive bacteria, with endotoxin-like activity. Now implicated as a driver of inflammation and associated with onset of certain diseases. Metabonome: A profile of the chemicals in a tissue or sample, for example the urine metabonome. This profile represents a snapshot in time of what chemicals are present in the sample. Metagenomics: A method which allows us to create catalogues of what the bacteria can do based on the genes that they have. Microbiome: A collection of different microbes and their functions or genes found in an environmental habitat. Different parts of the body have different microbiomes, for example, the skin microbiome is different to the gut microbiome, but they are all part of the human microbiome. Microbiota: The types of organisms that are present in an environmental habitat, whether they are bacteria, viruses or eukaryotes. 'Omics: A term which describes a set of methods, such as genomics, metabonomics, metagenomics, etc, which we use to explore the interactions between the bacteria in the gut and the host. Pathobiont: A commensal organism that can cause disease when specific genetic or environmental conditions are altered in the host. Phenotype: A collection of measurable features that define an individual. This review gives a much-needed update on current understanding of the gut microbiota in GI diseases and metabolic disorders, and gives an insight into how this might impact on clinical practice. The evidence for the preventive and therapeutic benefit of different ways of modulating the gut microbiota, such as probiotics, prebiotics, antibiotics and faecal microbiota transplantation (FMT) (see box 2), is reviewed. Potential therapies aimed at modulation of the gut microbiota Probiotics: Live microorganisms that, when administered in adequate amounts, confer a health benefit on the host.107 ,108 Examples include strains of the genera Bifidobacterium and Lactobacillus. Probiotics can have multiple interactions with the host,109 including competitive inhibition of other microbes, effects on mucosal barrier function and interaction with antigen presenting dendritic cells.72 Prebiotics: A selectively fermented ingredient that results in specific changes in the composition and/or activity of the GI microbiota, thus conferring benefit(s) upon host health.110 Prebiotics are usually non-digestible carbohydrates, oligosaccharides or short polysaccharides, with inulin, oligofructose, galactofructose, galacto-oligosaccharides and xylo-oligosaccharides being some of the most intensively studied.
Faecal microbiota transplantation: The introduction of gut bacteria from a healthy donor into a patient, through transfer of an infusion of a faecal sample via nasogastric tube, nasoduodenal tube, rectal enema or the biopsy channel of a colonoscope.111 Current understanding of the gut microbiota In the last decade, several large-scale projects, for example, the human microbiome project, have investigated the microbiota of a variety of bodily niches, including the skin as well as the oral, vaginal and nasal cavities.2 While some of these are relatively easy to access, the GI tract remains a challenging environment to sample, and to describe. Currently the majority of research is focused on the gut microbiota, since this is where the greatest density and numbers of bacteria are found, with most data being derived from faecal samples and, to a lesser extent, mucosal biopsies. While it is relatively easy to obtain fresh faecal samples, the information obtained from them does not represent the complete picture within the gut. From a number of limited studies, we know that the small intestine contains a very different abundance and composition of bacteria, with much more dynamic variation compared with the colon.3 The colonic microbiota is largely driven by the efficient degradation of complex indigestible carbohydrates but that of the small intestine is shaped by its capacity for the fast import and conversion of relatively small carbohydrates, and rapid adaptation to overall nutrient availability. While faeces are not an ideal proxy for the GI tract, they do give a snapshot of the diversity within the large intestine. Furthermore, the majority of the data comes from North American and European studies with very few studies in Asia, Africa or South America. Hence we have a somewhat biased view of the gut microbiota. This rapid increase in interest in the microbiome has also been driven by the application of multi-‘omic’ technologies; we refer the reader to Lepage et al4 for more detailed explanation of these (see also box 1). What do we know about the gut microbiota? Bearing in mind the limitations above, the GI tract is often seen as a two phylum system (the Firmicutes and Bacteroidetes) although it should be noted that members of at least 10 different phyla can also have important functional contributions (see box 3). We are also very bacteria-centric when we look at the gut microbiota; only a handful of papers have looked at the viral component (or virome) and micro-eukaryotes (protozoa and fungi). When the gut microbiota of relatively large cohorts of individuals (eg, more than 100) is analysed, it can be seen that the ratio of the Firmicutes:Bacteroidetes is not the same in all individuals. Currently we do not know the significance of being at either end of this continuum, especially as a large shift in the relative abundance of a group of organisms translates to a modest change in bacterial numbers. Yet there is evidence that depletion of a single species, for example, Faecalibacterium prausnitzii, belonging to the Firmicutes phylum, has been associated with IBD.5 But in the scientific literature, we see counterarguments for any involvement of this species in IBD.6 This disparity highlights the current status of understanding. We know that the gut microbiota is essential to the proper function and development of the host but we are unsure which are keystone species and whether the microbiota's function is more important than any individual member of the community. 
But this is too simplistic a view. In several cases, strain differences within a species can be the difference between being a pathogen/pathobiont and being a probiotic: for example, Escherichia coli is associated with IBD and colorectal cancer (CRC)7 ,8 yet an E. coli strain is used as a probiotic. A primer in taxonomics In order to classify bacteria we have adopted the Linnaean system, which comprises hierarchies into which an organism is placed. For example humans are classified at the species level as Homo sapiens, which are members of the genus Homo, family Hominidae, order Primates, class Mammalia, phylum Chordata and finally domain Eukaryota. As one moves up through the different taxonomic levels, from species to domain, greater numbers of organisms become associated with each other. In life there are three domains, the Bacteria, Archaea and Eukaryota, with the majority of bacteria-like organisms (or prokaryotes) being classified within the Bacteria and Archaea. For example the gut commensal and sometimes pathogenic species Escherichia coli is found in the domain Bacteria; phylum Proteobacteria; class Gammaproteobacteria; order Enterobacteriales; family Enterobacteriaceae and finally genus Escherichia. Thus when we refer to phyla or a phylum, we are usually describing very large collections of related organisms. In the large intestine of healthy adults the two most dominant phyla are the Firmicutes (comprised mainly of Gram-positive clostridia) and Bacteroidetes (comprised mainly of Gram-negative bacteria such as the species Bacteroides fragilis). In fact, five phyla represent the majority of bacteria that comprise the gut microbiota. There are approximately 160 species in the large intestine of any individual9 and very few of these are shared between unrelated individuals. In contrast, the functions contributed by these species appear to be found in everybody's GI tract, an observation that leads us to conclude that function is more important than the identity of the species providing it. Yet differences in the gut microbiota may matter because these may result in differences in the effectiveness of a function. For example, while the ability to synthesise short chain fatty acids (SCFAs) is found in all humans,10 their amounts can vary. Dietary modulation of the gut microbiota Metabolic activities of the gut microbiota Carbohydrate fermentation is a core activity of the human gut microbiota, driving the energy and carbon economy of the colon. Dominant and prevalent species of gut bacteria, including SCFA-producers, appear to play a critical role in initial degradation of complex plant-derived polysaccharides,11 collaborating with species specialised in oligosaccharide fermentation (eg, bifidobacteria) to liberate SCFAs and gases, which are also used as carbon and energy sources by other more specialised bacteria (eg, reductive acetogens, sulfate-reducing bacteria and methanogens).12 Efficient conversion of complex indigestible dietary carbohydrates into SCFA serves microbial cross-feeding communities and the host, with 10% of our daily energy requirements coming from colonic fermentation.
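To put the 10% figure in perspective, here is a back-of-the-envelope sketch in Python; the substrate load, energy yield and daily requirement used are illustrative assumptions rather than values taken from the review.

# Rough estimate of host energy recovered from colonic carbohydrate fermentation.
# All three inputs are illustrative assumptions, not data from the review.
fermentable_carb_g_per_day = 60.0   # assumed fermentable carbohydrate reaching the colon
energy_yield_kcal_per_g = 2.0       # assumed energy recovered by the host as SCFAs
daily_requirement_kcal = 2000.0     # assumed adult daily energy requirement

scfa_energy_kcal = fermentable_carb_g_per_day * energy_yield_kcal_per_g
share_of_requirement = scfa_energy_kcal / daily_requirement_kcal
print(f"~{scfa_energy_kcal:.0f} kcal/day from fermentation, "
      f"about {share_of_requirement:.0%} of daily requirements")
# With these assumptions the contribution is on the order of 5-10%,
# broadly consistent with the figure quoted above.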
Butyrate and propionate can regulate intestinal physiology and immune function, while acetate acts as a substrate for lipogenesis and gluconeogenesis.13 Recently, key roles for these metabolites have been identified in regulating immune function in the periphery, directing appropriate immune responses, oral tolerance and resolution of inflammation, and also in regulating the inflammatory output of adipose tissue, a major inflammatory organ in obesity.14 The majority of this carbohydrate fermentation occurs in the proximal colon, at least for people following a Western-style diet. As digesta moves distally and carbohydrate becomes depleted, the gut microbiota switches to other substrates, notably protein or amino acids. Fermentation of amino acids, besides liberating beneficial SCFAs, produces a range of potentially harmful compounds. Some of these may play a role in gut diseases such as colon cancer or IBD. Studies in animal models and in vitro show that compounds like ammonia, phenols, p-cresol, certain amines and hydrogen sulfide play important roles in the initiation or progression of a leaky gut, inflammation, DNA damage and cancer.15 In contrast, dietary fibre or intake of plant-based foods appears to inhibit this, highlighting the importance of maintaining gut microbiome carbohydrate fermentation.16 Recognition of carbohydrate fermentation as a core activity of the gut microbiota provides the scientific basis for rational design of functional foods aimed at improving gut health, and also for impacting on microbiota activities linked to systemic host physiology through newly recognised interkingdom axes of communication such as the gut:liver axis, the gut:brain axis and the gut:brain:skin axis.17 Three ‘P's’ for gut health: probiotics, prebiotics and polyphenols A number of dietary strategies are available for modulating either the composition or the metabolic/immunological activity of the human gut microbiota: probiotics, prebiotics and polyphenols are among the most well established.18 There are many examples of positive results with different probiotic strains against a range of disease states in animal models; however, the human data are equivocal. This may partly be due to poor study design and poor choice of strain. However, there is also a persistent lack of understanding as to the very nature of probiotics, which cannot be considered a ‘class’ of bioactives amenable to traditional efficacy assessments such as meta-analysis (unless restricted to one strain), since they are all unique living organisms and their health-promoting traits are strain-specific. Rarely have probiotic strains been selected with specific mechanisms of effect in mind; this has led to conflicting observations and damaged the reputation of this area of science. A few exceptions do exist, most notably the work of Jones et al, who selected a bile salt-hydrolysing Lactobacillus reuteri strain to study its ability to reduce cholesterol levels in hypercholesterolaemic individuals. In two well powered, randomised, placebo-controlled and double-blinded studies, they demonstrated that ingestion of this strain significantly lowered total and low density lipoprotein (LDL)-cholesterol.
Moreover, they suggested an underlying novel mechanism linked to reduced fat absorption from the intestine19 via the nuclear receptor farnesoid X receptor (FXR).20 Prebiotics represent a specific type of dietary fibre that, when fermented, mediates measurable changes within the gut microbiota composition, usually an increase in the relative abundance of bacteria thought of as beneficial, such as bifidobacteria or certain butyrate producers. As with probiotics, despite convincing and reproducible results from animal studies showing efficacy in prevention or treatment of many diseases (eg, IBD, IBS, colon cancer, obesity, type 2 diabetes (T2D) and cardiovascular disease), the data in humans remain ambiguous. Fewer well powered or well designed clinical studies have been conducted with prebiotics compared with probiotics, and there may be an issue with prebiotic dose. Human studies rarely, if ever, employ prebiotics at a dose shown to be efficacious in animal studies: typically 10% w/w of the diet, which in humans equates to about 50 g per day.18 However, as we learn more about the ecology of the gut microbiota, it is becoming clear that the prebiotic concept has tapped into the underlying fabric of the gut microbiota as a primarily saccharolytic and fermentative microbial community that evolved to work in partnership with its host's digestive system to derive energy and carbon from complex plant polysaccharides which would otherwise be lost in faeces. Polyphenols are a diverse class of plant secondary metabolites, often associated with the colour, taste and defence mechanisms of fruit and vegetables. They have long been studied as the most likely class of compounds present in whole plant foods capable of affecting physiological processes that protect against chronic diet-associated diseases. The gut microbiota plays a critical role in transforming dietary polyphenols into absorbable biologically active species, acting on the estimated 95% of dietary polyphenols which reach the colon.21 Recent studies show that dietary intervention with polyphenol extracts, most notably dealcoholised red wine polyphenol extract and cocoa-derived flavanols, modulates the human gut microbiota towards a more ‘health-promoting profile’ by increasing the relative abundance of bifidobacteria and lactobacilli. These data again raise the possibility that certain functional foods tap into the underlying ecological processes regulating gut microbiome community structure and function, contributing to the health of the gut microbiota and its host.22 Obesity-related diseases and the gut microbiota Starting around 2004, the hallmark studies of Gordon et al demonstrated a potential relationship between the gut microbiome and development of an obese phenotype. An increase in relative abundance of Firmicutes and a proportional decrease in Bacteroidetes were associated with the microbiota of obese mice,23 which was confirmed in a human dietary intervention study demonstrating that weight loss of obese individuals (body mass index, BMI>30) was accompanied by an increase in the relative abundance of Bacteroidetes.24 Nevertheless, based on most human studies, the obesity-associated decrease in the ratio of Bacteroidetes to Firmicutes (B:F) remains controversial.24 ,25 This is likely due to heterogeneity among human subjects with respect to genotype and lifestyle. Recent studies have identified diet, especially fat, as a strong modulator of the microbiota, particularly in inbred and age-standardised laboratory animals.
The sources of variation in the microbiota are mainly limited to the experimental diets used, and there is growing evidence that the high fat intake rather than obesity per se had a direct effect on the microbiota and linked clinical parameters.26 However, in humans the microbiome is exposed to fundamentally different ‘environmental’ factors in obese and lean individuals that go beyond BMI alone, including diet26 and host hormonal factors.27 In addition, the aetiology of obesity and its metabolic complications, including low grade inflammation, hyperlipidaemia, hypertension, glucose intolerance and diabetes, reflect the complex interactions of these multiple genetic, behavioural and environmental factors.28 Lastly, the accuracy of BMI as an indicator for obesity is limited; 25% of obese people could in fact be regarded as metabolically ‘healthy’ (ie, with normal lipid and glucose metabolism).29 Therefore, linking GI tract microbial composition directly and exclusively to obesity in humans will remain challenging due to the various confounding factors within the heterogeneous population. This complexity has led to a shift from treating obesity as a single phenotype, to attempts at correlating microbial signatures to distinct or multiple features associated with (the development of) metabolic syndromes such as T2D. Recently, two (meta)genome-wide association studies were performed, with 345 Chinese individuals30 and 145 European women.31 In both studies, de novo generated metagenomic species-level gene clusters were employed as discriminant markers which, via mathematical modelling, could better differentiate between patients and controls with higher specificity than a similar analysis based on either human genome variation or other known risk factors such as BMI and waist circumference. At the functional level, membrane transporters and genes related to oxidative stress were enriched in the microbiota of patients,31 while butyrate biosynthesis was decreased.30 Although both studies observed high similarities in microbial gene-encoded functions, the most discriminant metagenomic species-level gene clusters differed between the cohorts (Akkermansia did not contribute to the classification in the European cohort whereas Lactobacillus showed no contribution in the Chinese study population), indicating that diagnostic biomarkers could be specific to the population studied. In another metagenomic study, a bimodal distribution of microbial gene richness in obese individuals was observed, stratifying individuals as High Gene Count or Low Gene Count (HGC and LGC).32 HGC individuals were characterised by higher prevalence of presumed anti-inflammatory species such as F. prausnitzii, and an increased production potential of organic acids (including butyrate). In contrast, LGC individuals showed higher relative abundance of potentially proinflammatory Bacteroides spp and genes involved in oxidative stress response. 
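Gene richness in these studies is essentially the number of distinct microbial genes detected in a person's metagenome, so stratification into high and low gene count groups reduces to counting genes and applying a cut-off. The sketch below illustrates this with hypothetical samples; the sample names, gene counts and threshold are made up for demonstration only.

# Toy illustration of gene-richness stratification into HGC and LGC groups.
# Sample names, gene counts and the threshold are hypothetical.
samples = {
    "subject_01": 640_000,  # distinct microbial genes detected in the metagenome
    "subject_02": 310_000,
    "subject_03": 580_000,
    "subject_04": 260_000,
}
richness_threshold = 480_000  # hypothetical cut-off between low and high gene count

for sample, gene_count in samples.items():
    group = "HGC" if gene_count >= richness_threshold else "LGC"
    print(f"{sample}: {gene_count:,} genes -> {group}")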
Remarkably, only biochemical obesity-associated variables, such as insulin resistance, significantly correlated with gene count while weight and BMI did not, underscoring the inadequacy of BMI as an indicator for ‘Obesity and its Associated Metabolic Disorders’ (OAMD).33 An accompanying paper demonstrated that a diet-induced weight-loss intervention significantly increased gene richness in the LGC individuals which was associated with improved metabolic status.34 Although gene richness was not fully restored, these findings support the reported link between long-term dietary habits and the structure of the gut microbiota.31 It also suggests permanent adjustment of the microbiota may be achieved through diet. Most studies involving the microbiome have been solely correlative but recently a causal relationship was established between host glucose homoeostasis and gut microbial composition. FMT from lean donors to individuals with metabolic syndrome significantly increased their insulin sensitivity.33 The transplant produced an increase in faecal butyrate concentrations, microbial diversity and the relative abundance of bacteria related to the butyrate-producing Roseburia intestinalis. Together, these studies produce a body of evidence that the microbiome plays a role in host energy homoeostasis and the establishment and development of OAMD, although the exact mechanisms remain obscure. Previous contradictory findings might be attributed to miscellaneous approaches,35 and also heterogeneity in genotype, lifestyle and diet of humans combined with the complex aetiology of OAMD. Nonetheless, a clearer picture is emerging. The gut of individuals with OAMD is believed to harbour an inflammation-associated microbiome, with a lower potential for butyrate production and reduced bacterial diversity and/or gene richness. Although the main cause of OAMD is excess caloric intake compared with expenditure, differences in gut microbial ecology might be an important mediator and a new therapeutic target or a biomarker to predict metabolic dysfunction/obesity in later life. Liver disease and the gut microbiota The liver receives 70% of its blood supply from the intestine via the portal vein, thus it is continually exposed to gut-derived factors including bacterial components, endotoxins (lipopolysaccharide, flagellin and lipoteichoic acid) and peptidoglycans. Multiple hepatic cells, including Kupffer cells, sinusoidal cells, biliary epithelial cells and hepatocytes, express innate immune receptors known as pathogen-recognition-receptors that respond to the constant influx of these microbial-derived products from the gut.36 It is now recognised that the gut microbiota and chronic liver diseases are closely linked. Characterising the nature of gut dysbiosis, the integrity of the gut barrier and mechanisms of hepatic immune response to gut-derived factors is potentially relevant to development of new therapies to treat chronic liver diseases.37 Furthermore the field of bile acid signalling has thrown open the concept of the gut:liver axis as being active and highly regulated.38 Non-alcoholic fatty liver disease The pathophysiology of NAFLD is multifactorial with strong genetic and environmental contributions. 
Recent evidence demonstrates that gut microbiota dysbiosis can result in the development of obesity-related non-alcoholic fatty liver disease (NAFLD), and patients with NAFLD have small intestinal bacterial overgrowth and increased intestinal permeability.39 In the 1980s, development of non-alcoholic steatohepatitis (NASH) and small intestinal bacterial overgrowth was observed in humans after intestinal bypass and, interestingly, regression of hepatic steatosis after metronidazole treatment, suggesting a possible role for the gut bacteria in NAFLD.40 Disruption of the murine inflammasomes (see box 1) is associated with an increase in Bacteroidetes and reduction in Firmicutes and results in severe hepatic steatosis and inflammation.41 Faecal microbiota analysis of patients with NAFLD and NASH has produced variable results due to significant variation of patient demographics, severity of liver disease and methodology. A lower proportion of Ruminococcaceae was noted in patients with NASH compared with healthy subjects,42 and a study which characterised the gut microbiota of children with NASH, children with obesity and healthy controls showed that patients with NASH had a higher proportion of Escherichia compared with the other groups.43 Patients with NAFLD also have increased gut permeability, suggesting that translocation of bacteria or microbe-derived products into the portal circulation contributes to the pathogenesis.39 Alcoholic liver disease Since not all alcoholics develop liver injury, it appears that chronic alcohol abuse is necessary but not sufficient to cause liver dysfunction. Numerous animal model and human observational studies indicate that gut bacterial products like endotoxin may mediate inflammation and function as cofactors for the development of alcohol-related liver injury.44 Serum endotoxin levels are elevated in humans and rats with alcoholic liver disease, and monocytes from alcoholics are primed to produce cytokines after endotoxin exposure. Alcohol causes intestinal bacterial overgrowth in humans, and bacterial numbers were significantly higher in jejunal aspirates from patients with chronic alcohol abuse compared with controls, with similar findings in patients with alcohol-induced cirrhosis.45 The degree of overgrowth correlates with the severity of cirrhosis. Tsukamoto-French model mice fed intragastrically with alcohol for 3 weeks showed increased relative abundance of Bacteroidetes and Akkermansia spp and a reduction in Lactobacillus, Leuconostoc, Lactococcus and Pediococcus, while control mice showed a relative predominance of Firmicutes.46 Patients with alcoholic liver disease also show increased gut permeability, allowing translocation of bacteria and bacterial products to the liver.47 Autoimmune liver diseases These consist of primary sclerosing cholangitis (PSC), primary biliary cirrhosis (PBC) and autoimmune hepatitis and represent at least 5% of all chronic liver diseases. They are presumed autoimmune conditions, but the expectation is that the gut microbiota is relevant to pathogenesis, particularly because (A) PSC is associated with IBD and aberrant lymphocyte trafficking, and (B) significant gut:liver axes exist through bile acid signalling. Patients with PSC develop a distinct form of IBD; thus, understanding the relationship between PSC and IBD is essential in uncovering the pathogenesis of PSC, which remains largely undetermined.
However, it is likely that in genetically susceptible individuals, intestinal bacteria could trigger an abnormal or inadequate immune response that eventually leads to liver damage and fibrosis. Recently it was shown that patients with PSC have distinct gut microbiota. Analysis of colon biopsy microbiota revealed that patients with PSC-IBD and IBD showed reduced abundance of Prevotella and Roseburia (a butyrate-producer) compared with controls.48 ,49 Patients with PSC-IBD had a near-absence of Bacteroides compared with patients with IBD and control patients, and significant increases in Escherichia, Lachnospiraceae and Megasphaera. Randomised controlled trials (RCTs) investigating antibiotic therapy in PSC have shown antibiotics to be superior in improving biochemical surrogate markers and histological parameters of disease activity compared with ursodeoxycholic acid alone.50 In a recent prospective paediatric case series, oral vancomycin was shown to normalise or significantly improve liver function tests.51 There is evidence that mucosal integrity is compromised in patients with PSC, supporting the traditional leaky gut hypothesis of microbe-derived products translocating to the liver and biliary system to trigger an inflammatory reaction.52 It was also demonstrated that tight junctions of hepatocytes were impaired in patients with PSC, and infusion of non-pathogenic E. coli into the portal circulation caused portal fibrosis in animal models.53 These findings collectively suggest that bacterial antigens translocate across a leaky and possibly inflamed gut wall into the portal and biliary system to induce an abnormal immune response and contribute to PSC pathogenesis. PBC is a chronic cholestatic liver disease with an uncertain aetiology. It is generally believed to be an autoimmune disease triggered by environmental factors in individuals with genetic susceptibility. As yet, there have been no studies directly characterising the gut microbiota in patients, but molecular mimicry has been proposed as a mechanism for the development of autoimmunity in PBC, with serum antibodies of patients cross-reacting with conserved bacterial pyruvate dehydrogenase complex component E2 (PDC-E2) homologues of E. coli, Novosphingobium aromaticivorans, Mycobacterium and Lactobacillus species. Hence it has been speculated that these bacteria (of possible GI origin) may initiate molecular mimicry and the development of PBC in genetically susceptible hosts.54 Modulation of the microbiota as a therapy in liver disease Probiotics have shown promise in ameliorating liver injury by reducing bacterial translocation and hepatic inflammation.55 A recent meta-analysis concluded that probiotics can reduce liver aminotransferases, total cholesterol and tumour necrosis factor α and improve insulin resistance in patients with NAFLD.56 A recent study in patients with cirrhosis with ascites showed that the probiotic VSL#3 significantly reduced portal hypertension.57 A further study evaluated the role of FMT in modulating liver disease by transferring the NAFLD phenotype from mice with liver steatosis to germ-free mice.58 There remains a need for detailed descriptive and interventional studies focused on bacterial diversity and mechanisms linking gut dysbiosis with inflammatory, metabolic and autoimmune/biliary liver injury. IBD and the gut microbiota Early studies implicating bacteria in IBD pathogenesis focused on identifying a potential culprit that could initiate the inflammatory cascade typical of IBD.
Many organisms have been proposed: Mycobacterium avium subsp paratuberculosis and a number of Proteobacteria including enterohepatic Helicobacter, non-jejuni/coli Campylobacter and adherent and invasive E. coli. The focus has recently shifted with the realisation that the gut microbiota as a whole is altered in IBD. The concept of an altered gut microbiota or dysbiosis is possibly the most significant development in IBD research in the past decade. A definitive change of the normal gut microbiota with a breakdown of host-microbial mutualism is probably the defining event in IBD development.59 Changes in the gut microbiota have been repeatedly reported in patients with IBD, with certain changes clearly linked to either Crohn's disease (CD) or UC: the most consistent change is a reduction in Firmicutes.60 This has been balanced by reports of increased levels of Bacteroidetes phylum members,61 although a reduction in Bacteroidetes has also been reported.62 There is a suggestion that there may be spatial reorganisation of the Bacteroides species in patients with IBD, with Bacteroides fragilis being responsible for a greater proportion of the bacterial mass in patients with IBD compared with controls.63 Reduction in the Firmicutes species F. prausnitzii has been well documented in patients with CD, particularly those with ileal CD, although an increase in F. prausnitzii has been shown in a paediatric cohort, suggesting a more dynamic role for the species that merits further study.64 Other studies have also demonstrated a decrease in Firmicutes diversity, with fewer constituent species detected in patients with IBD compared with controls.65 Changes in the two dominant phyla, Firmicutes and Bacteroidetes, are coupled with an increase in abundance of members of the Proteobacteria phylum, which have been increasingly found to have a key role in IBD.66 Studies have shown a shift towards an increase in species belonging to this phylum, suggesting an aggressor role in the initiation of chronic inflammation in patients with IBD.67 More specifically, increased numbers of E. coli, including pathogenic variants, have been documented in ileal CD.68 The IBD metagenome contains 25% fewer genes than the healthy gut with metaproteomic studies showing a correlative decrease in proteins and functional pathways.69 Specifically, ileal CD has been shown to be associated with alterations in bacterial carbohydrate metabolism and bacterial-host interactions, as well as human host-secreted enzymes.69 A detailed investigation of functional dysbiosis during IBD built on this by including inferred microbial gene content from 231 subjects and an additional 11 metagenomes.70 This study identified enrichment in microbial pathways for oxidative stress tolerance, immune evasion and host metabolite uptake, with corresponding depletions in SCFA biosynthesis and typical gut carbohydrate metabolism and amino acid biosynthetic processes. Intriguingly, similar microbial metabolic shifts have been observed in other inflammatory conditions such as T2D,30 suggesting a common core gut microbial response to chronic inflammation and immune activation. In addition, recent work suggests a role for viruses in IBD, with a significant expansion of Caudovirales bacteriophage in patients.71 Modulation of the microbiota as a therapy in IBD Several clinical trials have examined the approach of modulating the microbiota in patients with IBD, many of which predate the ‘omics’ era. 
Such trials provide a ‘proof of concept’ for the role of the gut microbiota in IBD, but marrying up individual approaches with the complex multifactorial nature of IBD remains a challenge, particularly in addressing the different phenotypes and genotypes of disease and the different ‘phases’ of the disease process: for example, prophylaxis, maintenance of remission, treatment of relapses. In terms of probiotic research, one of the largest clinical trials in IBD was the use of E. coli Nissle 1917 in the setting of remission maintenance in UC. Patients (n=327) were assigned to a double-blind, double-dummy trial to receive either the probiotic or mesalazine.72 Both treatments were deemed equivalent with regard to relapse. E. coli Nissle is now considered an effective alternative to 5-aminosalicylate for remission maintenance in UC.73 There are two published clinical trials of the multistrain probiotic VSL#3 in the setting of mild to moderate flares of UC.74 Both demonstrate that high doses improve disease activity scores, but whether such improvements in scores are clinically meaningful for patients, particularly compared with other treatment options, remains to be clarified. An alternative approach is transplantation of the whole gut microbiota from a healthy donor: FMT. In IBD, a recent systematic review and meta-analysis has shown that across nine cohort studies, eight case studies and one randomised controlled trial, overall 45% (54/119) of patients achieved clinical remission. When only cohort studies were analysed, 36% achieved clinical remission.75 Since that meta-analysis, two randomised controlled trials in UC have shown discrepant results. One trial, in which two faecal transplants were given via the upper GI route, showed no difference in clinical or endoscopic remission between the faecal transplant group and the control group (given autologous stool).76 A second trial, in which patients with UC were randomised to weekly faecal enemas from healthy donors or placebo enemas for 6 weeks, demonstrated remission in a greater percentage of patients given FMT compared with the control group (given water enemas).77 There are unanswered questions regarding mode of delivery, frequency of delivery and optimal donor/host characteristics. Antibiotics demonstrate efficacy in particular groups of patients with CD, but some antibiotics may be detrimental, showing a complex interplay between host and microbiota. Patients who have had a resection for CD have a decreased rate of endoscopic and clinical recurrence when metronidazole or ornidazole are used as prophylactic therapies.78 Several studies have assessed the specific role of antimycobacterial therapies in CD treatment, but overall results are disappointing. There is no clinically relevant evidence base for the use of probiotics in CD. In terms of prebiotics, although an open label trial of fructo-oligosaccharide in CD showed promise,79 a subsequent randomised placebo-controlled trial of fructo-oligosaccharide did not support any clinical benefit.80 Restorative proctocolectomy with ileal-pouch anal anastomosis is the operation of choice for patients with UC requiring surgery. Pouchitis occurs in up to 50% of patients, although it is a significant clinical problem for only about 10%. Antibiotics are used as primary therapy; if single antibiotics fail, dual antibiotics given for longer periods of time, or antibiotics tailored to the microbiota of an individual patient, can be used.
VSL#3 reduced the risk of disease onset and maintained an antibiotic-induced disease remission in pouchitis.81 A meta-analysis has shown that VSL#3 significantly reduced the clinical relapse rates for maintaining remission in patients with pouchitis.82 CRC and the gut microbiota Many microbiome studies have focused on colitis-associated cancers83 or rodent preclinical models.84 Despite this, there is increasing evidence that the colonic microbiota plays an important role in the cause of sporadic CRC.85 Reduced temporal stability and increased diversity has been shown for the faecal microbiota of subjects with established CRC and polyposis,86 and now metagenomic and metatranscriptomic studies have identified an individualised oncogenic microbiome and specific bacterial species that selectively colonise the on-tumour and off-tumour sites.87 Several competing theories of the microbial regulation of CRC have emerged (figure 1) to explain these observations. The keystone-pathogen hypothesis88 and the α-bug hypothesis both state that certain, low abundance microbiota members (such as enterotoxigenic B. fragilis) possess unique virulence traits, which are pro-oncogenic and remodel the microbiome and in turn promote mucosal immune responses and colonic epithelial cell changes.89 Tjalsma et al90 have also proposed the ‘driver-passenger’ model for CRC: a first hit by indigenous intestinal bacteria (‘bacterial drivers’), which drive the DNA damage that contributes to CRC initiation. Second, tumorigenesis induces intestinal niche alterations that favour the proliferation of opportunistic bacteria (‘bacterial passengers’). For example, CRCs have an increased enrichment of opportunistic pathogens and polymicrobial Gram-negative anaerobic bacteria91 but it is not yet clear whether these opportunistic pathogens merely benefit from the CRC microenvironment or influence disease progression. However, colonic polyps demonstrate higher bacterial diversity and richness when compared with control patients, with higher abundance of mucosal Proteobacteria and lower abundance of Bacteroidetes.92 This may in part be explained by the mucosal defensive strategies designed to manage the commensal microbiota. For example, α-defensin expression is significantly increased in adenomas resulting in an increased antibacterial activity compared with normal mucosa.93 At present, human studies have involved small patient numbers, with evidence of sampling heterogeneity, limited tumour phenotyping and oncological data. Despite this, a small number of specific pathobionts have now been linked with adenomas and CRC including Streptococcus gallolyticus,94 Enterococcus faecalis95 and B. fragilis.84 E. coli is also overexpressed on CRC mucosa; it expresses genes that confer properties relevant to oncological transformation including M cell translocation, angiogenesis and genotoxicity.96 Enrichment of Fusobacterium nucleatum has also been identified in adenoma versus adjacent normal tissue and is more abundant in stools from CRC and adenoma cases than in healthy controls. F. nucleatum's fadA, a unique adhesin, allows it to adhere to and invade human epithelial cells, eliciting an inflammatory response97 and stimulating cell proliferation.98 Novel mechanisms from previously unassociated bacteria are also being described to explain how bacterial proteins target proliferating stem-progenitor cells. 
For example, AvrA, a pathogenic product of Salmonella, has been shown to activate β-catenin signals and enhance colonic tumorigenesis.99 Work has also focused on the metabolic function of the gut microbiome and dietary microbiome interactions in the aetiology of CRC. It is likely that the metabolism of fibre is critical to this. Metagenomic analyses have consistently identified a reduction of butyrate-producers in patients with CRC,100 a finding replicated in animals.101 The microbiome also plays an important role in the metabolism of sulfate, through assimilatory sulfate-reduction to produce cysteine and methionine, and dissimilatory sulfate-reduction to produce hydrogen sulfide (H2S). H2S is likely to contribute to CRC development, as colonic detoxification of H2S is also reduced in patients with CRC; it also induces colonic mucosal hyperproliferation.102 There is also evidence that differences in host genotype, which affect the carbohydrate landscape of the distal gut, interact with diet to alter the composition and function of resident microbes in a diet-dependent manner.103 Therefore it is possible that patients genetically predisposed to CRC have a modified metabolically active microbiome, which is determined by their genes and by their family environment and dietary habits. There is other evidence from global studies of cancer risk, that the microbiome is important in cancer risk. African Americans possess a colon dominated by Bacteroides, while in Africans Prevotella are more abundant.104 African Americans, who are at high risk of CRC, may have evolved a CRC-microbiota moulded by dietary habits and environmental exposures. Critically, mucosal Ki67 expression (a biomarker for cancer risk) may decrease or increase within 2 weeks of either a high fibre (>50 g/day) dietary intervention in African Americans or a high fat, high protein low fibre Westernised diet in African subjects. This short-term intervention leads to reciprocal changes in luminal microbiome co-occurrence network structures that overwhelm interindividual differences in microbial gene expression. Specifically, an animal-based diet increases the abundance of bile-tolerant microorganisms (Alistipes, Bilophila and Bacteroides) and decreases the levels of Firmicutes that metabolise dietary plant polysaccharides (Roseburia, Eubacterium rectale and Ruminococcus bromii).105 ,106 In the past decade, interest in the human microbiome has increased considerably. A significant driver has been the realisation that the commensal microorganisms that comprise the human microbiota are not simply passengers in the host, but may actually drive certain host functions as well. In sterile rodents, we see the dramatic impact that removing the microbiota has on nearly all aspects of the host's ability to function normally. This review highlights some key disease areas in which the microbiota and its microbiome are thought to have not just an association, but also a key modulatory role (table 1). By better understanding the mechanisms and contribution the microbiota make to these diseases, we hope to develop novel therapeutics and strategies to modulate the microbiota to treat or prevent disease. Additionally, in some instances it may be possible to use the microbiome to detect gut-related diseases before conventional diagnostics can. In the future we hope to use this information to stratify patients more accurately and for more efficient treatment. 
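As a purely illustrative sketch of how microbiome profiles might support this kind of patient stratification, the following Python example fits a simple classifier to synthetic relative-abundance data; the data, feature construction and model choice are assumptions made for demonstration and are not drawn from any of the studies cited above.

# Illustrative sketch: classifying disease status from microbial relative abundances.
# All data here are synthetic; the signal, labels and model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_taxa = 200, 50

# Synthetic relative abundances (each row sums to 1), with a weak signal in taxon 0.
abundances = rng.dirichlet(np.ones(n_taxa), size=n_samples)
labels = (abundances[:, 0] + 0.01 * rng.standard_normal(n_samples) > 0.02).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    abundances, labels, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Toy AUC on held-out synthetic samples: {auc:.2f}")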
A body of evidence also points to the gut microbiota being an environmental factor in drug metabolism, for example, inactivation of the cardiac drug digoxin by Eggerthella lenta in the gut. Thus, if we are to realise the vision of a personalised healthcare revolution, we must explore how the microbiome fits with this notion. This review was commissioned by the Gut Microbiota for Health expert panel of the British Society of Gastroenterology. Contributors AH, JRM, DHA, GDAH, LVT, FF, GMH and GH, MNQ, HS, KMT, EGZ and JK contributed to the conception/design of the work, drafting the work and revising it critically for important intellectual content and final approval of the version published. Competing interests AH has lectured for Yakult. Provenance and peer review Not commissioned; externally peer reviewed.
<urn:uuid:278b77a1-1fd8-4404-a5a8-b8feb645aa52>
CC-MAIN-2021-25
https://gut.bmj.com/content/65/2/330
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00078.warc.gz
en
0.918726
9,717
2.984375
3
Easily grown in organically rich, medium moisture, well-drained soils in full sun to part shade. Thrives in moist soils, and appreciates a summer mulch which helps retain soil moisture. Bloom occurs on old wood. Prune immediately after flowering (little pruning is usually needed however). Prune out weak or winter-damaged stems in early spring. Plants should be given a sheltered location and winter protection (e.g., mulch, burlap wrap) in USDA Zone 5, particularly when not fully established. Plants can lose significant numbers of flower buds or die to the ground in harsh winters (temperatures below -10 degrees F), thus respectively impairing or totally destroying the bloom for the coming year. Hydrangea quercifolia, commonly called oak leaf hydrangea, is an upright, broad-rounded, suckering, deciduous shrub that typically grows 4-6' (less frequently to 8') tall. It is native to bluffs, moist woods, ravines and stream banks from Georgia to Florida to Louisiana. It is noted for producing pyramidal panicles of white flowers in summer on exfoliating branches clad with large, 3-7 lobed, oak-like, dark green leaves. ‘Sike's Dwarf’ is a dwarf mounded cultivar that matures to only 2-3' tall and to 3-4' wide. It differs from the species by growing much smaller with smaller leaves and smaller flower panicles, and by having a more moderate growth habit with less frequent suckering from the roots. Elongated, conical flower panicles (to 3-4" long) of showy, mostly sterile, white flowers begin bloom in late spring. Flowers emerge white, gradually fade to light pink and then turn brown by late summer with good persistence of the brown seed panicles into winter. Distinctive, deeply-lobed, somewhat coarse, deep green, oak-like leaves (to 5” long) turn attractive shades of bronze, maroon and purple in autumn. Mature stems exfoliate to reveal a rich brown inner bark which is attractive in winter. No serious insect or disease problems. Some susceptibility to leaf blight and powdery mildew. Aphids are occasional visitors. Good specimen or accent for foundations or other locations near homes or patios. Group or mass in shrub borders or in open woodland areas. Good informal hedge. Interesting dwarf selection for small gardens. Exfoliating mature branches provide interesting color and texture in winter. May be grown in large containers.
<urn:uuid:9d60f3d4-cc7e-4a2a-901f-085fdcfbb9d0>
CC-MAIN-2016-18
http://www.missouribotanicalgarden.org/PlantFinder/PlantFinderDetails.aspx?kempercode=e255
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461862047707.47/warc/CC-MAIN-20160428164727-00185-ip-10-239-7-51.ec2.internal.warc.gz
en
0.918486
533
2.515625
3
Researchers believe a daily dose of nuts can help you slim down and lower your cholesterol. A research team from Louisiana State University looked at over 13,000 men and women and split them into groups according to their nut intake. Those who ate more than a quarter of an ounce of cashews, almonds, hazelnuts, brazil nuts, pistachios or walnuts a day were classed as ‘tree nut consumers’. ‘One of the more interesting findings was the fact that tree nut consumers had lower body weight, as well as lower body mass index (BMI) and waist circumference than non-consumers,’ says study author Carol O’Neil. Tree nut consumers also had higher levels of high-density lipoprotein cholesterol (good cholesterol) and lower levels of C-reactive protein, a marker of inflammation in the heart and body. Consumers were also 5% less likely to suffer from metabolic syndrome, which can lead to stroke, heart conditions, high cholesterol and diabetes. Maureen Ternus, executive director of the International Tree Nut Council Nutrition Research and Education Council, says: ‘In light of these new data and the fact that the FDA has issued a qualified health claim for nuts and heart disease with a recommended intake of 1.5 ounces of nuts per day, we need to educate people about the importance of including tree nuts in the diet.’
<urn:uuid:b4788bb2-0d27-4410-80bb-bdd8f5d0d0fb>
CC-MAIN-2019-18
https://www.marieclaire.co.uk/life/health-fitness/nuts-could-prevent-heart-disease-and-diabetes-150704
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527866.58/warc/CC-MAIN-20190419181127-20190419203127-00422.warc.gz
en
0.939009
324
2.6875
3
- Around the same time, Virginians also experienced conflict w/the Indians b/c of land, although the conflict played out slightly differently. After land-hungry Virginians attacked two Indian tribes, Indians raided outlying farms in retaliation in the winter of 1676.
- Governor William Berkeley, however, was reluctant to strike back b/c: (1) he had trade agreements w/the Indians and didn’t want to disrupt them and (2) he already had land and didn’t want competition anyway.
- So the angry colonists [many former indentured servants] rallied around recent immigrant Nathaniel Bacon, who held members of the House of Burgesses until they authorized him to attack the Indians and was consequently declared to be in rebellion by Berkeley.
- Throughout the summer of 1676, then, Bacon fought both Indians and supporters of the gov’t, even burning Jamestown itself to the ground. Even though the rebellion died w/Bacon in October, the point was made, and a new treaty in 1677 allowed more territory to be settled.
- Besides being a turning point in relations w/the Indians, Bacon’s rebellion had another very important consequence. As landowners realized that there wasn’t much land left to give to indentured servants, the custom stopped and they began looking for slave labor instead.
*The Introduction of African Slavery*
- As a consequence of Bacon’s rebellion and the reluctance of indentured servants to go to the Chesapeake [no more land], planters turned to slavery as a labor source.
- They had no real moral qualms about this b/c slavery had been practiced in Europe for centuries and European Christians believed that it was OK to enslave “heathen” people. Racism against Africans, which viewed them as inferior b/c of their skin color, had also been developing in England since the 1500s.
- Even though there was a slave system in the West Indies by the 1650s, it didn’t spread to the mainland colonies until the 1670s. Anyhow, when slavery did start in the colonies, what was it like?
- Slavery in the South – after 1677 slaves were imported incredibly rapidly…
<urn:uuid:81f92667-fdce-4845-895b-9f432f038471>
CC-MAIN-2016-50
https://www.coursehero.com/file/9096225/After-land-hungry-Virginians-attacked-two-Indians-tribes-Indians/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541518.17/warc/CC-MAIN-20161202170901-00374-ip-10-31-129-80.ec2.internal.warc.gz
en
0.9606
544
4.53125
5
What is a random sequence?
In cryptography papers there are lots of statements like "Alice chooses a random number k" or "Bob chooses a random element of F_p". Can one recognize a number or a sequence of numbers as random? Which of the following sequences is random: Answer: all of them are equally likely outcomes of 23 coin-flips.
Sérgio B. Volchan tells the history of the concept of randomness in mathematics in an article for the American Mathematical Monthly. It is quite fascinating IMHO how seemingly reasonable definitions of randomness were put forward and shot down, later to be replaced with the next definition. The most recent definitions preclude meaningful checks for randomness by examining finite parts of a sequence, so the conundrum remains: Is 7 a random number?
That's how to write manuals
The Jupiter ACE was a home computer produced in the UK in the 1980s. It had a FORTH interpreter instead of the usual BASIC of the C64, BBC Micro, etc. Its manual explains the inner workings of the machine in an accessible way. Compare that to the thousands of VBA books that keep the reader totally in the dark about what goes on behind the funny icons.
Surprising results with IPv6
Spam filters add complexity, which in turn makes v6 transition harder. Host A (running OpenBSD) has dual stack v4/v6 with a routable v4 address; Host B (running Plan9) has dual stack v4/v6 with a subnet-local v4 address. Both machines have a routable v6 address and run an MTA. So I assumed that it should be possible to send mail from A to B. Turns out to be not that simple. Plan9's MTA uses various heuristics to find out if incoming mail is spam (as do other MTAs). One of the checks is to connect to the MTA listed in the MX record for the sender's address' domain. Host A's MX record is v4-only, so B cannot connect to the MTA, so it rejects the mail. Not only the sender and the receiver have to be v6-enabled, but also the sender's MX (and probably the blacklist servers, too).
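To make that last point concrete, here is a small sketch (my own illustration, not part of the original post) that checks whether the hosts in a domain's MX records publish AAAA records, i.e. whether a v6-only MTA could reach them at all. It assumes the third-party dnspython package, and the domain at the bottom is only a placeholder.

```python
# Sketch: is a domain's mail infrastructure reachable over IPv6 at the DNS level?
# Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def mx_ipv6_status(domain: str) -> None:
    try:
        mx_records = dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{domain}: no MX records found")
        return
    for mx in mx_records:
        host = str(mx.exchange).rstrip(".")
        try:
            aaaa = dns.resolver.resolve(host, "AAAA")
            addresses = ", ".join(rdata.address for rdata in aaaa)
            print(f"{host}: has AAAA records ({addresses})")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            # A v4-only MX host: a v6-only MTA cannot connect to it, so
            # callback-style spam checks performed from that MTA will fail.
            print(f"{host}: v4-only (no AAAA record)")

mx_ipv6_status("example.org")  # placeholder domain
```

The same kind of check would apply to any other host the receiving MTA needs to contact during filtering, such as DNS blacklist servers.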
<urn:uuid:a953cfe0-199c-41ba-8253-dd92d82d8edd>
CC-MAIN-2016-44
http://pestilenz.org/cgi-bin/blosxom.cgi/2008/01/15
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00018-ip-10-171-6-4.ec2.internal.warc.gz
en
0.880078
482
2.796875
3
Industrial electronics is a branch of electronics that deals with power electronic devices such as thyristors, SCRs, AC/DC drives, meters, sensors, analyzers, load cells, automatic test equipment, multimeters, data recorders, relays, resistors, semiconductors, transistors, waveguides, scopes, amplifiers, radio frequency (RF) circuit boards, timers, counters, etc. It covers all of the methods and facets of control systems, instrumentation, mechanism and diagnosis, signal processing and automation of various industrial applications. The core research areas of industrial electronics include electrical power machine designs, power conditioning and power semiconductor devices. A lot of consideration is given to power economy and energy management in consumer electronic products. So to put it simply, industrial electronics refers to equipment, tools and processes that involve electrical equipment in an industrial setting. This could be a laboratory, automotive plant, power plant, construction site, etc. Industrial electronics is also used extensively in chemical processing plants, oil/gas/petroleum plants, mining and metal processing units, and electronics and semiconductor manufacturing. The scope of industrial electronics ranges from the design and maintenance of simple electrical fuses to complicated programmable logic controllers (PLCs), solid-state devices and motor drives. Industrial electronics can handle the automation of all types of modern day electrical and mechanical industrial processes. Some of the specialty equipment used in industrial electronics includes variable frequency converter and inverter drives, human machine interfaces, hydraulic positioners and computer or microprocessor controlled robotics. Industrial electronics is a large family indeed, but remember it is different from entertainment and consumer electronics. Instead of thinking of DVD players and computers, we are talking about things such as capacitors, motor drives, panel meters, limit switches and testers. Actually the list goes on and on. Because industrial electronics covers such a wide range of devices, it's important that you keep on top of your maintenance schedule or you will find yourself replacing items quite frequently. You should also keep abreast of all the new developments in the electronic world, as new items are being manufactured all of the time and you may want to upgrade some of your components to stay up to date. As with all types of electronics, you should also be aware of all of the safety hazards of each individual item.
<urn:uuid:7e652b52-311d-4891-8c82-2b86b701d244>
CC-MAIN-2017-22
http://www.industrial101.com/electronics/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607242.32/warc/CC-MAIN-20170522230356-20170523010356-00258.warc.gz
en
0.907594
470
2.9375
3
Your Baby Is Able To Respond To Small Stuff, Such As "Bye-Bye!"
During these months, your baby might say "mama" or "dada" for the first time, and will communicate using body language, like pointing and shaking his head. Your baby will pay even more attention to your words and gestures and will try very hard to imitate you.
What you need to know
At this age, even before he can talk, your baby communicates through gestures: pointing, shaking his head "no" and waving "bye-bye" all demonstrate his ability to communicate, understand, and respond to language. By the end of the first year, your baby will follow simple requests from you, like waving bye-bye, will enjoy peek-a-boo, and will babble with the inflections of typical speech. Continue talking to your baby using names as well as repetitive word games, and ask your baby to point to familiar objects.
What you need to do
Make learning a whole-body experience: touch your baby's toe when you say the word "toe", or point to your own ear and say "mommy's ear". Face your baby when you speak to him and let him see your facial expressions and lip movements. Read to your baby from large, colourful picture books, and encourage him to turn the pages. Give your baby a chance to respond and answer your questions.
Disclaimer: Content presented here is for information purposes only; please consult with your doctor for any health queries.
<urn:uuid:bb9b8af4-60be-4dc1-8f3d-36e21769108e>
CC-MAIN-2023-06
https://www.parentlane.com/baby/baby-development/your-baby-is-able-to-respond-to-small-stuff-such-as-bye-bye
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00805.warc.gz
en
0.943985
324
3.125
3
Those beautiful stones...
Alexandrite is a rare variety of Chrysoberyl. Alexandrite was discovered near Ekaterinburg in the Ural Mountains in Russia in 1830, and was named after Alexander II. Other sources now include Sri Lanka, Burma, Brazil, Zimbabwe, Madagascar and North America. The important feature of Alexandrite which makes it highly unusual and collectable amongst gemstones is its strong color change when viewed in different lights. Many other stones show a similar property, but nowhere near as strongly, and not with such highly contrasting colors.
Amethyst is the clear purple, mauve or violet form of the mineral quartz. As such it is related to citrine, which is the yellow form of quartz, and also to rock crystal, which is the colorless variety. It is possible for some specimens of quartz to be different colors in different areas. Amethyst and citrine are varieties of quartz which can both occur in the same stone. As our photograph clearly shows, the difference in color at the two ends creates a striking contrast. Such stones are sometimes called ametrines, but we think they could equally well be called citrysts or citrethysts. Amethyst gets its name from a Greek word amethustos meaning "not drunken", as it was believed to protect against intoxication by alcohol. It would be interesting to test this ancient theory whilst being breath-tested, but don't cite us in your defense. We believe that amethyst is one of the most beautiful of the colored gemstones, particularly in its better qualities. As with other gemstones, the "best" color can vary according to personal preference, and the quality of amethysts can vary considerably. The most important attribute should be an attractive color. The rich deep violet color is generally the favorite and most expensive, but a stone of medium color intensity, with plenty of sparkle, can also be very attractive. Many of the amethysts for sale in High Street stores are of only low to medium quality, being either pale or quite included, and as a result are not particularly attractive.
Aquamarine is the sky blue or sea blue variety of beryl; sometimes it is sea green, which is less desirable. The word aquamarine literally means seawater, and is very frequently shortened to aqua. While all aquamarines are beryl, not all beryls are aquamarines. The color of aquamarine is due to traces of iron present as an impurity in the crystal structure of beryl. As with many gemstones, the color of aquamarine has almost always been improved by treatment of some kind. Heat treatment is used for aquamarines, to turn greenish, yellow or brown stones into a beautiful blue, and the color change is permanent. Even the best aquamarines are fairly pale compared with other gemstones such as sapphire. Large stones benefit from the effects of color saturation, and as aquamarine, unlike emerald, tends to form quite large clear crystals, this means that large aquamarines are relatively easy to find, and the price does not rise so steeply with size as it does with many other stones. It also means that small stones often lack color intensity and are therefore not as attractive. Small aquamarines with good color are therefore harder to obtain, and relatively expensive for their appearance. For small stones it may be preferable to use Ceylon sapphire instead.
Citrine is the clear yellow or golden form of the mineral quartz. As such it is related to amethyst, which is the purple form of quartz, and also to rock crystal, which is the colorless variety.
It is possible for some specimens of quartz to be different colors in different areas. Amethyst and citrine are varieties of quartz which can both occur in the same stone. As our photograph clearly shows, the difference in color at the two ends creates a striking contrast. Such stones are sometimes called ametrines, but we think they could equally well be called citrysts or citrethysts. The hardness of citrine is 7 on the Mohs scale, the same as other quartz. Citrine is often incorrectly and ignorantly sold as topaz. Although topaz and citrine can often be similar colors, they are both completely distinct gemstones. Often the owners and wearers of citrine rings call them topaz, presumably along the lines of wishful thinking. Many topaz stones are a richer color than citrine, containing more orange coloration.
It may seem surprising that diamond is simply carbon, just like charcoal or graphite. In fact carbon has at least two other rare, and only recently discovered forms, or allotropes, known as fullerenes. The difference is caused by the different types of bonding between adjacent atoms to form different types of crystalline structure. In diamond, each carbon atom is bonded to four other carbon atoms in a tetrahedral structure, like a pyramid. Each link or bond is the same length, and the tetrahedral formation is therefore completely regular. It is the strength and regularity of this bonding which makes diamond very hard, non-volatile and resistant to chemical attack. Theoretically a perfect diamond crystal could be composed of one giant molecule of carbon. Carbon is a non-metallic element with the atomic number of 6, and an atomic weight of 12. In combination with oxygen and hydrogen it is contained in all living things. In the form of graphite it appears black or dark gray, opaque, and is very soft, whereas in the form of diamond it is clear, colorless, and extremely hard. Diamond possesses many qualities which make it an ideal gemstone. It is extremely hard, and also very tough and hard-wearing, and this also helps it to take a very high polish. Some hard materials are brittle, which detracts from their durability. There are some things which are harder than a diamond. In its pure form it is colorless, has a high refractive index, and so has a very high luster. It possesses high dispersion, meaning that different light wavelengths are refracted differently, giving a strong scintillating play of prismatic colors.
Diamonds seem to have been known for about 3,000 years, being mentioned in Exodus chapter 28, however in early times other hard minerals were often confused with diamond. It is thought that the earliest diamonds were found in about the 12th century B.C. in India, which remained the most important, if not the sole, source until 1725, when diamonds were discovered in Brazil. The Indian and Brazilian deposits had been almost exhausted when, in 1866, the Eureka diamond was discovered in South Africa, followed by the Star of South Africa in 1869. Shortly afterwards, the great South African diamond rush had started, and South Africa remains one of the world's most important sources of diamonds today. Diamonds have since been discovered in many other regions of the world, including Russia and Australia. Until the South African finds, diamonds were so rare and valuable that they were only owned by the very wealthy. Through the publicity and promotion given to diamonds largely by the De Beers Company, and through the Diamond Promotion Service, diamonds have become the most desired gemstone.
Thanks to large scale mining, and the development of efficient cutting methods and equipment, diamonds have now become a consumer luxury affordable to the masses. Mass production jewelry manufacturing techniques have also helped to bring diamond rings and other diamond jewelry into very affordable, even commodity, price ranges.
Emerald is the grass green variety of the gemstone called beryl. Although all emeralds are beryl, not all beryls are emerald. Pure beryl is colorless, often called white, and although quite rare, tends not to be valuable because it does not have much brilliance. Colors, as in many gemstones, are caused by small amounts of impurity, usually metallic oxides. This is another case where impurity is desirable. Chromium, in the form of chromic oxide, causes the bright grassy green coloring in beryl, thereby producing emeralds. Vanadium can also affect the exact shade, as may traces of iron. It is also possible to have green beryl which is not emerald, because the coloring agent is not chromium. Emerald, along with other beryls, is quite hard, having a hardness of 7.5 to 8 on the Mohs scale, compared with 10 for diamond, 9 for corundum, and 8 for topaz. Hardness is generally a desirable feature in gemstones.
The earliest known source of emerald was near the Red Sea in Egypt, the so-called Cleopatra's emerald mines. They were probably worked from about 2000 B.C.; apparently their location was lost in the middle ages, and not rediscovered until 1818. Most emeralds used in ancient jewelry are believed to have come from these mines. They are not worked nowadays because of the low quality of crystals found. Emeralds have been found in Austria since Roman times, in the Legbach ravine at Habachtal near Salzburg. These are no longer commercially mined. Colombia is generally recognized as the source of the world's finest quality emeralds, both in the past and the present. The Colombian Indians were using them before 1537, when Quesada conquered Colombia. Later the Spanish discovered that the emerald mines were at Somondoco, which means "god of the green stones", and which is now known as Chivor. The best colored Colombian emeralds are said to be those from the Muzo mine, although another mine at Cosquez is also highly rated. Russia has been another important source of emeralds in the past, most Russian emeralds coming from Sverdlovsk or Ekaterinburg in the Ural Mountains. Emeralds were discovered in Australia in 1890 in New South Wales. Emeralds were discovered between 1927 and 1929 at Gravelotte in South Africa, followed by other sources. Another important source of superb quality emeralds, usually only of small size, is Sandawana in Zimbabwe, formerly Southern Rhodesia. These were discovered only in 1956. Emeralds were known in India from antiquity, but their source is not certain. The earliest known Indian source was found in 1929 at Aravalli in Rajasthan, with other sources being discovered since. The quality of Indian emeralds is very variable, but most are of lower quality and are often polished as beads. Other sources of emerald include Norway, North Carolina, Connecticut, Maine, and New Hampshire, although none of these are very important.
Garnet is a naturally occurring gemstone. Its name comes from Latin granatus meaning seed, because it often resembles small round seeds when found in its matrix rock. Rather than a single gemstone, garnet is a family of related minerals, some of which occur as gemstones. Each has a common crystal structure, and a similar chemical composition.
The popular understanding of garnet is as an inexpensive dark red stone. Because it is relatively common and inexpensive, it is often thought of as "only garnet", and as being inferior. This bias extends to other rare and attractive forms of garnet. Garnet occurs naturally in a large range of colors including red, orange, brown, green, and yellow. Its variability of color reflects the variations in its composition. There are two main theoretical groups or "families" of garnet: pyrope, almandite, and spessartite, which are all (metal) aluminum silicates, and uvarovite, grossularite, and andradite, which are all calcium (metal) silicates. In practice, there are probably very few garnets with the precise pure chemical composition shown for their type; almost all garnets are of mixed types, where one type is partially replaced by another type.
Demantoid garnet is a rare and beautiful bright grass green sub-variety of andradite garnet. It appears to have first been discovered around 1892 in the Bobrovka area of Russia. The Bobrovka is a small tributary of the River Tschussowaja in the Sissersk region on the western side of the Ural Mountains. It was at first thought to be emerald, which is found nearby, and has been erroneously called "Uralian emerald". The name demantoid means diamond-like, because it has a very high adamantine luster, and a color dispersion higher than diamond. The only disadvantageous property of demantoid is its low hardness figure of about 6.5 on the Mohs scale. It is the softest of the garnets, and because of this is more suitable for use in brooches, pendants, or ear-rings, rather than rings. The brilliant color of demantoid garnet is due to partial replacement of the silicate by chromic oxide. A diagnostic characteristic of demantoid is the inclusion of radiating byssolite (asbestos) fibers in a pattern described as a horse-tail. There is no other green stone which shows this feature. In late Victorian times, and early in the twentieth century, demantoid became a very sought after stone. It commanded high prices because it has never been available in large quantity. In recent decades, it has been unobtainable as newly mined stones, and has only been available from antique jewelry. Recently, small finds have again been made in Russia, and a small quantity of fine quality stones have recently come onto the market. Gemstone lovers wishing to acquire a piece of demantoid garnet should take this opportunity to do so. If the current seams of demantoid run out, there may be another century without new stocks of demantoid becoming available.
Tsavolite, previously called tsavorite, is a bright green variety of grossular garnet, its color being induced by the presence of chromium.
Opal is a paradoxical gemstone, and one of the most fascinating. It is a form of quartz, but is not a form of quartz. Quartz is very common, yet has many rare and precious gem varieties. Opal itself has numerous varieties. It is the most colorful gemstone, but some forms are colorless. It can be very bright and beautiful, and it can be dull and dead. It is best known for its flashes of color, but some varieties have no flashes of color, and are still opals. It can be black, and it can be white. Its best known attribute, the brilliant flashes of many colors, is not called opalescence, but iridescence. Some people think opal is unlucky, but it is one of the most valuable and desirable of gems. Opal is a variety of quartz. Quartz in turn is silicon dioxide, one of the commonest minerals on earth.
Quartz exists in a number of different forms; ordinary sand is one form, but there are numerous gemstone forms of quartz. Actually, because opal is a gel, it is, strictly speaking, not a form of quartz. Quartz is a crystalline form of silicon dioxide, while opal is a solid gel. However, the chemical formula is the same except that opal is hydrous, that is, it contains some water which is chemically attached to the silicon dioxide molecules.
Pearls are organic gemstones. Most gemstones are formed from inorganic substances, but a number of gemstones are from organic sources, that is from living things, either plants or animals. Natural or real pearls come mainly from oysters, although there are other bi-valve mollusks which can produce them. Cultured pearls are produced by artificially introducing a foreign object into the fleshy part of oysters, which becomes coated with nacre in a similar manner to natural pearls. Imitation pearls are also made in various ways. Pearls are formed naturally by the oyster when a foreign object enters the shell and causes irritation to its soft tissue. The oyster forms a secretion around the object as a form of protection. The foreign object can be a number of different things including a grain of sand or a parasite. In time the coating builds up in iridescent layers. Pearls can be almost any shape, but round ones are generally more desirable. The hardness of pearl is 3.5 to 4.0. Imitation pearls, usually called simulated pearls, have been produced for many years. They can be made with a plastic core or of mother of pearl, coated with a layer containing fish scales which gives the iridescent effect.
Peridot is a bright yellow green or golden green variety of olivine. It was originally found on Egypt's St. John's Island, once known as Topazios, in the Red Sea, which is now known as Zeberget. It is also found in Burma, Sri Lanka, the USA and Norway. Because their hardness is lower than 7, they are not ideal for use in rings, and should be treated with reasonable care. The color is bright golden green, but can vary to darker green or greenish yellow. Peridot has also been known as Chrysolite, although this is an old name which was applied fairly indiscriminately to any yellow and greenish yellow stones. It was also once incorrectly called topaz. There are also brown peridots. Since 1952 many stones believed to be brown peridots have been found to be a different mineral called Sinhalite.
Ruby is the usual name for transparent red corundum. Ruby is red or pink. Blue or any other color of corundum is usually called sapphire. Corundum is the mineralogical name for aluminum oxide. Corundum can be colorless, red, pink, blue, black, brown, orange, yellow, green, indigo, violet, or mauve. Red corundum and most pink corundum is called ruby; all other colors are called sapphire, usually with the color specified as a prefix to the word sapphire, for example, yellow sapphire. Pure corundum is colorless, often called white, and although quite rare, tends not to be valuable because it does not have much brilliance. Colors, as in many gemstones, are caused by small amounts of impurity, usually metallic oxides. This is a case where impurity is desirable. Corundum is very hard, having a hardness of 9 on the Mohs scale, compared with 10 for diamond, and 8 for topaz. Hardness is generally a desirable feature in gemstones. Other uses for corundum, because of its hardness, are as watch bearings, watch glasses, and as an abrasive.
Originally, the best sapphires and rubies came from Burma, where they are believed to have been mined possibly from prehistoric times. Certainly they appear to have been worked during the times of Marco Polo. Thailand, previously called Siam, is an important source of attractive ruby. Thai rubies are usually pink rather than red, and often slightly pale and silky. Many people seem to believe that the darker the ruby the better, just as many seem to believe the opposite. Neither of these opinions is correct. If you think, even briefly, about this it becomes obvious why. A very dark ruby would appear black, and would not be very attractive or desirable, the darkness often being caused by inclusions. An extremely pale ruby would be colorless, and not particularly attractive or valuable. As usual, the truth lies between the two extremes. The most desirable rubies are generally those with an intense red color, and plenty of sparkle and life. These latter two factors are usually helped by high optical clarity and skilful cutting. Ultimately which is "best" is a subjective matter, and personal preference is important. Our usual advice to potential customers is to buy whichever color of ruby they personally find the most attractive. We also think it's slightly sad that we need to give this advice. Buy what you like, using your own judgment, rather than allowing yourself to be a slave to fashion and buying what you think will impress other people. The main choice in the color of rubies depends largely on whether you prefer red or pink. One important factor when selecting a ruby is to ensure that it will not clash with your nail varnish or other clothing and accessories. This is a more important factor with ruby than with almost any other gemstone. Colorless diamonds, blue sapphires, and green emeralds hardly ever clash with other colors, whereas reds and pinks require considerably more care when mixing with similar colors.
Sapphire is the usual name for transparent corundum. The usual color associated with sapphire is blue, but sapphire can be almost any color. Corundum is the mineralogical name for aluminum oxide. Corundum can be colorless, red, pink, blue, black, brown, orange, yellow, green, indigo, violet, or mauve. Red corundum and most pink corundum is called ruby, blue corundum is called sapphire, and other colors are also called sapphire, usually with the color specified as a prefix to the word sapphire, for example, yellow sapphire. Brilliant orange sapphires are sometimes called padparadscha. Pure corundum is colorless, often called white, and although quite rare, tends not to be valuable because it does not have much brilliance. Colors, as in many gemstones, are caused by small amounts of impurity, usually metallic oxides. This is a case where impurity is desirable. Chromic oxide causes brilliant red coloring in corundum, thereby producing rubies. Ferric oxide causes yellow coloration, while titanium oxide produces vivid blue. In fact the coloration of sapphire is not quite so simple as this. The titanium and iron are usually present in the form of ilmenite, a mineral which is an iron titanium oxide, FeTiO3. Ilmenite is not isomorphous with aluminum oxide. Isomorphous means being able to replace the host mineral within its crystal structure. Instead ilmenite is present as a microscopic inclusion, in the form of colloidal particles. This colloidal nature may be responsible for other optical effects such as "silk", asterism, and color banding.
Corundum is very hard, having a hardness of 9 on the Mohs scale, compared with 10 for diamond, and 8 for topaz. Hardness is generally a desirable feature in gemstones. Other uses for corundum, because of its hardness, are as watch bearings, watch glasses, and as an abrasive. Originally, the best sapphires and rubies came from Burma, where they are believed to have been mined possibly from prehistoric times. Certainly they appear to have been worked during the times of Marco Polo. Kashmir is another source of very fine sapphires, famous for its cornflower blue stones. Thailand, previously called Siam, is an important source of attractive sapphire. The term Ceylon sapphire is frequently used to denote pale to medium sapphires. Unless the stone is known to originate from Sri Lanka, as it is now called, such sapphire should accurately be called "Ceylon-type" sapphire. Currently most dark sapphires come from Australia, and the term "Australian sapphire" is often used to denote dark colored sapphires, in a similar way to the term "Ceylon sapphire" for lighter stones. Sapphires are also found in Montana and Colorado in the USA, and in India, with small quantities being found in numerous other countries. We are frequently informed, by partially educated customers, that the darker the sapphire the better. We are equally frequently and erroneously told the opposite. If you think, even briefly, about this it becomes obvious why. A very dark sapphire would appear black, and would not be very attractive or desirable, the darkness often being caused by inclusions. An extremely pale sapphire would be colorless, and although rarer than black sapphire, is not particularly attractive or valuable. As usual, the truth lies between the two extremes. The most desirable sapphires are generally those with an intense blue color, and plenty of sparkle and life. These latter two factors are usually helped by high optical clarity and skilful cutting. Ultimately which is "best" is a subjective matter, and personal preference is important. Our usual advice to potential customers is to buy whichever color of sapphire they personally find the most attractive. We also think it's slightly sad that we need to give this advice. Buy what you like, using your own judgment, rather than allowing yourself to be a slave to fashion and buying what you think will impress other people.
Tanzanite is the name given to transparent blue zoisite. It was discovered in Tanzania in 1967, and was introduced into jewelry in 1969 by Tiffany & Co. of New York. It is usually blue, lilac blue, or deep violet blue, but other colors are possible, including green, yellow, pink, brown and khaki. These colors and also paler blue stones are often heat treated to produce the preferred deep blue color. Tanzanite is slightly fragile and can fracture badly; ultrasonic cleaning should be avoided, but otherwise it is very suitable for jewelry, being a very beautiful stone, similar to sapphire. Famous examples: a specimen named "The Midnight Blue", of 122.7 carats, is located at the Natural History Museum in Washington, DC, USA.
Topaz is one of the best known gemstones. Even its name sounds like something exotic and fabulous from The Arabian Nights, or poetic like "silken Samarkand". In fact its name is so popular that most of the owners of a citrine claim to own a topaz! The name is believed to have derived from the Greek word topazos, the ancient name for St. John's Island in the Red Sea, or from the Sanskrit tapas meaning fire.
Topaz is well known to be yellow, and in ancient times all yellow stones were called topaz. Nowadays we know better. Topaz can also be colorless, blue, green, pink, orange or brown. The classical precious topaz is yellow or yellow to orange-brown in color. Sherry or Madeira (I suppose it goes well with the tapas!) would best describe the most desirable color. In the last 10 years or so, jewelers' windows have become filled with blue topaz, which is very attractive and inexpensive, and has to some extent become a substitute for Ceylon sapphire. Blue topaz does occur naturally, but almost all commercially available blue topaz is produced from less attractive colors which are irradiated and heat treated to turn them blue. This treatment produces a stable color, and normally the stones are not radioactive when they are released on the market, although there have been cases where stones with an unsafe level of radiation have been sold. There are distinct hues of blue topaz, which we presume arise because of the different treatments. The most usual colors are known as "London Blue", "Swiss Blue" and "Sky Blue"; we have listed these in order from the deepest to the palest colors.
Tourmaline is generally thought of as green, but can be almost any color; indeed some tourmalines display two or more colors within the same crystal. Because of the naturally occurring shape, tourmalines are often cut as long baguettes, emerald cuts, or ovals. Large tourmalines are relatively common compared with other gemstones, so they are ideal for large jewelry pieces. As with all gemstones, the most attractive colors and qualities are more expensive than lower qualities, and large desirable pieces are not cheap. Tourmaline exists in more colors than any other gemstone. The most common color is a dark green, but bright green chrome tourmalines are seen, as are blue, red, pink, orange, yellow, colorless, brown, violet and black. Strongly colored pink tourmaline is sometimes called rubellite.
Chatoyancy or chatoyance literally means cat's eye in French. It is seen best in cat's eye chrysoberyl, but is also found in a few other gemstones, including tourmaline.
<urn:uuid:9f0a1f68-43fe-423d-b9c2-a426e5d711db>
CC-MAIN-2020-24
https://beauty.bgfashion.net/article/3074/40/those-beautifull-stones
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00194.warc.gz
en
0.966768
5,864
2.75
3
Ask Dr. Dairy: Can dairy foods help manage weight? In this series, Dr. Greg Miller, Ph.D., FACN, answers questions received from the health and wellness community. Question: Can dairy foods help you manage your weight? Answer: A healthy eating pattern, which includes low-fat or fat-free dairy foods, provides a foundation for managing your weight. Research indicates eating dairy foods, like milk, cheese and yogurt, is not linked to weight gain when consumed within calorie limits. As part of a higher protein eating pattern, dairy foods can help with your weight management goals, especially when consumed within a calorie restricted diet paired with physical activity. Dairy foods like milk, cheese and yogurt contain high-quality protein. Research shows that eating a higher-protein diet can help you manage your weight and feel full. In addition, a higher-protein eating pattern can help maintain lean body mass while you’re losing weight. Since more muscle can help burn more calories, preserving lean mass may help people maintain a healthy weight. A diet higher in protein along with resistance exercise can optimize your body’s ability to build muscle from carbohydrates or fat. Researchers continue to study the complex effect of protein on satiety, food consumption and body weight. Dietary intervention studies have demonstrated that higher-protein diets can help enhance satiety, reduce hunger, and fit into a weight loss plan. To help with appetite control, satiety and weight management, more protein is needed than the Recommended Dietary Allowance (RDA) (0.8 grams/kg body weight), but the amount is still within the recommended range. Research supports a higher protein meal plan for weight management that provides ~1.2 to 1.6 grams of protein per kilogram of body weight. This translates into approximately 68-90 grams of protein per day for a woman weighing 125 pounds and 95-127 grams of protein per day for a male weighing 175 pounds. Managing weight can be challenging because it often involves making several lifestyle changes, from eating smaller portions and making better food choices to fitting in more physical activity. People may be tempted to cut calories by reducing nutrient-dense foods from meals when they’re trying to lose weight. However, instead, people can focus on reducing nutrient-poor foods that are sources of excess calories. For those who want to reduce fat and/or calories in their meal plans and still include dairy, there are many options to choose from, including low-fat and fat-free milk and yogurt, as well as part-skim and reduced-fat cheeses, which contain at least 25 percent less fat than regular cheese.
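The gram figures above follow directly from the 1.2-1.6 g/kg guideline. As a quick arithmetic check, here is a minimal Python sketch (an illustration, not from the original article) that converts pounds to kilograms and applies that range.

```python
# Verify the protein ranges quoted above from the 1.2-1.6 g per kg guideline.
LB_TO_KG = 0.4536  # one pound is about 0.4536 kilograms

def daily_protein_range(weight_lb, low_g_per_kg=1.2, high_g_per_kg=1.6):
    weight_kg = weight_lb * LB_TO_KG
    return weight_kg * low_g_per_kg, weight_kg * high_g_per_kg

for label, pounds in [("125 lb woman", 125), ("175 lb man", 175)]:
    low, high = daily_protein_range(pounds)
    print(f"{label}: about {low:.0f}-{high:.0f} g protein per day")
# Prints roughly 68-91 g and 95-127 g; the small difference from the quoted
# 90 g upper figure is just rounding.
```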
<urn:uuid:436f18f0-587e-432b-8fa3-027c8b1a25d3>
CC-MAIN-2022-40
https://www.usdairy.com/news-articles/can-dairy-foods-help-you-manage-your-weight
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00751.warc.gz
en
0.954829
541
3.015625
3
Moyamoya disease is a rare neurological disorder involving the progressive narrowing of two of the major arteries (the internal carotid arteries) supplying blood to the brain. As a result of this progressive narrowing, a new network of small blood vessels forms around the base of the brain in an attempt to compensate for the reduced blood flow. The appearance of this new network of blood vessels resembles a “puff of smoke” on cerebral angiogram, giving rise to the name “moyamoya” - from the Japanese expression for something “hazy just like a puff of smoke drifting in the air.” In many instances, this new vessel formation is insufficient to compensate for the progressive occlusion and blockage of the internal carotid arteries, resulting in transient or permanent brain damage (see symptoms below).
Although a high incidence of Moyamoya disease is found in people of Asian descent, especially Japanese, it has now been recognized world-wide, and in all ethnic groups. It appears to be more common in females and is more common in children, accounting for about 6% of childhood strokes in Western countries. Adult onset may also occur. The etiology or cause of Moyamoya disease is unknown, although rare familial cases have suggested a genetic influence (and three genes have tentatively been identified in primary Moyamoya disease). Moyamoya syndrome (or “secondary” Moyamoya) is different from primary or idiopathic Moyamoya disease, as it develops secondary to an underlying disorder such as Down syndrome, sickle cell disease, Williams syndrome, or neurofibromatosis, or may occur after brain radiation therapy.
Symptoms in Moyamoya disease result from progressive blockage of the major intracranial blood vessels and the resulting loss of neurological function, which may be either transient or permanent.
Figure: an angiogram taken in the anterior-posterior direction (front to back) showing the obliteration of the carotid artery, which is now trying to grow new arteries to supply the brain.
- Stroke: weakness or sensory disturbance in an arm and/or leg on one side of the body, difficulty speaking, visual abnormalities, or problems walking. Hemorrhagic stroke, or bleeding into the brain, is more common in adults and results from rupture of the fragile moyamoya vessels.
- Transient ischemic attack or TIA: temporary or transient stroke-like symptoms that fully reverse and resolve.
- Headaches: a severe, persistent and progressive pattern.
- Progressive cognitive or learning impairment: due to progressive hypoperfusion of the brain.
Figure: one of the images obtained from a Diamox SPECT scan. The various colors reflect differing levels of blood flow to the brain.
Diagnosis of Moyamoya disease involves four components:
- Detailed neurological and genetic evaluation to differentiate primary from secondary forms of Moyamoya.
- Brain imaging: in the acute setting, special stroke protocol imaging studies are necessary to look for evidence of stroke or ischemic brain injury.
- Vascular or vessel imaging: a range of modalities for vessel-specific imaging are used, including CT and MR angiography, and the “gold standard”, cerebral angiography.
- Perfusion studies: special neuroimaging modalities are often necessary to assess and quantify the degree of hypoperfusion (lack of blood supply) of the brain. This includes state-of-the-art equipment such as CT or MR perfusion scans, Diamox SPECT scans, and PET scans.
The diagnosis, treatment and management of Moyamoya disease require a collaborative team approach between numerous specialists, including neurosurgeons, neuroradiologists, geneticists, neuropsychologists, and physical, occupational and speech therapists. Treatment strategies are aimed at preventing recurrent symptoms, including stroke, and non-stroke symptoms such as progressive cognitive or learning impairment. General treatment strategies aim at preventing the clotting of blood in the narrowed blood vessels and include the use of anti-platelet therapies (such as aspirin or clopidogrel) or specific revascularization procedures.
<urn:uuid:179826f8-6f50-481a-bc90-21cb06a9ae6b>
CC-MAIN-2017-22
https://my.clevelandclinic.org/health/articles/moyamoya-disease
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609404.11/warc/CC-MAIN-20170528004908-20170528024908-00239.warc.gz
en
0.909359
850
3.59375
4
International Day of the Seafarers is celebrated annually on June 25th to recognize the invaluable contribution of seafarers to the world economy. This day aims to create awareness about the challenges faced by seafarers and to appreciate their hard work. Seafarers play a crucial role in the global supply chain, transporting goods and commodities across the world.
The International Maritime Organization (IMO) designated the Day of the Seafarers in 2010, with the theme “Seafarers: at the core of shipping’s future.” The theme emphasizes the importance of seafarers in shaping the future of the shipping industry. The day is celebrated by organizations and individuals worldwide, with events and activities aimed at raising awareness about seafarers’ welfare and working conditions.
Despite their significant contribution to the global economy, seafarers face several challenges, including long working hours, poor living conditions, and limited access to medical care. The COVID-19 pandemic has further exacerbated these challenges, with seafarers facing travel restrictions and being stranded at sea for extended periods. The International Day of the Seafarers provides an opportunity to highlight these issues and promote the welfare of seafarers worldwide.
International Day of Seafarers Background and Significance
The International Day of Seafarers is celebrated every year on June 25th to recognize the invaluable contribution of seafarers to global trade and the world economy. This day was first established in 2010 by the International Maritime Organization (IMO), a specialized agency of the United Nations that regulates the safety and security of shipping and the prevention of marine pollution. Seafarers play a crucial role in transporting goods and commodities across the world, with over 90% of global trade being carried by sea. Despite their vital contribution, seafarers often face challenging working conditions, including long periods away from home, isolation, and limited access to medical care and legal protection.
The theme for the 2023 International Day of Seafarers is “Fair Future for Seafarers”, which aims to raise awareness of the challenges facing seafarers and to advocate for their rights and welfare. The day provides an opportunity to recognize the sacrifices and hardships that seafarers endure and to show appreciation for their vital role in the global economy. The COVID-19 pandemic has highlighted the importance of seafarers, as many have been stranded at sea for months due to travel restrictions and border closures. The IMO has been working to address this issue by calling on governments to recognize seafarers as key workers and to prioritize their vaccination and repatriation.
In conclusion, the International Day of Seafarers is an important day to recognize the contributions and challenges faced by seafarers. The 2023 theme of “Fair Future for Seafarers” highlights the need to advocate for the rights and welfare of seafarers, especially during the ongoing COVID-19 pandemic.
Theme of the Day
The International Day of the Seafarers is celebrated every year on June 25th. The theme of the day is to recognize the contribution of seafarers to the world economy and to promote their welfare. The day also aims to raise awareness about the importance of seafarers and the challenges they face. This year’s theme is “Oceans Worth Protecting”. The theme highlights the importance of protecting the oceans and the role of seafarers in achieving this goal.
The oceans are essential for life on earth, and they provide a range of benefits, including food, transportation, and recreation. Seafarers play a vital role in protecting the oceans. They are responsible for transporting goods and people across the seas, and they must ensure that their vessels do not harm the marine environment. Seafarers also play a critical role in responding to maritime emergencies, including oil spills and other environmental disasters.
The World Maritime Theme for 2023 is “Connecting Ships, Ports, and People”. The theme emphasizes the importance of connectivity in the maritime industry and the need for collaboration between ships, ports, and people. The theme is relevant to the International Day of the Seafarers as it recognizes the role of seafarers in connecting the world through maritime trade.
In conclusion, the International Day of the Seafarers is an important day that recognizes the contribution of seafarers to the world economy and promotes their welfare. This year’s theme, “Oceans Worth Protecting”, highlights the importance of protecting the oceans and the role of seafarers in achieving this goal. The World Maritime Theme for 2023, “Connecting Ships, Ports, and People”, emphasizes the importance of connectivity in the maritime industry and the need for collaboration between ships, ports, and people.
Role of Seafarers
Seafarers play a crucial role in the shipping industry, ensuring that goods are transported safely and efficiently across the world’s oceans. They are responsible for the operation and maintenance of ships, as well as the safety of the crew and cargo on board. The work of seafarers involves long voyages away from home, often lasting for months at a time. They work in a challenging and often dangerous environment, facing extreme weather conditions, rough seas, and the risk of piracy and other security threats.
Despite the challenges, seafarers are an essential part of the global workforce, providing a vital service that keeps the world’s economies moving. They are skilled professionals who are trained in a wide range of disciplines, from navigation and engineering to firefighting and first aid. Seafarers also play an important role in ensuring that ships comply with international regulations and standards. They are responsible for maintaining the ship’s equipment and systems, as well as ensuring that the crew and cargo are transported safely and in compliance with all relevant laws and regulations.
In summary, seafarers are an essential part of the shipping industry, providing the skills and expertise needed to transport goods safely and efficiently across the world’s oceans. They work in challenging and often dangerous conditions, but their dedication and professionalism ensure that the world’s economies continue to function smoothly.
Impact on Economy
The International Day of Seafarers has a significant impact on the economy, especially in countries with a large seafaring industry. The seafaring industry is responsible for transporting goods and raw materials across the world, making it a crucial part of the global economy. The seafaring industry is a significant contributor to the world economy, accounting for around 80% of global trade by volume and over 70% by value. The industry employs millions of people worldwide, contributing to the growth of many economies. Seaborne trade has been growing steadily over the years, and the International Day of Seafarers helps to raise awareness of the importance of seafarers in the global economy.
It also highlights the challenges they face, such as piracy, hazardous working conditions, and long periods away from their families. The global economy is heavily dependent on seaborne trade, and disruptions to this industry can have a significant impact on the world economy. The COVID-19 pandemic, for example, has caused disruptions to the global supply chain, leading to shortages of goods and raw materials.
In conclusion, the International Day of Seafarers has a significant impact on the economy, highlighting the importance of the seafaring industry in the global economy. It also raises awareness of the challenges faced by seafarers and the need for better working conditions and support for this vital industry.
Training and Education
Training and education are essential components for individuals pursuing a career as a seafarer. The International Maritime Organization (IMO) has set standards for seafarer training and certification under the Standards of Training, Certification, and Watchkeeping (STCW) Convention. The STCW Convention outlines the minimum requirements for seafarers to ensure that they possess the necessary knowledge, skills, and experience to carry out their duties safely and efficiently.
Seafarers must undergo training and education in various areas, including safety, navigation, cargo handling, and communication. The training can be obtained through maritime academies, colleges, or training centers. These institutions offer a range of courses, from basic safety training to advanced specialized courses. The education and training of seafarers are continuously evolving to keep up with the latest developments in technology and regulations. The IMO has introduced the concept of continuous professional development (CPD), which requires seafarers to update their knowledge and skills regularly. CPD ensures that seafarers remain competent and up-to-date with changes in the industry.
Experience is also a crucial aspect of seafarer training and education. Seafarers gain experience through onboard training and practical work. They must complete a specified period of sea service before they can be certified to work in a particular capacity.
In conclusion, training and education are vital for seafarers to ensure they possess the necessary knowledge, skills, and experience to carry out their duties safely and efficiently. The STCW Convention sets the minimum requirements for seafarer training and certification, and the concept of CPD ensures that seafarers remain competent and up-to-date with changes in the industry.
Safety and Health Concerns
The safety and health concerns of seafarers are of utmost importance. Seafarers face various risks and challenges while working on board ships, such as accidents, injuries, and illnesses. The International Labor Organization (ILO) has set up various regulations and guidelines to ensure the maritime safety and health of seafarers.
One of the significant health concerns of seafarers is mental health. The isolation and confinement of seafarers can lead to mental health problems such as depression, anxiety, and stress. The lack of access to proper medical care and support can worsen these conditions. Therefore, it is essential to provide adequate mental health support to seafarers.
Seafarers also face various safety concerns while working on ships. The risk of accidents, injuries, and fatalities is high due to the nature of their work.
The use of heavy machinery, exposure to hazardous materials, and extreme weather conditions can lead to accidents and injuries. Therefore, it is crucial to provide proper safety training and equipment to seafarers to minimize the risk of accidents and injuries.
In addition to safety and health concerns, seafarers also face challenges related to medical care and shore leave. Seafarers may require medical attention while on board, and it is essential to provide them with proper medical care. Moreover, seafarers may face difficulties in obtaining shore leave due to various restrictions and regulations. Therefore, it is crucial to ensure that seafarers have access to medical care and shore leave when needed.
Overall, the safety and health concerns of seafarers are crucial and must be addressed to ensure their well-being. The implementation of proper regulations and guidelines can help minimize the risks and challenges faced by seafarers while working on ships.
Environmental Impact
Shipping is an essential component of the global economy, but it also has a significant impact on the environment. The environmental impact of shipping includes air pollution, water pollution, and greenhouse gas emissions. The marine environment is particularly vulnerable to the effects of shipping, as it can cause damage to marine ecosystems and wildlife.
The International Maritime Organization (IMO) has implemented regulations to reduce the environmental impact of shipping. The MARPOL Convention, for example, sets standards for the discharge of pollutants from ships. The convention aims to prevent pollution of the marine environment and promote sustainable shipping practices.
One of the biggest challenges facing the shipping industry is decarbonization. Shipping is responsible for around 2.5% of global greenhouse gas emissions, and this figure is expected to increase in the coming years. To address this issue, the IMO has set a target to reduce greenhouse gas emissions from shipping by at least 50% by 2050 compared to 2008 levels. To achieve this target, the shipping industry is exploring a range of decarbonization options, including the use of alternative fuels such as biofuels, hydrogen, and ammonia. The industry is also investing in more energy-efficient ships and improving operational practices to reduce emissions.
In conclusion, the environmental impact of shipping is a significant issue that requires urgent action. The IMO and the shipping industry are working together to promote sustainable shipping practices and reduce the environmental impact of shipping. By implementing effective regulations and exploring new technologies, the industry can help to mitigate the impact of shipping on the environment and contribute to the fight against climate change.
Challenges and Solutions
The International Day of Seafarers is a crucial event that highlights the challenges faced by seafarers worldwide. Despite the efforts made by governments and organizations to improve the working conditions of seafarers, several challenges still exist. This section discusses some of the challenges faced by seafarers and the solutions to these challenges.
One of the significant challenges faced by seafarers is the violation of their rights. Seafarers have the right to work in a safe and secure environment, but this is not always the case. Some seafarers are subjected to abuse, exploitation, and discrimination, which violates their human rights. Another challenge faced by seafarers is the lack of certification.
Seafarers need to have proper certification to work on a ship. However, some seafarers do not have the necessary certification, which puts their lives and the lives of others at risk. The just transition to a sustainable maritime industry is another challenge faced by seafarers. The transition to a sustainable maritime industry requires significant changes in the way the industry operates. This change can lead to job losses and economic instability in some regions. The COVID-19 pandemic has also created challenges for seafarers. The pandemic has disrupted the global supply chain, leading to delays and cancellations of crew changes. Some seafarers are stuck on ships for months, which affects their mental and physical health. To address the challenges faced by seafarers, governments and organizations must work together to ensure that seafarers’ rights are protected. This includes implementing policies and regulations that protect seafarers from abuse, exploitation, and discrimination. To address the lack of certification, governments and organizations can provide training and education programs to help seafarers obtain the necessary certification. The just transition to a sustainable maritime industry can be achieved by providing support and assistance to seafarers who are affected by the transition. This includes providing training and education programs to help seafarers transition to new jobs. To address the challenges created by the COVID-19 pandemic, governments and organizations can implement measures to ensure that crew changes can take place safely. This includes providing personal protective equipment, testing, and quarantine facilities for seafarers. In conclusion, the challenges faced by seafarers are significant, but solutions exist. Governments and organizations must work together to protect seafarers’ rights, provide training and education programs, and support seafarers affected by the just transition to a sustainable maritime industry. Role of Social Media Social media has become an essential tool for communication and outreach in today’s digital age. It has revolutionized the way people interact with each other and has created new opportunities for businesses and organizations to reach their target audience. The International Day of Seafarers is no exception, and social media has played a significant role in raising awareness and promoting the event. The Secretary-General of the International Maritime Organization (IMO), Kitack Lim, has recognized the importance of social media in promoting the International Day of Seafarers. He has encouraged people to use social media platforms like Twitter, Facebook, and Instagram to share their messages of support for seafarers and to spread awareness about the challenges they face. To make it easier for people to participate in the social media campaign, the IMO has created a hashtag (#DayoftheSeafarer) that people can use to share their messages and photos. The hashtag has been widely used on social media, and it has helped to create a sense of community among people who support seafarers. Social media has also been used to showcase the contributions of seafarers to the global economy and to highlight the challenges they face. Organizations like the IMO have used social media to share stories of seafarers and to raise awareness about issues like piracy, abandonment, and the mental health of seafarers. 
In conclusion, social media has played a vital role in promoting the International Day of Seafarers and raising awareness about the challenges faced by seafarers. It has allowed people from all over the world to connect and share their messages of support, and it has helped to create a sense of community among those who support seafarers. Celebration and Tribute International Day of Seafarers is a celebration of the hard work and dedication of seafarers around the world. It is a day to recognize the important role that seafarers play in the global economy and to pay tribute to their sacrifices and contributions. Families of seafarers often face unique challenges, such as long periods of separation and uncertainty. On this day, they are recognized for their support and strength. It is an opportunity to acknowledge the sacrifices that families make to support their loved ones at sea. The journey of a seafarer is full of challenges and adventures. They spend long periods of time away from home, navigating the world’s oceans and seas. This day is a chance to celebrate their resilience and determination in the face of adversity. Life at sea can be both rewarding and challenging. Seafarers are responsible for the safe and efficient operation of vessels, often in difficult conditions. They work tirelessly to ensure that goods are transported safely and efficiently around the world. International Day of Seafarers is also a time to share stories and experiences. It is a chance to learn from each other and to celebrate the diversity of the seafaring community. From navigating treacherous waters to experiencing new cultures, seafarers have a wealth of experiences to share. In summary, International Day of Seafarers is a celebration of the seafaring community and a tribute to their sacrifices and contributions. It is an opportunity to recognize the challenges and adventures of life at sea, and to celebrate the resilience and determination of seafarers and their families. Future of Shipping Industry The shipping industry has undergone significant changes in recent years. The industry has been impacted by developments in technology, changes in international trade, and a growing awareness of the need for sustainable shipping practices. These trends are expected to continue, and the future of the shipping industry is likely to be shaped by a range of factors. One of the most significant trends in the shipping industry is the increasing use of technology. Advances in technology have enabled shipping companies to improve efficiency, reduce costs, and enhance safety. For example, the use of autonomous ships is becoming more common, and this technology is expected to become increasingly prevalent in the coming years. Another trend that is likely to shape the future of the shipping industry is the growing focus on sustainability. There is a growing awareness of the need to reduce the environmental impact of shipping, and this is leading to a range of initiatives aimed at promoting sustainable shipping practices. These initiatives include the use of alternative fuels, the development of more efficient shipping practices, and the adoption of new technologies to reduce emissions. In addition to these trends, the future of the shipping industry is likely to be shaped by developments in international trade. The growth of e-commerce is expected to continue, and this is likely to drive demand for shipping services. 
At the same time, there are concerns about the impact of trade tensions and protectionism on the shipping industry, and these issues will need to be addressed in the coming years. Overall, the future of the shipping industry is likely to be shaped by a range of factors, including technology, sustainability, and international trade. While there are challenges ahead, the industry is well-positioned to adapt and thrive in the years to come. Frequently Asked Questions What is the theme of the Day of the Seafarer 2023? The theme of the Day of the Seafarer 2023 is yet to be announced. When did the Day of the Seafarer start? The Day of the Seafarer was first celebrated on June 25, 2011, after its establishment by the International Maritime Organization (IMO) in 2010. What is the definition of a seafarer? A seafarer is a person who works on a ship or a boat, either as a member of the crew or as a passenger. What is the significance of International Day of the Seafarers? The International Day of the Seafarers is significant because it recognizes the contributions of seafarers to the global economy and society. It also highlights the challenges and risks that seafarers face while working on ships, and the need to improve their working conditions and welfare. What are some ways to celebrate International Day of the Seafarers? Some ways to celebrate International Day of the Seafarers include organizing events and activities to raise awareness about the importance of seafarers, sharing stories and experiences of seafarers, and expressing gratitude and appreciation for their contributions. Why is it important to recognize and support seafarers? It is important to recognize and support seafarers because they play a crucial role in global trade and transportation. They are responsible for transporting goods and commodities across the world, and without them, many industries and economies would suffer. However, seafarers often face challenging working conditions, including long hours, isolation, and limited access to shore leave and medical care. Recognizing and supporting seafarers is essential to ensuring their well-being and the continued success of the maritime industry.
<urn:uuid:d5825e39-8a3c-4d80-88df-e93c27a13dda>
CC-MAIN-2023-40
https://maritimepage.com/international-day-of-seafarers/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506686.80/warc/CC-MAIN-20230925051501-20230925081501-00702.warc.gz
en
0.951958
4,337
3.328125
3
Legend Tomato - Lycopersicon lycopersicum With a name like Legend, one would expect something mythical and fabled. With late blight riddling gardens all over the country and tomatoes becoming more difficult to grow because of it, a tomato with tolerance to late blight may sound like something out of fiction. Legend demonstrates some tolerance to the late blight strains US8 and US11. Another marvel of this tomato is how early it matures, considering it is a full sized (3" - 5") slicing tomato. Legend tomatoes are exquisitely juicy and sweet, with the perfect balance of acidity. These perfectly uniform, smooth, round, luscious tomatoes were bred and released by Oregon State University's Dr. Jim Baggett. - Sun: Full - Indoors: 6-8 weeks before last frost - Direct Sow: No - Seed Count: 25 - Days to Maturity: 60-70 - Plant Size: 4'-6' - Open Pollinated Bone meal is an ideal fertilizer. Mulch at the base of the plant. Use a trellis or cage for support. Pinch flowers one month before first frost. Water the base of the plant only. Seeds should be planted indoors and kept in a dome with a heated mat for 4-8 weeks. It is key that your new starts be hardened off. This is a process that requires taking them outside during the day, for a period of time, before they are planted. This acclimates your seedlings to the outside world, meaning the elements like the wind and sun. We do it for several weeks to a month as this strengthens their stems and overall plant structure. - Start with a short period of time initially, then graduate to more time each day - One week minimum is recommended - Bring them inside in the early evening and overnight - Keep an eye on them and water them consistently. Make sure they have not blown over.
<urn:uuid:3814868d-87ca-4a2b-9209-9186a677a3f5>
CC-MAIN-2023-14
https://www.livingseedcompany.com/products/legend-tomato-lycopersicon-lycopersicum
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00184.warc.gz
en
0.933299
418
2.765625
3
As a society, we are becoming more and more aware of the importance of diversity and inclusion in all aspects of life, including children’s literature. Children’s books are a powerful tool for shaping young minds and attitudes, and it is vital that children see themselves represented in the stories they read. In this article, we will review the book ‘Last Stop on Market Street’ and explore the importance of diversity in children’s literature. ‘Last Stop on Market Street’ is a children’s book written by Matt de la Peña and illustrated by Christian Robinson. The book tells the story of a young boy named CJ and his grandmother as they ride the bus through the city. Along the way, CJ asks his grandmother questions about why they don’t have a car, why they have to go to a soup kitchen, and why they have to live in a neighborhood with so much noise and dirt. The Importance of Diversity in Children’s Literature Children’s literature has a crucial role in shaping the attitudes and beliefs of young readers. It is essential that children see themselves represented in the books they read and that they are exposed to diverse characters and cultures. When children see themselves reflected in the books they read, they feel seen, heard, and validated. Diverse children’s books can also help to break down stereotypes and promote empathy and understanding. When children are exposed to different cultures and experiences, they learn to appreciate and respect the differences in others. ‘Last Stop on Market Street’: A Review ‘Last Stop on Market Street’ is a prime example of a diverse children’s book. The book features a young African American boy and his grandmother, who both live in a low-income neighborhood. The story tackles themes of poverty, diversity, and community, all through the eyes of a child. One of the most striking things about ‘Last Stop on Market Street’ is the way it portrays the urban environment where the story takes place. The illustrations are vibrant and colorful, and they capture the energy and diversity of the city. The book also depicts a wide range of characters, from the bus driver to the musicians on the street, showcasing the many different people who make up a city. The book’s message is one of hope and resilience, as CJ learns to appreciate the beauty and richness of his community, despite its challenges. The story encourages children to see the world through a different lens and to embrace diversity and difference. In conclusion, ‘Last Stop on Market Street’ is a beautiful and powerful book that showcases the importance of diversity in children’s literature. The book’s themes of poverty, diversity, and community are all essential for children to understand and appreciate, and the story’s message of hope and resilience is one that will resonate with readers of all ages. As a society, we must continue to prioritize diversity and inclusion in all aspects of life, including children’s literature. By exposing children to diverse characters and cultures, we can help to break down stereotypes and promote empathy and understanding. ‘Last Stop on Market Street’ is an excellent example of a diverse children’s book, and it is a must-read for parents, teachers, and educators who want to promote diversity and inclusion in their classrooms and communities.
<urn:uuid:76ca96ab-930c-4c92-b1ae-48b81070c7fd>
CC-MAIN-2024-10
https://universityofthephoenix.com/the-importance-of-diversity-in-childrens-literature-a-review-of-last-stop-on-market-street/
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948234904.99/warc/CC-MAIN-20240305092259-20240305122259-00355.warc.gz
en
0.958305
693
3.609375
4
Electrocardiogram (redirected from Elektrokardiogramm) electrocardiogram (ECG, EKG) [e-lek″tro-kahr´de-o-gram″] e·lec·tro·car·di·o·gram (ECG, EKG), (ē-lek'trō-kar'dē-ō-gram), Do not confuse this word with electrocardiograph. electrocardiogram /elec·tro·car·dio·gram/ (-kahr´de-o-gram″) a graphic tracing of the variations in electrical potential caused by the excitation of the heart muscle and detected at the body surface. The normal electrocardiogram is a scalar representation that shows deflections resulting from cardiac activity as changes in the magnitude of voltage and polarity over time and comprises the P wave, QRS complex, and T and U waves. Abbreviated ECG or EKG. See also electrogram. electrocardiogram (ECG, EKG) Cardiology A non-invasive test of the electrical activity of the heart's conduction system, which is transformed into recordings on graph paper–an electrocardiograph; in an EKG, electrodes–leads are placed on 12 specific sites of the body: standard limb leads–I, II, III, augmented limb leads–aVr, aVl, and aVf, and precordial or chest leads–V1 to V6; EKG tracings consist of 3 major components: the P wave, which indicates atrial depolarization, the QRS complex–ventricular depolarization, and the T wave–ventricular repolarization; the Holter monitor is a portable EKG recording device worn by an individual for continuous monitoring; the EKG is used to detect cardiac damage by evaluating alterations in the electrical conduction of the heart, and can be performed at rest or during exercise–eg, thallium stress test; the EKG is used to detect the presence and location of myocardial ischemia or infarction, cardiac hypertrophy, arrhythmias, and conduction defects. See His bundle electrocardiography, Signal-averaged electrocardiography, Sleep electrocardiography. e·lec·tro·car·di·o·gram (ECG, EKG) (ĕ-lek'trō-kahr'dē-ō-gram) electrocardiogram (ECG) The tracing on paper, representing the electrical events associated with the heartbeats, produced by the ELECTROCARDIOGRAPH. Electrocardiogram (ECG, EKG) Area of application: Heart. The monitoring of pulse and blood pressure evaluates only the mechanical activity of the heart. The electrocardiogram (ECG), a noninvasive study, measures the electrical currents or impulses that the heart generates during a cardiac cycle (see figure of a normal ECG at end of monograph). Electrical impulses travel through a conduction system beginning with the sinoatrial (SA) node and moving to the atrioventricular (AV) node via internodal pathways. From the AV node, the impulses travel to the bundle of His and onward to the right and left bundle branches. These bundles are located within the right and left ventricles. The impulses continue to the cardiac muscle cells by terminal fibers called Purkinje fibers. The ECG is a graphic display of the electrical activity of the heart, which is analyzed by time intervals and segments. Continuous tracing of the cardiac cycle activity is captured as heart cells are electrically stimulated, causing depolarization and movement of the activity through the cells of the myocardium. The ECG study is completed by using 12, 15, or 18 electrodes attached to the skin surface to obtain the total electrical activity of the heart. Each lead records the electrical potential between the limbs or between the heart and limbs.
The ECG machine records and marks the 12 leads (the most commonly used system) on the strip of paper in the machine in proper sequence, usually 6 in. of the strip for each lead. The ECG pattern, called a heart rhythm, is recorded by a machine as a series of waves, intervals, and segments, each of which pertains to a specific occurrence during the contraction of the heart. The ECG tracings are recorded on graph paper using vertical and horizontal lines for analysis and calculations of time, measured by the vertical lines (1 mm apart and 0.04 sec per line), and of voltage, measured by the horizontal lines (1 mm apart and 0.5 mV per 5 squares). A pulse rate can be calculated from the ECG strip to obtain the beats per minute. The P wave represents the depolarization of the atrial myocardium; the QRS complex represents the depolarization of the ventricular myocardium; the P-R interval represents the time from the beginning of the excitation of the atrium to the beginning of the ventricular excitation; and the ST segment has no deflection from baseline, but in an abnormal state may be elevated or depressed. An abnormal rhythm is called an arrhythmia. The ankle-brachial index (ABI) can also be assessed during this study. This noninvasive, simple comparison of blood pressure measurements in the arms and legs can be used to detect peripheral artery disease (PAD). A Doppler stethoscope is used to obtain the systolic pressure in either the dorsalis pedis or the posterior tibial artery. This ankle pressure is then divided by the highest brachial systolic pressure acquired after taking the blood pressure in both arms of the patient. This index should be greater than 1. When the index falls below 0.5, blood flow impairment is considered significant. Patients should be scheduled for a vascular consult for an abnormal ABI. Patients with diabetes or kidney disease, as well as some elderly patients, may have a falsely elevated ABI due to calcifications of the vessels in the ankle causing an increased systolic pressure. The ABI test approaches 95% accuracy in detecting PAD. However, a normal ABI value does not absolutely rule out the possibility of PAD for some individuals, and additional tests should be done to evaluate symptoms.
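The two simple calculations described above — deriving a rate in beats per minute from the R-R spacing on the strip, and dividing an ankle systolic pressure by the higher brachial systolic pressure to form the ABI — can be illustrated in a few lines of code. The sketch below is an editorial illustration rather than part of the monograph; the function names are invented, and the 0.04-sec-per-small-box figure and the ABI thresholds are simply restated from the text above. It is not clinical software.

```cpp
#include <iostream>

// Heart rate from an ECG strip: each small box on the time axis is 0.04 sec
// (see the graph-paper convention above), so the R-R interval in seconds is
// (small boxes between consecutive R waves) * 0.04, and rate = 60 / interval.
double heartRateFromSmallBoxes(double boxesBetweenRWaves) {
    const double secondsPerSmallBox = 0.04;
    double rrSeconds = boxesBetweenRWaves * secondsPerSmallBox;
    return 60.0 / rrSeconds;  // beats per minute
}

// Ankle-brachial index: ankle systolic pressure divided by the higher of the
// two brachial systolic pressures, as described in the passage above.
double ankleBrachialIndex(double ankleSystolic, double rightArmSystolic, double leftArmSystolic) {
    double brachial = (rightArmSystolic > leftArmSystolic) ? rightArmSystolic : leftArmSystolic;
    return ankleSystolic / brachial;
}

int main() {
    // 20 small boxes between R waves -> 0.8 sec -> 75 beats/min.
    std::cout << "Rate: " << heartRateFromSmallBoxes(20) << " beats/min\n";

    // Ankle 95 mm Hg, arms 120 and 118 mm Hg -> ABI of about 0.79.
    double abi = ankleBrachialIndex(95, 120, 118);
    std::cout << "ABI: " << abi << "\n";
    if (abi < 0.5) {
        std::cout << "Below 0.5: significant flow impairment per the threshold above\n";
    } else if (abi < 1.0) {
        std::cout << "Below 1: lower than expected; further evaluation may be warranted\n";
    }
    return 0;
}
```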
This procedure is contraindicated for - Assess the extent of congenital heart disease - Assess the extent of myocardial infarction (MI) or ischemia, as indicated by abnormal ST segment, interval times, and amplitudes - Assess the function of heart valves - Assess global cardiac function - Detect arrhythmias, as evidenced by abnormal wave deflections - Detect peripheral artery disease (PAD) - Detect pericarditis, shown by ST segment changes or shortened P-R interval - Determine electrolyte imbalances, as evidenced by short or prolonged Q-T interval - Determine hypertrophy of the chamber of the heart or heart hypertrophy, as evidenced by P or R wave deflections - Evaluate and monitor cardiac pacemaker function - Evaluate and monitor the effect of drugs, such as digitalis, antiarrhythmics, or vasodilating agents - Monitor ECG changes during an exercise test - Monitor rhythm changes during the recovery phase after an MI - Normal heart rate according to age: range of 60 to 100 beats/min in adults - Normal, regular rhythm and wave deflections with normal measurement of ranges of cycle components and height, depth, and duration of complexes as follows: - P wave: 0.12 sec or three small blocks with amplitude of 2.5 mm - Q wave: less than 0.04 mm - R wave: 5 to 27 mm amplitude, depending on lead - T wave: 1 to 13 mm amplitude, depending on lead - QRS complex: 0.1 sec or two and a half small blocks - ST segment: 1 mm Abnormal findings related to - Atrial or ventricular hypertrophy - Bundle branch block - Electrolyte imbalances - Heart rate of 40 to 60 beats/min in adults - MI or ischemia - Pulmonary infarction - P wave: An enlarged P wave deflection could indicate atrial enlargement; an absent or altered P wave could suggest that the electrical impulse did not come from the SA node - P-R interval: An increased interval could imply a conduction delay in the AV node - QRS complex: An enlarged Q wave may indicate an old infarction; an enlarged deflection could indicate ventricular hypertrophy; increased time duration may indicate a bundle branch block - ST segment: A depressed ST segment indicates myocardial ischemia; an elevated ST segment may indicate an acute MI or pericarditis; a prolonged ST segment (or prolonged QT) may indicate hypocalcemia. A shortened ST segment may indicate hypokalemia - Tachycardia greater than 120 beats/min - T wave: A flat or inverted T wave may indicate myocardial ischemia, infarction, or hypokalemia; a tall, peaked T wave with a shortened QT interval may indicate hyperkalemia - Acute changes in ST elevation are usually associated with acute MI or pericarditis. - Heart block, second- and third-degree with bradycardia less than 60 beats/min - Pulseless electrical activity - Pulseless ventricular tachycardia - Premature ventricular contractions (PVCs) greater than three in a row, pauses greater than 3 sec, or identified blocks - Unstable tachycardia - Ventricular fibrillation - Bradycardia less than 60 beats/min - Pulseless electrical activity - Pulseless ventricular tachycardia - Supraventricular tachycardia - Ventricular fibrillation It is essential that a critical finding be communicated immediately to the requesting health-care provider (HCP). A listing of these findings varies among facilities. Timely notification of a critical finding for lab or diagnostic studies is a role expectation of the professional nurse. Notification processes will vary among facilities. Upon receipt of the critical value the information should be read back to the caller to verify accuracy. 
Most policies require immediate notification of the primary HCP, Hospitalist, or on-call HCP. Reported information includes the patient’s name, unique identifiers, critical value, name of the person giving the report, and name of the person receiving the report. Documentation of notification should be made in the medical record with the name of the HCP notified, time and date of notification, and any orders received. Any delay in a timely report of a critical finding may require completion of a notification form with review by Risk Management. Factors that may impair the results of the examination - Anatomic variation of the heart (i.e., the heart may be rotated in both the horizontal and frontal planes). - Distortion of cardiac cycles due to age, gender, weight, or a medical condition (e.g., infants, women [may exhibit slight ST segment depression], obese patients, pregnant patients, patients with ascites). - High intake of carbohydrates or electrolyte imbalances of potassium or calcium. - Improper placement of electrodes or inadequate contact between skin and electrodes because of insufficient conductive gel or poor placement, which can cause ECG tracing problems. - ECG machine malfunction or interference from electromagnetic waves in the vicinity. - Inability of the patient to remain still during the procedure, because movement, muscle tremor, or twitching can affect accurate test recording. - Increased patient anxiety, causing hyperventilation or deep respirations. - Medications such as barbiturates and digitalis. - Strenuous exercise before the procedure. Nursing Implications and Procedure - Positively identify the patient using at least two unique identifiers before providing care, treatment, or services. - Patient Teaching: Inform the patient this procedure can assist in assessing cardiac (heart) function. - Obtain a history of the patient’s complaints or clinical symptoms, including a list of known allergens, especially allergies or sensitivities to latex, anesthetics, or sedatives. Ask if the patient has had a heart transplant, implanted pacemaker, or internal cardiac defibrillator. - Obtain a history of the patient’s cardiovascular system, symptoms, and results of previously performed laboratory tests and diagnostic and surgical procedures. - Obtain a list of the patient’s current medications, including herbs, nutritional supplements, and nutraceuticals (see Effects of Natural Products on Laboratory Values online at DavisPlus). - Review the procedure with the patient. Inform the patient that it may be necessary to remove hair from the site before the procedure. Address concerns about pain related to the procedure and explain that there should be no discomfort related to the procedure. Inform the patient that the procedure is performed by an HCP and takes approximately 15 min. - Sensitivity to social and cultural issues, as well as concern for modesty, is important in providing psychological support before, during, and after the procedure. - Instruct the patient to remove jewelry and other metallic objects from the area to be examined. - Note that there are no food, fluid, or medication restrictions unless by medical direction. - Potential complications: N/A - Observe standard precautions, and follow the general guidelines in Patient Preparation and Specimen Collection. Positively identify the patient. - Ensure the patient has complied with pretesting preparations. - Ensure the patient has removed all external metallic objects from the area to be examined prior to the procedure. 
- Instruct the patient to void prior to the procedure and to change into the gown, robe, and foot coverings provided. - Record baseline values. - Place patient in a supine position. Expose and appropriately drape the chest, arms, and legs. - Instruct the patient to cooperate fully and to follow directions. Instruct the patient to remain still throughout the procedure because movement produces unreliable results. - Prepare the skin surface with alcohol and remove excess hair. Use clippers to remove hair from the site, if appropriate. Dry skin sites. - Avoid the use of equipment containing latex if the patient has a history of allergic reaction to latex. - Apply the electrodes in the proper position. When placing the six unipolar chest leads, place V1 at the fourth intercostal space at the border of the right sternum, V2 at the fourth intercostal space at the border of the left sternum, V3 between V2 and V4, V4 at the fifth intercostal space at the midclavicular line, V5 at the left anterior axillary line at the level of V4 horizontally, and V6 at the level of V4 horizontally and at the left midaxillary line. The wires are connected to the matched electrodes and the ECG machine. Chest leads (V1, V2, V3, V4, V5, and V6) record data from the horizontal plane of the heart. - Place three limb bipolar leads (two electrodes combined for each) on the arms and legs. Lead I is the combination of two arm electrodes, lead II is the combination of right arm and left leg electrodes, and lead III is the combination of left arm and left leg electrodes. Limb leads (I, II, III, aVl, aVf, and aVr) record data from the frontal plane of the heart. - The machine is set and turned on after the electrodes, grounding, connections, paper supply, computer, and data storage device are checked. - If the patient has any chest discomfort or pain during the procedure, mark the ECG strip indicating that occurrence. - Inform the patient that a report of the results will be made available to the requesting HCP, who will discuss the results with the patient. - When the procedure is complete, remove the electrodes and clean the skin where the electrode was applied. - Evaluate the results in relation to previously performed ECGs. Denote cardiac rhythm abnormalities on the strip. - Monitor vital signs and compare with baseline values. Protocols may vary among facilities. - Instruct the patient to immediately notify an HCP of chest pain, changes in pulse rate, or shortness of breath. - Recognize anxiety related to the test results and be supportive of perceived loss of independence and fear of shortened life expectancy. Discuss the implications of abnormal test results on the patient’s lifestyle. Provide teaching and information regarding the clinical implications of the test results, as appropriate. - Nutritional Considerations: Abnormal findings may be associated with cardiovascular disease. Nutritional therapy is recommended for the patient identified to be at risk for developing coronary artery disease (CAD) or for individuals who have specific risk factors and/or existing medical conditions (e.g., elevated LDL cholesterol levels, other lipid disorders, insulin-dependent diabetes, insulin resistance, or metabolic syndrome). 
Other changeable risk factors warranting patient education include strategies to encourage patients, especially those who are overweight and with high blood pressure, to safely decrease sodium intake, achieve a normal weight, ensure regular participation of moderate aerobic physical activity three to four times per week, eliminate tobacco use, and adhere to a heart-healthy diet. If triglycerides also are elevated, the patient should be advised to eliminate or reduce alcohol. The 2013 Guideline on Lifestyle Management to Reduce Cardiovascular Risk published by the American College of Cardiology (ACC) and the American Heart Association (AHA) in conjunction with the National Heart, Lung, and Blood Institute (NHLBI) recommends a “Mediterranean”-style diet rather than a low-fat diet. The new guideline emphasizes inclusion of vegetables, whole grains, fruits, low-fat dairy, nuts, legumes, and nontropical vegetable oils (e.g., olive, canola, peanut, sunflower, flaxseed) along with fish and lean poultry. A similar dietary pattern known as the Dietary Approach to Stop Hypertension (DASH) makes additional recommendations for the reduction of dietary sodium. Both dietary styles emphasize a reduction in consumption of red meats, which are high in saturated fats and cholesterol, and other foods containing sugar, saturated fats, trans fats, and sodium. - Social and Cultural Considerations: Numerous studies point to the prevalence of excess body weight in American children and adolescents. Experts estimate that obesity is present in 25% of the population ages 6 to 11 yr. The medical, social, and emotional consequences of excess body weight are significant. Special attention should be given to instructing the child and caregiver regarding health risks and weight control education. - Recognize anxiety related to test results, and be supportive of fear of shortened life expectancy. Discuss the implications of abnormal test results on the patient’s lifestyle. Provide teaching and information regarding the clinical implications of the test results, as appropriate. Educate the patient regarding access to counseling services. Provide contact information, if desired, for the American Heart Association (www.americanheart.org), the NHLBI (www.nhlbi.nih.gov), or the Legs for Life (www.legsforlife.org). - Reinforce information given by the patient’s HCP regarding further testing, treatment, or referral to another HCP. Answer any questions or address any concerns voiced by the patient or family. - Depending on the results of this procedure, additional testing may be performed to evaluate or monitor progression of the disease process and determine the need for a change in therapy. Evaluate test results in relation to the patient’s symptoms and other tests performed. - Related tests include antiarrhythmic drugs, apolipoprotein A and B, AST, atrial natriuretic peptide, BNP, blood gases, blood pool imaging, calcium, chest x-ray, cholesterol (total, HDL, LDL), CT cardiac scoring, CT thorax, CRP, CK and isoenzymes, echocardiography, echocardiography transesophageal, exercise stress test, glucose, glycated hemoglobin, Holter monitor, homocysteine, ketones, LDH and isos, lipoprotein electrophoresis, lung perfusion scan, magnesium, MRI chest, MI infarct scan, myocardial perfusion heart scan, myoglobin, PET heart, potassium, pulse oximetry, sodium, triglycerides, and troponin. - Refer to the Cardiovascular System table at the end of the book for related tests by body system. 
electrocardiogram; ECG graphic representation of heart action (Table 1)
|ECG feature and duration||Comment|
|P wave ≤0.12 seconds||Depolarization of the atria|
|QRS complex ≤0.1 seconds||Depolarization of the ventricles|
|T wave||Ventricular repolarization|
|PR interval 0.12-0.22 seconds||The time, in seconds, from the start of the P wave to the start of the QRS complex, representing the time taken for the electrical activation (of the cardiac conduction system) to pass from the sinus node, through the atrium, the atrioventricular node and the His-Purkinje system to the ventricle|
|QT interval; Male: ≤0.44 seconds, Female: ≤0.46 seconds||The time, in seconds, from the start of the QRS complex to the end of the T wave, representing the time taken for the depolarization and repolarization of the ventricular myocardium|
|ST segment||The period between the end of the QRS complex and the start of the T wave; in the normal heart, all myocardial cells are depolarized by this phase of the ECG|
<urn:uuid:84643ae1-e366-423d-bb74-61acabd2b8ed>
CC-MAIN-2017-51
http://medical-dictionary.thefreedictionary.com/Elektrokardiogramm
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948618633.95/warc/CC-MAIN-20171218161254-20171218183254-00216.warc.gz
en
0.860695
4,654
3.109375
3
There are numerous resources you can use to collect valuable information and advice on how to properly grow a garden. You can easily spend all day on the web searching for horticulture information that is specific to your garden’s issues. This article will give you everything you need in order to get started. Try planting seeds in pots, and then transferring the seedlings to your garden. They are more likely to survive the transition to adulthood with this method. Seeds can’t always thrive in gardens, and are often eaten by birds. When you take out the prior set of adult plants, your seedlings will then be prepared to go in. Your plants will respond better to gradual changes in temperature or condition. Put them in the sun for approximately one to two hours on the first day. Over the course of a week, slowly increase the time they are allowed to stay outside. By the week’s end, the plants can make that big move without a problem! Use both annuals and biennials to add a splash of color to your flower beds. These biennials and annuals are fast-growing, and they allow you to brighten up your flower bed with a change for each season. They can make a handy gap-filler between shrubs and perennials located in sunny areas. Some of these that you might consider are petunias, marigolds and sunflowers. If those are not flowers you like, you can also try cosmos, hollyhocks or rudbeckias. Shoveling clay is very difficult and a lot of work because the clay is hard and sticks to the shovel. To make your digging project easier, apply some car wax or floor wax to the head of the shovel and buff. The clay will slide off the surface, and the wax will prevent rust. Be sure to get rid of the weeds growing in your garden. Weeds steal nutrients from plants, robbing a garden of its potential harvest. You might want to think about using white vinegar to do this. It can kill weeds. Putting white vinegar on the weeds gets rid of much of the need to pull them out by hand. Starting seeds in pots ensures that the plants have a better chance of surviving to adulthood. It also lets you tighten up the planting periods in your garden. Your seedlings will be ready to go in as soon as you remove your old mature plants. If you are looking for an all-natural, organic way to weed your garden, consider “boiling off” the weeds. Boiling water can be considered an herbicide, and it is a safe one. One simple pour across the weeds with a pot of boiling water will take care of the problem, but you have to remember the same applies to your plants as well. The roots of the weeds are damaged by the boiling water which, in turn, inhibits further growth. Do you enjoy fresh mint, but don’t like how it engulfs your garden with its growth? Contain its growth with a garden container or large pot instead. If you would like, go ahead and plant the container and the plant right in the ground; the container will still prevent the roots from taking over. You don’t need costly chemical treatments for plant mildew. Mix a bit of liquid soap and some baking soda in water. Spray this mix on your plants once a week until the mildew disappears. Baking soda is a good way to get rid of mildew without damaging your plants. Consider planting evergreens in your garden that produce berries. They add color to your yard throughout the year. These plants can help you get some color during the winter months: Winterberry, Common Snowberry, American Holly, and American Cranberrybush. Pre-soak your seeds overnight in a dark place.
This will hydrate the seeds and help them get the water they need to start growing. This will also give your seeds a better chance of flourishing. Knee pads are a great help in a garden with plenty of low-growing plants. Having a pair of knee pads will cushion your knees to provide additional comfort. Protect yourself from sun overexposure while gardening by wearing the proper clothing. Try wearing a large sunhat and sunglasses to protect your face and eyes, and use sunscreen on any exposed skin. Utilizing the correct sun protection makes it less likely that sunburn will occur and decreases the chance that skin cancer will develop. You can keep pests away from your garden by using other plants and natural materials. Slugs can be kept at bay with either onions or pungent vegetables. These proven methods remove the need for harsh chemical pesticides. In the hottest time of the day, most vegetables are less firm; even the act of harvesting the veggies may cause bruising. You should also be sure to cut them off the vine and not twist them, as twisting can hurt the plant. Place a two inch layer of organic mulch close to your tall vegetable plants. The mulch will help keep the soil around the plants more moist. It also helps prevent weeds from sprouting. You’ll find this is a time saver since you don’t have to pull them later. Strawberries are a good organic garden choice for families with children, particularly everbearing strawberries. Children are thrilled to harvest fruit from their own garden, and doing so often makes them more enthusiastic about helping out with the more hum-drum aspects of tending a garden. Horticulture should be a relaxing hobby. There are numerous ways to seek personal relaxation and peace. North Cyprus Gardening is a great way to achieve this goal. The returns from a garden far outweigh the minimal investment cost. The biggest dividend is the emotional satisfaction of planting and growing your own. When you are growing organic plants within the home or an enclosed area, it is essential to consider how much light the plants will receive. If you want indoor plants, choose specimens that can grow in relatively dark places. If you want to grow plants that need a lot of light, consider using artificial lighting. Vegetables get softer as the temperature goes up, increasing the risk that you will damage them. Do not plant your seeds in a rush. The first step is to moisten the soil. Then you want to spread your seeds evenly while making sure that they have enough room to grow. The depth at which you bury them should be three times their size. Some seeds should not be buried at all as they need light to grow. Plant ever-bearing strawberries for your children. Children will be much more willing to eat other foods you’ve planted as well. Try lightly ruffling the seedlings with your hands about twice a day. However odd this may sound, research shows that this touching encourages seedlings to grow better than they would without touching. Your thermostat for indoor plants should be kept between sixty-five and seventy-five degrees throughout the daylight hours. The temperature needs to remain steady and warm so they may grow.
If you do not want to keep your home that warm during the winter months, you could always get the organic plants a heat lamp. Add mulch to your garden to improve the vitality of the soil. Mulch acts as a protective shield for the soil it covers. It protects the plant roots, keeping the ground cool on a hot summer day. It will also stop the soil from losing its moisture in the hot sunlight. It is also very good at controlling the weeds. Make sure your work in your garden is efficient. Don’t waste time by searching high and low for lost tools. Prepare all of your tools prior to working in the garden, and put them away nicely when you are done. If needed, use a tool belt or even pants that have quite a few pockets. You can find a lot of information on how to keep any unwanted pests away by researching local botanical insecticides. These natural insecticides are just as effective as chemicals, sometimes even more so. Keep in mind, however, that the biological composition of botanical insecticides can cause them to quickly decay and disappear. Have plastic bags on hand so that you can pull them over your gardening shoes if they are muddy. Plants growing in healthy soil will be healthier than plants growing in soil that is insect ridden and diseased. Healthy soil won’t get rid of insects, but it does make them less harmful, which should make most people happy. Don’t let all the little chores in your organic horticulture tasks stack up for very long. Even if you’re too busy to focus on your garden’s needs each day, you can try little things that will prevent you from having a lot of work when you return to your garden. For example, if you are playing in the yard with your child, take the time to pull out a few weeds. Think about the shade trees will cast before planting them. You will not need to run air conditioners as much; in turn, it will help you to save money on utility bills, because the trees produce shade around your home. Northern Cyprus Gardening Just because winter is coming doesn’t necessarily mean that it’s time to give up your garden. Instead, create an outdoor tent to protect the area. Find some bean poles you aren’t using anymore and stick one at the end of each bed. Cover them with sheets and hold down the edges with bricks. This inexpensive tent can protect cabbage and kale, carrots, beets and potatoes to be harvested during the winter. You probably already know how rewarding Northern Cyprus Gardening can be. As you gain valuable North Cyprus Gardening experience and take in lots of information, your skills will only get better. Use all of the Northern Cyprus Gardening information you can get your hands on. So, use the tips you just learned from this article and before you know it your garden will be that much closer to the garden of your dreams. One way to save on watering costs in your garden is to use a large amount of mulch. Mulch can decrease your need to water plants because it provides and conserves the moisture available to your plants. You can get it from the store, parts of trees, or dead plant materials. The key to good mulch is to use enough of it and provide a thick layer of it.
<urn:uuid:203dfccb-c53c-46a6-998a-c754ca94da0c>
CC-MAIN-2019-26
https://ncproperties.co.uk/key-strategies-for-getting-the-most-out-of-your-garden/
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998473.44/warc/CC-MAIN-20190617103006-20190617125006-00455.warc.gz
en
0.949877
2,186
2.75
3
What is Lyme disease? Lyme disease is caused by a bacterial infection. In the eastern U.S., the infection is transmitted by the bite of a black-legged tick, commonly known as the deer tick. Lyme disease, which can affect the skin, heart, nerves, or joints, can often be effectively treated with antibiotic therapy. The early diagnosis and proper treatment of Lyme disease are important strategies to avoid the costs and complications of infection and late-stage illness. As soon as you notice a characteristic rash or other possible symptoms, consult your healthcare provider. What are the symptoms of Lyme Disease? The symptoms of Lyme disease can vary because different parts of the body may be affected. The skin, joints, nerves or heart may be involved. Early symptoms of Lyme disease typically appear within 3 to 30 days after a tick bite and include one or more of the following: - Chills and fever - Muscle and joint pain - Swollen lymph nodes - “Bulls-eye” rash at or near the site of the tick bite The “bulls-eye” rash we associate with Lyme disease may occur in up to 80% of people infected. The rash usually appears within seven to 14 days. The center of the rash may clear as it spreads, giving it the appearance of a bull’s-eye. The rash may be warm, but it is usually not painful or itchy. Lyme disease without the telltale rash is often misdiagnosed because the symptoms mimic other ailments. Infections that are not recognized and treated in the early phase may spread to other parts of the body, a condition called disseminated Lyme disease. Symptoms of disseminated disease can occur days to months after the initial infection. Some of the symptoms associated with disseminated disease include: - numbness and pain in the arms or legs - paralysis of facial muscles, usually on one side of the face (also known as Bell’s palsy) - fever, stiff neck, and severe headaches if meningitis occurs - abnormal heart beat (rare) What is the treatment for Lyme disease? Several antibiotics are effective for treating Lyme disease. Patients treated with antibiotics in the early stages of the infection usually recover rapidly and completely. How can I protect my family and myself from getting Lyme disease? The best way to prevent Lyme disease and other tick-borne illnesses is to avoid contact with ticks. If you are working, playing, or relaxing in areas that may have ticks you should do the following: - Wear long sleeve shirts and pants. Light colored clothing makes it easier to spot ticks. - Tuck your pants into your socks and tuck your shirt into your pants. - Use an EPA approved repellent (such as DEET) on your skin, and apply permethrin to your clothes. For more information visit http://cfpub.epa.gov/oppref/insect/ - Stay on trails and out of tall grass that ticks are especially fond of. - Keep your lawn mown, cut overgrown brush, and clear away leaf litter from your home. - Inspect any pets daily and remove any ticks found. Your risk of getting an infection like Lyme disease is significantly lower if you remove a tick within 36 hours – it can take that long before they transmit Lyme disease bacteria. - If bitten by a tick, wash the area of the bite thoroughly with soap and water and apply an antiseptic to the area of the bite. - Mark on a calendar the date that you were bitten, and then watch for signs of Lyme disease or any changes in your health every day for the next month. Check your children daily!
Ticks can attach to any part of the human body but are often found in hard-to-see areas such as the groin, armpits, and scalp. Ticks can ride into the home on the items you use outdoors – jackets, blankets and backpacks. Toss clothing in a hot dryer for ten minutes. Washing clothes doesn’t kill ticks, but drying does. Be tick smart and keep your family safe from Lyme disease! Read more about ticks and Lyme disease prevention: http://www.tickencounter.org
<urn:uuid:0f4999f0-6b7c-4361-a7f7-633f5d86da3d>
CC-MAIN-2017-30
https://www.whrl.org/2014/06/the-low-down-on-ticks-and-lyme-disease-in-maine/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427749.61/warc/CC-MAIN-20170727062229-20170727082229-00025.warc.gz
en
0.944629
878
3.703125
4
Object oriented programming has become the preferred approach for most software projects. It offers a new and powerful way to cope with complexity. Object oriented programming concepts are useful for modeling complex physical systems such as cars, airplanes, etc. Instead of viewing a program as a series of steps to be carried out, it views it as a group of objects that have certain properties and can take appropriate actions. Among the object oriented programming languages available, C++ is the most widely used. Writing programs based on inheritance, polymorphism, encapsulation, and overriding requires knowledge of C++. This subject acts as a base for Java, VC++ and UML. For books see the following link:
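As a purely illustrative aside — not taken from the linked books, and using invented Vehicle/Car/Airplane class names — the short C++ sketch below shows the concepts the paragraph mentions: encapsulation (data and behaviour bundled in a class), inheritance, overriding, and polymorphism.

```cpp
#include <iostream>

// Encapsulation: data and behaviour live together in a class.
class Vehicle {
public:
    virtual ~Vehicle() = default;
    // A virtual function that derived classes may override (polymorphism).
    virtual void describe() const { std::cout << "A generic vehicle\n"; }
};

// Inheritance: Car and Airplane reuse and extend Vehicle.
class Car : public Vehicle {
public:
    void describe() const override { std::cout << "A car with four wheels\n"; }      // overriding
};

class Airplane : public Vehicle {
public:
    void describe() const override { std::cout << "An airplane with two wings\n"; }  // overriding
};

int main() {
    // Polymorphism: the same call behaves differently depending on the actual object.
    Vehicle* fleet[] = { new Car(), new Airplane() };
    for (Vehicle* v : fleet) {
        v->describe();
        delete v;
    }
    return 0;
}
```

Calling describe() through a base-class pointer is the heart of polymorphism: the same statement behaves differently depending on the actual type of the object it refers to.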
<urn:uuid:c932a0b3-342d-4312-8e8c-b16ce61b7f92>
CC-MAIN-2017-09
https://sites.google.com/site/tusharkute/index/oop
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00201-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930013
140
3.1875
3
Educational Resources for Studying Graphic Design One of the greatest things about having access to the Internet is the educational opportunities that it affords the public. People in all professions have the ability to expand their knowledge base through the wealth of information being shared via the Internet, and the same holds true for those who are studying graphic design. The Internet is positively bursting with tutorials and resources that can help one advance through the various stages of becoming a successful graphic designer, all the way from newbie to experienced pro. Today, that is our purpose here: gaining knowledge about graphic design so that we can advance our skills and blossom in our chosen fields. Here is a collection of invaluable educational resources on graphic design that have been broken down into different categories depending on your preferred methods of consumption. Each of us has our own approach to learning, whether visual, auditory or through repetition. It was in that vein that the resources were collected. We have tutorials for your hands-on approach, podcasts for a more auditory take and an assortment of PDFs and articles to read through. The first category of resources that we have gathered is some assorted graphic design PDFs that will freely add to the educational foundation on which you are building. These community-supplied supplements cover a range of graphic design elements and areas. So take a look through and begin downloading your new skill builders from the list below. Design Your Imagination is a valuable graphic design ebook that should be in any beginner’s toolbox. Deconstruct website design and learn to hone your creative skills with this comprehensive guide to the web: As with any good learning experience, you need someplace to start, and some choose to start at the beginning. With The History of Graphic Design you can do just that. This informative PDF offers a brief rundown of the history of graphic design: When it comes to teaching design in a whole new way, The Design Funnel: A Manifesto for Meaningful Design brings a unique look at the world of design, and guides the reader to honing their design process. From beginner to established pro, this manifesto could offer you a fresh perspective on your approach: Graphic Design Teaching and Learning is a helpful learning tool that takes on both the approach of teaching graphic design as well as learning it. Definitely an interesting and worthwhile read: PDFoo is a large collection of ebooks and PDFs all related to the field of graphic design. This resource provides a full download and will keep your nose to the grindstone for days: Pictorial composition and the critical judgment of pictures is a wonderfully informative ebook that will allow anyone, no matter their level of expertise, to dive in and learn the basics about the composition and overall aesthetics of images: Just Creative Design’s Type Classification eBook is a fantastic educational resource for those who wish to learn more about the fine art of typographical design. Whether type is to be your specialty or not, this ebook is worth a read: A theory of pure design; harmony, balance, rhythm is another learning-library must-add. This ebook explores in great depth the fundamentals of design theory.
It is an artistic exploration into the very principles which lie at the foundation of illustration and painting as fine art: An Introduction to Graphic Design is another useful ebook to be stocked in any graphic designer’s learning library. It offers a look at the basic fundamentals of graphic design to build a more solid foundation: The Q&A E-book: Interviews With 25 Popular Bloggers pretty much tells you all you need to know about it in the title. Gain insight into the processes of the big names in the design field through this interview-filled ebook and learn tricks that you can implement in your own: The final PDF we have for you is the Graphic Design Wikibook. Think of the Wikibook as an open-content collection of textbooks, in this case specifically focused on the graphic design world. Do not let its position fool you, it is definitely worth a gander: This second section of resources is an array of great articles all geared specifically towards graphic design education. Combing through the archives of some of the best known sites for assisting the growth of the community, we have pulled some must-read posts from their pasts just in case you missed them. Even if you do not read another post from any of the sites they have come from, these articles you should not let pass you by! Earlier, we mentioned needing someplace to start; well, Teach Yourself Graphic Design: A Self-Study Course Outline is one of those places. This article is absolutely filled with resources that will help you begin your self-directed graphic design studies: Want to know how to design? Learn The Basics. is a fantastic article from Just Creative Design that gets at the heart of design through the very basic elements of design. These are the fundamentals that you cannot get by without: The Difference Between Art and Design is an extremely insightful discussion about the difference between art and design. This is a long running debate in the design community, and if you are going to be a designer, you should certainly familiarize yourself with the ongoing dialog: Art vs Design is another pertinent and well thought out dissection of the art vs. design issue. Part of a solid foundation of learning comes from having a firm understanding of the field you are working in; through this evolving dialog, any designer can gain better footing in this design landscape: Graphic designers are often wondering What Skills Will I Learn as a Graphic Designer? especially when they are starting out. This article examines this very query in specific detail: Underneath all of the fundamentals of graphic design lies the theory that drives it. In the article 50 Totally Free Lessons in Graphic Design Theory the TutsPlus family definitely delivers with a bevy of knowledge-building lessons in theory: For more on theory, Noupe’s Graphic Design Theory: 50 Resources and Articles is another wonderful collection of graphic design theory based resources from our sister site. Help expand your design horizons even more with this educational article: Learn the Basics: 25+ Sites And Resources To Learn Typography is a great article from 1st Web Designer that will serve to enhance your typographical knowledge base. Typography is an important element in design and understanding it is key in your design growth: As far as useful articles on design go, David Airey put together a post on the community’s favourite books about graphic design. What’s your favourite graphic design book?
opens up the comments to the community and the post evolves from there as the replies pour in: In your quest for graphic design knowledge you can even turn to About.com, whose course Introduction to the Elements of Design will give you a grasp of the basics of graphic design. Another article that serves as a good starting point: Speaking of graphic design books, this site has a brilliant article on them. 30 Delightful Graphic Design Books spans the various arenas of graphic design to find some of the industry's most useful books that will aid you in your thirst for knowledge: The Design Cubicle offers this insightful article, Tips to improve as a graphic designer, for anyone on an educational journey through the field of graphic design. It is certainly worth a look for new and old designers alike: Finally, when it comes to growing as a designer and learning to hone your skills, the blogosphere is an invaluable tool. The following article, 100 Must Read Design Blogs, helps you to maximize your surfing by pointing you in the direction of some great sources for educating yourself: In this category, we move into the auditory section of our graphic design learning experience with some selected podcasts. These informative online broadcasts take on the task of offering an educational outlet for the masses over the virtual airwaves with a different approach than you can get most anywhere else. If you are more inclined towards audio forms of learning, the podcasts are especially for you, but they are truly for anyone interested in learning more about this field. For Graphic Designers Only is a business-minded, graphic-design-focused broadcast which talks with industry experts to gain insight and give advice to the community. From building an individual freelance business to a more firm-oriented perspective, this is a useful podcast: Rookie Designer's The Podcast for the Not-So-Accomplished Designer is a look at the graphic design landscape with a beginner's gaze. This is a great place to start in the podcast pool, whether you are a newb or in need of a little refresher course: Design Guy takes a simpler approach to the design principles, taking the time to explain them in laid-back, basic ways that should be easy to wrap your head around. Get your new design projects off the ground in no time with a little help from the Design Guy: Art a GoGo Podcast is certainly useful to anyone looking to solidify the base of their design work through an exploration of art. The Art a GoGo podcast features discussions about art news and the overall world of art in an educational and entertaining way: There have not been many logo-specific resources, but Logo Design is an informative podcast that does help balance the scales a bit. Learning about logos through tips and in-depth discussions has a new face, the "Logo Design" podcast: Boagworld Web Design Advice is a fantastic weekly podcast which runs the proverbial gamut of the graphic design field. Offering the community interviews, reviews, and news, Boagworld is a podcast for all levels of designer, from the learning to the learned: Art History Podcast is another podcast that can help you learn more about the art that has shaped and paved the way for the design world today through thoughtful analysis of timeless classics. Unfortunately, the show is no longer active, but there are plenty of past episodes to learn from: PixelPerfect is another great resource for the designer looking to increase their knowledge base in both Adobe Photoshop and Illustrator.
It comes complete with demonstrations to teach you the tips and tricks of both graphics programs. If you use either one, then you should check out this podcast: The Rissington Podcast is another insightful broadcast in which the hosts take questions from their listeners and answer them for the entire graphic design community. This would certainly be the problem solver's podcast for designers old and new: CreativeXpert Design Interviews is a completely interview-based podcast which features a brilliant array of expert guests to provide the insight that lays the foundation for this informative show. With some of the brightest minds in the field sharing their techniques and inspiration, this podcast can enrich anyone's educational journey: Another interesting podcast with a bit of a different twist on the show is FEED – A Magazine of Graphic Design. FEED is exactly what it sounds like from the tagline; it is a community-based, submission-driven graphic arts magazine in the form of a podcast: The final podcast that we are going to feature for growing as a designer is Design Tools Weekly. This is another weekly broadcast that discusses the topics that designers are looking to have delved into. This is another show that bends the learning curve in your favor: We are going to wind things down with some useful websites to keep bookmarked and to track regularly through their feeds, feeding your thirst for graphic design knowledge. We thought this would be a good place to finish, because most of the sites themselves are growing resources that will continue to deliver new opportunities to learn. If you have not seen these sites, or not seen them lately, then we recommend that you stop by for a refresher on what they have to offer! Design is History is a fantastic resource for all those treading the design waters looking to learn more. The site spans so many eras in graphic design, teaching you the history and principles that shaped this dynamic field: As far as starting points go, Smarthistory takes the seeker of knowledge back even beyond the start of graphic design, and into the history of art from around the world for a more comprehensive overview of the seeds that helped sow the graphic design field: Arty Factory is an educational resource hub for design and art lessons online. They are totally free for all and provide an array of fully illustrated classes on drawing, painting, and design: BBC Learning – Art and Design is an online resource for building and advancing your art and design knowledge. This site offers various courses to help you grow in your selected field: The Metropolitan Museum of Art's Heilbrunn Timeline of Art History is another wonderful place to turn for a more historical perspective and learning opportunity as you study the great works of art from all over the world and all through time: Graphic Design Forum is one of the largest forums on the web for all things graphic design. If it is experience that you need to gain, then you have come to the right place, no matter what level of designer you are: All Graphic Design is an expansive resource in and of itself, all fed by the online design community. From forums to templates to articles and more, this is a one-stop learning hub for graphic designers: TutsPlus is a tutorial junkie's playland for hands-on graphic design walkthroughs. If you are not familiar with the TutsPlus site and family then you are selling your graphic design business short.
They offer both free and premium levels of tutorials: The MFA in Interaction Design program trains students to research, analyze, prototype, and design concepts in their business, social, and cultural contexts: Speaking of invaluable hubs for all things graphic design, Design Talk Board comes in to educate the online masses with piles of resources, jobs information, graphics news, talk forums, software training, and oh, so much more! Graphic Design Principles Index is an installment-based online graphic design course that is hosted by Duke University. The program is divided into 39 different parts: AIGA is a site that all professional graphic designers should be aware of. This is an association that is dedicated to advancing the overall design field as 'a professional craft, strategic tool and vital cultural force' in our society: HOW Design Forums is another wonderful and informative graphic design forum that will enrich any graphic designer's learning experience. There are so many great communal contributions that you would be remiss to, well, miss: If it is informational sites that you are looking for, then SitePoint certainly needs to be on your radar. No matter your level of learning, this fast-growing online media company and information provider has something for all web professionals: Graphics.com is a community resource that is shared by any and all graphic designers who wish to be a part of it. Complete with tutorials, educational videos, a full forum, and more, this is another resource center worth looking into: Packaging Design Archive is a collection of very well-designed product packaging that users can flip through to study the presentations that are housed there for learning purposes. Not to mention, a little inspiration: Packaging of the World is another communal collection of product design, whose deep databases are full of helpful examples that you can once again study to learn from. If you like to learn by example, and package design is your passion, then look no further: The Chicago Design Archive is another great place where you can browse through a collection of locally created designs from the Chicago area and learn through a more hands-on approach: STA Archive is an annual design contest where you can show off the skills that you have learned and been honing: We thought we would cap this section off with one more graphic design based forum for the online community. Your Design Forums can provide learning opportunities through the shared experiences and advice of the community: If there are any sources for graphic design learning tools that you tend to turn to that we left off here, please let us know by adding them in the comments section below to keep the learning experience expanding! Consider Some of Our Previous Articles: - Useful Glossaries For Web Designers and Developers - The Web Design Community Offers Advice To Beginners - Lessons From Swiss Style Graphic Design
<urn:uuid:1bd326df-5df9-4481-bead-97bd6d3b21ac>
CC-MAIN-2014-23
http://www.noupe.com/design/education-resources-for-studying-graphic-design.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869884.75/warc/CC-MAIN-20140722025749-00121-ip-10-33-131-23.ec2.internal.warc.gz
en
0.945929
3,282
2.96875
3
Every nation around the world views money from two perspectives, with one exception: the United States. You have the local currency, and then the global reserve currency as an alternative safe haven to run to in times of trouble. When one is faltering, you run to the other, which in relation to the local currency is considered safe. Currently there are two South American countries whose citizens are abandoning their own currency in search of Federal Reserve Notes, or dollars. The currencies I am referring to are the Argentine peso and the Venezuelan bolivar fuerte. The combined population of the two countries is roughly 70 million, and yet they share the same fate: government-backed paper money, or fiat currency, that is failing its citizens; a money supply that is robbing each nation of purchasing power while destroying livelihoods. It is not a coincidence that both suffer from incompetent leadership that borrows more than it could ever produce and places the burden of repayment on the citizens. This type of governmental system is what takes place in every country, but these two are the most disturbing. In my opinion, they will most likely be the first two countries to start a monetary ripple effect of collapsing paper-money systems. To understand how bad the economy is in Argentina, just follow the money. Official government statistics put the exchange rate at $1 to 8.43 pesos, while citizens scrambling to get rid of pesos are willing to pay 15 or more pesos for a single dollar. That is almost double the official government rate. The people have so little confidence in their own money that they would rather pay nearly twice as much to get away from it. This example clearly shows the disconnect between the government's numbers and the citizens' mistrust of the country's currency. Venezuela is the same, but even worse. The government has gone further by pegging the nation's money to the dollar at a fixed rate, which means it actually sets the price, unlike Argentina, where the rate fluctuates with consumer confidence among other factors. The dollar in Venezuela is fixed at $1 to 6.29 bolivars, so no matter how bad things get or what the citizens have to say about government policy, that rate does not change. Out of desperation, citizens are dumping the bolivar at an exchange rate of 102 bolivars for a single dollar. They mistrust their own paper currency so much that they are willing to pay roughly sixteen times the official rate for something considered safe. In both of these examples, citizens run toward dollars to escape complete monetary destruction. It is the global reserve status of King Dollar that makes these citizens feel safe. The monetary pedestal the dollar sits on is due to the fact that every nation must use dollars in trade, so every nation is supposed to hold dollars in reserve. This has been the standard since 1944, the year Bretton Woods became the monetary system of the globe. Ever since that agreement was put in place, the dollar has been a major factor in every nation's financial affairs. The dollar is the alternative to every nation's problems and, in my opinion, will one day be a problem for every nation that holds it. At the end of the day, the dollar is no different from a peso or a bolivar. They all originate from governments that borrow to exist and allow central banks to print massive quantities to spend.
Just as the citizens of these southern countries are experiencing first-hand massive inflation and a sky-high cost of living, so will other countries that depend on the dollar. The difference for citizens of the United States is that every other country runs to the dollar as its safe haven. The question is: when the dollar suffers the same fate as every other piece of paper, what will United States citizens run to for safety?
<urn:uuid:3ec9b77c-6acf-4a8a-8420-65ed7e28e972>
CC-MAIN-2024-10
https://www.rethinkingthedollar.com/foreigners-run-to-the-dollar-where-do-united-states-citizens-run-for-safety/
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00831.warc.gz
en
0.951337
790
2.75
3
A file plan lists the records in your office, and describes how they are organized and maintained. File Plan - A classification scheme describing different types of files maintained in an office, how they are identified, where they should be stored, how they should be indexed for retrieval, and a reference to the approved disposition for each file. A good file plan is one of the essential components of a recordkeeping system, and key to a successful records management program. It can help you: - document your activities effectively - identify records consistently - retrieve records quickly - disposition records no longer needed - meet statutory and regulatory requirements A file structure is the framework of your file plan. The major steps in implementing a file plan in your office are: - identifying documentary material - creating the file structure - creating the file plan Identifying documentary material The first step in implementing a file plan for your office is identifying what you have. Whether you are updating an existing file plan or starting from scratch, you will need to do a survey of what documentary material you have, where it is located, and who is responsible for it. It is important to have an understanding of the functions performed in your office. You need to identify: - Records - FAA-owned documentary material created in the course of business, received for action, or needed to document FAA's activities (e.g., permits, leave requests) - Nonrecords - FAA-owned documentary material that does not meet the definition of a record (e.g., reference materials, convenience copies) - Personal Papers - Documentary material of a private nature that does not relate to FAA business (e.g., outside business pursuits, activities prior to government service) There are several ways you can survey the documentary material in your office. A traditional records inventory requires a team of records managers to do a folder-by-folder inventory of all work and storage spaces. Several FAA offices have hired contractors to do inventories. Other offices have used a shorter survey approach, enlisting the help of their network of records contacts and custodians. Regardless of which method you choose, the final product should be a complete listing of all documentary material created, received and/or maintained by staff and contractors, matched to the appropriate records schedules and disposition items. (See Six Steps to Better Files for details.) Creating the file structure Once you have identified what you have, the next step is creating the file structure, by arranging the records schedules and disposition items that apply to the records in your office in file code order. Creating the file plan Once you have a file structure, the next step is creating the file plan, by adding folder- or document-level details about the records in your office, as well as information about how they are managed. 
At a minimum, the file plan should include the following information for each folder or document: - Person and organization responsible for maintaining the records (i.e., custodian) - File code - Title of the records - Medium (e.g., paper, electronic, video) - Access restrictions - Vital records status - Location of the records (e.g., room number, storage location number) - Date range of the records - Dates when the records are closed, retired, and transferred or destroyed - Disposition status of records (e.g., active, inactive, hold) You may want to include other information, such as: - Description of the records - Arrangement of the records (e.g., alphabetically by site, chronologically) - Link to the records schedules - Person responsible for maintaining the file plan - Last revision date of the file plan During this process, you may need to make decisions on how the records are maintained. For example, you may need to determine: - Who is responsible for the "official record" and who only has convenience copies? - Are "drafts" or "working files" included in the record? - Is the record copy maintained in a paper or electronic recordkeeping system? - Should reference materials be centralized? It is important to include all stakeholders when making these decisions and to obtain management approval of your file plan. Now that you have a file plan, you need to train office staff on how to use it. And, remember that it is a "living" document that should reflect changes to your office (e.g., departing employees, office moves, changes in business). It may need to be updated monthly when the records schedule changes are issued. It must also be reviewed at least annually to ensure it still covers all of your office functions. A file plan can be a very effective tool when it is carefully planned, documented, and kept up-to-date.
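The article stops short of giving a template, but the minimum fields it lists map naturally onto a simple structured record that could sit behind a spreadsheet or a small database. The sketch below is one hypothetical way to model a folder-level entry in Python; every field name and all sample values are illustrative only, not FAA conventions.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FilePlanEntry:
    """One folder-level row in an office file plan (illustrative fields only)."""
    custodian: str                 # person/organization responsible for the records
    file_code: str                 # code linking the folder to its records schedule item
    title: str                     # title of the records
    medium: str                    # e.g. "paper", "electronic", "video"
    access_restrictions: str       # e.g. "none", "internal use only"
    vital_record: bool             # vital records status
    location: str                  # room number or storage location number
    date_range: str                # e.g. "2022-2024"
    disposition_status: str        # e.g. "active", "inactive", "hold"
    closed_date: Optional[date] = None    # when the folder was closed or retired
    disposal_date: Optional[date] = None  # when records are transferred or destroyed

# Hypothetical example entry; real file codes and retention dates come from the
# office's approved records schedules.
entry = FilePlanEntry(
    custodian="Example Program Office",
    file_code="1350-01",
    title="Annual leave requests",
    medium="electronic",
    access_restrictions="internal use only",
    vital_record=False,
    location="Shared drive: Records/Leave",
    date_range="2024",
    disposition_status="active",
)
```

Keeping entries in a structure like this makes the annual review easier: the plan can be sorted by file code, filtered by custodian, or checked for folders whose disposal dates have passed.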
<urn:uuid:b87ec0d1-ea1e-476f-80f9-98e9df793217>
CC-MAIN-2016-26
http://www.faa.gov/about/initiatives/records/tools/toolkits/filecode/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.926014
987
3
3
The GCD Calculator is a free online tool designed to display the GCD of the given integers. The Greatest Common Divisor (GCD) of two or more integers is defined as the largest positive integer that divides each of them. The greatest common divisor is sometimes also called the Highest Common Factor (HCF) or, loosely, the greatest common denominator. For example, the GCD of 8 and 12 is 4, as both numbers are divisible by 4. The divisors of 8 are 1, 2, 4 and 8; the divisors of 12 are 1, 2, 3, 4, 6 and 12. Of these, 1, 2 and 4 are common, and the largest common divisor is 4. Hence, the GCD of 8 and 12 is 4. The procedure to use the GCD calculator is as follows: Step 1: Enter the numbers in the respective input fields. Step 2: Hit the "Solve" button to get the result. Step 3: The GCD of the given numbers will be displayed in the output field or in a new tab.
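The page does not say how the calculator computes its result, but the standard approach is the Euclidean algorithm: repeatedly replace the pair of numbers with the smaller number and the remainder of dividing the larger by the smaller, until the remainder is zero. A minimal sketch in Python (the function names are mine, not the tool's):

```python
from functools import reduce

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: gcd(8, 12) -> 4."""
    while b:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return abs(a)

def gcd_of_list(numbers: list[int]) -> int:
    """GCD of two or more integers, e.g. gcd_of_list([12, 18, 24]) -> 6."""
    return reduce(gcd, numbers)

print(gcd(8, 12))                 # 4, matching the worked example above
print(gcd_of_list([12, 18, 24]))  # 6
```

Python's standard library already provides math.gcd, so the helper above is only needed to show the idea behind such a calculator.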
<urn:uuid:8bc41efb-cafc-4b92-bf76-532c5311c4c1>
CC-MAIN-2021-43
https://calculator.info/gcd-calculator/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00163.warc.gz
en
0.895807
240
4.09375
4
There's a small snag with George Seaborn's idea for a frozen (and thus tiltable) mercury mirror (Letters, 12 August). So far I haven't managed to work out what precisely you would use a mirror covered in frozen water vapour for. It seems that what is really needed is some material that is solid at room temperature and has a low coefficient of thermal expansion. It could then be melted, spun to the mirror profile, and allowed to cool. I think that an ideal material might be a mixture of fused silicates - or glass. Anyone wishing to take up this idea has my permission. Reading your article ("Spinning images from mercury mirrors", 15 July), one would never know that it was Ermanno Borra of Laval University in Canada who was responsible for reviving the idea of liquid mirror telescopes. It was he who showed that one could overcome the difficulties ...
<urn:uuid:72a458cd-d8a7-400b-9a12-99129e0996e4>
CC-MAIN-2015-22
http://www.newscientist.com/article/mg14719946.600-but-whats-it-for.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928114.23/warc/CC-MAIN-20150521113208-00097-ip-10-180-206-219.ec2.internal.warc.gz
en
0.968675
212
3.203125
3
Surgical sterilization has long been considered the gold standard for managing dog and cat populations in the United States. Yet the millions of stray and unwanted companion animals euthanized each year in this country raise questions about whether the gold standard is really just gold-plated, and if there's a better way of reducing the numbers of surplus animals. The most obvious shortcoming of spaying and neutering as methods of population control is one of logistics. For a number of reasons, ranging from a lack of responsible pet ownership to affordability, too few cats and dogs are being sterilized. Early spaying and neutering, discounted surgeries, or mandatory sterilization requirements for pet adoption have all been offered as solutions to the overpopulation problem. The sad reality remains, however. The number of new litters of fertile dogs and cats born each day in the United States vastly exceeds the delivery system for surgical sterilization, resulting in excess numbers of unwanted animals. Those not fortunate enough to be adopted can become victims of starvation, trauma, and disease. Dr. Margaret Slater, an associate professor of epidemiology at Texas A&M University, recently highlighted the overpopulation problem as it applies to cats. Speaking this past November at an international symposium in Alexandria, Va., on nonsurgical contraceptive methods for population control, Dr. Slater explained that the number of feral and stray cats in the United States is estimated at around a third to a half that of the owned cat population, which translates into 30 to 45 million free-roaming cats. More than a hundred people from across the world gathered at the symposium, hosted by the Alliance for Contraception in Cats & Dogs, to hear about the latest developments in dog and cat population management and new contraception technologies. While surgical sterilization is an essential tool in pet population control, ACC&D President Joyce Briggs believes additional contraception options are desperately needed. "I believe that when we are still euthanizing millions of animals as a means of managing dog and cat populations, it's a crisis," Briggs said. What the ACC&D is searching for is a drug, vaccine, or implant that is safe, inexpensive, and capable of rendering a cat or dog permanently sterile after a one-time procedure. Such a holy grail of chemical castration has yet to be discovered. But considering the scope of the surplus dog and cat problem, the alliance is stepping up its efforts to support research for the eventual development and commercialization, both domestically and abroad, of dog and cat contraceptives. The AVMA encourages research into the development and use of nonsurgical methods of sterilization. Also presented during the symposium was an emerging body of research suggesting that surgical sterilization can cause some adverse effects, such as incontinence, vaginitis, and increased aggression in female dogs. Evidence suggesting that surgical sterilization might cause health and behavioral problems has not been studied in a systematic way by the veterinary profession. Although the need for an effective chemical alternative to surgical sterilization is obvious, the focus and funding to develop such a product has been low. For the most part, the pharmaceutical industry has been reluctant to invest in bringing a dog or cat contraceptive to market because pet owners would likely not pay for something that isn't entirely safe, effective, or convenient, according to Dr. 
Wolfgang Jöchle, a diplomate of the American College of Theriogenologists, and one of the symposium speakers. "The requirement for the replacement of spay and neuter has to be 100 percent perfect. Pharmaceutical companies would have to come up with a perfect drug," Dr. Jöchle said. He added that, while there are many "exciting possibilities" on the horizon, there isn't a single dog or cat contraceptive drug the pharmaceutical industry is immediately willing to invest in. Currently, there is no commercially available contraceptive drug approved for dogs and cats in the United States. A zinc gluconate intratesticular injection for sterilizing male dogs went on the market in 2003. For business reasons that had nothing to do with the drug's safety and efficacy or with consumer demand, the manufacturer pulled the product from the market after just two years. Abbott Laboratories is expected to begin producing and selling the contraceptive in the near future. Overseas, the situation regarding dog and cat contraceptives is much the same as in the United States, although the European Union recently approved a one-year reversible contraceptive implant for female dogs. Limited research has also revealed that the drug suppresses estrus in female cats for nearly three years, and additional research is being conducted. Both here and abroad, the search for effective chemical contraceptives is under way. Loretta Mayer, PhD, has been treating female dogs with an industrial chemical that causes sterility in rodents by preventing development of ovarian follicles. At the symposium, Dr. Mayer, an assistant research professor at Northern Arizona University, presented this method, still in its early stages, as a possible means of achieving permanent sterilization. Dr. Mayer is working on doses and formulations for a single-injection application. She is also involved in preliminary studies at the University of Florida to test the drug's effectiveness in cats. The usefulness of a chemical contraceptive can vary regionally, according to Dr. Julie Dinnage, executive director of the Association of Shelter Veterinarians. In parts of the Northeast, for example, stray dogs aren't a major problem like they are on some American Indian reservations and in developing countries. Elsewhere, it is large populations of feral cats that need to be controlled. A chemical alternative to spaying and neutering that targeted cats would, therefore, be of greater use than one aimed at dogs. "We're going to need more than one option," Dr. Dinnage said. "I don't think it's going to be one silver bullet that's going to take care of the entire overpopulation problem. So the more tools we have to deal with issues in individual, very unique communities, the better poised we're going to be to really address this problem on a nationwide basis, and certainly internationally as well."
<urn:uuid:66366516-6a53-42fb-8ddb-acf86714b2a1>
CC-MAIN-2017-09
https://www.avma.org/News/JAVMANews/Pages/070115a.aspx?PF=1
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00359-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959042
1,242
3.03125
3
September 13, 2011 Fifty New Exoplanets Found Astronomers using ESO’s leading exoplanet hunter HARPS have today announced more than fifty newly discovered planets around other stars. Among these are many rocky planets not much heavier than the Earth. One of them in particular seems to orbit in the habitable zone around its star. In this video news release we look at how astronomers discover these distant worlds and what the future may hold for finding rocky worlds like the Earth that may support life.
<urn:uuid:4296ad49-43f5-41a0-96e8-0bf25f00b6e6>
CC-MAIN-2016-50
http://www.redorbit.com/video/fifty-new-exoplanets/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542687.37/warc/CC-MAIN-20161202170902-00335-ip-10-31-129-80.ec2.internal.warc.gz
en
0.946477
103
2.640625
3
Why Do My Strawberry Plants Not Produce Strawberries? If you are dismayed that your healthy strawberry plants produce runners without any fruit, you must first be certain of the type of strawberry plant you have. June strawberries produce fruit in early, middle or late spring. Ever-bearing strawberries produce fruit during three periods: in spring, summer and fall. Day neutral strawberries produce fruit during the entire growing season from spring to fall. If you have identified the type of strawberry plants you have and determined they are not producing fruit as they should, you can encourage the plants to bloom and produce strawberries. Plant strawberry cultivars known to grow well in your climate. Research what strawberry plants thrive in your region. Plants developed for Minnesota might not grow well in Texas. When plants are not suited to the climate, they will not produce fruit. Test the soil in your garden before planting strawberries. They need a pH level between 5.5 and 6.5. If the pH is too low, add dolomitic lime. If you are using a garden area that previously grew grass, wait one year before planting strawberries. If pH levels are off, plants will not produce fruit. Pinch off the flowers of June strawberry plants when they appear throughout the first growing season. This will ensure robust root development and promote abundant fruit production the next year. Remove flowers on ever-bearing and day neutral strawberries until June 30. Plants will then produce fruit in summer and fall. Scatter fertilizer such as 10-10-10 over the soil before setting strawberry plants. Work the fertilizer into the top 6 to 8 inches of soil. During the second growing season and each year thereafter, fertilize again in July. Water carefully so fertilizer soaks down into the roots. Avoid using too much fertilizer. This causes abundant leaf growth and diminishes fruit production. Brush off any fertilizer that falls on leaves. Water your strawberry plants regularly. Their shallow root system can dry out easily on hot summer days. Plants will not produce fruit if they are too thirsty. In addition, overwatering stops fruit production. The crowns of the plants can rot if they are waterlogged or planted in soil that is not well-drained. Renovate your strawberry patch starting the third or fourth growing season. Thin plants, leaving the healthiest with spacing of 6 inches apart on all sides. When planting strawberries, set them so the roots are just below the soil's surface. Do not cover the crown; doing so will lead to poor fruit production.
<urn:uuid:aefcf045-16ab-4c70-8c1d-9eb1f63b92e7>
CC-MAIN-2020-29
https://www.gardenguides.com/12468835-why-do-my-strawberry-plants-not-produce-strawberries.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140337.79/warc/CC-MAIN-20200712211314-20200713001314-00227.warc.gz
en
0.924287
575
3.359375
3
Earthquakes and resulting tsunamis release a lot of fear, says Florence local Jonathan, who explained how "this San Francisco earthquake sure got our attention," while also pointing to last year's earthquake across the Pacific in Japan that sent Tsunami waves and debris to West Coast beaches. In turn, Jonathan received Federal Emergency Management Agency (FEMA) brochures and briefings on how to prepare for earthquakes and floods on March 5 – the same day that a magnitude 4.0 quake shook the San Francisco Bay area. Also, FEMA experts are not saying if but "when the next disaster hits," and, thus, are advising both locals and visitors along the West Coast to be "prepared for earthquakes." In turn, the California Highway Patrol's Central Division reported on Twitter that there haven't been any reports of damage from Monday's quakes. In turn, local TV reports from the region featured people saying they "are worried if a big one hits." For instance, Christine Cosgrove, who lives in Berkeley, told the San Francisco Chronicle that "a big chunk of our chimney fell down. For us, this was the strongest earthquake we've felt in 22 years in the house. Other items fell off window sills and broke." Government asks locals to report quakes At the same time, the U.S. Geological Survey website at http://www.usgs.gov/ asks locals who may experience either an "aftershock" or earthquake to: "Report shaking and damage at your location. You can also view a map displaying accumulated data from your report and others." Also, scientists at the U.S. Geological Survey (USGS) said they are working to "assess both the potential capacities and the potential limitations of the various forms of carbon sequestration and to evaluate their geologic, hydrologic, and ecological consequences." In accordance with the Energy Independence and Security Act of 2007, the USGS has developed scientifically based methods for assessment of biologic and geologic carbon sequestration. Californians shaken awake According to the USGS website, Californians were awakened on March 5 by "back-to-back earthquakes" that did more than just rattle locals in northern California, triggering déjà vu of last March's Japan quake. Of the two earthquakes that hit the San Francisco Bay area March 5, the USGS noted that the stronger one was a "4.0 magnitude" and centered "one mile north of El Cerrito in the East Bay and 10 miles north-northwest of Oakland." The quake struck at a depth of 5.7 miles at 5:33 a.m. PT, stated a USGS report. FEMA on the ground out West In addition to the recent tornadoes in the Midwest, FEMA teams are busy here in Oregon briefing locals on disaster aid after the recent January storms, and new fears that "the big one" may again hit the West Coast. Thus, FEMA is handing out advice on flood insurance and distributing its FEMA flood insurance handbook to area locals who were flooded here along the coast after Japan's earthquake triggered massive Tsunami waves that slammed much of the West Coast early last March, leaving many regions flooded. In turn, FEMA's website http://www.fema.gov/ now features advice on floods: "Anywhere it rains, it can flood. A flood is a general and temporary condition where two or more acres of normally dry land or two or more properties are inundated by water or mudflow. Many conditions can result in a flood: hurricanes, overtopped levees, outdated or clogged drainage systems and rapid accumulation of rainfall."
Many conditions can result in a flood: hurricanes , overtopped levees, outdated or clogged drainage systems and rapid accumulation of rainfall.” Also, FEMA’s guidance states that: “Just because you haven't experienced a flood in the past, doesn't mean you won't in the future. Flood risk isn't just based on history; it's also based on a number of factors: rainfall, river-flow and tidal-surge data, topography, flood-control measures, and changes due to building and development. Japan quake anniversary nears and so do fears Locals here along the Oregon coast are finding themselves increasingly focused on earthquakes and Tsunamis after a 6.0 magnitude also hit this area Feb. 14. Add this recent quake to the two earthquakes that hit the San Francisco Bay area March 5, and locals say you have “very worried people who live along the coast.” It hit like a thief in the night, when local TV media reported how “the 6.0 quake off the Oregon coast follows a 5.6 magnitude quake off the northern California Coast on Monday, Feb. 13. Although the West Coast and Alaska Tsunami Warning Center reported that the earthquake that “jolted the ocean floor” about 150 miles off the Oregon coast; the earthquake “was not large enough to produce a Tsunami.” In turn, NBC TV in Portland stated that the “Tuesday evening,” Feb. 14, earthquake “was a magnitude 6.0; and that the National Earthquake Information Center Reports the quake struck at 7:31pm Pacific Standard Time. The epicenter was out at sea 152 miles to the northwest of Bandon,” or about 70 miles south of Florence where locals have shaken nerves after the quake. Here along the central Oregon coast, for example, one local named Betty said she fears “falling down” as she did when the Japan earthquake, and resulting Tsunami waves, slammed the West Coast last March. “I imagine all sorts of bad things in my dreams about the earthquake,” Betty explained. “I no longer take walks on the beaches due to this fear.” Earthquake Déjà vu hits coastal locals In turn, others in this retirement community also note feeling “unsteady, unbalanced, as if their nerves are misfiring,” due to everything from strange metal boxes appearing along the beaches to the forthcoming one year anniversary of the earthquake that hit Japan last March. Still, many locals here in Florence like to frame a cheerful response about life on the coast in a time of real earthquake and new Tsunami fears. “We don’t like to whine. It’s in the nature of most Americans to whine just about anything these days. I was taught never to whine,” said 86-year-old Spencer whose memory of last year’s Tsunami – that forced him into a local shelter – still seem to ruffle through his mind like wind on water. Signs of the times for earthquakes Take a walk along the jetty near Florence, Oregon, and you notice massive black rocks that roar up from the water’s edge. This area, with a wondrously intricate lacework of bays and coves, also features fresh reminders of the Tsunami that smashed this coast last March after the Japan earthquake sent fierce Tsunami waves racing across the Pacific Ocean. Thus, today beach trekkers find all sorts of things from that Tsunami: planks of driftwood with Japanese writing on them, piece of broken boats and decks, large tree trunks and even what some locals call “strange metal boxes” that as of March 5, still can’t be explained after the boxes were discovered up and down the West Coast last week. 
“This place beckons you to look close and discover whatever clues there are to a possible new earthquake hitting closer to home, right off our coast. They’re now saying not if the earthquake will hit, but when,” explains Florence local Greg who likes to collect piece of driftwood for his various folk art projects. In turn, Greg says he moved to the coast – while “not thinking about the danger of earthquakes or Tsunami’s” – back in the early 1990s when he retired from a mill in nearby Eugene. “There’s a real ancient splendor of these untouched shores,” he says while pointing to yet another “metal box,” and asking “what the heck is that” during a recent Huliq interview. Blue water rippled gently toward the shore The National Earthquake Information Center in Golden, Colorado – at http://earthquake.usgs.gov/regional/neic/ -- stated in an overview of the recent earthquake update on these recent West Coast quakes, that they did not cause any major damage. However, that’s little comfort for locals who live on the very edge of the Oregon coast. “Sure, there’s real concern about ‘a big one’ hitting soon,” said local Laurence here in Florence whose brother lives down the coast in Bandon where the quakes “epicenter” hit last night. In turn, the mission of the National Earthquake Information Center (NEIC) is “to determine rapidly the location and size of all destructive earthquakes worldwide and to immediately disseminate this information to concerned national and international agencies, scientists, and the general public. The NEIC/WDS for Seismology compiles and maintains an extensive, global seismic database on earthquake parameters and their effects that serve as a solid foundation for basic and applied earth science research.” Locals get 15 minutes of warning, maybe? The NEIC explained, for example, that after coordinating with The National Earthquake Information Center in Golden, Colorado, it’s now viewed, the day after on Feb. 15, that the earthquake hit “with a preliminary magnitude of 6.0 off the Oregon coast caused no reported damage and only a smattering of reports from people who felt it as a weak jolt.” This “shallow quake was recorded at 7.31 pm PST Tuesday more than 150 miles west of southern Oregon. It did not generate a Tsunami,” stated the NEIC. However, if an earthquake did hit, emergency management experts only think the warnings "would only give locals 15 minutes, maybe." Quakes happening more often Still, the NBC TV station in nearby Portland reported that residents up and down the coast reported “feeling the quake,” including this reporter and others in Florence that is about 70 miles north of the epicenter. Almost 11 months have passed since the Japan earthquake and Tsunami smashed the West coast; while locals here along Oregon’s central coast are preparing while keeping their sense of humor with “run like hell” posted on local Tsunami warning signs. A year after Japan’s massive 8.9 earthquake and Tsunami, the Pacific region’s “Ring of Fire” is a clear and present danger, and reminder, for the West Coast to be on alert for the same kind of earthquake happening here. In fact, say geologists who are monitoring the region’s “shifting crust,” the “Big One” is way past due, with 2012 being a time of preparedness. Earthquakes still threaten West Coast Moreover, one local here along the central Oregon coast pointed to recently posted Tsunami warning signs that have taken on the light-hearted approach to “run like hell” when the next one hits. 
“We know it’s coming, but when?” said the owner of a beach-front home during a recent Huliq interview. "The Northwest coast of the U.S., that's where the big problem is, if you ask me," says Pedro Silva, a professor of civil and environmental engineering at George Washington University. "The potential is there for a mega-earthquake of the magnitude we saw in Japan. You would be unlikely to see many buildings withstand it." Oregon preparing for the “Big One” With Tsunami preparations on the mind of just about everybody on Oregon’s West Coast these days, it’s no surprise that Senator Ron Wyden, the senior U.S. Senator for Oregon discussed the need for earthquake and Tsunami preparations by locals. Senator Wyden hosted a recent town hall meeting here at the coastal town of Florence and, in turn, told locals to be prepared for earthquakes and Tsunamis. In turn, local officials said Senator Wyden heard local concerns at the Florence Fire House that serves as the area’s hub for Tsunami disaster alerts and preparedness training that’s been in full force since last year’s Japan earthquake. Also, Oregon’s disaster preparedness officials have organized large-scale earthquake and Tsunami drills; while Jan. 26 marked the 312th anniversary of a Pacific Northwest earthquake. At the same time, officials recently told National Public Radio (NPR) that this anniversary of the mighty quake that hit the Pacific Northwest “was roughly the size of Japan’s 8.9 earthquake that hit last March. “ Stay prepared, ready to run For now, communities and government agencies are still responding to the damages caused along parts of the West Coast last winter when a massive Tsunami raced at high speed across the Pacific Ocean from Japan. Thus, it’s no surprise that people down in the San Francisco Bay area joked about their recent “Monday Morning Quake” March 5 by telling local media that “they need to be prepared, and ready to run.” Still, it’s no joking matter that a quake the size of the one that hit Japan last March could result in a large loss of life when it finally hits. Tsunami a wake-up call for the West Coast One local Oregon coast resident also told Huliq, in a recent interview, that “the memories of that terrible Tsunami are crowding back,” like a hidden current that’s painfully in the back of many beach dwellers’ minds these days. In turn, any talk of another Tsunami hitting the West Coast is like the mighty Pacific Ocean wrapping around the rocks out at sea here along the Oregon coast where local Tsunami sirens break the peace and quiet regularly with a sort of cry and into a scream “that reminds you of last year’s Tsunami all over again,” quipped one local who said the “sirens stretch around them” like hearing one’s home fire alarm going off. “When we hear that sudden danger-whistle, it just makes you jump to your feet and you want to get moving away from the ocean,” explained Oregon coastal resident Peggy Ergin. “The Tsunami warning is like a roar from absolute silence. It’s the sound of danger, and boy do we get moving when it sounds.” Oregon still recovering from Tsunami If you travel just north of the California-Oregon border to the peaceful seaport town of Brookings you will still see remnants of last year’s Tsunami from the Japan earthquake. For instance, one city official explained how up to a dozen boats sunk when Tsunami waves smashed into the West Coast last year, and “you know what, we’re still finding damage from that disaster,” he said. 
Moreover, Rick Tine of nearby Coos Bay made it into the national newspapers the day after the March 11 Japan earthquake when he showed how the Tsunami damaged his 44-foot sloop, Sponte, which was docked in the Port of Brookings at the time. In turn, Tine explained to the media that after huge wave surges from the Tsunami, his boat broke loose from its slip and was smashed to bits by the powerful Tsunami waves. Tine said he was sailing from San Francisco to Coos Bay when he decided to take refuge from the rolling Tsunami waves, but had no idea of Mother Nature's power when it forms into a Tsunami. Also, the Wall Street Journal reported this week that the West Coast is rightly preparing for the next earthquake and Tsunami because "both Japan and the Pacific Northwest lie along subduction zones – areas where tectonic plates push against each other. When the growing pressure finally gives way, earthquakes and related Tsunamis result." Cascadia fault is to blame for fears on West Coast At the same time, csmonitor.com noted in reports after last year's Japan earthquake that "the Cascadia subduction zone – also called the 'Cascadia fault' – is where the Juan de Fuca and North America plates meet – sometimes in violent confrontation. It's part of the 'ring of fire' – volcanoes and earthquakes surging in the Pacific Ocean. The Cascade Mountain Range – which includes Mount St. Helens – is volcanically active." Also, csmonitor.com reported that "the last time a 'big one' of the type Japan saw Friday occurred along the northwest coast was on Jan 26, 1700. Judging by the number of times that's happened over the past 10,000 years, some scientists think another one is due this century." Moreover, local TV stations in the Pacific Northwest and all along the West Coast have stepped up warnings and advisories for "residents to take precautions and have an escape plan" for when, and not if, the next earthquake and Tsunami hits the coast. "It has happened in the past. It will happen in the future. It would be devastating," explained Professor Scott Burns during a KPTV interview in nearby Portland, where this geology professor teaches at Portland State University. In turn, Professor Burns explained in a Wall Street Journal report "how the Pacific Northwest is dotted with tsunami warning devices that could give people the critical few minutes needed to reach higher ground. But in many areas, building codes and construction have not advanced to the extent they have in Japan." Moreover, Professor Robert Butler – a geophysicist at the University of Portland – told The Columbian newspaper in Vancouver, Wash., that "we're getting better about people taking this stuff seriously. I credit the 2004 Tsunami in the Indian Ocean" and last year's Japan earthquake for what the professor calls "a step-up in awareness." Warnings continue, fear rises NPR also reported recently that Japan's earthquake and Tsunami from this time last year are "alerting the West Coast that the same kind of thing could happen here. Experts who study the Earth's shifting crust say the 'Big One' may be past due." In turn, NPR noted how "Japan lies on the 'Ring of Fire,' an arc of earthquake and volcanic zones stretching around the Pacific where about 90 percent of the world's quakes occur, including the one that triggered the Dec. 26, 2004, Indian Ocean Tsunami that killed an estimated 230,000 people in 12 nations."
A magnitude-8.8 temblor that shook central Chile last February also generated a tsunami and killed 524 people. Quake alerts now a way of life Now, nearly a year later, there are monthly "Tsunami alerts" issued for the low-lying areas along the West Coast where massive Tsunami waves swamped Oregon's coastal beaches and severely damaged harbors in both Oregon and California; several people were reported missing along these coasts after they were swept out to sea. In turn, Japan continues to show TV images from Sendai of highways buckling and older wooden structures flattened by the force of the shaking. As the Tsunami wave swept ashore, Sendai airport was instantly inundated. The wave washed through a fish market near the shoreline, picking up an entire parking lot full of cars and sweeping them into the sea. "That can lead to enormous death tolls. Earthquakes aren't getting bigger or more frequent," says Raymond Pestrong, a San Francisco State University geologist, "but they are occurring in more crowded places," while Pestrong's NPR interview also revealed how "cities such as Tehran, Iran; Istanbul; Caracas, Venezuela; and Manila, the Philippines are vulnerable to quakes that could leave hundreds of thousands dead, if their regions and structures don't become better prepared. One assessment done last year for the Filipino government found that a quarter of the structures in urban areas could crumble in the event of an earthquake." Quakes feared now out West "The reason that we hear so much more about natural disasters today is that people are flocking into large cities more," Pestrong says. "Lots of the large cities are in very vulnerable areas. When an event happens, it affects more people." At the same time, the recent earthquakes to hit the West Coast were reported by The National Earthquake Information Center to be 150 miles out to sea, while reminding locals that the infamous Cascadia subduction zone is also "out to sea off the West Coast." Also, San Francisco's magnitude 4.0 earthquake early March 5 continues to leave locals "unsettled" after the quake sent aftershocks of fear up and down the entire West Coast, say officials. Image source of a FEMA guidebook for flooding that occurred during recent Pacific Northwest storms last January that resulted in the president declaring parts of Oregon a disaster area with locals able to file for help from FEMA. Photo by Dave Masko
<urn:uuid:c3978d08-236b-41a9-aeb1-7af96a6a3b3e>
CC-MAIN-2014-42
http://www.huliq.com/10282/san-francisco-earthquake-shakes-west-coast-residents-fearing-new-disasters
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648891.34/warc/CC-MAIN-20141024030048-00299-ip-10-16-133-185.ec2.internal.warc.gz
en
0.949338
4,556
3.109375
3
'Has any artist ever conceived and executed such a daring and successful realisation of the ineffable moment when God created Man?' The Creation of Adam, 1512, by Michelangelo Buonarroti (1475–1564), approximately 9ft 2in by 18ft 8in, Sistine Chapel, Rome, Italy. James Fox on The Creation of Adam Has any artist ever conceived and executed such a daring and successful realisation of the ineffable moment when God created Man (Genesis 1:26-27)? As a believer, looking into Michelangelo’s painting, I see Man reclining, lethargic, unselfconscious, beautiful. Dominating his bequeathed domain in submissive immobility and with a touch of self-assurance. ‘God flies through space and time with creative energy and movement, to touch the offered hand with His own life, to awaken him to the divine purpose of relationship. In this creative moment of Western rebirth, Michelangelo shows what we are to receive from the divine spark, awakening us to the offering of being with and under God in co-creation.’ James Fox is an actor. John McEwen on Michelangelo’s Sistine Chapel ceiling Vasari gave Michelangelo, his teacher and friend, divine status: ‘The benign ruler of heaven… decided to send into the world an artist… whose work alone would teach us how to attain perfection.’ Michelangelo’s mother died when he was six and his father, a government official, briefly entrusted him to a stonecutter and his wife in Settignano. ‘If there is any good in me,’ he told Vasari, it stemmed from this happy time, when he also learned the rudiments of sculpting. At 14, he was apprenticed to the Florentine painter Ghirlandaio and the sculptor Bertoldo di Giovanni. His brilliance meant he completed his education at the Court of Lorenzo de Medici. He was in Rome from 1496 to 1501 and, in 1508, Pope Julius II commissioned him to paint the ceiling of the Sistine Chapel, the Pope’s private chapel, its dimensions based on the temple of Solomon. it was built by Julius’s uncle, Pope Sixtus IV, hence its name. The ceiling was painted a star-studded blue, but Julius wanted Michelangelo to replace it with pictures of the 12 Apostles. Michelangelo told him it ‘would turn out a poor affair’. He wrote to a friend: ‘He asked me why. I said, “Because they themselves were poor.” Then he gave me a new commission to do what I liked.’ The ceiling is divided into nine stories from the Book of Genesis. Michelangelo painted it with merely marginal assistance. He left God creating Adam until last. ‘so God created man in his own image,’ says the Bible, but, before Michelangelo, God had never been shown as a man, only as an aura or celestial hand. This article was originally published in Country Life August 5, 2015. The Duke of Wellington chooses his favourite painting for Country Life. Hughie O’Donoghue chooses his favourite painting for Country Life.
<urn:uuid:a4ea0cc7-0b0f-428c-8b16-5aee234f0504>
CC-MAIN-2021-43
https://www.countrylife.co.uk/luxury/art-and-antiques/my-favourite-painting-james-fox-74534
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00475.warc.gz
en
0.968453
692
2.5625
3
North Korea is a country in Asia. Here are some interesting facts about North Korea. Interesting Facts About North Korea - The official name of the country is the Democratic People’s Republic of Korea (DPRK). - In North Korea you drive on the right hand side of the road. - North and South Korea fought a war (the Korean War) between 1950 and 1953. They are still officially at war with each other as a peace treaty has never been signed. For more information see the post ‘when did the Korean War start and end‘. - The North Korean government has decided to proceed with a uranium enrichment program. - North Korea has the fifth largest military force in the world. 1.21 million people (around 20% of all North Korean men aged 17-54) are in the military. - North Korea has an active nuclear weapons program. It is believed that North Korea has 6-8 nuclear weapons. The United Nations have demanded that North Korea conduct no further testing on nuclear missiles. A North Korean newspaper stated that the United States had 1000 nuclear weapons in South Korea which were ready to be used against North Korea. These weapons, however, were removed in 1991 after an international treaty. - North Korea is the 10th largest producer of fruit. - Very few Western tourists visit North Korea. Most tourists come from China, Russia and Japan. - While the North Korean Constitution provides for freedom of speech and freedom of the press, the government do not allow either. Criticism of the government is not permitted in the media. - Christians are heavily persecuted in North Korea. Read more about it here. - North Korea has a literacy rate of over 99%. - 60% of North Korean children suffer from malnutrition as food is poorly distributed. The military gets most of the food produced in North Korea.
<urn:uuid:77eefcc2-06e4-4a44-9d25-f9f02c33f964>
CC-MAIN-2017-47
http://wanttoknowit.com/interesting-facts-about-north-korea/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00466.warc.gz
en
0.951848
380
2.890625
3
By Rebecca Steelman, Communications & Development Coordinator, GCNF
Around the world, there is an unprecedented understanding and adoption of school meal programs as an economic investment strategy for country governments. Home-grown school feeding programs not only serve social protection and development goals by feeding hungry children and fostering education, but they also contribute intergenerational benefits to families and can create jobs, empower women, and connect smallholder farmers with a consistent market for their goods. Members of the global school meal network anecdotally know that more countries are moving toward national ownership (progressing from international aid), local sourcing, and a stronger focus on nutrition. However, the field currently suffers from a lack of comprehensive, comparable, data-driven assessments of what is happening in a given country, across countries and continents, and across the globe. This leaves key pieces of impact assessment of school meal programs unaddressed. For example, current assessments do not cover the impact on agricultural development, private-sector engagement, food basket availability, diet diversity, and the inclusion of nutrition standards. Most assessments are done by program implementers, and few capture data regarding the efforts of other school meal implementers, even those implementing programs in the same country. In addition, there is little or no consistency in whether or how information is gathered and reported: One country might have data for some or all of those categories, but the next country will not, or its data won’t be comparable due to the timing of the survey. These issues are critical in the context of seeking sustainable and systemic progress via school meal strategies. In an effort to strengthen the work of the global school meal network, GCNF is designing a Global Survey of School Meal Programs to be piloted in 2018, launched in 2019, and conducted thereafter at intervals of every two or three years for at least ten years. The survey will ask a standard set of questions of 150 countries around the world, with the goal of identifying trends, gaps, and opportunities to guide governments’, GCNF’s, and other stakeholders’ decisions and investments related to school meal programs. Done well, surveys don’t just glean and report information. The process of asking questions can in and of itself help move the mission forward. New thinking can be instigated just by asking government leaders whether they are buying from their own farmers, using the program to create jobs for women or youth, or if they have national nutrition standards. GCNF will design its survey by building on the work of key partners and will make the data available in an open-source format so as to contribute to the work of our partners.
Specifically, the open-source data is expected to help: - Track progress, challenges, and opportunities facing the field over the next ten years, - Foster relationships between countries facing similar challenges and opportunities, - Inform choices by and cooperation among key global players such as the World Food Programme, the Partnership for Child Development, and the World Bank, as well as international non-profits and corporations, - Assess and communicate supply chain gaps on a per country basis to potential partners, and - Promote the concepts of national ownership, and local sourcing GCNF plans to kick-off the first round of survey at the 2018 Global Child Nutrition Forum tentatively scheduled for mid-October 2018. For more information about this program or the 2018 Forum, please contact Rebecca Steelman at email@example.com. The Global Child Nutrition Foundation (GCNF) is a global network of governments, businesses, and civil society organizations working together to support national, locally-sourced, and nutritious school meal programs. GCNF expands opportunities for the world’s children to receive adequate nutrition for learning and achieving their potential. We envision a future where school meals sustainably nourish all children and help them, their families, communities, and nations to thrive.
<urn:uuid:be03b1a6-b51b-4f8e-93e4-31680f8eb126>
CC-MAIN-2020-05
http://internationalschoolmealsday.com/2018-blog-tracking-international-school-meal-network-progress-gcnf-launch-open-source-global-survey-school-meal-programs/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00201.warc.gz
en
0.933766
790
2.796875
3
“We are going to completely change what it means to do advanced analytics with our data solutions. We have machine learning stuff that is about really bringing advanced analytics and statistical machine learning into data-science departments everywhere.” - Satya Nadella, CEO, Microsoft.
Read on to find out what R programming is.
So, for every company, bringing about change, and with it success, depends on data and analytics. And what if you take a Data Science with R course online? It can pave the way for a rewarding career in data science. We are living in the digital era, where we produce petabytes of data in just one hour through our activities over the Internet. As we know, Data Science is a blend of different tools, algorithms, and machine learning principles with the objective of finding hidden insights that are useful in making business decisions. R is one such tool that is widely adopted by Data Scientists to perform the analysis. Apart from Data Scientists, the R programming language is extensively used by Data Miners, Statisticians, and Software programmers. Different sectors such as healthcare, finance, academics, consulting, media, retail, and more, find R to be a versatile language that helps in performing data analysis and identifying meaningful information.
What is R programming language?
Developed in 1993 by Robert Gentleman and Ross Ihaka at the University of Auckland, New Zealand, R is a programming language and an analytics tool. It is used extensively as an analytics tool and is considered one of the most popular tools in Data Analytics and Business Analytics. It finds various applications in different sectors. The demand for trained and certified professionals in R is increasing with its huge applicability in Statistics, Data Visualization, and Machine Learning. R is defined by the R Foundation as “a language and environment for statistical computing and graphics”. But it is a lot more than that. Let’s see what R actually is.
- R is data analysis software that is widely used by Data Scientists, statisticians, and analysts. That means R can be used for statistical analysis, predictive modeling, and data visualization.
- R is an object-oriented programming language that provides objects, operators, and functions allowing you to explore, model, and visualize the data.
- R is a free, open-source software project with a high level of quality and numerical accuracy. Its open interface allows you to integrate with different applications and systems seamlessly.
- R provides an environment for statistical analysis and makes standard statistical methods easier to implement. This is the reason that much of the cutting-edge research done in predictive modeling and statistics is done in R.
- The R project leadership now has more than 20 leading computer scientists and statisticians from around the world, making it a community with thousands of contributors who have designed thousands of packages. Today, there are more than 2 million users in the vibrant online community of R.
Features of R
- Issued under the GNU General Public License, R is a free and open-source programming language.
- In place of a compiler, R uses an interpreter, which makes code development easier.
- It is a flexible language that is capable of bridging the gap between data analysis and software development.
- One of the most important features of R is its cross-platform interoperability. This means that it has distributions running on Windows, Linux, and Mac.
Therefore, R code can easily be ported from one platform to another.
- R can seamlessly connect to different databases and performs well when it comes to bringing in information from Microsoft Excel as well as SQLite, Microsoft Access, MySQL, Oracle, etc.
- There is a wide variety of packages provided by R, with a broad range of code, functions, and features tailored for statistical modeling, data analysis, machine learning, data visualization, and importing and manipulating data.
- With the help of its packages, R can integrate with different powerful tools to communicate reports in formats such as XML, CSV, HTML, and PDF, and also via interactive websites.
- R allows you to write your own libraries and packages and serve them as add-ons. Therefore, R allows changes and updates to its tools, making it a developer-friendly language.
- If you have experience in statistics, R is well suited for you, as a knowledge of statistics makes R much easier to learn.
Why choose R for Data Science?
The need to analyze data and construct insights from it has made Data Science one of the most popular fields. Industries now need to transform raw or unstructured data into furnished data products. Here, R comes into action, as it provides developers with an intensive environment to analyze, process, transform, and visualize data. R contains a plethora of packages that are applicable in almost all fields such as biology, management, astronomy, and more. When you need to perform complex statistical modeling, R is the most preferred language, as it provides extensive support for operations on matrices, vectors, and arrays. In addition, R is famous for graphical libraries that let you design attractive graphs easily and convert them to user-readable formats. R Shiny allows you to develop your own web applications, incorporate visualizations into web pages, and offer excellent interactivity to users. Moreover, data extraction is considered one of the most important parts of Data Science, and you can interface your R code with database management systems. There are various packages in R that support image processing. Also, there are various options for advanced data analytics, such as developing machine learning algorithms and predictive modeling. (A minimal example of this kind of workflow is sketched at the end of this article.)
The details above cover what R programming is. Data Science has been ruling the job market for the last decade. R finds a variety of applications in Data Science. If you wish to make a career in Data Science, learning R and getting certified in the same can be very beneficial. Taking up an online training course for getting certified is the wisest step you can take. This is because online training provides you with flexible learning hours and the mode of learning of your choice. There are doubt-clearing sessions conducted by industry experts to ensure that you are well prepared to take the certification exam. Get yourself registered now!!
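To make the workflow described above concrete, here is a minimal sketch of the explore, model, and visualize loop in base R. The file name sales.csv and the columns ad_spend and revenue are hypothetical placeholders for illustration only; they are not part of the original article.

```r
# Import a CSV file into a data frame
sales <- read.csv("sales.csv")

# Quick descriptive statistics for every column
summary(sales)

# Fit a simple linear regression: revenue explained by advertising spend
model <- lm(revenue ~ ad_spend, data = sales)
summary(model)  # coefficients, R-squared, p-values

# Visualize the data and overlay the fitted regression line
plot(sales$ad_spend, sales$revenue,
     xlab = "Advertising spend", ylab = "Revenue",
     main = "Revenue vs advertising spend")
abline(model, col = "red")
```

These few lines cover data import, summary statistics, statistical modeling, and plotting, which is the combination of tasks the article attributes to R; specialized packages (for example, for machine learning or interactive graphics) extend each of these steps.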
<urn:uuid:a34e5453-b070-4c7e-bc34-c911bb11cc53>
CC-MAIN-2021-17
https://www.benchmarkmonitor.com/2020/11/18/what-is-r-programming/
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00543.warc.gz
en
0.939694
1,292
3.296875
3
Stroke is the second leading cause of mortality and morbidity worldwide. Early intervention is of great importance in reducing disease burden. Since the conventional risk factors cannot fully account for the pathogenesis of stroke, it is extremely important to detect useful biomarkers of the vascular disorder for appropriate intervention. Arterial stiffness, a newly recognised reliable feature of arterial structure and function, is demonstrated to be associated with stroke onset and serve as an independent predictor of stroke incidence and poststroke functional outcomes. In this review article, different measurements of arterial stiffness, especially pressure wave velocity, were discussed. We explained the association between arterial stiffness and stroke occurrence by discussing the secondary haemodynamic changes. We reviewed clinical data that support the prediction role of arterial stiffness on stroke. Despite the lack of long-term randomised double-blind controlled therapeutic trials, it is high potential to reduce stroke prevalence through a significant reduction of arterial stiffness (which is called de-stiffening therapy). Pharmacological interventions or lifestyle modification that can influence blood pressure, arterial function or structure in either the short or long term are promising de-stiffening therapies. Here, we summarised different de-stiffening strategies including antihypertension drugs, antihyperlipidaemic agents, chemicals that target arterial remodelling and exercise training. Large and well-designed clinical trials on de-stiffening strategy are needed to testify the prevention effect for stroke. Novel techniques such as modern microscopic imaging and reliable animal models would facilitate the mechanistic analyses in pathophysiology, pharmacology and therapeutics. Stroke is the second leading cause of death and causes excessive loss of disability-adjusted life-years every year worldwide.1 Despite the reduction in stroke mortality, the absolute number of people with stroke, stroke survivors, related death as well as global burden had increased greatly in the past two decades.2 In this regard, the prevention of stroke by early intervention is of great importance. Since the conventional risk factors cannot fully account for the pathogenesis, it is essential to detect unknown stroke risk factors especially biomarker of artery injury for an appropriate intervention. Arterial stiffness, also known as the loss of arterial elasticity, represents the mechanical property of artery resistant to deformation.3 Compliance and distensibility, although related to arterial stiffness, are not interchangeable with arterial stiffness because they depend on the stiffness of arteries, as well as on the size and thickness of arteries.4 Arterial stiffness has been regarded as a reliable marker of arterial structural and functional alteration after abundant experimental and clinical studies.3 ,5 Furthermore, a growing number of studies have demonstrated the association between arterial stiffness and stroke attack.6–10 The goal of this review is to address arterial stiffness with the following aspects: the measurements, the secondary haemodynamic consequences and the predictive role, the possible pathophysiological mechanism and de-stiffening therapy for stroke prevention. Measurements of artery stiffness in clinical investigation There are various parameters to present systemic and regional arterial stiffness by different invasive or non-invasive methods. 
Here, we mainly discuss three major measurements of arterial stiffness that are generally applied in clinical researches. Assessment of pressure wave velocity Pressure wave velocity (PWV) represents the speed of the pressure pulse travelling along the arterial region and could be obtained by automated devices, ultrasound and MRI.11 On the basis of a generally accepted propagative model, the fundamental principle of mechanism is that pressure wave travels faster in stiffer artery.12 Thus, PWV, which is used to directly measure the regional stiffness, is generally accepted as the simplest, most robust, reproducible and non-invasive method of detecting arterial stiffness.13 Aortic PWV is the most interesting parameter since the aorta makes the largest contribution to the buffering function and is responsible for most of the pathophysiological effects of arterial stiffness.12 Therefore, the measurement of aortic PWV (mainly carotid-femoral PWV) was used in numerous clinical studies and has emerged as golden measuring criteria of arterial stiffness in adults.13 The 2013 European guidelines for the management of hypertension and cardiovascular disease prevention in clinical practice even recommended that aortic PWV be used to assess target organ damage.14 The limitations of PWV measurement should be mentioned here. It remains difficult to accurately record the femoral pressure wave in participants with peripheral artery disease, and obesity effects the absolute value of PWV by overestimating the distance.13 Cardio-ankle vascular index (CAVI), one of the PWV measurement modifications and derived from arterial stiffness index β, was introduced by Japanese experts to obtain arterial stiffness not affected by blood pressure (BP) at a measuring time; thus permits for the first time to analyse the effect of antihypertension drugs on arterial property.15 CAVI exhibited reproducibility among various vascular diseases.10 ,16 However, the limitations of CAVI should also be concerned. CAVI cannot be measured accurately in patients with aortic stenosis, peripheral arterial disease or atrial fibrillation.17 CAVI usually evaluates the vascular condition of large arteries.18 In addition, the mixture of functional and anatomical concept also limits its clinical application. Analysis of the arterial pressure waveforms Pulse wave can be analysed through three major parameters: augmentation index (AIx), pulse pressure (PP) and systolic BP. Wave reflection occurs at sites of impedance mismatch and is quantified by the AIx, calculated as the difference between the second and the first systolic peaks expressed as a percentage of the PP (AIx=ΔP/PP). AIx is usually obtained from carotid, ascending aortic or radial artery waveforms recorded by applanation tonometry. AIx may be incorrectly assessed due to the technical difficulty in identifying the return time of the reflected pressure wave and the fiducial point.19 In addition to the amplitude of the reflection wave, AIx is also determined by the distance to the reflected site, PWV as well as cardiac cycle. Therefore, it is an indirect measurement of arterial stiffness and should be analysed in combination with PWV. From a clinical point of view, AIx is often used to evaluate the effect of de-stiffening drugs on wave reflection.13 As an innovative method, AIx has not yet been validated in large prospective clinical trials. 
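For reference, the pressure-wave indices described above and the diameter-based indices discussed in the next subsection can be written compactly as follows. These are the commonly used textbook forms, not the exact operational formulas of any particular device or study, and the symbols (Δx, Δt, P1, P2, V, ds, dd) are introduced here only for notation; they do not appear in the original text.

```latex
% Pulse/pressure wave velocity: path length between the two recording sites
% divided by the transit time of the pressure wave
\[ \mathrm{PWV} = \frac{\Delta x}{\Delta t} \]

% Augmentation index: the late systolic boost (second minus first systolic peak)
% expressed as a percentage of pulse pressure
\[ \mathrm{AIx} = \frac{P_2 - P_1}{\mathrm{PP}} \times 100\%, \qquad \mathrm{PP} = P_s - P_d \]

% Local, diameter-based indices: compliance C, distensibility D and the
% stiffness index beta, with d_s and d_d the systolic and diastolic diameters
\[ C = \frac{\Delta V}{\Delta P}, \qquad D = \frac{\Delta V}{V\,\Delta P}, \qquad \beta = \frac{\ln(P_s/P_d)}{(d_s - d_d)/d_d} \]
```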
Central PP and systolic BP are the crude indexes of large artery stiffness, and present as more powerful indicators of cardiovascular events than peripheral ones.20 While there is no non-invasive technique available to directly measure central PP, the most widely used approach is to measure the brachial PP by a sphygmomanometer and then apply the transfer function. However, the premise of using the transfer function is that the characteristics of the vascular system between the two measuring sites are the same in all individuals and under all conditions. Apparently, this is not true since vascular dimension depends on body size, and vascular properties vary with arterial pressure, with age and with treatment.
Measurement of arterial diameter change with respect to the distending pressure
Unlike PWV, which is the measurement of regional arterial stiffness in a certain segment, the measurement of changes in arterial diameter and volume can help evaluate the elasticity of the local artery. An echo-tracking system or MRI is used to acquire local stiffness indices such as compliance, distensibility, Young's elastic modulus and incremental elastic modulus. Compliance is defined as the change in arterial volume relative to the change in pressure, while distensibility is analogous to compliance but after normalisation for arterial size. The curvilinear relationship between pressure and diameter is approximated with a logarithmic transformation, resulting in the β index reflecting the stiffness. However, the pressure–diameter or pressure–volume relations depend both on the stiffness of the vessel wall and on the vessel geometry, including artery size and wall thickness.21 Besides, this method requires a high degree of technical expertise and takes longer than measuring PWV. It is applied more in mechanistic analyses than in epidemiological studies.13
Mechanism of arterial stiffness
Vascular structure, vascular function and BP are the three major components that are involved in arterial stiffness. Factors such as inflammation, oxidative stress, the renin-angiotensin-aldosterone system (RAAS) and genetic factors that influence vascular function in the short term or vascular structure in the long term can induce arterial stiffness.22 ,23 Vascular structure is a major determinant of arterial stiffness. When stiffened vessels were examined microscopically, changes in the properties or distribution of wall components could be observed, including increased collagen and matrix metalloproteinases (MMPs), fragmented and diminished elastin, an abnormal and disorganised endothelium, and infiltration of smooth muscle cells, macrophages and mononuclear cells.3
Extracellular matrix remodelling: The scaffolding proteins collagen and elastin are closely linked to the strength and elasticity of the vessel. Normally, MMPs play a vital role in regulating the synthesis and degradation of these two proteins. Inflammation, haemodynamic or genetic factors could break the synthesis/degradation balance and raise the ratio of collagen to elastin, thus increasing arterial stiffness.24 Cross-links between collagen and elastin in the arterial wall are significant in providing elasticity and strength to the arteries. Owing to the glycation of proteins, especially collagen, advanced glycation end products (AGEs) are formed, which create excessive cross-links between collagen fibres and consequently induce arterial stiffness.25 AGEs could also influence the stiffness of the artery wall through receptor-mediated endothelial dysfunction and inflammation.
From a prospective study conducted in patients with early rheumatoid arthritis, arterial stiffness was improved greatly after 12-month treatment of anti-inflammation.26 Smooth muscle cell hypertrophy: Alteration in mechanical aortic wall properties accompany with reduction in compliance and distensibility preceded the development of hypertension in spontaneously hypertensive rats.27 While there was no significant difference in collagen content between young spontaneously hypertensive rats and normotensive rats, the media thickness and cross-sectional area were significantly larger in hypertensive rats. Vascular smooth muscle cell hypertrophy, which is mainly responsible for media thickness, participated in the development of arterial stiffness. Endothelial dysfunction: Endothelial dysfunction embodies its indispensable role in vascular disease by releasing vasoactive substances. Nitric oxide (NO) has a vasodilatory effect and exhibits antiatherogenic property through inhibition of vascular smooth muscle cell proliferation.28 Blocking endothelium-derived NO synthesis resulted in higher arterial stiffness.29 Impaired endothelial function was independently and inversely related to PWV, AIx and central PP as shown in the large-scale study among healthy participants.5 In fact, there might exist a vicious circle between endothelial dysfunction and arterial stiffness, that is, endothelial dysfunction could aggravate structural stiffening and, in turn, worsen endothelial function.24 Smooth muscle tone: Smooth muscle tone modulates the artery elasticity. Vasodilators decrease the smooth muscle tone, cause a reduction of wave reflection and raise the distensibility.30 Vasoconstrictors such as angiotensin II, on the other hand, lead to loss of elasticity in the vessel wall.31 The endothelial dysfunction interacts with impaired muscle tone via these released vasoactive substances in the progression of elasticity alteration. Arterial stiffness depends on cyclic strain of the arterial wall, mainly the cyclic change of BP. At a low BP level, the elastin controls the composite behaviour and the vessel wall is relatively extensible, while at a high BP level, the collagen with stiffer property is increasingly important and then the vessel wall becomes inextensible.21 ,32 Therefore, arterial stiffness increases at a higher BP even without structural change. Haemodynamic pathogenesis secondary to arterial stiffness Among the various models that applied to the circulatory system for a better understanding of haemodynamics, the propagative model based on a viscoelastic tube hypothesis is the most acceptable one.12 In this model, the elastic properties of the tube allow the generation of a forward pressure wave, which travels along the tubes. On the other hand, the numerous branch points and high resistance of the tubal end favour the wave reflection and generate retrograde waves. In healthy participants, reflected waves arrive at the central aorta during the diastole phase, contributing to the secondary fluctuation of the pressure waveform, which benefits the coronary perfusion. With increased arterial stiffness, the forward and reflected wave travel more rapidly along the arterial tree, leading to an earlier arrive in late phase of systole. Therefore, the reflected wave amplifies systolic BP and PP, increases the after load and may lead to ventricle hypertrophy in the long run. A raised PP damages small arteries in peripheral organs, thus in turn inducing arterial stiffness. 
Increased arterial stiffness could also promote excessive flow pulsatility into small vascular beds. Unlike most vascular beds, which are protected by intense vasoconstriction upstream, the brain is more susceptible to pressure and flow pulsatility.33 ,34 This haemodynamic stress, pulsatile pressure or BP variability can cause a ‘tsunami effect’ towards the cerebral parenchyma.35 This might help explain how aortic stiffness damages the microvasculature and causes dysfunction.36 ,37
Arterial stiffness: a predictor of stroke
An indirect clue for the influence of arterial stiffness on stroke comes from early cross-sectional studies. Patients with cardiovascular risk factors or vascular diseases such as coronary heart disease and end-stage renal disease had higher arterial stiffness than did the control group.38 ,39 The first study on arterial stiffness in patients with stroke evaluated vascular stiffness by calculating index β. Index β was significantly greater in patients with stroke than in the control group, indicating that aortic stiffness was independently associated with ischaemic stroke.40 Later, more and more large case–control studies confirmed that greater arterial stiffness was common in patients with stroke.41 ,42 Owing to the cross-sectional nature of these studies, it was impossible to conclude that vascular stiffness was predictive of stroke. Later longitudinal studies demonstrated that vascular stiffness was an independent predictor of cardiovascular and all-cause mortality in patients with hypertension, early-stage renal disease and in the elderly population.43–45 However, stroke was only discussed as one of the clinical end points until Laurent et al46 first investigated the association of vascular stiffness and fatal stroke occurrence in a cohort survey. After an average 7.9-year follow-up of middle-aged patients with essential hypertension, Laurent et al found that a 1-SD elevation (4 m/s) in PWV was associated with a 72% higher risk of fatal stroke. High PWV remained significantly predictive of stroke death after adjustment for classical cardiovascular risk factors. Other researchers assessed its predictive value in the elderly and general population.6 ,47 Data from two recent meta-analyses suggest that the assessment of either aortic or carotid stiffness could improve the prediction of stroke beyond other conventional risk factors.48 ,49 In addition, aortic stiffness could predict the prognosis of ischaemic stroke.7 ,50 Carotid-femoral PWV measured 1 week after stroke was significantly associated with the 90-day functional outcome assessed by the modified Rankin Scale.7 As to different subtypes of stroke, vascular stiffness seems to have different predictive value.8 ,9 ,51 Stroke is a heterogeneous disease due to its varied pathophysiology in each subtype. Patients with lacunar stroke tended to have a higher PWV compared with large artery atherosclerosis, cardioembolic and cryptogenic stroke.51 Increased arterial stiffness, with greater flow pulsatility into the cerebral small vessels, may contribute to the pathogenesis of lacunar stroke, thus resulting in the difference.
Another study demonstrated that aortic stiffness index β was higher in patients with cerebral infarction than in those with a transient ischaemic attack, implying that cerebral infarction is associated with a more advanced degree of atherosclerotic process than transient ischaemic attack.9 Larger studies that evaluate the relationship between vascular stiffness and each stroke subtype are imperative to help clarify the direct interaction in pathogenesis and provide specific insights into efficient stroke prevention. In recent years, high-resolution MRI has provided a unique tool to study the relationship between vascular stiffness and neuroimaging changes relevant to the recurrence or severity of stroke. Cerebral small vessel disease (SVD), which can increase the risk of stroke, is linked to arterial stiffness.35 ,36 ,52 ,53 A study of 1282 patients with acute ischaemic stroke or transient ischaemic attack showed that brachial-ankle PWV was significantly associated with both acute and chronic cerebral SVD markers, including acute lacunar infarct, chronic lacunar infarct, white matter hyperintensity and deep cerebral microbleeding.52 In the general elderly population of the Rotterdam scan study, higher PWV was also related to larger white matter lesion volume, but not to lacunar infarcts or microbleeding.53 Vascular stiffness and cerebral SVD could share a common pathophysiological mechanism involving vascular injury.
Mechanism of arterial stiffness during stroke
A variety of mechanisms could explain the association between arterial stiffness and stroke. Haemodynamic alterations secondary to arterial stiffness should be highlighted. Raised PP induces arterial remodelling, increases wall thickness, promotes the development of plaque and atherosclerosis, and eventually leads to rupture or ulceration of atherosclerotic plaques. Besides, increased aortic pulsatility may also be transmitted through stiffened large vessels to the cerebral microvasculature. As the central arteries stiffen, the capacity to regulate pulsatile flow is reduced, which leads to progressive impedance matching between the aorta and peripheral arteries. Such impedance matching causes a decrease in the reflection coefficient and thereby facilitates the penetration of excessive pulsatile energy into the periphery.54 To make matters worse, the vascular resistance of the brain is relatively low; therefore, the pulsatility of pressure and flow extends well into this organ. This special input impedance of the brain provides an interpretation for how arterial stiffness damages the cerebral microvasculature and causes impaired cognitive function.33 ,51 ,54 Furthermore, the measured higher aortic stiffness may reflect parallel structural changes in the intracerebral vasculature, including breakdown of elastic fibres, fibrosis, calcifications, medial smooth muscle necrosis and diffusion of macromolecules into the arterial wall.55 Finally, the classical vascular risk factors or vascular diseases such as hypertension, atherosclerosis, coronary heart disease and early-stage renal disease, which are associated with and even probably promoted by arterial stiffness, are also risk factors for ischaemic stroke.
The de-stiffening therapy and stroke prevention
There is enormous interest in whether a reduction in arterial stiffness can translate into real clinical benefits in stroke management. De-stiffening therapy emerges as a promising strategy to decrease stroke incidence or mortality and improve functional prognosis.
On the basis of numerous clinical trials, antihypertension drugs are successful at reducing BP and stiffness.56–58 However, it is difficult to separate the effect of an intervention on BP reduction alone from its direct effect on the properties of the vessel wall. It is of utmost importance to break the vicious circle between arterial stiffness and raised PP. Targeting structural factors in vascular signalling remains largely unexplored, yet progress should not be ignored. Exercise training is another effective non-pharmacological method of attenuating arterial stiffness.
Antihypertension drugs: The most important mechanism by which antihypertension drugs improve arterial stiffness is their efficacy in lowering BP.59 For the same BP reduction, antihypertension drugs which improve arterial stiffness to the greatest extent should be preferred. The BP-independent effect comes from alterations in arterial function, structure or a combination of both. Antihypertension drugs with vasodilating activity, such as ACE inhibitors (ACEI), angiotensin receptor blockers (ARB), calcium channel blockers (CCB) and some β-blockers (BB), have shown an advantage in ameliorating arterial stiffness.60 ,61 Most de-stiffening strategies prefer the administration of RAAS inhibitors combined with a CCB or a diuretic.59
ACEI/ARB: Among all classes of antihypertension drugs, the RAAS inhibitors are generally recognised to be superior to others in reducing arterial stiffness.56 ,62 ,63 The most probable explanation lies in the profibrotic action of the RAAS. Owing to the antifibrotic potency of RAAS inhibitors, remodelling of the extracellular matrix in the vascular wall is reversed, which finally translates into a change in the mechanical properties of the vessel.64 In addition, ACEI could modulate endothelial function via the release of bradykinin and NO.65 ,66 Numerous studies, including both long-term ones (such as the REASON trial and the ADVANCE trial) and short-term to medium-term ones, showed a reduction of arterial stiffness when ACEI or ARB was used.67–69 When comparing ARB to ACEI, valsartan and captopril, for example, reduced PWV as well as AIx to a similar extent.70 A combination of ACEI and ARB proved to achieve an even greater effect on PWV reduction in patients with chronic kidney disease.71 Clinical trials also confirmed the efficacy of RAAS inhibitors in improving patient survival and reducing cardiovascular events.72 ,73 The PROGRESS trial (n=6105) demonstrated that, after a 4-year follow-up, the ACEI-based regimen reduced the risk of recurrent stroke by 28% among participants with previous stroke or transient ischaemic attack.72
CCB: CCB also proved to lower PWV and AIx, but to a lesser extent than RAAS inhibitors did.57 ,74 ,75 The largest amount of evidence came from trials evaluating amlodipine.57 ,76 A combination of CCB and ARB had an advantage over the combination of diuretics and ARB, with fewer side effects and greater improvement in arterial stiffness.74 ,75
Diuretic: Although the combination of diuretics and ACEI/ARB has emerged as one of the most common regimens in treating hypertension, the role of diuretics in treating arterial stiffness has not been well explored.58 ,77 A 4-week study demonstrated that hydrochlorothiazide failed to decrease PWV and AIx in spite of a reduction in brachial BP, which implies no favourable effect of diuretics on arterial stiffness beyond BP reduction.76
BB: BB is less valuable in reducing arterial stiffness, probably because it reduces BP by lowering the cardiac output, which instead increases
peripheral resistance and the wave reflection. BBs are far from homogeneous, and their effect on arterial stiffness could be either favourable or unfavourable. A recent meta-analysis reported that BB increased AIx, whereas all other antihypertension drugs decreased AIx.78 Furthermore, another meta-analysis of 13 randomised controlled trials suggested that BB is inferior for stroke prevention, reporting that the relative risk of stroke was 16% higher with BB than with other kinds of antihypertension drugs.79 Novel BBs with vasodilating activity, such as nebivolol and carvedilol, display the ability to decrease arterial stiffness, but need to be further investigated with large-scale prospective trials.80 ,81
Antihyperlipidaemic agents: The data on statins are somewhat conflicting.82 ,83 A systematic review of nine trials with 471 participants did not support an effect of statins on reducing arterial stiffness, but this conclusion might relate to the methodological limitation that the included trials studied only aortic or peripheral elasticity at a time.82 A recent clinical study, which was designed to measure both aortic PWV and AIx at the end of a 26-week follow-up, supported the role of statins in ameliorating stiffness through anti-inflammatory and antiproliferative properties.83 The percentage reduction in PWV by fluvastatin was associated with that of serum C reactive protein, independent of the lipid-lowering effect.84
Other drugs: Other drugs that target vascular signalling in the development of arterial stiffness have achieved some progress. MMP inhibitors include endogenous tissue inhibitors and pharmacological inhibitors such as zinc chelators, doxycycline and marimastat, of which only doxycycline is approved by the Food and Drug Administration (FDA).85 In two-kidney, one-clip hypertensive rats, doxycycline (30 mg/kg per day, 4 weeks) successfully prevented surgery-induced increases in systolic BP and MMP-2 levels, reduction in endothelium-dependent vasodilation and vascular hypertrophy.86 However, in spontaneously hypertensive rats, though structural alteration was ameliorated after 6 months of doxycycline treatment, arterial pressure, PWV and left ventricular function were unaffected.87 The efficacy of doxycycline and other MMP inhibitors in reducing arterial stiffness needs to be studied with different measurements and in different animal models. AGE inhibitors (AGEIs), which can prevent AGE cross-linking, and AGE breakers, which can break existing AGE cross-links, were found to attenuate arterial stiffness by interfering with arterial remodelling in animal experiments.88 ,89 Alagebrium chloride (also known as ALT-711), a novel non-enzymatic breaker of AGE cross-links, reduced AIx in patients with isolated systolic hypertension, and this effect was related to improved endothelial function.90 However, the study enrolled only 13 patients and the therapy lasted for only 8 weeks.
Later, in a randomised trial, a 1-year administration of ALT-711 failed to affect arterial stiffness or endothelial function.91 Despite the promising effect in reversing ventricular stiffness, the efficacy of AGEIs and AGE breakers in alleviating arterial stiffness needs long-term and high qualified research.92 Arterial stiffness increasing with ageing was less pronounced in physically active men and women.93 ,94 Several studies have shown the efficacy of aerobic exercise in preventing age-related arterial stiffness in healthy individuals and reversing arterial stiffness in patients with vascular risk factors as well.95–97 Aerobic exercise could also induce improvement in cardiovascular haemodynamics including arterial stiffness after stroke.98 The mechanism by which aerobic exercise improves arterial stiffness remains little known and is considered relevant to the raised NO availability and lowered oxidative stress.99 Physical activity could also modify gene polymorphisms that determine stiffness.100 The intensity, duration and frequency of aerobic exercise required for attenuating arterial stiffness is unclear. It has been recently established that 8 weeks of intermittent moderate aerobic exercise reduced stiffness parameters significantly in young healthy volunteers.95 A recent systematic review concluded that the effect of aerobic exercise improving arterial stiffness was enhanced with higher intensity.101 The improvement in arterial stiffness following aerobic exercise is also influenced by participants’ features. When it came to the elderly with multiple cardiovascular risk factors, though there was a decrease in arterial stiffness after 3-month training, the effect was not sustained after 6 months.102 Resistance exercise has an inconsistent effect on arterial stiffness.103 ,104 A meta-analysis demonstrated that high-intensity resistance training seemed associated with an 11.6% increase in stiffness while moderate-intensity resistance training did not show such association.104 In conclusion, larger reproducible clinical trials are needed to set the appropriate training type and pattern for specific groups. In summary, arterial stiffness, which can reflect the characteristic of arterial structure and function, is a novel and reliable predictor of stroke and offers a promising strategy to intervene stroke. Large and well-designed clinical trials on de-stiffening strategy are needed to further testify the prevention effect for stroke. Besides, development of reliable animal models and novel invasive techniques are extremely important to reveal the role of vascular stiffness in the progression of cerebrovascular disease. Hopefully, a recently developed animal model that is based on carotid calcification claims to meet all the characteristics of arterial stiffness without any unspecific effects such as brain hypoperfusion.105 The newly advanced techniques such as synchrotron radiation angiography may provide a new tool to observe the secondary flow pulsatility into brain vascular bed and help understand the contribution of arterial stiffness to cerebral microvasculature damage.106 This might fill gaps in understanding the pathophysiology involved in how arterial stiffness contributes to ischaemic stroke and offers theory foundation for therapeutic intervention. Contributors YC and FS wrote the manuscript. JL and G-YY organised, revised and finished the manuscript. Funding This work was supported by NSFC project 81070939 (G-YY) and U1232205 (G-YY). Competing interests None declared. 
Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement No additional data are available. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
<urn:uuid:fedb2912-6aad-45e6-9878-b0abb805fae4>
CC-MAIN-2017-17
http://svn.bmj.com/content/early/2017/03/17/svn-2016-000045
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00102-ip-10-145-167-34.ec2.internal.warc.gz
en
0.92315
6,162
2.671875
3
Summer (June – August) 2020 ranks as the 11th warmest and 29th driest summer on record for the state of Ohio since 1895. Temperatures averaged 1-4°F above average (1981-2010), with 5-10 inches of rainfall across the northwestern half of the state and 10-15 inches across the southeastern half. Particularly dry this summer have been the northwestern counties, a few counties in central and southwest Ohio (e.g., Madison, Pickaway, Ross, Fayette, and Greene), as well as Richland, Ashland, Wayne, and Stark Counties. Though too late for most crops in the state, recent rainfall is helping to recharge soil moisture. A slow-moving boundary draped across the state on Labor Day brought significant rainfall to much of northern Ohio. Most locations along and north of about I-70 (except NW Ohio) received 2-7” of rain. There was also a confirmed EF0 tornado a few miles east of Delaware with estimated winds up to 80 mph, and a few reports of large hail across the state. As of Thursday September 10, 2020, the U.S. Drought Monitor indicates about 19% of Ohio is experiencing abnormally dry to moderate drought conditions, down from about 37% the prior week (Figure 1). For more information on recent climate conditions and impacts, check out the latest Hydro-Climate Assessment from the State Climate Office of Ohio.
Figure 2: Forecast precipitation for the next 7 days. Valid from 8 am Monday September 14, 2020 through 8 am Monday September 21, 2020. Figure from the Weather Prediction Center.
High pressure and pleasant conditions are on tap for much of the upcoming week. A cold front approaching the region on Thursday could draw up some moisture from what is left of Hurricane Sally; but if it does, it is only expected to impact counties near the Ohio River. High temperatures this week should top out in the 60s and 70s across the state, with overnight lows in the 40s and 50s. A reinforcing high-pressure system will keep conditions cool and dry over the weekend. The Weather Prediction Center is currently forecasting less than 0.25” of rain across the southern counties and dry conditions across the north (Figure 2).
Figure 3: Climate Prediction Center 8-14 Day Outlook valid for September 22 – 28, 2020 for left) temperatures and right) precipitation. Colors represent the probability of below, normal, or above normal conditions.
The latest NOAA/NWS/Climate Prediction Center outlook for the 8-14 day period (September 22 - 28) and the 16-Day Rainfall Outlook from the NOAA/NWS/Ohio River Forecast Center show elevated probabilities for near average temperatures and below average precipitation (Figure 3). Normal highs during the period are in the low to mid-70s, lows in the mid- to upper-50s, with about 0.75” of rainfall per week.
<urn:uuid:96cfd006-834c-409a-b0f9-82dcbd5b96a3>
CC-MAIN-2021-17
https://agcrops.osu.edu/newsletter/corn-newsletter/2020-31/labor-day-rainfall-eases-drought
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039604430.92/warc/CC-MAIN-20210422191215-20210422221215-00599.warc.gz
en
0.945204
596
2.515625
3
Nine Tips for a Safe & Healthy Potluck Author: Public Health Department 12/18/2017 12:57:32 PM Quick tips from County Public Health Department staff can help keep the focus on festive good times – not unwelcome ailments. It's that time of year: potluck season. As friends, families, and co-workers gather to share camaraderie and their favorite side dishes, a few quick tips from County Public Health Department staff can help keep the focus on festive good times – not unwelcome ailments. Remember: Don't invite food poisoning. "Preparing food for a potluck setting is different from putting dinner on the table at home," said Laurie Salo, supervising environmental health specialist. It's a good time to be extra cautious about food prep, and keep an eye on the clock so food doesn't sit out too long. 1. Remember the basics: clean, separate, cook, chill. Use hot, soapy water to wash your hands and the countertop before you cook. Keep raw meat, poultry and seafood separate from other ingredients and use separate cutting boards for each. Use a food thermometer to ensure food is cooked thoroughly. Refrigerate perishable food right away. 2. Eat first. "One of the easiest steps you can take is planning your event so food isn't sitting out too long before you eat," said Salo. If you're planning a potluck and meeting, eat first. If you're hosting a get-together with flexible hours, give guests a time range when food will be out ("dinner from 6-8 p.m.") and refrigerate the food after that. 3. Put away food after two hours. Bacteria that cause food poisoning can multiply rapidly at room temperature. After two hours, store leftovers in shallow containers in the refrigerator. Keep colds and flu away from the buffet. Even if you feel fine, it's important to take precautions to avoid spreading illness, said Christine Gaiger, communicable disease program manager. 4. Wash your hands. Before and after cooking, prepping the table, arranging dishes, or eating: wash your hands with warm, soapy water for at least 20 seconds. (Not sure how long that is? Sing the "happy birthday" song to yourself twice.) If you're planning the event, be sure to provide a place for guests to wash their hands. If handwashing isn't available, alcohol-based hand sanitizer is a good alternative. 5. Beware the tasting spoon. If you taste food as you're cooking, use a separate spoon for each taste. Plus, use a separate spoon for stirring and a fresh spoon for serving. Make sure each dish has a serving utensil so guests don't need to take food with their hands. Brighten the table with a healthy dish. A few extra vegetables can be a welcome bright spot on the buffet table, said Shannon Massey, public health nutritionist. 6. Make foods look festive by adding a few eye-catching vegetables to a favorite dish or try a new, flavorful healthy side dish or healthy entree recipe. Buy in-season produce when it costs less and tastes best. 7. Try a makeover of your favorite potluck dishes, using the My Recipe tool on SuperTracker to get healthier results. 8. Finds ways to cut back on added sugars, salt and fat as your prepare your favorite recipes. Try some of the delicious and budget-friendly recipes on What’s Cooking. Wishing happy, healthy potlucks to all.
<urn:uuid:157e3e3f-b676-4a38-b534-ede24f66e1fa>
CC-MAIN-2021-39
https://www.slocounty.ca.gov/Departments/Health-Agency/Public-Health/Department-News/Nine-Tips-for-a-Safe-Healthy-Potluck.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057388.12/warc/CC-MAIN-20210922193630-20210922223630-00181.warc.gz
en
0.929819
737
2.609375
3
About Koa Wood… A Woodworkers description and a sampling of images A few years ago, my friend and fellow woodworker Hank Snider wrote the following regarding Hawaiian koa wood: “With the possible exception of sandalwood, koa is the best-known hardwood of the Hawaiian Islands. Acacia koa is a native forest tree, unique to Hawaii, and held in reverence. Koa means bold, a quality essential to the ocean-going vessels which were adze-carved from giant logs in Big Island forests. Koa was used by early European craftsmen in Hawaii to make western-style furniture of the last century, some of which survives as Hawaiian heritage antiques. Modern uses include some extensive employment in isolated commercial and government buildings, some production furniture made locally in small factories, and the limited output of a few score of individual craftsmen who make one or a few pieces at a time. Availability of this treasured wood has been declining largely because of lack of reforestation with koa following logging. The usual practice has been to replace koa with cattle, which prevent regeneration of the forest. This practice reduces the tax burden on the land, under current tax laws. The reduction of the supply of koa has resulted in a dramatic rise in the price, which is about five times higher than in 1980. In the past few years some replanting has started, which will perhaps lead to a sustained supply of wood in a few decades. The present conditions of restricted supply have led to some interesting results affecting the quality of available wood. Presently, koa is coming from a variety of small suppliers who take trees from different parts of the geographic range of koa. Thus the available koa is highly variable in appearance, depending on where it came from. Koa is remarkably multi-colored, with a variety of hues appearing next to each other in the same piece of wood, ranging from yellow or greenish-yellow, through orange, brown, and red, to almost black. The dominant color among the others differs in woods from different sources. The result is that one has a choice of finished pieces which range in color and appearance much more than a generation ago, when the dark red wood from a single mill on the Big Island dominated the market. Many prefer the lighter golden hues which retain their brilliance when shown in interior spaces with subdued lighting. The other remarkable quality of koa results from its curly grain. Most koa has a three-dimensional quality which draws the eye beneath the surface of the wood, making its surface seem almost transparent. In the choicest pieces these swirls and waves in the wood are truly spectacular. Despite its also spectacular price this curly wood is in great demand, and is carefully husbanded by fine craftsmen to create jewels of the forest which will be treasured for generations. During the past few years a source of koa wood for Oahu craftsmen has been trees in local forests which have been dying or toppling in windstorms. Some of these old mature trees have proved to contain wood of exquisite color and character, and every available scrap of the wood has been used to make items as small as pens and hair ornaments. This is a welcome source of wood for the environmentally aware, concerned and active people who would otherwise embargo the use of koa and other trees from the native forests of the Hawaiian Islands. These islands are the most isolated in the world, and have remained so for tens of millions of years. 
During this extended period a truly spectacular biota of unique plants and animals evolved here. From his initial arrival a thousand years or so ago, the activities of man have reduced the range of many species and resulted in the extinction of some. This process has accelerated with recent population increases and economic expansion. The use of windfallen trees, veneers, alternative hardwoods as well as conscientious wood-sparing designs are among the responses of wood workers to these concerns. Public policy decisions about tax strategies, reforestation, restriction of introduced plants and animals, and economic development will be crucial in preserving Hawaii’s remaining unique biological heritage.” Hank Snider, 1994 For further information see: http://www.winrock.org/fnrm/factnet/factpub/factsh/AKOA.TXT For those of you who haven’t had the chance to see koa wood, I offer the following from a random selection of pieces that I deal with. In the first group, the samples have been lightly treated with Deft Danish oil (the SCNKOA#.GIF series); the second group of images is taken from the backs of some of my current instruments in stock (SCNUKE#) wherein the wood has been sealed and lacquered. The size chosen (200×200 pixels) may make it useful for your Windows wallpaper. If these images are used in a commercial site, I would appreciate either a link or a simple source acknowledgement. First we look at relatively simple patterns Sometimes I deal with koa wood that is a mixture of both sapwood, which is usually more lightly colored, and heartwood. If the wood has been allowed to languish in the elements before harvesting, various fungi enter the wood and discolor it in a variety of ways producing an effect termed “spalting”. Spalted wood is softer than unaltered wood, so our resin impregnation process becomes that much more important in making this beautiful effect still usable. The last set of images are from the backs of current instruments. Symmetrical patterns are taken from the center seam area…
<urn:uuid:1ced6c8a-b227-4104-9f9c-3cbe9e4737a2>
CC-MAIN-2022-21
https://ukuleles.com/ukuleles/about-koa-wood/
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00207.warc.gz
en
0.956379
1,155
2.609375
3
Osteosarcoma (bone cancer) is the most common type of cancer that affects the bones of dogs. This cancer gradually eats away at the healthy bone tissue, leaving weaker, damaged bone which can break easily, even with normal activity. Affected dogs may have a noticeable hard swelling at the site of the cancer, or may simply go suddenly lame, almost overnight. Bone cancer is mostly seen in larger breeds of dog with Greyhounds being the breed diagnosed most with this disease, followed by Rottweilers and Great Danes. Although most common in older greyhounds, the disease has no real age limits and can be seen in dogs of racing age, as well as in brood bitches and retired racers kept as pets. The risk of developing this disease increases with age, and one study in the UK found it accounted for almost 50% of all tumours seen in greyhounds. The most common sites for bone cancer to develop in the front leg are the shoulder and just above the wrist. In the hind leg the cancer tends to occur just above or below the knee/stifle. The owner often reports that the dog has gone lame, and the area will often be swollen and painful to touch. Because it can appear so quickly, often owners assume the dog has simply hurt itself in the yard, or while exercising or playing with other dogs. The location of the swelling, amount of pain, and the appearance of the bone on x-ray are used to diagnose this disease. X-rays often show an area of bone that is very different from the normal bone above and below it, with a distinctive appearance described as ‘moth eaten’. Sometimes on x-ray it is also possible to see ‘pathologic’ fractures (breaks in the abnormal bone) which are the cause of a lot of the pain. Confirmation of the diagnosis can be achieved via a bone biopsy or fine needle aspirate – where some of the cells from the affected area are removed and sent for examination by a pathologist. Unfortunately bone cancer is usually a very aggressive and nasty disease and malignant cancers can spread from their initial location to other places in the body such as the liver and lungs. Osteosarcoma is a type of cancer that spreads very early in the disease, often well before any signs or symptoms from the original tumour are visible. Given this early spread, most dogs diagnosed with this disease have a very poor prognosis. It is estimated that most dogs will have a life expectancy of only a few months from the time of diagnosis. Treatment options include pain relieving medication, amputation of the affected limb, chemotherapy and radiotherapy. Amputation of the affected limb is primarily done to control the pain associated with the area, and does not ‘cure’ the cancer. Without amputation, pain relief may work for a short time, but usually an inability to control the pain leads to the owner deciding to euthanase the greyhound. Greyhounds usually cope quite well with amputation, even though it seems to be a very drastic option. Amputation of the affected limb can increase the life expectancy from just weeks to an average of 4-6 months. Chemotherapy is aimed at slowing the spread of the disease into other organs. Many owners do not consider chemotherapy because of the cost involved and the concern that their greyhound will suffer similar side effect to human chemotherapy patients (such as nausea and vomiting). 
Interestingly, dogs are far less likely to suffer these types of reactions to the chemotherapy medication, and new chemotherapy drugs continue to be developed which are safer, more effective, and reduce the risks of unpleasant side effects. Amputation followed by chemotherapy gives the best life expectancy, but the average survival time with this option is still only 10-12 months. If you are concerned that your greyhound has gone suddenly lame, especially if it is an older dog, it is important to have the dog checked by your veterinarian. They will be able to diagnose the problem, and discuss all of the available options and their likely outcomes with you so that you can make an informed decision. If you would like to read more, Greyhound Adoption of Ohio Inc. has an excellent booklet written by William E. Feeman III DVM that you can access via the internet at http://www.animalmedicalcentreofmedina.com/library/Osteosarcoma.pdf or visit the Greyhound Health and Wellness Program site, which is part of The Ohio State University, at http://www.vet.ohio-state.edu/2096.htm.
<urn:uuid:f312787a-0970-4af1-8a55-404d9a949221>
CC-MAIN-2019-04
https://greyhoundcare.grv.org.au/health-and-well-being/bone-cancer/
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584547882.77/warc/CC-MAIN-20190124121622-20190124143622-00180.warc.gz
en
0.961756
946
3.015625
3
Symptom Checker: Symptoms & Signs Index
Medical Author: Melissa Conrad Stöppler, MD
Allergies are exaggerated immune responses to environmental triggers known as allergens. Allergies are very common, and about 50 million people in North America suffer from allergies. One of the most common forms of allergy is allergic rhinitis ("hay fever"), which produces symptoms like sneezing, a runny nose, nasal congestion, and itchy, watery eyes. The symptoms of hay fever can, in turn, lead to fatigue and lethargy. Other types of allergic reactions can involve the skin (hives and itching). Anaphylactic shock is a severe form of allergic reaction that can be life-threatening. In anaphylactic shock, there is swelling of the throat and difficulty breathing. Asthma is also related to allergies in many cases. The symptoms of allergies can sometimes resemble those of other conditions. The common cold and the flu can cause respiratory symptoms similar to allergies. Typically, however, allergy symptoms are associated with a specific time of year or with exposure to an allergen.
Summary of Common Allergy Symptoms by MedicineNet Staff
A review of our Patient Comments indicated that many people with allergies have similar symptoms and signs. Oftentimes, an allergy attack began with itchy eyes followed by facial swelling, particularly of the eyes and lips. Some patients mentioned that the itchiness occurred all over their bodies, and sometimes they developed hives. Several people indicated that their symptoms felt flu-like, in that they experienced coughing, fatigue, and the sensation that they had a fever.
REFERENCE: WebMD.com. Allergy Types.
<urn:uuid:18e1b038-0b11-4115-80f1-f60b7d960108>
CC-MAIN-2015-22
http://www.medicinenet.com/allergy/symptoms.htm
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929256.27/warc/CC-MAIN-20150521113209-00238-ip-10-180-206-219.ec2.internal.warc.gz
en
0.940542
406
3.59375
4
June: NonFiction Pick
Recommended by Elly
We have a worldwide trash epidemic. The average American disposes of 4.4 pounds of garbage per day, and our landfills hold 254 million tons of waste. What if there were a simple—and fun—way for you to make a difference? What if you could take charge of your own waste, reduce your carbon footprint, and make an individual impact on an already fragile environment? A zero waste lifestyle is the answer—and Shia Su is living it. Every single piece of unrecyclable garbage Shia has produced in one year fits into a mason jar—and if it seems overwhelming, it isn't! In Zero Waste, Shia demystifies and simplifies the zero waste lifestyle for the beginner, sharing practical advice, quick solutions, and tips and tricks that will make trash-free living fun and meaningful. Learn how to:
- Build your own zero waste kit
- Prepare real food—the lazy way
- Make your own DIY household cleaners and toiletries
- Be zero waste even in the bathroom!
Be part of the solution! Implement these small changes at your own pace, and restructure your life to one of sustainable living for your community, your health, and the earth that sustains you.
<urn:uuid:bac2c5db-12c5-49bd-8ced-d9658208a0c8>
CC-MAIN-2023-40
https://nehlibrary.org/2018/06/nonfic-zero-waste-simple-life-hacks-to-drastically-reduce-your-trash-by-shia-su/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510529.8/warc/CC-MAIN-20230929222230-20230930012230-00247.warc.gz
en
0.922756
270
2.515625
3
Carbohydrates are portrayed as ‘the bad guys’ in the fitness industry when it comes to losing fat. But people underestimate how important they are. To lose fat you have to eat the right carbs, also known as ‘good carbs’. Good carbs, known as complex carbohydrates, are the body’s main energy source. They are high in fibre and are digested slowly, providing a slow release of energy into the body. Complex carbs come from sources such as sweet potatoes, brown rice, wholegrain pasta, porridge, beans and vegetables. Carbs are the building blocks for refuelling your body, and you should be feeding it right with these slow-burning foods. Following an overnight fast, your body needs its blood sugar and muscle glycogen restored, and the foods listed above are the essentials for doing that. Bad carbs, known as simple carbohydrates, should be consumed in moderation or very rarely. Simple carbs, on the other hand, are refined with sugar and aren’t good for you. They are very high in sugars and can raise glucose levels very quickly. This causes a spike in energy that isn’t sustained and can leave you depleted and fatigued. Examples of simple carbs include fizzy drinks, cakes, sweets, white bread and pasta, and cereals (basically anything that makes your mouth water). Always allow yourself a treat every now and again, but primarily stick to the good carbs for a healthier body and better lifestyle. So you can cut down on the bad carbs, but be careful not to cut out carbs altogether, as the complex variety provides an important energy source as well as fibre to help with the digestive system. So remember – not all carbs are the enemy!
<urn:uuid:ef92038c-7863-44c3-abad-baa354a73103>
CC-MAIN-2021-49
https://peakhealth.fitness/2017/10/23/good-bad-carbs/
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00451.warc.gz
en
0.961601
357
2.8125
3
"South Asia is the world's most populous region. It is a region of overlapping ethnic, linguistic and cultural diversity. However, it is one of the least integrated in terms of regional or cross-border infrastructure. What Swaraj was hinting at is that instead of wielding tourism as an instrument of political leverage, as it has often been, between nations, it can be used as an effective means of economic and public diplomacy to help improve interactions and peace and prosperity in the region that has been one civilizational unit and historically a single market. Tourism contributes to people's getting to know each other more closely and helps them find out their commonalities, reaching beyond the commercial dimension. As the prime minister said in Thimpu, terrorism divides, tourism unites." After talks with the Bhutanese leadership, Prime minister Narendra Modi Monday proposed developing a tourism circuit combining India's Northeast region and the Himalayan nation. This came weeks after he conferred with other South Asian leaders and external affairs minister Sushma Swaraj spoke about some new initiatives that could be taken to build a new architecture of development cooperation among SAARC countries. One of those, she said, is tourism. This is understandable, given the fact that tourism is a high-impact activity, a major generator of jobs and a key export sector. Tourism routes are also key to regional development and integration, and the volume and growth of world tourism over the years have demonstrated that the sector deserves a higher degree of attention than it receives in South Asia that the United Nations says helps promote sustainable, people-centered growth. What is significant about the lead is the strategic framework to link tourism with the imperative of economic growth and regional integration. It is not that the regional grouping has not included tourism in its agenda. An action plan has been there. Promotion of the SAARC region as a common tourist destination by enhancing the role of the private sector, human resource development, promotion of the South Asian identity through tourism, development of cultural and eco-tourism have all been discussed in the past. The possibility of joint marketing, relaxing visa regime, giving access to cross-border driving licences, making Indian currency more flexible and increasing inter-SAARC movement by air by national flag carriers have come up from time to time. But nothing substantial has been achieved when seen in the context of international tourism. International tourism to emerging and developing economies has been growing strongly in recent years. In 2013, over one billion people travelled the world and these countries received 506 million or 47 percent of them all as compared to 38 percent in 2000. The UN World Tourism Organisation (UNTWO) forecast this share to surpass that of advanced economies in the coming years and to reach 57 percent by 2030. Also, as a worldwide export category, tourism ranks fifth after fuels, chemicals, food and automotive products. It ranks first in many developing countries. Tourism accounts for 42 percent of the exports of services of emerging markets and developing economies and has been identified by half of the least developed countries as a priority instrument for poverty reduction. 
According to the latest World Tourism Barometer, receipts in destinations worldwide from expenditure by international visitors on accommodation, food and drink, entertainment, shopping and other services and goods reached an estimated $1,159 billion in 2013. Growth exceeded the long-term trend, reaching 5 percent in real terms (taking into account exchange rate fluctuations and inflation). The growth rate in receipts matched the increase in international tourist arrivals, also up by 5 percent, reaching 1087 million in 2013, from 1035 million in 2012. Such results confirm the increasingly important role of the tourism sector in stimulating economic growth and contributing to international trade. These results show that it is time to position tourism higher in the trade agenda so as to maximize its capacity to promote trade and regional integration, says Taleb Rifai, UNWTO secretary-general. However, tourism, as Anil Agarwal of Vedanta Group rightly says, has been one of India's most unsold and underrated assets. Despite its rich cultural heritage and diverse landscape, the country attracts only about six million tourists annually while around 60 million land in China. India also lags behind Asian competitors like Thailand and Singapore when it comes to attracting visitors. A lack of adequate infrastructure and connectivity, complicated visa rules, overall standards of hygiene and cleanliness, and of late the problem of the security of women, as well as the need for better marketing, are cited as the reasons for the underachievement in the sector. The state of affairs looks set for a change, as the imperative of economic growth has made the new Indian government make tourism a policy priority. And the business-focussed prime minister has made it clear that tourism has to be an important growth area that will support economic recovery. Earlier this month, he held a meeting of the tourism ministry to discuss plans to promote adventure and religious tourism as well as the identification of 50 tourist circuits. The government has also decided to push through the agreed changes in the visa system, which from October 1 will enable visitors to get visas on arrival. Tourism, studies have shown, can create win-win partnerships for all countries in a region. Cross-border cooperation can promote tourist destinations and corridors with complementary locations. Cooperation flows for tourism can act as catalysts for national development efforts at different levels in middle income and least developed countries. These flows can then trigger private sector investment, contributing to a greater effectiveness of aid and poverty reduction, says Marcio Favilla, UNWTO executive director for operational programmes and institutional relations. Swaraj, in her first interaction with the media, said every SAARC country is pursuing its own tourism programmes and that synergy and a common approach to the growth of tourism in the region, benefiting all, can be followed. She then referred to a suggestion from the Maldives, in view of growing medical tourism, that people who come to India for surgery can go to the archipelago for post-operative care. Prime Minister Sushil Koirala of Nepal has reportedly sought Indian investment in tourism in the Himalayan republic. South Asia is the world's most populous region. It is a region of overlapping ethnic, linguistic and cultural diversity. However, it is one of the least integrated in terms of regional or cross-border infrastructure.
What Swaraj was hinting at is that instead of wielding tourism as an instrument of political leverage, as it has often been, between nations, it can be used as an effective means of economic and public diplomacy to help improve interactions and peace and prosperity in the region that has been one civilizational unit and historically a single market. Tourism contributes to people's getting to know each other more closely and helps them find out their commonalities, reaching beyond the commercial dimension. As the prime minister said in Thimpu, terrorism divides, tourism unites. (17-05-2014 - Saroj Mohanty is a veteran journalist and analyst. The views expressed are personal. He can be contacted at [email protected])
<urn:uuid:78f8381c-ad99-47c1-b9d7-0e28924e7a7d>
CC-MAIN-2014-41
http://www.nerve.in/news:2535002381803
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125654.84/warc/CC-MAIN-20140914011205-00010-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
en
0.95518
1,483
2.578125
3
Agriculture is one of our nation’s largest industries. The technology behind it is constantly expanding, changing with new revolutions in science, and therefore changing the way we eat. Feed Science is one of the most important disciplines behind agriculture. It’s a science in its own right, combining a study of biology and chemistry to help us figure out how to maximize livestock production. Feed Science covers a lot of territory, from new production methods to the development of pet food. As a Feed Science major you will have the opportunity to be on the cutting edge of some of the newest developments in the feed industry. You’ll help perfect the newest processes of providing food for livestock.
<urn:uuid:2c04b3d5-2d8a-4439-8753-482572918e72>
CC-MAIN-2014-15
http://www.princetonreview.com/Majors.aspx?cip=010904
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00372-ip-10-147-4-33.ec2.internal.warc.gz
en
0.914324
141
3
3
We often wonder what awaits us in the future, and many of us have consulted various methods to find out what lies ahead. Palmistry is the practice of characterisation and the prediction of the future through the study of the palm. The study of palmistry dates back to 3000 BC in ancient Egypt, and it is still practiced worldwide today. 'Cheiro's Palmistry for All' serves as a basic guide to the meaning and importance of the various lines, mounts, ridges, etc. of the palm.
<urn:uuid:04e1558c-b780-4b56-ad42-fc53dc9dbeda>
CC-MAIN-2018-26
http://bookthisbook.com/index.php/books-on-rent/astrology/cheiro-s-palmistry-for-all.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865145.53/warc/CC-MAIN-20180623171526-20180623191526-00151.warc.gz
en
0.955294
102
3
3
For serious discussion of the controversies, approaches and enigmas surrounding the origins and development of the human species and of human civilization. (NB: for more ‘out there’ posts we point you in the direction of the ‘Paranormal & Supernatural ’ Message Board). The two big arguments advanced against the OCT are: 1) Orion (the belt stars anyway) was not a 'big cheese' (the term used at Hall of Maat) so they would not have aligned pyramid accordingly; 2) Pan generational projects were verboten so no succeeding Pharaoh would build based on what another had done ... That said, the Gizamids could have been built after the Dynastic Period (when Orion was a big cheese, knowledge of precession, PI, PHI, etc. was widespread) or before the Dynastic Period when a Lost Civilization with such knowledge built them or the AE simply took the plans and technology from the LC and built the structures as tombs for their kings (even suggesting this alternative a 15 years ago would have provoked derision and accusations of racism, Nazi sympathies, etc)
<urn:uuid:b6bf82ef-6436-4cb3-aa4a-6c6b75a8b6df>
CC-MAIN-2021-17
https://grahamhancock.com/phorum/read.php?1,1058447,1058462
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00056.warc.gz
en
0.96623
232
2.53125
3
Rise and Fall
Regardless of the obscurity of their origins, it is clear that a distinctive Etruscan culture evolved about the 8th cent. B.C., developed rapidly during the 7th cent., achieved its peak of power and wealth during the 6th cent., and declined during the 5th and 4th cent. Etruria had no centralized government, but rather comprised a loose confederation of city-states. Important centers were Clusium (modern Chiusi), Tarquinii (modern Tarquinia), Caere (modern Cerveteri), Veii (modern Veio), Volterra, Vetulonia, Perusia (modern Perugia), and Volsinii (modern Orvieto). The political domination of the Etruscans was at its height c.500 B.C., a time in which they had consolidated the Umbrian cities and had occupied a large part of Latium. During this period the Etruscans were a great maritime power and established colonies on Corsica, Elba, Sardinia, the Balearic Islands, and on the coast of Spain. In the late 6th cent. a mutual agreement between Etruria and Carthage, with whom Etruria had allied itself against the Greeks c.535 B.C., restricted Etruscan trade, and by the late 5th cent. their sea power had come to an end. The Romans, whose culture had been greatly influenced by the Etruscans (the Tarquin rulers of Rome were Etruscans), were distrustful of Etruscan power. The Etruscans had occupied Rome itself from c.616 B.C., but in c.510 B.C. they were driven out by the Romans. In the early 4th cent., after Etruria had been weakened by Gallic invasions, the Romans attempted to beat the Etruscans back. Beginning with Veii (c.396 B.C.) one Etruscan city after another fell to the Romans, and civil war further weakened Etruscan power. In the wars of the 3d cent., in which Rome defeated Carthage, the Etruscans provided support against their former allies. During the Social War (90–88 B.C.) of Sulla and Marius the remaining Etruscan families allied themselves with Marius, and in 88 B.C. Sulla eradicated the last traces of Etruscan independence.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:d31dca94-b029-43c2-aa12-dce9631ffdf4>
CC-MAIN-2015-18
http://www.factmonster.com/encyclopedia/history/etruscan-civilization-rise-fall.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641054.14/warc/CC-MAIN-20150417045721-00237-ip-10-235-10-82.ec2.internal.warc.gz
en
0.969962
548
3.921875
4
Vertebrate neurons are exquisitely specialized for the functions they perform. As explained in previous chapters, a single neuron may receive information from and relay information to thousands of other neurons; consequently, the nervous system is capable of remarkably complex functions. Moreover, the brisk flux of ions across neural membranes permits extremely rapid interneuronal signaling. However, this specialization comes at a cost. A tremendous amount of energy is required to maintain ionic gradients across the membranes of the approximately 100 billion neurons that comprise the human brain. Although the brain represents only 2% of the body’s total mass, it uses approximately 20% of the body’s oxygen supply, and blood flow to the brain accounts for about 15% of total cardiac output. Ischemia, or insufficient blood supply, results in oxygen and glucose deprivation and in the buildup of potentially toxic metabolites such as lactic acid and CO2. Interruption of blood flow to the brain can lead to complete loss of consciousness within 10 seconds, the approximate amount of time required to consume the oxygen contained in the brain. Stroke occurs on disruption of blood flow to brain tissue caused by obstruction of blood flow or bleeding in the brain (hemorrhage). The exquisite vulnerability of neurons to energy deprivation caused by stroke results in vast medical, economic, and personal costs. In the United States alone, roughly 795,000 strokes occur each year. This equates to an average of one stroke every 40 seconds in the American population. Approximately 150,000 of these strokes are fatal, which equates to one death every 4 minutes, making stroke the fourth leading cause of death in the United States. Survivors of stroke often are beset by serious long-term disabilities, including paralysis and disruption of higher cognitive functions such as speech. Individuals with such disabilities may be unable to resume work and other daily activities, and often require extensive long-term care by healthcare professionals or friends and family. The term stroke, now less commonly referred to as a cerebrovascular accident (CVA), broadly refers to neurologic symptoms and signs that result when blood flow to brain tissue is interrupted. The two primary types of stroke, as noted above, are occlusive and hemorrhagic. An occlusive stroke is caused by the blockage of a blood vessel and accounts for 87% of all strokes in the United States 20–1. Vascular occlusion, which generally restricts blood flow to a discrete area of the brain, results in neurologic deficits and in a loss of functions controlled by the affected region. Occlusive strokes typically are caused by embolic, atherosclerotic, or thrombotic occlusion of cerebral vessels 20–2. Critical stenosis of the internal carotid artery (ICA) at the bifurcation of the common carotid artery (CCA) into the ICA and external carotid artery (ECA). Shown is critical (>90%) stenosis of the ICA identified on digital subtraction angiography in a patient with a recent left hemispheric stroke. (Used with permission from Robert Bucelli, Washington University School of Medicine.) Embolic infarctions. Diffusion-weighted MRI revealing embolic infarcts (white) within the left middle cerebral artery (MCA) territory. Such emboli may have arisen from blood clots in the heart (cardioembolic) or from clots or atheromatous plaque from an artery such as the carotid (artery-to-artery emboli). 
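The metabolic and epidemiologic figures quoted above lend themselves to a quick back-of-the-envelope check. The arithmetic below is illustrative only; it is not taken from the source and simply re-derives the ratios implied by the rounded numbers in the text (2% of body mass, roughly 20% of oxygen consumption, about 795,000 strokes per year).

\[
\frac{\text{brain share of O}_2\ \text{consumption}}{\text{brain share of body mass}} \approx \frac{20\%}{2\%} = 10
\]

Gram for gram, then, brain tissue consumes on the order of ten times as much oxygen as the body-wide average, which helps explain why even brief interruptions of blood flow are so poorly tolerated.

\[
\frac{365 \times 24 \times 3600\ \text{s/year}}{795{,}000\ \text{strokes/year}} \approx 40\ \text{s per stroke}
\]

which reproduces the "one stroke every 40 seconds" figure cited above.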
A hemorrhagic stroke is caused by bleeding from a vessel and accounts for 10% of all strokes in the United States. Intracranial bleeding can occur in the intraparenchymal, epidural, subdural, or subarachnoid spaces. Intraparenchymal hemorrhage may be caused by acute elevations in blood pressure or by a variety of disorders that weaken blood vessels. Chronic hypertension is the most common predisposing factor, but coagulation disorders, brain tumors that promote the development of fragile blood vessels, amyloid deposition in blood vessels (ie, amyloid angiopathy; Chapter 18), and the use of cocaine or amphetamines—both of which cause rapid elevation of blood pressure—are among the risk factors for intraparenchymal hemorrhages. Intraparenchymal hemorrhaging can lead to the formation of blood clots (hematomas) in the cerebrum, cerebellum, or brainstem, which in turn may limit the blood supply to nearby brain regions and exacerbate the injurious effects of a stroke. Hemorrhagic stroke can also occur secondary to an initially ischemic stroke. Epidural, subdural, and subarachnoid bleeding often results from head trauma or the rupture of an aneurysm. Subarachnoid hemorrhages account for the remaining 3% of strokes in the United States. In addition to the damage caused by the loss of blood supply to affected areas of the brain, hemorrhages can cause damage by increasing intracranial pressure that further compromises neuronal health. Moreover, through mechanisms that are not completely understood, subarachnoid hemorrhage can cause reactive vasospasm of cerebral surface vessels several days to weeks after the hemorrhage, which in turn can lead to a further reduction in blood supply and additional cerebral infarcts. Mechanisms of Neuronal Injury During Stroke When neurons are deprived of the nourishment they require, they quickly become unable to maintain their resting membrane potentials and, as they depolarize, they fire action potentials. Their firing triggers the release of neurotransmitters, in particular, glutamate, which in turn promotes depolarization of neighboring neurons. Such activity sets the stage for a destructive cycle of neuronal activation, neurotransmitter release, and further activation. Prolonged periods of neuronal activation can lead to the disruption of ionic gradients, massive Ca2+ influx, cellular swelling, activation of cellular proteases and lipases, mitochondrial damage, generation of free radicals, and eventually widespread neuronal death. Ischemia of only a few minutes’ duration can result in permanent brain damage. The basic biochemical mechanisms responsible for these processes of neuronal injury and death, broadly termed either necrosis or apoptosis (programmed cell death), are described in detail in Chapter 18. Despite our growing knowledge of the mechanisms that underlie ischemic neuronal death, our ability to treat stroke remains limited. Among the treatment strategies currently available, the best are geared toward prevention through the maintenance of cardiovascular health, restoration of blood supply, and the slowing of metabolism with hypothermia. Although none of these therapies has capitalized on the sophisticated studies of the biochemical events underlying neuronal cell death, efforts to prevent stroke nevertheless have been successful. The incidence of stroke has been reduced markedly by primary preventive measures aimed at controlling hypertension, hypercholesterolemia, diabetes, and tobacco use. 
HMG-CoA reductase inhibitors, known as the statin class of cholesterol-lowering agents, appear to confer additional benefit in stroke prevention beyond cholesterol control, as treatment with statins in individuals with normal cholesterol levels significantly reduces the incidence of future stroke. Prophylactic use of drugs that inhibit platelet function, such as aspirin, clopidogrel, or dipyridamole, has proven to be effective in reducing the risk of occlusive stroke. While these drugs can result in a slight increase in the risk of hemorrhagic stroke, this small risk is outweighed markedly by the benefit conferred for ischemic stroke prevention. One reason that stroke prevention is so important is that current stroke therapies are pitted against an unforgiving opponent: time. “Time is brain” is a fundamental concept to the treatment of acute ischemic stroke. As previously emphasized, serious neuronal damage can occur within minutes of an ischemic event. Barring round-the-clock observation of all individuals at risk for stroke, the effectiveness of treatment in humans is unlikely to approach that of laboratory animals because researchers in the laboratory have the luxury of administering therapy during or immediately after an ischemic insult. This underscores the need for protective therapy in high-risk populations. The Peri-Infarct Area: An Important Treatment Target Many of the approaches to treatment discussed in the sections that follow involve actions that occur primarily in the peri-infarct area, the “penumbra,” and serve to salvage at-risk neurons that would otherwise be destined to die within hours or days after a stroke. The peri-infarct area constitutes compromised but potentially salvageable tissue between the severely ischemic core and adequately perfused brain tissue. Although potentially salvageable, the peri-infarct area is quite vulnerable because it is subject to high levels of excitotoxic neurotransmitters and free radicals, waves of cellular depolarization, and inflammatory processes (Chapter 18). After an occlusive stroke, blood supply can be restored with thrombolytic agents, which dissolve the clots that impede the normal flow of blood. Thrombolytic agents have been found to improve the outcome of this type of stroke in clinical trials. However, strict protocols are utilized in determining patient eligible for this form of therapy in order to minimize the risk of hemorrhage. Consequently, the presence of hemorrhagic stroke must be ruled out by use of computerized tomography (CT), and other risk factors such as malignant hypertension, recent surgery, or prior cerebral hemorrhage must be excluded before thrombolytic agents are used. Even occlusive strokes are accompanied by a small but real risk of hemorrhagic transformation. Thrombolytics such as tissue plasminogen activator (tPA; eg, alteplase, reteplase), urokinase, streptokinase, prourokinase, and desmoteplase are proteins that promote the conversion of the proenzyme plasminogen into plasmin, an enzyme that degrades fibrin, a key structural protein in most blood clots 20–3; 20–1. Currently, tPA is the only thrombolytic substance approved for intravenous use in acute ischemic stroke in the United States. Clinical trials demonstrate that intravenously delivered tPA reduces the disability of patients with acute ischemic stroke who were treated within the first 4.5 hours of the onset of symptoms. Mechanisms of action of anticoagulants and thrombolytics. A. 
Platelets are activated by molecules exposed during tissue injury. Aspirin inhibits cyclooxygenase, which catalyzes the formation of thromboxane A2, a key intermediary in the clotting process. Clopidogrel and related drugs antagonize the activation by ADP of platelet P2Y12 receptors, which promote platelet aggregation. Dipyridamole inhibits clot formation through mechanisms as yet unknown. B. During the clotting cascade, a chain of precursor proteins (mostly serine proteases) activate one another, a process that results in amplification of the signal. Heparin activates the endogenous protein, antithrombin III, which then inhibits several of the activated clotting proteins, in particular, factor Xa and thrombin (factor IIa). Warfarin and related agents deplete vitamin K–dependent clotting factors: factors Xa, IXa, and VIIa, and thrombin. Apixaban and rivaroxaban inhibit factor Xa, while dabigatran inhibits thrombin. C. Thrombolytics such as tissue plasminogen activator (tPA) and streptokinase catalyze the conversion of the inactive precursor plasminogen to the active enzyme plasmin, which catalyzes the breakdown of fibrin polymers. Fibrin is a key component of the clot; it is produced from its precursor fibrinogen through catalysis by thrombin, a major product of the clotting cascade. 20–1 Role of Nitric Oxide in Stroke Nitric oxide (NO) functions as an intracellular and intercellular messenger in the brain (Chapter 8). It is synthesized by the Ca2+-activated enzyme, nitric oxide synthase (NOS). The importance of NOS activation in neuronal injury has been tested through the use of specific NOS inhibitors, such as L-nitroarginine. Initial studies produced inconsistent results: NOS inhibitors displayed neuroprotective effects in some experiments, especially those conducted in cell culture, and produced either no effect or a detrimental effect in others. These conflicting results most likely were attributable to the multiple roles and sources of NO in the brain. Three different isoforms of NOS exist, each of which is the product of a distinct gene. Neuronal NOS (nNOS) is expressed exclusively in neurons, endothelial NOS (eNOS) originally was identified in endothelial cells, and inducible NOS (iNOS) originally was identified in certain immune system cells. Some neurons express eNOS and iNOS in addition to nNOS. NO produced by eNOS acts as an endothelial relaxing factor: it promotes the relaxation of the smooth muscle surrounding arterioles and leads to vasodilation and increased blood flow. Intravenous treatment with the NOS substrate, L-arginine, promotes functional recovery in an experimental stroke model (see 20–6). A. Occlusion of the middle cerebral artery (MCA; onset indicated by red lightening) results in a profound reduction in regional cerebral blood flow (measured by laser Doppler flowmetry; lower graph) and functional activity of the brain (measured by electrocorticogram; upper graph). Days later, a large cerebral infarct evolves in the ischemic middle cerebral artery territory (indicated in red on the coronal brain section). B. In comparison, intravenous infusion of the eNOS substrate L-arginine after MCA occlusion in another rat augments cerebral blood flow, improves functional activity, and reduces the area of infarct compared with control treatment. These findings suggest that augmenting NO bioavailability can promote functional recovery in the ischemic brain. (Adapted with permission from Dalkara T, Morikawa E, Panahian N, et al. 
Blood flow-dependent functional recovery in a rat model of focal cerebral ischemia. Am J Physiol. 1994;267:H678–H683.) Gene knockout technology has helped to elucidate the roles of these NOS isoforms in neuronal injury. Compared with their wild-type counterparts, mice deficient in nNOS typically have smaller infarct volume (ie, amount of necrotic tissue) after an experimentally induced ischemic stroke. This finding suggests that nNOS activity may be detrimental to neuronal survival during ischemia. In contrast, mice deficient in eNOS tend to have greater than normal infarct volumes after experimentally induced stroke, which indicates that eNOS has neuroprotective activity. Most likely, eNOS exerts its beneficial effect by promoting the reperfusion of the ischemic area. This is demonstrated in the figure. Interestingly, iNOS knockout mice, like nNOS knockout mice, display diminished infarct volume after ischemic stroke; it is speculated that a decreased inflammatory response may reduce infarct size in these mice. The multiple effects of NO on ischemic injury provide an excellent lesson in the complexity of the brain’s response to ischemia. Other events that take place during ischemia, such as Ca2+ entry into cells, also may have multiple and varied effects, and these actions must be carefully examined if effective therapies are to be devised. If, for example, NOS inhibition can be developed as a clinical treatment for ischemic neuronal injury, such inhibition most likely will have to be carefully targeted to nNOS or perhaps iNOS. Prior to large initiatives to expedite the treatment of acute ischemic stroke, similar to efforts for acute myocardial infarction, by the time an individual was aware of the occurrence of a stroke, traveled to a hospital, and was diagnosed, hours had elapsed. In many centers around the world, the average onset to treatment time has dropped to under 60 minutes. However, despite these marked improvements in select centers, and the fact that administration of intravenous tPA within 4.5 hours of symptom onset is now standard of care in the United States, only 3% to 5% of stroke patients actually receive tPA because of the difficulty of administering it within this time frame. Even when intravenous tPA is successfully administered in time, the affected vessel does not always open or open completely. At academic centers, intravenous therapy is sometimes followed by interventional procedures, where a catheter is guided from a peripheral artery (usually the femoral artery in the groin) to the affected cerebral artery. The clot is then treated with a combination of mechanical clot disruption or direct instillation of a thrombolytic agent into the clot itself. Because a smaller dose of the thrombolytic agent is used when given at the site of the clot itself, as compared with systemic (intravenous) administration, intra-arterial therapy is in theory safer and can be performed up to 8 hours after symptom onset. However, three large trials published in 2013 argued against widespread use of this approach due to negative results. Thus, current recommendations are for intra-arterial therapy to be used on a selective, case-by-case basis. In cases of basilar artery thrombosis, when the brainstem is at risk and the matter of opening the artery is literally life and death, intra-arterial therapy administered up to 48 hours after symptom onset may improve outcomes. 
Prourokinase, urokinase, and desmoteplase are additional thrombolytics that are used less commonly in intra-arterial therapy. Heparin is a heterogeneous mixture of sulfated mucopolysaccharides. It is found in mast cells and in the extracellular matrix of most tissues. It has a molecular mass of 750 to 1000 kDa and is composed of long polymers of glycosaminoglycan chains that are attached to a core protein 20–4. Because of its structure, heparin is not effective after oral administration and must be given parenterally. It inhibits clot formation by enhancing the activity of antithrombin III, a protein that forms equimolar complexes with the various proteases activated during the clot formation process (see 20–3). By binding directly to antithrombin III, heparin causes a conformational change in the protein that enhances its binding to the clotting factor proteases. Heparin has been shown to cause more harm than benefit as an acute treatment of stroke and carries a risk for increased hemorrhage. Chemical structures of representative antiplatelet and anticoagulant drugs. As discussed in Chapters 4 and 11, aspirin inhibits cyclooxygenase, which in platelets catalyzes the conversion of arachidonic acid to thromboxane A2, among other products 20–4. Thromboxane A2 is a critical intermediate in the recruitment of platelets necessary for the clotting cascade. Aspirin administered to patients during hospital admission for stroke produces a small but significant net benefit in that it reduces mortality by 14% compared with placebo. Clopidogrel and related thienopyridine class agents (eg, prasugrel and ticlopidine) are other antiplatelet agents in current use. They act as adenosine diphosphate (ADP) antagonists, whereby they inhibit the binding of ADP to P2Y12 receptors (Chapter 8) on platelet membranes. They also irreversibly modify platelet P2Y12 receptors, and therefore their effects last for the lifespan of the platelets (approximately 7–10 days). The P2Y12 receptor is responsible for activation of the glycoprotein GPIIb/IIIa complex (also known as integrin αIIbβ3), the major receptor for fibrinogen. Dipyridamole, used for similar purposes clinically, inhibits clot formation and causes vasodilation, although it is not known which of its many actions (eg, phosphodiesterase, adenosine reuptake, or adenosine deaminase inhibition) is responsible for the drug’s clinical effects. A combination drug of aspirin–dipyridamole has been shown to offer added benefit in stroke prevention, relative to either drug given alone. Recent evidence suggests that dual antiplatelet therapy with aspirin and clopidogrel may also have added benefit in select populations. However, combination therapy historically has been avoided due to the increased risk of hemorrhage reported in the MATCH clinical trial. Patients at risk for cardioembolic strokes, such as those with atrial fibrillation or a mechanical heart valve, conditions that predispose patients to form intracardiac clots, often are treated with warfarin or one of the more recently developed oral anticoagulant alternatives (eg, dabigatran, rivaroxaban, and apixaban) 20–4. Warfarin is a synthetic derivative of a related compound in sweet clover, which was found in the early 20th century to promote bleeding. It acts as a functional vitamin K antagonist; it does not directly antagonize the function of vitamin K; rather, it depletes vitamin K by inhibiting its recycling. 
Vitamin K is a required cofactor for the enzymes that activate several clotting factors, including II, VII, IX, and X (see 20–3). Warfarin and related compounds are the most potent oral anticoagulants known; indeed, they are so potent that severe hemorrhage is a significant side effect of their use. (This effect, at high doses, is exploited in warfarin’s use as a rat poison.) Patients who take warfarin must have regular blood tests to ensure that their bleeding times are within safe boundaries and that dose adjustments are made accordingly. Unlike warfarin, the newer, oral anticoagulants do not require monitoring, allowing for standardized dosing; importantly, they appear equally efficacious. These drugs each target a specific factor in clotting cascades: dabigatran is a direct inhibitor of thrombin, while rivaroxaban and apixaban are inhibitors of factor Xa (see 20–3). Aspirin, warfarin, and other oral anticoagulants are used not only to treat stroke but also in the treatment of transient ischemic attacks (TIAs), which are brief periods of brain ischemia that resolve without a lasting neurologic deficit (ie, without appreciable neuronal death). These attacks are believed to be caused by transient occlusions of the cerebral vasculature. The symptoms of TIAs are similar to those of stroke, except that they resolve within minutes, to less commonly hours, of onset. Minimizing Ca2+ Influx Into Cells Because Ca2+ appears to be critically involved in promoting the biochemical processes that lead to neuronal destruction, the reduction of Ca2+ influx might be considered a promising strategy in the treatment of stroke. However, the effectiveness of drugs that reduce the influx of Ca2+ into neurons (see 20–1) has yet to be demonstrated in clinical trials. Inhibitors of voltage-dependent Ca2+ channels (Chapter 2) such as nimodipine, an L-type Ca2+ channel blocker that penetrates the brain, and flunarizine, a T-type Ca2+ channel blocker, have been investigated as potential therapies for stroke but thus far have not been shown to improve the functional outcome of patients after ischemic stroke. NMDA receptor antagonists exhibit a robust protective effect on neurons in culture and in vivo in animal models but have not proven to be effective in humans. Even if they were effective, many of these antagonists have phencyclidine-like adverse effects, such as psychosis and dissociation (Chapter 17), which severely limit the dose that can be used. Magnesium blocks the NMDA receptor, and a current phase 3 trial of intravenous magnesium sulfate given within 2 hours of symptom onset is under way. Because of the relative safety of magnesium and the narrow time window defined by the study, the trial design allows intravenous magnesium to be given by paramedics in the field, prior to arrival and evaluation in the emergency room. Antagonists of voltage-dependent Na+ channels, such as phenytoin, which can be very effective in the treatment of seizure disorders (Chapter 19), have failed to improve clinical outcomes of stroke. Likewise, drugs that promote GABAergic function in brain have been considered for stroke, but efficacy of this mechanism too has not yet been demonstrated in humans. 
20–1 Treatment of Stroke

| Category | Name | Mechanism of Action | Proven Clinical Efficacy |
| Antiplatelet, anticoagulation, and thrombolytic agents | | Cyclooxygenase inhibitor; inhibits synthesis of thromboxane A2, inhibiting platelet aggregation. Antagonists of P2Y12 receptors, which inhibit platelet aggregation. Inhibits clot formation and causes vasodilation. Converts plasminogen to plasmin, which cleaves fibrin clots. Inhibits synthesis of vitamin K–dependent coagulation factors. Factor Xa inhibitors. Direct thrombin inhibitor. | |
| Glutamate receptor blockade | Aptiganel, dextrorphan, dextromethorphan, delucemine (NPS1506), remacemide; licostinel (ACEA1021), gavestinel (GV150526); YM872, ZK-200775 (MPQX) | Low-affinity NMDA receptor antagonists; NMDA receptor channel blocker; NMDA glycine site antagonists; NMDA polyamine site antagonist; AMPA receptor antagonists | |
| Voltage-gated Ca2+ channel blockers | | | |
| Na+ channel blockers | | | |
| Voltage-dependent K+-channel agonist | | | |
| Enhancement of inhibitory neurotransmission | | | |
| Free radical scavengers, antioxidants | | | |
| Neural repair | bFGF recombinant protein; other growth factors | | |

Reducing Free Radical Damage and Cell Death Pathways
There is evidence that the increased generation of free radicals in ischemic brain tissue may contribute to neuronal injury and death (20–5; 20–1; Chapter 18). Free radical scavengers are agents that are oxidized by oxygen-reactive species without deleterious effects to the cell and thus might be expected to have positive effects in stroke. Tirilazad, a nonglucocorticoid steroid that inhibits lipid peroxidation, has been shown to reduce infarct area in animals treated within 10 minutes of complete focal ischemia. However, tirilazad had no effect on functional outcome when administered to humans approximately 4 hours after stroke. Ebselen, another free radical scavenger, is now in phase 3 clinical trials. Disofenin, a free radical trapping agent, showed modest initial promise in one trial in terms of functional outcome at 90 days, but those results were not replicated in a subsequent trial. Thus, currently there are no accepted stroke therapies based on blocking free radical damage.
Free radical damage to subcellular structures. Reactive oxygen species (ROS) and radicals are generated as a result of metabolic processes. These free radicals have at least one unpaired electron that renders them chemically unstable and highly reactive with other molecules in the body. Mitochondrial DNA (miDNA) is located near the inner mitochondrial membrane and lacks advanced DNA repair mechanisms; this makes miDNA particularly susceptible to damage from ROS. Cells respond to oxidative damage by neutralizing free radicals through antioxidant enzymes such as superoxide dismutase (SOD) and catalase. Eventually damage accumulates due to the inability of cells to repair damage as quickly as it arises.
Knowledge of the biochemical basis of necrotic and apoptotic mechanisms of neuronal death (Chapter 18) has suggested many additional potential approaches to the treatment of stroke. Indeed, genetic manipulation of numerous cell death or survival proteins in rodents has been shown to alter the brain's vulnerability to a stroke. However, none of these strategies has to date been validated in clinical trials in humans.
Among such strategies that were previously investigated in animal models is the use of caspase inhibitors; caspases (short for cysteine aspartate proteases) are enzymes that promote apoptosis and necrosis (Chapter 18). Another potential strategy is the use of agonists of peroxisome proliferator–activated receptor-γ (PPARγ). PPARγ is a member of the nuclear receptor superfamily, which also includes the receptors for steroid hormones, vitamin D, and retinoic acid (Chapter 4). PPARγ, and its PPARα and PPARδ isoforms, heterodimerizes with the retinoid X receptor to form an active transcription factor complex that regulates many genes involved in intermediary metabolism. While the endogenous ligands for PPARγ remain uncertain (prostaglandins may be involved), synthetic agonists exert beneficial clinical effects: the thiazolidinediones, pioglitazone and rosiglitazone, are effective antidiabetic agents and act by increasing the sensitivity of peripheral tissues to insulin. Early studies demonstrated that diabetics treated with these agents exhibited a lower incidence of stroke; however, subsequent studies have suggested a higher risk of cardiovascular events (myocardial infarction and stroke) in diabetic patients exposed to rosiglitazone. Still another experimental approach involves inhibitors of nitric oxide synthesis, based on the animal literature that the generation of nitric oxide during ischemia, mediated by excessive glutamatergic transmission and intracellular Ca2+ levels, may contribute to neuronal injury perhaps via free radical formation. However, other studies of nitric oxide in stroke suggest that it might be protective, highlighting some of the complexities of translating findings from animal models to the clinical situation 20–1. Indeed, over 1000 neuroprotective agents have been tested in preclinical studies, with many showing promising results. In contrast, of the 200 ongoing or completed clinical trials, no agent has yet been successful in being translated to clinical practice. Strategies With Pleiotropic Agents Several neuroprotective strategies act at multiple levels in the cascade of postischemic damage. Statins, mentioned earlier as agents that play a key role in preventing strokes from occurring, appear to also confer benefit on stroke recovery in several animal and human studies. Effects of statins include improving endothelial function, reducing inflammation, and increasing cerebral blood flow. In animal models, statins reduce infarct size and improve functional recovery. These benefits of statins are bearing out in clinical studies as well. It will be interesting in future research to understand the mechanisms underlying these diverse, beneficial effects of statins. Another broad-spectrum strategy is hypothermia. Controlled hypothermia confers benefit in cases of global ischemia after cardiac arrest, and so interest has been sparked as to its effect in cases of focal ischemia or stroke. In the setting of neonatal hypoxic-ischemic injury, hypothermia has been shown to offer clinical benefit. Several studies have demonstrated neuroprotective effects of hypothermia in animal models of adult stroke, and hypothermia has proven benefits in other mechanisms of human brain injury, including cardiac arrest. The clinical utility in human stroke is a matter still under investigation, with mixed results in initial clinical trials. 
Promoting Neural Recovery Another approach to stroke therapy is to promote the self-repair of damaged neurons or the growth of healthy neurons to help compensate for the loss of neurons destroyed during an ischemic attack. Two general strategies can be used to this end. A nutritive strategy involves ensuring that neurons have the molecules they need for repair and growth, and a signaling strategy involves providing the chemical signals that instruct neurons to grow. The first strategy has been attempted with administration of the phospholipid precursor citicoline. Citicoline is a key intermediate in the biosynthesis of phosphatidylcholine, an important component of the neural cell membrane. While past studies had suggested some positive effects, a recent large trial demonstrated no clinical benefit relative to placebo. Neurotrophic factors such as basic fibroblast growth factor (bFGF; Chapter 8) also are being considered for restorative therapy. In animal stroke models, bFGF reduces infarct volume when given shortly after the onset of focal ischemia. Although bFGF did not reduce infarct size when given 24 hours after experimentally induced stroke, it did improve outcome as measured by behavioral tests. However, this neurotrophic factor and others listed in 20–1 have either shown no benefit or are yet to be tested in human clinical trials. Despite all of the advances in our understanding of neural injury related to stroke, the best-established way to ensure long-term recovery of function is through rehabilitation. Research involving laboratory animals has provided insight into how rehabilitation might work at the neurobiologic level 20–2. The results of such research raise the possibility that neurobiologic mechanisms underlying the inherent plasticity of the brain may be exploited in the future to ensure maximal return of function after stroke. 20–2 Neurobiologic Basis of Rehabilitation Stroke patients with large initial deficits often can exhibit striking improvement. The length of the recovery process (typically 1–2 years) suggests that events other than the resolution of edema and inflammation are responsible for improvement in function. In some cases, restoration of blood flow through the development of collateral circulation may contribute to the regaining of function. However, several lines of evidence indicate that neurons undergo anatomic and functional changes that significantly assist in functional improvement after a stroke. In rats, for example, increased expression of the growth cone–associated protein GAP-43 and of the synaptic vesicle protein synaptophysin have been detected near experimentally induced infarct areas. Growth cones are specialized endings of growing axons before they form mature synapses. Increased expression of GAP-43 also has been noted in the periphery of infarcted human brain tissue examined at autopsy. Interestingly, dendritic sprouting has been observed contralateral to cortical lesions produced by electrocauterization in rats. This finding suggests that recovery of function may occur as the corresponding, contralateral area of the brain assumes the function of its injured counterpart. The occurrence of compensatory neural remodeling after stroke has been demonstrated experimentally. 
Electrophysiologic experiments have revealed a reassignment of function after ischemic infarct in squirrel monkeys: when small infarcts are produced in the area of the motor cortex that corresponds to the hand, new areas of the cortex, previously responsible for movements of the arm and shoulder, slowly gain the ability to control hand motions. This topographic reorganization of neuronal function required that the monkeys perform tasks that necessitated the use of their debilitated hand. Thus, frequent activation may stimulate the growth of remaining neuronal processes responsible for control of the hand into arm-and-shoulder territory. Alternatively, such activation may increase the potency of a small number of “hand neurons” that preexisted in the arm and shoulder space. Similar reassignment of function likely takes place in the human brain. Positron emission tomography (PET) and magnetic resonance imaging (MRI) studies indicate that adjacent or contralateral brain regions may indeed work to compensate for damaged tissue. Verbal tasks typically cause activation of speech areas in the left hemispheres of normal subjects; however, in some recovered aphasic patients, increased activation of homologous areas in the right hemisphere has been observed. Is this an indication that the right hemisphere can undergo changes that allow compensation for the damaged left hemisphere? Unfortunately, current experiments have not conclusively answered this question. It is possible, for example, that speech centers in the right hemispheres of certain aphasics participated in verbal tasks to an unusual degree before the onset of stroke. However, in light of anatomic and physiologic data from animal models, dendritic and axonal growth and other forms of neural plasticity are mechanisms worth investigating in stroke patients. Transcranial magnetic stimulation and transcranial direct current stimulation are noninvasive brain stimulation techniques currently being investigated for their utility as adjuvants to intensive rehabilitation poststroke. Knowledge of the molecular and cellular events that influence such rearrangement may eventually lead to other techniques for aiding the recovery of stroke victims. Moreover, because shifts of function depend on the use of an affected area after stroke, it is likely that aggressive physical or speech therapy will continue to be a critical tool in promoting recovery after stroke.
<urn:uuid:e43d7efc-a965-4acb-891c-ebc23064b51c>
CC-MAIN-2023-06
https://accessbiomedicalscience.mhmedical.com/content.aspx?bookid=1204&sectionid=72651021
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00832.warc.gz
en
0.928893
7,906
3.28125
3
120: What is 120 or medium format film? 120 or medium format film is so called because it is larger than 35mm or 135 format film, but smaller than 4×5 sheet film, which is called large format. 120 film usually comes wrapped around a plastic spool, and yields 12 or 16 images, depending on which camera and film masks you use. The terms “120 film” and “medium format film” are pretty much interchangeable nowadays, but it is important to know that the film is not 120mm. Medium format film does require a bit of special care in that it must be processed and printed at a professional photo lab – most drugstore or one-hour labs will not be able to process 120 film of any kind.
<urn:uuid:9e0ee42b-7f90-4c8c-babb-b1b0f10d1664>
CC-MAIN-2020-24
https://www.lomography.com/about/faq/1379-120-what-is-120-or-medium-format-film
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425481.58/warc/CC-MAIN-20200602162157-20200602192157-00165.warc.gz
en
0.941695
156
2.984375
3
If you have spent much time on YouTube recently, you might have noticed the influx of videos with computer synthesized voices. For whatever reason, people (or more likely bots) have been feeding news stories and articles through a text-to-speech engine, and turning the output into a video that gets uploaded to YouTube and given a clickbait title. Perhaps the most annoying thing about these videos (aside from their complete lack of original content) is that the synthesized voice sounds completely robotic. Words are often mispronounced, and there is a total lack of any sort of humanlike speech pattern variation. Like all things in technology, however, text-to-speech engines improve over time. Today’s text-to-speech engines, in spite of sounding robotic, are vastly superior to the ones that I grew up with in the 1980s. So with that in mind, it seems completely plausible that computer speech could eventually become indistinguishable from authentic human speech. But let’s take things one step further. What if that completely realistic synthesized voice were paired with an AI engine that allowed it to learn to have a conversation with a human? And what if it could do all of this over the phone? This is the near future according to Google. Of course, it is not exactly unusual for tech companies to give us futuristic predictions of how their technologies will shape the world. This video from Microsoft illustrates how the company once envisioned the future of business travel. In the case of Google, however, having a humanlike virtual assistant make calls on your behalf is not one of those futuristic concepts that may never see the light of day but rather is based on technology that already exists and that will be publicly available in the near future. The underlying technology that can make all of this happen is something that Google calls Duplex. There are a few different parts to Google Duplex, but the first is a truly natural sounding speech engine. The speech processor not only uses voice inflection but also inserts things like ums and ahs. These types of linguistic imperfections go a long way toward making speech sound more natural. Rumor has it that Google may have initially tested its Google Duplex technology using a more robotic-sounding speech engine but found that people were unwilling to interact with it over the phone. I think that it’s probably safe to say that most of us have a natural aversion to robocalls. Even if a robotic voice is being used to do something as innocuous as booking a restaurant reservation, the robot voice adds an air of illegitimacy to the process. The recipient of the call may be quick to dismiss the call as being fraudulent. Of course, the other big piece of the puzzle was making it so that Duplex is able to carry on a conversation with a human. Google had of course already laid the groundwork for this when it created Google Now. After all, a computer absolutely cannot carry on an intelligent conversation with someone unless it is able to understand what the person is saying. The flip side to this, of course, is that the computer has to be able to formulate an intelligent response based on the speech input that it has received. This part of the process would probably be impossible without the use of machine learning. How well does Duplex work? By now you are probably wondering how well Google Duplex works. I haven’t yet had an opportunity to try out Duplex myself, but I have heard from various sources that it works exceptionally well. 
I have been told that it is not only really hard to tell that you are talking to a computer, but that Google Duplex does a good job of recognizing what is being said, and responding appropriately. Surprisingly, Google Duplex even seems to know how to handle a really difficult phone call. In a demo that was recorded earlier this year, Google shows how Duplex goes about booking an appointment at a hair salon and then providing the user with a notification when the booking is complete. As if that demo were not impressive enough, Google demonstrates Duplex trying to make a restaurant reservation when “the call actually goes a bit differently than expected.” Assuming that this demo call was real (and some have doubted its authenticity), Duplex handled the call better than some humans probably would have. This raises a point that I have yet to hear anyone address — phone manners. When Google Duplex places a phone call on your behalf, it reflects on you. After all, Google Duplex is presumably making an appointment or a reservation in your name. Growing up, my parents taught me to always say please and thank you, and to show respect for whomever I am talking to. I have tried to continue doing this even as an adult. As such, I would be really uncomfortable with Google Duplex placing a call on my behalf if there was a chance that Duplex might treat the recipient of the call rudely. Thankfully, the demos seem to indicate that Duplex is programmed to be polite. What can Google Duplex do? If you watched the demo video that I linked to earlier, then you have seen that Google Duplex can be used to schedule a hair appointment or to make a restaurant reservation, but you may be wondering what else Duplex is and is not capable of. It probably goes without saying, but while Google Duplex can place calls on your behalf, it cannot impersonate your voice. Hence, you won’t be able to use Duplex to call that obnoxious relative and listen to their political rants so that you don’t have to. So what can Duplex do? From what I have heard, Duplex will initially be able to do things like making a hair appointment, making a restaurant reservation, or enquiring about a business’ operating hours. Because of the complexity of having a computer interact with a human in a natural way, it will take some time before Google Duplex will be able to make other types of calls. An amazing future The Google Duplex demos that I have seen have been nothing short of amazing, and I think that the technology holds enormous potential. In the future, for example, a Duplex-like technology may be able to call EMS and relay key information such as location and medical history if sensors that have been surgically implanted in the body detect that a heart attack is happening. Of course, the opposite is true. Every new technology gets abused. I can just imagine the future YouTube videos in which people use Duplex to troll various businesses. Trolling may seem unlikely since Duplex is the one who is controlling the call, but I have little doubt that a Duplex SDK will be released eventually, and then it’s game-on for the trolls. As ironic as it may be, one of the most useful things that Google could eventually do with Duplex is to make it so that Duplex figures out how to navigate all those seemingly endless telephone prompts for us so that we can speak to an actual person. Granted, this might not be in Google’s plans, but it would be helpful. Maybe Duplex can even be designed to wait on hold for us so that we don’t have to. 
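The "ums" and "ahs" the article describes are, at bottom, a text-preprocessing step applied before speech synthesis. The toy Python sketch below illustrates the general idea only; it is not Google's implementation, and the filler words, insertion rate, and the placeholder synthesize() function are all assumptions made for the example.

```python
import random

FILLERS = ["um", "uh", "hmm"]  # assumed filler set, purely for illustration

def add_disfluencies(text, rate=0.08, seed=None):
    """Randomly sprinkle filler words between words to mimic natural hesitation."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if out and rng.random() < rate:
            out.append(rng.choice(FILLERS) + ",")
        out.append(word)
    return " ".join(out)

def synthesize(text):
    # Placeholder for a real text-to-speech call; printing stands in for audio here.
    print(f"[TTS] {text}")

if __name__ == "__main__":
    synthesize(add_disfluencies(
        "Hi, I'd like to book a table for two at seven tonight.", seed=42))
```

None of this captures the hard part, which is understanding the other speaker and deciding what to say next, but it shows how cheaply the surface texture of human-sounding speech can be layered on top of otherwise ordinary synthesized text.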
Featured image: Shutterstock
<urn:uuid:12c426c7-0f7f-45cb-8cef-7abe6cf30e51>
CC-MAIN-2018-51
http://techgenix.com/google-duplex-phone/
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825363.58/warc/CC-MAIN-20181214044833-20181214070333-00062.warc.gz
en
0.967883
1,478
2.875
3
Professional Dental Cleaning in Red Deer, Alberta

1. Removal of calculus (tartar buildup): The dentist or hygienist in Red Deer will remove tartar buildup, which is formed by existing plaque. This happens when plaque has remained on certain teeth long enough to harden and firmly bond, causing potential future complications. Calculus accumulates above and below the gum line and can only be removed using certain dental instruments.

2. Removal of plaque: Plaque is a sticky and scarcely visible film that forms over teeth. It is a living deposit of bacterial culture, food particles, and saliva. The bacteria release toxins that typically cause inflammation in the gums. Inflammation is a symptom of the onset of advanced periodontal disease.

3. Teeth polishing: Removal of dark stains and plaque buildup that cannot be removed by scaling alone, let alone by just brushing your teeth at home.

4. Topical fluoride treatment: Fluoride is a recommended way to help treat tooth decay. Fluoride interacts with enamel, helping to protect teeth from dental decay while also providing additional strength to areas where the enamel has begun eroding. Fluoride is a mineral that is naturally present in many foods and water supplies and is proven to protect teeth from cavities. Many kinds of toothpaste and mouth rinses contain some fluoride; however, we recommend a more thorough application of fluoride during your dental cleanings, as it benefits our patients' long-term oral health.

Please note: For young children not requiring calculus removal, a dental assistant may carry out the cleaning.
<urn:uuid:7899fa9f-3b15-4c9c-81c8-6678e2633bb0>
CC-MAIN-2023-06
https://www.housedental.ca/dental-treatment-in-red-deer/preventive-dentistry/dental-cleaning/
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00606.warc.gz
en
0.922278
349
2.546875
3
Anxiety can cause symptoms that may range from nervousness to feelings of dread and panic. It can lead to rapid breathing and may also cause other health conditions.

Are you anxious? Maybe you're feeling worried about a problem at work with your boss. Maybe you have butterflies in your stomach while waiting for the results of a medical test. Maybe you get nervous when driving home in rush-hour traffic as cars speed by and weave between lanes.

In life, everyone experiences anxiety from time to time. This includes both adults and children. For most people, feelings of anxiety come and go, only lasting a short time. Some moments of anxiety are briefer than others, lasting anywhere from a few minutes to a few days. But for some people, these feelings of anxiety are more than just passing worries or a stressful day at work. Your anxiety may not go away for many weeks, months, or years. It can worsen over time, sometimes becoming so severe that it interferes with your daily life. When this happens, it's said that you have an anxiety disorder.

While anxiety symptoms vary from person to person, in general the body reacts in a very specific way to anxiety. When you feel anxious, your body goes on high alert, looking for possible danger and activating your fight or flight responses. As a result, some common symptoms of anxiety include:
- nervousness, restlessness, or being tense
- feelings of danger, panic, or dread
- rapid heart rate
- rapid breathing, or hyperventilation
- increased or heavy sweating
- trembling or muscle twitching
- weakness and lethargy
- difficulty focusing or thinking clearly about anything other than the thing you're worried about
- digestive or gastrointestinal problems, such as gas, constipation, or diarrhea
- a strong desire to avoid the things that trigger your anxiety
- obsessions about certain ideas, a sign of obsessive-compulsive disorder (OCD)
- performing certain behaviors over and over again
- anxiety surrounding a particular life event or experience that has occurred in the past, especially indicative of post-traumatic stress disorder (PTSD)

A panic attack is a sudden onset of fear or distress that peaks in minutes and involves experiencing at least four of the following symptoms:
- shaking or trembling
- feeling shortness of breath or smothering
- sensation of choking
- chest pains or tightness
- nausea or gastrointestinal problems
- dizziness, light-headedness, or feeling faint
- feeling hot or cold
- numbness or tingling sensations (paresthesia)
- feeling detached from oneself or reality, known as depersonalization and derealization
- fear of "going crazy" or losing control
- fear of dying

There are some symptoms of anxiety that can happen in conditions other than anxiety disorders. This is usually the case with panic attacks. The symptoms of panic attacks are similar to those of heart disease, thyroid problems, breathing disorders, and other illnesses. As a result, people with panic disorder may make frequent trips to emergency rooms or doctor's offices. They may believe they are experiencing life-threatening health conditions other than anxiety.

There are several types of anxiety disorders; these include:

Agoraphobia
People who have agoraphobia have a fear of certain places or situations that make them feel trapped, powerless, or embarrassed. These feelings lead to panic attacks. People with agoraphobia may try to avoid these places and situations to prevent panic attacks.

Generalized anxiety disorder (GAD)
People with GAD experience constant anxiety and worry about activities or events, even those that are ordinary or routine. The worry is greater than it should be given the reality of the situation. The worry causes physical symptoms in the body, such as headaches, stomach upset, or trouble sleeping.

Obsessive-compulsive disorder (OCD)
OCD is the continual experience of unwanted or intrusive thoughts and worries that cause anxiety. A person may know these thoughts are trivial, but they will try to relieve their anxiety by performing certain rituals or behaviors. This may include hand washing, counting, or checking on things such as whether or not they've locked their house.

Panic disorder
Panic disorder causes sudden and repeated bouts of severe anxiety, fear, or terror that peak in a matter of minutes. This is known as a panic attack. Those experiencing a panic attack may experience:
- feelings of looming danger
- shortness of breath
- chest pain
- rapid or irregular heartbeat that feels like fluttering or pounding (palpitations)
Panic attacks may cause one to worry about them occurring again or try to avoid situations in which they've previously occurred.

Post-traumatic stress disorder (PTSD)
PTSD occurs after a person experiences a traumatic event such as:
- natural disaster
Symptoms include trouble relaxing, disturbing dreams, or flashbacks of the traumatic event or situation. People with PTSD may also avoid things related to the trauma.

Selective mutism
This is an ongoing inability of a child to talk in specific situations or places. For example, a child may refuse to talk at school, even when they can speak in other situations or places, such as at home. Selective mutism can interfere with everyday life and activities, such as school, work, and a social life.

Separation anxiety disorder
This is a childhood condition marked by anxiety when a child is separated from their parents or guardians. Separation anxiety is a normal part of childhood development. Most children outgrow it around 18 months. However, some children experience versions of this disorder that disrupt their daily activities.

Specific phobias
This is a fear of a specific object, event, or situation that results in severe anxiety when you're exposed to that thing. It's accompanied by a powerful desire to avoid it. Phobias, such as arachnophobia (fear of spiders) or claustrophobia (fear of small spaces), may cause you to experience panic attacks when exposed to the thing you fear.

Doctors don't completely understand what causes anxiety disorders. It's currently believed certain traumatic experiences can trigger anxiety in people who are prone to it. Genetics may also play a role in anxiety. In some cases, anxiety may be caused by an underlying health issue and could be the first sign of a physical, rather than mental, illness.

A person may experience one or more anxiety disorders at the same time. Anxiety may also accompany other mental health conditions such as depression or bipolar disorder. This is especially true of generalized anxiety disorder, which most commonly accompanies another anxiety or mental condition.

It's not always easy to tell when anxiety is a serious medical problem versus a bad day causing you to feel upset or worried. Without treatment, your anxiety may not go away and could worsen over time. Treating anxiety and other mental health conditions is easier early on rather than when symptoms worsen.

You should visit your doctor if:
- you feel as though you're worrying so much that it's interfering with your daily life (including hygiene, school or work, and your social life)
- your anxiety, fear, or worry is distressing to you and hard for you to control
- you feel depressed, are using alcohol or drugs to cope, or have other mental health concerns besides anxiety
- you have the feeling your anxiety is caused by an underlying mental health problem
- you are experiencing suicidal thoughts or are performing suicidal behaviors (if so, seek immediate medical assistance by calling 911)

The Healthline FindCare tool can provide options in your area if you don't already have a doctor.

If you've decided you need help with your anxiety, the first step is to see your primary care doctor. They can determine if your anxiety is related to an underlying physical health condition. If they find an underlying condition, they can provide you with an appropriate treatment plan to help alleviate your anxiety.

Your doctor will refer you to a mental health specialist if they determine your anxiety is not the result of any underlying health condition. The mental health specialists you will be referred to include a psychiatrist and a psychologist. A psychiatrist is a licensed doctor who is trained to diagnose and treat mental health conditions, and can prescribe medications, among other treatments. A psychologist is a mental health professional who can diagnose and treat mental health conditions through counseling only, not medication.

Ask your doctor for the names of several mental health providers covered by your insurance plan. It's important to find a mental health provider you like and trust. It may take meeting with a few for you to find the provider that's right for you.

To help diagnose an anxiety disorder, your mental healthcare provider will give you a psychological evaluation during your first therapy session. This involves sitting down one-on-one with your mental healthcare provider. They will ask you to describe your thoughts, behaviors, and feelings. They may also compare your symptoms to the criteria for anxiety disorders listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) to help arrive at a diagnosis.

Finding the right mental healthcare provider
You'll know your mental healthcare provider is right for you if you feel comfortable talking with them about your anxiety. You'll need to see a psychiatrist if it's determined that you need medication to help control your anxiety. It's sufficient for you to see a psychologist if your mental healthcare provider determines your anxiety is treatable with talk therapy alone.

Remember that it takes time to start seeing results of treatment for anxiety. Be patient and follow the directions of your mental healthcare provider for the best outcome. But also know that if you feel uneasy with your mental healthcare provider or don't think you're making enough progress, you can always seek treatment elsewhere. Ask your primary care doctor to give you referrals to other mental healthcare providers in your area.

At-home anxiety treatments
While taking medication and talking with a therapist can help treat anxiety, coping with anxiety is a 24-7 task. Luckily there are many simple lifestyle changes you can make at home to help further alleviate your anxiety.

Get exercise. Setting up an exercise routine to follow most or all days of the week can help reduce your stress and anxiety. If you are normally sedentary, start off with just a few activities and continue adding more over time.

Avoid alcohol and recreational drugs. Using alcohol or drugs can cause or increase your anxiety. If you have trouble quitting, see your doctor or look to a support group for help.

Stop smoking and reduce or stop consuming caffeinated drinks. Nicotine in cigarettes and caffeinated beverages such as coffee, tea, and energy drinks can make anxiety worse.

Try relaxation and stress management techniques. Meditating, repeating a mantra, practicing visualization techniques, and doing yoga can all promote relaxation and reduce anxiety.

Get enough sleep. A lack of sleep can increase feelings of restlessness and anxiety. If you have trouble sleeping, see your doctor for help.

Stick to a healthy diet. Eat plenty of fruits, vegetables, whole grains, and lean protein such as chicken and fish.

Coping and support
Coping with an anxiety disorder can be a challenge. Here are some things you can do to make it easier:

Be knowledgeable. Learn as much as you can about your condition and what treatments are available to you so you can make appropriate decisions about your treatment.

Be consistent. Follow the treatment plan your mental healthcare provider gives you, taking your medication as directed and attending all of your therapy appointments. This will help keep your anxiety disorder symptoms away.

Know yourself. Figure out what triggers your anxiety and practice the coping strategies you created with your mental healthcare provider so you can best deal with your anxiety when it's triggered.

Write it down. Keeping a journal of your feelings and experiences can help your mental healthcare provider determine the most appropriate treatment plan for you.

Get support. Consider joining a support group where you can share your experiences and hear from others who deal with anxiety disorders. Associations such as the National Alliance on Mental Illness or the Anxiety and Depression Association of America can help you find an appropriate support group near you.

Manage your time intelligently. This can help reduce your anxiety and help you make the most of your treatment.

Be social. Isolating yourself from friends and family can actually make your anxiety worse. Make plans with people you like spending time with.

Shake things up. Don't let your anxiety take control of your life. If you feel overwhelmed, break up your day by taking a walk or doing something that will direct your mind away from your worries or fears.
<urn:uuid:b541a3ad-3a17-4a60-bc95-35f93e1f421d>
CC-MAIN-2024-10
https://www.healthline.com/health/anxiety-symptoms?utm_source=ReadNext
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475238.84/warc/CC-MAIN-20240301093751-20240301123751-00592.warc.gz
en
0.940829
2,636
3.109375
3
Australia's energy markets are on the cusp of rapid change, but it is not just the prospect of individuals quitting the grid that represents the biggest challenge to industry incumbents: it's the possible defection of whole towns and communities.

The creation of micro-grids is seen by many leading players as an obvious solution to Australia's soaring electricity costs, where the grid has to cover huge areas at the cost of massive cross-subsidies that support it. The major network operators in Queensland, NSW, South Australia and Western Australia see micro-grids as an obvious solution to the challenge and cost of stringing networks out, sometimes more than 1,000km away from the source of generation. In Western Australia and Queensland, these subsidies amount to more than $500 a household. The cost of service to regional consumers in Queensland is far above the cost of service to those in the south-east corner. To address this, these states are proposing to take some small communities and towns, like Ravensthorpe in Western Australia, off the grid.

In New South Wales, some towns are taking the initiative themselves. In the Northern Rivers region, the township of Tyalgum revealed it is considering a micro-grid that would allow it to largely, or entirely, look after its own energy needs. Indeed, the whole Byron shire is considering micro-grids as part of its efforts to become "zero net emissions" within the next decade, and to source 100 per cent of its electricity needs from renewables.

But micro-grids are not just about grid defection. While it will make sense for those towns and communities at the edge of the network to become self-sufficient and disconnect entirely, most micro-grids will remain connected to the network, helping to reshape a centralised grid into one focused on more efficient, decentralised renewable power generation sources and storage.

Warner Priest, the head of emerging technologies at the Australian offices of German energy giant Siemens, says micro-grids are the innovative solution to our future smart grid needs. In fact, he notes, they were the original model for shared generation, but like electric vehicles they were swept aside by the push to big, centralised, fossil fuel generation, transmission and distribution. Now, through massive improvements in technology, it is becoming easier for remote and off-grid communities to look after their own energy needs without relying heavily on costly, imported energy derived from centralised fossil fuel sources. New sub-divisions may find it more cost-effective to never connect to the grid, and micro-grids could also be useful within major cities, addressing areas where the network is constrained by inadequate or end-of-life network assets. And within five to seven years, Priest says, these micro-grids could be completely renewable as new technologies such as on-site renewable hydrogen production become mainstream, replacing the non-renewable gas and diesel generation that currently fills in when renewable energy sources are not available.

Siemens Australia is drawing up plans for one 50MW micro-grid in Australia that would – ultimately – include up to 10,000 homes. It would comprise some 40MW of rooftop solar (around 4kW per home), an array of centralised and decentralised battery storage, and fossil-fuelled gas generators, which could – within a few years – be replaced by renewable gas fuel such as hydrogen. The attraction comes through cost, resilience, reliability and efficiency.
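As a rough sanity check on the sizing just described, the sketch below works through the arithmetic in Python. The 10,000 homes, roughly 4kW of rooftop solar per home and the resulting 40MW come from the article; the capacity factor and average household demand figures are illustrative assumptions only, not Siemens data.

```python
# Back-of-the-envelope check of the micro-grid sizing described above.
# HOMES and SOLAR_KW_PER_HOME come from the article; the capacity factor
# and average household demand below are illustrative assumptions only.

HOMES = 10_000
SOLAR_KW_PER_HOME = 4.0          # ~4 kW of rooftop PV per home
SOLAR_CAPACITY_FACTOR = 0.20     # assumed average output vs. installed capacity
AVG_HOUSEHOLD_LOAD_KW = 0.75     # assumed average (not peak) household demand

installed_solar_mw = HOMES * SOLAR_KW_PER_HOME / 1000
average_solar_mw = installed_solar_mw * SOLAR_CAPACITY_FACTOR
average_demand_mw = HOMES * AVG_HOUSEHOLD_LOAD_KW / 1000

print(f"Installed rooftop solar: {installed_solar_mw:.0f} MW")   # ~40 MW
print(f"Average solar output:    {average_solar_mw:.1f} MW")
print(f"Average community load:  {average_demand_mw:.1f} MW")
print(f"Average gap for storage/backup to cover: "
      f"{max(average_demand_mw - average_solar_mw, 0):.1f} MW")
```

Even this toy calculation makes the design point clear: installed solar capacity can comfortably exceed average demand, yet storage and dispatchable backup are still needed because output and load rarely coincide hour by hour.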
Fossil fuels burned at the point of consumption are two to three times more efficient than those burned at centralised power stations. That means more energy is harnessed from the equivalent fossil fuel, with around 50 per cent of that energy taking the form of thermal energy that can be used for heating and/or cooling.

Priest says micro-grids are about integrating and balancing multiple loads and distributed generation resources within a smart micro distribution grid, using powerful SCADA control software (micro-grid management systems), residential solar, wind energy, battery storage and other types of renewables and storage – such as hot water systems – ensuring that the use of fossil fuel gas and diesel is kept to a minimum. These micro-grids currently run with the support of fossil fuels, but they will move to renewable fuels with the introduction of new technologies such as on-site production of renewable hydrogen using PEM electrolyzer technology and, of course, battery storage.

Priest says the idea of micro-grids is not about displacing traditional distribution and transmission networks; it's about encompassing these new energy cells as distributed energy sources within the incumbent networks, with the ability to wheel power to remote energy consumers connected to the existing grid. Some micro-grids will remain islanded from the main utility grid, but most will retain some form of connection to allow bi-directional flows of energy – whether that is drawing on cheap energy from the grid or providing support to the existing distribution grid. "With DC coupled micro-grids, they would look like large 50MW batteries to the utility distribution grid," Priest says. That means they will have a dual role: being able to participate in the wholesale energy market, selling energy to the networks when optimal, and, within the micro-grid, retailing energy to its consumers, topping up their own requirements from the utility distribution grid when wholesale energy is cheap.

Article sourced from Renew Economy
<urn:uuid:efb2c121-c5af-4d7d-8417-c1e6669e9bc7>
CC-MAIN-2019-47
http://www.driftwind.com.au/blog/australias-energy-future-could-be-network-of-renewable-micro-grids
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671249.37/warc/CC-MAIN-20191122092537-20191122120537-00554.warc.gz
en
0.953391
1,187
2.734375
3
Girls who have an absent father are more likely to undergo puberty at an early age. This is according to a recent study conducted by scientists at the University of California in Berkeley. The report revealed that girls often developed pubic hair and breasts earlier when their biological father did not live with them. The findings of the research still held true when other factors, such as weight, were included in the equation.

Julianna Deardorff, lead author of the study and assistant professor of maternal and child health at the university, commented: "While overweight and obesity alter the timing of girls' puberty, those factors don't explain all of the variance in pubertal timing. The results from our study suggest that familial and contextual factors - independent of body mass index - have an important effect on girls' pubertal timing."

However, the study, which was published in the Journal of Adolescent Health, did not detect the same association outside of higher income families.
<urn:uuid:f3001d62-4778-4619-a208-0c43f1e48670>
CC-MAIN-2015-35
http://www.privatehealth.co.uk/news/early-puberty-linked-absent-father-80930/
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167576.40/warc/CC-MAIN-20150827031247-00113-ip-10-171-96-226.ec2.internal.warc.gz
en
0.975639
205
3.15625
3
pH Test For Reflux

What is pH testing?
This test measures the amount of acid that refluxes from the stomach into the esophagus during a 24-hour period. The test is commonly used to identify the causes of heartburn, especially in patients who have failed medical therapy. It is also used on patients who are experiencing unusual symptoms of GERD such as chest pain, cough, asthma, sore throat, etc., and it can be used to evaluate the effectiveness of any treatments you might be on for heartburn or GERD. In short, pH testing can be used in two ways: first, to identify the cause of your discomfort, and second, to see if the solution recommended for your condition is working effectively for you.

How do I get this test done?
This test is done right at Your G.I. Center. Our well-trained staff will pass a thin, soft, flexible pressure-sensitive tube with a pH sensor at the tip through your nostril to the back of the throat and into the stomach. The probe is plugged into a small monitor that you wear on a belt, which records pH data for 24 hours, detecting how much acid is refluxing into the esophagus. With the touch of a button on the monitor, it will record the times you eat, lie down, and have symptoms of reflux such as heartburn. All of these data will be reviewed to see if acid reflux is the cause of your symptoms. Some medications, such as acid suppression medications, may need to be discontinued before the procedure. However, in special circumstances, your doctor may ask you to continue these medications during the monitoring period to see if they are effective.

What should I do during the monitoring period?
Try to follow your usual routine during the monitoring period. Many people tend to eat less or change their activities during the monitoring period, but this will affect the accuracy of the test since acid production will be different, so do the same activities as you normally would. However, do not take a bath or shower during the monitoring period, as the equipment cannot get wet.

Where can I have this test done?
This test can be done at Your G.I. Center. All the employees are well trained and work under the direction of Dr. Meah and Dr. Le when performing any labs. We do this test at our facility and interpret the results at the same location. Your G.I. Center will provide you with instructions for the lab and give you the results. Your G.I. Center is your one-stop location for resolving all your gastrointestinal needs; go under the Location tab to get the office directions.
<urn:uuid:23485046-d9a5-4d67-85de-a8aac0904157>
CC-MAIN-2020-50
https://yourgicenter.com/ph-test-for-reflux/
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141186414.7/warc/CC-MAIN-20201126030729-20201126060729-00487.warc.gz
en
0.950175
538
2.921875
3
What led a group of nine ordinary Americans—including an artist, a nurse, and three clergymen—to seize several hundred draft records from a Selective Service office in Maryland and burn them in a nearby parking lot? Well, it was 1968 and many Americans had had enough of the Vietnam war. The nine Catholics, who came to be known as the Catonsville 9 took action. It was an action that they had been considering for some time. According to the Maryland Historical Society, “[Their] act of civil disobedience intensified protest against the draft, prompted debate in households in Maryland and across the nation, and stirred angry reaction on the part of pro-war Americans. It also propelled the nine into the national spotlight. The Catonsville action reflected not only the nature of the Vietnam antiwar movement but also the larger context of social forces that were reshaping American culture in the 1960s.” The Maryland Historical Society has tapped the legacy of the iconic Catonsville Nine protest as the subject for an exhibition opening May 12th. Here’s your invitation: Join us on May 12th for the opening of the exhibit Activism & Art: the Catonsville Nine, 50 Years Later, an exhibit that will examine one of the most iconic and written-about acts of political protest in 20th century American history. This exhibit will explore their motivations, consider the consequences of their action, and contextualize this protest in our present turbulent political climate. The opening events are free but reservations are required. Investigation of Flame Film Screening and Community Discussion May 12, 2018 – 5:00pm Activism & Art: the Catonsville Nine, 50 Years Later, Exhibit Opening & Reception May 12, 2018 – 7:00pm You can learn more about the Catonsville 9 via the Enoch Pratt Library’s digital collection: c9.digitalmaryland.org About the MdHS: Founded in 1844, the Maryland Historical Society (MdHS) is the state’s oldest continuously operating cultural institution. In keeping with the founders’ commitment to preserve the remnants of Maryland’s past, MdHS remains the premier institution for state history. With over 350,000 objects and seven million books and documents, this institution now serves upward of 100,000 people through its museum, library, press, and educational programs. The Maryland Historical Society 201 West Monument Street Baltimore, MD 21201 410-685-3750
<urn:uuid:217faba9-e225-4fc6-9f26-a74c7fac4185>
CC-MAIN-2020-40
https://brockelpress.com/2018/05/02/50-years-ago-this-month-the-catonsville-9-burned-draft-records-in-a-suburban-baltimore-parking-lot/
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00664.warc.gz
en
0.9463
514
2.6875
3
About this book

Evolution, during the early nineteenth century, was an idea in the air. Other thinkers had suggested it, but no one had proposed a cogent explanation for how evolution occurs. Then, in September 1838, a young Englishman named Charles Darwin hit upon the idea that 'natural selection' among competing individuals would lead to wondrous adaptations and species diversity. Twenty-one years passed between that epiphany and publication of On the Origin of Species. The human drama and scientific basis of Darwin's twenty-one-year delay constitute a fascinating, tangled tale that elucidates the character of a cautious naturalist who initiated an intellectual revolution.

Also published under the title "The Reluctant Mr Darwin" in the US.

David Quammen attended Yale and Oxford, is the author of several critically acclaimed science books and for fifteen years wrote a column 'Natural Acts' for OUTSIDE magazine, making natural science understandable, relevant and accurate for readers and scientists alike. He is a three-time winner of the National Magazine Award in the United States, most recently for a NATIONAL GEOGRAPHIC story on Darwin. He lives in Bozeman, Montana, with his wife.

Out of Print
304 pages, no illustrations

'a complete delight... this captivating biographical essay... is fresh and original.' -- Janet Browne, SCIENCE magazine (USA)
'very readable... Part biography, part historical account, this book expertly teases apart Darwin's intellectual journey of discovery...' BBC FOCUS
'Anybody with the slightest interest in biology will want to devour every page of this exquisite book.' FINANCIAL TIMES
'an easy read that makes the perfect primer to understanding the man.' -- Martin Brookes, NEW SCIENTIST
'a startlingly original intellectual biography of the shy, cautious genius who permanently transformed our ideas about life on Earth.' LONDON REVIEW OF BOOKS
'Quammen's stated aim was to produce a "pleasantly readable" account. With this he has achieved much more.' GEOGRAPHICAL
<urn:uuid:67c5eef5-5e9d-4002-9d61-3a60d8f1ba0b>
CC-MAIN-2022-21
https://www.nhbs.com/the-kiwis-egg-book
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00545.warc.gz
en
0.921255
443
2.75
3
Due to lack of time for outdoor shopping and other such things, many people around the world prefer online shopping and also carry out their financial transactions online. Many websites offer special discounts on purchases and have a wide variety of products to choose from, so they attract many customers. With all these advantages, however, come security issues associated with E-commerce websites. A few things to consider while developing a secure E-Commerce website are mentioned below.

Securing Data: When a user fills out a form on a website and submits it, many websites transfer this information in plain text. That means all page contents, images, form data, etc. are transferred as plain text that is easily readable by humans. Whenever any sensitive information is to be transferred, always use HTTPS (Hypertext Transfer Protocol Secure). This will help to transfer data in a more secure way.

Securing Payments: Always use a payment gateway for any type of online transaction. Store sensitive customer payment details securely on a payment gateway account rather than on your website.

SSL Certificates: These are known as Secure Sockets Layer certificates. A web hosting company provides this certificate and in most cases charges for it annually. Once it is installed on a website, it encrypts all data on a web page. The URLs of web pages where this certificate is installed start with https://, and an additional sign of a secured web page, such as a closed padlock icon, is seen. All information transferred is encrypted, appears in a format unreadable by humans, and is sent to the web server. This information can be decrypted (decoded) only at the two ends: one is your computer and the other is the web server.

User Input: It is important to validate all user input to prevent common hacker attacks such as SQL injection and XSS (cross-site scripting).

Passwords: Do not allow users to enter short passwords (with too few characters) when they register on the website or in any other scenario where a password is created. Make it mandatory to create a password that is a combination of alphanumeric characters and also special characters. If possible, make it mandatory for users to change their passwords after a certain time period.

Securing Firewall on Web Server: When an E-Commerce website is hosted on a web server, it becomes necessary to configure a firewall to protect it from outside traffic. A firewall is a network device used to block certain kinds of network traffic, forming a barrier between trusted and untrusted networks. Firewalls can block traffic based on IP addresses, port numbers and incoming emails. A properly configured firewall allows only the good traffic that is permitted.

Security is the most important aspect to consider while developing an E-Commerce website, and it should never be compromised. The points mentioned above will not only help users stay secure on a website but will also help secure the website itself. A customer will visit a website, carry out transactions and purchase products on it only if it is secure.

Article Source: http://EzineArticles.com/7505144
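To ground the advice on user input and passwords above, here is a minimal, framework-free Python sketch. It is illustrative only: the table name, column names and the exact password policy are assumptions, and a real production site would normally lean on a vetted web framework and a dedicated password-hashing library rather than hand-rolled code.

```python
import hashlib
import os
import re
import sqlite3

# Assumed policy: at least 10 characters mixing letters, digits and specials.
PASSWORD_RE = re.compile(r"^(?=.*[A-Za-z])(?=.*\d)(?=.*[^A-Za-z0-9]).{10,}$")

def password_is_strong(password: str) -> bool:
    """Reject short or single-class passwords, as recommended above."""
    return bool(PASSWORD_RE.match(password))

def hash_password(password: str):
    """Salted PBKDF2 hash so the plain-text password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, digest

def create_user(conn: sqlite3.Connection, username: str, password: str) -> None:
    if not password_is_strong(password):
        raise ValueError("Password does not meet the complexity policy.")
    salt, digest = hash_password(password)
    # Parameterized query: user input never becomes part of the SQL string,
    # which is the basic defence against SQL injection mentioned above.
    conn.execute(
        "INSERT INTO users (username, salt, pw_hash) VALUES (?, ?, ?)",
        (username, salt, digest),
    )
    conn.commit()
```

The same principle (treat every piece of user input as untrusted and keep secrets out of plain text) applies whatever language or framework the site is actually built in.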
<urn:uuid:45ad4e9f-db67-4560-90a9-7f8379e808de>
CC-MAIN-2019-51
http://www.ynl.com.au/how-to-develop-a-secure-e-commerce-website/
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482038.36/warc/CC-MAIN-20191205190939-20191205214939-00106.warc.gz
en
0.933872
618
2.640625
3
Download the entire document: Goal setting resources for class (pdf)

In order to prepare students for the module on goal-setting, it is useful to have them do some pre-work prior to the class period. Either have them bring it to class, or put each question on the discussion board of your class site in mycourses and have them respond to it. This can be done anonymously to encourage response.

1) Explain the expected learning outcomes for this module:
- Students demonstrate an awareness of goal setting processes by establishing a goal for the academic year.
- Students: a) articulate a vision of their self-management behaviors and skills post freshman year; b) establish goals; c) plan objectives; d) implement action plans; and e) communicate the above to others.
- Students assess their progress toward their stated goal and develop appropriate steps in order to achieve it.

2) Ice Breaker: Tape on Wall (or other)
The instructor will give everyone two pieces of painter's tape and ask them to go to the wall and stick one piece as high as they can. After they have completed the task, the instructor will ask how they could make it go higher. Stand on a chair, get someone taller to help, etc... We are often stuck in our own context or perception... We want students to dream bigger... Ask the students to write a dream or goal they hope to achieve while at UMass D on the other piece of tape, and then the class will mingle around a few moments looking at the dreams and sharing their own.

3) Lecture Content (SMART Goals, Vision, Goals, Objectives, Action Plans). A PowerPoint presentation is provided and points to readings in this module that will provide content for you to talk to students about...

4) In-Class Discussion - Semester in Review: There are suggestions provided in the PowerPoint for class discussion/activities. If time allows, you can also have students reflect on where they have been this semester and where they are going next semester through the lens of "five variables correlated to academic and career success".

5) Personal Development Plan: This is a homework assignment that students should have a few days to complete. Tell them to keep the original and hand the copy in to you. This is the only assignment for this module and it is important that they complete it.

5 Variables that contribute to academic and career success

Looking back and looking ahead are actions that will help you determine what your next steps should be. So before you lay out your Personal Development Plan for following semesters, reflect on the questions below related to your first (and your next) semester in college. It is important to incorporate the five variables below into your planning because they are significantly correlated to achieving success in college and careers.

1. PLACE: Connect with supportive cultures – home, school, community.
- Think about transition... What culture shock or challenges did you face adjusting to the college culture of UMass Dartmouth?
- What connections did you make at UMass Dartmouth that expanded your comfort zone?
- What steps do you need to take to increase your sense of connectedness at UMass Dartmouth?

2. POWER: Identify, develop and use your personal and academic strengths.
- What internal and external resources (e.g. strengths) supported and motivated you?
- How did you utilize, develop and improve on the resources available to you?
- What resources do you need to use and/or develop for further support and motivation?

3. PURPOSE: Express your unique values, strengths and mission through purposeful activities.
- State your mission statement and explain how it relates to your purpose in college and in life.
- How do your values/strengths affirm and demonstrate your core beliefs and passions?
- What courses, work experience, co-curricular/community activities and relationships:
  - Have helped you demonstrate your skills and strengths already?
  - Will help you express who you are and develop whom you want to be?

4. PASSION: Visualize exciting/energizing majors, careers, and lifestyles to inspire your long-term goals.
- How and why have your major/career interests and goals changed or stayed the same?
- How have your community and personal interests/goals changed or stayed the same?
- What steps do you need to take to move closer to your goals in these four areas: major, career, personal and community?

5. PREPARATION: Develop competencies by setting short-term goals and implementing action plans.
- What were the biggest obstacles you faced in achieving success?
- What strategies were most helpful in overcoming these obstacles?
- What challenges do you anticipate facing and what strategies will help you achieve success?

Goal Setting Pre-work

The week before the Goal Setting Session, the instructor should ask students to reflect on the following questions. Students will post their thoughts on a central web board.
- Why am I in college?
- How am I making a successful transition to college?
- Who am I?
- What is my life purpose?
- Where am I going?
- How do I get there?

Vision: Image of the ideal. It is future looking, inspirational, and creates the most desirable future.

Mission: A brief, clear, concise statement of the reasons for an organization's existence, the purpose and function it desires to fulfill, its primary customer base, and the primary methods through which it intends to fulfill the purpose. It is somewhat uplifting but more practical than a vision.

Goals: Statements of desired future states, long-term and possible, and based on mission and vision. Typically few in number, with a target date.

Objectives: Short-term, specific, measurable outcome statements.

Outcomes: What a person will know or be able to do following an activity or event.

Action Plans: Series of short-term tasks to be completed that will result in the achievement of the objectives or outcomes.
<urn:uuid:364da50b-fecd-4064-8899-b1e83b463ede>
CC-MAIN-2017-22
http://www.umassd.edu/fycm/goalsetting/resources/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463610342.84/warc/CC-MAIN-20170528162023-20170528182023-00426.warc.gz
en
0.939969
1,244
3.078125
3
The Jewish community in Rome is known to be the oldest Jewish community in Europe and also one the oldest continuous Jewish settlements in the world, dating back to the first century BCE. - The Classic Period - The Christian Empire - The Middle Ages - The Renaissance - The Jewish Ghetto - Late 19th-Early 20th Century - Rome During World War II - Rome Today - Jewish Tourist Sites - Former Jewish Cemetery of Rome - Jewish Museum of Rome The Classic Period The Jewish community in Rome is known to be the oldest Jewish community in Europe and also one the oldest continuous Jewish settlements in the world, dating back to 161 B.C.E. when Jason ben Eleazar and Eupolemus ben Johanan came as envoys of Judah Maccabee. Other delegations were sent by the Hasmonean rulers in 150 and 139 B.C.E. After the Romans invaded Judea in 63 B.C.E., Jewish prisoners of war were brought to Rome as slaves, Jewish delegates came to Rome on diplomatic missions and Jewish merchants traveled to Rome seeking business opportunities. Many of those who visited Rome stayed and the Jewish population began The Arch of Titus was built by the Roman commander to commemorate his Judean victory in 70 C.E. It shows the triumphal parade with the Temple vessels carried aloft. the treatment of Jews by the Romans in Palestine was often harsh, relations with the rulers in Rome were generally much better. Julius Caesar, for example, was known to be a friend of the Jews; he allowed them to settle anywhere in the Roman Empire. According to historians, when Caesar was assassinated by Brutus in 44 B.C.E., Roman Jews spent day and night at Caesars tomb, weeping over his death. His successor, Augustus, also acted favorably toward the Jews and even scheduled his grain distribution so that it would not interfere with the Jewish Sabbath. Two synagogues were founded by slaves who had been freed by Augustus (14 C.E.) and by Agrippa (12 B.C.E.). Twice in the Classic period, Jews were exiled from Rome, in 19 C.E. and in 49-50 C.E. The first exile took place due to the defrauding of an aristocratic Roman woman Fulvia, who had been attracted to Judaism. The second exile occurred because of disturbances caused by the rise of Christianity. It is not certain, though, that these measures were fully carried out or that the period of exile lasted a long time. During the Roman-Jewish wars in Palestine in 66-73 and 132-135, Jewish prisoners of war were brought to Rome as slaves. A number of the oldest Jewish Roman families trace their ancestry in the city to this period. Jewish scholars from Israel came to Rome in 95-96. In 212, Caracella granted the Jews the privilege of becoming From the second half of the first century C.E., the Roman Jewish community became firmly established. A majority of the community were shopkeepers, craftsman and peddlers, but other Jews became poets, physicians and actors. Satiric poets of the time, such as Juvenal and Martial, depicted the raucous activities of the Jewish peddlers and beggars in their poetry. Evidence has been found that twelve synagogues were functioning during this period (although not at the same time). Unfortunately, none of those synagogues have The Christian Empire The Jewish position in Rome began to deteriorate during the reign of Constantine the Great (306-336), who enacted laws limiting the rights of Jews as citizens. Jewish synagogues were destroyed by Christian mobs in 387-388 and in 493-526 (during the reign of Theodoric). 
When Rome was captured by Vandals in 455, spoils of the Jerusalem Temple were taken to Africa. Christianity became the official religion of the Roman Empire, emperors further limited the civil and political rights of the Jews. Most of the imperial laws dealing with the Jews since the days of Constantine are found in the Latin Codex Theodosianius (438) and in the Latin and Greek code of Justinian (534). Some of the relevant decrees in these codes include prohibitions against making proselytes, intermarriage, owning slaves (slave labor was very common and this prohibition severely restricted the economic life of the Jews), holding any esteemed position in the Roman state, building new synagogues and testifying against Orthodox Christians in court. During this period there was a revival of Hebrew studies in Rome, centered around the local yeshiva, Metivta de Mata Romi. A number of well-known scholars, Rabbi Kalonymus b. Moses and Rabbi Jacob "Gaon" and Rabbi Nathan b. Jehil (who wrote a great talmudic dictionary, the Arukh), contributed to Jewish learning and development. Roman Jewish traditions followed those practiced in the Land of Israel and the liturgical customs started in Rome spread throughout Italy and the rest of the world. The Middle Ages the 1200's to the mid 1400's, treatment of the Jews varied from pope to pope. For example, in 1295, Pope Bonifice VIII humiliated a visiting Jewish delegation that was sent to congratulate him on his ascendancy; whereas, Pope Boniface IX (1389-1404) treated the Jews benevolently. He favored a succession of Jewish physicians and recognized the rights of Jews as citizens. On the other hand, Eugenius IV (1431-47) passed anti-Jewish legislation in the Council Jews of Rome fully participated in the flourishing economic and intellectual climate of the Renaissance. They became merchants, traders and bankers, as well as artisans. During the reign of Pope Alexander VI (1492-1503), however, a special tax was imposed on the Jews of Rome to pay for his military operations against the Turks. Later popes during the first half of the 16th Century were more sympathetic to the Jewish community than Alexander VI. The Medici Popes, Leo X (1513-1521) and Clement VII (1523-1534), treated the Jews well. Leo X abolished certain discriminatory levies, did not enforce the wearing of the badges Jews had been forced to put on in the 12th century and also sanctioned the establishment of the Hebrew Printing Press. Leo X, as well as other popes from this period, such as Sixtus IV, retained Jewish physicians in Rome. The Jewish Ghetto Map of the Jewish Ghetto in Rome During the Reformation, in 1555, Pope Paul IV decreed that all Jews must be segregated into their own quarters (ghettos), and they were forbidden to leave their home during the night, were banned from all but the most strenuous occupations and had to wear a distinctive badge a yellow hat. More than 4,700 Jews lived in the seven-acre Roman Jewish ghetto that was built in the Travestere section of the city (which still remains a Jewish neighborhood to this day) If any Jews wanted to rent houses or businesses outside the ghetto boundaries, permission was needed from the Cardinal Vicar. Jews could not own any property outside the ghetto. They were not allowed to study in higher education institutions or become lawyers, pharmacists, painters, politicians, notaries or architects. Jewish doctors were only allowed to treat Jewish patients. 
Jews were forced to pay an annual stipend to pay the salaries of the Catholic officials who supervised the Ghetto Finance Administration and the Jewish Community Organization; a stipend to pay for Christian missionaries who proselytized to the Jews and a yearly sum to the Cloister of the Converted. In return, the state helped with welfare work, but gave no money toward education or caring for the sick. These anti-Jewish laws were similar to those imposed by Nazi Germany on the Jews during World War II. the Reformation, talmudic literature as a whole was banned in Rome. On Rosh Hashana 1553, the Talmud and other Hebrew books were burned. Raids of the ghetto were common, and were conducted to insure that Jews did not own any "forbidden" books (any other literature besides the Bible and liturgy). It was forbidden to sing psalms or dirges when escorting the dead to their burial place. Every Saturday, a number of Jews were forced to leave the ghetto and listen to sermons delivered in local churches. Also, whenever a new Pope was ordained, the Jews presented him with a Torah scroll. Jews continued to live in the ghetto for almost 300 years. Late 19th - Early 20th Century In 1870, Italy was united as a nation under King Victor Emanuel, who decreed that the ghettos be dismantled and gave the Jews full citizenship. Following the end of the papal states, Jews fully integrated into Italian society. They comprised a significant percentage of the university teachers, generals and admirals. A number of Jews were involved in government and were close advisors of Mussolini; they convinced Mussolini to intervene in the First World War. Five Jews were among the original founders of the fasci di combattimento in 1919 and were active in every branch of the Fascist movement. Both Mussolinis biographers, Margharita Sarfatti, and his Minister of Finance, Guido Jung, were Jews. Rome During World War II In 1931, approximately 48,000 Jews lived in Italy. By 1939, up to 4,000 had been baptized, and several thousand other Jews chose to emigrate, leaving 35,000 Jews in the country. During the war, the Nazi pressure to implement discriminatory measures against Jews was, for the most part, ignored or enacted half-heartedly. Most Jews did not obey orders to be transferred to internment camps and many of their non-Jewish neighbors and government officials shielded them from the Nazis. Some Jews were interned in labor camps in Italy. After the north was occupied by the Germans in 1943, the Nazis wanted to deport Italian Jewry to death camps, but resistance from the Italian public and officials stymied their efforts. A gold ransom was extorted to stop the S.S. commanding officer in Rome from killing 200 Jews. Still, nearly 8,000 Italian Jews perished in the Holocaust, but this number was significantly less than in most countries in Europe. Roughly 80 percent of the Italian Jews survived the war. In 2000, a stone plaque was unveiled at the Tiburtina train station, the site of the deportations, to honor the memory of Rome's Jews, whom the Nazis deported from the city on Oct. 16, 1943. Today, a diverse community of 15,000 Jews lives in Rome (communita Ebraica di Roma). The Jewish communitys organization, based in Rome, the Unione delle Comunita Ebraiche Italiane, is directly involved in providing religious, cultural, and educational services and also represents the community politically. 
The monthly publication Shalom is the Roman community's key publication, and Rome also has Jewish cultural clubs and several other organizations. In 1987, the Jewish community obtained special rights from the Italian state allowing them to abstain from work on the Sabbath and to observe Jewish holidays. At least 13 synagogues can be found in Rome, including a special synagogue for the Libyan Jews who immigrated to Rome after the Six-Day War in 1967. Three of the thirteen synagogues are located under the same roof at Via Balbo 33 (Italki, Sephardic and Ashkenazi). The Italian chief rabbi officiates at the Great Synagogue of Rome and heads the country's rabbinical council. The continual presence of a Jewish community in Rome for more than two millennia has produced a distinctive tradition of prayer, comparable to the Sephardic or Ashkenazi traditions, called the Nusach Italki (Italian rite). The nusach has its own order of prayer and tunes. A number of synagogues in Rome, including the Great Synagogue, follow this tradition. Most synagogues in Italy are Sephardic. In October 2013, Rome's Jewish community gathered in the city's main synagogue to commemorate the October 16, 1943, roundup of Jews from the Rome Ghetto. The Jews were bound for Auschwitz; only a dozen survived. The community also discussed the passing of Nazi criminal Erich Priebke, who was convicted of crimes against humanity for the massacre at the Ardeatine Caves outside Rome, in which 335 civilians, mostly Jews, were slaughtered in cold blood.
Jewish Tourist Sites
Among the places worth visiting is Ostia Antica, the ancient seaport near Rome. Jews in Ostia were middle class and participated in a variety of trades, working as blacksmiths, tailors, butchers and actors. They were better off than the Jews in Rome and were permitted to build and maintain a fine synagogue. The synagogue of Ostia Antica is among the oldest in Europe and one of the oldest in the world. The remains of a 4th-century synagogue, constructed on the site of a synagogue from the 1st century B.C.E., and the catacombs in Rome were discovered in 1961-62. Near the entrance courtyard of the synagogue is an area that contained a large oven, storage jars and a marble-topped table decorated with menorahs. It is believed this was a kitchen and/or dining room. Next to this room is one that includes benches that might have been used as beds. Another interesting synagogue to visit is the Synagogue of Rome, on Lungotevere Cenci, which was built from 1874-1904, after the emancipation of the Italian Jews following Italy's unification. It has a unique Persian and Babylonian architectural design that contrasts with the rest of the city, which uses an ornamental baroque style. Inside the synagogue, a museum chronicles the history of Rome's Jews. It is possible to visit the old ghetto in the Trastevere section. The first stop in the ghetto should be the Museo del Folklore, which contains paintings depicting 19th-century Roman ghetto life. Also in the ghetto, on Via della Reginella, there is a narrow street lined with seven-story buildings; the ghetto was so small that Jews were forced to build upward. The place where Jews were sent for deportation during the German occupation is in the piazza (square) between Portico d'Ottavia and Tempio Maggiore. A plaque on one of the buildings reads, "On October 16, 1943, here began the merciless rout of the Jews. The few who escaped murder and many others, in solidarity, pray for love and peace from mankind and pardon and hope from God."
Many of the Christian churches offer magnificent artwork containing biblical subjects and themes. Among Michelangelo's many magnificent works in Rome is the statue of Moses in the Church of San Pietro in Vincoli. This is the sculpture with horns coming out of the head of Moses. The depiction was apparently based on a mistranslation of the Bible, which speaks of "rays" shining from Moses when he emerged from Sinai with the Ten Commandments; the Hebrew word for "rays" is similar to the word for "horns." One important relic that ties Rome and Israel together is the Arch of Titus (opposite the Roman Forum). It was built by the Roman commander to commemorate his Judean victory in 70 C.E. It shows the triumphal parade with the Temple vessels carried aloft. A replica of the arch is in Beth Hatefutsoth, the Museum of the Diaspora in Tel Aviv.
Former Jewish Cemetery of Rome
In 1645, Pope Urban VIII began construction on a defensive perimeter wall for Rome that was going to interfere with the city's main Jewish cemetery of Porta Portese. In April of that year, the new Pope Innocent X arranged for the Jewish Society of Charity and Dead to purchase an area of land in Cerchi, and the cemetery was moved to this plot. As this area began to fill up, in 1728 Pope Benedict XIII gave the Jewish Society permission to buy neighboring land in order to expand, and once this plot was full as well, Pope Pius VI in 1775 pressured another landowner to sell a plot for the expansion of the cemetery. All that remains of the cemetery is a road and a rose bed. In 1934, more than a century and a half later, the government of Rome expropriated all land in the area of the 250-year-old Jewish cemetery and began construction on a road that ran right through the heart of the burial ground. It was decided to transfer the cemetery to a part of the Campo Verano, and, working quickly to finish the road, the city exhumed and transferred bodies from the cemetery in a hurry, oftentimes working on Jewish holidays and without religious Jewish supervision. A total of 7,800 corpses were recovered from the cemetery, but unfortunately not all of the bodies were properly identified. Additionally, in the rush to finish the project, many bodies were recovered as late as two days before the road was inaugurated, while some areas were not searched and others were so haphazardly investigated that it is likely that thousands of corpses remain buried. In 1950, the city built a rose garden in the area, just a short distance from Circo Massimo and the Aventine hill. The president of the Jewish community at the time gave his consent for the construction of the Roseto Comunale of Rome on the condition that a single star be placed above the entrance to remind visitors of its sacred origin. This star is still present today. The other evidence of the Jewish connection to the site is the central staircase, which is in the shape of a menorah.
Jewish Museum of Rome
Over two millennia, the Jewish community of Rome has left behind some of the most compelling records and artifacts ever found in Europe. Many of these archaeological finds are showcased in Rome's newly renovated Jewish museum, the Museo Ebraico di Roma. Although the museum has been open and functioning since 1959, a new European Union-funded $2 million renovation gave the museum a complete remodeling. Instead of merely exhibiting artifacts, the museum incorporates them with photographs and documents to narrate the history of Rome's Jewish community, the oldest community of its kind in Europe.
The museum hosts several exhibits that highlight the Jewish connection to Rome. The Gallery of Antique Marbles is a collection of precious marbles from the synagogues of the Ghetto of Rome. The Gallery contains over 100 inscriptions and architectural elements that vary in size and content. According to the museum, "The subjects of the inscriptions vary but together they illustrate the social fabric, daily life and history of the Jewish Community and its presence in Rome. They commemorate donations from wealthy families and the purchase of cemetery plots. They forbid bringing leavened bread into areas where unleavened bread is baked and record the activities of the confraternities of charitable works. There are also family coats of arms decorating objects that the families donated." Another permanent exhibit is the textile collection, which contains around 800 quality textiles from the 15th to the 19th centuries. The museum has planned to build a Textile Preservation Center, which will house some of the museum's most important finds. The fabrics will be placed in air-tight storage containers to prevent exposure to dust and sunlight that could be harmful to them.
Sources: Bridger, David (ed.). The New Jewish Encyclopedia. Behrman House, Inc. Publishers, New York; Eban, Abba. Heritage: Civilization and the Jews. Summit Books, New York, 1984; Johnson, Paul. A History of the Jews. Harper & Row, New York, 1987; Lachter, Lewis Eric. "When in Rome, feast on beauty and history," Washington Jewish Week, May 4, 2000; Jewish Community in Rome; "Rome," Let's Go Europe 1990, Harvard Student Communities of the World; The Roseto Comunale of Rome, RomeTour.org; Museo Ebraico di Roma (December 14, 2005); Associated Press (October 16, 2013).
Photo Credits: Photos of medieval Jews, ghetto map, Ostia synagogue, Trastevere copyright © Jews and Synagogues, Edizioni Storti, Venezia, 1999; the remainder copyright © Mitchell Bard.
<urn:uuid:8bd28411-4f00-49ef-a614-948c34e222f4>
CC-MAIN-2014-41
http://www.jewishvirtuallibrary.org/jsource/vjw/Rome.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663754.5/warc/CC-MAIN-20140930004103-00239-ip-10-234-18-248.ec2.internal.warc.gz
en
0.958869
4,476
3.796875
4
Yoga is famous, and we all know about many Yoga poses; however, not many of us know the origin of these yoga poses and the early forms of Yoga. The popularization of yoga in the West is comparatively recent. Modern research in the field of Yoga has demonstrated the numerous benefits that Yoga has. There are many misconceptions about the origin of Yoga poses and the early forms of Yoga.
Evidence suggests that Patanjali is not the origin of Yoga
There are multiple verified sources and historical evidence of earlier practice of Yoga poses and earlier forms of Yoga.
Mohenjodaro Seals - Indus Valley Civilization, depicting Yoga
In fact, the earliest illustration we have of yoga is from the Mohenjodaro seals. Mohenjo-Daro is the remains of an ancient city of ancient India, located in what is now Pakistan. Mohenjo-Daro's parent city was Harappa in India. These civilizations have been dated from 3300 BC to 1300 BC.
Vedic Shastra and Yoga
Some see yoga's origins as being from the Vedic Shastras, or Vedic religious texts, which are the foundation of Indian Hinduism. The Vedic texts were created from 2500 BC, and the Rigveda is believed to have been completed by 1500 BC. The Rigveda is one of several Vedic texts. There are sacrificial prayers, incantations, and elements related to magic, to name a few aspects of the subject matter. These are now viewed symbolically, or philosophically, although they were presumably intended more literally at the time.
But the word "yoga" was discussed in the Bhagavad Gita – Krishna describes 4 types of Yoga
Yoga is also discussed in the Bhagavad Gita, where Krishna describes 4 types of Yoga:
- Selfless action – following one's soul path, one's dharma, first and foremost, without thinking of the outcome or the end result, and without being motivated by self-gain (Karma Yoga)
- Self-transcending knowledge (Jnana yoga)
- Psycho-physical meditation (Raja yoga)
- Devotion – loving service to the Divine Essence (Bhakti yoga)
The Bhagavad Gita is believed to have been written between the 5th and 2nd century BC.
Other misconceptions about Yoga
Many consider the practice of yoga to be restricted to Hatha Yoga and Asanas (postures). However, among the Yoga Sutras, just three sutras are dedicated to asanas. Fundamentally, hatha yoga is a preparatory process so that the body can sustain higher levels of energy. The process begins with the body, then the breath, the mind, and the inner self. Yoga is also commonly understood as a therapy or exercise system for health and fitness. While physical and mental health is a natural consequence of yoga, the goal of yoga is more far-reaching. "Yoga is about harmonizing oneself with the universe. It is the ancient wisdom of aligning individual self-consciousness with the greater reality, to achieve the highest level of perception and peace."
<urn:uuid:f2f85c5c-5a44-4dd4-9131-13331b3af3c3>
CC-MAIN-2019-51
https://indianyug.com/origin-of-yoga-poses-the-beginning-of-yoga/
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484815.34/warc/CC-MAIN-20191206050236-20191206074236-00374.warc.gz
en
0.957511
665
3.21875
3
Construction has a long-standing history of contributing tons of harmful gas emissions. Over the years, vast improvements have been made to reduce the damage to our ecosystems and atmosphere. But one material, in general, has remained unchanged — asphalt. Asphalt has a longstanding reputation for being environmentally friendly, and there are a couple of ways in which choosing asphalt means "going green."
WHY GREEN ASPHALT IS IMPORTANT
In today's society, we have two environments: the natural and the built. The natural environment recycles rainfall organically. The water is absorbed by the soil and filters into water systems like streams, lakes, ponds, or underground aquifers. A built system, by contrast, interrupts the natural process. The building materials can act like a sealant and prevent the rain from filtering appropriately. Instead, water levels rise because the water has nowhere to go, completely sidestepping the filtration nature intended. It can take with it contaminants that can cause issues within the system. Fortunately, there are stormwater management tools such as porous pavement. Site planners have been implementing this type of construction material since the 70s, and it has been proving its worth ever since. Through proper design and installation, porous asphalt not only provides a practical solution for managing stormwater but is also cost-effective and long-lasting. Proper installation will improve the quality of water, encourage appropriate infiltration, and sometimes remove the need for a detention basin. As the water passes through the pavement, many contaminants are removed, allowing the water to cycle through appropriate microbial action.
HOW POROUS ASPHALT WORKS
Success is giving the water the option to go somewhere. Other, non-porous materials will allow the water to sit, which is terrible for the pavement and the environment. The building materials utilize an open-graded system, giving the water the ability to seep down to a stone bed. From there it is absorbed into the soil. The key to success is the depth of the subbase. At 18 to 36 inches, the stone bed is thick enough that even with heavy rains, the water will not rise back to the surface. When it's not sitting on the surface, water doesn't have the opportunity to mix with harmful products like gas or oil.
IMPROVING WATER QUALITY
We've been talking a lot about how beneficial and environmentally friendly the product is, but let's take a closer look. Research on the product has been conducted since its inception. The University of New Hampshire conducted studies that showed large removal rates for "total suspended solids" such as metals, oil, and grease. The treatment performance of porous pavement has been so good that it "consistently exceeds EPA's recommended level of removal of total suspended solids, and meets regional ambient water quality criteria for petroleum hydrocarbons and zinc. Researchers observed limited phosphorus treatment and none for nitrogen, which is consistent with other non-vegetated infiltration systems." The studies went on to illustrate that 99% of suspended solids were removed and that there were significant improvements to winter maintenance. Salt used for deicing roads can be quite harmful to the environment. With porous pavement, the salt needed was reduced to 25%, or in some instances was not needed at all.
HOW LONG IT LASTS
Porous pavement lasts in two ways: construction and infiltration.
Water can deteriorate materials at an alarming rate, which would suggest that a porous material would be subject to the same fate. However, studies show that this type of blacktop can last over twenty years without suffering from cracking and potholes. Over 25 years of precipitation, research showed that a parking lot at a Pennsylvania State visitor center never had issues with infiltration. That means that water was readily absorbed into the asphalt and then into the soil. Throughout that timeframe, there was no discharge found on the surface after a storm. To further illustrate the effects of porous asphalt, we can look at a parking lot in Massachusetts that was built in 1977. Since its installation, the pavement has not been repaved once, nor have there been issues with the absorption rate.
RECYCLING AND ENVIRONMENTALLY-FRIENDLY ASPHALT SOLUTIONS
There is more than one way to stay green when it comes to asphalt.
FULLY RECYCLABLE
Asphalt construction materials are entirely recyclable. And since asphalt is used all over the world, that makes it one of the most recycled materials on the planet. Because it can be recycled, there is a significant reduction in waste, sustaining healthier environments.
NEEDS LESS REPAIRING
Concrete doesn't hold up to temperature fluctuations as well as blacktop. It will often crack, and repairs are challenging with that sort of material. Asphalt pavement is more flexible and stands up to heavy weight, friction, and weather. The need for patching, crack filling, and resurfacing is dramatically reduced.
USES LESS ENERGY
Laying asphalt takes less time than other materials in its category. It doesn't require days to cure, which means roads can be used again in short order. There is less equipment involved, meaning a reduction in energy consumption just to get it in place. There is also evidence that the surface will continue to reduce greenhouse gas emissions over time.
GREEN ASPHALT NASHVILLE
It's easy to do your part to reduce your carbon footprint with asphalt. By choosing porous asphalt or recycled materials, you can ensure you are doing your part to keep our environment healthy. Contact Roadbuilders to learn more.
<urn:uuid:d6172c45-8c35-4eee-92da-d49ad3e82e71>
CC-MAIN-2019-30
https://roadbuilderspaving.com/2019/04/your-introduction-to-green-asphalt-solutions/
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526799.4/warc/CC-MAIN-20190720235054-20190721021054-00445.warc.gz
en
0.943654
1,210
2.953125
3
Of Mere Being
© by Wallace Stevens

The palm at the end of the mind,
Beyond the last thought, rises
In the bronze decor,

A gold-feathered bird
Sings in the palm, without human meaning,
Without human feeling, a foreign song.

You know then that it is not the reason
That makes us happy or unhappy.
The bird sings. Its feathers shine.

The palm stands on the edge of space.
The wind moves slowly in the branches.
The bird's fire-fangled feathers dangle down.

Submitted by Lillie Jean
<urn:uuid:09f6d67f-11f5-4651-8d68-5de6e4c20fd0>
CC-MAIN-2018-51
http://www.rutgerhauer.org/poets/0419.php
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828697.80/warc/CC-MAIN-20181217161704-20181217183704-00155.warc.gz
en
0.864056
126
2.578125
3
Presentation on theme: "Secularism’s Rejection of God. For the wrath of God is revealed from heaven against all *ungodliness and unrighteousness of men who suppress the truth."— Presentation transcript: For the wrath of God is revealed from heaven against all *ungodliness and unrighteousness of men who suppress the truth in unrighteousness because that which is known about God is evident within them; for God made it evident to them. (NASB95) The positive verb sebō means to express in gestures, rites, or ceremonies one’s allegiance or devotion to deity, to show reverence, to worship. The negative verb asebeō means to violate the norms of a proper or professed relation to deity, to act impiously, to be ungodly. The positive noun sebasma refers to an object of worship, identifying something that relates to devotional activity, devotional object. The negative noun asebeia refers in general is understood vertically as a lack of reverence for deity and hallowed institutions as displayed in sacrilegious words and deeds, ungodliness, impiety. The positive adjective sebastos is descriptive of that which is considered worthy of reverence, revered, august, as a translation of the Latin Augustus and designation of the Roman emperor. The negative adjective asebēs pertains to violating norms for a proper relation to deity, irreverent, impious, ungodly. At Pisidian Antioch (Acts 13:42-43) At Philippi (Acts 16:14-15) At Thessalonica (Acts 17:1-4) At Athens (Acts 17:16-17) At Corinth (Acts 18:5-11) Improper Objects (Acts 17:22-23; 19:23-27; Rom. 1:25; 2 Thess. 2:3-4) Improper Forms (Matt. 15:7-9; Mark 7:6-8) Improper Words (2 Tim. 2:15-18; cf. Acts 13:48-52; 18:12-17) Improper Actions, in Times Past (2 Pet. 2:4- 10; Jude 14-15), in Times Present (Jude 3-4), and in Times Future (2 Pet. 3:7-12; Jude 17-21) Christ Justifies the Ungodly (Rom. 4:1-8). Christ Died for the Ungodly (Rom. 5:6-8). Christ Converts the Ungodly (Rom. 11:25-27). Christ Governs the Ungodly (1 Tim. 1:8-11). Christ Instructs the Ungodly (Titus 2:11-14). Christ Judges the Ungodly (1 Pet. 4:17-19).
<urn:uuid:91656c33-bf09-4bfe-9029-c2d2b4b115dd>
CC-MAIN-2018-09
http://slideplayer.com/slide/2509881/
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814827.46/warc/CC-MAIN-20180223174348-20180223194348-00067.warc.gz
en
0.841333
601
2.515625
3
The Federal Child and Youth Welfare Act (B-KJHG) of 2013 governs instances in which youngsters are unable to remain in their own family due to a risk to the welfare of the child. Where it is possible to „avert this risk only by means of care outside the family or other current living environment, children must be put into care“ (Section 26 Subsection 1 B-KJHG 2013). This care – generally long-term in nature – takes the form of accommodation with close relatives, with foster parents or in a social education institution (children's home). There are currently no statistics for children who go to live with close relatives.
In 2016, 13,646 children and adolescents were accommodated in the Austrian care system. Relative to the population under the age of 18 years, this represents 9 per 1,000 minors. A good three-fifths (8,423 youngsters, or 61.7%) were accommodated in a children's home (5.5 per 1,000), and 5,223 lived with foster parents (3.4 per 1,000). In terms of gender, boys predominated with a figure of 54.6%. Only the share of boys in children's homes was above average, at 56.9%; at 51.0% for foster children, it largely corresponded to the share of males in the population under the age of 18 years (51.5%).
Figures from 2015 onwards can only be compared with those for previous years to a very limited extent, as there has been a radical change in the method of data collection. Until 2014, statistics on children in care were published in the youth welfare reports known as the Jugendwohlfahrtsbericht (in 2014 called the Kinder- und Jugendhilfebericht, or child and youth support report). Following the introduction of the 2013 Federal Child and Youth Welfare Act, these have been replaced since 2015 by the child and youth welfare statistics (Kinder- und Jugendhilfestatistik). Until 2014, the reference date for surveying the number of children and adolescents in care was 31 December of each year, but the procedure was altered in 2015. From this date the survey changed to an annual total, with a child or adolescent only being counted once, to avoid a child being included several times where he or she was taken into care multiple times during a year. This results in a more precise, more realistic picture, and is also the reason why the figures up to 2014 cannot be compared with those from 2015 onwards. The age groups have likewise been adapted, making comparison more difficult. In particular, it should be noted that until 2014 adolescents aged 18 years were also included in the statistics, but from 2015 they have been excluded, i.e. only children up to the age of 17 years are registered. This means that the parts of the resident population to which the figures relate are not identical.
As data from the new method have only been available for the last two years, it is at present generally only possible to base conclusions about trends on figures for the period 2002-2014. During this period there was initially a nationwide rise in the number of youngsters in care until 2011, followed by a slight fall. Totalling 10,810 on the reference date in 2014, the number of children living in care was up by a fifth (+20.2%) compared with 2002 (8,995), and it peaked at 11,343 in 2011, an increase of 26.1%. Given the simultaneous reduction of approx. 7.7% in persons aged up to 18 years in the resident population, the relative increase is even more significant. In 2002, 5.2 per 1,000 children and adolescents under 19 years found themselves in care; the figure peaked at 7.0‰ in 2011 (+34.6%). In 2014 it stood at 6.8‰ (+30.8% compared with 2002).
The greatest increase here was in the number of youngsters living in children's homes: between 2002 and 2014 this grew by a quarter (25.2%). In 2002, 2.9 per 1,000 children and adolescents under 19 years were accommodated in homes; by 2014 the rate had risen to 3.9, an increase of 34.5%. The peak during this period occurred in 2011: 4.2‰ (+44.8% compared with 2002). Although the number of children in foster care was lower, the increase seen here was continuous. In absolute figures, 14.1% more children and adolescents were living with foster parents in 2014 than in 2002. In relative terms, this represents a 20.8% increase, rising from 2.4 to 2.9 per 1,000 youngsters under 19 years.
To offer an insight into the age structure and differences between the Länder of Austria, figures from the 2016 child and youth welfare statistics have been used. In relative terms, the largest numbers of children and adolescents (here: under 18 years) living in care in 2016 were to be found in Vienna and Carinthia: 12.4 per 1,000 minors and 12.1‰ respectively. At 10.5‰, Styria was above the national average of 9.0‰. As regards the largest number of children in foster care, Vienna took the lead at 5.4‰, ahead of Styria with 4.6‰ and Salzburg with 3.5‰. Tyrol had the fewest children living with foster parents: 1.8 per 1,000 minors. The divergence between the different Länder for children living in foster care was thus 3:1. As regards accommodation in children's homes, at 9.0‰ Carinthia was ahead of Vienna with 7.0‰. Salzburg, Burgenland and Styria were in the order of 6‰ and also exceeded the national average. The lowest number of children and adolescents living in a home was to be found in Upper Austria: 4.2 per 1,000 minors, i.e. a divergence of just over 2:1.
Analysis of the data for 2016 broken down into the three age groups did not show any major difference in the numbers of children in foster care. While on average 3.4 per 1,000 minors lived with foster parents, this figure was 3.0 for the under-sixes, 3.8 for children aged between 6 and 13 years and 3.4 for adolescents of 14-17 years. In comparison, with an average of 5.4‰, a significantly larger number of older children and adolescents were accommodated in children's homes than infants. While only one per thousand of under-sixes (1.0‰) lived in a home, this figure was 5.3‰ for children aged 6 to 13 years and 12.5‰ for adolescents between 14 and 17 years. In total, 2,027 or 14.9% of children and adolescents in care were aged less than 6 years (4.0‰ of all under-sixes), 6,087 or 44.6% were children between 6 and 13 years (9.1‰ of the population in this age group), while 5,532 or 40.5% were adolescents aged 14 to 17 years (15.9‰).
Children are put into care either on the basis of an agreement or following a court order. If the parents or other persons responsible for the care and upbringing of the child consent to such an offer of support, it is based on a written agreement between these persons and the child and youth welfare authority. If no agreement is reached, a court order takes effect. While the share of children in care following a court order was 40.4% in 2002, this fell to 35.3% in 2003. Since then it has fluctuated between 32.6% and 36.8%, bottoming out in 2010 and peaking in 2008. In 2016, agreements accounted for almost two thirds of all children living in care (65.8%).
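The per-mille rates and percentage changes quoted above follow directly from the absolute figures. As a quick illustrative check (a sketch only; the underlying population size is back-calculated from the quoted rate rather than taken from the statistics themselves), the arithmetic can be reproduced in a few lines of Python:

    # Figures taken from the text above
    in_care_2016 = 13_646      # children and adolescents in care, 2016
    rate_2016 = 9.0            # per 1,000 minors (under 18), as quoted
    implied_minors = in_care_2016 / rate_2016 * 1000
    print(f"implied population under 18 in 2016: about {implied_minors:,.0f}")

    # Change in absolute numbers, 2002 -> 2014 (old counting method, under 19)
    in_care_2002, in_care_2014 = 8_995, 10_810
    print(f"absolute change 2002-2014: {(in_care_2014 / in_care_2002 - 1) * 100:.1f} %")  # ~20.2 %

    # Change in the rate per 1,000 under-19s over the same period
    rate_2002, rate_2014 = 5.2, 6.8
    print(f"rate change 2002-2014: {(rate_2014 / rate_2002 - 1) * 100:.1f} %")            # ~30.8 %

That the rate rose faster than the absolute count is broadly consistent with the roughly 7.7% fall in the reference population noted above.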
<urn:uuid:54abad47-f453-45b4-a8b7-11aefe419029>
CC-MAIN-2018-51
https://www.kinderrechte.gv.at/factbook-english/children-in-care/
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00167.warc.gz
en
0.970321
1,655
3.1875
3
Journal Articles (Philosophy)
Items in this Collection
- Brigandt, Ingo (21)
- Koslicki, Kathrin (12)
- Linsky, Bernard (10)
- Morin, Marie-Eve (10)
- Pelletier, Francis J. (10)
- Wilson, Robert A. (8)
A Puzzle About Material Constitution and How to Solve It: Enriching Constitution Views in Metaphysics
1. Two Intuitions and a Puzzle "Constitution" may be a philosophical term of art, but the idea of one thing's being materially constituted by another thing (or other things) is one that ordinary folk are perfectly familiar with. When we talk explicitly of something's being made up of, being made...
In this paper I want to explore a certain community of writing, namely the one between Jacques Derrida and Jean-Luc Nancy. The on-going dialogue between the two on the subject of community has left in their writings only traces (with the exception of the first essay in Derrida's Voyous): implicit...
Introduction: In this paper, I argue that a surprisingly widespread strategy in metaphysics is suspect for various reasons and hence ought to be abandoned. In very broad strokes, situations which give rise to 'The Suspect Strategy' (TSS) contain as one of their ingredients a general metaphysical...
Introduction: In 1986 Pelletier published an annotated list of logic problems, intended as an aid for students, developers, and researchers to test their automated theorem proving (ATP) systems. The 75 problems in the list are subdivided into propositional logic (Problems 1-17), monadic-predicate...
This study examines the problem of belief revision, defined as deciding which of several initially accepted sentences to disbelieve, when new information presents a logical inconsistency with the initial set. In the first three experiments, the initial sentence set included a conditional...
The paper works towards an account of explanatory integration in biology, using as a case study explanations of the evolutionary origin of novelties—a problem requiring the integration of several biological fields and approaches. In contrast to the idea that fields studying lower level phenomena...
Despite John Buridan's reputation as the foremost Parisian philosopher of the fourteenth century and the predominant role played by his teachings in European universities until well into the sixteenth century, our understanding of his thought in a number of areas remains sketchy. Epistemology is...
Individualists claim that wide explanations in psychology are problematic. I argue that wide psychological explanations sometimes have greater explanatory power than individualistic explanations. The aspects of explanatory power I focus on are causal depth and theoretical appropriateness....
Cohabitating in the Globalised World: Peter Sloterdijk's Global Foams and Bruno Latour's Cosmopolitics
This paper seeks to present a comprehensive and systematic picture of Peter Sloterdijk's ambitious and provocative theory of globalisation. In the Sphären (Spheres) trilogy, Sloterdijk provides both a spatialised ontology of human existence and a historical thesis concerning the radical shifts in...
In this paper we explore a class of belief update operators, in which the definition of the operator is compositional with respect to the sentence to be added. The goal is to provide an update operator that is intuitive, in that its definition is based on a recursive decomposition of the update...
<urn:uuid:bf9b234d-4008-45ee-b874-43ca9a466cf2>
CC-MAIN-2020-24
https://era.library.ualberta.ca/communities/191059ea-fff4-49ca-a2ec-19acae556a85/collections/9995fca7-10c0-474b-8f81-31a7f01b1aa7
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347434137.87/warc/CC-MAIN-20200603112831-20200603142831-00004.warc.gz
en
0.910501
709
2.578125
3
With over 5% of the global population having hearing loss, it is more common than most realize. Including Auditory Neuropathy Spectrum Disorder, Conductive Hearing Loss, Mixed Hearing Loss, and Sensorineural Hearing Loss, the types of hearing loss are differentiated by what part of the ear is damaged. Hearing loss is caused by a number of reasons including birth complications, genetics, chronic ear infections, excessive noise exposure, types of medical conditions, drug use, and age. Depending on the type of hearing loss, treatment options include assistive devices, cochlear implants, hearing aids, sign language, and surgery. Take time to research hearing loss so you are informed on what it is and available treatment options. In the following infographic, Online Hearing provides an overview of hearing loss.
<urn:uuid:718c7d8c-2f04-4095-8f58-472b42ab8fa3>
CC-MAIN-2021-43
https://infographicjournal.com/an-overview-of-hearing-loss/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00384.warc.gz
en
0.936149
159
3.453125
3
Last month's article discussed techniques for accepting oneself. For those who might be struggling with the concept of self-acceptance, let's go back one step and first get to know who we are. This might sound strange. Most of us think we know ourselves, but actually our self-knowledge is mostly superficial. We know our appearance, profession, gender, favorite foods, pastimes, and activities. But we may struggle with recognizing connected patterns of behavior and emotions, or with remaining present with our feelings and thoughts during stressful times. We may regularly repress, avoid, or ignore feelings and emotions, or we may not know what we are feeling or thinking, or what our motivation or intention is at any given time. In other words, we are mostly absent from ourselves. Self-awareness, or knowing oneself, is understanding how our feelings and thoughts affect our behavior and relationships with ourselves and others in the world. The heart of wisdom is clearly grasping that your internal world (feelings, emotions, thoughts) and external world (family, work, social relationships) influence each other. This is also called emotional intelligence. Living with self-awareness (i.e. being yourself or being authentic) is the key to being happy, energetic, passionate, successful, and fulfilled. Knowing ourselves empowers us in many ways. The researched benefits of self-awareness are improved relationships, better mood and emotion management, improved self-esteem, success, and productivity, and a greater understanding of your needs and wants and your ability to attain them. We can cultivate a practice of deepening our self-knowledge through exploring new interests and activities or discovering our deepest feelings. Here are some things to keep in mind as you get started. Self-discovery can bring up painful feelings and memories, so be compassionate with yourself, and go slowly if you need to. Try to be with the process without judgment and with detached observation as much as possible. This is where curiosity comes in very handy. The effort self-knowledge requires is big, but it is a very good investment, because knowing ourselves will help us move through life with greater ease. May truly knowing yourself bring you peace and happiness.
<urn:uuid:8ae23dcf-aa52-4bdf-912e-cb1fc020d506>
CC-MAIN-2022-40
https://www.acceptancehealing.com/blog/how-to-get-to-know-yourself
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00590.warc.gz
en
0.956047
457
2.859375
3
columbium
- old-fashioned term for niobium
- 'Most alloying elements, such as chromium, columbium, copper, iron, manganese, molybdenum, tantalum, and vanadium stabilize the phase to the extent that a mixed - phase or an entirely phase alloy can persist down to room temperature.'
- 'To eliminate the intergranular corrosion, it is necessary either to reduce carbon to very low levels, or to add titanium and columbium to tie up the carbon and nitrogen.'
- 'The gas tungsten arc welding process is used for the pure columbium and for the lower strength commercial alloys.'
- 'In steel alloyed with molybdenum, manganese and columbium, which is use for these pipe-lines, molybdenum raises both strength and toughness.'
- 'Coltan is a contraction of columbium and tantalum; it's found in 3 billion year old mud and without it much of our modern technology could not be made.'
Origin: Early 19th century: modern Latin, from Columbia, a poetic name for America, from the name of Christopher Columbus (see Columbus, Christopher).
<urn:uuid:0aa0bf60-7947-4272-ac06-7e566fe19cae>
CC-MAIN-2016-50
https://en.oxforddictionaries.com/definition/us/columbium
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542657.90/warc/CC-MAIN-20161202170902-00179-ip-10-31-129-80.ec2.internal.warc.gz
en
0.920289
356
3.09375
3
I did some more reading. It would appear that the way a key is generated varies from system to system, so a lot of it depends on the method used to generate the keys. From what I see, it is common to use the passphrase to encrypt the private key, and to use a CSPRNG (Cryptographically Secure Pseudo-Random Number Generator) to generate the keys themselves. So in those scenarios the passphrase-encrypted private key is never published. Even if it were (a bad idea), there would still be no way to compare the two and see that they used the same passphrase, or to determine the passphrase through having both. Because an RSA public key is (basically) the product of two prime numbers, the key cannot be directly generated from a passphrase. For example, the passphrase FOO does not directly translate into two prime numbers. Instead, that passphrase can be used as a seed, or part of a seed, for a system that generates prime numbers (usually combined with a random seed or system entropy, since a passphrase does not contain enough entropy to generate even a 128-bit key). Even if the system used only a user-provided passphrase to generate the prime numbers, the primes used for the two different key sizes would be totally different, and they would not follow any deterministic pattern, so there would be no relation between the two keys. For example: say the passphrase FOO always generates the 4-bit primes 7 and 13 (for a product of 91) when it passes through the 4-bit algorithm. If that same passphrase were used in the algorithm for a larger key size, it could generate 10007 (not saying it would, but as an example of a possible relationship) as the first prime, but 10013 is not prime, so it would need to generate 10037 (for a product of 100440259) or some other prime. So even if the passphrase were used to directly generate a number, there is no guarantee that the generated number would be prime, so additional steps must be included to find a prime. These steps decouple the relationship between the generated key and the passphrase / seed used to generate it. A good simple explanation of RSA.
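To make that last point concrete, here is a rough toy sketch in Python. It is purely illustrative and not how any real key generator works: real systems use a CSPRNG and proper primality tests, and do not derive RSA primes from a passphrase alone. The sketch hashes the passphrase into a deterministic starting candidate of a requested size and then walks upward to the next prime, which is the "additional step" that decouples the resulting prime from the passphrase:

    import hashlib

    def is_prime(n):
        # Naive trial division; fine for the tiny toy sizes used here.
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return False
            f += 2
        return True

    def toy_prime_from_passphrase(passphrase, bits):
        # Hash the passphrase to get a deterministic candidate of the requested size.
        digest = hashlib.sha256(passphrase.encode()).digest()
        candidate = int.from_bytes(digest, "big") >> (256 - bits)
        candidate |= (1 << (bits - 1)) | 1   # force the top bit and make it odd
        while not is_prime(candidate):       # walk up to the next prime
            candidate += 2
        return candidate

    p_small = toy_prime_from_passphrase("FOO", 16)
    p_large = toy_prime_from_passphrase("FOO", 24)
    print(p_small, p_large)  # two primes with no useful relationship to each other

Even in this fully deterministic toy, the primes produced for the two sizes share nothing useful, so two public keys of different sizes built this way cannot be compared to reveal a common passphrase; a real generator that also mixes in system entropy removes even the determinism.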
<urn:uuid:e8e575f5-83b3-49c0-b9d9-1b0a12d00093>
CC-MAIN-2014-49
http://crypto.stackexchange.com/questions/1610/two-public-keys-with-same-passphrase-insecure-can-two-hashes-be-compared
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378724.10/warc/CC-MAIN-20141119123258-00030-ip-10-235-23-156.ec2.internal.warc.gz
en
0.939065
456
3.015625
3
In recent years, support for technology regulation has begun to weaken. This affects many areas, including PPC (Pay-Per-Click) advertising. PPC advertising is one of the most popular and effective ways of promoting products and services online. However, the decline in support for technology regulation may have a negative impact on the effectiveness of this type of advertising. In this piece, we take a closer look at how the decline in support for technology regulation may affect PPC advertising.
The Impact of Declining Support for Tech Regulation on PPC Advertising
The decline in support for tech regulation has had a significant impact on the world of pay-per-click (PPC) advertising. PPC advertising is a form of online marketing that allows businesses to pay for their ads to be displayed on search engine results pages and other websites. The lack of regulation in the tech industry has allowed companies to take advantage of loopholes in the system, resulting in an increase in fraudulent activities such as click fraud and ad fraud. This has caused advertisers to lose money due to their ads being displayed on sites that are not relevant to their target audience or are not even real. Additionally, it has become increasingly difficult for advertisers to track the effectiveness of their campaigns due to the lack of transparency in the system. Furthermore, with fewer regulations in place, companies have been able to manipulate algorithms and use unethical tactics such as bid rigging and keyword stuffing, which can lead to higher costs for advertisers. This can make it difficult for small businesses and startups that may not have the resources or budget to compete with larger companies that are able to take advantage of these loopholes. Overall, declining support for tech regulation has had a negative impact on PPC advertising by making it more difficult for advertisers to track their campaigns and by increasing costs due to fraudulent activities and unethical tactics. It is important that governments around the world take steps towards regulating the tech industry in order to ensure a fair playing field for all businesses involved in PPC advertising.
Exploring the Pros and Cons of Tech Regulation in the Digital Age
The digital age has brought with it a host of new technologies that have revolutionized the way we live, work, and communicate. However, with this new technology comes the need for regulation to ensure that it is used responsibly and ethically. In this article, we will explore the pros and cons of tech regulation in the digital age. One of the primary benefits of tech regulation is that it can help protect users from potential harm. By setting standards for how technology should be used, governments can ensure that users are not exposed to malicious content or activities. This can help protect people from cybercrime, identity theft, and other online threats. Additionally, tech regulation can help protect user privacy by ensuring that companies are not collecting or using personal data without permission. Another benefit of tech regulation is that it can help promote innovation and competition in the tech industry. By setting standards for how technology should be used, governments can create a level playing field for all companies to compete on. This can encourage companies to develop new products and services that meet these standards while also providing consumers with more choice in the marketplace. However, there are also some drawbacks to tech regulation in the digital age.
One of these is that it can stifle innovation by making it difficult for companies to develop new products or services without first meeting certain regulatory requirements. Additionally, some regulations may be too restrictive or outdated for today’s rapidly changing technology landscape. Finally, tech regulation may also lead to increased costs for businesses as they must comply with various regulations in order to remain competitive in the market. In conclusion, while there are both pros and cons associated with tech regulation in the digital age, it is clear that there are many benefits to having regulations in place to ensure responsible use of technology and protect user privacy. Ultimately, governments must strike a balance between protecting users from potential harm while also allowing businesses enough freedom to innovate and compete in the marketplace. How Businesses Can Adapt to Changes in Tech Regulation Businesses must be prepared to adapt to changes in tech regulation. As technology advances, so do the regulations that govern it. Companies must stay up-to-date on the latest regulations and adjust their practices accordingly. Here are some tips for adapting to changes in tech regulation: 1. Stay informed: It is important for businesses to stay informed about the latest developments in tech regulation. This can be done by subscribing to industry newsletters, attending conferences, and reading relevant publications. 2. Develop a plan: Once businesses are aware of the new regulations, they should develop a plan for how they will comply with them. This plan should include steps such as updating policies and procedures, training staff, and investing in new technology or software if necessary. 3. Monitor compliance: Businesses should monitor their compliance with the new regulations on an ongoing basis to ensure they remain compliant. This can be done by conducting regular audits or reviews of their practices and procedures. 4. Seek advice: If businesses are unsure about how to comply with a particular regulation, they should seek advice from legal professionals or industry experts who specialize in tech regulation compliance. By following these tips, businesses can ensure that they remain compliant with the latest tech regulations and avoid any potential penalties or fines associated with non-compliance. Understanding the Implications of Declining Support for Tech Regulation on Consumers The decline in support for tech regulation has far-reaching implications for consumers. As tech companies become increasingly powerful, they are able to shape the digital landscape in ways that may not be beneficial to consumers. Without adequate regulation, tech companies can use their market power to manipulate prices, limit consumer choice, and reduce competition. Furthermore, without proper oversight, tech companies may be able to collect and use consumer data in ways that are not transparent or ethical. This could lead to a lack of privacy and security for consumers as their personal information is used for marketing purposes or sold to third parties without their knowledge or consent. Finally, the lack of regulation could lead to a decrease in innovation as tech companies are no longer incentivized to create new products and services that benefit consumers. Without competition from smaller startups and entrepreneurs, the market could become stagnant with fewer choices available for consumers. In conclusion, declining support for tech regulation has serious implications for consumers. 
Without proper oversight and enforcement of regulations, tech companies will be able to take advantage of their market power, with potentially negative consequences for consumer privacy, security, choice, and innovation.
The conclusion is that the decline in support for technology regulation has an impact on PPC 452996. This means that any changes in regulations governing technology can have a direct impact on the effectiveness and efficiency of this program. It is therefore important for companies to monitor and respond to changes in technology regulations so that they can continue to make effective use of PPC 452996.
<urn:uuid:ff0ff975-c48e-4655-a715-5403caca19cb>
CC-MAIN-2023-40
https://funkymedia.pl/declining-support-of-tech-regulation-how-does-this-impact-ppc-452996.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233505362.29/warc/CC-MAIN-20230921073711-20230921103711-00713.warc.gz
en
0.88285
1,647
2.53125
3
Celebrating NAHM: 12 Important Purposes of Humanities Audience members enjoy a presentation at the 2017 South Dakota Festival of Books in Deadwood. The Festival of Books is one of many programs of the South Dakota Humanities Council, a statewide non-profit that provides humanities programming to South Dakotans. To celebrate National Arts and Humanities Month, SDHC is highlighting the importance of humanities programming in South Dakota and across the U.S. 12 Ways the Humanities Improve our Lives October is National Arts & Humanities Month (NAHM), a "coast-to-coast collective recognition of the importance of culture in America," according to Americans for the Arts. The organization launched NAHM as a week-long event 30 years ago before establishing it as a month-long celebration in 1993. As a statewide organization whose sole purpose is to bring humanities programming to South Dakotans, we're reflecting this month on the importance of humanities programming locally and around the nation. Last spring, we created a blog series called "Why the Humanities" in which we examined the importance of the humanities from the perspective of our constituents, librarians, college professors and more. One of the goals of NAHM is "Raising public awareness about the role the arts and humanities play in our communities and lives." The humanities improve our lives by: 1. Amplifying our Stories "By supporting the creation and amplification of stories, we create time machines that allow future generations to understand our era better. Don't believe me? Whenever I open a book by Charles Dickens, I float out of my body and I live, however temporarily, in London during the 1850s." - South Dakota Author Patrick Hicks, pictured above at the 2017 South Dakota Festival of Books in Deadwood 2. Bringing Educational Programs to our Backyard "These aren't elitist projects or esoteric exhibitions on the coasts that many critics say are the primary recipients of federal funding for the arts and humanities. They are here in our backyard." - Kara Dirkson, executive director of the Sioux Falls Arts Council, pictured above in her office. 3. Dispelling Stereotypes "The humanities enhanced my knowledge of the history, culture and values of my people. This has helped me in my writing to dispel stereotypes and contributed to communication between the Natives and all citizens of South Dakota." - South Dakota author Virginia Driving Hawk Sneve, pictured above in a panel about the book "Black Elk Speaks" at the 2017 South Dakota Festival of Books. 4. Connecting Readers, Writers, Authors "Nationally recognized authors come to South Dakota to talk about their books. Our programs connect people, and those connections enhance the human experience." - Judith Meierhenry, SDHC board chair, pictured above awarding the 2017 Distinguished Achievement in the Humanities Award to United Way of the Black Hills during the 2017 South Dakota Festival of Books 5. Forming our Collective Memory "So yes, genealogy and history matter! Arts and humanities matter! The stone features or burial places that are the evidence of our existence as aboriginal people matter. The floral beadwork or quillwork from long ago is the cultural expression of ancestors who survived so that we may live today. Just as in all cultures, ALL of it matters. Together, it forms our collective memory, and we would be lost without it." - SDHC board member Tamara St. John, pictured at an event in Washington, D.C. 6. 
Informing us as Citizens "Informed, enlightened and engaged citizens are crucial to our democracy, and the arts and humanities are fundamental to creating these citizens." - South Dakota author Linda Hasselstrom, pictured above at the 2015 South Dakota Festival of Books. 7. Putting Books in Children's Hands "The students appreciate the books and the experiences that SDHC provides along with them. When Kate DiCamillo's "The Miraculous Journey of Edward Tulane" was distributed to third graders in Sioux Falls, one young reader hugged the book, saying, 'This is the first brand-new book I've ever had—it smells SO GOOD!'" "Owning a book is amazing—meeting the author of that book is beyond amazing. Another young reader, after receiving the book, stated, 'I will keep this safe and close to my heart.'" - Ann Smith, Director of Curriculum and Instruction for the Sioux Falls Public Schools and a partner in SDHC's Young Readers Initiative, pictured above next to Tom Fishback (far right) of First Bank and Trust, who is also an SDHC board member. Smith and First Bank and Trust were honored with Distinguished Achievement in the Humanities Awards for their involvement with the Young Readers Initiative, including providing books for third graders in the Brookings and Sioux Falls school districts. 8. Helping People who are Struggling "We have seen the humanities community form a resounding response to address returning soldiers and the potential short- and long-term problems they might face in their return to civilian life as well as from PTSD and other problems." - Dr. Jason McEntee, professor and department head of English at South Dakota State University, coordinator of the Literature and Medicine program and a former member of the South Dakota Humanities Council Board of Directors, pictured above during a lecture. 9. Celebrating us as Humans "The more you know, the better person you become. You develop a sensitivity for others and celebrate humans!" - Angela Ostrander, supervisor of the Faith Public/School Library, a regular SDHC program coordinator, and winner of the 2015 Distinguished Achievement in the Humanities Award, pictured above doing a presentation at the Faith Public/School Library. 10. Helping us Understand the Human Experience "We learn more about each other, and we learn more about ourselves. We have the opportunity to better understand the human experience, and what can be more important than that?" - Terry Woster, a graduate of South Dakota State University journalism program who worked as a news reporter in South Dakota for more than 40 years, pictured above in a portrait. 11. Keeping our Eyes on the Bigger Picture "We are all connected geographically, culturally, and through celebration and tragedy. Like my dad, we are all focused on paying the bills. The humanities help us keep an eye on the bigger picture. What is the importance of the human experience? And how can we make it better?" - SDHC board member Katie Hunhoff, co-publisher of South Dakota Magazine; South Dakota Magazine staff members are pictured above at a South Dakota Festival of Books event. 12. Answering Important Questions "The importance of the humanities, then, is not that it answers the questions for you but that it does something much more subtle and much more important: it gives you the wherewithal and the confidence to answer the questions for yourself, not only to answer them but to defend them to others and, more importantly defend them to a much tougher audience: yourself." 
- Joseph Tinguely, assistant professor of philosophy at the University of South Dakota and a scholar in the SDHC Speakers Bureau, pictured above in a portrait. Subscribe for More on the Humanities The South Dakota Humanities Council works with museums, libraries and other cultural, educational and community-based organizations across South Dakota to inspire curiosity and the quest for understanding our place in the world. Programs such as the South Dakota Festival of Books, Speakers' Bureau, Book Club to Go, our Pulitzer Prize Centennial that brought 13 Pulitzer Prize-winning authors to South Dakota in 2016, and major grant discussions help us celebrate literature, promote civil conversation and tell the stories that define our state. Subscribe to our blog below to learn more about the humanities and why they're important.
<urn:uuid:eaee5e9a-fe6f-4691-b6bb-c0282735f175>
CC-MAIN-2019-30
http://sdhumanities.org/media/blog/celebrating-nahm-12-important-purposes-of-humanities/
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527531.84/warc/CC-MAIN-20190722051628-20190722073628-00093.warc.gz
en
0.938305
1,612
2.8125
3
The Shilla Dynasty's culture is based on Buddhism, and ruins from this period are found in the ancient capital, Kyongju, and the surrounding mountains, Mt Nam, Mt Toham and Mt Ham-wol. Golgulsa (Stone Buddha Temple), located 20km east of Kyongju, treasures the oldest historical Buddhist ruins on Mt Hamwol and the only cave temple in Korea. The temple was built out of solid rock during the 6th century by Saint Kwang Yoo and his companions, Buddhist monks from India. The temple contains a sculptured Maya Tathagata Buddha and twelve rock caves. This locally used prayer sanctuary and birthplace of spiritual culture had been eroding. Recently, the former Master Monk of Kirimsa (Kirim Temple), Seol Jeok Woon, constructed a road and renovated and developed the temple. Now, the temple offers peace of mind to all Buddhists.
<urn:uuid:74ced9f4-5fac-4a7e-a251-127c8cc89cff>
CC-MAIN-2017-47
http://www.sunmudo.com/home/sub6.htm?PHPSESSID=a71974495e20dedd700ab90f7c7d6704
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808972.93/warc/CC-MAIN-20171124214510-20171124234510-00679.warc.gz
en
0.928073
211
2.671875
3
Most counseling and therapy is intended to enhance self-worth, remove barriers to happiness, and teach people how to achieve their goals. Existential therapists use non-traditional therapy to meet these objectives. Traditional counseling typically identifies underlying issues contributing to behavioral problems and provides remedies, whereas existential therapy is designed to correct problems by aligning individual values and beliefs with goals.
When administering existential therapy, a counselor determines the values that motivate their patients. If the therapist succeeds, therapy aligning those values with corrective action can be recommended. Additionally, the therapist can encourage patients to refrain from contradictory behaviors. When patients recognize their personal values, it's usually more difficult for them to justify behavior that contradicts their value systems.
Existential therapists must be trustworthy, empathetic, sincere, and personable. They must also possess excellent communication and problem-solving skills. Additionally, they must constantly work to align their own behavior with their values, beliefs, and personal philosophies.
Existential therapy usually appeals to individuals interested in holistic healing and non-traditional therapy. Personable individuals who effortlessly develop close relationships with many types of people, and those who link personal morality with mental health, are drawn to existential therapy.
To begin a career in existential psychology, complete courses in psychology, personal motivation, counseling, and existentialism as an undergraduate. Certain colleges and universities offer classes and degrees at all levels related to existential therapy. To administer psychotherapy, you'll have to obtain a graduate degree and satisfy state licensing requirements.
Learn more about existential psychotherapy by requesting information from colleges and universities offering classes and programs in existential therapy.
<urn:uuid:a507e5ed-8381-403c-a56d-c272ba959d5d>
CC-MAIN-2017-34
https://www.psychologycareercenter.org/existential-therapist.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00210.warc.gz
en
0.903536
378
2.625
3
Artist depicts emotional abandonment
Emotional painting is but another word for feeling Abandoned
6 February '19, by Robert McIntosh
Abandonment is an emotional state in which people feel undesired, left behind, insecure, or discarded. Those experiencing emotional abandonment may feel cut off from a source of livelihood, or at a loss. The source has been withdrawn, either suddenly or through a process of erosion. Feeling rejected, a fundamental component of abandonment, activates the pain centers of the brain and leaves an emotional imprint in the neural warning system. Separation stress is the primary source of human dysfunction. When a human experiences a threat or disconnection within a primary attachment, it triggers a fear response.
We need to stay continuously aware of our emotional needs in order to manage them and understand what's missing in our relationship with ourselves or with others. Without that awareness, we may just feel blue, lonely, apathetic, irritable, angry, or tired. Life has many emotional needs, like the need for affection, for love, for companionship, to be listened to and understood, to be appreciated, to be valued …
Gheorghe Virtosu is the 'Illuminati' of contemporary British abstract art. As an artist who paints social phenomena, he challenged himself to create a piece that strives to express Abandonment as a form of social reprimand. The idea was born as he went through a life-changing experience in solitary confinement.
Without the ability to see people as whole and constant, it may be difficult to evoke a sense of the presence of loved ones when they are not physically there. The feeling of being left on our own can trigger intense reactions. When fear of abandonment is triggered, shame and self-blame follow. We find ourselves in a state of distress and destabilization.
While the centerpiece is colorful and bright, it is grotesque, having been transformed from human to creature by the surrounding misery and, above all, its abandonment.
<urn:uuid:57190ce3-495e-48d5-8ec0-91b2a9a74edb>
CC-MAIN-2020-24
https://www.virtosuart.com/blog/artist-paint-feel-of-emotional-abandonment
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396163.18/warc/CC-MAIN-20200527204212-20200527234212-00327.warc.gz
en
0.955883
420
2.78125
3
Weekly Standard: Protect Lizards And Endanger Jobs
Beth Henary Watson is a writer in Texas.
A three-inch lizard scuttled into the spotlight in December after the U.S. Fish and Wildlife Service proposed moving it onto the Endangered Species List. The dunes sagebrush lizard's habitat covers just eight counties on the Texas-New Mexico border, right in the heart of the Permian Basin, a major oil-producing region. Particularly in Texas, industry leaders and local businesses see the action as hostile — another Obama administration environmental policy targeting their successful, energy-sparked economy.
"This is a lizard versus families," says Bill Hammond, president of the Texas Association of Business, the state's largest business interest group. "Nothing is more important than a job."
Setting 1980s Dallas stereotypes aside, oil and gas production is between 12 and 15 percent of the Texas economy. It's more than 70 percent of the economy in the vast and sparsely populated Permian Basin. The 17-county basin produces nearly 20 percent of all domestic crude oil. Of the eight counties in the lizard's habitat, four are in Texas. All those are among the top ten oil-yielding counties in the state.
In its proposal to list the lizard as endangered, U.S. Fish and Wildlife argues that several activities fragment the creature's habitat. Together these constitute a clean sweep of the region's economic drivers: oil and gas (particularly exploration), wind turbine erection, and agriculture. The dunes sagebrush lizard resides only in areas with sandy dunes covered by low-lying shinnery oak trees.
A public comment period closed May 9, and U.S. Fish and Wildlife will decide by mid-December whether to put the lizard on the Endangered Species List. An "endangered" finding triggers an assessment period to define the lizard's range and identify protection strategies. At that time, new surface-disrupting economic activity and perhaps maintenance of existing wells and windmills could be hampered.
Steve Pruett, president and CFO of Midland-based Legacy Reserves LP, explains that stifling exploration threatens the most jobs. He hires subcontractors to operate his rigs, the towering structures used to drill wells. Legacy runs just one rig in the lizard's presumptive habitat, but 131 other rigs are active, each of which drills two wells a month and employs about 150 people.
"We wouldn't be contracting as many wells to be drilled," Pruett says. "Not to mention the general loss of confidence of our investors. We would have less production and less cash to pay out."
According to Permian Basin Petroleum Association president Ben Shepperd, wells produce at diminishing rates, making new exploration vital to retaining blue-collar workers like roughnecks and roustabouts. He cites a study that found a majority of jobs even in the cities of Midland and Odessa depend on oil and gas production.
"If oil and gas were to stop out here, these West Texas towns would just dry up and blow away," Shepperd says. Excluding giants like Chevron, the average Permian Basin Petroleum Association member employs about 10 people.
Texas opponents of listing the lizard dispute the thoroughness of U.S. Fish and Wildlife's science and say they will work cooperatively to rehabilitate the population. Conservation agreements — another way to restore species populations — are already in place in New Mexico.
With the agreements, private landowners, businesses, and the government follow a prearranged plan, although Shepperd says signing on can cost an oil business as much as $20,000 per well. Texas land commissioner Jerry Patterson told an industry rally in Midland in late April that the state's landowners and businesses need a chance to work out agreements with the fish and wildlife service. The state currently enforces mitigation for turtle populations near drilling along the Gulf Coast, an arrangement that followed a court battle.
"We can plant a lot of shinnery oak if we need to," Patterson said. "It's not the lizard or us. It's both of us."
Even if Texas, with New Mexico's help, is able to avoid endangered species classification for the dunes sagebrush lizard, a proposed listing for another species in the Permian Basin, the lesser prairie chicken, lurks in the future.
Hammond, with the business association, says the effort to list the lizard as endangered is but one grievance his group has with the Obama administration, which he says is engaged in a "job-killing enterprise" against Texas. Texas's showdown with the Environmental Protection Agency over air permitting is the major concern. "Industry has spent literally trillions of dollars to bring air quality to a level that is perfectly acceptable," according to Hammond.
Industry efforts aside, last year the EPA ruled that certain permits issued by the Texas Commission on Environmental Quality — which had regulatory authority under the Clean Air Act — do not comply with federal law. Operating under the permits since 1994, more than 100 businesses have been left in legal limbo while Texas contests the decision.
One affected business is EBAA Iron, Inc., a family-owned iron foundry with 250 employees at plants in Eastland and Albany, Texas. Until last year, the foundry ran under a flexible permit issued by the state environmental agency. The flexible permits emphasized results over an entire organization, while EPA concerns itself with individual sources of emissions. Jim Keffer, president of EBAA Iron, Inc., says his staff has contacted EPA for guidance but keeps getting put off. The business, which opened in 1964, may be operating illegally.
Keffer runs the iron foundry full time, but he also serves as the state representative for his area and chairs the Texas House Energy Resources Committee. "Everywhere you look, every time you turn around, the federal government is trying to stop exploration, to stop the use of fossil fuels," Keffer says. "We're trying to work on self-reliance. We're trying to explore and bring to the country the resources that Texas has been blessed with."
While the Texas Commission on Environmental Quality's mission requires it to consider economic impacts, U.S. Fish and Wildlife and EPA don't have to. Keffer points out EPA's December emergency order to a Fort Worth company under the Safe Drinking Water Act. The agency acted in response to alleged contamination of two drinking water wells, even though the state's gas regulatory agency had been on the scene. More than a mile separates the shallow wells from Range Resources' natural gas wells. The company says it has spent $1.5 million defending itself against the EPA order.
"The EPA was having a press conference before they had all the facts," Keffer says. "If you sit back and take in all that's happened, it's easy to look at a conspiracy theory."
The Texas Public Policy Foundation, a free-market think tank, held a briefing last month on 10 proposed and adopted rules it says constitute an "Approaching EPA Avalanche." The organization is most concerned with EPA's order that states regulate greenhouse gas emissions from major sources. The Lone Star State alone refused to comply, although at least 20 others are also suing the agency over greenhouse gas regulations.
TPPF scholar Kathleen Hartnett White, a former state environmental director, says the rules also require "Rolls Royce" emissions control technologies on industrial boilers and certain cement kilns. Unions claim the boiler rule alone could send 700,000 U.S. jobs to countries less concerned about air quality. EPA is also considering tightening standards on "coarse particulate matter," White says, and the proposed rule would drop the exemption for rural dust, a fact of life in West Texas. Remediation techniques for rural dust suggested by EPA include watering dirt roads and no-till days for farmers.
Because of the makeup of its economy, including the nation's largest petrochemical complex in Houston, Texas will be disproportionately affected by most air quality regulations. White says it doesn't matter if Washington is deliberately picking on her state, though the administration's actions speak to a strong desire to make alternatives to fossil fuels more appealing.
"We are a bad example," White says. "We are not what the administration would like to see."
Copyright 2020 The Weekly Standard.
<urn:uuid:5e9afc0a-12ef-4137-be89-0bd36857a013>
CC-MAIN-2024-10
https://www.kunc.org/npr-news/2011-06-03/weekly-standard-protect-lizards-and-endanger-jobs
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00799.warc.gz
en
0.956586
1,716
2.53125
3