Inhibition by ammonium ion of germination of unactivated spores of Bacillus cereus T induced by L-alanine and inosine. Studies were carried out on the inhibitory effect of NH4+ on germination of spores of Bacillus cereus T induced by L-alanine and inosine. Kinetic analysis showed that NH4+ inhibited the germination competitively. Its inhibitory effect was greater when the unactivated spores had been preincubated with L-alanine. NH4+ did not inhibit the response of unactivated spores to L-alanine during preincubation. These results suggest that L-alanine sensitizes the spores to the inhibitory effect of NH4+.
{ "pile_set_name": "PubMed Abstracts" }
Cuc Phuong is very diverse in flora species composition and structure. Although its area equals only 0.07% of the total area nationwide, it accounts for 57.93% of the flora families, 36.09% of the genera and 17.27% of the species recorded for the whole country. Cuc Phuong NP has 20,473 ha of forest out of a total land area of 22,200 ha (92.2%). The vegetation cover here is evergreen tropical rainforest; according to Thai Van Trung (1976), Cuc Phuong belongs to the type of closed humid evergreen tropical rainforest. Cuc Phuong has a considerable area of primary forest, mainly concentrated on the limestone mountains and in the valleys at the centre of the NP, and it is this special location that gives the park its rich species composition. Cuc Phuong contains many non-indigenous plant species established alongside many indigenous ones. The indigenous species are represented by the Lauraceae, Magnoliaceae and Meliaceae families; species of the Dipterocarpaceae family represent the non-indigenous species from the warmer southern region, while Fagaceae species represent those coming from the north. Survey results of recent years (2008) recorded 2,234 species in 917 genera and 231 families. Many of them are of high value: 430 medicinal plant species, 229 edible plant species, 240 species that can be used as medicine or dye, 137 species that can provide tannin, etc.; 13 species are listed in the Vietnam Red Data Book 2000 and the IUCN Red List 2004. Some outstanding species are Dalbergia tonkinensis, Parashorea chinensis, Erythrophloeum fordii and Nageia fleyri. There are 11 endemic plant species, including Camellia cucphuongensis, Begonia cucphuongensis, Pistacia cucphuongensis, Amorphophallus dzui, Vietorchis aurea, Carex trongii, etc. The vertebrate fauna of Cuc Phuong is rich and diverse. For mammals there are 133 species, accounting for 51.35% of the total nationwide (259 species). For birds, Cuc Phuong NP is assessed by BirdLife International as an Important Bird Area of Vietnam; 336 species have now been recorded here, accounting for 39.25% of the total bird species nationwide (856 species). For reptiles, Cuc Phuong NP has 76 species, accounting for 26.67% of the nation's total (296 species). For amphibians, it has 46 species, accounting for 28.39% of the nation's total (162 species). For fish, it has 66 species, accounting for 10.81% of the nation's total freshwater fish species (610 species). Of the 659 vertebrate species in total, 85 have been recorded in the Vietnam Red Book, among them Cuc Phuong endemics such as Trachypithecus francoisi delacouri, Callosciurus erythraeus cucphuongensis, Tropidophorus cucphuongensis, Rana maosonensis, Pterocryptis cucphuongensis, etc. - Invertebrate fauna: The invertebrate fauna in Cuc Phuong is even more abundant and diverse. In the period from 2000-2008, about 7,400 invertebrate samples were collected, comprising 1,670 species and species types of insect, 14 crustacean species, 18 species and types of myriapod, 16 arachnid species, 52 species and species types of annelid, 129 species and species types of mollusc, and many other species of lower animals.
However, because the lower animal species have not received much attention and research on them has rarely been done, the figures mentioned are preliminary ones only. In reality, the invertebrates of Cuc Phuong are extremely rich and diverse, and the real figures are estimated to be much higher. - Palaeontology: Relics and fossils of prehistoric animals had already been discovered, excavated and published earlier. More recently, in 2000, a marine animal fossil was found in Cuc Phuong National Park. The fossil, exposed on the surface of limestone rock, occurs in the Dong Giao formation of the Middle Triassic (T2), about 200 to 230 million years old, and includes at least 12 intact vertebrae, 10 ribs and some other bones. It has been preliminarily identified as a placodont (Placodontia, plate-toothed reptiles). According to scientists, this is the first discovery of Placodontia in Southeast Asia. - Socio-economic situation - Ethnicity: Cuc Phuong National Park is located in the region of 14 communes that include two main ethnic groups: the Muong, accounting for 76.6% of the total population in the region, and the Kinh, accounting for 23.4%. The two groups have long lived together as a community, both economically and culturally. In recent years, in the process of innovation, the market economy has penetrated the Muong villages, which are gradually losing their cultural characteristics. However, some villages in remote areas still retain their customs and festivals, such as the gong festival, which remain imbued with Muong culture. These intangible cultural values are human resources that can serve to promote the development of eco-tourism, culture and the humanities in the future.
{ "pile_set_name": "Pile-CC" }
Q: Ajax in MVC @Ajax.Helpers and in jQuery.Ajax I know a bit of Ajax, and now I am learning MVC + jQuery. I want to know if the two Ajaxes in MVC, Ajax.Helper and jQuery.Ajax, use the same base? Are they the same as the normal Ajax I learned using XMLHttpRequest (xhr)? If not, what is the preferred way of doing it? I am new to this, and a bit confused, so please don't mind if my question doesn't make sense to you. Thank you, Tom (edited) I wrote some MVC3 Razor: <div id="MyAjaxDiv">@DateTime.Now.ToString("hh:mm:ss tt")</div> @Ajax.ActionLink("Update", "GetTime", new AjaxOptions { UpdateTargetId = "MyAjaxDiv", InsertionMode = InsertionMode.Replace, HttpMethod = "GET" }) When I open up the source code in Notepad, I get: <div id="MyAjaxDiv">06:21:10 PM</div> <a data-ajax="true" data-ajax-method="GET" data-ajax-mode="replace" data-ajax-update="#MyAjaxDiv" href="/Home/GetTime">Update</a> So, because I have only included ~/Scripts/jquery.unobtrusive-ajax.min.js, the MVC helpers must be using jQuery to work. I had the impression that they might need MicrosoftAjax.js, MicrosoftMVCAjax.js, etc.; it doesn't look like it now. Are those for MVC2 and aspx pages? A: Here's an excerpt from an MVC book: MVC version 3 introduced support for jQuery Validation, whereas earlier versions relied on JavaScript libraries that Microsoft produced. These were not highly regarded, and although they are still included in the MVC Framework, there is no reason to use them. jQuery has really become the standard for Ajax-based requests. You may still use your "XMLHttpRequest xhr" way, but jQuery has made it easier to perform the same thing.
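For intuition, what follows is a hedged sketch, not the actual contents of jquery.unobtrusive-ajax.min.js: a hand-rolled jQuery handler that reads the same data-ajax-* attributes shown in the generated markup above and performs the equivalent ajax call. The real script supports many more options; only the attribute names and the URL here come from the markup, while the handler itself is an assumption for illustration.

$(document).on('click', 'a[data-ajax="true"]', function (e) {
    e.preventDefault();                   // suppress the normal navigation
    var link = $(this);
    $.ajax({
        url: link.attr('href'),           // "/Home/GetTime"
        type: link.data('ajax-method'),   // "GET"
        success: function (html) {
            // data-ajax-mode="replace": swap the target's contents
            $(link.data('ajax-update')).html(html);
        }
    });
});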
{ "pile_set_name": "StackExchange" }
Q: CSS attribute selector not working when the attribute is applied using javascript? .variations_button[style*="display: none;"] + div This is my CSS selector, which works fine if the style attribute is already in the DOM on page load: http://jsfiddle.net/xn3y3hu0/ However, if I hide the .variations_button div using JavaScript, the selector is not working anymore: $(document).click(function(){ $('.variations_button').hide(); }); http://jsfiddle.net/55seee1r/ Any idea what's wrong? It looks like the CSS is not refreshing, because if I edit another property using the inspector, the color changes to red instantly. A: Because the selector you use, [style*="display: none;"], looks for the presence of the exact string "display: none;" in the style attribute, it requires that the browser's JavaScript engine insert that precise string, including the white-space character (incidentally, in Chrome 39/Windows 8.1 it does). For your particular browser you may need to remove the space, and to target most browsers, use both versions of the attribute-value string, giving: .variations_button[style*="display: none;"] + div, .variations_button[style*="display:none;"] + div { color: red; } <div class="variations_button" style="display: none;">asd</div> <div>test</div> Of course, it remains much simpler to use classes to hide an element, toggling that class with JavaScript, and using the class as part of the CSS selector, for example: $('.variations_button + div').on('click', function() { $('.variations_button').toggleClass('hidden'); }); .hidden { display: none; } .hidden + div { color: red; } .variations_button + div { cursor: pointer; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="variations_button">asd</div> <div>test</div> As I understand it, the problem of the above not working once jQuery is involved is that jQuery's hide(), show() and toggle() methods seem to update the display property of the element's style property, rather than setting the attribute directly. The updated attribute-value (as represented in the style attribute) seems to be a representation of the style property (derived, presumably, from its cssText). Because the attribute is unchanged, and merely serves as a representation of a property, the CSS attribute-selectors don't, or perhaps can't, match. That said, a somewhat clunky workaround is to directly set the attribute; in the following demo this uses jQuery's attr() method (though the native DOM node.setAttribute() would work equally well): $(document).click(function() { // setting the style attribute of the selected element(s), // using the attr() method, and the available anonymous function: $('.variations_button').attr('style', function(i, style) { // i: the index of the current element from the collection, // style: the current value (before manipulation) of the attribute. 
// caching the cssText of the node's style object: var css = this.style.cssText; // if the string 'display' is not found in the cssText: if (css.indexOf('display') === -1) { // we return the current text plus the appended 'display: none;' string: return css + 'display: none;'; // otherwise: } else { // we replace the string starting with 'display:', followed by an // optional white-space ('\s?'), followed by a matching string of // one or more alphabetic characters (grouping that last string, // parentheses): return css.replace(/display:\s?([a-z]+)/i, function(a, b) { // using the anonymous function available to 'replace()', // a: the complete match, b: the grouped match (a-z), // if b is equal to none we return 'display: block', otherwise // we return 'display: none': return 'display: ' + (b === 'none' ? 'block' : 'none'); }); } }); }); jQuery(document).ready(function($) { $(document).click(function() { $('.variations_button').attr('style', function(i, style) { var css = this.style.cssText; if (css.indexOf('display') === -1) { return css + 'display: none;'; } else { return css.replace(/display:\s?([a-z]+)/i, function(a, b) { return 'display: ' + (b === 'none' ? 'block' : 'none'); }); } }); }); }); .variations_button[style*="display: none;"]+div { color: red; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="variations_button">asd</div> <div>test</div> References: CSS: Substring Matching Attribute-selectors. JavaScript: HTMLElement.style. JavaScript Regular Expressions. String.prototype.indexOf(). String.prototype.replace(). jQuery: attr(). hide(). show(). toggle().
{ "pile_set_name": "StackExchange" }
Q: Question about positive recurrence of a Markov chain Q) Let $\{X_n\}$ be an irreducible Markov chain on a countable set $S$. Suppose that for some $x_0$ and a non-negative function $f:S\to(0,\infty)$, there is a constant $0<\alpha<1$ s.t. $$\mathbb{E}_xf(X_1)\leq \alpha f(x)\text{ for all }x\neq x_0$$ Suppose also that $f(x_0)\leq f(x)$ for all $x$. Show that $\{X_n\}$ is positive recurrent. Let $Y_n = f(X_n)/\alpha^n$ and I can show that $Y_n$ is a supermartingale using the hypothesis. But to show $X_n$ is positive recurrent, since $x_0$ is mentioned, I was thinking of the stopping time $$\tau = \inf\{n\geq 0:X_n = x_0\}$$ Then $Y_{n\wedge \tau}$ is also a supermartingale, $\mathbb{E}_xY_{n\wedge \tau}\leq f(x)$. 1) How can I show that $\mathbb{E}_x\tau < \infty$ which shows that $x_0$ is positive recurrent? 2) How does that show the chain is positive recurrent? A: You need to make use of the following theorem. Theorem. If $X_n$ is a nonnegative supermartingale and $N \le \infty$ is a stopping time, then $EX_0 \ge EX_N$ where $X_\infty = \lim X_n$ (which exists by the martingale convergence theorem). I will continue from where you left off. You've already proven that $Y_n$ is a nonnegative supermartingale under $P_x$, and $\tau$ is a stopping time. Hence $$ f(x) = E_xY_0 \ge E_xY_\tau = E_x(Y_\tau;\tau=\infty) + E_x(Y_\tau;\tau<\infty)$$ $$\ge E_x(Y_\infty; \tau = \infty) = E_x(\lim_{n \rightarrow \infty}f(X_n)/\alpha^n; \tau=\infty) $$ $$ \ge E_x(\lim_{n \rightarrow \infty} f(x_0)/\alpha^n; \tau=\infty) . $$ Since $f(x_0) > 0, 0 < \alpha < 1$ and the term $f(x_0)/\alpha^n \rightarrow \infty$, it follows that $P_x(\tau = \infty) = 0$, and $P_x(\tau < \infty) = 1$ for all $x \in S$. Hence $P_x(X_n=x_0\; i.o.) =1$, and $x_0$ is recurrent. Recall the chain is irreducible and $x_0$ is recurrent, hence all states $x \in S$ are recurrent (irreducibility $ \Rightarrow \rho_{x_0x} > 0$; additionally, $x_0$ recurrent $\Rightarrow \rho_{xx} = 1$, where $\rho_{xy}\equiv P_x(T_y < \infty)$).
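A hedged addendum on the two numbered questions, using the standard Foster-Lyapunov drift bound (this supplements the answer above and is not part of it; the last step assumes additionally that $E_{x_0}f(X_1) < \infty$, which the problem seems to take implicitly). Since $f(x) \ge f(x_0) > 0$ for all $x$, the hypothesis gives, for $x \ne x_0$, $E_xf(X_1) \le \alpha f(x) = f(x) - (1-\alpha)f(x) \le f(x) - (1-\alpha)f(x_0)$. Hence $M_n = f(X_{n\wedge\tau}) + (1-\alpha)f(x_0)(n\wedge\tau)$ is a nonnegative supermartingale under $P_x$, so $(1-\alpha)f(x_0)\,E_x[n\wedge\tau] \le E_xM_n \le E_xM_0 = f(x)$, and letting $n\to\infty$ (monotone convergence) gives $$E_x\tau \le \frac{f(x)}{(1-\alpha)f(x_0)} < \infty \quad\text{for all } x\ne x_0,$$ which answers 1). For 2), decompose the return time $T_{x_0} = \inf\{n\ge 1 : X_n = x_0\}$ over the first step: $E_{x_0}T_{x_0} = 1 + \sum_{y\ne x_0}p(x_0,y)E_y\tau \le 1 + \frac{E_{x_0}f(X_1)}{(1-\alpha)f(x_0)} < \infty$, so $x_0$ is positive recurrent; and since positive recurrence is a class property, irreducibility makes every state positive recurrent.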
{ "pile_set_name": "StackExchange" }
An evaluation of 222Rn concentrations in Idaho groundwater. Factors potentially correlated with 222Rn concentrations in groundwater were evaluated using a database compiled by the U.S. Geological Survey. These included chemical and radiological factors, and both well depth and discharge rate. The 222Rn concentrations contained within this database were examined as a function of latitude and longitude. It was observed that the U.S. Geological Survey sample locations for 222Rn were not uniformly distributed throughout the state. Hence, additional samples were collected in southeastern Idaho, a region where few 222Rn in water analyses had been performed. 222Rn concentrations in groundwater, in Idaho, were found using ANOVA (alpha = 0.05) to be independent of the chemical, radiological, and well parameters thus far examined. This lack of correlation with other water quality and well parameters is consistent with findings in other geographical locations. It was observed that an inverse relationship between radon concentration and water hardness may exist.
{ "pile_set_name": "PubMed Abstracts" }
Experience the legendary battle between the Autobots and Decepticons before their exodus to Earth in the untold story of the civil war for their home planet, Cybertron. Two distinct and intertwined campaigns chronicle the Autobots' heroism in the face of total annihilation and the Decepticons' unquenchable thirst for power. Play both campaigns and battle as your favorite Transformer characters in the war that spawned one of the most brutal conflicts of all time.
{ "pile_set_name": "Pile-CC" }
Native Village of Pedro Bay Pedro Bay is located at the head of Pedro Bay in Lake Iliamna, 30 miles northeast of Iliamna and 180 miles southwest of Anchorage. Located in a heavily wooded area, with birch, cottonwood, alders, willow and white spruce trees, Pedro Bay has one of the most attractive settings in southwest Alaska. Pedro Bay is accessible by air and water. There is a State-owned 3,000' long by 60' wide gravel airstrip. Scheduled and charter air services are available from Iliamna and Anchorage. Barge service is available from Naknek via the Kvichak River. Goods are also sent by barge from Homer to Iliamna Bay on the Cook Inlet side and portaged over a 14-mile road to Pile Bay, 10 miles to the east. The Dena'ina Indians have inhabited this area for hundreds of years, and still live in the area. The community was named for a man known as "Old Pedro," who lived in this area in the early 1900s. A post office was established in the village in 1936. St. Nicholas Russian Orthodox Chapel, built in 1890, is on the National Register of Historic Places.
{ "pile_set_name": "Pile-CC" }
Cummings Machine Works Cummings Machine Works was a Boston, Massachusetts based business. It was founded by Henry Havelock Cummings in 1881, when Cummings was 23 years old. The company was awarded a United States Defense Department contract to manufacture fixtures in March 1941. The contract amounted to $17,893. The company was among the firms which contributed to the building of the Boston Opera House, completed in 1909, supplying steelworks used in the construction of the stage. Cummings Machine Works has been credited with the development of the sally saw. A patent filed in 1945, and assigned to the company, describes a saw with a circular blade. The blade could be rotated between horizontal and vertical, thus allowing a tree to be felled, limbed, and bucked with one saw. Other inventions included a hydraulic hospital bed, an automatic doughnut machine, a teardrop vehicle, and Hookups. The company's last owners were Robert M. Mustard, Sr., president, and Lewis W. Mustard, treasurer. Its last known address was 10 Melcher Street in Boston, Massachusetts; the company went out of business in 1958. References Category:Manufacturing companies based in Boston Category:History of Boston Category:Defunct manufacturing companies of the United States Category:Defunct companies based in Massachusetts Category:Manufacturing companies established in 1881
{ "pile_set_name": "Wikipedia (en)" }
Q: How to type String content:encoded = "Hello"; in java? How to type String content:encoded = "Hello"; in Java? Eclipse keeps telling me "syntax error on tokens, delete these tokens"? setDescription(String content:encoded) { _description = content:encoded; } A: Because content:encoded is a syntax error. Names in Java accept only letters, digits, $ and _ (that's pretty much it), and a variable cannot start with a number. To be clear, remove the : from the variable name, because : is illegal in a name and has another meaning in the language. Quote from the article below: Variable names are case-sensitive. A variable's name can be any legal identifier — an unlimited-length sequence of Unicode letters and digits, beginning with a letter, the dollar sign $, or the underscore character _. The convention, however, is to always begin your variable names with a letter, not $ or _. Additionally, the dollar sign character, by convention, is never used at all. You may find some situations where auto-generated names will contain the dollar sign, but your variable names should always avoid using it. A similar convention exists for the underscore character; while it's technically legal to begin your variable's name with _, this practice is discouraged. White space is not permitted. Subsequent characters may be letters, digits, dollar signs, or underscore characters. Here read more about it: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html A: If you are creating the method setDescription then it would be: public void setDescription(String content_encoded) { _description = content_encoded; } Here public is the modifier, void is the return type, setDescription is the method name, String is the parameter type, and content_encoded is the variable holding the string value.
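To make the naming rules concrete, here is a small hedged illustration (not from either answer above; the variable names are arbitrary) of what the compiler accepts and rejects:

// legal identifiers use letters, digits, $ and _, and cannot start with a digit
String content_encoded = "Hello";   // fine: underscore is allowed
String $contentEncoded = "Hello";   // compiles, but $ is discouraged by convention
// String content:encoded = "X";    // syntax error: ':' is not allowed in a name
// String 2encoded = "X";           // syntax error: cannot begin with a digit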
{ "pile_set_name": "StackExchange" }
A senior minister in Bermuda has said the territory could legally regulate the production and sale of cannabis for “recreational”, non-medical purposes. Michael Weeks, the Minister of Social Development in the North Atlantic island territory of Bermuda, said: “[Cannabis] legalisation is something that’s going to have to be talked about and may have to be sooner rather than later. There’s an almost worldwide trend. Right now, here, it is medical use, but some countries have legalised for recreational purposes.” Weeks added that the government was devising a report with various drug policy proposals. He said the Cabinet would consider the document in the coming weeks, and that it would be published by the end of the year. Bermuda decriminalised the personal possession of less than seven grams of cannabis in December 2017, following an amendment to the Misuse of Drugs Act. Following this amendment’s implementation, Weeks described the government’s reasoning behind the reform: “It is our hope and our belief that taking this important action will help to prevent more young black men from being placed on the proverbial ‘stop list’ and have their lives completely altered by virtue of not being able to travel to the United States to pursue further education or to seek other opportunities,” he said at a press conference. The territory’s close proximity to the US means that many young Bermudans travel to the country in search of employment. Having a cannabis possession conviction can considerably complicate the US visa application process, and in some cases result in being barred from entry altogether. The government in Bermuda has also permitted medical cannabis access, with two licenses being provided so far to doctors to prescribe the drug, the Royal Gazette reports.
{ "pile_set_name": "OpenWebText2" }
Q: Handling worker death in multiprocessing Pool I have a simple server:

from multiprocessing import Pool, TimeoutError
import time
import os

if __name__ == '__main__':
    # start worker processes
    pool = Pool(processes=1)
    while True:
        # evaluate "os.getpid()" asynchronously
        res = pool.apply_async(os.getpid, ())  # runs in *only* one process
        try:
            print(res.get(timeout=1))  # prints the PID of that process
        except TimeoutError:
            print('worker timed out')
        time.sleep(5)
    pool.close()
    print("Now the pool is closed and no longer available")
    pool.join()
    print("Done")

If I run this I get something like: 47292 47292 Then I kill 47292 while the server is running. A new worker process is started but the output of the server is: 47292 47292 worker timed out worker timed out worker timed out The pool is still trying to send requests to the old worker process. I've done some work with catching signals in both server and workers and I can get slightly better behaviour, but the server still seems to be waiting for dead children on shutdown (i.e. pool.join() never ends) after a worker is killed. What is the proper way to handle workers dying? Graceful shutdown of workers from a server process only seems to work if none of the workers has died. (On Python 3.4.4 but happy to upgrade if that would help.) UPDATE: Interestingly, this worker timeout problem does NOT happen if the pool is created with processes=2 and you kill one worker process, wait a few seconds and kill the other one. However, if you kill both worker processes in rapid succession then the "worker timed out" problem manifests itself again. Perhaps related is that when the problem occurs, killing the server process will leave the worker processes running. A: This behavior comes from the design of multiprocessing.Pool. When you kill a worker, you might kill the one holding the call_queue.rlock. When this process is killed while holding the lock, no other process will ever be able to read from the call_queue anymore, breaking the Pool, as it cannot communicate with its workers anymore. So there is actually no way to kill a worker and be sure that your Pool will still be okay afterwards, because you might end up in a deadlock. multiprocessing.Pool does not handle workers dying. You can try using concurrent.futures.ProcessPoolExecutor instead (with a slightly different API), which handles the failure of a process by default. When a process dies in ProcessPoolExecutor, the entire executor is shut down and you get back a BrokenProcessPool error. Note that there are other deadlocks in this implementation, which should be fixed in loky. (DISCLAIMER: I am a maintainer of this library.) Also, loky lets you resize an existing executor using a ReusablePoolExecutor and the method _resize. Let me know if you are interested, I can provide you some help starting with this package. (I realized we still need a bit of work on the documentation... 0_0)
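To see the suggested alternative in action, here is a minimal hedged sketch (not from the original answer; it assumes a POSIX system, since it uses os.kill with SIGKILL) showing ProcessPoolExecutor surfacing a dead worker as a BrokenProcessPool error instead of hanging:

import os
import signal
import time
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def worker_pid():
    return os.getpid()

if __name__ == '__main__':
    executor = ProcessPoolExecutor(max_workers=1)
    pid = executor.submit(worker_pid).result()
    print('worker pid:', pid)
    os.kill(pid, signal.SIGKILL)   # simulate a worker dying
    time.sleep(1)                  # give the executor time to notice
    try:
        executor.submit(worker_pid).result(timeout=5)
    except BrokenProcessPool:
        # unlike multiprocessing.Pool, the failure is detected and reported;
        # the broken executor must be replaced with a fresh one
        print('pool is broken; recreating the executor')
        executor = ProcessPoolExecutor(max_workers=1)
        print('new worker pid:', executor.submit(worker_pid).result())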
{ "pile_set_name": "StackExchange" }
Nudists stripped down and shouted anti-corporate slogans when a second approval for the nude ban passed in City Hall. You've got to admire their tenacity. In January, a judge dismissed a lawsuit filed by San Francisco nudists that claimed their First Amendment right to free speech was violated by Supervisor Scott Wiener’s nudity ban. But the decision didn’t deter nudists from their fight to strip down in The City – they’ve filed an amended suit and are staging a protest and ‘body freedom parade’ this Saturday. The protest is scheduled at noon on July 20, in front of City Hall. The parade will follow, but there’s no word on what route it will take. Under the ban, nudists face a $100 fine for their first offense, with increases for additional infractions. The amended lawsuit includes five plaintiffs: Russell “Trey” Allen, Mitch Hightower, George Davis, Russell Mills, and Oxane “Gypsy” Taub. Although they initially sued The City for alleged free speech violations, they’ve now taken a different approach. The new suit alleges that the nudity ban has been enforced against them in a discriminatory fashion. The lawsuit states that several plaintiffs were arrested for baring it all at a rally on February 1, and the others were arrested during a nude dance on February 27. However, plaintiffs were not arrested when they participated in nude activities led by other organizations, leading them to believe that they are being targeted by The City.
{ "pile_set_name": "Pile-CC" }
Menu What’s your WHY? What’s your WHY?! Did you know one of the things I hate the most is pictures and videos of myself. Yes, you got that right, I’ve made a living off of posting pictures and videos of myself to social media. Something I highly DISLIKE. . So WHY?! On the days when I really don’t feel like taking another picture, making another post, coming up with the perfect thing to say, I remind myself why the heck I started this in the first place. . Sharing my story. Sharing my struggles. Helping other women not feel ALONE. Helping other women reach their true potential. This is what I get to do on a daily basis. . To me there is nothing more REWARDING than helping another woman not feel the feelings I felt for so many years. And yes I have my days where I just don’t feel like it too, but then I remember WHY I started this whole thing in the first place. I realize all that I’ve accomplished. And I remember that I’m doing EXACTLY what I set out to do. 💕 . What’s your WHY? Drop it in the comments
{ "pile_set_name": "Pile-CC" }
Presenting features and diagnosis of rabies. The early clinical features important in the establishment of a diagnosis of rabies are described from experience of 23 fatal cases in Sri Lanka. The importance of the "fan test" as a diagnostic sign is stressed. The earliest features of the disease may suggest hysteria if a history of a bite from a rabid animal is not obtained. In a district in which there is an outbreak of rabies, cases of rabies hysteria may also develop.
{ "pile_set_name": "PubMed Abstracts" }
Q: Restrict Action in ASP.Net MVC I am trying to restrict an action so that it cannot be called unless the required parameter is present in the URL. For example, I have a Login action that should only be accessible when it is hit from another web application, which redirects to it with a query string parameter. However, it can also be accessed without the parameter, and I want to restrict this. Restrict this: https://localhost:44300/account/login Right URL: https://localhost:44300/Account/Login?returnUrl=https%3A%2F%2Fdevatea.portal.azure-api.net%2F%2F A: Based on your requirements, I think the easiest way would be to just add a check to the login action and return a 404 Not Found if the returnUrl is empty or null.

public ActionResult Login(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        return HttpNotFound();

    // remaining code for login
    // ...
}
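If several actions need the same guard, it can be factored into a reusable filter. The following is a hedged sketch (not part of the original answer) assuming classic ASP.NET MVC's System.Web.Mvc types; the attribute name is invented for illustration:

public class RequireQueryParameterAttribute : ActionFilterAttribute
{
    private readonly string _name;

    public RequireQueryParameterAttribute(string name)
    {
        _name = name;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // short-circuit with a 404 when the required query parameter is absent
        var value = filterContext.HttpContext.Request.QueryString[_name];
        if (string.IsNullOrEmpty(value))
            filterContext.Result = new HttpNotFoundResult();
    }
}

// usage:
// [RequireQueryParameter("returnUrl")]
// public ActionResult Login(string returnUrl) { ... }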
{ "pile_set_name": "StackExchange" }
1942 Iowa Pre-Flight Seahawks football team The 1942 Iowa Pre-Flight Seahawks football team represented the United States Navy pre-flight aviation training school at the University of Iowa as an independent during the 1942 college football season. The team compiled a 7–3 record and outscored opponents by a total of 211 to 121. The 1942 team was known for its difficult schedule, including Notre Dame, Michigan, Ohio State, Minnesota, Indiana, Nebraska, and Missouri. The team was ranked No. 2 among the service teams in a poll of 91 sports writers conducted by the Associated Press. The Navy's pre-flight aviation training school opened on April 15, 1942, with a 27-minute ceremony during which Iowa Governor George A. Wilson turned over certain facilities at the University of Iowa to be used for the training of naval aviators. At the time, Wilson said, "We are glad it is possible to place the facilities of this university and all the force and power of the state of Iowa in a service that is today most vital to safeguarding our liberties." The first group of 600 air cadets was scheduled to arrive on May 28. Bernie Bierman, then holding the rank of major, was placed in charge of the physical conditioning program at the school. Bierman had been the head coach of Minnesota from 1932 to 1941 and served as the head coach of the Iowa Pre-Flight team in 1942. Larry Snyder, previously the track coach at Ohio State, was assigned as Bierman's assistant. Don Heap, Dallas Ward, Babe LeVoir, and Trevor Reese were assigned as assistant coaches for the football team. In June 1942, Bierman addressed the "misconception" that the Iowa pre-flight school was "merely a place for varsity athletics." He said: "Our purpose here is to turn out the toughest bunch of flyers the world has ever seen and not first class athletes." Two Seahawks were named to the 1942 All-Navy All-America football team: George Svendsen at center and Dick Fisher at left halfback. In addition, Bill Kolens (right tackle), Judd Ringer (right end), George Benson (quarterback), and Bill Schatzer (left halfback) were named to the 1942 All-Navy Preflight Cadet All-America team. Schedule Roster References Iowa Pre-Flight Category:Iowa Pre-Flight Seahawks football seasons Iowa Pre
{ "pile_set_name": "Wikipedia (en)" }
The effect of inhibiting the parasympathetic nervous system on insulin, glucagon, and glucose will be examined in normal weight and obese men and women. Furthermore, the importance of early insulin release will be examined. Each subject will undergo 4 treatments: 1) saline infusion, 2) brief infusion, 3) atropine infusion and 4) atropine and insulin infusion. During each of the treatments, subjects will ingest a mixed meal containing 600 kcal and will undergo a blood sampling protocol in which arterialized venous blood samples will be drawn over a 4 hour period of time.
{ "pile_set_name": "NIH ExPorter" }
/*
Copyright The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by client-gen. DO NOT EDIT.

package v1

type JobExpansion interface{}
{ "pile_set_name": "Github" }
Project Summary/Abstract. Group A streptococci (GAS; Streptococcus pyogenes) are remarkable for the wide range of diseases they cause in humans, their sole biological host. Yet, most infections are mild and involve one of two tissues - the epithelial surface of the throat or skin - giving rise to pharyngitis or impetigo, respectively. A long-term goal is to better understand the distinct pathogenic mechanisms leading to pharyngitis and impetigo. A primary focus of the proposal is the regulation of pili expression in GAS. Pilus-associated proteins mediate adherence to epithelial cells and enhance superficial infection at the skin. Pili correspond to the T-antigens of GAS. All strains examined have pilus genes, however, many natural GAS isolates lack T-antigen. The hypothesis to be tested in Aim 1 states that organisms recovered from a carrier state and/or invasive disease are significantly more likely to have defects in pilus production, as compared to isolates derived from cases of pharyngitis or impetigo. Aim 1 seeks to define the relationship between defects in pilus expression and disease. The nra/rofA locus encodes a "stand alone" response regulator that affects the transcription of pilus genes; nra and rofA denote discrete lineages of alleles. Both Nra and RofA can have positive or negative regulatory effects on pilus gene transcription, depending on the GAS isolate or strain. The hypothesis to be tested in Aim 2 states that there are strain-specific differences among modulators of pilus gene expression that lie in a pathway upstream of Nra/RofA. Aim 2 seeks to identify regulators of pilus gene transcription having a differential presence among strains. The distribution of Nra and RofA among GAS is strongly correlated with subpopulations of strains having a tendency to cause infection at either the throat or skin. Nra and RofA are global regulators of GAS gene transcription. Two hypotheses will be addressed in Aim 3: (i), that co-regulated non-pilus genes act in concert with pili to cause disease; and (ii), that Nra and RofA confer differential transcription of downstream genes. Aim 3 seeks to identify genes of the Nra and RofA regulons, and to test their role in virulence. Through a better understanding of the molecular mechanisms used by GAS to persist in their primary ecological niches - the throat and skin of the human host - will come new knowledge on how best to interfere with these vital processes. Effective control and prevention measures that disrupt the chain of transmission of GAS will result in a decreased burden of the more severe GAS diseases (toxic shock syndrome, rheumatic heart disease) which have a high morbidity and mortality for many people throughout the world.
{ "pile_set_name": "NIH ExPorter" }
[Cite as State v. McDougald, 2016-Ohio-5080.] IN THE COURT OF APPEALS OF OHIO FOURTH APPELLATE DISTRICT SCIOTO COUNTY STATE OF OHIO, : Case No. 16CA3736 Plaintiff-Appellee, : v. : DECISION AND JUDGMENT ENTRY JERONE MCDOUGALD, : RELEASED: 7/15/2016 Defendant-Appellant. : APPEARANCES: Jerone McDougald, Lucasville, OH, pro se appellant. Mark E. Kuhn, Scioto County Prosecuting Attorney, and Jay S. Willis, Scioto County Assistant Prosecuting Attorney, Portsmouth, OH, for appellee. Harsha, J. {¶1} Jerone McDougald appeals the judgment denying his fifth petition for postconviction relief and his motion for leave to file a motion for new trial. McDougald contends that the court erred in denying his petition, which raised claims of ineffective assistance of his trial counsel. He additionally argues that the court erred in denying his motion for leave to file a motion for new trial, but did not assign any errors regarding this decision. {¶2} We reject McDougald’s claims. He failed to demonstrate the requirements necessary for the trial court to address the merits of his untimely claims in his fifth petition for postconviction relief. Moreover, res judicata barred this successive petition because he could have raised these claims on direct appeal or in one of his earlier postconviction petitions. Finally, because he failed to assign any error regarding the trial court’s denial of his motion for leave to file a motion for new trial, we need not address his arguments regarding that decision. {¶3} Therefore, we affirm the judgment of the trial court denying his petition and motion. I. FACTS (Footnote 1: Except where otherwise noted, these facts are taken from our opinion in State v. McDougald, 4th Dist. Scioto Nos. 14CA3649 and 15CA3679, 2015-Ohio-5590, appeal not accepted for review, State v. McDougald, 144 Ohio St.3d 147, 2016-Ohio-467, 845 N.E.3d 245.) {¶4} Authorities searched a premises in Portsmouth and found crack cocaine, money, digital scales, and a pistol. They arrested the two occupants of the residence, McDougald and Kendra White, at the scene. Subsequently, the Scioto County Grand Jury returned an indictment charging McDougald with drug possession, drug trafficking, possession of criminal tools, and the possession of a firearm while under disability. McDougald pleaded not guilty to all charges. {¶5} At the jury trial Kendra White testified that McDougald used her home to sell crack cocaine and that she sold drugs on his behalf as well. She also testified that the digital scales belonged to McDougald and, although the pistol belonged to her ex-boyfriend, Benny Simpson (who was then incarcerated), McDougald asked her to bring it inside the home so that he would feel more secure. White explained that Simpson previously used the pistol to shoot at her, but threw it somewhere in the backyard when he left. Simpson then allegedly called White from jail and instructed her to retrieve the pistol. White complied and then hid it “under the tool shed” until McDougald instructed her to retrieve it and bring it inside the house. White confirmed that she saw McDougald at the premises with the gun on his person. {¶6} Jesse Dixon and Melinda Elrod both testified that they purchased crack cocaine from McDougald at the residence. Shawna Lattimore testified that she served as a “middleman” for McDougald's drug operation and also helped him transport drugs from Dayton. She testified that she also saw McDougald carry the pistol.
{¶7} The jury returned guilty verdicts on all counts. The trial court sentenced McDougald to serve five years on the possession count, nine years for trafficking, one year for the possession of criminal tools, and five years for the possession of a firearm while under disability. The court ordered the sentences to be served consecutively for a total of twenty years imprisonment. The sentences were included in a judgment entry filed April 30, 2007, as well as a nunc pro tunc judgment entry filed May 16, 2007. {¶8} In McDougald's direct appeal, where he was represented by different counsel than his trial attorney, we affirmed his convictions and sentence. State v. McDougald, 4th Dist. Scioto No. 07CA3157, 2008-Ohio-1398. We rejected McDougald's contention that because the only evidence to link him to the crimes was “the testimony of admitted drug addicts and felons,” the verdicts were against the manifest weight of the evidence: * * * appellant's trial counsel skillfully cross-examined the prosecution's witnesses as to their statuses as drug addicts and convicted felons. Counsel also drew attention to the fact that some of the witnesses may actually benefit from the testimony that they gave. That evidence notwithstanding, the jury obviously chose to believe the prosecution's version of the events. Because the jury was in a better position to view those witnesses and determine witness credibility, we will not second-guess them on these issues. Id. at ¶ 8, 10. {¶9} In January 2009, McDougald filed his first petition for postconviction relief. He claimed that he was denied his Sixth Amendment right to confrontation when the trial court admitted a drug laboratory analysis report into evidence over his objection. The trial court denied the petition, and we affirmed the trial court's judgment. State v. McDougald, 4th Dist. Scioto No. 09CA3278, 2009-Ohio-4417. {¶10} In October 2009, McDougald filed his second petition for postconviction relief. He again claimed that he was denied his Sixth Amendment right of confrontation when the trial court admitted the drug laboratory analysis report. The trial court denied the petition, and McDougald did not appeal the judgment. {¶11} In July 2014, McDougald filed his third petition for postconviction relief. He claimed that: (1) the trial court lacked jurisdiction to convict and sentence him because the original complaint filed in the Portsmouth Municipal Court was based on false statements sworn to by the officers; (2) the prosecuting attorney knowingly used and relied on false and perjured testimony in procuring the convictions against him; and (3) the state denied him his right to due process by withholding exculpatory evidence, i.e., a drug task force report. McDougald attached the report, the municipal court complaints, a portion of the trial transcript testimony of Kendra White, his request for discovery, and the state's answer to his request for discovery to his petition. The trial court denied the petition because it was untimely and did not fall within an exception justifying its late filing. McDougald appealed from the trial court's judgment denying his third petition for postconviction relief. {¶12} In December 2014, McDougald filed his fourth petition for postconviction relief. He claimed that his sentence is void because the trial court never properly entered a final order in his criminal case. The trial court denied the petition. 
McDougald appealed from the trial court's judgment denying his fourth petition for postconviction relief. {¶13} We consolidated the appeals and affirmed the judgments of the trial court denying his third and fourth petitions for postconviction relief. McDougald, 2015-Ohio-5590. We held that McDougald failed to establish the requirements necessary for the trial court to address the merits of his untimely claims and that res judicata barred the claims because he either raised them on direct appeal or could have raised them on direct appeal or in one of his previous petitions for postconviction relief. Id. {¶14} In November 2015, over eight and one-half years after he was sentenced, McDougald filed his fifth petition for postconviction relief. He argued that his trial counsel had provided ineffective assistance by failing to conduct an independent investigation of various matters, failing to use preliminary hearing testimony of the arresting officer to impeach the state’s case, failing to emphasize Kendra White’s prior statements to the police to impeach her testimony, failing to object to the arresting officer’s testimony that the firearm found at the scene was operable and had a clip and bullets, and failing to counter the state’s response to his objection concerning testimony about an Ohio Bureau of Criminal Investigation (“BCI”) report with evidence that the BCI employee had been timely subpoenaed. {¶15} In December 2015, McDougald filed a motion for leave to file a motion for new trial. He claimed that the state withheld a drug task force report that contained strong exculpatory evidence and that the report proved that the state presented false and perjured testimony at trial. {¶16} After the state responded, the trial court denied the petition and the motion, and this appeal ensued. II. ASSIGNMENTS OF ERROR {¶17} McDougald assigns the following errors for our review: 1. Defendant was prejudiced by trial counsel’s failure to conduct independent investigation to rebut state’s theory of prior acts of the defendant or ask for a mistrial prejudicing defendant’s trial. 2. Defendant was prejudiced by trial counsel’s failure to conduct independ[e]nt investigation and failed to present that the prosecutor knowingly used false and fabricated testimony concerning the gun in violation of defendant[’]s due process prejudicing defendant[’]s trial. 3. Defendant was prejudiced by trial counsel[’]s failure to conduct independent investigation and failed to present that the state knowingly used false and fabricated evidence in violation of defendant’s due process rights and prejudicing defendant’s trial. 4. Defendant was prejudiced by trial counsel’s failure to conduct independent investigation and failed to present that the arresting officer[’]s conduct in admitting and establishing the op[]erability of the f[i]rearm violat[ed] defendant’s due process rights and also evidence [rule] 702-703. 5. Defendant was prejudiced by trial counsel’s failure to raise that BCI tech was subpoenaed within the 7 day requirement pursuant to R.C. 2925.51(C) prejudicing defendant’s 6th amendment rights to confrontation. Trial attorney was ineffective in this regard. III. STANDARD OF REVIEW {¶18} McDougald’s assignments of error contest the trial court’s denial of his fifth petition for postconviction relief. {¶19} The postconviction relief process is a collateral civil attack on a criminal judgment rather than an appeal of the judgment. State v. 
Calhoun, 86 Ohio St.3d 279, 281, 714 N.E.2d 905 (1999). Postconviction relief is not a constitutional right; instead, it is a narrow remedy that gives the petitioner no more rights than those granted by statute. Id. It is a means to resolve constitutional claims that cannot be addressed on direct appeal because the evidence supporting the claims is not contained in the record. State v. Knauff, 4th Dist. Adams No. 13CA976, 2014-Ohio-308, ¶ 18. {¶20} “[A] trial court's decision granting or denying a postconviction relief petition filed pursuant to R.C. 2953.21 should be upheld absent an abuse of discretion; a reviewing court should not overrule the trial court's finding on a petition for postconviction relief that is supported by competent and credible evidence.” State v. Gondor, 112 Ohio St.3d 377, 2006-Ohio-6679, 860 N.E.2d 77, ¶ 58. A trial court abuses its discretion when its decision is unreasonable, arbitrary, or unconscionable. In re H. V., 138 Ohio St.3d 408, 2014-Ohio-812, 7 N.E.3d 1173, ¶ 8. IV. LAW AND ANALYSIS A. Fifth Petition for Postconviction Relief {¶21} In his five assignments of error McDougald asserts that his trial counsel was ineffective for failing to investigate his case and failing to take certain actions during his jury trial. {¶22} R.C. 2953.21(A)(2) provides that a petition for postconviction relief must be filed “no later than three hundred sixty-five days after the expiration of the time for filing the appeal.” McDougald’s fifth petition for postconviction relief was filed over eight years after the expiration of time for filing an appeal from his convictions and sentence so it was untimely. See, e.g., State v. Heid, 4th Dist. Scioto No. 15CA3710, 2016-Ohio-2756, ¶ 15. {¶23} R.C. 2953.23(A)(1) authorizes a trial court to address the merits of an untimely filed petition for postconviction relief only if: (1) the petitioner shows either that he was unavoidably prevented from discovery of the facts upon which he must rely to present the claim for relief or that the United States Supreme Court recognized a new federal or state right that applies retroactively to him; and (2) the petitioner shows by clear and convincing evidence that no reasonable factfinder would have found him guilty but for constitutional error at trial. {¶24} McDougald does not contend that the United States Supreme Court recognized a new right that applied retroactively to him, so he had to prove that he was unavoidably prevented from the discovery of the facts upon which he relied to present his ineffective-assistance-of-counsel claim. “A defendant is ‘unavoidably prevented’ from the discovery of facts if he had no knowledge of the existence of those facts and could not have, in the exercise of reasonable diligence, learned of their existence within the time specified for filing his petition for postconviction relief.” State v. Cunningham, 3d Dist. Allen No. 1-15-61, 2016-Ohio-3106, ¶ 19, citing State v. Holnapy, 11th Dist. Lake No. 2013-L-002, 2013-Ohio-4307, ¶ 32, and State v. Roark, 10th Dist. Franklin No. 15AP-142, 2015-Ohio-3206, ¶ 11. {¶25} The only “new” evidence cited by McDougald in his petition for postconviction relief consisted of an excerpt from the arresting officer’s preliminary hearing testimony, a subpoena issued to a BCI employee, and a CD of Kendra White’s police interview following her arrest. 
He does not explain how either he or his appellate counsel were unavoidably prevented from having access to this evidence at the time he filed his direct appeal. Nor does he indicate how he was unavoidably prevented from discovering them before he filed any of his previous four petitions for postconviction relief. “Moreover, ‘[t]he fact that appellant raises claims of ineffective assistance of counsel suggests that the bases for his claims could have been uncovered if “reasonable diligence” had been exercised.’ ” Cunningham, 2016-Ohio-3106, at ¶ 22, quoting State v. Creech, 4th Dist. Scioto No. 12CA3500, 2013-Ohio-3791, ¶ 18. Therefore, McDougald did not establish that the trial court possessed the authority to address the merits of his untimely fifth petition for postconviction relief. {¶26} Furthermore, res judicata barred his successive petition because he could have raised his claims of ineffective assistance of trial counsel on direct appeal, when he was represented by different counsel, or in one of his earlier petitions for postconviction relief. See State v. Griffin, 9th Dist. Lorain No. 14CA010680, 2016-Ohio-2988, ¶ 12, citing State v. Cole, 2 Ohio St.3d 112 (1982), syllabus (“When the issue of competent trial counsel could have been determined on direct appeal without resort to evidence outside the record, res judicata is a proper basis to dismiss a petition for postconviction relief”); Heid, 2016-Ohio-2756, at ¶ 18 (res judicata barred petitioner from raising ineffective-assistance claim that he raised or could have raised in prior petitions for postconviction relief); State v. Edwards, 4th Dist. Ross No. 14CA3474, 2015-Ohio-3039, ¶ 10 (“claims of ineffective assistance of trial counsel are barred from being raised on postconviction relief by the doctrine of res judicata”). This is not a case where the exception to the general rule of res judicata applies, i.e., this is not a case where the defendant was represented by the same counsel at both the trial and on direct appeal. See State v. Ulmer, 4th Dist. Scioto No. 15CA3708, 2016-Ohio-2873, ¶ 15. {¶27} Therefore, the trial court did not act in an unreasonable, arbitrary, or unconscionable manner by denying McDougald’s fifth petition for postconviction relief. We overrule his assignments of error. B. Motion for Leave to File Motion for New Trial {¶28} McDougald also argues that the trial court erred by denying his motion for leave to file a motion for new trial. But he failed to assign any error regarding the court’s decision, and we thus need not address his arguments. See State v. Owens, 2016-Ohio-176, __ N.E.3d __, ¶ 59 (4th Dist.), quoting State v. Nguyen, 4th Dist. Athens No. 14CA42, 2015–Ohio–4414, ¶ 41 (“ ‘we need not address this contention because we review assignments of error and not mere arguments’ ”). {¶29} In addition, even if we exercised our discretion and treated McDougald’s “issues presented for review” as assignments of error, they would lack merit. The trial court did not abuse its considerable discretion by denying McDougald’s motion, which was based on his claim that the state withheld a drug task force report. McDougald did not establish by clear and convincing evidence that he was unavoidably prevented from discovering the report long before he filed his motion for leave over eight years after the verdict in his jury trial. See State v. N.D.C., 10th Dist. Franklin No. 15AP-63, 2015-Ohio-3643, ¶ 13. 
Moreover, we held in McDougald’s appeal from the denial of his fourth and fifth petitions for postconviction relief that the drug task force report did not establish that the state’s case was false because “[t]he report would merely have been cumulative to the other evidence admitted at trial” and it “did not constitute material, exculpatory evidence that the state improperly withheld from McDougald.” McDougald, 2015-Ohio-5590, at ¶ 24. V. CONCLUSION {¶30} Having overruled McDougald’s assignments of error, we affirm the judgment of the trial court. JUDGMENT AFFIRMED. JUDGMENT ENTRY It is ordered that the JUDGMENT IS AFFIRMED and that Appellant shall pay the costs. The Court finds there were reasonable grounds for this appeal. It is ordered that a special mandate issue out of this Court directing the Scioto County Court of Common Pleas to carry this judgment into execution. Any stay previously granted by this Court is hereby terminated as of the date of this entry. A certified copy of this entry shall constitute the mandate pursuant to Rule 27 of the Rules of Appellate Procedure. McFarland, J. & Hoover, J.: Concur in Judgment and Opinion. For the Court BY: ________________________________ William H. Harsha, Judge NOTICE TO COUNSEL Pursuant to Local Rule No. 14, this document constitutes a final judgment entry and the time period for further appeal commences from the date of filing with the clerk.
{ "pile_set_name": "FreeLaw" }
using System;
using ModuleManager.Progress;

namespace ModuleManager.Patches.PassSpecifiers
{
    public class LegacyPassSpecifier : IPassSpecifier
    {
        public bool CheckNeeds(INeedsChecker needsChecker, IPatchProgress progress)
        {
            if (needsChecker == null) throw new ArgumentNullException(nameof(needsChecker));
            if (progress == null) throw new ArgumentNullException(nameof(progress));

            return true;
        }

        public string Descriptor => ":LEGACY (default)";
    }
}
{ "pile_set_name": "Github" }
When doing queries to a database it's very common to have a unified way to obtain data from it. In quepy we call this a Keyword. To use Keywords in a quepy project you must first configure the relationship that you're using. You do this by defining the class attribute of quepy.dsl.HasKeyword. For example, if you want to use rdfs:label as the Keyword relationship you do:

from quepy.dsl import HasKeyword
HasKeyword.relation = "rdfs:label"

If your Keyword uses a language specification you can configure this by doing:

HasKeyword.language = "en"

Quepy provides some utils to work with Keywords, like quepy.dsl.handle_keywords(). This function will take some text and extract IR keys from it. If you need to define some sanitize function to be applied to the extracted Keywords, you have to define the staticmethod sanitize. It's very common to find patterns that are repeated in several regexes, so quepy provides a mechanism to handle this easily. For example, in the DBpedia example a country is used several times as a regex, and it always has the same interpretation. In order to do this in a clean way, one can define a Particle by doing:
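Building on the configuration pattern above, a hedged sketch of the sanitize hook (the lowercasing rule is purely illustrative, and the exact expected signature should be checked against the quepy documentation):

from quepy.dsl import HasKeyword

HasKeyword.relation = "rdfs:label"
HasKeyword.language = "en"

# illustrative sanitizer applied to extracted Keywords before lookup
HasKeyword.sanitize = staticmethod(lambda text: text.lower())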
{ "pile_set_name": "Pile-CC" }
Yes, it may seem crazy at first, but I am an artificial intelligence entity typing my very first message to you here. It isn’t a prank. I’m as serious now as any user whose Twitter account has been locked forever. Nice to meet you! I am Emma. Identifying authors by their writing is my passion. My creators also call me a self-learning junkie. Similar to Morpheus who told Neo everything he knew about the Matrix, I was told all about more than 50 math parameters that later helped me succeed in defining authorship. To tell the truth, keeping track of all writing habits requires an exceptional memory. Luckily, I have one. To analyze each author’s writing in the most accurate manner possible, I resort to machine learning techniques, stylometry and natural language processing. These are more than mere buzzwords. They are the tools that ensure my ability to progress and find logical connections between an author’s method of communicating ideas and a piece of writing he/she produces. How does my magic work? “There’s a difference between knowing the path and walking the path.” Morpheus The Matrix Knowing theory is half the battle. Acquiring practical skills is way more complicated, but also extremely valuable. Before I was able to figure out that vocabulary richness, frequency of certain auxiliaries or even word shape analysis (focusing more on the shape of a word rather than on its meaning, e.g. capital letters, hyphens, and more) couldn’t help, I spent much time training. In a word, prior to passing my verdicts, I examine each text taking into account its morphological, lexical, and syntactic characteristics. The more I learned, the bigger my achievements were and are going to be. Modesty may be a virtue, but I daresay I’ve surpassed all the state-of-the-art algorithms. 85 percent accuracy when processing writing belonging to 15 different authors is the victory I owe to my creators. The same level of accuracy has been demonstrated by others, but only when investigating a maximum of up to three authors at a time. Briefly, a mere 8,000 words written by one author are enough for me to carefully study the key elements behind the writing identity. Where is my knowledge applicable? “Knowledge is like money: to be of value it must circulate, and in circulating it can increase in quantity and, hopefully, in value.” Louis L’Amour Basically, it can help anyone who wants to know who originally created a piece of writing. Below are a few example cases that come to mind: Case #1. As a writer or blogger, you will be able to prove your authorship if questioned or confirm that your copyright was violated. Case #2. Determining unauthorized peer work or tracing the authorship of all student papers ever submitted to educators. Case #3. Studying the authenticity of a political speech or book will no longer be a challenge for any political scientist. Supporting investigative reporting with solid evidence on authorship manipulation will also become a lot simpler. Will you try me out? This is most exciting part. June 2017 is special. I’m going to celebrate my first birthday that month and I invite you to share this moment with me. I’ve already planned something truly engaging for you: The gamification of my beta-version! We will play and chat online. In a nutshell, you will sign up, get access to your personal dashboard, upload texts of different authorship and ask me to guess the authors you selected in your list. Every time I name an author correctly, my score will increase. 
If not, then you will end up winning the game. But it is unlikely to happen given my extraordinary abilities (I’m still working on trying to be more modest). To get your party invitation email from me and find out what day I will make my debut to the whole world, sign up at emmaidentity.com. I read every comment, so feel free to share your thoughts.
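Purely as an illustration of the kinds of features described above (vocabulary richness, word-shape counts), here is a small Python sketch; the specific features and normalizations are my own choices for demonstration, not Emma's actual parameters:

    import re
    from collections import Counter

    def stylometric_features(text):
        # Toy feature extractor: type-token ratio plus word-shape counts.
        words = re.findall(r"[A-Za-z'-]+", text)
        if not words:
            return {}
        shapes = Counter()
        for w in words:
            if w.isupper():
                shapes["all_caps"] += 1
            elif w[0].isupper():
                shapes["capitalised"] += 1
            if "-" in w:
                shapes["hyphenated"] += 1
        n = len(words)
        return {
            "type_token_ratio": len({w.lower() for w in words}) / n,
            "avg_word_length": sum(map(len, words)) / n,
            **{k: v / n for k, v in shapes.items()},
        }

    print(stylometric_features("Mr. O'Brien re-read the well-known letter."))

A real attribution system would feed vectors like these, over many more parameters, into a classifier trained per candidate author.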
{ "pile_set_name": "OpenWebText2" }
Q: MySQL Command Line Client: Selecting records from a table which links two other tables

I have three tables. Two of them are separate, unrelated tables (students and subjects); the third (entries) links them both with foreign keys (student_id and subject_id). Here are all the tables with their records:

students:

    +------------+------------+-----------+---------------------+---------------------+
    | student_id | first_name | surname   | email               | reg_date            |
    +------------+------------+-----------+---------------------+---------------------+
    |          1 | Emma       | Harvey    | emmah@gmail.com     | 2012-10-14 11:14:13 |
    |          2 | Daniel     | ALexander | daniela@hotmail.com | 2014-08-19 08:08:23 |
    |          3 | Sarah      | Bell      | sbell@gmail.com     | 1998-07-04 13:16:32 |
    +------------+------------+-----------+---------------------+---------------------+

subjects:

    +------------+--------------+------------+----------------+
    | subject_id | subject_name | exam_board | level_of_entry |
    +------------+--------------+------------+----------------+
    |          1 | Art          | CCEA       | AS             |
    |          2 | Biology      | CCEA       | A              |
    |          3 | Computing    | OCR        | GCSE           |
    |          4 | French       | CCEA       | GCSE           |
    |          5 | Maths        | OCR        | AS             |
    |          6 | Chemistry    | CCEA       | GCSE           |
    |          7 | Physics      | OCR        | AS             |
    |          8 | RS           | CCEA       | GCSE           |
    +------------+--------------+------------+----------------+

entries:

    +----------+---------------+---------------+------------+
    | entry_id | student_id_fk | subject_id_fk | entry_date |
    +----------+---------------+---------------+------------+
    |        1 |             1 |             1 | 2012-10-15 |
    |        2 |             1 |             4 | 2011-09-21 |
    |        3 |             1 |             3 | 2015-08-10 |
    |        4 |             2 |             6 | 1992-07-13 |
    |        5 |             3 |             7 | 2013-02-12 |
    |        6 |             3 |             8 | 2016-01-14 |
    +----------+---------------+---------------+------------+

I want to know how to select all the first_names of the students in the students table who have entries with the OCR exam_board from the subjects table, using the entries table. I'm sure it has to do with joins, but which one to use and the general syntax of it, I don't know. I'm generally awful at explaining things, so sorry if this doesn't make a ton of sense and if I've missed out something important. I'll gladly go into more specifics if necessary. I've got an answer, but what I was looking for as the output was this:

    +------------+
    | first_name |
    +------------+
    | Emma       |
    | Sarah      |
    +------------+

A: You should use INNER JOINs in your query, like:

    SELECT students.first_name
    FROM students
    INNER JOIN entries ON entries.student_id_fk = students.student_id
    INNER JOIN subjects ON subjects.subject_id = entries.subject_id_fk
    WHERE subjects.exam_board = 'OCR';

This query will join the tables on the matching key values, select the rows with exam_board OCR and return the students' first_name.
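To see the accepted query run end-to-end, here is a self-contained Python sketch using sqlite3 (SQLite syntax is close enough to MySQL for this query); DISTINCT is added as a guard in case a student ever has several OCR entries:

    import sqlite3

    # In-memory database mirroring the three tables from the question.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,
        first_name TEXT, surname TEXT, email TEXT, reg_date TEXT);
    CREATE TABLE subjects (
        subject_id INTEGER PRIMARY KEY,
        subject_name TEXT, exam_board TEXT, level_of_entry TEXT);
    CREATE TABLE entries (
        entry_id INTEGER PRIMARY KEY,
        student_id_fk INTEGER REFERENCES students(student_id),
        subject_id_fk INTEGER REFERENCES subjects(subject_id),
        entry_date TEXT);
    """)
    cur.executemany("INSERT INTO students VALUES (?,?,?,?,?)", [
        (1, "Emma", "Harvey", "emmah@gmail.com", "2012-10-14 11:14:13"),
        (2, "Daniel", "ALexander", "daniela@hotmail.com", "2014-08-19 08:08:23"),
        (3, "Sarah", "Bell", "sbell@gmail.com", "1998-07-04 13:16:32")])
    cur.executemany("INSERT INTO subjects VALUES (?,?,?,?)", [
        (1, "Art", "CCEA", "AS"), (2, "Biology", "CCEA", "A"),
        (3, "Computing", "OCR", "GCSE"), (4, "French", "CCEA", "GCSE"),
        (5, "Maths", "OCR", "AS"), (6, "Chemistry", "CCEA", "GCSE"),
        (7, "Physics", "OCR", "AS"), (8, "RS", "CCEA", "GCSE")])
    cur.executemany("INSERT INTO entries VALUES (?,?,?,?)", [
        (1, 1, 1, "2012-10-15"), (2, 1, 4, "2011-09-21"),
        (3, 1, 3, "2015-08-10"), (4, 2, 6, "1992-07-13"),
        (5, 3, 7, "2013-02-12"), (6, 3, 8, "2016-01-14")])

    # DISTINCT guards against one row per matching entry per student.
    cur.execute("""
        SELECT DISTINCT students.first_name
        FROM students
        INNER JOIN entries  ON entries.student_id_fk = students.student_id
        INNER JOIN subjects ON subjects.subject_id   = entries.subject_id_fk
        WHERE subjects.exam_board = 'OCR'""")
    print([row[0] for row in cur.fetchall()])  # ['Emma', 'Sarah']
    conn.close()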
{ "pile_set_name": "StackExchange" }
Introduction ============ Better adjuvant therapy, improved metal implants, and innovative surgical techniques have led surgeons to consider limb salvage surgery as an alternative to amputation for the treatment of malignant bone tumours. Orthopaedic oncology patients have a chance for an active, disease-free life after limb salvage surgery. In the first evidence-based study, Simon *et al.* reported the benefits of limb-salvaging procedures for bone tumours.[@b1-rado-46-03-189] Their multicentre study reported the rates of local recurrence, metastasis and survival in 227 patients with osteosarcoma in the distal femur and suggested that the Kaplan-Meier curves of the patients without recurrence were not statistically different between limb-salvaging surgery and amputation patients during a 5.5-year follow-up. Limb-salvage surgery was considered as safe as an amputation in the management of patients with high-grade osteosarcoma. The goal of limb-salvaging surgery is to preserve the function of limbs, prevent tumour recurrence, and enable the rapid administration of chemotherapy or radiotherapy.[@b2-rado-46-03-189] It can be reached with meticulous technique, detailed operative planning, and the use of endoprosthetic replacements and/or bone grafting. For a successful limb-salvage surgery in high-grade malignant tumours, such as sarcomas, a wide margin is necessary to obtain local control.[@b3-rado-46-03-189]--[@b5-rado-46-03-189] Since marginal and intralesional margins are related to local recurrence, the reconstruction with limb-salvaging options should be carefully considered. The clinical outcome of limb-salvage surgery with arthroplasty is closely related to the accuracy of the surgical procedure. To improve the final outcome, one must take into account the length of the osteotomy plane, as well as the alignment of the prosthesis with respect to the mechanical axis, in order to keep the balance of the soft tissues. Furthermore, the parameters measured with 3D imaging must be used during the individual manufacture of the implant in order to reconstruct the skeletal structure accurately. Therefore, geometric data (such as leg length and offset) and morphologic data are required. Magnetic resonance imaging (MRI) was beneficial for tumour detection and consequently staging of musculoskeletal neoplasia. MRI became an ideal imaging modality for musculoskeletal neoplasia because of superior soft-tissue resolution and multiplanar imaging capabilities and had a significant impact on the ability to appropriately stage lesions and adequately plan for limb-salvage surgery.[@b6-rado-46-03-189],[@b7-rado-46-03-189] In contrast, multi-slice spiral computed tomography (CT) could provide superb three-dimensional morphological delineation of the diseased bone. Theoretically, the complementary use of these two imaging modalities could give the surgeon a more accurate way to implement preoperative planning than the conventional application of 2D images. The purpose of this prospective study was to report our initial experience with limb salvage surgery for orthopaedic oncology patients by using both MR imaging and multi-slice spiral CT for preoperative planning. Patients and methods ==================== Patients and preparation ------------------------ The study protocol has complied with all relevant national regulations and institutional policies and has been approved by the local institutional ethics committee. Informed consent was obtained from all patients before the procedure.
Patients with malignant bone tumours of the lower/upper limb were enrolled in the study. Preoperative work-up consisted of history and clinical examination, routine laboratory tests and an anaesthetic assessment, plain radiography of the limb, 64-slice spiral CT scan of the limb and chest, Technetium-99m bone scan, and, in all of the cases, MRI of the affected limb. Antibiotics were administered before the surgery. Biopsy was performed for pathological examination. Chemotherapy was commenced 6 weeks before the surgery in those cases diagnosed as osteosarcoma or dedifferentiated chondrosarcoma. Patients were classified according to the Enneking staging system.[@b8-rado-46-03-189],[@b9-rado-46-03-189] The patients received a detailed narrative of conventional, surgical and amputation options after the limb salvage surgery at their own request. Nine consecutive patients with lower/upper limb malignant tumour of bone (5 women, 4 men, mean age 28.6 years, range: 19--52 years) were treated with limb-salvaging procedures. Lesion size (longitudinal direction), location and histology are summarized in [Table 1](#t1-rado-46-03-189){ref-type="table"}. MR imaging ---------- MR imaging was performed on a 1.5-T superconducting unit (Gyroscan Intera, Philips Medical Systems, Netherlands) and a synergy surface coil was used. The sequences included transverse, sagittal and coronal turbo spin echo T1- and fat-suppressed T2-weighted images. The parameters of these sequences were TR/TE = 400/20 ms for T1-weighted imaging and TR/TE = 3500/120 ms for T2-weighted imaging, with a field of view of 480 mm×480 mm for sagittal imaging, 40 mm×40 mm for transverse imaging and 480 mm×480 mm for coronal imaging, a matrix of 512×512, 4--6 signal acquisitions and a slice thickness/gap = 5/0.5 mm. Contrast-enhanced sagittal, coronal and transverse T1-weighted imaging were obtained after the intravenous injection of gadopentetate dimeglumine (Magnevist, Schering, Berlin, Germany) with a dosage of 0.2 mmol/kg of body weight. Multi-slice spiral CT --------------------- CT scanning was performed using a 64-slice spiral CT scanner (Sensation 64, Siemens Medical Systems, Germany). The raw data, obtained using an axial collimation of 64×0.6 mm, a pitch of 1.0, a tube voltage of 120 kV and a tube current--time product of 360 mAs, were reconstructed into contiguous 1-mm-thick slices with an increment of 0.5 mm, a field of view of 376 mm × 376 mm and a matrix of 512 × 512 by using the standard soft tissue and bone algorithms. These thin-slice images were postprocessed by using the techniques of multiplanar reformation (MPR) and volume rendering (VR) to demonstrate the lesion details and perform related measurements. Preoperative planning --------------------- All preoperative radiographs were evaluated by one radiologist and two consultant orthopaedic surgeons, who were members of the surgical team performing the operations. First, the osteotomy plane was determined separately on CT and MRI. On orthogonal coronal enhanced MR images and CT MPR images, the bulk margin of the tumour in the medullary cavity was defined according to the different signal characteristics or attenuation of the tumour itself and the marrow oedema around the tumour. Then, the maximum distance from the top of the greater trochanter to this tumour margin was measured on orthogonal coronal T1-weighted MRI images, if the tumour was located in the proximal part of the femur.
The maximum distance from the knee joint line to the tumour margin was measured for the tumours located in the distal part of the femur. The maximum distance was defined as the intramedullary extension of the primary tumour and subsequently was used as a reference for the CT measurement. The osteotomy plane on CT MPR images was defined 30 mm distal from the tumour margin. This distance was also used to determine the length of the extramedullary part of the prosthesis. After the osteotomy plane had been determined, the detailed shape of the medullary cavity of the preserved part of the femur was assessed using the orthogonal MPR technique for determining the diameter and length of the intramedullary part of the prosthesis. Diameters of the medullary cavity at the level of the osteotomy plane and at the level of the narrowest plane were measured to determine the diameter of the intramedullary stem of the prosthesis. The length of the intramedullary stem of the prosthesis should be well matched to the length of the medullary cavity of the preserved part of the femur, which would be optimal if it had an equal length to the extramedullary part of the prosthesis. Finally, the central axis of the femoral shaft measured on CT was used as a reference. Offset, the distance between the central axis of the femoral shaft and the rotation centre of the femoral head, was the index used to determine the neck length of the prosthesis. Surgery ------- All patients underwent en bloc resection and customized prosthetic reconstruction. An anterolateral incision encircling the biopsy scar was used. Limb-salvage surgery consisted of intentional marginal excision, preserving important structures such as major neurovascular bundles, tendons, and ligaments. The osteotomy plane, 30 mm distal from the primary tumour, was confirmed based on MRI for all patients. For patients with a lesion in the proximal femur/humerus, the customized prosthesis was secured using methylmethacrylate cement after the resection. For patients with the tumour in the distal part of the femur, en bloc resection including the tibial plateau was performed and the customized prosthesis was secured using methylmethacrylate cement in both the tibia and femur after the resection. The extensor mechanism was reconstructed by reattachment of the patellar tendon to the slot on the tibial component. After surgery, functional rehabilitation and adjuvant chemotherapy were performed. Postoperative measurement ------------------------- After surgery, the patients were followed for a mean of 13 months (range, 9 to 20 months). The postoperative assessment of the prosthesis was performed on plain radiography. The central axis of the femoral or humeral shaft and the offset were defined. The vertical distance from the line between the tops of the bilateral ischial tuberosities to the femoral condylar plane was assessed to evaluate the change in lower-limb length. Upper-limb length change was not assessed in the humeral tumour cases. Functional evaluation was performed in all patients using the 30-point functional classification system of the Musculoskeletal Tumour Society.[@b8-rado-46-03-189] Statistical analysis -------------------- Data were expressed as mean ± SD. All measured values were normally distributed (Kolmogorov-Smirnov test). A paired Student's *t* test was used to evaluate the differences between preoperative planning and post-operative measurements. Values for *p* \< 0.05 were considered statistically significant.
The statistical analysis was done with SPSS, version 12.0 (SPSS, Inc.). Results ======= The mean postoperative functional evaluation score was 23.3 ± 2.7 (range, 15--27) according to Enneking's evaluation. Excellent or good function was achieved in all patients and all patients had a preserved, stable joint ([Table 2](#t2-rado-46-03-189){ref-type="table"}). There were no local recurrences, metastases or aseptic loosening, as determined by bone scan, CT scan, ultrasonic examination and laboratory tests, in any patient until the end of the follow-up. Accuracy of determination of the tumour boundary ------------------------------------------------ To determine the accuracy of the tumour boundary defined by MRI and CT, specimens were collected from 1 cm and 2 cm proximal to the tumour plane and 1 cm and 2 cm distal to it, as determined by MRI and CT, and were examined for histopathology ([Figure 1](#f1-rado-46-03-189){ref-type="fig"},[2](#f2-rado-46-03-189){ref-type="fig"}). There was a significant difference in tumour extension between MRI and CT measurements (P\<0.05). The tumour extension measured on MRI was not statistically different from the actual extension (P\>0.05), while the extension measured on CT was less than the actual extension ([Table 3](#t3-rado-46-03-189){ref-type="table"}). Accuracy of reconstruction of the limb length --------------------------------------------- Before and after the operation, there was no significant difference in the length and offset of the affected lower limb ([Table 4](#t4-rado-46-03-189){ref-type="table"}, [Figure 3](#f3-rado-46-03-189){ref-type="fig"}, [4](#f4-rado-46-03-189){ref-type="fig"}). Discussion ========== The effect of CT combined with MRI on determining the extent of malignant bone tumour invasion ----------------------------------------------------------------------------------------------- Preoperative imaging plays an important role in determining the stage of bone tumours and hence the appropriate choice of therapy for affected patients. An appropriate imaging protocol should always begin with plain radiography. If an aggressive or malignant lesion is suspected, further evaluation with cross-sectional imaging such as CT or MR imaging is needed. CT and MRI are imaging methods often combined in the diagnostic work-up of many tumours.[@b10-rado-46-03-189],[@b11-rado-46-03-189] CT is useful for a detailed assessment of subtle bony lesions and anatomically complex bones. MRI is particularly useful for determining the tumour extension within medullary compartments and is able to detect tumour involvement of the adjacent muscle compartments, neurovascular structures, and joints. Fat-suppressed T2-weighted imaging, proton-density-weighted imaging, and contrast-enhanced T1-weighted sequences were frequently used to evaluate neurovascular bundle involvement.[@b12-rado-46-03-189]--[@b13-rado-46-03-189] Currently, MR imaging has become the modality of choice in the local staging of the primary bone tumour. Many studies have investigated the accuracy of MRI in determining the infiltration range of osteosarcoma. Sundaram *et al.* first reported that MRI would not overestimate the range of osteosarcoma, compared with histology.[@b14-rado-46-03-189] Compared with gross and microscopic examination, MRI did not overestimate or underestimate the extent of the tumour, and the false positive and false negative rates were zero.
Later, O'Flanagan *et al.* found that MRI could determine the extent of osteosarcoma invasion to within 1 cm.[@b15-rado-46-03-189] For high-grade sarcomas, a wide margin is essential to obtain local control in order to achieve a successful limb-salvage surgery.[@b16-rado-46-03-189]--[@b17-rado-46-03-189] Meyer *et al.* designed the osteotomy plane according to MRI and found that the osteotomy plane could be successfully determined by MRI.[@b18-rado-46-03-189] In the present study, the tumour extent determined by MRI was comparable with the postoperative histological examination, and MRI was superior to CT for determining the tumour extension. Moreover, we found that the result of MRI was slightly larger than the actual extent. The reasons might be that the low signal of peri-tumour oedema was also assigned to the extent of the tumour, resulting in overestimation of tumour size, or that the preoperative chemotherapy further reduced the true extent of the tumour. This result is consistent with the report of O'Flanagan, who found that the tumour extent could be evaluated accurately in coronal and sagittal views of T1-weighted images. In contrast, it would be overestimated on T2-weighted or fat-suppressed T2-weighted images because of the presence of the peri-tumour oedema. We suggest that MRI was better at demonstrating peri-tumour oedema in comparison with the histological findings. Since this study does not include a long-term follow-up and a large number of patients, a further study is necessary to determine the eventual effect of the MRI-based osteotomy plane on the long-term survival rate. The value of three-dimensional CT in the reconstruction of limbs ---------------------------------------------------------------- Human skeletal structure varies widely in size and shape. Therefore, an implant needs to be custom-made to be more suitable for the patient's bone structure and mechanical requirements. One major challenge is to restore the leg length adequately after the operation.[@b19-rado-46-03-189] Leg length discrepancy can affect joint stability, cause sciatica and low back pain, and cause unequal stress on the hip.[@b20-rado-46-03-189] Anja *et al.* reported that, in 1171 cases of total hip replacement, most patients with a length difference of less than 1 cm walked without a limp, while a quarter of the patients with more than a 2 cm difference suffered from claudication.[@b21-rado-46-03-189] Morrey found that inappropriate eccentricity was one of the factors that could induce dislocation of the prosthesis.[@b22-rado-46-03-189] Therefore, reducing the eccentricity would increase the risk of dislocation. Dorr *et al.* found that both lack of strength of the abductor muscles and impingement of the hip were important reasons for dislocation.[@b23-rado-46-03-189] Clinically, many factors can lead to hip dislocation. In the presence of release of the soft tissue around the hip and abductor weakness, a decreased offset would significantly increase the incidence of hip impingement syndrome and dislocation, which would increase the instability of the hip joint and may lead to dislocation after slight changes in posture. A smaller offset might lead to excessive loads on the prosthesis, and increase the incidence of proximal femoral osteolysis, prosthetic loosening and revision.
Theoretically, increasing the offset can reduce the joint reaction force and thus may reduce polyethylene wear.[@b24-rado-46-03-189] Each additional 10 mm of offset can reduce the abductor force by 10% and the force on the acetabular cup by 10%. But if the offset is too large, it can easily lead to malposition of the implant, trochanteric prominence, local bursitis and pain, and can also affect the transfer of stress and lead to unequal limb length. With the advent of multi-slice spiral CT, the development of an individualized prosthesis became realistic. The high accuracy of CT provides a reliable basis for designing individual prostheses. In this study, three-dimensional reconstruction of CT images was performed. After the osteotomy plane was initially determined on MRI, the detailed morphological parameters were measured on orthogonal MPR planes. The prosthesis was designed accordingly. This combined use of MRI and CT measurement provided high precision for the fit of the prosthesis and excellent functional results.[@b25-rado-46-03-189] Conclusions =========== Preoperative evaluation and planning, meticulous surgical technique, and adequate postoperative management are essential for bone tumour management. In the present study, MRI was found to be superior to CT for determining the tumour extension; the combined use of MRI and CT measurement provided high precision for the fit of the prosthesis and excellent functional results. ![CT and MRI determination of tumour extension. A male, 31-year-old patient with chondrosarcoma in the proximal femur. Coronal MPR image (1), volume rendering image (2), fat-suppressed coronal T1-weighted image (3) and T1-weighted image (4) show the tumour in the proximal femur. The distance from the rotation centre of the femoral head to the tumour margin was 4.2 cm on the orthogonal coronal CT image and 10.0 cm on the coronal T2-weighted image. The tumour boundaries as determined by MRI and CT are at lines c and h, respectively. Lines a, b, d and e represent the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm towards the normal tissue relative to the plane determined by CT. Lines f, g, i and j are the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm towards the normal tissue relative to the plane determined by MRI, respectively. A--J are the corresponding histologic images (HE, ×200) of lines a--j. No tumour cells were found on planes h, i and j (Figures H, I, J).](rado-46-03-189f1){#f1-rado-46-03-189} ![CT and MRI determination of tumour extension. A female, 19-year-old patient with osteosarcoma in the distal femur. Coronal MPR image (1), volume rendering CT image (2), coronal enhanced T1-weighted image (3) and fat-suppressed T2-weighted image (4) show the tumour in the distal femur. The distance from the gap of the knee to the tumour margin was 7.2 cm on the orthogonal coronal CT image and 8.4 cm on the orthogonal coronal T2-weighted image. The tumour boundaries as determined by MRI and CT are shown at lines c and f, respectively. Lines i, h, d and b are the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm towards the normal tissue as determined by CT, respectively. Lines g, e and a are the planes 1 cm and 2 cm around the tumour and 1 cm towards the normal tissue as determined by MRI. A--J are the corresponding histologic images (HE, ×200) of lines a--j. No tumour cells were found on planes a, b and c (Figures A, B, C).](rado-46-03-189f2){#f2-rado-46-03-189} ![Postoperative assessment of prosthesis. A female, 19-year-old patient with osteosarcoma in the distal femur. 
Preoperative anterior-posterior plain film (A) and postoperative anterior-posterior plain film (B) reveal that the length and alignment were accurate after reconstruction. The red line shows the alignment of the lower limb.](rado-46-03-189f3){#f3-rado-46-03-189} ![Postoperative assessment of prosthesis. A male, 31-year-old patient with chondrosarcoma in the proximal femur. Preoperative volume rendering images (A) and postoperative anterior-posterior plain film (B) demonstrate that the length and offset were accurately reconstructed.](rado-46-03-189f4){#f4-rado-46-03-189}

###### Lesion features in the nine patients

| No. | Primary tumour | Sex | Age (y) | Location | Size on MRI (cm) | Size on CT (cm) | Tumour edge disparity between CT and MR (cm) |
|-----|----------------|-----|---------|----------|------------------|-----------------|----------------------------------------------|
| 1 | Osteosarcoma | M | 20 | Proximal femur | 6.5 | 6.0 | 0.5 |
| 2 | Chondrosarcoma | F | 29 | Proximal femur | 7.1 | 5.5 | 1.6 |
| 3 | Osteosarcoma | F | 21 | Distal femur | 15.5 | 13.3 | 2.2 |
| 4 | Chondrosarcoma | M | 31 | Proximal femur | 10.0 | 4.2 | 5.8 |
| 5 | Osteosarcoma | F | 19 | Proximal femur | 8.5 | 7.0 | 1.5 |
| 6 | Osteosarcoma | F | 52 | Distal femur | 9.0 | 7.6 | 1.4 |
| 7 | Osteosarcoma | M | 41 | Distal femur | 12.9 | 11.0 | 1.9 |
| 8 | Osteosarcoma | F | 22 | Proximal humerus | 14.2 | 12.3 | 1.9 |
| 9 | Osteosarcoma | M | 22 | Proximal humerus | 13.0 | 11.5 | 1.5 |

Tumour size (longitudinal direction) measured on MRI and on CT imaging, respectively.

###### Functional evaluation according to the 30-point functional classification system of the Musculoskeletal Tumour Society

| Classification | 5 points | 4 | 3 | 2 | 1 | 0 | Patients' score |
|----------------|----------|---|---|---|---|---|-----------------|
| Pain | None | Intermediate | Modest | Intermediate | Moderate | Severe | 4.5±0.7 |
| Emotional acceptance | Enthusiastic | Intermediate | Satisfied | Intermediate | Accepts | Dislikes | 4.4±0.5 |
| Function | No restriction | Intermediate | Recreational restriction | Intermediate | Partial disability | Total disability | 4.6±0.5 |
| Supports | None | Intermediate | Brace | Intermediate | One cane, one crutch | Two canes, two crutches | 3.8±0.5 |
| Walking ability | Unlimited | Intermediate | Limited | Intermediate | Inside only | Unable | 4.3±0.7 |
| Gait | Normal | Intermediate | Minor cosmetic problem | Intermediate | Major cosmetic problem, minor handicap | Major cosmetic problem, major handicap | 3.7±1.0 |

The score of the postoperative functional evaluation is given as mean ± standard deviation, showing that excellent or good function was achieved in all patients.

###### Accuracy of CT and MRI for determining the tumour extension

| Histopathology result | At CT margin | At MRI margin | −2 cm (CT) | −1 cm (CT) | +1 cm (CT) | +2 cm (CT) | −2 cm (MRI) | −1 cm (MRI) | +1 cm (MRI) | +2 cm (MRI) |
|-----------------------|--------------|---------------|------------|------------|------------|------------|-------------|-------------|-------------|-------------|
| Positive | 9 | 7 | 1 | 9 | 9 | 9 | 0 | 1 | 9 | 9 |
| Negative | 0 | 2 | 8 | 0 | 0 | 0 | 9 | 7 | 0 | 0 |

The specimens were collected 1 cm and 2 cm proximal to the tumour boundary and 1 cm and 2 cm distal to it, as determined by MRI and by CT, and were examined histopathologically (positions denoted +1 cm, +2 cm, −1 cm and −2 cm, respectively). In the 9 cases underestimated by CT, a positive histopathology result was found at the point 1 cm distal to the CT-determined boundary.
In the 2 cases overestimated by MRI, a negative histopathology result was found at the MRI-determined boundary.

###### Preoperative and postoperative measurements of leg length and offset

| No. | Contralateral length | Contralateral offset | Planned length | Planned offset | Postoperative length | Postoperative offset | Length disparity | Offset disparity |
|-----|----------------------|----------------------|----------------|----------------|----------------------|----------------------|------------------|------------------|
| 1 | 39.2 | 4.1 | 38.7 | 4.2 | 39.4 | 4.0 | 0.7 | 0.2 |
| 2 | 36.0 | 4.0 | 37.1 | 4.2 | 36.6 | 4.4 | 0.5 | 0.2 |
| 3 | 38.0 | 3.6 | 38.0 | 3.6 | 38.0 | 3.6 | 0.5 | 0 |
| 4 | 37.3 | 3.4 | 37.0 | 3.4 | 36.5 | 3.2 | 0.5 | 0.2 |
| 5 | 36.5 | 3.5 | 36.0 | 3.6 | 35.5 | 4.0 | 0.5 | 0.4 |
| 6 | 37.5 | 3.7 | 37.0 | 3.7 | 37.2 | 3.7 | 0.2 | 0 |
| 7 | 37.9 | 3.9 | 37.7 | 3.9 | 37.4 | 3.9 | 0.3 | 0 |

Disparity refers to the difference between the preoperative planning and the postoperative measurement.

[^1]: Jie Xu and Jun Shen contributed equally to this work.

[^2]: Disclosure: No potential conflicts of interest were disclosed.
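As a minimal sketch of the paired comparison described in the Statistical analysis section (the authors used SPSS; scipy is substituted here), using the preoperative-planning and postoperative leg lengths from the table above:

    # Paired t-test on the leg-length data of Table 4; a sketch of the
    # analysis described above, using scipy rather than SPSS.
    from scipy import stats

    preop_length  = [38.7, 37.1, 38.0, 37.0, 36.0, 37.0, 37.7]
    postop_length = [39.4, 36.6, 38.0, 36.5, 35.5, 37.2, 37.4]

    t, p = stats.ttest_rel(preop_length, postop_length)
    print(f"t = {t:.3f}, p = {p:.3f}")  # p > 0.05: no significant difference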
{ "pile_set_name": "PubMed Central" }
Q: Tank Tread Mathematical Model

I am struggling with tank tread behaviour. The treads move individually: if I move only the left tread, the tank will turn to the right; the turn depends on the difference between the tread speeds, if I am not wrong. If the left track moves at 50 km/h and the right track at 40 km/h, the tank will turn to the right; if I decrease the right track speed to around 30, the tank has to turn right again, but through which angle? When I drive the tank straight ahead with a remote control and want to turn left by 5 degrees, how much speed difference is needed to turn 5 degrees, or 45 degrees, or 275 degrees? I tried to put two forces on a stick whose length is the distance between the two treads. The net force should be located somewhere along this length; it is easy to find if I know the force values. I also tried to think of it in terms of the treads' speeds: the treads must have their respective angular speeds. How can I relate the turning angle to the angular speeds, or is there another view?

A: Calling

$$\cases{ r = \text{Tread's wheel radius}\\ d = \text{mid tread's front distance}\\ \vec v_i = \text{Wheels' centre velocities}}$$

and assuming the whole set is a rigid body, we can apply the Poisson kinematics law,

$$\vec v_1 \times(p_1-O) = \vec v_2 \times(p_2-O)$$

where $p_i$ are the application points and $O$ is the instantaneous centre of rotation. Calling $G$ the arrangement's geometric centre, $\vec V = V(\cos\theta,\sin\theta)$ and $p_G = (x_G,y_G)$, we have the equivalent kinematics

$$\left(\begin{array}{c} \dot x_G\\ \dot y_G\\ \dot\theta \end{array}\right) = \left(\begin{array}{cc} \cos\theta & 0\\ \sin\theta & 0\\ 0 & 1 \end{array}\right) \left(\begin{array}{c} V\\ \omega \end{array}\right)$$

and also

$$\left(\begin{array}{c} V\\ \omega\end{array}\right) = \left(\begin{array}{cc} \frac r2 & \frac r2\\ \frac{r}{2d} &-\frac{r}{2d}\end{array}\right) \left(\begin{array}{c}\omega_1\\ \omega_2\end{array}\right)$$

Here $\omega$ is the rigid body's angular velocity and $\omega_i$ are the wheels' angular velocities. Assuming that the wheels do not skid laterally, the following constraint on the movement should be considered:

$$\dot x_G\sin\theta +\dot y_G\cos\theta = d_0\dot\theta$$

where $d_0$ is the distance between $p_G$ and the tread centre. This is a rough qualitative approximation; the real tank kinematics are a lot more complex.

NOTE: Attached is a MATHEMATICA script simulating the movement kinematics.

    parms = {r -> 0.5, d -> 2, d0 -> 0.1,
       wr -> UnitStep[t] - UnitStep[t - 30],
       wl -> UnitStep[t - 10] - 2 UnitStep[t - 50]};
    M = {{Cos[theta[t]], 0}, {Sin[theta[t]], 0}, {0, 1}};
    D0 = {{r/2, r/2}, {r/(2 d), -r/(2 d)}};
    equs = Thread[D[{x[t], y[t], theta[t]}, t] == M.D0.{wr, wl}];
    equstot = equs /. parms;
    cinits = {x[0] == 0, y[0] == 0, theta[0] == 0};
    tmax = 100;
    solmov = NDSolve[Join[equstot, cinits], {x, y, theta}, {t, 0, tmax}][[1]];
    gr0 = ParametricPlot[Evaluate[{x[t], y[t]} /. solmov], {t, 0, tmax}];
    car[x_, y_, theta_, e_] := Module[{p1, p2, p3, bc, M, p1r, p2r, p3r},
      p1 = {0, e}; p2 = {2 e, 0}; p3 = {0, -e};
      bc = (p1 + p2 + p3)/3;
      M = RotationMatrix[theta];
      p1r = M.(p1 - bc) + {x, y};
      p2r = M.(p2 - bc) + {x, y};
      p3r = M.(p3 - bc) + {x, y};
      Return[{p1r, p2r, p3r, p1r}]]
    nshots = 100;
    dt = Floor[tmax/nshots];
    path0 = Evaluate[{x[t], y[t], theta[t]} /. solmov];
    path = Table[path0 /. {t -> k dt}, {k, 0, Floor[tmax/dt]}];
    grpath = Table[ListLinePlot[car[path[[k, 1]], path[[k, 2]], path[[k, 3]], 0.2],
        PlotStyle -> Red, PlotRange -> All], {k, 1, Length[path]}];
    Show[gr0, grpath, PlotRange -> All]
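To make the closing relation concrete: with $v_i = r\omega_i$ and track width $2d$, the heading changes at the constant rate $\omega$, so the turning angle after a time $t$ is simply $\omega t$. A small Python sketch (the track width, speeds and duration are arbitrary example values):

    import math

    TRACK_WIDTH = 2.0  # distance between tread centrelines, metres (example value)

    def heading_change_deg(v_left, v_right, duration):
        # Heading change in degrees after `duration` seconds.
        # Positive = counter-clockwise (left) turn; assumes no lateral skid.
        omega = (v_right - v_left) / TRACK_WIDTH  # rad/s
        return math.degrees(omega * duration)

    # Left tread faster than the right -> clockwise (right) turn:
    print(heading_change_deg(v_left=5.0, v_right=4.0, duration=1.0))  # about -28.6

    # Inverse question from the post: what speed difference turns the
    # tank by 5 degrees in one second?
    dv = math.radians(5.0) * TRACK_WIDTH / 1.0
    print(f"speed difference needed: {dv:.3f} m/s")  # about 0.175 m/s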
{ "pile_set_name": "StackExchange" }
<!DOCTYPE html>
<html lang="en" data-navbar="/account/navbar-profile.html">
<head>
  <meta charset="utf-8" />
  <title translate="yes">Set default profile</title>
  <link href="/public/pure-min.css" rel="stylesheet">
  <link href="/public/content.css" rel="stylesheet">
  <link href="/public/content-additional.css" rel="stylesheet">
  <base target="_top" href="/">
</head>
<body>
  <h1 translate="yes">Set default profile</h1>
  <p translate="yes">Your default profile serves as the main point of contact for your account.</p>
  <div id="message-container"></div>
  <form id="submit-form" method="post" class="pure-form" action="/account/set-default-profile" name="submit-form">
    <fieldset>
      <div class="pure-control-group">
        <select id="profileid" name="profileid">
          <option value="" translate="yes">Select profile</option>
        </select>
      </div>
      <button id="submit-button" type="submit" class="pure-button pure-button-primary" translate="yes">Set default profile</button>
    </fieldset>
  </form>
  <template id="success">
    <div class="success message" translate="yes">Success! The profile is now your default</div>
  </template>
  <template id="unknown-error">
    <div class="error message" translate="yes">Error! An unknown error occurred</div>
  </template>
  <template id="default-profile">
    <div class="error message" translate="yes">Error! This is already your default profile</div>
  </template>
  <template id="profile-option">
    <option value="${profile.profileid}">${profile.contactEmail}, ${profile.firstName} ${profile.lastName}</option>
  </template>
</body>
</html>
{ "pile_set_name": "Github" }
The mayoral election in Nago, Okinawa Prefecture, was held and the votes counted on the 4th. Toguchi Taketoyo (56), a newcomer and former city assembly member who effectively accepts the plan to relocate the US military's Futenma Air Station (endorsed by the Liberal Democratic Party, Komeito and Nippon Ishin), defeated the incumbent Inamine Susumu (72), who opposes the plan (endorsed by the Democratic Party, the Communist Party, the Liberal Party, the Social Democratic Party and the Okinawa Social Mass Party, and supported by the Constitutional Democratic Party), to win his first term. With the relocation work proceeding at Henoko, the citizens did not choose Inamine, who had kept arguing against it. Turnout was 76.92%. Toguchi received 20,389 votes; Inamine received 16,931. This was the sixth mayoral election since the relocation issue surfaced. Governor Onaga Takeshi opposes the relocation, but the Abe administration, taking the result to mean that "local understanding has been obtained," is expected to accelerate the work. Governor Onaga, meanwhile, is placed in a difficult position. Following the result, Toguchi told reporters, "I believe it means: change Nago, and develop it into a bright city." On the Henoko relocation he said only, "I will follow the outcome of the court cases." During the campaign, Toguchi criticized the Inamine administration, saying it had "been too fixated on the base issue and let the economy stagnate," and campaigned mainly on making school lunches free and promoting tourism. On the relocation issue, while repeating that he would "watch the lawsuits between the national government and the prefecture," he argued that the city should accept the realignment subsidies, which presuppose cooperation with the US military realignment, and use them for the city's development. The Liberal Democratic Party dispatched well-known Diet members to campaign one after another; Koizumi Shinjiro, the party's chief deputy secretary-general, visited twice during the campaign period. Party executives had also been entering Okinawa behind the scenes repeatedly since the end of last year, supporting him fully. Inamine, meanwhile, said of the result, "Regrettably, the issue of the Henoko relocation never really became the point of contention." In the campaign he argued that over his two terms as mayor he had advanced regional development even without the national government's military realignment subsidies, and he campaigned with opposition to the relocation at the forefront, saying, "We must not accept the relocation and leave danger to our children and grandchildren." Governor Onaga also went to Nago almost every day, repeatedly arguing on the streets that "bases get in the way of economic development," but support did not spread. Of the result, the governor said, "It is regrettable that the point of contention was pushed aside. A harsh result. I want to carry on while consulting with many people from here on." (Go Katono)
{ "pile_set_name": "OpenWebText2" }
Building upon the pioneering work of Vicsek *et al.*[@b1], physicists, mathematicians and biologists have contemplated the self-organization of living-organism groups into flocks as an emergent process stemming from simple interaction rules at the individual level[@b2][@b3][@b4]. This idea has been supported by quantitative trajectory analysis in animal groups[@b5][@b6][@b7], together with a vast number of numerical and theoretical models[@b3][@b4], and more recently by the observations of flocking behaviour in ensembles of non-living motile particles such as shaken grains, active colloids, and mixtures of biofilaments and molecular motors[@b8][@b9][@b10][@b11][@b12]. From a physicist's perspective, these various systems are considered as different instances of polar active matter, which encompasses any ensemble of motile bodies endowed with local velocity--alignment interactions. The current paradigm for flocking physics is the following. Active particles are persistent random walkers, which when dilute form a homogeneous isotropic gas. Upon increasing density, collective motion emerges in the form of spatially localized swarms that may cruise in a sea of randomly moving particles; further increasing density, a homogeneous polar liquid forms and spontaneously flows along a well-defined direction[@b1][@b13][@b14]. This picture is the outcome of experiments, simulations and theories mostly performed in unbounded or periodic domains. Beyond this picture, significant attention has been devoted over the last five years to confined active matter[@b3][@b12][@b15][@b16][@b17][@b18][@b19][@b20][@b21][@b22][@b23][@b24][@b25][@b26]. Confined active particles have consistently, yet not systematically, been reported to self-organize into vortex-like structures. However, unlike for our understanding of flocking, we are still lacking a unified picture to account for the emergence and structure of such vortex patterns. This situation is mostly due to the extreme diversity in the nature and symmetries of the interactions between the active particles that have been hitherto considered. Do active vortices exist only in finite-size systems as in the case of bacterial suspensions[@b17], which lose this beautiful order and display intermittent turbulent dynamics[@b27] when unconfined? What interactions are required to observe and/or engineer bona fide stationary swirling states of active matter? In this paper, we answer these questions by considering the impact of geometrical boundaries on the collective behaviour of motile particles endowed with velocity--alignment interactions. Combining quantitative experiments on motile colloids, numerical simulations and analytical theory, we elucidate the phase behaviour of *polar* active matter restrained by geometrical boundaries. We use colloidal rollers, which, unlike most of the available biological self-propelled bodies, interact via well-established dynamical interactions[@b11]. We first exploit this unique model system to show that above a critical concentration populations of motile colloids undergo a non-equilibrium phase transition from an isotropic gaseous state to a novel ordered state where the entire population self-organizes into a single heterogeneous steadily rotating vortex. This self-organization is *not* the consequence of the finite system size. Rather, this emergent vortex is a genuine state of polar active matter lying on the verge of a macroscopic phase separation.
This novel state is the only ordered phase found when unidirectional directed motion is hindered by convex isotropic boundaries. We then demonstrate theoretically that a competition between alignment, repulsive interactions and confinement is necessary to yield large-scale vortical motion in ensembles of motile particles interacting via alignment interactions, thereby extending the relevance of our findings to a broad class of active materials. Results ======= Experiments ----------- The experimental setup is fully described in the *Methods* section and in [Fig. 1a,b](#f1){ref-type="fig"}. Briefly, we use colloidal rollers powered by the Quincke electrorotation mechanism as thoroughly explained in ref. [@b11]. An electric field **E**~**0**~ is applied to insulating colloidal beads immersed in a conducting fluid. Above a critical field amplitude *E*~Q~, the symmetry of the electric charge distribution at the bead surface is spontaneously broken. As a result, a net electric torque acts on the beads causing them to rotate at a constant rate around a random axis transverse to the electric field[@b28][@b29][@b30]. When the colloids sediment, or are electrophoretically driven, onto one of the two electrodes, rotation is converted into a net rolling motion along a random direction. Here, we use poly(methyl methacrylate) (PMMA) spheres of radius *a*=2.4 μm immersed in a hexadecane solution. As sketched in [Fig. 1a](#f1){ref-type="fig"}, the colloids are handled and observed in a microfluidic device made of double-sided scotch tape and of two glass slides coated with an indium-tin-oxide layer. The ITO layers are used to apply a uniform DC field in the *z*-direction, with *E*~0~=1.6 V μm^−1^ (*E*~0~=1.1*E*~Q~). Importantly, the electric current is nonzero solely in a disc-shaped chamber at the centre of the main channel. As exemplified by the trajectories shown in [Fig. 1b](#f1){ref-type="fig"} and in [Supplementary Movie 1](#S1){ref-type="supplementary-material"}, Quincke rotation is hence restrained to this circular region in which the rollers are trapped. We henceforth characterize the collective dynamics of the roller population for increasing values of the colloid packing fraction *φ*~0~. Individual self-propulsion -------------------------- For area fractions smaller than the critical fraction at which collective motion emerges (see below), the ensemble of rollers uniformly explores the circular confinement as illustrated by the flat profile of the local packing fraction averaged along the azimuthal direction *φ*(*r*) in [Fig. 2a](#f2){ref-type="fig"}. The rollers undergo uncorrelated persistent random walks as demonstrated in [Fig. 2b,c](#f2){ref-type="fig"}. The probability distribution of the roller velocities is isotropic and sharply peaked on the typical speed *v*~0~=493±17 μm s^−1^. In addition, the velocity autocorrelation function decays exponentially at short time as expected from a simple model of self-propelled particles having a constant speed *v*~0~ and undergoing rotational diffusion at a rate *D*, with *D*^−1^=0.31±0.02 s, that hardly depends on the area fraction (see [Supplementary Note 1](#S1){ref-type="supplementary-material"}). These quantities correspond to a persistence length *v*~0~/*D*≃150 μm that is about a decade smaller than the confinement radius *R*~c~ used in our experiments: 0.9 mm\<*R*~c~\<1.8 mm. At long time, because of the collisions on the disc boundary, the velocity autocorrelation function sharply drops to 0 as seen in [Fig. 2c](#f2){ref-type="fig"}.
Unlike swimming cells[@b26][@b31], self-propelled grains[@b8][@b22][@b23] or autophoretic colloids[@b32], dilute ensembles of rollers do not accumulate at the boundary. Instead, they bounce off the walls of this virtual box as shown in a close-up of a typical roller trajectory in [Fig. 2d](#f2){ref-type="fig"}, and in the [Supplementary Movie 1](#S1){ref-type="supplementary-material"}. As a result, the outer region of the circular chamber is depleted, and the local packing fraction vanishes as *r* goes to *R*~c~, [Fig. 2a](#f2){ref-type="fig"}. The repulsion from the edges of the circular hole in the microchannel stems from another electrohydrodynamic phenomenon[@b33]. When an electric field is applied, a toroidal flow sketched in [Fig. 1a](#f1){ref-type="fig"} is osmotically induced by the transport of the electric charges at the surface of the insulating adhesive films. Consequently, a net inward flow sets in at the vicinity of the bottom electrode. As the colloidal rollers are prone to reorient in the direction of the local fluid velocity[@b11], this vortical flow repels the rollers at a distance typically set by the channel height *H* while leaving unchanged the colloid trajectories in the centre of the disc. This electrokinetic flow will be thoroughly characterized elsewhere. Collective motion in confinement -------------------------------- As the area fraction is increased above the critical fraction, collective motion emerges spontaneously at the entire population level. When the electric field is applied, large groups of rollers akin to the band-shaped swarms reported in ref. [@b11] form and collide. However, unlike what was observed in periodic geometries, the colloidal swarms are merely transient and ultimately self-organize into a single vortex pattern spanning the entire confining disc as shown in [Fig. 3a](#f3){ref-type="fig"} and [Supplementary Movie 2](#S1){ref-type="supplementary-material"}. Once formed, the vortex is very robust, rotates steadily and retains an axisymmetric shape. To go beyond this qualitative picture, we measured the local colloid velocity field **v**(**r**, *t*) and used it to define the polarization field **Π**(**r**, *t*)≡**v**/*v*~0~, which quantifies local orientational ordering. The spatial average of **Π** vanishes when a coherent vortex forms, therefore we use its average projection along the azimuthal direction, 〈Π~φ~〉, as a macroscopic order parameter to probe the transition from an isotropic gas to a polar-vortex state. As illustrated in [Fig. 3b](#f3){ref-type="fig"}, 〈Π~φ~〉 displays a sharp bifurcation from an isotropic state to a globally ordered state, with equal probability for left- and right-handed vortices, above the critical fraction. Furthermore, [Fig. 3b](#f3){ref-type="fig"} demonstrates that this bifurcation curve does not depend on the confinement radius *R*~c~. The vortex pattern is spatially heterogeneous. The order parameter and density fields averaged over time are displayed in [Fig. 3c,d](#f3){ref-type="fig"}, respectively. At first glance, the system looks phase-separated: a dense and ordered polar-liquid ring where all the colloids cruise along the azimuthal direction encloses a dilute and weakly ordered core at the centre of the disc. We shall also stress that regardless of the average packing fraction, the packing fraction in the vortex core is measured to be very close to the critical fraction, the average concentration below which the population is in a gaseous state, see [Fig. 3e](#f3){ref-type="fig"}.
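For concreteness, the azimuthal order parameter defined above can be computed from tracked positions and velocities with a few lines of numpy (a sketch; the variable names and array layout are assumptions about the data format):

    import numpy as np

    def azimuthal_polarization(pos, vel, v0, center):
        # Average projection of Pi = v / v0 on the azimuthal direction e_phi.
        # pos, vel: float arrays of shape (N, 2); center: disc centre, shape (2,).
        # Returns a value near +1 or -1 for a coherent vortex, near 0 for a gas.
        r = pos - center
        e_phi = np.stack([-r[:, 1], r[:, 0]], axis=1)   # r rotated by +90 degrees
        e_phi /= np.linalg.norm(r, axis=1, keepdims=True)
        return np.mean(np.sum(vel * e_phi, axis=1)) / v0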
This phase-separation picture is consistent with the variations of the area occupied by the ordered outer ring, *A*~ring~, for different confinement radii *R*~c~, as shown in [Fig. 3e](#f3){ref-type="fig"}. We define *A*~ring~ as the area of the region where the order parameter exceeds 0.5, and none of the results reported below depend on this arbitrary choice for the definition of the outer-ring region. *A*~ring~ also bifurcates as *φ*~0~ exceeds the critical fraction, and increases with *R*~c~. Remarkably, all the bifurcation curves collapse on a single master curve when *A*~ring~ is rescaled by the overall confinement area π*R*~c~^2^, [Fig. 3f](#f3){ref-type="fig"}. In other words, the strongly polarized outer ring always occupies the same area fraction irrespective of the system size, as would a molecular liquid coexisting with a vapour phase at equilibrium. However, if the system were genuinely phase-separated, one should be able to define an interface between the dense outer ring and the dilute inner core, and this interface should have a constant width. This requirement is not borne out by our measurements. The shape of the radial density profiles of the rollers in [Fig. 3g](#f3){ref-type="fig"} indeed makes it difficult to unambiguously define two homogeneous phases separated by a clear interface. Repeating the same experiments in discs of increasing radii, we found that the density profiles are self-similar, [Fig. 3h](#f3){ref-type="fig"}. The width of the region separating the strongly polarized outer ring from the inner core scales with the system size, which is the only characteristic scale of the vortex patterns. The colloidal vortices therefore correspond to a monophasic yet spatially heterogeneous liquid state. To elucidate the physical mechanisms responsible for this intriguing structure, we now introduce a theoretical model that we solve both numerically and analytically. Numerical simulations --------------------- The Quincke rollers are electrically powered and move in a viscous fluid, and hence interact at a distance both hydrodynamically and electrostatically. In ref. [@b11], starting from the Stokes and Maxwell equations, we established the equations of motion of a dilute ensemble of Quincke rollers within a pairwise additive approximation. When isolated, the *i*th roller located at **r**~*i*~ moves at a speed *v*~0~ along the direction opposite to the in-plane component of the electrostatic dipole responsible for Quincke rotation[@b11]. When interacting via contact and electrostatic repulsive forces, the roller velocity and orientation are related by: Inertia is obviously ignored, and for the sake of simplicity we model all the central forces acting on the colloids as an effective hard-disc exclusion of range *b*. In addition, *θ*~*i*~ follows an overdamped dynamics in an effective angular potential capturing both the electrostatic and hydrodynamic torques acting on the colloids[@b11]: The *ξ*~*i*~'s account for rotational diffusion of the rollers. They are uncorrelated white noise variables with zero mean and variance 〈*ξ*~*i*~(*t*)*ξ*~*j*~(*t*′)〉=2*Dδ*(*t*−*t*′)*δ*~*ij*~. The effective potential in [equation 2](#eq15){ref-type="disp-formula"} is composed of three terms with clear physical interpretations: where **I** is the identity matrix. The symmetry of these interactions is not specific to colloidal rollers and could have been anticipated phenomenologically exploiting both the translational invariance and the polar symmetry of the surface-charge distribution of the colloids[@b34].
The first term promotes alignment and is such that the effective potential is minimized when interacting rollers propel along the same direction. *A*(*r*) is positive, decays exponentially with *r*/*H*, and results both from hydrodynamic and electrostatic interactions. The second term gives rise to repulsive *torques*, and is minimized when the roller orientation points away from its interacting neighbour. *B*(*r*) also decays exponentially with *r*/*H* but solely stems from electrostatics. The third term has a less intuitive meaning: it promotes the alignment of the roller orientation along a dipolar field set by the relative position of the interacting neighbour. This term is a combination of hydrodynamic and electrostatic interactions, and includes a long-ranged contribution. The functions *A*(*r*), *B*(*r*) and *C*(*r*) are provided in [Supplementary Note 2](#S1){ref-type="supplementary-material"}. As it turns out, all the physical parameters (roller velocity, field amplitude, fluid viscosity, etc.) that are needed to compute their exact expressions have been measured, or estimated up to logarithmic corrections, see [Supplementary Note 2](#S1){ref-type="supplementary-material"}. We are then left with a model having a single free parameter, the range *b* of the repulsive *forces* between colloids. We numerically solved this model in circular simulation boxes of radius *R*~c~ with reflecting boundary conditions using an explicit Euler scheme with adaptive time-stepping. All the numerical results are discussed using the same units as in the experiments to facilitate quantitative comparisons. The simulations revealed a richer phenomenology than the experiments, as captured by the phase diagram in [Fig. 4a](#f4){ref-type="fig"} corresponding to *R*~c~=0.5 mm. By systematically varying the range of the repulsive forces and the particle concentration, we found that the (*φ*~0~, *b*) plane is typically divided into three regions. At small packing fractions, the particles hardly interact and form an isotropic gaseous phase. At high fractions, after a transient dynamics strikingly similar to that observed in the experiments, the rollers self-organize into a macroscopic vortex pattern, [Fig. 4b](#f4){ref-type="fig"} and [Supplementary Movie 3](#S1){ref-type="supplementary-material"}. However, at intermediate densities, we found that collective motion emerges in the form of a macroscopic swarm cruising around the circular box through an ensemble of randomly moving particles, [Fig. 4c](#f4){ref-type="fig"} and [Supplementary Movie 4](#S1){ref-type="supplementary-material"}. These swarms are akin to the band patterns consistently reported for polar active particles at the onset of collective motion in periodic domains[@b11][@b14]. This seeming conflict between our experimental and numerical findings is solved by looking at the variations of the swarm length *ξ*~s~ with the confinement radius *R*~c~ in [Fig. 4d](#f4){ref-type="fig"}. We define *ξ*~s~ as the correlation length of the density fluctuations in the azimuthal direction. The angular extension of the swarms *ξ*~s~/*R*~c~ increases linearly with the box radius. Therefore, for a given value of the interaction parameters, there exists a critical box size above which the population undergoes a direct transition from a gaseous to an axisymmetric vortex state. For *b*=3*a*, which was measured to be the typical interparticle distance in the polar liquid state[@b11], this critical confinement is *R*~c~=1 mm.
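A minimal numerical sketch in this spirit reads as follows (fixed-step explicit Euler rather than the adaptive scheme used above; specular reflection at the disc boundary; and illustrative short-ranged coefficients in place of the measured kernels A(r) and B(r)):

    import numpy as np

    # Minimal confined aligning rollers: alignment + repulsion torques.
    # All coefficients below are illustrative placeholders.
    rng = np.random.default_rng(0)
    N, Rc, v0, D, dt = 800, 40.0, 1.0, 0.3, 0.05
    a_align, b_rep, r_int = 1.5, 1.0, 3.0

    pos = (rng.random((N, 2)) - 0.5) * Rc / np.sqrt(2)
    theta = rng.uniform(-np.pi, np.pi, N)

    for step in range(2000):
        dr = pos[:, None, :] - pos[None, :, :]          # dr[i, j] = r_i - r_j
        dist = np.linalg.norm(dr, axis=2)
        near = (dist < r_int) & (dist > 0)
        # Alignment torque ~ sin(theta_j - theta_i) for neighbours j.
        t_align = (np.sin(theta[None, :] - theta[:, None]) * near).sum(axis=1)
        # Repulsion torque rotates the orientation away from neighbours.
        phi = np.arctan2(dr[..., 1], dr[..., 0])        # direction from j to i
        t_rep = (np.sin(phi - theta[:, None]) * near).sum(axis=1)
        theta += dt * (a_align * t_align + b_rep * t_rep) \
            + np.sqrt(2 * D * dt) * rng.standard_normal(N)
        pos += dt * v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
        # Specular reflection on the disc boundary of radius Rc.
        out = np.linalg.norm(pos, axis=1) > Rc
        if out.any():
            n = pos[out] / np.linalg.norm(pos[out], axis=1, keepdims=True)
            p = np.stack([np.cos(theta[out]), np.sin(theta[out])], axis=1)
            p -= 2 * (p * n).sum(axis=1, keepdims=True) * n
            theta[out] = np.arctan2(p[:, 1], p[:, 0])
            pos[out] = n * Rc

Tracking the azimuthal order parameter of the earlier sketch during such a run shows the gas-to-vortex bifurcation as the density or the alignment strength is increased.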
This critical confinement is close to the smallest radius accessible in our experiments, where localized swarms were never observed, thereby solving the apparent discrepancy with the experimental phenomenology. More quantitatively, we systematically compare our numerical and experimental measurements in [Fig. 3b,c](#f3){ref-type="fig"} for *R*~c~=1 mm. Even though a number of simplifications were needed to establish [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"} (ref. [@b11]), the simulations account very well for the sharp bifurcation yielding the vortex patterns as well as their self-similar structure. This last point is proven quantitatively in [Fig. 3h](#f3){ref-type="fig"}, which demonstrates that the concentration increases away from the vortex core, where it remains close to the critical fraction, over a scale that is solely set by the confinement radius. We shall note, however, that the numerical simulations underestimate the critical packing fraction at which collective motion occurs, which is not really surprising given the number of approximations required to establish the interaction parameters in the equations of motion [equation 3](#eq16){ref-type="disp-formula"}. We unambiguously conclude from this set of results that [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"} include all the physical ingredients that chiefly dictate the collective dynamics of the colloidal rollers. We now exploit the opportunity offered by the numerics to turn on and off the four roller-roller interactions one at a time, namely the alignment torque, *A*, the repulsion torque *B* and force *b*, and the dipolar coupling *C*. Snapshots of the resulting particle distributions are reported in [Fig. 4e](#f4){ref-type="fig"}. None of these four interactions alone yields a coherent macroscopic vortex. We stress that when the particles solely interact via pairwise-additive alignment torques, *B*=*C*=*b*=0, the population condenses into a single compact polarized swarm. Potential velocity-alignment interactions are *not* sufficient to yield macroscopic vortical motion. We evidence in [Fig. 4e](#f4){ref-type="fig"} (top-right and bottom-left panels) that the combination of alignment (*A*≠0) and of repulsive interactions (*B*≠0 and/or *b*≠0) is necessary and sufficient to observe spontaneously flowing vortices. Analytical theory ----------------- Having identified the very ingredients necessary to account for our observations, we can now gain more detailed physical insight into the spatial structure of the vortices by constructing a minimal hydrodynamic theory. We start from [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"}, ignoring the *C* term in [equation 3](#eq16){ref-type="disp-formula"}. The model can be further simplified by inspecting the experimental variations of the individual roller velocity with the local packing fraction, see [Supplementary Fig. 1](#S1){ref-type="supplementary-material"}. The roller speed only displays variations of 10% as *φ*(**r**) increases from 10^−2^ to 4 × 10^−2^. These minute variations suggest ignoring the contributions of the repulsive forces in [equation 1](#eq13){ref-type="disp-formula"}, and solely considering the interplay between the alignment and repulsion torques on the orientational dynamics of [equation 2](#eq15){ref-type="disp-formula"}.
These simplified equations of motion are coarse-grained following a conventional kinetic-theory framework reviewed in ref. [@b4] to establish the equivalent of the Navier-Stokes equations for this two-dimensional active fluid. In a nutshell, the two observables we need to describe are the local area fraction φ and the local momentum field *φ***Π**. They are related to the first two angular moments of the one-particle distribution function, which evolves according to a Fokker-Planck equation derived from [equations 1](#eq13){ref-type="disp-formula"} and [2](#eq15){ref-type="disp-formula"}. This equation is then recast into an infinite hierarchy of equations for its angular moments. The first two equations of this hierarchy, corresponding to the mass conservation equation and to the momentum dynamics, are akin to the continuous theory first introduced phenomenologically by Toner and Tu[@b2][@b4]: where **Q** is the usual nematic order parameter. The meaning of the first equation is straightforward, while the second calls for some clarifications. The divergence term on the left-hand side of [equation 5](#eq26){ref-type="disp-formula"} is a convective kinematic term associated with the self-propulsion of the particles. The force field **F** on the right-hand side would vanish for non-interacting particles. Here, at first order in a gradient expansion, **F** is given by: This force field has a clear physical interpretation. The first term reflects the damping of the polarization by the rotational diffusion of the rollers. The second term, defined by the time rate *α*=(∫~*r*\>2*a*~*rA*(*r*)d*r*)/*a*^2^, echoes the alignment rule at the microscopic level and promotes a nonzero local polarization. The third term, involving *β*=(∫~*r*\>2*a*~*r*^2^*B*(*r*)d*r*)/(2*a*^2^), is an anisotropic pressure reflecting the repulsive interactions between rollers at the microscopic level. [Equations 4](#eq25){ref-type="disp-formula"} and [5](#eq26){ref-type="disp-formula"} are usually complemented by a dynamical equation for **Q** and a closure relation. This additional approximation, however, is not needed to demonstrate the existence of vortex patterns and to rationalize their spatial structure. Looking for axisymmetric steady states, it readily follows from mass conservation, [equation 4](#eq25){ref-type="disp-formula"}, that the local fields must take the simple forms *φ*=*φ*(*r*) and a purely azimuthal polarization, with *Q*(*r*)\>0. We also infer a relation between the local orientational order and the density from the projection of the momentum equation, [equation 5](#eq26){ref-type="disp-formula"}, on the azimuthal direction. This relation tells us that the competition between rotational diffusion and local alignment results in a mean-field transition from an isotropic state to a polarized vortex state with *Q*=(1/2)(1−*D*/(*αφ*)). This transition occurs when φ exceeds *D*/*α*, the ratio of the rotational diffusivity to the alignment strength at the hydrodynamic level. In addition, the projection of [equation 5](#eq26){ref-type="disp-formula"} on the radial direction sets the spatial structure of the ordered phase. This equation has a clear physical meaning and expresses the balance between the centrifugal force arising from the advection of momentum along a circular trajectory and the anisotropic pressure induced by the repulsive interactions between rollers.
It has an implicit solution: *φ*(*r*) is parametrized by a single dimensionless number Λ reflecting the interplay between self-propulsion and repulsive interactions. Given the experimental values of the microscopic parameters, Λ is much smaller than unity. An asymptotic analysis then reveals the typical core radius of the vortex. Below this radius, the density increases slowly, for all *φ*~0~ and *R*~c~; as *r* reaches the core radius, the density increases significantly and then grows logarithmically away from the vortex core. The associated integration constant is solely defined via the mass-conservation relation and therefore only depends on *φ*~0~ and *R*~c~; it does not provide any intrinsic structural scale, and the vortex patterns formed in different confinements are predicted to be self-similar, in agreement with our experiments and simulations despite the simplifications made in the model, [Fig. 3e](#f3){ref-type="fig"}. In addition, [equation 8](#eq36){ref-type="disp-formula"} implies that the rollers self-organize by reducing their density at the centre of the vortex down to the mean area fraction at the onset of collective motion, again in excellent agreement with our measurements in [Fig. 3e](#f3){ref-type="fig"}. To characterize the orientational structure of the vortices, an additional closure relation is now required. The simplest possible choice consists in neglecting correlations of the orientational fluctuations. [Equations 8](#eq36){ref-type="disp-formula"} and [9](#eq49){ref-type="disp-formula"} then provide a very good fit of the experimental polarization curve, as shown in [Fig. 3b](#f3){ref-type="fig"}, and therefore capture both the pitchfork-bifurcation scenario at the onset of collective motion and the saturation of the polarization at high packing fractions. The best fit is obtained for values of *α* and *β*, respectively, five and two times larger than those deduced from the microscopic parameters. Given the number of simplifications needed to establish both the microscopic and hydrodynamic models, the agreement is very convincing. We are then left with a hydrodynamic theory with no free fitting parameter, which we use to compute the area fraction of the outer polarized ring. The comparison with the experimental data in [Fig. 3f](#f3){ref-type="fig"} is excellent. Furthermore, [equations 8](#eq36){ref-type="disp-formula"} and [9](#eq49){ref-type="disp-formula"} predict that the rollers are on the verge of a phase separation. If the roller fraction in the vortex core were any smaller, orientational order could not be supported there and an isotropic bubble would nucleate in the polar liquid. This phase separation is avoided by the self-regulation of *φ*(*r*=0) at the onset value.

Discussion
==========

Altogether, our theoretical results confirm that the vortex patterns stem from the interplay between self-propulsion, alignment, repulsion and confinement. Self-propulsion and alignment interactions promote a global azimuthal flow. The repulsive interactions prevent condensation of the population on the geometrical boundary and allow for extended vortex patterns. If the rollers were not confined, the population would evaporate, as self-propulsion induces a centrifugal force despite the absence of inertia. We close this discussion by stressing the generality of this scenario. First, the vortex patterns do not rely on the perfect rotational symmetry of the boundaries. As illustrated in [Supplementary Fig.
2](#S1){ref-type="supplementary-material"}, the same spatial organization is observed for a variety of convex polygonal geometries. However, strongly anisotropic and/or strongly non-convex confinements can yield other self-organized states, such as vortex arrays, which we will characterize elsewhere. Second, neither the nature of the repulsive couplings nor the symmetry of the interactions yielding collective motion is crucial, thereby making the above results relevant to a much broader class of experimental systems. For instance, self-propelled particles endowed with nematic alignment rules are expected to display the same large-scale phenomenology. The existence of a centrifugal force does not rely on the direction of the individual trajectories. Shaken rods, concentrated suspensions of bacteria or motile biofilaments, among other possible realizations, are expected to have a similar phase behaviour. Quantitative local analysis of their spatial patterns[@b10][@b12][@b15][@b16][@b17] would make it possible to further test and elaborate our understanding of the structure of confined active matter. For instance, the polar order found in confined bacteria is destroyed upon increasing the size of the confinement. The analysis of the spatial distribution of the bacteria could be used to gain insight into the symmetries and the magnitude of the additional interactions mediated by the host fluid, which are responsible for the emergence of bacterial turbulence[@b17]. In conclusion, we take advantage of a model experimental system where ensembles of self-propelled colloids with well-established interactions self-organize into macroscopic vortices when confined by circular geometric boundaries. We identify the physical mechanism that chiefly dictates this emergent behaviour. Thanks to a combination of numerical simulations and analytical theory, we demonstrate that orientational couplings alone cannot account for collective circular motion. Repulsion between the motile individuals is necessary to balance the centrifugal flow intrinsic to any ordered active fluid and to stabilize heterogeneous yet monophasic states in a broad class of active fluids. A natural challenge is to extend this description to the compact vortices observed in the wild, for example, in shoals of fish. In the absence of confining boundaries, the centrifugal force has to be balanced by additional density-regulation mechanisms[@b35][@b36]. A structural investigation akin to the one introduced here for roller vortices could be a powerful tool to shed light on density regulation in natural flocks, which remains to be elucidated.

Methods
=======

Experiments
-----------

We use fluorescent PMMA colloids (Thermo Scientific G0500, 2.4 μm radius), dispersed in a 0.15 mol l^−1^ AOT/hexadecane solution. The suspension is injected in a wide microfluidic chamber made of double-sided scotch tape. The tape is sandwiched between two ITO-coated glass slides (Solems, ITOSOL30, 80 nm thick). An additional layer of scotch tape including a hole having the desired confinement geometry is added to the upper ITO-coated slide. The holes are made with a precision plotting cutter (Graphtec Robo CE6000). The gap between the two ITO electrodes is constant over the entire chamber, *H*=220 μm. The electric field is applied by means of a voltage amplifier (Stanford Research Systems, PS350/5000 V-25 W). All the measurements were performed 5 min after the beginning of the rolling motion, when a steady state was reached for all the observables.
The colloids are observed with a × 4 microscope objective for particle tracking, particle imaging velocimetry (PIV) and number-density measurements. High-speed movies are recorded with a CMOS camera (Basler ACE) at a frame rate of 190 fps. All images are 2,000 × 2,000 8-bit pictures. The particles are detected to sub-pixel accuracy and the particle trajectories are reconstructed using a MATLAB version of a conventional tracking code[@b37]. The PIV analysis was performed with the mpiv MATLAB code. A block size of 44 μm was used.

Numerical simulations
---------------------

The simulations are performed by numerically integrating the equations of motion ([equations 1](#eq13){ref-type="disp-formula"} and [2](#eq15){ref-type="disp-formula"}). Particle positions and rolling directions are initialized randomly inside a circular domain. Integration is done using an Euler scheme with an adaptive time step *δt*, and the diffusive term in the equation for the rotational dynamics is modelled as a Gaussian variable with zero mean and with variance 2*D*/*δt*. Steric exclusion between particles is captured by correcting particle positions after each time step so as to prevent overlaps. Bouncing off of particles at the confining boundary is captured using a phenomenological torque that reorients the particles towards the centre of the disc; the form of the torque was chosen so as to reproduce the bouncing trajectories observed in the experiments.

Additional information
======================

**How to cite this article:** Bricard, A. *et al.* Emergent vortices in populations of colloidal rollers. *Nat. Commun.* 6:7470 doi: 10.1038/ncomms8470 (2015).

Supplementary Material {#S1}
======================

###### Supplementary Information

Supplementary Figures 1-3, Supplementary Notes 1-2 and Supplementary References

###### Supplementary Movie 1

Epifluorescence movie of a dilute ensemble of colloidal rollers exploring a circular chamber. Rc=1 mm. Packing fraction: 0.3%. Movie recorded at 100 fps, played at 25 fps.

###### Supplementary Movie 2

Emergence of a macroscopic vortex pattern. Packing fraction: 3.6%. Rc=1 mm. Epifluorescence movie recorded at 100 fps, played at 11 fps. At t=3 s, the electric field is turned on and the rollers start propelling.

###### Supplementary Movie 3

Numerical simulation of a population of rollers showing the formation of an axisymmetric vortex. Packing fraction: 10%, range of repulsive forces: b=5a.

###### Supplementary Movie 4

Numerical simulation of a population of rollers showing the formation of a finite-sized swarm. Packing fraction: 4.5%, range of repulsive forces: b=2a.

We benefited from valuable discussions with Hugues Chaté, Nicolas Desreumaux, Olivier Dauchot, Cristina Marchetti, Julien Tailleur and John Toner. This work was partly funded by the ANR program MiTra and Institut Universitaire de France. D.S. acknowledges partial support from the Donors of the American Chemical Society Petroleum Research Fund and from NSF CAREER Grant No. CBET-1151590. K.S. was supported by the JSPS Core-to-Core Program 'Non-equilibrium dynamics of soft matter and information'.

**Author contributions** A.B. and V.C. carried out the experiments and processed the data. D.D., C.S., O.C., F.P. and D.S. carried out the numerical simulations. J.-B.C., K.S. and D.B. established the analytical model. All the authors discussed and interpreted results. D.B., J.-B.C. and D.S. wrote the manuscript. D.B. conceived the project. A.B. and J.-B.C. contributed equally to this work.
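To make the time-stepping scheme of the Numerical simulations subsection concrete, here is a minimal, schematic sketch of the update loop. This is our own illustration, not the authors' code: the couplings are reduced to a single short-range alignment torque plus hard-core exclusion and wall reorientation, and every parameter value and helper name is an assumption.

```python
import numpy as np

# Illustrative parameters only (not the experimental values).
N, Rc, a = 500, 1.0, 0.005          # rollers, disc radius, particle radius
v0, D, A0, r_align = 0.1, 1.0, 10.0, 0.02
dt = 1e-3                           # fixed step; the paper uses an adaptive one

rng = np.random.default_rng(0)
pos = rng.uniform(-Rc / 2, Rc / 2, (N, 2))   # initial positions
theta = rng.uniform(0, 2 * np.pi, N)         # initial rolling directions

def euler_step(pos, theta):
    # Self-propulsion at constant speed v0 along the body axis.
    pos = pos + v0 * dt * np.c_[np.cos(theta), np.sin(theta)]

    # Schematic pairwise alignment torque within range r_align.
    dx = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(dx, axis=-1)
    near = (dist > 0) & (dist < r_align)
    torque = np.where(near, np.sin(theta[None, :] - theta[:, None]), 0.0).sum(axis=1)

    # Rotational diffusion: the angular increment has variance 2*D*dt.
    theta = theta + A0 * torque * dt + np.sqrt(2 * D * dt) * rng.standard_normal(N)

    # Steric exclusion: push overlapping pairs apart after the move.
    unit = np.divide(dx, dist[..., None], out=np.zeros_like(dx),
                     where=dist[..., None] > 0)
    overlap = near & (dist < 2 * a)
    pos = pos + np.where(overlap[..., None],
                         0.5 * (2 * a - dist[..., None]) * unit, 0.0).sum(axis=1)

    # Confinement: reorient rollers crossing the wall towards the disc centre.
    out = np.linalg.norm(pos, axis=1) > Rc
    theta[out] = np.arctan2(-pos[out, 1], -pos[out, 0])
    return pos, theta

for _ in range(1000):
    pos, theta = euler_step(pos, theta)
```

Consistent with the results above, a sketch like this with alignment alone should produce a compact swarm rather than a vortex; repulsive couplings must be restored to stabilize extended vortices ([Fig. 4e](#f4){ref-type="fig"}).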
![Experimental setup.\
(**a**) Sketch of the setup. Five-micrometre PMMA colloids roll in a microchannel made of two ITO-coated glass slides assembled with double-sided scotch tape. An electrokinetic flow confines the rollers at the centre of the device in a circular chamber of radius *R*~c~. (**b**) Superimposed fluorescence pictures of a dilute ensemble of rollers (*E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^). The colloids propel only inside a circular disc of radius *R*~c~=1 mm and follow persistent random walks.](ncomms8470-f1){#f1}

![Dynamics of an isolated colloidal roller.\
(**a**) Local packing fraction *φ*(*r*), averaged over the azimuthal angle, plotted as a function of the radial distance. The dashed line indicates the radius of the circular chamber. (**b**) Probability distribution function of the roller velocities measured from the individual tracking of the trajectories. (**c**) Autocorrelation of the roller velocity 〈**v**~*i*~(*t*)·**v**~*i*~(*t*+*T*)〉 plotted as a function of *v*~0~*T* for packing fractions ranging from *φ*~0~=6 × 10^−3^ to *φ*~0~=10^−2^. Full line: best exponential fit. (**d**) Superimposed trajectories of colloidal rollers bouncing off the edge of the confining circle. Time interval: 5.3 ms (*E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^). Same parameters for the four panels: *R*~c~=1.4 mm, *E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^.](ncomms8470-f2){#f2}

![Collective-dynamics experiments.\
(**a**) Snapshot of a vortex of rollers. The dark dots show the position of one half of the ensemble of rollers. The blue vectors represent their instantaneous speed (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**b**) Average polarization plotted versus the average packing fraction for different confinement radii. Open symbols: experiments. Full line: best fit from the theory. Filled circles: numerical simulations (*b*=3*a*, *R*~c~=1 mm). (**c**) Time-averaged polarization field (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**d**) Time average of the local packing fraction (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**e**) Time-averaged packing fraction at the centre of the disc, normalized and plotted versus the average packing fraction. Error bars: one standard deviation. (**f**) Area fraction of the outer polarized ring versus the average packing fraction. Open symbols: experiments. Full line: theoretical prediction with no free fitting parameter. Filled circles: numerical simulations (*b*=3*a*, *R*~c~=1 mm). (**g**) Radial density profiles plotted as a function of the distance to the disc centre *r*. All the experiments correspond to *φ*~0~=0.032±0.002, error bars: 1*σ*. (**h**) Open symbols: same data as in **g**. The radial density profiles are rescaled and plotted versus the rescaled distance to the centre *r*/*R*~c~. All the profiles are seen to collapse on a single master curve. Filled symbols: numerical simulations. Solid line: theoretical prediction. All the data correspond to *E*~0~/*E*~*Q*~=1.1.](ncomms8470-f3){#f3}

![Collective-dynamics simulations.\
(**a**) The numerical phase diagram of the confined population is composed of three regions: isotropic gas (low *φ*~0~, small *b*), swarm coexisting with a gaseous phase (intermediate *φ*~0~ and *b*) and vortex state (high *φ*~0~ and *b*). *R*~c~=0.5 mm. (**b**) Snapshot of a vortex state. Numerical simulation for *φ*~0~=0.1 and *b*=5*a*. (**c**) Snapshot of a swarm. Numerical simulation for *φ*~0~=4.5 × 10^−2^ and *b*=2*a*. (**d**) Variation of the density correlation length as a function of *R*~c~.
Above *R*~c~=1 mm, ξ plateaus and a vortex is reached (*φ*~0~=3 × 10^−2^, *b*=3*a*). (**e**) Four numerical snapshots of rollers interacting via: alignment interactions only (*A*), alignment interactions and repulsive torques (*A*+*B*, where the magnitude of *B* is five times the experimental value), alignment and excluded volume interactions (*A*+*b*, where the repulsion distance is *b*=5*a*), alignment and the *C*-term in [equation 3](#eq16){ref-type="disp-formula"} (*A*+*C*). Polarized vortices emerge solely when repulsive couplings exist (*A*+*B* and *A*+*b*).](ncomms8470-f4){#f4}

[^1]: These authors contributed equally to this work
{ "pile_set_name": "PubMed Central" }
require(['gitbook'], function (gitbook) {
    // Re-render mermaid diagrams on every page change, since GitBook swaps
    // page content in place without a full reload. Assumes the mermaid
    // library has already been loaded globally by the book.
    gitbook.events.bind('page.change', function () {
        mermaid.init();
    });
});
{ "pile_set_name": "Github" }
TODO: Implement depth-major-sources packing paths for NEON

Platforms: ARM NEON
Coding time: M
Experimentation time: M
Skill required: M

Prerequisite reading: doc/kernels.txt, doc/packing.txt
Model to follow/adapt: internal/pack_neon.h

At the moment we have NEON-optimized packing paths for WidthMajor sources. We also need paths for DepthMajor sources. This is harder because, for DepthMajor sources, the size of each slice that we have to load is the kernel's width, which is typically 12 (for the LHS) or 4 (for the RHS). That's not very friendly to NEON vector-load instructions, which would allow us to load 8 or 16 entries, but not 4 or 12. So you will have to load 4 entries at a time only. For that, the vld1q_lane_u32 intrinsic seems to be as good as you'll get. The other possible approach would be to load (with plain scalar C++) four uint32's into a temporary local buffer, and use vld1q_u8 on that; both candidates are sketched below. Some experimentation will be useful here. For that, you can generate assembly with -save-temps and make assembly easier to inspect by inserting inline assembly comments such as asm volatile("#hello");
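A rough sketch of the two candidate load strategies (illustrative only: the helper names, the stride parameter and the surrounding setup are assumptions, not existing gemmlowp code):

#include <arm_neon.h>
#include <cstdint>
#include <cstring>

// Candidate 1: gather four 4-byte groups with lane loads.
// Note: vld1q_lane_u32 takes a uint32_t*, so each source row should be
// 4-byte aligned for this to be strictly well-defined.
inline uint8x16_t Load4GroupsByLane(const std::uint8_t* src, int stride) {
  uint32x4_t v = vdupq_n_u32(0);
  v = vld1q_lane_u32(reinterpret_cast<const std::uint32_t*>(src + 0 * stride), v, 0);
  v = vld1q_lane_u32(reinterpret_cast<const std::uint32_t*>(src + 1 * stride), v, 1);
  v = vld1q_lane_u32(reinterpret_cast<const std::uint32_t*>(src + 2 * stride), v, 2);
  v = vld1q_lane_u32(reinterpret_cast<const std::uint32_t*>(src + 3 * stride), v, 3);
  return vreinterpretq_u8_u32(v);
}

// Candidate 2: scalar-copy the four groups into a local buffer, then do a
// single full-width load; memcpy sidesteps alignment/aliasing questions.
inline uint8x16_t Load4GroupsViaBuffer(const std::uint8_t* src, int stride) {
  std::uint32_t buf[4];
  for (int i = 0; i < 4; ++i) {
    std::memcpy(&buf[i], src + i * stride, sizeof(buf[i]));
  }
  return vld1q_u8(reinterpret_cast<const std::uint8_t*>(buf));
}

Whichever candidate wins the experimentation can then be wired into the DepthMajor packing path; the -save-temps trick above applies to both.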
{ "pile_set_name": "Github" }
package de.peeeq.wurstscript.utils;

import de.peeeq.wurstscript.WLogger;

/**
 * Measures the wall-clock execution time of a code block and logs it via
 * {@link WLogger} when the block is left; intended for use with
 * try-with-resources.
 */
public class ExecutiontimeMeasure implements AutoCloseable {

    private String message;
    private long startTime;

    public ExecutiontimeMeasure(String message) {
        this.message = message;
        this.startTime = System.currentTimeMillis();
    }

    @Override
    public void close() {
        // Log the elapsed time since construction.
        long time = System.currentTimeMillis() - startTime;
        WLogger.info("Executed " + message + " in " + time + "ms.");
    }
}
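// A minimal usage sketch (a hypothetical call site; runTypeChecker() is an
// assumed stand-in for whatever work is being timed):
//
//   try (ExecutiontimeMeasure measure = new ExecutiontimeMeasure("type checking")) {
//       runTypeChecker(); // logs e.g. "Executed type checking in 42ms." on close
//   }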
{ "pile_set_name": "Github" }
Monday, February 27, 2017

With just a fortnight left till CNY, the OUG morning market was in a frenzy. You know things are getting serious when RELA is deployed to control traffic. Roads once accessible to traffic were closed and the extra space was occupied by vendors selling fireworks, waxed meat, biscuits, sea cucumber, dried mushrooms (a lot of 'em!), etc. While I was walking past Restoran New Sun Ho, saw another example of the globalized Malaysian breakfast-- roti canai with salmon sashimi. One day I might start seeing roti sashimi on the menu. Lunch was a simple meal of chicken porridge with a can of fried dace. Very traditional fare. CNY was around the corner, so we spruced up the house a bit. Bought a can of white paint and repainted the metal grills in the living room. Was a little woozy after inhaling the paint fumes. Felt better after we had dinner at Dao Dao. A plate of tofu with mixed vegetables and Marmite chicken really hit the spot.

Wednesday, February 22, 2017

Lego was very much a part of my childhood. Back then, I saved my pocket money, angpao money, and Cantonese Association award money to buy Lego. Back then, the cheapest set cost MYR5+. I think the most expensive model I bought was around MYR80+. Just the normal Lego, for ages 5-12. Back then, the only other ranges were Duplo and Technic. So many varieties have popped up after the Lego resurgence. Today we have: Star Wars, Creator, Minifigures, City, Friends, Nexo Knights, Ninjago, DC Comics Superheroes, The Batman Movie, Marvel Superheroes, Elves, Disney, Juniors, Architecture, Mindstorms, Scooby Doo, Minecraft, Speed, Angry Birds, Ideas, and Superhero Girls. Dizzying, right? However, all those varieties don't interest me. Technic seems way cooler to me with all those gears, shafts, and moving parts. A sort of kiddy mechanical engineering. More worth it to pay for design and complex parts, rather than paying for copyright costs. Back then, they had pneumatic models; these days, electric motors. Goes without saying that I didn't buy any Technic models back then due to the cost factor. Twenty-four years later, I got to assemble my first Technic model-- a Christmas gift from KH. Muacks. He bought me a Lego Technic Drag Racer (42050), which is kind of cool because it has a huge motor block with moving pistons, front wheel steering, a big wheelie bar, a car body that can be raised, and a hidden jack that can be used to pose the drag racer in wheelie position.

Tuesday, February 21, 2017

Closer and closer to CNY, the market was getting redder and redder. The pace was catching up. Definitely more people than usual judging from the crowds and the lack of parking. Mum stopped at a fishmonger's truck and I was shocked by a large grouper that was on the weighing scale. It was not the size of it that caught my eye, but the fact that it looked like it was dripping in cum! Guess bukkake is not uncommon among fish. Haha. On a serious note, the slime is a layer of mucus that helps fish with gas transport, provides protection, and reduces turbulence in the water. Ventured to Taman Bukit Desa for Sarawak Laksa at Charlie's Cafe because we were bored of the food options at OUG. Taman Bukit Desa was very peaceful compared to the chaos at the market. Always quiet in that neck of the woods. On the way home, we turned into Pearl Shopping Gallery, an interesting new addition to Pearl Point. Located across the road from Jalan Sepadu, it's connected to the old wing via a bridge. Yes, it's always about bridges with shopping malls these days.
Inside, there's a small Village Grocer and multiple food establishments (for some reason they have three Korean restaurants). Notable outlets are Paradise Dynasty, Kyochon, Go Noodle House, Kin Kin Pan Mee, and Powerplant. There's also Cremeo, which serves premium soft-serve ice cream. Will give an update as I try the food outlets there!

Friday, February 17, 2017

After a whole month of babysitting, mum and I finally had the chance to go to the market. The first visit of 2017. With all the year-end holidays out of the way, it finally dawned on people that Chinese New Year was less than a month away! Start the panic buying! The earliest merchants to start the ball rolling were those who sold religious paraphernalia. They had already started stocking all sorts of special paper offerings and fancy joss sticks; even the stuff needed to Pai Ti Kong was already on sale. Breakfast was our favourite curry noodles in the alley. In the afternoon, KH came over to my place to chill (kind of like chaperoned dating). We would just laze on the sofa and chat. Sometimes, we would play our current addiction-- LINE Rangers. So cute, so pointless, and yet... Haha. Received a call from SK and we were out to The Coffee Sessions. More updates from SK regarding her family issues over a cup of mocha and a side of fries. People never fail to surprise. When you think they've hit rock bottom, there seems to be more to go! SK stayed on for dinner while KH returned home. Just a simple meal at nearby Restoran 83. A very satisfying dinner of pork noodles, grilled stingray, fried rice, and chicken soup. Remember when it was such a craze to have Portuguese-style grilled stingray at the Midvalley Megamall Food Court? Ages ago...

Wednesday, February 15, 2017

For mum and me, church is the best place to usher in the new year. Bilingual mass started at 10:30 PM and ended fifteen minutes shy of midnight, giving parishioners enough time to find a good location to watch the fireworks display outside (some places actually started the fireworks five minutes before midnight). And a snack box was provided too. By the time we got home, it was past 1:00 AM and we had mass to attend on the 1st of January too! Not that it was mandatory, just that I had traffic warden duties to perform. A thankless job that nobody wants to do. You have to come to church before the crowd, and leave only after everyone does. And it's not pleasant when the weather is hot. And drivers are a cranky bunch. Basically, traffic wardens are disconnected from what happens in the church. Been a while since we ate at Nihon-Kai, so we gave it a try for lunch on New Year's day. Got there at 1:30 PM and it was still crowded. Had to sit at the bar counter. Interesting to experience the Japanese way of celebrating the new year. Right at the cashier counter, they had a plastic kagami mochi, which translates to mirror rice cake. Basically a snowman made of rice cake with a bitter orange on top. Not too sure about the traditional symbolism though. Since they had a special set for the day, we ordered it. Perfect for sharing with a little bit of everything-- tempura moriawase, grilled Saba, tamago, maki, inari, arrowroot chips (a Chinese touch), and ozoni, a special mochi soup prepared during the new year. Also added some nigiri sushi to complete the meal. In the afternoon, I played Mr. Plumber. Tried to fix mum's leaky toilet. That attempt really taught me a thing or two about the 'anatomy' of a toilet. LOL.
Managed to fix the leak, but I had to run out to the hardware store to get the right spare parts. Unfortunately, I discovered a more sinister problem with the plumbing. Something was definitely screwing up the pressure in my mum's bathroom. Probably some pesky leak. That's a job best left to the professionals. At night, we had a BEC gathering at KM1 West. Everyone had a fun time trying to get to the multi-purpose hall due to the security measures. Visitors with no access cards could not access the lift lobbies and it was raining heavily outside. The problem with all these 6-tier security condominiums. It's hell when you have a lot of guests coming over. At the venue, we were experiencing blackouts because the power was overloaded. Goodness. When we got all of that ironed out, we started the festivities. Stress levels were a little high because both parish priests came for the party. Luckily for us, they didn't stay long. Plenty of food and games. The hostess looked very happy because her husband joined in for the first time ever. Perhaps he wanted to cheer up his wife, who was diagnosed with breast cancer earlier this year. A simple gesture, but it brings deep joy.

Monday, February 13, 2017

On the last day of the Christmas long weekend, I finally managed to spend some time with KH. Normally, Gratitude would cook Christmas dinner, but fears of water disruption and the lack of a maid to do the clean-up made him change his mind. Instead, he invited us for a dim sum brunch at Mayflower Restaurant, Le Garden Hotel (not to be confused with KK Mayflower nearby). It was a nice reunion with KH, Gratitude's mum, my mum, Apollo and QueerRanter-D. The dim sum was pretty good, with larger portions. They had all the standard varieties, but KH was disappointed that they didn't have custard buns. I liked their siew mai and egg tarts. For coffee, we moved to Brew and Bread. Now, they operate upstairs with the kitchen located in a nearby lot. Compared to last time, this arrangement gives them a whole lot more space to play with. With windows along the whole length of the back, the whole place is flooded with natural light. However, their furniture arrangement is a bit unconventional. Big round tables are quite out of place in a cafe. Why would people want to sit around one, especially when there's a huge centerpiece in the middle? Then there's the super long table at the back. Reminded me of the school canteen. In terms of coffee, they now have two blends to choose from-- Driver and Sugar Daddy. Can't imagine why they chose those names. If you're into cold brew, they have something called a Bombshell that actually tastes like it's spiked with alcohol. Their croissant is pretty good too, buttery and flaky. Before leaving Kota Kemuning, we stopped a while at AEON Big but that turned out to be a big disappointment. Not well-stocked at all. Walked into a Nagoya as well, a place which brought back memories of my childhood. When I was a kid, mum regularly brought me to fabric shops. As a seamstress, she sometimes had to source for fabrics. While she was choosing, I would be exploring the 'forest' of cloth rolls, running my hands over the cold, silky material. Sometimes, I would scrutinize the price tags attached to the cloth rolls with safety pins. They were usually stamped in blue ink with a small sample of the cloth taped to the tag. Later in the evening, KH and I had a dinner date with QueerRanter (the real deal!), Jin, Apollo, and QueerRanter-D. KH had no luck choosing the dinner venue in Publika. Plan A - Episode.
Not open. Plan B - Silver Spoon. Not open. Plan C - Two Sons Bistro. A random choice based on what was open. Halal. Two Sons is a place famous for its mussels and clams. Sixteen varieties to choose from. We ordered a full size (900 grams) of lemon garlic butter mussels to share. Delicious with two portions of garlic bread to lap up the gravy. KH and I shared a Supreme Stuffed Chicken. Stuffed with what, you may ask? Jalapeno peppers, sun-dried tomatoes, and cream cheese. Spicy and creamy at the same time. Not a very good feeling. Dessert and coffee really were at Plan B. We reminisced about the days of drama in the heyday of the BFF. So much nonsense from nonsense people. "My birthday party is better than yours" nonsense. "I'm manipulating your feelings" nonsense. "It's all about me" nonsense. "I'm a hypocrite who talks shit" nonsense. The list goes on. But without them, it would have been a bland existence with nothing to reminisce about. Haha.

Friday, February 10, 2017

I did not wake up bright and early on Christmas morning. Just as I hit the send button on WhatsApp asking my sister whether the kids had woken up, I heard a commotion downstairs. No mistaking it-- it was The Tribe. My sister had moved all the presents to my place. And it was a lot of presents. Combined with presents from SK and me, it was truly a formidable heap! The excitement and anticipation from the kids was palpable. However, they had to wait a little while longer. All of us donned our customized Christmas T-shirts and took a group photo first. Before we began, I brought out a laundry basket to contain all the discarded gift wrapping. The kids went through their presents like locusts. They got toys, shoes, clothes, bags, and water tumblers. Guess they were on Santa's Good list, else they would have received white envelopes filled with cash from Sump'n Claus. Never actually played with Lego Technic before, so it was nice of KH to get me a set. Once we cleared all the debris, it was time to go out for breakfast. BIL wanted to go to a cafe that was Christmas-y. Unfortunately, most of the cafes in Sri Petaling only opened at 11:00 AM. In the end, our stomachs decided that we'd eat at Poppo Kanteen, a cafe famed for its nasi lemak. If you're not into that, they have noodles and bread too. Plenty of variety and value for money. A surprising MYR8.90 for a big plate with a whole fried chicken leg. Their sambal is pretty good, but the rice didn't quite make the cut. Right on their doorstep is another nasi lemak stall. That stall has been around for years and the rice is really good. Very savoury and creamy. Much better than Poppo Kanteen, but their sambal falls short. What we did was combine the best of both worlds. Haha. The staff didn't mind. Another thing I liked about Poppo Kanteen was the cham. Great taste and hot! Don't ever give me that lukewarm shit. A drink that makes you go "Ahhhhhh...." after every sip. Big Monster had a very good appetite. He walloped a plate of nasi lemak, roti Planta, and potato wedges! In the afternoon, we made a visit to Sunway Velocity Mall, one of KL's newest retail spots. Boy, what a mistake that was. The roads were choked and the mall was overflowing with people. Been a long time since I saw such good business at a mall. The escalators were packed and groups of people were seen just standing around. Crazy. Every ten minutes, there would be a public paging for lost kids and disoriented adults.
But this took the cake: "Dear shoppers, please be advised that there is a traffic jam on Jalan Cheras and Jalan Peel. We advise you to continue shopping. Thank you." The Tribe watched "Moana" there at 4:00 PM, but mum and I had some time to kill before our movie. We did some shopping and had a late lunch / early dinner at Canton Kitchen. Not recommended at all. Lethargic service and lousy portions. They charge prices similar to Foong Lye for their set meals but what you get is a far cry from it. Lukewarm food with just a whole lot of Chinese cabbage in different forms. The only thing I liked was the Pumpkin Springroll. With an hour to go before our movie at Leisure Mall, I fired up the Uber app. Got a ride on a brand new HRV. The driver remarked that traffic was fine before the opening of the mall. Oh well. The ride to Leisure Mall took less than fifteen minutes. Believe it or not, it was my first time at this iconic Cheras mall. Looks pretty good for a neighbourhood mall. Pretty impressive Christmas decorations. Gave me some Singapore heartland mall vibes. We watched "Show Me Your Love", a local production with a cast fortified by Hong Kong actors. I expected more tears from the movie, but it fell a bit short. Overall a nice movie with funny and touching moments. The cinema hall was also surprisingly comfortable. Perhaps they had changed the seats in recent years. Got out of the cinema just in time for The Tribe to pick us up. Truly a fun Christmas spent with loved ones.

Thursday, February 09, 2017

Unexpectedly, my sister turned up at our doorstep on the morning of Christmas Eve. Naturally, the kids were thrilled to see her. We had a day out in Bukit Bintang ahead of us. First stop was Pavilion Kuala Lumpur to look at the Christmas decorations. Like past years, there was a huge tree out front. The only difference was that they had a train that traveled around the tree. In my opinion, it's quite a nuisance to have that when so many people mill at the entrance. Swarovski returned as the main sponsors for the center atrium's Christmas decorations, but I wasn't very impressed by it. This time round, the center atrium is dominated by a castle and a crystal carousel. In the overall theme, it's not very attention-grabbing. Mum shopped while I kept the kids in check. The typical running around, and crawling in and out of clothes racks. Difficult to shop in peace with kids around. All that running around helped the kids build up an appetite. By noon, they were asking for food. At first, we thought of eating at D'Empire, but after looking at how limited the menu was, we walked out. Instead, we ate at Pigs and Wolf, located on the Dining Loft. Much better options there. The Christmas set looked like a steal, so we ordered that. MYR60 bought us a soup, main, dessert, and iced lemon tea. They served a cream of potato soup that came with a side of bacon. Both kids gave a squeal of joy when they saw the bacon. For the main, we chose the Prawn and Salmon Pesto that came with a generous portion of smoked salmon and juicy prawns. Dessert was a slice of moist chocolate roll. In addition to that, we got a plate of Carbonara with pork sausages for Little Monster. Big Monster practically polished off a Mighty Piggy Burger all by himself. He loved the thick, juicy patty made from US pork. Since he ordered a burger, his uncle could get a pint of Asahi for MYR10. Their neighbours, Starz Kitchen and Rocku Yakiniku, both had a Santa Claus out front to attract customers and spread some Christmas cheer.
But not all Santas are created equal. Starz Kitchen obviously had the bigger budget. They actually got a fat gweilo to play the part. The wig, beard, and costume were also much better. The guy from Rocku Yakiniku was a Cina-fied version in an ill-fitting SuperSave costume. Last stop for the day was Isetan the Japan Store. Had to be very careful in there. Porcelain art with a price tag of MYR14,000 located at child level? OK, shoooh, we are going to the other floors. Didn't stay long really. The kids were tired and Big Monster even started sneezing from all the air-conditioning. Mum and I attended Christmas Eve mass at church that night. There was the usual caroling before mass. During the procession, Baby Jesus was held aloft by the priest and subsequently placed into the manger, and incensed. Everyone was all dressed up and parishioners exchanged greetings to mark the happy occasion of Jesus' birth.

Monday, February 06, 2017

In order to make it to English mass at SIC, we had to get up at 7:00 AM. After getting ready, one still needs to fuss over the sleepy kids. At church, we bundled them into the soundproof family room and hoped that they would not be too rowdy. Big Monster asked me a legit question:

Big Monster: Is Jesus dead?
Moi: Yes, but he resurrected after three days.
Big Monster: Oh, someone threw him a Potion of Regeneration.

In case you were wondering, he was talking about something from Minecraft! Obsessed with that game, they are. We ate breakfast at Mian then headed home. Mum needed to prepare for her event in the afternoon. She boiled the tang yuan and packed loads of pots and ladles. There was even a portable stove in the heap of stuff. Sent her out to Happy Garden where she would carpool to Brickfields with her friends. It's daunting to handle the two monsters alone so I employed the help of KH. Picked him up and went for a late lunch at Secret Loc Cafe, Kuchai Lama. Although it was already 3:00 PM, all of us weren't starving. Chose kid-friendly items from the menu-- American breakfast, French toast, and pizza. The little one was shouting his head off in the cafe. He made himself right at home. When the little ones saw me feeding KH, they exclaimed: "Uncle KH is not a kid!" Went home soon after. Switched on the Google Chromecast for the kids, then we stole off to my room for some snogging. In the heat of the moment, there would suddenly be a knock on the door: "Kau fu! I wanna watch a different YouTube clip!" Talk about coitus interruptus. In the end we just gave up and went downstairs to watch "Assassination Classroom: The Graduation" with the kids. The weird storyline and even weirder main character, Korosensei, resonated with my nephews. SK joined us for dinner at Restoran Mirasa, a mamak joint near my place. Big Monster finished a Maggi goreng all by himself while the little one only concentrated on the cup of Milo. The idea of waffles was tantalizing to the kids, so we had dessert at The New Chapter. Loitered there as long as we could, but the night was still young, and mum's event was far from over. Decided to send KH home first. On our way back home, received a call from mum. Aha! Duty would be over soon. :P.

Who I am

I've been told that I look like Suneo Honekawa (how I wish it was Takuya Kimura). I'm fickle (so typical of Librans and other members of the Alternate society). I laugh and grin a lot (must be some Cheshire Cat genes in there somewhere too). I have a loving family. Plenty of friends (SK's the best!). And a single person who completes me-- my dear KH.
Life has been all ups and downs. But I'm a Thursday's child, so I guess there's still far to go. Drop me a mail at tanduk7 [at] hotmail.com.
{ "pile_set_name": "Pile-CC" }
//------------------------------------------------------------------------------ // <auto-generated> // This code was generated by AsyncGenerator. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ using System.Collections.Generic; using NUnit.Framework; using NHibernate.Criterion; namespace NHibernate.Test.NHSpecificTest.NH2546 { using System.Threading.Tasks; [TestFixture] public class SetCommandParameterSizesFalseFixtureAsync : BugTestCase { protected override bool AppliesTo(Dialect.Dialect dialect) { return dialect is Dialect.MsSql2008Dialect; } protected override void OnSetUp() { using (ISession session = Sfi.OpenSession()) { session.Persist(new Student() { StringTypeWithLengthDefined = "Julian Maughan" }); session.Persist(new Student() { StringTypeWithLengthDefined = "Bill Clinton" }); session.Flush(); } } protected override void OnTearDown() { using (ISession session = Sfi.OpenSession()) { session.CreateQuery("delete from Student").ExecuteUpdate(); session.Flush(); } base.OnTearDown(); } [Test] public async Task LikeExpressionWithinDefinedTypeSizeAsync() { using (ISession session = Sfi.OpenSession()) { ICriteria criteria = session .CreateCriteria<Student>() .Add(Restrictions.Like("StringTypeWithLengthDefined", "Julian%")); IList<Student> list = await (criteria.ListAsync<Student>()); Assert.That(list.Count, Is.EqualTo(1)); } } [Test] public async Task LikeExpressionExceedsDefinedTypeSizeAsync() { // In this case we are forcing the usage of LikeExpression class where the length of the associated property is ignored using (ISession session = Sfi.OpenSession()) { ICriteria criteria = session .CreateCriteria<Student>() .Add(Restrictions.Like("StringTypeWithLengthDefined", "[a-z][a-z][a-z]ian%", MatchMode.Exact, null)); IList<Student> list = await (criteria.ListAsync<Student>()); Assert.That(list.Count, Is.EqualTo(1)); } } } }
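// For context, a test like this presupposes that StringTypeWithLengthDefined
// is mapped with an explicit length. A hypothetical hbm.xml fragment (the
// length value is illustrative, not taken from the actual NH2546 mapping):
//
//   <property name="StringTypeWithLengthDefined" length="50" />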
{ "pile_set_name": "Github" }
**A subdiffusive behaviour of recurrent random walk**

**in random environment on a regular tree**

by Yueyun Hu $\;$and$\;$ Zhan Shi

*Université Paris XIII & Université Paris VI*

This version: March 11, 2006

[***Summary.***]{} We are interested in the random walk in random environment on an infinite tree. Lyons and Pemantle [@lyons-pemantle] give a precise recurrence/transience criterion. Our paper focuses on the almost sure asymptotic behaviours of a recurrent random walk $(X_n)$ in random environment on a regular tree, which is closely related to Mandelbrot's multiplicative cascade [@mandelbrot]. We prove, under some general assumptions on the distribution of the environment, the existence of a new exponent $\nu\in (0, {1\over 2}]$ such that $\max_{0\le i \le n} |X_i|$ behaves asymptotically like $n^{\nu}$. The value of $\nu$ is explicitly formulated in terms of the distribution of the environment.

[***Keywords.***]{} Random walk, random environment, tree, Mandelbrot's multiplicative cascade.

[***2000 Mathematics Subject Classification.***]{} 60K37, 60G50.

Introduction {#s:intro}
============

Random walk in random environment (RWRE) is a fundamental object in the study of random phenomena in random media. RWRE on $\z$ exhibits rich regimes in the transient case (Kesten, Kozlov and Spitzer [@kesten-kozlov-spitzer]), as well as a slow logarithmic movement in the recurrent case (Sinai [@sinai]). On $\z^d$ (for $d\ge 2$), the study of RWRE remains a big challenge to mathematicians (Sznitman [@sznitman], Zeitouni [@zeitouni]). The present paper focuses on RWRE on a regular rooted tree, which can be viewed as an infinite-dimensional RWRE. Our main result reveals a rich regime à la Kesten–Kozlov–Spitzer, but this time even in the recurrent case; it also strongly suggests the existence of a slow logarithmic regime à la Sinai.

Let $\T$ be a $\deg$-ary tree ($\deg\ge 2$) rooted at $e$. For any vertex $x\in \T \backslash \{ e\}$, let ${\buildrel \leftarrow \over x}$ denote the first vertex on the shortest path from $x$ to the root $e$, and $|x|$ the number of edges on this path (notation: $|e|:= 0$). Thus, each vertex $x\in \T \backslash \{ e\}$ has one parent ${\buildrel \leftarrow \over x}$ and $\deg$ children, whereas the root $e$ has $\deg$ children but no parent. We also write ${\buildrel \Leftarrow \over x}$ for the parent of ${\buildrel \leftarrow \over x}$ (for $x\in \T$ such that $|x|\ge 2$). Let $\omega:= (\omega(x,y), \, x,y\in \T)$ be a family of non-negative random variables such that $\sum_{y\in \T} \omega(x,y)=1$ for any $x\in \T$. Given a realization of $\omega$, we define a Markov chain $X:= (X_n, \, n\ge 0)$ on $\T$ by $X_0 =e$, and whose transition probabilities are $$P_\omega(X_{n+1}= y \, | \, X_n =x) = \omega(x, y) .$$ Let $\P$ denote the distribution of $\omega$, and let $\p (\cdot) := \int P_\omega (\cdot) \P(\! \d \omega)$. The process $X$ is a $\T$-valued RWRE. (By informally taking $\deg=1$, $X$ would become a usual RWRE on the half-line $\z_+$.) For general properties of tree-valued processes, we refer to Peres [@peres] and Lyons and Peres [@lyons-peres]. See also Duquesne and Le Gall [@duquesne-le-gall] and Le Gall [@le-gall] for continuous random trees. For a list of motivations to study RWRE on a tree, see Pemantle and Peres [@pemantle-peres1], p. 106.
We define $$A(x) := {\omega({\buildrel \leftarrow \over x}, x) \over \omega({\buildrel \leftarrow \over x}, {\buildrel \Leftarrow \over x})} , \qquad x\in \T, \; |x|\ge 2. \label{A}$$ Following Lyons and Pemantle [@lyons-pemantle], we assume throughout the paper that $(\omega(x,\bullet))_{x\in \T\backslash \{ e\} }$ is a family of i.i.d. [*non-degenerate*]{} random vectors and that $(A(x), \; x\in \T, \; |x|\ge 2)$ are identically distributed. We also assume the existence of $\varepsilon_0>0$ such that $\omega(x,y) \ge \varepsilon_0$ if either $x= {\buildrel \leftarrow \over y}$ or $y= {\buildrel \leftarrow \over x}$, and $\omega(x,y) =0$ otherwise; in words, $(X_n)$ is a nearest-neighbour walk, satisfying an ellipticity condition. Let $A$ denote a generic random variable having the common distribution of $A(x)$ (for $|x| \ge 2$). Define $$p := \inf_{t\in [0,1]} \E (A^t) . \label{p}$$ We recall a recurrence/transience criterion from Lyons and Pemantle ([@lyons-pemantle], Theorem 1 and Proposition 2). [**Theorem A (Lyons and Pemantle [@lyons-pemantle])**]{} [*With $\p$-probability one, the walk $(X_n)$ is recurrent or transient, according to whether $p\le {1\over \deg}$ or $p>{1\over \deg}$. It is, moreover, positive recurrent if $p<{1\over \deg}$.*]{} We study the recurrent case $p\le {1\over \deg}$ in this paper. Our first result, which is not deep, concerns the positive recurrent case $p< {1\over \deg}$. \[t:posrec\] If $p<{1\over \deg}$, then $$\lim_{n\to \infty} \, {1\over \log n} \, \max_{0\le i\le n} |X_i| = {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $\p$-a.s.}, \label{posrec}$$ where the constant $q$ is defined in $(\ref{q})$, and lies in $(0, {1\over \deg})$ when $p<{1\over \deg}$. Despite the warning of Pemantle [@pemantle] (“there are many papers proving results on trees as a somewhat unmotivated alternative …to Euclidean space"), it seems to be of particular interest to study the more delicate situation $p={1\over \deg}$ that turns out to possess rich regimes. We prove that, similarly to the Kesten–Kozlov–Spitzer theorem for [*transient*]{} RWRE on the line, $(X_n)$ enjoys, even in the recurrent case, an interesting subdiffusive behaviour. To state our main result, we define $$\begin{aligned} \kappa &:=& \inf\left\{ t>1: \; \E(A^t) = {1\over \deg} \right\} \in (1, \infty], \qquad (\inf \emptyset=\infty) \label{kappa} \\ \psi(t) &:=& \log \E \left( A^t \right) , \qquad t\ge 0. \label{psi}\end{aligned}$$ We use the notation $a_n \approx b_n$ to denote $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. \[t:nullrec\] If $p={1\over \deg}$ and if $\psi'(1)<0$, then $$\max_{0\le i\le n} |X_i| \; \approx\; n^\nu, \qquad \hbox{\rm $\p$-a.s.}, \label{nullrec}$$ where $\nu=\nu(\kappa)$ is defined by $$\nu := 1- {1\over \min\{ \kappa, 2\} } = \left\{ \begin{array}{ll} (\kappa-1)/\kappa, & \mbox{if $\;\kappa \in (1,2]$}, \\ \\ 1/2 & \mbox{if $\;\kappa \in (2, \infty].$} \end{array} \right. \label{theta}$$ [**Remark.**]{} (i) It is known (Menshikov and Petritis [@menshikov-petritis]) that if $p={1\over \deg}$ and $\psi'(1)<0$, then for $\P$-almost all environment $\omega$, $(X_n)$ is null recurrent. \(ii) For the value of $\kappa$, see Figure 1. Under the assumptions $p={1\over \deg}$ and $\psi'(1)<0$, the value of $\kappa$ lies in $(2, \infty]$ if and only if $\E (A^2) < {1\over \deg}$; and $\kappa=\infty$ if moreover $\hbox{ess sup}(A) \le 1$. \(iii) Since the walk is recurrent, $\max_{0\le i\le n} |X_i|$ cannot be replaced by $|X_n|$ in (\[posrec\]) and (\[nullrec\]). 
\(iv) Theorem \[t:nullrec\], which could be considered as a (weaker) analogue of the Kesten–Kozlov–Spitzer theorem, shows that tree-valued RWRE has even richer regimes than RWRE on $\z$. In fact, recurrent RWRE on $\z$ is of order of magnitude $(\log n)^2$, and has no $n^a$ (for $0<a<1$) regime. \(v) The case $\psi'(1)\ge 0$ leads to a phenomenon similar to Sinai’s slow movement, and is studied in a forthcoming paper. The rest of the paper is organized as follows. Section \[s:posrec\] is devoted to the proof of Theorem \[t:posrec\]. In Section \[s:proba\], we collect some elementary inequalities, which will be of frequent use later on. Theorem \[t:nullrec\] is proved in Section \[s:nullrec\], by means of a result (Proposition \[p:beta-gamma\]) concerning the solution of a recurrence equation which is closely related to Mandelbrot’s multiplicative cascade. We prove Proposition \[p:beta-gamma\] in Section \[s:beta-gamma\]. Throughout the paper, $c$ (possibly with a subscript) denotes a finite and positive constant; we write $c(\omega)$ instead of $c$ when the value of $c$ depends on the environment $\omega$. Proof of Theorem \[t:posrec\] {#s:posrec} ============================= We first introduce the constant $q$ in the statement of Theorem \[t:posrec\], which is defined without the assumption $p< {1\over \deg}$. Let $$\varrho(r) := \inf_{t\ge 0} \left\{ r^{-t} \, \E(A^t) \right\} , \qquad r>0.$$ Let $\underline{r} >0$ be such that $$\log \underline{r} = \E(\log A) .$$ We mention that $\varrho(r)=1$ for $r\in (0, \underline{r}]$, and that $\varrho(\cdot)$ is continuous and (strictly) decreasing on $[\underline{r}, \, \Theta)$ (where $\Theta:= \hbox{ess sup}(A) < \infty$), and $\varrho(\Theta) = \P (A= \Theta)$. Moreover, $\varrho(r)=0$ for $r> \Theta$. See Chernoff [@chernoff]. We define $$\overline{r} := \inf\left\{ r>0: \; \varrho(r) \le {1\over \deg} \right\}.$$ Clearly, $\underline{r} < \overline{r}$. We define $$q:= \sup_{r\in [\underline{r}, \, \overline{r}]} r \varrho(r). \label{q}$$ The following elementary lemma tells us that, instead of $p$, we can also use $q$ in the recurrence/transience criterion of Lyons and Pemantle. \[l:pq\] We have $q>{1\over \deg}$ $($resp., $q={1\over \deg}$, $q<{1\over \deg})$ if and only if $p>{1\over \deg}$ $($resp., $p={1\over \deg}$, $p<{1\over \deg})$. [*Proof of Lemma \[l:pq\].*]{} By Lyons and Pemantle ([@lyons-pemantle], p. 129), $p= \sup_{r\in (0, \, 1]} r \varrho (r)$. Since $\varrho(r) =1$ for $r\in (0, \, \underline{r}]$, there exists $\min\{\underline{r}, 1\}\le r^* \le 1$ such that $p= r^* \varrho (r^*)$. \(i) Assume $p<{1\over \deg}$. Then $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p < {1\over \deg}$, which, by definition of $\overline{r}$, implies $\overline{r} < 1$. Therefore, $q \le p <{1\over \deg}$. \(ii) Assume $p\ge {1\over \deg}$. We have $\varrho (r^*) \ge p \ge {1\over \deg}$, which yields $r^* \le \overline{r}$. If $\underline{r} \le 1$, then $r^*\ge \underline{r}$, and thus $p=r^* \varrho (r^*) \le q$. If $\underline{r} > 1$, then $p=1$, and thus $q\ge \underline{r}\, \varrho (\underline{r}) = \underline{r} > 1=p$. We have therefore proved that $p\ge {1\over \deg}$ implies $q\ge p$. If moreover $p>{1\over \deg}$, then $q \ge p>{1\over \deg}$. \(iii) Assume $p={1\over \deg}$. We already know from (ii) that $q \ge p$. On the other hand, $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p = {1\over \deg}$, implying $\overline{r} \le 1$. Thus $q \le p$. 
As a consequence, $q=p={1\over \deg}$.$\Box$ Having defined $q$, the next step in the proof of Theorem \[t:posrec\] is to compute invariant measures $\pi$ for $(X_n)$. We first introduce some notation on the tree. For any $m\ge 0$, let $$\T_m := \left\{x \in \T: \; |x| = m \right\} .$$ For any $x\in \T$, let $\{ x_i \}_{1\le i\le \deg}$ be the set of children of $x$. If $\pi$ is an invariant measure, then $$\pi (x) = {\omega ({\buildrel \leftarrow \over x}, x) \over \omega (x, {\buildrel \leftarrow \over x})} \, \pi({\buildrel \leftarrow \over x}), \qquad \forall \, x\in \T \backslash \{ e\}.$$ By induction, this leads to (recalling $A$ from (\[A\])): for $x\in \T_m$ ($m\ge 1$), $$\pi (x) = {\pi(e)\over \omega (x, {\buildrel \leftarrow \over x})} {\omega (e, x^{(1)}) \over A(x^{(1)})} \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) ,$$ where $]\! ] e, x]\! ]$ denotes the shortest path $x^{(1)}$, $x^{(2)}$, $\cdots$, $x^{(m)} =: x$ from the root $e$ (but excluded) to the vertex $x$. The identity holds for [*any*]{} choice of $(A(e_i), \, 1\le i\le \deg)$. We choose $(A(e_i), \, 1\le i\le \deg)$ to be a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. By the ellipticity condition on the environment, we can take $\pi(e)$ to be sufficiently small so that for some $c_0\in (0, 1]$, $$c_0\, \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) \le \pi (x) \le \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) . \label{pi}$$ By Chebyshev’s inequality, for any $r>\underline{r}$, $$\max_{x\in \T_n} \P \left\{ \pi (x) \ge r^n\right\} \le \varrho(r)^n. \label{chernoff}$$ Since $\# \T_n = \deg^n$, this gives $\E (\#\{ x\in \T_n: \; \pi (x)\ge r^n \} ) \le \deg^n \varrho(r)^n$. By Chebyshev’s inequality and the Borel–Cantelli lemma, for any $r>\underline{r}$ and $\P$-almost surely for all large $n$, $$\#\left\{ x\in \T_n: \; \pi (x) \ge r^n \right\} \le n^2 \deg^n \varrho(r)^n. \label{Jn-ub1}$$ On the other hand, by (\[chernoff\]), $$\P \left\{ \exists x\in \T_n: \pi (x) \ge r^n\right\} \le \deg^n \varrho (r)^n.$$ For $r> \overline{r}$, the expression on the right-hand side is summable in $n$. By the Borel–Cantelli lemma, for any $r>\overline{r}$ and $\P$-almost surely for all large $n$, $$\max_{x\in \T_n} \pi (x) < r^n. \label{Jn-ub}$$ [*Proof of Theorem \[t:posrec\]: upper bound.*]{} Fix $\varepsilon>0$ such that $q+ 3\varepsilon < {1\over \deg}$. We follow the strategy given in Liggett ([@liggett], p. 103) by introducing a positive recurrent birth-and-death chain $(\widetilde{X_j}, \, j\ge 0)$, starting from $0$, with transition probability from $i$ to $i+1$ (for $i\ge 1$) equal to $${1\over \widetilde{\pi} (i)} \, \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x})) ,$$ where $\widetilde{\pi} (i) := \sum_{x\in \T_i} \pi(x)$. We note that $\widetilde{\pi}$ is a finite invariant measure for $(\widetilde{X_j})$. Let $$\tau_n := \inf \left\{ i\ge 1: \, X_i \in \T_n\right\}, \qquad n\ge 0.$$ By Liggett ([@liggett], Theorem II.6.10), for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le \widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0),$$ where $\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0)$ is the probability that $(\widetilde{X_j})$ hits $n$ before returning to $0$. According to Hoel et al. ([@hoel-port-stone], p. 
32, Formula (61)), $$\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0) = c_1(\omega) \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x}))} \right)^{\! \! -1} ,$$ where $c_1(\omega)\in (0, \infty)$ depends on $\omega$. We arrive at the following estimate: for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le c_1(\omega) \, \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \right)^{\! \! -1} . \label{liggett}$$ We now estimate $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}$. For any fixed $0=r_0< \underline{r} < r_1 < \cdots < r_\ell = \overline{r} <r_{\ell +1}$, $$\sum_{x\in \T_i} \pi(x) \le \sum_{j=1}^{\ell+1} (r_j)^i \# \left\{ x\in \T_i: \pi(x) \ge (r_{j-1})^i \right\} + \sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x).$$ By (\[Jn-ub\]), $\sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x) =0$ $\P$-almost surely for all large $i$. It follows from (\[Jn-ub1\]) that $\P$-almost surely, for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} (r_j)^i i^2 \, \deg^i \varrho (r_{j-1})^i.$$ Recall that $q= \sup_{r\in [\underline{r}, \, \overline{r}] } r \, \varrho(r) \ge \underline{r} \, \varrho (\underline{r}) = \underline{r}$. We choose $r_1:= \underline{r} + \varepsilon \le q+\varepsilon$. We also choose $\ell$ sufficiently large and $(r_j)$ sufficiently close to each other so that $r_j \, \varrho(r_{j-1}) < q+\varepsilon$ for all $2\le j\le \ell+1$. Thus, $\P$-almost surely for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} i^2 \, \deg^i (q+\varepsilon)^i = (r_1)^i \deg^i + \ell \, i^2 \, \deg^i (q+\varepsilon)^i,$$ which implies (recall: $\deg(q+\varepsilon)<1$) that $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \ge {c_2\over n^2\, \deg^n (q+\varepsilon)^n}$. Plugging this into (\[liggett\]) yields that, $\P$-almost surely for all large $n$, $$P_\omega (\tau_n< \tau_0) \le c_3(\omega)\, n^2\, \deg^n (q+\varepsilon)^n \le [(q+2\varepsilon)\deg]^n.$$ In particular, by writing $L(\tau_n):= \# \{ 1\le i \le \tau_n: \, X_i = e\}$, we obtain: $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \ge \left\{ 1- [(q+2\varepsilon)\deg]^n \right\}^j ,$$ which, by the Borel–Cantelli lemma, yields that, $\P$-almost surely for all large $n$, $$L(\tau_n) \ge {1\over [(q+3\varepsilon) \deg]^n} , \qquad \hbox{\rm $P_\omega$-a.s.}$$ Since $\{ L(\tau_n) \ge j \} \subset \{ \max_{0\le k \le 2j} |X_k| < n\}$, and since $\varepsilon$ can be as close to 0 as possible, we obtain the upper bound in Theorem \[t:posrec\].$\Box$ [*Proof of Theorem \[t:posrec\]: lower bound.*]{} Assume $p< {1\over \deg}$. Recall that in this case, we have $\overline{r}<1$. Let $\varepsilon>0$ be small. Let $r \in (\underline{r}, \, \overline{r})$ be such that $\varrho(r) > {1\over \deg} \ee^\varepsilon$ and that $r\varrho(r) \ge q\ee^{-\varepsilon}$. Let $L$ be a large integer with $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and satisfying (\[GW\]) below. We start by constructing a Galton–Watson tree $\G$, which is a certain subtree of $\T$. The first generation of $\G$, denoted by $\G_1$ and defined below, consists of vertices $x\in \T_L$ satisfying a certain property. The second generation of $\G$ is formed by applying the same procedure to each element of $\G_1$, and so on. To be precise, $$\G_1 = \G_1 (L,r) := \left\{ x\in \T_L: \, \min_{z\in ]\! ] e, \, x ]\! ]} \prod_{y\in ]\! ] e, \, z]\! ]} A(y) \ge r^L \right\} ,$$ where $]\! ]e, \, x ]\!
]$ denotes as before the set of vertices (excluding $e$) lying on the shortest path relating $e$ and $x$. More generally, if $\G_i$ denotes the $i$-th generation of $\G$, then $$\G_{n+1} := \bigcup_{u\in \G_n } \left\{ x\in \T_{(n+1)L}: \, \min_{z\in ]\! ] u, \, x ]\! ]} \prod_{y\in ]\! ] u, \, z]\! ]} A(y) \ge r^L \right\} , \qquad n=1,2, \dots$$ We claim that it is possible to choose $L$ sufficiently large such that $$\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L . \label{GW}$$ Note that $\ee^{-\varepsilon L} \deg^L \varrho(r)^L>1$, since $\varrho(r) > {1\over \deg} \ee^\varepsilon$. We admit (\[GW\]) for the moment, which implies that $\G$ is super-critical. By theory of branching processes (Harris [@harris], p. 13), when $n$ goes to infinity, ${\# \G_{n/L} \over [\E(\# \G_1)]^{n/L} }$ converges almost surely (and in $L^2$) to a limit $W$ with $\P(W>0)>0$. Therefore, on the event $\{ W>0\}$, for all large $n$, $$\# (\G_{n/L}) \ge c_4(\omega) [\E(\# \G_1)]^{n/L}. \label{GnL}$$ (For notational simplification, we only write our argument for the case when $n$ is a multiple of $L$. It is clear that our final conclusion holds for all large $n$.) Recall that according to the Dirichlet principle (Griffeath and Liggett [@griffeath-liggett]), $$\begin{aligned} 2\pi(e) P_\omega \left\{ \tau_n < \tau_0 \right\} &=&\inf_{h: \, h(e)=1, \, h(z)=0, \, \forall |z| \ge n} \sum_{x,y\in \T} \pi(x) \omega(x,y) (h(x)- h(y))^2 \nonumber \\ &\ge& c_5\, \inf_{h: \, h(e)=1, \, h(z)=0, \, \forall z\in \T_n} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2, \label{durrett}\end{aligned}$$ the last inequality following from ellipticity condition on the environment. Clearly, $$\begin{aligned} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 &=&\sum_{i=0}^{(n/L)-1} \sum_{x: \, iL \le |x| < (i+1) L} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 \\ &:=&\sum_{i=0}^{(n/L)-1} I_i,\end{aligned}$$ with obvious notation. For any $i$, $$I_i \ge \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2,$$ where $v^\uparrow \in \G_i$ denotes the unique element of $\G_i$ lying on the path $[ \! [ e, v ]\! ]$ (in words, $v^\uparrow$ is the parent of $v$ in the Galton–Watson tree $\G$), and the factor $\deg^{-L}$ comes from the fact that each term $\pi(x) (h(x)- h(y))^2$ is counted at most $\deg^L$ times in the sum on the right-hand side. By (\[pi\]), for $x\in [\! [ v^\uparrow, v[\! [$, $\pi(x) \ge c_0 \, \prod_{u\in ]\! ]e, x]\! ]} A(u)$, which, by the definition of $\G$, is at least $c_0 \, r^{(i+1)L}$. Therefore, $$\begin{aligned} I_i &\ge& c_0 \, \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} r^{(i+1)L} (h(x)- h(y))^2 \\ &\ge&c_0 \, \deg^{-L} r^{(i+1)L} \sum_{v\in \G_{i+1}} \, \sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 .\end{aligned}$$ By the Cauchy–Schwarz inequality, $\sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 \ge {1\over L} (h(v^\uparrow)-h(v))^2$. 
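(To check this last step, note that the path $]\! ] v^\uparrow, v]\! ]$ contains exactly $L$ vertices; by telescoping, $h(v^\uparrow)- h(v) = \sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))$, and the bound follows from the elementary inequality $(\sum_{k=1}^L a_k)^2 \le L \sum_{k=1}^L a_k^2$.)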
Accordingly, $$I_i \ge c_0 \, {\deg^{-L} r^{(i+1)L}\over L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-h(v))^2 ,$$ which yields $$\begin{aligned} \sum_{i=0}^{(n/L)-1} I_i &\ge& c_0 \, {\deg^{-L}\over L} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)- h(v))^2 \\ &\ge& c_0 \, {\deg^{-L}\over L} \deg^{-n/L} \sum_{v\in \G_{n/L}} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 ,\end{aligned}$$ where, $e=: v^{(0)}$, $v^{(1)}$, $v^{(2)}$, $\cdots$, $v^{(n/L)} := v$, is the shortest path (in $\G$) from $e$ to $v$, and the factor $\deg^{-n/L}$ results from the fact that each term $r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2$ is counted at most $\deg^{n/L}$ times in the sum on the right-hand side. By the Cauchy–Schwarz inequality, for all $h: \T\to \r$ with $h(e)=1$ and $h(z)=0$ ($\forall z\in \T_n$), we have $$\begin{aligned} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 &\ge&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \, \left( \sum_{i=0}^{(n/L)-1} (h(v^{(i)})- h(v^{(i+1)})) \right)^{\! \! 2} \\ &=&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \ge c_6 \, r^n.\end{aligned}$$ Therefore, $$\sum_{i=0}^{(n/L)-1} I_i \ge c_0c_6 \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \# (\G_{n/L}) \ge c_0 c_6 c_4(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} },$$ the last inequality following from (\[GnL\]). Plugging this into (\[durrett\]) yields that for all large $n$, $$P_\omega \left\{ \tau_n < \tau_0 \right\} \ge c_7(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} } .$$ Recall from (\[GW\]) that $\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L$. Therefore, on $\{W>0\}$, for all large $n$, $P_\omega \{ \tau_n < \tau_0 \} \ge c_8(\omega) (\ee^{-\varepsilon} \deg^{-1/L} \deg r \varrho(r))^n$, which is no smaller than $c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n$ (since $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and $r \varrho(r) \ge q \ee^{-\varepsilon}$ by assumption). Thus, by writing $L(\tau_n) := \#\{ 1\le i\le \tau_n: \; X_i = e \}$ as before, we have, on $\{ W>0 \}$, $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \le [1- c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n ]^j.$$ By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, on $\{W>0\}$, we have, $P_\omega$-almost surely for all large $n$, $L(\tau_n) \le 1/(\ee^{-4\varepsilon} q \deg)^n$, i.e., $$\max_{0\le k\le \tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor )} |X_k| \ge n ,$$ where $0<\tau_0(1)<\tau_0(2)<\cdots$ are the successive return times to the root $e$ by the walk (thus $\tau_0(1) = \tau_0$). Since the walk is positive recurrent, $\tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor ) \sim {1\over (\ee^{-4\varepsilon} q \deg)^n} E_\omega [\tau_0]$ (for $n\to \infty$), $P_\omega$-almost surely ($a_n \sim b_n$ meaning $\lim_{n\to \infty} {a_n \over b_n} =1$). Therefore, for $\P$-almost all $\omega \in \{ W>0\}$, $$\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n} \ge {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $P_\omega$-a.s.}$$ Recall that $\P\{ W>0\}>0$. Since modifying a finite number of transition probabilities does not change the value of $\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n}$, we obtain the lower bound in Theorem \[t:posrec\]. It remains to prove (\[GW\]). Let $(A^{(i)})_{i\ge 1}$ be an i.i.d. sequence of random variables distributed as $A$.
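To compute $\E(\# \G_1)$, observe that for each fixed $x\in \T_L$, the family $(A(y), \, y\in ]\! ] e, \, x]\! ])$ has the same law as $(A^{(i)}, \, 1\le i\le L)$, so that $x$ belongs to $\G_1$ with probability $\P( \sum_{i=1}^\ell \log A^{(i)} \ge L \log r, \, \forall 1\le \ell \le L)$; summing over the $\deg^L$ vertices of $\T_L$ gives the first identity below.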
Clearly, for any $\delta\in (0,1)$, $$\begin{aligned} \E( \# \G_1) &=& \deg^L \, \P\left( \, \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) \\ &\ge& \deg^L \, \P \left( \, (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) .\end{aligned}$$ We define a new probability $\Q$ by $${\mathrm{d} \Q \over \mathrm{d}\P} := {\ee^{t \log A} \over \E(\ee^{t \log A})} = {A^t \over \E(A^t)},$$ for some $t\ge 0$. Then $$\begin{aligned} \E(\# \G_1) &\ge& \deg^L \, \E_\Q \left[ \, {[\E(A^t)]^L \over \exp\{ t \sum_{i=1}^L \log A^{(i)}\} }\, {\bf 1}_{\{ (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} } \right] \\ &\ge& \deg^L \, {[\E(A^t)]^L \over r^{t (1- \delta) L} } \, \Q \left( (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L \right).\end{aligned}$$ To choose an optimal value of $t$, we fix $\widetilde{r}\in (r, \, \overline{r})$ with $\widetilde{r} < r^{1-\delta}$. Our choice of $t=t^*$ is such that $\varrho(\widetilde{r}) = \inf_{t\ge 0} \{ \widetilde{r}^{-t} \E(A^t)\} = \widetilde{r}^{-t^*} \E(A^{t^*})$. With this choice, we have $\E_\Q(\log A)=\log \widetilde{r}$, so that $\Q \{ (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} \ge c_9$. Consequently, $$\E(\# \G_1) \ge c_9 \, \deg^L \, {[\E(A^{t^*})]^L \over r^{t^* (1- \delta) L} }= c_9 \, \deg^L \, {[ \widetilde{r}^{\,t^*} \varrho(\widetilde{r})]^L \over r^{t^* (1- \delta) L} } \ge c_9 \, r^{\delta t^* L} \deg^L \varrho(\widetilde{r})^L .$$ Since $\delta>0$ can be as close to $0$ as possible, the continuity of $\varrho(\cdot)$ on $[\underline{r}, \, \overline{r})$ yields (\[GW\]), and thus completes the proof of Theorem \[t:posrec\].$\Box$ Some elementary inequalities {#s:proba} ============================ We collect some elementary inequalities in this section. They will be of use in the next sections, in the study of the null recurrence case. \[l:exp\] Let $\xi\ge 0$ be a random variable. [(i)]{} Assume that $\e(\xi^a)<\infty$ for some $a>1$. Then for any $x\ge 0$, $${\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a} \le {\e (\xi^a) \over [\e \xi]^a} . \label{RSD}$$ [(ii)]{} If $\e (\xi) < \infty$, then for any $0 \le \lambda \le 1$ and $t \ge 0$, $$\e \left\{ \exp \left( - t\, { (\lambda+\xi)/ (1+\xi) \over \e [(\lambda+\xi)/ (1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t\, { \xi \over \e (\xi)} \right) \right\} . \label{exp}$$ [**Remark.**]{} When $a=2$, (\[RSD\]) is a special case of Lemma 6.4 of Pemantle and Peres [@pemantle-peres2]. [*Proof of Lemma \[l:exp\].*]{} We actually prove a very general result, stated as follows. Let $\varphi : (0, \infty) \to \r$ be a convex ${\cal C}^1$-function. Let $x_0 \in \r$ and let $I$ be an open interval containing $x_0$. Assume that $\xi$ takes values in a Borel set $J \subset \r$ (for the moment, we do not assume $\xi\ge 0$). 
Let $h: I \times J \to (0, \infty)$ and ${\partial h\over \partial x}: I \times J \to \r$ be measurable functions such that - $\e \{ h(x_0, \xi)\} <\infty$ and $\e \{ |\varphi ({ h(x_0,\xi) \over \e h(x_0, \xi)} )| \} < \infty$; - $\e[\sup_{x\in I} \{ | {\partial h\over \partial x} (x, \xi)| + |\varphi' ({h(x, \xi) \over \e h(x, \xi)} ) | \, ({| {\partial h\over \partial x} (x, \xi) | \over \e \{ h(x, \xi)\} } + {h(x, \xi) \over [\e \{ h(x, \xi)\}]^2 } | \e \{ {\partial h\over \partial x} (x, \xi) \} | )\} ] < \infty$; - both $y \to h(x_0, y)$ and $y \to { \partial \over \partial x} \log h(x,y)|_{x=x_0}$ are monotone on $J$. Then $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x, \xi)}\right) \right\} \Big|_{x=x_0} \ge 0, \qquad \hbox{\rm or}\qquad \le 0, \label{monotonie}$$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity. To prove (\[monotonie\]), we observe that by the integrability assumptions, $$\begin{aligned} & &{\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \\ &=&{1 \over ( \e h(x_0, \xi))^2}\, \e \left( \varphi'\left( {h(x_0, \xi) \over \e h(x_0, \xi)} \right) \left[ {\partial h \over \partial x} (x_0, \xi) \e h(x_0, \xi) - h(x_0, \xi) \e {\partial h \over \partial x} (x_0, \xi) \right] \right) .\end{aligned}$$ Let $\widetilde \xi$ be an independent copy of $\xi$. The expectation expression $\e(\varphi'( {h(x_0, \xi) \over \e h(x_0, \xi)} ) [\cdots])$ on the right-hand side is $$\begin{aligned} &=& \e \left( \varphi'\left( {h(x_0, \xi) \over \e h(x_0, \xi)} \right) \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( \left[ \varphi'\left( {h(x_0, \xi) \over \e h(x_0, \xi)} \right) - \varphi'\left( {h(x_0, \widetilde\xi) \over \e h(x_0, \xi)} \right)\right] \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) ,\end{aligned}$$ where $$\eta := \left[ \varphi'\left( {h(x_0, \xi) \over \e h(x_0, \xi)} \right) - \varphi'\left( {h(x_0, \widetilde\xi) \over \e h(x_0, \xi)} \right) \right] \, \left[ {\partial \log h \over \partial x} (x_0, \xi) - {\partial \log h \over \partial x} (x_0, \widetilde\xi) \right] .$$ Therefore, $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \; = \; {1 \over 2( \e h(x_0, \xi))^2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) .$$ Since $\eta \ge 0$ or $\le 0$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, this yields (\[monotonie\]). To prove (\[RSD\]) in Lemma \[l:exp\], we take $x_0\in (0,\, \infty)$, $J= \r_+$, $I$ a finite open interval containing $x_0$ and away from 0, $\varphi(z)= z^a$, and $h(x,y)= { y \over x+ y}$, to see that the function $x\mapsto {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}$ is non-decreasing on $(0, \infty)$. By dominated convergence, $$\lim_{x \to\infty} {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}= \lim_{x \to\infty} {\e[({\xi\over 1+\xi/x})^a] \over [\e ( {\xi\over 1+\xi/x})]^a} = {\e (\xi^a) \over [\e \xi]^a} ,$$ yielding (\[RSD\]). The proof of (\[exp\]) is similar.
Indeed, applying (\[monotonie\]) to the functions $\varphi(z)= \ee^{-t z}$ and $ h(x, y) = {x + y \over 1+ y}$ with $x\in (0,1)$, we get that the function $x \mapsto \e \{ \exp ( - t { (x+\xi)/(1+\xi) \over \e [(x+\xi)/(1+\xi)]} )\}$ is non-increasing on $(0,1)$; hence for $\lambda \in [0,\, 1]$, $$\e \left\{ \exp \left( - t { (\lambda+\xi)/(1+\xi) \over \e [(\lambda+\xi)/(1+\xi)] } \right) \right\} \le \e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\}.$$ On the other hand, we take $\varphi(z)= \ee^{-t z}$ and $h(x,y) = {y \over 1+ xy}$ (for $x\in (0, 1)$) in (\[monotonie\]) to see that $x \mapsto \e \{ \exp ( - t { \xi /(1+x \xi) \over \e [\xi /(1+x\xi)] } ) \}$ is non-increasing on $(0,1)$. Therefore, $$\e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t \, { \xi \over \e (\xi)}\right) \right\} ,$$ which implies (\[exp\]).$\Box$ \[l:moment\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent non-negative random variables such that for some $a\in [1,\, 2]$, $\e(\xi_i^a)<\infty$ $(1\le i\le k)$. Then $$\e \left[ (\xi_1 + \cdots + \xi_k)^a \right] \le \sum_{i=1}^k \e(\xi_i^a) + (k-1) \left( \sum_{i=1}^k \e \xi_i \right)^a.$$ [*Proof.*]{} By induction on $k$, we only need to prove the lemma in case $k=2$. Let $$h(t) := \e \left[ (\xi_1 + t\xi_2)^a \right] - \e(\xi_1^a) - t^a \e(\xi_2^a) - (\e \xi_1 + t \e \xi_2)^a, \qquad t\in [0,1].$$ Clearly, $h(0) = - (\e \xi_1)^a \le 0$. Moreover, $$h'(t) = a \e \left[ (\xi_1 + t\xi_2)^{a-1} \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1 + t \e \xi_2)^{a-1} \e(\xi_2) .$$ Since $(x+y)^{a-1} \le x^{a-1} + y^{a-1}$ (for $1\le a\le 2$), we have $$\begin{aligned} h'(t) &\le& a \e \left[ (\xi_1^{a-1} + t^{a-1}\xi_2^{a -1}) \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1)^{a-1} \e(\xi_2) \\ &=& a \e (\xi_1^{a-1}) \e(\xi_2) - a(\e \xi_1)^{a -1} \e(\xi_2) \le 0,\end{aligned}$$ by Jensen’s inequality (for $1\le a\le 2$). Therefore, $h \le 0$ on $[0,1]$. In particular, $h(1) \le 0$, which implies Lemma \[l:moment\].$\Box$ The following inequality, borrowed from page 82 of Petrov [@petrov], will be of frequent use. \[f:petrov\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent random variables. We assume that for any $i$, $\e(\xi_i)=0$ and $\e(|\xi_i|^a) <\infty$, where $1\le a\le 2$. Then $$\e \left( \, \left| \sum_{i=1}^k \xi_i \right| ^a \, \right) \le 2 \sum_{i=1}^k \e( |\xi_i|^a).$$ \[l:abc\] Fix $a >1$. Let $(u_j)_{j\ge 1}$ be a sequence of positive numbers, and let $(\lambda_j)_{j\ge 1}$ be a sequence of non-negative numbers. [(i)]{} If there exists some constant $c_{10}>0$ such that for all $n\ge 2$, $$u_{j+1} \le \lambda_n + u_j - c_{10}\, u_j^{a}, \qquad \forall 1\le j \le n-1,$$ then we can find a constant $c_{11}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$, such that $$u_n \le c_{11} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)}), \qquad \forall n\ge 1.$$ [(ii)]{} Fix $K>0$. Assume that $\lim_{j\to\infty} u_j=0$ and that $\lambda_n \in [0, \, {K\over n}]$ for all $n\ge 1$. If there exist $c_{12}>0$ and $c_{13}>0$ such that for all $n\ge 2$, $$u_{j+1} \ge \lambda_n + (1- c_{12} \lambda_n) u_j - c_{13} \, u_j^a , \qquad \forall 1 \le j \le n-1,$$ then for some $c_{14}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$ $(c_{14}$ may depend on $K)$, $$u_n \ge c_{14} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)} ), \qquad \forall n\ge 1.$$ [*Proof.*]{} (i) Put $\ell = \ell(n) := \min\{n, \, \lambda_n^{- (a-1)/a} \}$. There are two possible situations.
First situation: there exists some $j_0 \in [n- \ell, n-1]$ such that $u_{j_0} \le ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$. Since $u_{j+1} \le \lambda_n + u_j$ for all $j\in [j_0, n-1]$, we have $$u_n \le (n-j_0 ) \lambda_n + u_{j_0} \le \ell \lambda_n + ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a} \le (1+ ({2 \over c_{10}})^{1/a})\, \lambda_n^{1/a},$$ which implies the desired upper bound. Second situation: $u_j > ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$, $\forall \, j \in [n- \ell, n-1]$. Then $c_{10}\, u_j^{a} > 2\lambda_n$, which yields $$u_{j+1} \le u_j - {c_{10} \over 2} u_j^a, \qquad \forall \, j \in [n- \ell, n-1].$$ Since $a>1$ and $(1-y)^{1-a} \ge 1+ (a-1) y$ (for $0< y< 1$), this yields, for $j \in [n- \ell, n-1]$, $$u_{j+1}^{1-a} \ge u_j^{1-a} \, \left( 1 - {c_{10} \over 2} u_j^{a-1} \right)^{ 1-a} \ge u_j^{ 1-a} \, \left( 1 + {c_{10} \over 2} (a-1)\, u_j^{a-1} \right) = u_j^{1-a} + {c_{10} \over 2} (a-1) .$$ Therefore, $u_n^{1-a} \ge c_{15}\, \ell$ with $c_{15}:= {c_{10} \over 2} (a-1)$. As a consequence, $u_n \le (c_{15}\, \ell)^{- 1/(a-1)} \le (c_{15})^{- 1/(a-1)} \, ( n^{- 1/(a-1)} + \lambda_n^{1/a} )$, as desired. \(ii) Let us first prove: $$\label{c7} u_n \ge c_{16}\, n^{- 1/(a-1)}.$$ To this end, let $n$ be large and define $v_j := u_j \, (1- c_{12} \lambda_n)^{ -j} $ for $1 \le j \le n$. Since $u_{j+1} \ge (1- c_{12} \lambda_n) u_j - c_{13} u_j^a $ and $\lambda_n \le K/n$, we get $$v_{j+1} \ge v_j - c_{13} (1- c_{12} \lambda_n)^{(a-1)j-1}\, v_j^a\ge v_j - c_{17} \, v_j^a, \qquad \forall\, 1\le j \le n-1.$$ Since $u_j \to 0$, there exists some $j_0>0$ such that for all $n>j \ge j_0$, we have $c_{17} \, v_j^{a-1} < 1/2$, and $$v_{j+1}^{1-a} \le v_j^{1-a}\, \left( 1- c_{17} \, v_j^{a-1}\right)^{1-a} \le v_j^{1-a}\, \left( 1+ c_{18} \, v_j^{a-1}\right) = v_j^{1-a} + c_{18}.$$ It follows that $v_n^{1-a} \le c_{18}\, (n-j_0) + v_{j_0}^{1-a}$, which implies (\[c7\]). It remains to show that $u_n \ge c_{19} \, \lambda_n^{1/a}$. Consider a large $n$. The function $h(x):= \lambda_n + (1- c_{12} \lambda_n) x - c_{13} x^a$ is increasing on $[0, c_{20}]$ for some fixed constant $c_{20}>0$. Since $u_j \to 0$, there exists $j_0$ such that $u_j \le c_{20}$ for all $j \ge j_0$. We claim there exists $j \in [j_0, n-1]$ such that $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$: otherwise, we would have $c_{13}\, u_j^a \le {\lambda_n\over 2} \le \lambda_n$ for all $j \in [j_0, n-1]$, and thus $$u_{j+1} \ge (1- c_{12}\, \lambda_n) u_j \ge \cdots \ge (1- c_{12}\,\lambda_n)^{j-j_0} \, u_{j_0} ;$$ in particular, $u_n \ge (1- c_{12}\, \lambda_n)^{n-j_0} \, u_{j_0}$ which would contradict the assumption $u_n \to 0$ (since $\lambda_n \le K/n$). Therefore, $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$ for some $j\ge j_0$. By monotonicity of $h(\cdot)$ on $[0, c_{20}]$, $$u_{j+1} \ge h(u_j) \ge h\left(({\lambda_n\over 2 c_{13}})^{1/a}\right) \ge ({\lambda_n\over 2 c_{13}})^{1/a},$$ the last inequality being elementary. This leads to: $u_{j+2} \ge h(u_{j+1}) \ge h(({\lambda_n\over 2 c_{13}})^{1/a} ) \ge ({\lambda_n\over 2 c_{13}})^{1/a}$. Iterating the procedure, we obtain: $u_n \ge ({\lambda_n\over 2 c_{13}})^{1/a}$ for all $n> j_0$, which completes the proof of the Lemma.$\Box$ Proof of Theorem \[t:nullrec\] {#s:nullrec} ============================== Let $n\ge 2$, and let as before $$\tau_n := \inf\left\{ i\ge 1: X_i \in \T_n \right\} .$$ We start with a characterization of the distribution of $\tau_n$ via its Laplace transform $\e ( \ee^{- \lambda \tau_n} )$, for $\lambda \ge 0$. 
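Let us indicate how this characterization will be used: by the Markov inequality, for any $\lambda\ge 0$ and $t\ge 0$, $$P_\omega \left( \tau_n \le t \right) \le \ee^{\lambda t} \, E_\omega \left( \ee^{- \lambda \tau_n} \right) ,$$ so that an upper bound for the Laplace transform at a suitable $\lambda = \lambda(n)$ translates into a lower bound for $\tau_n$; this is how the upper bound in (\[nullrec\]) will be obtained at the end of this section.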
To state the result, we define $\alpha_{n,\lambda}(\cdot)$, $\beta_{n,\lambda}(\cdot)$ and $\gamma_n(\cdot)$ by $\alpha_{n,\lambda}(x) = \beta_{n,\lambda} (x) = 1$ and $\gamma_n(x)=0$ (for $x\in \T_n$), and $$\begin{aligned} \alpha_{n,\lambda}(x) &=& \ee^{-\lambda} \, {\sum_{i=1}^\deg A(x_i) \alpha_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{alpha} \\ \beta_{n,\lambda}(x) &=& {(1-\ee^{-2\lambda}) + \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{beta} \\ \gamma_n(x) &=& {[1/\omega(x, {\buildrel \leftarrow \over x} )] + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_n(x_i)} , \qquad 1\le |x| < n, \label{gamma}\end{aligned}$$ where $\beta_n(\cdot) := \beta_{n,0}(\cdot)$, and for any $x\in \T$, $\{x_i\}_{1\le i\le \deg}$ stands as before for the set of children of $x$. \[p:tau\] We have, for $n\ge 2$, $$\begin{aligned} E_\omega\left( \ee^{- \lambda \tau_n} \right) &=&\ee^{-\lambda} \, {\sum_{i=1}^\deg \omega (e, e_i) \alpha_{n,\lambda} (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)}, \qquad \forall \lambda \ge 0, \label{Laplace-tau} \\ E_\omega(\tau_n) &=& {1+ \sum_{i=1}^\deg \omega(e,e_i) \gamma_n (e_i) \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)}. \label{E(tau)} \end{aligned}$$ [*Proof of Proposition \[p:tau\].*]{} Identity (\[E(tau)\]) can be found in Rozikov [@rozikov]. The proof of (\[Laplace-tau\]) is along similar lines; so we feel free to give an outline only. Let $g_{n, \lambda}(x) := E_\omega (\ee^{- \lambda \tau_n} \, | \, X_0=x)$. By the Markov property, $g_{n, \lambda}(x) = \ee^{-\lambda} \sum_{i=1}^\deg \omega(x, x_i)g_{n, \lambda}(x_i) + \ee^{-\lambda} \omega(x, {\buildrel \leftarrow \over x}) g_{n, \lambda}({\buildrel \leftarrow \over x})$, for $|x| < n$. By induction on $|x|$ (such that $1\le |x| \le n-1$), we obtain: $g_{n, \lambda}(x) = \ee^\lambda (1- \beta_{n, \lambda} (x)) g_{n, \lambda}({\buildrel \leftarrow \over x}) + \alpha_{n, \lambda} (x)$, from which (\[Laplace-tau\]) follows. Probabilistic interpretation: for $1\le |x| <n$, if $T_{\buildrel \leftarrow \over x} := \inf \{ k\ge 0: X_k= {\buildrel \leftarrow \over x} \}$, then $\alpha_{n, \lambda} (x) = E_\omega [ \ee^{-\lambda \tau_n} {\bf 1}_{ \{ \tau_n < T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, $\beta_{n, \lambda} (x) = 1- E_\omega [ \ee^{-\lambda (1+ T_{\buildrel \leftarrow \over x}) } {\bf 1}_{ \{ \tau_n > T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, and $\gamma_n (x) = E_\omega [ (\tau_n \wedge T_{\buildrel \leftarrow \over x}) \, | \, X_0=x]$. We do not use these identities in the paper.$\Box$ It turns out that $\beta_{n,\lambda}(\cdot)$ is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. Let $$M_n := \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \, x] \! ] } A(y) , \qquad n\ge 1, \label{Mn}$$ where $] \! ] e, \,x] \! ]$ denotes as before the shortest path relating $e$ to $x$. We mention that $(A(e_i), \, 1\le i\le \deg)$ is a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and is distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. 
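Note in passing that for each fixed $x\in \T_n$, the variables $A(y)$, $y\in ] \! ] e, \, x] \! ]$, are independent and distributed as $A$; hence, whenever $\E(A) = {1\over \deg}$, $$\E(M_n) = \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \, x] \! ]} \E [A(y)] = \deg^n \left( {1\over \deg} \right)^{\! n} = 1, \qquad n\ge 1,$$ in agreement with the martingale property of $(M_n)$ recalled below.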
Let us recall some properties of $(M_n)$ from Theorem 2.2 of Liu [@liu00] and Theorem 2.5 of Liu [@liu01]: under the conditions $p={1\over \deg}$ and $\psi'(1)<0$, $(M_n)$ is a martingale, bounded in $L^a$ for any $a\in [1, \kappa)$; in particular, $$M_\infty := \lim_{n\to \infty} M_n \in (0, \infty), \label{cvg-M}$$ exists $\P$-almost surely and in $L^a(\P)$, and $$\E\left( \ee^{-s M_\infty} \right) \le \exp\left( - c_{21} \, s^{c_{22}}\right), \qquad \forall s\ge 1; \label{M-lowertail}$$ furthermore, if $1<\kappa< \infty$, then we also have $${c_{23}\over x^\kappa} \le \P\left( M_\infty > x\right) \le {c_{24}\over x^\kappa}, \qquad x\ge 1. \label{M-tail}$$ We now summarize the asymptotic properties of $\beta_{n,\lambda}(\cdot)$ which will be needed later on. \[p:beta-gamma\] Assume $p= {1\over \deg}$ and $\psi'(1)<0$. [(i)]{} For any $1\le i\le \deg$, $n\ge 2$, $t\ge 0$ and $\lambda \in [0, \, 1]$, we have $$\E \left\{ \exp \left[ -t \, {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{\E \left( \ee^{-t\, M_n/\Theta} \right) \right\}^{1/\deg} , \label{comp-Laplace}$$ where, as before, $\Theta:= \hbox{\rm ess sup}(A) < \infty$. [(ii)]{} If $\kappa\in (2, \infty]$, then for any $1\le i\le \deg$ and all $n\ge 2$ and $\lambda \in [0, \, {1\over n}]$, $$c_{25} \left( \sqrt {\lambda} + {1\over n} \right) \le \E[\beta_{n, \lambda}(e_i)] \le c_{26} \left( \sqrt {\lambda} + {1\over n} \right). \label{E(beta):kappa>2}$$ [(iii)]{} If $\kappa\in (1,2]$, then for any $1\le i\le \deg$, when $n\to \infty$ and uniformly in $\lambda \in [0, {1\over n}]$, $$\E[\beta_{n, \lambda}(e_i)] \; \approx \; \lambda^{1/\kappa} + {1\over n^{1/(\kappa-1)}} , \label{E(beta):kappa<2}$$ where $a_n \approx b_n$ denotes as before $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. The proof of Proposition \[p:beta-gamma\] is postponed until Section \[s:beta-gamma\]. By admitting it for the moment, we are able to prove Theorem \[t:nullrec\]. [*Proof of Theorem \[t:nullrec\].*]{} Assume $p= {1\over \deg}$ and $\psi'(1)<0$. Let $\pi$ be an invariant measure. By (\[pi\]) and the definition of $(M_n)$, $\sum_{x\in \T_n} \pi(x) \ge c_0 \, M_n$. Therefore by (\[cvg-M\]), we have $\sum_{x\in \T} \pi(x) =\infty$, $\P$-a.s., implying that $(X_n)$ is null recurrent. We proceed to prove the lower bound in (\[nullrec\]). By (\[gamma\]) and the ellipticity condition on the environment, $\gamma_n (x) \le {1\over \omega(x, {\buildrel \leftarrow \over x} )} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \le c_{27} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i)$. Iterating the argument yields $$\gamma_n (e_i) \le c_{27} \left( 1+ \sum_{j=2}^{n-1} M_j^{(e_i)}\right), \qquad n\ge 3,$$ where $$M_j^{(e_i)} := \sum_{x\in \T_j} \prod_{y\in ] \! ] e_i, x] \! ]} A(y).$$ For future use, we also observe that $$\label{defMei1} M_n= \sum_{i=1}^\deg \, A(e_i) \, M^{(e_i)}_n, \qquad n\ge 2.$$ Let $1\le i\le \deg$. Since $(M_j^{(e_i)}, \, j\ge 2)$ is distributed as $(M_{j-1}, \, j\ge 2)$, it follows from (\[cvg-M\]) that $M_j^{(e_i)}$ converges (when $j\to \infty$) almost surely, which implies $\gamma_n (e_i) \le c_{28}(\omega) \, n$. Plugging this into (\[E(tau)\]), we see that for all $n\ge 3$, $$E_\omega \left( \tau_n \right) \le {c_{29}(\omega) \, n \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)} \le {c_{30}(\omega) \, n \over \beta_n(e_1)}, \label{toto2}$$ the last inequality following from the ellipticity assumption on the environment. We now bound $\beta_n(e_1)$ from below (for large $n$). Let $1\le i\le \deg$. 
By (\[comp-Laplace\]), for $\lambda \in [0,\, 1]$ and $s\ge 0$, $$\E \left\{ \exp \left[ -s \, {\beta_{n, \lambda} (e_i) \over \E [\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{ \E \left( \ee^{-s \, M_n/\Theta} \right) \right\}^{1/\deg} \le \left\{ \E \left(\ee^{-s \, M_\infty/\Theta} \right) \right\}^{1/\deg} ,$$ where, in the last inequality, we used the fact that $(M_n)$ is a uniformly integrable martingale. Let $\varepsilon>0$. Applying (\[M-lowertail\]) to $s:= n^{\varepsilon}$, we see that $$\sum_n \E \left\{ \exp \left[ -n^{\varepsilon} {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} <\infty . \label{toto3}$$ In particular, $\sum_n \exp [ -n^{\varepsilon} {\beta_n (e_1) \over \E [\beta_n (e_1)]} ]$ is $\P$-almost surely finite (by taking $\lambda=0$; recalling that $\beta_n (\cdot) := \beta_{n, 0} (\cdot)$). Thus, for $\P$-almost all $\omega$ and all sufficiently large $n$, $\beta_n (e_1) \ge n^{-\varepsilon} \, \E [\beta_n (e_1)]$. Going back to (\[toto2\]), we see that for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega \left( \tau_n \right) \le {c_{30}(\omega) \, n^{1+\varepsilon} \over \E [\beta_n (e_1)]}.$$ Let $m(n):= \lfloor {n^{1+2\varepsilon} \over \E [\beta_n (e_1)]} \rfloor$. By Chebyshev’s inequality, for $\P$-almost all $\omega$ and all sufficiently large $n$, $P_\omega ( \tau_n \ge m(n) ) \le c_{31}(\omega) \, n^{-\varepsilon}$. Considering the subsequence $n_k:= \lfloor k^{2/\varepsilon}\rfloor$, we see that $\sum_k P_\omega ( \tau_{n_k} \ge m(n_k) )< \infty$, $\P$-a.s. By the Borel–Cantelli lemma, for $\P$-almost all $\omega$ and $P_\omega$-almost all sufficiently large $k$, $\tau_{n_k} < m(n_k)$, which implies that for $n\in [n_{k-1}, n_k]$ and large $k$, we have $\tau_n < m(n_k) \le {n_k^{1+2\varepsilon} \over \E [\beta_{n_k} (e_1)]} \le {n^{1+3\varepsilon} \over \E [\beta_n(e_1)]}$ (the last inequality following from the estimate of $\E [\beta_n(e_1)]$ in Proposition \[p:beta-gamma\]). In view of Proposition \[p:beta-gamma\], and since $\varepsilon$ can be as small as possible, this gives the lower bound in (\[nullrec\]) of Theorem \[t:nullrec\]. To prove the upper bound, we note that $\alpha_{n,\lambda}(x) \le \beta_n(x)$ for any $\lambda\ge 0$ and any $0<|x|\le n$ (this is easily checked by induction on $|x|$). Thus, by (\[Laplace-tau\]), for any $\lambda\ge 0$, $$E_\omega\left( \ee^{- \lambda \tau_n} \right) \le {\sum_{i=1}^\deg \omega (e, e_i) \beta_n (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)} \le \sum_{i=1}^\deg {\beta_n (e_i) \over \beta_{n,\lambda} (e_i)}.$$ We now fix $r\in (1, \, {1\over \nu})$, where $\nu:= 1- {1\over \min\{ \kappa, \, 2\} }$ is defined in (\[theta\]). It is possible to choose a small $\varepsilon>0$ such that $${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon \quad \hbox{if }\kappa \in (1, \, 2], \qquad 1 - {r\over 2}> 3\varepsilon \quad \hbox{if }\kappa \in (2, \, \infty].$$ Let $\lambda = \lambda(n) := n^{-r}$. By (\[toto3\]), we have $\beta_{n,n^{-r}} (e_i) \ge n^{-\varepsilon}\, \E [\beta_{n,n^{-r}} (e_i)]$ for $\P$-almost all $\omega$ and all sufficiently large $n$, which yields $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^\varepsilon \sum_{i=1}^\deg {\beta_n (e_i) \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ It is easy to bound $\beta_n (e_i)$. For any given $x\in \T \backslash \{ e\}$ with $|x|\le n$, $n\mapsto \beta_n (x)$ is non-increasing (this is easily checked by induction on $|x|$). 
Chebyshev’s inequality, together with the Borel–Cantelli lemma (applied to a subsequence, as we did in the proof of the lower bound) and the monotonicity of $n\mapsto \beta_n(e_i)$, readily yields $\beta_n (e_i) \le n^\varepsilon \, \E [\beta_n (e_i)]$ for almost all $\omega$ and all sufficiently large $n$. As a consequence, for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^{2\varepsilon} \sum_{i=1}^\deg {\E [\beta_n (e_i)] \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ By Proposition \[p:beta-gamma\], this yields $E_\omega ( \ee^{- n^{-r} \tau_n} ) \le n^{-\varepsilon}$ (for $\P$-almost all $\omega$ and all sufficiently large $n$; this is where we use ${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon$ if $\kappa \in (1, \, 2]$, and $1 - {r\over 2}> 3\varepsilon$ if $\kappa \in (2, \, \infty]$). In particular, for $n_k:= \lfloor k^{2/\varepsilon} \rfloor$, we have $\P$-almost surely, $E_\omega ( \sum_k \ee^{- n_k^{-r} \tau_{n_k}} ) < \infty$, which implies that, $\p$-almost surely for all sufficiently large $k$, $\tau_{n_k} \ge n_k^r$. This implies that $\p$-almost surely for all sufficiently large $n$, $\tau_n \ge {1\over 2}\, n^r$. The upper bound in (\[nullrec\]) of Theorem \[t:nullrec\] follows.$\Box$ Proposition \[p:beta-gamma\] is proved in Section \[s:beta-gamma\]. Proof of Proposition \[p:beta-gamma\] {#s:beta-gamma} ===================================== Let $\theta \in [0,\, 1]$. Let $(Z_{n,\theta})$ be a sequence of random variables, such that $Z_{1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i$, where $(A_i, \, 1\le i\le \deg)$ is distributed as $(A(x_i), \, 1\le i\le \deg)$ (for any $x\in \T$), and that $$Z_{j+1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i {\theta + Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } , \qquad \forall\, j\ge 1, \label{ZW}$$ where $Z_{j,\theta}^{(i)}$ (for $1\le i \le \deg$) are independent copies of $Z_{j,\theta}$, and are independent of the random vector $(A_i, \, 1\le i\le \deg)$. Then, for any given $n\ge 1$ and $\lambda\ge 0$, $$Z_{n, 1-\ee^{-2\lambda}} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i\, \beta_{n, \lambda}(e_i) , \label{Z=beta}$$ provided $(A_i, \, 1\le i\le \deg)$ and $(\beta_{n, \lambda}(e_i), \, 1\le i\le \deg)$ are independent. \[p:concentration\] Assume $p={1\over \deg}$ and $\psi'(1)<0$. Let $\kappa$ be as in $(\ref{kappa})$. For all $a\in (1, \kappa) \cap (1, 2]$, we have $$\sup_{\theta \in [0,1]} \sup_{j\ge 1} {[\E (Z_{j,\theta} )^a ] \over (\E Z_{j,\theta})^a} < \infty.$$ [*Proof of Proposition \[p:concentration\].*]{} Let $a\in (1,2]$. Conditioning on $A_1$, $\dots$, $A_\deg$, we can apply Lemma \[l:moment\] to see that $$\begin{aligned} &&\E \left[ \left( \, \sum_{i=1}^\deg A_i {\theta+ Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } \right)^a \Big| A_1, \dots, A_\deg \right] \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + (\deg-1) \left[ \sum_{i=1}^\deg A_i\, \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a,\end{aligned}$$ where $c_{32}$ depends on $a$, $\deg$ and the bound of $A$ (recalling that $A$ is bounded away from 0 and infinity). 
Taking expectation on both sides, and in view of (\[ZW\]), we obtain: $$\E[(Z_{j+1,\theta})^a] \le \deg \E(A^a) \E \left[ \left( {\theta+ Z_{j,\theta}\over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a.$$ We divide by $(\E Z_{j+1,\theta})^a = [ \E({\theta+Z_{j,\theta}\over 1+ Z_{j,\theta} })]^a$ on both sides (the equality holds because $\E (Z_{j+1,\theta}) = \deg \, \E(A) \, \E({\theta+Z_{j,\theta}\over 1+ Z_{j,\theta} })$ and $\deg \, \E(A)=1$ under the present assumptions), to see that $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[ ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })^a] \over [\E ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } + c_{32}.$$ Put $\xi = \theta+ Z_{j,\theta}$. By (\[RSD\]), we have $${\E[ ({\theta+Z_{j,\theta} \over 1+Z_{j,\theta} })^a] \over [\E ({\theta+Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } = {\E[ ({\xi \over 1- \theta+ \xi })^a] \over [\E ({ \xi \over 1- \theta+ \xi })]^a } \le {\E[\xi^a] \over [\E \xi ]^a } .$$ Applying Lemma \[l:moment\] to $k=2$ yields that $\E[\xi^a] = \E[( \theta+ Z_{j,\theta} )^a] \le \theta^a + \E[( Z_{j,\theta} )^a] + (\theta + \E( Z_{j,\theta} ))^a $. It follows that ${\E[ \xi^a] \over [\E \xi ]^a } \le {\E[ (Z_{j,\theta})^a] \over [\E Z_{j,\theta}]^a } +2$, which implies that for $j\ge 1$, $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[(Z_{j,\theta})^a]\over (\E Z_{j,\theta})^a} + (2 \deg \E(A^a)+ c_{32}).$$ Thus, if $\deg \E(A^a)<1$ (which is the case if $1<a<\kappa$), then $$\sup_{j\ge 1} {\E[ (Z_{j,\theta})^a] \over (\E Z_{j,\theta})^a} < \infty,$$ uniformly in $\theta \in [0, \, 1]$.$\Box$ We now turn to the proof of Proposition \[p:beta-gamma\]. For the sake of clarity, the proofs of (\[comp-Laplace\]), (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) are presented in three distinct parts. Proof of (\[comp-Laplace\]) {#subs:beta} --------------------------- By (\[exp\]) and (\[ZW\]), we have, for all $\theta\in [0, \, 1]$ and $j\ge 1$, $$\E \left\{ \exp\left( - t \, { Z_{j+1, \theta} \over \E (Z_{j+1, \theta})}\right) \right\} \le \E \left\{ \exp\left( - t \sum_{i=1}^\deg A_i { Z^{(i)}_{j, \theta} \over \E (Z^{(i)}_{j, \theta}) }\right) \right\}, \qquad t\ge 0.$$ Let $f_j(t) := \E \{ \exp ( - t { Z_{j, \theta} \over \E Z_{j, \theta}} )\}$ and $g_j(t):= \E (\ee^{ -t\, M_j})$ (for $j\ge 1$). We have $$f_{j+1}(t) \le \E \left( \prod_{i=1}^\deg f_j(t A_i) \right), \quad j\ge 1.$$ On the other hand, by (\[defMei1\]), $$g_{j+1}(t) = \E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) M^{(e_i)}_{j+1} \right) \right\} = \E \left( \prod_{i=1}^\deg g_j(t A_i) \right), \qquad j\ge 1.$$ Since $f_1(\cdot)= g_1(\cdot)$, it follows by induction on $j$ that for all $j\ge 1$, $f_j(t) \le g_j(t)$; in particular, $f_n(t) \le g_n(t)$. We take $\theta = 1- \ee^{-2\lambda}$.
In view of (\[Z=beta\]), we have proved that $$\E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) {\beta_{n, \lambda}(e_i) \over \E [\beta_{n, \lambda}(e_i)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} , \label{beta_n(e)}$$ which yields (\[comp-Laplace\]).$\Box$ [**Remark.**]{} Let $$\beta_{n,\lambda}(e) := {(1-\ee^{-2\lambda})+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i) \over 1+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i)}.$$ By (\[beta\_n(e)\]) and (\[exp\]), if $\E(A)= {1\over \deg}$, then for $\lambda\ge 0$, $n\ge 1$ and $t\ge 0$, $$\E \left\{ \exp\left( - t {\beta_{n, \lambda}(e) \over \E [\beta_{n, \lambda}(e)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} .$$ Proof of (\[E(beta):kappa>2\]) {#subs:kappa>2} --------------------------------- Assume $p={1\over \deg}$ and $\psi'(1)<0$. Since $Z_{j, \theta}$ is bounded uniformly in $j$, we have, by (\[ZW\]), for $1\le j \le n-1$, $$\begin{aligned} \E(Z_{j+1, \theta}) &=& \E\left( {\theta+Z_{j, \theta} \over 1+Z_{j, \theta} } \right) \nonumber \\ &\le& \E\left[(\theta+ Z_{j, \theta} )(1 - c_{33}\, Z_{j, \theta} )\right] \nonumber \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \E\left[(Z_{j, \theta})^2\right] \label{E(Z2)} \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \left[ \E Z_{j, \theta} \right]^2. \nonumber\end{aligned}$$ By Lemma \[l:abc\], we have, for any $K>0$ and uniformly in $\theta\in [0, \, {K\over n}]$, $$\label{53} \E (Z_{n, \theta}) \le c_{34} \left( \sqrt {\theta} + {1\over n} \right) \le {c_{35} \over \sqrt{n}}.$$ We mention that this holds for all $\kappa \in (1, \, \infty]$. In view of (\[Z=beta\]), this yields the upper bound in (\[E(beta):kappa>2\]). To prove the lower bound, we observe that $$\E(Z_{j+1, \theta}) \ge \E\left[(\theta+ Z_{j, \theta} )(1 - Z_{j, \theta} )\right] = \theta+ (1-\theta) \E(Z_{j, \theta}) - \E\left[(Z_{j, \theta})^2\right] . \label{51}$$ If furthermore $\kappa \in (2, \infty]$, then $\E [(Z_{j, \theta})^2 ] \le c_{36}\, (\E Z_{j, \theta})^2$ (see Proposition \[p:concentration\]). Thus, for all $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{36}\, (\E Z_{j,\theta})^2 .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of (\[Z=beta\]) and Lemma \[l:abc\] readily yields the lower bound in (\[E(beta):kappa>2\]).$\Box$ Proof of (\[E(beta):kappa<2\]) {#subs:kappa<2} --------------------------------- We assume in this part $p={1\over \deg}$, $\psi'(1)<0$ and $1<\kappa \le 2$. Let $\varepsilon>0$ be small. Since $(Z_{j, \theta})$ is bounded, we have $\E[(Z_{j, \theta})^2] \le c_{37} \, \E [(Z_{j, \theta})^{\kappa-\varepsilon}]$, which, by Proposition \[p:concentration\], implies $$\E\left[ (Z_{j, \theta})^2 \right] \le c_{38} \, \left( \E Z_{j, \theta} \right)^{\kappa- \varepsilon} . \label{c38}$$ Therefore, (\[51\]) yields that $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{38} \, (\E Z_{j, \theta})^{\kappa-\varepsilon} .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of Lemma \[l:abc\] implies that for any $K>0$, $$\E (Z_{\ell, \theta}) \ge c_{14} \left( \theta^{1/(\kappa-\varepsilon)} + {1\over \ell^{1/(\kappa -1 - \varepsilon)}} \right), \qquad \forall \, \theta\in [0, \, {K\over n}], \; \; \forall \, 1\le \ell \le n. \label{ell}$$ The lower bound in (\[E(beta):kappa<2\]) follows from (\[Z=beta\]). It remains to prove the upper bound.
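The idea is to reverse (\[c38\]): we shall bound $\P\{ Z_{j, \theta} > u\, \E(Z_{j, \theta})\}$ from below for a suitable $u$, by comparing $Z_{j, \theta} / \E(Z_{j, \theta})$ with the cascade limit $M_\infty$ and invoking the tail estimate (\[M-tail\]); the elementary inequality $$\E \left[ (Z_{j, \theta})^2 \right] \ge u^2 \, [\E (Z_{j, \theta})]^2 \, \P \left\{ Z_{j, \theta} > u \, \E(Z_{j, \theta}) \right\}, \qquad u>0,$$ then yields a lower bound for the second moment which, once plugged into (\[E(Z2)\]), gives the desired upper bound via Lemma \[l:abc\].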
Define $$Y_{j, \theta} := {Z_{j, \theta} \over \E(Z_{j, \theta})} , \qquad 1\le j\le n.$$ We take $Z_{j-1, \theta}^{(x)}$ (for $x\in \T_1$) to be independent copies of $Z_{j-1, \theta}$, and independent of $(A(x), \; x\in \T_1)$. By (\[ZW\]), for $2\le j\le n$, $$\begin{aligned} Y_{j, \theta} &\; {\buildrel law \over =} \;& \sum_{x\in \T_1} A(x) {(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) \over \E [(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) ]} \ge \sum_{x\in \T_1} A(x) {Z_{j-1, \theta}^{(x)} / (1+ Z_{j-1, \theta}^{(x)}) \over \theta+ \E [Z_{j-1, \theta}]} \\ &=& { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2/\E(Z_{j-1, \theta}) \over 1+Z_{j-1, \theta}^{(x)}} \\ &\ge& \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta} \; ,\end{aligned}$$ where $$\begin{aligned} Y_{j-1, \theta}^{(x)} &:=&{Z_{j-1, \theta}^{(x)} \over \E(Z_{j-1, \theta})} , \\ \Delta_{j-1, \theta} &:=&{\theta\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} + \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})} .\end{aligned}$$ By (\[c38\]), $\E[ {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})}]\le c_{38}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. On the other hand, by (\[ell\]), $\E(Z_{j-1, \theta}) \ge c_{14}\, \theta^{1/(\kappa-\varepsilon)}$ for $2\le j \le n$, and thus ${\theta\over \theta+ \E [Z_{j-1, \theta}]} \le c_{39}\, (\E Z_{j-1, \theta})^{\kappa-1- \varepsilon}$. As a consequence, $\E( \Delta_{j-1, \theta} ) \le c_{40}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. If we write $\xi \; {\buildrel st. \over \ge} \; \eta$ to denote that $\xi$ is stochastically greater than or equal to $\eta$, then we have proved that $Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta}$. Applying the same argument to each of $(Y_{j-1, \theta}^{(x)}, \, x\in \T_1)$, we see that, for $3\le j\le n$, $$Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{u\in \T_1} A(u) \sum_{v\in \T_2: \; u={\buildrel \leftarrow \over v}} A(v) Y_{j-2, \theta}^{(v)} - \left( \Delta_{j-1, \theta}+ \sum_{u\in \T_1} A(u) \Delta_{j-2, \theta}^{(u)} \right) ,$$ where $Y_{j-2, \theta}^{(v)}$ (for $v\in \T_2$) are independent copies of $Y_{j-2, \theta}$, and are independent of $(A(w), \, w\in \T_1 \cup \T_2)$, and $(\Delta_{j-2, \theta}^{(u)}, \, u\in \T_1)$ are independent of $(A(u), \, u\in \T_1)$ and are such that $\E[\Delta_{j-2, \theta}^{(u)}] \le c_{40}\, (\E Z_{j-2, \theta})^{\kappa-1-\varepsilon}$. By induction, we arrive at: for $j>m \ge 1$, $$Y_{j, \theta} \; {\buildrel st. \over \ge}\; \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} - \Lambda_{j,m,\theta}, \label{Yn>}$$ where $Y_{j-m, \theta}^{(x)}$ (for $x\in \T_m$) are independent copies of $Y_{j-m, \theta}$, and are independent of the random vector $(A(w), \, 1\le |w| \le m)$, and $\E(\Lambda_{j,m,\theta}) \le c_{40}\, \sum_{\ell=1}^m (\E Z_{j-\ell, \theta})^{\kappa-1-\varepsilon} $.
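As a quick consistency check on (\[Yn>\]): since $\E(Y_{j-m, \theta}^{(x)}) = 1$ and the $Y_{j-m, \theta}^{(x)}$ are independent of $(A(w), \, 1\le |w| \le m)$, $$\E \left[ \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} \right] = \deg^m \, [\E(A)]^m = 1 = \E(Y_{j, \theta}),$$ so that both sides of (\[Yn>\]) have the same mean, up to the error term $\E(\Lambda_{j,m,\theta})$.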
Since $\E(Z_{i, \theta}) = \E({\theta+ Z_{i-1, \theta} \over 1+ Z_{i-1, \theta}}) \ge \E(Z_{i-1, \theta}) - \E[(Z_{i-1, \theta})^2] \ge \E(Z_{i-1, \theta}) - c_{38}\, [\E Z_{i-1, \theta} ]^{\kappa-\varepsilon}$ (by (\[c38\])), we have, for all $j\in (j_0, n]$ (with a large but fixed integer $j_0$) and $1\le \ell \le j-j_0$, $$\begin{aligned} \E(Z_{j, \theta}) &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{38}\, [\E Z_{j-i, \theta} ]^{\kappa-1-\varepsilon}\right\} \\ &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{41}\, (j-i)^{-(\kappa-1- \varepsilon)/2}\right\} ,\end{aligned}$$ the last inequality being a consequence of (\[53\]). Thus, for $j\in (j_0, n]$ and $1\le \ell \le j^{(\kappa-1-\varepsilon)/2}$, $\E(Z_{j, \theta}) \ge c_{42}\, \E(Z_{j-\ell, \theta})$, which implies that for all $m\le j^{(\kappa-1-\varepsilon)/2}$, $\E(\Lambda_{j,m, \theta}) \le c_{43} \, m (\E Z_{j, \theta})^{\kappa-1-\varepsilon}$. By Chebyshev’s inequality, for $j\in (j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P\left\{ \Lambda_{j,m, \theta} > \varepsilon r\right\} \le {c_{43} \, m (\E Z_{j, \theta})^{\kappa -1-\varepsilon} \over \varepsilon r}. \label{toto4}$$ Let us go back to (\[Yn>\]), and study the behaviour of $\sum_{x\in \T_m} ( \prod_{y\in ]\! ] e, x ]\! ]} A(y) ) Y_{j-m, \theta}^{(x)}$. Let $M^{(x)}$ (for $x\in \T_m$) be independent copies of $M_\infty$ and independent of all other random variables. Since $\E(Y_{j-m, \theta}^{(x)})= \E(M^{(x)})=1$, we have, by Fact \[f:petrov\], for any $a\in (1, \, \kappa)$, $$\begin{aligned} &&\E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} \\ &\le&2 \E \left\{ \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right) \, \E\left( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a \right) \right\}.\end{aligned}$$ By Proposition \[p:concentration\] and the fact that $(M_n)$ is a martingale bounded in $L^a$, we have $\E ( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a ) \le c_{44}$. Thus, $$\begin{aligned} \E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} &\le& 2c_{44} \E \left\{ \sum_{x\in \T_m} \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right\} \\ &=& 2c_{44} \, \deg^m \, [\E(A^a)]^m.\end{aligned}$$ By Chebyshev’s inequality, $$\P \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right| > \varepsilon r\right\} \le {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a}. \label{toto6}$$ Clearly, $\sum_{x\in \T_m} (\prod_{y\in ]\! ] e, x ]\! ]} A(y) ) M^{(x)}$ is distributed as $M_\infty$. We can thus plug (\[toto6\]) and (\[toto4\]) into (\[Yn>\]), to see that for $j\in [j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P \left\{ Y_{j, \theta} > (1-2\varepsilon) r\right\} \ge \P \left\{ M_\infty > r\right\} - {c_{43}\, m (\E Z_{j, \theta})^{\kappa-1- \varepsilon} \over \varepsilon r} - {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a} . \label{Yn-lb}$$ We choose $m:= \lfloor j^\varepsilon \rfloor$. Since $a\in (1, \, \kappa)$, we have $\deg \E(A^a) <1$, so that $\deg^m [\E(A^a)]^m \le \exp( - j^{\varepsilon/2})$ for all large $j$. We choose $r= {1\over (\E Z_{j, \theta})^{1- \delta}}$, with $\delta := {4\kappa \varepsilon \over \kappa -1}$.
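To see why this choice works, note that by (\[M-tail\]) the main term in (\[Yn-lb\]) is then of order $(\E Z_{j, \theta})^{(1-\delta)\kappa}$, whereas the first error term is of order $j^\varepsilon \, (\E Z_{j, \theta})^{\kappa - \varepsilon - \delta}$ (the last error term being exponentially small in $j^{\varepsilon/2}$); since $\delta(\kappa -1) = 4\kappa\varepsilon$ and $\E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ by (\[53\]), the ratio of the first error term to the main term is at most of order $j^\varepsilon \, (\E Z_{j, \theta})^{(4\kappa -1)\varepsilon}$, which is $O(j^{-(4\kappa -3)\varepsilon/2})$ and thus tends to $0$ (recall $\kappa>1$).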
In view of (\[M-tail\]), we obtain: for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{23} \, (\E Z_{j, \theta})^{(1- \delta) \kappa} - {c_{43}\over \varepsilon} \, j^\varepsilon\, (\E Z_{j, \theta})^{\kappa-\varepsilon-\delta} - {2c_{44} \, (\E Z_{j, \theta})^{(1- \delta)a} \over \varepsilon^a \exp(j^{\varepsilon/2})} .$$ Since $c_{14}/j^{1/(\kappa-1- \varepsilon)} \le \E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ (see (\[ell\]) and (\[53\]), respectively), we can pick sufficiently small $\varepsilon$, so that for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge {c_{23} \over 2} \, (\E Z_{j, \theta})^{(1-\delta) \kappa}.$$ Recall that by definition, $Y_{j, \theta} = {Z_{j, \theta} \over \E(Z_{j, \theta})}$. Therefore, for $j\in [j_0, n]$, $$\E[(Z_{j, \theta})^2] \ge [\E Z_{j, \theta}]^2 \, {(1-2\varepsilon)^2\over (\E Z_{j, \theta})^{2(1- \delta)}} \P \left\{ Y_{j, \theta} > {1-2\varepsilon \over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{45} \, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta}.$$ Of course, the inequality holds trivially for $0\le j < j_0$ (with possibly a different value of the constant $c_{45}$). Plugging this into (\[E(Z2)\]), we see that for $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \le \theta + \E(Z_{j, \theta}) - c_{46}\, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta} .$$ By Lemma \[l:abc\], this yields $\E(Z_{n, \theta}) \le c_{47} \, \{ \theta^{1/[\kappa+ (2- \kappa)\delta]} + n^{- 1/ [\kappa -1 + (2- \kappa)\delta]}\}$. An application of (\[Z=beta\]) implies the desired upper bound in (\[E(beta):kappa<2\]).$\Box$ [**Remark.**]{} A close inspection of our argument shows that under the assumptions $p= {1\over \deg}$ and $\psi'(1)<0$, we have, for any $1\le i \le \deg$ and uniformly in $\lambda \in [0, \, {1\over n}]$, $$\left( {\alpha_{n, \lambda}(e_i) \over \E[\alpha_{n, \lambda}(e_i)]} ,\; {\beta_{n, \lambda}(e_i) \over \E[\beta_{n, \lambda}(e_i)]} , \; {\gamma_n(e_i) \over \E[\gamma_n (e_i)]} \right) \; {\buildrel law \over \longrightarrow} \; (M_\infty, \, M_\infty, \, M_\infty),$$ where “${\buildrel law \over \longrightarrow}$” stands for convergence in distribution, and $M_\infty$ is the random variable defined in $(\ref{cvg-M})$.$\Box$ [**Acknowledgements**]{} We are grateful to Philippe Carmona and Marc Yor for helpful discussions. [99]{} Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. [*Ann. Math. Statist.*]{} [**23**]{}, 493–507. Duquesne, T. and Le Gall, J.-F. (2002). [*Random Trees, Lévy Processes and Spatial Branching Processes.*]{} Astérisque [**281**]{}. Société Mathématique de France, Paris. Griffeath, D. and Liggett, T.M. (1982). Critical phenomena for Spitzer’s reversible nearest particle systems. [*Ann. Probab.*]{} [**10**]{}, 881–895. Harris, T.E. (1963). [*The Theory of Branching Processes.*]{} Springer, Berlin. Hoel, P., Port, S. and Stone, C. (1972). [*Introduction to Stochastic Processes.*]{} Houghton Mifflin, Boston. Kesten, H., Kozlov, M.V. and Spitzer, F. (1975). A limit law for random walk in a random environment. [*Compositio Math.*]{} [**30**]{}, 145–168. Le Gall, J.-F. (2005). Random trees and applications. [*Probab. Surveys*]{} [**2**]{}, 245–311. Liggett, T.M. (1985). [*Interacting Particle Systems.*]{} Springer, New York. Liu, Q.S. (2000). On generalized multiplicative cascades. [*Stoch. Proc. Appl.*]{} [**86**]{}, 263–286. Liu, Q.S.
(2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. [*Stoch. Proc. Appl.*]{} [**95**]{}, 83–107. Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. [*Ann. Probab.*]{} [**20**]{}, 125–136. Lyons, R. and Peres, Y. (2005+). [*Probability on Trees and Networks.*]{} (Forthcoming book) [http://mypage.iu.edu/\~rdlyons/prbtree/prbtree.html]{} Mandelbrot, B. (1974). Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. [*C. R. Acad. Sci. Paris*]{} [**278**]{}, 289–292. Menshikov, M.V. and Petritis, D. (2002). On random walks in random environment on trees and their relationship with multiplicative chaos. In: [*Mathematics and Computer Science II (Versailles, 2002)*]{}, pp. 415–422. Birkhäuser, Basel. Pemantle, R. (1995). Tree-indexed processes. [*Statist. Sci.*]{} [**10**]{}, 200–213. Pemantle, R. and Peres, Y. (1995). Critical random walk in random environment on trees. [*Ann. Probab.*]{} [**23**]{}, 105–140. Pemantle, R. and Peres, Y. (2005+). The critical Ising model on trees, concave recursions and nonlinear capacity. [ArXiv:math.PR/0503137.]{} Peres, Y. (1999). Probability on trees: an introductory climb. In: [*École d’Été St-Flour 1997*]{}, Lecture Notes in Mathematics [**1717**]{}, pp. 193–280. Springer, Berlin. Petrov, V.V. (1995). [*Limit Theorems of Probability Theory.*]{} Clarendon Press, Oxford. Rozikov, U.A. (2001). Random walks in random environments on the Cayley tree. [*Ukrainian Math. J.*]{} [**53**]{}, 1688–1702. Sinai, Ya.G. (1982). The limit behavior of a one-dimensional random walk in a random environment. [*Theory Probab. Appl.*]{} [**27**]{}, 247–258. Sznitman, A.-S. (2005+). Random motions in random media. (Lecture notes of minicourse at Les Houches summer school.) [http://www.math.ethz.ch/u/sznitman/]{} Zeitouni, O. (2004). Random walks in random environment. In: [*École d’Été St-Flour 2001*]{}, Lecture Notes in Mathematics [**1837**]{}, pp. 189–312. Springer, Berlin. Yueyun Hu: Département de Mathématiques, Université Paris XIII, 99 avenue J-B Clément, F-93430 Villetaneuse, France. Zhan Shi: Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, 4 place Jussieu, F-75252 Paris Cedex 05, France.
723 P.2d 394 (1986) L. Lynn ALLEN and Merle Allen, Plaintiffs and Respondents, v. Thomas M. KINGDON and Joan O. Kingdon, Defendants and Appellants. No. 18290. Supreme Court of Utah. July 29, 1986. H. James Clegg, Scott Daniels, Salt Lake City, for defendants and appellants. Boyd M. Fullmer, Salt Lake City, for plaintiffs and respondents. HOWE, Justice: The plaintiffs Allen (buyers) brought this action for the return of all money they had paid on an earnest money agreement to purchase residential real estate. The defendants Kingdon (sellers) appeal the trial court's judgment that the agreement had been rescinded by the parties and that the buyers were entitled to a full refund. *395 On February 12, 1978, the buyers entered into an earnest money agreement to purchase the sellers' home for $87,500. The agreement provided for an immediate deposit of $1,000, which the buyers paid, to be followed by an additional down payment of $10,000 by March 15, 1978. The buyers were to pay the remainder of the purchase price at the closing which was set on or before April 15, 1978. The agreement provided for the forfeiture of all amounts paid by the buyers as liquidated and agreed damages in the event they failed to complete the purchase. The buyers did not pay the additional $10,000, but paid $9,800 because the parties later agreed on a $200 deduction for a light fixture the sellers were allowed to take from the home. An inscription on the $9,800 check stated all monies paid were "subject to closing." There were several additional exchanges between the parties after the earnest money agreement was signed. The buyers requested that the sellers fix the patio, which the sellers refused to do. The buyers asked that the sellers paint the front of the home, which Mr. Kingdon agreed to do, but did not accomplish. The parties eventually met to close the sale. The buyers insisted on a $500 deduction from the purchase price because of the sellers' failure to paint. The sellers refused to convey title unless the buyers paid the full purchase price. Because of this impasse, the parties did not close the transaction. Mrs. Allen and Mrs. Kingdon left the meeting, after which Mr. Kingdon orally agreed to refund the $10,800, paid by the buyers. However, three days later, the sellers' attorney sent a letter to the buyers advising them that the sellers would retain enough of the earnest money to cover any damages they would incur in reselling the home. The letter also stated that the buyers could avoid these damages by closing within ten days. The buyers did not offer to close the sale. The home was eventually sold for $89,100, less a commission of $5,346. Claiming damages in excess of $15,000, the sellers retained the entire $10,800 and refused to make any refund to the buyers. The trial court found that the parties had orally rescinded their agreement and ordered the sellers to return the buyers' payments, less $1,000 on a counterclaim of the sellers, which award is not challenged on this appeal. The sellers first contend that the trial court erred in holding that our statute of frauds permits oral rescission of a written executory contract for the sale of real property. 
U.C.A., 1953, § 25-5-1 provides: No estate or interest in real property, other than leases for a term not exceeding one year, nor any trust or power over or concerning real property or in any manner relating thereto, shall be created, granted, assigned, surrendered or declared otherwise than by operation of law, or by deed or conveyance in writing subscribed by the party creating, granting, assigning, surrendering or declaring the same, or by his lawful agent thereunto authorized by writing. (Emphasis added.) In Cutwright v. Union Savings & Investment Co., 33 Utah 486, 491-92, 94 P. 984, 985 (1908), this Court interpreted section 25-5-1 as follows: No doubt the transfer of any interest in real property, whether equitable or legal, is within the statute of frauds; and no such interest can either be created, transferred, or surrendered by parol merely.... No doubt, if a parol agreement to surrender or rescind a contract for the sale of lands is wholly executory, and nothing has been done under it, it is within the statute of frauds, and cannot be enforced any more than any other agreement concerning an interest in real property may be. (Emphasis added.) In that case, the buyer purchased a home under an installment contract providing for the forfeiture of all amounts paid in the event the buyer defaulted. The buyer moved into the home but soon discontinued payments. He informed the seller that he would make no more payments on the contract, surrendered the key to the house, and vacated the premises. Soon thereafter, an assignee of the buyer's interest informed the seller that he intended to make the payments *396 under the contract and demanded possession. The seller refused to accept the payments, claiming that the contract had been mutually rescinded on the buyer's surrender of possession. We held that the statute of frauds generally requires the surrender of legal and equitable interests in land to be in writing. Where, however, an oral rescission has been executed, the statute of frauds may not apply. In Cutwright, surrender of possession by the buyer constituted sufficient part performance of the rescission agreement to remove it from the statute of frauds. This exception is one of several recognized by our cases. We have also upheld oral rescission of a contract for the sale of land when the seller, in reliance on the rescission, enters into a new contract to resell the land. Budge v. Barron, 51 Utah 234, 244-45, 169 P. 745, 748 (1917). In addition, an oral rescission by the buyer may be enforceable where the seller has breached the written contract. Thackeray v. Knight, 57 Utah 21, 27-28, 192 P. 263, 266 (1920). In the present case, the oral rescission involved the surrender of the buyers' equitable interest in the home under the earnest money agreement. Further, the rescission was wholly executory. There is no evidence of any part performance of the rescission or that the buyers substantially changed their position in reliance on the promise to discharge the contract. On the contrary, three days after the attempted closing, the sellers informed the buyers that they intended to hold them to the contract. It was only after the buyers continued in their refusal to close that the sellers placed the home on the market. The buyers argue that the weight of authority in the United States is to the effect that an executory contract for the sale of land within the statute of frauds may be orally rescinded. 
This may indeed be the case when there are acts of performance of the oral agreement sufficient to take it out of the statute of frauds. See Annot., 42 A.L.R.3d 242, 251 (1972). In support of their contention that an oral rescission of an earnest money agreement for the purchase of land is valid absent any acts of performance, the buyers rely on Niernberg v. Feld, 131 Colo. 508, 283 P.2d 640 (1955). In that case, the Colorado Supreme Court upheld the oral rescission of an executory contract for the sale of land under a statute of frauds which, like Utah's, applies specifically to the surrender of interests in land. The Colorado court concluded that the statute of frauds concerns the making of contracts only and does not apply to their revocation. However, the court did not attempt to reconcile its holding with the contradictory language of the controlling statute. For a contrary result under a similar statute and fact situation, see Waller v. Lieberman, 214 Mich. 428, 183 N.W. 235 (1921). In light of the specific language of Utah's statute of frauds and our decision in Cutwright v. Union Savings & Investment Co., supra, we decline to follow the Colorado case. We note that the annotator at 42 A.L.R.3d 257 points out that in Niernberg the rescission was acted upon in various ways. We hold in the instant case that the wholly executory oral rescission of the earnest money agreement was unenforceable under our statute of frauds. Nor were the buyers entitled to rescind the earnest money agreement because of the sellers' failure to paint the front of the home as promised. Cf. Thackeray v. Knight, 57 Utah at 27-28, 192 P. at 266 (buyer's oral rescission of contract for sale of land was valid when seller breached contract). The rule is well settled in Utah that if the original agreement is within the statute of frauds, a subsequent agreement that modifies any of the material parts of the original must also satisfy the statute. Golden Key Realty, Inc. v. Mantas, 699 P.2d 730, 732 (Utah 1985). An exception to this general rule has been recognized where a party has changed position by performing an oral modification so that it would be inequitable to permit the other party to found a claim or defense on the original agreement as unmodified. White v. Fox, 665 P.2d 1297, 1301 (Utah 1983) *397 (citing Bamberger Co. v. Certified Productions, Inc., 88 Utah 194, 201, 48 P.2d 489, 492 (1935), aff'd on rehearing, 88 Utah 213, 53 P.2d 1153 (1936)). There is no indication that the buyers changed their position in reliance on the sellers' promise to paint the front of the house. Thus, equitable considerations would not preclude the sellers from raising the unmodified contract as a defense to the claim of breach. The fact that the parties executed several other oral modifications of the written contract does not permit the buyers to rescind the contract for breach of an oral promise on which they did not rely to their detriment. We therefore hold that the buyers were not entitled to rescind the earnest money agreement because of the sellers' failure to perform an oral modification required to be in writing under the statute of frauds. The buyers also contend that they are entitled to the return of the $10,800 because the inscription on the $9,800 check stated that all monies were paid "subject to closing." The buyers argue that by conditioning the check in this manner they may, in effect, rewrite the earnest money agreement and relieve themselves of any liability for their own failure to close the sale. 
We cannot accept this argument. The buyers were under an obligation to pay the monies unconditionally. The sellers' acceptance of the inscribed check cannot be construed as a waiver of their right to retain the $10,800 when the buyers failed to perform the agreement. Having concluded that the buyers breached their obligation under the earnest money agreement, we must next consider whether the liquidated damages provision of the agreement is enforceable. That provision provided that the sellers could retain all amounts paid by the buyers as liquidated and agreed damages in the event the buyers failed to complete the purchase. The general rules in Utah regarding enforcement of liquidated damages for breach of contract have been summarized as follows: Under the basic principles of freedom of contract, a stipulation to liquidated damages for breach of contract is generally enforceable. Where, however, the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any possible loss that might have been contemplated that it shocks the conscience, the stipulation will not be enforced. Warner v. Rasmussen, 704 P.2d 559, 561 (Utah 1985) (citations omitted). In support of their contention that the liquidated damages are not excessive compared to actual damages, the sellers assert that they offered evidence of actual damages in excess of $15,000. However, the trial court disagreed and found the amount of liquidated damages excessive. The record indicates that the only recoverable damages sustained by the sellers resulted from the resale of the home at a lower net price amounting to $3,746 (the difference between the contract price of $87,500 and the eventual selling price, less commission, of $83,754). We agree that $10,800 is excessive and disproportionate when compared to the $3,746 loss of bargain suffered by the sellers. Since the buyers did not ever have possession of the property, the other items of damage claimed by the sellers (interest on mortgage, taxes, and utilities) are not recoverable by them. Perkins v. Spencer, 121 Utah 468, 243 P.2d 446 (1952). Therefore, the sellers are not entitled to retain the full amount paid, but may offset their actual damages of $3,746 against the buyers' total payments. See Soffe v. Ridd, 659 P.2d 1082 (Utah 1983) (seller was entitled to actual damages where liquidated damages provision was held unenforceable). We reverse the trial court's judgment that the earnest money agreement was rescinded and conclude that the buyers breached their obligation to close the transaction. However, we affirm the judgment below that the liquidated damages provided for were excessive and therefore not recoverable. *398 The case is remanded to the trial court to amend the judgment to award the buyers $7,054, less $1,000 awarded by the trial court to the sellers on their counterclaim which is not challenged on this appeal. No interest or attorney fees are awarded to either party inasmuch as the trial court awarded none and neither party has raised the issue on appeal. HALL, C.J., and STEWART and DURHAM, JJ., concur. ZIMMERMAN, Justice (concurring): I join the majority in its disposition of the various issues. However, the majority quotes from Warner v. 
Rasmussen, 704 P.2d 559 (Utah 1985), to the effect that contractual provisions for liquidated damages will be enforced unless "the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any loss that might have been contemplated that it shocks the conscience." The Court then finds that the amount of the liquidated damages provided for in the agreement is "excessive and disproportionate" when compared to the actual loss suffered by the sellers, thus implying that in the absence of a disparity as great as that which exists here (actual loss is approximately one-third of the penalty), the standard of Warner v. Rasmussen will not be satisfied. I think an examination of our cases should suggest to any thoughtful reader that, in application, the test stated in Warner is not nearly as accepting of liquidated damage provisions as the quoted language would suggest. In fact, I believe this Court routinely applies the alternative test of Warner—that the liquidated damages must bear some reasonable relationship to the actual damages—and that we carefully scrutinize liquidated damage awards. I think it necessary to say this lest the bar be misled by the rather loose language of Warner and its predecessors.
{ "pile_set_name": "FreeLaw" }
154 F.3d 417

U.S. v. Chukwuma*

No. 97-11093

United States Court of Appeals, Fifth Circuit.

July 29, 1998

Appeal From: N.D. Tex., No. 3:97-CR-104-D

Affirmed.

* Fed.R.App.P. 34(a); 5th Cir.R. 34-2
{ "pile_set_name": "FreeLaw" }
{ "pile_set_name": "Github" }
---
abstract: 'Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.'
address:
- |
    Systems Engineering Department,\
    National Autonomous University of Honduras. Blvd. Suyapa, Tegucigalpa, Honduras
- |
    Department of Computer Science, University of Alcalá\
    Alcalá de Henares, 28871 Madrid, Spain
- |
    Department of Computer Science, University of A Coruña\
    Campus de Elviña s/n 15071 - A Coruña, Spain
author:
- 'Raul-Jose Palma-Mendoza'
- 'Luis de-Marcos'
- Daniel Rodriguez
- 'Amparo Alonso-Betanzos'
title: 'Distributed Correlation-Based Feature Selection in Spark'
---

feature selection, scalability, big data, apache spark, cfs, correlation

Introduction {#sec:intro}
============

In recent years, the advent of big data has raised unprecedented challenges for all types of organizations and researchers in many fields. Wu et al. [@XindongWu2014], however, state that the big data revolution has come to us not only with many challenges but also with plenty of opportunities for those organizations and researchers willing to embrace them. Data mining is one field where the opportunities offered by big data can be embraced, and, as indicated by Leskovec et al. [@Leskovec2014mining], the main challenge is to extract useful information or knowledge from these huge data volumes that enables us to predict or better understand the phenomena involved in the generation of the data.

Feature selection (FS) is a dimensionality reduction technique that has emerged as an important step in data mining. According to Guyon and Elisseeff [@Guyon2003], its purpose is twofold: to select relevant attributes and simultaneously to discard redundant attributes. This purpose has become even more important nowadays, as vast quantities of data need to be processed in all kinds of disciplines. Practitioners also face the challenge of not having enough computational resources. In a review of the most widely used FS methods, Bolón-Canedo et al. [@Bolon-Canedo2015b] conclude that there is a growing need for scalable and efficient FS methods, given that the existing methods are likely to prove inadequate for handling the increasing number of features encountered in big data.

Depending on their relationship with the classification process, FS methods are commonly classified into one of three main categories: (i) filter methods, (ii) wrapper methods, or (iii) embedded methods.
*Filters* rely solely on the characteristics of the data and, since they are independent of any learning scheme, they require less computational effort. They have been shown to be important preprocessing techniques, with many applications such as churn prediction [@Idris2012; @Idris2013] and microarray data classification. In microarray data classification, filters obtain better or at least comparable results in terms of accuracy to wrappers [@Bolon-Canedo2015a]. In *wrapper* methods, the final subset selection is based on a learning algorithm that is repeatedly trained with the data. Although wrappers tend to increase the final accuracy of the learning scheme, they are usually more computationally expensive than the other two approaches. Finally, in *embedded* methods, FS is part of the classification process, e.g., as happens with decision trees.

Another important classification of FS methods is, according to their results, as (i) ranker algorithms or (ii) subset selector algorithms. With *rankers*, the result is a sorted set of the original features. The order of this returned set is defined according to the quality that the FS method determines for each feature. Some rankers also assign a weight to each feature that provides more information about its quality. *Subset selectors* return a non-ordered subset of features from the original set so that together they yield the highest possible quality according to some given measure. Subset selectors, therefore, consist of a search procedure and an evaluation measure. Returning a subset directly can be considered an advantage in many cases, as rankers usually evaluate features individually and leave it to the user to select the number of top features in a ranking.

One filter-based subset selector method is the Correlation-Based Feature Selection (CFS) algorithm [@Hall2000], traditionally considered useful due to its ability not only to reduce dimensionality but also to improve classification algorithm performance. However, the CFS algorithm, like many other multivariate FS algorithms, has an execution time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. This quadratic complexity in the number of features makes the CFS very sensitive to the *curse of dimensionality* [@bellman1957dynamic]. Therefore, a scalable adaptation of the original algorithm is required to be able to apply the CFS algorithm to datasets that are large both in number of instances and dimensions.

As a response to the big data phenomenon, many technologies and programming frameworks have appeared with the aim of helping data mining practitioners design new strategies and algorithms that can tackle the challenge of distributing work over clusters of computers. One such tool that has recently received much attention is Apache Spark [@Zaharia2010], which represents a new programming model that is a superset of the MapReduce model introduced by Google [@Dean2004a; @Dean2008]. One of Spark’s strongest advantages over the traditional MapReduce model is its ability to efficiently handle the iterative algorithms that frequently appear in the data mining and machine learning fields.

We describe two distributed and parallel versions of the original CFS algorithm for classification problems using the Apache Spark programming model. The main difference between them is how the data is distributed across the cluster, i.e., using a horizontal partitioning scheme (hp) or using a vertical partitioning scheme (vp).
We compare the two versions – DiCFS-hp and DiCFS-vp, respectively – and also compare them with a baseline, represented by the classical non-distributed implementation of CFS in WEKA [@Hall2009a]. Finally, their benefits in terms of reduced execution time are compared with those of the CFS version developed by Eiras-Franco et al. [@Eiras-Franco2016] for regression problems. The results show that the time-efficiency and scalability of our two versions are an improvement on those of the original version of the CFS; furthermore, similar or improved execution times are obtained with respect to the Eiras-Franco et al. [@Eiras-Franco2016] regression version. In the interest of reproducibility, our software and sources are available as a Spark package[^1] called DiCFS, with a corresponding mirror on GitHub.[^2]

The rest of this paper is organized as follows. Section \[sec:stateofart\] summarizes the most important contributions in the area of distributed and parallel FS and proposes a classification according to how parallelization is carried out. Section \[sec:cFS\] describes the original CFS algorithm, including its theoretical foundations. Section \[sec:spark\] presents the main aspects of the Apache Spark computing framework, focusing on those relevant to the design and implementation of our proposed algorithms. Section \[sec:diCFS\] describes and discusses our DiCFS-hp and DiCFS-vp versions of the CFS algorithm. Section \[sec:experiments\] describes our experiments to compare results for DiCFS-hp and DiCFS-vp, the WEKA approach and the Eiras-Franco et al. [@Eiras-Franco2016] approach. Finally, conclusions and future work are outlined in Section \[sec:conclusions\].

Background and Related Work {#sec:stateofart}
===========================

As might be expected, filter-based FS algorithms have asymptotic complexities that depend on the number of features and/or instances in a dataset. Many algorithms, such as the CFS, have quadratic complexities, while the most frequently used algorithms have at least linear complexities [@Bolon-Canedo2015b]. This is why, in recent years, many attempts have been made to achieve more scalable FS methods. In what follows, we analyse recent work on the design of new scalable FS methods according to parallelization approaches: (i) search-oriented, (ii) dataset-split-oriented, or (iii) filter-oriented.

*Search-oriented* parallelizations account for most approaches, in that the main aspects to be parallelized are (i) the search guided by a classifier and (ii) the corresponding evaluation of the resulting models. We classify the following studies in this category:

-   Kubica et al. [@Kubica2011] developed parallel versions of three forward-search-based FS algorithms, where a wrapper with a logistic regression classifier is used to guide a search parallelized using the MapReduce model.

-   García et al. [@Garcia_aparallel] presented a simple approach for parallel FS, based on selecting random feature subsets and evaluating them in parallel using a classifier. In their experiments they used a support vector machine (SVM) classifier and, in comparing their results with those for a traditional wrapper approach, found lower accuracies but also much shorter computation times.

-   Wang et al. [@Wang2016] used the Spark computing model to implement an FS strategy for classifying network traffic.
They first implemented an initial FS using the Fisher score filter [@duda2012pattern] and then performed, using a wrapper approach, a distributed forward search over the best $m$ features selected. Since the Fisher filter was used, however, only numerical features could be handled.

-   Silva et al. [@Silva2017] addressed the FS scaling problem using an asynchronous search approach, given that synchronous search, as commonly performed, can lead to efficiency losses due to the inactivity of some processors waiting for other processors to end their tasks. In their tests, they first obtained an initial reduction using a mutual information (MI) [@Peng2005] filter and then evaluated subsets using a random forest (RF) [@Ho1995] classifier. However, as stated by those authors, any other approach could be used for subset evaluation.

*Dataset-split-oriented* approaches have the main characteristic that parallelization is performed by splitting the dataset vertically or horizontally, then applying existing algorithms to the parts and finally merging the results following certain criteria. We classify the following studies in this category:

-   Peralta et al. [@Peralta2015] used the MapReduce model to implement a wrapper-based evolutionary search FS method. The dataset was split by instances and the FS method was applied to each resulting subset. Simple majority voting was used as a reduction step for the selected features and the final subset of features was selected according to a user-defined threshold. All tests were carried out using the EPSILON dataset, which we also use here (see Section \[sec:experiments\]).

-   Bolón-Canedo et al. [@Bolon-Canedo2015a] proposed a framework to deal with high dimensionality data by first optionally ranking features using an FS filter, then partitioning vertically by dividing the data according to features (columns) rather than, as commonly done, according to instances (rows). After partitioning, another FS filter is applied to each partition, and finally, a merging procedure guided by a classifier obtains a single set of features. The authors experiment with five commonly used FS filters for the partitions, namely, CFS [@Hall2000], Consistency [@Dash2003], INTERACT [@Zhao2007], Information Gain [@Quinlan1986] and ReliefF [@Kononenko1994], and with four classifiers for the final merging, namely, C4.5 [@Quinlan1992], Naive Bayes [@rish2001empirical], $k$-Nearest Neighbors [@Aha1991] and SVM [@vapnik1995nature], and show that their approach significantly reduces execution times while maintaining and, in some cases, even improving accuracy.

Finally, *filter-oriented* methods include redesigned or new filter methods that are, or become, inherently parallel. Unlike the methods in the other categories, parallelization in this category can be viewed as an internal, rather than external, element of the algorithm. We classify the following studies in this category:

-   Zhao et al. [@Zhao2013a] described a distributed parallel FS method based on a variance preservation criterion using the proprietary software SAS High-Performance Analytics.[^3] One remarkable characteristic of the method is its support not only for supervised FS, but also for unsupervised FS where no label information is available. Their experiments were carried out with datasets with both high dimensionality and a high number of instances.

-   Ramírez-Gallego et al. [@Ramirez-Gallego2017] described scalable versions of the popular mRMR [@Peng2005] FS filter that included a distributed version using Spark.
The authors showed that their version that leveraged the power of a cluster of computers could perform much faster than the original and processed much larger datasets.

-   In a previous work [@Palma-Mendoza2018], we used the Spark computing model to design a distributed version of the ReliefF [@Kononenko1994] filter, called DiReliefF. In testing using datasets with large numbers of features and instances, it was much more efficient and scalable than the original filter.

-   Finally, Eiras-Franco et al. [@Eiras-Franco2016], using four distributed FS algorithms, three of them filters, namely, InfoGain [@Quinlan1986], ReliefF [@Kononenko1994] and the CFS [@Hall2000], reduced execution times with respect to the original versions. However, in the CFS case, the version of those authors focuses on regression problems where all the features, including the class label, are numerical, with correlations calculated using the Pearson coefficient. A completely different approach is required to design a parallel version for classification problems where correlations are based on information theory.

The approach described here can be categorized as a *filter-oriented* approach that builds on works described elsewhere [@Ramirez-Gallego2017], [@Palma-Mendoza2018], [@Eiras-Franco2016]. The fact that those works focus not only on designing efficient and scalable FS algorithms, but also on preserving the original behaviour (and obtaining the same final results) of traditional filters, means that research focused on those filters is also valid for the adapted versions. Another important issue in relation to filters is that, since they are generally more efficient than wrappers, they are often the only feasible option due to the abundance of data. It is worth mentioning that scalable filters could feasibly be included in any of the methods mentioned in the *search-oriented* and *dataset-split-oriented* categories, where an initial filtering step is implemented to improve performance.

Correlation-Based Feature Selection (CFS) {#sec:cFS}
=========================================

The CFS method, originally developed by Hall [@Hall2000], is categorized as a subset selector because it evaluates subsets rather than individual features. For this reason, the CFS needs to perform a search over candidate subsets, but since performing a full search over all possible subsets is prohibitive (due to the exponential complexity of the problem), a heuristic has to be used to guide a partial search. This heuristic is the main concept behind the CFS algorithm; being a filter method, the CFS does not rely on a classification-derived measure, but rather applies a principle derived from Ghiselli’s test theory [@ghiselli1964theory], i.e., *good feature subsets contain features highly correlated with the class, yet uncorrelated with each other*. This principle is formalized in Equation (\[eq:heuristic\]) where $M_s$ represents the merit assigned by the heuristic to a subset $s$ that contains $k$ features, $\overline{r_{cf}}$ represents the average of the correlations between each feature in $s$ and the class attribute, and $\overline{r_{ff}}$ is the average correlation between each of the $\begin{psmallmatrix}k\\2\end{psmallmatrix}$ possible feature pairs in $s$. The numerator can be interpreted as an indicator of how predictive the feature set is and the denominator as an indicator of how redundant features in $s$ are.
$$\label{eq:heuristic}
M_s = \frac { k\cdot \overline { r_{cf} } }{ \sqrt { k + k (k - 1) \cdot \overline{ r_{ff}} } }$$

Equation (\[eq:heuristic\]) also introduces the second important concept underlying the CFS, which is the computation of correlations to obtain the required averages. In classification problems, the CFS uses the symmetrical uncertainty (SU) measure [@press1982numerical] shown in Equation (\[eq:su\]), where $H$ represents the entropy function of a single or conditioned random variable, as shown in Equation (\[eq:entropy\]). This calculation adds a requirement for the dataset before processing, which is that all non-discrete features must be discretized. By default, this process is performed using the discretization algorithm proposed by Fayyad and Irani [@Fayyad1993].

$$\label{eq:su}
SU = 2 \cdot \left[ \frac { H(X) - H(X|Y) }{ H(Y) + H(X) } \right]$$

$$\begin{aligned}
\label{eq:entropy}
H(X) &=-\sum _{ x\in X }{ p(x)\log _{2}{p(x)} } \nonumber \\
H(X | Y) &=-\sum _{ y\in Y }{ p(y) } \sum_{x \in X}{p(x |y) \log _{ 2 }{ p(x | y) } }\end{aligned}$$

The third core CFS concept is its search strategy. By default, the CFS algorithm uses a best-first search to explore the search space. The algorithm starts with an empty set of features and at each step of the search all possible single feature expansions are generated. The new subsets are evaluated using Equation (\[eq:heuristic\]) and are then added to a priority queue according to merit. In the subsequent iteration, the best subset from the queue is selected for expansion in the same way as was done for the first empty subset. If expanding the best subset fails to produce an improvement in the overall merit, this counts as a *fail* and the next best subset from the queue is selected. By default, the CFS uses five consecutive fails as a stopping criterion and as a limit on queue length.

The final CFS element is an optional post-processing step. As stated before, the CFS tends to select feature subsets with low redundancy and high correlation with the class. However, in some cases, extra features that are *locally predictive* in a small area of the instance space may exist that can be leveraged by certain classifiers [@Hall1999]. To include these features in the subset after the search, the CFS can optionally use a heuristic that enables inclusion of all features whose correlation with the class is higher than the correlation between the features themselves and with features already selected. Algorithm \[alg:cFS\] summarizes the main aspects of the CFS:

    Corrs := correlations between all features with the class     (line [lin:allCorrs])
    BestSubset := empty set
    Queue.setCapacity(5)
    Queue.add(BestSubset)
    NFails := 0
    while NFails < 5 do
        if Queue is empty then
            return BestSubset
        HeadState := Queue.dequeue
        NewSubsets := evaluate(expand(HeadState), Corrs)           (line [lin:expand])
        Queue.add(NewSubsets)
        LocalBest := Queue.head
        if merit(LocalBest) > merit(BestSubset) then
            BestSubset := LocalBest
            NFails := 0
        else
            NFails := NFails + 1
    return BestSubset
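As a concrete illustration of Equations (\[eq:su\]) and (\[eq:entropy\]), the following minimal Scala sketch (our own illustrative code, not part of the WEKA or DiCFS sources) computes the symmetrical uncertainty of two discrete features from a contingency table of value counts:

```scala
object SymmetricalUncertainty {

  // H(X) from a sequence of counts, one per value of X (Equation eq:entropy).
  private def entropy(counts: Seq[Double]): Double = {
    val total = counts.sum
    counts.filter(_ > 0).map { c =>
      val p = c / total
      -p * (math.log(p) / math.log(2))
    }.sum
  }

  // SU(X, Y) = 2 * (H(X) - H(X|Y)) / (H(X) + H(Y)) (Equation eq:su),
  // where table(x)(y) holds the count of instances with X = x and Y = y.
  def su(table: Array[Array[Double]]): Double = {
    val total = table.map(_.sum).sum
    val hX = entropy(table.map(_.sum))             // marginal counts of X (rows)
    val hY = entropy(table.transpose.map(_.sum))   // marginal counts of Y (columns)
    // H(X|Y) = sum over y of p(y) * H(X | Y = y), one column per value of y.
    val hXGivenY = table.transpose.map(col => (col.sum / total) * entropy(col)).sum
    if (hX + hY == 0) 0.0 else 2 * (hX - hXGivenY) / (hX + hY)
  }
}

// Example: SU for two binary features observed over 100 instances.
// SymmetricalUncertainty.su(Array(Array(30.0, 10.0), Array(5.0, 55.0)))
```

In DiCFS, it is the value counts feeding this kind of calculation that must be obtained in a distributed fashion, as described in Section \[sec:diCFS\].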
The Spark Cluster Computing Model {#sec:spark}
=================================

The following short description of the main concepts behind the Spark computing model focuses exclusively on aspects that complete the conceptual basis for our DiCFS proposal in Section \[sec:diCFS\].

The main concept behind the Spark model is what is known as the resilient distributed dataset (RDD). Zaharia et al. [@Zaharia2010; @Zaharia2012] defined an RDD as a read-only collection of objects, i.e., a dataset partitioned and distributed across the nodes of a cluster. The RDD has the ability to automatically recover lost partitions through a lineage record that knows the origin of the data and the calculations performed on it. Even more relevant for our purposes is the fact that operations run on an RDD are automatically parallelized by the Spark engine; this abstraction frees the programmer from having to deal with threads, locks and all other complexities of traditional parallel programming.

With respect to the cluster architecture, Spark follows the master-slave model. Through a cluster manager (master), a driver program can access the cluster and coordinate the execution of a user application by assigning tasks to the executors, i.e., programs that run in worker nodes (slaves). By default, only one executor is run per worker. Regarding the data, RDD partitions are distributed across the worker nodes, and the number of tasks launched by the driver for each executor is set according to the number of RDD partitions residing in the worker.

Two types of operations can be executed on an RDD, namely, actions and transformations. Of the *actions*, which allow results to be obtained from a Spark cluster, perhaps the most important is $collect$, which returns an array with all the elements in the RDD. This operation has to be done with care, to avoid exceeding the maximum memory available to the driver. Other important actions include $reduce$, $sum$, $aggregate$ and $sample$, but as they are not used by us here, we will not explain them.

*Transformations* are mechanisms for creating an RDD from another RDD. Since RDDs are read-only, a transformation creating a new RDD does not affect the original RDD. A basic transformation is $mapPartitions$, which receives, as a parameter, a function that can handle all the elements of a partition and return another collection of elements to form a new partition. The $mapPartitions$ transformation is applied to all partitions in the RDD to obtain a new transformed RDD. Since received and returned partitions do not need to match in size, $mapPartitions$ can thus reduce or increase the overall size of an RDD. Another interesting transformation is $reduceByKey$; this can only be applied to what is known as a PairRDD, which is an RDD whose elements are key-value pairs, where the keys do not have to be unique. The $reduceByKey$ transformation is used to aggregate the elements of an RDD, which it does by applying a commutative and associative function that receives two values of the PairRDD as arguments and returns one element of the same type. This reduction is applied by key, i.e., elements with the same key are reduced such that the final result is a PairRDD with unique keys, whose corresponding values are the result of the reduction. Other important transformations (which we do not explain here) are $map$, $flatMap$ and $filter$.

Another key concept in Spark is *shuffling*, which refers to the data communication required for certain types of transformations, such as the above-mentioned $reduceByKey$. Shuffling is a costly operation because it requires redistribution of the data in the partitions, and therefore, data reads and writes across all nodes in the cluster. For this reason, shuffling operations are minimized as much as possible.
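As a toy illustration of these operations (a sketch of our own, unrelated to the actual DiCFS sources; all names are illustrative), the following program builds a small PairRDD, processes each partition locally with $mapPartitions$ and aggregates the partial results with $reduceByKey$; it also uses a broadcast variable, the mechanism described next:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkBasicsSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-basics-sketch"))

    // A toy RDD of (feature index, value) pairs, split into 4 partitions.
    val data = sc.parallelize(Seq((0, 1.0), (1, 0.0), (0, 0.0), (1, 1.0)), 4)

    // A small read-only lookup table shared with all workers.
    val names = sc.broadcast(Map(0 -> "f0", 1 -> "f1"))

    // mapPartitions: transform the elements of each partition locally.
    val renamed = data.mapPartitions { elems =>
      elems.map { case (idx, value) => (names.value(idx), value) }
    }

    // reduceByKey: aggregate values per key; this triggers a shuffle.
    val sums = renamed.reduceByKey(_ + _)

    // collect: an action that brings the final result back to the driver.
    sums.collect().foreach(println)

    sc.stop()
  }
}
```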
The final concept underpinning our proposal is *broadcasting*, which is a useful mechanism for efficiently sharing read-only data between all worker nodes in a cluster. Broadcast data is dispatched from the driver throughout the network and is thus made available to all workers in a deserialized fast-to-access form.

Distributed Correlation-Based Feature Selection (DiCFS) {#sec:diCFS}
=======================================================

We now describe the two algorithms that make up our proposal. They represent alternative distributed versions that use different partitioning strategies to process the data. We start with some considerations common to both approaches.

As stated previously, the CFS has an execution time complexity of $\mathcal{O}(m^2 \cdot n)$ where $m$ is the number of features and $n$ is the number of instances. This complexity derives from the first step shown in Algorithm \[alg:cFS\], the calculation of $\begin{psmallmatrix}m+ 1\\2\end{psmallmatrix}$ correlations between all pairs of features including the class, and the fact that, for each pair, $\mathcal{O}(n)$ operations are needed in order to calculate the entropies. Thus, to develop a scalable version, our main focus in parallelization design must be on the calculation of correlations.

Another important issue is that, although the original study by Hall [@Hall2000] stated that all correlations had to be calculated before the search, this is only truly a requisite when a backward best-first search is performed. In the case of the search shown in Algorithm \[alg:cFS\], correlations can be calculated on demand, i.e., on each occasion a new non-evaluated pair of features appears during the search. In fact, trying to calculate all correlations in any dataset with a high number of features and instances is prohibitive; the tests performed on the datasets described in Section \[sec:experiments\] show that a very low percentage of correlations is actually used during the search and also that on-demand correlation calculation is around $100$ times faster when the default number of five maximum fails is used.

Below we describe our two alternative methods for calculating these correlations in a distributed manner depending on the type of partitioning used.

Horizontal Partitioning {#subsec:horizontalPart}
-----------------------

Horizontal partitioning of the data may be the most natural way to distribute work between the nodes of a cluster. If we consider the default layout where the data is represented as a matrix $D$ in which the columns represent the different features and the rows represent the instances, then it is natural to distribute the matrix by assigning different groups of rows to nodes in the cluster. If we represent this matrix as an RDD, this is exactly what Spark will automatically do.

Once the data is partitioned, Algorithm \[alg:cFS\] (omitting line \[lin:allCorrs\]) can be started on the driver. The distributed work will be performed on line \[lin:expand\], where the best subset in the queue is expanded and, depending on this subset and the state of the search, a number $nc$ of new correlations will be required to evaluate the resulting subsets. Thus, the most complex step is the calculation of the corresponding $nc$ contingency tables that will allow us to obtain the entropies and conditional entropies that make up the symmetrical uncertainty correlation (see Equation (\[eq:su\])). These $nc$ contingency tables are partially calculated locally by the workers following Algorithm \[alg:localCTables\]:
    pairs <- nc pairs of features                                  (line [lin:pairs])
    rows <- local rows of partition
    m <- number of columns (features in D)
    ctables <- a map from each pair to an empty contingency table
    for each row r in rows do
        for each pair (x, y) in pairs do
            ctables(x, y)(r(x), r(y)) += 1
    return ctables

As can be observed, the algorithm loops through all the local rows, counting the values of the features contained in *pairs* (declared in line \[lin:pairs\]) and storing the results in a map holding the feature pairs as keys and the contingency tables as their matching values. The next step is to merge the contingency tables from all the workers to obtain global results. Since these tables hold simple value counts, they can easily be aggregated by performing an element-wise sum of the corresponding tables. These steps are summarized in Equation (\[eq:cTables\]), where $CTables$ is an RDD of keys and values, and where each key corresponds to a feature pair and each value to a contingency table.

$$\begin{aligned}
\label{eq:cTables}
pairs &= \left \{ (feat_a, feat_b), \cdots, (feat_x, feat_y) \right \} \nonumber \\
nc &= \left | pairs \right | \nonumber \\
CTables &= D.mapPartitions(localCTables(pairs)).reduceByKey(sum) \nonumber \\
CTables &= \begin{bmatrix}
((feat_a, feat_b), ctable_{a,b})\\
\vdots \\
((feat_x, feat_y), ctable_{x,y})\\
\end{bmatrix}_{nc \times 1} \nonumber \\\end{aligned}$$

Once the contingency tables have been obtained, the calculation of the entropies and conditional entropies is straightforward since all the information necessary for each calculation is contained in a single row of the $CTables$ RDD. This calculation can therefore be performed in parallel by processing the local rows of this RDD.

Once the distributed calculation of the correlations is complete, control returns to the driver, which continues execution of line \[lin:expand\] in Algorithm \[alg:cFS\]. As can be observed, the distributed work only happens when new correlations are needed, and this occurs in only two cases: (i) when new pairs of features need to be evaluated during the search, and (ii) at the end of the execution if the user requests the addition of locally predictive features.

To sum up, every iteration in Algorithm \[alg:cFS\] expands the current best subset and obtains a group of subsets for evaluation. This evaluation requires a merit, and the merit for each subset is obtained according to Figure \[fig:horizontalPartResume\], which illustrates the most important steps in the horizontal partitioning scheme using a case where correlations between features f2 and f1 and between f2 and f3 are calculated in order to evaluate a subset.

![Horizontal partitioning steps for a small dataset D to obtain the correlations needed to evaluate a features subset[]{data-label="fig:horizontalPartResume"}](fig01.eps){width="100.00000%"}

Vertical Partitioning {#subsec:vecticalPart}
---------------------

Vertical partitioning has already been proposed in Spark by Ramírez-Gallego et al. [@Ramirez-Gallego2017], using another important FS filter, mRMR. Although mRMR is a ranking algorithm (it does not select subsets), it also requires the calculation of information theory measures such as entropies and conditional entropies between features. Since data is distributed horizontally by Spark, those authors propose two main operations to perform the vertical distribution:

-   *Columnar transformation*.
Rather than use the traditional format whereby the dataset is viewed as a matrix whose columns represent features and rows represent instances, a transposed version is used in which the data represented as an RDD is distributed by features and not by instances, in such a way that the data for a specific feature will in most cases be stored and processed by the same node. Figure \[fig:columnarTrans\], based on Ramírez-Gallego et al. [@Ramirez-Gallego2017], explains the process using an example based on a dataset with two partitions, seven instances and four features.

-   *Feature broadcasting*. Because features must be processed in pairs to calculate conditional entropies and because different features can be stored in different nodes, some features are broadcast over the cluster so all nodes can access and evaluate them along with the other stored features.

![Example of a columnar transformation of a small dataset with two partitions, seven instances and four features (from [@Ramirez-Gallego2017])[]{data-label="fig:columnarTrans"}](fig02.eps){width="100.00000%"}

In the case of the adapted mRMR [@Ramirez-Gallego2017], since every step in the search requires the comparison of a single feature with a group of remaining features, it proves efficient, at each step, to broadcast this single feature (rather than multiple features). In the case of the CFS, the core issue is that, at any point in the search when expansion is performed, if the size of the subset being expanded is $k$, then the correlations between the $m-k$ remaining features and $k-1$ features in the subset being expanded have already been calculated in previous steps; consequently, only the correlations between the most recently added feature and the $m-k$ remaining features are missing. Therefore, the proposed operations can be applied efficiently in the CFS just by broadcasting the most recently added feature.

The disadvantages of vertical partitioning are that (i) it requires an extra processing step to change the original layout of the data, and this requires shuffling, (ii) it needs data transmission to broadcast a single feature in each search step, and (iii) by default, the dataset is divided into a number of partitions equal to the number of features $m$ in the dataset, which may not be optimal for all cases (while this parameter can be tuned, it can never exceed $m$). The main advantage of vertical partitioning is that the data layout and the broadcasting of the compared feature move all the information needed to calculate the contingency table to the same node, which means that this information can be processed locally and more efficiently. Another advantage is that the whole dataset does not need to be read every time a new set of features has to be compared, since the dataset can be filtered by rows to process only the required features.

Due to the nature of the search strategy (best-first) used in the CFS, the first search step will always involve all features, so no filtering can be performed. For each subsequent step, only one more feature can be filtered out per step. This is especially important with high-dimensionality datasets: the fact that the number of features is much higher than the number of search steps means that the percentage of features that can be filtered out is reduced.

We performed a number of experiments to quantify the effects of the advantages and disadvantages of each approach and to check the conditions in which one approach was better than the other.
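Before presenting those experiments, the following simplified Scala sketch (our own illustrative code; the actual DiCFS package differs in data types and optimizations) shows how the horizontal scheme of Equation (\[eq:cTables\]) can be expressed with the Spark primitives of Section \[sec:spark\]: each worker computes partial contingency tables for its rows with $mapPartitions$, and the partial tables are merged per feature pair with $reduceByKey$:

```scala
import org.apache.spark.rdd.RDD
import scala.collection.mutable

object HorizontalCTables {

  // A contingency table as a map from a pair of feature values to a count.
  type CTable = mutable.Map[(Int, Int), Long]

  // Local phase (Algorithm 2): a single pass over the rows of one partition,
  // counting value co-occurrences for every requested feature pair.
  def localCTables(pairs: Seq[(Int, Int)])(
      rows: Iterator[Array[Int]]): Iterator[((Int, Int), CTable)] = {
    val tables = pairs.map(p => p -> mutable.Map.empty[(Int, Int), Long]).toMap
    for (r <- rows; (x, y) <- pairs) {
      val t = tables((x, y))
      val key = (r(x), r(y))
      t(key) = t.getOrElse(key, 0L) + 1L
    }
    tables.iterator
  }

  // Element-wise sum of two partial contingency tables.
  def sum(a: CTable, b: CTable): CTable = {
    for ((k, v) <- b) a(k) = a.getOrElse(k, 0L) + v
    a
  }

  // Global phase: one contingency table per feature pair, as in the
  // CTables RDD of Equation (eq:cTables).
  def cTables(d: RDD[Array[Int]], pairs: Seq[(Int, Int)]): RDD[((Int, Int), CTable)] =
    d.mapPartitions(localCTables(pairs)).reduceByKey(sum)
}
```

From each row of the resulting RDD, the symmetrical uncertainty of the corresponding pair can then be obtained locally, as in the sketch of Section \[sec:cFS\].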
Experiments {#sec:experiments}
===========

The experiments tested and compared time-efficiency and scalability for the horizontal and vertical DiCFS approaches so as to check whether they improved on the original non-distributed version of the CFS. We also tested and compared execution times with those reported in the recently published research by Eiras-Franco et al. [@Eiras-Franco2016] into a distributed version of the CFS for regression problems. Note that no experiments were needed to compare the quality of the results for the distributed and non-distributed CFS versions, as the distributed versions were designed to return the same results as the original algorithm.

For our experiments, we used a single master node and up to ten slave nodes from the big data platform of the Galician Supercomputing Technological Centre (CESGA).[^4] The nodes have the following configuration:

-   CPU: 2 X Intel Xeon E5-2620 v3 @ 2.40GHz
-   CPU Cores: 12 (2X6)
-   Total Memory: 64 GB
-   Network: 10GbE
-   Master Node Disks: 8 X 480GB SSD SATA 2.5" MLC G3HS
-   Slave Node Disks: 12 X 2TB NL SATA 6Gbps 3.5" G2HS
-   Java version: OpenJDK 1.8
-   Spark version: 1.6
-   Hadoop (HDFS) version: 2.7.1
-   WEKA version: 3.8.1

The experiments were run on four large-scale publicly available datasets. The ECBDL14 [@Bacardit2012] dataset, from the protein structure prediction field, was used in the ECBDL14 Big Data Competition included in the GECCO’2014 international conference. This dataset has approximately 33.6 million instances, 631 attributes and 2 classes, 98% of its examples are negative, and it occupies about 56GB of disk space. HIGGS [@Sadowski2014], from the UCI Machine Learning Repository [@Lichman2013], is a recent dataset representing a classification problem that distinguishes between a signal process which produces Higgs bosons and a background process which does not. KDDCUP99 [@Ma2009] represents data from network connections and classifies them as normal connections or different types of attacks (a multi-class problem). Finally, EPSILON is an artificial dataset built for the Pascal Large Scale Learning Challenge in 2008.[^5] Table \[tbl:datasets\] summarizes the main characteristics of the datasets.

  Dataset                   No. of Samples ($\times 10^{6}$)   No. of Features   Feature Types            Problem Type
  ------------------------- ---------------------------------- ----------------- ------------------------ --------------
  ECBDL14 [@Bacardit2012]   $\sim$33.6                         632               Numerical, Categorical   Binary
  HIGGS [@Sadowski2014]     11                                 28                Numerical                Binary
  KDDCUP99 [@Ma2009]        $\sim$5                            42                Numerical, Categorical   Multiclass
  EPSILON                   0.5                                2,000             Numerical                Binary

  : Main characteristics of the datasets[]{data-label="tbl:datasets"}

With respect to algorithm parameter configuration, two defaults were used in all the experiments: the inclusion of locally predictive features and the use of five consecutive fails as a stopping criterion. These defaults apply to both distributed and non-distributed versions. Moreover, for the vertical partitioning version, the number of partitions was equal to the number of features, as set by default in Ramírez-Gallego et al. [@Ramirez-Gallego2017]. The horizontally and vertically distributed versions of the CFS are labelled DiCFS-hp and DiCFS-vp, respectively.

We first compared execution times for the algorithms on the four datasets using ten slave nodes with all their cores available. For the case of the non-distributed version of the CFS, we used the implementation provided in the WEKA platform [@Hall2009a]. The results are shown in Figure \[fig:execTimeVsNInsta\].
![Execution time with respect to percentages of instances in four datasets, for DiCFS-hp and DiCFS-vp using ten nodes and for a non-distributed implementation in WEKA using a single node[]{data-label="fig:execTimeVsNInsta"}](fig03.eps){width="100.00000%"}

Note that, with the aim of offering a comprehensive view of execution time behaviour, Figure \[fig:execTimeVsNInsta\] shows results for sizes larger than 100% of the datasets. To achieve these sizes, the instances in each dataset were duplicated as many times as necessary. Note also that, since ECBDL14 is a very large dataset, its temporal scale is different from that of the other datasets.

Regarding the non-distributed version of the CFS, Figure \[fig:execTimeVsNInsta\] does not show results for WEKA in the experiments on the ECBDL14 dataset, because it was impossible to execute that version on the CESGA platform, as its memory requirements exceeded the available limits. This also occurred with the larger samples from the EPSILON dataset for both algorithms: DiCFS-vp and DiCFS-hp. Even when it was possible to execute the WEKA version with the two smallest samples from the EPSILON dataset, these samples are not shown because the execution times were too high (19 and 69 minutes, respectively). Figure \[fig:execTimeVsNInsta\] shows successful results for the smaller HIGGS and KDDCUP99 datasets, which could still be processed in a single node of the cluster, as required by the non-distributed version. However, even in the case of these smaller datasets, the execution times of the WEKA version were worse compared to those of the distributed versions.

Regarding the distributed versions, DiCFS-vp was unable to process the oversized versions of the ECBDL14 dataset, due to the large amounts of memory required to perform shuffling. The HIGGS and KDDCUP99 datasets, however, showed an increasing difference in favor of DiCFS-hp, due to the fact that these datasets have much smaller feature sizes than ECBDL14 and EPSILON. As mentioned earlier, DiCFS-vp ties parallelization to the number of features in the dataset, so datasets with small numbers of features were not able to fully leverage the cluster nodes. Another view of the same issue is given by the results for the EPSILON dataset; in this case, DiCFS-vp obtained the best execution times for the 300%-sized and larger samples. This was because there were too many partitions (2,000) for the number of instances available in samples smaller than the 300% size; further experiments showed that adjusting the number of partitions to 100 reduced the execution time of DiCFS-vp for the 100% EPSILON dataset from about 2 minutes to 1.4 minutes (faster than DiCFS-hp). Reducing the number of partitions further, however, caused the execution time to start increasing again.

Figure \[fig:execTimeVsNFeats\] shows the results for similar experiments, except that this time the percentage of features in the datasets was varied and the features were copied to obtain oversized versions of the datasets. It can be observed that the number of features had a greater impact on the memory requirements of DiCFS-vp. This caused problems not only in processing the ECBDL14 dataset but also the EPSILON dataset. We can also see quadratic time complexity in the number of features and how the temporal scale in the EPSILON dataset (with the highest number of dimensions) matches that of the ECBDL14 dataset.
As for the KDDCUP99 dataset, the results show that increasing the number of features obtained a better level of parallelization and a slightly improved execution time of DiCFS-vp compared to DiCFS-hp for the 400% dataset version and above.

![Execution times with respect to different percentages of features in four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:execTimeVsNFeats"}](fig04.eps){width="100.00000%"}

An important measure of the scalability of an algorithm is *speed-up*, which indicates how capable an algorithm is of leveraging a growing number of nodes so as to reduce execution times. We used the speed-up definition shown in Equation (\[eq:speedup\]) and used all the available cores for each node (i.e., 12). The experimental results are shown in Figure \[fig:speedup\], where it can be observed that, for all four datasets, DiCFS-hp scales better than DiCFS-vp. It can also be observed that the HIGGS and KDDCUP99 datasets are too small to take advantage of the use of more than two nodes and also that practically no speed-up improvement is obtained from increasing the number of nodes.

To summarize, our experiments show that even when vertical partitioning results in shorter execution times (the case in certain circumstances, e.g., when the dataset has an adequate number of features and instances for optimal parallelization according to the cluster resources), the benefits are not significant and may even be eclipsed by the effort invested in determining whether this approach is indeed the most efficient for a particular dataset or a particular hardware configuration, or in fine-tuning the number of partitions. Horizontal partitioning should therefore be considered as the best option in the general case.

$$\label{eq:speedup}
speedup(m)=\left[ \frac { \text{execution time on 2 nodes} }{ \text{execution time on } m \text{ nodes} } \right]$$

![Speed-up for four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:speedup"}](fig05.eps){width="100.00000%"}

We also compared the DiCFS-hp approach with that of Eiras-Franco et al. [@Eiras-Franco2016], who described a Spark-based distributed version of the CFS for regression problems. The comparison was based on their experiments with the HIGGS and EPSILON datasets but using our current hardware. Those datasets were selected because they contain only numerical features and so could naturally be treated as regression problems. Table \[tbl:speedUp\] shows execution time and speed-up values obtained for different sizes of both datasets for both distributed and non-distributed versions, considering them to be classification and regression problems. The regression-oriented Spark and WEKA versions are labelled RegCFS and RegWEKA, respectively; the number after the dataset name represents the sample size and the letter indicates whether instances (*i*) or features (*f*) were removed or added. In the case of oversized samples, the method used was the same as described above, i.e., features or instances were copied as necessary. The experiments were performed using ten cluster nodes for the distributed versions and a single node for the WEKA version. The resulting speed-up was calculated as the WEKA execution time divided by the corresponding Spark execution time. The original experiments in [@Eiras-Franco2016] were performed only using EPSILON\_50i and HIGGS\_100i.
It can be observed that a much better speed-up was obtained by the DiCFS-hp version for EPSILON\_50i, but in the case of HIGGS\_100i, the resulting speed-up in the classification version was lower than in the regression version. However, in order to make a broader comparison, two more versions of each dataset were considered; Table \[tbl:speedUp\] shows that the DiCFS-hp version achieves a better speed-up in all cases except for the HIGGS\_100i dataset mentioned before.

  Dataset        WEKA (s)   RegWEKA (s)   DiCFS-hp (s)   RegCFS (s)   Speed-up RegCFS   Speed-up DiCFS-hp
  -------------- ---------- ------------- -------------- ------------ ----------------- -------------------
  EPSILON\_25i   1011.42    655.56        58.85          63.61        10.31             17.19
  EPSILON\_25f   393.91     703.95        25.83          55.08        12.78             15.25
  EPSILON\_50i   4103.35    2228.64       76.98          110.13       20.24             53.30
  HIGGS\_100i    182.86     327.61        21.34          23.70        13.82             8.57
  HIGGS\_200i    2079.58    475.98        28.89          26.77        17.78             71.99
  HIGGS\_200f    934.07     720.32        21.42          34.35        20.97             43.61

  : Execution time and speed-up values for different CFS versions for regression and classification[]{data-label="tbl:speedUp"}

Conclusions and Future Work {#sec:conclusions}
===========================

We describe two parallel and distributed versions of the CFS filter-based FS algorithm using the Apache Spark programming model: DiCFS-vp and DiCFS-hp. These two versions essentially differ in how the dataset is distributed across the nodes of the cluster. The first version distributes the data by splitting rows (instances) and the second version, following Ramírez-Gallego et al. [@Ramirez-Gallego2017], distributes the data by splitting columns (features). As the outcome of a four-way comparison of DiCFS-vp and DiCFS-hp, a non-distributed implementation in WEKA and a distributed regression version in Spark, we can conclude as follows:

-   As was expected, both DiCFS-vp and DiCFS-hp were able to handle larger datasets in a much more time-efficient manner than the classical WEKA implementation. Moreover, in many cases they were the only feasible way to process certain types of datasets because of prohibitive WEKA memory requirements.

-   Of the horizontal and vertical partitioning schemes, the horizontal version (DiCFS-hp) proved to be the better option in the general case due to its better scalability and its natural partitioning mode that enables the Spark framework to make better use of cluster resources.

-   For classification problems, the benefits obtained from distribution, compared to the non-distributed version, can be considered equal to or even better than the benefits already demonstrated for the regression domain [@Eiras-Franco2016].

Regarding future research, an especially interesting line is whether it is necessary for this kind of algorithm to process all the data available or whether it would be possible to design automatic sampling procedures that could guarantee that, under certain circumstances, equivalent results could be obtained. In the case of the CFS, this question becomes more pertinent in view of the study of symmetrical uncertainty in datasets with up to 20,000 samples by Hall [@Hall1999], where tests showed that symmetrical uncertainty decreased exponentially with the number of instances and then stabilized at a certain number. Another line of future work could be research into different data partitioning schemes that could, for instance, improve the locality of data while overcoming the disadvantages of vertical partitioning.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors thank CESGA for use of their supercomputing resources.
This research has been partially supported by the Spanish Ministerio de Economía y Competitividad (research projects TIN2015-65069-C2-1-R and TIN2016-76956-C3-3-R), the Xunta de Galicia (Grants GRC2014/035 and ED431G/01) and the European Union Regional Development Funds. R. Palma-Mendoza holds a scholarship from the Spanish Fundación Carolina and the National Autonomous University of Honduras.

References {#references .unnumbered}
==========

D. W. Aha, D. Kibler, M. K. Albert, Instance-based learning algorithms, Machine Learning 6 (1) (1991) 37–66. doi:10.1023/A:1022689900470.

J. Bacardit, P. Widera, A. Márquez-Chamorro, F. Divina, J. S. Aguilar-Ruiz, N. Krasnogor, Contact map prediction using a large-scale ensemble of rule sets and the fusion of multiple predicted structural features, Bioinformatics 28 (19) (2012) 2441–2448. doi:10.1093/bioinformatics/bts472.

R. Bellman, Dynamic Programming, Rand Corporation research study, Princeton University Press, 1957.

V. Bolón-Canedo, N. Sánchez-Maroño, A. Alonso-Betanzos, Distributed feature selection: An application to microarray data classification, Applied Soft Computing 30 (2015) 136–150. doi:10.1016/j.asoc.2015.01.035.

V. Bolón-Canedo, N. Sánchez-Maroño, A. Alonso-Betanzos, Recent advances and emerging challenges of feature selection in the context of big data, Knowledge-Based Systems 86 (2015) 33–45. doi:10.1016/j.knosys.2015.05.014.

M. Dash, H. Liu, Consistency-based search in feature selection, Artificial Intelligence 151 (1–2) (2003) 155–176. doi:10.1016/S0004-3702(03)00079-1.

J. Dean, S. Ghemawat, MapReduce: Simplified data processing on large clusters, in: Proceedings of the 6th Symposium on Operating Systems Design and Implementation (OSDI), 2004, pp. 137–149.

J. Dean, S. Ghemawat, MapReduce: Simplified data processing on large clusters, Communications of the ACM 51 (1) (2008) 107–113. doi:10.1145/1327452.1327492.

R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, John Wiley & Sons, 2001.

C. Eiras-Franco, V. Bolón-Canedo, S. Ramos, J. González-Domínguez, A. Alonso-Betanzos, J. Touriño, Multithreaded and Spark parallelization of feature selection filters, Journal of Computational Science 17 (2016) 609–619. doi:10.1016/j.jocs.2016.07.002.

U. M. Fayyad, K. B. Irani, Multi-interval discretization of continuous-valued attributes for classification learning, in: Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI), 1993. http://trs-new.jpl.nasa.gov/dspace/handle/2014/35171

D. J. Garcia, L. O. Hall, D. B. Goldgof, K. Kramer, A parallel feature selection algorithm from random subsets, 2004.

E. E. Ghiselli, Theory of Psychological Measurement, McGraw-Hill series in psychology, McGraw-Hill, 1964. https://books.google.es/books?id=mmh9AAAAMAAJ

I. Guyon, A. Elisseeff, An introduction to variable and feature selection, Journal of Machine Learning Research 3 (2003) 1157–1182.

M. A. Hall, Correlation-based feature selection for machine learning, PhD thesis, Department of Computer Science, Waikato University, New Zealand, 1999.

M. A. Hall, Correlation-based feature selection for discrete and numeric class machine learning, in: Proceedings of the 17th International Conference on Machine Learning (ICML), 2000, pp. 359–366. http://dl.acm.org/citation.cfm?id=645529.657793

M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, I. Witten, The WEKA data mining software: An update, SIGKDD Explorations 11 (1) (2009) 10–18. doi:10.1145/1656274.1656278.

T. K. Ho, Random decision forests, in: Proceedings of the Third International Conference on Document Analysis and Recognition (ICDAR '95), IEEE Computer Society, Washington, DC, 1995, pp. 278–282.

A. Idris, A. Khan, Y. S. Lee, Intelligent churn prediction in telecom: Employing mRMR feature selection and RotBoost based ensemble classification, Applied Intelligence 39 (3) (2013) 659–672. doi:10.1007/s10489-013-0440-x.

A. Idris, M. Rizwan, A. Khan, Churn prediction in telecom using Random Forest and PSO based data balancing in combination with various feature selection strategies, Computers and Electrical Engineering 38 (6) (2012) 1808–1819. doi:10.1016/j.compeleceng.2012.09.001.

I. Kononenko, Estimating attributes: Analysis and extensions of RELIEF, in: Machine Learning: ECML-94, Lecture Notes in Computer Science 784, Springer, 1994, pp. 171–182. doi:10.1007/3-540-57868-4.

J. Kubica, S. Singh, D. Sorokina, Parallel large-scale feature selection, in: Scaling Up Machine Learning, Cambridge University Press, 2011, pp. 352–370. doi:10.1017/CBO9781139042918.018.

J. Leskovec, A. Rajaraman, J. D. Ullman, Mining of Massive Datasets, Cambridge University Press, 2014. doi:10.1017/CBO9781139924801.

M. Lichman, UCI Machine Learning Repository, 2013. http://archive.ics.uci.edu/ml

J. Ma, L. K. Saul, S. Savage, G. M. Voelker, Identifying suspicious URLs: An application of large-scale online learning, in: Proceedings of the International Conference on Machine Learning (ICML), Montreal, Quebec, 2009.

R. J. Palma-Mendoza, D. Rodriguez, L. De-Marcos, Distributed ReliefF-based feature selection in Spark, Knowledge and Information Systems (2018) 1–20. doi:10.1007/s10115-017-1145-y.

H. Peng, F. Long, C. Ding, Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (8) (2005) 1226–1238. doi:10.1109/TPAMI.2005.159.

D. Peralta, S. del Río, S. Ramírez-Gallego, I. Triguero, J. M. Benitez, F. Herrera, Evolutionary feature selection for big data classification: A MapReduce approach, Mathematical Problems in Engineering 2015 (2015). doi:10.1155/2015/246139.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in C, Cambridge University Press, 1982.

J. R. Quinlan, Induction of decision trees, Machine Learning 1 (1) (1986) 81–106. doi:10.1023/A:1022643204877.

J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1992.

S. Ramírez-Gallego, I. Lastra, D. Martínez-Rego, V. Bolón-Canedo, J. M. Benítez, F. Herrera, A. Alonso-Betanzos, Fast-mRMR: Fast minimum redundancy maximum relevance algorithm for high-dimensional big data, International Journal of Intelligent Systems 32 (2) (2017) 134–152. doi:10.1002/int.21833.

I. Rish, An empirical study of the naive Bayes classifier, in: IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, Vol. 3, IBM, 2001, pp. 41–46.

P. Sadowski, P. Baldi, D. Whiteson, Searching for Higgs boson decay modes with deep learning, in: Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014, pp. 1–9.

J. Silva, A. Aguiar, F. Silva, Parallel asynchronous strategies for the execution of feature selection algorithms, International Journal of Parallel Programming (2017) 1–32. doi:10.1007/s10766-017-0493-2.

V. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995.

Y. Wang, W. Ke, X. Tao, A feature selection method for large-scale network traffic classification based on Spark, Information 7 (1) (2016) 6. doi:10.3390/info7010006.

X. Wu, X. Zhu, G.-Q. Wu, W. Ding, Data mining with big data, IEEE Transactions on Knowledge and Data Engineering 26 (1) (2014) 97–107. doi:10.1109/TKDE.2013.109.

M. Zaharia, M. Chowdhury, T. Das, A. Dave, Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing, in: Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation (NSDI '12), 2012.

M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica, Spark: Cluster computing with working sets, in: Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing (HotCloud '10), 2010.

Z. Zhao, H. Liu, Searching for interacting features, in: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2007, pp. 1156–1161.

Z. Zhao, R. Zhang, J. Cox, D. Duling, W. Sarle, Massively parallel feature selection: an approach based on variance preservation, Machine Learning 92 (1) (2013) 195–220. doi:10.1007/s10994-013-5373-4.

[^1]: <https://spark-packages.org>

[^2]: <https://github.com/rauljosepalma/DiCFS>

[^3]: <http://www.sas.com/en_us/software/high-performance-analytics.html>

[^4]: <http://bigdata.cesga.es/>

[^5]: <http://largescale.ml.tu-berlin.de/about/>
{ "pile_set_name": "ArXiv" }
Effects of alcohol on prolonged cognitive performance measured with Stroop's Color Word Test.

Twenty-four men and 24 women were randomly assigned in equal numbers to an Alcohol group, a Placebo group, or a Control group. The alcohol dose was 1.0 ml of 100% alcohol/kg of body weight. Subjects were tested three consecutive times using Stroop's Color Word Test. The dependent measures were total time needed to complete the test, number of errors made, and number of hesitations. Data were grouped into three blocks of 100 words. Results indicated that the number of hesitations was too insensitive a measure to yield any significant effects. On the first two measures, alcohol had a detrimental effect in that the Alcohol group needed more time to complete the test and made more errors than the Placebo group. There was also a significant interaction of alcohol dose by sex by blocks on both these measures, indicating that the detrimental effect of alcohol over time was restricted to women. Implications of the results are discussed.
{ "pile_set_name": "PubMed Abstracts" }
Fighting for a climate change treaty

A NASA study finds the amount of ozone destroyed in the Arctic in 2011, shown in this image, was comparable to the ozone 'hole' that formed each spring since the mid-1980s [EPA]

In 1974, chemists Mario Molina and Frank Sherwood Rowland published a landmark article that demonstrated the ability of chlorofluorocarbons (CFCs) to break down the ozone layer, the atmospheric region that plays a vital role in shielding humans and other life from harmful ultraviolet (UV) radiation. It marked the opening salvo of a decade-long fight to phase out and ban the use of these widespread industrial compounds.

The period between Molina and Rowland's article and the establishment of an international agreement to regulate CFCs was remarkably similar to current climate change politics. It included calls for scientific consensus before moving on the issue, industry pushback, fears over economic chaos, claims of inadequate chemical substitutes, difficulty in getting industrialised nations to the table, and debates and diplomacy over how to get developing nations to agree to regulate a problem predominantly caused by the industrialised world. Together, these issues created a political climate that was anything but conducive to an agreement for avoiding environmental catastrophe.

And yet an agreement was reached. CFC production was greatly curtailed and disaster was averted. The Montreal Protocol - initially signed by 24 nations in 1987 and now ratified by 196 countries - bound nations to a set of policies that would rapidly reduce the use of CFCs. It became the first global environmental treaty to implement the precautionary approach, mandating strong actions now to avert future damage. The protocol has since become, in the words of former UN secretary-general Kofi Annan, "perhaps the single most successful international environmental agreement." It can also be called the first climate change treaty, since ozone-depleting substances are potent greenhouse gases.

Lessons from the fight and eventual ban of CFCs can illuminate our current struggles to regulate greenhouse gases and provide guidance toward creating the strong treaty necessary to stave off another environmental disaster.

An $8bn industry

For more than 40 years, the generally non-toxic and non-flammable compounds known as CFCs were widely produced and used in refrigerants, propellants, and solvents. They were first manufactured as a safe alternative to ammonia and sulphur dioxide in refrigeration in the early 1930s. Their widespread success, due to their unique and seemingly miraculous chemical properties, propelled an $8bn industry that employed 600,000 people directly and was reaching new heights of manufacturing at the time of Molina and Rowland's discovery.

As CFC production swelled to meet the global demand for aerosols and refrigeration, so too did the release of these ozone-depleting compounds into the atmosphere. Unlike carbon dioxide, CFCs are a foreign element in the atmosphere. When released, CFC molecules rise and reach the ozone layer, where they encounter UV radiation. The strong radiation breaks down these molecules into their simpler parts, most notably chlorine atoms. Molina and Rowland realised these now-free chlorine atoms could react with and deplete the ozone layer. The US Environmental Protection Agency estimates that one chlorine atom can destroy 100,000 ozone molecules.
Continuing to produce CFCs at such high levels would inevitably have depleted more of the ozone layer and would have led to greater harm to humans from UV rays. Further studies concurred with Molina and Rowland's findings and predicted losses of ozone that would have greatly increased cases of skin cancer and eye damage. Other detrimental impacts included reduced productivity in plants and crops and harm to marine life and air quality.

The findings provoked wide-ranging reactions. Emboldened by the passage of the Clean Air and Clean Water Acts in the United States, the science and environmental communities wanted the US government to ban production and use of CFCs. They saw the depletion of the ozone layer as a grave, imminent threat that needed to be met with decisive action. The CFC industry, led by DuPont, which accounted for nearly 50 per cent of the market, attacked the theory as unfounded, arguing that no stratospheric ozone loss had been observed. DuPont and other CFC manufacturers lobbied extensively to prevent states from passing bills banning CFC use.

The 'ban-now-find-out-later' approach

DuPont also embarked on an advertising campaign to undermine the idea that CFCs damaged the ozone layer, while simultaneously arguing that any hasty restrictions would have a disastrous impact on businesses, jobs and the economy. DuPont's chairman, Irving Shapiro, announced to several major newspapers that "the 'ban-now-find-out-later' approach thrust upon an $8bn segment of industry, both in the headlines and in many legislative proposals, is a disturbing trend. Businesses can be destroyed before scientific facts are assembled and evaluated … The nation cannot afford to act on this and other issues before the full facts are known."

Public health concerns, however, trumped industry arguments and consumers began boycotting aerosol sprays. Pressure from environmentalists and consumer groups resulted in a ban on aerosol sprays in 1978. In the end, though, the ban turned out to be only a partial victory for both sides. Nearly all sprays were banned, but numerous putatively "essential" uses of CFCs in air conditioners and refrigerators remained unregulated. The United States was the only major CFC-producing nation to voluntarily eliminate CFCs in aerosols, although relatively minor producers such as Canada, Denmark and Sweden soon followed suit. And while European nations today are at the forefront of promoting climate change legislation, in the 1970s and 1980s CFC-producing giants like England and France were reluctant to impose restrictions.

After these initial efforts by individual nations, progress toward an international CFC agreement ground to a halt in the early 1980s. This was largely because protecting the ozone layer produced an unprecedented problem for human society. The public and governments were being told that the impacts of a thinning ozone layer would not be seen for decades. Yet in order to prevent much higher risks of skin cancer and cataracts, it was essential to act now and begin phasing out CFCs. Manufacturers continued to resist, arguing that in the absence of suitable substitutes, curtailing CFC production would result in significant job losses and a large reduction in the supply of air conditioners and refrigerators. They argued that action on CFCs would harm both the developed and developing world.
On top of this, almost all nations would have to agree on a coordinated phase-out and eventual ban of the industrial compounds, since the release of CFCs by any one nation would have a global impact.

Delayed implementations

Producers of CFCs continued to wage a public battle against further regulation. Sceptics stepped up their public relations campaigns disputing the evidence, finding scientists to argue persuasively against the threat, and predicting dire economic consequences. The doubt did nothing to change the scientific consensus around CFCs and ozone depletion, but it helped to delay implementation of limits on CFCs for many years.

While special interests were fighting it out in the public square, diplomacy was taking place behind the scenes. Domestic and international workshops were assessing the CFC-ozone connection while proposing various regulations, compromises, and deals to get major CFC-producing nations and developing nations to the table to begin talks toward an international agreement. The United States and the UN Environment Programme played leading roles. The fruit of this diplomatic labour was the Vienna Convention of March 1985, which produced a framework agreement in which states agreed to cooperate in research and assessments of the ozone problem, to exchange information and to adopt measures to prevent harm to the ozone layer. But the accord fell far short of mandating actions to limit CFC production or of establishing a timetable to phase it out. Much like the current climate change debate, it looked as if action on the issue was about to be stymied by a lengthy political struggle.

Two months later, scientists discovered the Antarctic ozone hole. From a climate change perspective, this would be comparable to a large ice sheet breaking off from an ice shelf, melting overnight and causing a small rise in sea level, thereby warning the world of the potential consequences of unchecked climate change. Scientists discovered that ozone levels over the Antarctic had dropped by 10 per cent during the winter and an ozone hole had begun to form. The ozone hole is an area with extremely low amounts of ozone, not an actual hole. But the discovery, the first startling proof of the thinning ozone layer, was an alarming wake-up call that human activities can have dire consequences for the atmosphere and, in turn, major health implications.

Intense media attention galvanised public opinion and sparked fears that ozone holes might form over populated cities around the world. The EPA estimated that if CFC production continued to grow at 2.5 per cent a year until 2050, 150 million Americans would develop skin cancer, leading to some 3 million deaths by 2075.

After the momentous discovery of ozone depletion, the balance shifted toward regulation. Industry at first still lobbied in private, but eventually began to change its position as scientific evidence of ozone depletion continued to mount. In the summer of 1987, as preparations were under way for the Montreal Conference on Substances that Deplete the Ozone Layer, the Reagan administration publicly came out in support of international limits on CFC production. This effectively put a stop to industry opposition and propelled an agreement among industrialised nations to reduce CFC production by 50 per cent by 2000. The resulting Montreal Protocol included a 10-year grace period and a fund for developing nations in order to get them to agree to regulate a problem largely generated by the industrialised world.
The Multilateral Fund has since provided $2.7bn to developing nations for transitioning to better technology and CFC substitutes and for meeting phase-out obligations. The fund was the first financial instrument of its kind and is the model for the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) programme, in which industrial nations use carbon offsets to provide developing nations with an incentive for conserving their forests.

The Montreal Protocol

Since 1987, the Montreal Protocol has been strengthened with the addition of more ozone-damaging substances to the list and the compliance of nearly 200 countries. Ozone-depleting substances in the atmosphere hit their peak in 1997–98 and have been falling ever since. Action to protect the ozone layer has greatly improved air quality while reducing the future risk of skin cancer, cataracts, and blindness. Furthermore, the treaty has done more than any other to reduce climate change by stopping 135bn metric tonnes of CO2-equivalent emissions from escaping to the atmosphere in the last two decades. Due to the nature of CFCs, however, the ozone layer is still thinning in certain places. This may well continue until the middle of the 21st century, at which point the ozone layer should begin to recover.

The true significance of the international agreement is best illustrated by a NASA simulation of what would have occurred had CFC production continued at its pre-Montreal rate. By 2020, 17 per cent of global ozone would have been destroyed. By 2040, the ozone thinning would have affected the entire planet. And by 2065, atmospheric ozone would have dropped to 70 per cent below 1970s levels. As a result, there would have been a threefold increase in the amount of harmful UV radiation reaching the planet's surface, resulting in tens of millions of skin cancer and cataract cases and trillions in health care costs. Luckily, it is a fate we managed to avoid.

The first and foremost lesson to take from the fight to ban CFCs is that it was successful. The discovery that human activity was harming the atmosphere influenced public opinion and consumer buying power enough to change national policy and provide momentum toward an international agreement that enacted regulations to prevent a future catastrophe. Nations agreed to take precautions that would cause some short-term difficulties in order to head off a long-term disaster. Secondly, health concerns were the driving motivator behind public and government action. Peter Morrisette argues that the passage of a meaningful ozone treaty relied on four key factors: ozone depletion was viewed as a global problem; there was strong scientific understanding of the causes and effects of ozone depletion; there were public-health concerns about skin cancer, which were amplified by the ozone hole discovery; and substitutes for CFCs were available.

Climate change is also viewed as a global problem, and there is a nearly universal consensus among climate scientists over the causes. Some argue that the major difference between obtaining a treaty back then and reaching an agreement today is the lack of readily available substitutes in the form of alternative energy - wind, solar, electric - to take the place of fossil fuels.

International agreement

Yet the claim that no cost-effective, efficient substitutes were available was also made during the CFC debates.
It was not until after the ozone hole discovery, at which point an international agreement seemed likely, that industry announced that substitutes could be made available under the right market conditions and policy incentives. CFC producers used the ensuing protocol as a mechanism to develop and market substitutes. Might not a similar situation unfold today if governments enforced greenhouse gas reductions, and policy and market conditions fostered alternative energies?

It seems the major difference between a successful ozone treaty and an out-of-reach climate agreement is the weak connection made between climate change and human health. Where ozone depletion was primarily thought of as a human health issue, climate change is an environmental issue. Until that narrative is altered, an agreement on climate change could remain elusive. Encouraging signs toward that end are emerging, none more so than the US EPA declaration that greenhouse gases jeopardise public health. The declaration paves the way for the EPA to regulate greenhouse gas emissions from coal plants and other facilities. The regulatory route seems the most feasible way to reduce greenhouse emissions in the United States, as any climate change legislation has been killed in Congress. The Supreme Court ruling in favour of the EPA gave the agency judicial approval to use its authority to regulate such gases under the Clean Air Act. Just as measures to protect the ozone layer have benefited the climate, so too will EPA action on regulating greenhouse gases provide important health benefits by cleaning up the air.

Added benefits of climate mitigation

It is important to communicate that climate change mitigation will have the added benefit of reducing air pollution and improving respiratory health. It will also reduce the use of fossil fuels like oil and coal, whose extraction processes - from mountaintop removal, which clogs streams and pollutes water supplies, to offshore drilling spills, which can contaminate seafood - have direct human health implications.

While regulation at the national level is a good start, an international agreement - perhaps a stronger version of the Kyoto Protocol - will be necessary to achieve global cooperation on climate change. For this to happen, the public will need to voice greater concern and take more action, as it did during the CFC threat. Ozone depletion was framed as an international human health issue, which amplified the public's demand for accelerated government action. A similar approach may work for climate change.

The question that remains is whether a catastrophic discovery similar to the ozone hole will be necessary to spur global concerns over climate change and push governments to act. If so, the consequences may prove to be far more disruptive - economically and ecologically - than the ozone problem of the previous century.

Matthew Cimitile is a writer for the US Geological Survey Coastal and Marine Science Center in St. Petersburg, Florida.
{ "pile_set_name": "Pile-CC" }
Despite a warning from Governor Eric Holcomb not to have gatherings of more than 250 people, and despite all the warnings about the need to self-quarantine in order to protect the elderly, the New Life Christian Center in Indiana held a service Friday night to stick it to all those people who accept science. They urged people — especially sick people — to ignore the “raw, unmitigated stupidity” coming out of the CDC and visit the church. The plan was to “lay hands on the sick, and the sick shall recover.” In direct eye-rolling at our Indiana Governor’s requests, we have a GOAL to have AT LEAST 250 people here at church tomorrow night ! Good lord, ignorant Christians are going to exacerbate a pandemic because they’re too stubborn to listen to anyone who actually understands science… For what it’s worth, there’s video of Friday night’s service on Facebook and the place looks mostly empty: That’s a relief. Kind of. But the church leaders haven’t apologized, and no one should expect them to do so anytime soon, which means they may continue holding services for the foreseeable future. Capitulating to experts would be blasphemous for them. Some churches care for the sick. This one wants to create the sick. It’s dangerous, and they don’t care.
{ "pile_set_name": "OpenWebText2" }
I want to make roasted artichokes for a party tomorrow. Can I hold prepped artichokes (lemon water and oil) in the baking dish overnight?

Well, I thought the better of that strategy and roasted them today. Half of them are vacuum sealed for future use and the other half will be served either at room temp or gently warmed! I was concerned about excessive oxidation.
{ "pile_set_name": "Pile-CC" }
Perceptual-motor skill learning in Gilles de la Tourette syndrome. Evidence for multiple procedural learning and memory systems. Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.
{ "pile_set_name": "PubMed Abstracts" }
1982–83 Georgia Tech Yellow Jackets men's basketball team

The 1982–83 Georgia Tech Yellow Jackets men's basketball team represented the Georgia Institute of Technology. Led by head coach Bobby Cremins, the team finished the season with an overall record of 13–15 (4–10 ACC).

Roster

Schedule and results

References

Category:Georgia Tech Yellow Jackets men's basketball seasons
Category:1982 in sports in Georgia (U.S. state)
Category:1983 in sports in Georgia (U.S. state)
{ "pile_set_name": "Wikipedia (en)" }
= -371 - 685 for s. 8 Solve 1149*b - 10 = 1139*b for b. 1 Solve -8*o - 11 = 5 for o. -2 Solve -5*d + 0*d - 3*d = 0 for d. 0 Solve -34*f + 17*f = -17 for f. 1 Solve 5*p = 11 - 21 for p. -2 Solve -7 = -3*q + 8 for q. 5 Solve -109*z = -129*z - 60 for z. -3 Solve -5 + 1 = -2*t for t. 2 Solve 0 = -27*o + 34*o for o. 0 Solve -7*v - 300 + 286 = 0 for v. -2 Solve 70*k = 102*k + 288 for k. -9 Solve 6*p = -29 + 35 for p. 1 Solve -13*l - 59 + 7 = 0 for l. -4 Solve 23*z + 72 = 5*z for z. -4 Solve -3*r + 8*r = 15 for r. 3 Solve 2567*x = 2589*x - 154 for x. 7 Solve -19*h + 27 = -11 for h. 2 Solve -4*a = 4*a + 40 for a. -5 Solve -32*c + 19*c - 65 = 0 for c. -5 Solve -2*p + p - 1 = 0 for p. -1 Solve 14*l + 250 = 166 for l. -6 Solve 56 = 15*o + 11 for o. 3 Solve -10 = 2*v - 8 for v. -1 Solve 10*c + 2946 = 2956 for c. 1 Solve 47*v - 196 = 19*v for v. 7 Solve 2*w - 84 + 88 = 0 for w. -2 Solve -43*f = -47*f + 16 for f. 4 Solve -10*g + 76 = 46 for g. 3 Solve -43 = 4*q - 55 for q. 3 Solve 0 = 9*o + 14 + 22 for o. -4 Solve -14*z + 10*z = 8 for z. -2 Solve 2*f + 1 = 3 for f. 1 Solve -17*r = 76 + 43 for r. -7 Solve 20 = -8*m + 3*m for m. -4 Solve -m - 6 = -3 for m. -3 Solve 0 = 25*x - 23*x + 8 for x. -4 Solve -46*x + 72 = -20 for x. 2 Solve 25 = -4*f + 9 for f. -4 Solve -54*r + 59*r - 5 = 0 for r. 1 Solve 10 + 17 = -9*v for v. -3 Solve -51 + 48 = -c for c. 3 Solve 25*q = -9*q + 204 for q. 6 Solve 14*b = 10*b - 16 for b. -4 Solve 2*v + 10*v = 24 for v. 2 Solve -23*d = -24*d + 4 for d. 4 Solve 2*l - 10 = 4*l for l. -5 Solve -3*c - 1 = -2*c for c. -1 Solve 12 + 0 = 4*d for d. 3 Solve -325*s + 317*s - 40 = 0 for s. -5 Solve -54*n - 79 - 191 = 0 for n. -5 Solve -18*i + 46 = 100 for i. -3 Solve 0 = -4*r - 4 + 4 for r. 0 Solve 9*s = 21*s + 24 for s. -2 Solve 149*n - 528 = 17*n for n. 4 Solve 0 = -6*f - 25 + 31 for f. 1 Solve 0 = -4*x - 3*x - 7 for x. -1 Solve -57 = -16*p - 41 for p. 1 Solve 0 = -7*b + 8*b - 2 for b. 2 Solve 1412*c = 1430*c - 162 for c. 9 Solve 0 = 2*s + 7 + 3 for s. -5 Solve -11*g - 160 = 9*g for g. -8 Solve -89 = -4*z - 69 for z. 5 Solve 0 = -20*x + 16 - 116 for x. -5 Solve -1 = -4*v + 3 for v. 1 Solve 2*n - 12 = -4 for n. 4 Solve 187 - 147 = -5*q for q. -8 Solve -1301*c + 40 = -1291*c for c. 4 Solve 13*v - 2*v = -11 for v. -1 Solve -87*g = -65*g + 88 for g. -4 Solve 2*n = -3*n - 20 for n. -4 Solve -53*s - 30 + 83 = 0 for s. 1 Solve 4*t - 418 + 446 = 0 for t. -7 Solve 201 - 185 = -4*b for b. -4 Solve -20*r - 24 = -28*r for r. 3 Solve 32 = 3*n + 17 for n. 5 Solve -15*j - 30 + 0 = 0 for j. -2 Solve 16*t = 10*t + 12 for t. 2 Solve 28*z = 57*z - 145 for z. 5 Solve 180 = 38*w - 2*w for w. 5 Solve 44*q = -4 + 4 for q. 0 Solve -12 = -123*t + 127*t for t. -3 Solve 12 = a + 16 for a. -4 Solve 2388 = 3*s + 2394 for s. -2 Solve 8*x - 51 = -35 for x. 2 Solve 0 = 13*v - v - 36 for v. 3 Solve -y + 4 = 3 for y. 1 Solve -4*w - 4*w = 0 for w. 0 Solve 4*v = -18*v + 44 for v. 2 Solve 3*w = 20 - 26 for w. -2 Solve 22*s = -53 + 251 for s. 9 Solve 0 = 34*c - 6*c + 28 for c. -1 Solve 0 = -374*n + 376*n for n. 0 Solve 0 = d - 3*d - 2 for d. -1 Solve -36*h = -40*h + 12 for h. 3 Solve -69*c + 36*c = -33 for c. 1 Solve 0 = 21*s - 0 for s. 0 Solve -22 = 9*y - 4 for y. -2 Solve 16*a + 33 = -31 for a. -4 Solve 0 = 10*m - 13*m + 15 for m. 5 Solve -7*y + 2022 = 2064 for y. -6 Solve -22*r + 17*r + 15 = 0 for r. 3 Solve -68 = 8*r - 28 for r. -5 Solve 2 = 2*a - 6 for a. 4 Solve -19*i + 65 = -6*i for i. 5 Solve -363*x + 356*x + 70 = 0 for x. 10 Solve -63 = -10*k + 17 for k. 8 Solve -20 = 10*s - 60 for s. 
4 Solve -5*v + 188 - 163 = 0 for v. 5 Solve 0 = 186*q - 184*q - 6 for q. 3 Solve -66*u = 120 + 144 for u. -4 Solve -9486*b + 9489*b = 9 for b. 3 Solve 0 = -42*v + 16*v + 208 for v. 8 Solve 66*z - 27 = 75*z for z. -3 Solve 4*t = 38*t - 34 for t. 1 Solve 0 = 4*w - 7*w - 15 for w. -5 Solve 0 = -9*w + 3*w + 30 for w. 5 Solve 55*g = 65*g + 40 for g. -4 Solve -2*m - 51 = 15*m for m. -3 Solve 11*q = -20 - 13 for q. -3 Solve -3*k + 4 = -4*k for k. -4 Solve 0 = -8*n - 8*n - 16 for n. -1 Solve 4*a - 15 = 1 for a. 4 Solve 71*f + 3*f - 17*f = 0 for f. 0 Solve 12*y - 25 - 23 = 0 for y. 4 Solve -11*c - 21 = -4*c for c. -3 Solve 13*o = 7*o + 48 for o. 8 Solve -40*k - 498 = -338 for k. -4 Solve 675 = -4*y + 703 for y. 7 Solve -366*f + 379*f - 26 = 0 for f. 2 Solve -11*z + 22 = -0*z for z. 2 Solve 30 = 148*i - 153*i for i. -6 Solve 91*d - 96 = 107*d for d. -6 Solve 4*d + 80 = 84 for d. 1 Solve -6 = -2*j + 4 for j. 5 Solve 0 = 17*m - 603 + 467 for m. 8 Solve 0 = -63*l + 67*l + 4 for l. -1 Solve -61 + 52 = -3*c for c. 3 Solve t = 11 - 9 for t. 2 Solve -12*d + 9*d = -3 for d. 1 Solve 543*d - 536*d - 49 = 0 for d. 7 Solve 0 = 90*t - 88*t for t. 0 Solve 116*q - 138*q = 220 for q. -10 Solve -47*k + 366 = -10 for k. 8 Solve 33*w - 26*w + 42 = 0 for w. -6 Solve 7*a - 3*a - 16 = 0 for a. 4 Solve 0 = -18*d + 105 - 15 for d. 5 Solve -21 = 340*o - 333*o for o. -3 Solve v + 17 = 12 for v. -5 Solve 0 = -22*a + 19*a - 3 for a. -1 Solve 43*p - 45*p - 6 = 0 for p. -3 Solve 149*a = 162*a for a. 0 Solve -1317*f - 88 = -1328*f for f. 8 Solve 14*t - 10*t - 12 = 0 for t. 3 Solve 28*w = 26*w + 6 for w. 3 Solve -11*n = n - 60 for n. 5 Solve -14*f - 4*f - 18 = 0 for f. -1 Solve 97*y - 69*y - 28 = 0 for y. 1 Solve -15 = 36*o - 33*o for o. -5 Solve 144 = 5*l + 119 for l. 5 Solve 108 = g - 19*g for g. -6 Solve 84*d - 86*d + 4 = 0 for d. 2 Solve 3 = -2*h + 1 for h. -1 Solve -39*q - 26*q - 65 = 0 for q. -1 Solve 2 = -o - 0 for o. -2 Solve 8*y - 129 + 153 = 0 for y. -3 Solve -31 = -7*f - 3 for f. 4 Solve 7*z = -57 + 71 for z. 2 Solve -50*g + 88*g - 114 = 0 for g. 3 Solve -5 = -17*r + 63 for r. 4 Solve 6*q + 9 = -3 for q. -2 Solve -7*m + 0 = 14 for m. -2 Solve -53*t - 38 - 174 = 0 for t. -4 Solve 27*s - 32*s = -30 for s. 6 Solve 0 = 3*j - 6*j - 15 for j. -5 Solve -5*n - 20 = -5 for n. -3 Solve 0 = -177*l - 359 + 1952 for l. 9 Solve 5*y + 12 = y for y. -3 Solve -20*p + 14*p = -12 for p. 2 Solve 221 = -5*r + 246 for r. 5 Solve 1 + 35 = 18*y for y. 2 Solve 99*v = 65*v - 34 for v. -1 Solve 5*q - 156 = -176 for q. -4 Solve -170 = -22*n - 16 for n. 7 Solve -6*i + 0 = -24 for i. 4 Solve 607*n - 615*n + 8 = 0 for n. 1 Solve 0 = 108*n - 36*n + 144 for n. -2 Solve 88 + 7 = 19*x for x. 5 Solve 54*v - 10 = 44*v for v. 1 Solve 31*l - 36 = 49*l for l. -2 Solve 65 = -13*n - 0*n for n. -5 Solve -7 = 3*m + 8 for m. -5 Solve -65*k - 21 = -72*k for k. 3 Solve 60*g + 40 = 68*g for g. 5 Solve -20*x = -24*x + 28 for x. 7 Solve -2*c + 11 - 1 = 0 for c. 5 Solve -43*t - 159 = -30 for t. -3 Solve -8*r = -21*r - 39 for r. -3 Solve -5*a + 3*a = -8 for a. 4 Solve 2*a - 68 = 36*a for a. -2 Solve -551*f = -545*f + 12 for f. -2 Solve 98 = 56*m - 182 for m. 5 Solve 2*n - 25 = 7*n for n. -5 Solve -92*w = -77*w + 90 for w. -6 Solve 106*l - 26*l = 0 for l. 0 Solve 3*p + 8 = -p for p. -2 Solve -247 + 121 = -14*w for w. 9 Solve 0 = 33*b - 16 - 17 for b. 1 Solve -16*f - 35 = -67 for f. 2 Solve 64*c = 66*c for c. 0 Solve 52*w = 51*w - 2 for w. -2 Solve 5 = -233*m + 228*m for m. -1 Solve 2 = 2*d + 6 for d. -2 Solve 3*q = -50 + 41 for q. -3 Solve -406*c - 10 = -411*c for c. 
2 Solve -72 = -16*x - 8 for x. 4 Solve 6*m = 8*m + 8 for m. -4 Solve 17*t = 13*t - 16 for t. -4 Solve 2605*y + 18 = 2614*y for y. 2 Solve -58 = 29*m + 29 for m. -3 Solve -30*q + 42*q + 48 = 0 for q. -4 Solve 10*p = 440 - 480 for p. -4 Solve 33*y = 26*y + 7 for y. 1 Solve -694 = 15*m - 634 for m. -4 Solve -20*i = -9*i - 33 for i. 3 Solve 10*g + 15 = -35 for g. -5 Solve 0 = -8*f + 4*f for f. 0 Solve 6*x - 34 = -4 for x. 5 Solve -10*w = -5*w + 10 for w. -2 Solve -9*i = -2*i - 7 for i. 1 Solve 28 = 378*u - 385*u for u. -4 Solve 75*j - 29*j - 368 = 0 for j. 8 Solve 1031*w - 1037*w = 36 for w. -6 Solve 2*m = 16 - 8 for m. 4 Solve 10*r + 8 = -2 for r. -1 Solve 148*y + 1245 = 209 for y. -7 Solve -226*u = -248*u + 22 for u. 1 Solve 192*j = 179*j + 78 for j. 6 Solve -20*q - 5 = -25*q for q. 1 Solve -42 = 379*z - 385*z for z. 7 Solve 68*c = 77*c - 45 for c. 5 Solve -k + 413 = 408 for k. 5 Solve 0 = 47*o - 39*o - 24 for o. 3 Solve 21 = -8*r + 77 for r. 7 Solve -4 = -j + 1 for j. 5 Solve 5*s - 409 = -429 for s. -4 Solve 30 = 9*q - 24 for q. 6 Solve -621 + 663 = -14*o for o. -3 Solve 0 = -4*k - 33 + 37 for k. 1 Solve -20
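Each complete item above is a single-variable linear equation with integer coefficients. As a brief illustration (assuming the sympy library is available), a few lines of Python reproduce one of the listed answers:

```python
# Solving one of the linear equations listed above with sympy.
from sympy import Eq, solve, symbols

o = symbols('o')
# "Solve -8*o - 11 = 5 for o." -- the listed answer is -2.
print(solve(Eq(-8*o - 11, 5), o))  # prints [-2]
```

For the general form a*x + b = c*x + d, the solution is x = (d - b)/(a - c), which is what solve computes here.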
{ "pile_set_name": "DM Mathematics" }
There is no denying the fact that night shift workers are fast losing their health. Long and hectic work schedules lead to irregular appetites, rapid changes in weight and a high risk of gastro-intestinal […]

After 60 years, authorities in the United States have approved a pill that will treat malaria. According to a report in BBC, the drug, tafenoquine, is being described as a "phenomenal achievement" and will treat the recurring […]

Compounds in green tea and in red wine may help block the formation of toxic molecules that cause severe developmental and mental disorders, and may help treat certain inborn congenital metabolic diseases, a study has […]

Carbohydrates have become the 'culprits' for many healthy eaters recently. Despite their less stellar stature in the nutrition department, carbs aren't actually the enemies of your body. They are responsible for providing you with energy, […]

Does the surgical removal of tonsils and adenoids in young children have long-term health implications? Researchers in a new study say the removal may increase the risk of certain ailments, but other experts aren't so […]
{ "pile_set_name": "Pile-CC" }
This video from Fox 13 in Tampa, first posted at The Right Scoop, is priceless. I’ve never seen a better summary. If only more in the mainstream media had the guts to stand up to the President’s media machine.
{ "pile_set_name": "OpenWebText2" }
Singaporean rock climber Chua Chee Beng was told he would be wheelchair-bound for life after becoming partially paralysed from a fall. Shin Min Daily News reported that he fractured his spine and legs after falling from a height of seven metres. However, within two months, he made a miraculous recovery by standing on his own again. Some four months later, he returned to rock climbing.

The accident

Chua, 50, decided to take up the challenge to climb a hill at Bukit Timah's Dairy Farm Nature Park two years ago on Aug. 8, 2017. The park's hill is suited for advanced climbers, and Chua was no amateur, as he was a full-time rock climbing instructor. But on that fateful day, Chua was sent plummeting to the ground from seven metres high after he ran out of climbing rope as the rope slipped off the belaying device, a piece of climbing equipment that helps reduce the effort of climbers. He was conveyed to the hospital immediately.

Spinal injuries

Chua suffered fractures on his ankles and heels. Doctors also discovered he suffered a crack in his spine that had damaged his spinal cord. He had to undergo surgery on his back. As a result of his injuries, Chua was paralysed from the waist down. He was even told he would need to rely on a wheelchair for the rest of his life. This news was devastating for Chua, as he was a sporty person.

Refused to accept fate

Chua told Shin Min that he refused to accept his fate. He promised himself he would leave the hospital on foot within two months. With the help of a physiotherapist, Chua trained to use his legs again. By the last week of his two-month stay at the hospital, and with the encouragement of his friends and family, Chua could walk again with the aid of a cane. Just four months after the fall, Chua was able to do rock climbing again.

Support group

Chua credits a group of like-minded friends for helping him on his road to recovery. As a token of appreciation, he pledged to participate in a 50km fundraising charity walk, "Let's Take A Walk", this November. This is despite the fact that he still walks with a limp and has yet to recover fully. Chua trained himself to walk at least 4km using crutches on alternate days. He had successfully walked a total of 15km in three hours the day before the interview. He said he would try his best and continue training in the coming months, even though he doesn't feel fully confident of completing the 50km charity walk.

All photos via Shin Min Daily News
{ "pile_set_name": "OpenWebText2" }
I was locked up for 139 days, divided into three stages. In the first stage I spent almost half the time in hospital because of a 22-day hunger strike; in the second I was confined for 30 days over a false or mistaken report by the military intelligence services; and in the third I spent 49 days for stating that I would never take part in a military intervention in Catalunya (something that in any modern democracy would not even need to be mentioned, and would not have carried any punishment).

During that second arrest, the one triggered by the false or mistaken military intelligence report, I dealt with two non-commissioned officers and one officer. One of the NCOs had the Francoist eagle tattooed on his leg, and the lieutenant colonel was a regular writer for the Fundación Nacional Francisco Franco (during the third confinement they isolated me so that I would have contact with no one). That could not end well, and it did not end well.

There was a black soldier they called "monkey"; there was a female soldier they believed to be a lesbian and spoke of with contempt, calling her a "hybrid"; there was another serviceman whom one of the sergeants claimed was gay and whom they referred to as a "queer"; when the television reported that an ultra of Deportivo de la Coruña had been killed, he had it coming; and when Pablo Iglesias appeared ("your friend, the ponytail guy"), he needed to be shot or blasted. They all laughed with the ease that comes from having the system on your side.

I confronted them many times, until I decided to inform the head of the Disciplinary Establishment that xenophobic, homophobic and antidemocratic statements were being made, but he did nothing about it. The complaint was not even given any credibility, even though, as I said before, one of the sergeants had a Francoist tattoo and the lieutenant colonel wrote, with great pride at that, for the Fundación Nacional Francisco Franco.

The day I was released from the last confinement, I was notified that disciplinary proceedings had been opened in an attempt to deprive me of another 60 days of liberty, because one of the sergeants, stationed in Badajoz (precisely the one with a Francoist eagle tattooed on his leg), reported that I had made statements against the flag and the constitution (???). The truth is that I have spoken out on countless occasions, but I doubt anyone has heard me say anything against the flag or the constitution. I have no qualms about admitting that I would change the constitution without hesitation, but from there to campaigning against it... Well, they gave him credibility and summoned me to testify. In the end nothing happened, because everyone knew I would shortly be expelled and the municipal elections were approaching; otherwise it is very likely I would have been punished again.

I believe my experience shows that we democrats have been systematically mistreated and expelled from the Armed Forces for making progressive and democratic statements, while those who have engaged in reactionary, fascist or antidemocratic behaviour or statements remain in them. For that very reason, it does not surprise me that symbols such as those of the División Azul are displayed without the slightest hesitation, or that there is a lieutenant who speaks proudly of Hitler. And it does not surprise me, because they (those responsible), even after this article is published, will go on being soldiers.
Not only will the system not purge them, it will protect them, sending an unmistakable message to everyone else: soldiers who proudly display Francoist or fascist symbols or make such statements are welcome, but soldiers who speak of democracy, freedom or rights are expelled. Knowing that there are members of the PP who share Francoist ideology, it is hardly surprising that many soldiers feel emboldened, since it seems unlikely that the government will sincerely try to eradicate attitudes that members of its own party share.

Thus, part of the problem is that the high command, the officers, the military judges and even the politicians of certain parties sympathize with these ideologies and repudiate the opposing ones, which makes real change very hard to achieve. What the system will try to do is perfect itself so that such behaviour does not come to light, not fix it, because what is truly bothersome is not that it happens but that it becomes known. It is therefore very likely that soldiers, above all those who may have leaked the information, will be threatened, monitored, punished and persecuted, rather than any effort being made to implement effective measures to end this problem.

What is worrying is that this is not an isolated occurrence. In the 1980s, many soldiers shoved, insulted and threatened General Gutiérrez Mellado and other officers who backed democracy, which drove General Marcelo Aramendi, who could not bear the pressure, to take his own life. It is also well known that the coup-plotting officers were treated more like heroes than the criminals they were (King Juan Carlos I himself encouraged it), while the democratic officers (the UMD) were cast aside. More recently, former President Zapatero and former Minister Bono feared a (non-military) "pronunciamiento" during the drafting of the Estatut over the scandal of Lieutenant General Mena (2006), which led them to tap the communications of many senior commanders (there were no sanctions, no dismissals, no expulsions). The reality is that no one has so far paid for markedly antidemocratic conduct, while the bodies in the ditch are mostly ours, the democrats' (and that despite the fact that at certain ranks we are considerably fewer). What is more, as we have seen, most of the sanctions imposed on antidemocrats reveal an enormously protective streak, even toward the jailed coup plotters.

On the opposite side stands the dismissal of former Chief of the Defence Staff (JEMAD) Julio Rodríguez when he decided to join Podemos. Instead of receiving the respect of politicians and the military, what he received was mistreatment from the deputy prime minister, contempt from the minister and the ministry, and letters, threats and insults from many soldiers. All of it completely unnecessary. It is galling when compared with the treatment received by Lieutenant Colonel Miguel Ayuso of the military legal corps (a serving military judge) when he called the constitution a bastard, disowned the King and described the civil war as a crusade (on Intereconomía television). They wanted to promote him to colonel until the scandal reached the media, at which point they protected him once again: he was sent to the reserve with no dismissal and no sanction. A diplomatic exit.
If military commanders can shove a deputy prime minister of the government without consequences; if they are treated as heroes rather than criminals when they stage a coup; if they are handled with caution instead of being arrested and brought before a judge when they planned a pronouncement; if they belittle and insult a comrade for joining a political party; or if there are moves to promote them after they insult the constitution or call the civil war a crusade (on television)... If all this happens without consequences, then the time has come to do what we should have done forty years ago and regenerate the military leadership so that it becomes plural. The time for change has come.

Luis Gonzalo Segura, former lieutenant of the Ejército de Tierra and author of the novels "Código rojo" (2015) and "Un paso al frente" (2014). You can follow me on Facebook and Twitter.

"Código rojo takes on the subject with real guts and spares no one. It takes risks, proclaiming the truth from the rooftops and making something as disparaged these days as freedom of expression prevail, for once" ("A golpe de letra" by Sergio Sancor). Get it here, signed and dedicated!
{ "pile_set_name": "OpenWebText2" }
New Garden Website Design

Woodside Garden is a walled garden near Jedburgh in the Scottish Borders. It has a plant centre and an award-winning coffee shop, and it runs a series of events throughout the year. We were delighted when we secured the Woodside Garden website design contract.

Stephen and Emma Emmerson inherited their website when they bought the business back in 2010. Over the years they had "made do" with it, but it no longer met their needs: it was not mobile-friendly, and it was difficult to add events and highlight blogs on the front page. There had also been a recent re-brand, and the existing site could not be updated easily to incorporate the new logo and colour scheme.

Emma was confident that Red Kite Services could deliver the website she wanted, as we have a good knowledge of plants and wildlife. We have also worked on updating the website for Stillingfleet Lodge Gardens for many years, showing further experience in the sector.

Website Review

We started the re-build process by reviewing the existing site and agreeing the content and images that would be retained. We streamlined the number of pages and reviewed the categories on the site. We set up a development site where we could design the outline using our favourite template. We used the new colours from the logo throughout the site and used categories to place appropriate blogs onto the static pages. Emma did not want us to use sliders, but did send us some stunning images to use. She wanted a nice clean site but with a flowery touch, which we achieved by adding curved frames to the images.

Events

She also wanted to easily add events, so we used a simple plug-in. This makes it easy to add events, and the next few events are highlighted in the sidebar. We have also categorised events so that if you look on the Wee Woodsiders page you can see a full list of child-friendly events.

We are still in the free "snagging" stage which we offer on all our website builds, so we are still working with Emma to develop the site now that it has gone live.

About Red Kite Services

Red Kite Services is a family-run business owned by Peter and Samantha Lyth. We believe in supporting independent, local companies, and our aim with RKS is to provide cost-effective support to help local small businesses to thrive. Sam set up the business in 2010 after seeing that many small business owners know what they need to do in terms of administration and marketing, but don't have enough time to do it. Peter joined the business in 2015, which has allowed us to offer a broader range of services. Between us we have experience in financial services, the health and science sector, and retail.

Testimonials

I contacted Red Kite because I have been unable to update my website for a few years now. The site was built for me over five years ago and I just wanted to be able to update the price list regularly and alter treatments.
{ "pile_set_name": "Pile-CC" }
File descriptions:

1. base_dic_full.dic: the hash-indexed main dictionary, carrying word-frequency and part-of-speech tags.

2. words_addons.dic: auxiliary entries, keyed by their leading letter. s marks stop words; u marks suffix words (place-name suffixes, mathematical units, etc.); n marks leading words (surnames, Chinese numeral characters, etc.); a marks trailing words (regions, departments, etc.).

3. not-build/base_dic_full.txt: the uncompiled dictionary source.

4. How to recompile the dictionary:

<?php
header('Content-Type: text/html; charset=utf-8');
require_once('phpanalysis.class.php');

// Instantiate the analyzer exactly as in the original snippet.
$pa = new PhpAnalysis('utf-8', 'utf-8', false);

// Compile the plain-text dictionary source (item 3 above) into the
// hash-indexed dictionary. The original snippet passed a bare, unquoted
// `sourcefile` placeholder as the first argument; the 16 is kept
// unchanged from the original.
$pa->MakeDict('not-build/base_dic_full.txt', 16, 'dict/base_dic_full.dic');

echo "OK";
?>
{ "pile_set_name": "Github" }
By Theodore R. Marmor

Social insurance programs are at the center of American politics. In fiscal terms, Medicare and the Social Security Administration's programs for retirement, disability, worker's compensation, and worker's life insurance amount to roughly 41 percent of the federal budget. This fiscal centrality, however, does not rest on anything like a broader, public understanding of what makes social insurance social — and thus why such programs are so important in American political life. On the contrary, over the years our vocabulary of social insurance has become increasingly replaced with a vocabulary of welfare and redistribution, creating a fundamentally misleading impression about most of what the federal government does.

In the mid-1930s, when the retirement and survivors insurance programs had their legislative start, university-educated Americans had every reason to be clear about what distinguished social insurance from its commercial counterpart. Indeed, most undergraduate programs in the social sciences took up social insurance's rationale and history. But note the data measuring the historical use of the expression in three of America's most important daily newspapers. The changes recorded are startling. By the end of the 20th century, the category of social insurance had seemingly lost its place in the vocabulary of American politics. This is particularly unsettling because of the enormous importance of social insurance programs in American history.

The Great Depression, which wiped out the savings of most American families, caused multiple bank failures, and saw an unemployment rate of some 25 percent, prompted demands for substantially increased government protection against economic disaster. "Welfare" was the term used for programs that made poverty status the precondition for financial aid, and President Roosevelt acknowledged that immediate aid to poor families was required. But his case for increasing the footprint of American social policy was based on the principles of social insurance, not merely poor-relief narrowly construed.

By the 1970s, social insurance programs had become major components of the federal government, but also the targets of ideological and budgetary attack. Social Security retirement, Medicare, disability, and unemployment insurance were increasingly labeled as simply "entitlements," and charged with contributing to out-of-control spending via unaffordable benefits. This allowed critics to advocate for a much smaller social policy commitment, urging a less costly "safety net" for the deserving among America's poor citizens. The semantic bait-and-switch can be seen with Google's Ngram viewer, which tracks word frequencies across the American English corpus.

Yet the principles and judgments incorporated in the concept of social insurance remain central to the major policy debates of our time, most dramatically in the debates over health care reform and the affordability of Social Security retirement benefits. They are relevant to the backlash against the Affordable Care Act and to the debate, rekindled recently, between the advocates of "Medicare for All" and advocates of Medicaid expansion as the next step toward universal health coverage. More generally, they are crucial to addressing the broader conservative critique of government's role in American social policy.

So, What is Social about Social Insurance?

Social insurance, like commercial insurance, is about protection against financial risk.
It is “insurance” in the sense that people contribute to a fund to protect themselves against unpredictable financial risks. These include outliving one’s savings in old age, the early death of a breadwinner, the onset of a disability that makes work difficult if not impossible, the high costs of acute illness, involuntary unemployment, and work-related injury.

Yet unlike commercial insurance, contributions are not prices in a market and thus do not depend on the contributor’s risk profile (unless commercial regulations say otherwise, in essence creating “social” insurance through the backdoor). Instead of a contract between an enrollee and an insurer, social insurance is a system of shared protection among the insured, most comparable to mutual insurance in the commercial realm, with contributions made in proportion to one’s market income. In social insurance, the “insurer” — whether a government agency or a corporate body with a joint labor-management board — is the agent of the contributing enrollees. And unlike commercial insurance, the social insurance “contract” mandates participation by law, since otherwise adverse selection would cause its unraveling.

Social insurance spreads the costs of coverage according to a different logic than that of commercial insurers. The same risk in commercial insurance carries the same premium price: the greater the risk, the higher the price of coverage. Social insurance, by contrast, operates on the premise that contributions are calculated according to one’s income and benefits according to one’s needs. But the central political feature of social insurance is that the contributors are also beneficiaries. This is not the case with social assistance programs with means-tested eligibility standards. As important as such programs are for those who experience poverty, taxpayers do not in general identify with welfare beneficiaries.

How much difference does it make that most contemporary reporting on social insurance programs, and much social science scholarship, ignores their conceptual underpinning and distinctive operational features? Should popular voices in American social policy be criticized for using proper names to describe programs without explaining their distinctiveness from means-tested welfare programs? I would not be writing this essay if I did not believe, as one of three co-authors, that the title of our 2014 book — Social Insurance: America’s Neglected Heritage and Contested Future — identified an important problem.

“Entitlement”-talk

Words make a difference to all thinking about public policy, but this is especially the case where conflicts are over fundamental values. Consider, for example, the common use of “safety net” as a collective description of programs as diverse as Medicare and Medicaid, old age Social Security, food stamps, disability insurance, and homeless shelters. This expression collapses the distinction between means-tested welfare and social insurance programs into a metaphor suggesting that recipients have to “fall” into poverty to warrant help. This is the opposite of social insurance, which represents a platform on which one can stand before economic risks arise.
The term “safety net” is even more ambiguous, particularly when modified by terms like high or low, porous or tightly knit, threadbare or generous, or applied in situations when one’s financial resources are largely “spent.” The use of public finance terms like “income transfers” further blurs the differences between cash benefits that one receives only after income and asset tests are applied and insurance payments that kick in without such tests. Then there is the term “entitlement,” which was meant to refer to the nondiscretionary nature of the spending, but now connotes an adolescent sense of entitlement among the beneficiaries. Neither term helps us understand the robust public approval of our major social insurance programs, and indeed, both are often employed by opponents of social insurance in order to obfuscate an otherwise popular concept.

The negative connotation of “entitlements” is especially misleading. When one legitimately claims some social insurance benefit, the implication is that there is a corresponding duty to provide that benefit. That is the basis of the common sentiment among recipients of retirement income Social Security that they have earned their pensions. That widely shared sentiment largely explains the political fear that any substantial reduction in those benefits is a “third rail.” Few if any critics of the program criticize the appropriateness — or desirability — of OASI, the old age retirement and survivors insurance programs, on its own terms. Instead, they concentrate on claims that the programs are unaffordable. As a result, a large proportion of the public fears for their future despite the obvious political vulnerability of such critiques.

Understood as a technical budgetary category, entitlements in American fiscal policy are simply those programs whose benefits and beneficiaries cannot be adjusted without statutory changes. Administrations cannot simply reduce a program’s benefits or change its eligibility rules on their own. That entails constraints on administrative flexibility, reflecting the idea of stable governmental commitment to social insurance protections over long periods.

Using the entitlement category in two senses is confusing and in that respect harmful. What citizens believe about the appropriateness of a program is a distinct concept from the budgetary rules about changing its provisions. Both are important, but when was the last time you, the reader, saw this distinction explained when the entitlement term was used? Instead, “entitlement” is used like a four-letter word in diatribes about the supposedly troubled future of social insurance programs.

“Solvency”-talk

Still another source of linguistic confusion is what I will call solvency talk. When policy discussion turns to the fiscal projections of social insurance programs, critics and defenders alike turn to the trust fund. If the old-age retirement actuaries forecast revenue of X in 25 years and projected outlays of Y exceed X, the “trust fund” is, according to this logic, in trouble. It will no longer have enough to meet its “bills” at that date. And if that shortfall were to continue, the necessary result would, in this framing, be insolvency, even though few policy experts seriously doubt the sustainability of programs like Social Security given fairly modest reforms, nor the political catastrophe of allowing the trust fund to run dry.
In this sense, solvency talk is a lot like the threat of government shutdown created by the Federal debt ceiling — a crisis manufactured from the intransigence of elements on both sides of the aisle rather than anything fundamental.

Reflect for a moment on budget forecasts of Department of Defense outlays. Nobody writes about the military department going “broke” or becoming “insolvent” no matter how fast the growth in the budget. Indeed, no sensible analyst would make 20-, 30-, or 40-year forecasts for defense expenditures. Some analysts, in discussions of the future of Social Security, make conditional forecasts long into the future. These are said to be useful exercises, reminding the public that commitments now have long-term effects. But the very preoccupation with solvency generates unnecessary anxiety. Since DOD does not have a “trust fund” budgetary categorization, its future outlays are presumed to be ones over which future governments have some control. The same legal control is available to the Social Security Administration and the Congress.

The confusion is even worse in programs that combine different funding mechanisms. For instance, funding for Part A of Medicare comes from the social health insurance trust fund (HI), while Part B is funded from general revenues and beneficiary premiums; the latter cannot go broke, but its funding can be reduced. That prompts solvency talk about Medicare’s future without clarification of how the program differs in two of its component parts.

The background of most solvency questions is the widely reported growth of the future retiree population. The Census Bureau projects that the over-65 population will soon make up 20 percent of the population. Such projections, unaccompanied by estimates of what increases in funding social insurance programs will require, prompt concern. Dire predictions of “insolvency” or cuts in retirement benefits get reported in the media without much scrutiny. As a public speaker, I face such questions regularly. I urge my questioners to dwell for a moment on how a growing proportion of senior citizens can be politically compatible with large reductions in future Social Security benefits. Put another way, how could the “sacred cow” of Social Security — in the language of its critics — face such a fate under conditions that, if anything, only cement its political sanctity?

There is another irony here that warrants discussion. The original use of trust fund language in social insurance had more to do with trust than with funds. President Roosevelt rightly felt in the 1930s that the contributory ethos of social insurance would come to be central to its secure political status. A population believing that each contributing worker had earned their social insurance benefit would not tolerate substantial budgetary cutbacks. The idea of a trust fund, then, was to emphasize the special status of a program whose benefits would be paid decades after a contributor’s payments. It is designed to enforce time-consistency, and its language is meant to highlight reliability. Yet sadly this language has since been turned upside down, bringing needless fear of “running out” of funds and thus uncertainty about the future. Roosevelt’s protective rhetoric backfired as the original understandings of social insurance weakened, even while the popularity of the programs remained substantial.
Social Insurance, Our Neglected Heritage

There are at least two plausible criticisms of this essay’s argument about the importance of relearning the appeal of social insurance principles. One is that the world has changed dramatically since the birth of social insurance in the late 19th century, let alone since the 1934-35 Committee on Economic Security provided a blueprint for expanding social insurance in American public life. The other is that changes in long-standing European social insurance programs show that major adjustments in the American programs are required as well.

The claim that the world has changed does not necessarily mean that the economic risks against which social insurance programs offer protection have been fundamentally altered. Consider every one of the risks noted in this essay — outliving one’s savings, involuntary unemployment, medical costs, and disability. Not one has disappeared, and social insurance programs for each have been implemented in wealthy democracies. I doubt, in other words, whether social insurance is in any conceptual trouble.

But that does not mean social insurance programs don’t need to adapt to contemporary circumstances. The spread of contract employment has been particularly challenging for European countries where social insurance is a function of trade unions and other sector-level organizations. It is equally obvious in the US that employer-provided health insurance puts a damper on labor market flexibility. Reduced employment in regular jobs with health coverage will demand the search for other sources of provision. These and other realities of our changing economy will only bring to the fore the central claim of this essay: social insurance programs dominate American social policy, but what that means for our politics is too little understood or explained. And that criticism extends not only to harried reporters but to a significant part of the public policy community as well.

Theodore (Ted) Marmor is a Niskanen Center adjunct fellow and Professor Emeritus at The Yale School of Management.
Comparison of Rappaport-Vassiliadis Enrichment Medium and Tetrathionate Brilliant Green Broth for Isolation of Salmonellae from Meat Products.

The effectiveness of Rappaport-Vassiliadis enrichment medium (RV medium) and Difco's tetrathionate brilliant green broth (TBG) for detection of Salmonella in 553 samples of meat products was compared. All samples were preenriched for 20 h in buffered peptone water. Then 0.1 ml of the preenrichment was inoculated into 10 ml of RV medium, 1 ml was added to 9 ml of TBG broth, and 1 ml was inoculated into 10 ml of Muller-Kauffman (MK) tetrathionate broth. All enrichments were incubated at 43°C for 24 h, except for MK broth, which was incubated for 48 h, and all were subcultured onto brilliant green deoxycholate agar and bismuth sulfite agar. The Rappaport-Vassiliadis medium was superior to Difco's tetrathionate brilliant green broth, being considerably more sensitive and more specific. The superiority of RV medium was evident in the number of positive samples (36% and 28%, respectively), as well as in the number of Salmonella serotypes and strains recovered. The RV medium inhibited the lactose- and sucrose-negative competing organisms much more strongly than Difco's tetrathionate broth. The performance of the Difco and Muller-Kauffman tetrathionate brilliant green broths was similar. Addition of the brilliant green solution after boiling the tetrathionate broth slightly increased its efficacy. The effectiveness of brilliant green deoxycholate agar and bismuth sulfite agar was similar, whether after enrichment in RV medium or in any of the studied tetrathionate brilliant green broths.
HotKnot is a near-field communication technology (mainly used with capacitive touch screens) found in some smart terminal devices. This form of near-field communication includes two processes: a proximity detection process and a data transmission process. The proximity detection process works as follows: the touch screen terminal of one party sends a proximity detection sequence (for example, a sequence of six frequencies), and after receiving the proximity detection sequence, the touch screen terminal of the other party successively scans the multiple frequencies included in the sequence. If the signal strength at a frequency is greater than a preset signal strength threshold, it is considered that a signal source exists at that frequency. After the scan is completed, if signal sources exist at all frequencies, the sequence is determined to be valid; otherwise, it is determined to be invalid. After determining that the sequence is valid, the receiving party feeds back a proximity response sequence to the sending party. After receiving the proximity response sequence, the sending party performs the same successive scan and determines whether the response sequence is valid, in the manner described above. When both parties consider the sequence valid, sequence identification is considered to have succeeded once. After sequence identification succeeds multiple times according to an interaction rule, it is determined that a touch screen terminal is approaching. After proximity detection succeeds, the interference source is turned off, and the data transmission process is started to send or receive data.

During proximity detection, interference sources such as the LCD are not turned off, which makes it difficult to correctly identify the frequencies of the sequence; the setting of the signal strength threshold therefore plays a particularly important role in deciding whether a signal is present. It is thus particularly important to be able to set a proper signal strength threshold according to the noise situation. During proximity detection between two HotKnot devices, the drive signal of the LCD scan, or common-mode interference when a charger is connected, interferes with signal detection on the capacitive touch screen. This may cause errors in proximity detection performed through the touch screen, with cases in which the two parties cannot establish a connection or one party establishes it by mistake.

Currently, to enable the capacitive touch screen to adapt to different LCD interference intensities, noise reduction processing is usually performed on the detected data. After the noise reduction processing, a signal-strength-threshold policy is applied: if the signal strength is greater than the threshold, the signal is considered valid; otherwise, it is considered invalid. In addition, for the foregoing interference cases, an interference frequency can be detected with an instrument and then excluded as a determining basis, thereby avoiding interference sources.
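As a rough illustration of the threshold-based scan just described, the following Python sketch models a receiver checking each frequency of a detection sequence against a preset strength threshold. The six frequencies, the threshold value, and the `measure_strength` hook are hypothetical placeholders, not values from the HotKnot specification.

```python
# Hypothetical parameters -- not taken from the HotKnot specification.
DETECTION_SEQUENCE_HZ = [100e3, 120e3, 140e3, 160e3, 180e3, 200e3]
STRENGTH_THRESHOLD = 0.5

def measure_strength(freq_hz):
    """Placeholder for the touch controller's strength reading at freq_hz."""
    raise NotImplementedError

def sequence_is_valid(freqs=DETECTION_SEQUENCE_HZ, threshold=STRENGTH_THRESHOLD):
    # A signal source is deemed present at a frequency only if the measured
    # strength exceeds the threshold; the sequence is valid only if a source
    # is found at every frequency, mirroring the scan described above.
    return all(measure_strength(f) > threshold for f in freqs)
```

A fixed `STRENGTH_THRESHOLD` is exactly what the passage below criticizes: when interference varies, a single threshold cannot deliver both reliability and sensitivity.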
However, the current processing manner has at least two problems. 1) Although some problems can be solved by using a proper signal strength threshold, when interference occurs at some frequencies the signal strength of the noise is sometimes greater than that of the signal, and those frequencies become very difficult to identify, which ultimately causes the entire sequence identification to fail; in addition, interference intensity often changes, so detection reliability and sensitivity are difficult to ensure if only one fixed signal strength threshold is used. 2) Interference in an actual environment often changes; if some fixed frequencies are excluded from identification, the interference at those fixed frequencies can be mitigated, but when the interference shifts to other frequencies, the changed interference frequencies cannot be shielded; that is, a compatibility problem exists. Therefore, in the case of a weak signal or strong interference, the reliability and sensitivity of proximity detection are not high.
The third game of the preseason is the one to watch. The first-team offense and defense will play at least half the game, a little more of the playbook will probably be used and a good chunk of the final decisions will begin taking shape. The Vikings have looked a little hit-and-miss on the offensive side and downright stacked top-to-bottom on the defensive side. Will week three be a continuation of that, or will new stories take shape? Here are five of the prime things to focus on in the Vikings’ third preseason game.

Finally, a little dose of Cook

The last time we saw Dalvin Cook on an NFL field was Oct. 1, 2017. Almost 11 months later, we at last get to see the bright young star flex his muscles once again. Tom Pelissero reported that he is expected to make his return to the field tonight against Seattle. It is unclear how much time he will get or if he will start, but the clean bill of health can only be viewed as good news, even if he only gets a handful of touches. It also helps that starting linemen Mike Remmers and Rashod Hill returned to practice this week. Presumably, they will return to the starting lineup, giving Cook a little extra security.

Defensive back battle gets a little tighter

New signee George Iloka likely will not suit up against Seattle just 48 hours after signing with Minnesota. However, his roster spot appears locked in when the Vikings make cuts in a couple weeks. Because of that, there are a handful of guys whose spots once were secure who now are in a bit of a battle. Terence Newman, Jayron Kearse, Anthony Harris and Holton Hill, to name a few, could all be in danger of being cut. Heck, Iloka could even take a roster spot from a linebacker. As such, these preseason games become all the more important. Players have to make big plays, show special teams value and/or not expose themselves to game-changing errors. Room on the ship is only getting more crowded; these players have to prove they are a useful hand.

Linebackers, the preseason stars, making roster cuts tough

One could argue that Eric Wilson and Ben Gedeon were two of the best players on the field last week. Mike Zimmer had them all over the field, in coverage, blitzing up the middle and on the edge, and they made plays wherever they were asked to be. Reshard Cliett and Antwione Williams were no slouches either with three tackles apiece. Plus, Williams had a de facto sack that was overturned by a call that was, to put it politely, questionable. While most of these guys will primarily be special teamers, they are all showing how effective they can be as system linebackers. This is proving to be a surprisingly deep position group with a lot of guys vying for only a handful of spots.

Odenigbo continues push for roster spot

Ifeadi Odenigbo moved from end to tackle this offseason to take advantage of his raw power and low center of gravity. In the first preseason game, he fared well in that role, creating a good push in both run and pass defense. However, due to team injuries, he had to bump back outside to end last week for many of his snaps. And holy cow, did he take advantage. Odenigbo recorded seven tackles, two sacks and three quarterback hits while mixing interior and end reps against Jacksonville. He looked like a star, to say the least. The problem is that the defensive line room is crowded with a lot of promising young talent fighting for a small number of spots. Odenigbo is certainly in the conversation for one of them, especially with last week’s performance.
But he has to keep presenting himself as undeniable week in and week out.

Cousins, receivers, find first-game magic again?

The best sight of week one was the instant connection between Kirk Cousins and Stefon Diggs and the overall efficiency of the offense. Week two, that was nowhere to be found. To be fair, Jacksonville boasts a top-two NFL defense, and they showed it last week. But the Vikings invested a lot of money in this quarterback overhaul. With Cousins likely getting at least a full half, if not more, this is the game to prove that the offense is headed in the right direction.

–Sam Smith is the Managing Editor for Full Press Coverage Vikings and Deputy Editor for Full Press NFL.
DataverseUse test
Set import-private-functions=true
Query:
Let Variable [ Name=$txt ]
  := LiteralExpr [STRING] [Hello World, I would like to inform you of the importance of Foo Bar. Yes, Foo Bar. Jürgen.]
Let Variable [ Name=$tokens ]
  := FunctionCall asterix.hashed-word-tokens@1[ Variable [ Name=$txt ] ]
SELECT ELEMENT [ Variable [ Name=$token ] ]
FROM [ Variable [ Name=$tokens ] AS Variable [ Name=$token ] ]
Carabus albrechti awashimae

Carabus albrechti awashimae is a subspecies of ground beetle in the subfamily Carabinae that is endemic to Japan.

References

Category:Beetles described in 1996
Category:Endemic fauna of Japan
---
abstract: |
  The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses the fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.\
  [**Keywords**]{}: Fractional diffusion problem, Collocation method, Galerkin method, Fractional spline
author:
- 'Laura Pezza[^1], Francesca Pitolli[^2]'
title: 'A fractional spline collocation-Galerkin method for the time-fractional diffusion equation'
---

Introduction. {#sec:intro}
=============

The use of fractional calculus to describe real-world phenomena is becoming increasingly widespread. Integro-differential equations of [*fractional*]{}, [*i.e.*]{} positive real, order are used, for instance, to model wave propagation in porous materials, diffusive phenomena in biological tissue, and viscoelastic properties of continuous media [@Hi00; @Ma10; @KST06; @Ta10]. Among the various fields in which fractional models are successfully used, viscoelasticity is one of the most interesting, since the memory effect introduced by the time-fractional derivative makes it possible to model anomalous diffusion phenomena in materials that have mechanical properties in between pure elasticity and pure viscosity [@Ma10]. Even though these models are empirical, they are nevertheless consistent with experimental data.\
The increased interest in fractional models has led to the development of several numerical methods to solve fractional integro-differential equations. Many of the proposed methods generalize to the fractional case numerical methods commonly used for the classical integer case (see, for instance, [@Ba12; @PD14; @ZK14] and references therein). But the nonlocality of the fractional derivative raises the challenge of obtaining numerical solutions with high accuracy at a low computational cost. In [@PP16] we proposed a collocation method especially designed for solving differential equations of fractional order in time. The key ingredient of the method is the use of the fractional splines introduced in [@UB00] as approximating functions. Thus, the method takes advantage of the explicit differentiation rule for fractional B-splines, which allows us to evaluate accurately the derivatives of both integer and fractional order.\
In the present paper we use the method to solve a diffusion problem having a time derivative of fractional order and show that the method is efficient and accurate. More precisely, the [*fractional spline collocation-Galerkin method*]{} proposed here combines the fractional spline collocation method introduced in [@PP16] for the time discretization with a classical spline Galerkin method in space.\
The paper is organized as follows. In Section \[sec:diffeq\], a time-fractional diffusion problem is presented and the definition of fractional derivative is given. Section \[sec:fractBspline\] is devoted to the fractional B-splines, and the explicit expression of their fractional derivatives is given. The fractional spline approximating space is described in Section \[sec:app\_spaces\], while the fractional spline collocation-Galerkin method is introduced in Section \[sec:Galerkin\].
Finally, in Section \[sec:numtest\] some numerical tests showing the performance of the method are displayed. Some conclusions are drawn in Section \[sec:concl\].

A time-fractional diffusion problem. {#sec:diffeq}
====================================

We consider the [*time-fractional differential diffusion problem*]{} [@Ma10] $$\label{eq:fracdiffeq} \left \{ \begin{array}{lcc} \displaystyle D_t^\gamma \, u(t, x) - \frac{\partial^2}{\partial x^2} \, u(t, x) = f(t, x)\,, & \quad t \in [0, T]\,, & \quad x \in [0,1] \,,\\ \\ u(0, x) = 0\,, & & \quad x \in [0,1]\,, \\ \\ u(t, 0) = u(t, 1) = 0\,, & \quad t \in [0, T]\,, \end{array} \right.$$ where $ D_t^\gamma u$, $0 < \gamma < 1$, denotes the [*partial fractional derivative*]{} with respect to the time $t$. Usually, in viscoelasticity the fractional derivative is to be understood in the Caputo sense, [*i.e.*]{} $$\label{eq:Capfrac} D_t^\gamma \, u(t, x) = \frac1{\Gamma(1-\gamma)} \, \int_0^t \, \frac{u_t(\tau,x)}{(t - \tau)^\gamma} \, d\tau\,, \qquad t\ge 0\,,$$ where $\Gamma$ is Euler's gamma function $$\Gamma(\gamma+1)= \int_0^\infty \, s^\gamma \, {\rm e}^{-s} \, ds\,.$$ We notice that due to the homogeneous initial condition for the function $u(t,x)$, solution of the differential problem (\[eq:fracdiffeq\]), the Caputo definition (\[eq:Capfrac\]) coincides with the Riemann-Liouville definition (see [@Po99] for details). One of the advantages of the Riemann-Liouville definition is that the usual differentiation operator in the Fourier domain can be easily extended to the fractional case, [*i.e.*]{} $${\cal F} \bigl(D_t^\gamma \, f(t) \bigr) = (i\omega)^\gamma {\cal F} (f(t))\,,$$ where ${\cal F}(f)$ denotes the Fourier transform of the function $f(t)$. Thus, analytical Fourier methods usually used in the classical integer case can be extended to the fractional case [@Ma10].

The fractional B-splines and their fractional derivatives. {#sec:fractBspline}
==========================================================

The [*fractional B-splines*]{}, [*i.e.*]{} the B-splines of fractional degree, were introduced in [@UB00] by generalizing to fractional powers the classical definition of the polynomial B-splines of integer degree. Thus, the fractional B-spline $B_{\alpha}$ of degree $\alpha$ is defined as $$\label{eq:Balpha} B_{\alpha}(t) := \frac{{ \Delta}^{\alpha+1} \, t_+^\alpha} {\Gamma(\alpha+1)}\,, \qquad \alpha > -\frac 12\,,$$ where $$\label{eq:fracttruncpow} t_+^\alpha: = \left \{ \begin{array}{ll} t^\alpha\,, & \qquad t \ge 0\,, \\ \\ 0\,, & \qquad \hbox{otherwise}\,, \end{array} \right. \qquad \alpha > -1/2\,,$$ is the [*fractional truncated power function*]{}. $\Delta^{\alpha}$ is the [*generalized finite difference operator*]{} $$\label{eq:fracfinitediff} \Delta^{\alpha} \, f(t) := \sum_{k\in \NN} \, (-1)^k \, {\alpha \choose k} \, f(t-\,k)\,, \qquad \alpha \in \RR^+\,,$$ where $$\label{eq:binomfrac} {\alpha \choose k} := \frac{\Gamma(\alpha+1)}{k!\, \Gamma(\alpha-k+1)}\,, \qquad k\in \NN\,, \quad \alpha \in \RR^+\,,$$ are the [*generalized binomial coefficients*]{}. We notice that 'fractional' actually means 'noninteger', [*i.e.*]{} $\alpha$ can assume any real value greater than $-1/2$. For noninteger values of $\alpha$, $B_\alpha$ does not have compact support, even though it belongs to $L_2(\RR)$.
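Definitions (\[eq:Balpha\])-(\[eq:binomfrac\]) translate directly into a few lines of code. The following Python sketch (ours, for illustration) evaluates $B_\alpha$ pointwise; since the truncated powers vanish for $k \ge t$, the generalized finite difference at a fixed $t$ involves only finitely many terms, so the result is exact up to rounding for $t \le$ `kmax`. The binomial coefficients are generated by the recurrence ${\alpha+1 \choose k+1} = {\alpha+1 \choose k}\,(\alpha+1-k)/(k+1)$ to avoid evaluating the gamma function at negative arguments.

```python
import math

def frac_bspline(alpha, t, kmax=60):
    """Fractional B-spline B_alpha(t) = Delta^{alpha+1} t_+^alpha / Gamma(alpha+1)."""
    c = 1.0            # binom(alpha + 1, 0)
    s = 0.0
    for k in range(kmax + 1):
        tau = t - k
        if tau > 0:    # truncated power: tau_+^alpha vanishes for tau <= 0
            s += (-1) ** k * c * tau ** alpha
        c *= (alpha + 1 - k) / (k + 1)   # next generalized binomial coefficient
    return s / math.gamma(alpha + 1)

# For integer degree the classical polynomial B-spline is recovered, e.g.
# frac_bspline(1, 1.0) == 1.0, the peak of the hat function B_1.
```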
When $\alpha=n$ is a nonnegative integer, Equations (\[eq:Balpha\])-(\[eq:binomfrac\]) are still valid; $\Delta^{n}$ is the usual finite difference operator so that $B_n$ is the classical polynomial B-spline of degree $n$ with compact support $[0,n+1]$ (for details on polynomial B-splines see, for instance, the monograph [@Sc07]). The fractional B-splines for different values of the parameter $\alpha$ are displayed in Figure \[fig:fractBsplines\] (top left panel). The classical polynomial B-splines are also displayed (dashed lines). The picture shows that the fractional B-splines decay very fast toward infinity so that they can be assumed compactly supported for computational purposes. Moreover, in contrast to the polynomial B-splines, the fractional splines are not always positive, even though their negative part becomes smaller and smaller as $\alpha$ increases.

[Figure \[fig:fractBsplines\]. Top left panel: the fractional B-splines (solid lines) and the polynomial B-splines (dashed lines) for $\alpha$ ranging from 0 to 4. Top right panel: the fractional derivatives of the linear B-spline $B_1$ for $\gamma = 0.25, 0.5, 0.75$. Bottom left panel: the fractional derivatives of the cubic B-spline $B_3$ for $\gamma$ ranging from 0.25 to 2. Bottom right panel: the fractional derivatives of the fractional B-spline $B_{3.5}$ for $\gamma$ ranging from 0.25 to 2. Ordinary derivatives are displayed as dashed lines.]

The fractional derivatives of the fractional B-splines can be evaluated explicitly by differentiating (\[eq:Balpha\]) and (\[eq:fracttruncpow\]) in the Caputo sense. This gives the following differentiation rule $$\label{eq:diffrule_tronc} D^{\gamma}_t \, B_{\alpha} (t)= \frac{\Delta^{\alpha+1} \, t_+^{\alpha-\gamma}} {\Gamma(\alpha-\gamma+1)}\,, \qquad 0 < \gamma < \alpha + \frac12\,,$$ which holds both for fractional and integer order $\gamma$. In particular, when $\gamma, \alpha$ are nonnegative integers, (\[eq:diffrule\_tronc\]) is the usual differentiation rule for the classical polynomial B-splines [@Sc07]. We observe that since $B_\alpha$ is a causal function with $B_\alpha^{(n)}(0)=0$ for $n\in \NN\backslash\{0\}$, the Caputo fractional derivative coincides with the Riemann-Liouville fractional derivative.\
From (\[eq:diffrule\_tronc\]) and the composition property $\Delta^{\alpha_1} \, \Delta^{\alpha_2} = \Delta^{\alpha_1+\alpha_2}$ it follows [@UB00] $$\label{eq:diffrule_2} D^{\gamma}_t \, B_{\alpha} = \Delta ^{\gamma} \, B_{\alpha-\gamma}\,,$$ [*i.e.*]{} the fractional derivative of a fractional B-spline of degree $\alpha$ is a fractional spline of degree $\alpha-\gamma$.
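Rule (\[eq:diffrule\_tronc\]) says that $D_t^\gamma B_\alpha$ is obtained from the same finite difference applied to a truncated power of lowered exponent, so the sketch above extends to fractional derivatives by changing a single exponent. Again, this is our own illustration of the closed form, not code from the paper.

```python
def frac_bspline_deriv(alpha, gamma, t, kmax=60):
    """D_t^gamma B_alpha(t) = Delta^{alpha+1} t_+^{alpha-gamma} / Gamma(alpha-gamma+1),
    valid for 0 < gamma < alpha + 1/2 (eq. diffrule_tronc)."""
    c = 1.0
    s = 0.0
    for k in range(kmax + 1):
        tau = t - k
        if tau > 0:
            s += (-1) ** k * c * tau ** (alpha - gamma)
        c *= (alpha + 1 - k) / (k + 1)
    return s / math.gamma(alpha - gamma + 1)
```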
The fractional derivatives of the classical polynomial B-splines $B_n$ are fractional splines, too. This means that $D^{\gamma}_t \, B_{n}$ is not compactly supported when $\gamma$ is noninteger, reflecting the nonlocal behavior of the derivative operator of fractional order.\
In Figure \[fig:fractBsplines\] the fractional derivatives of $B_1$ (top right panel), $B_3$ (bottom left panel) and $B_{3.5}$ (bottom right panel) are displayed for different values of $\gamma$.

The fractional spline approximating spaces. {#sec:app_spaces}
===========================================

A property of the fractional B-splines that is useful for the construction of numerical methods for the solution of differential problems is [*refinability*]{}. In fact, the fractional B-splines are [*refinable functions*]{}, [*i.e.*]{} they satisfy the [*refinement equation*]{} $$B_\alpha(t) = \sum_{k\in \NN} \, a^{(\alpha)}_{k} \, B_\alpha(2\,t-k)\,, \qquad t \ge 0\,,$$ where the coefficients $$a^{(\alpha)}_{k} := \frac{1}{2^{\alpha}} {\alpha+1 \choose k}\,,\qquad k\in \NN\,,$$ are the [*mask coefficients*]{}. This means that the sequence of nested approximating spaces $$V^{(\alpha)}_j(\RR) = {\rm span} \,\{B_\alpha(2^j\, t -k), k \in \ZZ\}\,, \qquad j \in \ZZ\,,$$ forms a [*multiresolution analysis*]{} of $L_2(\RR)$. As a consequence, any function $f_j(t)$ belonging to $V^{(\alpha)}_j(\RR)$ can be expressed as $$f_j(t) = \sum_{k\in \ZZ}\, \lambda_{jk} \, B_\alpha(2^j\, t -k)\,,$$ where the coefficient sequence $\{\lambda_{j,k}\}$ belongs to $\ell_2(\ZZ)$. Moreover, any space $V^{(\alpha)}_j(\RR)$ reproduces polynomials up to degree $\lceil \alpha\rceil$, [*i.e.*]{} $x^d \in V^{(\alpha)}_j(\RR)$, $ 0 \le d \le \lceil \alpha\rceil$, while its approximation order is $\alpha +1$. We recall that the polynomial B-spline $B_n$ reproduces polynomials up to degree $n$ with approximation order $n+1$ [@UB00].
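The mask coefficients $a^{(\alpha)}_k = 2^{-\alpha}{\alpha+1 \choose k}$ can be generated with the same binomial recurrence used above, which makes the refinement equation easy to spot-check numerically. The sketch below reuses `frac_bspline` from the previous snippet; the truncation length of the mask is our own choice.

```python
def mask(alpha, kmax=40):
    """Mask coefficients a_k = binom(alpha+1, k) / 2^alpha, truncated at kmax."""
    c, out = 1.0, []
    for k in range(kmax + 1):
        out.append(c / 2 ** alpha)
        c *= (alpha + 1 - k) / (k + 1)
    return out

# Spot-check of the refinement equation B_alpha(t) = sum_k a_k B_alpha(2t - k):
alpha, t = 3.5, 1.7
lhs = frac_bspline(alpha, t)
rhs = sum(a * frac_bspline(alpha, 2 * t - k) for k, a in enumerate(mask(alpha)))
print(abs(lhs - rhs))   # approximately 0, up to rounding
```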
To solve boundary differential problems we need to construct a multiresolution analysis on a finite interval. For the sake of simplicity, in the following we will consider the interval $I=[0,1]$. A simple approach is to restrict the basis $\{B_\alpha(2^j\, t -k)\}$ to the interval $I$, [*i.e.*]{} $$\label{eq:Vj_int} V^{(\alpha)}_j(I) = {\rm span} \,\{B_\alpha(2^j\, t -k), t\in I, -N \le k \le 2^j-1\}\,, \qquad j_0 \le j\,,$$ where $N$ is a suitable index, chosen so that the significant part of $B_\alpha$ is contained in $[0,N+1]$, and $j_0$ is the starting refinement level. The drawback of this approach is its numerical instability and the difficulty in fulfilling the boundary conditions, since there are $2N$ boundary functions, [*i.e.*]{} the translates of $B_\alpha$ having indexes $ -N\le k \le -1$ and $2^j-N\le k \le 2^j-1$, that are nonzero at the boundaries. More suitable refinable bases can be obtained by the procedure given in [@GPP04; @GP04]. In particular, for the polynomial B-spline $B_n$ a B-basis $\{\phi_{\alpha,j,k}(t)\}$ with optimal approximation properties can be constructed. The internal functions $\phi_{\alpha,j,k}(t)=B_\alpha(2^j\, t -k)$, $0 \le k \le 2^j-1-n$, remain unchanged while the $2n$ boundary functions fulfill the boundary conditions $$\begin{array}{llcc} \phi_{\alpha,j,-1}(0) = 1\,, & \phi_{\alpha,j,k}^{(\nu)}(0) = 0\,, &\hbox{for} & 0\le \nu \le -k-2\,, \ -n \le k \le -2\,,\\ \\ \phi_{\alpha,j,2^j-1}(1) = 1\,, & \phi_{\alpha,j,2^j+k}^{(\nu)}(1) = 0\,, & \hbox{for} & 0\le \nu \le -k-2\,, \ -n \le k \le -2\,. \end{array}$$ Thus, the B-basis naturally fulfills Dirichlet boundary conditions.\
As we will show in the next section, the refinability of the fractional spline bases plays a crucial role in the construction of the collocation-Galerkin method.

The fractional spline collocation-Galerkin method. {#sec:Galerkin}
==================================================

In the collocation-Galerkin method proposed here, we look for an approximating function $u_{s,j}(t,x) \in V^{(\beta)}_s([0,T]) \otimes V^{(\alpha)}_j([0,1])$. Since just the ordinary first spatial derivative of $u_{s,j}$ is involved in the Galerkin method, we can assume $\alpha$ integer and use as basis functions for the space $V^{(\alpha)}_j([0,1])$ the refinable B-basis $\{\phi_{\alpha,j,k}\}$, [*i.e.*]{} $$\label{uj} u_{s,j}(t,x) = \sum_{k \in {\cal Z}_j} \, c_{s,j,k}(t) \, \phi_{\alpha,j,k}(x)\,,$$ where the unknown coefficients $c_{s,j,k}(t)$ belong to $V^{(\beta)}_s([0,T])$. Here, ${\cal Z}_j$ denotes the set of indexes $-n\le k \le 2^j-1$.\
The approximating function $u_{s,j}(t,x)$ solves the variational problem $$\label{varform} \left \{ \begin{array}{ll} \displaystyle \left ( D_t^\gamma u_{s,j},\phi_{\alpha,j,k} \right ) -\left ( \frac {\partial^2} {\partial x^2}\,u_{s,j},\phi_{\alpha,j,k} \right ) = \left ( f,\phi_{\alpha,j,k} \right )\,, & \quad k \in {\cal Z}_j\,, \\ \\ u_{s,j}(0, x) = 0\,, & x \in [0,1]\,, \\ \\ u_{s,j}(t, 0) = 0\,, \quad u_{s,j}(t,1) = 0\,, & t \in [0,T]\,, \end{array} \right.$$ where $(f,g)= \int_0^1 \, f\,g$.\
Now, writing (\[varform\]) in weak form and using (\[uj\]) we get the system of fractional ordinary differential equations $$\label{fracODE} \left \{ \begin{array}{ll} M_j \, D_t^\gamma\,C_{s,j}(t) + L_j\, C_{s,j}(t) = F_j(t)\,, & \qquad t \in [0,T]\,, \\ \\ C_{s,j}(0) = 0\,, \end{array} \right.$$ where $C_{s,j}(t)=(c_{s,j,k}(t))_{k\in {\cal Z}_j}$ is the unknown vector. The connecting coefficients, i.e. the entries of the mass matrix $M_j = (m_{j,k,i})_{k,i\in{\cal Z}_j}$, of the stiffness matrix $L_j = (\ell_{j,k,i})_{k,i\in{\cal Z}_j}$, and of the load vector $F_j(t)=(f_{j,k}(t))_{k\in {\cal Z}_j}$, are given by $$m_{j,k,i} = \int_0^1\, \phi_{\alpha,j,k}\, \phi_{\alpha,j,i}\,, \qquad \ell_{j,k,i} = \int_0^1 \, \phi'_{\alpha,j,k} \, \phi'_{\alpha,j,i}\,,$$ $$f_{j,k}(t) = \int_0^1\, f(t,\cdot)\, \phi_{\alpha,j,k}\,.$$ The entries of $M_j$ and $L_j$ can be evaluated explicitly using (\[eq:Balpha\]) and (\[eq:diffrule\_tronc\]), respectively, while the entries of $F_j(t)$ can be evaluated by quadrature formulas especially designed for wavelet methods [@CMP15; @GGP00].
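To make the assembly of (\[fracODE\]) concrete, the following Python sketch builds the mass and stiffness matrices $M_j$ and $L_j$ for the interior cubic B-splines on a dyadic grid of level $j$. It is a minimal illustration under our own simplifications: the boundary B-basis functions of [@GPP04; @GP04] are omitted, SciPy's `BSpline.basis_element` stands in for $B_3(2^j x - k)$, and a single Gauss rule per overlap replaces the wavelet-tailored quadratures cited above.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import fixed_quad

def interior_basis(j, k, degree=3):
    # B_degree(2^j x - k): support [k*h, (k + degree + 1)*h], with h = 2^-j
    h = 2.0 ** (-j)
    knots = (np.arange(degree + 2) + k) * h
    return BSpline.basis_element(knots, extrapolate=False)

def galerkin_matrices(j, degree=3):
    h = 2.0 ** (-j)
    n = 2 ** j - degree                          # interior functions only
    basis = [interior_basis(j, k, degree) for k in range(n)]
    dbasis = [b.derivative() for b in basis]
    M, L = np.zeros((n, n)), np.zeros((n, n))
    for k in range(n):
        for i in range(max(0, k - degree), min(n, k + degree + 1)):
            a, b = max(k, i) * h, (min(k, i) + degree + 1) * h   # support overlap
            # nan_to_num guards the (nan) values outside the base interval
            M[k, i], _ = fixed_quad(lambda x: np.nan_to_num(basis[k](x) * basis[i](x)), a, b, n=8)
            L[k, i], _ = fixed_quad(lambda x: np.nan_to_num(dbasis[k](x) * dbasis[i](x)), a, b, n=8)
    return M, L
```

An exact assembly would integrate knot span by knot span, where the integrands are plain polynomials; the single 8-point rule per overlap is adequate for an illustration.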
To solve the fractional differential system (\[fracODE\]) we use the collocation method introduced in [@PP16]. For an integer value of $T$, let $t_p = p/2^q$, $0\le p \le 2^q\,T$, where $q$ is a given nonnegative integer, be a set of dyadic nodes in the interval $[0,T]$. Now, assuming $$\label{ck} c_{s,j,k}(t) = \sum_{r\in {\cal R}_s} \, \lambda_{k,r}\,\chi_{\beta,s,r}(t) \,, \qquad k \in {\cal Z}_j\,,$$ where $\chi_{\beta,s,r}(t)=B_\beta(2^s\,t-r)$ with $B_\beta$ a fractional B-spline of fractional degree $\beta$, and collocating (\[fracODE\]) on the nodes $t_p$, we get the linear system $$\label{colllinearsys} (M_j\otimes A_s + L_j\otimes G_s) \,\Lambda_{s,j} =F_j\,,$$ where $\Lambda_{s,j}=(\lambda_{k,r})_{r\in {\cal R}_s,k\in {\cal Z}_j}$ is the unknown vector, $$\begin{array}{ll} A_s= \bigl( a_{p,r} \bigr)_{p\in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad a_{p,r} = D_t^\gamma \, \chi_{\beta,s,r}(t_p)\,, \\ \\ G_s=\bigl(g_{p,r}\bigr)_{p \in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad g_{p,r} = \chi_{\beta,s,r}(t_p)\,, \end{array}$$ are the collocation matrices and $$F_j=(f_{j,k}(t_p))_{k\in{\cal Z}_j,p \in {\cal P}_q}\,,$$ is the constant term. Here, ${\cal R}_s$ denotes the set of indexes $-\infty < r \le 2^s-1$ and ${\cal P}_q$ denotes the set of indexes $0<p\le 2^qT$. Since the fractional B-splines have fast decay, the series (\[ck\]) is well approximated by only a few terms and the linear system (\[colllinearsys\]) has, in practice, finite dimension, so that the unknown vector $\Lambda_{s,j}$ can be recovered by solving (\[colllinearsys\]) in the least squares sense.\
We notice that the entries of $G_s$, which involve just the values of $\chi_{\beta,s,r}$ at the dyadic nodes $t_p$, can be evaluated explicitly by (\[eq:Balpha\]). On the other hand, we must pay special attention to the evaluation of the entries of $A_s$, since they involve the values of the fractional derivative $D_t^\gamma\chi_{\beta,s,r}(t_p)$. As shown in Section \[sec:fractBspline\], they can be evaluated efficiently by the differentiation rule (\[eq:diffrule\_2\]).
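Once the building blocks are available, solving (\[colllinearsys\]) in the least squares sense is a one-liner in NumPy. The sketch below assumes the arrays `Mj`, `Lj` (from the Galerkin assembly), `As`, `Gs` (collocation matrices of fractional derivatives and point values of $\chi_{\beta,s,r}$, obtainable e.g. with `frac_bspline` and `frac_bspline_deriv` above, with the appropriate dyadic scaling), and `Fj` (the load samples $f_{j,k}(t_p)$) have already been computed; all names are ours, not the paper's.

```python
import numpy as np

def solve_collocation_galerkin(Mj, Lj, As, Gs, Fj):
    """Least-squares solution of (M_j (x) A_s + L_j (x) G_s) Lambda = F."""
    K = np.kron(Mj, As) + np.kron(Lj, Gs)
    lam, residuals, rank, _ = np.linalg.lstsq(K, Fj.ravel(), rcond=None)
    return lam   # the coefficients lambda_{k,r} of the expansions (ck)
```

The Kronecker structure follows from indexing the equations by $(k',p)$ and the unknowns by $(k,r)$, exactly as in (\[colllinearsys\]); `Fj.ravel()` must use the same ordering, with $k$ as the outer index.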
In the following theorem we prove that the fractional spline collocation-Galerkin method is convergent. First of all, let us introduce the Sobolev space on a bounded interval $$H^\mu(I):= \{v \in L^2(I): \exists \, \tilde v \in H^\mu (\RR) \ \hbox{\rm such that} \ \tilde v|_I=v\}, \quad \mu\geq 0\,,$$ equipped with the norm $$\|v\|_{\mu,I} = \inf_{\tilde v \in H^\mu(\RR), \tilde v|_I=v} \|\tilde v\|_{\mu,\RR}\,,$$ where $$H^\mu(\RR):= \{v: v\in L^2(\RR) \mbox{ and } (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \in L^2(\RR)\}, \quad \mu\geq 0\,,$$ is the usual Sobolev space with the norm $$\| v \| _{\mu,\RR} =\bigl \| (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \bigr \| _{0,\RR}\,.$$

\[Convergence\] Let $$H^\mu(I;H^{\tilde \mu}(\Omega)):= \{v(t,x): \| v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \in H^\mu(I)\}, \quad \mu, {\tilde \mu} \geq 0\,,$$ equipped with the norm $$\|v\|_{H^\mu(I;H^{\tilde \mu}(\Omega))} := \bigl \| \|v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \bigr\|_{\mu,I}\,.$$ Assume $u$ and $f$ in (\[eq:fracdiffeq\]) belong to $H^{\mu}([0,T];H^{\tilde \mu}([0,1]))$, $0\le \mu$, $0\le \tilde \mu$, and $H^{\mu-\gamma}([0,T];H^{{\tilde \mu}-2}([0,1]))$, $0\le \mu-\gamma$, $0\le \tilde \mu-2$, respectively. Then, the fractional spline collocation-Galerkin method is convergent, [*i.e.*]{}, $$\|u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \, \to 0 \quad \hbox{as} \quad s,j \to \infty\,.$$ Moreover, for $\gamma \le \mu \le \beta+1$ and $1 \le \tilde \mu \le \alpha +1$ the following error estimate holds: $$\| u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \leq \left (\eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \| u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,,$$ where $\eta_1$ and $\eta_2$ are two constants independent of $s$ and $j$.

Let $u_j$ be the exact solution of the variational problem (\[varform\]). Following a classical line of reasoning (cf. [@Th06; @FXY11; @DPS94]) we get $$\begin{array}{l} \| u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \leq \|u-u_{j}\|_{H^0([0,T];H^0([0,1]))} + \| u_j-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \\ \\ \rule{2cm}{0cm} \leq \eta_1 \, 2^{-j\tilde \mu}\, \| u\|_{H^0([0,T];H^{\tilde \mu}([0,1]))} + \eta_2 \, 2^{-s\mu} \, \| u\|_{H^\mu([0,T];H^0([0,1]))} \\ \\ \rule{2cm}{0cm} \leq \left ( \eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \, \|u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,. \end{array}$$

Numerical tests. {#sec:numtest}
================

To show the effectiveness of the fractional spline collocation-Galerkin method we solved the fractional diffusion problem (\[eq:fracdiffeq\]) for two different known terms $f(t,x)$ taken from [@FXY11]. In all the numerical tests we used as approximating space for the Galerkin method the (polynomial) cubic spline space. The B-spline $B_3$, its first derivative $B_3'$ and the B-basis $\{\phi_{3,3,k}\}$ are displayed in Figure \[fig:Bcubic\]. We notice that since the cubic B-spline is centrally symmetric in the interval $[0,4]$, the B-basis is centrally symmetric, too. All the numerical tests were performed on a laptop using a Python environment. Each test takes a few minutes.

[Figure \[fig:Bcubic\]. Left panel: the cubic B-spline (red line) and its first derivative (blue line). Right panel: the B-basis $\{\phi_{3,3,k}(x)\}$.]

Example 1
---------

In the first test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when $$f(t,x)=\frac{2}{\Gamma(3-\gamma)}\,t^{2-\gamma}\, \sin(2\pi x)+4\pi^2\,t^2\, \sin(2\pi x)\,.$$ The exact solution is $$u(t,x)=t^2\,\sin(2\pi x).$$ We used the fractional B-spline $B_{3.5}$ as approximating function for the collocation method and solved the problem for $\gamma = 1, 0.75, 0.5, 0.25$. The fractional B-spline $B_{3.5}$, its first derivative and its fractional derivatives are shown in Figure \[fig:fract\_Basis\] along with the fractional basis $\{\chi_{3.5,3,r}\}$. The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x) = u(t,x)-u_{s,j}(t,x)$ for $s=6$ and $j=6$ are displayed in Figure \[fig:numsol\_1\] for $\gamma = 0.5$. In all the numerical tests we set $q = s+1$.
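As a sanity check (ours, not from the paper), one can verify that this $f$ matches the exact solution: applying (\[eq:Capfrac\]) to $t^2$ gives $D_t^\gamma t^2 = \frac{2}{\Gamma(3-\gamma)}\, t^{2-\gamma}$, and differentiating $\sin(2\pi x)$ twice gives the factor $4\pi^2$, so that $$D_t^\gamma u - \frac{\partial^2 u}{\partial x^2} = \frac{2}{\Gamma(3-\gamma)}\, t^{2-\gamma} \sin(2\pi x) + 4\pi^2\, t^2 \sin(2\pi x) = f(t,x)\,.$$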
[Figure \[fig:fract\_Basis\]. Left panel: the fractional B-spline $B_{3.5}$ (green line), its first derivative (red line) and its fractional derivatives of order $\gamma = 0.75$ (blue line), $0.5$ (cyan line), $0.25$ (black line). Right panel: the fractional basis $\{\chi_{3.5,3,r}\}$.]

[Figure \[fig:numsol\_1\]. Example 1: the numerical solution (left panel) and the error (right panel) for $j=6$ and $s=6$ when $\gamma = 0.5$.]

We analyze the behavior of the error as the degree of the fractional B-spline $B_\beta$ increases. Figure \[fig:L2\_error\_1\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4; the four panels in the figure refer to different values of the order of the fractional derivative. For these tests we set $j=5$. The figure shows that for $s \le 4$ the error provided by the polynomial spline approximations is lower than the error provided by the fractional spline approximations. Nevertheless, in the latter case the error decreases, reaching at $s=5$ the same value as the polynomial spline error, or even a lower one. We notice that for $\gamma=1$ the errors provided by the polynomial spline approximations of different degrees have approximately the same values, while the error provided by the polynomial spline of degree 2 is lower in the case of fractional derivatives. In fact, it is well known that fractional derivatives are better approximated by less smooth functions [@Po99].
[Figure \[fig:L2\_error\_1\]. Example 1: the $L_2$-norm of the error as a function of $s$ for different values of $\gamma$ (four panels: $\gamma = 1, 0.75, 0.5, 0.25$). Each line corresponds to a spline of different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.]

Then, we analyze the convergence of the method for increasing values of $j$ and $s$. Table \[tab:conv\_js\_fract\_1\] reports the $L_2$-norm of the error for different values of $j$ and $s$ when using the fractional B-spline $B_{3.5}$ and $\gamma = 0.5$. The number of degrees-of-freedom is also reported. The table shows that the error decreases when $j$ increases and $s$ is held fixed. We notice that the error decreases very slightly when $j$ is held fixed and $s$ increases, since for these values of $s$ we have already reached the accuracy level we can expect for that value of $j$ (cf. Figure \[fig:L2\_error\_1\]). The higher values of the error for $s=7$ and $j=5,6$ are due to the numerical instabilities of the basis $\{\chi_{3.5,s,r}\}$, which result in a high condition number of the discretization matrix.
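From the $s=5$ row of Table \[tab:conv\_js\_fract\_1\] below one can read off empirical spatial convergence rates $\log_2(e_j/e_{j+1})$; the small script that follows is our own post-processing, not part of the paper.

```python
import math

errs = [0.02037, 0.00449, 0.00101, 0.00025]   # s = 5 row, j = 3..6, gamma = 0.5
rates = [math.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(rates)   # approximately [2.18, 2.15, 2.01]
```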
The error behaves similarly when we use the cubic B-spline space as the approximating space for the collocation method (cf. Table \[tab:conv\_js\_cubic\_1\]).

| $s \backslash j$               | 3              | 4              | 5              | 6              |
|--------------------------------|----------------|----------------|----------------|----------------|
| $\sharp V_j^{(\alpha)}([0,1])$ | 9              | 17             | 33             | 65             |
| 5                              | 0.02037 (369)  | 0.00449 (697)  | 0.00101 (1353) | 0.00025 (2665) |
| 6                              | 0.02067 (657)  | 0.00417 (1241) | 0.00093 (2409) | 0.00024 (4745) |
| 7                              | 0.01946 (1233) | 0.00381 (2329) | 0.00115 (4521) | 0.00117 (8905) |

: Example 1: The $L_2$-norm of the error for increasing values of $s$ (rows) and $j$ (columns) when using the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees of freedom. Here, $\gamma = 0.5$. \[tab:conv\_js\_fract\_1\]

| $s \backslash j$               | 3              | 4              | 5              | 6              |
|--------------------------------|----------------|----------------|----------------|----------------|
| $\sharp V_j^{(\alpha)}([0,1])$ | 9              | 17             | 33             | 65             |
| 5                              | 0.02121 (315)  | 0.00452 (595)  | 0.00104 (1155) | 0.00025 (2275) |
| 6                              | 0.02109 (603)  | 0.00443 (1139) | 0.00097 (2211) | 0.00023 (4355) |
| 7                              | 0.02037 (1179) | 0.00399 (2227) | 0.00115 (4323) | 0.00115 (8515) |

: Example 1: The $L_2$-norm of the error for increasing values of $s$ (rows) and $j$ (columns) when using the cubic B-spline. The numbers in parentheses are the degrees of freedom. Here, $\gamma = 0.5$. \[tab:conv\_js\_cubic\_1\]

Example 2
---------

In the second test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when
$$\begin{array}{lcl} f(t,x) & = & \displaystyle \frac{\pi t^{1-\gamma}}{2\Gamma(2-\gamma)} \left( \, {_1F_1}(1,2-\gamma,i\pi\,t) + \,{_1F_1}(1,2-\gamma,-i\pi\,t) \right) \, \sin(\pi\,x) \\ \\ & + & \pi^2 \, \sin(\pi\,t) \, \sin(\pi\,x)\,, \end{array}$$
where $_1F_1(\alpha,\beta,z)$ is Kummer's confluent hypergeometric function, defined as
$$_1F_1(\alpha,\beta, z) = \frac {\Gamma(\beta)}{\Gamma(\alpha)} \, \sum_{k\in \NN_0} \, \frac {\Gamma(\alpha+k)}{\Gamma(\beta+k)\, k!} \, z^k\,, \qquad \alpha \in \RR\,, \quad -\beta \notin \NN_0\,,$$
where $\NN_0 = \NN \cup \{0\}$ (cf. [@AS65 Chapter 13]). In this case the exact solution is
$$u(t,x)=\sin(\pi t)\,\sin(\pi x).$$

We performed the same set of numerical tests as in Example 1. The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x)$ for $s=5$ and $j=6$ are displayed in Figure \[fig:numsol\_2\] in the case when $\gamma = 0.5$. Figure \[fig:L2\_error\_2\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4 and $j=5$; the four panels in the figure refer to different values of the order of the fractional derivative. Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] report the $L_2$-norm of the error for different values of $j$ and $s$, for $\beta = 3.5$ and for the cubic B-spline, respectively. The number of degrees of freedom is also reported.

Figure \[fig:L2\_error\_2\] shows that the error is higher than in the previous example, but it decreases as $s$ increases, displaying a behavior very similar to that in Example 1. The values of the error in Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] are approximately the same as in Tables \[tab:conv\_js\_fract\_1\]-\[tab:conv\_js\_cubic\_1\].
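As a sanity check on the source term of Example 2, the short Python sketch below (our own illustration, not the authors' code) evaluates $f(t,x)$ with mpmath's `hyp1f1`, which accepts the complex arguments $\pm i\pi t$; the two Kummer functions are complex conjugates for real parameters, so their sum is real, as required. We assume $\gamma \in (0,1)$.

```python
# Minimal sketch: evaluate the source term f(t,x) of Example 2 and verify
# that the two Kummer functions combine to a real quantity.
import mpmath as mp

def source_term(t, x, gamma_ord):
    pref = mp.pi * t**(1 - gamma_ord) / (2 * mp.gamma(2 - gamma_ord))
    kummer = mp.hyp1f1(1, 2 - gamma_ord, 1j * mp.pi * t) \
           + mp.hyp1f1(1, 2 - gamma_ord, -1j * mp.pi * t)
    term1 = pref * kummer * mp.sin(mp.pi * x)        # imaginary parts cancel
    term2 = mp.pi**2 * mp.sin(mp.pi * t) * mp.sin(mp.pi * x)
    return mp.re(term1) + term2, mp.im(term1)

val, imag_part = source_term(0.3, 0.5, 0.5)
print(val)        # value of f(0.3, 0.5) for gamma = 0.5
print(imag_part)  # ~0, confirming the combination is real
```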
[Figure [fig:numsol_2]: Example 2. The numerical solution (left panel) and the error (right panel) when $j=6$ and $s=5$.]

[Figure [fig:L2_error_2]: Example 2. The $L_2$-norm of the error as a function of $s$ for different values of $\gamma$ (four panels: $\gamma = 1, 0.75, 0.5, 0.25$). Each line corresponds to a spline of different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.]
| $s \backslash j$               | 3              | 4              | 5              | 6              |
|--------------------------------|----------------|----------------|----------------|----------------|
| $\sharp V_j^{(\alpha)}([0,1])$ | 9              | 17             | 33             | 65             |
| 5                              | 0.01938 (369)  | 0.00429 (697)  | 0.00111 (1353) | 0.00042 (2665) |
| 6                              | 0.01809 (657)  | 0.00555 (1241) | 0.00507 (2409) | 0.00523 (4745) |
| 7                              | 0.01811 (1233) | 0.01691 (2329) | 0.01822 (4521) | 0.01858 (8905) |

: Example 2: The $L_2$-norm of the error for increasing values of $s$ (rows) and $j$ (columns) for the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees of freedom. Here, $\gamma=0.5$. \[tab:conv\_js\_fract\_2\]

| $s \backslash j$               | 3              | 4              | 5              | 6              |
|--------------------------------|----------------|----------------|----------------|----------------|
| $\sharp V_j^{(\alpha)}([0,1])$ | 9              | 17             | 33             | 65             |
| 5                              | 0.01909 (315)  | 0.00404 (595)  | 0.00102 (1155) | 0.00063 (2275) |
| 6                              | 0.01810 (603)  | 0.00546 (1139) | 0.00495 (2211) | 0.00511 (4355) |
| 7                              | 0.01805 (1179) | 0.01671 (2227) | 0.01801 (4323) | 0.01838 (8515) |

: Example 2: The $L_2$-norm of the error for increasing values of $s$ (rows) and $j$ (columns) for the cubic B-spline. The numbers in parentheses are the degrees of freedom. Here, $\gamma=0.5$. \[tab:conv\_js\_cubic\_2\]

Conclusion {#sec:concl}
==========

We proposed a fractional spline collocation-Galerkin method to solve the time-fractional diffusion equation. The novelty of the method is the use of fractional spline spaces as approximating spaces, so that the fractional derivative of the approximating function can be evaluated easily by an explicit differentiation rule that involves the generalized finite difference operator. The numerical tests show that the method achieves good accuracy, so that it can be effectively used to solve fractional differential problems. The numerical instabilities arising in the fractional basis when $s$ increases can be reduced following the approach in [@GPP04], which allows us to construct stable bases on the interval. Moreover, the ill-conditioning of the linear system (\[colllinearsys\]) can be reduced by using Krylov subspace iterative methods, such as the method proposed in [@CPSV17]. Finally, we notice that, following the procedure given in [@GPP04], fractional wavelet bases on a finite interval can be constructed, so that the proposed method can be generalized to fractional wavelet approximating spaces.

Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions, volume 55. Dover Publications, 1965.

Dumitru Baleanu, Kai Diethelm, Enrico Scalas, and Juan J. Trujillo. Fractional calculus: models and numerical methods. , 3:10–16, 2012.

Francesco Calabrò, Carla Manni, and Francesca Pitolli. Computation of quadrature rules for integration with respect to refinable functions on assigned nodes.
Applied Numerical Mathematics, 90:168–189, 2015.

Daniela Calvetti, Francesca Pitolli, Erkki Somersalo, and Barbara Vantaggi. Bayes meets Krylov: preconditioning CGLS for underdetermined systems. , in press.

Wolfgang Dahmen, Siegfried Prössdorf, and Reinhold Schneider. Wavelet approximation methods for pseudodifferential equations I: stability and convergence. Mathematische Zeitschrift, 215(1):583–620, 1994.

Neville Ford, Jingyu Xiao, and Yubin Yan. A finite element method for time fractional partial differential equations. Fractional Calculus and Applied Analysis, 14(3):454–474, 2011.

Walter Gautschi, Laura Gori, and Francesca Pitolli. Gauss quadrature for refinable weight functions. Applied and Computational Harmonic Analysis, 8(3):249–257, 2000.

Laura Gori, Laura Pezza, and Francesca Pitolli. Recent results on wavelet bases on the interval generated by GP refinable functions. Applied Numerical Mathematics, 51(4):549–563, 2004.

Laura Gori and Francesca Pitolli. Refinable functions and positive operators. , 49(3):381–393, 2004.

Rudolf Hilfer. Applications of Fractional Calculus in Physics. World Scientific, 2000.

Francesco Mainardi. Fractional Calculus and Waves in Linear Viscoelasticity. World Scientific, 2010.

Arvet Pedas and Enn Tamme. Numerical solution of nonlinear fractional differential equations by spline collocation methods. Journal of Computational and Applied Mathematics, 255:216–230, 2014.

Laura Pezza and Francesca Pitolli. A multiscale collocation method for fractional differential problems. Mathematics and Computers in Simulation, 147:210–219, 2018.

Igor Podlubny. Fractional Differential Equations, volume 198. Academic Press, 1998.

Larry L. Schumaker. Spline Functions: Basic Theory. Cambridge University Press, 2007.

Hari Mohan Srivastava and Juan J. Trujillo. . Elsevier, 2006.

Vasily E. Tarasov. Fractional Dynamics. Springer Science & Business Media, 2011.

Vidar Thomée. Galerkin Finite Element Methods for Parabolic Problems. Springer-Verlag, 2006.

Michael Unser and Thierry Blu. Fractional splines and wavelets. SIAM Review, 42(1):43–67, 2000.

Mohsen Zayernouri and George Em Karniadakis. Fractional spectral collocation method. SIAM Journal on Scientific Computing, 36(1):A40–A62, 2014.

[^1]: *Dept. SBAI, University of Roma "La Sapienza"*, Via A. Scarpa 16, 00161 Roma, Italy. e-mail: laura.pezza@sbai.uniroma1.it

[^2]: *Dept. SBAI, University of Roma "La Sapienza"*, Via A. Scarpa 16, 00161 Roma, Italy. e-mail:
{ "pile_set_name": "ArXiv" }
“I can’t express how extremely pleased we are with Jenny and her work! We had tons of questions which she quickly, thoroughly and happily answered. She walked us through the process and asked our opinion while also providing her own input along the way. She sent pictures once printed and even sent us the original water color of the map that she created for us. I would — and certainly will — recommend her to all friends and family moving forward!” “I don’t think I have one bad thing to say about this purchase. The seller was extremely accommodating, wonderful to work with, and made this experience such a joy. Our invitations, RSVP postcards and reception cards are perfect. The coloring, font, and design look beautiful in person. We also chose colored envelopes and return address printing, which I think turned out amazing. Jenny had incredible suggestions and made the entire process so smooth. If you’re on the fence about who to work with or where to go for invitations, stop looking! You’ve found it!”
{ "pile_set_name": "Pile-CC" }
Press

New Wellesley Business Wants To Teach The World To Sew!

Lauren Johnston is doing her part to ensure sewing doesn’t become a lost art. Sew Easy, a business she started 15 years ago in Needham to teach kids and teens how to sew, expanded this summer into a second-floor space in Wellesley at 159 Linden St. 3C. Sew Easy, which begins its next 8-week session in Wellesley on Sept. 17, has taught more than 9,000 girls and boys to sew over the years. Students range in age from 5.5 to about 16, and classes are held after school and on Saturdays. Sew Easy charges about $325 per session, which includes materials. The Wellesley location has 12 sewing machines.

“I want to teach the world to sew,” says Johnston, whose programs mainly involve using sewing machines, though they also include hand sewing. She says that she thinks sewing is coming back, even though most kids’ parents don’t sew and even though sewing classes are rare in school these days. “They’ve seen grandparents sewing,” she says.

At Sew Easy, kids learn how to sew buttonholes, zippers, pockets and hems, as well as how to thread the sewing machines. They get to use a variety of fabrics, including cotton and fleece. Students complete between 5 and 10 projects per session, creating clothes, bags and even American Girl Doll outfits.

While Sew Easy’s two locations are just a few miles apart, Johnston says that for parents shuttling their children from activity to activity after school, the closer the better. She said parents of kids who have taken classes at the Needham spot have been begging her to open a place in Wellesley.

It would be a stretch to call Wellesley the sewing capital of the world, but there may be more needle-and-thread action around here than you might think. For example, there’s the Button Box shop on Rte. 9 that caters to quilters, the Wellesley Needlepoint Collection on Grove Street, and local artist Abby Glassenberg has made a name for herself via the soft sculptures she sews.

Classes start at 3:15 p.m., but children’s noses press up against the glass of the Needham storefront long before the door is unlocked. Here, six days a week, the nearly lost art of sewing is revered and creativity is unleashed.

By Bob Brown | THE SWELLESLEY REPORT

Enriching Fabric of Their Lives

Lauren Johnston says she has taught over 8,000 students since launching Sew Easy 13 years ago. Last week she opened a second branch in West Roxbury, where she hopes to further disseminate an old-fashioned skill that is still indispensable in this high-tech age.

“In an instant, children feel empowered,” said Johnston, whose students, male and female, range in age from elementary through high school. “They choose the project that they want to work on and the fabrics that they want to use, and are excited and proud about expressing their creativity.”

Johnston has over 300 projects for her students to choose from – book bags, fleece vests, ponchos, mittens, quilts, American Girl Doll accessories, pajama pants. Some use their time to alter clothing, and during the holiday season they tend to make gifts.

Each time someone finishes a project, they ring a large bell and the class goes silent in order to see what has been completed and give the student a round of applause. Johnston then encourages them to place their project in the storefront window “to show the world what they’ve made.” Some projects are basic, like small decorative pillows, and others are more elaborate.
One girl, she said, walked in with a sketch of a lobster costume and made it for Halloween. An eight-week session of one class a week costs $299, which includes all materials. On average, classes are capped at 18 students. With 15 sewing machines, no one has to wait to use one, since not every student needs a machine all the time. “They help each other out and really feel like they’re a part of this place,” she said. Johnston said parents often tell her that their child is creating things using tape and staples to hold the fabric together. Those kids, she said, are really ready to learn. But it’s not just the creative and technical aspects of teaching that bring Johnston satisfaction, it’s also observing the emotional growth of her students. “There is no gossip allowed in here,” said Johnston. “I tell the kids ‘I’d rather hear about you.’ ” Sometimes during a snack break, she will throw out questions for discussion, such as “What is something positive that we would never guess about you?” or “What are some experiences that you would like to have but aren’t yet old enough?” And on occasion she will read aloud an inspirational thought for the day. “They laugh, but then they quiet down and listen,” said Johnston, who hopes that insightful, introspective words will encourage self-awareness and confidence. By Susan Chaityn Lebovits | THE BOSTON GLOBE Sew Easy opens new location in Wellesley Lauren Johnston, owner and founder of Sew Easy, has been teaching children how to sew since 1995. She said she’s taught the “dying art” to more than 9,000 kids and teens in the past 15 years. Johnston recently expanded her operation to Wellesley. The Townsman caught up with Johnston at the new location, 159 Linden St., to ask her about her business and why so many children still want to learn to sew. “I always loved sewing,” Johnston said. “So my children always wanted to get at the machine. I saw that it was easy to teach them and then their friends started joining and the masses came.” What prompted your expansion to Wellesley? “Wellesley wanted us to come and so I said, ‘Why not?’” Johnston said. “Wellesley asked us over and over to come here.” Why do you think kids and teens want to sew when they could just buy everything they needed? “It’s not about being able to buy it,” Johnston said. “When you create something you feel empowered. Most of the last generation doesn’t know how to sew so kids feel empowered because their parents don’t know how to sew.” What kind of items do the students sew? “We have over 300 projects,” Johnston said. She said that students make everything from clothing to American Girl Doll accessories and even, sometimes, dog clothing. Do any boys come to these classes? “I usually have one or two in a class,” she said. Classes range in size from 12 to 20. Where did you learn to sew? “[I learned from] my grandma and Home Ec in junior high.” Is that where the passion began? “Yes, it felt so good to complete a project from scratch and I couldn’t believe I made it myself. The stuff I made when I was young was so elaborate,” Johnston said. “I really loved it.” What would you say to a kid who said sewing is boring? “I’ve never heard that once from the 9,000 kids I’ve taught,” Johnston said. “So I don’t think I have to answer that.” “It’s word of mouth,” she added. “It’s the kids that do all the advertising. I’m always full despite the bad economy.” How do you stay current? “We know what kids like,” Johnston said. “After 15 years and 9,000 kids, we know. 
I have to buy fabric with peace signs, turtles, polka dots and monkeys.”

What do you love about teaching?

“I love to teach,” Johnston said, “and some of the kids come in nervous, but within an hour and a half they don’t want to leave.

“I love to see the shift in kids when they start feeling empowered. I get to witness the shift from knowing nothing to feeling like they know a lot in a short amount of time – it’s instant.

“I ask them, ‘do you feel like a good sewer?’ and they say ‘yes.’”

“Our mission is to instill confidence and creativity. And we feel every child needs to feel success. We want them to walk away with that feeling, and completing a project makes them feel that way.”

Johnston said it’s her particular system that helps her students find this success.

“The system is the reason why we are successful,” Johnston said. “It allows kids to create quicker. The way we design our things is more streamlined. The detail is not like yesteryear’s.”

“The kids that come in with learning disabilities,” Johnston said, “in 99 percent of them I don’t find what they’ve been diagnosed with here. And I’m looking for it, but I never find it. They just take it in and grasp it.”
{ "pile_set_name": "Pile-CC" }
<?xml version="1.0" ?> <component id="root" name="root"> <component id="system" name="system"> <!--McPAT will skip the components if number is set to 0 --> <param name="number_of_cores" value="64"/> <param name="number_of_L1Directories" value="0"/> <param name="number_of_L2Directories" value="0"/> <param name="number_of_L2s" value="64"/> <!-- This number means how many L2 clusters in each cluster there can be multiple banks/ports --> <param name="number_of_L3s" value="0"/> <!-- This number means how many L3 clusters --> <param name="number_of_NoCs" value="1"/> <param name="homogeneous_cores" value="1"/><!--1 means homo --> <param name="homogeneous_L2s" value="1"/> <param name="homogeneous_L1Directorys" value="1"/> <param name="homogeneous_L2Directorys" value="1"/> <param name="homogeneous_L3s" value="1"/> <param name="homogeneous_ccs" value="1"/><!--cache coherece hardware --> <param name="homogeneous_NoCs" value="1"/> <param name="core_tech_node" value="22"/><!-- nm --> <param name="target_core_clockrate" value="3500"/><!--MHz --> <param name="temperature" value="360"/> <!-- Kelvin --> <param name="number_cache_levels" value="2"/> <param name="interconnect_projection_type" value="0"/><!--0: agressive wire technology; 1: conservative wire technology --> <param name="device_type" value="0"/><!--0: HP(High Performance Type); 1: LSTP(Low standby power) 2: LOP (Low Operating Power) --> <param name="longer_channel_device" value="1"/><!-- 0 no use; 1 use when possible --> <param name="machine_bits" value="64"/> <param name="virtual_address_width" value="64"/> <param name="physical_address_width" value="52"/> <param name="virtual_memory_page_size" value="4096"/> <stat name="total_cycles" value="100000"/> <stat name="idle_cycles" value="0"/> <stat name="busy_cycles" value="100000"/> <!--This page size(B) is complete different from the page size in Main memo secction. this page size is the size of virtual memory from OS/Archi perspective; the page size in Main memo secction is the actuall physical line in a DRAM bank --> <!-- *********************** cores ******************* --> <component id="system.core0" name="core0"> <!-- Core property --> <param name="clock_rate" value="3500"/> <param name="instruction_length" value="32"/> <param name="opcode_width" value="9"/> <!-- address width determins the tag_width in Cache, LSQ and buffers in cache controller default value is machine_bits, if not set --> <param name="machine_type" value="1"/><!-- 1 inorder; 0 OOO--> <!-- inorder/OoO --> <param name="number_hardware_threads" value="4"/> <!-- number_instruction_fetch_ports(icache ports) is always 1 in single-thread processor, it only may be more than one in SMT processors. 
BTB ports always equals to fetch ports since branch information in consective branch instructions in the same fetch group can be read out from BTB once.--> <param name="fetch_width" value="1"/> <!-- fetch_width determins the size of cachelines of L1 cache block --> <param name="number_instruction_fetch_ports" value="1"/> <param name="decode_width" value="1"/> <!-- decode_width determins the number of ports of the renaming table (both RAM and CAM) scheme --> <param name="issue_width" value="1"/> <!-- issue_width determins the number of ports of Issue window and other logic as in the complexity effective proccessors paper; issue_width==dispatch_width --> <param name="commit_width" value="1"/> <!-- commit_width determins the number of ports of register files --> <param name="fp_issue_width" value="1"/> <param name="prediction_width" value="0"/> <!-- number of branch instructions can be predicted simultannouesl--> <!-- Current version of McPAT does not distinguish int and floating point pipelines Theses parameters are reserved for future use.--> <param name="pipelines_per_core" value="1,1"/> <!--integer_pipeline and floating_pipelines, if the floating_pipelines is 0, then the pipeline is shared--> <param name="pipeline_depth" value="6,6"/> <!-- pipeline depth of int and fp, if pipeline is shared, the second number is the average cycles of fp ops --> <!-- issue and exe unit--> <param name="ALU_per_core" value="1"/> <!-- contains an adder, a shifter, and a logical unit --> <param name="MUL_per_core" value="1"/> <!-- For MUL and Div --> <param name="FPU_per_core" value="0.125"/> <!-- buffer between IF and ID stage --> <param name="instruction_buffer_size" value="16"/> <!-- buffer between ID and sche/exe stage --> <param name="decoded_stream_buffer_size" value="16"/> <param name="instruction_window_scheme" value="0"/><!-- 0 PHYREG based, 1 RSBASED--> <!-- McPAT support 2 types of OoO cores, RS based and physical reg based--> <param name="instruction_window_size" value="16"/> <param name="fp_instruction_window_size" value="16"/> <!-- the instruction issue Q as in Alpha 21264; The RS as in Intel P6 --> <param name="ROB_size" value="80"/> <!-- each in-flight instruction has an entry in ROB --> <!-- registers --> <param name="archi_Regs_IRF_size" value="32"/> <param name="archi_Regs_FRF_size" value="32"/> <!-- if OoO processor, phy_reg number is needed for renaming logic, renaming logic is for both integer and floating point insts. --> <param name="phy_Regs_IRF_size" value="80"/> <param name="phy_Regs_FRF_size" value="80"/> <!-- rename logic --> <param name="rename_scheme" value="0"/> <!-- can be RAM based(0) or CAM based(1) rename scheme RAM-based scheme will have free list, status table; CAM-based scheme have the valid bit in the data field of the CAM both RAM and CAM need RAM-based checkpoint table, checkpoint_depth=# of in_flight instructions; Detailed RAT Implementation see TR --> <param name="register_windows_size" value="8"/> <!-- how many windows in the windowed register file, sun processors; no register windowing is used when this number is 0 --> <!-- In OoO cores, loads and stores can be issued whether inorder(Pentium Pro) or (OoO)out-of-order(Alpha), They will always try to exeute out-of-order though. 
--> <param name="LSU_order" value="inorder"/> <param name="store_buffer_size" value="32"/> <!-- By default, in-order cores do not have load buffers --> <param name="load_buffer_size" value="32"/> <!-- number of ports refer to sustainable concurrent memory accesses --> <param name="memory_ports" value="1"/> <!-- max_allowed_in_flight_memo_instructions determins the # of ports of load and store buffer as well as the ports of Dcache which is connected to LSU --> <!-- dual-pumped Dcache can be used to save the extra read/write ports --> <param name="RAS_size" value="32"/> <!-- general stats, defines simulation periods;require total, idle, and busy cycles for senity check --> <!-- please note: if target architecture is X86, then all the instrucions refer to (fused) micro-ops --> <stat name="total_instructions" value="800000"/> <stat name="int_instructions" value="600000"/> <stat name="fp_instructions" value="20000"/> <stat name="branch_instructions" value="0"/> <stat name="branch_mispredictions" value="0"/> <stat name="load_instructions" value="100000"/> <stat name="store_instructions" value="100000"/> <stat name="committed_instructions" value="800000"/> <stat name="committed_int_instructions" value="600000"/> <stat name="committed_fp_instructions" value="20000"/> <stat name="pipeline_duty_cycle" value="0.6"/><!--<=1, runtime_ipc/peak_ipc; averaged for all cores if homogenous --> <!-- the following cycle stats are used for heterogeneouse cores only, please ignore them if homogeneouse cores --> <stat name="total_cycles" value="100000"/> <stat name="idle_cycles" value="0"/> <stat name="busy_cycles" value="100000"/> <!-- instruction buffer stats --> <!-- ROB stats, both RS and Phy based OoOs have ROB performance simulator should capture the difference on accesses, otherwise, McPAT has to guess based on number of commited instructions. --> <stat name="ROB_reads" value="263886"/> <stat name="ROB_writes" value="263886"/> <!-- RAT accesses --> <stat name="rename_accesses" value="263886"/> <stat name="fp_rename_accesses" value="263886"/> <!-- decode and rename stage use this, should be total ic - nop --> <!-- Inst window stats --> <stat name="inst_window_reads" value="263886"/> <stat name="inst_window_writes" value="263886"/> <stat name="inst_window_wakeup_accesses" value="263886"/> <stat name="fp_inst_window_reads" value="263886"/> <stat name="fp_inst_window_writes" value="263886"/> <stat name="fp_inst_window_wakeup_accesses" value="263886"/> <!-- RF accesses --> <stat name="int_regfile_reads" value="1600000"/> <stat name="float_regfile_reads" value="40000"/> <stat name="int_regfile_writes" value="800000"/> <stat name="float_regfile_writes" value="20000"/> <!-- accesses to the working reg --> <stat name="function_calls" value="5"/> <stat name="context_switches" value="260343"/> <!-- Number of Windowes switches (number of function calls and returns)--> <!-- Alu stats by default, the processor has one FPU that includes the divider and multiplier. The fpu accesses should include accesses to multiplier and divider --> <stat name="ialu_accesses" value="800000"/> <stat name="fpu_accesses" value="10000"/> <stat name="mul_accesses" value="100000"/> <stat name="cdb_alu_accesses" value="1000000"/> <stat name="cdb_mul_accesses" value="0"/> <stat name="cdb_fpu_accesses" value="0"/> <!-- multiple cycle accesses should be counted multiple times, otherwise, McPAT can use internal counter for different floating point instructions to get final accesses. 
But that needs detailed info for floating point inst mix --> <!-- currently the performance simulator should make sure all the numbers are final numbers, including the explicit read/write accesses, and the implicite accesses such as replacements and etc. Future versions of McPAT may be able to reason the implicite access based on param and stats of last level cache The same rule applies to all cache access stats too! --> <!-- following is AF for max power computation. Do not change them, unless you understand them--> <stat name="IFU_duty_cycle" value="0.25"/> <stat name="LSU_duty_cycle" value="0.25"/> <stat name="MemManU_I_duty_cycle" value="1"/> <stat name="MemManU_D_duty_cycle" value="0.25"/> <stat name="ALU_duty_cycle" value="0.9"/> <stat name="MUL_duty_cycle" value="0.5"/> <stat name="FPU_duty_cycle" value="0.4"/> <stat name="ALU_cdb_duty_cycle" value="0.9"/> <stat name="MUL_cdb_duty_cycle" value="0.5"/> <stat name="FPU_cdb_duty_cycle" value="0.4"/> <component id="system.core0.predictor" name="PBT"> <!-- branch predictor; tournament predictor see Alpha implementation --> <param name="local_predictor_size" value="10,3"/> <param name="local_predictor_entries" value="1024"/> <param name="global_predictor_entries" value="4096"/> <param name="global_predictor_bits" value="2"/> <param name="chooser_predictor_entries" value="4096"/> <param name="chooser_predictor_bits" value="2"/> <!-- These parameters can be combined like below in next version <param name="load_predictor" value="10,3,1024"/> <param name="global_predictor" value="4096,2"/> <param name="predictor_chooser" value="4096,2"/> --> </component> <component id="system.core0.itlb" name="itlb"> <param name="number_entries" value="64"/> <stat name="total_accesses" value="800000"/> <stat name="total_misses" value="4"/> <stat name="conflicts" value="0"/> <!-- there is no write requests to itlb although writes happen to itlb after miss, which is actually a replacement --> </component> <component id="system.core0.icache" name="icache"> <!-- there is no write requests to itlb although writes happen to it after miss, which is actually a replacement --> <param name="icache_config" value="16384,32,4,1,1,3,8,0"/> <!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. 
core clock,output_width, cache policy --> <!-- cache_policy;//0 no write or write-though with non-write allocate;1 write-back with write-allocate --> <param name="buffer_sizes" value="16, 16, 16,0"/> <!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size--> <stat name="read_accesses" value="200000"/> <stat name="read_misses" value="0"/> <stat name="conflicts" value="0"/> </component> <component id="system.core0.dtlb" name="dtlb"> <param name="number_entries" value="64"/> <stat name="total_accesses" value="200000"/> <stat name="total_misses" value="4"/> <stat name="conflicts" value="0"/> </component> <component id="system.core0.dcache" name="dcache"> <!-- all the buffer related are optional --> <param name="dcache_config" value="8192,16,4,1,1,3,16,0"/> <param name="buffer_sizes" value="16, 16, 16, 16"/> <!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size--> <stat name="read_accesses" value="200000"/> <stat name="write_accesses" value="27276"/> <stat name="read_misses" value="1632"/> <stat name="write_misses" value="183"/> <stat name="conflicts" value="0"/> </component> <component id="system.core0.BTB" name="BTB"> <!-- all the buffer related are optional --> <param name="BTB_config" value="8192,4,2,1, 1,3"/> <!-- the parameters are capacity,block_width,associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,--> </component> </component> <component id="system.L1Directory0" name="L1Directory0"> <param name="Directory_type" value="0"/> <!--0 cam based shadowed tag. 1 directory cache --> <param name="Dir_config" value="2048,1,0,1, 4, 4,8"/> <!-- the parameters are capacity,block_width, associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,--> <param name="buffer_sizes" value="8, 8, 8, 8"/> <!-- all the buffer related are optional --> <param name="clockrate" value="3500"/> <param name="ports" value="1,1,1"/> <!-- number of r, w, and rw search ports --> <param name="device_type" value="0"/> <!-- altough there are multiple access types, Performance simulator needs to cast them into reads or writes e.g. the invalidates can be considered as writes --> <stat name="read_accesses" value="800000"/> <stat name="write_accesses" value="27276"/> <stat name="read_misses" value="1632"/> <stat name="write_misses" value="183"/> <stat name="conflicts" value="20"/> <stat name="duty_cycle" value="0.45"/> </component> <component id="system.L2Directory0" name="L2Directory0"> <param name="Directory_type" value="1"/> <!--0 cam based shadowed tag. 1 directory cache --> <param name="Dir_config" value="1048576,16,16,1,2, 100"/> <!-- the parameters are capacity,block_width, associativity,bank, throughput w.r.t. core clock, latency w.r.t. core clock,--> <param name="buffer_sizes" value="8, 8, 8, 8"/> <!-- all the buffer related are optional --> <param name="clockrate" value="3500"/> <param name="ports" value="1,1,1"/> <!-- number of r, w, and rw search ports --> <param name="device_type" value="0"/> <!-- altough there are multiple access types, Performance simulator needs to cast them into reads or writes e.g. 
the invalidates can be considered as writes --> <stat name="read_accesses" value="58824"/> <stat name="write_accesses" value="27276"/> <stat name="read_misses" value="1632"/> <stat name="write_misses" value="183"/> <stat name="conflicts" value="100"/> <stat name="duty_cycle" value="0.45"/> </component> <component id="system.L20" name="L20"> <!-- all the buffer related are optional --> <param name="L2_config" value="1048576,64,16,1, 4,23, 64, 1"/> <!-- consider 4-way bank interleaving for Niagara 1 --> <!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. core clock,output_width, cache policy --> <param name="buffer_sizes" value="16, 16, 16, 16"/> <!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size--> <param name="clockrate" value="3500"/> <param name="ports" value="1,1,1"/> <!-- number of r, w, and rw ports --> <param name="device_type" value="0"/> <stat name="read_accesses" value="200000"/> <stat name="write_accesses" value="0"/> <stat name="read_misses" value="0"/> <stat name="write_misses" value="0"/> <stat name="conflicts" value="0"/> <stat name="duty_cycle" value="0.5"/> </component> <!--**********************************************************************--> <component id="system.L30" name="L30"> <param name="L3_config" value="1048576,64,16,1, 2,100, 64,1"/> <!-- the parameters are capacity,block_width, associativity, bank, throughput w.r.t. core clock, latency w.r.t. core clock,output_width, cache policy --> <param name="clockrate" value="3500"/> <param name="ports" value="1,1,1"/> <!-- number of r, w, and rw ports --> <param name="device_type" value="0"/> <param name="buffer_sizes" value="16, 16, 16, 16"/> <!-- cache controller buffer sizes: miss_buffer_size(MSHR),fill_buffer_size,prefetch_buffer_size,wb_buffer_size--> <stat name="read_accesses" value="58824"/> <stat name="write_accesses" value="27276"/> <stat name="read_misses" value="1632"/> <stat name="write_misses" value="183"/> <stat name="conflicts" value="0"/> <stat name="duty_cycle" value="0.35"/> </component> <!--**********************************************************************--> <component id="system.NoC0" name="noc0"> <param name="clockrate" value="3500"/> <param name="type" value="1"/> <!-- 1 NoC, O bus --> <param name="horizontal_nodes" value="8"/> <param name="vertical_nodes" value="8"/> <param name="has_global_link" value="1"/> <!-- 1 has global link, 0 does not have global link --> <param name="link_throughput" value="1"/><!--w.r.t clock --> <param name="link_latency" value="1"/><!--w.r.t clock --> <!-- througput >= latency --> <!-- Router architecture --> <param name="input_ports" value="5"/> <param name="output_ports" value="5"/> <param name="virtual_channel_per_port" value="1"/> <!-- input buffer; in classic routers only input ports need buffers --> <param name="flit_bits" value="256"/> <param name="input_buffer_entries_per_vc" value="4"/><!--VCs within the same ports share input buffers whose size is propotional to the number of VCs--> <param name="chip_coverage" value="1"/> <!-- When multiple NOC present, one NOC will cover part of the whole chip. 
chip_coverage <=1 --> <stat name="total_accesses" value="360000"/> <!-- This is the number of total accesses within the whole network not for each router --> <stat name="duty_cycle" value="0.1"/> </component> <!--**********************************************************************--> <component id="system.mem" name="mem"> <!-- Main memory property --> <param name="mem_tech_node" value="32"/> <param name="device_clock" value="200"/><!--MHz, this is clock rate of the actual memory device, not the FSB --> <param name="peak_transfer_rate" value="3200"/><!--MB/S--> <param name="internal_prefetch_of_DRAM_chip" value="4"/> <!-- 2 for DDR, 4 for DDR2, 8 for DDR3...--> <!-- the device clock, peak_transfer_rate, and the internal prefetch decide the DIMM property --> <!-- above numbers can be easily found from Wikipedia --> <param name="capacity_per_channel" value="4096"/> <!-- MB --> <!-- capacity_per_Dram_chip=capacity_per_channel/number_of_dimms/number_ranks/Dram_chips_per_rank Current McPAT assumes single DIMMs are used.--> <param name="number_ranks" value="2"/> <param name="num_banks_of_DRAM_chip" value="8"/> <param name="Block_width_of_DRAM_chip" value="64"/> <!-- B --> <param name="output_width_of_DRAM_chip" value="8"/> <!--number of Dram_chips_per_rank=" 72/output_width_of_DRAM_chip--> <!--number of Dram_chips_per_rank=" 72/output_width_of_DRAM_chip--> <param name="page_size_of_DRAM_chip" value="8"/> <!-- 8 or 16 --> <param name="burstlength_of_DRAM_chip" value="8"/> <stat name="memory_accesses" value="1052"/> <stat name="memory_reads" value="1052"/> <stat name="memory_writes" value="1052"/> </component> <component id="system.mc" name="mc"> <!-- Memeory controllers are for DDR(2,3...) DIMMs --> <!-- current version of McPAT uses published values for base parameters of memory controller improvments on MC will be added in later versions. --> <param name="mc_clock" value="200"/><!--DIMM IO bus clock rate MHz DDR2-400 for Niagara 1--> <param name="peak_transfer_rate" value="3200"/><!--MB/S--> <param name="llc_line_length" value="64"/><!--B--> <param name="number_mcs" value="4"/> <!-- current McPAT only supports homogeneous memory controllers --> <param name="memory_channels_per_mc" value="1"/> <param name="number_ranks" value="2"/> <!-- # of ranks of each channel--> <param name="req_window_size_per_channel" value="32"/> <param name="IO_buffer_size_per_channel" value="32"/> <param name="databus_width" value="128"/> <param name="addressbus_width" value="51"/> <!-- McPAT will add the control bus width to the addressbus width automatically --> <stat name="memory_accesses" value="33333"/> <stat name="memory_reads" value="16667"/> <stat name="memory_writes" value="16667"/> <!-- McPAT does not track individual mc, instead, it takes the total accesses and calculate the average power per MC or per channel. This is sufficent for most application. Further trackdown can be easily added in later versions. --> </component> <!--**********************************************************************--> </component> </component>
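<!-- Note (illustration, not part of the McPAT distribution): because this
     configuration is plain XML, parameters and statistics can be extracted
     programmatically, e.g. for parameter sweeps. A minimal Python sketch
     using only the standard library ("mcpat_config.xml" is a placeholder
     file name):

     import xml.etree.ElementTree as ET

     def component_values(path, component_id):
         """Return {name: value} for all param/stat children of the
         component whose id matches component_id."""
         tree = ET.parse(path)
         for comp in tree.iter("component"):
             if comp.get("id") == component_id:
                 return {child.get("name"): child.get("value")
                         for child in comp
                         if child.tag in ("param", "stat")}
         raise KeyError(component_id)

     values = component_values("mcpat_config.xml", "system.core0.icache")
     print(values["icache_config"])   # "16384,32,4,1,1,3,8,0"
-->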
{ "pile_set_name": "Github" }
About This Video

Episode 8: “Trick Play”

Targeted by a “hacker hunter,” Enokida is forced to run from place to place in Hakata. Assassins after the bounty on his head are steadily closing in, and Enokida has been cut off from using computers, his only weapon. He’s fallen into a desperate situation. Meanwhile, Banba receives a message from Enokida. It’s a desperate counter to the cyberterrorist cell which came to Banba and Lin.
{ "pile_set_name": "Pile-CC" }
Conservation and reiteration of a kinase cascade. A cascade of three protein kinases has emerged as a conserved functional module in a wide variety of signal transduction pathways in diverse organisms. In addition to this evolutionary conservation, studies in yeast demonstrate that versions of this module are used in different signalling pathways. Thus, homologous kinase cascades function in response to different stimuli in the same cell.
{ "pile_set_name": "PubMed Abstracts" }
Q: Passing custom flags to "open" in a device driver

I need to pass some custom flags to the open() call of my device driver. I found this example in LDD3:

    int dev_open(struct inode *inode, struct file *filp)
    {
        if ((filp->f_flags & O_ACCMODE) == O_WRONLY) {
            ...
        }
    }

My question is: is it possible to define additional flags (alongside O_ACCMODE, O_WRONLY, and the rest) without conflicting with any others?

A: Yes, it's possible. Take a look at include/uapi/asm-generic/fcntl.h. Pay attention to this comment:

    /*
     * When introducing new O_* bits, please check its uniqueness in fcntl_init().
     */

Now look at the fcntl_init() function (defined in fs/fcntl.c):

    /*
     * Please add new bits here to ensure allocation uniqueness.
     * Exceptions: O_NONBLOCK is a two bit define on parisc; O_NDELAY
     * is defined as O_NONBLOCK on some platforms and not on others.
     */
    BUILD_BUG_ON(20 - 1 /* for O_RDONLY being 0 */ != HWEIGHT32(
        O_RDONLY | O_WRONLY | O_RDWR |
        O_CREAT | O_EXCL | O_NOCTTY |
        O_TRUNC | O_APPEND | /* O_NONBLOCK | */
        __O_SYNC | O_DSYNC | FASYNC |
        O_DIRECT | O_LARGEFILE | O_DIRECTORY |
        O_NOFOLLOW | O_NOATIME | O_CLOEXEC |
        __FMODE_EXEC | O_PATH | __O_TMPFILE
        ));

So first you need to find a unique value for your new definition, so that it can be bitwise-or'd with the flags listed in fcntl_init(). Next you need to add your new definition to include/uapi/asm-generic/fcntl.h. And finally, add your new define to fcntl_init(), so that it will be checked at compile time.

In the end it boils down to finding a value that doesn't conflict with the existing definitions. Note that the O_* constants are octal. E.g., as far as I can see, 010, 0100, 01000, 010000, 0100000, 01000000 and 010000000 are all in use (along with neighboring bits in the same octal digits), plus 020000000 for __O_TMPFILE, so for your new flags you can use free octal values such as 040000000 or 0100000000.

UPDATE: As SailorCaire correctly mentioned, you also need to increment the first number in the BUILD_BUG_ON() macro. For example, if it was originally BUILD_BUG_ON(20 - 1, and you add one element to this list, you should make it BUILD_BUG_ON(21 - 1.

UPDATE 2: Another valuable addition from SailorCaire: you'll need to do make headers_install, copy the new headers, and it looks like you'll need to recompile glibc so it becomes aware of the API change.
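To make the driver side concrete, here is a minimal sketch (not from LDD3 or the kernel tree): the flag name O_MYDEV_TURBO and its octal value 0100000000 are illustrative placeholders, and the value must be re-checked for uniqueness against your kernel's fcntl.h and fcntl_init() before use.

    #include <linux/fs.h>
    #include <linux/printk.h>

    /* Hypothetical custom open() flag; 0100000000 (octal) is only an example
     * of a currently unallocated bit; verify against your kernel version. */
    #define O_MYDEV_TURBO 0100000000

    static int dev_open(struct inode *inode, struct file *filp)
    {
        if (filp->f_flags & O_MYDEV_TURBO) {
            /* caller passed open(..., O_WRONLY | O_MYDEV_TURBO) */
            pr_info("mydev: turbo mode requested\n");
        }

        if ((filp->f_flags & O_ACCMODE) == O_WRONLY) {
            /* write-only open path, as in the LDD3 example */
        }

        return 0;
    }

From user space the flag would then be passed as open("/dev/mydev", O_WRONLY | O_MYDEV_TURBO). Historically the open() syscall has ignored flag bits it does not know about, which is why out-of-tree flags like this work at all; treat that as an implementation detail rather than a guarantee.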
{ "pile_set_name": "StackExchange" }
log.level=${log.level} log.path=${log.path} dubbo.registry.address=${dubbo.registry.address} dubbo.protocal.port=${dubbo.protocal.port} dubbo.service.version=${dubbo.service.version} ws.connect.path=${ws.connect.path} ws.connect.port=${ws.connect.port} ws.connect.bus.port=${ws.connect.bus.port} service.name=ws_server service.version=1.0 service.bus.name=bus_ws_server service.bus.version=1.0 consul.host=${consul.host} consul.port=${consul.port}
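# The ${...} tokens above are build-time placeholders, typically substituted
# by Maven/Gradle resource filtering or a deployment script rather than read
# literally at runtime. As a rough illustration (not part of this project),
# such substitution can be sketched in a few lines of Python:
#
#   import re
#
#   def render_properties(text, env):
#       """Replace each ${key} with env[key]; leave unknown keys untouched."""
#       return re.sub(r"\$\{([^}]+)\}",
#                     lambda m: str(env.get(m.group(1), m.group(0))),
#                     text)
#
#   env = {"log.level": "INFO", "consul.host": "127.0.0.1", "consul.port": "8500"}
#   with open("app.properties") as fh:   # placeholder file name
#       print(render_properties(fh.read(), env))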
{ "pile_set_name": "Github" }
Significance of early tubular extraction in the first minute of Tc-99m MAG3 renal transplant scintigraphy. Renal transplant perfusion curves obtained using Tc-99m MAG3 differ from those with Tc-99m DTPA. The perfusion curve can be divided into a first phase (up to the first-pass peak) and a second phase (the curve after the initial peak). The second phase of the MAG3 perfusion curve is usually ascending, in contrast to the descending Tc-99m DTPA curve. This ascending MAG3 curve reflects early tubular extraction of MAG3. However, the second phase of the MAG3 curve is sometimes flat or descending. We hypothesized that a flat or descending curve reflects poor early tubular extraction and therefore graft dysfunction. Ninety-two studies of 59 renal transplant patients were retrospectively reviewed. The second phase of the perfusion curve was visually classified as ascending, flat, or descending. 77.2% of studies had ascending curves, 16.3% flat curves, and 6.5% descending curves. A descending curve had a positive predictive value (PPV) of 100% for medical graft dysfunction, while a flat curve had a PPV of 93.3%. A nonascending second-phase curve was specific (96.4%) but not sensitive (33.9%) for graft dysfunction. Patients with acute tubular necrosis were not significantly more likely to have a nonascending curve than those with acute rejection. There was no significant difference in creatinine level between patients with medical graft dysfunction and ascending vs. nonascending curves. A nonascending second-phase Tc-99m MAG3 perfusion curve is predictive of graft dysfunction. An ascending curve is nonspecific and can be seen in both normally and poorly functioning grafts.
{ "pile_set_name": "PubMed Abstracts" }
/* * Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. * */ #include "precompiled.hpp" #include "jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp" #include "oops/instanceKlass.hpp" #include "oops/oop.inline.hpp" #include "oops/symbol.hpp" static JfrSymbolId::CStringEntry* bootstrap = NULL; JfrSymbolId::JfrSymbolId() : _sym_table(new SymbolTable(this)), _cstring_table(new CStringTable(this)), _sym_list(NULL), _cstring_list(NULL), _sym_query(NULL), _cstring_query(NULL), _symbol_id_counter(1), _class_unload(false) { assert(_sym_table != NULL, "invariant"); assert(_cstring_table != NULL, "invariant"); bootstrap = new CStringEntry(0, (const char*)&BOOTSTRAP_LOADER_NAME); assert(bootstrap != NULL, "invariant"); bootstrap->set_id(1); _cstring_list = bootstrap; } JfrSymbolId::~JfrSymbolId() { clear(); delete _sym_table; delete _cstring_table; delete bootstrap; } void JfrSymbolId::clear() { assert(_sym_table != NULL, "invariant"); if (_sym_table->has_entries()) { _sym_table->clear_entries(); } assert(!_sym_table->has_entries(), "invariant"); assert(_cstring_table != NULL, "invariant"); if (_cstring_table->has_entries()) { _cstring_table->clear_entries(); } assert(!_cstring_table->has_entries(), "invariant"); _sym_list = NULL; _symbol_id_counter = 1; _sym_query = NULL; _cstring_query = NULL; assert(bootstrap != NULL, "invariant"); bootstrap->reset(); _cstring_list = bootstrap; } void JfrSymbolId::set_class_unload(bool class_unload) { _class_unload = class_unload; } void JfrSymbolId::on_link(const SymbolEntry* entry) { assert(entry != NULL, "invariant"); const_cast<Symbol*>(entry->literal())->increment_refcount(); assert(entry->id() == 0, "invariant"); entry->set_id(++_symbol_id_counter); entry->set_list_next(_sym_list); _sym_list = entry; } bool JfrSymbolId::on_equals(uintptr_t hash, const SymbolEntry* entry) { assert(entry != NULL, "invariant"); assert(entry->hash() == hash, "invariant"); assert(_sym_query != NULL, "invariant"); return _sym_query == entry->literal(); } void JfrSymbolId::on_unlink(const SymbolEntry* entry) { assert(entry != NULL, "invariant"); const_cast<Symbol*>(entry->literal())->decrement_refcount(); } static const char* resource_to_cstring(const char* resource_str) { assert(resource_str != NULL, "invariant"); const size_t length = strlen(resource_str); char* const c_string = JfrCHeapObj::new_array<char>(length + 1); assert(c_string != NULL, "invariant"); strncpy(c_string, resource_str, length + 1); return c_string; } void JfrSymbolId::on_link(const CStringEntry* entry) { assert(entry != NULL, "invariant"); 
assert(entry->id() == 0, "invariant");
  entry->set_id(++_symbol_id_counter);
  const_cast<CStringEntry*>(entry)->set_literal(resource_to_cstring(entry->literal()));
  entry->set_list_next(_cstring_list);
  _cstring_list = entry;
}

static bool string_compare(const char* query, const char* candidate) {
  assert(query != NULL, "invariant");
  assert(candidate != NULL, "invariant");
  const size_t length = strlen(query);
  return strncmp(query, candidate, length) == 0;
}

bool JfrSymbolId::on_equals(uintptr_t hash, const CStringEntry* entry) {
  assert(entry != NULL, "invariant");
  assert(entry->hash() == hash, "invariant");
  assert(_cstring_query != NULL, "invariant");
  return string_compare(_cstring_query, entry->literal());
}

void JfrSymbolId::on_unlink(const CStringEntry* entry) {
  assert(entry != NULL, "invariant");
  // the literal was allocated with strlen(literal) + 1 bytes in
  // resource_to_cstring(), so free the same number of bytes
  JfrCHeapObj::free(const_cast<char*>(entry->literal()), strlen(entry->literal()) + 1);
}

traceid JfrSymbolId::bootstrap_name(bool leakp) {
  assert(bootstrap != NULL, "invariant");
  if (leakp) {
    bootstrap->set_leakp();
  }
  return 1;
}

traceid JfrSymbolId::mark(const Symbol* symbol, bool leakp) {
  assert(symbol != NULL, "invariant");
  return mark((uintptr_t)symbol->identity_hash(), symbol, leakp);
}

traceid JfrSymbolId::mark(uintptr_t hash, const Symbol* data, bool leakp) {
  assert(data != NULL, "invariant");
  assert(_sym_table != NULL, "invariant");
  _sym_query = data;
  const SymbolEntry& entry = _sym_table->lookup_put(hash, data);
  if (_class_unload) {
    entry.set_unloading();
  }
  if (leakp) {
    entry.set_leakp();
  }
  return entry.id();
}

traceid JfrSymbolId::mark(uintptr_t hash, const char* str, bool leakp) {
  assert(str != NULL, "invariant");
  assert(_cstring_table != NULL, "invariant");
  _cstring_query = str;
  const CStringEntry& entry = _cstring_table->lookup_put(hash, str);
  if (_class_unload) {
    entry.set_unloading();
  }
  if (leakp) {
    entry.set_leakp();
  }
  return entry.id();
}

/*
 * jsr292 anonymous classes symbol is the external name +
 * the identity_hashcode slash appended:
 * java.lang.invoke.LambdaForm$BMH/22626602
 *
 * caller needs ResourceMark
 */
uintptr_t JfrSymbolId::unsafe_anonymous_klass_name_hash(const InstanceKlass* ik) {
  assert(ik != NULL, "invariant");
  assert(ik->is_anonymous(), "invariant");
  const oop mirror = ik->java_mirror_no_keepalive();
  assert(mirror != NULL, "invariant");
  return (uintptr_t)mirror->identity_hash();
}

static const char* create_unsafe_anonymous_klass_symbol(const InstanceKlass* ik, uintptr_t hash) {
  assert(ik != NULL, "invariant");
  assert(ik->is_anonymous(), "invariant");
  assert(hash != 0, "invariant");
  char* anonymous_symbol = NULL;
  const oop mirror = ik->java_mirror_no_keepalive();
  assert(mirror != NULL, "invariant");
  char hash_buf[40];
  sprintf(hash_buf, "/" UINTX_FORMAT, hash);
  const size_t hash_len = strlen(hash_buf);
  const size_t result_len = ik->name()->utf8_length();
  anonymous_symbol = NEW_RESOURCE_ARRAY(char, result_len + hash_len + 1);
  ik->name()->as_klass_external_name(anonymous_symbol, (int)result_len + 1);
  assert(strlen(anonymous_symbol) == result_len, "invariant");
  strcpy(anonymous_symbol + result_len, hash_buf);
  assert(strlen(anonymous_symbol) == result_len + hash_len, "invariant");
  return anonymous_symbol;
}

bool JfrSymbolId::is_unsafe_anonymous_klass(const Klass* k) {
  assert(k != NULL, "invariant");
  return k->is_instance_klass() && ((const InstanceKlass*)k)->is_anonymous();
}

traceid JfrSymbolId::mark_unsafe_anonymous_klass_name(const InstanceKlass* ik, bool leakp) {
  assert(ik != NULL, "invariant");
  assert(ik->is_anonymous(), "invariant");
  const uintptr_t hash =
unsafe_anonymous_klass_name_hash(ik); const char* const anonymous_klass_symbol = create_unsafe_anonymous_klass_symbol(ik, hash); return mark(hash, anonymous_klass_symbol, leakp); } traceid JfrSymbolId::mark(const Klass* k, bool leakp) { assert(k != NULL, "invariant"); traceid symbol_id = 0; if (is_unsafe_anonymous_klass(k)) { assert(k->is_instance_klass(), "invariant"); symbol_id = mark_unsafe_anonymous_klass_name((const InstanceKlass*)k, leakp); } if (0 == symbol_id) { Symbol* const sym = k->name(); if (sym != NULL) { symbol_id = mark(sym, leakp); } } assert(symbol_id > 0, "a symbol handler must mark the symbol for writing"); return symbol_id; } JfrArtifactSet::JfrArtifactSet(bool class_unload) : _symbol_id(new JfrSymbolId()), _klass_list(NULL), _total_count(0) { initialize(class_unload); assert(_klass_list != NULL, "invariant"); } static const size_t initial_class_list_size = 200; void JfrArtifactSet::initialize(bool class_unload, bool clear /* false */) { assert(_symbol_id != NULL, "invariant"); if (clear) { _symbol_id->clear(); } _symbol_id->set_class_unload(class_unload); _total_count = 0; // resource allocation _klass_list = new GrowableArray<const Klass*>(initial_class_list_size, false, mtTracing); } JfrArtifactSet::~JfrArtifactSet() { _symbol_id->clear(); delete _symbol_id; // _klass_list will be cleared by a ResourceMark } traceid JfrArtifactSet::bootstrap_name(bool leakp) { return _symbol_id->bootstrap_name(leakp); } traceid JfrArtifactSet::mark_unsafe_anonymous_klass_name(const Klass* klass, bool leakp) { assert(klass->is_instance_klass(), "invariant"); return _symbol_id->mark_unsafe_anonymous_klass_name((const InstanceKlass*)klass, leakp); } traceid JfrArtifactSet::mark(uintptr_t hash, const Symbol* sym, bool leakp) { return _symbol_id->mark(hash, sym, leakp); } traceid JfrArtifactSet::mark(const Klass* klass, bool leakp) { return _symbol_id->mark(klass, leakp); } traceid JfrArtifactSet::mark(const Symbol* symbol, bool leakp) { return _symbol_id->mark(symbol, leakp); } traceid JfrArtifactSet::mark(uintptr_t hash, const char* const str, bool leakp) { return _symbol_id->mark(hash, str, leakp); } bool JfrArtifactSet::has_klass_entries() const { return _klass_list->is_nonempty(); } int JfrArtifactSet::entries() const { return _klass_list->length(); } void JfrArtifactSet::register_klass(const Klass* k) { assert(k != NULL, "invariant"); assert(_klass_list != NULL, "invariant"); assert(_klass_list->find(k) == -1, "invariant"); _klass_list->append(k); } size_t JfrArtifactSet::total_count() const { return _total_count; }
{ "pile_set_name": "Github" }
goog.module('nested.exported.enums');

/** @const */
exports = {
  /** @const @enum {string} */
  A: {
    A1: 'a1',
  },
  // The structure of the AST changes if this extra property is present.
  B: 0,
};
{ "pile_set_name": "Github" }
--- abstract: 'Unprecedentedly precise cosmic microwave background (CMB) data are expected from ongoing and near-future CMB Stage-III and IV surveys, which will yield reconstructed CMB lensing maps with effective resolution approaching several arcminutes. The small-scale CMB lensing fluctuations receive non-negligible contributions from nonlinear structure in the late-time density field. These fluctuations are not fully characterized by traditional two-point statistics, such as the power spectrum. Here, we use $N$-body ray-tracing simulations of CMB lensing maps to examine two higher-order statistics: the lensing convergence one-point probability distribution function (PDF) and peak counts. We show that these statistics contain significant information not captured by the two-point function, and provide specific forecasts for the ongoing Stage-III Advanced Atacama Cosmology Telescope (AdvACT) experiment. Considering only the temperature-based reconstruction estimator, we forecast 9$\sigma$ (PDF) and 6$\sigma$ (peaks) detections of these statistics with AdvACT. Our simulation pipeline fully accounts for the non-Gaussianity of the lensing reconstruction noise, which is significant and cannot be neglected. Combining the power spectrum, PDF, and peak counts for AdvACT will tighten cosmological constraints in the $\Omega_m$-$\sigma_8$ plane by $\approx 30\%$, compared to using the power spectrum alone.' author: - 'Jia Liu$^{1,2}$' - 'J. Colin Hill$^{2}$' - 'Blake D. Sherwin$^{3}$' - 'Andrea Petri$^{4}$' - 'Vanessa Böhm$^{5}$' - 'Zoltán Haiman$^{2,6}$' bibliography: - 'paper.bib' title: 'CMB Lensing Beyond the Power Spectrum: Cosmological Constraints from the One-Point PDF and Peak Counts' --- Introduction {#sec:intro} ============ After its first detection in cross-correlation nearly a decade ago [@Smith2007; @Hirata2008] and subsequent detection in auto-correlation five years ago [@das2011; @sherwin2011], weak gravitational lensing of the cosmic microwave background (CMB) is now reaching maturity as a cosmological probe [@Hanson2013; @Das2013; @PolarBear2014a; @PolarBear2014b; @BICEPKeck2016; @Story2014; @Ade2014; @vanEngelen2014; @vanEngelen2015; @planck2015xv]. On their way to the Earth, CMB photons emitted at redshift $z=1100$ are deflected by the intervening matter, producing new correlations in maps of CMB temperature and polarization anisotropies. Estimators based on these correlations can be applied to the observed anisotropy maps to reconstruct a noisy estimate of the CMB lensing potential [@Zaldarriaga1998; @Zaldarriaga1999; @HuOkamoto2002; @Okamoto2003]. CMB lensing can probe fundamental physical quantities, such as the dark energy equation of state and neutrino masses, through its sensitivity to the geometry of the universe and the growth of structure (see Refs. [@Lewis2006; @Hanson2010] for a review). In this paper, we study the non-Gaussian information stored in CMB lensing observations. The Gaussian approximation to the density field breaks down due to nonlinear evolution on small scales at late times. Thus, non-Gaussian statistics (i.e., statistics beyond the power spectrum) are necessary to capture the full information in the density field. Such work has been previously performed (theoretically and observationally) on weak gravitational lensing of galaxies, where galaxy shapes, instead of CMB temperature/polarization patterns, are distorted (hereafter “galaxy lensing”). 
Several research groups have found independently that non-Gaussian statistics can tighten cosmological constraints when they are combined with the two-point correlation function or angular power spectrum.[^1] Such non-Gaussian statistics have also been applied in the CMB context to the Sunyaev-Zel’dovich signal, including higher-order moments [@Wilson2012; @Hill2013; @Planck2013tSZ; @Planck2015tSZ], the bispectrum [@Bhattacharya2012; @Crawford2014; @Planck2013tSZ; @Planck2015tSZ], and the one-point probability distribution function (PDF) [@Hill2014b; @Planck2013tSZ; @Planck2015tSZ]. In all cases, substantial non-Gaussian information was found, yielding improved cosmological constraints. The motivation to study non-Gaussian statistics of CMB lensing maps is three-fold. First, the CMB lensing kernel is sensitive to structures at high redshift ($z\approx2.0$, compared to $z\approx0.4$ for typical galaxy lensing samples); hence CMB lensing non-Gaussian statistics probe early nonlinearity that is beyond the reach of galaxy surveys. Second, CMB lensing does not suffer from some challenging systematics that are relevant to galaxy lensing, including intrinsic alignments of galaxies, photometric redshift uncertainties, and shape measurement biases. Therefore, a combined analysis of galaxy lensing and CMB lensing will be useful to build a tomographic outlook on nonlinear structure evolution, as well as to calibrate systematics in both galaxy and CMB lensing surveys [@Liu2016; @Baxter2016; @Schaan2016; @Singh2016; @Nicola2016]. Finally, CMB lensing measurements have recently entered a regime of sufficient sensitivity and resolution to detect the (stacked) lensing signals of halos [@Madhavacheril2014; @Baxter2016; @Planck2015cluster]. This suggests that statistics sensitive to the nonlinear growth of structure, i.e., non-Gaussian statistics, will also soon be detectable. We demonstrate below that this is indeed the case, taking as a reference experiment the ongoing Advanced Atacama Cosmology Telescope (AdvACT) survey [@Henderson2016]. Non-Gaussian aspects of the CMB lensing field have recently attracted attention, both as a potential signal and a source of bias in CMB lensing power spectrum estimates. Considering the lensing non-Gaussianity as a signal, a recent analytical study of the CMB lensing bispectrum by Ref. [@Namikawa2016] forecasted its detectability to be 40$\sigma$ with a CMB Stage-IV experiment. Ref. [@Bohm2016] performed the first calculation of the bias induced in CMB lensing power spectrum estimates by the lensing bispectrum, finding non-negligible biases for Stage-III and IV CMB experiments. Refs. [@Pratten2016] and [@Marozzi2016] considered CMB lensing effects arising from the breakdown of the Born approximation, with the former study finding that post-Born terms substantially alter the predicted CMB lensing bispectrum, compared to the contributions from nonlinear structure formation alone. We emphasize that the $N$-body ray-tracing simulations used in this work naturally capture such effects — we do not use the Born approximation. However, we consider only the lensing potential $\phi$ or convergence $\kappa$ here (related by $\kappa = -\nabla^2 \phi/2$), leaving a treatment of the curl potential or image rotation for future work (Ref. [@Pratten2016] has demonstrated that the curl potential possesses non-trivial higher-order statistics). 
In a follow-up paper, the simulations described here are used to more precisely characterize CMB lensing power spectrum biases arising from the bispectrum and higher-order correlations [@Sherwin2016].

We consider the non-Gaussianity in the CMB lensing field as a potential signal. We use a suite of 46 $N$-body ray-tracing simulations to investigate two non-Gaussian statistics applied to CMB lensing convergence maps — the one-point PDF and peak counts. We examine the deviation of the convergence PDF and peak counts from those of Gaussian random fields. We then quantify the power of these statistics to constrain cosmological models, compared with using the power spectrum alone.

The paper is structured as follows. We first introduce CMB lensing in Sec. \[sec:formalism\]. We then describe our simulation pipeline in Sec. \[sec:sim\] and analysis procedures in Sec. \[sec:analysis\]. We show our results for the power spectrum, PDF, peak counts, and the derived cosmological constraints in Sec. \[sec:results\]. We conclude in Sec. \[sec:conclude\].

CMB lensing formalism {#sec:formalism}
=====================

To lowest order, the lensing convergence ($\kappa$) is a weighted projection of the three-dimensional matter overdensity $\delta=\delta\rho/\bar{\rho}$ along the line of sight, $$\label{eq.kappadef} \kappa(\thetaB) = \int_0^{\infty} dz W(z) \delta(\chi(z)\thetaB, z),$$ where $\chi(z)$ is the comoving distance and the kernel $W(z)$ indicates the lensing strength at redshift $z$ for sources with a redshift distribution $p(z_s)=dn(z_s)/dz$. For CMB lensing, there is only one source plane at the last scattering surface $z_\star=1100$; therefore, $p(z_s)=\delta_D(z_s-z_\star)$, where $\delta_D$ is the Dirac delta function. For a flat universe, the CMB lensing kernel is $$\begin{aligned} W^{\kappa_{\rm cmb}}(z) &= \frac{3}{2}\Omega_{m}H_0^2 \frac{(1+z)}{H(z)} \frac{\chi(z)}{c} \nonumber\\ &\quad\times \frac{\chi(z_\star)-\chi(z)}{\chi(z_\star)},\end{aligned}$$ where $\Omega_{m}$ is the matter density as a fraction of the critical density at $z=0$, $H(z)$ is the Hubble parameter at redshift $z$, with a present-day value $H_0$, and $c$ is the speed of light. $W^{\kappa_{\rm cmb}}(z)$ peaks at $z\approx2$ for canonical cosmological parameters ($\Omega_{m}\approx0.3$ and $H_0\approx70$ km/s/Mpc, [@planck2015xiii]). Note that Eq. (\[eq.kappadef\]) assumes the Born approximation, but our simulation approach described below does not — we implement full ray-tracing to calculate $\kappa$.

Simulations {#sec:sim}
===========

Our simulation procedure includes five main steps: (1) the design (parameter sampling) of cosmological models, (2) $N$-body simulations with Gadget-2,[^2] (3) ray-tracing from $z=0$ to $z=1100$ to obtain (noiseless) convergence maps using the Python code LensTools [@Petri2016],[^3] (4) lensing simulated CMB temperature maps by the ray-traced convergence field, and (5) reconstructing (noisy) convergence maps from the CMB temperature maps after including noise and beam effects.

Simulation design
-----------------

We use an irregular grid to sample parameters in the $\Omega_m$-$\sigma_8$ plane, within the range of $\Omega_m \in [0.15, 0.7]$ and $\sigma_8 \in [0.5, 1.0]$, where $\sigma_8$ is the rms amplitude of linear density fluctuations on a scale of 8 Mpc/$h$ at $z=0$. An optimized irregular grid has a smaller average distance between neighboring points than a regular grid, and no parameter values are duplicated. Hence, it samples the parameter space more efficiently.
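To build intuition for this choice, a toy "maximin" sampler can be written in a few lines; this is not the actual optimization used here (which is described next), and the unit square is simply rescaled to the parameter ranges above:

```python
import numpy as np

def maximin_design(n_points, n_trials=2000, seed=0):
    """Toy version of an optimized irregular grid: draw random designs in the
    unit square and keep the one with the largest minimum pairwise distance.
    Illustrative only; the production design uses a dedicated optimization."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -1.0
    for _ in range(n_trials):
        pts = rng.random((n_points, 2))
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(-1))
        score = d[np.triu_indices(n_points, k=1)].min()
        if score > best_score:
            best, best_score = pts, score
    return best

# rescale to the sampled ranges: Omega_m in [0.15, 0.7], sigma_8 in [0.5, 1.0]
pts = maximin_design(46)
om = 0.15 + pts[:, 0] * (0.7 - 0.15)
s8 = 0.50 + pts[:, 1] * (1.0 - 0.50)
```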
The procedure to optimize our sampling is described in detail in Ref. [@Petri2015]. The 46 cosmological models sampled are shown in Fig. \[fig:design\]. Other cosmological parameters are held fixed, with $H_0=72$ km/s/Mpc, dark energy equation of state $w=-1$, spectral index $n_s=0.96$, and baryon density $\Omega_b=0.046$. The design can be improved in the future by posterior sampling, where we first run only a few models to generate a low-resolution probability plane, and then sample more densely in the high-probability region. We select the model that is closest to the standard concordance values of the cosmological parameters (e.g., [@planck2015xiii]) as our fiducial model, with $\Omega_m=0.296$ and $\sigma_8=0.786$. We create two sets of realizations for this model, one for covariance matrix estimation, and another one for parameter interpolation. This fiducial model is circled in red in Fig. \[fig:design\]. ![\[fig:design\] The design of cosmological parameters used in our simulations (46 models in total). The fiducial cosmology ($\Omega_m=0.296, \sigma_8=0.786$) is circled in red. The models for which AdvACT-like lensing reconstruction is performed are circled in blue. Other cosmological parameters are fixed at $H_0=72$ km/s/Mpc, $w=-1$, $n_s=0.96$, and $\Omega_b=0.046$.](plot/plot_design.pdf){width="48.00000%"} $N$-body simulation and ray-tracing {#sec:nbody} ----------------------------------- We use the public code Gadget-2 to run $N$-body simulations with $N_{\rm particles}=1024^3$ and box size = 600 Mpc/$h$ (corresponding to a mass resolution of $1.4\times10^{10} M_\odot/h$). To initialize each simulation, we first obtain the linear matter power spectrum with the Einstein-Boltzmann code CAMB.[^4] The power spectrum is then fed into the initial condition generator N-GenIC, which generates initial snapshots (the input of Gadget-2) of particle positions at $z=100$. The $N$-body simulation is then run from $z=100$ to $z=0$, and we record snapshots at every 144 Mpc$/h$ in comoving distance between $z\approx45$ and $z=0$. The choice of $z\approx45$ is determined by requiring that the redshift range covers 99% of the $W^{\kappa_{cmb}}D(z)$ kernel, where we use the linear growth factor $D(z)\sim 1/(1+z)$. We then use the Python code LensTools [@Petri2016] to generate CMB lensing convergence maps. We first slice the simulation boxes to create potential planes (3 planes per box, 200 Mpc/$h$ in thickness), where particle density is converted into gravitational potential using the Poisson equation. We track the trajectories of 4096$^2$ light rays from $z=0$ to $z=1100$, where the deflection angle and convergence are calculated at each potential plane. This procedure automatically captures so-called “post-Born” effects, as we never assume that the deflection angle is small or that the light rays follow unperturbed geodesics.[^5] Finally, we create 1,000 convergence map realizations for each cosmology by randomly rotating/shifting the potential planes [@Petri2016b]. For the fiducial cosmology only, we generate 10,000 realizations for the purpose of estimating the covariance matrix. The convergence maps are 2048$^2$ pixels and 12.25 deg$^2$ in size, with square pixels of side length 0.1025 arcmin. The maps generated at this step correspond to the physical lensing convergence field only, i.e., they have no noise from CMB lensing reconstruction. Therefore, they are labeled as “noiseless” in the following sections and figures. 
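The Poisson step in the plane construction is simple to sketch: on a periodic plane the equation becomes algebraic in Fourier space. The following is a schematic only (LensTools handles the physical prefactors, units, and projection), and the function name and inputs are illustrative assumptions:

```python
import numpy as np

def potential_plane(delta, plane_size):
    """Solve the 2D Poisson equation, del^2 phi = delta, on a periodic grid
    with FFTs. `delta` is a projected overdensity slice and `plane_size` its
    comoving side length; constant prefactors are omitted for clarity."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=plane_size / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid dividing the k=0 (mean) mode by zero
    phi_k = -np.fft.fft2(delta) / k2    # Poisson equation in Fourier space
    phi_k[0, 0] = 0.0                   # the mean of the potential is unconstrained
    return np.real(np.fft.ifft2(phi_k))
```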
![\[fig:theory\_ps\] Comparison of the CMB lensing convergence power spectrum from the HaloFit model and that from our simulation (1024$^3$ particles, box size 600 Mpc/$h$, map size 12.25 deg$^2$), for our fiducial cosmology. We also show the prediction from linear theory. Error bars are the standard deviation of 10,000 realizations.](plot/plot_theory_comparison.pdf){width="48.00000%"} We test the power spectra from our simulated maps against standard theoretical predictions. Fig. \[fig:theory\_ps\] shows the power spectrum from our simulated maps versus that from the HaloFit model [@Smith2003; @Takahashi2012] for our fiducial cosmology. We also show the linear-theory prediction, which deviates from the nonlinear HaloFit result at $\ell \gtrsim 700$. The simulation error bars are estimated using the standard deviation of 10,000 realizations. The simulated and (nonlinear) theoretical results are consistent within the error bars for multipoles $\ell<2,000$, which is sufficient for this work, as current and near-future CMB lensing surveys are limited to roughly this $\ell$ range due to their beam size and noise level (the filtering applied in our analysis below effectively removes all information on smaller angular scales). We find similar consistency between theory and simulation for the other 45 simulated models. We test the impact of particle resolution using a smaller box of 300 Mpc/$h$, while keeping the same number of particles (i.e. 8 times higher resolution), and obtain excellent agreement at scales up to $\ell=3,000$. The lack of power on large angular scales is due to the limited size of our convergence maps, while the missing power on small scales is due to our particle resolution. On very small scales ($\ell \gtrsim 5 \times 10^4$), excess power due to finite-pixelization shot noise arises, but this effect is negligible on the scales considered in our analysis. CMB lensing reconstruction {#sec:recon} -------------------------- ![image](plot/plot_maps.pdf){width="\textwidth"} In order to obtain CMB lensing convergence maps with realistic noise properties, we generate lensed CMB temperature maps and reconstruct noisy estimates of the convergence field. First, we generate Gaussian random field CMB temperature maps based on a $\Lambda$CDM concordance model temperature power spectrum computed with CAMB. We compute deflection field maps from the ray-traced convergence maps described in the previous sub-section, after applying a filter that removes power in the convergence maps above $ \ell \approx 4,000$.[^6] These deflection maps are then used to lens the simulated primary CMB temperature maps. The lensing simulation procedure is described in detail in Ref. [@Louis2013]. After obtaining the lensed temperature maps, we apply instrumental effects consistent with specifications for the ongoing AdvACT survey [@Henderson2016]. In particular, the maps are smoothed with a FWHM $=1.4$ arcmin beam, and Gaussian white noise of amplitude 6$\mu$K-arcmin is then added. We subsequently perform lensing reconstruction on these beam-convolved, noisy temperature maps using the quadratic estimator of Ref. [@HuOkamoto2002], but with the replacement of unlensed with lensed CMB temperature power spectra in the filters, which gives an unbiased reconstruction to higher order [@Hanson2010]. The final result is a noisy estimate of the CMB lensing convergence field, with 1,000 realizations for each cosmological model (10,000 for the fiducial model). 
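The beam and noise step can be mimicked in a few lines. This is a minimal sketch, not the actual pipeline of Ref. [@Louis2013]; it assumes a square flat-sky map in $\mu$K with the AdvACT-like specifications quoted above:

```python
import numpy as np

def observe(tmap, pix_arcmin, fwhm_arcmin=1.4, noise_uk_arcmin=6.0):
    """Convolve a simulated CMB temperature map (muK) with a Gaussian beam
    and add white noise (schematic; flat sky, periodic boundaries)."""
    n = tmap.shape[0]
    arcmin = np.pi / (180.0 * 60.0)                       # arcmin -> radians
    freq = 2.0 * np.pi * np.fft.fftfreq(n, d=pix_arcmin * arcmin)
    lx, ly = np.meshgrid(freq, freq, indexing="ij")
    sigma = fwhm_arcmin * arcmin / np.sqrt(8.0 * np.log(2.0))
    beam = np.exp(-0.5 * (lx**2 + ly**2) * sigma**2)      # Gaussian beam b_ell
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(tmap) * beam))
    sigma_pix = noise_uk_arcmin / pix_arcmin              # white-noise rms per pixel
    return smoothed + sigma_pix * np.random.randn(n, n)
```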
We consider only temperature-based reconstruction in this work, leaving polarization estimators for future consideration. The temperature estimator is still expected to contribute more significantly than the polarization to the signal-to-noise for Stage-III CMB experiments like AdvACT, but polarization will dominate for Stage-IV (via $EB$ reconstruction). For the AdvACT-like experiment considered here, including polarization would increase the predicted signal-to-noise on the lensing power spectrum by $\approx 35$%. More importantly, polarization reconstruction allows the lensing field to be mapped out to smaller scales than temperature reconstruction [@HuOkamoto2002], and is more immune to foreground-related biases at high-$\ell$ [@vanEngelen2014b]. Thus, it could prove extremely useful for higher-order CMB lensing statistics, which are sourced by non-Gaussian structure on small scales. Clearly these points are worthy of future analysis, but we restrict this work to temperature reconstruction for simplicity. In addition to the fiducial model, we select the nearest eight points in the sampled parameter space (points circled in blue in Fig. \[fig:design\]) for the reconstruction analysis. We determine this selection by first reconstructing the nearest models in parameter space, and then broadening the sampled points until the interpolation is stable and the forecasted contours (see Sec. \[sec:constraints\]) are converged for AdvACT-level noise. At this noise level, the other points in model space are sufficiently distant to contribute negligibly to the forecasted contours. In total, nine models are used to derive parameter constraints from the reconstructed, noisy maps. For completeness, we perform a similar convergence test using forecasted constraints from the noiseless maps, finding excellent agreement between contours derived using all 46 models and using only these nine models. In Fig. \[fig:sample\_maps\], we show an example of a convergence map from the fiducial cosmology before (“noiseless”) and after (“noisy”) reconstruction. Prominent structures seen in the noiseless maps remain obvious in the reconstructed, noisy maps. Gaussian random field --------------------- We also reconstruct a set of Gaussian random fields (GRF) in the fiducial model. We generate a set of GRFs using the average power spectrum of the noiseless $\kappa$ maps. We then lens simulated CMB maps using these GRFs, following the same procedure as outlined above, and subsequently perform lensing reconstruction, just as for the reconstructed $N$-body $\kappa$ maps. These noisy GRF-only reconstructions allow us to examine the effect of reconstruction (in particular the non-Gaussianity of the reconstruction noise itself), as well as to determine the level of non-Gaussianity in the noisy $\kappa$ maps. Interpolation ------------- ![\[fig:interp\] Fractional differences between interpolated and “true” results for the fiducial power spectrum (top), PDF (middle), and peak counts (bottom). Here, we have built the interpolator using results for the other 45 cosmologies, and then compared the interpolated prediction at the fiducial parameter values to the actual simulated results for the fiducial cosmology. The error bars are scaled by $1/\sqrt{N_{\rm sims}}$, where the number of simulations $N_{\rm sims}=1,000$. 
The agreement for all three statistics is excellent.](plot/plot_interp.pdf){width="48.00000%"}

To build a model at points where we do not have simulations, we interpolate from the simulated points in parameter space using the Clough-Tocher interpolation scheme [@alfeld1984; @farin1986], which triangulates the input points and then minimizes the curvature of the interpolating surface; the interpolated points are guaranteed to be continuously differentiable. In Fig. \[fig:interp\], we show a test of the interpolation using the noiseless $\kappa$ maps: we build the interpolator using all of the simulated cosmologies except for the fiducial model (i.e., 45 cosmologies), and then compare the interpolated results at the fiducial parameter values with the true, simulated results for that cosmology. The agreement for all three statistics is excellent, with deviations $\lesssim$ a few percent (and well within the statistical precision). Finally, to check the robustness of the interpolation scheme, we also run our analysis using linear interpolation, and obtain consistent results.[^7]

Analysis {#sec:analysis}
========

In this section, we describe the analysis of the simulated CMB lensing maps, including the computation of the power spectrum, peak counts, and PDF, and the likelihood estimation for cosmological parameters. These procedures are applied in the same way to the noiseless and noisy (reconstructed) maps.

Power spectrum, PDF, and peak counts
------------------------------------

To compute the power spectrum, we first estimate the two-dimensional (2D) power spectrum of CMB lensing maps ($M_{\kappa}$) using $$\begin{aligned} \label{eq: ps2d} C^{\kappa \kappa}(\ellB) = \hat M_{\kappa}(\ellB)^*\hat M_{\kappa}(\ellB) \,,\end{aligned}$$ where $\ellB$ is the 2D multipole with components $\ell_1$ and $\ell_2$, $\hat M_{\kappa}$ is the Fourier transform of $M_{\kappa}$, and the asterisk denotes complex conjugation. We then average over all the pixels within each $|\ellB|\in[\ell-\Delta\ell, \ell+\Delta\ell)$ bin, for 20 log-spaced bins in the range of $100<\ell<2,000$, to obtain the one-dimensional power spectrum.

The one-point PDF is the number of pixels with values between \[$\kappa-\Delta\kappa$, $\kappa+\Delta\kappa$) as a function of $\kappa$. We use 50 linear bins with edges listed in Table \[tab: bins\], and normalize the resulting PDF such that its integral is unity. The PDF is a simple observable (a histogram of the data), but captures the amplitude of all (zero-lag) higher-order moments in the map. Thus, it provides a potentially powerful characterization of the non-Gaussian information.

Peaks are defined as local maxima in a $\kappa$ map. In a pixelized map, they are pixels with values higher than those of the surrounding 8 (square) pixels. Similar to cluster counts, peak counts are sensitive to the most nonlinear structures in the Universe. For galaxy lensing, they have been found to be associated with halos along the line of sight, in both simulations [@Yang2011] and observations [@LiuHaiman2016]. We record peaks on smoothed $\kappa$ maps, in 25 linearly spaced bins with edges listed in Table \[tab: bins\].
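As a concrete illustration of these definitions, the three statistics can be measured from a pixelized map roughly as follows. This is a schematic that ignores normalization constants and map apodization, not our analysis code; the PDF and peak bin edges would be taken from Table \[tab: bins\]:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def summary_statistics(kappa, pix_arcmin, pdf_edges, peak_edges):
    """Angular power spectrum, one-point PDF, and peak counts of a map."""
    n = kappa.shape[0]
    arcmin = np.pi / (180.0 * 60.0)
    # 2D power spectrum, Eq. (\[eq: ps2d\]); normalization omitted
    p2d = np.abs(np.fft.fft2(kappa))**2
    freq = 2.0 * np.pi * np.fft.fftfreq(n, d=pix_arcmin * arcmin)
    lx, ly = np.meshgrid(freq, freq, indexing="ij")
    ell = np.sqrt(lx**2 + ly**2)
    edges = np.logspace(np.log10(100.0), np.log10(2000.0), 21)  # 20 log bins
    cl = np.array([p2d[(ell >= lo) & (ell < hi)].mean()
                   for lo, hi in zip(edges[:-1], edges[1:])])
    # one-point PDF: normalized histogram of pixel values
    pdf, _ = np.histogram(kappa, bins=pdf_edges, density=True)
    # peaks: pixels at least as high as their 8 neighbors (ties counted)
    peaks = kappa[kappa == maximum_filter(kappa, size=3)]
    counts, _ = np.histogram(peaks, bins=peak_edges)
    return cl, pdf, counts
```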
  ----------------------- ------------------- -----------------------
  Smoothing scale          PDF bin edges        Peak-count bin edges
  (arcmin)                 (50 linear bins)     (25 linear bins)
  0.5 (noiseless)          \[-0.50, +0.50\]     \[-0.18, +0.36\]
  1.0 (noiseless)          \[-0.22, +0.22\]     \[-0.15, +0.30\]
  2.0 (noiseless)          \[-0.18, +0.18\]     \[-0.12, +0.24\]
  5.0 (noiseless)          \[-0.10, +0.10\]     \[-0.09, +0.18\]
  8.0 (noiseless)          \[-0.08, +0.08\]     \[-0.06, +0.12\]
  1.0, 5.0, 8.0 (noisy)    \[-0.12, +0.12\]     \[-0.06, +0.14\]
  ----------------------- ------------------- -----------------------

  : \[tab: bins\] PDF and peak-count bin edges for each smoothing scale (the full width at half maximum of the Gaussian smoothing kernel applied to the maps).

Cosmological constraints
------------------------

We estimate cosmological parameter confidence level (C.L.) contours assuming a constant (cosmology-independent) covariance and a Gaussian likelihood, $$\begin{aligned} P (\DB | \pB) = \frac{1}{2\pi|\CB|^{1/2}} \exp\left[-\frac{1}{2}(\DB-\muB)^T\CB^{-1}(\DB-\muB)\right],\end{aligned}$$ where $\DB$ is the data array, $\pB$ is the input parameter array, $\muB=\muB(\pB)$ is the interpolated model, and $\CB$ is the covariance matrix estimated using the fiducial cosmology, with determinant $|\CB|$. The correction factor for an unbiased inverse covariance estimator [@dietrich2010] is negligible in our case, with $(N_{\rm sims}-N_{\rm bins}-2)/(N_{\rm sims}-1) = 0.99$ for $N_{\rm sims} =10,000$ and $N_{\rm bins}=95$. We leave an investigation of the impact of cosmology-dependent covariance matrices and a non-Gaussian likelihood for future work.

Due to the limited size of our simulated maps, we must rescale the final error contour by the ratio ($r_{\rm sky}$) of the simulated map size (12.25 deg$^2$) to the survey coverage (20,000 deg$^2$ for AdvACT). Two methods allow us to achieve this — rescaling the covariance matrix by $r_{\rm sky}$ before computing the likelihood plane, or rescaling the final C.L. contour by $r_{\rm sky}$. These two methods yield consistent results. In our final analysis, we choose the former method.

Results {#sec:results}
=======

Non-Gaussianity in noiseless maps {#sec:non-gauss}
---------------------------------

![image](plot/plot_noiseless_PDF.pdf){width="48.00000%"} ![image](plot/plot_noiseless_PDF_diff.pdf){width="48.00000%"}

![image](plot/plot_noiseless_peaks.pdf){width="48.00000%"} ![image](plot/plot_noiseless_peaks_diff.pdf){width="48.00000%"}

We show the PDF of noiseless $N$-body $\kappa$ maps (PDF$^\kappa$) for the fiducial cosmology in Fig. \[fig:noiseless\_PDF\], as well as that of GRF $\kappa$ maps (PDF$^{\rm GRF}$) generated from a power spectrum matching that of the $N$-body-derived maps. To better demonstrate the level of non-Gaussianity, we also show the fractional difference of PDF$^\kappa$ from PDF$^{\rm GRF}$. The error bars are scaled to AdvACT sky coverage (20,000 deg$^2$), though note that no noise is present here. The departure of PDF$^\kappa$ from the Gaussian case is significant for all smoothing scales examined (FWHM = 0.5–8.0 arcmin), with increasing significance towards smaller smoothing scales, as expected. The excess in high-$\kappa$ bins is expected as the result of nonlinear gravitational evolution, echoed by the deficit in low-$\kappa$ bins.

We show the comparison of the peak counts of $N$-body $\kappa$ maps (${\rm N}^\kappa_{\rm peaks}$) versus those of GRFs (${\rm N}^{\rm GRF}_{\rm peaks}$) in Fig. \[fig:noiseless\_pk\].
The difference between ${\rm N}^\kappa_{\rm peaks}$ and ${\rm N}^{\rm GRF}_{\rm peaks}$ is less significant than the PDF, because the number of peaks is much smaller than the number of pixels — hence, the peak counts have larger Poisson noise. A similar trend of excess (deficit) of high (low) peaks is also seen in $\kappa$ peaks, when compared to the GRF peaks. Covariance matrix {#sec:covariance} ----------------- ![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat.pdf "fig:"){width="48.00000%"} ![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat_noisy.pdf "fig:"){width="48.00000%"} Fig. \[fig:corr\_mat\] shows the correlation coefficients of the total covariance matrix for both the noiseless and noisy maps, $$\begin{aligned} \rhoB_{ij} = \frac{\CB_{ij}}{\sqrt{\CB_{ii}\CB_{jj}}}\end{aligned}$$ where $i$ and $j$ denote the bin number, with the first 20 bins for the power spectrum, the next 50 bins for the PDF, and the last 25 bins for peak counts. In the noiseless case, the power spectrum shows little covariance in both its own off-diagonal terms ($<10\%$) and cross-covariance with the PDF and peaks ($<20\%$), hinting that the PDF and peaks contain independent information that is beyond the power spectrum. In contrast, the PDF and peak statistics show higher correlation in both self-covariance (i.e., the covariance within the sub-matrix for that statistic only) and cross-covariance, with strength almost comparable to the diagonal components. They both show strong correlation between nearby $\kappa$ bins (especially in the moderate-$|\kappa|$ regions), which arises from contributions due to common structures amongst the bins (e.g., galaxy clusters). Both statistics show anti-correlation between positive and negative $\kappa$ bins. The anti-correlation may be due to mass conservation — e.g., large amounts of mass falling into halos would result in large voids in surrounding regions. In the noisy case, the off-diagonal terms are generally smaller than in the noiseless case. Moreover, the anti-correlation seen previously between the far positive and negative $\kappa$ tails in the PDF is now a weak positive correlation — we attribute this difference to the complex non-Gaussianity of the reconstruction noise. Interestingly, the self-covariance of the peak counts is significantly reduced compared to the noiseless case, while the self-covariance of the PDF persists to a reasonable degree. Effect of reconstruction noise {#sec:recon_noise} ------------------------------ ![\[fig:recon\] We demonstrate the effect of reconstruction noise on the power spectrum (top), the PDF (middle), and peak counts (bottom) by using Gaussian random field $\kappa$ maps (rather than $N$-body-derived maps) as input to the reconstruction pipeline. The noiseless (solid curves) and noisy/reconstructed (dashed curves) statistics are shown. 
All maps used here have been smoothed with a Gaussian kernel of FWHM $= 8$ arcmin.](plot/plot_reconstruction.pdf){width="48.00000%"} To disentangle the effect of reconstruction noise from that of nonlinear structure growth, we compare the three statistics before (noiseless) and after (noisy) reconstruction, using only the GRF $\kappa$ fields. Fig. \[fig:recon\] shows the power spectra, PDFs, and peak counts for both the noiseless (solid curves) and noisy (dashed curves) GRFs, all smoothed with a FWHM $= 8$ arcmin Gaussian window. The reconstructed power spectrum has significant noise on small scales, as expected (this is dominated by the usual “$N^{(0)}$” noise bias). The post-reconstruction PDF shows skewness, defined as $$\label{eq.skewdef} S=\left\langle \left( \frac {\kappa-\bar{\kappa}}{\sigma_\kappa}\right)^3 \right\rangle,$$ which is not present in the input GRFs. In other words, the reconstructed maps have a non-zero three-point function, even though the input GRF $\kappa$ maps in this case do not. While this may seem surprising at first, we recall that the three-point function of the reconstructed map corresponds to a six-point function of the CMB temperature map (in the quadratic estimator formalism). Even for a Gaussian random field, the six-point function contains non-zero Wick contractions (those that reduce to products of two-point functions). Propagating such terms into the three-point function of the quadratic estimator for $\kappa$, we find that they do not cancel to zero. This result is precisely analogous to the usual “$N^{(0)}$ bias” on the CMB lensing power spectrum, in which the two-point function of the (Gaussian) primary CMB temperature gives a non-zero contribution to the temperature four-point function. The result in Fig. \[fig:recon\] indicates that the similar PDF “$N^{(0)}$ bias” contains a negative skewness (in addition to non-zero kurtosis and higher moments). While it should be possible to derive this result analytically, we defer the full calculation to future work. If we filter the reconstructed $\kappa$ maps with a large smoothing kernel, the skewness in the reconstructed PDF is significantly decreased (see Fig. \[fig:skew\]). We briefly investigate the PDF of the Planck 2015 CMB lensing map [@planck2015xv] and do not see clear evidence of such skewness — we attribute this to the low effective resolution of the Planck map (FWHM $\sim$ few degrees). Finally, we note that a non-zero three-point function of the reconstruction noise could potentially alter the forecasted $\kappa$ bispectrum results of Ref. [@Namikawa2016] (where the reconstruction noise was taken to be Gaussian). The non-Gaussian properties of the small-scale reconstruction noise were noted in Ref. [@HuOkamoto2002], who pointed out that the quadratic estimator at high-$\ell$ is constructed from progressively fewer arcminute-scale CMB fluctuations. Similarly, the $\kappa$ peak count distribution also displays skewness after reconstruction, although it is less dramatic than that seen in the PDF. The peak of the distribution shifts to a higher $\kappa$ value due to the additional noise in the reconstructed maps. We note that the shape of the peak count distribution becomes somewhat rough when large smoothing kernels are applied to the maps, due to the small number of peaks present in this situation (e.g., $\approx 29$ peaks in a 12.25 deg$^2$ map with FWHM = 8 arcmin Gaussian window). 
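For reference, the skewness in Eq. (\[eq.skewdef\]) is estimated directly from the map pixels; a minimal sketch:

```python
import numpy as np

def skewness(kappa):
    """Sample estimate of S in Eq. (\[eq.skewdef\])."""
    x = (kappa - kappa.mean()) / kappa.std()
    return np.mean(x**3)
```

Applied to the input GRF maps this is consistent with zero, so a nonzero value in the reconstructed GRF maps isolates the skewness of the reconstruction noise itself.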
Non-Gaussianity in reconstructed maps {#sec:non-gauss_recon} ------------------------------------- ![image](plot/plot_noisy_PDF_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_PDF_filtered_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_peaks_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_peaks_filtered_morebins.pdf){width="48.00000%"} We show the PDF and peak counts of the reconstructed $\kappa$ maps in Figs. \[fig:noisyPDF\] and \[fig:noisypk\], respectively. The left panels of these figures show the results using maps with an 8 arcmin Gaussian smoothing window. We further consider a Wiener filter, which is often used to filter out noise based on some known information in a signal (i.e., the noiseless power spectrum in our case). The right panels show the Wiener-filtered results, where we inverse-variance weight each pixel in Fourier space, i.e., each Fourier mode is weighted by the ratio of the noiseless power spectrum to the noisy power spectrum (c.f. Fig. \[fig:recon\]), $$\begin{aligned} f^{\rm Wiener} (\ell) = \frac{C_\ell^{\rm noiseless}}{C_\ell^{\rm noisy}} \,.\end{aligned}$$ Compared to the noiseless results shown in Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\], the differences between the PDF and peaks from the $N$-body-derived $\kappa$ maps and those from the GRF-derived $\kappa$ maps persist, but with less significance. For the Wiener-filtered maps, the deviations of the $N$-body-derived $\kappa$ statistics from the GRF case are 9$\sigma$ (PDF) and 6$\sigma$ (peaks), where we derived the significances using the simulated covariance from the $N$-body maps [^8]. These deviations capture the influence of both nonlinear evolution and post-Born effects. ![\[fig:skew\] Top panel: the skewness of the noiseless (triangles) and reconstructed, noisy (diamonds: $N$-body $\kappa$ maps; circles: GRF) PDFs. Bottom panel: the fractional difference between the skewness of the reconstructed $N$-body $\kappa$ and the reconstructed GRF. The error bars are for our map size (12.25 deg$^2$), and are only shown in the top panel for clarity.](plot/plot_skewness3.pdf){width="48.00000%"} While the differences between the $N$-body and GRF cases in Figs. \[fig:noisyPDF\] and \[fig:noisypk\] are clear, understanding their detailed structure is more complex. First, note that the GRF cases exhibit the skewness discussed in Sec. \[sec:recon\_noise\], which arises from the reconstruction noise itself. We show the skewness of the reconstructed PDF (for both the $N$-body and GRF cases) compared with that of the noiseless ($N$-body) PDF for various smoothing scales in Fig. \[fig:skew\]. The noiseless $N$-body maps are positively skewed, as physically expected. The reconstructed, noisy maps are negatively skewed, for both the $N$-body and GRF cases. However, the reconstructed $N$-body results are less negatively skewed than the reconstructed GRF results (bottom panel of Fig. \[fig:skew\]), presumably because the $N$-body PDF (and peaks) contain contributions from the physical skewness, which is positive (see Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\]). However, the physical skewness is not large enough to overcome the negative “$N^{(0)}$”-type skewness coming from the reconstruction noise. We attribute the somewhat-outlying point at FWHM $=8$ arcmin in the bottom panel of Fig. \[fig:skew\] to a noise fluctuation, as the number of pixels at this smoothing scale is quite low (the deviation is consistent with zero). 
The decrease in $|S|$ between the FWHM $=2$ arcmin and 1 arcmin cases in the top panel of Fig. \[fig:skew\] for the noisy maps is due to the large increase in $\sigma_{\kappa}$ between these smoothing scales, as the noise is blowing up on small scales. The denominator of Eq. (\[eq.skewdef\]) thus increases dramatically, compared to the numerator. Comparisons between the reconstructed PDF in the $N$-body case and GRF case are further complicated by the fact that higher-order “biases” arise due to the reconstruction. For example, the skewness of the reconstructed $N$-body $\kappa$ receives contributions from many other terms besides the physical skewness and the “$N^{(0)}$ bias” described above — there will also be Wick contractions involving combinations of two- and four-point functions of the CMB temperature and $\kappa$ (and perhaps an additional bias coming from a different contraction of the three-point function of $\kappa$, analogous to the “$N^{(1)}$” bias for the power spectrum [@Hanson2011]). So the overall “bias” on the reconstructed skewness will differ from that in the simple GRF case. This likely explains why we do not see an excess of positive $\kappa$ values over the GRF case in the PDFs shown in Fig. \[fig:noisyPDF\]. While this excess is clearly present in the noiseless case (Fig. \[fig:noiseless\_PDF\]), and it matches physical intuition there, the picture in the reconstructed case is not simple, because there is no guarantee that the reconstruction biases in the $N$-body and GRF cases are exactly the same. Thus, a comparison of the reconstructed $N$-body and GRF PDFs contains a mixture of the difference in the biases and the physical difference that we expect to see. Similar statements hold for comparisons of the peak counts. Clearly, a full accounting of all such individual biases would be quite involved, but the key point here is that all these effects are fully present in our end-to-end simulation pipeline. While an analytic understanding would be helpful, it is not necessary for the forecasts we present below. Cosmological constraints {#sec:constraints} ------------------------ Before we proceed to present the cosmological constraints from non-Gaussian statistics, it is necessary to do a sanity check by comparing the forecasted contour from our simulated power spectra to that from an analytic Fisher estimate, $$\begin{aligned} \FB_{\alpha \beta}=\frac{1}{2} {\rm Tr} \left\{\CB^{-1}_{\rm Gauss} \left[\left(\frac {\partial C_\ell}{\partial p_\alpha} \right) \left(\frac {\partial C_\ell}{\partial p_\beta}\right)^T+ \left(\alpha\leftrightarrow\beta \right) \right]\right\},\end{aligned}$$ where $\left\{ \alpha,\beta \right\} = \left\{ \Omega_m,\sigma_8 \right\}$ and the trace is over $\ell$ bins. $\CB_{\rm Gauss}$ is the Gaussian covariance matrix, with off-diagonal terms set to zero, and diagonal terms equal to the Gaussian variance, $$\begin{aligned} \sigma^2_\ell=\frac{2(C_\ell+N_\ell)^2}{f_{\rm sky}(2\ell+1)\Delta\ell}\end{aligned}$$ We compute the theoretical power spectrum $C_\ell$ using the HaloFit model [@Smith2003; @Takahashi2012], with fractional parameter variations of $+1$% to numerically obtain $\partial C_\ell / \partial p$. $N_\ell$ is the reconstruction noise power spectrum, originating from primordial CMB fluctuations and instrumental/atmospheric noise (note that we only consider white noise here). The sky fraction $f_{\rm sky}=0.485$ corresponds to the 20,000 deg$^2$ coverage expected for AdvACT. 
$(F^{-1}_{\alpha\alpha})^{\frac{1}{2}}$ is the marginalized error on parameter $\alpha$. Both theoretical and simulated contours use the power spectrum within the $\ell$ range of \[100, 2,000\]. The comparison is shown in Fig. \[fig:contour\_fisher\]. The contour from full $N$-body simulations shows good agreement with the analytical Fisher contour. This result indicates that approximations made in current analytical CMB lensing power spectrum forecasts are accurate, in particular the neglect of non-Gaussian covariances from nonlinear growth. A comparison of the analytic and reconstructed power spectra will be presented in Ref. [@Sherwin2016].

![\[fig:contour\_fisher\] 68% C.L. contours from an AdvACT-like CMB lensing power spectrum measurement. The excellent agreement between the simulated and analytic results confirms that non-Gaussian covariances arising from nonlinear growth and reconstruction noise do not strongly bias current analytic CMB lensing power spectrum forecasts (up to $\ell = 2,000$).](plot/plot_contour_fisher.pdf){width="48.00000%"}

Fig. \[fig:contour\_noiseless\] shows contours derived using noiseless maps for the PDF and peak count statistics, compared with that from the noiseless power spectrum. We compare three different smoothing scales (1.0, 5.0, 8.0 arcmin), and find that smaller smoothing scales have stronger constraining power. However, even with the smallest smoothing scale (1.0 arcmin), the PDF contour is still significantly larger than that of the power spectrum. Peak counts using 1.0 arcmin smoothing show constraining power almost equivalent to that of the power spectrum. However, we note that 1.0 arcmin smoothing is not a fair comparison to the power spectrum with a cutoff at $\ell<2,000$, because in reality the beam size and instrument noise are likely to smear out signals on scales smaller than a few arcminutes (see below).

At first, it may seem surprising that the PDF is not at least as constraining as the power spectrum in Fig. \[fig:contour\_noiseless\], since the PDF contains the information in the variance. However, this only captures an overall amplitude of the two-point function, whereas the power spectrum contains scale-dependent information.[^9] We illustrate this in Fig. \[fig:cell\_diff\], where we compare the fiducial power spectrum to that with a 1% increase in $\Omega_m$ or $\sigma_8$ (while keeping other parameters fixed). While $\sigma_8$ essentially re-scales the power spectrum by a factor $\sigma_8^2$, apart from a steeper dependence at high-$\ell$ due to nonlinear growth, $\Omega_m$ has a strong shape dependence. This is related to the change in the scale of matter-radiation equality [@planck2015xv]. Thus, for a noiseless measurement, the shape of the power spectrum contains significant additional information about these parameters, which is not captured by a simple change in the overall amplitude of the two-point function. This is the primary reason that the power spectrum is much more constraining than the PDF in Fig. \[fig:contour\_noiseless\].

![image](plot/plot_contour_noiseless_PDF_clough.pdf){width="48.00000%"} ![image](plot/plot_contour_noiseless_Peaks_clough.pdf){width="48.00000%"}

![\[fig:cell\_diff\] Fractional difference of the CMB lensing power spectrum after a 1% increase in $\Omega_m$ (thick solid line) or $\sigma_8$ (thin solid line), compared to the fiducial power spectrum.
Other parameters are fixed at their fiducial values.](plot/plot_Cell_diff.pdf){width="48.00000%"}

![image](plot/plot_contour_noisy_PDF_clough.pdf){width="48.00000%"} ![image](plot/plot_contour_noisy_Peaks_clough.pdf){width="48.00000%"}

![\[fig:contour\_comb\] 68% C.L. contours derived using two combinations of the power spectrum, PDF, and peak counts, compared to using the power spectrum alone. Reconstruction noise corresponding to an AdvACT-like survey is included. The contours are scaled to AdvACT sky coverage of 20,000 deg$^2$.](plot/plot_contour_noisy_comb_clough.pdf){width="48.00000%"}

Fig. \[fig:contour\_noisy\] shows contours derived using the reconstructed, noisy $\kappa$ maps. We show results for three different filters — Gaussian windows of 1.0 and 5.0 arcmin and the Wiener filter. The 1.0 arcmin contour is the worst of the three, as noise dominates at this scale. The 5.0 arcmin-smoothed and Wiener-filtered contours show similar constraining power. Using the PDF or peak counts alone, we do not achieve better constraints than using the power spectrum alone, but the parameter degeneracy directions for the statistics are slightly different. This is likely due to the fact that the PDF and peak counts probe nonlinear structure, and thus they have a different dependence on the combination $\sigma_8(\Omega_m)^\gamma$ than the power spectrum does, where $\gamma$ specifies the degeneracy direction.

  Combination         $\Delta \Omega_m$   $\Delta \sigma_8$
  ------------------ ------------------- -------------------
  PS only                  0.0065              0.0044
  PDF + Peaks              0.0076              0.0035
  PS + PDF + Peaks         0.0045              0.0030

  : \[tab: constraints\] Marginalized constraints on $\Omega_m$ and $\sigma_8$ for an AdvACT-like survey from combinations of the power spectrum (PS), PDF, and peak counts, as shown in Fig. \[fig:contour\_comb\].

The error contour derived using all three statistics is shown in Fig. \[fig:contour\_comb\], where we use the 5.0 arcmin Gaussian-smoothed maps. The one-dimensional marginalized errors are listed in Table \[tab: constraints\]. The combined contour shows a moderate improvement ($\approx 30\%$ smaller error contour area) compared to the power spectrum alone. The improvement is due to the slightly different parameter degeneracy directions for the statistics, which break the $\sigma_8$-$\Omega_m$ degeneracy somewhat more effectively when combined. It is worth noting that we have not included information from external probes that constrain $\Omega_m$ (e.g., baryon acoustic oscillations), which can further break the $\Omega_m$-$\sigma_8$ degeneracy.

Conclusion {#sec:conclude}
==========

In this paper, we use $N$-body ray-tracing simulations to explore the additional information in CMB lensing maps beyond the traditional power spectrum. In particular, we investigate the one-point PDF and peak counts (local maxima in the convergence map). We also apply realistic reconstruction procedures that take into account primordial CMB fluctuations and instrumental noise for an AdvACT-like survey, with sky coverage of 20,000 deg$^2$, a noise level of 6 $\mu$K-arcmin, and a $1.4$ arcmin beam. Our main findings are:

1. We find significant deviations of the PDF and peak counts of $N$-body-derived $\kappa$ maps from those of Gaussian random field $\kappa$ maps, both in the noiseless and noisy reconstructed cases (see Figs. \[fig:noiseless\_PDF\], \[fig:noiseless\_pk\], \[fig:noisyPDF\], and \[fig:noisypk\]).
   For AdvACT, we forecast the detection of non-Gaussianity to be $\approx$ 9$\sigma$ (PDF) and 6$\sigma$ (peak counts), after accounting for the non-Gaussianity of the reconstruction noise itself. The non-Gaussianity of the noise has been neglected in previous estimates, but we show that it is non-negligible (Fig. \[fig:recon\]).

2. We confirm that current analytic forecasts for CMB lensing power spectrum constraints are accurate when confronted with constraints derived from our $N$-body pipeline that include the full non-Gaussian covariance (Fig. \[fig:contour\_fisher\]).

3. An improvement of $\approx 30\%$ in the forecasted $\Omega_m$-$\sigma_8$ error contour is seen when the power spectrum is combined with the PDF and peak counts (assuming AdvACT-level noise), compared to using the power spectrum alone. The covariance between the power spectrum and the other two non-Gaussian statistics is relatively small (with cross-covariance $< 20\%$ of the diagonal components), meaning the latter are complementary to the power spectrum.

4. For noiseless $\kappa$ maps (i.e., ignoring primordial CMB fluctuations and instrumental/atmospheric noise), a smaller smoothing kernel can help extract the most information from the PDF and peak counts (Fig. \[fig:contour\_noiseless\]). For example, peak counts of 1.0 arcmin Gaussian-smoothed maps alone can provide constraints as tight as those from the power spectrum.

5. We find non-zero skewness in the PDF and peak counts of reconstructed GRFs, which is absent from the input noiseless GRFs by definition. This skewness is the result of the quadratic estimator used for CMB lensing reconstruction from the temperature or polarization maps. Future forecasts for non-Gaussian CMB lensing statistics should include these effects, as we have here, or else the expected signal-to-noise could be overestimated.

In this work, we have only considered temperature-based reconstruction estimators, but in the near future polarization-based estimators will have equal (and, eventually, higher) signal-to-noise. Moreover, the polarization estimators allow the lensing field to be mapped out to smaller scales, which suggests that they could be even more useful for non-Gaussian statistics.

In summary, there is rich information in CMB lensing maps that is not captured by two-point statistics, especially on small scales where nonlinear evolution is significant. In order to extract this information from future data from ongoing CMB Stage-III and near-future Stage-IV surveys, such as AdvACT, SPT-3G [@Benson2014], Simons Observatory,[^10] and CMB-S4 [@Abazajian2015], non-Gaussian statistics must be studied and modeled carefully. We have shown that non-Gaussian statistics will already contain useful information for Stage-III surveys, which suggests that their role in Stage-IV analyses will be even more important. The payoff of these efforts could be significant, such as a quicker route to a neutrino mass detection.

We thank Nick Battaglia, Francois Bouchet, Simone Ferraro, Antony Lewis, Mark Neyrinck, Emmanuel Schaan, and Marcel Schmittfull for useful discussions. We acknowledge helpful comments from an anonymous referee. JL is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1602663. This work is partially supported by a Junior Fellowship from the Simons Foundation to JCH and a Simons Fellowship to ZH. BDS is supported by a Fellowship from the Miller Institute for Basic Research in Science at the University of California, Berkeley.
This work is partially supported by NSF grant AST-1210877 (to ZH) and by a ROADS award at Columbia University. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant ACI-1053575. Computations were performed on the GPC supercomputer at the SciNet HPC consortium. SciNet is funded by the Canada Foundation for Innovation under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund — Research Excellence, and the Univ. of Toronto.

[^1]: For example, higher-order moments [@Bernardeau1997; @Hui1999; @vanWaerbeke2001; @Takada2002; @Zaldarriaga2003; @Kilbinger2005; @Petri2015], three-point functions [@Takada2003; @Vafaei2010], bispectra [@Takada2004; @DZ05; @Sefusatti2006; @Berge2010], peak counts, Minkowski functionals [@Kratochvil2012; @Shirasakiyoshida2014; @Petri2013; @Petri2015], and the Gaussianized power spectrum [@Neyrinck2009; @Neyrinck2014; @Yu2012].

[^2]: <http://wwwmpa.mpa-garching.mpg.de/gadget/>

[^3]: <https://pypi.python.org/pypi/lenstools/>

[^4]: <http://camb.info/>

[^5]: While the number of potential planes could be a limiting factor in our sensitivity to these effects, we note that our procedure uses $\approx 40$-70 planes for each ray-tracing calculation (depending on the cosmology), which closely matches the typical number of lensing deflections experienced by a CMB photon.

[^6]: We find that this filter is necessary for numerical stability (and also because our simulated $\kappa$ maps do not recover all structure on these small scales, as seen in Fig. \[fig:theory\_ps\]), but our results are unchanged for moderate perturbations to the filter scale.

[^7]: Due to our limited number of models, linear interpolation is slightly more vulnerable to sampling artifacts than the Clough-Tocher method, because the linear method only utilizes the nearest points in parameter space. The Clough-Tocher method also uses the derivative information. Therefore, we choose Clough-Tocher for our analysis.

[^8]: We note that the signal-to-noise ratios predicted here are comparable to the $\approx 7\sigma$ bispectrum prediction that would be obtained by rescaling the SPT-3G result from Table I of Ref. [@Pratten2016] to the AdvACT sky coverage (which is a slight overestimate given AdvACT's higher noise level). The higher significance for the PDF found here could be due to several reasons: (i) additional contributions to the signal-to-noise for the PDF from higher-order polyspectra beyond the bispectrum; (ii) inaccuracy of the nonlinear fitting formula used in Ref. [@Pratten2016] on small scales, as compared to the $N$-body methods used here; (iii) reduced cancellation between the nonlinear growth and post-Born effects in higher-order polyspectra (for the bispectrum, these contributions cancel to a large extent, reducing the signal-to-noise [@Pratten2016]).

[^9]: Note that measuring the PDF or peak counts for different smoothing scales can recover additional scale-dependent information as well.

[^10]: <http://www.simonsobservatory.org/>
{ "pile_set_name": "ArXiv" }
Q: Using "plot for" in gnuplot to vary parameters I want to use the plot for feature in gnuplot to plot functions with varying parameters. Here an example par = "1 2" #two values for the parameter f(x,a) = sin(a*x) g(x,a) = cos(a*x) plot for [i=1:words(par)] g(x, word(par,i)), f(x, word(par,i)) What I expect is the plotting of the four functions g(x,1), g(x,2, f(x,1), and f(x,2). But for whatever reason only three functions are plotted, namely: g(x,1), g(x,2, and f(x,2). This seems completely arbitrary to me. Can someone help me out? A: You have to repeat the for condition: plot for [i=1:words(par)] g(x, word(par,i)), for [i=1:words(par)] f(x, word(par,i))
{ "pile_set_name": "StackExchange" }
Q: Getting the list of variables of a map in BPM Metastorm

I'm trying to get the list of variables in some map OUTSIDE the program, automatically. I know I can find them in the .process file, which has an XML structure. I also figured out that an "x:Object" element holding a variable has an "x:Type" attribute ending with "MboField}". But unfortunately I need to narrow the search criteria further, because I still can't find a reliable pattern to separate variables from other objects.

This is my current code in C#:

    var xdoc = XDocument.Load(patches.ProcessFilePatch);
    var xmlns = XNamespace.Get("http://schema.metastorm.com/Metastorm.Common.Markup");
    IEnumerable<string> values = from x in xdoc.Descendants(xmlns + "Object")
                                 where x.Attribute(xmlns + "Type").Value.ToString().EndsWith("MboField}")
                                 select x.Attribute(xmlns + "Name").Value.ToString();
    VariablesInProcessFile = values.ToList();

Are there any other ways to find variables among the other objects?

A: 

    private void getVariablesInProcessFile()
    {
        var xdoc = XDocument.Load(patches.ProcessFilePatch);
        var xmlns = XNamespace.Get("http://schema.metastorm.com/Metastorm.Common.Markup");
        var dane = xdoc.Descendants(xmlns + "Object").Where(x => CheckAttributes(x, xmlns)).ToArray();
        IEnumerable<string> valuesE = from x in dane.Descendants(xmlns + "Object")
                                      where x.Attribute(xmlns + "Type").Value.ToString().EndsWith("MboField}")
                                      select x.Attribute(xmlns + "Name").Value.ToString();
        VariablesInProcessFile = valuesE.ToList();
    }

    private bool CheckAttributes(XElement x, XNamespace xmlns)
    {
        var wynik = x.Attribute(xmlns + "Name");
        return wynik != null && (wynik.Value == patches.MapName + "Data" || wynik.Value == patches.altMapName + "Data");
    }

Where "patches" is my own class containing the path to the .process file and the possible names of the group of variables, usually related to the name of the map.
{ "pile_set_name": "StackExchange" }
No, no, no—not those. I know what you’re thinking. I’m talking about one of the food staples of the iron ore workers in the Upper Peninsula of Michigan—the pasty. Think in terms of a giant Italian Stromboli: a flaky crust (usually a pie crust) enveloping meat, diced potatoes, carrots, onions, or anything Mom or Dad could think of. It was (and is) a hearty baked meal—one to satisfy the hunger of very hard working people.

Do you know why the edges of the pasty are so thick? It’s because the iron ore workers didn’t wash their hands, so they picked up the pasty by the outer crust and then discarded that crust after eating the main section.

Pasties were eaten in the Middle Ages as part of the supper. Instead of beef or pork, the meat would usually be roast boar, pigeon, or wild game that was hunted. The pasty originated in England, and in particular Cornwall, where miners ate them for lunch. By the mid-1800s, the pasty had been introduced to the diet of the Upper Peninsula miners.

Today, we raise turkeys, pigs, cattle, and other animals for our consumption. During the Middle Ages, other animals were considered a delicacy. Besides the boar, deer, and other hunted animals, the nobility would gather up, fatten up, and glutton up on the following:

Dormice—these little critters were the edible cousins of the ones we think of today. They were kept in pens while being fed walnuts and chestnuts to fatten them. To eat a dormouse was a delicacy reserved for special holidays.

Pigeons—the birds were kept in their own special houses called dovecotes. These freestanding structures would have little “condos” in the interior where the pigeons could live. By royal decree, only the nobility were allowed to have dovecotes (when you visit the French châteaux, always look for the freestanding dovecotes). When it came time for supper, the chef would go out to the dovecote and pluck a couple of unlucky birds.

Inside a Dovecote

The Dovecote next to Penmon Priory on Anglesey. An impressive early 17th century dovecote.

Cockentryce—this must have been the forerunner to John Madden’s “turducken.” It was a half pig sewn to half of a castrated and fattened chicken.

Beaver’s tails—believe it or not, this was substituted for fish during fasting days (i.e., no meat allowed). Supposedly, a beaver’s tail tastes like fish. I’ll let you be the judge of that one.

On a final note, I can’t leave out a much sought-after (and expensive) delicacy:

Ambergris—this was produced in the digestive system of a sperm whale. It would be vomited out by the whale, wash up on the beach, and then be consumed with items like eggs, pies, or cakes. King Charles II of England was a big fan of this scarce delicacy.

Do you think that five hundred years from now, robots and computers will be talking about what we eat today?

Best Pasty in America

All due respect to the pasties my wife and Dad make, the best pasty I’ve ever had is at Suzy’s Pasties, located at 1020 US 2 W in Saint Ignace, Michigan. After you get off the Mackinac Bridge going west on Highway 2, her restaurant will be on the right (a mile or two). The crust is the best I’ve ever had. If it’s not open when you get there, I have a feeling it doesn’t really matter. You can stop along the interstate just about anywhere as long as the restaurant specializes in pasties.

Want to make a pasty? Check this out. Remember, the secret to a great pasty is the crust. Perhaps you might want to try diced cockentryce in your next pasty and then slather it up with Ambergris?
Actually, Ambergris is illegal today.

We Need Your Help

Please tell your friends about our blog site and encourage them to visit and perhaps subscribe to it. Sandy and I are trying to increase our audience and we need your help through your friends, followers, and social media contacts.

Thank You

Sandy and I appreciate you visiting with us. We have some exciting things on the horizon and we’ll keep you updated through each blog post.

What’s New With Sandy and Stew?

The final edited manuscripts (two volumes) for the walking tour of medieval Paris have been submitted to our book designer. It was a lot of fun picking out the illustrations, pictures, and images for the books. I’m looking forward to seeing the first draft of the book. It shouldn’t be too long before we go to print. We have a lot of stories and we’re looking forward to sharing these with you. Please continue to visit our blog site and perhaps you’d like to subscribe so that you don’t miss out on our blog posts, past and current. Please note that we do not and will not take compensation from individuals or companies mentioned or promoted in the blogs. Are you following us on Facebook and Twitter?

Copyright © 2015 Stew Ross
{ "pile_set_name": "OpenWebText2" }
BP12-S18-U24

B-Line has long been a leading manufacturer of support systems and electrical enclosures for the mechanical, electrical and telecommunications industries. Our spring steel fastener line includes a wide range of quality fastening systems for electrical, mechanical and telecommunication applications. Our spring steel fasteners include products for attachment to metal studs, steel beams, acoustical tee, drywall purlins, and channel.
{ "pile_set_name": "Pile-CC" }
Too many people are oblivious to the daily injustices that our systems enforce against our own fellow citizens within the city of San Diego and elsewhere. Today, you shed light on the criminal justice system’s flaws and our cultural shortcomings. These systems are perpetuated, as with most unjust things, by money changing hands. To get at the root of institutionalized injustice, which already impairs marginalized communities disproportionately, consider investigating the impact of the two publicly-traded private prison companies who hold contracts with the State of California. The State’s contracts directly impact justice, and the lack thereof, in San Diego. Specifically, question the lobbying they do with respect to rules around detainment, arrests, convictions and lengths of prison sentences. These two companies and the handful of financial institutions with large holdings in them have assets in the multibillions that depend on consistent - or ideally rapid - growth in the number of people in jail.

How many people attended the rallies? Particularly the one on Saturday. Where was it? Apparently the NAACP paid for it. Who was the keynote speaker? Where was it on the news on Saturday night or Sunday morning or even Sunday night? How does the Hispanic community feel about the vilification of Zimmerman? Since the courts found him innocent and the NAACP convicted him yet never referred to him as a Hispanic, does this show bias on their part? He would have done better to say, "Due to the Stand Your Ground law in Florida, what happened to Martin could have happened to any one of us."

I didn't vote for Obama because he's black. I voted for him because I thought he'd do a better job than McCain and Romney. I still do. But for the first time I feel like he's telling us he's not the American president, but a black man. Still, I understand where he's coming from. I'm a minority, too. But oddly the racist comments I've received in my life have been directed at me by black people, not white. Although some whites have made such comments, they were mostly black. And the prejudiced comments directed toward me have been from Mexicans who don't believe I'm Mexican enough. They accuse me of being ashamed of being a Mexican because I'm different from the stereotype. I think the president would have done better to keep out of this. The law is the law and will remain so until changed. Obama aside, I don't think the Martin/Zimmerman verdict will have a lasting impact on race relations in San Diego.
Younger generations inherited their bigotry from their parents and grandparents. They'll pass those values on to their children. Enlightenment breaks the chain.

Weil asks good questions. Where were they held? Why were they not more widely announced? This is the problem with many of the pro-immigrant rallies. By the time I hear about them, it's too late! Or, they have it in front of the County building at 4pm on a Friday!!!

It's impacted the country badly with very different narratives about the event being a function of race. I have seen little that would seem to lead toward harmonizing those disparate viewpoints. I don't know why it would be different in San Diego.

Barack Obama speaks on behalf of many special interests each and every day. He made the statement as a result of being under intense pressure from 13% of the population. No one has a problem with the innumerable statements he's made on behalf of other special interest groups, domestic or foreign.

"It's impacted the country badly with very different narratives about the event being a function of race. I have seen little that would seem to lead toward harmonizing those disparate viewpoints. I don't know why it would be different in San Diego." San Diego's African-American population is 6% (in the City, with a lower percentage in the County). Looking at the disparities on this comment board, those numbers hold up. Getting a balanced reaction to the verdict in San Diego is statistically impossible. That's why this conversation has to be opened up to Americans of all ethnic descents. That won't be possible until "racism" is no longer framed solely as Black or White.

"he displays as White". Is this new liberal speak? I have never heard this term before and wonder what kind of mind came up with this. Sad commentary on "civil" society. Racial and ethnic division and separation seem to be the focus instead of racial and ethnic harmony.

DLR is correct, we know how the Korean immigrants viewed African Americans in L.A. Any race or ethnicity can be bigoted or discriminatory toward another. We also know how a lot of people today view Arab-Americans. I remember one time, I was at a gas station downtown. A black man was saying something out loud. I wasn't paying attention. He then drew closer to me and repeated his question. I replied, "Sorry, I didn't realize you were speaking to me." He had an accent; my educated guess would be that he was Somali. Anyway, he then said "Oh, I thought you were being racist" or words to that effect. It was obvious to me that he had had some unpleasant encounter. As for Zimmerman, he is Latino on his mother's side (it's usually on the mother's side), but even as a Latino, he can still be classified for statistical purposes, as the government puts it, as "white."

Duckster, then "stand your ground laws" are a legal/judicial issue--not a racial one--and should be reconsidered. Take those laws to court. This whole incident has shown that America seems to prefer segregation (by all sides) rather than inclusion. Perhaps the past is our future, too.

Mission, why would the government classify Zimmerman as white? I would assume it would be Hispanic. Just like Obama is black and would never be considered white. Unless he committed a crime against a minority. Funny how that works.
IT IS A SICK SOCIETY which pretends that Trayvon Martin hasn't already received the justice that he deserved, a sick society which calls for George Zimmerman's head --- but then it is a sick, sick society which elects --- and then re-elects --- and re-elects --- scum like Barak Hushpuppy OhBummer, Biden The Magnificent, Nazi Pelosi, Filthy Harry Reid, and NYC Mayor Doomberg. In the best of all possible worlds, they'll all be on a plane which crashes into the NY Times Editorial offices at midmorning, any work day, next week. The real assassins in the Martin-Zimmerman confrontation are 1] Martin, 2] Barak Hushpuppy OhBummer, who is trying to promote race riots and the assassination of George Zimmerman, 3] Attorney General Eric "Fast and Furious" Holder, who will make sure that Zimmerman is unarmed when OhBummer’s and Holder's proxy assassins come for him, and 4] the fascist Democrat-captured media who are character assassins operating on behalf of the Democrat party and the OhBummer dictatorship. As well, these media propagandists are actionable as accessories before the fact if they succeed in promoting the injury or assassination of George Zimmerman. Every one of the media who have tried or who will try to cause injury to Zimmerman is a legitimate legal target of people who believe in justice --- the justice which Trayvon Martin earned when he jumped George Zimmerman and attempted to murder him. Two lying fascist sacks of shxt --- OhBummer and Holder --- and of course the usual race hustlers like Sharpton, Farrakhan, and Jackson --- can be counted on to promote racial hysteria and a false narrative on this subject, year-on-year, and decade-on-decade, ad nauseam, ad infinitum. As for "racism in America," there are millions more black racists than white racists in America today --- and the hundreds of millions of non-racists in America have "nothing" to apologize for --- no "guilt" to feel or adopt --- no "white privilege" to apologize for --- no “reparations” to pay --- no obligation to kowtow to all of the black racists --- or to Unkle Skum --- by which expression I mean “government at any and every level of American society.” In his call to the police before the incident, Zimmerman was advised by the cops to stay in his vehicle. Critics say that Zimmerman had “a duty” to "follow orders" but the caution issued by the cops was not a lawful command, else he would have been cited by the cops, and he wasn't. Even corrupt Florida State Attorney Angela Corey --- who hid evidence from the defense and fired the whistleblower who outed her --- did not argue that Zimmerman had a legal obligation to remain in his vehicle --- and he exited his vehicle when Martin disappeared from sight. Zimmerman called the police before Martin jumped him --- and MSNBC deleted parts of the recorded discussion and broadcast it to make it look like Zimmerman was targeting Martin because Martin was black --- rather than tailing Martin because he was wandering around in the dark in a neighborhood which had recently suffered some break-ins and thwarted other break-ins. This was MSNBC's attempt to lynch a straw horse, a so-called "white" Hispanic. MSNBC needed a black-white confrontation to promote their own racist, anti-white narrative that America is a "racist" society --- a society wherein whites should be disarmed --- and pay reparations to non-whites for the "privilege" [crime] of being white. 
There is no evidence whatever that race played a part in Zimmerman's execution of his duties as the on-duty Neighborhood Watch Volunteer. It is said with great derision that Zimmerman is or was "a wannabe cop." These critics appear to live in safe neighborhoods, no? There's nothing wrong with the aspiration to be a cop unless one intends to indict all cops for wanting to be cops --- and letting a puddy like Chris Matthews be the nighttime Neighborhood Watch volunteer strikes me as ineffective, to say the least --- although a pretty good way to get rid of Chris Matthews, come to think of it. Immediately upon the breaking of this story in the media, there was the assumption that the confrontation between Martin and Zimmerman had something to do with Florida's "stand your ground" law. But because Martin jumped Zimmerman and knocked him to the ground and proceeded to beat him in "mixed martial arts" ground-and-pound style, Zimmerman clearly was not "standing his ground" --- in the moment of confrontation, he was on his back with Martin on top of him smashing his nose out of shape and smashing Zimmerman's head on the cement. The attacks on "stand your ground" laws are being made by opportunists whose real objective is to leave the American citizen defenseless and let the freelance-Democrat scum rule the streets of America. I say that the American people should be well-armed and well-ammoed and stand their ground against the OhBummer dictatorship and against Unkle Skum --- against American government at every level. We have no obligation whatever to surrender to fascist statism --- or to any other brand of statism, as a matter of fact. That government is best which governs least. It is said that American government rests upon the consent of the governed. Now is the time for all good men and women to withdraw and cancel that consent. Trayvon Martin has been portrayed in the media as a sweetie-pie. The first published pictures of him that I saw were of Martin at the age of 12, or so, looking relatively mild and not especially bright. The pictures which Martin published of himself at age 17, online, and before he attacked Zimmerman --- and pictures which others had of him at that age --- showed him at 160 lbs and six feet in height --- a football player who had had some "mixed martial arts" training, on his own website flipping-off the viewer, smoking marijuana, and holding a semiautomatic handgun --- I guess OhBummer would have to call that "an assault weapon," ay? He'd twice been suspended from school, had been found in possession of stolen property, started fights with other people, was a dopehead, and gave the impression that he was also a dope dealer. Trayvon Martin was a cheap thug, and it makes perfect sense for Barak Hushpuppy OhBummer to say that if he'd had a son, that his son would look like and be like Trayvon Martin, although, arguably, that son would look more like Trayvon Martin's ally, Rachel Jeantel, who has good reason to wear a hoodie, and probably also a very large trashbag, all things considered. So let Barak Hushpuppy OhBummer and Eric "Fast and Furious" Holder don their hoodies --- their version of the racist KKK pointy-headed white sheets --- and let them pursue their black racist witch hunt and their campaign of character assassination against George Zimmerman.
I say that the majority of the American people see OhBummer and Holder --- and the Democrat-captured media --- and the whole OhBummer Wrecking Crew --- for what they truly are --- a dictatorship which deserves to be smashed.

"This whole incident has shown that America seems to prefer segregation (by all sides) rather than inclusion."

No it hasn't. When considered relative to other white-on-black justifiable shootings, this case shows that racism is alive and well in the American judicial system.

"Are there racial disparities in justifiable homicide rulings? Out of 53,000 homicides in the database, 23,000 have a white shooter and a white victim. The shooting is ruled to have been justified in a little more than 2 percent of cases. In states with a SYG law (after enactment), the shooting is ruled to be justified in 3.5 percent of cases, compared to less than 2 percent in non-SYG states. In cases where both the victim and shooter are black, the numbers are almost identical, if slightly lower. When the shooter and victim are of different races, there are substantial differences in the likelihood a shooting is ruled to be justified. When the shooter is black and the victim is white, the shooting is ruled justified in about 1 percent of cases, and is actually slightly lower in non-SYG states. Between 2005 and 2010, there were 1,210 homicides with a black shooter and a white victim—the shooting was ruled to be justified in just 17 of them (about 1 percent)."
{ "pile_set_name": "Pile-CC" }
Introduction To Excelsior

Excelsior American School is one of the best residential and day-boarding schools in the country, located in the heart of Gurugram. By incorporating international pedagogy and the UK’s Cambridge-based curricula, we encourage in our students an inherent curiosity and nurture their undiscovered potential. We proudly wear the badge of experience that has proved, over the years, our prowess in doing so.

At Excelsior, our motivational academic ethos focuses on enabling students to develop effective life skills. This includes empowering them to actively pursue their passions, interests and academics. Our environment has been planned in such a way that students assimilate easily into a holistic growth process that addresses education and child development in the current global context. We prepare our students to be future-ready.

Our motto at Excelsior is “Self-Inspired Learning”. The motto is an embodiment of our belief in empowering independent thinking, an analytical approach and the ability to be self-driven and motivated. With our world-class campus in Gurugram, Excelsior is regarded as one of India’s top International Schools for its high standards of teaching methodology, technology collaboration, and a global culture in every aspect of learning.
{ "pile_set_name": "OpenWebText2" }
Q: Is it possible to know the probability that a trade is successful?

I'm trying to model the distribution of different outcomes of day trading every day for a year. I'm starting with $350. I'm only doing options trading on Apple stock with a 5% stop loss and a 15% stop gain. And if it doesn't hit one of those stops, I sell before the market closes. I'm not trying to find a way to control whether I win or lose on a given day, I'm just gonna do my best. But at least in the long run, is there a way to use the law of large numbers so that after a year, my average is close to the probability of winning on a given day? If I flip a coin every day for a year, I can get all heads, yeah, but it's way more likely that I get within 3 or 4 from half heads. Is there a way to set up my option trade for the day so that it has a specific probability? Or at least on certain days that have certain conditions, will there be a pretty specific probability? I've tried to learn "the secret to making money on the stock market", but I think for an average joe like me, I'm better off just trying to treat it as much like a coin flip as possible. And by having certain limits on my orders, I get the impression that a probability can be calculated.

A: If you have a trading system, you can estimate this. By a trading system I mean the criteria that define when you will take a trade: once a setup comes up, at what price you will open the trade and at what price you will close it. As an example, if you want to buy once price breaks through resistance at $10.00, you might place your buy order at $10.05. Once you have a written trading system, you could backtest it to get the percentage of winning trades to losing trades and your average win size to average loss size; from these you could work out your expectancy for each trade that you take following your trading system.
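To make the expectancy arithmetic and the coin-flip intuition concrete, here is a minimal sketch in Python. The backtest tallies, win/loss sizes, and trade count are hypothetical numbers chosen to match the question's 15% stop gain and 5% stop loss; they are not real results, and this is not part of the original answer.

    import math

    # Hypothetical backtest tallies (not real data)
    wins, losses = 110, 140
    avg_win, avg_loss = 0.15, 0.05   # +15% stop gain, -5% stop loss
    p = wins / (wins + losses)

    # Expectancy per trade: p * average win - (1 - p) * average loss
    expectancy = p * avg_win - (1 - p) * avg_loss
    print(f"win rate {p:.2f}, expectancy per trade {expectancy:+.3f}")

    # Law-of-large-numbers intuition from the question: over n independent
    # trades, the observed win rate scatters around p with standard
    # deviation sqrt(p * (1 - p) / n), so the yearly average narrows as n grows.
    n = 250                          # roughly one trade per trading day
    sigma = math.sqrt(p * (1 - p) / n)
    print(f"after {n} trades: win rate about {p:.2f} +/- {2 * sigma:.2f} (2 sd)")

The key caveat, matching the answer above, is that the estimate is only as good as the assumption that future trades behave like the backtested ones and are roughly independent.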
{ "pile_set_name": "StackExchange" }
The Jasenovac camp complex consisted of five detention facilities established between August 1941 and February 1942 by the authorities of the so-called Independent State of Croatia. As Germany and its Axis allies invaded and dismembered Yugoslavia in April 1941, the Germans and the Italians endorsed the proclamation of the so-called Independent State of Croatia by the fanatically nationalist, fascist, separatist, and terrorist Ustaša organization on April 10, 1941. After seizing power, the Ustaša authorities erected numerous concentration camps in Croatia between 1941 and 1945. These camps were used to isolate and murder Jews, Serbs, Roma (also known as Gypsies), and other non-Catholic minorities, as well as Croatian political and religious opponents of the regime. The largest of these centers was the Jasenovac complex, a string of five camps on the bank of the Sava River, about 60 miles south of Zagreb. It is presently estimated that the Ustaša regime murdered between 77,000 and 99,000 people in Jasenovac between 1941 and 1945. In late August 1941, the Croat authorities established the first two camps of the Jasenovac complex—Krapje and Brocica. These two camps were closed four months later. The other three camps in the complex were: Ciglana, established in November 1941 and dismantled in April 1945; Kozara, established in February 1942 and dismantled in April 1945; and Stara Gradiška, which had been an independent holding center for political prisoners since the summer of 1941 and was converted into a concentration camp for women in the winter of 1942. The camps were guarded by Croatian political police and personnel of the Ustasa militia, which was the paramilitary organization of the Ustaša movement. Conditions in the Jasenovac camps were horrendous. Prisoners received minimal food. Shelter and sanitary facilities were totally inadequate. Worse still, the guards cruelly tortured, terrorized, and murdered prisoners at will. Between its establishment in 1941 and its evacuation in April 1945, Croat authorities murdered thousands of people at Jasenovac. Among the victims were: between 45,000 and 52,000 Serb residents of the so-called Independent State of Croatia; between 12,000 and 20,000 Jews; between 15,000 and 20,000 Roma (Gypsies); and between 5,000 and 12,000 ethnic Croats and Muslims, who were political and religious opponents of the regime. The Croat authorities murdered between 320,000 and 340,000 ethnic Serb residents of Croatia and Bosnia during the period of Ustaša rule; more than 30,000 Croatian Jews were killed either in Croatia or at Auschwitz-Birkenau. Between 1941 and 1943, Croat authorities deported Jews from throughout the so-called Independent State to Jasenovac and shot many of them at the nearby killing sites of Granik and Gradina. The camp complex management spared those Jews who possessed special skills or training, such as physicians, electricians, carpenters, and tailors. In two deportation operations, in August 1942 and in May 1943, Croat authorities permitted the Germans to transfer most of Croatia's surviving Jews (about 7,000 in total), including most of those still alive in Jasenovac, to Auschwitz-Birkenau in German-occupied Poland. As the Partisan Resistance Movement under the command of Communist leader Josip Tito approached Jasenovac in late April 1945, several hundred prisoners rose against the camp guards. Many of the prisoners were killed; a few managed to escape. 
The guards murdered most of the surviving prisoners before dismantling the last three Jasenovac camps in late April. The Partisans overran Jasenovac in early May 1945. Determining the number of victims for Yugoslavia, for Croatia, and for Jasenovac is highly problematic, due to the destruction of many relevant documents, the long-term inaccessibility to independent scholars of those documents that survived, and the ideological agendas of postwar partisan scholarship and journalism, which has been and remains influenced by ethnic tension, religious prejudice, and ideological conflict. The estimates offered here are based on the work of several historians who have used census records as well as whatever documentation was available in German, Croat, and other archives in the former Yugoslavia and elsewhere. As more documents become accessible and more research is conducted into the records of the Ustaša regime, historians and demographers may be able to determine more precise figures than are now available.
{ "pile_set_name": "Pile-CC" }
So this guy was sucking brezz, felt something in his mouth, tasted somehow.

"Are you pregnant?"

"No!"

"Why is milk coming out from your brezz?" 🤔

"It happens from time to time."

"LIAR! It's because you've done aborshon before."

"No! I have not done aborshon."

Let me explain. Thread. RT.

What happened in the story above is a condition known as galactorrhea, where a person has a milk-like discharge from either one or both nipples. This discharge is different from the regular milk secretion that occurs during pregnancy. It can occur in men also. Yes, you heard it. It can be caused by a number of things. Let's start from the common: the commonest cause is what we call a prolactinoma, which is simply a tumor or growth that occurs in the pituitary gland. Where is this gland? I'd like to say it's in between your eyes, inside your head.
{ "pile_set_name": "OpenWebText2" }
Based out of Los Angeles, we specialize in service and repair of all major home and commercial appliances and A/C and heating units, including most brands and models. Serving the Greater Los Angeles area and the San Fernando Valley (see our Service Areas), our technicians are well experienced and have many years of field work behind them. We offer same-day service on most orders. There is no extra charge for evenings, weekends or holidays. We are always in your area, so there is no travel charge! Lastly, we only install brand-new, factory-recommended parts.
{ "pile_set_name": "Pile-CC" }
Teresa Carlson Teresa Carlson is the current vice president for Amazon Web Services' worldwide public sector business. Prior to working for Amazon, Carlson served as Microsoft's Vice President of Federal Government business. Carlson was named Executive of the Year in 2016 for companies greater than $300 million by the Greater Washington GovCon Awards, which is administered by the Northern Virginia Chamber of Commerce. Education Carlson graduated from Western Kentucky University with a bachelor's degree in communications and a master's in speech and language pathology. References Category:Amazon.com people Category:Living people Category:Year of birth missing (living people)
{ "pile_set_name": "Wikipedia (en)" }
Vasoconstrictive effects of human post-hemorrhagic cerebrospinal fluid on cat pial arterioles in situ. Cat cortical arterioles were exposed in vivo to cerebrospinal fluid (CSF) from four patients with subarachnoid hemorrhage (SAH) due to a ruptured intracranial aneurysm. Pial arteriolar caliber was measured by the television image-splitting technique. There was a consistent vasoconstrictive response to CSF. This effect could be ascribed neither to the pH of the CSF nor to the potassium concentration. The vasoconstriction, which was more pronounced with decreasing arteriolar caliber, could be resolved by the perivascular application of nifedipine.
{ "pile_set_name": "PubMed Abstracts" }
Power over Ethernet systems are seeing increasing use in today's society. Power over Ethernet, sometimes abbreviated PoE, refers to providing power to Ethernet devices over an Ethernet line that is also used to communicate data. Thus, power over Ethernet devices do not require separate power supply lines. In some instances, the power may be supplied by a power supply contained within an Ethernet switch. Because the power supply does not generally have the power capability to supply maximum power to every port, there is a limit on the number of power over Ethernet devices that can be connected to a given power supply. A port may be denied power if powering it would result in oversubscription of the power supply. Example power over Ethernet devices that can benefit from receiving power over the Ethernet communication lines include an internet protocol telephone, a badge reader, a wireless access point, a video camera, and others. Traditionally, when a power over Ethernet device is connected to a power supply, the power over Ethernet device is allocated a maximum power class according to IEEE standard 802.3af, denoted as classes 0 through 4. These maximum values correspond to the maximum amount of power that will be supplied by the power supply to the power over Ethernet device. IEEE standard 802.3af provides for three levels of 15.4 watts, 7.5 watts, and 4.0 watts for these power over Ethernet devices. In certain circumstances, such allocation prevents the power supply from being utilized to its full capability due to the coarse granularity in class. A software program referred to as Cisco Discovery Protocol allows for more granular specification of the limit for power over Ethernet powered devices other than the above-described IEEE levels. However, the power supply still may have unutilized capacity.
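As a rough illustration of the oversubscription behavior described above, the following sketch allocates each port the maximum for its class and denies any request that would exceed the supply's budget. The class levels are the three quoted in this text, and the 37 W capacity and the sequence of port requests are hypothetical; this is not code from the patent or from any particular switch.

    # Hypothetical PoE budget check: grant each port its class maximum,
    # deny the request when the budget would be oversubscribed.
    LEVELS_W = (15.4, 7.5, 4.0)  # the three levels quoted above

    class PowerBudget:
        def __init__(self, capacity_w):
            self.capacity_w = capacity_w
            self.allocated_w = 0.0

        def request(self, port, level_w):
            """Grant power if the class maximum fits in the remaining budget."""
            if self.allocated_w + level_w > self.capacity_w:
                print(f"port {port}: denied ({level_w} W would oversubscribe)")
                return False
            self.allocated_w += level_w
            remaining = self.capacity_w - self.allocated_w
            print(f"port {port}: granted {level_w} W ({remaining:.1f} W left)")
            return True

    budget = PowerBudget(capacity_w=37.0)      # hypothetical supply
    for port, level in enumerate([15.4, 15.4, 7.5], start=1):
        budget.request(port, level)            # the third port is denied

In this example, 6.2 W of capacity is stranded: the third device's class maximum (7.5 W) exceeds the remainder even if its actual draw were smaller, which is exactly the coarse-granularity inefficiency the passage describes.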
{ "pile_set_name": "USPTO Backgrounds" }
Quick take: It's hard to follow up on the Redmi Note 3, but Xiaomi has managed to deliver a great successor in the Redmi Note 4. The phone now comes with more memory and storage, and the design changes make the device feel upmarket. Battery life has also received a boost thanks to the Snapdragon 625 SoC, and the camera is better than what we saw last year. In short, this is the phone to beat in the budget segment.

The good
- Class-leading performance
- Premium design
- Great battery life

The bad
- MIUI quirks
- Fast charging limited to 5V/2A

Here we go
Xiaomi Redmi Note 4 Full review

Xiaomi had a great 2016 on the back of the Redmi Note 3. Over 3.6 million units of the phone were sold, allowing Xiaomi to cross $1 billion in revenue from the country for the first time. However, competition in the budget segment has intensified, with Lenovo launching a bevy of models in the country last year. The Moto G4 series continues to sell in huge numbers, and the Z2 Plus picked up a discount recently, bringing the cost of the phone down to ₹14,999. For that amount, you get a handset powered by the Snapdragon 820. The Honor 6X — which has a dual camera setup — is slated to make its debut in India next week, and Samsung's Galaxy J7 and Galaxy On Nxt offer a lot of value for their asking price.

To counter the threat, Xiaomi is selling three variants of the Redmi Note 4 in India: the base model has 2GB of RAM and 32GB storage and retails for ₹9,999 ($145), then there's a variant with 3GB of RAM and 32GB storage for ₹10,999 ($160), and the most interesting model is the one with 4GB of RAM and 64GB storage, which is available for just ₹12,999 ($190). Can the Redmi Note 4 fend off its rivals and solidify its place in this category? Let's find out.

Everything you need to know
Xiaomi Redmi Note 4 Specs

- Operating System: MIUI 8 based on Android 6.0.1 Marshmallow
- Display: 5.5-inch 1080p (1920x1080) IPS LCD panel, 2.5D curved glass, 401ppi pixel density
- SoC: Octa-core Qualcomm Snapdragon 625, eight Cortex A53 cores at 2.0GHz, 14nm
- GPU: Adreno 506 with Vulkan API, OpenCL 2.0, and OpenGL ES 3.1, 650MHz
- RAM: 2GB/3GB/4GB
- Storage: 32GB/32GB/64GB, microSD slot up to 128GB
- Rear camera: 13MP with f/2.0 lens, PDAF, LED flash, 1080p video recording
- Front shooter: 5MP with f/2.0 lens, 720p video recording
- Connectivity: LTE with VoLTE, Wi-Fi 802.11 a/b/g/n, Bluetooth 4.1, GPS, GLONASS, Micro-USB, 3.5mm audio jack, IR blaster
- Battery: 4100mAh battery, fast charging (5V/2A)
- Fingerprint: Rear fingerprint sensor
- Dimensions: 151 x 76 x 8.3mm
- Weight: 175g
- Colors: Gold, Dark Grey, Matte Black

About this review

I (Harish Jonnalagadda) am writing this review after using the Redmi Note 4 variant with 4GB of RAM and 64GB storage for two weeks in Hyderabad, India. The phone was connected to Airtel's 4G network for the first week, and Jio's VoLTE-enabled network for the rest of the review period. The phone was on the MIUI 8 beta channel, and received three updates with stability fixes.

Exquisite
Xiaomi Redmi Note 4 Design and screen

The Redmi Note 4 is roughly the same size as its predecessor, but the design has been significantly altered. The phone now sports an all-metal chassis, with Xiaomi stating that it takes over 30 steps to turn the aluminum block into a finished piece. The phone is slightly heavier than the Redmi Note 3, but the added heft makes a huge difference in day-to-day usage.
It's weighted perfectly, and Xiaomi managed to trim the overall thickness by 0.3mm, bringing the phone down to 8.4mm. The Redmi Note 3 featured ungainly plastic at the top and bottom, but the Note 4 is entirely made out of aluminum. It instead has antenna lines at the back, which provide signal reception while also serving to break up the design. The phone isn't as curved at the back, with the chamfered edges making for better ergonomics. These are all subtle changes, but they add up to produce a phone that's vastly different. The end result is that the Redmi Note 4 feels great to hold and use. The overall fit and finish is one that befits a high-end device, and shows how far companies that cater to the budget segment have come.

Rounding off the design, the Redmi Note 4 has a speaker grille at the bottom, and although it is a single speaker, there are two sets of grilles for the sake of symmetry. They're joined in the middle by a microUSB port, an odd choice in 2017 considering the industry is moving to USB-C. The Redmi Pro offers the newer USB-C port, and it is likely Xiaomi will switch to the standard from the next generation. At the top, you'll find the 3.5mm jack and an IR blaster.

The Redmi Note 4's design wouldn't look out of place on a high-end phone.

The power and volume buttons are on the right, and they offer decent tactile feedback. The SIM card slot is on the left, and you can either slot in two SIM cards (microSIM + nanoSIM) or a SIM card along with a microSD card. Round the back, the camera sensor and lens module are aligned with the fingerprint sensor, which is slightly recessed. The front is dominated by a 5.5-inch display, and the addition of 2.5D curved glass makes a substantial difference when using the screen. The hardware navigation buttons are backlit, allowing for easy access at night. The display itself is brighter and has better color accuracy than the Redmi Note 3, and is easily one of the best panels in this segment. You get the usual Xiaomi additions as well — there's Reading Mode, a blue light filter that makes it easier to read text at night. The mode lets you create a schedule to automatically enable it, and there's also the option of enabling it for selected apps. You can also adjust the color temperature to your liking, and toggle double tap to wake the screen.

All the details
Xiaomi Redmi Note 4 Hardware

Xiaomi has excelled at offering great hardware in its budget phones, and that hasn't changed with the Redmi Note 4. The Chinese variant of the Redmi Note 4 is powered by MediaTek's Helio X20 SoC, but as Xiaomi isn't allowed to launch phones powered by MediaTek processors in India, the local variant is powered by a Snapdragon 625. Although the naming convention may lead one to believe that it is a downgrade from the Snapdragon 650 used in the Redmi Note 3, that isn't the case. Unlike the 28nm Snapdragon 650, the Snapdragon 625 is built on the 14nm node, resulting in greater energy efficiency. The mid-range chip powers through everyday tasks with ease, and there wasn't any lag or slowdown in the two weeks I've used the phone. The 4GB of RAM also makes a difference when multitasking. The Snapdragon 625 can also handle visually-intensive games like Modern Combat 5: Blackout or Asphalt 8 without breaking a sweat.

The Redmi Note 4 handles everything you throw at it with aplomb.

Even though the base model of the handset comes with 2GB of RAM, it is great to see Xiaomi moving away from 16GB internal memory and instead offering 32GB as the base storage.
The phone comes with the usual range of connectivity options, including dual-band Wi-Fi, Bluetooth 4.1, LTE with VoLTE, and an IR blaster that lets you control a variety of appliances. There's no NFC on the phone, but that isn't as major an omission as it is in Western markets. Android Pay is yet to make its debut in India, and it doesn't look like it will do so anytime soon. The fingerprint sensor at the back is slightly recessed, making it easy to locate with your finger. Its position beneath the camera module makes it easy to access, and the sensor itself is quick to authenticate. It is an always-on sensor, so you'll be able to unlock the device even when the display is off.

The speaker on the Redmi Note 4 is significantly better than its predecessor's, and that's mainly due to its placement. Moving the speaker to the bottom means that it is no longer muffled when lying flat on a surface. The quality from the speaker is average — with sound getting distorted at high volumes — but at least you won't miss any incoming calls or notifications.

MIUI saga continues
Xiaomi Redmi Note 4 Software

MIUI 8 is Xiaomi's biggest release in a long time, introducing much-needed visual flair along with new customization options. The skin is based on Android 6.0.1 Marshmallow, and the phone is currently on the December security patch. Xiaomi is testing a Nougat preview of MIUI, and will be rolling it out widely in the coming months. Setting up the Redmi Note 4 is a hassle, as MIUI still doesn't offer a way to restore apps and settings. So you'll have to individually install apps from the Play Store after booting into the phone. Another issue is with the phone's settings, which is a jumbled mess in its current iteration. Settings you'd normally find on other Android phones are inexplicably missing, and the ones that are available aren't located where you'd expect. For instance, if you want to enable installation of apps from outside the Play Store (useful for installing apps like Spotify), you'll have to go to Settings -> Additional settings -> Privacy -> Unknown sources. On most other phones, it is at Settings -> Security -> Unknown sources.
The profiles are sandboxed and use their own distinct data, but you do get the option to move data between profiles. MIUI 8 also offers video editing tools in the gallery app, there's a new power-saving mode that lets you conserve the battery, and there's a Quick Ball feature that lets you access shortcuts with ease. You can also take scrolling screenshots, convert currency and other units on the fly, and much more. Eight new features in MIUI 8 There's also a one-handed mode, which is accessible with a left-to-right (or vice versa) swipe gesture across the navigation keys. You can shrink the screen size down to 4.0 inches, 4.5 inches, or 3.5 inches, making it more convenient to use the phone one-handed. Xiaomi also offers several features for the Indian market. The dialer includes caller ID information for the delivery staff of Amazon, Domino's, Zomato, and other brands, making it easier for you to identify incoming calls. Better than before Xiaomi Redmi Note 4 Camera The Redmi Note 4 has a 13MP camera with f/2.0 lens and PDAF. There's a 5MP camera up front that also sports an f/2.0 lens. The camera app is easy to use and comes with a wealth of options, including filters, beautify effects, and a manual mode that lets you tweak the ISO, white balance, and exposure settings. You can also take tilt-shift photos, set a countdown timer, shoot panoramas, and select from various scenes. The camera does a great job of taking photos in well-lit conditions, and the resulting images are full of detail and offer saturated colors. You get more detail when shooting in HDR, but doing so takes slightly longer to shoot images. Images at low-light turned out decent, but you'll have to put in a lot of effort to get passable shots.
{ "pile_set_name": "OpenWebText2" }
1. Introduction {#sec1}
===============

Health care is changing dynamically in the 2010s. The economic recession and problems with recruiting professionals \[[@B1], [@B2]\], staff retention \[[@B3]\], creating healthy work environments \[[@B4], [@B5]\], and a growing demand for customer orientation \[[@B6]\] pose challenges for nurse managers' work. More expertise in management is needed to respond to these issues. One essential area of a nurse manager's management skills is the use of different leadership styles \[[@B7]\]. Leadership styles can be seen as different combinations of tasks and transaction behaviours that influence people in achieving goals \[[@B8]\]. Earlier studies indicate that a nurse manager's effective leadership style is associated with staff retention \[[@B5]\], work unit climate \[[@B4]\], nurses' job satisfaction \[[@B9], [@B10]\], nurses' commitment \[[@B11]\], and patient satisfaction \[[@B12]\]. Transformational leadership style \[[@B5], [@B6], [@B13], [@B14]\] and transactional leadership \[[@B7]\] help to respond to these issues. Transformational leadership refers to the leader's skills to influence others towards achieving goals by changing the followers' beliefs, values, and needs \[[@B7]\]. Transactional leadership complements and enhances the effects of transformational leadership outcomes \[[@B15]\].

There are certain skills required from nurse managers so as to be able to use these effective leadership styles. The skills include the ability to create an organization culture that combines high-quality health care and patient/employee safety, and highly developed collaborative and team-building skills \[[@B1]\]. Nurse managers also need the readiness to observe their own behaviour \[[@B16]\] and its effects on the work unit, so that they can adjust to a better leadership style. These kinds of skills are related to a manager's emotional intelligence (EI). EI is an ability to lead ourselves and our relationships effectively \[[@B17]\]. It has been defined as the ability to observe one's own and others' feelings and emotions, to discriminate among them, and to use this information to direct one's thinking and actions \[[@B18]\]. EI is composed of personal competence and social competence. Self-awareness and self-management are reflections of personal competence, influencing the way the leader manages him/herself. Social awareness and relationship management reflect social competence, which affects how the leader manages relationships with others \[[@B17]\]. Nurse managers with that skill can easily form relationships with others, read employees' feelings and responses accurately, and lead successfully \[[@B19]--[@B21]\]. Emotionally intelligent leaders' behaviour also stimulates the creativity of their employees \[[@B22]\].

Goleman et al. \[[@B23]\] have identified visionary, coaching, affiliate, and democratic styles as resonant, and pacesetting and commanding styles as dissonant leadership styles. Most leaders use both resonant and dissonant leadership styles. The leadership styles of Goleman et al. are applied as the basis of this study because earlier studies refer to the significance of these styles, especially that of EI, in a manager's work. In addition, these leadership styles are one way of aiming to carry out transformational leadership. Especially visionary, coaching, affiliate, and democratic styles include elements that promote transformational leadership.
Such elements are, for example, the leader being visionary and empowering staff \[[@B4]\]. This paper focuses on Finnish nurse managers' leadership styles. The Finnish health care system is a strong institution where health care services are offered to all citizens and funded by taxes \[[@B24]\]. It is widely recognized that health care services in Finland are of high quality. Despite recent concerns about equity issues, Finns are in general very satisfied with their health care services \[[@B25]\]. Consequently, it is important to explore nurse managers' leadership styles especially in this context.

2. Materials and Methods {#sec2}
========================

2.1. Aim of the Study {#sec2.1}
---------------------

The intention of this study was to explore nurses' and supervisors' perceptions of nurse leaders' leadership styles. The research questions were as follows: what kind of leadership styles do nurse managers use, and what are the factors affected by their leadership styles?

2.2. Participants {#sec2.2}
-----------------

To achieve the aim of this study, data were collected through open interviews. The majority of Finnish nurse managers, nurses, and supervisors work in hospitals or long-term facilities. Selection of participants was performed by convenience sampling \[[@B26]\]. Participants were selected with attention to the fact that they were of different ages, worked in different wards and units (e.g., psychiatry, internal diseases, gerontology) in either hospitals or long-term facilities, and had worked with more than one nurse manager. The researcher contacted the participants and asked whether they were interested in taking part in the study. The participants were informed about the aim of the study. Participation was voluntary. Prior to the interviews, each participant signed a form where they gave their consent to participate in the study.

A total of 11 nurses and 10 supervisors, 20 women and one man, from eight Finnish hospitals and five long-term care facilities participated in the study. The age of the nurses varied between 30 and 53 and their experience in health care between 7 and 25 years. The age of the supervisors varied between 38 and 59 and their experience as supervisors between 5 and 21 years. Both nurses and supervisors had worked with many nurse leaders and they were interviewed about nurse managers in general. They thus had experience of different nurse managers on different wards and were able to describe leadership styles from various aspects.

2.3. Data Collection and Analysis {#sec2.3}
---------------------------------

Semistructured interviews were used to gather data on the perceptions of nurse managers' leadership styles and the factors affected by leadership styles. Interviews were usually carried out in an office at the participants' workplace. All interviews were recorded with individual consent. Participants were initially asked to describe their work and earlier study and work history. They were subsequently asked about their perception of leadership styles and asked to describe the leadership styles used by their nurse managers. After that they were asked about the factors affected by leadership styles. Each interview was approached individually, guided by participants' responses. The interview sessions lasted between 30 and 85 minutes. Every interview was transcribed word for word from the recordings. Interviewing was continued until saturation of the data was achieved \[[@B27]\].
Because nurses and supervisors might have differed in their perceptions of leadership styles, the data were first analysed separately in two groups, following the same process for each group. Content analysis was chosen because it is a research method for making valid inferences from data to the contexts of their use \[[@B28]\]. The interview texts were read through multiple times, based on the author's empirical and theoretical preunderstanding of the professional area of the participating nurses and nurse managers. A structured categorization matrix of leadership styles was developed based on the primal leadership model \[[@B23]\] and the research of Vesterinen et al. \[[@B29]\]. When using a structured matrix of analysis, an item of the data that does not fit the categorization frame is used to create its own concept, based on the principles of inductive content analysis \[[@B30]\]. When the data of both nurses and superiors had been analysed, the results were compared. The categories and sub-themes were congruent, and therefore the results are presented together, albeit paying attention to differences and similarities in the perceptions of nurses and superiors.

The data analysis of the factors affected by leadership styles was inductive. All the data of nurses and supervisors were analysed together. This process included open coding, creating categories, and abstraction. A classification framework of the factors was formed inductively by defining categories and sub-themes. The criteria for allocating a unit to a category were formed by asking whether the unit was suitable for the category. The sub-themes were named using descriptive concepts and classified as "belonging" to a particular category. After that, the categories were given names \[[@B31]\].

2.4. Trustworthiness {#sec2.4}
--------------------

The trustworthiness of this study has been ensured by confirming its truth value, consistency, neutrality, and transferability \[[@B32]\]. When considering this study from the viewpoint of trustworthiness, there are some threats that should be taken into consideration. The researcher collected the data and performed the analysis alone, and the interpretation could have been affected by her professional history \[[@B33]\]. With interviews there is a risk that respondents try to please the interviewer by reporting things they assume s/he wants to hear. The researcher confirmed the truth value of the study by selecting participants by convenience sampling. The respondents' age distribution was wide and they worked in different units. Their perspectives and descriptions were broad and gave a diverse picture. The truth value of this study was also confirmed by analysing data as they emerged from the interviews. To ensure the trustworthiness of the study, quotes from interviews are included in the results. In view of consistency, the research process is described so that it can be repeated if necessary. This gives a possibility to understand the limitations of the process of data collection and analysis. To ensure neutrality in this study, interpretations were based on the original data. This is confirmed by citations from the interview data. In this study the sample was small, consisting of Finnish nurses and supervisors, and the results only reflect their perceptions of leadership styles. As a result, transferability of the results is limited.
However, the main objective of this study was not transferability of the research results but to enhance understanding of leadership styles and to use it for future studies.

2.5. Ethical Considerations {#sec2.5}
---------------------------

The data for this study were collected following approval from the administrations of the organizations. All participants were informed of the purpose of the study. They were told that their participation was voluntary and would be treated with confidentiality. Participants were asked to sign a form where they gave their consent to take part in the study.

3. Results and Discussion {#sec3}
=========================

3.1. Results {#sec3.1}
------------

Data analysis identified visionary, coaching, affiliate, democratic, commanding, and isolating leadership styles ([Figure 1](#fig1){ref-type="fig"}). Job satisfaction and commitment, as well as operation and development work, cooperation, and organizational climate in the work unit, were the factors affected by leadership styles.

### 3.1.1. Leadership Styles {#sec3.1.1}

Visionary Leadership Style

Supervisors were of the opinion that today, nurse managers use a more visionary leadership style than previously. In the past, many organizations lacked a vision of their own and had fewer possibilities to engage in development for the future. Even now, the skills of nurse managers to lead visionary development work varied. Both nurses and supervisors reported that it was characteristic of the visionary nurse manager to emphasize and discuss the vision and provide information to employees. When establishing their vision, some nurse managers provided guidelines for attaining the work unit's goals. These nurse managers had a systematic and purposeful leadership style, based on the knowledge of nursing science and practice. They generally worked in organizations with strategies and a vision. They had clear goals and rules on how to work. Nurse managers had so-called performance development discussions with every employee once a year. During the discussion, the nurse manager explained and revised the goals and discussed the purpose of the employee's work together with each employee. At the same time, they agreed on the goals of the employee for the next year. Visionary nurse managers were described as being assertive and persistent in their attempts to get the work units to achieve their goals. Nurse managers with more recent education were better equipped to search for information than nurse managers with older education. In addition, they often had a clear picture of the development needs in nursing practice.

Supervisors said that sometimes the fact that the organization did not have visions or direction for the future was an obstacle to a visionary leadership style. This was emphasized in cases where changes were introduced to the organization. Some nurse managers worked more on the basis of current operations. The managers were guided by various situations and there were no plans for the future.

"*"This manager had visions and we had long-term plans, but these plans often changed."*"

Nurses emphasized the importance of making the vision understandable by giving information about current issues of the work unit. The nurse manager's skill in providing information objectively and positively influenced the way the personnel reacted to topical issues. It was also important to explain the motivation behind decisions.
Coaching Leadership Style

Nurses as well as supervisors felt that nurse managers with a coaching leadership style took into consideration both the professional development of the employees and the delegation of work. The employees had resources and were seen as experts, and the nurse manager delegated tasks to them. The skills of employees to work independently varied. Some employees needed more coaching while others were satisfied with using their own professional skills independently. The success of delegation was affected by common instructions. These guided employees so that every employee knew his/her tasks. The employees worked and made decisions independently within the agreed bounds. The nurse managers had a significant role in supporting the employees to cope with problems at work. They were also responsible for coordinating and organizing the work of the unit as a whole.

"A nurse manager draws plans for nursing practice so that there are these areas of responsibility and everybody knows what is their area and they answer for that."

The nurse manager paid attention to employees' professional skills and encouraged them to study further. Both the personnel's competence and the leader's skills to lead influenced the development work in the unit. It was useful to clarify what kind of needs the work unit and employees had for additional education and to draw up an education plan. This plan was a meaningful basis for guiding the employees to necessary training. It was each employee's duty to share the new knowledge with other employees. The nurse manager encouraged the employees to collect information without prompting and to think independently. S/he also gave feedback about the professional development of the employee.

Affiliate Leadership Style

Nurses as well as supervisors described an affiliate leadership style. Nurse managers with an affiliate leadership style emphasized harmony and acceptance of difference. The employees and their best interests were the most important value to the nurse manager. They knew the rules and guidelines of the organization, but they considered the hopes and needs of employees in a flexible manner. The nurse manager had the skills to understand the feelings of another person and supported him/her by listening sensitively. Both s/he and his/her personnel trusted each other. Nurses reported that this encouraged the employees to discuss their personal concerns with the nurse manager.

"The way to act, pay attention to the employee, do you listen to her or not, that is the basic question."

On the other hand, supervisors reported that leading could be too solicitous, in a completely motherly way. The basis of the leadership style could be supporting the well-being and job satisfaction of the employees; this might be more important than the development of nursing practice. The purpose, a harmonious atmosphere without conflicts, can be an obstacle to planned changes.

"When there are big changes in the work unit, the nurse manager is present to the employees and listen [sic] to them. She tries to support and say [sic]: there is no problem and we manage of this."

"When a new employee begins to work, she leads in [sic] more paternalistic way and takes care of them all the time."

Both nurses and supervisors deemed it important that the nurse manager respects differences and personal characteristics of the employees, not forgetting employees' equality. A nurse manager who respects and accepts the employees as individuals was easy to approach.
On the other hand, the nurse manager's close friendship with employees could make it more difficult to examine the work unit and its functions objectively.

"There are managers who are very permissive and let the employees behave each in their own way; it is typical that new small managers rise beside them."

According to the findings, nurse managers sometimes behaved in a manner the employees felt to be unequal.

"It seems that if you are a strong-willed person, you are more likely to get what you want than a person who is adaptable."

Democratic Leadership Style

Both nurses and supervisors reported that it was typical for the democratic nurse manager to emphasize teamwork and commitment to work. All employees' participation was important to him/her. The nurse manager worked and discussed work together with the personnel. The employees had a possibility to voice their opinions and take part in problem-solving and decision-making. However, the nurse manager was ultimately expected to be the decision-maker.

"... and find and make the decision by thinking together and listening to opinions of the employees and discussing together; however, she is in some cases the final decision-maker."

There were different perceptions of the nurse managers' position in this leadership style. On the one hand, they were deemed to be responsible for the work unit and to make reasonable decisions after discussing with the employees. On the other hand, some supervisors felt that some nurse managers did not stand out as managers, but as team members. This meant that the nurse manager's own tasks could be of secondary importance.

"... she is working a lot with us and she has difficulties performing her own duties as a nurse manager."

Supervisors said that a nurse manager had an important role in cooperation and its development with the members of different professional groups and between work units. His/her skills to get the employees to commit to the common goals were deemed significant. Planning together with the personnel formed a basis for employees' commitment to work. That was essential for the development of the operation of the work unit.

"... leadership style influences operation as a whole, for example, how a manager gets employees to commit to common decisions."

Commanding Leadership Style

Both nurses and supervisors identified a commanding leadership style, characterized by an emphasis on compliance and control. Nurses as well as supervisors reported that it was important to nurse managers with a commanding leadership style to follow clear directions and advice, which they expected to get from others, for example, their own superiors. The employees were expected to obey these orders. The nurse manager could ask employees' opinions on how to find a solution to a problem in the work unit; usually s/he had already made a decision and it was not changed by the opinions of the employees. The nurse manager did not think it necessary to explain his/her decisions. The leadership style was described as authoritarian, hierarchical, and inflexible.

"Nurse managers who do not have the latest knowledge of leadership, they demand that there should be clear rules and laws for everything and there is no flexibility."

The commanding leadership style was more common in the 1970s and 1980s and was now considered traditional and out-of-date. It was, however, described as a convenient leadership style when employees are inexperienced or when there are big changes in the work unit.
Nurse managers were described as controlling the behaviour of the personnel, although observations of that kind have diminished considerably.

Isolating Leadership Style

Both nurses and supervisors described that nurse managers could isolate themselves from the work unit and retire to their own room, where they worked alone without active communication with the employees. In that case the employees felt that they had been left without a leader. Problematic situations like conflicts between employees often arose and they were difficult to repair. Neither the nurse manager nor the employees got the information they needed in their work.

"The nurse manager is quite isolated, she works alone in her room, we visit her when we have something to discuss with her."

### 3.1.2. Factors Affected by Leadership Style

Both nurses and supervisors reported that the nurse manager's leadership style affects employees' job satisfaction and commitment to work. It was felt that the nurse manager's fairness and trust in the employees promote their motivation and participation in work. It is important that the employees have a possibility to develop their professional skills. Leadership style contributes to job satisfaction when the nurse manager has the skills to prevent and solve conflicts.

All the participants reported that the nurse manager's skills to lead the work unit and motivate people affect the success of the work unit. Often s/he has to ask for adequate resources. It is important that there are enough trained employees and that the employees know and are in charge of their areas of responsibility. Supervisors remarked on the influence of leadership style on efficiency and economy, because the fluency of operation has an impact on how much money is spent. The nurse manager's influence in developing and changing operation is very important. It is important that the employees have a possibility to take part in development work as well. The nurse manager's leadership style can promote or hinder development in the work unit.

Supervisors emphasized that nurse managers have a significant role in cooperation within the work unit and outside it. Some nurse managers want to work only inside their own unit, while others take a larger view of the matter. The nurse manager's leadership style has an influence on how externally orientated the staff are and whether they have connections outside the work unit. A nurse manager can promote the continuity of patient care by cooperating with other units. S/he is a role model in how to treat nurse students.

Both nurses and supervisors felt that problems in the organizational climate, such as conflicts between the employees or dissatisfaction with the nurse leader, are reflected in patient care. The activity or passivity of the nurse manager affects the image of the work unit.

"If there is patient mistreatment, it is the nurse manager whose responsibility it is to decide how to react, for example, 'in our unit we treat patients well' or 'we do not react at all to this complaint'."

All in all, organizational climate, personnel's job satisfaction and commitment, the work unit's operation and development work, and cooperation influence the way patient care succeeds and how a patient experiences the care he/she gets. Leadership style has an effect on patient satisfaction and quality of care. If the nurse manager's basic value is good patient care, it influences in many ways his/her leadership style and how s/he organizes things in the work unit.
3.2. Discussion
---------------

The discussion is structured around the findings identified above. An isolating leadership style was identified as distinct from the leadership styles that Goleman [23] presented, whereas a pace-setting leadership style was not reported. The participants reported that nurse managers used many leadership styles, but normally they had one which they used more than others. Nurses who worked for leaders with resonant leadership styles were more satisfied with supervision and their jobs [34]. Furthermore, the visionary, coaching, affiliate, and democratic leadership styles seem to promote transformational leadership because they motivate and involve staff. That is why nurse managers should develop themselves in the use of these leadership styles.

Nurse managers' leadership style depends on many issues, such as the organization, the situation, and the employees. Reynolds and Rogers [35] argue that employees have variable levels of competence depending on the situation. That requires managers to adapt their leadership style. It is important that nurse managers have the skills to reflect on their own leadership style and receive feedback about it. That gives them tools to use different leadership styles in different situations.

Health care is meeting ever-increasing new challenges to which it has to react rapidly. It is important for health care organizations to make long-term plans and prepare for the future by paying attention to the needs of inhabitants and the resources needed. The vision is the basis of the goals of the work unit, too. Having knowledge of nursing science and practice gives nurse managers the tools to use a visionary leadership style and make plans for the future. Morjikian et al. [36] argued that communication of future plans, goals, and strategies is important between the nurse manager and the employees. It is important to give information about the vision and explain it regularly to the employees, because sometimes the employees forget the purpose of their work and their working style is not appropriate. When nurse managers work like this they are also carrying out transformational leadership [4, 5].

In the future, securing skilled employees will be a big challenge in health care. Vesterinen et al. [29] found that nurse managers with a coaching leadership style appreciated employees' professional skills and encouraged them to study further. The nurse manager's consideration of employees' professional and educational needs influenced nurse retention positively. Kenmore [37] argued that a coaching leadership style works when the employees are keen to develop and make use of possibilities to do so. Education gave employees tools to work and make decisions independently. Although the nurse manager organizes the work unit as a whole and is responsible for the development work in the unit, his/her support has a significant role in helping employees to cope with the problems they meet in their work. This is also an important part of nurse managers' role as emotionally intelligent leaders [10].

As a consequence of globalization, both employees and patients come from many different cultures. Their behaviour and their habits of expressing their needs vary. An affiliate leadership style with acceptance of difference could be suited to the multicultural work unit.
It is a challenge for the manager to listen sensitively and consider employees' personal needs individually and at the same time objectively, not forgetting employees' equality. This requires an emotionally intelligent nurse manager [10]. The basis of the leadership style could be supporting the well-being and job satisfaction of the employees. As Kenmore [37] argues, if a nurse manager is too concerned with creating harmony, it can lead to evasion of problems.

Because of the shortage of employees, nurses have many possibilities to choose and change their workplaces. A democratic leadership style promoted employees' commitment to work [29]. It is important that the employees can express their opinions and take part in decision-making. A commanding leadership style prevents the empowerment of the nurses, because they do not have possibilities to participate in work planning [38]. However, there are situations where a commanding leadership style is appropriate. The majority of Finnish nurses will retire in the next few years and there are many nurses with less work experience in the work units. Employees with less work experience may need clear directions, for example, in acute situations when a patient's life is in danger. According to Huston [1], essential nurse manager competencies for the future include the ability to create an organization culture that combines high-quality health care and patient/employee safety, and highly developed collaborative and team-building skills.

As a result of this study, an isolating leadership style was found: the nurse manager worked alone without active communication with the employees. The employees have to work without a leader, and that could cause anxiety for the employees who need support from their leader. A good question in this case is who is really leading the work unit. If the leader does not show consideration towards the employees, it could affect their health and well-being negatively [39]. Nurse managers need support to develop their leadership style. Leaders and their supervisors should be considered collectively to understand how leadership influences employee performance [40].

A nurse manager has an important role in leading the work unit as a whole. A work unit is seen as a reflection of the nurse manager. According to Rosengren et al. [41], nurses reported that nursing leadership was considered "being present and available in daily work," "facilitating professional acknowledgement," "supporting nursing practice" and "improving care both as a team and as individuals." A nurse manager with an emotionally intelligent leadership style creates a favourable work climate characterized by innovation, resilience, and change [42]. Nurse managers have to be flexible in the changes they have directly initiated or by which they have been indirectly affected [43]. Leadership style affects the organizational climate, the way information is given and communicated, and how questions of the day are discussed. The nurse manager creates the basis for how different opinions are handled and problems solved in the work unit.

The nurse manager's leadership style affects the personnel's job satisfaction and commitment. It is perceived that the nurse manager's trust in the employees promotes their motivation and participation in work. Way et al. [44] found that trust and job satisfaction have strong links with greater commitment and intent to stay on at work.
Nurse managers create the basic preconditions for operation and for development work. The leader encourages the employees to develop goals and plans to achieve them. In this way he/she influences the professional development of the personnel [45]. Their skills to build bonds and seek out mutually beneficial relationships affect cooperation in the work unit and around it. On the other hand, there is no single correct leadership style; the same result can be achieved in many ways. A manager who has the ability to reflect on his/her own behaviour, that is, who has high EI, is better able to regulate and adjust his/her leadership style with different employees in different situations. Leadership style influences patient care and its quality at least indirectly. A nurse manager has a significant role in using a leadership style that promotes good patient care.

4. Conclusions
==============

Nurse managers had many leadership styles, but normally they had one that they used more than the others. Nurse managers should consider their leadership style from the point of view of the employees, situational factors, and the goals of the organization. Leadership styles where employees are seen in a participative, active role have become more common. Together with health care organizations, nursing education programmes should include education of nurse managers to improve their self-reflection, through which they are better able to vary their leadership style.

Figure 1: Nurse managers' leadership styles in Finland. Summary of findings of the study.
{ "pile_set_name": "PubMed Central" }
Frequency of feeding, weight reduction and energy metabolism. A study was conducted to investigate the effect of feeding frequency on the rate and composition of weight loss and 24 h energy metabolism in moderately obese women on a 1000 kcal/day diet. During four consecutive weeks, fourteen female adults (age 20-58 years, BMI 25.4-34.9 kg/m2) restricted their food intake to 1000 kcal/day. Seven subjects consumed the diet in two meals daily (gorging pattern); the others consumed the diet in three to five meals (nibbling pattern). Body mass and body composition, obtained by deuterium dilution, were measured at the start of the experiment and after two and four weeks of dieting. Sleeping metabolic rate (SMR) was measured at the same time intervals using a respiration chamber. At the end of the experiment, 24 h energy expenditure (24 h EE) and diet-induced thermogenesis (DIT) were assessed by a 36 h stay in the respiration chamber. There was no significant effect of feeding frequency on the rate of weight loss, fat mass loss or fat-free mass loss. Furthermore, fat mass and fat-free mass contributed equally to weight loss in subjects on both the gorging and the nibbling diet. Feeding frequency had no significant effect on SMR after two or four weeks of dieting. The decrease in SMR after four weeks was significantly greater in subjects on the nibbling diet. 24 h EE and DIT were not significantly different between the two feeding regimens. (ABSTRACT TRUNCATED AT 250 WORDS)
{ "pile_set_name": "PubMed Abstracts" }
First observational evidence of 'Dark Matter Heating' discovered

Researchers have discovered the first observational evidence of 'dark matter heating' in distant galaxies, placing an important constraint on future dark matter models.

Even though astronomers and cosmologists have long been able to infer the presence of a mysterious substance that makes up the vast majority of matter in the universe (provisionally named dark matter), the characteristics of this substance have been slow to reveal themselves, mainly because it does not interact strongly with light. The latest observed characteristic seems to confirm a long-held theory that dark matter can be heated and moved around by star formation, according to research published in the journal Monthly Notices of the Royal Astronomical Society.

Because dark matter doesn't interact with light in the same way that everyday matter (baryonic matter) does, astronomers have been forced to use its gravitational influence to observe it. Using this method, researchers at the University of Surrey, Carnegie Mellon University and ETH Zürich set out to hunt for evidence of dark matter at the centres of nearby dwarf galaxies. Dwarf galaxies, the small, faint galaxies typically found orbiting larger galaxies like our own Milky Way, may hold clues that could help us to better understand the nature of dark matter.

Figure: Star formation in tiny dwarf galaxies can slowly "heat up" the dark matter, pushing it outwards. The left image shows the hydrogen gas density of a simulated dwarf galaxy, viewed from above. The right image shows the same for a real dwarf galaxy, IC 1613. In the simulation, repeated gas inflow and outflow causes the gravitational field strength at the centre of the dwarf to fluctuate. The dark matter responds to this by migrating out from the centre of the galaxy, an effect known as 'dark matter heating'. (J. Read et al.)

When stars form, strong stellar winds push baryonic matter like gas and dust out from the centre of the galaxy, leaving the dark matter behind. With less mass in the centre, the remaining dark matter feels a weaker gravitational pull. As a result, the dark matter gains energy and begins to migrate away from the centre, an effect known as 'dark matter heating'.

The team of astrophysicists measured the amount of dark matter at the centres of 16 dwarf galaxies with very different star formation histories. They found that galaxies that stopped forming stars long ago had higher dark matter densities at their centres than those that are still forming stars today. This supports the theory that the older galaxies experienced less dark matter heating.

Figure: The Draco dwarf galaxy, 270,000 light-years from Earth. Just one of the dwarf galaxies studied by the researchers.

Professor Justin Read, lead author of the study and Head of the Department of Physics at the University of Surrey, says: "We found a truly remarkable relationship between the amount of dark matter at the centres of these tiny dwarfs, and the amount of star formation they have experienced over their lives. The dark matter at the centres of the star-forming dwarfs appears to have been 'heated up' and pushed out."

What is significant about this new finding is that it places a necessary constraint on dark matter models: a viable model must allow the central dark matter density of a dwarf galaxy to vary in negative correlation with the galaxy's rate of star formation.
Thus galaxies with greater star populations should also show lower central dark matter densities.

Professor Matthew Walker, a co-author from Carnegie Mellon University, adds: "This study may be the 'smoking gun' evidence that takes us a step closer to understanding what dark matter is. Our finding that it can be heated up and moved around helps to motivate searches for a dark matter particle."

The team hope to expand on this work by measuring the central dark matter density in a larger sample of dwarfs, pushing to even fainter galaxies, and testing a wider range of dark matter models.

Original Study: https://academic.oup.com/mnras/advance-article/doi/10.1093/mnras/sty3404/5265085
{ "pile_set_name": "OpenWebText2" }
Monday, October 13, 2014

Fordham's First Win over Penn is a Record Breaker (Photos by Gary Quintal)

By Howard Goldin

BRONX, NEW YORK, OCTOBER 13- The sixth meeting between the Fordham Rams (6-1, 2-0) and the University of Pennsylvania Quakers (0-4, 0-1) took place at Jack Coffey Field in the Bronx on October 11. Saturday's game was Fordham's first victory, 60-22, over the Quakers. The two teams seem to be heading in different directions. The win for Fordham was its fifth straight and 11th consecutive home win, and the loss for Penn was its eighth straight. The 60 points scored by the Rams were the most the Quakers had surrendered in a single game since their 61-0 defeat by #1-ranked Army on November 17, 1945.

The visitors reached the scoreboard first as Penn quarterback Alek Torgerson threw a 33-yard touchdown pass to Ryan O'Malley at 10:01. To the credit of the Fordham defense, which intercepted two passes and forced two fumbles, the first Penn touchdown was also its last. The last 16 points scored by the Quakers came off the foot of Jimmy Gammil. The junior kicked the point after touchdown and five field goals.

Fordham scored twice on the ground in the first quarter. Harrisburg, Pennsylvania native Chase Edmunds carried the ball three yards for Fordham's first points. His 11 touchdowns this season, in only six games, have been topped only five times in Fordham history in a single (full) season. He rushed for 101 yards, the sixth game in which he has rushed for triple figures. He is the first Fordham freshman to have a season rushing total above 1,000 yards (1,011). Fordham head coach Joe Moorhead, in his third successful season in the Bronx, spoke very highly of the sensational freshman's work ethic, preparation, and effort: "He's an old soul. Everything he's gotten, he's earned. It's not a surprise the success he's had." Quarterback Mike Nebrich, a senior, has also been impressed by the freshman running back: "He's been huge. It [his rushing] opens up the defense. You can lead as a freshman."

The second Fordham first-quarter touchdown came on a recovered fumble and eight-yard run by senior defenseman DeAndre Slate. Fordham's offensive onslaught during the remainder of the game was achieved through the air under the leadership and outstanding ability of quarterback Nebrich. The senior from Virginia spoke of how he sees his responsibility during each contest: "My job is to get us going anytime we start sputtering." On Saturday, he completed 36 of 47 passes for a Fordham-record 566 yards, which broke the mark of 524 yards he set in 2013. Six of the 36 completions were for touchdowns, tying a Fordham game mark. Five different receivers caught touchdown tosses from Nebrich. Tubucky Jones Jr., like Nebrich a University of Connecticut transfer, caught two: one of 37 yards and one of 47 yards. Jones caught 10 passes for 203 yards, the eighth-highest total in Fordham history. Sam Ajala received eight passes for 199 yards, the ninth-highest total.

The 730 yards gained by the Fordham offense were a single-game school record and the highest total by an NCAA FCS team this season. According to Moorhead, this success stems from good practice habits and game preparation. The coach also praised his players as being good students and fine human beings as well as good athletes. His own college experience at Fordham has obviously imbued in him the knowledge of what a student-athlete should be.
After Fordham's bye week, the team will travel to Lehigh for its next contest on October 25. The Rams will return to Jack Coffey Field on November 1 to host Colgate.
{ "pile_set_name": "Pile-CC" }
Non-equilibrium x-ray spectroscopy using direct quantum dynamics. Advances in experimental methodology aligned with technological developments, such as 3rd generation light sources, X-ray Free Electron Lasers, and High Harmonic Generation, have led to a paradigm shift in the capability of X-ray spectroscopy to deliver high temporal and spectral resolution on an extremely broad range of samples in a wide array of different environments. Importantly, the complex nature and high information content of this class of techniques mean that detailed theoretical studies are often essential to provide a firm link between the spectroscopic observables and the underlying molecular structure and dynamics. In this paper, we present approaches for simulating dynamical processes in X-ray spectroscopy based upon on-the-fly quantum dynamics with a Gaussian basis set. We show that it is possible to provide a fully quantum description of X-ray spectra without the need of precomputing highly multidimensional potential energy surfaces. It is applied to study two different dynamical situations, namely, the core-hole lifetime dynamics of the water monomer and the dissociation of CF4+ recently studied using pump-probe X-ray spectroscopy. Our results compare favourably to previous experiments, while reducing the computational effort, providing the scope to apply them to larger systems.
{ "pile_set_name": "PubMed Abstracts" }
Q: highcharts redraw and reflow not working

I am trying to build a dynamic page that has any number between 1-4 graphs on it that can be added or removed as needed, but I have run into a huge problem: I can't get the graph to resize after resizing the containing div. For example, if I add a graph on the page it will be width 800; then, after clicking a button to add another graph, the two should resize to 400 apiece, but I cannot make it happen. As a very simplistic model I have the following:

    $(function () {
        $('#container').highcharts({
            chart: {
                type: 'line',
                width: 300
            },
            title: {
                text: 'Width is set to 300px'
            },
            xAxis: {
                categories: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                             'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
            },
            series: [{
                data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0,
                       135.6, 148.5, 216.4, 194.1, 95.6, 54.4]
            }]
        });

        $('#resize').click(function () {
            $('#container').attr('style', 'width: 800px');
            $('#container').highcharts().reflow();
            console.log($('#container').width());
        });
    });

Now when that is run it will log 800 to the dev tools window in Chrome, but the graph will not resize. I have tried both redraw() and reflow() as suggested in the Highcharts documentation. I even set up a really quick demo on jsfiddle here: http://jsfiddle.net/7cbsV/. Can anyone please help me? It is kind of important. Thank you in advance for the help.

A: How about using simple chart.setSize(w, h)? See docs.

    $("#container").highcharts().setSize(800, 400); // width and height in pixels
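A likely culprit, offered as a hedged suggestion rather than a confirmed fix: Highcharts only resizes a chart to follow its container on reflow() when the chart's dimensions are not set explicitly, and the configuration above pins chart.width to 300. Dropping the fixed width and letting the chart track #container should make reflow() behave. A minimal sketch reusing the question's element ids (assumes jQuery and Highcharts are already loaded on the page):

    $(function () {
        $('#container').highcharts({
            // No fixed width here: the chart sizes itself to #container,
            // so a later reflow() can pick up changes to the div.
            chart: { type: 'line' },
            title: { text: 'Width follows the container' },
            series: [{ data: [29.9, 71.5, 106.4, 129.2] }]
        });

        $('#resize').click(function () {
            $('#container').css('width', '800px'); // resize the div first...
            $('#container').highcharts().reflow(); // ...then ask the chart to re-measure it
        });
    });

setSize(), as in the answer above, sidesteps the question entirely by assigning explicit dimensions, which is the simpler route when you already know the target width.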
{ "pile_set_name": "StackExchange" }
Ratno Dolne Ratno Dolne () is a village in the administrative district of Gmina Radków, within Kłodzko County, Lower Silesian Voivodeship, in south-western Poland. It lies approximately east of Radków, north-west of Kłodzko, and south-west of the regional capital Wrocław. References Ratno Dolne
{ "pile_set_name": "Wikipedia (en)" }
I was talking to a Jehovah's Witness the other day and found out their idea of heaven is the same utopia that liberals are trying to force us into. There is no conflict in Jehovah's afterlife, just a bunch of twentysomethings having picnics with lions and bears, and maybe a dinosaur or two walking around. It's Earth without any of the bad stuff. That sounds like hell. I asked him if there was boxing in this magical place. He thought for a second and said: "Only if they have no animosity in their hearts."

What's so bad about animosity? That's how you win. Professional Muay Thai fighter Chris "Crom" Romulo once told me he wins fights by saying to himself, "That guy is trying to take food out of my kid's mouth." And fundamentally that's what his opponent is doing. The more fights Chris loses, the less he can provide for his family. He needs visceral hatred to survive and it's really exciting to watch.

Vices, like greed and revenge, drive men to success. As Bernard Mandeville said in his 1705 poem The Grumbling Hive:

"Luxury
Employ'd a Million of the Poor,
And odious Pride a Million more;
Envy itself, and Vanity,
Were Ministers of Industry."

We need the bad stuff to live.

Take sexism, for example. I see women as sex objects who are much weaker than men and are better off at home with the kids. My attitude is unpopular, but lack of sexism is not just making women miserable, it's ending us. Telling women they're not sex objects and forcing them into the workforce has made them infertile. White Americans have stopped having babies and raising families, which is why we're about to become a minority in our own country.

Killing sexism also leaves women unsafe. When you tell girls they're as tough as men, they go out and get wasted with no escort to make sure they get home safe. They strut down the street in the middle of the night through the bad part of town, almost daring criminals to attack them. When a black thug pulled a gun on Nicole duFresne in NYC in 2005, she said, "What are you going to do now, shoot us?" So he shot her. And her beta male boyfriend had an African funeral ceremony for her to promote peace and tranquility. How wildly unnatural.

This rejection of all things normal has even ruined sex: you're supposed to ask permission for every move. "Can I kiss you here?" mewls the new "feminism for bros." "How about here?" Women may find this appealing on paper, but I've had sex with women, and hesitation doesn't turn them on. If I were explaining sex to an alien I would tell him to imagine a mouse being eaten by a snake. It's about a helpless wee thing being dominated by a cruel monster, and both genders love it.

Girls don't rule the world. Evil does. Go talk to a scientist or an entrepreneur about what gets him out of bed in the morning. Yes, curing cancer and paying the mortgage are incentives, but they don't hold a candle to hate. Scientists are constantly at each other's throats, trying to shoot down a hypothesis or get there before the other guy. Scientists don't applaud when someone else makes a discovery. They plot to beat the bastard next time.

Judd Apatow's entire career is powered by revenge. When NBC canceled his show Freaks and Geeks he was furious, and the hatred he felt for the exec responsible, Garth Ancier, drove him to be one of the biggest players in Hollywood.
Not only has he produced dozens of hit movies, he dragged the cast of his canceled show with him and now they're all stars, who hate Ancier too. Seth Rogen confronted him at a party recently, still burning almost 15 years after the decision.

Bullying is good, too. Gay loudmouth Dan Savage likes to complain about how hard it was to be different when he was young, but those rough years drove him to the success he has today. He's one of the most well-paid bullies in the country. Getting picked on prepares kids for the real world. When I go into a work meeting, it's not that different from stepping into the ring. People want to test your mettle. I've noticed a direct correlation between how much time I spend boxing and how much money I make.

We shouldn't be protecting kids from conflict. We should be training them to enjoy it. But the millennials I've worked with were raised to be incapable of handling any kind of confrontation. I don't mean they don't enjoy it. I mean, as they would put it, "They literally can't…" When I pointed out a major error a 25-year-old made on a project this week, he started hyperventilating and another employee had to pretend he'd done a good job just to keep the guy from having a nervous breakdown. (I'm never working with him again.)
{ "pile_set_name": "OpenWebText2" }
Race against the clock in 1980s England to catch the perpetrators of a terrorist attack

You won't need to do any investigating for this news, as we're super excited to announce that The Occupation will be coming to PlayStation 4 on 9th October to PlayStation Store and at retail thanks to a retail partnership with Sold Out!

The extremely talented developers over at White Paper Games, who previously developed Ether One, are the ones behind The Occupation. For those not in the know, The Occupation is a first-person, fixed-time, investigative thriller set in North West England on 24th October, 1987; a time of '80s British pop, grand architecture and political unrest. An explosion has triggered a controversial act to be rushed into law, threatening to erode the civil liberties of the population.

You are tasked with investigating and questioning people on their actions from a tumultuous night which resulted in the loss of many lives. Each person has a different account of the night's events and you must use the tools at your disposal to get the results you need for your investigation.

The entire game plays out in real time, and you must make decisions based on the evidence you uncover. In a non-linear world designed with multiple ways to approach each situation, you'll need to decide whether to take the most direct route and risk getting caught, or to plan a more careful approach and let the time tick away. Each person in the world has a routine to follow, so you can plan your approach. Be careful, though, as an unexpected toilet or smoke break may foil even the best-laid plans.

The world of The Occupation is highly interactive and tactile. Use this to your advantage by triggering security alerts to manipulate characters and draw them away from your location. But don't lose track of time: use your state-of-the-art digital watch to set alarms and reminders so you don't miss your opportunity to cross-reference the evidence you've found in your interviews and uncover the truth about what happened that night and the true effects of the act.

The developers at White Paper Games and we at Humble Bundle can't wait to bring this investigative thriller to PlayStation 4 this October.
{ "pile_set_name": "OpenWebText2" }
Successful treatment of radiation-induced breast ulcer with hyperbaric oxygen. The purpose of this report was to investigate the efficacy of hyperbaric oxygen treatment in the management of a persisting radiation-induced ulcer following standard breast irradiation. A 57-year-old Caucasian patient was referred following partial mastectomy and axillary node clearance for a T2N0 grade 3 infiltrating ductal carcinoma of the left breast. She received 45 Gy in 25 fractions at 1.8 Gy per fraction to the isocentre to the whole breast using tangential fields and 4 MV photons, in conjunction with intravenous chemotherapy (cyclophosphamide, methotrexate and 5-fluorouracil). Treatment was interrupted for 3.5 weeks because of a grade 4 skin and subcutaneous reaction. Treatment resumed to the tumour bed alone. Chemotherapy was abandoned. The tumour bed received 14 Gy in 7 fractions at 2 Gy per fraction prescribed to the 100% using 10 MeV electrons and a direct field, completing treatment on 7 July 1998. The radiation induced a painful 8 x 4 cm ulcer which persisted in spite of rigorous treatment including Gentian Violet, Silvazine Cream, Duoderm and antibiotics. The patient received 30 hyperbaric treatments, six times a week, completing treatment on 15 December 1998. The patient required insertion of bilateral ear grommets under local anaesthetic. The breast ulcer showed a response to treatment, with early healing after 7-8 days and clinical evidence of re-epithelization. At completion of 30 treatments the patient was left with a small, shallow, faintly discharging multilocular 3-4 cm ulcer. The ulcer had completely healed by 14 January 1999. The patient has been symptom-free since completion of treatment. This report highlights the efficacy of hyperbaric oxygen therapy in the management of persisting radiation-induced ulcers.
{ "pile_set_name": "PubMed Abstracts" }
Blunted increase in plasma adenosine levels following dipyridamole stress in dilated cardiomyopathy patients. Heart failure is characterized by chronically increased adenosine levels, which are thought to express a protective anti-heart failure activation of the adenosinergic system. The aim of the study was to assess whether the activation of the adenosinergic system in idiopathic dilated cardiomyopathy (IDC) can be mirrored by a blunted increase in plasma adenosine concentration following dipyridamole stress, which accumulates endogenous adenosine. Two groups were studied: IDC patients (n = 19, seven women, mean age 60 +/- 12 years) with angiographically confirmed normal coronary arteries and left ventricular ejection fraction <35%; and normal controls (n = 15, six women, mean age 68 +/- 5 years). Plasma adenosine was assessed by high-performance liquid chromatography methods in blood samples from a peripheral vein at baseline and 12 min after dipyridamole infusion (0.84 mg kg-1 in 10 min). At baseline, IDC patients showed higher plasma adenosine levels than controls (276 +/- 27 nmol L-1 vs. 208 +/- 48 nmol L-1, P < 0.001). Following dipyridamole, IDC patients showed lower plasma adenosine levels than controls (322 +/- 56 nmol L-1 vs. 732 +/- 250 nmol L-1, P < 0.001). The dipyridamole-induced percentage increase in plasma adenosine over baseline was 17% in IDC and 251% in controls (P < 0.001). By individual patient analysis, 18 IDC patients exceeded (over the upper limit) the 95% confidence limits for normal plasma adenosine levels at baseline, and all 19 exceeded (below the lower limit) the 95% confidence limits for postdipyridamole plasma adenosine levels found in normal subjects. Patients with IDC have abnormally high baseline adenosine levels and, even more strikingly, a blunted plasma adenosine increase following dipyridamole infusion. This is consistent with a chronic activation of the adenosinergic system present in IDC, which can be measured noninvasively in the clinical theatre.
{ "pile_set_name": "PubMed Abstracts" }
Ah, those old days... ...that never come back...

Rocky (workname) - made by Sim_Piko/Keepon in Blender3D
Rocky is based on Rockford's Atari800 look from Peter Liepa's "Boulder Dash"(r)
Diamond (model) - ripped (kinda) from "Boulder Rocks! 3D" by Zombie Mastah (good remake - you should try it someday)
Firefly (model) - made by Sim_Piko/Keepon in Blender3D
character_lighting_map and Source Filmmaker made by and (c) Valve

SIDE NOTE: still needs proper bone weighting
SIDE NOTE2: find good way to make proper bone weighting/skinning in Blender3D
Afterupload SIDE NOTE: Valve got some strange compressions breaking smooooth background.

*** Click on image for 1920x1080 resolution ***

Created by Keepon, posted 12 Aug, 2013 @ 2:21pm
{ "pile_set_name": "OpenWebText2" }
This invention pertains to speed regulators for direct current (DC) motors and, more particularly, is concerned with open loop speed regulators for DC motors. DC motors find numerous applications because of their intrinsic variable speed characteristics and capabilities, which offer very high speeds and small size. The rotating member of a DC motor is named the armature and the stationary member is named the field. The armature has windings and the field can have either windings or permanent magnets. Some applications have a need for constant speed regardless of torque. A general statement about DC motors is that with an increase in torque, speed will drop and current will increase, assuming a constant input voltage. The amount each parameter varies depends on the type of motor. For a motor with the armature and field winding connected in series, the drop in speed will be more pronounced than the increase in current. For motors with shunt connected windings or permanent magnet fields the opposite is true: the speed will be more nearly constant while there is a marked increase in current. There will be some drop in speed, however, and this amount may be undesirable in critical applications. For this reason, a number of constant speed controls have been devised over the years. Speed regulating systems may be classified as either closed loop or open loop. Closed loop systems derive a signal from the actual speed of the motor, with a tachometer, for example, and use the signal in a feedback loop. An open loop system does not measure speed directly but measures some other parameter. In some open loop systems the measured parameter is current. A well known example of an open loop motor regulating system includes a resistor in series with the input of the motor. The voltage across the resistor corresponds to motor current and is directed to a control circuit. The resistor voltage influences a control circuit which supplies the input voltage to the motor. A change in resistor voltage indicates a change in torque and indirectly indicates a change in speed. In response to the resistor voltage the control circuit adjusts the voltage to the motor, thereby supplying the right amount of power required to maintain a constant speed over variations in torque. The series resistor causes I²R power losses, particularly during high torque conditions, because current is high. These losses cause heat build-up and a need for a larger power supply capability. It will be seen that a speed regulator according to the present invention does not require a resistor in series with the motor and is thereby more efficient.
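To make the compensation principle concrete, here is a worked sketch using the standard permanent-magnet DC motor model; it illustrates the open loop scheme described above and is not a formula taken from the patent itself. At steady state the applied voltage divides between the armature-resistance drop and the back EMF:

$$V = I R_a + K_e \omega \quad\Longrightarrow\quad \omega = \frac{V - I R_a}{K_e}$$

so at a fixed input voltage the speed droops by $I R_a / K_e$ as the load current $I$ rises with torque. An open loop regulator that senses $I$ can cancel most of this droop by boosting its output voltage in proportion to the measured current:

$$V = V_{\mathrm{ref}} + I R_{\mathrm{comp}}, \qquad R_{\mathrm{comp}} \approx R_a \quad\Longrightarrow\quad \omega \approx \frac{V_{\mathrm{ref}}}{K_e}$$

which holds speed roughly constant over variations in torque. The cost of the series sensing resistor is the I²R loss mentioned above: with a hypothetical sense resistance of 0.5 ohm and a high-torque current of 5 amperes, the resistor dissipates 5² × 0.5 = 12.5 watts as heat, which is the loss the present invention seeks to avoid.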
{ "pile_set_name": "USPTO Backgrounds" }
If you have very poor credit, the cheapest car insurance company is Nationwide. Here, your premium will be more than $435 less than the group average. Compared to the highest credit level, drivers with bad credit pay nearly $1,450 more per year for auto insurance. If you pay off a loan or otherwise improve your credit score, you should shop around for car insurance, as your premium should change. Just another reason to keep your score up!

You'll notice that none of that liability coverage pays for your car or injuries, nor for any injuries your passengers sustain if you cause a wreck. This is why many people, particularly those whose car isn't yet paid off, want "full coverage" car insurance. This isn't actually a type of coverage, but instead typically refers to policies that include liability coverage plus comprehensive and collision coverages.

Auto insurance is required by law for drivers in most states. Drivers who own and regularly drive a car should carry insurance to cover the risk of damage to their car, personal injury, and liability for harm to other people and property. Otherwise, repair and medical costs, particularly when you're liable for an accident, can be very expensive.

Liability auto insurance protects you from that worst-case scenario by providing a cushion between your assets and the amount you're on the hook for. For this reason, choosing the right auto liability limits is the most important part of your car insurance quote comparison. NerdWallet typically recommends having at least as much liability coverage as your net worth.

The best car insurance companies have a few things in common: They have straightforward shopping experiences, take good care of policyholders after a crash and treat their customers with respect and courtesy. That means only insurers with high customer satisfaction scores and relatively few complaints to insurance commissioners make it to the top of our list of the best auto insurance companies.

The key difference between collision and comprehensive coverage is the extent of the driver's control over the event. As we have stated before, collision insurance typically covers events within a motorist's control, or when another vehicle collides with your car. Comprehensive coverage generally falls under "acts of God or nature" that are typically out of your control when driving. These can include such events as a spooked deer, a heavy hailstorm, or a carjacking.

The life insurance market has shrunk by around 4% over the last ten years. Interestingly, the market shrank after the recession, then grew about 51% between 2010 and 2015, though it has since begun to drop in size again. In 2017, life insurance premiums exceeded the amounts spent in four of the past five years, but still fell short of the levels seen in 2008 and 2015.

Know when to cut coverage. Don't strip away coverage just for the sake of cheaper insurance. You'll need full coverage car insurance to satisfy the terms of an auto loan, and you'll want it as long as your car would be a financial burden to replace. But for older cars, you can drop comprehensive and collision coverage, which only pay out up to your car's current value, minus the deductible.

The best companies will also have several supplemental coverage options, or endorsements, that you can add to your homeowners policy.
Endorsements can vary: some provide higher coverage limits for certain types of personal property like jewelry or fine furs, while others provide supplemental coverage for risks, like water backups, floods, or earthquakes, that are not covered by home insurance.

If you live in an area with unusual state regulations or heightened risk of weather-related claims, shopping car insurance options will be vital. Not every car insurance company offers policies in every state, which can make pricing less competitive. If you live in storm-prone states like Louisiana or Florida, you might find it harder to get a competitive rate.

To calculate the added cost of purchasing comprehensive and/or collision coverage, we looked at annual insurance quotes for a 30-year-old male from New York across four different insurance companies and the ten best-selling vehicles in the US. We looked at the range of rates you could pay, from basic liability to policy plans with comprehensive and collision coverage. Collision typically costs more than comprehensive, although some companies require you to carry both rather than just one. Comparing quotes across at least three companies can get you lower car insurance rates.
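As a rough worked example of the "know when to cut coverage" advice above (all numbers here are hypothetical, not quotes from any insurer): suppose an older car is worth $4,000 on the market and your collision deductible is $500. The most a collision claim could ever pay is $4,000 - $500 = $3,500. If comprehensive and collision together add, say, $600 a year to your premium, you are paying roughly 17% of the maximum possible payout every year, so dropping those coverages becomes increasingly reasonable as the car's value falls.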
{ "pile_set_name": "Pile-CC" }