text (string, lengths 3–744k, some entries null) | summary (string, lengths 24–154k)
---|---
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Homefront Heroes Tax Relief Act of
2009''.
SEC. 2. CREDIT FOR CARE PACKAGES FOR MEMBERS OF ARMED FORCES IN A
COMBAT ZONE.
(a) In General.--Subpart A of part IV of subchapter A of chapter 1
of the Internal Revenue Code of 1986 (relating to nonrefundable
personal credits) is amended by inserting after section 25D the
following new section:
``SEC. 25E. CARE PACKAGES FOR MEMBERS OF ARMED FORCES IN A COMBAT ZONE.
``(a) In General.--In the case of an individual, there shall be
allowed as a credit against the tax imposed by this chapter for the
taxable year an amount equal to the qualified care package amount.
``(b) Limitation.--The amount allowed as a credit under subsection
(a) for the taxable year shall not exceed $500.
``(c) Qualified Care Package Amount.--For purposes of subsection
(a), the term `qualified care package amount' means the amount paid or
incurred to provide a care package for a member of the Armed Forces of
the United States serving in a combat zone (as defined in section
112(c)(2)) through an organization--
``(1) described in section 501(c)(3) and exempt from tax
under section 501(a),
``(2) organized for a purpose which includes supporting
members of the Armed Forces of the United States, and
``(3) listed on a website maintained by the Secretary of
Defense.
``(d) Special Rules.--
``(1) Related persons.--No amount shall be taken into
account under subsection (a) for a care package provided for a
related person. For purposes of the preceding sentence, the
term `related person' means a person who bears a relationship
to the taxpayer which would result in a disallowance of losses
under section 267 or 707(b).
``(2) Receipts.--No amount shall be taken into account
under subsection (a) with respect to which the taxpayer has not
submitted such information as the Secretary determines
necessary, including information relating to receipts for
contents and shipping of care packages.''.
(b) Clerical Amendments.--The table of sections for such part is
amended by inserting after the item relating to section 25D the
following new item:
``Sec. 25E. Care packages for members of Armed Forces in a combat
zone.''.
(c) Effective Date.--The amendments made by this section shall
apply to taxable years beginning after December 31, 2008.
SEC. 3. CREDIT FOR VOLUNTEER SERVICE TO MILITARY FAMILIES THROUGH
AMERICA SUPPORTS YOU PROGRAM.
(a) In General.--Subpart A of part IV of subchapter A of chapter 1
of the Internal Revenue Code of 1986 (relating to nonrefundable
personal credits), as amended by section 2, is amended by inserting
after section 25E the following new section:
``SEC. 25F. VOLUNTEER SERVICE TO MILITARY FAMILIES THROUGH AMERICA
SUPPORTS YOU PROGRAM.
``(a) Allowance of Credit.--In the case of an individual, there
shall be allowed as a credit against the tax imposed by this chapter
for the taxable year an amount equal to the sum of the qualified
service amounts with respect to qualified service performed during the
taxable year by the taxpayer, his spouse, and his dependents (as
defined in section 152, determined without regard to subsections
(b)(1), (b)(2), and (d)(1)(B) thereof).
``(b) Limitation.--The amount allowed as a credit under subsection
(a) for a taxable year shall not exceed $500.
``(c) Qualified Service Amount.--For purposes of subsection (a),
the term `qualified service amount' means, with respect to an hour (or
portion thereof) of qualified service, the minimum wage required under
section 6(a) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(a))
as in effect on the date of such service.
``(d) Qualified Service.--For purposes of subsection (a)--
``(1) In general.--The term `qualified service' means
service meeting the requirements of paragraph (2) which is
provided through an organization--
``(A) described in section 501(c)(3) and exempt
from tax under section 501(a), and
``(B) which is approved by the Secretary of Defense
to participate in the America Supports You program of
the Department of Defense.
``(2) Service requirements.--Service meets the requirements
of this paragraph if the service--
``(A) is provided on a volunteer basis,
``(B) is for not less than 10 hours per week in not
less than 4 weeks of the taxable year, and
``(C) is directly involved with the mission of the
America Supports You program of helping military
families.
``(3) Certification requirement.--Service shall not be
taken into account under this section unless the organization
through which such service is performed certifies the date of
such service and that such service meets the requirements of
paragraph (2).
``(e) Inflation Adjustment.--
``(1) In general.--In the case of any taxable year
beginning in a calendar year after 2009, the $500 amount in
subsection (b) shall be increased by such amount multiplied by
the percentage change (if any) from the minimum wage on January
1, 2009, to the minimum wage on the last day of the preceding
taxable year.
``(2) Minimum wage.--For purposes of paragraph (1), the
term `minimum wage' means the minimum wage required under
section 6(a) of the Fair Labor Standards Act of 1938 (29 U.S.C.
206(a)).
``(3) Rounding.--If any amount as adjusted under paragraph
(1) is not a multiple of $10, such amount shall be rounded to
the nearest multiple of $10.
``(f) Regulations.--The Secretary shall prescribe such regulations
as may be necessary or appropriate to carry out this section.''.
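For readers tracing the arithmetic in new section 25F: the credit is the hours of qualified service multiplied by the minimum wage in effect on the date of service, capped at $500, with the cap indexed under subsection (e) after 2009 and rounded to the nearest $10. The Python sketch below is illustrative only; the function names and sample wage figures are hypothetical, and the statutory text controls.

```python
# Illustrative sketch only (not part of the bill): a plain-arithmetic reading of the
# section 25F credit and its subsection (e) inflation adjustment. Function names and
# the sample wage figures below are hypothetical.

def qualified_service_credit(hours_of_service: float, minimum_wage: float,
                             cap: float = 500.0) -> float:
    """Subsections (a)-(c): hours of qualified service times the federal minimum
    wage in effect on the date of service, limited by the subsection (b) cap."""
    return min(hours_of_service * minimum_wage, cap)

def adjusted_cap(base_cap: float, wage_jan_1_2009: float,
                 wage_end_prior_year: float) -> float:
    """Subsection (e): for taxable years beginning after 2009, increase the $500 cap
    by the percentage change in the minimum wage since January 1, 2009, then round
    the result to the nearest multiple of $10 (subsection (e)(3))."""
    pct_change = (wage_end_prior_year - wage_jan_1_2009) / wage_jan_1_2009
    # "(if any)" is read here as no adjustment when the wage has not risen; that
    # reading is an assumption, not something the bill states explicitly.
    raised = base_cap * (1.0 + max(pct_change, 0.0))
    return round(raised / 10.0) * 10.0

# Hypothetical figures: a base-date wage of $7.25 and a later wage of $8.00 would
# raise the cap to 500 * (1 + 0.75/7.25) ~= 551.7, rounded to $550.
print(adjusted_cap(500.0, 7.25, 8.00))      # 550.0
print(qualified_service_credit(40, 8.00))   # 320.0; 70 hours would hit the cap
```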
(b) Clerical Amendment.--The table of sections for subpart A of
part IV of subchapter A of chapter 1 of such Code, as so amended, is
amended by inserting after the item relating to section 25E the
following new item:
``Sec. 25F. Volunteer service to military families through America
Supports You program.''.
(c) Effective Date.--The amendments made by this section shall
apply to service performed in taxable years beginning after December
31, 2008. | Homefront Heroes Tax Relief Act of 2009 - Amends the Internal Revenue Code to allow tax credits for: (1) sending care packages to members of the Armed Forces serving in a combat zone; and (2) providing volunteer service to military families through the America Supports You program of the Department of Defense (DOD). |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Rural and Urban Health Care Act of
2001''.
SEC. 2. REQUIREMENTS FOR ADMISSION OF H-1C NONIMMIGRANT NURSES.
(a) In General.--Section 212(m) of the Immigration and Nationality
Act (8 U.S.C. 1182(m)) is amended to read as follows:
``(m)(1) The qualifications referred to in section
101(a)(15)(H)(i)(c), with respect to an alien who is coming to the
United States to perform nursing services for a facility, are that the
alien--
``(A) has obtained a full and unrestricted license to
practice professional nursing in the country where the alien
obtained nursing education or has received nursing education in
the United States or Canada;
``(B) has passed the examination given by the Commission on
Graduates of Foreign Nursing Schools or another appropriate
examination (recognized in regulations promulgated in
consultation with the Secretary of Health and Human Services)
or has a full and unrestricted license under State law to
practice professional nursing in the State of intended
employment; and
``(C) is fully qualified and eligible under the laws
(including such temporary or interim licensing requirements
which authorize the nurse to be employed) governing the place
of intended employment to engage in the practice of
professional nursing as a registered nurse immediately upon
admission to the United States and is authorized under such
laws to be employed by the facility, except that, in the case
of an alien who is otherwise eligible to take the State
licensure examination after entering into the United States,
but who has not passed such examination before entering--
``(i) the alien may take such examination not more
than twice after entering, but the alien's status as a
nonimmigrant under section 101(a)(15)(H)(i)(c) shall
terminate, and the alien shall be required to depart
the United States, if the alien does not pass such
examination either the first or second time; and
``(ii) the failure of the alien to have obtained a
social security account number shall not be deemed a
ground of ineligibility to take such examination.
``(2)(A) The attestation referred to in section
101(a)(15)(H)(i)(c), with respect to a facility for which an alien will
perform services, is an attestation as to the following:
``(i) The employment of the alien will not adversely affect
the wages and working conditions of registered nurses similarly
employed by the facility.
``(ii) The alien will be paid the wage rate for registered
nurses similarly employed by the facility.
``(iii) There is not a strike or lockout in the course of a
labor dispute, the facility did not lay off and will not lay
off a registered staff nurse employed by the facility within
the period beginning 90 days before and ending 90 days after
the date of filing of any visa petition, and the employment of
such an alien is not intended or designed to influence an
election for a bargaining representative for registered nurses
of the facility.
``(iv) At the time of the filing of the petition for
registered nurses under section 101(a)(15)(H)(i)(c), notice of
the filing has been provided by the facility to the bargaining
representative of the registered nurses at the facility or,
where there is no such bargaining representative, notice of the
filing has been provided to the registered nurses employed at
the facility through posting in conspicuous locations.
``(v) The facility will not, with respect to any alien
issued a visa or otherwise provided nonimmigrant status under
section 101(a)(15)(H)(i)(c)--
``(I) authorize the alien to perform nursing
services at any worksite other than a worksite
controlled by the facility; or
``(II) transfer the place of employment of the
alien from one worksite to another.
``(vi) The facility will not, with respect to any alien
issued a visa or otherwise provided nonimmigrant status under
section 101(a)(15)(H)(i)(c), require the alien to pay a penalty
(as determined under State law) for ceasing employment prior to
a date agreed to by the alien and the facility.
``(B) A copy of the attestation shall be provided, within 30 days
of the date of filing, to registered nurses employed at the facility on
the date of filing.
``(C) The Secretary shall review the attestation only for
completeness and obvious inaccuracies. Unless the Secretary finds that
the attestation is incomplete or obviously inaccurate, the Secretary
shall provide the certification described in section
101(a)(15)(H)(i)(c) within 7 days of the date of the filing of the
attestation.
``(D) Subject to subparagraph (F), an attestation under
subparagraph (A)--
``(i) shall expire on the date that is the later of--
``(I) the end of the 3-year period beginning on the
date of its filing with the Secretary; or
``(II) the end of the period of admission under
section 101(a)(15)(H)(i)(c) of the last alien with
respect to whose admission it was applied (in
accordance with clause (ii)); and
``(ii) shall apply to petitions filed during the 3-year
period beginning on the date of its filing with the Secretary
if the facility states in each such petition that it continues
to comply with the conditions in the attestation.
``(E) A facility may meet the requirements of this paragraph with
respect to more than one registered nurse in a single attestation.
``(F)(i) The Secretary of Labor shall compile and make available
for public examination in a timely manner in Washington, D.C., a list
identifying facilities that have filed petitions for nonimmigrants
under section 101(a)(15)(H)(i)(c) and, for each such facility, a copy
of the facility's attestation under subparagraph (A) (and accompanying
documentation) and each such petition filed by the facility.
``(ii) The Secretary shall establish a process, including
reasonable time limits, for the receipt, investigation, and disposition
of complaints respecting a facility's failure to meet conditions
attested to or a facility's misrepresentation of a material fact in an
attestation. Complaints may be filed by any aggrieved person or
organization (including bargaining representatives, associations deemed
appropriate by the Secretary, and other aggrieved parties as determined
under regulations of the Secretary). The Secretary shall conduct an
investigation under this clause if there is reasonable cause to believe
that a facility willfully failed to meet conditions attested to.
Subject to the time limits established under this clause, this
subparagraph shall apply regardless of whether an attestation is
expired or unexpired at the time a complaint is filed.
``(iii) Under such process, the Secretary shall provide, within 180
days after the date such a complaint is filed, for a determination as
to whether or not a basis exists to make a finding described in clause
(iv). If the Secretary determines that such a basis exists, the
Secretary shall provide for notice of such determination to the
interested parties and an opportunity for a hearing on the complaint
within 60 days of the date of the determination.
``(iv) If the Secretary of Labor finds, after notice and
opportunity for a hearing, that a facility (for which an attestation is
made) has willfully failed to meet a condition attested to or that
there was a willful misrepresentation of material fact in the
attestation, the Secretary shall notify the Attorney General of such
finding and may, in addition, impose such other administrative remedies
(including civil monetary penalties in an amount not to exceed $1,000
per nurse per violation, with the total penalty not to exceed $10,000
per violation) as the Secretary determines to be appropriate. Upon
receipt of such notice, the Attorney General shall not approve
petitions filed with respect to a facility during a period of at least
one year for nurses to be employed by the facility.
``(v) In addition to the sanctions provided for under clause (iv),
if the Secretary finds, after notice and an opportunity for a hearing,
that a facility has violated the condition attested to under
subparagraph (A)(ii) (relating to payment of registered nurses at the
prevailing wage rate), the Secretary shall order the facility to
provide for payment of such amounts of back pay as may be required to
comply with such condition.
``(G)(i) The Secretary shall impose on a facility filing an
attestation under subparagraph (A) a filing fee, in an amount
prescribed by the Secretary based on the costs of carrying out the
Secretary's duties under this subsection, but not exceeding $250.
``(ii) Fees collected under this subparagraph shall be deposited in
a fund established for this purpose in the Treasury of the United
States.
``(iii) The collected fees in the fund shall be available to the
Secretary, to the extent and in such amounts as may be provided in
appropriations Acts, to cover the costs described in clause (i), in
addition to any other funds that are available to the Secretary to
cover such costs.
``(3) The period of admission of an alien under section
101(a)(15)(H)(i)(c) shall be for an initial period not to exceed 3
years, and may be extended if the extension does not cause the total
period of authorized admission as such a nonimmigrant to exceed 6
years.
``(4) The total number of nonimmigrant visas issued pursuant to
petitions granted under section 101(a)(15)(H)(i)(c) in each fiscal year
shall not exceed 195,000.
``(5) A facility that has filed a petition under section
101(a)(15)(H)(i)(c) to employ a nonimmigrant to perform nursing
services for the facility--
``(A) shall provide the nonimmigrant a wage rate and
working conditions commensurate with those of nurses similarly
employed by the facility; and
``(B) shall not interfere with the right of the
nonimmigrant to join or organize a union.
``(6) For purposes of this subsection and section
101(a)(15)(H)(i)(c):
``(A) The term `facility' includes a hospital, nursing
home, skilled nursing facility, registry, clinic, assisted-
living center, and an employer who employs any registered nurse
in a home setting.
``(B)(i) The term `lay off' with respect to a worker (for
purposes of paragraph (2)(A)(iii))--
``(I) means to cause the worker's loss of
employment, other than through a discharge for
inadequate performance, violation of workplace rules,
cause, voluntary departure, voluntary retirement, or
the expiration of a grant or contract; but
``(II) does not include any situation in which the
worker is offered, as an alternative to such loss of
employment, a similar employment opportunity with the
same employer at equivalent or higher compensation and
benefits than the position from which the employee was
discharged, regardless of whether or not the employee
accepts the offer.
``(ii) Nothing in this subparagraph is intended to limit an
employee's or an employer's rights under a collective
bargaining agreement or other employment contract.
``(C) The term `Secretary' means the Secretary of Labor.''.
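A note on the mechanics of paragraph (2)(D) above: the expiration rule is a "later of" comparison between two dates, and the petition window is a simple three-year check. The Python sketch below is illustrative only, not statutory language; the function and variable names are hypothetical.

```python
# Illustrative sketch of paragraph (2)(D): when an attestation expires and which
# petitions may rely on it. Names are hypothetical; the statute controls.
from datetime import date
from typing import Optional

def _plus_three_years(d: date) -> date:
    # Simple three-year shift; the Feb 29 edge case is folded to Feb 28 in this sketch.
    try:
        return d.replace(year=d.year + 3)
    except ValueError:
        return d.replace(year=d.year + 3, day=28)

def attestation_expiration(filing_date: date,
                           last_admission_end: Optional[date]) -> date:
    """Clause (i): the later of (I) three years after filing with the Secretary, or
    (II) the end of the admission period of the last alien admitted under it."""
    three_years_out = _plus_three_years(filing_date)
    if last_admission_end is None:
        return three_years_out
    return max(three_years_out, last_admission_end)

def petition_may_rely(filing_date: date, petition_date: date,
                      facility_reaffirms: bool) -> bool:
    """Clause (ii): a petition filed within three years of the attestation's filing
    may rely on it if the facility states it still complies with the attestation."""
    return facility_reaffirms and petition_date <= _plus_three_years(filing_date)

# Example: an attestation filed 2001-10-01 with a last admission ending 2005-03-31
# would expire 2005-03-31, the later of the two dates.
print(attestation_expiration(date(2001, 10, 1), date(2005, 3, 31)))
```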
(b) Regulations; Effective Date.--Not later than 90 days after the
date of the enactment of this Act, regulations to carry out subsection
(a) shall be promulgated by the Secretary of Labor, in consultation
with the Secretary of Health and Human Services and the Attorney
General. Notwithstanding the preceding sentence, the amendment made by
subsection (a) shall take effect 90 days after the date of the
enactment of this Act, regardless of whether such regulations are in
effect on such date.
SEC. 3. INCREASE IN NUMBER OF WAIVERS OF TWO-YEAR FOREIGN RESIDENCE
REQUIREMENT UPON REQUESTS BY STATE AGENCIES.
Section 214(l)(1)(B) of the Immigration and Nationality Act (8
U.S.C. 1184(l)(1)(B)) is amended by striking ``20;'' and inserting
``40;''. | Rural and Urban Health Care Act of 2001 - Amends the Immigration and Nationality Act to: (1) revise admission requirements for nonimmigrant alien nurses, including increasing the type of qualifying employer-facilities; and (2) increase the number of annual two-year foreign residency requirement waivers for aliens receiving graduate medical education or training in the United States. |
Buckingham Palace has said it is disappointed that footage from 1933 showing the Queen performing a Nazi salute has been released.
The Sun has published the film which shows the Queen aged about seven, with her mother, sister and uncle.
The palace said it was "disappointing that film, shot eight decades ago... has been obtained and exploited".
The newspaper has refused to say how it got the footage but said it was an "important and interesting story".
'Misleading and dishonest'
The black and white footage, which lasts about 17 seconds, shows the Queen playing with a dog on the lawn in the gardens of Balmoral, the Sun says.
The Queen Mother then raises her arm in the style of a Nazi salute and, after glancing towards her mother, the Queen mimics the gesture. Prince Edward, the future Edward VIII, is also seen raising his arm.
The footage is thought to have been shot in 1933 or 1934, when Hitler was rising to prominence as Fuhrer in Germany, but the circumstances in which it was shot are unclear.
Image caption: The Queen recently made a state visit to Germany where she visited a former Nazi concentration camp
A Palace source said: "Most people will see these pictures in their proper context and time. This is a family playing and momentarily referencing a gesture many would have seen from contemporary news reels.
"No-one at that time had any sense how it would evolve. To imply anything else is misleading and dishonest."
'Fascinating insight'
The source added: "The Queen and her family's service and dedication to the welfare of this nation during the war, and the 63 years the Queen has spent building relations between nations and peoples speaks for itself."
BBC Royal correspondent Sarah Campbell said Buckingham Palace was not denying the footage was authentic but that there were "questions over how this video has been released".
Media caption: The Palace says the footage has been "exploited"
Who was the man in the video?
Image caption: Edward pictured with his wife Wallis Simpson
Edward was uncle of the young princess Elizabeth and brother of George VI
He briefly became King himself in 1936 but abdicated just 326 days later because of his plans to marry American divorcee Wallis Simpson - a marriage government and church figures deemed unacceptable
Replaced by George VI, Edward was one of the shortest reigning monarchs in British history
In October 1937, Edward and his wife - by now the Duke and Duchess of Windsor - visited Nazi Germany with the idea of discussing becoming a figurehead for an international movement for peace on Hitler's terms
During the controversial visit they met Hitler and dined with his deputy, Rudolf Hess
Evidence emerged that Edward visited a concentration camp in its early stages, although it is not thought that evidence of mass murder was made clear to him
He moved to France with the Duchess after the war and died there in 1972
Dickie Arbiter, a former Buckingham Palace press secretary, said the Palace would be investigating.
"They'll be wondering whether it was in fact something that was held in the Royal Archives at Windsor, or whether it was being held by the Duke of Windsor's estate," he said.
"And if it was the Duke of Windsor's estate, then somebody has clearly taken it from the estate and here it is, 82 years later.
"But a lot of questions have got to be asked and a lot of questions got to be answered."
Image caption: Edward and his wife Wallis Simpson met Adolf Hitler two years before World War Two broke out
Sun managing editor Stig Abell said he did not accept Buckingham Palace's accusation that the footage had been "exploited".
He said the newspaper had decided to publish the story because it was of great public importance and the involvement of Prince Edward gave it "historical significance".
The then Prince of Wales faced numerous accusations of being a Nazi sympathiser and was photographed meeting Hitler in Munich in October 1937.
Media caption: Royal Correspondent Peter Hunt: "Palace focusing on breach of privacy"
Analysis
BBC royal correspondent Peter Hunt
It's an arresting, once private image on the front of a national newspaper.
Its publication has prompted Palace officials to talk about a breach of privacy and the Sun to argue it's acting in the national interest.
Apart from the obvious anger on one side, it's striking how both sides have talked of the need to put the home movie in its "proper context".
From the Palace perspective this is a six-year-old princess who didn't attach any meaning to the gesture. Such an explanation doesn't, of course, explain the thinking of her mother.
Those around the royals are also keen to focus on the war record of the then King, Queen and their two daughters.
What they're less keen to focus on - and what the Queen would like not to be reminded of - is the behaviour of her uncle.
A man, who was briefly King, and whose fascination with Nazi Germany is well documented.
'Social history'
Mr Abell said: "We are not using it to suggest any impropriety on behalf of them. But it is an important and interesting issue, the extent to which the British aristocracy - notably Edward VIII, in this case - in the 1930s, were sympathetic towards fascism.
Media caption: Sun managing editor Stig Abell: Video "should be shown"
"That must be a matter of national and public interest to discuss. And I think this video and this footage animates that very clearly."
Mr Abell told the BBC the video was a piece of "social history" and said the paper had set out the context of the time and explained that the Queen and Queen Mother went on to become "heroes" of World War Two.
He denied the video had intruded into the Royal Family's privacy.
"I think this is a piece of social history. One of the most significant events in our country's history, the Second World War, the rise of Nazism, one of the most pernicious movements in human history, and I think one is entitled to have a look at some of the background to it."
He added: "We're very clear. We're of course not suggesting anything improper on behalf of the Queen or the Queen Mum."
The Queen was 13 when World War Two broke out and she later served in the Women's Auxiliary Territorial Service.
In June she made a state visit to Germany where she visited the Bergen-Belsen concentration camp and met some of the survivors and liberators. ||||| LONDON - Royal officials in Britain expressed anger Saturday that archive film showing Queen Elizabeth performing a Nazi salute as a young girl in the 1930s had been "exploited" by a Murdoch tabloid.
The video, obtained by The Sun, shows the queen, aged about six, joining her uncle, Prince Edward, in raising an arm in the grounds of their Scottish vacation home, Balmoral. The previously unseen footage is thought to have been shot in 1933 or 1934, when Hitler was rising to prominence in Germany.
The newspaper, owned by the U.K. division of Rupert Murdoch's News Corp, ran the story on its front page under the headline "Their royal heilnesses."
The newspaper defended its decision to publish the film, saying it was of “immense interest to historians” and would be seen in the context of the period.
However, a Buckingham Palace spokeswoman said: “It is disappointing that film, shot eight decades ago and apparently from Her Majesty’s personal family archive, has been obtained and exploited in this manner."
A royal source said: “Most people will see these pictures in their proper context and time. This is a family playing and momentarily referencing a gesture many would have seen from contemporary news reels.
“No one at that time had any sense how it would evolve. To imply anything else is misleading and dishonest. The queen is around six years of age at the time and entirely innocent of attaching any meaning to these gestures.”
Secret 1933 film shows Edward VIII teaching Nazi salute to Queen. Watch EXCLUSIVE video FREE http://t.co/cfKKZYCjNp pic.twitter.com/NiPG5UiImQ — The Sun (@TheSun) July 17, 2015
The source added: “The queen and her family's service and dedication to the welfare of this nation during the war, and the 63 years she has spent building relations between nations and peoples speaks for itself."
The grainy film shows the queen playing with a dog before raising an arm to wave to the camera, ITV News reported. Her mother then makes a Nazi salute, and after glancing towards her mother, the queen mimics the gesture.
Prince Edward, who later became King Edward VIII and abdicated in 1936 to marry the American socialite Wallis Simpson, faced accusations of being a Nazi sympathizer. The couple was photographed meeting Hitler in Munich in October 1937, less than two years before the Second World War broke out.
"they do not reflect badly on our Queen..They do, however, provide insight into the warped prejudices of Edward VIII" pic.twitter.com/5FbViJmkw3 — Dylan Sharpe (@dylsharpe) July 18, 2015
In an editorial column, the newspaper defended the queen, saying: “These images have been hidden for 82 years. We publish them today, knowing they do not reflect badly on our queen, her late sister or mother in any way.
"They do, however, provide a fascinating insight in the warped prejudices of Edward VIII and his friends in that bleak, paranoid, tumultuous decade."
Stig Abell, Managing Editor of newspaper, said it was "a matter of national historic significance to explore what was going on in the 30s ahead of the Second World War."
He said: "We're not, of course, suggesting anything improper on the part of the queen," adding: "Edward VIII became a Nazi sympathizer in 1936 ... after he abdicated he headed off to Germany briefly in 1937. In 1939 he was talking about his sympathy for Hitler and Germany."
"I think this is a matter of historical significance ... from which we shouldn't shy away." ||||| Tabloid’s managing editor, Stig Abell, says reason for releasing leaked footage, apparently shot in 1933 or 1934, is to provide context for attitudes before WW2
The managing editor of the Sun has defended his newspaper’s decision to release leaked footage, apparently shot in 1933 or 1934, showing the Queen perform a Nazi salute as a matter of historical significance.
Image caption: The Sun front page showing a still of footage showing a young Queen performing a Nazi salute with her family at Balmoral. Photograph: TheSun/Twitter/PA
The black-and-white footage shows the Queen, then aged six or seven, and her sister Margaret, around three, joining the Queen Mother and her uncle, Prince Edward, the Prince of Wales, in raising an arm in the signature style of the German fascists.
Edward, who later became King Edward VIII and abdicated to marry the American socialite Wallis Simpson, faced numerous accusations of being a Nazi sympathiser. The couple were photographed meeting Hitler in Munich in October 1937, less than two years before the second world war broke out.
Buckingham Palace said in a statement that it was disappointing the film – shot eight decades ago – had been exploited, while questions have been raised over how the newspaper obtained the clip, which is apparently from the monarch’s personal family archive.
But speaking to the BBC, Stig Abell, managing editor of the Sun, defended the move. He said: “I think the justification is relatively evident - it’s a matter of national historical significance to explore what was going on in the ’30s ahead of the second world war.
“We’re very clear we’re not, of course, suggesting anything improper on the part of the Queen or indeed the Queen Mum.
“It’s very clear Edward VIII, who became a Nazi sympathiser, in ’36 after he abdicated he headed off to Germany briefly.
“In ’37 [to] 1939, he was talking about his sympathy for Hitler and Germany, even before his death in 1970 he was saying Hitler was not a bad man.
“I think this is a matter of historical significance, I think this is footage that should be shown providing the context is very clear.
“We’ve taken a great amount of trouble and care to demonstrate that context at great length in the paper today. This is a matter of historical significance from which we shouldn’t shy away.”
The grainy clip, which lasts around 17 seconds, shows the Queen playing with a dog on the lawn in the gardens of Balmoral, the Sun claims, before she raises an arm to wave to the camera with Margaret.
The Queen Mother then makes a Nazi salute, and, after glancing towards her mother, the Queen mimics the gesture.
The Queen Mother repeats the salute, joined by Edward, and Margaret raises her left hand before the two children continue dancing and playing on the grass.
A Palace source said: “Most people will see these pictures in their proper context and time. This is a family playing and momentarily referencing a gesture many would have seen from contemporary news reels.
“No one at that time had any sense how it would evolve. To imply anything else is misleading and dishonest. The Queen is around six years of age at the time and entirely innocent of attaching any meaning to these gestures.
“The Queen and her family’s service and dedication to the welfare of this nation during the war, and the 63 years the Queen has spent building relations between nations and peoples speaks for itself.”
The footage is thought to have been shot in 1933 or 1934, when Hitler was rising to prominence in Germany.
In its leader column, the Sun said its focus was not on the young child who would become queen, but on her uncle, who was then heir to the throne.
The Queen’s former press secretary Dickie Arbiter said there would be great interest in royal circles in finding out how the footage was made public.
“They’ll be wondering whether it was in fact something that was held in the Royal Archives at Windsor, or whether it was being held by the Duke of Windsor’s estate,” he told the BBC news.
“And if it was the Duke of Windsor’s estate, then somebody has clearly taken it from the estate and here it is, 82 years later. But a lot of questions have got to be asked and a lot of questions got to be answered.” | – Buckingham Palace is seething today at images in the Sun of a young Queen Elizabeth raising her hand in the Nazi salute. Rupert Murdoch's tabloid obtained a short film clip of the royal family horsing around in 1933 or 1934, when Elizabeth was about 7 years old. In the clip, Elizabeth makes the gesture along with her mother; her uncle, Prince Edward; and her younger sister, Margaret, reports the Guardian. The palace isn't disputing that the family is making the salute but said it is "disappointing that film, shot eight decades ago and apparently from Her Majesty's personal family archive, has been obtained and exploited in this manner," reports NBC News. The Sun defends its decision to publish the images—under the headline "Their Royal Heilnesses"—as a matter of "historical significance." It won't say how it got the film clip. It's all about context, a palace source tells the BBC: "This is a family playing and momentarily referencing a gesture many would have seen from contemporary news reels," says the unnamed official. "No one at that time had any sense how it would evolve. To imply anything else is misleading and dishonest." |
trace minerals such as zn , cu and mn are necessary for health in dairy cows because they play important roles in protein synthesis , body metabolism , formation of connective tissue and immune system function .
oxidative stress is an important pathogenic factor in many diseases that has also recently been found to be involved in the development of lameness in cows .
lameness is a crucial welfare issue in modern dairy husbandry that could result in serious economic losses to dairy producers because of decreased milk yield , reduced fertility , and increased treatment costs and culling rates .
indeed , lameness has been identified as the third most important health problem in dairy farming following mastitis and infertility .
however , it has been shown that supplemental trace minerals ( such as cu , zn and mn ) may help reduce the incidence of lameness . regarding trace mineral sources , sulfate salts are readily available and widely used as sources of inorganic trace elements , but their efficiency of absorption is very low .
organic trace minerals are reportedly absorbed , stored , metabolized , and transferred more efficiently than in their inorganic forms .
organic forms of zn , cu , and mn have been developed to increase intestinal absorption and mineral bioavailability .
furthermore , nocek et al . found that supplementation with organic trace minerals can reduce the incidence of lameness .
therefore , this experiment was designed to determine the effects of dietary zn / cu / mn applied as sulfate salts or metal methionine hydroxyl analogs on production performance , different biochemical indicators and indices related to hoof health .
all experiments were reviewed and approved by the animal care and use committee of shandong agricultural university ( approval no .
cows selected in this experiment were of similar parity , lactation , and milk production . before the start of the experiment ,
the gait score of each cow was determined using a 5-point gait score system ( table 1 ) .
cows were assigned into two groups of 24 cows each for health or lameness based on their gait score ( 1 and 2 indicate health and 3 , 4 and 5 lameness ) .
cows in each group were assigned to the following two treatments : ( 1 ) control ( con ) : 50 mg zn , 12 mg cu , 20 mg mn / kg dm as sulfate salts ; ( 2 ) chelated trace mineral ( ctm ) : 50 mg zn , 12 mg cu , 20 mg mn / kg dm as metal methionine hydroxyl analog ( novus international , usa ) .
cows were injected with 2 ml of foot - and - mouth disease ( fmd ) trivalent inactivated vaccine ( catalog no .
20130826 ; jinyu baoling bio - pharmaceutical , china ) at day 90 of the experiment period .
cows were housed in individual tie stalls and milked three times a day ( 3:00 am , 10:00 am , and 5:00 pm ) while receiving the diets .
all cows in this experiment had free access to water and received the same basal diet ( table 2 ) .
basal diet samples were analyzed monthly , and all samples were well above the nutrient research council requirements ( table 3 ) .
milk yield of the experimental cows was recorded every 10 days , and milk samples were collected for analysis of fat , protein , lactose , and solid non - fat ( snf ) .
additionally , 15 ml of blood was sampled from the coccygeal vein on day 0 , 90 , and 180 .
serum samples were obtained by centrifuging the blood samples at 1,500 g for 10 min , then stored at -20 °c for later analysis of glutathione peroxidase ( gsh - px ) , superoxide dismutase ( sod ) , catalase ( cat ) activities , reduced glutathione ( gsh ) , oxidized glutathione ( gssg ) , malondialdehyde ( mda ) , metallothionein ( mt ) , procollagen - ii n - terminal peptide ( piianp ) , c - terminal telopeptide of type ii collagen ( ctx - ii ) , cartilage oligomeric matrix protein ( comp ) , interleukin 1 ( il-1 ) , immunoglobulin a ( iga ) , and zn / cu / mn levels .
serum gsh - px , sod , and cat activities and gsh , gssg , and mda levels were measured by spectrophotometry with commercial kits ( jiancheng , china ) .
serum mt , piianp , ctx - ii , comp , il-1 and iga were determined using the double - antibody sandwich enzyme - linked immunosorbent assay ( elisa ) method with commercial kits ( lengton , china ) .
in addition , blood samples were collected at day 90 , 120 , 150 , and 180 of the experimental period for analysis of fmd antibody titers .
fmd antibody titers ( a , o , and asia i ) were measured by the liquid - phase blocking elisa method with commercial kits ( liquid - phase blocking elisa kit for detecting foot - and - mouth disease virus type a / o / asia i antibodies , lanzhou veterinary research institute , china ) .
fmd antibody titers were log2 transformed before analysis to obtain homogeneity of the residual variance . at day 0 , 90 and 180 of the experiment period
, hoof hardness of the apex sole in the heel was tested using a shore scale d durometer to measure the ball indentation hardness .
five measurements were made for each cow , and the average value was recorded as the hoof hardness .
hair samples were also collected and analyzed for trace minerals . for trace mineral analysis ,
serum samples were thawed before testing . for each sample , 2 ml was digested with 20 ml of concentrated nitric acid and 2 ml of perchloric acid in an erlenmeyer flask .
one gram hair samples were accurately weighed , after which each sample was carbonized in a crucible .
carbonized samples were incinerated in the muffle furnace until completely incinerated , and then dissolved in nitric acid .
the levels of zn , cu and mn were analyzed using a flame atomic spectrophotometer ( purkinje general , china ) after the volumes of the digested samples were constant .
all statistical analyses were performed with sas ( ver . 8.0 ; sas institute , usa ) using the cow as the experimental unit .
all results were evaluated using a mixed model including day , lameness , ctm , day lameness , day ctm , lameness ctm , and lameness ctm day .
day was included as a repeated measure using an autoregressive covariance structure , and data collected the day before the start of the trial were included as covariates .
significance was declared at p ≤ 0.05 , and trends were reported at 0.05 < p ≤ 0.10 .
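the analysis above was run in sas ; as a rough , non - authoritative illustration of the same factorial layout ( day , lameness and ctm with all interactions , the pre - trial measurement as a covariate , and cow as the repeated - measures subject ) , a python sketch using statsmodels is given below . column names are hypothetical , and a per - cow random intercept stands in for the autoregressive covariance structure fitted in sas .

```python
# A minimal sketch, not the authors' code: the paper used SAS for a repeated-measures
# mixed model; this approximates the same fixed-effect layout in Python/statsmodels.
# Column names (cow, day, lameness, ctm, baseline, response, fmd_titer) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")   # hypothetical input: one row per cow per sampling day

# fmd antibody titers were log2-transformed before analysis, as stated above.
df["fmd_titer_log2"] = np.log2(df["fmd_titer"])

# Fixed effects: day, lameness, ctm and all their interactions, plus the pre-trial
# value as a covariate; cow is the grouping factor. A random intercept per cow is
# used here instead of the AR(1) repeated-measures covariance fitted in SAS.
model = smf.mixedlm("response ~ C(day) * C(lameness) * C(ctm) + baseline",
                    data=df, groups=df["cow"])
fit = model.fit(reml=True)
print(fit.summary())
```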
there was no significant difference in dmi , milk yield or compositions between healthy and lame cows .
cows receiving ctm had a significantly lower milk fat percent ( p = 0.031 ) . moreover ,
cows fed with ctm had numerically higher milk yield and protein yield than those in the con group ( p = 0.102 and 0.103 , respectively ) .
however , there were no differences in dmi , protein , lactose , snf and fat yield between the con and ctm group .
as shown in table 5 , there was no significant difference between healthy and lame cows for gsh , gssg , gsh / gssg , cat , gsh - px , piianp , ctx - ii and comp .
however , lame cows had significantly lower sod ( p = 0.039 ) and higher mda ( p = 0.031 ) levels than healthy cows .
when compared with healthy cows , lame cows tended to show lower mt ( p = 0.087 ) .
cows fed ctm had significantly higher gsh ( p = 0.008 ) and lower gssg ( p = 0.010 ) values , resulting in a higher gsh / gssg ( p = 0.009 ) than for those fed the con . in addition , gsh - px , sod and mt were significantly higher ( p = 0.011 , 0.009 and 0.034 , respectively ) and mda , piianp and ctx - ii were significantly lower ( p = 0.007 , 0.008 , and 0.039 , respectively ) due to ctm supplementation .
there was no difference in cat observed between the con and ctm groups ( p > 0.1 ) .
additionally , no interaction between lameness status and ctm was observed in this study for the blood variables tested above . as shown in table 6
, there was no significant difference in il-1 , iga , and fmd antibody titers ( type a , o , and asia i , respectively ) between healthy and lame cows ( p > 0.1 ) .
additionally , there was no significant difference in the il-1 and fmd antibody titer - asia i type between con and ctm groups . however , iga and fmd antibody titer - o type were significantly higher because of ctm supplementation ( p = 0.008 and 0.012 , respectively ) . fmd antibody titer - a type tended to increase for cows supplemented with ctm ( p = 0.080 ) .
the levels of zn / cu / mn in serum and hair are shown in table 7 .
there was no difference in serum cu between healthy and lame cows ( p > 0.1 ) .
however , lame cows had significantly lower serum zn and mn than healthy cows ( p = 0.007 and 0.08 , respectively ) .
serum zn , cu and mn were significantly higher because of ctm supplementation ( p = 0.021 , 0.019 , and 0.045 , respectively ) .
similarly , there was no difference in zn and mn in hair between healthy or lame cows .
lame cows tended to show lower cu in hair than healthy cows ( p = 0.078 ) .
hair zn , cu and mn were significantly higher because of ctm supplementation ( p = 0.009 , 0.010 and 0.021 , respectively ) .
ctm tended to increase the hoof hardness on day 90 after supplementation ( p = 0.085 ) , while significant improvement of hoof hardness was observed on day 180 due to ctm supplementation ( p = 0.001 ) , regardless of healthy or lame cows .
finally , ctm affected the gait score of lame cows at the end of the experiment ( day 180 ) ( table 9 ) .
no obvious differences were observed in dmi , milk yield and components except milk fat percent between the con and ctm groups .
similar to one previous study , no significant effect was found for organic trace minerals on milk yield and components .
previous studies have shown variable results regarding production response to organic trace mineral sources . a meta - analysis evaluating
the effectiveness of supplementation with organic trace minerals found that organic trace minerals increased milk yield relative to control cows .
another study also found that supplementation of organic trace minerals to the diet increased milk yield , but had no effect on milk composition . in this study , cows fed ctm showed significantly decreased milk fat percentage , but the milk fat yield did not differ significantly .
these findings suggest that decreased milk fat may be a dilution effect because cows fed with ctm tended to have numerically higher milk yield .
zn plays an important role in the induction and activation of gsh - px in liver cells , thereby reducing free radicals . in the antioxidant system ,
zn , cu and mn are components of zn / cu / mn - sod , which can potentially clear superoxide anion radicals .
moreover , a large amount of mt is produced when levels of available zn become critical .
mt is a potent free radical scavenger , and its major function is to limit oxidative damage . in this study ,
lame cows had higher oxidant status , but ctm was found to markedly restore the enzymatic ( gsh - px and sod ) and non - enzymatic ( gsh and mt ) antioxidants levels .
similarly , osorio et al . found that lame cows had higher levels of oxidative stress than healthy cows .
moreover , supplementation with organic zn , cu and mn was shown to reduce oxidative stress .
however , one study found that there was no effect of feeding organic zn on serum sod activity , which may be because only one type of organic trace mineral was provided .
ctx - ii is a degradation product of type ii collagen that can reflect the degree of cartilage degradation .
serum comp , which is a major noncollagenous extracellular matrix protein in cartilage , has been shown to increase in response to arthritis .
interestingly , aigner et al . found that collagen iia mrna is highly expressed in chondrocytes of patients with arthritis , while normal adult cartilage cells did not express it .
these findings suggest that levels of the degradation product of collagen iia , i.e. , piianp , were higher in arthritis patients .
however , there were no significant differences in these three arthritis biomarkers observed between healthy and lame cows in the present study .
this may have been because the selected lame cows , or indeed both the healthy and lame cows , had only very mild arthritis .
however , cows fed ctm had lower levels of arthritis biomarkers than those fed with con , indicating that ctm could reduce the incidence of arthritis .
il-1 is a cytokine that is produced by macrophages during tissue injury , infection or antigen stimulation .
this cytokine is involved in regulation of the immune response and inflammation , and it can increase the host 's defensive mechanisms . in this study
, no significant difference was observed in changes of il-1 level between treatments , suggesting that this cytokine was not affected by ctm supplementation .
moreover , another immune marker , iga , plays a key role in a variety of protective functions via specific receptors and immune mediators .
indeed , iga can reflect the body 's immune function . as shown in table 6 , the value of iga in cows fed ctm was significantly higher than that in the con group , indicating that organic trace minerals may strengthen immune function in cows .
in addition , cell - mediated immunity has been found to be involved in the clearance of fmd virus from infected animals .
moreover , specific t - cell responses are associated with the induction of anti - fmd virus antibodies .
furthermore , it is known that trace minerals ( particularly zn , cu , mn ) could activate t - cells and affect antibody responses in the body . in this study , fmd antibody titers ( o type , with a similar trend for a type ) were higher in cows supplemented with ctm .
similarly , organic trace minerals have been shown to increase rabies antibody titer in cows .
shinde et al . also reported that organic zn could improve the immune response of pigs .
based on these results , there is further evidence that ctm may enhance immune response in dairy cows .
the elevation of fmd antibody titers varied among serotypes , which may have been due to several factors , including the epidemic serotype of the region , the secretion mechanism of the antibody , or vaccine quality .
lame cows have been reported to have lower blood zn levels than healthy cows , which is consistent with the results of the present study . as shown in table 7
, administration of organic trace minerals resulted in increased zn , cu and mn in serum , which may have been due to the better availability of these materials from organic forms .
in addition , hair trace minerals levels were shown to be the result of long - term accumulation .
as shown in table 7 , lame cows had lower cu in hair than healthy cows , which may have been because of the long - term lack of cu in lame cows .
administration of organic trace minerals led to increased levels of zn , cu and mn in hair , which further demonstrated that organic trace minerals have better bioavailability .
keratin , which is the main component of hooves , can improve hoof integrity by speeding wound healing and increasing hardness of the hoof .
zn and cu are essential minerals for keratin synthesis . as shown in tables 7 and 8 ,
lame cows have decreased hoof hardness , but hoof hardness was significantly improved when they were fed ctm . moreover ,
the gait score of lame cows was affected by ctm supplementation ( table 9 ) .
trace mineral deficiencies restricted the keratin synthesis , resulting in lower hoof hardness . in this study ,
lower levels of trace minerals in the body might have been the main reason for reduced hoof hardness .
one previous study attempted supplementation with 36 mg organic zn / kg of dry matter for 14 weeks to improve hoof hardness of cows , but no significant effect was obtained . when compared with our experimental design , we think that organic trace minerals should be fed for a longer time and that a complex of trace minerals ( zn / cu / mn ) should be used for dairy cows . | to evaluate the effects of chelated zn / cu / mn on redox status , immune responses and hoof health in lactating holstein cows , 48 head in early lactation were divided into healthy or lame groups according to their gait score .
cows were fed the same amount of zn / cu / mn as sulfate salts or in chelated forms for 180 days , and foot - and - mouth disease ( fmd ) vaccine was injected at day 90 .
the results showed that lame cows had lower antioxidant function , serum zn / mn levels , hair cu levels , and hoof hardness .
moreover , increased antioxidant status , fmd antibody titers , serum and hair levels of zn / cu / mn , and hoof hardness and decreased milk fat percent and arthritis biomarkers were observed in cows fed chelated zn / cu / mn . in summary , supplementation with chelated zn / cu / mn improved antioxidant status and immune responses , reduced arthritis biomarkers , and increased accumulation of zn / cu / mn in the body and hoof hardness in dairy cows . |
Investigators were examining evidence found in the car and talking with Mr. Ortega-Hernandez’s relatives, who told them that he appeared to have a fixation on the White House or President Obama, one official said. Another official said the Bureau of Alcohol, Tobacco, Firearms and Explosives had traced the origin of the weapon, but the official declined to disclose the circumstances of its purchase.
The United States Park Police had obtained an arrest warrant charging Mr. Ortega-Hernandez with a felony count of carrying a deadly weapon. The agency had also released photographs showing that Mr. Ortega-Hernandez had distinctive tattoos, including the word “Israel” on the left side of his neck. He was also said to have three dots tattooed on his right hand, the name “Ortega” on his back, rosary beads and hands clasped in prayer on his right chest and folded hands on his left chest.
The police in Arlington County took those photographs on Friday when they stopped Mr. Ortega-Hernandez after someone reported that a man outside was circling the area in a suspicious manner. Mr. Ortega-Hernandez was on foot and unarmed, said Lieutenant Joe Kantor, a county police spokesman, and since “there was no crime,” there was no reason to arrest him.
Late on Friday, the police searched the Occupy DC protest camp, on McPherson Square just blocks from the White House, after reports that the suspect might have spent time there. Protesters there said on Wednesday that the police had been through their encampment several times since then, showing around a photograph of Mr. Ortega-Hernandez.
Sgt. David Schlosser of the Park Police said any motive would not be clear “until we can talk to this guy.”
He said Mr. Ortega-Hernandez had criminal records in Idaho, Texas and Utah for charges including drugs, underage drinking, domestic violence, resisting arrest and assault on a police officer. Public records indicated much the same thing.
The Secret Service did not have Mr. Ortega-Hernandez on record as someone who had made any threats to the president, an agency official said.
Mr. Obama and his wife, Michelle, were away on Friday; the president was in San Diego on his way to an Asia-Pacific economic forum in Hawaii, and on Wednesday was in Australia.
It was not clear for several days that someone had deliberately fired at the White House. But the Secret Service has now found at least two bullets. The agency would not say where the White House had been struck, but workers on Wednesday were examining a window overlooking the Truman Balcony, an area outside the second-floor residential quarters where the first family sometimes relaxes or hosts guests.
A second round was found outside, and officials were searching the South Lawn for any other rounds. The area around the White House was generally more crowded than usual with police cars on Wednesday.
The shooting came from roughly 750 yards south of the White House, just outside the outer security perimeter. The security perimeter extends to the south edge of the Ellipse, a grassy area where the National Christmas Tree is displayed. It is across Constitution Avenue from the more distant Washington Monument.
The street area, usually open to the public, is one of the most guarded parts of the city — or of the country, for that matter — with the United States Park Police patrolling on the National Mall and the District of Columbia police on the streets. In addition, the Secret Service, with a uniformed force of 1,400 officers, has agents on guard at fixed positions on the White House grounds and on patrol nearby in cars and on bicycles. Streets are routinely shut down for motorcades. The Secret Service said its security worked effectively on Friday evening, though it planned to “scrutinize how we can make it better,” an official said.
The Secret Service said it had been tipped off to Mr. Ortega-Hernandez’s whereabouts by a hotel employee who recognized a photograph of him and called the Pittsburgh field office. The field office alerted the Pennsylvania State Police, which dispatched troopers to the hotel.
Investigators established Mr. Ortega-Hernandez’s route by talking to people who knew him, an official at the Secret Service said, and circulated photos of him to several hotels in the area. The investigation involved all of the Secret Service’s 122 field offices in the United States, the official said.
The last known episode in which bullets struck the White House occurred in 1994, when one round fired from the area of the Ellipse penetrated a first-floor window and landed in the State Dining Room, and another was found in a Christmas tree near the South Portico. President Bill Clinton and his wife and daughter were sleeping upstairs. No one was hurt. ||||| This image provided by the U.S. Park Police shows an undated image of Oscar Ortega. U.S. Park Police have an arrest warrant out for Ortega, who is believed to be connected to a bullet hitting an exterior... (Associated Press)
| – The 21-year-old drifter suspected of firing a semi-automatic rifle at the White House has been arrested in Pennsylvania, reports the AP. Police say they caught Oscar Ramiro Ortega-Hernandez at a hotel in the southwestern part of the state after an intense manhunt. His distinctive tattoos, including a prominent one of "Israel" on his neck, didn't help his anonymity. Shots were reported near the White House on Friday, and agents found two bullets yesterday—one stopped by reinforced glass in a window and the other somewhere else on the grounds. Authorities on Friday found an abandoned car with a semiautomatic inside and traced it to Ortega. He reportedly has a history of petty crimes and unstable behavior, reports the New York Times. He had been stopped earlier on Friday by Arlington police after a citizen said he had been suspiciously "circling the area." His family in Idaho reported him missing in late October.
As the election has drawn closer, Donald Trump's voice on Twitter has gotten louder. And progressively more insane.
Finally, he's getting called for his nonsense.
On NBC, Brian Williams said, Trump has "driven well past the last exit to relevance and veered into something closer to irresponsible."
Williams also read some of Trump's dopey tweets on the air, then said, "So, that happened."
It is an extra interesting comment from Williams because Trump is a star for NBC with "The Apprentice." It's rare for two big network personalities to clash.
But, the truth of the matter is that Trump has been a clown for this entire election cycle. Someone had to point it out. |||||
Diane Sawyer was definitely probably drunk tonight during ABC's election coverage. Would you agree? Not convinced yet? She slurred every other word, rambled about the lack of music, and asked a correspondent if the exclamation point in the Obama slogan represented the direction he truly wanted to go with his presidency. What more proof do you need?
But of course it was already true according to Twitter:
nothig to c here guys im ok d'ont worry abot me — Drunk Diane Sawyer (@DrnkDianeSawyer) November 7, 2012
And from our own Caity Weaver: "IT'S LIKE SHE TOOK AN AMBIEN AND WASHED IT DOWN WITH PINOT." Wouldn't be the first time. ||||| Once again, Diane Sawyer helped to lead ABC's Election Night coverage, and once again, she appeared entirely intoxicated while doing so. At least, that's what Twitter thought. As the last of the polls closed on Tuesday night, "Diane Sawyer" became a trending topic and not in a good way. John Gruber, who runs the blog Daring Fireball, tweeted, "Swear to god, Diane Sawyer is drunker than I am." The popular 90s band They Might Be Giants chimed in, "And Diane Sawyer declares tonight's winner is... chardonnay!" New York Times media reporter Brian Stelter cut Sawyer some slack, "Diane Sawyer's name is trending. Many people saying she seems drunk on air. Alternative theory: she gets this way when she's really tired."
Stelter's on to something. We've seen Diane Sawyer in this state before. The night after Barack Obama's inauguration in 2009, Sawyer appeared pretty cheerful on Good Morning America. She also had a hard time putting sentences together which led many to draw the conclusion that she was still drunk from the night before. Then, like now, we're not sure the slurred words and excessive smiles are necessarily proof that Sawyer had done some drinking, but she does look a little bit out of character. Oh well. We all do weird things when we're tired.
Adam Clark Estes | – If you were watching ABC's election coverage last night, there was probably just one question on your mind: Is Diane Sawyer really drunk, or just tired? Gawker has an amusing video montage that seems to support what many assumed: She was smashed. The video pieces together clips of Sawyer slurring and stumbling over her words, swaying in her seat, vehemently complaining about the lack of music, rambling about the exclamation point in Obama's campaign slogan, and at one point intensely reminding viewers that people have "died—literally died" for the right to vote. All the while, George Stephanopoulos looks on with, at times, apparent frustration at his inability to get a word in. The Atlantic Wire rounds up some of the better tweets about the situation ("And Diane Sawyer declares tonight's winner is … chardonnay!"), and has yet more video evidence that we've seen Sawyer like this in the past. But its conclusion is that Sawyer simply gets weird when she's tired. Meanwhile, another news anchor also made headlines last night: On NBC, Brian Williams totally "crushed" Donald Trump, Business Insider reports. Williams reluctantly reported on Trump's Twitter feed, noting that Trump had "driven well past the last exit to relevance and veered into something closer to irresponsible" with his angry election-related tweets. Williams then read some of the tweets on air, before ending with, "So, that happened." Click for a third buzzed-about media moment of the night. |
giant resonances of nuclei are a clear manifestation of the strong collective excitation modes in many - body quantum systems .
detailed experimental and theoretical studies have been devoted to find out all possible giant resonances with various multipole transitions @xcite .
inelastic @xmath1 scattering has been used as the most suitable tool to extract isoscalar multipole strengths . since @xmath1 particle has @xmath5 and @xmath6 and the first excited state is as high as 20.2 mev , only isoscalar natural parity transitions are strongly excited , ( an exception is the weak coulomb excitation of the isovector giant dipole resonance , @xmath7 and @xmath8 ) . at extremely forward scattering - angles including 0@xmath9 ,
cross sections for states with small transferred angular momenta ( @xmath10 ) are strongly enhanced .
in addition , @xmath11 angular distributions at high bombarding energies are characterized by clear diffraction patterns .
these characteristic features allow us to reliably determine the multipole transition strengths .
in fact , by means of the multipole decomposition analysis ( mda ) , many isoscalar giant resonances have been successfully determined , and their excitation strengths have been extracted in recent years by the rcnp and the texas a & m groups @xcite . the giant resonances in @xmath12 mg , @xmath4si and @xmath13ca have already been studied by both groups . among these light nuclei , of special interest
are the giant resonances in @xmath0s with proton and neutron numbers of 16 .
various theoretical models such as mean - field approaches , the shell model , and cluster - structure and molecular - resonance points of view , have predicted that there must exist well - developed superdeformed ( sd ) bands at high excitation energies in @xmath0s .
this interesting prediction is made on the basis of the concept that when the @xmath0s nucleus attains a superdeformed shape , with the ratio 2:1 for the long and short axes , nucleon number 16 becomes a magic number and the advent of a stable sd band is expected at high excitation energies @xcite . also , @xmath0s is a key nucleus for understanding the relation between the sd structure in heavy nuclei and the cluster structure in light nuclei .
the sd bands in many light nuclei , such as @xmath14ar , @xmath13ca , and @xmath15ni @xcite have been discovered in the last decade .
therefore , many experiments have been performed @xcite in order to search for the sd band in @xmath0s . however , no clear evidence for the sd band in @xmath0s has so far been reported . in the present work ,
we report the results on the @xmath0s(@xmath1,@xmath16 ) experiment at @xmath17 = 386 mev .
we find candidate states that might constitute the sd and @xmath4si + @xmath1 cluster bands in @xmath0s .
the experiment was performed at the ring cyclotron facility of research center for nuclear physics ( rcnp ) , osaka university .
the details of the experimental setup and procedure are described in ref .
@xcite . here , we present the brief outline of the experiment and procedures specific to the present measurement .
inelastic scattering of 386 mev @xmath1 particles from @xmath0s has been measured at forward angles ( @xmath18 = 0@xmath9 @xmath19 10.5@xmath9 ) . in order to identify low - j@xmath3 values of complicated overlapping states , background - free measurements in inelastic @xmath1 scattering at forward angles including 0@xmath9 were greatly helpful .
we used two self - supporting natural sulfur foils with thicknesses of 14.3 mg/@xmath20 for 0@xmath9 and of 15.6 mg/@xmath21 for finite angles .
the sulfur target was prepared by the following procedure @xcite . at first ,
the natural sulfur powder ( the abundance of @xmath0s is 95.02% ) was melted at a temperature of 112.8@xmath9c .
the liquid sulfur was solidified between a pair of teflon sheets with a well - defined thickness .
the target was kept cool during the measurement with liquid nitrogen by using the target cooling system described in ref .
@xcite to prevent sublimation of the sulfur .
inelastically scattered @xmath1 particles were momentum analyzed in the high resolution spectrometer , grand raiden @xcite , and detected in the focal - plane detector system consisting of two multi - wire drift - chambers and two plastic scintillators . the scattering angle at the target and the momentum of the scattered particles were determined by the ray - tracing method .
the energy spectra have been obtained in the range of 5 @xmath22 52 mev at @xmath23 = 2.5@xmath9 @xmath19 9@xmath9 and of 6 @xmath22 50 mev at 0@xmath9 .
measurements were performed with two different energy - bite settings at each angle . in the 0@xmath9 measurements ,
the primary beam was stopped just behind the d2 magnet of grand raiden for the high excitation energy bite and downstream of the focal - plane detector system for the low excitation energy bite . at forward
angles from 2.5@xmath9 to 5@xmath9 , the beam was stopped just after the q1 magnet . at backward angles over 6.5@xmath9 ,
the beam was stopped in the scattering chamber of grand raiden .
the energy resolution was less than 200 kev through all the runs .
figure [ fig : espec ] shows typical energy spectra at @xmath23 = 0.7@xmath9 and 4.2@xmath9 . in the forward angle measurements , especially at 0@xmath9 , backgrounds due to the beam halo and multiple coulomb - scattering become very large .
however , we eliminated practically all the backgrounds using the double - focus property of the ion - optics of the grand raiden spectrometer , though the effect of the multiple coulomb - scattering was smaller in the @xmath0s(@xmath1,@xmath16 ) measurement than those in heavier nuclei such as @xmath24pb .
elastic scattering from @xmath0s was also measured at @xmath25 = 4@xmath9 - 27@xmath9 to determine the nucleon-@xmath1 interaction parameters with the same incident energy .
the mda has been carried out to extract multipole transition strengths from e0 to e3 , by taking into account the transferred angular momentum ( @xmath10 ) up to @xmath10 = 13 and minimizing the chi - square per degree of freedom .
@xmath10 @xmath26 5 strengths were assumed to be backgrounds due to other physical processes such as quasielastic scattering in the ( @xmath1,@xmath16 ) reaction .
the cross section data were binned in 1 mev energy intervals to reduce the fluctuation effects of the beam energy resolution .
the experimentally obtained angular distributions , @xmath27 , have been fitted by means of the least square method with a linear combination of the calculated distributions , @xmath28 defined by @xmath29 where @xmath30 is the energy weighted sum rule fraction for the @xmath10 component . in the dwba calculation , a single - folded potential model was employed , with a nucleon-@xmath1 interaction of the density - dependent gaussian form , as described in refs .
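as an illustration of the decomposition defined above , the following minimal sketch ( not the analysis code actually used ) fits a mock angular distribution with a non - negative linear combination of unit - ewsr multipole shapes ; the dwba shapes , data points and errors below are placeholders invented for the example .

```python
import numpy as np
from scipy.optimize import nnls

# Mock c.m. angles and multipoles; in the real analysis the shapes come from
# single-folding DWBA calculations normalized to 100% of the EWSR.
angles = np.linspace(0.5, 10.0, 20)          # degrees (illustrative)
n_L = 5                                      # keep L = 0 ... 4 as free components

# sigma_dwba[L, i]: mock "DWBA" angular distribution of multipole L at angle i.
sigma_dwba = np.array(
    [np.cos(np.radians(angles) * (6 + 3 * L)) ** 2 / (1.0 + L) for L in range(n_L)]
)

# Mock "experimental" distribution: 40% E0 + 25% E2 EWSR fractions plus noise.
rng = np.random.default_rng(0)
truth = np.array([0.40, 0.0, 0.25, 0.0, 0.0])
sigma_err = 0.05 * (truth @ sigma_dwba) + 1e-3
sigma_exp = truth @ sigma_dwba + rng.normal(0.0, sigma_err)

# Weighted non-negative least squares:
# minimize chi^2 = sum_i [(sigma_exp_i - sum_L a_L sigma_dwba_Li) / err_i]^2 with a_L >= 0.
A = (sigma_dwba / sigma_err).T               # (n_angles, n_L) design matrix
b = sigma_exp / sigma_err
a_L, residual_norm = nnls(A, b)

chi2 = residual_norm ** 2
ndf = len(angles) - np.count_nonzero(a_L)
print("fitted EWSR fractions a_L:", np.round(a_L, 3))
print("chi2/ndf = %.2f" % (chi2 / max(ndf, 1)))
```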
the nucleon-@xmath1 interaction parameters are given by : @xmath31 where the ground state density @xmath32 was obtained using the point nucleon density unfolded from the charge density distribution @xcite .
the parameters @xmath33 , @xmath34 , @xmath35 , @xmath36 in eq .
( [ eqn : ddpint ] ) were determined by fitting the differential cross sections of elastic @xmath1-scattering measured for @xmath0s at @xmath37 = 386 mev ; the fit is shown in fig .
[ fig : discrete ] , and the obtained parameters are presented in table [ tab : interaction ] . the value @xmath36 = -1.9 was adopted from ref .
the angular distribution of the 2.23 mev 2@xmath38 state was well reproduced with the known value of @xmath39 = 0.304 @xcite .
contribution from the isovector giant dipole ( ivgdr ) component , arising from the coulomb - excitation , was subtracted above the excitation energy of 10 mev by using the gamma absorption cross section @xcite . in the @xmath40 @xmath41 40 mev region ,
ivgdr strength was approximated by the tail of the breit - wigner function to smoothly connect to the @xmath40 @xmath42 40 mev region .
figure [ fig : strength ] shows strength distributions for the @xmath10 = 0 ( isoscalar giant monopole resonance , e0 ) , @xmath10 = 1 ( isoscalar giant dipole resonance , e1 ) , @xmath10 = 2 ( isoscalar giant quadrupole resonance , e2 ) , and @xmath10 = 3 ( high energy octupole resonance , e3 ) modes .
figure [ fig : grmda ] shows typical fitting results of the mda . in the region above @xmath40 = 43.5 mev , the sum of @xmath10 @xmath26 5 components constituted the dominant part of the cross section , as shown in the lower right part of fig .
[ fig : grmda ] . therefore , energy - weighted sum rule ( ewsr ) values , centroid energies , and r.m.s .
widths for e0 , e1 , and e2 have been obtained by summing up from 6 to 43 mev .
errors were estimated by changing the summing region by @xmath43 2 mev ( 6 - 41 mev and 6 - 45 mev ) . a total of 108 @xmath44% of the e0 ewsr was found .
the e0 centroid energy ( m1/m0 ) is 23.65 @xmath45 mev , and the rms width is 9.43 mev .
the isoscalar e1 ewsr fraction is 103 @xmath43 11% .
however , the isoscalar e1 strength continues up to @xmath46 mev , similar to that in @xmath4si @xcite .
the e2 strength was identified with 143 @xmath47% of the ewsr .
the e2 centroid energy is 22.42 @xmath48 mev , and the rms width is 9.14 mev . the sum of the e3 strength between 6 mev and 50 mev was found to correspond to only 33 @xmath49% ewsr .
however , the low excitation energy part between 6 and 18 mev comprises about 3% of the ewsr which is equal to that reported in @xmath4si .
it would appear that the high energy e3 ( heor ) strength between 18 and 43 mev could not be separated from higher multipole ( @xmath10 @xmath26 4 ) components .
the centroid energy of the heor is 31.4 @xmath50 mev which is also comparable to that of @xmath4si .
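for concreteness , the centroids ( m1/m0 ) and r.m.s . widths quoted above can be computed from any binned strength distribution as in the short sketch below ; the gaussian - like distribution used here is a mock stand - in , not the measured strength .

```python
import numpy as np

# 1 MeV bins from 6 to 43 MeV (bin centers), with a mock Gaussian-like strength.
e = np.arange(6.0, 43.0, 1.0) + 0.5
strength = np.exp(-0.5 * ((e - 23.6) / 9.4) ** 2)   # arbitrary units per bin

def centroid_and_width(e, s):
    """Centroid m1/m0 and r.m.s. width sqrt(m2/m0 - (m1/m0)^2) of a binned distribution."""
    m0, m1, m2 = s.sum(), (e * s).sum(), (e ** 2 * s).sum()
    c = m1 / m0
    return c, np.sqrt(m2 / m0 - c ** 2)

c, w = centroid_and_width(e, strength)
print("centroid m1/m0 = %.2f MeV, r.m.s. width = %.2f MeV" % (c, w))

# Uncertainty estimate in the spirit of the text: shift the upper summation edge by +/- 2 MeV.
for e_max in (41.0, 43.0, 45.0):
    sel = e < e_max
    c, w = centroid_and_width(e[sel], strength[sel])
    print("  summed up to %.0f MeV: centroid = %.2f MeV, width = %.2f MeV" % (e_max, c, w))
```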
although the low excitation energy region of the e4 strength could be separated from higher multipole ( @xmath10 @xmath26 5 ) components , as described later , it was not possible to clearly identify the e4 strength above @xmath40@xmath41 25 mev due to featureless angular distributions , as shown in fig .
[ fig : grmda ] . figure [ fig : strength - sd ] shows the distributions for the e0 , e1 , e2 , and e3 strengths obtained by the mda with a small bin size of 200 kev ( for @xmath10 = 0 , 1 , 2 , 3 , and 4 ) . in order to obtain excitation energies of the 0@xmath51 , 1@xmath52 , 2@xmath51 , 3@xmath52 , and 4@xmath51 levels , we fitted energy spectra with a gaussian at 0.7@xmath9 , 1.9@xmath9 , 3.3@xmath9 , 4.8@xmath9 , and 5.6@xmath9 , respectively .
the transition strengths were estimated by integrating the strength distribution corresponding to the states . it should be noted that their absolute values are strongly affected by the dwba calculation used in the mda .
the extracted excitation energies and strengths are listed in table [ tab : positive ] . in the @xmath10 = 0 strength distribution presented in fig .
[ fig : strength - sd ] , there were many candidates for the e0 strength at @xmath40 @xmath53 14 mev .
however , since the isovector e1 cross section due to the coulomb force also shows a strong peak at 0@xmath9 , similar to the e0 strength , it could not be excluded from the e0 strength at @xmath40 @xmath53 14 mev .
a possible way to look at the ivgdr contribution is to compare the ( @xmath1,@xmath16 ) strength distributions with those obtained from ( p , p@xmath54 ) at similar energies .
such data are available from ref .
@xcite . from a comparison of the 0@xmath9 spectra between the ( @xmath1 , @xmath16 ) and ( p , p@xmath54 ) reactions , we identified six 0@xmath51 states in the e0 strength distribution of ( @xmath1,@xmath16 ) as listed in tables [ tab : positive ] and [ tab : negative ] .
in light nuclei , the isoscalar giant monopole ( isgmr ) strength is fragmented over a wide excitation energy region , as reviewed in ref . @xcite . in recent works on @xmath12
mg , @xmath4si , @xmath13ca , and @xmath55ca @xcite , a large part of the e0 strength was found above @xmath40@xmath19 20 mev . the e0 strength in @xmath0s was also found to be fragmented over a wide excitation energy region , from 6 mev to 43 mev , as shown in fig . [ fig : strength](a ) .
the e0 centroid energy of 23.65 @xmath45 mev is comparable to the empirical expression , e@xmath56 @xmath19 78 a@xmath57 , of 24.6 mev .
as for the centroid energy of the isoscalar giant dipole resonance ( isgdr ) , the e1 strength continues up to @xmath40@xmath19 50 mev , as described in the previous section . the empirical expression of e@xmath58 @xmath19 133 a@xmath57 found in ref .
@xcite gives 41.9 mev .
although almost 100% of the isoscalar e1 strength was found in this measurement , the absolute value of the strength is strongly affected by the dwba calculation used in the mda ; this implies that measurements up to sufficiently high excitation energies are needed to recover the whole strength of the isgdr in light nuclei such as @xmath0s . the 0@xmath51 states at @xmath40= 10.49 mev , 11.62 mev , 11.90 mev are candidates for the bandhead state of the sd band .
the bandhead 0@xmath51 state of the sd band in @xmath0s is predicted to appear at @xmath40 = 10 @xmath19 12 mev in the hf and hfb frameworks @xcite .
it has also been shown that this sd band is essentially identical to the pauli allowed lowest @xmath59 = 24 band of the @xmath60o+@xmath60o molecular structure @xcite .
it is tempting to conjecture that these 0@xmath61 states might , indeed , be the bandhead of a sd band .
extending this conjecture , we observe 2@xmath61 and 4@xmath61 members of the sd band above these excitation energies .
figure [ fig : rotational - band ] shows the two - dimensional histogram of the excitation energies versus the j(j+1 ) values . the solid lines are drawn to guide the eye .
the slope of these lines corresponds to @xmath62 83 kev .
although this value is larger than the predicted value of 48.5 kev in ref .
@xcite , it is in good agreement with a simple calculation of @xmath63 85 kev obtained by the assumption of point masses for a rigid @xmath60o + @xmath60o molecular structure with the radius , r = 1.1 a@xmath64 fm .
it is also comparable to @xmath65 = 82 kev and 69 kev of the sd bands observed in @xmath66ar @xcite and @xmath13ca @xcite , respectively .
however , the experimental bandheads of the sd bands in @xmath66ar and @xmath13ca are at low excitation energies ( 4.33 mev and 5.21 mev , respectively ) in comparison with @xmath40 = 10 @xmath19 12 mev in @xmath0s
. this high excitation energy of the bandhead might be a reason why the sd band has not been observed in @xmath67-ray spectroscopic studies so far @xcite . in a macroscopic analysis of the @xmath60o + @xmath60o rainbow scattering ,
it was concluded that the low - spin 0@xmath51 , 2@xmath51 , 4@xmath51 , and 6@xmath51 states of the n = 24 @xmath60o + @xmath60o cluster band were fragmented @xcite and in an elastic @xmath4si + @xmath1 scattering experiment , many fragmented 0@xmath51 states were observed @xcite .
therefore , the 0@xmath51 states at @xmath40@xmath19 11 mev observed in the present work could be the candidates of fragmented 0@xmath51 states .
the lower excitation energy 0@xmath51 states , at 6.6 mev and 7.9 mev , which are near the @xmath1-decay threshold energy in @xmath0s , are discussed in relation to the bandhead of the @xmath4si + @xmath1 cluster band , in analogy with the @xmath68c + @xmath1 cluster in @xmath60o and the @xmath60o + @xmath1 cluster in @xmath69ne @xcite . since there are mirror configurations of the @xmath68c + @xmath1 and @xmath60o + @xmath1 clusters , these cluster structures lead to parity - doublet rotational bands .
the appearance of a parity - doublet rotational band in the asymmetric intrinsic @xmath1 cluster configurations is also explained by a cluster model with a deep potential @xcite .
the dashed and dotted lines in fig . [ fig : rotational - band ] are drawn to point out members of the parity - doublet @xmath4si + @xmath1 cluster band in @xmath0s .
the rotational constants @xmath65 corresponding to the dashed and dotted lines are 234 kev and 125 kev , respectively .
the gap energy between the positive and the negative bands for the dashed line is almost zero .
it indicates that the @xmath4si + @xmath1 cluster structure in this band behaves as a rigid body .
the value of 234 kev is in good agreement with a simple calculation of 245 kev obtained with the assumption of point masses for a rigid @xmath4si + @xmath1 cluster with a radius r = 1.1 a@xmath64 fm for @xmath4si , and 1.6 fm for the @xmath1-particle .
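as a cross - check of the numbers quoted above , the short sketch below evaluates the rigid point - mass rotational constant hbar^2/(2i ) for the @xmath60o + @xmath60o and @xmath4si + @xmath1 configurations , using the radii given in the text ; it is only an illustrative back - of - the - envelope calculation , not a substitute for a realistic structure model .

```python
HBARC = 197.327  # MeV fm
AMU = 931.494    # MeV / c^2

def rotational_constant_keV(a1, a2, d_fm):
    """hbar^2 / (2 mu d^2) in keV for two point masses a1, a2 (in u) at separation d_fm."""
    mu = a1 * a2 / (a1 + a2) * AMU            # reduced mass in MeV/c^2
    return 1.0e3 * HBARC ** 2 / (2.0 * mu * d_fm ** 2)

# 16O + 16O: separation taken as the sum of the two 16O radii, r = 1.1 * A^(1/3) fm.
d_oo = 2.0 * 1.1 * 16.0 ** (1.0 / 3.0)
print("16O + 16O   : %.0f keV" % rotational_constant_keV(16.0, 16.0, d_oo))     # ~85 keV

# 28Si + alpha: r(28Si) = 1.1 * 28^(1/3) fm and r(alpha) = 1.6 fm, as in the text.
d_si_a = 1.1 * 28.0 ** (1.0 / 3.0) + 1.6
print("28Si + alpha: %.0f keV" % rotational_constant_keV(28.0, 4.0, d_si_a))    # ~245 keV
```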
however , these simple calculations of the rotational constant are only first attempts to explain the experimentally observed rotational constants .
more realistic theoretical calculations are highly desired for a more detailed comparison with the experimental results .
we have investigated the isoscalar giant resonance strengths in the doubly - closed shell nucleus @xmath0s , with a view to search for the possible superdeformed bandhead predicted in theoretical calculations . a novel technique was used to prepare an enriched @xmath0s target , and the @xmath0s(@xmath1,@xmath16 ) measurements were made at extremely forward angles , including 0@xmath9 at e@xmath2 = 386 mev .
the extracted e0 , e1 , e2 , and e3 strength distributions from mda are similar to those in nearby light nuclei . from the mda with a 200 kev energy bin , three 0@xmath51 states at 10.49 mev , 11.62 mev , and 11.90 mev are extracted .
these three 0@xmath51 states would be candidates for the bandhead of the sd band in @xmath0s .
in addition , the parity - doublet @xmath4si + @xmath1 cluster bands have been identified .
the rotational constants obtained from the level spacings of the possible rotational states are in good agreement with simple calculations assuming point masses for the @xmath60o + @xmath60o and @xmath4si + @xmath1 cluster structures .
we would like to thank h. matsubara and a. tamii for providing us with the @xmath0s(p , p@xmath54 ) spectrum at 0@xmath9 .
we would also like to thank k. matsuyanagi , e. ideguchi , and odahara for fruitful discussions .
we wish to thank rcnp staff for providing the high - quality @xmath1 beams required for these measurements .
this work was supported in part by jsps kakenhi grant number 24740139 , 24540306 , and the u.s . national science foundation ( grant nos .
int03 - 42942 , phy04 - 57120 , phy07 - 58100 , and phy-1068192 ) and by the us - japan cooperative science program of jsps .
( figure captions : )
( i ) excitation - energy spectra of @xmath0s(@xmath1,@xmath16 ) at averaged laboratory angles of @xmath23 = 0.7@xmath9 and @xmath23 = 4.7@xmath9 ; the black line shows the energy spectrum obtained from the low excitation measurement , and the red line shows that obtained from the high excitation measurement .
( ii ) ( a ) elastic scattering of @xmath1 particles from @xmath0s and ( b ) angular distribution of differential cross sections for the 2.23 mev 2@xmath51 state ; in both cases , the solid lines show the results of the dwba calculations using the single - folding model ( see text ) .
( iii ) typical mda fit for inelastic scattering : the line through the data shows the sum of various multipole components obtained by the mda ; each multipole contribution is represented by a colored line with its transferred angular momentum l indicated , and the blue line shows the contribution of the ivgdr estimated from the gamma absorption cross section ( see text ) .
( iv ) spectra of the @xmath0s(@xmath1,@xmath16 ) reaction at ( a ) @xmath18 = 0.7@xmath9 , ( b ) 2.0@xmath9 , ( c ) 3.4@xmath9 , ( d ) 4.8@xmath9 , and ( e ) 5.6@xmath9 , scaled to fit in the figures ; some differences of peak positions between the excitation energy spectrum at @xmath18 = 0.7@xmath9 and the e0 strength distribution arise primarily from an artifact of histogramming , and the differences , if any , are within the uncertainty of 0.05 mev in peak positions . | isoscalar giant resonances and low spin states in @xmath0s have been measured with inelastic @xmath1 scattering at extremely forward angles including zero degrees at e@xmath2 = 386 mev . by applying the multipole decomposition analysis , various excited states
are classified according to their spin and parities ( j@xmath3 ) , and are discussed in relation to the super deformed and @xmath4si + @xmath1 cluster bands . |
Technology reviews by website CNET have long been respected for their thoroughness and integrity, but that reputation has come under scrutiny after a top reporter quit over what he says is editorial interference by its parent company, CBS Corp.
The dispute centers on CNET's choice of best gadgets from last week's International CES show in Las Vegas.
CNET voted Dish Network Corp.'s "Hopper with Sling" the best home theater and audio product. Because CBS is in a legal fight with Dish over the Hopper's ad-skipping capabilities, CBS vetoed the selection, saying the product couldn't be considered "Best of CES." Instead, CNET's official selection was a sound bar from TV maker Vizio.
Reporter Greg Sandoval tweeted on Monday morning that he was resigning, saying he had lost confidence that CBS is committed to editorial independence.
"I just want to be known as an honest reporter," he tweeted, adding "CNET wasn't honest about what occurred regarding Dish."
In an apparent response to the resignation, CNET Reviews Editor-in-Chief Lindsey Turrentine posted a story on the site a few hours after Sandoval's tweet saying that around 40 CNET editorial members voted, and Dish's Hopper won the designation because of "innovative features that push shows recorded on DVR to iPads."
She said "the conflict of interest was real" and said she contemplated quitting as well, but stayed on to explain the situation to staff and prevent a recurrence. She said CNET staff was asked to re-vote after the Hopper was excluded, and regretted not revealing at first that it had won.
"I wish I could have overridden the decision not to reveal that Dish had won the vote," she wrote. "For that I apologize to my staff and to CNET readers."
A spokesman for CBS, which also owns such marquee journalism properties as CBS News and 60 Minutes, declined to comment on how a similar situation might be handled if it occurred at its other news properties.
"In terms of covering actual news, CNET maintains 100 percent editorial independence, and always will," CBS said in a prepared statement.
CBS bought CNET for $1.8 billion in June 2008. In December, the site had 33.4 million visitors, up 8 percent from a year earlier. ||||| Are news and reviews subject to different ethical standards? That appears to be the message from CBS in response to Dish's controversial Hopper DVR. Official CBS policy now bans CNET from reviewing products implicated in lawsuits, but claims CNET still has complete editorial independence over "actual news."
On Monday, CBS issued a statement to the New York Times calling the ban on the Hopper "an isolated and unique incident in which a product that has been challenged as illegal." A spokesperson noted that not only CBS but other media companies had brought suit against Dish. "CBS has nothing but the highest regard for the editors and writers at CNET… and, in terms of covering actual news, CNET maintains 100% editorial independence, and always will." (Spokespeople for CBS, CBS Interactive, and CNET did not return requests to respond directly to The Verge for comment on this story.)
Meanwhile, CNET Reviews editor Lindsey Turrentine expressed regret that the publication did not clearly state that the Hopper with Sling had won the editors' vote for Best In Show. CBS Corporate insisted on using language that obfuscated that fact, after Turrentine and CNET editorial staff had already lost its fight to stand by the original vote. "I wish I could have overridden the decision not to reveal that Dish had won the vote in the trailer," writes Turrentine. "For that I apologize to my staff and to CNET readers."
"The least of [our disappointment] is the loss of the award."
"I'm looking for a word more descriptive than disappointment," said Bob Toevs, head of corporate communications at Dish, in a phone interview. "The least of it is the loss of the award. It's really everything to do with editorial independence and integrity, which we've valued from CNET in the past and cheer for its restoration in the future. It's terribly unfortunate they've been put in this position [by CBS], and it's completely avoidable."
Typically, Toevs says, journalistic outlets have handled conflicts regarding ongoing lawsuits with an asterisk and a simple disclosure. "It says, the conflict is out there, it's no secret. That's absolutely appropriate and fine. If you look at publications by News Corp. [also currently involved in a lawsuit with Dish Networks concerning the Hopper], that's how they handle it. And that kind of open disclosure enhances their credibility as a result."
CBS intervention is "terribly unfortunate" and "completely avoidable," says Dish
Last Wednesday, editors at CNET told representatives at Dish that Dish's Hopper with Sling would be a finalist for CNET's "Best of CES" award. On Thursday morning, about 25 minutes prior to its announcement of the winner, CNET told Dish that the Hopper with Sling had been withdrawn from consideration due to Dish's lawsuit with CNET's parent company CBS over the Hopper DVRs' commercial-skipping feature. CNET did not indicate to Dish, or to anyone else outside the company, that its editors had in fact already voted to name the Hopper with Sling Best in Show, nor that editors had been made to revote on a directive from CBS CEO Les Moonves. That conversation Thursday morning was the last communication between representatives at Dish and those at CNET, CBS Interactive, or CBS regarding ongoing reviews coverage of the Hopper or any current Dish products.
Are reviews journalism?
The distinction between news and reviews may be the thorniest part of CBS's response, apart from its interference with CNET's editorial team in the first place. CBS may say that the case of the Hopper is "isolated and unique," but has also said that CNET "will no longer be reviewing products manufactured by companies with which we are in litigation with respect to such product." So CNET's editorial staff is permitted to cover Dish's Hopper DVRs, Aereo's online TV service, or CBS' lawsuits with Dish and Aereo themselves as "news," but not allowed to evaluate those products and services for its readers as "reviews."
It's difficult to see how a distinction between full editorial independence for news and limited editorial independence for reviews can be maintained without eroding reader trust in the reviews. "It seems impractical," says Dish's Toevs. "I don't know how [CNET's editorial team] would like to live with that policy. And it raises the question of how much editorial oversight CBS is giving CNET on any given story."
Transparency is the beginning of trust, not trust itself
It's equally difficult to see how disclosures to readers alone can solve these ethical and professional dilemmas for news organizations. CNET admits erring in concealing the results of its CES awards vote, but disclosure alone is not sufficient to clear CBS of the charge of editorial interference. By CNET's own account, CBS' corporate division took the decision to give an award out of CNET's hands and dictated editorial content on CNET's site, in the form of the editor's note removing the Hopper from the list of award finalists. Since that note is now the official policy of CBS and CNET, charges of editorial interference will not end here, even when CBS and other networks' lawsuits with Dish are resolved.
"We're dealing with a rapidly evolving landscape regarding how media and technology interact with each other," says Toevs. "Today this is about Dish and the merits (or not) of our product. Where will it be tomorrow?" | – A CNET reporter has left the tech review site after owner CBS stepped in to alter a story. When the CNET team picked a Dish Network product as the best home theater and audio item at the Consumer Electronics Show, CBS rejected the choice. That's because CBS is in the midst of a legal battle with Dish, the AP reports. So CNET picked another product to top its list. The switch prompted Greg Sandoval's exit. "I just want to be known as an honest reporter," Sandoval tweeted. "CNET wasn't honest about what occurred regarding Dish." Hours later, CNET Reviews Editor-in-Chief Lindsey Turrentine posted at the site that "the conflict of interest was real" and apologized to staff and readers for not announcing the true winner. For its part, CBS called the spat "an isolated and unique incident" regarding "a product that has been challenged as illegal," the Verge reports. When it comes to "actual news, CNET maintains 100% editorial independence, and always will." |
The body of Sydney Loofe, 24, was found, according to her family. She’d been missing for more than two weeks after going on a date with a woman she met online. ||||| “Ready for my date,” were the last four words anyone heard or read from Sydney Loofe on November 15th. Three weeks later, her body was found.
Sydney Loofe was last seen on the evening of November 16. (Photo: FBI via Twitter)
Authorities have recovered the body of a Nebraska woman whose disappearance after a Tinder date last month triggered a massive search and bizarre social media posts from two persons of interest in the tragic mystery.
Lincoln Police Chief Jeff Bliemeister said Tuesday that "analysis of digital evidence" led authorities to a body in rural Clay County they believe is that of Sydney Loofe, 24, who vanished three weeks ago.
"We do believe that there is evidence of foul play," Bliemeister said.
Bliemeister expressed a "strong belief" that the body is that of Loofe, who was reported missing Nov. 16 after failing to show up at her job at a Lincoln home improvement store. He said formal confirmation would be made in the coming days.
Bliemeister provided no further details on the cause of death or circumstances surrounding the discovery. Investigators had been using Loofe's cellphone signal to retrace her movements in the hours before she disappeared.
Loofe’s parents, George and Susie Loofe, acknowledged their daughter's death on their "Finding Sydney Loofe" Facebook page.
"It's with heavy hearts that we share this most recent update with you all," the couple said. "Please continue to pray for Sydney and our entire family. May God grant eternal rest unto thee. We love you Sydney."
Bliemeister said the persons of interest, Aubrey Trail, 51, and Bailey Boswell, 23, remained in custody but had not been charged in the case. Both apparently left the state in the days after Loofe disappeared and were arrested Thursday near Branson, Mo., on unrelated charges.
Social media posts indicate Loofe went on a date Nov. 15 with Boswell, who has confirmed on social media that she met Loofe via the dating app Tinder.
Boswell and roommate, Aubrey Trail, 51, live in the eastern Nebraska town of Wilber, about 40 miles south of Lincoln and the last place Loofe was seen alive. Trail and Boswell posted videos on social media last week proclaiming their innocence and claiming their efforts to speak with Lincoln police had been largely rebuffed.
Boswell, wearing a hoodie and sunglasses in a video, said she dropped Loofe off at a friend’s house after their date and never heard from her again. Bliemeister said authorities thus far have been unable to confirm Boswell's timeline.
Trail said on his video that he "wasn't running from anything." He said he was praying for Sydney and wished the best for her family.
"We're continuing to speak with Aubrey Trail and we'll continue to do so as long as he's willing," Bliemeister said Tuesday.
Bliemeister said police believe that there is no continuing threat to the public. But he provided no motive for the murder and stressed that no one had been charged.
"By their own statements on social media, we believe that Aubrey Trail and Bailey Boswell were two of the last people to see her before her disappearance," Bliemeister said. "Thus they remain persons of interest."
| – Sydney Loofe sent friends a selfie on Snapchat on Nov. 15 with the caption, "Ready for my date." It was the last time friends would hear from Loofe, who went on a date with a woman she'd met on Tinder but didn't turn up for work the next day, police say, per the Washington Post. On Tuesday, their three-week search for the 24-year-old Nebraska woman came to a close with the discovery of a body in Clay County, about 90 miles from Loofe's Lincoln Home, per the Kansas City Star. Foul play is suspected, Lincoln Police Chief Jeff Bliemeister tells USA Today. Bailey Boswell, 23, who posted a video online in which she said she dropped Loofe off at a friend's house after their date, and her male roommate, 51-year-old Aubrey Trail, are considered persons of interest in the case, police add. Both were jailed last week on unrelated charges. Police haven't commented on how Loofe is believed to have died, or on a possible motive for her killing, but they say "digital evidence" led them to the body. They also say data from Loofe's cellphone indicated she had been in the area of Boswell and Trail's Wilber home, 40 miles from her own. But though Boswell says Loofe smoked marijuana there on Nov. 15, she denied any wrongdoing in a nine-minute video posted online on Nov. 29, showing Boswell and Trail together in a car, per the Omaha World-Herald. A day later, Boswell and Trail were arrested near Branson, Mo. "By their own statements on social media, we believe that Aubrey Trail and Bailey Boswell were two of the last people to see [Loofe] before her disappearance," Bliemeister says. "Thus they remain persons of interest." (Police say a teen murder suspect in Colorado had a kill list.)
yellow , blue and dark blue : this is the simple color palette used for painting and penning each of the two - sided nobel diplomas awarded to takaaki kajita @xcite and arthur b. mcdonald @xcite . on the left side , one can gaze at an artist s view sketched with a few broad strokes of the neutrino transformative trip from the bright yellow sun , through the earth s blue darkness , into a blue pool of water @xcite . on the right side , one can read the beautifully and precisely penned nobel laureate names and prize motivations , in ink colors that continuously change from deep blue to blue with yellow shades @xcite . in a sense , the two sides of the diplomas evoke the interplay between a broad - brush picture of @xmath2 masses and mixings ( the pioneering era ) and carefully designed measurements and theoretical descriptions ( the precision era ) , in a continuous feedback between breakthrough and control , that may open the field to further fundamental discoveries @xcite . in this paper
, we aim at presenting both the broad - brush features and the fine structure of the current picture of neutrino oscillation phenomena , involving the mixing of the three neutrino states having definite flavor @xmath19 with three states @xmath20 having definite masses @xmath21 @xcite .
information on known and unknown neutrino mass - mixing parameters is derived by a global analysis of neutrino oscillation data , which extends and updates our previous work @xcite with recent experimental inputs , as discussed in sec . 2 ( see also @xcite for previous global analyses by other groups ) . in sec . 3 , precise constraints ( at the few % level )
are obtained on four well - known oscillation parameters , namely , the squared - mass differences @xmath22 and @xmath23 , and the mixing angles @xmath24 and @xmath25 .
less precise constraints , including an octant ambiguity , are reported for the angle @xmath5 . in this picture
, we also discuss the current unknowns related to the neutrino mass hierarchy [ sign@xmath26 and to the possible leptonic cp - violating phase @xmath7 .
the trend favoring negative values of @xmath11 appears to be confirmed , with best - fit values around @xmath27@xmath28 ( i.e. , @xmath29 ) .
more fragile indications , which depend on alternative analyses of specific data sets , concern the exclusion of some @xmath7 ranges at @xmath13 , and a slight preference for normal hierarchy at 90% c.l .
the covariances of selected parameter pairs , and the implications for non - oscillation searches , are presented in sec . 4 and 5 , respectively .
our conclusions are summarized in sec .
in this section we discuss methodological issues and input updates for the global analysis .
readers interested only in the fit results may jump to sec . 3 . in general
, no single oscillation experiment can currently probe , with high sensitivity , the full parameter space spanned by the mass - mixing variables @xmath30 .
one can then group different data sets , according to their specific sensitivities or complementarities with respect to some oscillation parameters .
we follow the methodology of refs . @xcite as summarized below .
we first combine the data coming from solar and kamland reactor experiments ( `` solar+kl '' ) with those coming from long - baseline accelerator searches in both appearance and disappearance modes ( `` lbl acc '' ) .
the former data set constrains the @xmath31 parameters ( and , to some extent , also @xmath25 @xcite ) , which are a crucial input for the @xmath0 probabilities relevant to the latter data set .
the combination `` lbl acc+solar+kl data '' provides both upper and lower bounds on the @xmath32 parameters but , by itself , is not particularly sensitive to @xmath7 or to sign(@xmath33 ) ( @xmath34 for normal hierarchy , nh , and @xmath35 for inverted hierarchy , ih ) . the lbl acc+solar+kl data are then combined with short - baseline reactor data ( `` sbl reactors '' ) , that provide strong constraints on the @xmath25 mixing angle via disappearance event rates , as well as on useful bounds on @xmath36 via spectral data ( when available ) .
the synergy between lbl acc+solar+kl data and sbl reactor data significantly increases the sensitivity to @xmath7 @xcite . finally , we add atmospheric neutrino data ( `` atmos '' ) , which probe both flavor appearance and disappearance channels for @xmath2 and @xmath1 , both in vacuum and in matter , with a very rich phenomenology spanning several decades in energy and path lengths .
this data set is dominantly sensitive to the mass - mixing pair ( @xmath37 ) and , subdominantly , to all the other oscillation parameters . despite their complexity
, atmospheric data may thus add useful pieces of information on subleading effects ( and especially on the three unknown parameters ) , which may either support or dilute the indications coming from the previous data sets . in all cases ,
the fit results are obtained by minimizing a @xmath38 function , that depends on the arguments @xmath30 and on a number of systematic nuisance parameters via the pull method @xcite .
allowed parameter ranges at @xmath39 standard deviations are defined via @xmath40 @xcite .
the same definition is maintained in covariance plots involving parameter pairs , so that the previous @xmath39 ranges are recovered by projecting the allowed regions onto each axis .
undisplayed parameters are marginalized away .
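a minimal numerical sketch of this prescription is given below : a mock two - parameter chi - square surface ( not the actual global - fit function , which includes all data sets and pulls ) is minimized over the second parameter , and the allowed ranges of the first parameter are read off from the condition delta chi - square <= n^2 .

```python
import numpy as np

# Mock chi^2 on a grid of (parameter of interest x, second parameter y); the real
# function comes from the global fit and depends on many nuisance parameters.
x = np.linspace(0.25, 0.40, 301)
y = np.linspace(6.5, 8.5, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
chi2 = ((X - 0.306) / 0.012) ** 2 + ((Y - 7.37) / 0.16) ** 2 \
       + 0.5 * (X - 0.306) * (Y - 7.37) / (0.012 * 0.16)

# "Marginalize away" the undisplayed parameter by minimizing over it, then apply
# the Delta(chi^2) = N^2 prescription to the one-dimensional profile.
chi2_profile = chi2.min(axis=1)
dchi2 = chi2_profile - chi2_profile.min()
for n in (1, 2, 3):
    allowed = x[dchi2 <= n ** 2]
    print("%d sigma range: %.3f - %.3f" % (n, allowed.min(), allowed.max()))
```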
a final remark is in order .
the definition @xmath41 is based on wilks theorem @xcite , which is not strictly applicable to discrete choices ( such as nh vs ih , see @xcite and references therein ) or to cyclic variables ( such as @xmath7 , see @xcite ) .
concerning hierarchy tests , it has been argued that the above @xmath39 prescription can still be used to assess the statistical difference between nh and ih with good approximation @xcite . concerning cp violation tests , the prescription appears to lead ( in general ) to more conservative bounds on @xmath7 , as compared with the results obtained from numerical experiments @xcite . in principle
, one can construct the correct @xmath38 distribution by generating extensive replicas of all the relevant data sets via monte carlo simulations , randomly spanning the space of the neutrino oscillation and systematic nuisance parameters .
however , such a construction would be extremely time - consuming and is beyond the scope of this paper . for the sake of simplicity , we shall adopt the conventional @xmath39 definition , supplemented by cautionary comments when needed .
with respect to @xcite , the solar neutrino analysis is unchanged .
concerning kamland ( kl ) reactor neutrinos , we continue to use the 2011 data release @xcite as in @xcite .
we remark that the latest published kl data @xcite are divided into three subsets , with correlated systematics that are difficult to implement outside the collaboration . in this work , we reanalyze the 2011 kl data for the following reason . the kl analysis requires the ( unoscillated ) absolute reactor @xmath42 spectra as input . in this context , a new twist has been recently provided by the observation of a @xmath43 event excess in the range @xmath447 mev ( the so - called `` bump '' or `` shoulder '' ) @xcite , with respect to the expectations from reference huber - mller ( hm ) spectra @xcite , in each of the current high - statistics sbl reactor experiments reno @xcite , double chooz @xcite and daya bay @xcite .
this new spectral feature is presumably due to nuclear physics effects ( see the recent review in @xcite ) , whose origin is still subject to investigations and debate @xcite . in principle , one would like to know in detail the separate spectral modifications for each reactor fuel component @xcite . however , the only information available at present is the overall energy - dependent ratio @xmath45 between data and hm predictions , which we extract ( and smooth out ) from the latest daya bay results ( see fig . 3 in @xcite ) .
we use the @xmath45 ratio as an effective fudge factor multiplying the unoscillated hm spectra for kamland , which are thus anchored to the absolute daya bay spectrum @xcite . in our opinion , this overall correction can capture the main bump effects in the kl spectral analysis .
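schematically , the correction amounts to an energy - dependent multiplication of the reference spectrum , as in the toy sketch below ; both the spectrum and the smoothed ratio used here are rough placeholders , not the published hm prediction or the daya bay ratio .

```python
import numpy as np

e = np.linspace(2.0, 8.0, 121)                    # antineutrino energy grid in MeV (illustrative)
hm_spectrum = np.exp(-(e - 2.0) / 1.5)            # placeholder for the unoscillated HM prediction

# Placeholder for the smoothed data/HM ratio: ~1 everywhere with a ~10% excess near 5-7 MeV.
bump_ratio = 1.0 + 0.10 * np.exp(-0.5 * ((e - 6.0) / 0.7) ** 2)

# The corrected spectrum is what would then be passed, after oscillations and detector
# response, to the KamLAND spectral fit.
corrected_spectrum = bump_ratio * hm_spectrum
```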
more refined kl data fits will be possible when the bump feature(s ) are better understood and broken down into separate spectral components . concerning the kl dominant oscillation parameters @xmath31 , we find that the inclusion of the bump fudge factor induces a slight negative shift of their best - fit values , which persists in combination with solar data ( see sec . 3 ) .
finally , we recall that the @xmath0 analysis of solar+kl data is performed in terms of three free parameters @xmath46 , providing a weak but interesting indication for nonzero @xmath25 @xcite . tiny differences between transition probabilities in nh and ih @xcite are negligible within the present accuracy .
the hierarchy - independent function @xmath47 , derived from the solar+kl data fit , is then used in combination with the following lbl accelerator data . with respect to @xcite
, we include the most recent results from the tokai - to - kamioka ( t2k ) experiment in japan and from the no@xmath2a experiment at fermilab , in both appearance and disappearance modes .
in particular , we include the latest t2k neutrino data @xcite and the first t2k antineutrino data @xcite , as well as the first no@xmath2a neutrino data as of january 2016 @xcite . the statistical analysis of lbl experiments has been performed using a modified version of the software globes @xcite for the calculation of the expected number of events . and
@xmath5 via the position and amplitude of the oscillation dip , respectively . in appearance mode ,
characterized by much lower statistics , we have fitted the total number of events .
we have checked that , even for t2k @xmath2 appearance data , total - rate or spectral analyses of events produce very similar results in the global fit . ] for each lbl data set , the @xmath38 function takes into account poisson statistics @xcite and the main systematic error sources , typically related to energy - scale errors and to normalization uncertainties of signals and backgrounds , as taken from @xcite . concerning no@xmath2a @xmath16 appearance data ,
the collaboration used two different event selection methods for increasing the purity of the event sample : a primary method based on a likelihood identification ( lid ) selector , and a secondary one based on a library event matching ( lem ) selector , leading to somewhat different results for the @xmath16 signal and background @xcite .
we shall consider the lid data as a default choice for no@xmath2a , but we shall also comment on the impact of the alternative lem data .
we have reproduced with good approximation the allowed parameter regions shown by t2k @xcite and by no@xmath2a @xcite ( in both lid and lem cases @xcite ) , under the same hypotheses or restrictions adopted therein for the undisplayed parameters .
we remark that , in our global analysis ( see sec . 4 ) ,
all the oscillation parameters are left unconstrained .
note that we define the parameter @xmath36 , driving the dominant lbl oscillations , as @xmath48 in both nh ( @xmath49 ) and ih ( @xmath50 ) @xcite . a comparative discussion of this and alternative conventions in terms of @xmath51 , @xmath52 , @xmath53 , @xmath54 and @xmath55 is reported in @xcite and references therein .
although any such convention is immaterial ( as far as the full @xmath0 oscillation probabilities are used ) , the adopted one must be explicitly declared , since the various definitions differ by terms of @xmath56 , comparable to the current @xmath57 uncertainty of @xmath36 . with respect to @xcite
, we include herein the spectral data on the far - to - near detector ratio as a function of energy , as recently reported by the experiments daya bay ( fig . 3 of @xcite ) and reno ( fig . 3 of @xcite ) .
besides the statistical errors , we include a simplified set of pulls for energy - scale and flux - shape systematics , since the bin - to - bin correlations are not publicly reported in @xcite .
we neglect systematics related to the spectral bump feature , which affect absolute spectra ( see sec .
2.1 ) but largely cancel in the analysis of far / near ratios ( see @xcite ) .
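the cancellation of spectrum - shape systematics in far / near ratios can be checked with a toy calculation ( all numbers below are invented for illustration ) : a common multiplicative distortion of the emitted spectrum , such as a bump - like feature , multiplies the near and far predictions alike and drops out of their ratio exactly in this toy , while the oscillation - driven deficit survives .

    import numpy as np

    e = np.linspace(2.0, 8.0, 7)                 # toy prompt-energy bins [MeV]
    flux = np.exp(-0.5 * (e - 4.0) ** 2)         # toy emitted spectrum
    bump = 1.0 + 0.1 * np.exp(-0.5 * ((e - 5.0) / 0.5) ** 2)   # toy bump distortion

    def survival(e_mev, baseline_m, sin2_2t13=0.085, dm2=2.5e-3):
        # standard two-flavor survival probability, L in m and E in MeV
        return 1.0 - sin2_2t13 * np.sin(1.267 * dm2 * baseline_m / e_mev) ** 2

    near = flux * bump * survival(e, 500.0)      # toy near-detector baseline
    far = flux * bump * survival(e, 1600.0)      # toy far-detector baseline
    ratio = far / near                           # the bump factor cancels here
    print(np.round(ratio, 4))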
we reproduce with good accuracy the joint allowed regions reported in @xcite and @xcite for the mixing amplitude @xmath58 and their effective squared mass parameters @xmath55 , for both nh and ih .
[ for the precise @xmath55 definitions and conventions , see @xcite . in any case , in our global fits we always use the @xmath36 parameter defined in eq . ( [ dm2 ] ) . ]
we then combine the daya bay and reno analyses , in terms of our default parameters @xmath59 and @xmath36 .
the combined fit results are dominated , for both mass - mixing parameters , by the high - statistics daya bay data .
while the reactor bounds on @xmath25 are extremely strong , the current bounds on @xmath36 are not yet competitive with those coming from lbl accelerator data in disappearance mode , although they help in reducing slightly its uncertainty ( see sec . 4 ) . with respect to @xcite
, we update our analysis of super - kamiokande ( sk ) atmospheric neutrino data by including the latest ( phase i - iv ) data as taken from @xcite .
we also include for the first time the recent atmospheric data released by the icecube deepcore ( dc ) collaboration @xcite .
we reproduce with good accuracy the joint bounds on the @xmath60 and @xmath61 parameters shown by dc in @xcite , under the same assumptions used therein . in this work ,
the @xmath38 functions for sk and dc have been simply added . in the future , it may be useful to isolate and properly combine possible systematics which may be common to sk and dc ( related , e.g. , to flux and cross section normalizations ) .
in this section we discuss the constraints on known and unknown oscillation parameters , coming from the global @xmath0 analysis of all the data discussed above .
the impact of different data sets will be discussed in the next section .
[ figure 1 caption : bounds on the single oscillation parameters , in terms of standard deviations @xmath39 from the best fit , for either nh ( solid lines ) or ih ( dashed lines ) . bounds on @xmath62 are hierarchy - independent . horizontal dotted lines mark the 1 , 2 , and @xmath63 levels for each parameter . ]
figure 1 shows the bounds on single oscillation parameters , in terms of standard deviations @xmath39 from the best fit .
linear and symmetric curves would correspond to gaussian uncertainties , a situation realized with excellent approximation for the @xmath64 mass - mixing pair and , to a lesser extent , for the @xmath62 pair .
the best fit of the @xmath60 parameter flips from the first to the second octant by changing the hierarchy from normal to inverted , but this indication is not statistically significant , since maximal mixing @xmath65 is allowed at @xmath66 ( @xmath67 c.l . ) for nh and at @xmath68 for ih . in any case , all these parameters have both upper and lower bounds well above the @xmath63 level .
if we define the average @xmath69 error as @xmath70 of the @xmath71 range , our global fit implies the following fractional uncertainties : @xmath8 ( 2.4% ) , @xmath9 ( 5.8% ) , @xmath36 ( 1.8% ) , @xmath59 ( 4.7% ) , and @xmath60 ( 9% ) .
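for concreteness , the fractional accuracies quoted above follow directly from the definition of the average @xmath69 error as @xmath70 of the @xmath71 range ; the one - line estimate below uses placeholder numbers rather than the actual fit output .

    # average 1-sigma error taken as 1/6 of the +-3 sigma range (placeholder values)
    best_fit, lo_3sig, hi_3sig = 2.50e-3, 2.36e-3, 2.64e-3
    sigma_avg = (hi_3sig - lo_3sig) / 6.0
    print("fractional uncertainty: {:.1%}".format(sigma_avg / best_fit))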
the parameter @xmath7 is associated to a dirac phase in the neutrino mixing matrix , which might induce leptonic cp violation effects for @xmath72 @xcite .
recent fits to global @xmath2 data @xcite and partial ( lbl accelerator ) data @xcite have consistently shown a preference for negative values of @xmath11 , as a result of the combination of lbl accelerator @xmath2 and @xmath1 data and of sbl reactor data .
the reason is that the lbl appearance probability contains a cp - violating part proportional to @xmath73 ( @xmath74 ) for neutrinos ( antineutrinos ) @xcite .
with respect to the cp - conserving case @xmath75 , values of @xmath76 are then expected to produce a slight increase ( decrease ) of events in @xmath77 ( @xmath78 ) oscillations for @xmath25 fixed ( by reactors ) , consistently with the appearance results of t2k ( using both @xmath2 @xcite and @xmath1 @xcite ) and of no@xmath2a ( using @xmath2 @xcite ) , although within large statistical uncertainties . [ an opposite trend is suggested by the data in @xcite , but with relatively low statistical significance , so that the overall preference for @xmath76 from t2k and no@xmath2a is not spoiled . ]
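the sign argument can be made explicit with a toy event - rate model ( the normalization and the size of the interference term below are arbitrary ) : the cp - odd term enters with opposite sign for neutrinos and antineutrinos , so @xmath76 raises the @xmath77 rate and lowers the @xmath78 rate .

    import numpy as np

    def appearance_rate(delta_cp, neutrino=True, base=50.0, cp_amp=10.0):
        # toy model: rate = base - cp_amp*sin(delta) for neutrinos,
        #            rate = base + cp_amp*sin(delta) for antineutrinos
        sign = -1.0 if neutrino else +1.0
        return base + sign * cp_amp * np.sin(delta_cp)

    for delta in (0.0, -np.pi / 2, +np.pi / 2):
        print(round(delta, 3),
              appearance_rate(delta, neutrino=True),
              appearance_rate(delta, neutrino=False))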
this trend for @xmath7 is clearly confirmed by the results in fig . 1 , which show a best fit for @xmath12 ( @xmath79@xmath80 ) in both nh and ih , while opposite values around @xmath81 are disfavored at almost @xmath63 level .
although all values of @xmath7 are still allowed at @xmath63 , the emerging indications in favor of @xmath82 are intriguing and deserve further studies in t2k and no@xmath2a , as well as in future lbl accelerator facilities .
we remark that our bounds on @xmath7 are conservative , and that dedicated constructions of the @xmath38 distributions via extensive numerical simulations might lead to stronger indications on @xmath7 , as discussed in sec . 2 .
table 1 shows the same results of fig . 1 in numerical form
, with three significant digits for each parameter . in the last row of the table
we add a piece of information not contained in fig . 1 , namely , the @xmath83 difference between normal and inverted hierarchy .
the nh is slightly favored over the ih at the ( statistically insignificant ) level of @xmath68 in the global fit .
we remark that both fig . 1 and table 1 use the no@xmath2a lid data set in appearance mode ( see sec .
2.2 ) . by adopting the alternative
no@xmath2a lem data set , we find no variation for the @xmath8 and @xmath9 parameters ( dominated by solar+kl data ) and for the @xmath36 parameter ( dominated by lbl data in disappearance mode , in combination with atmospheric and reactor spectral data ) .
we find slight variations for @xmath59 and @xmath60 , and a small but interesting increase of the bounds on @xmath7 above the @xmath63 level .
figure 2 shows the corresponding results for the @xmath60 and @xmath7 parameters , to be compared with the rightmost panels of fig . 1 .
[ figure 2 caption : as in fig . 1 , but with no@xmath2a lem data replacing lid data . see the text for details . ]
in table 2 we report the results of the global fit using no@xmath2a lem data , but only for those parameter bounds which differ from table 1 .
some intervals surrounding @xmath84 can be excluded at @xmath13 .
we also find an increased sensitivity to the hierarchy , with the nh slightly favored ( at @xmath85 c.l . ) over the ih .
these indications , although still statistically limited , deserve some attention , for reasons that will be discussed in more detail at the end of the next section .
in this section we show and interpret the joint @xmath39 contours ( covariances ) for selected pairs of oscillation parameters .
we also discuss the impact of different data sets on such bounds .
we start with the analysis of the ( @xmath86 parameters , which govern the oscillation phenomenology of solar and kamland neutrinos .
figure 3 shows the corresponding bounds derived by a fit to solar+kl data only ( solid lines ) . by themselves , these data provide a @xmath87 hint of @xmath88 @xcite , with a best fit ( @xmath89 ) close to current sbl reactor values .
[ figure 3 caption : contours at @xmath103 , 2 and 3 on each pair of parameters chosen among ( @xmath86 , as derived by our analysis of solar+kl data ( solid lines ) and of all data ( dashed lines ) . the dots mark the best - fit points . the bounds refer to the nh case , and are very similar for the ih case ( not shown ) . ]
the @xmath62 parameters in fig .
3 appear to be slightly anticorrelated , with a best - fit point at ( @xmath90 ev@xmath91,@xmath92 ) .
these values are slightly lower than those reported in our previous work ( @xmath93 ev@xmath91,@xmath94 ) @xcite , as a result of altering the absolute kl spectra to account for the bump feature ( see sec . 2.1 ) .
statistically , these deviations amount to about @xmath95 for @xmath8 and @xmath96 for @xmath9 , and thus are not entirely negligible .
a better understanding of the absolute reactor spectra ( in both normalization and shape ) is thus instrumental in analyzing the kamland data with adequate precision
. finally , fig .
3 shows the joint bounds on the ( @xmath86 parameters from the global fit including all data ( dashed lines ) .
the bounds on the pair @xmath62 are basically unaltered , while those on @xmath59 are shrunk by more than an order of magnitude , mainly as a result of sbl reactor data .
let us consider now the interplay between @xmath59 and the mass - mixing parameters @xmath36 and @xmath60 , which dominate the oscillations of lbl accelerator neutrinos .
figure 4 shows the covariance plot for the @xmath97 parameters .
starting from the leftmost panels , one can see that the lbl acc.+solar+kl data , by themselves , provide both upper and lower bounds on @xmath59 at @xmath63 level .
the best - fit values of @xmath59 lie around @xmath98 in either nh or ih , independently of sbl reactor data .
the best - fit values of @xmath36 are slightly higher than in our previous work @xcite , mainly as a result of the recent no@xmath2a data .
the joint @xmath97 contours appear to be somewhat bumpy , as a result of the octant ambiguity discussed below . in the middle panels , the inclusion of sbl reactor data dramatically improves the bounds on @xmath59 and , to a small but nonnegligible extent , also those on @xmath36 .
finally , in the rightmost panels , atmospheric data induce a small increase of the @xmath36 central value ( mainly as a result of deepcore data ) , and a further reduction of its uncertainty . in comparison with @xcite ,
the @xmath36 value is shifted by @xmath99 upwards in the global fit .
figure 5 shows the covariance plot for the @xmath100 parameters .
the leftmost panels show a slight negative correlation and degeneracy between these two variables , which is induced by the dominant dependence of the lbl appearance channel on the product @xmath101 , as also discussed in @xcite .
the overall lbl acc+solar+kl preference for relatively low values of @xmath59 ( @xmath102 ) breaks such a degeneracy and leads to a weak preference for the second octant .
[ figure 4 caption : covariance plot for the @xmath97 parameters . from left to right , the regions allowed at @xmath103 , 2 and 3 refer to the analysis of lbl acc+solar+kl data ( left panels ) , plus sbl reactor data ( middle panels ) , plus atmospheric data ( right panels ) , with best fits marked by dots . the three upper ( lower ) panels refer to nh ( ih ) . ]
[ figure 5 caption : as in fig . 4 , but for the @xmath100 parameters . ]
sbl reactor data ( middle panels of fig . 5 ) shrink the @xmath59 range for both nh and ih . for ih
, however , they do not significantly change the central value of @xmath59 , nor the correlated best - fit value of @xmath60 , which stays in the second octant .
conversely , for nh , the sbl reactor data do shift the central value of @xmath59 upwards ( with respect to the left panel ) , and the best - fit value of @xmath60 is correspondingly shifted into the first octant .
finally , the inclusion of atmospheric data ( rightmost panels ) alters the @xmath39 contours , but does not change the qualitative preference for the first ( second ) octant of @xmath5 in nh ( ih ) . figure 6 shows the octant ambiguity in terms of bounds on the mass - mixing parameters @xmath104 .
the fragility of current octant indications stems from the data themselves rather than from analysis details : nearly maximal mixing is preferred by t2k ( accelerator ) and deepcore ( atmospheric ) data , while nonmaximal mixing is preferred by minos and no@xmath2a ( accelerator ) and by sk ( atmospheric ) data .
the combined results on @xmath5 appear thus still fragile , as far as the long - standing octant degeneracy @xcite is concerned .
[ figure 6 caption : octant ambiguity in terms of bounds on the @xmath104 parameters . ]
[ figure 7 caption : @xmath39 bounds in the @xmath105 plane . ]
a very recent example of the ( non)maximal @xmath5 issue is provided by the no@xmath2a data in disappearance mode , which entailed a preference for maximal mixing with preliminary data @xcite and for nonmaximal mixing with definitive data @xcite .
we trace this change to the migration of a few events among reconstructed energy bins in the final no@xmath2a data ( not shown ) .
let us complete the covariance analysis by discussing the interplay of the cp - violating phase @xmath7 with the mixing parameters @xmath59 and @xmath60 .
figure 7 shows the @xmath39 bounds in the @xmath105 plane , which is at the focus of lbl accelerator searches in appearance mode @xcite .
the leftmost panels show the wavy bands allowed by lbl acc.+solar+kl data , with a bumpy structure due to the octant ambiguity ( which was even more evident in older data fits @xcite ) . in the middle panels ,
sbl reactor data select a narrow vertical strip , which does not alter significantly the preference for @xmath106 stemming from lbl acc.+solar+kl data alone .
[ figure 8 caption : @xmath39 bounds in the @xmath109 plane . ]
in this context , it is sometimes asserted that the current preference for @xmath106 emerges from a `` tension '' between lbl accelerator and sbl reactor data on @xmath25 ; however , fig .
7 clearly shows that these data are currently highly consistent with each other about @xmath25 , and that their interplay should be described in terms of synergy rather than tension .
finally , the inclusion of atmospheric data ( rightmost panels ) corroborates the previous indications for @xmath7 , with a global best fit around @xmath107@xmath80 and a slight reduction of the allowed ranges at 1 and 2@xmath108 ( at least for nh ) .
note that , with no@xmath2a lem data , the wavy bands in the leftmost panels of fig .
7 would be slightly shifted to the right ( not shown ) , leading to slightly stronger bounds on @xmath7 in combination with sbl reactor and atmospheric data , as reported in fig . 2 of the previous section ; see also the official no@xmath2a lid and lem results in @xcite . in this case
, one might invoke a slight `` tension '' between lbl accelerator and sbl reactor data , but only at the level of @xmath68 differences on @xmath25 in the worst case ( ih ) .
figure 8 shows the @xmath39 bounds in the @xmath109 plane , which is gaining increasing attention from several viewpoints , including studies of degeneracies among these parameters and @xmath25 @xcite , of the interplay between lbl appearance and disappearance channels @xcite , and of statistical issues in the interpretation of @xmath39 bounds @xcite .
the bounds in fig . 8 appear to be rather asymmetric in the two half - ranges of both @xmath5 and @xmath7 , and also quite different in nh and ih .
this is not entirely surprising , since this is the only covariance plot ( among figs .
3 - 8 ) between two unknowns : the @xmath5 octant ( in abscissa ) and the cp - violating phase @xmath7 ( in ordinate ) .
therefore , the contours of fig . 8 may evolve significantly as more data are accumulated , especially by oscillation searches with atmospheric and lbl accelerator experiments .
we conclude this section by commenting on the @xmath110 values reported in sec . 3 , which differ only by the inclusion of no@xmath2a lid data ( table 1 ) _ vs _ lem data ( table 2 ) .
in the first case , the @xmath110 takes the value @xmath111 for a fit to lbl acc.+solar+kl data , becomes @xmath112 by including sbl reactor data , and changes ( also in sign ) to @xmath113 by including atmospheric data .
since these @xmath110 values are both small ( at the level of @xmath57 ) and with unstable sign , we conclude that there is no significant indication about the mass hierarchy , at least within the global fit including default ( lid ) no@xmath2a data . by replacing no@xmath2a lid with lem data ( table 2 )
, the same exercise leads to the following progression of @xmath110 values : @xmath114 ( lbl acc.+solar+kl ) , @xmath115 ( plus sbl reactor ) , @xmath116 ( plus atmospheric ) . in this case , a weak hint for nh ( at @xmath66 , i.e. , @xmath67 c.l . ) seems to emerge from consistent ( same - sign ) indications coming from different data sets , which is the kind of `` coherent '' signals that one would hope to observe , at least in principle .
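the quoted @xmath110 values can be translated into a number of standard deviations and a confidence level with the usual one - parameter gaussian prescription ( a generic statistical recipe , not specific to our fit ) :

    import math

    def nsigma_and_cl(delta_chi2):
        # one-dof Gaussian conversion: N_sigma = sqrt(dchi2), CL = erf(N_sigma/sqrt(2))
        n_sigma = math.sqrt(abs(delta_chi2))
        cl = math.erf(n_sigma / math.sqrt(2.0))
        return n_sigma, cl

    print(nsigma_and_cl(1.0))   # about (1.0, 0.683)
    print(nsigma_and_cl(4.0))   # about (2.0, 0.954)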
time will tell if these fragile indications about the hierarchy will be strengthened or weakened by future data with higher statistics .
let us discuss the implications of the previous oscillation results on the three observables sensitive to the ( unknown ) absolute @xmath2 mass scale : the sum of @xmath2 masses @xmath15 ( probed by precision cosmology ) , the effective @xmath16 mass @xmath17 ( probed by @xmath117 decay ) , and the effective majorana mass @xmath18 ( probed by @xmath118 decay if neutrinos are majorana fermions ) . definitions and previous constraints for these observables can be found in @xcite ; here we just remark that the following discussion is not affected by the current uncertainties on @xmath5 or @xmath7 .
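for reference , the three observables can be computed from the oscillation parameters and the lightest neutrino mass through their standard definitions ; the sketch below assumes the normal hierarchy and uses illustrative mixing and splitting values , not our fit results .

    import numpy as np

    def mass_observables_nh(m_lightest, dm2_sol=7.4e-5, dm2_atm=2.5e-3,
                            s12sq=0.30, s13sq=0.022, phases=(0.0, 0.0)):
        # normal hierarchy: m1 < m2 < m3 (masses in eV; all inputs illustrative)
        m1 = m_lightest
        m2 = np.sqrt(m1 ** 2 + dm2_sol)
        m3 = np.sqrt(m1 ** 2 + dm2_atm)
        c12sq, c13sq = 1.0 - s12sq, 1.0 - s13sq
        ue1sq, ue2sq, ue3sq = c12sq * c13sq, s12sq * c13sq, s13sq
        sigma = m1 + m2 + m3                                   # cosmological sum
        m_beta = np.sqrt(ue1sq * m1**2 + ue2sq * m2**2 + ue3sq * m3**2)   # beta decay
        a, b = phases                                          # Majorana phases (rad)
        m_bb = abs(ue1sq * m1 + ue2sq * m2 * np.exp(2j * a)
                   + ue3sq * m3 * np.exp(2j * b))              # neutrinoless double beta
        return sigma, m_beta, m_bb

    print(mass_observables_nh(0.01))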
figure 9 shows the constraints induced by our global @xmath0 analysis at @xmath119 level , for either nh ( blue curves ) or ih ( red curves ) , in the planes charted by any two among the parameters @xmath17 , @xmath18 and @xmath15 . the allowed bands for nh and ih , which practically coincide in the ( so - called degenerate ) mass region well above @xmath120 ev , start to differ significantly at relatively low mass scales of @xmath121 and below . at present ,
@xmath117- and @xmath118-decay data probe only the degenerate region of @xmath17 and @xmath18 , respectively @xcite , while cosmological data are deeply probing the sub - ev scale , with upper bounds on @xmath15 as low as @xmath1220.2 ev , see e.g. @xcite and references therein
. taken at face value , the cosmological bounds would somewhat disfavor the ih case , which entails @xmath15 values necessarily larger than @xmath123 ev ( see fig . 9 ) .
interestingly , these indications are consistent with a possible slight preference for nh from the global @xmath0 analysis ( with no@xmath2a lem data ) , as discussed at the end of the previous section .
we do not attempt to combine cosmological and oscillation data , but we remark that the evolution of such hints will be a major issue in neutrino physics for a long time , with challenging implications for @xmath117-decay and @xmath118-decay searches @xcite .
[ figure 9 caption : constraints induced by the global @xmath0 analysis at the @xmath119 level , for either nh ( blue curves ) or ih ( red curves ) , in the planes charted by any two among the absolute neutrino mass observables @xmath17 , @xmath18 and @xmath15 . ]
we have presented the results of a state - of - the - art global analysis of neutrino oscillation data , performed within the standard @xmath0 framework .
relevant new inputs ( as of january 2016 ) include the latest data from the super - kamiokande and icecube deepcore atmospheric experiments , the long - baseline accelerator data from t2k ( antineutrino run ) and no@xmath2a ( neutrino run ) in both appearance and disappearance mode , the far / near spectral ratios from the daya bay and reno short - baseline reactor experiments , and a reanalysis of kamland data in the light of the `` bump '' feature recently observed in reactor antineutrino spectra . the five known oscillation parameters ( @xmath124 ) have been determined with fractional accuracies as small as ( 2.4%,5.8%,1.8%,4.7%,9% ) , respectively . with respect to previous fits ,
the new inputs induce small downward shifts of @xmath8 and @xmath9 , and a small increase of @xmath10 ( see fig . 1 and table 1 ) .
the status of the three unknown oscillation parameters is as follows .
the @xmath5 octant ambiguity remains essentially unresolved : the central value of @xmath60 is somewhat fragile , and it can flip from the first to the second octant by changing the data set or the hierarchy . concerning the cp - violating phase @xmath7 , we confirm the previous trend favoring @xmath82 ( with a best fit at @xmath29 ) , although all @xmath7 values are allowed at @xmath63 . finally , we find no statistically significant indication in favor of one mass hierarchy ( either nh or ih ) .
some differences arise by changing the no@xmath2a appearance data set , from the default ( lid ) sample to the alternative ( lem ) sample .
a few known parameters are slightly altered , as described in fig . 2 and table 2 .
there is no significant improvement on the octant ambiguity , while the indications on @xmath7 are strengthened , and some ranges with @xmath125 can be excluded at @xmath63 level . concerning the mass hierarchy , the nh case appears to be slightly favored ( at @xmath14 c.l . ) .
we have discussed in detail the parameter covariances and the impact of different data sets through figs . 3 - 8 , which allow one to appreciate the interplay among the various ( known and unknown ) parameters , as well as the synergy between oscillation searches in different kinds of experiments .
finally , we have analyzed the implications of the previous results on the non - oscillation observables ( @xmath126 ) that can probe absolute neutrino masses ( fig .
9 ) . in this context
, tight upper bounds on @xmath15 from precision cosmology appear to favor the nh case .
further and more accurate data are needed to probe the hierarchy and absolute mass scale of neutrinos , their dirac or majorana nature and cp - violating properties , and the @xmath5 octant ambiguity , which remain as missing pieces of the @xmath0 puzzle .
this work was supported by the italian ministero dell'istruzione , università e ricerca ( miur ) and istituto nazionale di fisica nucleare ( infn ) through the `` theoretical astroparticle physics '' research projects .
m. c. gonzalez - garcia , m. maltoni and t. schwetz , updated fit to three neutrino mixing : status of leptonic cp violation , jhep 1411 ( 2014 ) 052 [ arxiv:1409.5439 [ hep - ph ] ] ; global analyses of neutrino oscillation experiments , arxiv:1512.06856 [ hep - ph ] ( to appear in this special issue ) .
g. l. fogli , e. lisi , a. marrone , d. montanino , a. palazzo and a. m. rotunno , global analysis of neutrino masses , mixings and phases : entering the era of leptonic cp violation searches , phys . rev . d86 ( 2012 ) 013012 [ arxiv:1205.5254 [ hep - ph ] ] .
a. gando _ et al . _ [ kamland collaboration ] , constraints on @xmath25 from a three - flavor oscillation analysis of reactor antineutrinos at kamland , phys . rev . d83 ( 2011 ) 052002 [ arxiv:1009.4771 [ hep - ex ] ] .
y. abe _ et al . _ [ double chooz collaboration ] , improved measurements of the neutrino mixing angle @xmath25 with the double chooz detector , jhep 1410 ( 2014 ) 086 ; erratum ibidem 1502 ( 2015 ) 074 [ arxiv:1406.7763 [ hep - ex ] ] .
a. a. zakari - issoufou _ et al . _ [ igisol collaboration ] , total absorption spectroscopy study of @xmath127rb decay : a major contributor to reactor antineutrino spectrum shape , phys . rev . lett . 115 ( 2015 ) 102503 [ arxiv:1504.05812 [ nucl - ex ] ] .
a. c. hayes , j. l. friar , g. t. garvey , d. ibeling , g. jungman , t. kawano and r. w. mills , possible origins and implications of the shoulder in reactor neutrino spectra , phys . rev . d92 ( 2015 ) 3 , 033015 [ arxiv:1506.00583 [ nucl - th ] ] .
k. abe _ et al . _ [ t2k collaboration ] , measurements of neutrino oscillation in appearance and disappearance channels by the t2k experiment with @xmath128 protons on target , phys . rev . d91 ( 2015 ) 072010 [ arxiv:1502.01550 [ hep - ex ] ] .
p. huber , m. lindner and w. winter , simulation of long - baseline neutrino oscillation experiments with globes ( general long baseline experiment simulator ) , comput . phys . commun . 167 ( 2005 ) 195 [ hep - ph/0407333 ] .
p. huber , j. kopp , m. lindner , m. rolinec and w. winter , new features in the simulation of neutrino oscillation experiments with globes 3.0 : general long baseline experiment simulator , comput . phys . commun . 177 ( 2007 ) 432 [ hep - ph/0701187 ] .
f. p. an _ et al . _ [ daya bay collaboration ] , new measurement of antineutrino oscillation with the full detector configuration at daya bay , phys . rev . lett . 115 ( 2015 ) 11 , 111802 [ arxiv:1505.03456 [ hep - ex ] ] .
m. g. aartsen _ et al . _ [ icecube collaboration ] , determining neutrino oscillation parameters from atmospheric muon neutrino disappearance with three years of icecube deepcore data , phys . rev . d91 ( 2015 ) 072004 [ arxiv:1410.7227 [ hep - ex ] ] .
p. adamson _ et al . _ [ minos collaboration ] , combined analysis of @xmath129 disappearance and @xmath130 appearance in minos using accelerator and atmospheric neutrinos , phys . rev . lett . 112 ( 2014 ) 191801 [ arxiv:1403.0867 [ hep - ex ] ] .
p. coloma , h. minakata and s. j. parke , interplay between appearance and disappearance channels for precision measurements of @xmath5 and @xmath7 , phys . rev . d90 ( 2014 ) 093003 [ arxiv:1406.2551 [ hep - ph ] ] .
g. l. fogli , e. lisi , a. marrone , a. melchiorri , a. palazzo , p. serra and j. silk , observables sensitive to absolute neutrino masses : constraints and correlations from world neutrino data , phys . rev .
d70 ( 2004 ) 113003 [ hep - ph/0408045 ] . | within the standard @xmath0 mass - mixing framework , we present an up - to - date global analysis of neutrino oscillation data ( as of january 2016 ) , including the latest available results from experiments with atmospheric neutrinos ( super - kamiokande and icecube deepcore ) , at accelerators ( first t2k @xmath1 and no@xmath2a @xmath2 runs in both appearance and disappearance mode ) , and at short - baseline reactors ( daya bay and reno far / near spectral ratios ) , as well as a reanalysis of older kamland data in the light of the `` bump '' feature recently observed in reactor spectra .
we discuss improved constraints on the five known oscillation parameters ( @xmath3 ) , and the status of the three remaining unknown parameters : the mass hierarchy [ sign@xmath4 , the @xmath5 octant [ sign@xmath6 , and the possible cp - violating phase @xmath7 . with respect to previous global fits , we find that the reanalysis of kamland data induces a slight decrease of both @xmath8 and @xmath9 , while the latest accelerator and atmospheric data induce a slight increase of @xmath10 . concerning the unknown parameters , we confirm the previous intriguing preference for negative values of @xmath11 ( with best - fit values around @xmath12 ) , but we find no statistically significant indication about the @xmath5 octant or the mass hierarchy ( normal or inverted ) . assuming an alternative ( so - called lem ) analysis of no@xmath2a data , some @xmath7 ranges can be excluded at @xmath13 , and the normal mass hierarchy appears to be slightly favored at @xmath14 c.l .
we also describe in detail the covariances of selected pairs of oscillation parameters .
finally , we briefly discuss the implications of the above results on the three non - oscillation observables sensitive to the ( unknown ) absolute @xmath2 mass scale : the sum of @xmath2 masses @xmath15 ( in cosmology ) , the effective @xmath16 mass @xmath17 ( in beta decay ) , and the effective majorana mass @xmath18 ( in neutrinoless double beta decay ) . |
Chinese President Xi Jinping was greeted by Vice President Joe Biden after landing in Maryland on Thursday. (AP)
Chinese President Xi Jinping on Friday will announce a nationwide cap-and-trade program to curtail carbon emissions, adopting a mechanism most widely used in Europe to limit greenhouse gases, Obama administration officials said.
Expanding on a pilot project in seven Chinese cities, the cap-and-trade program will impose a nationwide ceiling on emissions from the most carbon-intensive sectors of the Chinese economy and require companies exceeding their quotas to buy permits from those that have sharply reduced emissions.
Xi will make the announcement in Washington in a joint statement with President Obama, who has been pressing world leaders to take ambitious steps to slow climate change and submit detailed plans in advance of a Paris climate conference in December.
The announcement could provide a bright spot to a summit darkened with disagreements over China’s cyberattacks on U.S. companies, its more restrictive proposed law on nongovernmental organizations, continuing human rights differences, and the apparent construction in progress of four fighter jet runways on disputed islands and reefs in the South China Sea.
It also spells out the actions China will take to meet the target Xi set last November during Obama’s visit to Beijing.
A runner wears a face mask as he takes part in the 35th Beijing International Marathon in Beijing. Some participants wore face masks to protect themselves from air pollution. (Fred Dufour/AFP/Getty Images)
China is the world’s biggest emitting nation, accounting for nearly 30 percent of greenhouse gas emissions. The government has already pledged that by 2020 it will reduce by 40 to 45 percent the amount of carbon produced for every unit of gross domestic product and will reach a peak emission level by 2030.
How much the new cap-and-trade program alters that path will depend on the level of the nationwide cap. But it will apply to China’s power generation sector, iron and steel industries, chemical firms, and makers of building materials, cement and paper.
“Together they produce a substantial amount of China’s climate pollution,” said a senior administration official. “It is a significant move.”
It also addresses an issue for ordinary Chinese, who have been angered by conventional air pollution that has obscured skylines and triggered widespread respiratory illnesses, as well as hundreds of thousands of premature deaths.
China on Friday will also pledge to aid low-income nations in a financial commitment similar to the $3 billion the Obama administration has already asked Congress to appropriate in fiscal 2016 for the international Green Climate Fund. Obama will reaffirm his commitment to making that contribution and to already-announced plans for limiting emissions through regulations for heavy vehicles and the Clean Power Plan for utilities.
The joint statement on Friday will also include language reinforcing the end of an approach dating to 1992 that divided the world into developed countries that needed to take climate measures and less-developed ones that did not. The language will acknowledge different circumstances but require all countries to combat climate change.
There are many ironies in the Chinese announcement. The cap-and-trade program was first advocated in the United States but first adopted in Europe. In Obama’s first year in office in 2009, the House passed a cap-and-trade measure, but it died in the Senate.
Some U.S. states have tried their own limited cap-and-trade programs, including California and a group of Northeast states.
Li Shuo, Greenpeace’s senior climate and energy policy adviser for East Asia, said that the climate accords last November, when China set ambitious targets, now bind the two countries together when the political horizon is hazy.
“If there is a Republican president, you will have an interesting dynamic,” Li said. “Can he or she walk away from an agreement without worrying about the consequences if there is a commitment made by the presidents of the two countries? An aspect of this is the politics: binding these two countries together.” ||||| White House officials announce deal, which will make China the world’s biggest carbon market, on eve of summit between Barack Obama and Xi Jinping
China, the world’s biggest carbon polluter, will launch a national cap-and-trade scheme in 2017, the White House said on Thursday.
The move, announced on the eve of a summit in Washington between Presidents Barack Obama and Xi Jinping, would make China the world’s biggest carbon market, overtaking the European Union, and could strengthen global efforts to put a price on carbon.
White House officials said the cap-and-trade plan would be formally announced on Friday along with a “very substantial financial commitment” from China to help the world’s poorest countries fight climate change.
The US has already pledged $3bn to a Green Climate Fund for poor countries.
The US and Chinese leaders will also commit to the decarbonisation of their economies over the course of the century, campaign groups briefed on the negotiations said. G7 countries made a similar call at their summit in Germany this year, but the support from China – as a developing country and the world’s biggest emitter – would send an important signal ahead of November’s Paris climate meeting that global economies were moving away from fossil fuels, the campaigners said.
China’s announcement of a launch date for the national cap-and-trade system – though long anticipated – will help solidify the joint efforts the two countries have taken on climate change.
Chinese officials have been promising since last year to consolidate existing regional cap-and-trade schemes into a national programme.
China already has a network of seven regional carbon markets, but there are wide variations in pricing among them.
White House officials said the new national scheme would cover power generation, iron and steel, chemicals, building materials including cement, paper-making and nonferrous metals which together account for a large share of China’s carbon pollution.
The White House acknowledged in a conference call with reporters that the Chinese actions were helpful to Obama’s efforts to fight climate change by neutralising Republican arguments that the US was acting alone.
“One of the arguments that has been proffered against the United States stepping up and providing more resources to help poor countries develop in low-carbon ways has been that if the United States steps up with resources, then other countries won’t – the sort of argument that if the US leads, then others will just take a backseat,” officials told a conference call with reporters.
Since Obama’s visit to Beijing last November, the US and China have undertaken a number of measures in tandem to fight climate change. Earlier this month, Chinese cities pledged to peak carbon pollution several years ahead of the national target. ||||| WASHINGTON — President Xi Jinping of China will make a landmark commitment on Friday to start a national program in 2017 that will limit and put a price on greenhouse gas emissions, Obama administration officials said Thursday.
The move to create a so-called cap-and-trade system would be a substantial step by the world’s largest polluter to reduce emissions from major industries, including steel, cement, paper and electric power.
The announcement, to come during a White House summit meeting with President Obama, is part of an ambitious effort by China and the United States to use their leverage internationally to tackle climate change and to pressure other nations to do the same.
Joining forces on the issue even as they are bitterly divided on others, Mr. Obama and Mr. Xi will spotlight the shared determination of the leaders of the world’s two largest economies to forge a climate change accord in Paris in December that commits every country to curbing its emissions. ||||| BEIJING (AP) — As state media reports have it, China's President Xi Jinping is dispelling all concerns about cyberhacking, the economy and the South China Sea during his U.S. trip, and relations between the two countries have never been rosier.
Xi's seven-day trip, from his visits to Boeing and tech giants to his casual, open-shirt stroll with President Barack Obama through White House gardens, is receiving blanket coverage at home. Prime-time news bulletins start with at least a quarter-hour of coverage of Xi, along with his famous wife, former singer Peng Liyuan. The applause and ovations at each stop — the airport, Boeing plant, a Washington high school he once visited 21 years ago — are shown in their entirety.
"I'm very proud of China's getting ever-more stronger," Beijing resident Zhang Yanhua, 49, who works as an editor, said in an interview on a downtown sidewalk. "When I saw President Xi Jinping and his wife on TV when they arrived in Seattle, the way they talked with local officials made me feel that they are indeed the embodiment of a great country."
The state media coverage has acknowledged that frictions do exist, but have focused more on the areas of agreement and cooperation — the better to portray Xi as the well-received statesmen while China seeks to bring itself alongside the U.S. in what Xi calls "a new type of major-country relationship."
State media outlets have reported that Xi managed to reassure the U.S. business community of China's economic health and successfully dispelled concerns about two key issues for Washington — cyberhacking and Chinese ambitions in the South China Sea.
"There are no two big countries in human history that have had a relationship as close as that between China and the U.S. today," proclaimed the People's Daily.
Some ordinary Chinese see the positivity as superficial.
"It may take a long time to see the two countries develop genuine friendship," said Li Jinglin, a retired man of 67, who added that while China has become more powerful "it's still far from being on an equal footing with the United States."
"Even if we suddenly see the two sides shaking hands, it may not be a sincere relationship. Many problems won't easily be solved without some time," said Li.
The frequent themes in Chinese state media that cast the U.S. in a negative light, such as Washington's moves to contain China, its intervention in other countries' affairs and the high gun crime rate in U.S. cities, have been wiped away — for a week, anyway.
Zhan Jiang, a journalism professor, said that normally in state media reports "you can see some good things and some bad things and even quarrels between the two countries such as about hacking, maybe even including human rights quarrels."
"But if you are accustomed to China's system, you know that all things can be changed overnight," Zhan said. "You can suppose that after Xi Jinping's visit, China's media must restore their criticism against American policy."
He added: "This is China."
For the U.S., the main thorns in the relationship are intensifying hacking attacks on American government agencies and companies that officials say originate in China and Beijing's moves to assert its territorial claims in the South China Sea.
In China, the focus is on trade and the economy — and ordinary folk likewise are interested in Xi's visits to the big-name companies that they are familiar with, like Boeing, Apple and Microsoft.
"Even though some Chinese companies are thriving, they need to learn from the experience and business ideas of giant American companies such as Microsoft," said Beijing resident Zhao Ying, 29, who runs a bed-and-breakfast business. "And the U.S. will show more respect to China if we improve our capacity and power."
For a break from the dry, official coverage, Chinese are following quirkier reports carried on the big online portals, which are not allowed to report political news.
The top story on the online news portal Sina on Friday morning was that Xi and Obama went for a casual, no-tie look on an evening stroll around the White House.
Even some of the reporting from the official media outlets was occasionally light-hearted. One of the country's state news agencies, China News Service, profiled the hotel where Xi and his delegation are staying in Washington, D.C. It has prepared not only Chinese food, but also "panda-patterned umbrellas" in case it rains. | – President Obama and Chinese President Xi Jinping are set to make an announcement today on one of the few things they agree on. The White House says that at a joint press conference with Obama, Xi plans to announce a 2017 launch date for a national cap-and-trade program to cut emissions, reports the Guardian, which notes that China is the world's biggest carbon polluter. The plan follows a surprise deal on greenhouse-gas emissions announced during Obama's visit to Beijing last year, and the New York Times notes that it will undercut Republican arguments that there is little point in the US trying to combat climate change when China refuses to take action. White House officials say the plan also includes a "substantial financial commitment" from China to help poor countries deal with climate change, reports the Guardian. Cap-and-trade programs involve the buying of permits by companies that exceed set limits, and the Washington Post finds it ironic that China is adopting one: The system is an American invention, but it was rejected by American lawmakers in 2009. In China, meanwhile, state media reports that Xi has managed to solve all major issues between the US and China, per the AP. "There are no two big countries in human history that have had a relationship as close as that between China and the US," says the People's Daily. (Yesterday, Xi posed for a photo with 29 top US and Chinese tech executives.) |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Small Business Health Care Relief
Act of 2008''.
SEC. 2. REFUNDABLE CREDIT FOR SMALL BUSINESSES WHICH PROVIDE HEALTH
CARE COVERAGE FOR EMPLOYEES.
(a) In General.--Subpart C of part IV of subchapter A of chapter 1
of the Internal Revenue Code of 1986 (relating to refundable credits)
is amended by redesignating section 36 as section 37 and by inserting
after section 35 the following new section:
``SEC. 36. SMALL BUSINESSES PROVIDING HEALTH CARE COVERAGE FOR
EMPLOYEES.
``(a) In General.--In the case of an eligible small business, there
shall be allowed as a credit against the tax imposed by this subtitle
for the taxable year an amount equal to the applicable percentage of
the expenses paid or incurred by the taxpayer for qualified health care
coverage of eligible employees, their spouses, and dependents (within
the meaning of section 213(a)).
``(b) Applicable Percentage.--For purposes of this section, the
term `applicable percentage' means--
``(1) 50 percent if qualified health care coverage is
provided by the taxpayer to an average (on days during the
taxable year) of 10 or fewer eligible employees of the
taxpayer,
``(2) 25 percent if qualified health care coverage is
provided by the taxpayer to an average (on such days) of at
least 10 but not more than 25 eligible employees of the
taxpayer, and
``(3) 15 percent if qualified health care coverage is
provided by the taxpayer to an average (on such days) of more
than 25 eligible employees of the taxpayer.
``(c) Eligible Small Business.--For purposes of this section, the
term `eligible small business' means any taxpayer engaged in a trade or
business if the taxpayer meets the requirements of the following
paragraphs:
``(1) 50 or fewer employees.--
``(A) In general.--A taxpayer meets the
requirements of this paragraph if the taxpayer employs
an average of 50 or fewer employees on business days
during the preceding taxable year.
``(B) Taxpayer not in existence.--In any case in
which the taxpayer is an entity and is not in existence
throughout the preceding taxable year, subparagraph (A)
shall be applied by substituting `taxable year' for
`preceding taxable year'.
``(2) Gross receipts limitation.--
``(A) In general.--A taxpayer meets the
requirements of this paragraph if the gross receipts of
the taxpayer for the preceding taxable year do not
exceed $10,000,000.
``(B) Taxpayer not in existence.--In any case in
which the taxpayer is an entity and is not in existence
throughout the preceding taxable year, subparagraph (A)
shall be applied by substituting `taxable year' for
`preceding taxable year'.
``(C) Special rules.--For purposes of subparagraph
(A), the rules of subparagraphs (B) and (C) of section
448(c)(3) shall apply.
``(3) Plan offering requirement.--A taxpayer meets the
requirements of this paragraph if--
``(A) the taxpayer offers qualified health
coverage, on the same terms and conditions, to at least
90 percent of the taxpayer's eligible employees, and
``(B) such offering is made at least annually and
at such other times and in such manner as the Secretary
shall prescribe.
``(4) Plan participation requirement.--
``(A) In general.--A taxpayer meets the
requirements of this paragraph if the average daily
percentage of eligible employees who are provided with
qualified health coverage by the taxpayer during the
taxable year is not less than such average for the
preceding taxable year.
``(B) Exceptions.--
``(i) Not in existence.--Subparagraph (A)
shall not apply if the trade or business was
not in existence throughout the preceding
taxable year.
``(ii) Business decline.--Under regulations
prescribed by the Secretary, subparagraph (A)
shall not apply to the extent that any
reduction in such percentage is the result of a
reduction in the number of employees of the
taxpayer on account of a reduction in the gross
receipts of the taxpayer.
``(5) Minimum employer payment.--A taxpayer meets the
requirements of this paragraph if at least 65 percent of the
cost of qualified health coverage provided to each eligible
employee is borne by the employer (determined without regard to
this section).
``(d) Eligible Employees.--For purposes of this section, the term
`eligible employee' means any employee of the taxpayer if--
``(1) such employee is not covered under--
``(A) any health plan of the employee's spouse,
``(B) title XVIII, XIX, or XXI of the Social
Security Act,
``(C) chapter 17 of title 38, United States Code,
``(D) chapter 55 of title 10, United States Code,
``(E) chapter 89 of title 5, United States Code, or
``(F) any other provision of law, and
``(2) such employee is not a part-time or seasonal
employee.
``(e) Qualified Health Coverage.--For purposes of this section, the
term `qualified health coverage' means coverage under a health plan
provided by the employer which is substantially equivalent on an
actuarial basis to coverage provided under chapter 89 of title 5, United
States Code.
``(f) Special Rules.--For purposes of this section--
``(1) Treatment of predecessors.--Any reference in
paragraphs (1), (2), and (4) of subsection (c) to an entity
shall include a reference to any predecessor of such entity.
``(2) Controlled groups.--All persons treated as a single
employer under subsection (b) or (c) of section 52 shall be
treated as 1 person.
``(3) Mergers and acquisitions.--Rules similar to the rules
of subparagraphs (A) and (B) of section 41(f)(3) shall apply.
``(4) Employee to include self-employed.--The term
`employee' includes an individual who is an employee within the
meaning of section 401(c)(1) (relating to self-employed
individuals).
``(5) Exception for amounts paid under salary reduction
arrangements.--No amount paid or incurred pursuant to a salary
reduction arrangement shall be taken into account under
subsection (a).''.
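The tiered percentages in subsection (b) of the proposed section 36 amount to a simple schedule. The following sketch is illustrative only (it is not legislative text, and it applies paragraph (1) first at the boundary of exactly 10 covered employees, where paragraphs (1) and (2) overlap as written):

    def applicable_percentage(avg_covered_employees):
        # tiers as written in subsection (b); paragraph (1) is applied first
        if avg_covered_employees <= 10:
            return 0.50
        elif avg_covered_employees <= 25:
            return 0.25
        return 0.15

    def credit(avg_covered_employees, qualified_expenses):
        return applicable_percentage(avg_covered_employees) * qualified_expenses

    print(credit(8, 40000.0))    # 50 percent tier -> 20000.0
    print(credit(18, 40000.0))   # 25 percent tier -> 10000.0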
(b) Denial of Double Benefit.--Section 280C of such Code is amended
by adding at the end the following new subsection:
``(h) Credit for Small Business Health Insurance Expenses.--
``(1) In general.--No deduction shall be allowed for that
portion of the expenses (otherwise allowable as a deduction)
taken into account in determining the credit under section 36
for the taxable year which is equal to the amount of the credit
allowed for such taxable year under section 36(a).
``(2) Controlled groups.--Paragraph (3) of subsection (b)
shall apply for purposes of this subsection.''.
(c) Technical Amendments.--
(1) Paragraph (2) of section 1324(b) of title 31, United
States Code, is amended by inserting ``or 36'' after ``section
35''.
(2) The table of sections for subpart C of part IV of
subchapter A of chapter 1 of the Internal Revenue Code of 1986
is amended by striking the item relating to section 36 and
inserting the following new items:
``Sec. 36. Small businesses providing health care coverage for
employees.
``Sec. 37. Overpayments of tax.''.
(d) Effective Date.--The amendments made by this section shall
apply to taxable years beginning after the date of the enactment of
this Act. | Small Business Health Care Relief Act of 2008 - Amends the Internal Revenue Code to allow certain small business owners with 50 or fewer employees a refundable tax credit for the payment of a portion of the health care expenses of their employees. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``TVA Distributor Self-Sufficiency Act
of 2001''.
SEC. 2. LIMITATION ON AUTHORITY OF TENNESSEE VALLEY AUTHORITY.
Section 4 of the Tennessee Valley Authority Act of 1933 (16 U.S.C.
831c) is amended by adding at the end the following:
``(m)(1) Shall not prohibit, interfere with, or impair any
determination made or any activity conducted by a TVA distributor
(acting alone or in combination with any person) to build, acquire any
interest in, operate any part of, or purchase electric power from a
facility for the generation of electric power for the purpose of
supplying the incremental power supply needs of the TVA distributor
(without regard to any other purpose for which electric power supplied
by the facility is used).
``(2) In this subsection--
``(A) the term `incremental power supply needs' means the
power generation capacity that a TVA distributor determines is
required to satisfy the projected peak load of the TVA
distributor (with appropriate reserve margins), to the extent
that the projected peak load and margins exceed the average
annual quantity of power purchases of the TVA distributor
during 1996, 1997, and 1998; and
``(B) the term `TVA distributor' means a cooperative
organization or publicly owned electric power system that, on
January 2, 2001, purchased electric power at wholesale from the
Corporation.''.
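The ``incremental power supply needs'' defined in section 2 above reduce to the portion of projected peak load, with reserve margin, that exceeds the distributor's 1996-1998 average purchases. A small illustrative calculation (hypothetical numbers, not drawn from the Act):

    def incremental_power_supply_needs(projected_peak_mw, reserve_margin_fraction,
                                       avg_purchases_1996_1998_mw):
        # capacity required beyond the 1996-1998 average purchases, per section 4(m)(2)(A)
        required = projected_peak_mw * (1.0 + reserve_margin_fraction)
        return max(0.0, required - avg_purchases_1996_1998_mw)

    print(incremental_power_supply_needs(1200.0, 0.15, 1000.0))  # -> 380.0 MW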
SEC. 3. TENNESSEE VALLEY AUTHORITY LEAST COST PLANNING PROGRAM.
Section 113 of the Energy Policy Act of 1992 (16 U.S.C. 831m-1) is
amended--
(1) by striking subsection (a) and inserting the following:
``(a) Definitions.--In this section:
``(1) TVA distributor.--The term `TVA distributor' means a
cooperative organization or publicly owned electric power
system that, on January 2, 2001, purchased electric power at
wholesale from the Tennessee Valley Authority.
``(2) System cost.--
``(A) In general.--The term `system cost' means all
direct and quantifiable net costs of an energy resource
over the available life of the energy resource.
``(B) Inclusions.--The term `system cost' includes
the costs of--
``(i) production;
``(ii) transportation;
``(iii) utilization;
``(iv) waste management;
``(v) environmental compliance; and
``(vi) in the case of an imported energy
resource, maintaining access to a foreign
source of supply.'';
(2) in subsection (b)--
(A) by striking paragraph (1) and inserting the
following:
``(1) In general.--
``(A) Triennial planning programs.--The Tennessee
Valley Authority shall conduct a least-cost planning
program in accordance with this section once every 3
years, including 1 such program to be concluded by
December 31, 2001.
``(B) Public participation.--Each planning program
shall be open to public participation.
``(C) Requirements.--In conducting a planning
program, the Tennessee Valley Authority shall use a
planning and selection process for new energy resources
that evaluates the full range of existing and
incremental resources (including new power supplies
that may be constructed, owned, and operated by 1 or
more TVA distributors, other new power supplies, energy
conservation and efficiency, and renewable energy
resources) in order to provide adequate and reliable
service to electric customers of the Tennessee Valley
Authority requiring such service at the lowest system
cost.'';
(B) in paragraph (2)--
(i) in subparagraph (B), by striking
``and'' at the end;
(ii) in subparagraph (C), by striking the
period at the end and inserting
``; and''; and
(iii) by adding at the end the following:
``(D) take into account current, planned, and
projected ownership and self-supply of power generation
resources by 1 or more TVA distributors.''; and
(C) by striking paragraph (3); and
(3) in subsection (c)--
(A) in paragraph (1)--
(i) in subparagraph (A), by striking
``distributors of the Tennessee Valley
Authority'' and inserting ``TVA distributors'';
and
(ii) by striking subparagraph (B) and
inserting the following:
``(B) encourage and assist TVA distributors in--
``(i) the planning and implementation of
cost-effective energy efficiency options;
``(ii) load forecasting; and
``(iii) the planning, construction,
ownership, operation, and maintenance of power
generation facilities owned or acquired by a
TVA distributor.''; and
(B) in the first and second sentences of paragraph
(2), by striking ``distributors'' and inserting ``TVA
distributors''.
SEC. 4. INCLUSION OF THE TENNESSEE VALLEY AUTHORITY IN THE DEFINITION
OF PUBLIC UTILITY FOR PURPOSES OF PARTS II AND III OF THE
FEDERAL POWER ACT.
(a) In General.--Section 201(e) of the Federal Power Act (16 U.S.C.
824(e)) is amended--
(1) by striking ``means any person who'' and inserting
``means--
``(1) any person that'';
(2) by striking the period at the end and inserting ``;
and''; and
(3) by adding at the end the following:
``(2) the Tennessee Valley Authority.''.
(b) Conforming Amendment.--Section 201(f) of the Federal Power Act
(16 U.S.C. 824(f)) is amended by striking ``foregoing, or any
corporation'' and inserting ``foregoing (other than the Tennessee
Valley Authority), or any corporation''. | TVA Distributor Self-Sufficiency Act of 2001 - Amends the Tennessee Valley Authority Act of 1933 to prohibit the Tennessee Valley Authority (TVA) from prohibiting, interfering with, or impairing any determination made or any activity conducted by a TVA distributor to build, acquire any interest in, operate, or purchase electric power from an electric power generating facility for the purpose of supplying the distributor's incremental power supply needs.Amends the Energy Policy Act of 1992 to require TVA to conduct a triennial least-cost planning program open to public participation.Amends the Federal Power Act to include TVA in the definition of public utility for purposes of such Act. |
the detection of planets outside of our solar system has opened up the possibility of answering several questions which have nagged the minds of philosophers for millennia .
these questions include : is the architecture of our solar system typical or unusual ?
how common are planets the size of the earth ?
how common is life in the universe ?
exactly how many things are out there that can kill us ?
it is now apparent that the process of planet formation produces an enormous diversity of planetary systems .
it is also clear from more recent discoveries , most notably those from the _ kepler _ mission , that terrestrial - size planets are exceptionally common .
the primary motivation for establishing such a correlation lies in the search for life outside our solar system , and thus in determining whether life is common .
the fact that earth - size planets are relatively common is surely good news for resolving this issue .
this may be true , but a positive deduction that life is common may have a serious negative consequence . the evolution of life on earth has been accompanied by symbiotic relationships between animal species and the bacteria and viruses which use the animals as hosts .
these occasionally result in destructive outcomes which have had a devastating impact on various populations of animals due to genetic breakdowns caused by the virus .
particularly lethal pandemics which have affected homo - sapiens in recent centuries include cholera , influenza , typhus , and smallpox .
a more recent phenomenon which has been studied in great detail is that of spontaneous necro - animation psychosis ( snap ) , often referred to as zombie - ism .
this highly contagious condition is particularly nefarious insofar as it uses the host itself as a mobile platform from which to consciously spread the condition .
detailed modeling of various snap outbreak scenarios by @xcite has shown that human civilization would not only be unlikely to survive such an event but would collapse remarkably quickly . here
we discuss how recent exoplanet discoveries combined with studies of infectious diseases indicate that the universe may harbor reservoirs of planets full of bio - decay remains where zombie apocalypses have occurred . in section
[ danger ] we outline the dangerous nature of snap , quantify the possible numbers of snap - contaminated planets , and their proximity to earth . in section [ decomp ] we describe the decomposition process and the gases released .
this process is then used to establish the resulting necro - signatures and their potential for identification in section [ necro ] .
the observing window for detecting such signatures is discussed in section [ window ] and we provide the final sobering and terrifying conclusions in section [ conclusions ] .
spontaneous necro - animation psychosis is undoubtedly the most dangerous viral condition to infect living organisms .
the infectious nature of the condition is maximized by bestowing upon the host an insatiable desire to spread the virus at all costs .
this ensures that it will spread quickly and , usually , uncontrollably .
although there have not yet been documented cases of snap outbreaks on earth , the reality of the condition has been extensively depicted in both literature and cinema @xcite .
the science of zombie - ism has been investigated , and it was found that a snap outbreak could result equally from natural evolution or genetic engineering @xcite . in either case ,
defense against such an outbreak has also been explored in great detail to maximize the survival probability @xcite .
even with such defenses , the global scale of the outbreak will rapidly break down any existing civilization .
the novel `` world war z '' depicts one such scenario although it presents an unlikely end result in which humans are able to recover , albeit at the brink of extinction @xcite .
the work of @xcite more accurately quantifies the likely outcome of a complete extinction event occurring .
in addition to detecting the necro - signatures of worlds where a zombie apocalypse has occurred , we can also estimate the number of worlds which are affected in this way . to accomplish this , we use a modified version of the well - known _ drake equation _ , the original of which takes the following form : @xmath0 where the purpose is to calculate @xmath1 , the number of advanced civilizations in the galaxy with radio - communication capability .
the other variables include @xmath2 ( average rate of star formation ) , @xmath3 ( fraction of stars with planets ) , @xmath4 ( average number of life - capable planets per star ) , @xmath5 ( fraction of planets with life ) , @xmath6 ( fraction of planets with intelligent life ) , @xmath7 ( fraction of civilizations that develop radio technology ) , and @xmath8 ( length of time such civilizations communicate ) . for a zombie outbreak to occur ,
there is no reason to _ a priori _ assume that intelligent life is required .
thus the modified _
zombie drake equation _ is as follows : @xmath9 where @xmath10 is the total number of snap - contaminated planets and @xmath11 is the fraction of planets where an outbreak has utterly destroyed the local population .
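as an illustration of how such an estimate might be evaluated in practice , the sketch below assumes the modified equation simply drops the intelligence , communication , and lifetime factors of the original drake equation and adds a single outbreak fraction ; every numerical value in it is a hypothetical placeholder , not a result quoted from the text .

```python
# a minimal sketch of the modified ( zombie ) drake equation discussed above .
# assumption : n_z = r_star * f_p * n_e * f_l * f_z , i.e. the original equation
# with the intelligence / communication / lifetime factors dropped and a single
# outbreak fraction f_z added ; all parameter values below are hypothetical .

def zombie_drake(r_star, f_p, n_e, f_l, f_z):
    """snap-contaminated planets produced per year in the galaxy (illustrative)."""
    return r_star * f_p * n_e * f_l * f_z

if __name__ == "__main__":
    # illustrative placeholder values , not estimates taken from the paper
    rate = zombie_drake(r_star=7.0,   # stars formed per year
                        f_p=0.5,      # fraction of stars with planets
                        n_e=2.0,      # life-capable planets per star with planets
                        f_l=0.1,      # fraction of those that develop life
                        f_z=0.1)      # fraction suffering a snap outbreak
    print(f"snap-contaminated planets produced per year: {rate:.3f}")
```

multiplying such a yearly rate by the time over which a contaminated planet remains identifiable would turn it into the total count of snap - contaminated planets discussed above .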
there is strong evidence to suggest that the appearance of earth - based life occurred at a very early stage of earth 's history , probably at least as early as 3.9 billion years ago .
this lends credence to the hypothesis that life is indeed a natural consequence of having suitable conditions , which is a terrestrial planet within the habitable zone of the star .
based on reasonable estimates of the frequency of terrestrial planets with such conditions , we use equation [ moddrake ] with a conservative @xmath11 value of 10% .
shown in figure [ planets ] is a histogram of all stars within 100 parsecs of earth based on data from the _ hipparcos _ mission .
the dark region shows an estimate of the distribution of nearby stars which could have a snap - contaminated planet based on the above assumptions .
this would mean that there are more than 2,500 such systems within 100 parsecs of earth .
if that does n't scare the bejeezus out of you then you may need to check your pulse !
furthermore , the projected frequency of snap planets explains a contradiction which has long troubled the proposition that intelligent life is common : the _ fermi paradox_. the premise of the paradox is that the timescale for extraterrestrial civilizations to spread throughout the galaxy is small compared with stellar lifetimes , and so we should have encountered our neighbors by now .
our work here shows the resolution of the paradox to be quite simple .
the desolation of a civilization requires only that it encounter a single case of snap during its exploration phase , after which the entire civilization will collapse .
let us not repeat history by rushing in to where our predecessors ought to have feared to tread .
now that your trousers are presumably the appropriate shade of brown , we must determine how the snap planets may be detected remotely and thus avoid them like ... well , the plague . to quantify this , we first need to identify what it is we are actually looking for .
a defining outcome of a zombie apocalypse is the death of all animal life on the planetary surface .
this will result in the transfer of a substantial fraction of the total biomass to the atmosphere through the process of decomposition .
all animals undergo similar stages of decomposition : fresh , bloat , active decay , advanced decay , and dry remains .
the primary decomposition stages during which the purging of gases and fluids occur are the bloat and active decay stages . for earth - based animals ,
the primary gases produced during this process are carbon dioxide , hydrogen sulfide , ammonia , and methane .
the extent to which this translates into strong signatures within the planetary atmosphere depends on the relative mass of the biosphere being converted to these gases .
humans constitute roughly 350 million tonnes of terrestrial biomass with an additional 700 million tonnes available via domesticated animals .
the cross contamination of snap for humans versus animals varies depending on the movie / literature source .
even if humans are the only species to succumb to such an infection , we can expect the levels of decomposition gases described above to at least double and probably increase by factors of several .
incidentally , the animal species on earth with the highest biomass is antarctic krill who apparently have natural selection all figured out .
however , it 's unknown what a zombie apocalypse involving krill would look like and there is certainly a cinema niche awaiting any film director who would care to portray it .
the gases released by the decomposition process described above may be used to remotely detect the zombie - afflicted worlds . in particular , the strong atmospheric presence of co2 , h2s , ch4 , and nh3 will reveal those locations where massive amounts of death and decay have recently taken place .
the strength of the respective signatures for these gases in emission and transmission spectra will vary greatly .
the presence of increased levels of h2s , for example , will have a relatively weak associated signature . however , there are other atmospheric processes that occur as part of apocalypse - level decomposition that more than compensate and deliver an unambiguous necro - signature .
figure [ spectrum ] shows the spectral signatures associated with a zombie apocalypse on an earth - like planet .
the left panel shows the transmission spectrum that can be obtained by transit spectroscopy while the right panel shows the thermal emission of the planet that can be observed either by secondary eclipse spectroscopy or infrared nulling interferometry if the planet does not transit its host star .
two cases are compared to present earth .
one is a moderate snap with less than 10% of an earth - sized human and animal biomass affected by the infection .
this results in enhanced levels of some atmospheric gases : 2 pal ( present atmospheric level ) of co2 , and 5 pal of ch4 , n2o and nh3 , due to the putrefaction and disruption of the nitrogen cycle ( known as the savini effect ) .
this also results in an increased greenhouse warming and a mean surface temperature of 296 k instead of 288 k on present earth .
we also modeled a major snap event , assuming a biomass twice that of present earth and a 90% infection . in this case
, we find 4 pal of co2 , 20 pal of ch4 , and 50 pal of n2o and nh3 .
spectral features associated with all these species reveal the rise of zombies at a planetary scale .
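for reference , the two modeled scenarios can be collected in a small data structure ; the numbers below are transcribed from the text ( pal = present atmospheric level ) , and the field names are labels chosen here only for illustration .

```python
# atmospheric enhancements for the two modeled snap scenarios , as quoted above .
# values are in units of pal ( present atmospheric level ) ; temperature in kelvin .
snap_scenarios = {
    "moderate": {            # < 10% of an earth-sized biomass affected
        "co2": 2, "ch4": 5, "n2o": 5, "nh3": 5,
        "surface_temperature_k": 296,
    },
    "major": {               # biomass twice present earth , 90% infection
        "co2": 4, "ch4": 20, "n2o": 50, "nh3": 50,
        "surface_temperature_k": None,   # not quoted in the text
    },
}
```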
if you detect a signature such as the one described above , here is what you should do .
first , hyper - ventilate into a paper bag .
second , call the person who occupies your country 's highest office whilst screaming hysterically . neither action will help the situation , but it will make you feel like you 've actually done something useful .
due to the animation aspect of a zombie and its desire to infect others , the corpse is invariably exposed to the elements .
additionally , conflict between the zombies and those not yet infected will produce high temperature conditions .
the combination of these two environmental effects will be to accelerate the rate of decomposition ( see section [ decomp ] ) and thus produce a relatively brief window in which signatures of the apocalypse may persist in the atmosphere .
there are various models which may be used to determine the spread of infection and thus the rise of decomposition gases in the atmosphere .
a broadly applicable model to use for the infection rate is the basic model outbreak scenario of @xcite since we are not assuming that the primary species with the infection is intelligent ( yes , including earth ) .
this model predicts an unimpeded spread of the infection and a correspondingly rapid rise in decomposition gases .
this is shown by the solid line in figure [ conc ] .
alternatively , one may consider the latent infection model in which a certain fraction of the zombies are destroyed as they are created by the astute uninfected population . as shown by the dashed line in figure [ conc ]
, this slows the release of decomposition gases into the atmosphere but results in only a small delay . in either case , the 100% fatality rate produces a period of at least one year during which the necro - signatures described in section [ necro ] will be at their maximum amplitude , gradually decreasing over the following couple of years . for biomasses larger than that currently present on earth , this period of maximum amplitude will be proportionally longer .
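to make the difference between the basic and the latent infection scenarios concrete , a minimal susceptible - zombie - removed integration is sketched below ; the functional form and every rate constant are illustrative assumptions , not the model actually used to produce figure [ conc ] .

```python
# a minimal susceptible - zombie - removed ( szr ) sketch of an outbreak , integrated
# with a simple euler step . the equations and parameter values are illustrative
# assumptions only ; they are not the model behind figure [ conc ] .
def run_outbreak(s0=1.0, z0=1e-6, beta=0.8, alpha=0.05, dt=0.1, days=365):
    s, z, r = s0, z0, 0.0
    history = []
    for step in range(int(days / dt)):
        new_infections = beta * s * z * dt   # bites convert susceptibles
        destroyed = alpha * s * z * dt       # zombies destroyed by survivors
        s -= new_infections
        z += new_infections - destroyed
        r += destroyed
        history.append((step * dt, s, z, r))
    return history

trace = run_outbreak()
print("fraction still uninfected after one year: %.3e" % trace[-1][1])
```

a latent - infection variant would simply add a delay compartment between infection and full zombie - hood , slowing the initial rise as described above .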
removal of these gases from the atmosphere assumes absorption by liquid water oceans and other mineral chemical reactions .
however , it is possible that a new equilibrium is reached by which the necro - signatures could persist for much longer .
we have shown that there is a significantly non - zero probability that in the search for life in the universe we will also encounter large amounts of undeath . any person who has been exposed to
even a relatively benign zombie film understands the threat posed by this heinous malady .
this is not to be trifled with .
therefore the risk posed by encountering a snap - contaminated planet can not be overstated .
we have shown that the sign - posts for snap worlds are present and detectable in exoplanet atmospheres .
we have also shown that these signatures may not persist for very long in the upper atmosphere which emphasizes the need for continuous observations .
an extension of the necro - signature would be produced by worlds where advanced civilizations existed , due to the considerable time required for the breakdown of the industrial infrastructure left behind .
one may well point out that there are numerous scenarios other than a zombie apocalypse that could equally quell all life on a planet .
however , we argue that none of those scenarios are anywhere near as scary as being eaten by a zombie and so we justifiably ignore those other possibilities .
the best chance that we as a civilization have of preventing a future encounter with a zombie virus is to carefully monitor and catalog the snap - contaminated planets .
although this task would require the dedicated use of the james webb space telescope ( jwst ) , even that will likely be insufficient to meet the challenge of monitoring all stars with the needed signal - to - noise .
thus we strongly advocate the construction of a fleet of no fewer than 10 jwsts with _ increased _ apertures ( 12 meters should do the trick ! ) .
these should be designed to also operate together as a nuller interferometer so they can survey non - transiting nearby exoplanets , which represent the main threat .
transiting planets are generally far away , and we can all agree that an undead neighbor is immensely more scary than a distant zombie . whatever the course of action , we must actively strive to address the threat and to mitigate the risk of annihilation by an exoplanet zombie infection .
the authors would like to thank vetter brewery in heidelberg for the hefeweizen - fueled stagger which inspired this work .
stephen would also like to thank sean raymond for not turning into a zombie and eating his office - mate franck without whom this work would not have been possible .
brooks , m. 2003 , `` the zombie survival guide : complete protection from the living dead '' ; brooks , m. 2007 , `` world war z : an oral history of the zombie war '' ; brooks , m. 2009 , `` the zombie survival guide : recorded attacks '' ; kay , g. 2012 , `` zombie movies : the ultimate guide '' , 2nd edition ; munz , p. , hudea , i. , imad , j. , smith , r.j .
2009 , infectious disease modelling research progress , pp .
133 - 150 ; russell , j. 2005 , `` book of the dead : the complete history of zombie cinema '' ; swain , f. 2013 , `` how to make a zombie : the real life ( and death ) science of reanimation and mind control '' ; vuckovic , j. 2011 , `` zombies ! : an illustrated history of the undead '' | as we learn more about the frequency and size distribution of exoplanets , we are discovering that terrestrial planets are exceedingly common . the distribution of orbital periods in turn results in many of these planets being the occupants of the habitable zone of their host stars . here
we show that a conclusion of prevalent life in the universe presents a serious danger due to the risk of spreading spontaneous necro - animation psychosis ( snap ) , or zombie - ism .
we quantify the extent of the danger posed to earth through the use of the zombie drake equation and show how this serves as a possible explanation for the fermi paradox .
we demonstrate how to identify the resulting necro - signatures present in the atmospheres where a zombie apocalypse may have occurred so that the risk may be quantified .
we further argue that it is a matter of planetary defense and security that we carefully monitor and catalog potential snap - contaminated planets in order to exclude contact with these worlds in a future space - faring era . |
understanding the statistical properties of a fully developed turbulent velocity field from the lagrangian point of view is a challenging theoretical and experimental problem .
it is a key ingredient for the development of stochastic models for turbulent transport in such diverse contexts as combustion , pollutant dispersion , cloud formation , and industrial mixing.@xcite progress has been hindered primarily by the presence of a wide range of dynamical timescales , an inherent property of fully developed turbulence . indeed , for a complete description of particle statistics , it is necessary to follow their paths with very fine spatial and temporal resolution , on the order of the kolmogorov length and time scales @xmath0 and @xmath1 .
moreover , the trajectories should be tracked for long times , order the eddy turnover time @xmath2 , requiring access to a vast experimental measurement region .
the ratio of the above timescales can be estimated as @xmath3 , and the microscale reynolds number @xmath4 ranges from hundreds to thousands in typical laboratory experiments . despite these difficulties , many experimental and numerical studies of lagrangian turbulence
have been reported over the years.@xcite here , we present a detailed comparison between state - of - the - art experimental and numerical studies of high reynolds number lagrangian turbulence
. we focus on single particle statistics , with time lags ranging from smaller than @xmath5 to order @xmath2 .
in particular , we study the lagrangian velocity structure functions ( lvsf ) , defined as $ s_p ( \tau ) = \langle [ v ( t + \tau ) - v ( t ) ]^p \rangle $ , where $ v $ denotes a single velocity component . in the past , the corresponding eulerian quantities , _
i.e. _ the moments of the spatial velocity increments , have attracted significant interest in theory , experiments , and numerical studies ( for a review see ref . ) .
it is now widely accepted that spatial velocity fluctuations are intermittent in the inertial range of scales , for @xmath8 , @xmath9 being the largest scale of the flow . by intermittency we mean anomalous scaling of the moments of the velocity increments , corresponding to a lack of self - similarity of their probability density functions ( pdfs ) at different scales . in an attempt to explain eulerian intermittency ,
many phenomenological theories have been proposed , either based on stochastic cascade models ( _ e.g. _ multifractal descriptions @xcite ) , or on closures of the navier - stokes equations.@xcite common to all these models is the presence of non - trivial physics at the dissipative scale , @xmath10 , introduced by the complex matching of the wild fluctuations in the inertial range and the dissipative smoothing mechanism at small scales.@xcite numerical and experimental observations show that clean scaling behavior for the eulerian structure functions is found only in a range @xmath11 ( see ref . for a collection of experimental and numerical results ) . for spatial scales
@xmath12 , multiscaling properties , typical of the intermediate dissipative range , are observed due to the superposition of inertial range and dissipative physics.@xcite similar questions can be raised in the lagrangian framework : ( i ) is there intermittency in lagrangian statistics ? ( ii ) is there a range of time lags where clean scaling properties ( _ i.e. _ power law behavior ) can be detected ?
( iii ) are there signatures of the complex interplay between inertial and dissipative effects for small time lags @xmath13 ? in this paper
we shall address the above questions by comparing accurate direct numerical simulations ( dns ) and laboratory experiments .
unlike eulerian turbulence , the study of which has attracted experimental , numerical and theoretical efforts for the last thirty years , lagrangian studies have become available only very recently , mainly due to the severe difficulty of obtaining accurate experimental and numerical data at sufficiently high reynolds numbers . consequently
, the understanding of lagrangian statistics is still poor .
this explains the absence of consensus on the scaling properties of the lvsf .
in particular , there have been different assessments of the scaling behavior @xmath14 , mainly due to the desire to extract a single number , _
i.e. _ the scaling exponent @xmath15 , over a range of time lags . measurements using acoustic techniques @xcite gave the first values of the exponents @xmath15 , measuring scaling properties in the range @xmath16 .
subsequently , experiments based on cmos sensors @xcite provided access to scaling properties for shorter time lags , @xmath17 , finding more intermittent values , though compatible with ref . .
dns data , obtained at lower reynolds number , allowed simultaneous measurements in both of these ranges.@xcite for @xmath18 , scaling exponents were found to be slightly less intermittent than those measured with the acoustic techniques , though again compatible within error bars . on the other hand ,
dns data @xcite for small time lags , @xmath19 , agree with scaling exponents measured in ref . .
the primary goal of this paper is to critically compare state - of - the - art numerical and experimental data in order to analyze intermittency at both short and long time lags .
this is a necessary step both to bring lagrangian turbulence up to the same scientific standards as eulerian turbulence and to resolve the conflict between experiment and simulations ( see also refs . ) . to illustrate some of the difficulties discussed above , in fig .
[ fig:0 ] we show a compilation of experimental and numerical results for the second - order lagrangian structure function at various reynolds numbers ( see later for details ) .
the curves are compensated with the dimensional prediction given by the classical kolmogorov dimensional theory in the inertial range @xcite , @xmath20 , where @xmath21 is the turbulent kinetic energy dissipation .
the absence of any extended plateau and the trend with the reynolds number indicate that the inertial range , if any , has not developed yet .
the same trends have been observed in other dns studies @xcite and by analyzing the temporal behavior of signals with a given power - law fourier spectrum.@xcite we stress that assessing the actual scaling behavior of the second ( and higher ) order lagrangian velocity structure functions is crucial for the development of stochastic models for lagrangian particle evolution .
indeed , these models are based on the requirement that the second - order lvsf scales as @xmath22 . the issues of whether the predicted scaling is ever reached and ultimately how the lvsf deviate as a function of the reynolds numbers remains to be clarified .
[ figure [ fig:0 ] caption : second - order lagrangian velocity structure function , compensated by the dimensional prediction , at various reynolds numbers and for all data sets . details can be found in tables 1 and 2 . exp2 and exp4 refer to the same reynolds number ( @xmath23 ) , but with different measurement volumes ( larger in exp4 ) ; in particular exp2 and exp4 better resolve the small and large time lag ranges , respectively , and intersect for @xmath24 . we indicate with a solid line the resulting data set made of data from exp2 ( for @xmath25 ) and exp4 ( for @xmath26 ) ; a good overlap among these data is observed in the range @xmath27 . for all data sets , an extended plateau is absent , indicating that the power law regime typical of the inertial range has not yet been achieved , even at the highest reynolds number , @xmath28 , in experiment . ]
moreover ,
an assessment of the presence of lagrangian intermittency calls for more general questions about phenomenological modeling . for instance
, multifractal models derived from eulerian statistics can be easily translated to the lagrangian framework,@xcite with some degree of success.@xcite the material is organized as follows . in section
[ sect : expdns ] , we describe the properties of the experimental setup and the direct numerical simulations , detailing the limitations in both sets of data .
a comparison of lagrangian velocity structure functions is considered in section [ sec : lvsfcomp ] .
section [ sec : lsess ] presents a detailed scale - by - scale discussion of the local scaling exponents , which is the central result of the paper .
section [ sec : concl ] draws conclusions and offers perspectives for the future study of lagrangian turbulence .
before describing the experimental setup and the dns we shall briefly list the possible sources of uncertainties in both experimental and dns data . in general this is not an easy task .
first , it is important to discern the deterministic from the statistical sources of errors .
second , we must be able to assess the quantitative importance of both types of uncertainties on different observables . _
deterministic uncertainties_. for simplicity , we report in this work the data averaged over all three components of the velocity for both the experiments and the dns .
since neither the flows in the experiments nor those in the dns are perfectly isotropic , part of the uncertainty in the reported data comes from the anisotropy . in the experiments
the anisotropy reflects the generation of the flow and the geometry of the experimental apparatus .
the anisotropy in dns is introduced by the finite volume and by the choice of the forcing mechanism . in general , the dns data are quite close to statistical isotropy , and anisotropy effects are appreciable primarily at large scales .
this is also true for the data from the experiment , especially at the higher reynolds numbers .
an important limitation of the experimental data is that the particle trajectories have finite length due both to finite measurement volumes and to the tracking algorithm , which primarily affect the data for large time lags .
it needs to be stressed , however , that in the present experimental set up , because the flow is not driven by bulk forces but by viscous and inertial forces at the blades , the observation volume would in any case be limited by the mean velocity and the time it takes for a fluid particle to return to the driving blades . at the blades the turbulence is strongly influenced by the driving mechanism .
therefore , in the experiments reported here the observation volume was selected to be sufficiently far away from the blades to minimize anisotropy . for short time
lags , the greatest experimental difficulties come from the finite spatial resolution of the camera and the optics , the image acquisition rate , data filtering and post - processing , a step necessary to reduce noise . for dns ,
typical sources of uncertainty at small time lags are due to the interpolation of the eulerian velocity field to obtain the particle position , the integration scheme used to calculate trajectories from the eulerian data , and the numerical precision of floating point arithmetic .
the _ statistical uncertainties _ for both the experimental and dns data arise primarily from the finite number of particle trajectories and especially for dns from the time duration of the simulations .
we note that this problem is also reflected in a residual , large - scale anisotropy induced by the non - perfect averaging of the forcing fluctuations in the few eddy turnover times simulated .
the number of independent flow realizations can also contribute to the statistical convergence of the data . while it is common to obtain experimental measurements separated by many eddy turnover times ,
typical dns results contain data from at most a few statistically independent realizations .
we stress that , particularly for lagrangian turbulence , only an in - depth comparison of experimental and numerical data will allow the quantitative assessment of uncertainties .
for instance , as we shall see below , dns data can be used to investigate some of the geometrical and statistical effects induced by the experimental apparatus and measurement technique .
this enables us to quantify the importance of some of the above mentioned sources of uncertainty directly .
dns data are , however , limited to smaller reynolds number than experiment ; therefore only data from experiments can help to better quantify reynolds number effects .
the most comprehensive experimental data of lagrangian statistics are obtained by optically tracking passive tracer particles seeded in the fluid .
images of the tracer particles are analyzed to determine their motion in the turbulent flow.@xcite due to the rapid decrease of the kolmogorov scale with reynolds number in typical laboratory flows , previous experimental measurements were often limited to small reynolds numbers.@xcite the kolmogorov time scale at @xmath29 in a laboratory water flow has so far been resolved only by using four high speed silicon strip detectors originally developed for high - energy physics experiments .
@xcite the one - dimensional nature of the silicon strip detector , however , restricted the three dimensional tracking to a single particle at a time , limiting severely the rate of data collection .
recent advances in electronics technology now allow simultaneous three dimensional measurements of @xmath30 particles at a time , by using three cameras with two - dimensional cmos sensors .
high - resolution lagrangian velocity statistics at reynolds numbers comparable to those measured using silicon strip detectors are therefore becoming available.@xcite lagrangian statistics can also be measured acoustically .
the acoustic technique measures the doppler frequency shift of ultrasound reflected from particles in the flow , which is directly proportional to their velocity.@xcite the size of the particles needed for signal strength in the acoustic measurements can be significantly larger than the kolmogorov scale of the flow .
consequently , the particles do not follow the motion of fluid particles,@xcite and this makes the interpretation of the experimental data more difficult.@xcite in the simulations , the main systematic error for small time lags comes from the interpolation of the eulerian velocity fields needed to integrate the equation for particle positions , $ \dot { \bf x } ( t ) = { \bf u } ( { \bf x } ( t ) , t ) $ . of course , high - order interpolation schemes such as third - order taylor series interpolation or cubic splines partially remove this problem .
cubic splines give higher interpolation accuracy , but they are more difficult to use in implementations that rely on secondary storage.@xcite it has been reported @xcite that cubic schemes may resolve the most intense events better than linear interpolation , especially for acceleration statistics ; the effect , however , appears to be rather small especially as far as velocity is concerned .
more crucial than the order of the interpolation scheme is the resolution of the eulerian grid in terms of the kolmogorov length scale . to enlarge the inertial range as much as possible , typical eulerian simulations tend to poorly resolve the smallest scale velocity fluctuations by choosing a grid spacing @xmath32 larger than the kolmogorov scale @xmath0 .
since this strategy may be particularly harmful to lagrangian analysis , here it has been chosen to better resolve the smallest fluctuations by choosing @xmath33 and to use the simple and computationally less expensive linear interpolation .
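as a sketch of the linear interpolation step mentioned above , the following routine evaluates one velocity component stored on a triply periodic grid at an arbitrary particle position ; the grid size , spacing , and field are placeholder inputs , and the actual dns uses its own optimized implementation .

```python
import numpy as np

# sketch : trilinear interpolation of one velocity component , stored on a
# uniform triply periodic grid , at an off-grid particle position .
def trilinear(field, pos, dx):
    """field: (n, n, n) array ; pos: 3-component position ; dx: grid spacing."""
    n = field.shape[0]
    s = pos / dx                       # position in grid units
    i0 = np.floor(s).astype(int)       # lower corner of the enclosing cell
    w = s - i0                         # fractional distance inside the cell
    val = 0.0
    for cx in (0, 1):
        for cy in (0, 1):
            for cz in (0, 1):
                weight = ((1 - w[0]) if cx == 0 else w[0]) * \
                         ((1 - w[1]) if cy == 0 else w[1]) * \
                         ((1 - w[2]) if cz == 0 else w[2])
                val += weight * field[(i0[0] + cx) % n,
                                      (i0[1] + cy) % n,
                                      (i0[2] + cz) % n]
    return val

# toy usage with a random field on a 32^3 periodic grid of spacing dx = 1
u = np.random.default_rng(1).normal(size=(32, 32, 32))
print(trilinear(u, np.array([3.2, 10.7, 31.9]), dx=1.0))
```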
we stress that having well resolved dissipative physics for the eulerian field is also very important for capturing the formation of rare structures on a scale @xmath34 .
moreover , as discussed in ref .
, such structures , because of their filamentary geometry , may influence not only viscous but also inertial range physics .
another possible source of error comes from the loss of accuracy in the integration of eq .
( [ eq : dxdt ] ) for very small velocities due to round - off errors .
this problem can be overcome by adopting higher - order schemes for temporal discretization .
for extremely high reynolds numbers it may also be necessary to use double precision arithmetic , while for moderate @xmath4 , single precision , which was adopted in the present dns , is sufficient for accurate results ( see , _
e.g. _ , ref . ) .
details of the dns analyzed here can be found elsewhere @xcite ; here , we simply state that the lagrangian tracers move according to eq .
( [ eq : dxdt ] ) , in a cubic , triply periodic domain of side @xmath35 .
dns parameters are summarized in table [ tab : dns ] .
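for completeness , a minimal time - stepping loop for eq . ( [ eq : dxdt ] ) is sketched below , using a midpoint ( second - order runge - kutta ) step and a generic velocity - interpolation routine such as the trilinear one sketched earlier ; the helper names , fields , and time step are placeholders rather than the actual integration scheme of the dns .

```python
import numpy as np

# sketch : advance a tracer according to dx/dt = u(x(t), t) with a midpoint
# (second-order runge-kutta) step . `interp_velocity` is a placeholder for any
# routine returning the 3-component fluid velocity at position x and time t .
def advance_tracer(x, t, dt, interp_velocity, box_size):
    u1 = interp_velocity(x, t)                       # velocity at the start
    x_mid = (x + 0.5 * dt * u1) % box_size           # provisional midpoint
    u2 = interp_velocity(x_mid, t + 0.5 * dt)        # velocity at the midpoint
    return (x + dt * u2) % box_size                  # periodic wrap of the domain

# toy usage with a frozen , analytically defined velocity field
def toy_velocity(x, t):
    return np.array([np.sin(x[1]), np.cos(x[2]), np.sin(x[0])])

x = np.array([0.1, 0.2, 0.3])
for step in range(1000):
    x = advance_tracer(x, step * 1e-3, 1e-3, toy_velocity, box_size=2 * np.pi)
print("tracer position after 1000 steps:", x)
```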
let us now compare the experimental and numerical measurements of the lagrangian velocity structure functions directly .
figures [ fig : lsf2 ] and [ fig : lsf4 ] show a direct comparison of lvsfs of order @xmath36 and @xmath37 for all data sets .
the curves are plotted using the dimensional normalization , assuming that @xmath38 ( where we use @xmath39 and @xmath40 ) .
such a rescaling can be generalized as @xmath41 .
both the @xmath42 and @xmath43 order moments show a fairly good collapse , especially in the range of intermediate time lags . however , some dependence can be observed both on @xmath4 ( see fig . [
fig : lsf4 ] ) and on the size of the measurement volume ( compare exp2 and exp4 ) .
both effects call for a more quantitative understanding . a common way to assess how the statistical properties change for varying time lags is to look at dimensionless quantities such as the generalized flatness , $ f_p ( \tau ) = s_{2p } ( \tau ) / [ s_2 ( \tau ) ]^{p } $ . we speak of _ intermittency _ when such a function changes its behavior as a function of @xmath45 : this is equivalent to the pdf of the velocity fluctuations @xmath46 , normalized to unit variance , changing shape for different @xmath45.@xcite
[ figure [ fig : flat ] caption : generalized flatness of order @xmath36 and @xmath47 , measured from dns2 , exp2 , and exp4 . data from exp2 and exp4 are connected by a continuous line . the gaussian values are given by the two horizontal lines . the curves have been averaged over the three velocity components and the error bars are computed from the scatter between the three different components as a measure of the effect of anisotropy . statistical errors due to the limitation in the statistics are evaluated by dividing the whole data sets into sub samples and comparing the results . these statistical errors are always smaller than those estimated from the residual anisotropy . ]
when the generalized flatness varies with @xmath45 as a power law , @xmath48 , the scaling laws are _
intermittent_. such behavior is very difficult to assess quantitatively , since many decades of scaling are typically needed to remove the effects of sub - leading contributions ( for instance , it is known that eulerian scaling may be strongly affected by slowly decaying anisotropic fluctuations @xcite ) .
we are interested in quantifying the degree of intermittency as @xmath45 changes . in fig .
[ fig : flat ] , we plot the generalized flatness @xmath49 for @xmath36 and @xmath47 for the data sets dns2 , exp2 and exp4 .
numerical and experimental results are very close , and clearly show that the intermittency changes considerably going from low to high @xmath45 .
the difficulty in trying to characterize these changes quantitatively is that , as shown by fig .
[ fig : flat ] , one needs to capture variations over many orders of magnitude . for this reason , we prefer to look at observables that remain @xmath50 over the entire range of scales and which convey information about intermittency _ without having to fit any scaling exponent_. with this aim
, we measured the logarithmic derivative ( also called local slope or local exponent ) of the structure function of order @xmath51 , @xmath52 , with respect to a reference structure function,@xcite for which we chose the second - order @xmath53 : $ \zeta_p ( \tau ) = \frac { d \log s_p ( \tau ) } { d \log s_2 ( \tau ) } $ . we stress the importance of taking the derivative with respect to a given moment : this is a direct way of looking at intermittency with no need of _ ad hoc _ fitting procedures and no requirement of power - law behavior .
this procedure,@xcite which goes under the name of extended self similarity @xcite ( ess ) , is particularly important when assessing the statistical properties at reynolds numbers not too high and/or close to the viscous dissipative range .
a non - intermittent behavior would correspond to @xmath55 . in the range of @xmath45 for which the exponents @xmath56 are different from the dimensional values @xmath57 , structure functions are intermittent and correspondingly the normalized pdfs of @xmath58 change shape with @xmath45 .
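the local exponents defined above can be estimated directly from tabulated structure - function data ; the sketch below takes the logarithmic derivative with finite differences , with synthetic arrays standing in for the measured moments .

```python
import numpy as np

# sketch : local (ess) exponent zeta_p(tau) = d log s_p / d log s_2 , estimated
# with finite differences from tabulated structure functions . the arrays below
# are synthetic placeholders for measured s_p(tau) and s_2(tau).
def local_slope(s_p, s_2):
    return np.gradient(np.log(s_p), np.log(s_2))

tau = np.logspace(-1, 2, 60)                  # time lags in units of tau_eta
s2 = tau ** 1.0                                # toy second-order lvsf
s4 = tau ** 1.7                                # toy fourth-order lvsf
zeta4 = local_slope(s4, s2)
print("local slope (should be close to 1.7 everywhere):", zeta4[::15])
```

the curves shown in figs . [ fig : ess4 ] and [ fig : ess6 ] are of this type .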
figures [ fig : ess4 ] and [ fig : ess6 ] show the logarithmic local slopes of the numerical and experimental data sets for several reynolds numbers for @xmath37 and @xmath59 versus time normalized to the kolmogorov scale , @xmath60 .
these are the main results of our analysis .
the first observation is that for both orders @xmath37 and @xmath59 , the local slopes @xmath61 deviate strongly from their non - intermittent values @xmath62 and @xmath63 .
there is a tendency toward the differentiable non - intermittent limit @xmath64 only for very small time lags @xmath65 . in the following
, we shall discuss in detail the small and large time lag behavior .
_ small time lags .
_ for the structure function of order @xmath37 ( fig . [ fig : ess4 ] ) , we observe the strongest deviation from the non - intermittent value in the range of time @xmath66 . it has previously been proposed that this deviation is associated with particle trapping in vortex filaments.@xcite this fact has been supported by dns investigations of inertial particles.@xcite the agreement between the dns and the experimental data in this range is remarkable . for @xmath59 ( fig . [
fig : ess6 ] ) , the scatter among the data is higher due to the fact that , with increasing order of the moments , inaccuracies in the data become more important .
still , the agreement between dns and the experimental data is excellent .
unlike the @xmath37 case , a dependence of mean quantities on the reynolds number is detectable here , though it lies within the error bars .
the experimental data set for @xmath59 , at the highest reynolds number ( @xmath67 ) , show a detectable trend in the local slope toward less intermittent values in the dip region , @xmath68
. this change may potentially be the signature of vortex destabilization at high reynolds number which would reduce the effect of vortex trapping .
it is more likely , however , that at this very high reynolds number both spatial and temporal resolution of the measurement system may not have been sufficient to resolve the actual trajectories of intense events.@xcite we consider this to be an important open question for future studies . _
larger time lags .
_ for @xmath69 up to @xmath2 , the experimental data obtained in small measurement volumes ( exp1,2,3 ) do not resolve the physics , as they develop both strong oscillations and a common trend toward smaller and smaller values of the local slopes for increasing @xmath45 .
this may be attributed to finite volume corrections ( see also sect .
[ sec : vol ] ) . for these reasons ,
the data of exp1,2,3 are not shown for these time ranges .
on the other hand , the data from exp4 , obtained from a larger measurement volume , allow us to compare experiment and simulation . here
the local slope of the experimental data changes more slowly , very much akin to the simulations .
this suggests that in this region high reynolds number turbulence may show a plateau , although the current data can not give a definitive answer to this question . for @xmath59 , a similar trend
is detected , though with larger uncertainties .
the excellent quantitative agreement between dns and the experimental data gives us high confidence in the local slope behavior as a function of time lag . in light of these results
, we can finally clarify the recent apparent discrepancy between measured scaling exponents of the lvsfs in experiments @xcite and dns,@xcite which has led to some controversy in the literature.@xcite in the experimental work @xcite , scaling exponents were measured by fitting the curves in fig .
[ fig:6 ] in the range @xmath19 , where the compensated second order velocity structure functions reach a maximum , as shown in fig .
[ fig:0 ] ( measuring the fourth and sixth order scaling exponents @xmath61 to be @xmath70 and @xmath71 , respectively ) . on the other hand , in the simulations
@xcite scaling exponents were measured in the range of time lags @xmath72 ( finding the values @xmath73 and @xmath74 ) .
it needs to be emphasized , however , that the limits induced by the finite volume and the finite inertial range extension in both dns and experimental data do not allow a definitive statement to be made about the behavior in the region @xmath75 .
we may ask instead whether the relative extension of the interval where we see the large dip at @xmath76 and of the possible plateau , observed for @xmath75 both in the numerical and experimental data ( see exp4 data set ) , becomes larger or smaller with increasing reynolds number.@xcite if the dip region ( the one presumably affected by vortex filaments ) flattens , it would give the asymptotically stable scaling properties of lagrangian turbulence . if instead the apparent plateau region , at large times , increases in size while the effect of high intensity vortices remains limited to time lags around @xmath77 , the plateau region would give the asymptotic scaling properties of lagrangian turbulence .
this point remains a very important question for the future because , as of today , it can not be answered conclusively by either experiments or simulations .
[ figure [ fig : times ] caption : statistics of the time @xmath78 that a trajectory lasts , vs @xmath79 , for the experiment exp2 and for dns2 trajectories in different numerical measurement domains @xmath80 . ]
as noted above , the exp4 data for @xmath81 develop an apparent plateau at a smaller value than the dns data . in this section ,
we show how the dns data can be used to suggest a possible origin for this mismatch .
we investigate the behavior of the local slopes for the simulations when the volume of size @xmath82 , within which particles are tracked , is systematically decreased .
essentially only trajectories which stay in this sub - volume are considered in the analysis , mimicking what happens in the experimental measurement volume .
we considered volume sizes @xmath83 in the range which goes from the full box size @xmath84 to @xmath85 , and we average over all the sub - boxes to increase the statistical samples . in fig .
[ fig : times ] , we plot the statistics of the trajectory durations for both the experiment and dns by varying the measurement volume size . for @xmath86 ,
the modified dns statistics are essentially indistinguishable from the experimental results .
it is now interesting to look at the lvsf measured from these finite length numerical trajectories .
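the sub - volume procedure can be mimicked with a few lines of post - processing : for each trajectory , keep only the contiguous segments that stay inside a cube of side @xmath83 . the sketch below is an illustrative implementation with placeholder inputs , not the analysis code used for the figures discussed here .

```python
import numpy as np

# sketch : emulate a finite measurement volume by keeping , for each tracer ,
# only the contiguous trajectory segments that remain inside a cube of side l_v
# centered in the (periodic) domain . positions is an array of shape (nt, 3).
def segments_inside(positions, l_v, box_size):
    center = box_size / 2.0
    inside = np.all(np.abs(positions - center) <= l_v / 2.0, axis=1)
    segments, start = [], None
    for i, flag in enumerate(inside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append(positions[start:i])
            start = None
    if start is not None:
        segments.append(positions[start:])
    return segments

# toy usage : a random walk started at the box center , sub-volume 1/4 of the box
rng = np.random.default_rng(2)
steps = rng.normal(scale=0.01, size=(50_000, 3))
traj = (np.pi + np.cumsum(steps, axis=0)) % (2 * np.pi)
pieces = segments_inside(traj, l_v=np.pi / 2, box_size=2 * np.pi)
print("segments:", len(pieces), " longest:", max(len(p) for p in pieces))
```

structure functions computed only from such segments then inherit the finite - length bias discussed in the following .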
this shows that the method we devised is able to mimic the presence of a finite measurement volume as in experiments . in fig .
[ fig : alea ] , we show the fourth - order lvsf obtained by considering the full length trajectories and the trajectories living in a sub - volume as explained above .
what clearly appears from fig .
[ fig : alea ] is that the finite length of the trajectories lowers the value of the structure functions for time lags of the order of @xmath87 .
indeed , the finite - length statistics give a signal that is always lower than the fully averaged quantity : this effect may be due to a bias toward slow , less energetic particles , which have a tendency to linger inside the volume for longer times than fast particles , introducing a systematic change in the statistics .
note that this is the same trend detected when comparing exp2 and exp4 in figs .
[ fig : lsf ] . in fig .
[ fig : aleb ] , we also show the effect of the finite measurement volume on the local slope for @xmath37 . by decreasing the observation volume
, we observe a trend towards a shorter and shorter plateau with smaller and smaller values
. this could be the source of the small offset between the plateaux developed by the exp4 data and the dns data in fig .
[ fig:6 ] . for the sake of clarity , we should recall that in the dns particles can travel across a cubic fully periodic volume , so during their full history they can reenter the volume several times . in principle , this may affect the results for long time delays .
however , since the particle velocity is taken at different times , we may expect possible spurious correlations induced by the periodicity to be very small , if not absent .
this is indeed confirmed in fig .
[ fig : aleb ] where we can notice the perfect agreement between data obtained by using periodic boundary conditions or limiting the analysis to subvolumes of size @xmath88 ( i.e. not retaining the periodicity ) and even @xmath89 . as discussed in sect .
[ sect : expdns ] , results at small time lags can be slightly contaminated by several effects both in dns and experiments .
dns data can be biased by resolution effects due to interpolation of the eulerian velocity field at the particle position . in experiments uncorrelated
experimental noise needs to be filtered to recover the trajectories.@xcite to understand the importance of such effects quantitatively , we have modified the numerical lagrangian trajectories in the following way .
first , we have introduced a random noise of the order of @xmath90 to the particle position , in order to mimic the noise present in the experimental particle detection .
second , we have implemented the same gaussian filter of variable width used to smooth the experimental trajectories @xmath91 .
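the two manipulations just described , adding small position noise and applying a gaussian filter , amount to a short post - processing step ; a possible implementation is sketched below , with the noise amplitude and filter width left as free placeholder parameters .

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# sketch : perturb a tracer trajectory with small-scale position noise and then
# smooth it with a gaussian filter , mimicking experimental detection noise and
# the filtering applied to experimental tracks . amplitudes are placeholders .
def add_noise_and_filter(positions, noise_amplitude, filter_width_samples):
    rng = np.random.default_rng(3)
    noisy = positions + rng.normal(scale=noise_amplitude, size=positions.shape)
    # filter each coordinate along the time axis
    return gaussian_filter1d(noisy, sigma=filter_width_samples, axis=0)

# toy usage : smooth a noisy 3d trajectory along a straight-line path
traj = np.cumsum(np.full((10_000, 3), 1.0e-3), axis=0)
smooth = add_noise_and_filter(traj, noise_amplitude=1.0e-3, filter_width_samples=5)
print("rms deviation from the clean path:", np.sqrt(np.mean((smooth - traj) ** 2)))
```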
we also tested the effect of filtering by processing experimental data with filters of different length . in figs . [
fig : filterdns ] and [ fig : filterexp ] , we show the local scaling exponents for @xmath81 as measured from these modified dns trajectories together with the results obtained from the experiment , for several filter widths .
the qualitative trend is very similar for both the dns and the experiment .
the noise in particle position introduces non - monotonic behavior in the local slopes at very small time lags in the dns trajectories .
this effect clearly indicates that small scale noise may strongly perturb measurements at small time lags , but will not have important consequences for the behavior on time scales larger than @xmath5 . on the other hand ,
the effect of the filter is to increase the smoothness at small time lags slightly ( notice the shift of local slopes curves toward the right for @xmath92 for increasing filter widths ) .
a similar trend is observed in the experimental data ( fig .
[ fig : filterexp ] ) . in this case , choosing the filter width to be in the range @xmath93 $ \tau_{\eta } $ seems to be optimal , minimizing the dependence on the filter width and the effects on the relevant time lags .
understanding filter effects may be even more important for experiments with larger particles , on the order of or comparable with the kolmogorov scale . in those cases , the particle size naturally introduces a filtering by averaging velocity fluctuations over its size , _
i.e. _ , those particles are not faithfully following the fluid trajectories.@xcite
a detailed comparison between state - of - the - art experimental and numerical data of lagrangian statistics in turbulent flows has been presented .
the focus has been on single - particle lagrangian structure functions . only through the critical comparison of experimental and dns data is it possible to achieve a quantitative understanding of the velocity scaling properties over the entire range of time scales and for a wide range of reynolds numbers .
in particular , the availability of high reynolds number experimental measurements allowed us to assess in a robust way the existence of very intense fluctuations , with high intermittency in the lagrangian statistics around @xmath94 $ \tau_{\eta } $ . for
larger time lags @xmath75 , the signature of different statistics seems to emerge , with again good agreement between dns and experiment ( see fig .
[ fig:6 ] ) . whether the trend of logarithmic local slopes at large times is becoming more and more extended at larger and larger reynolds number is an issue for further research .
both experiments and numerics show , in the ess local slopes of the fourth and sixth order lagrangian structure functions , a dip region around time lags @xmath77 and a flattening at @xmath95 . as of today
, it is unclear whether the dip or the flattening region gives the asymptotic scaling properties of lagrangian turbulence
. the question of which region will extend as a function of reynolds number can not be resolved at present , and remains open for future research .
it would also be important to probe the possible relations between eulerian and lagrangian statistics as suggested by simple phenomenological multifractal models.@xcite in these models , the translation between eulerian ( single - time ) spatial statistics and lagrangian statistics is made via the dimensional expression of the local eddy turnover time at scale @xmath96 : @xmath97 .
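written out explicitly , one commonly quoted form of this dimensional bridge is the following ( reported here as an illustration of the idea , not necessarily the exact expression used in the cited models ) :

```latex
% a sketch of the standard multifractal bridge between eulerian and lagrangian
% statistics ; h is the local scaling ( holder ) exponent and D(h) its fractal
% dimension , both taken from the eulerian description .
\tau_r \sim \frac{r}{\delta_r u} , \qquad \delta_r u \sim r^{h}
\;\Longrightarrow\;
\delta_\tau v \sim \tau^{h/(1-h)} , \qquad
\zeta_p^{L} = \min_{h}\left[ \frac{p\,h + 3 - D(h)}{1 - h} \right] .
```

minimizing over $ h $ then gives the lagrangian exponents from the same $ D(h) $ that describes the eulerian field .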
this allows predictions for lagrangian statistics if the eulerian counterpart is known
. an interesting application concerns lagrangian acceleration statistics,@xcite where this procedure has given excellent agreement with experimental measurements . when applied to single - particle velocities , multifractal predictions for the lvsf
scaling exponents are close to the plateau values observed in dns at time lags @xmath75 .
it is not at all clear , however , if this formalism is able to capture the complex behavior of the local scaling exponents close to the dip region @xmath94 $ \tau_{\eta } $ , as depicted in fig .
[ fig:6 ] .
indeed , multifractal phenomenology , as with all multiplicative random cascade models,@xcite does not contain any signature of spatial structures such as vortex filaments .
it is possible that in the lagrangian framework a more refined matching to the viscous dissipative scaling is needed , as was proposed in ref .
, rephrasing known results for eulerian statistics.@xcite even less clear is the relevance for lagrangian turbulence of other phenomenological models , based on super - statistics @xcite , as recently questioned in ref . . the formulation of a stochastic model able to capture the whole shape of local scaling properties from the smallest to the largest time lag , as depicted in fig
. [ fig:6 ] , remains an important open theoretical challenge .
eb , nto and hx gratefully acknowledge financial support from the nsf under contract phy-9988755 and phy-0216406 and by the max planck society .
lb , mc , asl and ft acknowledge j. bec , g. boffetta , a. celani , b. j. devenish and s. musacchio for discussions and collaboration in previous analysis of the numerical dataset .
lb acknowledges partial support from miur under the project prin 2006 .
numerical simulations were performed at cineca ( italy ) under the `` key - project '' grant : we thank g. erbacci and c. cavazzoni for resources allocation .
lb , mc , asl and ft thank the deisa consortium ( co - funded by the eu , fp6 project 508830 ) , for support within the deisa extreme computing initiative ( www.deisa.org ) .
unprocessed numerical data used in this study are freely available from the icfddatabase.@xcite b. l. sawford , p. k. yeung , m. s. borgas , p. vedula , a. la porta , a. m. crawford , and e. bodenschatz , `` conditional and unconditional acceleration statistics in turbulence , '' phys
. fluids * 15 * , 3478 ( 2003 ) .
s. ayyalasomayajula , a. gylfason , l. r. collins , e. bodenschatz , z. warhaft , `` lagrangian measurements of inertial particle accelerations in grid generated wind tunnel turbulence , '' phys .
lett . * 97 * , 144507 ( 2007 ) .
p. k. yeung , s. b. pope , e. a. kurth and a. g. lamorgese , `` lagrangian conditional statistics , acceleration and local relative motion in numerically simulated isotropic turbulence , '' j. fluid .
582 * , 399 ( 2007 ) .
y. kaneda , t. ishihara , m. yokokawa , k. itakura and a. uno , `` energy dissipation rate and energy spectrum in high resolution direct numerical simulations of turbulence in a periodic box , '' phys .
fluids * 15 * , l21 ( 2003 ) .
s. chen , g. d. doolen , r. h. kraichnan and z .- s .
she , `` on statistical correlations between velocity increments and locally averaged dissipation in homogeneous turbulence , '' phys .
fluids a * 5 * , 458 ( 1993 ) .
h. homann , j. dreher and r. grauer , `` impact of the floating - point precision and interpolation scheme on the results of dns of turbulence by pseudo - spectral codes , '' e - arxiv:0705.3144 to appear in comp .
phys . comm . | a detailed comparison between data from experimental measurements and numerical simulations of lagrangian velocity structure functions in turbulence is presented . by integrating information from experiments and numerics , a quantitative understanding of the velocity scaling properties over a wide range of time scales and reynolds numbers
is achieved .
the local scaling properties of the lagrangian velocity increments for the experimental and numerical data are in good quantitative agreement for all time lags .
the degree of intermittency changes when measured close to the kolmogorov time scales or at larger time lags .
this study resolves apparent disagreements between experiment and numerics . |
Three days in Chicago: three kids shot, two killed
Police stand near a car in an alley between Kostner and Kenneth avenues while investigating a fatal shooting in North Lawndale. | Andy Grimm/Sun-Times
Three children were shot in Chicago in less than three days — two fatally — with the third still in critical condition with a gunshot wound to her head.
None was old enough to attend high school. One was three years away from starting kindergarten. All were unintended targets.
Coming on the heels of the most violent year since the mid-90s, Chicago’s 2017 gun violence stats have held steady, with more than 70 homicides in the first 45 days of the year, according to records from police and the Cook County medical examiner’s office.
Three child victims in less than a week, though, is an uncommonly high number.
“One victim of one shooting is one too many, but when innocent children are caught in the crossfire of gun violence and young people have their childhood stolen by stray bullets, our consciences are shaken and our hearts are broken,” Mayor Rahm Emanuel said in an emailed statement Tuesday.
About 1:30 p.m. Tuesday, 2-year-old Lavontay White, a 26-year-old man and 25-year-old woman were in a car in the 2300 block of South Kenneth when another vehicle drove past and someone got out, pulled out a weapon and fired shots, authorities said.
The toddler and man, 26-year-old Lazarec Collins, were both shot in the head and the 25-year-old woman — Lavontay’s pregnant aunt — was shot in the abdomen, police said. Collins and Lavontay were taken to Stroger Hospital, where they died. The woman was taken to Mount Sinai in fair condition.
In the minutes leading up to the shooting, the woman was live-streaming a video to Facebook, in which she and the man, seated in the front, were listening to Chicago rappers Lil Durk and Chief Keef.
The woman’s cheery demeanor changes as she peers outside the car and to her left, shortly before the crack of a gunshot. The woman tumbles out the door of the car and sprints through a lot, an alley, and through a doorway, screaming for help.
The woman bolts into a house, and the screen goes dark, but the audio continues. The woman says she’s been shot in the stomach, but doesn’t want to go to the hospital.
“I can’t go to the hospital,” she says. “They’ll send me to jail.”
Addressing reporters at the crime scene, Chicago Police Supt. Eddie Johnson said: “We just cannot afford to have our children shot down for something they had no involvement in.”
As of 8 p.m., Lavontay and the man were two of the five people killed in Chicago on Valentine’s Day 2017. From 2001 to 2015, a total of 11 people were killed in Chicago on Valentine’s Day, according to city records. In 2016, there were four.
Two girls, 11 and 12 years old, were shot — one fatally — within 30 minutes and 5 miles of each other Saturday night on the South Side. Takiya Holmes — the 11-year-old cousin of Chicago anti-violence activist Andrew Holmes — died Tuesday.
Kanari Gentry-Bowers is still in critical condition, authorities said.
Takiya was sitting next to her 3-year-old brother in the back seat — her mother and aunt were in the front seats — when gunfire erupted about 7:40 p.m. Saturday in the 6500 block of South King Drive in the Parkway Gardens neighborhood.
Takiya’s mother was parked outside a dry cleaning store, where she worked, and planned to exchange cars with a co-worker when someone fired shots, said Patsy Holmes, Takiya’s grandmother.
The girl was pronounced dead by doctors early Tuesday, but remained on life support so her organs could be used for transplants, Holmes said, adding that she hoped a relative with a kidney problem would be a donor match.
Andrew Holmes, a frequent presence at crime scenes across the city, called on anyone with information about the shooting to come forward to police.
“The damage has been done,” he said. “We just want them to step up, turn someone in. The key to this is the community. We got to stop pointing fingers . . . because the killers came out of that community.”
Kanari, 12, was shot while she was outside playing with friends about 7:15 p.m. Saturday at Henderson Elementary School in the 1900 block of West 57th Street in West Englewood.
A tearful Rochetta Tyler, Kanari’s aunt, was on the way to the hospital to visit the girl Tuesday morning.
“Kanari is still fighting for her life,” said Tyler, who also had heard the news about Takiya’s passing.
Patsy Holmes said she had prayed for Kanari.
“Hopefully, she can pull through,” she said.
Takiya and Kanari were among 33 people shot — five fatally — across Chicago last weekend.
Contributing: Stefano Esposito and Mitch Dudek ||||| CHICAGO (AP) — A Chicago toddler was shot and killed on Tuesday in what police suspect was a "gang hit" on a man in a vehicle with her, just a few days after two young girls were shot in the head. It marked the latest spasm of violence in a city struggling to contain such attacks.
Chicago Police Department spokesman Anthony Guglielmi said police suspect the man, a well-known gang member with an extensive criminal history, was the target of the shooting that also left a woman wounded.
No other details were released and he said no arrests had been made.
Over the weekend, two girls ages 11 and 12 were shot in the head by gunmen who were aiming at someone else in an area of the city known for heavy gang activity. Both girls were in critical condition on Monday but police have not updated their condition since.
The shootings highlight the street gang violence that police say was largely responsible for 762 homicides last year — nearly 300 more than occurred in 2015 — and more than 3,500 shooting incidents. That violence has continued this year, with January ending with 51 homicides, the highest total since January 1999 when there were 55.
Those kinds of numbers help explain why Illinois Gov. Bruce Rauner's office said he will announce on Wednesday in his budget speech a push to fund 200 more state police cadets to patrol Chicago-area expressways. In 2016, there were 51 shootings on those roadways, compared to 37 in 2015. State police have said that the gun violence in the city is spilling onto the expressways.
It is unclear how Illinois can find the money to pay for the new state troopers in the midst of a budget crisis. Rauner's office would not elaborate.
The violence on the expressways also prompted the Chicago Crime Commission to ask that state and federal officials find the money to purchase a high-tech "expressway video surveillance system."
"An expressway video surveillance system would be designed to assist law enforcement in identifying and apprehending those responsible for the epidemic of shootings occurring on area expressways," J.R. Davis, the chairman and president of the commission, said in a statement. | – A Chicago toddler was shot and killed on Tuesday in what police suspect was a "gang hit" on a man in a vehicle with him, just a few days after two young girls were shot in the head. It marked the latest spasm of violence in a city struggling to contain such attacks. Chicago Police Department spokesman Anthony Guglielmi said police suspect the man, a well-known gang member with an extensive criminal history, was the target of the shooting that also left a woman wounded. The Chicago Sun-Times identifies the slain toddler as 2-year-old Lavontay White. Gugliemi said no arrests had been made in the case, the AP reports. In separate incidents over the weekend, two girls ages 11 and 12 were shot in the head by gunmen who were aiming at someone else in an area of the city known for heavy gang activity. Takiya Holmes, cousin of anti-violence activist Andrew Holmes, died on Tuesday, while 11-year-old Kamari Gentry-Bowers is still in critical condition, the Sun-Times reports. The shootings highlight the street gang violence that police say was largely responsible for 762 homicides last year—nearly 300 more than occurred in 2015—and more than 3,500 shooting incidents. That violence has continued this year, with January ending with 51 homicides, the highest total since January 1999, when there were 55. |
human cancers rely on multiple overlapping signal transduction pathways to activate and regulate cellular proliferation , survival and migration programs .
the epithelial - to - mesenchymal transition ( emt ) is a critical process in embryonic development for metazoan organisms and a similar process has also been shown to play a role in oncogenic progression and metastasis .
tumor metastasis involves a sequential series of processes which promote and regulate the escape of migratory cancer cells to generate metastatic lesions at distant sites .
the process begins in the primary tumor , where tumor cells dysregulate homotypic cell adhesion , downregulate cell adhesion proteins such as e - cadherin , and upregulate proteins characteristic of a more motile , mesenchymal - like phenotype such as vimentin .
this process requires transcriptional reprogramming to suppress e - cadherin expression via transcription factors associated with emt ( for review see ) .
tumor cells undergoing emt have been shown to undergo cadherin switching , downregulating e - cadherin and compensating with alternate cadherin proteins such as n cadherin .
there is evidence that the downregulation of e - cadherin and upregulation of proteins characteristic of a mesenchymal phenotype may occur preferentially at the invasive edge of a tumor .
initiation of metastasis requires this initial disruption of cell cell junctions and gain of cellular motility , permitting individual cells to migrate away from the primary tumor . in order to migrate through the surrounding extracellular matrix ( ecm ) cells may upregulate secreted proteases such as the matrix metalloproteinases ( mmps ) .
these motile , invasive cells may then cross the endothelial cell barrier and intravasate into the bloodstream . once in the bloodstream ,
these mesenchymal - like cancer cells can travel to distant sites where they extravasate the endothelial cell wall to colonize in a new , supportive niche .
primary tumor cells of specific cellular origins have been shown to preferentially colonize specific tissues , although the reasons for this are not entirely clear .
however it is commonly accepted that once a metastatic tumor cell has implanted in a niche supportive of proliferation , that cell may undergo a mesenchymal - to - epithelial transition ( met ) .
consistent with this , the emerging metastatic tumor often resembles the primary tumor from which it derived both in cellular phenotype and multi - cellular architecture .
it is not clear whether emt - like changes are required for all steps in metastasis , and the possibility remains that emt is a necessary but insufficient step in cancer metastasis .
a hallmark of emt is loss of e - cadherin , a key mediator of cell cell junctions .
numerous studies have shown a high correlation between loss of e - cadherin , the gain of vimentin and tumor invasiveness in cancer cells and patient tumors ( e.g. , [ 4 - 6 ] ) . a downregulation of e - cadherin most frequently results from transcriptional repression , mediated by zinc finger , forkhead domain and bhlh factors including zeb1/tcf8/δef1 , zeb2 ( sip1 ) , snail , slug , foxc2 and twist .
the expression of snail , slug and specific bhlh transcription factors has been implicated in cell survival and acquired resistance to chemotherapy [ 7 - 13 ] .
however loss of e - cadherin alone does not constitute emt , as cells which harbor a mutation in e - cadherin and have lost functional cell - cell junctions do not acquire the additional morphological and transcriptional changes associated with emt [ 14 , 15 ] .
these changes include acquisition of cellular markers characteristic of a mesenchymal cell such as vimentin and fibronectin , expression of e - cadherin - repressing transcription factors , and frequently the acquisition of a migratory or scattering morphology . loss of e - cadherin appears to be a prerequisite for tumor progression and not just a consequence of tumor dedifferentiation . in transgenic mice which spontaneously develop pancreatic tumors ,
e - cadherin expression was shown to decrease with tumor progression , but maintenance of e - cadherin expression during tumorigenesis arrested tumor development at the benign adenoma stage while expression of a dominant negative e - cadherin induced early metastasis .
ectopic expression of e - cadherin is sufficient to suppress tumor cell invasion in vitro and tumor progression in vivo while knock - down of e - cadherin converts cells from non - invasive to invasive .
however , ectopic expression of e - cadherin does not restore an epithelial phenotype in cells which overexpress the transcriptional repressor twist .
this implies that restoration of e - cadherin to mesenchymal cells may be insufficient to reverse emt and confer an epithelial phenotype .
taken together , these observations are consistent with the hypothesis that e - cadherin has a tumor suppressive function and is not simply a marker of tumor differentiation .
accumulating evidence suggests that emt occurs at the level of transcriptional reprogramming and chromatin remodeling [ 18 - 23 ] .
several transcription factors have been implicated in the transcriptional repression of e - cadherin , through interaction with specific e - boxes ( reviewed in ) .
the zinc finger proteins snail , slug , zeb-1 and zeb-2 ( sip1 ) as well as the bhlh factors twist and e12/e47 have been shown to repress e - cadherin and markers of cell polarity .
ectopic expression of twist or zeb-1 is sufficient to downregulate endogenous e - cadherin and induce emt [ 18 , 20 , 25 - 27 ] .
snail has been shown to activate the transcription of the mesenchymal biomarkers vimentin and fibronectin .
moreover , expression of snail has been shown to induce transcriptional downregulation of e - cadherin , to upregulate vimentin , fibronectin and zeb1 and to promote a fibroblastic , invasive cellular phenotype .
the fact that zeb1 is activated by snail implies cooperativity between these factors although zeb1 does not appear to be directly downstream of snail .
elegant in vivo and in vitro studies using a series of breast carcinoma cell lines with distinct metastatic properties identified twist as a key regulator of metastasis .
overexpression of twist in non - cancerous epithelial cells induces expression of mesenchymal cell markers , repression of e - cadherin and an emt - like phenotype .
consistent with this , suppression of twist inhibits metastasis in a mouse mammary carcinoma model .
activation of these transcription factors , leading to initiation of emt and consequently metastasis , may be triggered by a variety of extra- and intracellular signals .
one of the first factors observed to induce emt was scatter factor or hgf [ 29 , 30 ] . since this early observation , many additional emt - inducing signals have been identified .
these include growth factors ( egf , vegf , tgf-β , wnt , sdf , pge2 ) , cytokines ( ilei , interleukins ) , integrin signaling , extracellular matrix proteins ( mmps ) , inflammatory signals ( cox-2 ) , and potentially stress stimuli such as hypoxia , signaling through non - receptor tyrosine kinases such as src and oncogenic activation of receptor tyrosine kinases ( rtks ) [ 31 - 35 ] .
these emt - activating signals can be paracrine , emanating from infiltrating stroma , or autocrine , produced by the tumor cells themselves .
one intriguing report suggests that emt may not only promote the migration of primary breast cancer tumor cells , but may also lead to the formation of nonmalignant stromal cells , and this reciprocal interaction may help explain the poor prognosis of some cancers which show evidence of emt .
in addition to effects on proliferation and migration , emt activators have been shown to promote cell survival through inhibition of apoptosis .
interestingly , tgf-β has been shown to be a potent activator of apoptosis in many cell types including epithelial cells [ 37 , 38 ] . when treated with tgf-β ,
most fetal hepatocytes undergo apoptosis , however a fraction survive . those hepatocytes which have survived tgf-β treatment , at least in part through resistance to apoptosis , exhibit phenotypic and genomic changes characteristic of an emt , including increased vimentin expression and higher levels of proto - oncogene transcripts as well as elevated pakt and bcl - xl [ 39 , 40 ] .
emt - like transitions have also been shown to confer resistance to tgf-β - induced apoptosis in mammary epithelial cells [ 41 , 42 ] .
evidence from several independent research groups suggests that the emt - inducing transcription factors snail and slug can induce expression of anti - apoptotic genes while down - regulating pro - apoptotic pathways in both epithelial cells and hematopoietic progenitor cells [ 43 - 45 ] .
this emt - related inhibition of apoptosis may provide selective advantage for tumor cells which are transitioning to a mesenchymal - like state .
the activation of the pi3k / akt cell survival pathway through alternate rtks may protect against tgf-β - induced apoptosis while driving pathways critical to carcinogenesis .
rtks , such as egfr , c - met , igf-1r , fgf receptors and the non - rtk c - src have been reported to induce phosphorylation of e - cadherin and associated catenins , resulting in their degradation , providing a link between oncogenic activation of these kinases and induction of emt .
thus a rationale exists for the prevention of emt - like transitions and tumor metastasis through inhibition of these oncogenic kinases in early - stage carcinomas .
several lines of evidence implicate igf-1r signaling as an important driver of emt . in mammary epithelial cells ,
constitutively active igf - ir caused cells to undergo emt which was associated with dramatically increased migration and invasion , and this transition was mediated by the induction of snail and downregulation of e - cadherin .
multiple groups have demonstrated that igf-1r activation or overexpression correlates with increased invasion and metastasis [ 48 - 51 ] .
these effects are mediated , at least in part , by its ligand , igf-1 .
igf-1 is known to influence cell adhesion to the substratum and integrin - mediated cell motility .
furthermore , igf-1 stimulation can induce the phosphorylation and transcriptional activation of β - catenin and dissociation of e - cadherin from the cell membrane .
in addition to disruption of homotypic cell adhesion , igf-1 has also been shown to promote tumor invasiveness via secretion of matrix metalloproteinases or crosstalk with integrin signaling pathways .
igf-1r is ubiquitously expressed but is frequently overexpressed in tumors , including melanomas , pancreas , prostate and kidney ( reviewed in ) .
perhaps most relevant is not the expression level of igf-1r but its function in cancer cells .
igf-1r signaling promotes akt phosphorylation and protection from apoptosis , which is predicted to limit the efficacy of standard of care chemotherapies .
thus there is a strong rationale for development of igf-1r targeted therapies , and igf-1r inhibition might be expected to enhance the effect of cytotoxic chemotherapies or other molecular targeted therapies .
an array of monoclonal antibodies are currently in phase i or phase ii trials , including cp-753,871 ( pfizer ) , amg0479 ( amgen ) , r1507 ( genmab / roche ) , imc - a12 ( imclone ) , ave-1642 ( immunogen / sanofi - aventis ) , mk0646 ( merck ) and sch717454 ( schering - plough ) .
two low molecular weight inhibitors of the igf-1r tyrosine kinase have entered phase i trials : osi-906 ( osi pharmaceuticals ) and insm-18 ( insmed ) . given the role of igf-1r signaling in cell survival and emt , one might hope that these therapies might target both the primary tumor as well as emerging metastatic cells .
egfr function is frequently dysregulated in epithelial tumors , and egfr signaling has been shown to play an important role both in cancer progression and in emt - like transitions .
egf has been shown to promote tumor cell migration and invasion , at least in part through dephosphorylation and inactivation of fak [ 57 - 60 ] .
egf treatment of tumor cells overexpressing egfr also leads to downregulation of caveolin-1 which leads to loss of e - cadherin , transcriptional activation of β - catenin and enhanced invasiveness .
thus inhibition of egfr might be expected to restrain emt in certain cellular contexts . in support of this hypothesis ,
ligand - independent , constitutively active forms of egfr can increase motility and invasiveness of tumor cells and egfr inhibitors have been shown to inhibit cancer cell migration in vitro [ 62 - 65 ] . in oral squamous cell carcinoma cells
egfr inhibition resulted in a transition from a fibroblastic morphology to a more epithelial phenotype as well as accumulation of desmosomal cadherins at cell cell junctions .
taken together , these summarized observations suggest that inhibition of egfr affects tumor growth through inhibiting egfr - dependent mitogenic stimulation but may also restrain invasion and metastasis by re - establishing intercellular contacts between tumor cells .
such inhibition of emt , and potentially metastasis , may translate to improved overall patient survival in the clinical setting .
egfr is widely expressed by cells of both epithelial and mesenchymal lineages , and the degree of egfr expression is variable .
egfr overexpression has been reported in multiple human cancers including non - small cell lung ( nsclc ) , head and neck ( hnscc ) , pancreas , breast and central nervous system ( cns ) , and has been shown to correlate with poor survival .
several selective egfr and her family antagonists have been shown to offer clinical benefit , including erlotinib ( osi pharmaceuticals / genetech / roche ) , gefitinib ( astra zeneca ) and lapatinib ( glaxosmithkline ) .
erlotinib is approved for the treatment of nsclc patients who have failed two or three previous rounds of chemotherapy .
erlotinib is also approved in the usa and europe for the treatment of pancreatic cancer in combination with gemcitabine .
lapatinib , a dual inhibitor of egfr and her2 , has been shown to delay progression of trastuzumab - refractory breast cancer and is used in combination with capecitabine for patients who have received prior therapy with an anthracycline , a taxane , and trastuzumab .
anti - egfr antibodies have also shown clinical utility , including cetuximab ( imclone / bristol myers ) and panitumumab ( abgenix / amgen ) which are approved for the treatment of egfr - expressing , metastatic colorectal carcinoma . additional small molecule dual egfr - her2 inhibitors which bind irreversibly are in earlier stages of clinical development .
the current egfr inhibitors have provided significant clinical benefit when compared to the current standard of care , however not all patients derive a benefit in terms of overall survival or recist criteria .
for example , in the br.21 trial which compared the efficacy of erlotinib as a single agent in comparison to placebo in nsclc patients who did not respond to chemotherapy , the overall response rate was only 8.9% while the hazard ratio for treatment benefit associated with overall survival was 0.7 .
the median survival among patients who were treated with erlotinib was 6.7 months compared to 4.7 months for those treated with placebo .
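for reference , the hazard ratio quoted above has the standard survival - analysis definition sketched below ; this is the general convention and is not a formula taken from the trial report itself .
% ratio of the instantaneous hazard ( risk of death at time t ) in the treated arm to that in the control arm :
\mathrm{HR} = \frac{h_{\mathrm{erlotinib}}(t)}{h_{\mathrm{placebo}}(t)}
% an hr of 0.7 for overall survival corresponds to roughly a 30% reduction in the instantaneous risk of death
% at any given time for patients on erlotinib relative to placebo .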
these data suggest both that recist did not directly correlate with potential survival benefit and that some , but not all , patients clearly benefited from erlotinib .
this observation spurred research to identify biomarkers to predict patient response to erlotinib and potentially other egfr antagonists , and to identify the mechanistic basis for the differential response .
parallel efforts from independent research groups employed gene expression and proteomic profiling to identify biomarkers which correlated with sensitivity to erlotinib [ 70 - 72 ] .
each of these groups used panels of established nsclc cell lines and xenografts to determine commonalities within those cell lines for which erlotinib inhibited growth in vitro and in vivo , as compared to those cell lines which were insensitive to erlotinib .
those cell lines which were classified as sensitive , having greater than 50% maximal inhibition of proliferation , expressed the canonical epithelial markers e - cadherin and -catenin and displayed the classic cobblestone epithelial morphology and tight cell - cell junctions .
conversely , those cell lines which were relatively insensitive to erlotinib lacked those epithelial markers and expressed proteins characteristic of mesenchymal cells , including vimentin , fibronectin and zeb-1 and exhibited a more fibroblastic , scattered morphology .
these observations were later extended to other tumor types and egfr antagonists , including pancreatic , colorectal , head and neck , bladder and breast suggesting that emt status may be a broadly applicable indicator of sensitivity to egfr inhibitors .
an important clinical substantiation of this hypothesis resulted from a retrospective analysis of tribute , a nsclc phase iii randomized trial which compared the combination of erlotinib with chemotherapy to chemotherapy alone .
this trial failed to show significant clinical benefit for the concurrent administration of erlotinib and chemotherapy , however subset analysis of e - cadherin levels in patient samples using immunohistochemistry was revealing .
patients with tumor samples showing strong e - cadherin staining had a significantly longer time to progression ( hazard ratio 0.37 ) and a nonsignificant increase in overall survival when treated with the combination of erlotinib and chemotherapy as compared to chemotherapy alone .
notably , expression of egfr itself , as measured by ihc was a poor predictor of response to egfr antagonists , both in the clinic and in cultured cell lines [ 70 , 77 ] .
recent studies suggest that it is not the abundance of receptor , but rather the activation of the egfr signaling axis that mediates sensitivity to egfr inhibition , at least in vitro [ 78 - 80 ] .
collectively , these compelling in vitro data and clinical findings indicate that expression of e - cadherin or vimentin , and emt as a process , may be viable biomarkers to predict efficacy of egfr inhibitors in cancer patients .
the mechanism by which emt results in insensitivity to egfr antagonists appears to derive from the acquisition of alternative routes to activation of the pi3 kinase - akt - mtor and ras - raf - mek - erk pathways .
it has been reported that cells which have transitioned to a mesenchymal - like state may have upregulated the pi3k - akt cell survival pathway to circumvent apoptosis , and consequently have decreased sensitivity to an inhibitor of the mapk proliferative pathway . human cancer cells which exhibit mesenchymal characteristics express lower levels of egf ligands , suggesting that these cells have become dependent upon alternate signaling pathways .
however fetal rat hepatocytes which have undergone tgf-β - mediated emt have upregulated egfr ligand expression , and in this system inhibition of egfr does not block tgf-β - mediated emt .
thus in fetal hepatocytes , egfr activation appears to be dispensable for the emt process , however in human cancer cells egfr signaling is an important driver of emt .
the expression of egfr and the reported downregulation of ligand in mesenchymal nsclc cells suggest that it is not the expression of the growth factor receptor , but rather the usage of that receptor , which governs sensitivity to targeted inhibitors .
for example it has been shown that her3 , a heterodimerization partner of egfr , allows egfr to effectively activate the pi3 kinase - akt pathway .
it has also been shown that her3 rna transcripts and protein are attenuated or lost during emt - like transitions , depriving egfr of pi3 kinase coupling [ 70 , 80 ] .
there is substantial evidence that cancer cells can readily shift the cellular equilibrium to rely on alternate growth factor and adhesion signaling pathways in response to emt - like transitions .
for example , inhibition of the ras - mapk proliferative pathway can lead to increased activation of the pi3k - akt cellular survival pathway [ 75 , 84 , 85 ] .
therefore it is not surprising that increased igf-1r signaling has been associated with insensitivity to egfr inhibitors [ 86 , 87 ] .
high concentrations of the ligand igf-1 were shown to inhibit apoptosis caused by the egfr antagonist erlotinib , possibly due to an increased reliance on the igf-1r / pi3k signaling axis .
interestingly , while igf-1r can promote emt - like transitions it appears to be less frequently used by mesenchymal - like carcinoma cells as a sole driver of the pi3 kinase pathway .
igf-1r activation drives upregulation of the pi3k - akt survival pathway and promotes epithelial - to - mesenchymal transition , and these changes may account for the reduced cellular sensitivity to egfr antagonists . however , committed mesenchymal - like nsclc , colon and pancreas adenocarcinomas are not dependent on igf-1r for proliferation or survival , suggesting that , like egfr , igf-1r signaling is an important driver in epithelial cells and can promote emt , but once these cells have transitioned to a mesenchymal state they are no longer reliant on igf-1r .
this hypothesis would suggest that the combination of an egfr antagonist such as erlotinib and an inhibitor of igf-1r could synergistically inhibit proliferation and potentially drive apoptosis in early stage tumors with an epithelial phenotype .
several in vitro studies have provided data which supports this hypothesis [ 86 , 88 - 91 ] .
furthermore , since blockade of either igf-1r [ 92 , 93 ] or egfr [ 94 - 96 ] signaling results in decreased metastasis in vivo , partnering egfr and igf-1r antagonists might also improve overall patient survival in early disease in epithelial tumors dependent on egfr and igf1r signaling .
once a cancer cell has transitioned to a mesenchymal - like phenotype , cellular dependence on igf-1r and egfr signaling is reduced and alternate growth factor pathways are activated .
recent data suggest that emt - like transitions can promote the novel acquisition of alternate receptor tyrosine kinase ( rtk ) autocrine and paracrine loops , such as pdgfr , which can exert proliferative and anti - apoptotic actions .
pdgfr α and β are restricted to cells of mesodermal origin and autocrine pdgfr signaling has been described in non - epithelial tumors such as gliomas [ 99 , 100 ] . however , high expression has been observed in multiple tumor types including ovarian , prostate and breast carcinomas , suggesting that pdgfr may be detectable in stroma and in tumor cells which have undergone emt .
autocrine pdgfr expression has been shown to promote progression of breast and ovarian cancer and contribute to maintenance of a mesenchymal - like cell phenotype [ 101 , 103 , 104 ] .
thus , mesenchymal cells which have acquired alternate signaling pathways , such as pdgfr , to activate pi3k signaling and prevent apoptosis may be sensitized to specific molecular targeted therapies .
the acquisition of these receptor signaling pathways which are predominantly used in mesenchymal cells may overlap on a common set of required signaling nodes , although this remains to be determined .
nevertheless , this observation provides support for rationally designed combinations of targeted therapies to impede the complex , heterogeneous signaling pathways that exist within a tumor . in summary
, we propose a model in which carcinomas undergo an epithelial - to - mesenchymal - like transition , potentially triggered by dysregulated growth factor signaling or inflammation - mediated activation of the pi3 kinase and ras pathways .
tumor cells with a mesenchymal - like phenotype become less reliant on egfr , igf-1r , met and ron signaling pathways . over time
, mesenchymal - like tumor cells appear to become more committed to a mesenchymal phenotype through epigenetic changes and can upregulate alternate receptor tyrosine kinase pathways as mechanisms for survival signaling and escape from anoikis .
the interactions of egf receptor signaling with other cellular pathways regulating mitogenic , survival and migration cues has clinical implications as we try to identify and develop treatments which not only target the primary tumor cells but also the mesenchymal - like cells deriving from emt - like transitions that can promote cancer metastasis and recurrence . | over 90% of all cancers are carcinomas , malignancies derived from cells of epithelial origin . as carcinomas progress
, these tumors may lose epithelial morphology and acquire mesenchymal characteristics which contribute to metastatic potential . an epithelial - to - mesenchymal transition ( emt )
similar to the process critical for embryonic development is thought to be an important mechanism for promoting cancer invasion and metastasis . epithelial - to - mesenchymal transitions have been induced in vitro by transient or unregulated activation of receptor tyrosine kinase signaling pathways , oncogene signaling and disruption of homotypic cell adhesion .
these cellular models attempt to mimic the complexity of human carcinomas which respond to autocrine and paracrine signals from both the tumor and its microenvironment .
activation of the epidermal growth factor receptor ( egfr ) has been implicated in the neoplastic transformation of solid tumors and overexpression of egfr has been shown to correlate with poor survival .
notably , epithelial tumor cells have been shown to be significantly more sensitive to egfr inhibitors than tumor cells which have undergone an emt - like transition and acquired mesenchymal characteristics , including non - small cell lung ( nsclc ) , head and neck ( hn ) , bladder , colorectal , pancreas and breast carcinomas .
egfr blockade has also been shown to inhibit cellular migration , suggesting a role for egfr inhibitors in the control of metastasis .
the interaction between egfr and the multiple signaling nodes which regulate emt suggest that the combination of an egfr inhibitor and other molecular targeted agents may offer a novel approach to controlling metastasis . |
over the last five years there has been unprecedented effort directed at quality and efficiency improvements in health care using health information technology ( hit ) .
such activities are now underway at the national , state , and local community levels . given the complexity and rapid pace of development and change , those involved in crafting and implementing these interventions are often in a position of breaking new ground .
rarely are the experiences of these digital path breakers shared widely , and the learning opportunities are often lost . with support from the commonwealth fund , academyhealth 's
beacon evidence and innovation network has sponsored this special egems issue to capture important insights from a series of vanguard organizations involved in major hit interventions intended to improve community- or population health through cross - organizational hit partnerships .
the papers published in this special issue emphasize ways to share , integrate , and use new digital data sources , including electronic health records ( ehrs ) and other electronic sources derived from consumers and public health agencies . in this commentary
we provide an overview of the scope of the papers appearing in this issue and offer a few observations regarding this unique set of publications , including some unifying themes .
we then highlight a few challenges and opportunities as well as future directions for hit application to the community , public health , and population - health domains .
the wide - scale adoption of hit , with a quadrupling of ehr use among physicians in the last decade1 has enabled diverse parties , such as providers , payers , and government agencies to collaborate on digitally based interventions to improve the health of communities or other defined populations .
there have been many population - wide interventions that have used hit solutions to improve the health of persons enrolled in specific health plans or cared for by a single provider organization . to date ,
most community hit efforts have been primarily focused on a defined geographic area and generally represent one - off projects , often with federal funding.2,3 the health information technology for economic and clinical health ( hitech ) act was designed to boost hit adoption nationwide , and has been the foremost source of funding for these types of efforts .
in addition to incentivizing providers through the meaningful use program that has supported clinician and hospital adoption of ehrs , the office of the national coordinator for hit ( onc ) has supported health information exchanges ( hies ) to promote the exchange of electronic data across the country , and has supported beacon community hit - supported cross - provider collaborative quality- improvement interventions .
these time - limited beacon grants challenged 17 communities to design , pilot , and evaluate community - based hit interventions by expanding and sharing data captured in ehrs , and often linking these data to other available data sources ( e.g. , insurance claims and public health data).4 the number of submissions for this special issue is a sign of the increased interest in such collaborations .
the papers accepted for publication cover a wide array of applied issues ranging from the role of consumers in a community hit effort to the development of geographic - based registries of chronic diseases , and include in - depth reviews of the federally funded beacon community initiative .
to date there is no agreed - upon definition of what is or is not a community - based hit intervention .
for this reason we encouraged submissions if the project spanned a geographic area and involved more than a single type of stakeholder ideally representing multiple organizations from a diverse range of stakeholders , including but not limited to payers , providers , and patients .
although the traditional definition of community health which encompasses both healthy and unhealthy patient populations does not apply to all papers in this issue , most of these papers have resulted in adopting hit interventions that affect the health of a large subpopulation of a community .
while a common thread of the papers appearing in this special egems issue is the use of ehrs or other hit applied on a collaborative basis to communities or other target populations , the articles reflect a rich diversity of technical innovations , stakeholders , and organizational and political contexts . to give readers a sense of the richness of these papers , we have arrayed some key features of each project and its focus along several dimensions .
these dimensions are outlined below and are represented in table 1 . in response to egems primary domains of focus , the papers focus primarily on governance , informatics , and the integration of learning health systems to improve population health .
several papers focus in depth on legal , political , and organizational challenges of engaging a diverse set of stakeholders in a community - based hit intervention ( these papers will also be cross - posted in the edm forum 's governance toolkit ) . the four informatics papers emphasize the technological challenges of exchanging digital data across heterogeneous data sets while assuring data accuracy , access , and security . and the eight learning health systems papers share lessons learned from various stages of design , development , and deployment of hit - supported solutions targeted at community - wide system transformation and evidence - based care improvement .
to help readers better identify the maturity stage and collaborative scope of the interventions described in this issue , we identified major themes that reflect the rich diversity of the papers and the innovations they describe . given that these projects represent some of the most advanced efforts to date in the united states that are related to community hit intervention , these themes can be used to assess the current state of the art and associated gaps .
below are some of our observations of what was and was not reported by the authors as part of their interventions :
hit solutions : several hit tools and modalities were applied across the projects described in this issue ( e.g. , ehrs , phrs , hies ) . perhaps understandably for these cross - provider initiatives , 80 percent of the papers have used hie infrastructure as their key hit system to facilitate data exchange across their providers and subpopulations .
funding : all but one of the projects described in this issue were supported , at least in part , by the onc - funded beacon community program or related initiatives such as the cdc beacon community program for public health .
geographic locales : the geographical distribution of these projects covered metropolitan and rural communities and several entire states .
two of the papers review existing community - wide hit projects that span multiple states.5,6 stakeholder engagement : three of the papers delineate stake - holder engagement.7,8,9 these papers often address governance challenges and offer solutions to engage stakeholders . stake - holder diversity is high among these projects as they include representatives from providers , payers , population denominators , and public health entities .
design and development : four papers primarily address the design and development of community - based hit solutions.10,11,12,13 these papers offer innovative solutions for exchanging data or creating centralized registries of patients among multiple stakeholders .
deployment and intervention : three of the submissions discuss the implementation challenges of hit interventions within a community , such as incorporating hie notifications in care coordination.14,15,16 these papers have faced both technical and policy challenges in increasing the diversity of their stake - holders .
evaluation : a third of the articles focus on the evaluation of hit - enabled , community - based transformations.17,18,19,20,21 sustainability : although sustainability was mentioned by a few of the submissions , none of them dedicated its paper to sustainability challenges faced by community - based hit projects .
community - based hit programs will most certainly need guidance from future literature on how to sustain federally and locally funded projects . and
of course , journals such as egems can be an effective venue to discuss and disseminate solutions for such challenges , as illustrated by the recent egems special issue on approaches to achieving the sustainability of health data infrastructure.22 conceptualization : conceptual frameworks are sparsely discussed in the submissions . none of the manuscripts has a dedicated focus on conceptualizing community - based hit solutions .
future research should entail the conceptualization and translation of overarching population hit frameworks into community - based hit interventions .
this special issue describes cutting - edge projects and offers an opportunity to smooth the path for others who may follow in similar footsteps . based on our review of this collection of leading - edge projects , we believe further work is needed in a variety of domains .
overall , many of the challenges faced by the projects described in this special issue are similar and thus likely foreshadow what cross - provider hit interventions at the community level will encounter in the years to come .
the common challenges described across the papers include the following :
ambiguity of definitions : developing clear definitions of what constitutes a community - based hit intervention , given that current definitions are still ambiguous , should be a priority .
future research should develop frameworks and guidelines on how to identify denominators , stakeholders , determinants , data sources , methods , interventions , outcomes , and measures for a given community or population with a defined set of hit and data infrastructure .
need for unified conceptual models : conceptual models of population hit interventions are needed to guide the overall design and deployment of practical community - based hit solutions .
these conceptual models should be validated in practice and eventually unified into overarching models that can be publicly accessed and easily adopted to better target the institute for healthcare improvement 's
triple aims.23 interoperability issues : insufficiency of interoperability standards to integrate and exchange information across stakeholders is still a major barrier .
the lack of interoperability will be more prominent within community - based hit interventions that require incorporating nontraditional or emerging data sources ( e.g. , non - ehr data ) . fragmented big data : data are highly fragmented in community - based transformations .
data are often stored in silos , and a series of technical , financial , political , and cultural factors prohibits the stakeholders from sharing them
. the emergence of big data will be inevitable in such projects given the volume , variety , and velocity of the data that will evolve over time .
future research should address these problems and should also provide appropriate methods to integrate and analyze such uncommon data compositions .
community - based quality measures : accurate and timely metrics are needed to evaluate and compare hit - enabled , community - based interventions across a diverse set of stake - holders .
these metrics should cover various aspects of community - based hit interventions including performance , process , outcome , patient satisfaction , safety , and population health .
population metrics are immature and national benchmarks are not established yet,24 thus limiting the comparison of impact and success among community - based hit projects .
future research should develop a set of community - based hit measures that retain a high reliability and validity when generalized to other populations .
stakeholder diversity : ineffective community - based hit infrastructure , lack of interoperability , exchange standards that cut across a diverse set of stakeholders ( e.g. , integrating social data with ehrs ) , and insufficient incentives to share data across stakeholder groups are all limiting factors to building a diverse set of stakeholders that would represent providers , payers , patients , and public health agencies all together . defining population needs and identifying interventions that can be beneficial to all stakeholders ( cutting across stakeholder categories ) should be a priority for future community - based hit deployments .
misalignment of incentives : alignment of incentive structures among stakeholders to share , integrate , and analyze data is critical to the success of community - based interventions . also
, the misalignment of incentives is a major impediment to scaling up and generalizing successful hit - enabled , community - based transformations to other communities .
new federal , state , or local policies , either hit or payment reform , should address these challenges and incentivize all stakeholders to share data and learn from the collective outcomes .
recent payment reforms such as the population - based all payer hospital payment model in maryland25 could be a unique environment to pilot and evaluate such hit - enabled transformations .
privacy and security : access barriers associated with privacy and security protocols are often exacerbated when hit interventions are deployed across the organizations and geography of an entire community .
engaging the entire community and a diverse set of stakeholders in data governance earlier in the project may provide opportunities to resolve some of these issues .
it is our hope that this special issue will help trigger interest among the next wave of hit implementers , researchers , and program officers to conduct and fund new hit - enabled , community - based transformations .
these future efforts should build on the fine , albeit challenging , work described in these articles , and they should attempt to surmount some of the existing limitations discussed by the authors and outlined above .
community- and population - targeted hit interventions such as these will be essential if the digital health infrastructure now spreading across the nation is to meaningfully contribute to improvement in united states health care and public health systems and , more importantly , the health of americans . | rising health information technology ( hit ) adoption and the increasing interoperability of health data have propelled the role of it in community - wide health transformations . disseminating the challenges and opportunities that the early adopters of community - wide hit interventions have experienced is critical for empowering the growing demand for community - based health systems .
this special issue of egems addresses that need .
this issue includes a variety of community - based hit projects covering topics such as governance , informatics , and learning health systems .
these projects represent a diverse set of stakeholders , a wide selection of data sources , and multiple information platforms to collate or exchange data .
we hope that this special issue of egems will be the first of several future issues dedicated to community - wide hit transformations . |
the term self - concept is generally used to refer to how someone discerns and thinks about themselves , and it has a pervasive influence on one 's life .
the concept of self is of major interest and is an indispensable human need .
the self - concept is unique to each individual and changes over time with environmental context .
development of a positive or negative self - concept mainly results from physical changes , appearance and performance changes , health challenges , and , significantly , from the feedback of others .
alteration in health status due to loss or severance of a body part can also affect the self - concept .
greater importance is placed by individuals on the head and neck area than on any other part of the body .
the integrity of head and neck is also very much essential for emotional expressions , interaction , and swallowing .
many of the head and neck cancer ( hnc ) patients can not hide the side effects of the treatments such as radiotherapy , chemotherapy , and surgery due to the obvious visibility of the condition and functional difficulties .
the distinguishable changes in appearance and the intense changes in taste , swallowing , and speech also lead to awkwardness . due to these factors , hnc patients may experience greater apprehension , aversion toward their disfigurement , and stigmatization in society .
this is a mixed method study conducted in two tertiary care hospitals of south india by using a triangulation mixed method design .
both descriptive survey approach and qualitative phenomenological approach were adopted to assess the depth of the self - image of the patients with hncs .
the study was conducted between february 7 , 2015 , and may 10 , 2015 .
the subjects receiving radiation therapy with or without chemotherapy , during the 4th week of radiation therapy , were included in the study .
a nested sampling technique was adopted for the mixed method design , i.e. , for the descriptive survey design , a purposive sampling technique was used to assess the self - image of hnc patients .
data were collected from 54 participants . only female participants who were included in the quantitative approach and who were also able to communicate verbally were included in the qualitative phenomenological approach for interviewing , using a semi - structured interview focusing on self - image .
the quantitative and qualitative data were collected simultaneously , with equal importance placed on both strands . administrative permission and institutional ethical permission were obtained .
data were gathered by using self - administered self - image scale and structured interview technique .
self - image scale was a four point likert scale with three sub - domains such as body image , self - esteem , and integrity .
the measuring instruments were validated with five subject experts from the field of nursing , oncology , and department of humanities .
the reliability coefficient was computed by using cronbach 's alpha , and the scale was found to be reliable ( r = 0.7 ) .
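as an illustration only , a minimal python sketch of the standard cronbach 's alpha calculation is given below ; the item scores , the number of items and the variable names are hypothetical and are not taken from the study data .

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: respondents x items matrix of likert responses
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items in the sub-scale
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

# hypothetical four-point likert responses from five subjects to a six-item sub-scale
scores = [
    [3, 4, 3, 2, 4, 3],
    [2, 2, 3, 3, 2, 2],
    [4, 4, 4, 3, 4, 4],
    [1, 2, 2, 1, 2, 1],
    [3, 3, 2, 3, 3, 3],
]
print(round(cronbach_alpha(scores), 2))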
quantitative data were analyzed using spss statistical software ( spss inc. , chicago ) , and open code software 4.0 ( opc 4.0 ) was used for qualitative analysis ( its and epidemiology , university of umea ) .
both methods were mixed at the analysis phase of research . that means both quantitative and qualitative data were actually merged in the areas of self - image and the experiences at the analytic phase of the continuum .
qualitative data were transcribed and translated into english and were analyzed by using steps of the colaizzi process for phenomenological approach .
data in figure 1 show that the majority , i.e. , 30 ( 56% ) of the subjects , had a positive self - image and the remaining 24 ( 44% ) had a negative self - image [ figure 1 ] .
figure 1 : description of the self - image of the patients with head and neck cancer . similar findings were observed in the phenomenological approach .
subjects mentioned that they did not attach much importance to their external appearance ; instead , what mattered to them was being in a state of good health .
the themes that emerged were ` immaterial of external appearance ' and ` desire of good health to all ' . one participant quoted ,
my neighbors say how i got this disease , such a good person , such a helping person how she got the disease ( agony ) ( pause ) .
data in table 1 show that the mean and standard deviation ( sd ) of self - esteem , body image , and integrity were 17.44 ( 2.724 ) , 14.98 ( 3.04 ) , and 14.06 ( 6.027 ) , respectively .
however , there were also subjects who scored the maximum score ( 24/24 ) in the areas of body image and self - esteem .
table 1 : description of the domains of self - image of the patients with head and neck cancer ( n=54 ) . similar findings were also observed in the phenomenological approach .
the participants remained unconcerned about external beauty and felt that they possessed good qualities .
participant a said , what i have to do with the beauty if god gives good health
participant b said , no ( nods the head ) . what to do with external beauty
what i have to do with external beauty ? ( smiles ) health is important what else to do with beauty ? that too elderly like me ( smiles )
participant f said , we are on the earth only for 4 days if we are healthy that is enough ( pause ) .
this section presents the relationship between the domains of self - image of the patients with hnc . since the data did not follow a normal distribution ,
spearman 's rho was computed to assess the relationship between the variables . to assess the relationship between the domains of self - image of the patients with hnc , the following null hypothesis was stated :
h01 ( null hypothesis ) : there will be no significant relationship among the domains of self - image . the data in table 2 show that there is a moderate positive correlation between body image and integrity ( r = 0.430 , p = 0.001 ) , a weak positive correlation between body image and self - esteem ( r = 0.270 , p = 0.049 ) , and no significant correlation between self - esteem and integrity ( r = 0.203 , p = 0.141 ) .
thus , it is inferred that an impact on one area of self - image may also have an impact on another domain of self - image , except in the case of self - esteem and integrity .
table 2 : description of the relationship between domains of self - image of the patients with head and neck cancer ( n=54 )
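a minimal sketch of how correlations of the kind reported in table 2 could be computed from per - subject domain scores ; the scores below are randomly generated stand - ins , not the study data :

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# hypothetical domain scores for 54 subjects
body_image = rng.integers(6, 25, size=54)
self_esteem = rng.integers(6, 25, size=54)
integrity = rng.integers(6, 25, size=54)

pairs = [("body image vs integrity", body_image, integrity),
         ("body image vs self-esteem", body_image, self_esteem),
         ("self-esteem vs integrity", self_esteem, integrity)]
for name, a, b in pairs:
    rho, p = spearmanr(a, b)   # rank correlation and two-sided p-value
    print(f"{name}: rho = {rho:.3f}, p = {p:.3f}")

spearmanr returns the rank correlation coefficient and a two - sided p - value , which is the form of the statistics reported above .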
this study aimed at describing the self - image of the patients by using the mixed method approach .
the study provides some important information on the self - image of the hnc patients .
existing reviews show negative self - image , low self - esteem , and transformed self - image among hnc patients . however , the present study findings showed mean and sd of self - esteem of 17.44 ( 2.724 ) , body image of 14.98 ( 3.04 ) , and integrity of 14.06 ( 6.027 ) .
these findings suggest that the domains of self - image were not greatly affected among the patients with hnc , since the means were above normal for self - esteem and near normal for body image and integrity .
similar experiences , in the form of verbatim quotes supporting the quantitative data , were noted among the participants .
a systematic review of qualitative studies was conducted by nayak et al . , 2015 , to assess the self - concept of the patients with hnc .
this review had only studies conducted in non - asian continents among both genders of hnc patients .
results showed that there were perceived and transformed changes in self - esteem and changes in self - image or ruptured self - image among hnc patients .
however , the subjects included in those studies also underwent surgical management with / without radiation therapy and chemotherapy for hnc .
nevertheless , there was no major difference in sample characteristics such as age and diagnosis when compared with the present study .
in this study , subjects also verbalized that they were not concerned about their external appearance and body image , since they were elderly .
this study findings also showed moderate positive correlation between body image and integrity ( r = 0.430 , p = 0.001 ) and weak positive correlation between body image and self - esteem ( r = 0.270 , p = 0.049 ) .
as human beings are considered as a system , the impact on one area of self - image may also have an impact on another domain of self .
feelings of self - consciousness , inadequacy , embarrassment , and unattractiveness that prompted low self - esteem among hnc patients were reported in a qualitative study conducted using a phenomenological approach .
another qualitative study was carried out in the united kingdom to explore and describe patients ' experiences of these changes .
the study describes changes in functions such as eating , drinking , and hearing , along with changes in appearance , which had a great impact on the patients ' confidence .
the functional difficulties and the changed physical appearance together affected the person 's behavior and attitude . some hnc patients felt a loss of dignity and described themselves in denigratory terms because of the disfigurement of their outer body image . body image and self - image were also affected by disfigurement and altered body functioning , such as difficulties in eating , drinking , speaking , and breathing .
hospitalization aggravated the trauma of the change in appearance , the sense of loss of self , the feeling of being incapacitated , and the loss of autonomy for hnc patients .
hnc patients also portray themselves as the odd one out and as a visible minority when in public , as disfigurement attracts the attention of society .
the patients were also shocked , frightened , and incredulous when they saw themselves in the mirror .
a sense of a ruptured self - image and a feeling of no longer being the same person emerged among hnc patients as a result of disfigurement .
however , most of the subjects included in that study underwent surgical treatment such as radical neck dissection , neck and lateral arm flaps , and radical free flaps in addition to radiation therapy and chemotherapy . subjects of both genders , aged between 35 and 71 years , were included in the study .
understanding the self - concept and lived experiences of patients with hnc is important for health - care professionals seeking to improve care .
thus , the study opens avenues for developing nursing interventions built on patients ' own needs .
| aim : the aim of the study was to assess the self - image of the patients with head and neck cancers ( hncs ) by using a mixed method research design . subjects and methods : a mixed method approach with a triangulation design was used to assess the self - image of the patients with hncs .
data were gathered by using a self - administered self - image scale and a structured interview .
a nested sampling technique was adopted .
the sample size for the quantitative approach was 54 , and data saturation was achieved with seven subjects for the qualitative approach .
institutional ethical committee clearance was obtained . results : the results of the study showed that 30 ( 56% ) subjects had positive self - image and 24 ( 44% ) had negative self - image .
there was a moderate positive correlation between body image and integrity ( r = 0.430 , p = 0.001 ) , weak positive correlation between body image and self - esteem ( r = 0.270 , p = 0.049 ) , and no correlation between self - esteem and integrity ( r = 0.203 , p = 0.141 ) .
some participants also scored the maximum ( 24/24 ) in the areas of body image and self - esteem .
similar findings were also observed in the phenomenological approach .
the themes that evolved were ` immaterial of outer appearance ' and ` desire of good health to all ' . conclusion : the illness is long - term and impacts the individual 24 h a day .
understanding the self - concept and lived experiences of patients with hnc is important for health - care professionals to improve care . |
Rochester, N.Y. - The man accused of stealing a Rural Metro ambulance while drunk on the University of Rochester campus was in court Thursday.
The crash happened on Trustee Road around 2:20 a.m.
Police say Rural Metro was on campus for the report of an intoxicated man, when 22-year-old Robert Cordaro Jr. hopped in the driver's seat of the ambulance and drove off.
Cordaro is a student at the University of Rochester majoring in political science and English. He is also a member of the football team.
According to police, Cordaro was able to travel about a quarter of a mile before driving off the roadway and ending up in a flower bed.
The ambulance was damaged and was towed away.
Court paperwork stated when Cordaro was taken in custody, he said, "You got me...do whatever you need to do." As he was being handcuffed, he reportedly said, "It was stupid, it was stupid."
Cordaro registered a .09 percent BAC when he was subjected to a breathalyzer test. Court papers said he told police he had been drinking beers and manhattans.
The 22-year-old was taken into custody and charged with grand larceny, criminal mischief, DWI and other violations.
The judge set his bail at $1,500 and remanded him to the Monroe County Jail.
The University of Rochester did not comment on his graduating status. | – If you're bored on the University of Rochester campus after the bars let out, don't do as this student did: Police say 22-year-old Robert Cordaro Jr. hopped into a Rural Metro ambulance around 2:20am Thursday and drove a quarter of a mile until he crashed into a flower bed, per WHAM. Cordaro was charged with grand larceny, criminal mischief, and DUI. In an equally strange case in South Dakota, authorities say Damon Andrews, 18, crashed a stolen car into a yard at 1:25am Sunday. He was charged with DUI—as was another driver who crashed into the ambulance sent to pick Andrews up, reports the Rapid City Journal. |
Two U.S. women suffered miscarriages after being infected with the Zika virus, according to officials from the Centers for Disease Control and Prevention.
The virus, which usually causes mild symptoms including fever, rash and fatigue, has already been associated with a rare birth defect in Brazil called microcephaly. The defect is characterized by an abnormally small head and brain.
Officials have also been concerned that the virus could cross the placenta, an organ that develops in a woman's uterus during pregnancy and provides oxygen and nutrients to the fetus. This development could potentially lead to miscarriages.
CDC officials confirmed to ABC News that the women who miscarried were being monitored by their doctors after they were diagnosed with the Zika virus. In total, at least three women in the U.S. have been infected with Zika after returning from abroad with the virus.
One woman in Hawaii gave birth to a child with microcephaly in January. That woman is believed to have been exposed to the Zika virus in Brazil last year. In all three cases the Zika virus was found in the placenta.
Dr. William Schaffner, an infectious disease expert at Vanderbilt University Medical School, said the possibility of the virus being associated with miscarriages has been an ongoing concern for health officials.
"There has been a concern that is it possible that this virus...could also create sufficient inflammation in the placenta such that miscarriages can occur," said Schaffner, explaining that the link was not yet definitive. "Two cases don't make the whole story but it certainly would be biologically consistent with the [fact.]"
Schaffner said researchers would likely look to see if there is more physical evidence of the virus being linked to miscarriages in countries where virus transmission is active. |||||
Two U.S. women who contracted the Zika virus while traveling out of the country miscarried after returning home, and the virus was found in their placentas, a spokesman for the Centers for Disease Control and Prevention said Thursday.
Federal health officials have not previously reported miscarriages in American travelers infected with the mosquito-borne virus while abroad. But there have been miscarriages reported in Brazil, the epicenter of a Zika epidemic that now spans nearly three dozen countries. Researchers in Salvador, Brazil's third largest city, are investigating some miscarriages and still births at three maternity hospitals for possible links to Zika.
The STAT website first reported the U.S. miscarriages, based on information from the CDC's chief pathologist. The pathologist told STAT the women miscarried early in their pregnancies but provided no additional details.
Last month, officials said a baby born in a Hawaii hospital was the first in the country with a birth defect linked to Zika. Hawaii officials said the baby's mother likely contracted the virus while living in Brazil last year and passed it on while her child was in the womb. Babies born with this rare condition, known as microcephaly, have abnormally small heads and brain abnormalities.
In cases when women have one or two miscarriages, the cause is usually severe chromosomal problems, experts say. "It's absolutely possible for an infection, whether it be viral or bacterial, to result in a miscarriage," said Zev Williams, an obstetrician-gynecologist who specializes in pregnancy loss at the Albert Einstein College of Medicine at Montefiore Medical Center in New York. "Whether it was caused by Zika remains to be determined," he said, but urged individuals to take precautions to avoid contracting or transmitting the virus.
Some virus infections in pregnancy, like Rubella or German measles infections especially early in pregnancy, can spread from the mother and infect the cells of the fetus and cause direct injury to it, said Jesse Goodman, an infectious diseases doctor at Georgetown University.
In testimony before Congress Wednesday, CDC Director Tom Frieden reiterated that the agency is learning more about Zika every day, including how it can be transmitted from mother to fetus. Increasing evidence in Brazil also is linking Zika to microcephaly and other suspected neurological complications.
More than four dozen Zika cases have been confirmed in 14 states and the District of Columbia -- six involving pregnant women -- with at least another 21 cases in U.S. territories, the CDC said last Friday. Frieden also said that one U.S. case of Guillain-Barré syndrome may be linked to Zika.
It was unclear whether the two miscarriages were counted among the six cases involving pregnant women. Global health officials are closely monitoring the spread of the virus and the incidence of suspected neurological complications. Frieden has said the link between Zika and Guillain-Barré, which can lead to paralysis in adults, is growing stronger. Several South American countries have identified cases of the syndrome.
The World Health Organization, which has designated the outbreak a "global public health emergency," issued guidance Wednesday on how women should protect themselves against possible sexual transmission of Zika. It said that until more is known, "all men and women living in or returning from an area where Zika is present -- especially pregnant women and their partners -- should be counseled on the potential risks of sexual transmission and ensure safe sexual practices."
Those include the correct and consistent use of condoms, the WHO said.
Last week the CDC issued its own detailed recommendations for preventing sexual transmission of the virus, including the suggestion that men who have traveled to Zika-affected regions consider abstaining from sex with their pregnant partner for the duration of the pregnancy. The guidelines came after a Dallas resident was infected by having sex with a person who had contracted the disease while traveling in Venezuela.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Higher Education Affordability and
Fairness Act of 2005''.
SEC. 2. DEDUCTION FOR HIGHER EDUCATION EXPENSES.
(a) Increase in Dollar Limitation.--Subsection (b) of section 222
of the Internal Revenue Code of 1986 (relating to dollar limitations)
is amended to read as follows:
``(b) Limitations.--
``(1) Limitation for first 2 years of postsecondary
education.--For any taxable year preceding a taxable year
described in paragraph (2), the amount of qualified tuition and
related expenses which may be taken into account under
subsection (a) shall not exceed--
``(A) except as provided in subparagraph (B), the
excess (if any) of--
``(i) the lesser of--
``(I) $10,000 for each eligible
student, or
``(II) $15,000, over
``(ii) the amount of such expenses which
are taken into account in determining the
credit allowable to the taxpayer or any other
person under section 25A(a)(1) with respect to
such expenses, and
``(B) in the case of a taxpayer with respect to
whom the credit under section 25A(a)(1) is reduced to
zero by reason of section 25A(d)(1), $5,000.
``(2) Limitation for second 2 years of postsecondary
education.--For any taxable year if an eligible student has
completed (before the beginning of such taxable year) the first
2 years of postsecondary education at an eligible educational
institution, the amount of qualified tuition and related
expenses which may be taken into account under subsection (a)
shall not exceed--
``(A) except as provided in subparagraph (B) or
(C), $10,000,
``(B) in the case of a taxpayer with respect to
which a credit under section 25A(a)(1) would be reduced
to zero by reason of section 25A(d)(1), $5,000, and
``(C) in the case of taxpayer with respect to whom
the credit under section 25A(a)(2) is allowed for such
taxable year, zero.
``(3) Deduction allowed only for 4 taxable years for each
eligible student.--A deduction may not be allowed under
subsection (a) with respect to the qualified tuition and
related expenses of an eligible student for any taxable year if
such a deduction was allowable with respect to such expenses
for such student for any 4 prior taxable years.
``(4) Eligible student.--For purposes of this section, the
term `eligible student' has the meaning given such term by
section 25A(b)(3).''.
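The dollar limitation in the new section 222(b) quoted above amounts to a short computation; the following Python sketch is an illustrative reading only, using hypothetical figures and helper names that are not part of the bill:

def first_two_years_limit(n_eligible_students, hope_credit_expenses,
                          credit_phased_out_to_zero=False):
    # Illustrative reading of new section 222(b)(1): cap on qualified tuition
    # and related expenses taken into account for the deduction.
    if credit_phased_out_to_zero:                   # subparagraph (B)
        return 5000
    base = min(10000 * n_eligible_students, 15000)  # (A)(i): lesser of the two amounts
    return max(base - hope_credit_expenses, 0)      # (A): excess over credited expenses

# Hypothetical example: two eligible students, with $3,000 of expenses already
# taken into account for the section 25A(a)(1) credit.
print(first_two_years_limit(2, 3000))  # -> 12000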
(b) Repeal of Termination.--Section 222 of such Code is amended by
striking subsection (e).
(c) Determination of Adjusted Gross Income With Respect to Other
Benefits.--
(1) Section 21(a)(2) of such Code is amended by inserting
``(determined without regard to section 222)'' after ``adjusted
gross income''.
(2) Section 22(d) of such Code is amended--
(A) by inserting ``(determined without regard to
section 222)'' after ``adjusted gross income'' the
first place it appears, and
(B) by inserting ``(as so determined)'' after
``adjusted gross income'' the second place it appears.
(3) Section 23(b)(2)(B) of such Code is amended by
inserting ``222,'' before ``911''.
(4) Section 24(b)(1) of such Code is amended by inserting
``222,'' before ``911''.
(5) Section 151(d)(3) of such Code is amended--
(A) by inserting ``(determined without regard to
section 222)'' after ``adjusted gross income'' in
subparagraph (A), and
(B) by inserting ``(as so determined)'' after
``adjusted gross income'' in subparagraph (B).
(6) Section 165(h)(2)(A)(ii) of such Code is amended by
inserting ``(determined without regard to section 222)'' after
``adjusted gross income''.
(7) Section 213(a) of such Code is amended by inserting
``(determined without regard to section 222)'' after ``adjusted
gross income''.
(8) Section 1400C(b)(2) of such Code is amended by
inserting ``222,'' before ``911''.
(d) Effective Date.--The amendments made by this section shall
apply to expenses paid after December 31, 2004 (in taxable years ending
after such date), for education furnished in academic periods beginning
after such date.
SEC. 3. EDUCATION TAX CREDIT FAIRNESS.
(a) Increase in AGI Limits.--
(1) In general.--Subsection (d) of section 25A of the
Internal Revenue Code of 1986 is amended to read as follows:
``(d) Limitation Based on Modified Adjusted Gross Income.--
``(1) Hope credit.--
``(A) In general.--The amount which would (but for
this subsection) be taken into account under subsection
(a)(1) shall be reduced (but not below zero) by the
amount determined under subparagraph (B).
``(B) Amount of reduction.--The amount determined
under this subparagraph equals the amount which bears
the same ratio to the amount which would be so taken
into account as--
``(i) the excess of--
``(I) the taxpayer's modified
adjusted gross income for such taxable
year, over
``(II) $50,000 ($100,000 in the
case of a joint return), bears to
``(ii) $10,000 ($20,000 in the case of a
joint return).
``(2) Lifetime learning credit.--
``(A) In general.--The amount which would (but for
this subsection) be taken into account under subsection
(a)(2) shall be reduced (but not below zero) by the
amount determined under subparagraph (B).
``(B) Amount of reduction.--The amount determined
under this subparagraph equals the amount which bears
the same ratio to the amount which would be so taken
into account as--
``(i) the excess of--
``(I) the taxpayer's modified
adjusted gross income for such taxable
year, over
``(II) $40,000 ($80,000 in the case
of a joint return), bears to
``(ii) $10,000 ($20,000 in the case of a
joint return).
``(3) Modified adjusted gross income.--For purposes of this
subsection, the term `modified adjusted gross income' means the
adjusted gross income of the taxpayer for the taxable year
increased by any amount excluded from gross income under
section 911, 931, or 933.''.
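The reductions in paragraphs (1) and (2) of the new subsection (d) quoted above follow the same proportional formula; the following Python sketch is an illustrative reading only, with hypothetical figures that are not part of the bill:

def phased_credit(credit_amount, magi, threshold, phaseout_range):
    # Illustrative reading of new section 25A(d): the credit is reduced in
    # proportion to the excess of modified AGI over the threshold.
    excess = max(magi - threshold, 0)
    reduction = credit_amount * min(excess / phaseout_range, 1.0)
    return max(credit_amount - reduction, 0)

# Hypothetical single filer: a $1,500 Hope credit with modified AGI of $55,000
# ($50,000 threshold, $10,000 phase-out range) is reduced by half.
print(phased_credit(1500, 55000, 50000, 10000))  # -> 750.0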
(2) Conforming amendment.--Paragraph (2) of section 25A(h)
of such Code is amended to read as follows:
``(2) Income limits.--
``(A) Hope credit.--In the case of a taxable year
beginning after 2005, the $50,000 and $100,000 amounts
in subsection (d)(1)(B)(i)(II) shall be increased by an
amount equal to--
``(i) such dollar amount, multiplied by
``(ii) the cost-of-living adjustment
determined under section 1(f)(3) for the
calendar year in which the taxable year begins,
determined by substituting `calendar year 2004'
for `calendar year 1992' in subparagraph (B)
thereof.
``(B) Lifetime learning credit.--In the case of a
taxable year beginning after 2001, the $40,000 and
$80,000 amounts in subsection (d)(2)(B)(i)(II) shall be
increased by an amount equal to--
``(i) such dollar amount, multiplied by
``(ii) the cost-of-living adjustment
determined under section 1(f)(3) for the
calendar year in which the taxable year begins,
determined by substituting `calendar year 2000'
for `calendar year 1992' in subparagraph (B)
thereof.
``(C) Rounding.--If any amount as adjusted under
subparagraph (A) or (B) is not a multiple of $1,000,
such amount shall be rounded to the next lowest
multiple of $1,000.''.
(b) Coordination With Other Higher Education Benefits.--Section
25A(g) of such Code is amended by striking paragraph (5) and by
redesignating paragraphs (6) and (7) as paragraphs (5) and (6),
respectively.
(c) Effective Date.--The amendments made by this section shall
apply to expenses paid after December 31, 2004 (in taxable years ending
after such date), for education furnished in academic periods beginning
after such date.
SEC. 4. RELATIONSHIP BETWEEN TUITION AND FINANCIAL AID.
(a) Study.--The Comptroller General of the United States shall
conduct an annual study to examine whether the Federal income tax
incentives to provide education assistance affect higher education
tuition rates in order to identify if institutions of higher education
are absorbing the intended savings by raising tuition rates.
(b) Report.--The Comptroller General of the United States shall
report the results of the study required under subsection (a) to
Congress on an annual basis.
SEC. 5. SENSE OF THE HOUSE OF REPRESENTATIVES REGARDING PELL GRANTS.
It is the sense of the House of Representatives that the maximum
Pell Grant should be increased to $4,700 to pay approximately--
(1) 20 percent of the tuition, fees, room and board, and
other expenses of the average college, or
(2) the tuition and fees of the average public college. | Higher Education Affordability and Fairness Act of 2005 - Amends the Internal Revenue Code to increase the tax deduction for qualified higher education tuition and related expenses. Makes such tax deduction permanent. Increases adjusted gross income limits for purposes of determining the allowable amount of the Hope Scholarship tax credit. Directs the Comptroller General of the United States to conduct an annual study to examine whether the Federal income tax incentives to provide education assistance affect higher education tuition rates in order to identify if institutions of higher education are absorbing the intended savings by raising tuition rates. Expresses the sense of the House of Representatives that the maximum Pell Grant should be increased to $4,700 to pay approximately: (1) 20 percent of the tuition, fees, room and board, and other expenses of the average college; or (2) the tuition and fees of the average public college. |
(CNN) Tonya Couch, the mother of so-called "affluenza" teen Ethan Couch, has posted bail after her bond was lowered from $1 million to $75,000.
Couch will be released Tuesday after she's fitted with an electronic ankle monitor, Tarrant County Sheriff Dee Anderson said.
Authorities have accused Tonya Couch of helping her son leave the country to avoid a probation hearing that might have led to jail time for him.
Texas prosecutors had charged her with hindering the apprehension of a felon and initially set bond at $1 million. That happened in December after she was returned to the United States but was still in Los Angeles, in the custody of the L.A. Police Department.
She was arraigned Friday in Fort Worth but did not enter a formal plea.
During the Monday bond hearing, Tarrant County, Texas, Judge Wayne Salvant lowered her bond and issued several other conditions
While Couch is out on bond, she must:
Wear an electronic ankle monitor
Report to authorities on a weekly basis
Live in Tarrant County with her 29-year-old son and his family
Abstain from using controlled substances or alcohol (she'll be drug tested)
Be placed under 24-hour home confinement (lawyers and doctors are allowed to visit her)
Not possess or transport any firearms or weapons
Pay a monthly $60 supervision fee
Avoid "bad actors"
Salvant also issued a gag order for lawyers involved in the case, barring them from communicating with the media.
Couch to undergo a mental exam
On Friday, Tarrant County Magistrate Judge Matt King ordered Tonya Couch to undergo a mental exam after the court found "reasonable cause" to believe that she suffers from "a mental illness or is a person with a mental retardation," according to court documents.
The judge's order was issued Friday and must be completed within 30 days. The mother will plead not guilty, said Stephanie Patten, her attorney.
The mental examination will determine whether there is clinical evidence to support the argument that Tonya Couch may be incompetent to stand trial.
Before she and her son fled to Mexico, she withdrew $30,000 from her account and told her husband that he would not see them again, an arrest affidavit stated.
Probation woes
Before he went to Mexico, Ethan Couch was on probation for killing four people in a drunken driving accident in 2013, when he was 16.
At the time, outrage followed when a judge sentenced him to probation instead of jail time. During the trial, his lawyers cited the now notorious "affluenza" defense, suggesting he was too rich and spoiled to understand the consequences of his actions.
Ethan Couch is still in Mexico, and his return to the United States largely depends on whether he decides to contest his deportation. Last week, a Mexican judge granted the teen a temporary stay, halting deportation proceedings. |||||
The mother of a North Texas teenager known for using an "affluenza" defense while on trial for a deadly 2013 drunken driving wreck was released from jail Tuesday morning after posting a reduced bond.
A judge reduced Tonya Couch's bond from $1 million to $75,000 in a Tarrant County courtroom Monday.
Couch is charged with hindering the apprehension of a felon: Her son, Ethan Couch, who killed four people in a 2013 crash and was facing allegations that he violated his probation.
Tonya Couch was brought back to Texas last week, days after she and her son were arrested in Puerto Vallarta, Mexico. Ethan Couch remains in a Mexico City detention facility.
Tonya Couch received an electronic ankle monitor, which she will be required to wear, and must remain at home except for appointments with her doctor and lawyer. She will be electronically monitored 24 hours per day and will need to be available for a visit from a probation officer at any time. She will also have to take routine drug and urine tests.
Tarrant County Sheriff Dee Anderson said he hopes that's enough.
"That's always a concern you're going to have when somebody has already fled one time," he said. "I hope the restrictions put in place will be sufficient."
NBC 5 law enforcement expert Don Peritz said if she tampers with the ankle monitor, it could land her back in jail.
"The monitor would send a signal back to the home unit that something is wrong," Peritz said. "The home unit would make a phone call and let the person that's monitoring her know that something is wrong, and they may have an immediate warrant for her arrest issued."
Tonya Couch will have to pay $60 per month for the monitoring service.
"It's easier than being in jail for everybody involved," Peritz said. "It's cheaper for everybody involved, except for the person being monitored. Tarrant County does not have to feed her or have jail guards watch over her. In the long road, it's a better situation for everybody."
State District Judge Wayne Salvant said he understood prosecutors' concerns that Couch might flee again, but that the charge against her, while a third-degree felony, wasn't serious enough to merit a $1 million bond.
Ethan Couch was 16 when he killed four people in June 2013, ramming a pickup truck into a crowd of people trying to help stranded motorists on the side of a North Texas road. He was driving at nearly three times the legal limit for adult drivers.
A juvenile court judge gave Couch 10 years' probation, outraging prosecutors who had called for the teen to face detention time. The case drew widespread derision after an expert called by Couch's lawyers argued Couch had been coddled into a sense of irresponsibility by his wealthy parents, a condition the expert called "affluenza."
Despite all of the previous testimony about Ethan Couch's wealthy upbringing, his mother's attorneys have argued that she had few assets to her own name and couldn't pay the cost of a $1 million bond.
Another of Tonya Couch's sons, Steven McWilliams, testified Monday that the balance on a bank account belonging to her read "-$99 billion."
Tonya Couch is separated from Fred Couch, Ethan's father, who owns a suburban Fort Worth business that does large-scale metal roofing.
According to an arrest warrant, Tonya Couch is accused of taking $30,000 and telling Fred Couch that he would never see her or Ethan again before fleeing.
The couple originally married in 1996, but divorced 10 years later. They remarried in April 2011, but court records show they are amid divorce proceedings, haven't been living together as husband and wife since at least August 2014, and that Fred Couch's attorneys couldn't locate Tonya Couch as of Dec. 21.
Law enforcement officials believe the mother and son had a going away party shortly before driving across the border in her pickup truck.
They were first tracked to a resort condominium after ordering pizza before police found them at an apartment in Puerto Vallarta's old town.
When they were arrested, Ethan Couch appeared to have tried to disguise himself by dying his blond hair black and his beard brown, according to investigators.
|||||
The mother of "affluenza" teen Ethan Couch had her bond reduced from $1 million to $75,000 after her older son testified Monday she was broke.
"Me, so far," Steven McWilliams answered at Tonya Couch's bond hearing, when asked who was on the hook to pay his mom's mounting legal bills.
While McWilliams was on the stand, his mother sat quietly beside her lawyers in a Forth Worth, Texas courtroom.
Couch and his mother were arrested in the Mexican resort city of Puerto Vallarta last month after several weeks on the lam.
Tonya Couch returned to court in Fort Worth on Monday for a bond reduction hearing.
McWilliams said his mom used one of her ex-husband's company pickup trucks to escape Texas. He said he'd make sure she had a ride to get her to court if she were let out of jail.
When asked if his mom had friends who could help her in Tarrant County, McWilliams answered, "Yes and no." There was no elaboration.
Asked if his mother attended church, her son said, "She used to, I don't know where."
Tonya Couch is charged with hindering apprehension of her son, a Class C felony that normally carries a bond closer to $10,000.
Prosecutors ordered her held on a million dollars bond after calling her a "proven flight risk."
At the conclusion of the hearing on Monday, Judge Wayne Salvant agreed to reduce Tonya Couch's bond to $75,000, NBCDFW reported.
Couch, 18, remains in a Mexican jail and is fighting extradition to the U.S., where he faces jail time for violating his probation.
Law enforcement officials said last week Tonya Couch took $30,000 from a bank account and cut ties with her son's dad before splitting for Mexico.
Couch became infamous after his lawyers used his pampered, privileged upbringing to escape jail for a deadly 2013 drunk-driving crash.
Instead, Couch was sentenced in juvenile court to 10 years of probation for killing four people and injuring nine others.
But when video surfaced that appeared to show Couch violating his probation by drinking, he and his mother took off for Mexico. | – Tonya Couch, mom to "affluenza" teen Ethan Couch, will likely be released from jail in Fort Worth, Texas, on Tuesday after posting bond. A Tarrant County judge reduced her bond from $1 million to $75,000 on Monday after hearing that the government has frozen Couch's bank account, leaving her essentially broke, reports NBC 5. Her 29-year-old son Steven McWilliams testified that Couch's account balance read "-$99 billion." When asked who was paying Couch's legal bills, McWilliams answered "me, so far," per NBC News. Couch will be fitted with an ankle monitor and not be allowed to leave the home of McWilliams and his family, reports CNN. Her doctor and lawyer are allowed to visit. Still, a sheriff fears Couch could try to flee again. "That's always a concern you're going to have when somebody has already fled one time," he says. "I hope the restrictions put in place will be sufficient." Judge Wayne Salvant said Couch's charge of hindering apprehension of a felon, a third-degree felony, didn't warrant a $1 million bond; the charge usually carries a bond of around $10,000, though prosecutors said Couch is a "proven flight risk." Couch's attorney says she'll plead not guilty, though a judge has ordered she undergo a mental exam to determine if she is fit to stand trial. The judge on Friday said there is "reasonable cause" to believe Couch suffers from "a mental illness or is a person with a mental retardation." (MADD, meanwhile, wants Ethan Couch's case sent to adult court.) |
It has taken thousands of years, but a combination of 21st-century forensic science and luck has finally revealed what happened to Tutankhamun – the world's most famous pharaoh.
Mystery has surrounded the boy king ever since his death in 1323BC, aged 19. The mystery intensified when the archaeologist Lord Carnarvon died in Cairo shortly after he and Howard Carter discovered Tutankhamun's tomb in 1922.
Now British experts think they have solved the riddle of the king's death. They believe injuries on his body are akin to those sustained in a chariot accident and that his mummification was botched.
Dr Chris Naunton, director of the Egypt Exploration Society, was intrigued when he came across references in Carter's records of the body having been burnt. A clue came from Dr Robert Connolly, an anthropologist at Liverpool University, who was part of the team that X-rayed Tutankhamun's remains in 1968. Among the bones in his office he recently found a piece of the pharaoh's flesh – the only known sample outside Egypt.
Working with forensic archaeologist Dr Matthew Ponting, Dr Connolly used a scanning electron microscope to determine that the flesh had been burnt. Subsequent chemical tests confirmed that Tutankhamun's body was burnt while sealed inside his coffin. Researchers discovered that embalming oils combined with oxygen and linen caused a chemical reaction which "cooked" the king's body at temperatures of more than 200C. Dr Chris Naunton said: "The charring and possibility that a botched mummification led the body spontaneously combusting shortly after burial was entirely unexpected, something of a revelation."
Working with scientists from the Cranfield Forensic Institute, researchers performed a "virtual autopsy" which revealed a pattern of injuries down one side of his body. Their investigation also explains why King Tut's mummy was the only pharaoh to be missing its heart: it had been damaged beyond repair.
The pharaoh's injuries have been matched to a specific scenario – with car-crash investigators creating computer simulations of chariot accidents. The results suggest a chariot smashed into him while he was on his knees – shattering his ribs and pelvis and crushing his heart.
The new findings will be shown for the first time in Channel 4's 'Tutankhamun: The Mystery of the Burnt Mummy' next Sunday at 8pm ||||| Egyptologist finds evidence that ‘King Tut’ spontaneously combusted
By Scott Kaufman
Sunday, November 3, 2013 14:50 EST
According to a Channel 4 documentary, evidence shows that the embalmed body of the most famous Egyptian pharaoh spontaneously combusted inside its sarcophagus.
“Tutankhamun: The Mystery of the Burnt Mummy” examines not only how the young pharaoh died, but what happened to his body after it was interred.
Egyptologist Chris Nauton, the director of the Egypt Exploration Society, and a team of car crash investigators ran computer simulations that lend credence to the increasingly accepted theory that Tutankhamun was killed in a chariot accident. The simulations showed that the injuries scaling down one side of his body are consistent with a high-speed collision.
But it is the possibility of a botched mummification and its consequences that really interest Nauton.
“Despite all the attention Tut’s mummy has received over the years the full extent of its strange condition has largely been overlooked,” he said. “The charring and possibility that a botched mummification led the body spontaneously combusting shortly after burial was entirely unexpected, something of a revelation in fact.”
Nauton discovered a post-mortem exam from the 1960s in which a scanning electron microscope indicated that the mummy’s flesh was burnt. His research discovered that the embalming oils used at the time of Tutankhamun’s death can, in the presence of oxygen and treated linen, cause a chemical reaction capable of “cooking” a human body.
The documentary airs on Sunday, November 10 in the U.K.
["Tut Ench Amun" on Shutterstock] ||||| Starting in 1996, Alexa Internet has been donating their crawl data to the Internet Archive. Flowing in every day, these data are added to the Wayback Machine after an embargo period. | – The story of King Tut's death—and what followed—just got even more interesting, all thanks to a single piece of flesh. That remnant of Tutankhamun is the only one of its kind known to exist outside Egypt, and British experts decided to analyze it after stumbling upon a decades-old record by one of the archaeologists who found Tut's tomb in 1922. That record indicated the body had been burnt, and a scanning electron microscope and chemical tests proved that was indeed the case, reports the Independent. But here's the truly intriguing part: The body caught afire after it was sealed in its sarcophagus, essentially "spontaneously combusting," says Egyptologist Chris Nauton. The scientists' theory is that the mummification was bungled and that the body caught fire due to an unfortunate chemical reaction spurred by the combination of oxygen, embalming oils, and linen; the temperature would have reached 400 degrees Fahrenheit. Raw Story adds that the researchers also partnered with a team of car crash investigators whose computer simulations seem to verify the leading theory as to cause of death: a chariot accident. The injuries sustained on one side of his body (as revealed by what the Independent calls a "virtual autopsy") suggest a chariot crashed into Tut while he was on his knees, crushing his heart. The findings will be presented in a documentary airing Sunday in Britain. Scientists believe Tutankhamun may have fallen from the chariot while hunting. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``United States Export Promotion Act
of 2005''.
SEC. 2. ELIMINATION OF FEES CHARGED FOR EXPORT PROMOTION PROGRAMS.
(a) Elimination of Fees.--The Secretary of Commerce, the
International Trade Administration, and the United States and Foreign
Commercial Service may not charge fees to United States exporters,
United States businesses, or United States persons, for assistance
provided to such exporters, businesses, or persons under subtitle C of
the Export Enhancement Act of 1988 (15 U.S.C. 4721 et seq.) or under
any other export promotion program.
(b) Authorization of Appropriations.--There are authorized to be
appropriated to the Department of Commerce, the International Trade
Administration, and the United States and Foreign Commercial Service
such sums as may be necessary to cover the costs of providing services
to United States exporters, United States businesses, or United States
persons, under export promotion programs.
SEC. 3. CAPITAL SECURITY COST-SHARING PROGRAM CHANGES.
In determining the total overseas presence of an agency for
purposes of section 604(e) of the Secure Embassy Construction and
Counterterrorism Act of 1999 (as enacted by section 1000(a)(7) of
Public Law 106-113), there shall be excluded any positions or
activities of the agency attributable to export promotion programs.
SEC. 4. UNITED STATES AND FOREIGN COMMERCIAL SERVICE ACTIVITIES ABROAD.
The Secretary of Commerce shall, not later than 180 days after the
date of the enactment of this Act--
(1) develop and submit to the Congress a plan to locate and
relocate offices, officers, and employees of the USFCS in other
countries at places other than the United States embassy or, in
any country in which there is no such embassy, the chief
diplomatic mission of the United States in that country;
(2) develop and submit to the Congress a plan to place, in
each country with which the United States has diplomatic
relations, a USFCS office or, in countries with smaller
markets, one or more foreign nationals working under the
supervision of a regional USFCS officer, to carry out functions
under export promoting programs if, on the basis of a market
analysis of the country conducted by the Secretary of Commerce,
the Secretary determines such placement is viable; and
(3) conduct and report to the Congress on a market analysis
of other countries for purposes of expanding activities of the
USFCS in those countries, particularly those with developing
economies.
SEC. 5. UNITED STATES TRADE MISSIONS.
The Secretary of Commerce shall, not later than 180 days after the
date of the enactment of this Act, develop and submit to the Congress a
plan for conducting at least 100 United States trade missions abroad in
fiscal years 2006 and 2007. Of these trade missions--
(1) 1 shall be dedicated for each of the several States,
(2) 1 shall be dedicated for the District of Columbia,
(3) 1 shall be dedicated for Puerto Rico and the Virgin
Islands, and
(4) 1 shall be dedicated for Guam and American Samoa,
with each such mission being comprised primarily of United States
businesses whose principal place of business is in the State or other
place listed in paragraphs (2) through (4) for which the trade mission
is dedicated. No fee may be charged to any United States business for
participating in any such trade mission.
SEC. 6. INCREASING PARTICIPATION IN GLOBAL MARKETS OF SMALL- AND
MEDIUM-SIZED BUSINESSES.
The Secretary of Commerce shall, not later than 180 days after the
date of the enactment of this Act, submit to the Congress--
(1) budget, staffing, and reorganization requirements of
the Department of Commerce and, with the concurrence of the
Administrator of the Small Business Administration, of the
Small Business Administration, in order to substantially
increase the ability of small businesses and medium-sized
businesses in the United States to compete in global markets;
and
(2) an overall United States trade promotion strategy, with
achievable annual action plans, that aggressively markets small
businesses and medium-sized businesses in the United States to
expanding overseas markets and directly supports, through trade
missions and related activities, the efforts of the individual
States (and the District of Columbia) toward achieving this
goal.
SEC. 7. DEVELOPMENT OF EXPORT DATABASE AND OTHER TRADE PROMOTION
ACTIVITIES.
(a) Database.--The Secretary of Commerce shall--
(1) conduct a comprehensive review, reorganization, and
expansion of the Web site www.export.gov (or any successor Web
site) of the Department of Commerce in order to--
(A) increase the usability and scope of the Web
site; and
(B) ensure that each USFCS office location has an
interactive Web site that is interoperable with
www.export.gov; and
(2)(A) create and maintain a database of United States
exporters;
(B) provide United States exporters with the ability to
elect to be included in the database; and
(C) report to Congress on methods other Federal agencies
may use to assist United States businesses interested in
developing export markets in accessing the database; and
(3) after reviewing successful trade promotion activities
of other countries with which the United States competes in
global markets, make such modifications to the operations of
the Department of Commerce in carrying out export promotion
programs, including modifications to Internet access, as are
necessary to more effectively assist in matching business
opportunities abroad to potential suppliers in the United
States, and to support closing of transactions, arranging of
financing, and delivery of goods or services.
SEC. 8. DEFINITIONS.
In this Act:
(1) Export promotion program.--The term ``export promotion
program'' has the meaning given that term in section 201(d) of
the Export Administration Amendments Act of 1985 (15 U.S.C.
4051(d)).
(2) Small business.--The term ``small business'' means any
small business concern as defined under section 3 of the Small
Business Act (15 U.S.C. 632).
(3) United states business.--The term ``United States
business'' has the meaning given that term in section 2304(e)
of the Export Enhancement Act of 1988 (15 U.S.C. 4724(e)).
(4) United states exporter.--The term ``United States
exporter'' has the meaning given that term in section 2301(j)
of the Export Enhancement Act of 1988 (15 U.S.C. 4721(j)).
(5) USFCS.--The term ``USFCS'' means the United States and
Foreign Commercial Service of the Department of Commerce.
(6) United states person.--The term ``United States
person'' has the meaning given that term in section 2306(c) of
the Export Enhancement Act of 1988 (15 U.S.C. 4725(c)). | United States Export Promotion Act of 2005 - Prohibits the Secretary of Commerce, the International Trade Administration, and the U.S. and Foreign Commercial Service (USFCS) from charging fees to U.S. exporters, businesses, or persons for assistance provided to them under export promotion programs.
Revises the capital security cost-sharing program under the Secure Embassy Construction and Counterterrorism Act of 1999 to exclude from determination of an agency's total overseas presence any positions or activities attributable to export promotion programs.
Requires the Secretary to develop and submit to Congress plans to: (1) locate and relocate USFCS offices, officers, and employees in other countries at places other than the U.S. embassy or the U.S. chief diplomatic mission; and (2) place a USFCS office where the United States has diplomatic relations or, where viable in countries with smaller markets, one or more foreign nationals working under a regional USFCS officer's supervision to carry out export promotion functions.
Directs the Secretary to develop and submit to Congress a plan for conducting at least 100 U.S. trade missions abroad in FY2006-FY2007.
Requires the Secretary to: (1) increase the participation in global markets of small and medium-sized U.S. businesses; (2) review, reorganize, and expand the Department of Commerce Web site to increase its usability and scope; and (3) create a database of U.S. exporters. |
in recent years , we have witnessed increasing interest in the optical properties of metallic nanoparticles .
research in the area of plasmonic nanoelectronics explores the behavior of electromagnetic fields which are confined over dimensions smaller than the wavelength .
this field confinement is induced by interactions between electromagnetic waves and conduction electrons at the interfaces of metallic nanostructures .
these plasmonic interactions can be used to manipulate light beyond the classical diffraction limit , giving rise to a wide range of interesting novel applications in sensing and waveguiding , amongst others @xcite . additionally , at optical frequencies , metallic nanostructures can be characterized by localized surface
plasmon resonances ( lsprs ) that support a strong enhancement in the directivity of spontaneous emission of light by single fluorescent molecules or other point - like emitters of light .
this enhancement in the emission of light is a fundamental aspect that has recently motivated intensive research on the design , experimental construction and testing of nanoantennas [ 3 - 7 ] . as a consequence of this novel research area pioneered in recent years , numerical techniques providing accurate simulation and analysis of problems involving plasmonic nanostructures
are required to exploit the growing range of applications relying on optical plasmonic properties .
the behavior of nanoparticles at optical frequencies can be well modeled by classical electrodynamics [ 1 ] . nevertheless ,
in electromagnetic optics , the penetration of fields into metals must be considered and therefore sie ( surface integral equation ) formulations for penetrable scatterers are a suitable choice for simulating the optical responses of isolated plasmonic nanoparticles or composite structures consisting of multiple nanoparticles .
surface integral equation techniques based on the method of moments ( mom ) have been shown to provide very accurate simulation results in many different problems involving real plasmonic objects [ 8 - 11 ] . in spite of being a classical approach , the sie
mom method delivers accurate predictive results for particle and surface feature sizes down to @xmath0 nm , a distance below which quantum non - local phenomena become non - negligible [ 11 ] .
although not yet widely employed in optics , the sie
mom approach brings important advantages over volumetric approaches such as the discrete - dipole approximation ( dda ) [ 12 ] , the finite - difference time - domain ( fdtd ) method [ 13 ] , and frequency - domain finite - element ( fem ) methods [ 14 ] .
such advantages include the fact that the sie
mom approach requires discretizations of the material boundary surfaces only , thus generally reducing the number of unknowns with respect to integral formulations based on volumetric mesh modeling .
additionally , absorbing boundary conditions and the surrounding empty space need not be parametrized , resulting in significantly easier mesh modeling .
finally , the sie mom methods are less prone to be affected by instabilities due to abrupt and spatially rapid variations of the permittivity [ 11 ] , as commonly occurs in the case of plasmonic problems .
there are many known factors that determine the final accuracy and the total runtime when simulating with the sie
mom approach .
however , two of them are of special interest when solving this type of problem : the sie mom formulation , and the iterative numerical solver .
previous studies have concentrated on assessing which sie formulations are more suitable for different types of electromagnetic scattering and radiation problems in terms of both runtime and accuracy [ 15 - 20 ] , including problems involving plasmonic structures [ 20 ] .
nevertheless , there is a general lack of publications that focus explicitly on the role of the iterative solver .
addressing this lack of results would certainly be relevant as the selected iterative solver has a strong impact on the total simulation runtime and memory consumption .
this impact is particularly important when dealing with real - world plasmonic problems , because they usually require performing massive batches of large - scale simulations for different varying parameters such as optical wavelength , illumination beam , and relative permittivity .
a comparative study of the performance of some well - known sie formulations was first released in [ 15 ] for non - plasmonic conducting and dielectric objects .
later on [ 16 ] , this was also done for perfect electric conductor ( pec ) bodies . a similar study for dielectric objects , but using acceleration algorithms for mom , was carried out in [ 17 ] .
a comparative study applying the sie mom approach to left - handed metamaterials ( lhm s ) was reported in [ 18 ] ; and , later on , this was done using acceleration algorithms [ 19 ] .
the first published comparison of five widespread sie formulations in the context of plasmonic media was presented in [ 20 ] .
all the above - mentioned publications also include iterative - performance studies . however , these studies are only focused on a single iterative solver .
the iterative - performance study carried out in [ 15 ] used only the ordinary ( unrestarted ) version of the generalized minimal residual method ( gmres ) [ 21 ] as the iterative solver .
a restarted gmres with a fixed restart parameter was chosen in @xcite , whereas another iterative solver called bicgstab [ 22 ] was used for all the analyses published in @xcite . as detailed in 1.2.5 of [ 23 ] , the algorithmic performance and the memory complexity of each iterative solver may vary significantly depending on the type of engineering problem considered . as a consequence , there are no general rules for selecting the solver that optimizes performance and memory usage for a given class of problems , and a dedicated study is required to compare the iterative solutions provided by the various algorithms .
this kind of study was left out of all the iterative - performance analyses reported in the aforesaid publications [ 15 - 20 ] , as these papers perform comparative analyses by varying only the sie formulation for a fixed iterative solver . in the present work
, we extend to multiple iterative solvers some comparative multiple - sie single - solver results published in [ 20 ] for plasmonic scatterers .
furthermore , a memory complexity analysis not included in the previously mentioned reference is presented in this paper .
moreover , the nanoscatterers used for obtaining the results in [ 20 ] consist only of spheres simulated at a single operating frequency .
the selection of spheres as targets is well justified by the availability of the mie s series analytical reference results [ 24 ] ; however , in the present paper we also analyze the iterative performance results when dealing with more elaborate geometries , such as a real plasmonic nanoantenna .
in addition to the study of the iterative performance , we also check , following a procedure similar to that in [ 20 ] , the accuracy of each sie formulation when the mom linear system is solved by different iterative techniques .
the sie formulations considered for the iterative performance comparison in this paper include tangential equations only ( the combined tangential formulation , known as ctf , and the poggio - miller - chang - harrington - wu - tsai formulation , named by the acronym pmchwt ) , normal equations only ( the combined normal formulation , cnf , and the modified normal müller formulation , mnmf ) and both normal and tangential equations ( the electric and magnetic current combined - field integral equation , jmcfie ) .
a detailed description of these five formulations can be found in @xcite .
the four studied iterative methods , based on krylov subspaces , used for solving linear systems of equations are the following : gmres ( generalized minimum residual method ) , tfqmr ( transpose free quasi - minimal residual method ) , cgs ( conjugate gradients squared method ) and bicgstab ( bi - conjugate gradients stabilized method ) .
thorough explanations for all these solvers can be found in [ 22 ] . in the particular case of gmres , in addition to the iterative performance , we analyze the influence of the restart parameter on the memory consumption as well .
the rest of this paper is organized as follows .
the sie formulations considered in the study are described in section 2 .
next , in section 3 , a concise overview on krylov iterative methods applied to sie formulations is given .
we present in section 4 a comparative study of numerical results for the considered sie formulations .
first , in subsection 4.1 , representative sets of near and far field outcomes obtained employing each sie formulation are compared with the analytical results provided by the mie s series , in order to determine the level of accuracy for each formulation .
then , in subsection 4.2 , the iterative performance of each linear solver is assessed by measuring the runtime to solve the mom system and the total number of required matrix - vector multiplications for different plasmonic geometries .
finally , section 5 concludes the paper with a summary .
in the analysis of the scattered field produced by an impinging electromagnetic wave on a penetrable object , both tangential and normal boundary conditions are typically imposed for the electric and magnetic fields at the interface of the object .
these boundary conditions establish the normal electric field integral equation ( n efie ) , the normal magnetic field integral equation ( n mfie ) , the tangential electric field integral equation ( t efie ) and the tangential magnetic field integral equation ( t mfie ) .
the following linear combinations of the aforementioned formulations are known to provide stable sets of sie formulations @xcite : @xmath1 in the preceding equations , we employ the same sign conventions as in [ 23 ] .
@xmath2 is the intrinsic impedance in region @xmath3 for @xmath4 . @xmath5 and @xmath6 are the outer and inner regions of the scatterer , respectively .
different values can be assigned to the complex scalar parameters @xmath7 for @xmath8 in order to obtain valid stable formulations .
the expressions for all the identities involved in eq .
( 1 ) are the following : @xmath9 in eqs .
( 2 ) , @xmath10 and @xmath11 denote the , a - priori unknown , induced equivalent surface currents ( electric and magnetic currents respectively ) on the interface between @xmath5 and @xmath6 . @xmath10 and @xmath11 are vector functions of an arbitrary surface point @xmath12 , which is defined approaching the surface from @xmath5 .
vector @xmath13 is the unit normal to the surface , pointing towards exterior region @xmath5
. vectors @xmath14 and @xmath15 respectively represent the incident electric and magnetic fields at surface point @xmath12 . @xmath16 and @xmath17
are used to denote integro - differential operators defined as @xmath18 \( g_i(\mathbf{r},\mathbf{r}')\,ds' \) and \( (k_i\mathbf{x})(\mathbf{r}) = \dashint_s \mathbf{x}(\mathbf{r}') \times \nabla g_i(\mathbf{r},\mathbf{r}')\,ds' \) ( 3 ) . the symbol @xmath19 is used in the definition of @xmath17 to indicate that the integration is taken as a cauchy principal value integral .
the integration surface @xmath20 refers to the separation interface between @xmath5 and @xmath6 .
the term @xmath21 in ( 3 ) refers to the scalar green s function . a generalization of the sie
mom formulation for the analysis of _ multiple _ plasmonic media can be looked up in [ 9 ] , or a different alternative approach can be found in [ 25 ] .
the discretization of the unknown currents into basis functions and the generation of the mom linear system can be consulted , for example , in @xcite .
the comparative study included in the following sections considers the five widespread formulations defined by the parameters @xmath7 in table 1 .
( table 1 caption : parameters for obtaining five well - documented surface integral equation formulations . )
the careful choice of the iterative method for plasmonic problems involving a high number of unknowns is fully supported by the fact that mom - based accelerating techniques such as the fast multipole method ( fmm ) and the multilevel fast multipole algorithm ( mlfma ) are usually employed for problems involving hundreds of thousands of unknowns . unlike pure mom , these mom - based accelerating techniques accept a controllable error in the solution [ 23 ] . in the fmm and mlfma algorithms , the direct inversion of the mom matrix is not feasible , even on modern parallel computers ; for this reason , iterative solvers are indispensable for finding the unknown coefficient vector .
even when using a pure mom code , without fmm or mlfma acceleration , an iterative solver usually provides the solution faster than a direct solver such as lu factorization .
in fact , the computational complexity of pure mom with a direct solver is @xmath22 , where @xmath23 is the number of unknowns .
this complexity can be easily lowered to @xmath24 by simply switching from a direct to an iterative solver [ 26 ] .
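as a rough , purely illustrative check of this scaling argument ( our own back - of - the - envelope estimate in python , not a figure taken from [ 26 ] ) , one can compare the nominal operation counts of both strategies for problem sizes similar to the ones studied later in this paper :

```python
# back-of-the-envelope operation counts: a dense lu factorization scales as
# o(n^3), whereas an iterative solver costs one or two dense matrix-vector
# products (o(n^2)) per iteration, i.e. roughly o(n_iter * n^2) overall.
# the iteration count below (500) is an arbitrary illustrative assumption.
for n in (10_000, 50_000, 100_000):
    lu_ops = n ** 3
    iterative_ops = 500 * 2 * n ** 2
    print(f"n = {n:7d}:  lu ~ {lu_ops:.1e} ops,  iterative ~ {iterative_ops:.1e} ops")
```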
krylov iterative solvers are among the most popular in computational electromagnetics , because of their ability to deliver good rates of convergence and to efficiently handle very large problems [ 16 ] .
this kind of methods look for the vector solution @xmath25 of the system @xmath26 in the krylov space @xmath27 , where @xmath28 represents a number of iteration in the solver .
@xmath29 is a suitable space from which one can construct approximate solutions of the linear system of equations , since it is closely related to @xmath30@xcite .
when any programmed krylov method is called , a tolerance value @xmath31 is passed to the code . in practice , the solver does not run until an exact solution is found , but rather terminates at iteration @xmath32 once a certain criterion has been satisfied for the estimated solution @xmath33 .
one typical criterion is to terminate the algorithm after the following inequality is met : @xmath34 the term @xmath35 is called relative residual .
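a minimal sketch of this stopping rule ( written in python for illustration ; the actual solvers in this work are part of a c code and this helper is not taken from it ) can be expressed as :

```python
import numpy as np

def converged(A, b, x_k, tol):
    """relative-residual test of eq. (4): stop once ||b - A x_k|| <= tol * ||b||."""
    return np.linalg.norm(b - A @ x_k) <= tol * np.linalg.norm(b)
```

a krylov loop simply evaluates this test after every ( internal ) iteration and returns the first iterate that satisfies it .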
typical values for the tolerance are @xmath36 for double precision ( 64 bits ) entries in @xmath37 and @xmath38 , and @xmath39 for single precision ( 32 bits ) entries . however , for mom - based simulations involving tens of millions of unknowns , the tolerance must be raised to values as high as about @xmath40 [ 27 ] .
this increase in @xmath31 is needed because the condition number of the mom system typically worsens as the number of unknowns grows , and the number of iterations needed to reach @xmath31 would otherwise become impractically large . the krylov method gmres ( generalized minimal residual method ) , developed by y. saad and m. h. schultz in 1986 [ 21 ] , is known to be the optimal iterative solver in the sense that it minimizes the number of iterations required to converge according to ( 4 ) [ 16 ] .
nevertheless , the optimality of gmres comes at a price .
the memory cost of applying the method increases with the number of iterations , and it may become prohibitive for certain problems . as an attempt to limit this cost , there exist restarted versions of gmres in which , after a given number _ r _ of iterations ( _ r _ is known as the restart parameter ) , the approximate solution for the next steps is computed from the previously generated krylov subspace .
then the existing krylov subspace is completely erased from memory , and a new space is constructed from the latest residual . in a restarted gmres , each ordinary iteration is called an internal iteration , whereas a set of _ r _ internal iterations ( _ r _ again denoting the restart value ) is called an external iteration . summarizing the storage costs ,
the number of additional floats or doubles over the baseline memory requirements in mom / fmm / mlfma that gmres requires in memory can be easily estimated from the following analytical results [ 28 ] : @xmath41 where _ n _ is the number of complex unknowns and @xmath42 is the number of iterations required to achieve ( 4 ) in the unrestarted ordinary gmres . for a restarted gmres ,
@xmath42 is computed from the restart parameter _
r _ as @xmath43 , with @xmath44 the total number of internal iterations .
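since the exact expression of eq . ( 5 ) is hidden behind the placeholder above , the following python sketch uses the standard textbook storage count for gmres ( one length - n basis vector per internal iteration plus a small hessenberg matrix ) as a stand - in ; the resulting numbers are therefore only indicative and may differ slightly from those given by [ 28 ] :

```python
def gmres_extra_complex_doubles(n, total_inner_iterations, restart=None):
    """rough count of extra complex doubles stored by gmres (textbook estimate)."""
    m = total_inner_iterations if restart is None else min(restart, total_inner_iterations)
    return (m + 1) * n + (m + 1) * m   # krylov basis vectors + small hessenberg matrix

n = 100_000                            # unknowns, of the order of the largest cases studied here
for r in (30, 60, 90, None):
    extra = gmres_extra_complex_doubles(n, total_inner_iterations=500, restart=r)
    label = "unrestarted" if r is None else f"restart {r}"
    print(f"gmres ({label:>12}): ~{extra * 16 / 2**20:7.1f} mib of additional storage")
```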
in addition to the restarted versions of gmres , other non - optimal iterative solvers attempt to preserve the favorable convergence properties of gmres while introducing only a negligible overhead in ram usage . for this paper , we consider complex versions of iterative krylov solvers based on different variants of the so - called lanczos biorthogonalization algorithm [ 22 ] : tfqmr ( transpose - free quasi - minimal residual method ) , cgs ( conjugate gradients squared method ) and bicgstab ( bi - conjugate gradients stabilized method ) .
the analysis of the accuracy and the iterative performance of the five sie formulations described in section 2 is carried out in this section for representative problems involving plasmonic materials .
error and iterative performance results were obtained for the krylov solvers described in section 3 .
the error analyses are summarized in subsection 4.1 .
then , in subsection 4.2 , we present iterative performance analyses that not only include spheres as scatterers , but also real models of plasmonic nanoantennas .
the gmres solver with a fixed restart was the only solver considered in [ 20 ] , whereas our results for plasmonic problems cover the following important cases : three additional solvers ( tfqmr , cgs and bicgstab ) , the ordinary gmres without restarts , and the restarted gmres including a set of different restart parameters not considered in the aforesaid reference .
furthermore , the results in [ 20 ] cover up to 36,000 unknowns . in contrast
, we have extended the comparative results up to around 100,000 unknowns .
some quantitative results presented in this paper regarding the iterative performance and the accuracy differ from their counterparts in [ 20 ] , even though there is general agreement on the main qualitative results common to both papers . these differences are due to two major reasons : i ) we employed a gaussian quadrature rule , described in [ 29 ] , consisting of 7 points per triangle for the numerical integration , whereas a 3-point rule was used in [ 20 ] ; ii ) unlike the simulations in [ 20 ] , we employed a diagonal preconditioner described in [ 29 ] together with the preconditioning technique in [ 30 ] .
a diagonal preconditioner and the technique in [ 30 ] are straightforward to implement in any existing mom code without adding significant computational complexity . moreover , it is stated in 10.2.4 of [ 29 ] that experimental results show the 7-point quadrature rule to be preferable to the 3-point rule , as it provides superior precision for mom problems , which supports our choice of this integration technique .
this paper also describes the numerical precision and meshes employed for the discretization , not clearly stated in some of the aforementioned literature , in order to allow a full reproducibility of our results by the scientific community .
double - precision floating - point c calculations were used for all the results in this paper , and we employed the so - called `` frontal '' mesh type in the free software gmsh [ 31 ] .
the `` frontal '' type was selected because it provides triangles with good aspect ratios , namely , triangles which do not have small internal angles .
this triangle feature is required in mom to obtain accurate results , as explained in 8.7.4.1 of [ 29 ] .
for all the studied iterative solvers , the maximum number of external iterations was unlimited , with the exception of the unrestarted gmres , where the number of iterations is limited to the number of unknowns ; the relative residual tolerance for stopping each method was set to 10@xmath45 , in accordance with the traditionally used tolerance value which can be found , for instance , in @xcite .
the main graphical representations of the results in this paper were obtained with a pure mom implementation , as in [ 20 ] .
however , some figures in this paper include mlfma results whenever a `` mlfma vs. pure mom '' comparison is relevant .
the mlfma implementation used for the present work was configured with the same mlfma parameters described in [ 32 ] .
our computational implementation of the mom / mlfma - sie method consists of a regular c code involving double - precision floating - point calculations . for the accurate evaluation of the singular integrals ,
we have used all the proper analytical extraction techniques explained in [ 29 ] . in order to model the mom currents ,
we have employed the well - known rao - wilton - glisson ( rwg ) basis functions [ 29 ] . the galerkin s method was assumed for this work , meaning that the mom testing functions are the same as the basis functions .
all our simulations were carried out on a computer with two intel xeon processors e5 - 2690v2 running the 64-bit operating system windows 8.1 professional .
the code was not parallelized by hand , but automatically parallelized using the source - to - source compiler parallware [ 33 ] to be run with 20 threads executed on half of the cores . the aforementioned model of the processors and
the number of threads have no influence on any comparative result , but only determine the absolute quantitative runtime outcomes in subsection 4.2 .
regarding the automatic parallelization , parallware is a parallelizing tool that automatically extracts the parallelism implicit in the source code of a sequential simulation program written in the regular c programming language .
in addition , parallware automatically generates an optimized parallel - equivalent program written in c and annotated with openmp [ 34 ] compiler pragmas .
a comparative analysis of the accuracy was carried out employing the five selected mom - based sie formulations and the four solvers . the same sphere with radius @xmath46 employed in 3.3 of [ 20 ] was used in this section as a representative example for evaluating the accuracy of each considered iterative solver .
this sphere is illuminated by a plane wave , with incidence direction @xmath47 and polarization @xmath48 , at the operating optical wavelength @xmath49 the following plasmonic materials were chosen for the sphere composition : gold ( @xmath50 , at the simulation frequency ) , silver ( @xmath51 ) and aluminum ( @xmath52 ) .
the values of the complex relative dielectric permittivity for each material have been extracted from [ 20 ] .
the normalized root mean square ( rms ) error with respect to the mie s series results was calculated using the following expression : @xmath53 where @xmath54 is the magnitude of the scattered electric far field obtained in the simulation for @xmath23 different observation values .
@xmath55 is the reference field provided by the mie s series . for far - field patterns ,
the error was calculated using @xmath56 equispaced angular values for the variable @xmath57 on plane xz .
the near - field patterns comprise a mesh with resolution @xmath58 points .
this mesh was created on plane xz over a centered square of side length @xmath59 , with @xmath60 the sphere radius .
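as an illustration of how such an error metric can be evaluated , the python sketch below computes a normalized rms error between a simulated magnitude pattern and a reference pattern ; because the exact normalization of the expression above is hidden behind the placeholder , the common choice of normalizing by the rms value of the mie reference is assumed , and the sample patterns are purely hypothetical :

```python
import numpy as np

def normalized_rms_error(e_sim, e_ref):
    """normalized rms error between simulated and reference field magnitudes."""
    e_sim, e_ref = np.asarray(e_sim, float), np.asarray(e_ref, float)
    return np.sqrt(np.mean((e_sim - e_ref) ** 2)) / np.sqrt(np.mean(e_ref ** 2))

# hypothetical far-field samples at equispaced angles on plane xz
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
e_mie = 1.0 + 0.5 * np.cos(theta) ** 2            # stand-in for the mie reference
e_mom = e_mie * (1.0 + 0.01 * np.sin(3 * theta))  # stand-in for a simulated result
print(f"normalized rms error = {normalized_rms_error(e_mom, e_mie):.3e}")
```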
we found no significant differences in the comparative error variation among the three considered plasmonic materials .
nor did we find any effect of the chosen iterative solver on the normalized rms error , as all four solvers yielded identical error levels .
the results in this section complement the findings published in @xcite , extending the error analysis to multiple iterative solvers .
fig . 1 shows the error versus the number of unknowns for each sie formulation when the sphere is made of gold .
the variation in the number of unknowns was achieved by varying the maximum side length @xmath61 of the triangles in the geometry mesh , according to the rule @xmath62 .
( fig . 1 caption : ( left ) far field ; ( right ) near field . error values were found to be identical among the four considered iterative solvers . )
in terms of accuracy , the pmchwt has clearly been shown to be the most reliable formulation in plasmonics .
this statement is in full agreement with the results in [ 20 ] .
the mnmf formulation has an error level comparable to the pmchwt in most cases ; however its far - field error level becomes unstable and starts to grow for high numbers of unknowns .
this anomalous behavior in mnmf might be ascribed to the singularity extraction techniques involved in the mnmf implementation [ 29 ] .
the ctf formulation exhibits a near - field error similar to the pmchwt , but in the far field its error level is notably worse . given that both pmchwt and ctf combine tangential equations only , the accuracy provided by the pmchwt highlights the actual importance of the combination scalar parameters on the obtained level of accuracy . as a graphical reference , fig .
2 shows far and near - field reference values obtained from the mie s series for the sphere with radius @xmath46 .
pmchwt far - field values simulated with two very different numbers of unknowns ( mesh sizes ) are represented in fig . 2 in order to allow for visual assessment of the real importance of taking into account the error variations shown in fig . 1 .
with the purpose of assessing the iterative performance of the five studied sie formulations , we initially simulated the @xmath46 radius gold sphere , using mom solved iteratively .
we employed four well - known iterative solvers for linear systems of equations [ 22 ] : tfqmr , cgs , bicgstab and gmres ( an unrestarted gmres version , plus three more gmres versions with restart parameters 30 , 60 and 90 ) . in fig .
3 , we analyze the influence of the gmres restart parameter on the memory cost .
this analysis is used to justify our choices for the mentioned restart values .
3 shows on the left the memory usage required by pure mom and also by mlfma , as a function of the number of unknowns .
these curves on the left are independent of the particular chosen solver . then , on the right of fig .
3 , the ram overuse introduced by gmres is represented for the restart values 30 , 60 and 90 , and for the unrestarted version . as inferred from fig .
3 , the percentage ram overuse may be negligible when using pure mom together with unrestarted gmres , but it is noticeably worse in the mlfma case . in other words , even if the available ram allows an mlfma simulation , choosing unrestarted gmres as the solver may make this simulation unfeasible . the maximum gmres restart considered in the present work , _ r _ = 90 , was selected to bound the total ram consumption increase in mlfma to around 1% of the total ram usage of every non - gmres krylov iterative solver .
measurements of the iterative performance of the formulations and solvers are shown on the left of fig .
4 , where the runtime for iteratively solving the mom system has been taken from the fastest iterative solver for each formulation . for this particular comparison ,
unrestarted gmres was not considered , in order to make a fair comparison among formulations .
indeed , for every formulation , gmres is optimal ; and , as a consequence , it outperforms all other krylov methods in terms of iterative performance .
however , as shown above , unrestarted gmres should only be used when memory requirements are not a concern .
fig . 4 also includes on the right , for comparison purposes , the time required to fill the full mom matrix before the iterative solver begins .
the time for filling the mom matrix is slightly smaller with the pmchwt and the ctf because , in these formulations , it is possible to avoid the integration of the green s function gradient which appears in operator @xmath16 in eq . ( 3 ) .
this simplification of the integral requires employing the two - dimensional version of the divergence theorem , which can not be invoked for the formulations that involve normal equations , due to the vector product by @xmath13 . as shown in fig .
4 , the restarted gmres(90 ) solver is the best choice for all the formulations , except for the tangential formulations pmchwt and ctf , where tfqmr provides the best iterative behavior .
another important comment on fig .
4 is that the mnmf formulation provides the best performance when solved iteratively .
( fig . 4 caption : @xmath46 radius gold sphere : ( left ) runtime for iteratively solving the mom system when the iterative method with the fastest convergence is chosen for each sie formulation ; ( right ) total runtime for filling the mom matrix . )
since , as discussed above in subsection 4.1 , the pmchwt provides the best accuracy when dealing with plasmonic problems , we chose to represent in fig .
5 the iterative performance of this formulation for each solver , including gmres without restart .
even though important previous works such as @xcite indicate that the pmchwt is unfit to be solved by an iterative method , the scenario considered in this paper is different and not directly comparable to @xcite .
the recently introduced preconditioning scheme in @xcite , not yet available at the time of the report in @xcite , was considered for the simulations in this work , as justified above . as can be seen in the graphical results ,
cgs and tfqmr are the best choices for the pmchwt , if unrestarted gmres is excluded owing to memory requirements .
fig . 5 also shows that the number of matrix - vector multiplications ( mvms ) required by each iterative solver is proportional to the runtime .
it is worth stating that all the solvers require two mvms in each iteration , except for gmres ( all versions ) , which requires only one mvm per internal iteration .
the number of mvms is thus a machine - independent parameter that can be easily used to compare the total processing time associated with each one of the formulations and solvers .
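one convenient way to obtain such a machine - independent mvm count ( shown here as a python sketch using scipy 's generic krylov routines on a small random stand - in system , completely unrelated to the c / mlfma code of this work ) is to wrap the system matrix in a counting linear operator ; note that the tolerance keyword is named rtol in recent scipy releases and tol in older ones :

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres, bicgstab, cgs

rng = np.random.default_rng(1)
n = 400
# diagonally dominant random complex system standing in for a mom matrix
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) + 5 * n * np.eye(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def solve_and_count(solver, **kwargs):
    count = [0]
    def matvec(v):
        count[0] += 1           # every call corresponds to one mvm
        return A @ v
    op = LinearOperator((n, n), matvec=matvec, dtype=complex)
    _, info = solver(op, b, rtol=1e-3, **kwargs)
    return count[0], info

for name, solver, kwargs in [("gmres(90)", gmres, {"restart": 90}),
                             ("bicgstab", bicgstab, {}),
                             ("cgs", cgs, {})]:
    mvms, info = solve_and_count(solver, **kwargs)
    print(f"{name:10s} mvms = {mvms:4d} (converged: {info == 0})")
```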
let us note that for the formulations other than the pmchwt all the solvers display a similar iterative behavior , as seen in fig .
6 .
fig . 7 shows the relative residual as a function of the number of mvms ( matrix - vector multiplications ) for the pmchwt formulation with 57,000 unknowns ( other mesh sizes exhibit an analogous behavior ) . as expected , gmres without restart exhibits the fastest residual decay .
regarding data from the cgs and bicgstab solvers in fig .
7 , it can be noted that the time - dependent residual variances are very high .
a similar erratic residual evolution can be also observed in the iterative solvers included in commercial codes like feko ; nonetheless , the important fact is that the time - averaged minimum residual is reduced . as a matter of fact
, it must be clear from fig . 7
that , for instance , both tfqmr ( non - erratic ) and cgs ( erratic in this scenario ) exhibit a similar time - averaged minimum residual evolution and they converge at a similar rate . returning to fig .
7 , it is clear that , for the cases where memory - expensive unrestarted gmres is not an option , tfqmr and cgs provide better convergence than gmres(90 ) for the pmchwt , at the residual tolerance of 10@xmath45 ; however , this will not be the case if a higher residual error is accepted for convergence .
this observation regarding the pmchwt formulation is particularly important because , when programming a single sie formulation in a mom code , the pmchwt is generally chosen as the preferred formulation due to the following reasons : 1 ) it is the easiest to implement , as it does not require using the normal vector directions ; and 2 ) it is employed in commercially available software like feko [ 35 ] and in publications about simulating plasmonic bodies such as [ 8 ] . in order to extract valid general conclusions about the iterative runtimes
, we also analyzed spheres made of silver and aluminum and we tested more elaborated geometries at different optical frequencies , such as a nanoantenna made of aluminum .
these general conclusions must be understood as valid under the simulation parameters justified at the beginning of section 4 .
the use of alternative preconditioners would certainly be an important point to consider in order to obtain more general conclusions ; however , these preconditioners would have required a considerably extended explanation beyond the scope of this paper . likewise ,
despite the fact that higher - order basis functions are not indispensably required to perform accurate plasmonic simulations @xcite , we understand that they could be interesting for further research .
additionally , sharp edges and corners are not considered in the present work when extracting general conclusions .
it is well known that sharp edges and corners are not typically found in real plasmonic structures chemically generated in a laboratory setup @xcite ; however we believe that additional research should be pursued in the future regarding plasmonic edges in both experimental and theoretical realms . finally , following a similar approach used in the literature @xcite
, we have only represented the most relevant cases when dealing with qualitative conclusions .
the results corresponding to the spheres made of silver and aluminum , as well as simulations varying the nanoantenna plasmonic composition , are not shown for the sake of simplicity , as these results have proved to be qualitatively equivalent to the shown ones . the considered real plasmonic yagi - uda nanoantenna for the emission of light at @xmath63 , whose emitted near field pattern obtained with our code is represented in fig .
8 , has been designed in [ 5 ] .
see this reference for a detailed description of the antenna dimensions , which have been optimized for high optical directivity at the operating frequency .
the optical antenna is made of aluminum ( @xmath64 at the simulation frequency ) , and it enhances the emission of a single fluorescent chemical molecule modeled as a classical hertzian dipole along direction @xmath65 , 4 nm above the feed element . in our particular simulation , the _ equivalent _ dipole has length 8 nm and its electric current is 1 na .
( fig . 8 caption : the represented near - field magnitudes were obtained with the pmchwt formulation for a mesh size @xmath66 . )
as inferred from fig .
9 , iterative performance in the nanoantenna problem is similar to that previously presented for the mie scattering problem when the sphere was simulated .
the absolute runtimes versus the number of unknowns vary when compared to the mie scattering , but the qualitative differences among the four iterative solvers remain very similar , which allows extracting some general conclusions for a wide variety of tests involving plasmonic media .
these conclusions are summarized in the next section .
( fig . 9 caption , when the nanoantenna is simulated : ( left ) time for solving the mom system ; ( right ) number of matrix - vector multiplications . )
four well - documented iterative solvers have been applied to five widespread mom - sie approaches in the analysis of nanostructures made of plasmonic materials . in order to increase the reproducibility of the results in this paper , we have provided in - depth details about all the parameters involved in our code implementation ( numerical integration rule , mesh type , computational precision , etc . ) , and we have explained the reasons for the choice of such parameters . such implementation details are usually not described in previous papers about iterative solvers in computational electromagnetics , which affects the reproducibility of their results .
our observations are summarized as follows : 1 . in the case where memory requirements are not an impediment according to eq .
( 5 ) , then the best choice for all the formulations is always gmres without restart .
2 . whenever accuracy is required in the results , then the best choice is to use the pmchwt formulation , because this formulation has the smallest error in all the analyzed plasmonic problems ; however , the pmchwt has a poor iterative performance .
3 . if the pmchwt is chosen and unrestarted gmres can not be applied due to memory requirements , then the best iterative solvers are tfqmr and cgs , because they may provide a much better performance , for the standard double - precision residual tolerance of 10@xmath45 , than the rest of the analyzed solvers .
tfqmr and cgs are also the best choices for the ctf formulation ( although in this case the effect is less noticeable ) .
4 . the fastest convergence in plasmonics belongs to the mnmf and jmcfie formulations . for these formulations , importantly , the choice of the iterative solver has a small impact on the total runtime .
5 . for a given fixed residual tolerance and iterative solver
, the choice of the sie formulation has a direct impact on the simulation error and also an impact on the runtime .
nonetheless , for a fixed residual tolerance and sie formulation , the choice of the iterative solver has a negligible impact on the simulation error but , in general , a relevant impact on the total runtime . in this paper
, we have not only confirmed the conclusions in items 1 and 2 above ( also approached in @xcite and @xcite , respectively ) , but we have also extended our analysis to extract the remaining conclusions on the list above .
these novel conclusions , together with the quantified results in the presented figures , are relevant contributions from this work to the know - how literature of computational electromagnetics in the context of plasmonic problems .
the authors especially thank the company _ appentra solutions , _ developers of the automatic parallelizing source - to - source compiler _ parallware _ ( www.appentra.com ) , for assisting us in the analysis and parallelization of some parts of our c codes .
this work is partially supported by the spanish national research and development program under project tec2011 - 28683-c02 - 02 , by the spanish government under project tactica , by the european regional development fund ( erdf ) , and by the galician regional government under agreement for funding atlanttic ( atlantic research center for information and communication technologies ) .
barsan v , lungu rp .
trends in electromagnetism - from fundamentals to applications .
intech ; 2012 .
chapter 7 , fast preconditioned krylov methods for boundary integral equations in electromagnetic scattering ; p. 155 - 176 .
ergül ö , gürel l. comparison of integral - equation formulations for the fast and accurate solution of scattering problems involving dielectric objects with the multilevel fast multipole algorithm .
ieee trans antenn propag .
2009;57:176 - 187 .
araújo mg , taboada jm , solís dm , rivero j , landesa l , obelleiro f. comparison of surface integral equation formulations for electromagnetic analysis of plasmonic nanoscatterers . opt express .
2012;20:9161 - 9171 .
gómez - sousa h , rubiños - lópez o , martínez - lorenzo ja .
junction modeling for piecewise non - homogeneous geometries involving arbitrary materials .
2014 international symposium on antennas and propagation ; 2014 july 6 - 11 , memphis , tennessee , usa . ieee antennas and propagation society ( ap - s ) ; 2014 .
p. 2196 - 2197 .
taboada jm , araújo mg , bértolo jm , landesa l , obelleiro f , rodríguez jl .
mlfma - fft parallel algorithm for the solution of large - scale problems in electromagnetics .
progr in electromag research . 2010;105:15 - 30 .
landesa l , araújo mg , taboada jm , bote l , obelleiro f. improving condition number and convergence of the surface integral - equation method of moments for penetrable bodies . opt express .
2012;20:17237 - 17249 .
the appentra team .
parallware : automatic parallelization of sequential codes [ computer software ] . a coruña ( spain ) : appentra solutions sl ; 2015 .
available from : http://www.appentra.com/products/parallware/ em software & systems - s.a .
( pty ) ltd .
modelling of dielectric materials in feko .
technical report ; 2005 [ cited 2015 sep 7 ] .
available from : https://www.feko.info/about-us/quarterly/feko_quarterly_mar_2005.pdf | the electromagnetic behavior of plasmonic structures can be predicted after discretizing and solving a linear system of equations , derived from a continuous surface integral equation ( sie ) and the appropriate boundary conditions , using a method of moments ( mom ) methodology . in realistic large - scale optical problems , a direct inversion of the sie
mom matrix can not be performed due to its large size , and an iterative solver must be used instead .
this paper investigates the performance of four iterative solvers ( gmres , tfqmr , cgs , and bicgstab ) for five different sie
mom formulations ( pmchwt , jmcfie , ctf , cnf , and mnmf ) .
moreover , in this plasmonic context , a set of suggested guidelines is provided for choosing a suitable sie formulation and iterative solver depending on the desired simulation error and the available runtime resources .
electromagnetic optics ; plasmonic nanostructures ; computational electromagnetics ; surface integral equations ; method of moments ; iterative solvers |
at zero temperature all degrees of freedom tend to freeze and usually a variety of different orders , such as superconductivity and magnetism , will develop in different materials .
however , in a quantum system with a large zero - point energy , one may expect a liquid - like ground state to exist even at @xmath6 . in a system
consisting of localized quantum magnets , we call such a quantum - fluctuation - driven disordered ground state a quantum spin liquid ( sl)@xcite .
it is an exotic phase with novel `` fractionalized '' excitations carrying only a fraction of the electron quantum number , _ e.g. _ spinons which carry spin but no charge .
the internal structures of these sls are so rich that they are beyond the description of landau s symmetry breaking theory@xcite of conventional ordered phases .
instead they are characterized by long - range quantum entanglement@xcite encoded in the ground state , which is coined
`` topological order '' @xcite in contrast to the conventional symmetry - breaking order .
geometric frustration in a system of quantum magnets would lead to a huge degeneracy of classical ground state configurations .
the quantum tunneling among these classical ground states provides a mechanism to realize quantum sls .
the quest for quantum sls in frustrated magnets ( for a recent review see ref . ) has been pursued for decades . among them , the heisenberg @xmath0 kagome lattice model ( hklm ) @xmath7 has long been thought of as a promising candidate . here @xmath8 denotes that @xmath9 is a nearest neighbor pair .
experimental evidence of sl@xcite has been observed in zncu@xmath10(oh)@xmath11cl@xmath12 ( called herbertsmithite ) , a spin - half antiferromagnet on the @xmath1 lattice . theoretically , in the absence of an exact solution of the two - dimensional ( 2d ) quantum hamiltonian ( [ hklm ] ) in the thermodynamic limit , previous studies proposed either a honeycomb valence bond crystal@xcite ( hvbc ) with an enlarged @xmath13-site unit cell or a gapless sl@xcite as the ground state of hklm .
however , an extensive density - matrix - renormalization - group ( dmrg ) study@xcite on hklm recently revealed the ground state of hklm to be a gapped sl , whose energy is substantially lower than that of the hvbc . besides
, they also observe numerical signatures of @xmath3 topological order in the sl state .
( fig . 1 caption : ( a ) the @xmath1 lattice and the elements of its symmetry group . @xmath14 are the translation unit vectors , @xmath15 denotes @xmath16 rotation around the honeycomb center and @xmath17 represents mirror reflection along the dashed blue line . here @xmath18 and @xmath19 denote 1st and 2nd nearest neighbor ( n.n . ) mean - field bonds while @xmath20 and @xmath21 represent two kinds of independent 3rd n.n . mean - field bonds . ( b ) mean - field ansatz of the @xmath4\beta$ ] state up to 2nd nearest neighbors . colors in general denote the sign structure of mean - field bonds . dashed lines denote 1st n.n . real hopping terms @xmath22 : red ones have @xmath23 and black ones have @xmath24 . solid lines stand for 2nd n.n . hopping @xmath25 and singlet pairing @xmath26 : again red ones have @xmath23 and blue ones have @xmath24 . here @xmath27 and @xmath28 are real parameters after choosing a proper gauge . )
motivated by this important numerical discovery , we try to find out the nature of this gapped @xmath3 sl .
different @xmath3 sls on the @xmath1 lattice have been previously studied using schwinger - boson representation@xcite .
here we propose candidate states of symmetric @xmath3 sls on the @xmath1 lattice using the schwinger - fermion mean - field approach@xcite .
following is the summary of our results .
first we use projective symmetry group@xcite ( psg ) to classify all 20 possible schwinger - fermion mean - field ansatz of @xmath3 sls which preserve all the symmetry of hklm , as shown in table [ tab : mf_ansatz ] .
we analyze these 20 states and rule out some obviously unfavorable states : _
e.g. _ gapless states , and those states whose 1st nearest neighbor ( n.n . ) mean - field amplitudes must vanish due to symmetry
. then we focus on those @xmath3 sls in the neighborhood of the @xmath5-dirac sl@xcite . in ref .
it is shown that the @xmath5-dirac sl has a significantly lower energy compared with other candidate @xmath5 sl states , such as the uniform resonating - valence - bond ( rvb ) state ( or the @xmath5 sl-@xmath29 state in the notation of ref . ) .
we find out that there is only one gapped @xmath3 sl , which we label as @xmath4\beta$ ] , in the neighborhood of ( or continuously connected to ) @xmath5-dirac sl .
therefore we propose this @xmath4\beta$ ] state as a promising candidate state for the ground state of hklm .
the mean - field ansatz of @xmath4\beta$ ] state is shown in fig .
[ fig : kagome](b ) .
our work also provides a guideline for choosing variational states in future numerical studies of the sl ground state on the @xmath1 lattice .
in the schwinger - fermion construction@xcite , we represent a spin-1/2 operator at site @xmath30 by fermionic spinons @xmath31 : @xmath32 the heisenberg hamiltonian @xmath33 is then represented as @xmath34 .
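for concreteness , the standard form of this construction ( which we assume is what the placeholders above stand for ) reads :

```latex
% standard schwinger-fermion decomposition of a spin-1/2 operator and the
% rewriting of the heisenberg exchange (conventional form, assumed to match
% the placeholders in the extracted text)
\mathbf{S}_i = \tfrac{1}{2}\, f_{i\alpha}^{\dagger}\,\boldsymbol{\sigma}_{\alpha\beta}\, f_{i\beta},
\qquad
\mathbf{S}_i \cdot \mathbf{S}_j
  = -\tfrac{1}{2}\, f_{i\alpha}^{\dagger} f_{j\alpha}\, f_{j\beta}^{\dagger} f_{i\beta}
    + \tfrac{1}{4}
  \quad \bigl(\text{using } \textstyle\sum_{\alpha} f_{i\alpha}^{\dagger} f_{i\alpha} = 1\bigr).
```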
this construction enlarges the hilbert space of the original spin system . to obtain the physical spin state from a mean - field state of @xmath35-spinons
, we need to enforce the following one-@xmath35-spinon - per - site constraint : @xmath36 mean - field parameters of symmetric sls are @xmath37 , @xmath38 , where @xmath39 is the completely antisymmetric tensor .
both terms are invariant under global @xmath40 spin rotations . after a hubbard - stratonovich transformation
, the lagrangian of the spin system can be written as @xmath41+\sum_i a_0^l(i ) \psi_i^{\dagger}\tau^l\psi_i\label{eq : action}\end{aligned}\ ] ] where two - component fermion notation @xmath42 is introduced for reasons that will be explained shortly .
we use @xmath43 to denote the @xmath44 identity matrix and @xmath45 are the three pauli matrices .
@xmath46 is a matrix of mean - field amplitudes : @xmath47 @xmath48 are the local lagrange multipliers that enforce the constraints of eq . ( [ eq : constraint ] ) . in terms of @xmath49
, schwinger - fermion representation has an explicit @xmath40 gauge redundancy : a transformation @xmath50 , @xmath51 , @xmath52 leaves the action invariant .
this redundancy originates from the representation in eq . ( [ eq : schwinger - fermion ] ) : such a local @xmath40 transformation leaves the spin operators invariant and does not change the physical hilbert space .
one can try to solve eq.([eq : action ] ) by mean - field ( or saddle - point ) approximation . at mean - field level ,
@xmath46 and @xmath53 are treated as complex numbers , and @xmath53 must be chosen such that constraints ( [ eq : constraint ] ) are satisfied at the mean field level : @xmath54 .
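written out in the conventional notation ( again our assumption for what the placeholders stand for ) , the two - component spinor and the mean - field version of the constraint take the form :

```latex
% conventional two-component spinor and the averaged (mean-field) version of
% the one-fermion-per-site constraint
\psi_i = \begin{pmatrix} f_{i\uparrow} \\ f_{i\downarrow}^{\dagger} \end{pmatrix},
\qquad
\langle \psi_i^{\dagger} \tau^{l} \psi_i \rangle = 0 , \quad l = 1, 2, 3 .
```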
the mean - field ansatz can be written as : @xmath55 where we defined @xmath56 . under a local @xmath40 gauge transformation @xmath57 , but the physical spin state described by the mean - field ansatz @xmath58 remains the same . by construction
the mean - field ansatz does not break spin rotation symmetry , and the mean field solutions describe sl states if lattice symmetry is preserved .
different @xmath59 ansatz may be in different sl phases .
the mathematical language to classify different sl phases is projective symmetry group ( psg)@xcite .
psg characterizes the topological order in schwinger - fermion representation : sls described by different psgs are different phases .
it is defined as the collection of all combinations of symmetry group and @xmath40 gauge transformations that leave mean - field ansatz @xmath59 invariant ( as @xmath53 are determined self - consistently by @xmath59 , these transformations also leave @xmath53 invariant ) .
the invariance of a mean - field ansatz @xmath59 under an element of psg @xmath60 can be written as @xmath61 here @xmath62 is an element of symmetry group ( sg ) of the corresponding sl . in our case of symmetric sls on the @xmath1 lattice ,
we use @xmath63 to label a site with sublattice index @xmath64 and @xmath65 .
bravais unit vector are chosen as @xmath66 and @xmath67 as shown in fig .
[ fig : kagome](a ) .
the symmetry group is generated by time reversal operation @xmath68 , lattice translations @xmath69 along @xmath14 directions , @xmath16 rotation @xmath70 around honeycomb plaquette center and the mirror reflection @xmath17 ( for details see appendix [ app : symmetry group ] ) . for example , if @xmath71 is the translation along @xmath72-direction in fig.[fig : kagome](a ) , @xmath73 .
@xmath74 is the gauge transformation associated with @xmath75 such that @xmath60 leaves @xmath59 invariant .
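in the standard notation of the psg construction ( which we assume matches the placeholders here and in the bond - generating relation quoted in the next sentence ) , the invariance condition and the induced relation between symmetry - related bonds read :

```latex
% psg invariance condition for a symmetry operation U with associated gauge
% transformation G_U, and the relation it induces among mean-field bonds
% (standard convention, assumed to correspond to the placeholders)
G_U U(u_{ij}) \equiv G_U\bigl(U(i)\bigr)\, u_{U(i)U(j)}\, G_U^{\dagger}\bigl(U(j)\bigr) = u_{ij},
\qquad
u_{U(i)U(j)} = G_U^{\dagger}\bigl(U(i)\bigr)\, u_{ij}\, G_U\bigl(U(j)\bigr).
```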
notice this condition ( [ psg_definition ] ) allows us to generate all symmetry - related mean - field bonds from one by the following relation : @xmath76 there is an important subgroup of psg , the invariant gauge group ( igg ) , which is composed of all the pure gauge transformations in psg : @xmath77 .
in other words , @xmath78 is the pure gauge transformation associated with identity element @xmath79 of the symmetry group .
one can always choose a gauge in which the elements in igg is site - independent .
in this gauge , the igg can be the global @xmath3 transformations @xmath80 , the global @xmath5 transformations @xmath81 , or the global @xmath40 transformations @xmath82 ( with \( \hat n \in s^2 \) ) , and we term the corresponding states @xmath3 , @xmath5 and @xmath40 states , respectively .
the importance of igg is that it controls the low energy gauge fluctuations of the corresponding sl states . beyond mean - field level ,
fluctuations of @xmath83 and @xmath53 need to be considered and the mean - field state may or may not be stable .
the low energy effective theory is described by fermionic spinon band structure coupled with a dynamical gauge field of igg .
for example , @xmath3 state with gapped spinon dispersion can be a stable phase because the low energy @xmath3 dynamical gauge field can be in the deconfined phase@xcite .
notice that the condition @xmath84 for a @xmath3 sl leads to a series of consistency conditions for the gauge transformations @xmath85 , as shown in appendix [ app : symmetry group ] .
gauge inequivalent solutions of these conditions ( [ algebra : psg : t])-([algebra : psg : c6 ] ) lead to different @xmath3 sls .
soon we will show that there are 20 @xmath3 sls on the @xmath1 lattice that can be realized by a schwinger - fermion mean - field ansatz @xmath59 .
following the previous discussion , we use the psg to classify all 20 possible @xmath3 sl states on the @xmath1 lattice in this section . as will be shown later , among them there is one gapped @xmath3 sl , labeled as the @xmath4\beta$ ] state , in the neighborhood of the @xmath5-dirac sl . this @xmath4\beta$ ] sl state is the most promising candidate for the sl ground state of hklm . applying
the condition @xmath86 to the @xmath1 lattice with the symmetry group described in appendix [ app : symmetry group ] , we obtain a series of consistency conditions for the gauge transformations @xmath87 ,
_ i.e. _ conditions ( [ algebra : psg : t])-([algebra : psg : c6 ] ) . solving these conditions
we classify all the 20 different schwinger - fermion mean - field states of @xmath3 sls on @xmath1 lattice , as summarized in table [ tab : mf_ansatz ] .
these 20 mean - field states correspond to different @xmath3 sl phases , which can not be continuously tuned into each other without a phase transition .
table [ tab : mf_ansatz ] : mean - field ansatz of 20 possible @xmath3 sls on a @xmath1 lattice . in our notation of mean - field amplitudes
@xmath88 $ ] , this table summarizes all symmetry - allowed mean - field bonds up to 3rd n.n . , _
bond @xmath89 $ ] , 2nd n.n .
bond @xmath90 $ ] , 3rd n.n .
bonds @xmath91 $ ] and @xmath92 $ ] as shown in fig .
[ fig : kagome](a ) .
@xmath93 denote the on - site chemical potential terms which enforce the constraint ( [ constraint : mf ] ) .
@xmath43 is @xmath44 identity matrix while @xmath45 are three pauli matrices . @xmath94
denote hopping while @xmath95 denote pairing terms .
@xmath96 means the corresponding mean - field amplitudes must vanish due to symmetry .
red color denotes the shortest mean - field bonds necessary to realize a @xmath3 sl . in other words , the mean - field amplitudes with red color break the @xmath5 gauge redundancy down to @xmath3 through higgs mechanism .
so in @xmath97 and @xmath98 states a @xmath3 sl can not be realized with up to 3rd n.n .
mean - field amplitudes .
note that @xmath99 state needs only 3rd n.n .
bond @xmath20 to realize a @xmath3 sl ( @xmath100 not necessary ) , while state @xmath101 needs only @xmath100 to realize a @xmath3 sl ( @xmath20 not necessary ) .
notice that when @xmath102 the mean - field ansatz ( instead of the sl itself ) will break translational symmetry and double the unit cell .
there are six @xmath3 sls , _
i.e. _ @xmath98 , that do not allow any 1st n.n .
mean - field bonds . among the other 14 @xmath3 sls with nonvanishing 1st
mean - field bonds , only five @xmath3 sl states , _
i.e. _ @xmath103 have gapped spinon spectra .
the @xmath104 or @xmath4\beta$ ] state in the neighborhood of the @xmath5-dirac sl is the most promising candidate @xmath3 sl for the hklm ground state .
in the following we find out all the gauge - inequivalent solutions of the @xmath40 matrices @xmath105 satisfying the above conditions .
they are summarized in table .
( i ) @xmath106 and therefore : conditions ( [ condition : t_sig ] ) and ( [ condition : t_c6 ] ) are automatically satisfied .
\(1 ) : notice that under a global gauge transformation @xmath107 the psg elements transform as @xmath108 thus from ( [ condition : sig ] ) and ( [ condition : c6_sig ] ) we can always have @xmath109 and @xmath110 , @xmath111 by choosing a proper gauge .
\(a ) : from ( [ condition : c6 ] ) we have @xmath112 .
\(b ) : from ( [ condition : c6 ] ) we have @xmath113 by gauge fixing .
+ ( 2 ) : from ( [ condition : sig ] ) we have @xmath114 and @xmath115 by gauge fixing . also from ( [ condition : c6_sig ] ) we can choose a gauge so that @xmath110 and @xmath116 .
\(a ) : in this case ( [ condition : c6_sig ] ) requires @xmath112 and thus according to ( [ condition : c6 ] ) .
\(b ) : \(a ) : now from ( [ condition : c6_sig ] ) and ( [ condition : c6 ] ) we have @xmath117 by gauge fixing .
\(b ) : by ( [ condition : c6_sig ] ) and ( [ condition : c6 ] ) we must have @xmath113 . to summarize there are @xmath118 different algebraic psgs with @xmath119 and @xmath106 .
( ii ) @xmath120 and : ( 1 ) : according to ( [ condition : sig ] ) and ( [ condition : t_sig ] ) , by choosing a proper gauge we can have @xmath109 and . from ( [ condition : c6 ] ) and ( [ condition : c6_sig ] ) we also have @xmath121 ^ 2=g_{{c_6}}(v)g_{{c_6}}(u)=\eta_{{{\boldsymbol{\sigma}}}{{c_6}}}\tau^0=\eta_{12}\eta_{{c_6}}\tau^0 $ ] .
( a ) : from ( [ condition : t_c6 ] ) , ( [ condition : c6 ] ) and ( [ condition : c6_sig ] ) , by choosing a gauge we have @xmath122 and .
( b ) : ( a ) : in this case we have @xmath123 and @xmath117 by choosing a proper gauge .
( b ) : in this case we can have @xmath124 by choosing a proper gauge .
( 2 ) : ( a ) : from ( [ condition : t_sig ] ) and ( [ condition : sig ] ) we have @xmath125 and @xmath114 by proper gauge fixing . also from ( [ condition : c6_sig ] ) we know @xmath121 ^ 2=-\eta_{{{\boldsymbol{\sigma}}}{{c_6}}}\tau^0 $ ] and @xmath126 .
( a ) : from ( [ condition : t_c6 ] ) and ( [ condition : c6_sig ] ) , ( [ condition : c6 ] ) it is clear that @xmath127 and @xmath128 through gauge fixing .
also we have .
( b ) : ( b1 ) : in this case , and we can always choose a proper gauge so that @xmath110 , @xmath129 .
( b2 ) : in this case , and we can always choose a proper gauge so that @xmath130 , @xmath131 .
( b ) : conditions ( [ condition : t_sig ] ) and ( [ condition : sig ] ) assert that @xmath132 by a proper choice of gauge .
( a ) : in this case from ( [ condition : c6_sig ] ) we know @xmath113 , hence
. then we can always choose a gauge so that @xmath133 and so from ( [ condition : c6 ] ) .
( b ) : ( b1 ) : in this case from ( [ condition : t_sig ] ) , ( [ condition : c6_sig ] ) we have @xmath134 by a proper gauge choice .
meanwhile conditions ( [ condition : c6 ] ) and ( [ condition : c6_sig ] ) become @xmath121 ^ 2=\eta_{12}\eta_{{{c_6}}}\tau^0 $ ] and @xmath135 ^ 2=-\tau^0 $ ] .
( b.1.1 ) : here we have @xmath112 .
( b.1.2 ) : here we have @xmath117 . ( b2 ) : in this case from ( [ condition : t_sig ] ) and ( [ condition : c6_sig ] ) we can always choose a proper gauge so that @xmath136 .
we also have @xmath137 and from ( [ condition : c6 ] ) . to summarize there are @xmath138 different algebraic psgs with @xmath139 and @xmath120 .
so in summary we have @xmath140 different @xmath3 algebraic psgs satisfying conditions ( [ algebra : psg : t ] ) - ( [ algebra : psg : c6,t2 ] ) . among them
there are at most 20 solutions that can be realized by a mean - field ansatz , since those psgs with @xmath106 would require all mean - field bonds to vanish due to ( [ psg : def : t ] ) . as a result
there are 20 different @xmath3 spin liquids on a @xmath1 lattice .
let us denote the mean - field bonds connecting sites @xmath141 and @xmath63 as @xmath142\equiv\langle x , y , s|0,0,u\rangle$ ] . using ( [ psg : def ] )
we can generate any other mean - field bonds through symmetry operations ( such as translations @xmath143 and mirror reflection @xmath144 ) from @xmath142 $ ] .
however these mean - field bonds can not be chosen arbitrarily since they possess symmetry relation ( [ psg : def ] ) : @xmath145 where @xmath75 is any element in the symmetry group .
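as a rough illustration of the symmetry relation above , the following python sketch ( not from the original paper ) builds the bond on a symmetry - transformed pair of sites from a reference bond and the @xmath40 gauge matrices attached to the image sites ; the reference bond , the gauge matrices and the ordering / dagger convention are assumptions chosen only for illustration .

```python
import numpy as np

# SU(2) gauge structure: tau^0 is the 2x2 identity, tau^1..tau^3 the Pauli matrices.
tau0 = np.eye(2, dtype=complex)
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

def transformed_bond(u_ij, G_gi, G_gj):
    """Bond on the symmetry-transformed pair of sites.

    Assumed convention: u_{g(i) g(j)} = G_g(g(i))^dagger @ u_{ij} @ G_g(g(j)),
    i.e. the ansatz is left invariant by the symmetry g followed by the gauge
    transformation G_g.  The exact ordering depends on the paper's definitions.
    """
    return G_gi.conj().T @ u_ij @ G_gj

# Hypothetical reference bond: real hopping t on tau^3 plus singlet pairing D on tau^1.
t, D = 1.0, 0.3
u_ref = t * tau3 + D * tau1

# Hypothetical gauge matrices attached to the two image sites.
G_gi, G_gj = 1j * tau2, tau0
print(transformed_bond(u_ref, G_gi, G_gj).round(3))
```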
notice that for time reversal @xmath68 we have @xmath146 . we summarize these symmetry conditions on the mean - field bonds here . now let us consider several of the simplest examples .
first , the on - site chemical potential terms @xmath153 satisfy the following consistency conditions : @xmath154 in fact , in all 20 @xmath3 spin liquids on a @xmath1 lattice we always have @xmath155 with a proper gauge choice .
all the 1st n.n .
mean - field bonds can be generated from @xmath156 $ ] .
for a generic @xmath3 spin liquid with psg elements @xmath157 and ( [ psg : t1,2 ] ) , ( [ psg : sig ] ) , ( [ psg : c6 ] ) , the bond @xmath89 $ ] satisfies the following consistency conditions : @xmath158 it follows immediately that for six @xmath3 spin liquids , _
i.e. _ @xmath159 in table [ tab : z2kagome ] all n.n .
mean - field bonds must vanish since @xmath160 as required by ( [ condition:1st_nn ] ) .
therefore it is unlikely that the @xmath3 spin liquid realized in the @xmath1 hubbard model would be one of these 6 states . in the following
we study the remaining 14 @xmath3 spin liquids on the @xmath1 lattice .
all 2nd n.n .
mean - field bonds can be generated from @xmath161 $ ] which satisfies the following symmetry conditions @xmath162 there are two kinds of 3rd n.n .
mean - field bonds : the first kind can all be generated by @xmath163 $ ] which satisfies @xmath164^\dagger = u_{\gamma}^\dagger $ ] , and the second kind can all be generated by @xmath165 $ ] which satisfies @xmath166^\dagger=\eta_{12}\tilde u_{\gamma}^\dagger $ ] .
following @xmath40 schwinger fermion formulation with @xmath168 , we focus on those @xmath3 spin liquids ( sls ) in the neighborhood of @xmath5 sl-@xmath167 $ ] state with the following mean - field ansatz : @xmath169 where @xmath170 is a real hopping parameter .
we define mean - field bonds @xmath171 in the following way @xmath172 for convenience of later calculation we implement the following gauge transformation @xmath173 and the original mean - field ansatz ( [ mf : u1sl0 ] ) transforms into @xmath174 the projected symmetry group ( psg ) corresponding to the above mean - field ansatz ( [ mf : u1sl1 ] ) is @xmath175 so that the mean - field ansatz satisfies ( [ psg : def ] ) . implementing the generic conditions mentioned earlier on several near - neighbor mean - field bonds with psg ( [ psg : all four z2 ] ) - ( [ psg : z2d ] ) , we obtain the following consistency conditions : ( 0 ) for on - site chemical potential terms @xmath181 , translation operations @xmath143 in psg guarantee that @xmath182 .
they satisfy @xmath183 ( i ) for 1st neighbor mean - field bond @xmath184^\dagger$ ] ( there is only one independent mean - field bond , meaning all other 1st neighbor bonds can be generated from [ 0,0,v ] through symmetry operations ) @xmath185 ( ii ) for 2nd neighbor mean - field bond @xmath186 $ ] we have @xmath187 ( iii ) for 3rd neighbor mean - field bonds @xmath188 $ ] and @xmath189 $ ] we have @xmath190 and @xmath191 for @xmath4\alpha$ ] state with @xmath192 the mean - field ansatz are ( up to 3rd neighbor mean - field bonds ) @xmath193 since we are considering a phase perturbed from the @xmath5 sl-@xmath167 $ ] state , we shall always assume @xmath194 ( 1st neighbor hopping terms ) in the following discussion .
a @xmath4\alpha$ ] spin liquid can be realized by 1st neighbor mean - field singlet pairing terms with @xmath195 . for @xmath4\beta$ ] state with @xmath196
the mean - field ansatz are ( up to 3rd neighbor mean - field bonds ) @xmath197 a @xmath4\beta$ ] spin liquid can be realized by 2nd neighbor pairing terms with @xmath198 . for @xmath4\gamma$ ] state with @xmath199
the mean - field ansatz are ( up to 3rd neighbor mean - field bonds ) @xmath200 a @xmath4\gamma$ ] spin liquid can be realized by 2nd neighbor pairing terms with @xmath201 . for @xmath4\delta$ ] state with @xmath202
the mean - field ansatz are ( up to 3rd neighbor mean - field bonds ) @xmath203 a @xmath4\delta$ ] spin liquid can be realized by 3rd neighbor pairing terms with @xmath204 .
the reciprocal unit vectors ( corresponding to unit vectors @xmath205 ) on a @xmath1 lattice are @xmath206 and @xmath207 , satisfying @xmath208 . in the mean - field ansatz ( [ mf : u1sl1 ] ) of @xmath5
sl-@xmath167 $ ] the unit cell is doubled whose translation unit vectors are @xmath209 and @xmath210 . accordingly
the 1st bz for such a mean - field ansatz is only half of the original 1st bz with new reciprocal unit vectors being @xmath211 and @xmath212 . denoting the momentum as @xmath213 with @xmath214
, we have @xmath215 the two dirac cones in the spectra of @xmath5 sl-@xmath167 $ ] state ( [ mf : u1sl1 ] ) are located at @xmath216 with @xmath217 with the proper chemical potential @xmath218 added to mean - field ansatz ( [ mf : u1sl1 ] ) . for convenience
we choose the following basis for dirac - like hamiltonian obtained from expansion around @xmath216 : @xmath219 where @xmath220 are valley index for two dirac cones at @xmath216 with pauli matrices @xmath221 and @xmath222 are band indices ( for the two bands forming the dirac cone ) with pauli matrices @xmath223 .
pseudospin indices @xmath224 are assigned to the two degenerate bands related by time reversal , with pauli matrices @xmath225 .
the corresponding creation operators for these modes are @xmath226 in the order of @xmath227 for the six sites per doubled new unit cell .
notice that in terms of @xmath35-spinons we have @xmath228 .
here @xmath229 , @xmath230 and @xmath231 are transformation matrices on 12-component eigenvectors for time reversal @xmath68 and translation @xmath69 operations . by definition of psg
the eigenvectors @xmath232 with momentum @xmath233 and energy @xmath234 have the following symmetry properties : @xmath235 @xmath236 and @xmath237 are the basis after and before the symmetry operations . in this basis the dirac hamiltonian obtained by expanding the @xmath5 sl-@xmath167 $ ] mean - field ansatz ( [ mf : u1sl1 ] ) around the two cones at @xmath216 is @xmath238 @xmath239
should be understood as small momenta measured from @xmath216 .
possible mass terms are @xmath240 and @xmath241 .
however not all of them are allowed by symmetry .
here we enumerate all symmetry operations and the associated operator transformations : spin rotation along the @xmath242-axis by angle @xmath243 : @xmath244 spin rotation along the @xmath245-axis by @xmath246 : @xmath247 time reversal @xmath68 : @xmath248 translation @xmath249 : @xmath250 translation @xmath251 : @xmath252 considering the above conditions , the only symmetry - allowed mass terms are @xmath253 with @xmath254 and @xmath255 . the transformation rules for mirror reflection @xmath17 and
@xmath16 rotation @xmath15 depend on the choice of @xmath256 in the psg .
in general we have @xmath257 using the basis ( [ dirac : basis ] ) the @xmath258 matrices @xmath259 can be expressed in terms of pauli matrices @xmath260 . for the four @xmath3 spin liquids we have @xmath261 it turns out that in the @xmath4\beta$ ] state , only the 1st mass term @xmath254 is invariant under @xmath17 and @xmath15 operations .
in the other 3 states neither of the mass terms @xmath262 is symmetry - allowed . as a result
we only have one gapped @xmath3 spin liquid , _
i.e. _ @xmath4\beta$ ] state in the neighborhood of @xmath5 dirac sl-@xmath167 $ ] state .
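to make the mass - term discussion concrete , the sketch below ( not from the paper ) enumerates all 64 products of pauli matrices in the valley , band and pseudospin factors and keeps those that anticommute with both kinetic dirac matrices , which is the algebraic condition for a momentum - independent term to gap the cones ; the kinetic matrices gamma_x and gamma_y used here are placeholders , since the paper 's explicit dirac hamiltonian is not reproduced above , and the symmetry conditions just listed would prune the surviving candidates further .

```python
import numpy as np
from itertools import product

# Identity plus the three Pauli matrices, used for the valley (mu), band (sigma)
# and pseudospin (nu) factors of the low-energy Dirac problem.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def kron3(a, b, c):
    """Build the 8x8 matrix mu^a sigma^b nu^c as a Kronecker product."""
    return np.kron(np.kron(paulis[a], paulis[b]), paulis[c])

def anticommutes(A, B, tol=1e-12):
    return np.allclose(A @ B + B @ A, 0, atol=tol)

# Placeholder kinetic Dirac matrices multiplying k_x and k_y (assumptions only).
gamma_x = kron3(0, 1, 0)   # mu^0 sigma^1 nu^0
gamma_y = kron3(0, 2, 0)   # mu^0 sigma^2 nu^0

# A momentum-independent term mu^a sigma^b nu^c gaps the cones only if it
# anticommutes with both kinetic matrices; time reversal, translations, the
# mirror reflection and the C6 rotation then eliminate most of the survivors.
gap_opening = [(a, b, c) for a, b, c in product(range(4), repeat=3)
               if anticommutes(kron3(a, b, c), gamma_x)
               and anticommutes(kron3(a, b, c), gamma_y)]
print(len(gap_opening), "gap-opening candidates, e.g.", gap_opening[:4])
```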
let us consider mean - field bonds up to 2nd neighbor for the ansatz @xmath4\beta$ ] .
perturbations to the two dirac cones of @xmath5 sl-@xmath167 $ ] with @xmath263 in general have the following form @xmath264\mu^0\sigma^3\nu^0 + \big[(\sqrt3 + 1)b_1-\lambda_2-(\sqrt3 - 1)a_1\big]\mu^0\sigma^1\nu^0 $ ] this means we need either 1st neighbor ( @xmath265 ) or 2nd neighbor ( @xmath266 ) pairing terms to open up a gap in the spectrum .
meanwhile these pairing terms break the original @xmath5 symmetry down to @xmath3 symmetry . | due to strong geometric frustration and quantum fluctuations , the @xmath0 quantum heisenberg antiferromagnet on the @xmath1 lattice has long been considered an ideal platform to realize a spin liquid ( sl ) , a novel phase exhibiting fractionalized excitations without any symmetry breaking .
a recent numerical study [ ] of heisenberg @xmath2 lattice model ( hklm ) shows that in contrast to earlier results , the ground state is a singlet - gapped sl with signatures of @xmath3 topological order .
motivated by this numerical discovery , we use projective symmetry group to classify all 20 possible schwinger - fermion mean - field states of @xmath3 sls on @xmath1 lattice . among them
we found only one gapped @xmath3 sl ( which we call @xmath4\beta$ ] state ) in the neighborhood of @xmath5-dirac sl state .
since its parent state , _
i.e. _ the @xmath5-dirac sl is found [ ] to be the lowest in energy among many other candidate @xmath5 sls including the uniform resonating - valence - bond states , we propose this @xmath4\beta$ ] state to be the numerically discovered sl ground state of hklm .
SECTION 1. SHORT TITLE.
This Act may be cited as the ``EPA Regulatory Relief Act of 2011''.
SEC. 2. LEGISLATIVE STAY.
(a) Establishment of Standards.--In place of the rules specified in
subsection (b), and notwithstanding the date by which such rules would
otherwise be required to be promulgated, the Administrator of the
Environmental Protection Agency (in this Act referred to as the
``Administrator'') shall--
(1) propose regulations for industrial, commercial, and
institutional boilers and process heaters, and commercial and
industrial solid waste incinerator units, subject to any of the
rules specified in subsection (b)--
(A) establishing maximum achievable control
technology standards, performance standards, and other
requirements under sections 112 and 129, as applicable,
of the Clean Air Act (42 U.S.C. 7412, 7429); and
(B) identifying non-hazardous secondary materials
that, when used as fuels or ingredients in combustion
units of such boilers, process heaters, or incinerator
units are solid waste under the Solid Waste Disposal
Act (42 U.S.C. 6901 et seq.; commonly referred to as
the ``Resource Conservation and Recovery Act'') for
purposes of determining the extent to which such
combustion units are required to meet the emissions
standards under section 112 of the Clean Air Act (42
U.S.C. 7412) or the emission standards under section
129 of such Act (42 U.S.C. 7429); and
(2) finalize the regulations on the date that is 15 months
after the date of the enactment of this Act.
(b) Stay of Earlier Rules.--The following rules are of no force or
effect, shall be treated as though such rules had never taken effect,
and shall be replaced as described in subsection (a):
(1) ``National Emission Standards for Hazardous Air
Pollutants for Major Sources: Industrial, Commercial, and
Institutional Boilers and Process Heaters'', published at 76
Fed. Reg. 15608 (March 21, 2011).
(2) ``National Emission Standards for Hazardous Air
Pollutants for Area Sources: Industrial, Commercial, and
Institutional Boilers'', published at 76 Fed. Reg. 15554 (March
21, 2011).
(3) ``Standards of Performance for New Stationary Sources
and Emission Guidelines for Existing Sources: Commercial and
Industrial Solid Waste Incineration Units'', published at 76
Fed. Reg. 15704 (March 21, 2011).
(4) ``Identification of Non-Hazardous Secondary Materials
That Are Solid Waste'', published at 76 Fed. Reg. 15456 (March
21, 2011).
(c) Inapplicability of Certain Provisions.--With respect to any
standard required by subsection (a) to be promulgated in regulations
under section 112 of the Clean Air Act (42 U.S.C. 7412), the provisions
of subsections (g)(2) and (j) of such section 112 shall not apply prior
to the effective date of the standard specified in such regulations.
SEC. 3. COMPLIANCE DATES.
(a) Establishment of Compliance Dates.--For each regulation
promulgated pursuant to section 2, the Administrator--
(1) shall establish a date for compliance with standards
and requirements under such regulation that is, notwithstanding
any other provision of law, not earlier than 5 years after the
effective date of the regulation; and
(2) in proposing a date for such compliance, shall take
into consideration--
(A) the costs of achieving emissions reductions;
(B) any non-air quality health and environmental
impact and energy requirements of the standards and
requirements;
(C) the feasibility of implementing the standards
and requirements, including the time needed to--
(i) obtain necessary permit approvals; and
(ii) procure, install, and test control
equipment;
(D) the availability of equipment, suppliers, and
labor, given the requirements of the regulation and
other proposed or finalized regulations of the
Environmental Protection Agency; and
(E) potential net employment impacts.
(b) New Sources.--The date on which the Administrator proposes a
regulation pursuant to section 2(a)(1) establishing an emission
standard under section 112 or 129 of the Clean Air Act (42 U.S.C. 7412,
7429) shall be treated as the date on which the Administrator first
proposes such a regulation for purposes of applying the definition of a
new source under section 112(a)(4) of such Act (42 U.S.C. 7412(a)(4))
or the definition of a new solid waste incineration unit under section
129(g)(2) of such Act (42 U.S.C. 7429(g)(2)).
(c) Rule of Construction.--Nothing in this Act shall be construed
to restrict or otherwise affect the provisions of paragraphs (3)(B) and
(4) of section 112(i) of the Clean Air Act (42 U.S.C. 7412(i)).
SEC. 4. ENERGY RECOVERY AND CONSERVATION.
Notwithstanding any other provision of law, and to ensure the
recovery and conservation of energy consistent with the Solid Waste
Disposal Act (42 U.S.C. 6901 et seq.; commonly referred to as the
``Resource Conservation and Recovery Act''), in promulgating rules
under section 2(a) addressing the subject matter of the rules specified
in paragraphs (3) and (4) of section 2(b), the Administrator--
(1) shall adopt the definitions of the terms ``commercial
and industrial solid waste incineration unit'', ``commercial
and industrial waste'', and ``contained gaseous material'' in
the rule entitled ``Standards of Performance for New Stationary
Sources and Emission Guidelines for Existing Sources:
Commercial and Industrial Solid Waste Incineration Units'',
published at 65 Fed. Reg. 75338 (December 1, 2000); and
(2) shall identify non-hazardous secondary material to be
solid waste only if--
(A) the material meets such definition of
commercial and industrial waste; or
(B) if the material is a gas, it meets such
definition of contained gaseous material.
SEC. 5. OTHER PROVISIONS.
(a) Establishment of Standards Achievable in Practice.--In
promulgating rules under section 2(a), the Administrator shall ensure
that emissions standards for existing and new sources established under
section 112 or 129 of the Clean Air Act (42 U.S.C. 7412, 7429), as
applicable, can be met under actual operating conditions consistently
and concurrently with emission standards for all other air pollutants
regulated by the rule for the source category, taking into account
variability in actual source performance, source design, fuels, inputs,
controls, ability to measure the pollutant emissions, and operating
conditions.
(b) Regulatory Alternatives.--For each regulation promulgated
pursuant to section 2(a), from among the range of regulatory
alternatives authorized under the Clean Air Act (42 U.S.C. 7401 et
seq.) including work practice standards under section 112(h) of such
Act (42 U.S.C. 7412(h)), the Administrator shall impose the least
burdensome, consistent with the purposes of such Act and Executive
Order No. 13563 published at 76 Fed. Reg. 3821 (January 21, 2011).
Passed the House of Representatives October 13, 2011.
Attest:
KAREN L. HAAS,
Clerk. | EPA Regulatory Relief Act of 2011 - Provides that the following rules shall have no force or effect and shall be treated as though they had never taken effect: (1) the National Emission Standards for Hazardous Air Pollutants for Major Sources: Industrial, Commercial, and Institutional Boilers and Process Heaters; (2) the National Emission Standards for Hazardous Air Pollutants for Area Sources: Industrial, Commercial, and Institutional Boilers; (3) the Standards of Performance for New Stationary Sources and Emission Guidelines for Existing Sources: Commercial and Industrial Solid Waste Incineration Units; and (4) Identification of Non-Hazardous Secondary Materials That are Solid Waste.
Requires the Administrator of the Environmental Protection Agency (EPA), in place of such rules, to promulgate and finalize on the date that is 15 months after the date of the enactment of this Act regulations for industrial, commercial, and institutional boilers and process heaters and commercial and industrial solid waste incinerator units subject to such rules, that: (1) establish maximum achievable control technology standards, performance standards, and other requirements for hazardous air pollutants or solid waste combustion under the Clean Air Act; and (2) identify non-hazardous secondary materials that, when used as fuels or ingredients in combustion units of such boilers, heaters, or incinerator units, are solid waste under the Solid Waste Disposal Act for purposes of determining the extent to which such combustion units are required to meet emission standards for such pollutants under such Act. Requires the Administrator to establish a date for compliance with standards and requirements under such regulations, which shall be no earlier than five years after such a regulation's effective date, after considering compliance costs, non-air quality health and environmental impacts and energy requirements, the feasibility of implementation, the availability of equipment, suppliers, and labor, and potential net employment impacts.
Treats the date on which the Administrator proposes such a regulation establishing an emission standard as the proposal date for purposes of applying the definition of a "new source" to hazardous air pollutants requirements or of a "new solid waste incineration unit" to solid waste combustion requirements under the Clean Air Act.
Requires the Administrator, in promulgating such regulations, to: (1) adopt the definitions of "commercial and industrial solid waste incineration unit," "commercial and industrial waste," and "contained gaseous material" in the rule entitled Standards for Performance of New Stationary Sources and Emission Guidelines for Existing Sources: Commercial and Industrial Solid Waste Incineration Units; (2) identify non-hazardous secondary material to be solid waste only if the material meets such definitions; (3) ensure that emissions standards for existing and new sources can be met under actual operating conditions consistently and concurrently with emission standards for all other air pollutants regulated by the rule for the source category, taking into account variability in actual source performance, source design, fuels, inputs, controls, ability to measure the pollutant emissions, and operating conditions; and (4) impose the least burdensome regulatory alternative. |
the dark matter particle explorer ( dampe ) is a space mission supported by the strategic space projects of the chinese academy of sciences with the contribution of swiss and italian institutions @xcite .
the rocket was successfully launched on december 17 , 2015 , and dampe presently flies regularly on a sun - synchronous orbit at an altitude of @xmath5 .
the satellite is equipped with four different detectors : a plastic scintillator array , a silicon - tungsten tracker , a bgo calorimeter and a neutron detector .
they are devoted to measuring the fluxes of charged crs ( electrons , protons and heavier nuclei ) , to studying the high energy gamma ray signal from astrophysical sources and to searching for indirect dark - matter signatures .
dampe ( fig .
[ fig : side ] ) consists of a plastic scintillator strip detector ( psd ) that is used as anti - coincidence and charge detector , a silicon - tungsten tracker - converter ( stk ) to reconstruct the direction of incident particles , a bgo imaging calorimeter ( bgo ) of about 32 radiation lengths that measures the energy with high resolution and distinguishes between electrons and protons , and a neutron detector ( nud ) that can further increase the hadronic shower rejection power . the high energy sky is mainly dominated by nuclei with different electrical charges @xcite .
this charged flux is studied by dampe but it is also a background for gamma astronomy .
therefore the psd is designed to work as a veto and to measure the charge ( @xmath6 ) of incident high - energy particles up to @xmath7 . following these requirements
the psd must have a high detection efficiency for charged particles , a large dynamic range and a relatively good energy resolution .
the silicon - tungsten tracker - converter is devoted to the precise reconstruction of the particle tracks .
it consists of twelve position - sensitive silicon detector planes ( six planes for the @xmath8-coordinate , six planes for the @xmath9-coordinate ) .
three layers of tungsten are inserted in between the silicon planes ( 2 , 3 , 4 and 5 ) to convert gamma rays into electron - positron pairs .
the specifications of the stk are given in table [ tab : stk ] and a comparison with other experiments is shown in fig .
[ fig : comparison ] in terms of the active area and the number of channels .
( table [ tab : stk ] : stk specifications ) the neutron detector is a further device to distinguish the types of high - energy showers .
it consists of four boron - loaded plastics , each read out by a pmt .
typically hadron - induced showers produce roughly one order of magnitude more neutrons than electron - induced showers .
once these neutrons are created , they thermalize quickly in the bgo calorimeter and the neutron activity can be detected by the nud within a few @xmath10 ( @xmath11 after the shower in bgo ) .
neutrons entering the boron - loaded scintillator undergo the capture process @xmath12 whose probability is inversely proportional to the neutron velocity , while the capture time is inversely proportional to the @xmath13 loading .
roughly 570 optical photons are produced in each capture @xcite .
an extensive monte carlo simulation activity was carried out during the r&d phase in order to find a proper compromise between research goals and limitations on geometry , power consumption and weight .
dampe performances were verified by a series of beam tests at cern .
the ps and sps accelerators provide electron and proton beams .
the beam test data were used to study the performance of the bgo calorimeter , and in particular the energy resolution ( fig .
[ fig : resol ] ) , the linearity and the @xmath14 separation .
also a beam of argon fragments was used for performing tests with heavy ions .
details of the beam - test preliminary results as well as the features of the qualified module can be found in @xcite .
after launch , the spacecraft entered the sky - survey mode immediately and the dedicated - calibration of the detector was performed in two weeks .
the calibration included studies of the pedestals , the response to mips , the alignment , the timing , etc . the satellite is on a sun - synchronous orbit with a period of 95 minutes .
the pedestal calibration is performed twice per orbit and the global trigger rate is kept at @xmath15 by using different pre - scales for unbiased and low - energy triggers at different latitudes . in the absence of on - board analysis processing , the data are simply packaged with a timestamp and transmitted to the ground ( about 4 million events per day , corresponding to @xmath16 ) . after the event reconstruction the data size is @xmath17 per day .
the dampe detectors started to take physics data very soon after the launch .
the performance parameters ( temperature , noise , spatial resolution , efficiency ) are very stable and very close to expectations .
the absolute calorimeter energy measurement has been checked by using the geomagnetic cut - off and is found to be well calibrated . the absolute pointing has also been successfully verified .
the photon - data collected in 165 days were enough to draw a preliminary high - energy sky - map where the main gamma - ray sources are visible in the proper positions .
the energy released in the psd allows one to measure the charge and to distinguish the different nuclei in the cr flux .
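a minimal sketch ( not the dampe reconstruction code ) of how a psd energy deposit can be turned into a rough charge estimate , using the approximately @xmath6-squared scaling of the ionization energy loss ; the mip deposit value and the example numbers are hypothetical .

```python
import numpy as np

def estimate_charge(psd_dedx, mip_dedx=2.0):
    """Crude nuclear charge estimate from the PSD energy deposit.

    For a relativistic nucleus the ionization loss grows roughly as the square
    of the charge, so Z ~ sqrt(dE / dE_MIP), where dE_MIP (hypothetical value,
    in MeV) is the deposit of a singly charged minimum-ionizing particle.
    Real reconstructions also correct for quenching, path length and saturation.
    """
    return np.sqrt(np.asarray(psd_dedx, dtype=float) / mip_dedx)

# Hypothetical deposits (MeV) for a proton-, helium- and carbon-like candidate.
print(estimate_charge([2.1, 8.3, 71.0]).round(1))   # approximately [1, 2, 6]
```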
fig . [ fig : z2 ] shows the result of this measurement for the full range up to iron ( @xmath18 ) .
( figure : measurement up to iron with only 10 days of data . ) the measurement of the electron and positron flux is one of the main goals of the dampe mission because some dark matter signature could be found in the electron and positron spectra .
the shower development in the bgo provides the main tool to distinguish leptons from hadrons .
a shape parameter is then defined as : @xmath19 where @xmath20 is the index of the bgo layer ( @xmath21 ) , @xmath22 is the shower width in the i - th layer , and @xmath23 and @xmath24 are the energies deposited in the single layer and in all the layers , respectively . using the shape parameters of the last bgo layers ( 13 , 14 ) it is possible to separate leptons from hadrons with a rejection power higher than @xmath25 ( preliminary result in fig .
[ fig : fig7 ] ) . the rejection capability will be further enhanced by means of the nud .
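as an illustration of how a layer - by - layer shape variable of this kind can be used for lepton / hadron discrimination , the sketch below computes a stand - in quantity that combines the lateral shower width with the energy fraction in each layer and cuts on its value in the last bgo layers . the functional form , the threshold and the toy event are assumptions for illustration only and differ from the exact definition used by dampe .

```python
import numpy as np

N_LAYERS = 14  # BGO layers

def shape_parameter(layer_widths_mm, layer_energies):
    """Stand-in layer-by-layer shape variable.

    Assumed form: F_i = (E_i / E_total) * w_i**2, combining the lateral width
    w_i of the shower in layer i with the fraction of the total energy released
    in that layer.  The paper's exact definition differs in detail.
    """
    e = np.asarray(layer_energies, dtype=float)
    w = np.asarray(layer_widths_mm, dtype=float)
    return (e / e.sum()) * w**2

def electron_like(layer_widths_mm, layer_energies, cut=50.0):
    """Electromagnetic showers are narrow and die out early, so the shape value
    in the last two layers stays small; hadronic showers are broader and more
    penetrating.  The threshold 'cut' is an arbitrary illustrative number."""
    F = shape_parameter(layer_widths_mm, layer_energies)
    return bool(F[-2:].max() < cut)

# Toy electron-like event: energy peaked in the early layers, modest widths.
energies = np.exp(-0.5 * ((np.arange(N_LAYERS) - 4) / 2.0) ** 2)
widths = np.linspace(8, 20, N_LAYERS)   # mm, hypothetical
print(electron_like(widths, energies))
```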
the dampe detector is expected to work for more than 3 years .
this data - taking time is sufficient to investigate deeply many open questions in cr studies . in fig .
[ fig : fig8 ] the possible dampe measurement of the all electron spectrum in 3 years is shown .
the energy range is large enough to observe a cut - off and a new increase of the flux due to nearby astrophysical sources , if present .
many experiments @xcite observed a hardening of the cr elemental spectra at @xmath4-energies .
this is another interesting topic related to cr origin and propagation and dampe will be able to perform significant measurements about it and also about the boron / carbon ratio ( fig .
[ fig : fig11 ] ) . finally the large exposure will allow extending energy spectra measurements for protons and nuclei up to tens of @xmath4 .
the dampe satellite was successfully launched into orbit in december 2015 and the preliminary data analyses confirm that the detectors work very well .
the dampe program foresees important measurements on the cr flux and chemical composition , electron and diffuse gamma - ray spectra and anisotropies , gamma astronomy and possible dark matter signatures .
this challenging program is based on the outstanding dampe features : the large acceptance ( @xmath26 ) , the `` deep '' calorimeter ( @xmath27 ) , the precise tracking and the redundant measurement techniques . | the dampe ( dark matter particle explorer ) satellite was launched on december 17 , 2015 and started its data - taking operation a few days later . dampe has a large geometric factor ( @xmath0 ) and provides good tracking , calorimetric and charge measurements for electrons , gamma rays and nuclei .
this will allow precise measurement of cosmic ray spectra from tens of @xmath1 up to about @xmath2 .
in particular , the energy region between @xmath3 will be explored with higher precision compared to previous experiments .
the various subdetectors allow an efficient identification of the electron signal over the large ( mainly proton - induced ) background . as a result
, the all - electron spectrum will be measured with excellent resolution from few @xmath1 up to few @xmath4 , thus giving the opportunity to identify possible contribution of nearby sources . a report on the mission goals and status
is presented , together with the on - orbit detector performance and the first data coming from space . |
The study found no link between consuming butter and an increased risk of heart disease or stroke, instead finding that butter might actually be slightly protective against type 2 diabetes. And although consuming butter was linked with an increased risk of early death, the increase in risk was extremely small, the researchers said.
"Overall, our results suggest that butter should neither be demonized nor considered 'back' as a route to good health," study co-author Dr. Dariush Mozaffarian, dean of the Friedman School of Nutrition Science and Policy at Tufts University in Massachusetts, said in a statement. The findings "do not support a need for major emphasis in dietary guidelines on either increasing or decreasing butter consumption," the researchers wrote in their study. [7 Foods Your Heart Will Hate]
Butter is relatively high in saturated fat, which is generally considered a "bad" fat. But, increasingly, researchers are looking at the overall effects of eating certain foods, rather than focusing on specific nutrients by themselves, the researchers said. That's because the combination of nutrients in a food, like butter, may have a different effect on people's health than any single nutrient alone.
In the new study, the researchers analyzed information from nine earlier studies that together included more than 636,000 people in 15 countries who were followed for 10 to 23 years, on average. During that time, 28,271 people died; 9,783 were diagnosed with heart disease; and 23,954 were diagnosed with type 2 diabetes. The average amount of butter that the people in the studies consumed ranged from one-third of a tablespoon daily to 3 tablespoons daily.
A daily serving of butter (14 grams or about 1 tablespoon) was linked with a 1 percent higher risk of death during the study period. On the other hand, a daily serving of butter was linked with a 4 percent reduced risk of type 2 diabetes.
There was no relationship found between eating butter and being diagnosed with heart disease, the researchers said.
The findings suggested butter may be a "middle-of-the-road" food, said study co-author Laura Pimpin, also of Tufts University. For example, butter may be healthier for you than foods high in sugar or starch, which have been linked with an increased risk of heart disease and diabetes, Pimpin said.
However, butter may be worse for you than other spreads and cooking oils that are richer in "healthy fats," she said. These alternatives include soybean, canola, flaxseed and extra-virgin olive oil, along with some types of margarine. Such spreads and oils contain more unsaturated fats, which are generally considered healthier than saturated fats.
More research is needed to understand why consuming butter is linked with a slightly lower risk of type 2 diabetes, Mozaffarian said. Some previous studies have also found a link between consuming dairy fat from yogurt and cheese and a lower risk of type 2 diabetes.
The new study looked only at the association between people's butter consumption and their risk of heart disease, early death and types 2 diabetes, so it cannot prove for certain that butter does or does not cause these conditions. There may be factors that the study did not take into account, such as people's physical activity levels or their genetic risk factors, which could affect the results, the researchers said.
The study is published today (June 29) in the journal PLOS ONE.
Original article on Live Science. ||||| This systematic review and meta-analysis suggests relatively small or neutral overall associations of butter with mortality, CVD, and diabetes. These findings do not support a need for major emphasis in dietary guidelines on either increasing or decreasing butter consumption, in comparison to other better established dietary priorities; while also highlighting the need for additional investigation of health and metabolic effects of butter and dairy fat.
We searched 9 databases from inception to May 2015 without restriction on setting, or language, using keywords related to butter consumption and cardiometabolic outcomes. Prospective cohorts or randomized clinical trials providing estimates of effects of butter intake on mortality, cardiovascular disease including coronary heart disease and stroke, or diabetes in adult populations were included. One investigator screened titles and abstracts; and two reviewed full-text articles independently in duplicate, and extracted study and participant characteristics, exposure and outcome definitions and assessment methods, analysis methods, and adjusted effects and associated uncertainty, all independently in duplicate. Study quality was evaluated by a modified Newcastle-Ottawa score. Random and fixed effects meta-analysis pooled findings, with heterogeneity assessed using the I 2 statistic and publication bias by Egger’s test and visual inspection of funnel plots. We identified 9 publications including 15 country-specific cohorts, together reporting on 636,151 unique participants with 6.5 million person-years of follow-up and including 28,271 total deaths, 9,783 cases of incident cardiovascular disease, and 23,954 cases of incident diabetes. No RCTs were identified. Butter consumption was weakly associated with all-cause mortality (N = 9 country-specific cohorts; per 14g(1 tablespoon)/day: RR = 1.01, 95%CI = 1.00, 1.03, P = 0.045); was not significantly associated with any cardiovascular disease (N = 4; RR = 1.00, 95%CI = 0.98, 1.02; P = 0.704), coronary heart disease (N = 3; RR = 0.99, 95%CI = 0.96, 1.03; P = 0.537), or stroke (N = 3; RR = 1.01, 95%CI = 0.98, 1.03; P = 0.737), and was inversely associated with incidence of diabetes (N = 11; RR = 0.96, 95%CI = 0.93, 0.99; P = 0.021). We did not identify evidence for heterogeneity nor publication bias.
Competing interests: The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and Dr. Mozaffarian reports ad hoc honoraria or consulting from Boston Heart Diagnostics, Haas Avocado Board, Astra Zeneca, GOED, and Life Sciences Research Organization; chapter royalties from UpToDate; and scientific advisory board Elysium Health. Harvard University has been assigned patent US8889739 B2, listing Dr. Mozaffarian as one of three co-inventors, for “Use of transpalmitoleic acid in identifying and treating metabolic disease”. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
A systematic review of the evidence on the relationship between butter consumption and long-term health is of considerable importance, both for understanding food-based health and for informing dietary recommendations for clinicians and policy makers. The US Department of Agriculture has documented a 40-year record high in US butter consumption in 2014 [ 13 ], making a synthesis of the evidence on butter and major chronic diseases highly relevant and timely.
For example, growing evidence supports potential metabolic benefits of certain dairy products, such as yogurt and possibly cheese, on risk of type 2 diabetes [ 5 , 6 ], which may even relate to benefits of dairy fat. [ 7 – 9 ] However, the relationship of butter, which is highest in dairy fat, with diabetes remains unclear. The long-term effects of butter consumption on other major endpoints, such as all-cause mortality and CVD, are also not well-established. Previous reviews have evaluated only some of these outcomes, included butter as part of a wider investigation into dairy foods or types of fats [ 10 – 12 ], and utilized methods that provided imprecise estimates of effect, precluded dose-response evaluation, or may have introduced unintended bias (e.g., due to inclusion of crude, unadjusted effect estimates).
Growing uncertainty and changing views on the role of butter in cardiovascular disease (CVD) have been prominently discussed, including in the New York Times and Time Magazine. [ 1 , 2 ] This has partly arisen from increasing controversy on the utility of focusing on isolated macronutrients, such as saturated fat, for determining risk of chronic diseases. Mounting evidence indicates a need to shift away from isolated macronutrients toward food-based paradigms for investigating dietary priorities for chronic diseases. [ 3 , 4 ] The 2015 Dietary Guidelines Advisory Committee (DGAC) recommended replacing animal fats, including butter, with non-hydrogenated vegetable oils high in unsaturated fats and relatively low in saturated fatty acids. [ 4 ] Yet, the DGAC also concluded that further research was needed on the effects of saturated fat from different food sources, including animal products, on cardiovascular risk, because different food sources contain varying specific fatty acid profiles as well as other constituents that may result in distinct lipid and metabolic effects. [ 4 ]
Heterogeneity between studies was quantified using the I 2 statistic, with statistical significance (P<0.05) evaluated by the Q statistic. [ 21 ] We considered I 2 values between 25% and 50%, between 50% and 75%, and above 75% as upper thresholds for low, moderate, and high heterogeneity, respectively. We planned pre-specified subgroup analyses to further explore potential heterogeneity in results by gender, population mean age and body mass index, duration of follow-up, and study quality score. Restricted cubic spline models [ 22 ] with knots at the 25th, 50th, and 75th percentiles were used to examine potential nonlinear relations.
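As a minimal illustration of the heterogeneity assessment described above, the sketch below computes Cochran's Q and the I 2 statistic from a handful of hypothetical study-level log relative risks and standard errors; the input numbers are invented and do not come from the included cohorts.

```python
import numpy as np

def cochran_q_and_i2(log_rr, se):
    """Cochran's Q and the I 2 statistic for study-level estimates.

    I 2 = max(0, (Q - df) / Q): the share of total variability attributable to
    between-study heterogeneity rather than chance.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)           # fixed-effect pooled estimate
    Q = np.sum(w * (log_rr - pooled) ** 2)
    df = len(log_rr) - 1
    i2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return Q, i2

# Hypothetical per-serving (14 g/day) log relative risks and standard errors.
Q, i2 = cochran_q_and_i2(np.log([1.02, 0.99, 1.01, 1.00]), [0.010, 0.020, 0.015, 0.012])
print(f"Q = {Q:.2f}, I2 = {100 * i2:.1f}%")
```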
Reported hazard ratios were assumed to approximate relative risks (RRs). We used the two-stage generalized least-squares trend estimation method described by Greenland and Longnecker [ 17 , 18 ] to perform dose-response analysis and compute study-specific linear estimates and 95% CIs across categories of butter intake. Butter intakes across studies were standardized at the study level to 14 g/d, corresponding to one United States Department of Agriculture-defined serving. [ 19 ] Study-specific dose-response estimates were then pooled to derive an overall estimate using inverse-variance weighted DerSimonian and Laird meta-analysis with random effects. [ 20 ] Because random effects can result in larger weights for small outlier studies, we also conducted fixed effects meta-analysis for comparison. For reports presenting results only by study subgroups (e.g., men, women), we first pooled the study-specific subgroups using fixed-effect meta-analysis to obtain a single estimate per study.
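The pooling step can be sketched as follows: given per-study log relative risks already standardized to one 14 g/day serving, the DerSimonian-Laird estimator derives a between-study variance from Cochran's Q and re-weights the studies before combining them. The inputs below are hypothetical, and the preceding generalized least-squares trend estimation step is not reproduced.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects pooled relative risk via the DerSimonian-Laird method.

    Inputs: per-study log relative risks (already standardized to one 14 g/day
    serving) and their standard errors.  Returns the pooled RR and a 95% CI.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2
    pooled_fe = np.sum(w * log_rr) / np.sum(w)        # fixed-effect estimate
    Q = np.sum(w * (log_rr - pooled_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(log_rr) - 1)) / c)      # between-study variance
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(pooled), np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])

# Hypothetical per-serving estimates from three cohorts.
rr, ci = dersimonian_laird(np.log([1.02, 1.00, 1.01]), [0.010, 0.015, 0.012])
print(f"pooled RR = {rr:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```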
We adapted the Newcastle-Ottawa quality scale (NOS) [ 16 ] to assess study quality, based on five criteria evaluating the reporting and appropriateness/representativeness of participant inclusion and exclusion criteria (combining the first two items of the NOS Selection scale), participant attrition (NOS adequacy of follow-up item), control for confounding (NOS Comparability scale), assessment of exposure (NOS ascertainment of exposure item), and assessment of outcome (combining the first two items of the NOS Outcome scale). One point was allocated per criterion met, the sum of which provided an overall quality score. A score between 0 and 3 was considered low-quality; and 4 to 5, high-quality. Quality scores were assessed independently and in duplicate by two investigators, with any differences resolved by consensus.
When more than one multivariable model was reported, we used the risk estimate including the greatest number of potential confounders but not potential mediators (e.g., blood cholesterol). If the main multivariable model included covariates which could either be confounders or intermediates, this was utilized rather than a model with crude or minimal covariate adjustment. When energy intake was included as a covariate, body mass index was not considered to be an intermediate variable, so models adjusting for body mass index were extracted (this only arose in one study, by Buijsse et al. [ 14 ]). The effect results from the Guasch-Ferre et al. [ 15 ] study were estimated using the models of risk of diabetes associated with substitution of olive oil for equivalent amounts of butter, and our results were confirmed and validated by contact with the authors.
Data from the included studies were independently extracted in duplicate by two investigators using a standardized and piloted electronic form (Microsoft Excel). Any differences in extraction were resolved by consensus. Information was extracted on the publication (first author name, contact information, publication year), study details (name, location, design), population (age, gender, race, socioeconomic status, body mass index), sample size, dates of recruitment, duration of follow-up, dietary assessment (dates, method, definition, categories), outcome(s) (assessment method, definition), covariates and analysis methods, and multivariate-adjusted effect estimates and associated uncertainty. To evaluate dose-response, we extracted continuous effect estimates when available; and for categorical analyses, collected additional information on median exposure, number of participants or person-years, and number of events in each category. Missing information in any category was obtained by direct author contact or, if necessary, estimated using a standard approach (see S5 File ).
Studies were also excluded if evaluating only children or populations with major end-stage diseases such as cancer; if duration of intervention or follow-up was less than 3 months; if consumption of butter was not separately distinguishable from other dairy product or fats; if evaluating only soft endpoints (e.g. angina pectoris, coronary insufficiency); or, for observational studies, if providing only unadjusted (crude) effect estimates. When duplicate publications were identified, the report including the largest number of cases for each endpoint of interest was selected. If references were only available in abstract form (e.g. from meeting proceedings or conference presentations), data were extracted if sufficient detail was available; if not, a relevant publication was searched for in PubMed.
In addition, among studies excluded by title and abstract screening, several were identified evaluating overall dietary patterns (e.g., Mediterranean, Western, etc.). To ensure that we were not missing effect estimates for butter contained within these reports (e.g., in supplementary tables on the individual components of these dietary patterns), we also reviewed the full texts of the first 15 identified studies of dietary patterns. None of these studies reported individual effect estimates for butter, so further diet pattern studies lacking any information on butter in the title or abstract were excluded.
We performed a systematic search for all prospective cohort studies and randomized clinical trials examining butter consumption and all-cause mortality, CVD including CHD and stroke, or type 2 diabetes. Electronic searches were performed using PubMed ( www.ncbi.nlm.nih.gov/pubmed ), EMBASE ( www.scopus.com ), The Cochrane Library ( www.cochranelibrary.com ), Web of Knowledge ( www.webofscience.com ), CAB Abstracts and Global Health ( www.ovid.com ), CINAHL ( www.ebscohost.com ) and grey literature searches of SIGLE ( www.opengrey.eu ) and ZETOC ( www.zetoc.mimas.ac.uk/ ) from the earliest indexing year of each database through May 2015, without language or other restrictions. Search terms included butter, margarine, dairy, dairy products, yogurt, cheese, ghee, animal fat, solid fat, cardiovascular diseases, heart disease, stroke, myocardial infarction, heart attack, cerebrovascular disease, cerebrovascular accident, sudden death, diabetes, mortality and deaths; see S3 File for a full listing. For all final included articles, we further performed hand-searches of citation lists and a review of the first 20 related references on PubMed for additional eligible reports.
Visual inspection of funnel plots and Egger’s test suggested little evidence for asymmetry or presence of small-study effects for any CVD (p = 0.866), stroke (p = 0.913), CHD (p = 0.769), or diabetes (p = 0.369), although the relatively small number of studies limited statistical power of Egger’s test ( S1 Fig ). Egger’s test could not estimate small-study effects for all-cause mortality (N = 2 studies). No trimming was identified for all-cause mortality or CVD using Duval and Tweedie’s “Trim and Fill” method ( S2 Fig ). For diabetes, this approach did estimate one missing study, the addition of which resulted in a theoretical corrected pooled estimate of RR = 0.95 (95%CI = 0.93, 0.98; P = 0.001).
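Egger's test, used above to probe for small-study effects, can be sketched as a regression of the standardized effect (estimate divided by its standard error) on precision (the reciprocal of the standard error), testing whether the intercept departs from zero. The implementation and data below are illustrative only, not the analysis code used in the study.

```python
import numpy as np
from scipy import stats

def eggers_test(log_rr, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (estimate / SE) on precision (1 / SE); an
    intercept far from zero suggests small-study effects / publication bias.
    Returns the intercept and a two-sided p-value for it.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    z, precision = log_rr / se, 1.0 / se
    n = len(log_rr)
    slope, intercept, _, _, _ = stats.linregress(precision, z)
    resid = z - (intercept + slope * precision)
    s2 = np.sum(resid**2) / (n - 2)
    se_int = np.sqrt(s2 * (1.0 / n + precision.mean()**2 / np.sum((precision - precision.mean())**2)))
    p = 2 * stats.t.sf(abs(intercept / se_int), n - 2)
    return intercept, p

# Hypothetical per-study estimates (e.g., butter and incident diabetes).
intercept, p = eggers_test(np.log([0.95, 0.97, 0.96, 0.93, 0.99, 0.98]),
                           [0.020, 0.030, 0.015, 0.050, 0.025, 0.040])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```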
While total numbers of subjects and cases were large, the relatively low number of separate studies precluded meaningful subgroup analyses by study or participant characteristics, which were therefore not performed. Similarly, potential nonlinearity in dose-response could not be meaningfully evaluated for total mortality. Evidence for nonlinearity was not identified for butter intake and CVD or diabetes (by cubic spline regression, P for nonlinearity = 0.364 and 0.160, respectively).
Diet was generally assessed by detailed, semi-quantitative food frequency questionnaires; one cohort utilized a structured diet history interview ( Table 1 ). The median butter consumption across studies ranged from 4.5g/d (0.3 servings/d) in the European Prospective Investigation into Cancer and Nutrition (EPIC) studies to 46 g/d (3.2 servings/d) in Finland. Mean participant age ranged from 44 to 71 years. All studies were published between 2005 and 2015, and included 1 in the Netherlands, 2 in the US, 2 in Finland, 2 in Sweden, and 2 from the multi-country, multi-cohort EPIC study which included 8 country-specific cohorts from Denmark, France, Italy, Germany, the Netherlands, Spain, Sweden and the UK. Five of the studies presented results from models with optimal covariate adjustment including demographics, clinical risk factors, and other dietary habits; the remainder provided results with moderate covariate adjustment.
Discussion
In this systematic review and meta-analysis of prospective studies, we found a small positive association between butter consumption and all-cause mortality, no significant association with incident CVD or CVD subtypes, and a modest inverse association with type 2 diabetes. No RCTs of butter intake were identified in our literature search. Because several of the identified reports included multiple country-specific cohorts, the total numbers of nation-specific cohorts, participants, and clinical events appear reasonably robust. Indeed, together these studies included more than 28,000 total deaths, nearly 10,000 cases of incident CVD, and nearly 24,000 cases of incident diabetes. We found limited formal evidence for between-study heterogeneity or publication bias, and all reports had high quality scores. Together, these findings suggest relatively small or neutral associations of butter consumption with long-term health.
Current dietary recommendations on butter and dairy fat are largely based upon predicted effects of specific individual nutrients (e.g., total saturated fat, calcium), rather than actual observed health effects. Our findings add to a growing body of evidence on long-term health effects of specific foods and types of fats. [12, 32, 33] Conventional guidelines on dietary fats have not accounted for their diverse food sources nor the specific individual fatty acid profiles in such foods. [4] Different foods represent complex matrices of nutrients, processing, and food structure, which together influence net health effects. [3, 34] Thus, studying intakes of foods, as in the present investigation, is crucial to elucidate health impact. Our novel results, together with other prior research described below, indicate a need for further funding, evaluation, and reporting on health effects of butter and dairy fat on mechanistic pathways and long-term health outcomes.
While prior meta-analyses have evaluated total dairy or some dairy subtypes and incident diabetes, to our knowledge none have evaluated butter and type 2 diabetes. [6, 12] A meta-analysis of butter and all-cause mortality identified no significant association (highest category vs. lowest: RR = 0.96; 95%CI = 0.95, 1.08) [10], but did not include the more recent large report from Sluik et al. [30](258,911 participants, 12,135 deaths) and also included two smaller studies not meeting our inclusion criteria: one having only crude (unadjusted) estimates, [35] and another evaluating polyunsaturated fats or margarine in comparison to butter, rather than butter separately. [36] A meta-analysis evaluating dairy consumption and CVD found no association between butter consumption and stroke (2 cohorts: RR = 0.94; 95%CI = 0.84, 1.06) or CHD (3 cohorts: RR = 1.02, 95%CI = 0.88, 1.20), but only evaluated high vs. low categories of intake rather than conducting dose-response analyses utilizing all available data. [5] Another meta-analysis included dose-response findings on butter consumption and stroke, but not CHD, CVD, diabetes, or all-cause mortality, and arrived at similar findings for stroke as seen in the present study. [11] In comparison to these prior reports, we evaluated up-to-date reports and full dose-response analyses for all-cause mortality, CVD including CHD and stroke, and type 2 diabetes; providing the most comprehensive investigation to-date of butter consumption and risk of long-term major health endpoints.
Our investigation also adds to and expands upon prior studies evaluating other dairy foods and dairy fat biomarkers in relation to cardiometabolic outcomes. In a multi-ethnic US population, serum levels of pentadecanoic acid (15:0), the odd-chain saturated fat most strongly associated with self-reported butter intakes (r = 0.13), were associated with lower CVD and CHD risk. [37] This is consistent with a meta-analysis of odd-chain saturated fat biomarkers demonstrating inverse associations with CHD [33]. A prior meta-analysis of dairy consumption and CVD suggested protective associations with total CVD (for highest vs lowest category of intake: 12% lower risk) and stroke (13% lower risk), with conflicting results for major subtypes of dairy. [5] Dairy fat has also been linked to lower risk of diabetes, based on studies of circulating fatty acid biomarkers [8, 9] and studies of self-reported consumption of dairy products, which have seen protective associations for yogurt and perhaps cheese, and null associations for both low-fat and whole-fat milk. [12, 27]
Given adverse effects of certain dairy fats (e.g. 16:0) on cardiometabolic risk factors such as LDL-cholesterol and fasting glucose [38, 39], our findings suggest potential presence of other mechanistic benefits of butter that might at least partly offset these harms. For instance, saturated fats also increase HDL-C, lower VLDL-C and chylomicron remnants, and lower lipoprotein(a) [40, 41]; while potential cardiometabolic benefits have been identified for calcium, fat-soluble vitamin D, medium-chain saturated fats, branched-chain fats, trace ruminant trans fats, or other processes related to fermentation (e.g. cheese) or active bacterial cultures (e.g. in yogurt). For example, dietary calcium may decrease fatty acid synthase and increase lipolytic activity in adipocytes, [42] reduce blood pressure by modulation of smooth muscle reactivity, [43, 44] and reduce weight gain. [45] Vitamin D may reduce dyslipidemia and improve blood pressure through maintenance of calcium homeostasis, stimulation of insulin production and release, and regulation of the renin-angiotensin-aldosterone system. Higher dairy fat consumption has been linked to lower liver fat and greater hepatic and systemic insulin sensitivity [46] which could relate to inhibition of hepatic de novo lipogenesis by specific dairy fatty acids. [8] Branched-chain fatty acids in dairy fat may promote healthier bacterial microbiome composition and function. Dairy fat also contains monounsaturated fats which might improve glycemic responses and insulin sensitivity. [47, 48] Other dairy-related factors, such as probiotic bacteria in yogurt and menaquinones in fermented milk and cheeses, may improve insulin sensitivity, reduce weight gain, and reduce inflammation through microbiome and vitamin-K related pathways; [49, 50] such pathways would be less relevant for butter, which has been linked to greater weight gain. [51, 52] Clearly, additional mechanistic studies on health effects of butter, dairy fat, and dairy foods are warranted.
Our results suggest relatively small or neutral overall associations of butter with mortality, CVD, and diabetes. These findings should be considered against clear harmful effects of refined grains, starches, and sugars on CVD and diabetes; [53–55] and corresponding benefits of fruits, nuts, legumes, n-6 rich vegetable oils, and possibly other foods such as fish on these endpoints. In sum, these results suggest that health effects of butter should be considered against the alternative choice. For instance, butter may be a more healthful choice than the white bread or potato on which it is commonly spread. In contrast, margarines, spreads, and cooking oils rich in healthful oils, such as soybean, canola, flaxseed, and extra-virgin olive oil, appear to be healthier choices than either butter or refined grains, starches, and sugars. [15, 56, 57] In Guasch-Ferre’s analysis of the Nurses Health Study, substitution of 8 g olive oil for an equivalent amount of butter was associated with an 8% reduction in the risk of type 2 diabetes (RR = 0.92 (95%CI = 0.87, 0.97). [15] Thus, even with an absence of major health associations in the present investigation, healthier (and less healthy) alternatives may be available. Our findings suggest a major focus on eating more or less butter, by itself, may not be linked to large differences in mortality, cardiovascular disease, or diabetes. In sum, our findings do not support a need for major emphasis in dietary guidelines on butter consumption, in comparison to other better established dietary priorities. In any meta-analysis, the effects of potential publication bias should be considered. Such bias increases the probability that large, positive associations, rather than small or null findings, will be published. In this case, the identified studies each reported generally modest or null findings. Considering the number of large prospective studies globally having data on dietary habits (including butter consumption) and these outcomes, it is evident that many additional cohort studies have collected such data but not analyzed or reported their findings. Such “missing,” unpublished studies may be more likely to have null effects. This may be particularly relevant for total mortality, with only 2 identified publications: additional publications might plausibly move findings toward the null. For diabetes, where a larger count of publications allowed better assessment for bias, the “trim and fill” method identified one theoretical missing study, with a protective point estimate.
Our investigation has several strengths. We followed stringent eligibility criteria that maximized inclusion of high quality, comparable studies. Our comprehensive literature search of multiple databases together with author contacts for clarification and missing data maximized statistical power and minimized the possibility of missed reports. While relatively few publications reported on certain outcomes, the identified studies were large, included multiple nation-specific cohorts and thousands of cases, and were of high quality; and as described above, it would be unlikely that publication bias would explain small or null (as opposed to large) associations. The inclusion of generally healthy participants followed since the 1980s and 1990s to the present provided populations generally free of lipid-lowering medications, which might otherwise mask full effects of butter on CVD. The identified cohorts provided a wide range of butter intakes, increasing power to detect an effect, if present. The dose-response analyses maximized use of all reported data, increasing precision.
Potential limitations should be considered. The health effect of any food could be modified by a person’s background diet, genetics, or risk factor profile. This is true for any lifestyle, pharmacologic, or other health intervention—effects may be modified by other treatments or underlying characteristics—but this does not lessen the relevance of evaluating the average population effect. We did not observe any obvious differences in associations based on country or region, where background dietary patterns might differ; but the number of identified studies precluded robust investigation of potential sources of heterogeneity. While the majority of studies adjusted for major demographic, clinical, and dietary covariates, residual confounding may be present. Because butter consumption is associated with generally worse diet patterns and lifestyle habits [58, 59], such residual confounding may overestimate potential harms of butter for mortality, and underestimate potential benefits of butter for CVD or diabetes. Error or bias in measurement of dietary intake from self-reports, as well as the long periods between dietary assessment and follow-up in several studies (10 years or more), may attenuate findings. On the other hand, even with such limitations, many other dietary factors in these and other cohorts have identified significant associations with mortality, CVD, and diabetes, so this is unlikely to be the sole explanation for the null findings. We did not identify any randomized clinical trials of our hard endpoints, although such a long-term trial focused on butter alone might be prohibitively expensive and impractical. Our results are based on best available observational findings, and long-term interventional studies were not found, limiting inference on causality.
In conclusion, the available evidence indicates small or neutral associations of butter consumption with all-cause mortality, CVD, and type 2 diabetes. ||||| An analysis by Tufts University researchers has failed to find a link between butter consumption and cardiovascular disease. And hallelujah to that—the ongoing hysteria against butter can now finally come to an end.
For years we’ve been told to reduce the amount of butter in our diets. Health guidelines, many of which have been around since the 1970s, have warned us about the dangers of eating food high in saturated fats, claiming—and often without merit—that they contribute to heart problems and other health issues. Increasingly, however, scientists are learning that saturated fats aren’t the demons they’ve been made out to be.
A new study published in PLOS ONE is now bolstering this changing tide of opinion, showing there’s no link between butter and chronic disease. This gigantic analysis—a meta-study that included a total of 636,151 individuals across 15 countries, and involving 6.5 million person-years of follow-up—showed no association between the consumption of butter and cardiovascular disease.
What the researchers did find, however, was that butter could be linked to a decrease—yes, a decrease—in a person’s chance of developing diabetes. Each daily tablespoon of butter was linked to a four percent lower risk of diabetes.
The downside is that the researchers did connect butter with all-cause mortality. For each tablespoon of butter consumed each day, the researchers observed a one percent increase in all-cause mortality risk, that is, death from any cause. The researchers suspect this connection is due to other factors; people who eat butter, for example, tend to have generally worse diets and lifestyles.
So does this mean we can start slathering butter on our toast and waffles with reckless abandon, and douse our popcorn in this golden syrup of deliciousness? Well, not quite. This study shows that butter on its own isn't a pure evil. But it shouldn't be considered a health food, either. As the researchers put it, butter is a kind of "middle-of-the-road" food. And as is often the case, it's the foods we put the butter on that are the problem.
Indeed, butter is healthier than sugar or starches like bread, which have been linked to an increased risk of cardiovascular disease and diabetes. On the other hand, butter is worse than many margarines and cooking oils, such as those rich in healthy fats, like soybean, canola, flaxseed, and extra virgin olive oils. Importantly, margarine made from trans fats should be avoided like the plague.
As study co-author Dariush Mozaffarian succinctly put it: “Overall, our results suggest that butter should neither be demonized nor considered ‘back’ as a route to good health.”
Mozaffarian and his colleagues said further research is still required to understand why butter is connected to a lower risk of diabetes, but similar things have been observed in studies of dairy fat. This could imply that other factors are at play. As the researchers concede, “[Our] study does not prove cause-and-effect.”
[PLOS ONE] ||||| BOSTON (Embargoed until 2 PM EDT, June 29, 2016)--Butter consumption was only weakly associated with total mortality, not associated with cardiovascular disease, and slightly inversely associated (protective) with diabetes, according to a new epidemiological study which analyzed the association of butter consumption with chronic disease and all-cause mortality. This systematic review and meta-analysis, published in PLOS ONE, was led by Tufts scientists including Laura Pimpin, Ph.D., former postdoctoral fellow at the Friedman School of Nutrition Science and Policy at Tufts in Boston, and senior author Dariush Mozaffarian, M.D., Dr.P.H., dean of the School.
Based on a systematic review and search of multiple online academic and medical databases, the researchers identified nine eligible research studies including 15 country-specific cohorts representing 636,151 unique individuals with a total of 6.5 million person-years of follow-up. Over the total follow-up period, the combined group of studies included 28,271 deaths, 9,783 cases of cardiovascular disease, and 23,954 cases of new-onset type 2 diabetes. The researchers combined the nine studies into a meta-analysis of relative risk.
Butter consumption was standardized across all nine studies to 14 grams/day, which corresponds to one U.S. Department of Agriculture estimated serving of butter (or roughly one tablespoon). Overall, the average butter consumption across the nine studies ranged from roughly one-third of a serving per day to 3.2 servings per day. The study found mostly small or insignificant associations of each daily serving of butter with total mortality, cardiovascular disease, and diabetes.
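To illustrate the kind of calculation this standardization and pooling involves (a simplified sketch with invented numbers, not the study's actual data, code, or results), a per-study risk estimate reported for an arbitrary intake can be rescaled on the log scale to the common 14 g/day serving and the studies then combined with a DerSimonian-Laird random-effects model:

import math

# hypothetical inputs: (RR, lower 95% CI, upper 95% CI, grams/day the RR refers to)
studies = [
    (0.96, 0.90, 1.02, 10.0),
    (1.03, 0.95, 1.12, 20.0),
    (0.99, 0.93, 1.05, 14.0),
]

def standardized_log_rr(rr, lo, hi, grams):
    # rescale the log-RR and its variance from 'grams'/day to a 14 g/day serving
    scale = 14.0 / grams
    log_rr = math.log(rr) * scale
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96) * scale
    return log_rr, se ** 2

effects = [standardized_log_rr(*s) for s in studies]

# DerSimonian-Laird random-effects pooling of the standardized log-RRs
w = [1.0 / v for _, v in effects]
fixed = sum(wi * e for (e, _), wi in zip(effects, w)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for (e, _), wi in zip(effects, w))
tau2 = max(0.0, (q - (len(studies) - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
w_re = [1.0 / (v + tau2) for _, v in effects]
pooled = sum(wi * e for (e, _), wi in zip(effects, w_re)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))
print(f"pooled RR per 14 g/day serving: {math.exp(pooled):.3f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.3f}-{math.exp(pooled + 1.96 * se_pooled):.3f})")

The three example studies and function names above are placeholders; the published analysis used the nine identified studies and reported its own pooled estimates.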
"Even though people who eat more butter generally have worse diets and lifestyles, it seemed to be pretty neutral overall," said Pimpin, now a data analyst in public health modelling for the UK Health Forum. "This suggests that butter may be a "middle-of-the-road" food: a more healthful choice than sugar or starch, such as the white bread or potato on which butter is commonly spread and which have been linked to higher risk of diabetes and cardiovascular disease; and a worse choice than many margarines and cooking oils - those rich in healthy fats such as soybean, canola, flaxseed, and extra virgin olive oils - which would likely lower risk compared with either butter or refined grains, starches, and sugars."
"Overall, our results suggest that butter should neither be demonized nor considered "back" as a route to good health," said Mozaffarian. "More research is needed to better understand the observed potential lower risk of diabetes, which has also been suggested in some other studies of dairy fat. This could be real, or due to other factors linked to eating butter - our study does not prove cause-and-effect."
###
Additional authors of this study are Jason HY Wu, M.Sc., Ph.D., and Hila Haskelberg, Ph.D., both of The George Institute for Global Health, University of Sydney, Australia; and Liana Del Gobbo, Ph.D., formerly a postdoctoral fellow at the Friedman School and currently a research fellow in cardiovascular medicine at Stanford School of Medicine.
This work was supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health, under award number 5R01HL085710. For conflicts of interest disclosure, please see the study.
Pimpin L, Wu JHY, Haskelberg H, Del Gobbo L, Mozaffarian D (2016) Is Butter Back? A Systematic Review and Meta-Analysis of Butter Consumption and Risk of Cardiovascular Disease, Diabetes, and Total Mortality. PLoS ONE 11(6): e0158118. doi:10.1371/journal.pone.0158118
About the Friedman School of Nutrition Science and Policy at Tufts University
The Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University is the only independent school of nutrition in the United States. The school's eight degree programs - which focus on questions relating to nutrition and chronic diseases, molecular nutrition, agriculture and sustainability, food security, humanitarian assistance, public health nutrition, and food policy and economics - are renowned for the application of scientific research to national and international policy. | – We might all owe Paula Deen an apology. A study published this week in PLOS ONE finds no connection between eating butter and an increased risk of cardiovascular disease. On the contrary, researchers found eating butter might actually make people slightly healthier by reducing the risk of diabetes. Researchers from Tufts University looked at nine previous studies of more than 636,000 people who ate between one-third a tablespoon of butter and three tablespoons of butter per day, Live Science reports. For the purposes of the study, they called one tablespoon of butter a daily serving. Researchers found that eating butter was in no way associated with a risk of stroke or heart disease and reduced the risk of type 2 diabetes by 4%. "Hallelujah," Gizmodo responds to the study, pointing out that warnings about the dangers of butter have been around since the 1970s. But that doesn't mean it's time to eat an entire butter sculpture. Researchers say that while butter may actually be healthier than sugars and starches, it's probably still worse for you than olive oil or some margarines. "Overall, our results suggest that butter should neither be demonized nor considered 'back' as a route to good health," researcher Dariush Mozaffarian says in a press release. Researchers did find that eating butter led to a 1% higher risk of death, though they chalk that up to the fact that "people who eat more butter generally have worse diets and lifestyles." (This guy found edible butter dating back to Jesus.) |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Small Business Paperwork Relief Act
of 2001''.
SEC. 2. FACILITATION OF COMPLIANCE WITH FEDERAL PAPERWORK REQUIREMENTS.
(a) Requirements Applicable to the Director of OMB.--Section
3504(c) of title 44, United States Code (commonly referred to as the
``Paperwork Reduction Act''), is amended--
(1) in paragraph (4), by striking ``; and'' and inserting a
semicolon;
(2) in paragraph (5), by striking the period and inserting
a semicolon; and
(3) by adding at the end the following:
``(6) publish in the Federal Register and make available on
the Internet (in consultation with the Small Business
Administration) on an annual basis a list of the compliance
assistance resources available to small businesses, with the
first such publication occurring not later than 1 year after
the date of enactment of the Small Business Paperwork Relief
Act of 2001.''.
(b) Establishment of Agency Point of Contact.--Section 3506 of
title 44, United States Code, is amended by adding at the end the
following:
``(i)(1) In addition to the requirements described in subsection
(c), each agency described under paragraph (2) shall, with respect to
the collection of information and the control of paperwork, establish 1
point of contact in the agency to act as a liaison between the agency
and small business concerns (as defined in section 3 of the Small
Business Act (15 U.S.C. 632)). Each such point of contact shall be
established not later than 1 year after the date of enactment of the
Small Business Paperwork Relief Act of 2001.
``(2) An agency described under this paragraph is--
``(A) any agency with a head that is listed at a level I
position on the Executive Schedule under section 5312 of title
5; and
``(B) the Federal Communications Commission, the Securities
and Exchange Commission, and the Environmental Protection
Agency.''.
(c) Additional Reduction of Paperwork for Certain Small
Businesses.--Section 3506(c) of title 44, United States Code, is
amended--
(1) in paragraph (2)(B), by striking ``; and'' and
inserting a semicolon;
(2) in paragraph (3)(J), by striking the period and
inserting ``; and''; and
(3) by adding at the end the following:
``(4) in addition to the requirements of this chapter
regarding the reduction of information collection burdens for
small business concerns (as defined in section 3 of the Small
Business Act (15 U.S.C. 632)), make efforts to--
``(A) further reduce the information collection
burden for small business concerns with fewer than 25
employees; and
``(B) eliminate any unnecessary paperwork
burdens.''.
SEC. 3. ESTABLISHMENT OF TASK FORCE ON INFORMATION COLLECTION AND
DISSEMINATION.
(a) In General.--Chapter 35 of title 44, United States Code, is
amended--
(1) by redesignating section 3520 as section 3521; and
(2) by inserting after section 3519 the following:
``Sec. 3520. Establishment of task force on information collection and
dissemination
``(a) There is established a task force to study the feasibility of
streamlining requirements with respect to small business concerns
regarding collection of information and strengthening dissemination of
information (in this section referred to as the `task force').
``(b) The members of the task force shall be appointed by the head
of each applicable department or agency (and in the case of paragraph
(12) by the Director), and include--
``(1) not less than 2 representatives of the Department of
Labor, including 1 representative of the Bureau of Labor
Statistics and 1 representative of the Occupational Safety and
Health Administration;
``(2) not less than 1 representative of the Environmental
Protection Agency;
``(3) not less than 1 representative of the Department of
Transportation;
``(4) not less than 1 representative of the Office of
Advocacy of the Small Business Administration;
``(5) not less than 1 representative of the Internal
Revenue Service;
``(6) not less than 2 representatives of the Department of
Health and Human Services, including 1 representative of the
Health Care Financing Administration;
``(7) not less than 1 representative of the Department of
Agriculture;
``(8) not less than 1 representative of the Department of
Interior;
``(9) not less than 1 representative of the General
Services Administration;
``(10) not less than 1 representative of each of 2 agencies
not represented by representatives described under paragraphs
(1) through (9) and (11);
``(11) 1 representative of the Director, who shall convene
and chair the task force; and
``(12) not less than 3 representatives of the small
business community.
``(c) The task force shall--
``(1) recommend a plan for the development of an
interactive Government application, available through the
Internet, to allow each small business to better understand
which Federal requirements regarding collection of information
(and, when possible, which other Federal regulatory
requirements) apply to that particular business;
``(2) identify ways to integrate the collection of
information across Federal agencies and programs and examine
the feasibility of requiring each agency to consolidate
requirements regarding collections of information with respect
to small business concerns, within and across agencies without
negatively impacting the effectiveness of underlying laws and
regulations regarding such collections of information, in order
that each small business concern may submit all information
required by the agency--
``(A) to 1 point of contact in the agency; and
``(B) in a single format, such as a single
electronic reporting system, with respect to the
agency;
``(3) examine the feasibility and helpfulness to small
businesses of the Director publishing a list of the collections
of information applicable to small business concerns (as
defined in section 3 of the Small Business Act (15 U.S.C.
632)), organized--
``(A) by North American Industrial Classification
System code;
``(B) industrial/sector description; or
``(C) in another manner by which small business
concerns can more easily identify requirements with
which those small business concerns are expected to
comply;
``(4) examine the savings, including cost savings, for
implementing a system of electronic paperwork submissions; and
``(5) examine the feasibility of measures to strengthen the
dissemination of information.
``(d) Not later than 1 year after the date of enactment of the
Small Business Paperwork Relief Act of 2001, the task force shall
submit a report of its findings under subsection (c), including any
minority views of the task force, to--
``(1) the Director;
``(2) the chairpersons and ranking minority members of--
``(A) the Committee on Governmental Affairs and the
Committee on Small Business and Entrepreneurship of the
Senate; and
``(B) the Committee on Government Reform and the
Committee on Small Business of the House of
Representatives; and
``(3) the Small Business and Agriculture Regulatory
Enforcement Ombudsman designated under section 30(b) of the
Small Business Act (15 U.S.C. 657(b)).
``(e) In this section, the term `small business concern' has the
meaning given under section 3 of the Small Business Act (15 U.S.C.
632).''.
(b) Technical and Conforming Amendment.--The table of sections for
chapter 35 of title 44, United States Code, is amended by striking the
item relating to section 3520 and inserting the following:
``3520. Establishment of task force on information collection and
dissemination.
``3521. Authorization of appropriations.''.
SEC. 4. REGULATORY ENFORCEMENT REFORMS.
Section 223 of the Small Business Regulatory Enforcement Fairness
Act of 1996 (5 U.S.C. 601 note) is amended by striking subsection (c)
and inserting:
``(c) Reports.--
``(1) In general.--Not later than 1 year after the date of
enactment of the Small Business Paperwork Relief Act of 2001,
and not later than every 2 years thereafter, each agency shall
submit a report to the Director of the Office of Management and
Budget and the chairpersons and ranking minority members of the
Committee on Governmental Affairs and the Committee on Small
Business of the Senate, and the Committee on the Judiciary and
the Committee on Small Business of the House of
Representatives, that includes information with respect to the
applicable 1-year period or 2-year period covered by the report
on each of the following:
``(A) The number of enforcement actions in which a
civil penalty is assessed.
``(B) The number of enforcement actions in which a
civil penalty is assessed against a small entity.
``(C) The number of enforcement actions described
under subparagraphs (A) and (B) in which the civil
penalty is reduced or waived.
``(D) The total monetary amount of the reductions
or waivers referred to under subparagraph (C).
``(2) Definitions in reports.--Each report under paragraph
(1) shall include definitions of the terms `enforcement
actions', `reduction or waiver', and `small entity' as used in
the report.''.
Passed the Senate December 17, 2001.
Attest:
Secretary.
107th CONGRESS
1st Session
S. 1271
_______________________________________________________________________
AN ACT
To amend chapter 35 of title 44, United States Code, for the purpose of
facilitating compliance by small business concerns with certain Federal
paperwork requirements, to establish a task force to examine
information collection and dissemination, and for other purposes. | Small Business Paperwork Relief Act of 2001 - Amends the Paperwork Reduction Act to require the Director of the Office of Management and Budget, annually, to publish in the Federal Register and make available on the Internet a list of the paperwork compliance resources available to small businesses. Requires the following Federal agencies, also within one year, to establish one agency point of contact to act as a liaison with small businesses with respect to the collection of information and the control of paperwork: (1) each agency with a head listed at level I on the Executive Schedule; and (2) the Federal Communications Commission, the Securities and Exchange Commission, and the Environmental Protection Agency.Requires each agency to make efforts to: (1) further reduce the paperwork burden for small businesses with fewer than 25 employees; and (2) eliminate any unnecessary paperwork burdens.Establishes a task force to study and report to the Director, specified congressional committees, and the Small Business and Agriculture Regulatory Enforcement Ombudsman on the feasibility of streamlining requirements with respect to small businesses regarding the collection of information and strengthening the dissemination of information.Amends the Small Business Regulatory Enforcement Fairness Act of 1996 to require each agency to submit, biennially, to the Director and such committees information concerning regulatory enforcement actions taken and civil penalties assessed, including actions and assessments against small businesses. |
glioblastomas are the most common and aggressive among primary brain tumors. in spite of intensive basic and clinical studies, one-third of patients survive no longer than one year from diagnosis, and average life expectancy remains dismal (12-15 months), even when radical surgical resection, chemo- and radiotherapy can be applied.
the major problems with glioblastomas are their highly migratory and invasive potential into the normal brain tissue, which prevents complete surgical removal of tumor cells, and the extreme resistance of these cells to standard treatments. worsening the outcome of the disease is the presence in the tumor mass of a recently identified subpopulation of highly tumorigenic stem-like glioblastoma cells possessing even greater invasive power, chemo- and radio-resistance than nonstem tumor cells, which are also thought to be responsible for the commonly observed tumor relapses [2-4].
glioblastomas are characterized by a large number and variety of genetic mutations that heavily dysregulate the major signaling pathways controlling cell survival, proliferation, differentiation, and invasion. among the dysregulated pathways found in glioblastoma cells are those controlling the expression of ion channels, transmembrane proteins endowed with a permeation pore that allows the passage of ions.
usually ion channels are selectively permeable to one particular ion and can open and close their permeation pore in response to chemical and physical stimuli , such as neurotransmitters , modulators , and changes in the membrane potential .
ion channels have been found to be involved in several cellular functions that are hallmarks of cancer cell aggressiveness, such as proliferation, apoptosis, and migration. in most cases their contribution consists in regulating two important cellular parameters, the cell volume and the intracellular ca concentration ([ca]i) [7, 8]. by allowing the movement of k and cl ions through the plasma membrane, and the osmotically driven water flux, ion channels critically control the changes of cell volume that are functionally relevant for glioblastoma cells.
for example, a premitotic volume condensation (pvc) is required for glioblastoma cells to switch from a bipolar to a round cell morphology just prior to cell division. notably, this process requires the opening of cl-selective clc-3 channels, which are markedly upregulated in glioblastoma cells as compared to healthy astrocytes [9-12].
similarly , a cell volume reduction , the so - called apoptotic volume decrease ( avd ) , was observed during the staurosporine- or trail ( tnf - alpha - related apoptosis inducing ligand)-induced apoptosis of glioblastoma cells , and also in this case it was found to be sustained by a cl channel flux , being prevented by inhibitors of cl channels .
cell migration and invasion through the narrow extracellular spaces of the brain parenchyma also require major changes in cell volume .
these processes, in addition to the clc-3 channels discussed above, require the activity of ca-activated k-selective bk channels, likewise markedly upregulated in glioblastoma cells as compared to healthy astrocytes [14-16].
the important role of the ca signals in the development of glioblastoma has recently been reviewed .
notably, ion channels play a critical role in this regard; besides directly sustaining ca influx (through ca-permeable channels), they can influence the entry of extracellular ca ions by modulating the membrane potential that controls the driving force for ca influx.
ca influx through the trpc family of ca-permeable channels has indeed been shown to modulate glioblastoma cell cycle progression [18-20] and to induce a camkii-dependent activation of clc-3 during premitotic volume condensation.
in addition , glioblastoma cell migration has been shown to be accompanied by intracellular ca oscillations that are instrumental to promote the kinase - dependent detachment of focal adhesions during cell rear retraction [ 21 , 22 ] , and these intracellular ca oscillations can be significantly affected by the membrane hyperpolarization determined by the activity of k channels . perhaps the best suited ion channels to play a role in tumor development are the ca - activated k ( kca ) channels , as they are at the cell crossroad where ca influx , membrane potential , and outward ion fluxes , all processes governed by kca channels , integrate to modulate a large array of cellular processes .
kca channels are subdivided into three major classes according to their single-channel conductance: large-conductance (150-300 ps) k channels (bkca or kca1), small-conductance (2-20 ps) k channels (sk or kca2.1, kca2.2, kca2.3), and intermediate-conductance (20-60 ps) k channels (ikca or kca3.1).
kca1 channels , encoded by the kcnma1 gene , are broadly expressed in various tissues .
they are regulated by cytoplasmic ca but also by membrane potential: in the absence of ca they open only at strongly depolarized potentials, whereas elevations in cytoplasmic [ca] shift the range of activating voltages to more negative potentials, allowing activity near resting potentials. paxilline, iberiotoxin, and low concentrations of tetraethylammonium are potent and specific inhibitors of the kca1 channel.
the kca2.x channels are voltage independent but more sensitive to ca ( ec50 in submicromolar range ) due to the presence of calmodulin associated with the c - terminus that works as ca sensor .
the kca3.1 channels , like the kca2.x channels , are voltage independent but gated by intracellular ca that binds to calmodulin and opens the channel .
clotrimazole and its derivative tram-34 are potent inhibitors of the kca3.1 channels , discriminating them from other kca channels .
kca3.1 channels are expressed in a variety of normal and tumor cells, where they participate in important cell functions such as cell cycle progression, migration, and epithelial transport, by controlling the cell volume and the driving force for ca influx [25-27]. here we review the major progress that has led to our present understanding of the expression and role of kca3.1 channels in glioblastoma.
the kca3.1 channel has the overall architecture of the voltage-gated k (kv) channel superfamily, with four subunits, each containing six transmembrane domains (s1-s6) and a pore domain (p loop) located between s5 and s6. the s4 domain, which confers voltage sensitivity to the kv channels, contains in kca3.1 channels only two positively charged aminoacids, as compared to the 4-7 charged residues of voltage-gated k channels. channel activation is, therefore, voltage independent.
the kca3.1 channel is gated instead by the binding of intracellular ca to calmodulin, a ca-binding protein that is constitutively associated with the c terminus of each channel subunit [28-30].
this ca - dependent gating is similar to that displayed by the kca2.x channel family but distinct from kca1 channels , where the ca - dependent module is intrinsic to the channel subunit .
patch-clamp experiments in several cell types, including glioblastoma, give ec50s for kca3.1 channel activation by ca of 200-400 nm [31, 32], consistent with those found for the cloned channel [33-35]. the high ca sensitivity of the kca3.1 channel allows its activation by submicromolar ca levels, easily reached upon ca release from intracellular stores or influx through ca-permeable channels.
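as a minimal illustration (not taken from the cited studies), this ca dependence is often summarized by a hill relation for the open probability; assuming, for example, an ec50 of about 0.3 μm and a hill coefficient n of about 2,

p_{open}([ca]_i) = \frac{[ca]_i^{\,n}}{[ca]_i^{\,n} + ec_{50}^{\,n}}, \qquad ec_{50} \approx 0.3\ \mu m,\ n \approx 2,

which would give a p_open of roughly 0.1 at a resting [ca]i of 100 nm and roughly 0.8 at 600 nm (a typical ca transient), consistent with activation by submicromolar ca levels.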
a four - state gating scheme was proposed for kca3.1 channels , with ca - dependent transitions dependent on the [ ca]i in a nonlinear manner .
this peculiarity , not shared by the kca2.x channel family , is related to the channel behaviour at saturating [ ca]i , as elevated divalent concentrations have been reported to block the channel [ 36 , 38 ] .
the most studied kca3.1 mrna is the 2.1 kb form , but other transcripts have been reported in humans [ 34 , 35 ] .
three distinct kcnn4 cdnas that are designated as kcnn4a , kcnn4b , and kcnn4c encoding 425 , 424 , and 395 aminoacid proteins , respectively , were isolated from the rat colon , and several differences in the functional expression and pharmacological properties of the different isoforms were found .
the kca3.1 channels are target for several inhibitory and activatory agents ( for an exhaustive review see ) .
two structurally distinct groups of kca3.1 channel blockers, peptidic and nonpeptidic, have been found, which also differ in their binding site on the channel protein. among the peptidic blockers are maurotoxin and charybdotoxin. for maurotoxin, lys23 of the toxin binds to the pore filter of the channel from the extracellular side, and an interaction between tyr32 of the toxin and a cluster of aromatic residues in the channel pore vestibule stabilizes the binding.
maurotoxin is not selective for kca3.1 channels , being also a potent blocker of some members of kv channels .
charybdotoxin ( chtx ) , a 37-aminoacid toxin , displays a block mechanism similar to maurotoxin , and poor selectivity , blocking effectively other ion channels including kca1 channels .
several nonpeptidic molecules have been found to block kca3.1 channels, such as the vasodilator cetiedil [44, 45], the antimycotic triarylmethane clotrimazole (ctl), and the antihypertensive l-type ca channel blocker nifedipine. from chemical modification of cetiedil several more potent kca3.1 channel blockers were obtained.
the investigation of one of these compounds , the ucl 1608 , suggests that they interact with a lipophilic - binding site located within the membrane .
also the chemical modification of the poorly selective ctl has led to the production of several more effective kca3.1 channel blockers , including the triarylmethanes tram-34 and ica-17043 .
tram-34 is so far the best probe to study the roles of kca3.1 channels , being much more selective than ctl .
an excellent work has conclusively delineated the properties of the kca3.1 channel binding site for tram-34 .
these authors found that the tram-34 analogue and membrane-impermeant tram-30 blocked the channel only when applied from inside, and the interaction of tram-34 with the channel required the p-loop aminoacid thr250 and the s6 segment aminoacid val275, both likely facing a large water-filled cavity localized below the narrow selectivity filter of the channel.
they thus concluded that the tram-34 binding site is accessible from the cytoplasmic side and lays well up inside the inner vestibule .
the same work has also found that the dihydropyridines - binding site is likely different from the tram-34 binding site , as the same mutation does not alter the blocking action of nifedipine .
starting from nifedipine as a lead compound, the 4-phenyl-4h-pyrans and the related cyclohexadienes were obtained [52, 53], of which cyclohexadiene 4 represents the most potent blocker of the kca3.1 channel.
particularly interesting for kca3.1 channel targeting in glioblastomas is the analogous compound, the bicyclic hexadiene lactone 16, which displays a 10-fold enrichment in brain tissue. from the early discovery of 1-ethyl-2-benzimidazolinone (ebio) as a kca3.1 channel activator, much effort has been devoted to increasing its potency and selectivity.
potency was initially improved with the introduction of dc - ebio , and more recently with ns309 .
selectivity on the contrary has been more difficult to increase since these compounds activate also kca2.x channels .
the mechanism of action of kca3.1 channel activators , and the location and structure of their binding sites have been only partially clarified [ 57 , 58 ] .
the potency of all kca3.1 channel activators depends on ca , as they are totally ineffective in its absence [ 54 , 57 , 58 ] . the origin of this ca dependence is still unclear .
several studies have described a rundown of the kca3.1 channel activity in atp - free internal milieu that can be restored after the readdition of atp , suggesting the involvement of kinases in the process . in accordance ,
several kinases such as pkc, pka, and pi3ks have been shown to regulate the kca3.1 channels [59-61], although not through the direct phosphorylation of the channel subunit [59, 61, 62].
only the nucleotide diphosphate kinase (ndpk) has been shown to phosphorylate the kca3.1 channel alpha subunit (at his358), and a similar action could be exerted by the amp-activated protein kinase (ampk), although the aminoacid residue targeted in this case has not been identified.
it is possible that ndpk or ampk represent integration points for other kinases found to modulate kca3.1 channels , as already demonstrated for the pi3k class ii .
the regulation of the pathways involved in kca3.1 channel trafficking has been proposed as a new strategy for regulating the kca3.1 current, since inhibition of the ubiquitin-activating enzyme e1, and hence of channel endocytosis, strongly increases the number of kca3.1 channels in the membrane. in expression systems,
the kca3.1 channels at the plasma membrane have a relatively short life, being internalized within 60-90 min and targeted for lysosomal degradation.
this process requires components of the escrt machinery and the small - molecular - weight guanine nucleotide - binding protein rab7 .
polyubiquitylation mediates the targeting of membrane-residing kca3.1 channels to the lysosomes, while usp8 regulates the rate of kca3.1 channel degradation by deubiquitylating kca3.1 channels prior to lysosomal delivery. this modulation could explain the increase of kca3.1 current observed following short exposure (90 min) of glioblastoma cells to cxcl12, since noise analysis indicates that the kca3.1 current increase is due to an increased number of channels in the membrane (our unpublished data), while no changes in the kca3.1 channel mrna levels are observed.
two main transcription factors have been found to regulate the kca3.1 channel expression , ap-1 and rest .
ap-1 was first identified as a regulator of kca3.1 expression in t lymphocytes, where its activity, stimulated by the erk1/2 pathway, promotes an increase in kca3.1 current and cell proliferation. in the glioblastoma cell line gl-15, inhibition of erk1/2 by the mek inhibitor pd98059 reduces the mrna levels of the kca3.1 channels, suggesting that the same modulation described in t lymphocytes also operates in glioblastoma models.
this modulation is relevant as the erk1/2 pathway is deregulated in most glioblastomas , because of the several mutations accumulated during gliomagenesis .
the second transcription factor found to modulate the kca3.1 channel expression is rest ( repressor element 1-silencing transcription factor ) .
the kcnn4 gene contains two re-1 sites whose occupancy by rest represses gene transcription, and a similar rest-mediated repression of kcnn4 has been described in vascular smooth muscle cells. thus, changes in glioblastoma rest levels could explain the erk-independent kcnn4 transcriptional downregulation we found in gl-15 glioblastoma cells with time in culture. rest has in fact been shown to negatively regulate adult cns differentiation [74, 75], and kca3.1 mrna downregulation was found to be accompanied by the appearance of several differentiation markers.
early evidence for the expression of kca3.1 channels in glioma cells came from biochemical and electrophysiological studies performed about twenty years ago. in the rat c6 glioma cell line it was first observed that ca ionophores induced a rubidium flux sensitive to nanomolar concentrations of chtx but not to ibtx, tea, or apamin [76, 77].
patch - clamp experiments in the same cell line confirmed the presence of a k - selective channel having a unitary conductance of 26 ps in symmetrical k and a sensitivity to submicromolar [ ca]i [ 77 , 78 ] .
this channel could also be activated by several physiological ca agonists , such as endothelin , serotonin , histamine , and bradykinin [ 23 , 7984 ] .
subsequent work from our laboratory showed that the kca3.1 channel was also expressed in human glioblastoma cell lines ( gl-15 and u251 ; [ 32 , 85 ] ) .
coapplication of the ca ionophore ionomycin with the kca2/kca3.1 channel activator ebio evoked in these cell lines a sustained k current inhibited by chtx , ctl , and tram-34 but not by the kca2 channel blocker d - tc .
single channel recordings confirmed the presence of a unitary k current with biophysical and pharmacological properties congruent with those reported for the cloned human kca3.1 channel [32-35, 85]. in accordance, the kca3.1 channel transcripts could be amplified from both gl-15 and u251 cells. besides the u251 cell line, the kca3.1 channel transcripts were also found by sontheimer's group in d54-mg, another human glioblastoma cell line, as well as in a human glioblastoma biopsy.
these authors, however, found neither evidence for a kca3.1 current in these tissues (probed in whole-cell configuration with a [ca]i of 750 nm), nor for the kca3.1 channel protein (using western blot analysis and a commercially available anti-kca3.1 antibody). with regard to this apparent discrepancy on the functional expression of kca3.1 channels in human glioblastoma cells, a third group recently found a substantial level of kca3.1 channel transcripts in u87 and u251 cell lines, as well as in a glioblastoma biopsy.
moreover , they found that the same cells displayed a voltage insensitive , ca - activated k - selective current blocked by ctl and tram-34 , indicating that the kca3.1 channel was expressed in human glioblastoma cells .
the expression of the kca3.1 channel protein in glioblastoma cells was further confirmed by the same group with western blot analysis .
these authors tried to explain the discrepancy of their results with those of sontheimer 's group by considering the different experimental conditions used in the whole - cell recordings and the different sensitivity of the antibodies used in the western blot analysis .
the high expression of the kca3.1 channel in glioblastoma cells could have a major diagnostic and therapeutic relevance , provided that its presence in the brain was restricted to the transformed glial cells .
early work performed soon after the cloning of the human kca3.1 channel showed that the kca3.1 channel transcripts were not expressed in the human central nervous system, although they were found in many other human tissues (placenta, lung, salivary gland, colon, prostate, thymus, spleen, bone marrow, lymph nodes, and lymphocytes), and in many of these tissues the functional expression of the kca3.1 channel was confirmed by patch-clamp experiments [33-35].
this was confirmed by an rt - pcr study showing that kca3.1 channel transcripts could be found in d54-mg and u251 human glioblastoma cell lines , as well as in a human glioblastoma biopsy but not in a grade iii astrocytoma nor in normal human brain and in cultured rat astrocytes .
all these studies strongly suggested that the kca3.1 channel was only scantly expressed in human normal brain tissue , while being strongly upregulated in glioblastomas .
earlier electrophysiological studies focused on normal rat and mouse glial cells did not find any evidence for the expression of the kca3.1 channel , while reporting the presence of other ca - activated k channels such as kca1 and apamin - sensitive sk channels [ 87 , 89 , 90 ] .
the expression of kca3.1 channels was instead reported in cultured rat microglia [ 91 , 92 ] , but these cells did not appear to express kca3.1 channels in in vivo slices .
currents that could be ascribed to the kca3.1 channel were observed in rat dorsal root ganglion and autonomic neurons [94-96], and most recently in rat cerebellar purkinje cells.
more recent studies indicate , however , that normal mouse astrocytes express low levels of kca3.1 channels .
more specifically , one study shows that about 10% of gfap - positive mouse astrocytes is immunoreactive to antibody against kca3.1 channels , and this percentage increases 5-fold following spinal cord injury .
this latter result is consistent with the observation that kca3.1 channels are highly expressed in activated astrocytes .
a second study also reports kca3.1 immunoreactivity in mouse astrocytes ( mostly at the endfoot ) and shows that the channel participates to the neurovascular coupling .
the study further shows that 50% of gfap-positive astrocytes in slice preparations express tram-34-sensitive and ns309-activated kca3.1 currents. taken together, these data would suggest that kca3.1 channels are present in a fraction of normal mouse astrocytes.
further dedicated experiments are needed to conclusively clarify whether human normal astrocytes express kca3.1 channels , and whether interspecies differences exist in the expression of kca3.1 channels in the brain .
kca3.1 channel expression has been shown to be upregulated in many cancer cell types , and in most of them a role of this channel in promoting cell growth and cell cycle progression has been evidenced ( reviewed in ) .
a similar role in glioblastoma cells is suggested by data showing that ctl inhibits the growth of glioblastoma cell lines (by inducing a cell cycle arrest at the g1-s transition) and delays intracranial glioblastoma tumor formation [100-102].
however , given the several unspecific effects of ctl , these data do not conclusively show whether kca3.1 channels have a role in the growth of glioblastoma cells . a recent work aimed at specifically addressing this issue found that both ctl and the more specific ctl analog tram-34 inhibited the growth of u87 and u251 cells , although with ic50s much higher than those needed to inhibit channel activity .
by contrast , when inhibition of kca3.1 current ( down to 20% ) was attained by rna interference , no measurable effect was observed on cell growth .
based on these observations the authors concluded that kca3.1 channel activity is unlikely to have a major role in glioblastoma cell proliferation , and the effects of kca3.1 channel inhibitors are most likely unspecific .
it should be noticed, however, that under the assumption that the effect of the kca3.1 channel on cell growth is mediated by the channel-induced hyperpolarization (which would facilitate ca influx through the membrane), an ic50 for cell growth inhibition higher than that for channel block is to be expected, as documented for many k channel blockers (reviewed in ). a role of kca3.1 channels in glioblastoma cell proliferation cannot thus be excluded based on the available data, and further experiments addressing this point are needed.
cell migration plays a crucial role in the pathophysiology of glioblastomas, and several ion channels have been shown to have a major role in this process (cf. section 1).
given the abundant expression of kca3.1 channels in glioblastoma cells and the substantial role this channel has in the migration of other cell types , we recently verified whether glioblastoma cells require kca3.1 channel activity to move .
more specifically, we asked whether physiological motogens likely surrounding glioblastoma cells in vivo use kca3.1 channels for their promigratory activity. among them, the chemokine cxcl12/sdf-1 appeared of interest, as its receptor cxcr4 is widely expressed in glioblastoma tissue [104-107] and its activation plays a key role in the migration of glioblastoma cells [108-110].
interestingly , we found that kca3.1 channel activity was required in the chemotactic response to sdf-1 of gl-15 and u251 cell lines , primary cultures and freshly dissociated tissue .
the chemotactic response, probed with a standard transwell chamber, was indeed strongly attenuated both in the presence of tram-34 and upon kca3.1 channel silencing by rna interference. in patch-clamp experiments we found that in a fraction of gl-15 cells brief applications of sdf-1 activate kca3.1 channels by increasing the intracellular [ca]i.
more prolonged sdf-1 applications ( three hours incubation ) on gl-15 cells induced instead an upregulation of the maximal kca3.1 channel conductance , suggesting a posttranslational upregulation of the channel protein .
we further found that the kca3.1 channel activation is not a general requirement for motogen - induced migration in glioblastoma cells .
kca3.1 channel inhibitors were in fact ineffective in modulating the chemotactic response to epidermal growth factor ( egf ) , another physiologically relevant chemotactic inducer in glioblastoma .
patch - clamp experiments on gl-15 cells showed that egf activates a kca3.1 current very similar to that seen in response to sdf-1 .
additional experiments showed that egf , unlike sdf-1 , was not able to upregulate the kca3.1 channel functional expression following prolonged incubation , suggesting this sdf-1-induced modulation may be the relevant one for chemotaxis .
other in vivo promigratory signals for glioblastoma cells could be present in the serum that can infiltrate into the tumor area of glioblastomas as result of the blood - brain barrier breakdown [ 112 , 113 ] .
several studies show that fetal calf serum ( fcs ) enhances the migration of glioblastoma cells by inducing oscillations of the [ ca]i .
[ ca]i oscillations are thought to facilitate the detachment of focal adhesions , through stimulation of focal adhesion kinase , and the retraction of the cell rear towards the direction of movement .
however , since the fcs - induced [ ca]i oscillations reach peaks sufficiently high to activate kca3.1 channels , we hypothesized that k efflux through kca3.1 channels could serve for the volume changes needed during cell migration .
we found that in about 40% of u-87 cells, acute application of 10% fcs resulted in oscillatory activity of a k-selective, tram-34-sensitive current, displaying frequencies well within those observed for the fcs-induced [ca]i oscillations. besides inducing a cyclical activation of kca3.1 channels, fcs also promoted the stable (nonoscillatory) activation of a cl-selective current having biophysical and pharmacological properties resembling those of the volume-activated cl current (icl,swell) widely expressed in glioblastoma cells.
coherently , transwell migration assays performed in the presence of kca3.1 and cl channel inhibitors indicated that the activity of these two channels was needed for the promigratory activity of fcs .
finally, the cl channel blocker 5-nitro-2-(3-phenylpropylamino)benzoic acid (nppb) has been shown to block kca3.1 channels at concentrations often used to block cl channels, suggesting that the particularly high efficacy of this compound on glioblastoma cell migration is due to its inhibitory effects on both channel types. as discussed in the introduction and illustrated in figure 1, kca3.1 channels may support glioblastoma cell migration in two ways. the first mode holds that the channel is instrumental, together with the cl channel and aquaporins, to the combined outward ion flux needed for cell volume decrease. at relatively low [ca]i, shown to correspond to lamellipodium protrusion, the membrane conductance is dominated by icl,swell, and the membrane potential is very close to the cl equilibrium potential (ecl). under these conditions no transmembrane ion flux through the kca3.1 and cl channels is present, since there is no driving force for cl ions and kca3.1 channels are closed. during this period the membrane transporters, usually located at the front of migrating cells, will bring ions and water inside the cell, thus allowing the cell volume expansion needed for cell protrusion.
by contrast , the opening of kca3.1 channels during the peaks of [ ca]i oscillations will move the resting membrane potential to values between ek and ecl , a condition promoting both k and cl efflux , followed by water for osmotic requirements .
the resulting reduction in cell volume , accompanied by the detachment of focal adhesions located at the cell rear , would thus facilitate the retraction of the cell body .
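as a minimal numerical sketch of this idea (illustrative only: the conductances, reversal potentials, ec50, and hill coefficient below are assumptions for the purpose of the example, not values measured in the cited studies), one can estimate how the membrane potential shifts from near ecl toward ek as kca3.1 channels open during a ca spike:

def p_open(ca_um, ec50=0.3, n=2.0):
    # hill-type kca3.1 open probability (assumed ec50 ~0.3 um, hill coefficient ~2)
    return ca_um ** n / (ca_um ** n + ec50 ** n)

def vm_chord(ca_um, g_kca_max=5.0, g_cl=1.0, g_leak=1.0,
             e_k=-90.0, e_cl=-40.0, e_leak=-40.0):
    # chord-conductance estimate: vm is the conductance-weighted mean of the
    # reversal potentials of the open pathways (all parameter values are illustrative)
    g_k = g_kca_max * p_open(ca_um)
    return (g_k * e_k + g_cl * e_cl + g_leak * e_leak) / (g_k + g_cl + g_leak)

for ca in (0.1, 0.6):  # resting [ca]i vs. a typical oscillation peak, in micromolar
    print(f"[ca]i = {ca:.1f} um -> p_open = {p_open(ca):.2f}, vm ~ {vm_chord(ca):.1f} mv")

with these assumed numbers the membrane potential sits at about -50 mv at resting [ca]i and moves to about -73 mv during the ca peak, i.e. between ecl and ek, the condition described above as driving kcl and water efflux and hence cell shrinkage.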
besides controlling cell volume kca3.1 channels could promote glioblastoma cell migration through the modulation of [ ca]i signals .
several works have indeed shown that the activity of kca3.1 channels facilitates the entry of ca ions from the extracellular medium by providing a counter ion to limit cell depolarization and also by hyperpolarizing the cell membrane and increasing the driving force for ca influx .
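in rough quantitative terms (an illustrative approximation, not a value taken from the cited works), the inward ca flux through open ca-permeable channels scales with the electrochemical driving force,

j_{ca} \propto g_{ca}\,(v_m - e_{ca}),

so that, with e_ca on the order of +100 mv, a kca3.1-mediated hyperpolarization from about -50 mv to about -75 mv increases the magnitude of the driving force from roughly 150 mv to 175 mv, i.e. by about 15-20%, other factors being equal.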
this was first demonstrated in activated t lymphocytes, and subsequent works confirmed this role in other cell types expressing this channel [116-118]. in gl-15 cells we found that prolonged applications of histamine induced an increase of [ca]i consisting of a fast peak caused by the release of ca from the intracellular stores, followed by a sustained phase dependent on ca influx through a lanthanum-sensitive pathway.
interestingly , the activation of kca3.1 channels significantly enhanced the sustained phase , as indicated by a reduction of the histamine - induced [ ca]i in the presence of tram-34 .
this result strongly suggests that the activation of kca3.1 channels could contribute to glioblastoma cell migration by modulating the shape of [ca]i oscillations. in accordance with this hypothesis, we recently built a theoretical model of [ca]i oscillations incorporating the dynamics of the membrane potential and found that a channel activity with the properties of kca3.1 channels could appreciably affect ip3-driven [ca]i oscillations (it increased both the amplitude and duration of each [ca]i spike and the oscillatory frequency).
interestingly , we found that under particular conditions the presence of kca3.1 channel activity is necessary in order for the cell to generate [ ca]i oscillations [ 120 , 121 ] .
this last result would explain old experiments showing that the kca3.1 channel inhibitor chtx is able to abolish the bradykinin induced [ ca]i oscillations in c6 glioma cells . which of the two mechanisms ( cell volume regulation or control of the ca influx ) is the prominent one in the control of glioblastoma cell migration by kca3.1 channels remains to be established .
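a generic form of such a coupled model (a sketch of the overall structure only; the specific flux terms and parameter values of the published model may differ) combines a cytosolic ca balance with a membrane-potential equation in which the kca3.1 conductance is ca dependent:

\frac{d[ca]_i}{dt} = j_{ip3r} - j_{serca} + j_{in}(v_m) - j_{pmca}, \qquad
c_m \frac{dv_m}{dt} = -g_{kca3.1}([ca]_i)\,(v_m - e_k) - g_{cl}\,(v_m - e_{cl}) - g_{leak}\,(v_m - e_{leak}),

with the influx term j_{in}(v_m) growing as v_m hyperpolarizes (larger driving force, as sketched above). in such a scheme the opening of kca3.1 channels during a ca spike hyperpolarizes the membrane, boosts ca influx, and thereby reshapes the amplitude and frequency of the oscillations, in line with the behaviour described for the model.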
the data presented here indicate that kca3.1 channels play a relevant role in cell migration , a critical process in glioblastomas where the spreading and infiltration of their cells into the normal brain parenchyma represent major causes for tumor progression and recurrence following tumor surgical resection .
they show in addition that kca3.1 channels are abundantly expressed in glioblastoma cells , whereas they are only scantly present in healthy human brain tissues . these results combined would point to the kca3.1 channels as a potential target for newer therapeutic approaches against glioblastomas .
kca3.1 channel blockers are indeed beginning to be considered in therapy , and certain results appear encouraging .
first , the kca3.1 channel blocker tram-34 , as well as more recently developed analogs have been found to effectively penetrate into the brain and reach interesting brain concentrations after intraperitoneal injection [ 40 , 53 ] .
second , a kca3.1 channel inhibitor , senicapoc from icagen inc . , has already been used in phase ii clinical trials for sickle cell disease and asthma and appears to be well tolerated and safe in humans .
thus this compound could be a convenient starting point to develop effective drugs against glioblastoma .
it would be most interesting to investigate whether kca3.1 channels are expressed in glioblastoma stem cells , and whether they underlie , as in the ordinary glioblastoma cells , the main processes of cell growth , migration , and angiogenesis .
much remains to be done instead to clarify the diagnostic and prognostic relevance associated with the expression of the kca3.1 channel in glioblastoma cells .
it would be important in this respect to verify whether the level of kca3.1 channel expression is correlated with the grade of the tumor and with the expression of other recognized tumor markers. it would also be very important to conclusively clarify the involvement of kca3.1 channels in the cell cycle progression of glioblastoma cells, and whether their activity is needed for other functional roles relevant to this pathology.
notably , we have preliminary evidence for an effect of tram-34 in the glioblastoma - induced angiogenesis , a process that allows glioblastoma cells to ensure themselves for the necessary oxygen and nutrients [ 122 , 123 ] .
the relevance of this study is underpinned by the observation that antiangiogenic therapies are considered clinically very effective and promising. if a role of kca3.1 channels in glioblastoma-induced angiogenesis is confirmed, the use of kca3.1 channel inhibitors may be expected to be particularly effective in the treatment of this pathology, given their inhibitory action on two distinct vital functions for the tumor mass, namely, cell spreading and angiogenesis. | glioblastomas are characterized by altered expression of several ion channels that have important consequences in cell functions associated with their aggressiveness , such as cell survival , proliferation , and migration .
data on the altered expression and function of the intermediate - conductance calcium - activated k ( kca3.1 ) channels in glioblastoma cells have only recently become available .
this paper aims to ( i ) illustrate the main structural , biophysical , pharmacological , and modulatory properties of the kca3.1 channel , ( ii ) provide a detailed account of data on the expression of this channel in glioblastoma cells , as compared to normal brain tissue , and ( iii ) critically discuss its major functional roles .
available data suggest that kca3.1 channels ( i ) are highly expressed in glioblastoma cells but only scantly in the normal brain parenchyma , and ( ii ) play an important role in the control of glioblastoma cell migration .
altogether , these data suggest kca3.1 channels as potential candidates for a targeted therapy against this tumor . |
it is characterized by a complex of neuropathological, biochemical, and behavioral symptoms that gradually impair memory, the ability to learn and to carry out daily activities, and behavior.
in particular , ibotenic acid - induced lesions of the nbm in rats produce impairment in acquisition and retention phase of passive avoidance tasks .
studies have shown that the nbm is widely involved in the pathogenesis of ad and that its damage is accompanied by cognitive deficits . a number of studies have suggested that exercise in aged rats improves learning and neurogenesis .
so it may slow the onset and progression of learning and memory deficit in ad .
it has been shown that regular exercise attenuates motor deficits , increases formation of new neurons , and ameliorates neurological impairment in several neurodegenerative diseases [ 12 - 14 ] . some studies on transgenic mouse models of ad have shown beneficial effects of exercise on general activity , spontaneous alternation , object recognition memory , and spatial learning performance .
some animal studies have revealed that treadmill running improves spatial and passive avoidance learning in nbm - lesion rats . in previous studies of transgenic and nbm - lesion models , the effects of physical activity were investigated after the onset of ad in these models , and no basic research has been performed to estimate the preventive effects of exercise .
the aims of this study were to investigate preventive effects of treadmill running in nbm - lesion rats on learning and memory deficit by passive avoidance task in order to examine whether treadmill running can prevent learning impairment in this model of ad .
male wistar rats ( 250 - 300 g ; n = 54 ) were obtained from jondishapour university ( ahwaz , iran ) .
the animals were kept in the animal house , provided with food and water ad libitum , and experienced a 12:12-h light - dark cycle ( 07:00 - 19:00 ) in a temperature - controlled environment ( 22 ± 2 c ) .
this study was approved by the ethics committee for animal experiments at isfahan university , and all experiments were conducted in accordance with the international guiding principles for biomedical research involving animals , which was revised in 1985 .
rats were randomly allocated to the following groups :
control group ( co ; n = 11 ) : no injection , no exercise .
sham operation ( sh ; n = 10 ) : saline ( drug solvent ) was injected in nbm .
alzheimer group ( a ; n = 11 ) : ibotenic acid was injected in nbm .
exercise before alzheimer group ( e - a ; n = 12 ) : ibotenic acid was injected bilaterally in nbm , and then rats exercised 21 days .
exercise group ( e ; n = 10 ) : rats exercised 21 days .
rats were anesthetized with chloral hydrate ( 450 mg / kg , i.p . ) and then placed in a stoelting stereotaxic apparatus ( incisor bar 3.3 mm , ear bars positioned symmetrically ) . the scalp was cleaned and incised on the midline , a burr hole was drilled through the skull , and ibotenic acid ( cat number 12765 , sigma ) was injected at the coordinates ap = 1.2 , ml = 3.2 , dv = 7.5 mm from the skull surface .
10 g/l of ibotenic acid was injected ( 5 g/l on each side ) with a microinjection pump at a speed of 120 lit / h ; in the sham group , saline was injected instead of the ibotenic acid solution .
post - operatively , the rats were given special care until spontaneous feeding was restored .
behavioral tests were conducted four weeks after the surgery and were evaluated blind to the treatments by the observer .
rats in the exercise groups ran on a treadmill at a speed of 20 - 21 m / min for 60 min daily ( 6 days a week ) for 3 weeks at a fixed inclination . to familiarize the animals with the experimental set - up , the treadmill
was switched on and the speed was increased from 5 to 21 m / min , and over the course of 6 days the duration was increased from 10 to 60 min . when exercising rats moved back on the treadmill , electric shocks were used sparingly to impel the animal to run . from week 2 onwards , speed and duration were kept constant at 20 - 21 m / min for 60 min per run after warm - up .
the non - runner groups were not put on the treadmill for the duration that the runners ran .
after 21 days , rats in all exercise groups were subjected to the passive avoidance learning ( pal ) test .
the training apparatus had two compartments comprising a small chamber ( 25 × 25 × 20 cm ) and a large dark compartment ( 50 × 25 × 20 cm ) .
electric shocks were delivered to the grid floor by an isolated stimulator . at the beginning of the test , each rat was placed in the apparatus for 5 min to become habituated . on the second day , an acquisition trial was performed ; rats were placed individually in the illuminated chamber . after a habituation period ( 1 min ) , the guillotine door was lifted .
then , after the rat entered the dark chamber , the door was lowered and an inescapable scrambled single electric shock ( 75 v , 0.2 ma , 50 hz ) was delivered for 3 seconds .
latency to cross the dark compartment ( i.e. , pre - shock latency ) was recorded .
after exposure to the foot shock , the rat was removed from the passive avoidance apparatus to its home cage .
the rat was placed in the lighted ( safe ) compartment again with access to the dark compartment without any shock .
the latency to enter the dark compartment was measured ( i.e. , testing latency ) up to a maximum of 300 seconds . in the passive avoidance test , all results were compared using a kruskal - wallis nonparametric one - way analysis of variance corrected for ties , followed by a two - tailed mann - whitney u test .
the comparisons of retention time 24 h ( within groups ) were analyzed by friedman test , followed by a wilcoxon signed ranks test .
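as a concrete illustration of this analysis chain , the sketch below runs the same kind of nonparametric tests with scipy ; all latency values and group sizes are invented placeholders rather than the study data , and the friedman test would only apply when more than two repeated measures per animal are compared .

```python
# hedged sketch of the nonparametric analysis described above (scipy);
# all latency values below are invented placeholders, not the study data.
import numpy as np
from scipy import stats

retention = {  # hypothetical 24 h retention latencies (s) per group
    "co":  [300, 280, 300, 250, 300, 290, 300, 300, 270, 300, 300],
    "sh":  [300, 260, 300, 240, 300, 300, 280, 300, 300, 250],
    "a":   [60, 45, 90, 30, 120, 75, 50, 80, 65, 40, 55],
    "e-a": [300, 220, 300, 300, 260, 300, 300, 240, 300, 300, 300, 280],
    "e":   [300, 300, 270, 300, 300, 300, 250, 300, 300, 300],
}

# between-group comparison: kruskal-wallis, then pairwise two-tailed mann-whitney u
h, p_kw = stats.kruskal(*retention.values())
print(f"kruskal-wallis: h = {h:.2f}, p = {p_kw:.4f}")
for name in ("co", "sh", "e-a", "e"):
    u, p = stats.mannwhitneyu(retention["a"], retention[name], alternative="two-sided")
    print(f"a vs {name}: u = {u:.1f}, p = {p:.4f}")

# within-group comparison (pre-shock vs 24 h latency): wilcoxon signed-rank test;
# scipy.stats.friedmanchisquare would be used if more than two repeated measures
# per animal were compared, as in the friedman-then-wilcoxon scheme above.
pre_shock = np.array([20, 25, 18, 30, 22, 27, 19, 24, 21, 26])
post_shock = np.array([300, 280, 300, 260, 300, 300, 240, 300, 300, 270])
w, p_wx = stats.wilcoxon(pre_shock, post_shock)
print(f"wilcoxon pre vs post: w = {w:.1f}, p = {p_wx:.4f}")
```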
after the completion of behavioral tests , the rats were sacrificed and brains were removed and fixed in formalin , and then were sectioned to verify ibotenic acid injection site .
the lesions were reconstructed on standardized sections of the rat brain [ figure 1 ] .
the latency was measured in pre foot shock ( acquisition time ) and 24 h post foot shocks ( retention time ) .
results indicated that the pre - shock ( acquisition ) latency differed between groups : acquisition time was longer in the control ( co ) group than in the alzheimer ( a ) group ( p < 0.01 ) and longer in the exercise - alzheimer ( e - a ) group than in group a ( p < 0.05 ) . there was also a significant difference between the exercise ( e ) group and group a ( p < 0.01 ) , as well as between the sham ( sh ) group and group a [ p < 0.05 , figure 2 ] .
comparison of latency to enter the dark chamber before receiving foot shock ( acquisition time ) .
there were significant differences between control ( n= 11 ) and a ( n= 11 ) group ( p < 0.01 ) , as well as a and e ( n= 12 ) group ( p < 0.01 ) .
there were significant differences between a and others ( sh and e - a groups , p < 0.05 ) .
all results were analyzed by the kruskal - wallis test followed by the mann - whitney u test . however , the retention time during testing ( i.e. , testing latency measured 24 h after receiving the foot shock ) was significantly decreased in group a when compared to the other groups ( co , sh , e , and e - a ; p < 0.001 , p < 0.01 , p < 0.001 , p < 0.001 , respectively ) .
the retention time changes were not significant between group e and group sh . also , there were no significant differences in retention time among the control , sh , and e - a groups [ figure 3 ] .
comparison of latency to enter the dark chamber 24 h after receiving foot shock ( the retention time ) .
the retention time was significantly decreased in the a group compared to others ( control , sh and e - a group , p < 0.001 , p < 0.01 , p < 0.001 respectively ) .
all results were analyzed by the kruskal - wallis test followed by the mann - whitney u test . the pre - foot - shock and post - foot - shock latencies were then compared within each group by a paired - sample analysis .
our data showed that there were significant differences between pre and post foot shock latency in all groups [ p < 0.01 , figure 4 ] .
there were significant differences in pre and post foot shock latency in all groups ( p < 0.01 )
in the present study , the passive avoidance retention and retrieval function was impaired by ibotenic acid nbm - lesions .
our results showed that treadmill exercise training significantly improves passive avoidance performance in normal and nbm - lesion rats [ figures 2 and 3 ] .
chopin et al . demonstrated significant performance deficits in both passive avoidance and morris water maze tests in bilateral nbm - lesion rats .
they showed that regular exercise significantly attenuated the lesion - associated decrease in brain functions . in those studies , animals exercised after the onset of ad in the animal model . in the present research , the relationship between treadmill running and its preventive effects on the learning and memory deficits observed in nbm - lesion rats was investigated .
our results indicated that , in comparison to nbm - lesion rats , learning and memory in group e - a were improved ; that is , exercise had preventive effects on the impairment of acquisition and retention time in the passive avoidance test in nbm - lesion rats [ figures 2 and 3 ] .
furthermore , exercise may postpone memory impairments through mechanisms such as increases in dopamine and muscarinic receptor density , acetylcholine level , neurotransmitter release in the hippocampus , brain derived neurotrophic factor and bdnf gene expression , and neuron proliferation and survival in the animal 's brain , in line with the findings of van praag et al .
also , other researchers suggested that exercise training ( swimming ) increased memory of rats in the passive avoidance test , but this increase was temporary after stopping exercise , because swimming requires physical effort and , compared to treadmill running , is known to be an effective stressor in rats . according to our results , treadmill running had beneficial effects on counteracting the learning and memory deficits of the nbm - lesion animal model of ad and probably can reduce dementia in patients .
exercise effects might be the result of structural and biological changes in brain , which enhance neuron numbers .
an increase in cell proliferation or a decrease in cell death increases the length and number of dendrites and the connections between neurons , as well as synaptic plasticity in the hippocampus , which is involved in learning and memory.[253234 ] it has been suggested that the mechanisms underlying some of the mentioned effects of exercise include altered gene expression and increased neurotrophic factors such as brain derived neurotrophic factor and insulin - like growth factor i , which are important for neuronal survival and differentiation , as well as for synaptic plasticity.[3537 ] hence , all of the aforesaid factors may have a pivotal role in the enhancement of learning and memory by exercise .
our results at the behavioral level ( passive avoidance test ) emphasize the role of treadmill running in the prevention of learning and memory impairments in nbm - lesion rats .
memory deficit caused by the nbm lesion was also reversed by treadmill running , suggesting enhancement of learning and memory functions through physical activity . | background : alzheimer 's disease is known as a progressive neurodegenerative disorder in the elderly and is characterized by dementia and severe neuronal loss in some regions of the brain such as the nucleus basalis magnocellularis .
it plays an important role in the brain functions such as learning and memory .
loss of cholinergic neurons of nucleus basalis magnocellularis by ibotenic acid can commonly be regarded as a suitable model of alzheimer 's disease .
previous studies reported that exercise training may slow down the onset and progression of memory deficit in neurodegenerative disorders .
this research investigates the effects of treadmill running on acquisition and retention time of passive avoidance deficits induced by ibotenic acid nucleus basalis magnocellularis lesion . methods : male wistar rats were randomly selected and divided into five groups as follows : control , sham , alzheimer , exercise before alzheimer , and exercise groups .
treadmill running had a 21 day period and alzheimer was induced by 5 g/l bilateral injection of ibotenic acid in nucleus basalis magnocellularis . results : our results showed that ibotenic acid lesions significantly impaired passive avoidance acquisition ( p < 0.01 ) and retention ( p < 0.001 ) performance , while treadmill running exercise significantly ( p < 0.001 ) improved passive avoidance learning in nbm - lesion rats . conclusion : treadmill running has a potential role in the prevention of learning and memory impairments in nbm - lesion rats .
recent discoveries have underlined a key role of astrophysics in the study of nature . in this paper
we present a potential instrument for measuring high energy photon polarization with a proven detector technique , which should allow preparation of a reliable tool for a space - borne observatory .
polarization of the photon has played an important role ( sometimes even before it was recognized ) in physics discoveries such as the famous young s interference experiment @xcite , michelson - morley s test of the ether theory @xcite , determination of the neutral pion parity @xcite and many others , including more recently the spin structure of the nucleon @xcite .
polarization of the cosmic microwave background ( cmb ) will likely be a crucial observable for the inflation theory ( see planck [ sci.esa.int/planck ] and bicep [ bicepkeck.org ] results ) . during the last decade ,
observations from the agile [ agile.rm.iasf.cnr.it ] and fermi - lat [ www-glast.stanford.edu ] pair production telescopes have enhanced our understanding of gamma ( @xmath0 ) ray astronomy . with the help of these telescopes numerous high energy @xmath0 ray sources have been observed .
however , the current measurements are insufficient to fully understand the physics mechanism of such @xmath0 ray sources as gamma ray bursts ( grbs ) , active galactic nuclei ( agns ) , blazars , pulsars , and supernova remnants ( snrs ) . even though both telescopes cover a wide range of energy ( from 20 mev to more than 300 gev ) , neither of them is capable of polarization measurements .
medium to high energy photon polarimeters for astrophysics were proposed by the nasa group @xcite and recently by the saclay group @xcite .
both are considering ar(xe)-based gas - filled detectors : the time projection chamber with a micro - well or micromega section for amplification of ionization .
in this paper we evaluate the features of an electron - positron pair polarimeter for the full energy range from 20 mev to 1000 mev and then propose a specific design for a polarimeter in the 100 to 300 mev energy range using silicon micro - strip detectors , msds , whose principal advantage with respect to the gas - based tpc is that the spatial and two - track resolution is about five to ten times better . the paper is organized in the following way : in section [ sec_moti ] we briefly discuss the motivation for cosmic @xmath0 ray polarimetry in the high energy region .
section [ sec_pol ] is devoted to measurement techniques , polarimeters being built and current proposals . in section [ sec_flux ]
we calculate the photon flux coming from the crab pulsar and crab nebula .
the design of the new polarimeter and its performance are discussed in the last few sections .
there are several recent reviews of photon polarimetry in astrophysics @xcite which address many questions which we just briefly touch on in this section .
photon polarimetry for energy below a few mev is a very active field of astrophysical research , and some examples of the productive use of polarimetry at these energies include : detection of exoplanets , analysis of chemical composition of planetary atmosphere , and investigation of interstellar matter , quasar jets and solar flares .
however , no polarization measurements are available in the medium and high energy regions because of the instrumental challenges .
the primary motivation in proposing a polarimeter is our interest in understanding the emission and production mechanisms for polarized @xmath0 rays in pulsars , grbs , and agns by measuring polarization of cosmic @xmath0 rays in this under - explored energy region ( @xmath1 @xmath2 mev ) .
additionally , the polarization observations from the rotation - powered and accretion - powered pulsar radiation mechanisms could help to confirm the identification of black hole candidates @xcite .
polarization measurements could reveal one of the possible effects induced by quantum gravity , the presence of small , but potentially detectable , lorentz or cpt violating terms in the effective field theory .
these terms lead to a macroscopic birefringence effect of the vacuum ( see @xcite for more information ) .
up to now , the highest energy linear polarization measurement has been for grb 061122 in the 250 - 800 kev energy range @xcite , and vacuum birefringence has not been observed in that region .
therefore , extending polarization sensitivity to higher energies could lead to detection of vacuum birefringence , which would have an extraordinary impact on fundamental physics , or in the case of null detection we could significantly improve the present limits on the lorentz invariance violation parameter .
further , according to the observations by the energetic gamma ray experiment telescope ( egret ) [ heasarc.gsfc.nasa.gov/docs/cgro/egret ] , the synchrotron emission of the crab nebula is significant in the energy below @xmath3 200 mev @xcite .
additionally , the theoretical studies state that most of the @xmath0 rays coming from the crab nebula around 100 mev may come from its inner knot @xcite , so the observations in the neighborhood of 100 mev will help to test this theoretical hypothesis and confirm the emission mechanism .
furthermore , the observation of the @xmath0 rays from the crab pulsar provides strong evidence of the location of the @xmath0 ray emitting region as it lies in the center of the nebula .
it is also worth mentioning that polarimetry could test the theories assuming existence of axions ( hypothetical particles introduced to solve the strong cp problem of qcd ) .
it is interesting that the same axions or axion - like particles can serve as a foundation for a relevant mechanism of sun luminosity @xcite .
a theoretical study @xcite has shown that polarization observations from grbs can be used to constrain the axion - photon coupling : @xmath4 for the axion mass @xmath5 ev .
the limit of the coupling scales is @xmath6 ; therefore , the polarimetry of grbs at higher energies would lead to tighter constraints . in two of the following subsections
we will briefly explain how polarization measurements are involved in confirming the emission mechanism and geometry of two above - mentioned sources .
pulsars are a type of neutron star , yet they are highly magnetized and rotate at enormous speeds .
the questions concerning the way magnetic and electric fields are oriented , how particles are accelerated and how the energy is converted into radio and @xmath0 rays in pulsars are still not fully answered .
because of the extreme conditions in pulsars interiors , they can be used to understand poorly known properties of superdense , strongly magnetized , and superconducting matter @xcite . moreover , by studying pulsars one can learn about the nuclear reactions and interactions between the elementary particles under these conditions , which can not be reproduced in terrestrial laboratories .
particle acceleration in the polar region of the magnetic field results in gamma radiation , which is expected to have a high degree of polarization @xcite .
depending on the place where the radiation occurs , the pulsar emission can be explained in the framework of a polar cap model or an outer cap model . in both models ,
the emission mechanism is similar , but polarization is expected to be dissimilar @xcite ; hence , polarimetry could be used to understand the pulsar s emission mechanism .
polarization measurements would also help to understand grbs .
the grbs @xcite are short and extremely bright bursts of @xmath0 rays .
usually , a short - time ( from @xmath5 s to about @xmath7 s ) peak of radiation is followed by a long lasting afterglow .
the characteristics of the radiation emitted during the short - time peak and during the afterglow are different .
the number of high - energy photons which may be detected during the short - time burst phase is expected to be small compared with the one for the long - lived emission .
while only about 3% of the energy emitted during the short - time burst is carried by high - energy photons with @xmath8 mev , the high energy photons of the afterglow carry about half of the total emitted energy . therefore , there is a possibility of observing polarization of high energy photons during the afterglow .
the emission mechanism of grbs , the magnetic composition , and the geometry and morphology of grb jets are still uncertain but can be at least partly revealed in this way .
it is worth noting that several studies have discussed how the degree of polarization , @xmath9 , depends on the grb emission mechanisms . in one example , using monte carlo methods toma @xmath10 @xcite showed that the compton drag model is favored when the degree of polarization @xmath9@xmath11@xmath12 , and @xmath13-@xmath14 concerns the synchrotron radiation with the ordered magnetic fields model .
moreover , studies by mundell @xmath10 @xcite and lyutikov @xmath10 @xcite have proven that polarimetry could assist in revealing the geometry of grb jets .
several physical processes such as the photoelectric effect , thomson scattering , compton scattering , and electron - positron pair production can be used to measure photon linear polarization .
polarimeters based on the photoelectric effect and thomson scattering are used at very low energies .
compton polarimeters are commonly used for energies from 50 kev to a few mev @xcite .
some of the major achievements in astrophysics that were obtained using polarimetry are : the discovery of synchrotron radiation from the crab nebula @xcite ; the study of the surface composition of solar system objects @xcite ; the measurement of the x - ray linear polarization of the crab nebula @xcite , which is still one of the best measurements of linear polarization for astrophysical sources ; mapping of solar and stellar magnetic fields @xcite ; detection of polarization in the cmb radiation @xcite ; and analysis of large scale galactic magnetic fields @xcite . the measurement of polarization in the high energy @xmath0 ray regime can be done by detecting the electron - positron pairs produced by @xmath0 rays and analyzing the non - uniformity of the event distribution in the electron - positron pair plane angle , as discussed in ref .
however , implementation of this technique should consider limitations due to multiple coulomb scattering in the detector , and there are no successful polarization measurements for astrophysical sources in the energy regime of interest in our paper . a number of missions have included cosmic @xmath0 ray observations , but only a few of them are capable of measuring polarization .
the polarimetry measurements were mainly restricted to @xmath0 rays with low energies e @xmath15 10 mev . as an example
, the reuven ramaty high energy solar spectroscopic imager ( rhessi ) [ hesperia.gsfc.nasa.gov/rhessi3 ] , launched to image the sun at energies from 3 kev to 20 mev , was capable of polarimetry up to 2 mev , and the results were successfully used to study the polarization of numerous solar flares @xcite .
the spi detector international gamma - ray astrophysics laboratory ( integral ) instrument [ sci.esa.int/integral ] has the capability of detecting polarization in the range of 3 kev to 8 mev @xcite .
it was used to measure the polarization of grb 041219a , and later a high degree of polarization of @xmath0 rays from that source @xcite was confirmed .
the tracking and imaging gamma ray experiment ( tigre ) compton telescope , which observes @xmath0 rays in the range of @xmath16
@xmath17 mev , can measure polarization up to 2 mev . recently ,
morselli @xmath10 proposed gamma - light to detect @xmath0 rays in the energy range 10 mev - 10 gev ,
and they believe that it will provide solutions to all the current issues that could not be resolved by agile and fermi - lat in the energy range 10 200 mev .
it can also determine the polarization for intense sources for the energies above a few hundred mev with high accuracy @xcite .
in spite of the limitations of the instruments capability , there are numerous polarimetry studies in @xmath0 ray astrophysics , and various proposals have been put forth regarding medium and high energy @xmath0 ray polarimeters .
for example , bloser @xmath10 @xcite proposed the advanced pair telescope ( apt ) , also a polarimeter , in the @xmath3 50 mev 1 gev range . that proposal uses a gas - based time projection chamber ( tpc ) with micro - well amplification to track the @xmath18 , @xmath19 path .
the polarization sensitivity was estimated by using geant4 monte carlo simulations .
preliminary results indicated that it will be capable of detecting linearly polarized emissions from bright sources at 100 mev .
as an updated version of the apt , hunter @xmath10
@xcite suggested the advanced energetic pair telescope ( adept ) for @xmath0 ray polarimetry in the medium energy range ; further , they mentioned that it would also provide better photon angular resolution than fermi - lat in the range of @xmath3 5 to @xmath3 200 mev .
harpo is a hermetic argon tpc detector proposed by bernard @xmath10 @xcite which would have high angular resolution and would be sensitive to polarization of @xmath0 rays with energies in the mev - gev range .
a demonstrator for this tpc was built , and preliminary estimates of the spatial resolution are encouraging .
currently , the harpo team is finalizing a demonstrator set up to characterize a beam of polarized @xmath0 rays in the energy range of 2 - 76 mev @xcite .
observations of the @xmath0 rays from the crab pulsar and crab nebula have been reported in ref .
@xcite for eight months of survey data with fermi - lat .
the pulsar dominates the phase - averaged photon flux , but there is an off - pulse window ( 35% of the total duration of the cycle ) when the pulsar flux is negligible and it is therefore possible to observe the nebular emission . according to the conducted analysis , the spectrum of the crab nebula in the @xmath17-@xmath20 mev range can be described by the following combined expression : @xmath21 where the quantity @xmath22 is measured in @xmath23 representing the number of photons reaching @xmath24 @xmath25 of the detector area per second , per @xmath24 mev of energy .
the energy @xmath26 on the right hand side is measured in gev .
the prefactors @xmath27 and @xmath28 are determined by 35% of the total duration of the cycle , while @xmath29 and @xmath30 .
the first and second terms on the right hand side , as well as the indices `` sync '' and `` ic '' , correspond to the synchrotron and inverse compton components of the spectrum , respectively . as one can see , these terms have different dependence on the energy @xmath26 since they represent different contributions to the total spectrum . the first part ( the synchrotron radiation )
comes from emission by high energy electrons in the nebular magnetic field while the second part is due to the inverse compton scattering of the primary accelerated electrons . for convenience
let us rewrite the expression for the spectrum of the crab nebula in the form @xmath31 where the energy @xmath26 on both sides is now measured in mev , so @xmath32 and @xmath33 . integrating @xmath22 , for the photon flux above @xmath17 mev coming from the crab nebula we obtain the number @xmath34 giving for the total cycle duration @xmath35 , or @xmath36 .
at the same time the averaged spectrum of the crab pulsar is described in @xcite as follows : @xmath37 where @xmath38 , @xmath39 and the cut - off energy @xmath40 .
as before , the energy @xmath26 on both sides is measured in mev . integrating this expression , for the photon flux above @xmath17 mev coming from the crab pulsar we obtain the number @xmath41 , or @xmath42 .
thus , the pulsar 's photon flux is twice as intense as the nebula 's .
a fast photometer could be inserted in the polarimeter instrumentation to collect events with a temporal tag and consequently distinguish between nebula and pulsar photons , see e.g. @xcite .
we will use the numbers above for an estimation of the polarimeter results at a 100 mev energy cut .
for a 500 mev cut the statistics drops by a factor of five ( because @xmath43 is much higher the exponential factor does not play a role ) .
it is worth noting that the estimates following from @xcite approximately agree with the corresponding estimates made in @xcite ( where the formulas ( 1 ) and ( 2 ) describe the synchrotron and inverse compton components of the crab nebula spectrum while the formula ( 3 ) describes the averaged crab pulsar spectrum ) .
really , according to @xcite , the total crab nebula photon flux above @xmath17 mev is @xmath44 , while for the crab pulsar the value again reads @xmath45 .
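for orientation , the following sketch reproduces this kind of flux integration numerically ; since the spectral coefficients appear only as @xmath placeholders in this text , the parameter values below are illustrative assumptions and should be replaced by the quoted fermi - lat numbers before any real use .

```python
# sketch of the flux integration described above; the spectral parameters are
# illustrative assumptions (the actual coefficients are given in the cited
# fermi-lat analysis) and must be substituted before any real use.
import numpy as np
from scipy.integrate import quad

def nebula_spectrum(E, n_sync=9.1e-5, g_sync=3.99, n_ic=6.4e-12, g_ic=1.64):
    """synchrotron + inverse-compton power laws; dN/dE in ph cm^-2 s^-1 MeV^-1, E in MeV."""
    return n_sync * E ** (-g_sync) + n_ic * E ** (-g_ic)

def pulsar_spectrum(E, n0=2.4e-4, gamma=1.97, e_cut=5800.0):
    """power law with an exponential cut-off (placeholder parameters)."""
    return n0 * E ** (-gamma) * np.exp(-E / e_cut)

for name, spec in (("nebula", nebula_spectrum), ("pulsar", pulsar_spectrum)):
    flux, _ = quad(spec, 100.0, 3.0e5)   # photons cm^-2 s^-1 above 100 MeV
    print(f"{name}: {flux:.2e} ph cm^-2 s^-1, "
          f"{flux * 1e4 * 3.15e7:.2e} ph m^-2 yr^-1")
```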
the photo production of an electron - positron pair in the field of nuclei is a well understood process which was calculated in qed with all details including the effect of photon linear polarization , see e.g. ref .
the kinematics and variables of the reactions are shown in fig .
[ fig : kinematics ] .
the distribution of events over an azimuthal angle @xmath46 of a positron ( electron ) relative to the direction of an incident photon has the following form : + @xmath47 , where @xmath48 is the analyzing power , @xmath49 is the degree of the photon linear polarization , and @xmath50 is the angle of the photon linear polarization vector in the detector coordinate system . in practice @xcite , angle @xmath51
could be used instead of @xmath46 because at the photon energies of interest the co - planarity angle @xmath52 .
pair photo production ( left picture ) and the azimuthal angles in the detector plane from ref .
the photon momentum is directed along the z axis .
the photon polarization vector @xmath53 is parallel to the x axis .
the angle @xmath54 is the angle between the photon polarization plane and the plane constructed by the momentum of the photon and the momentum of the positron ( the electron ) .
the angle @xmath55 is called the co - planarity angle .
the labels p and n indicate the positions of the crossings of the detector plane by the positron and the electron .
the azimuthal angle @xmath51 between the polarization plane and the vector @xmath56 is a directly measurable parameter . the value of the analyzing power @xmath48 was found to be a complicated function of the event parameters and detection arrangement @xcite .
the numerical integration of the full expression could be performed for given conditions , see e.g. @xcite . in a high energy limit a compact expression for the integrated analyzing power for the pair photo - production from atomic electron
was obtained in ref .
@xcite .
the practical design and the test of the polarimeter for a beam of high energy photons were reported in ref .
there we detected both particles of the pair and reconstructed the azimuthal angle of the pair plane @xmath51 ( see fig .
[ fig : kinematics ] ) .
the analyzing power , averaged over energy sharing between electron and positron and pair open angle of the experimental acceptance , has been found to be 0.116@xmath570.002 , comparable to a 0.14 value as shown in fig .
[ fig : asy_e+e - m ] reproduced from @xcite .
when the pair components move through the converter , the azimuthal angle built on pair coordinates and pair vertex becomes blurred due to multiple scattering .
it is useful to note that the purpose of development in ref .
@xcite was a polarimeter for an intense photon beam .
the thickness of the converter in the beam polarimeter was chosen to be very small to minimize systematics of the measurement of the photon polarization degree .
however , the polarimeter could be calibrated by using the highly polarized photon beams produced in the laser - backscattering facilities . for the cosmic ray polarimeter
, we propose a larger converter thickness and calibration of the device .
such an approach is more productive for cosmic rays studies where a relative systematic error on the polarization degree at the level of 3 - 5% is acceptable .
let us also note that for the photon beam polarimetry there are additional options such as a coherent pair production in an oriented crystal and a magnetic separation of the pair components used many years ago in nuclear physics experiments .
for the space - borne photon investigation , those polarimeters are not applicable for the obvious reasons of the limited angle range for the coherent effects and the large weight and power consumption of the magnetic system .
an active converter with a coordinate resolution of a few microns would allow us to construct a dream device , a very efficient polarimeter .
a real world active - converter device , a gas - filled tpc , has a spatial resolution of 100 @xmath58 m and much larger two - track resolution of 1.5 - 2 mm ( for a few cm long drift distance ) .
such a polarimeter will be a very productive instrument for the photon energy range below 50 mev .
however , because of these resolutions , it would be hard to measure the degree of polarization of photons whose energy is bigger than 100 mev .
a polarimeter with separation of the converter and pair detector functions could benefit from the high coordinate resolution of the silicon msd of 10 - 15 @xmath58 m , its two - track resolution of 0.2 mm , and flexibility for the distance between a converter and pair hits detector : between them would be a vacuum gap .
the key parameters of the polarimeter are the efficiency , @xmath59 , and analyzing power , @xmath48 . here
we outline the analysis of the figure - of - merit , @xmath60 .
we will consider a polarimeter as a stack of individual flat cells , each of which is comprised of a converter with a two - dimensional coordinate readout and a coordinate detector for two - track events with no material between them .
the thickness of the converter , where the photon produces the electron - positron pair , defines in the first approximation the polarimeter efficiency as follows : @xmath61 the efficiency of one cell is @xmath62 , where @xmath63 is the thickness of the converter in units of radiation length .
the @xmath64 is the reduction of the photon flux due to absorption in a single cell defined as @xmath65 , where @xmath66 is the thickness of the cell in units of radiation length , and @xmath67 is the number of cells in the device of length @xmath68 and geometrical thickness of the cell @xmath69 .
the converter thickness needs to be optimized because above some thickness it does not improve the @xmath70 or the accuracy of the polarimeter result ( see the next section ) . as it is shown in fig .
[ fig : asy_e+e - m ] , selection of the symmetric pairs ( @xmath71 ) provides an analyzing power @xmath72 while the averaged over pair energy sharing @xmath73 .
however , the value of the @xmath70 is largest when the cut on pair energy sharing is relaxed .
the practical case for the energy cut is @xmath74 , which allows us to avoid events with a low value of @xmath48 and most @xmath75-electron contamination .
the average value of @xmath48 for such a range of @xmath76 energy sharing is 0.20 .
the energy of the particles and the shower coordinates could be measured by a segmented electromagnetic calorimeter or estimated from the width of the track , which due to multiple scattering is inversely proportional to the particle momentum . in the photon energy range of interest
, both electron and positron will pass through a large number of cells .
determination of particle energy based on multiple scattering would provide @xmath320% relative energy resolution , which is sufficient for the proposed cut @xmath74 .
estimation of the particle energy could also be useful for rejection of the hits in msds induced by the @xmath75-electrons .
a monte carlo simulation was used to evaluate the general effects of pair production in the converter and the specific design of the polarimeter .
we used a geant3-based mc code to study the photon detection efficiency and electron - positron pair azimuthal distribution in a wide range of the converter thickness up to 10% of radiation length . because both the pair opening angle and multiple scattering are scaled with the photon energy , the distributions are almost energy independent .
we present first the results for the 100 mev photon energy for different thicknesses of the converter at a fixed distance of 20 mm between the converter end and the detector .
we used a standard geant3 pair production generator for the unpolarized photons and at the conversion point introduced a weighting factor for each event as @xmath77 to simulate a polarization effect on an event - by - event basis .
the value of azimuthal angle modulation at the pair production point was fixed at 0.20 , which is the average value of the analyzing power @xmath48 at production over the selected range of particle energies .
the pair component propagation was realized in the mc , and the track parameters were evaluated .
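a minimal sketch of this event - by - event weighting is given below , assuming the standard 1 + a p cos 2 ( phi - phi0 ) azimuthal modulation ; the analyzing power a = 0.20 matches the value quoted above , while the polarization degree and polarization - plane angle are assumed values .

```python
# minimal sketch of the event-by-event polarization weighting used in the mc;
# the 1 + A*P*cos(2*(phi - phi0)) modulation is the standard azimuthal form,
# A = 0.20 follows the text, and P, phi0 are assumed values.
import numpy as np

rng = np.random.default_rng(1)
n_events = 200_000
A, P, phi0 = 0.20, 1.0, 0.0

phi = rng.uniform(0.0, 2.0 * np.pi, n_events)        # pair-plane azimuthal angle
weights = 1.0 + A * P * np.cos(2.0 * (phi - phi0))    # per-event weight

# a simple fourier estimator recovers the injected modulation (before the
# smearing from multiple scattering that the full simulation adds):
a_reco = 2.0 * np.average(np.cos(2.0 * (phi - phi0)), weights=weights)
print(f"injected A*P = {A * P:.3f}, reconstructed modulation = {a_reco:.3f}")
```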
[ fig : mc100 ] shows the summary of mc results .
the apparent optimum converter thickness is close to 1 mm for which the projected @xmath78 is reasonably large and the @xmath70 is close to the saturation limit .
however , we are expecting that when @xmath75-electron hits are included in analysis the optimum thickness for the 100 mev photon case will be smaller and the @xmath70 would be a bit lower .
the coordinate detector allows determination of the opening angle between the pair components and the azimuthal angle of the pair plane relative to the lab coordinate system , the main variable for measurement of the photon polarization .
such a detector is characterized by the coordinate resolution , @xmath79 , and the minimum two - track distance , @xmath80 , at which coordinates of two tracks could be determined with quoted @xmath79 accuracy .
the @xmath80 is typically 2 mm for drift chamber .
for tpc with micromega amplification stage and strip - type readout , @xmath80 is about 4 strips or 1.6 mm ( pitch equal to 400 @xmath81 ) . for silicon msd @xmath80
is about 0.20 mm ( pitch equal to 50 @xmath81 ) .
the opening angle between the pair components is on the order of @xmath82 , where @xmath83 is the photon energy and @xmath84 is the electron rest mass .
the events with an opening angle larger than @xmath85 but less than @xmath86 provide most of the analyzing power , as it is shown in ref .
it is easy to find that the resulting geometrical thickness of the cell is @xmath87 , whose numerical values are shown in tab . [ tab : cell_thickness ] ( the geometrical cell thickness , @xmath69 , in cm , for the different detectors and photon energies ) .
for photon energies above 100 mev , the silicon msd is a preferable option because of the limit on the apparatus 's total length . indeed , considering a 300 cm total length and an energy of 500 mev , the number of cells is 5 for the drift chamber option , 6 for the tpc / micromega , and 46 for the msd option . on the other hand ,
the total amount of matter in the polarimeter should be limited to one radiation length or less , because of significant absorption of the incident photons which will reduce the average efficiency per cell .
for example , in the msd option 54% absorption will occur with 46 cells ( 1 mm thickness 2d readout converter detector and two 0.3 mm thickness 1d readout track detectors ) .
the detection efficiency ( pair production in the converters ) could be estimated as @xmath88 ( eq . [ eq : eff ] ) , which is about 34% for the selected parameters of the polarimeter . however , the useful statistics result is lower due to a cut on the pair components energies and contributions of the non - pair production processes , especially at low photon energy . for a photon energy of 100 mev ,
the obtained efficiency is 0.28% per cell and the overall efficiency for the 46 cell polarimeter is 9% .
the efficiency becomes significantly larger ( @xmath315% ) for a photon energy of 1000 mev .
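the back - of - the - envelope sketch below illustrates how such an efficiency estimate can be assembled cell by cell ; the silicon radiation length ( about 9.37 cm ) and the 7/9 pair - conversion factor are textbook values rather than numbers taken from this text , and the crude attenuation model only roughly reproduces the absorption and conversion scale quoted above .

```python
# rough per-cell / total conversion-efficiency estimate for the msd stack
# (46 cells: one 1 mm 2d converter plus two 0.3 mm 1d trackers per cell).
# the silicon radiation length (~9.37 cm) and the 7/9 conversion factor are
# textbook values, not taken from this paper; attenuation is crudely taken
# as exp(-thickness / X0), so the numbers are only indicative.
import math

X0_SI = 9.37            # silicon radiation length, cm (assumed)
N_CELLS = 46
t_conv = 0.1 / X0_SI    # 1 mm converter per cell, in radiation lengths
t_cell = 0.16 / X0_SI   # 1.6 mm of silicon per cell in total

total_eff = 0.0
survive = 1.0           # fraction of photons reaching a given cell
for _ in range(N_CELLS):
    total_eff += survive * (1.0 - math.exp(-(7.0 / 9.0) * t_conv))  # pair conversion
    survive *= math.exp(-t_cell)                                    # crude absorption

print(f"absorbed fraction over the stack   : {1.0 - survive:.2f}")
print(f"pair-conversion efficiency (approx): {total_eff:.2f}")
```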
assuming observation of the crab pulsar photon source with a 1 m@xmath89 detector for one year the total statistics of pairs ( above a 100 mev photon energy cut ) was estimated to be @xmath90 .
for the projected analyzing power @xmath48 of 0.10 the statistical accuracy of the polarization measurement is @xmath91 for the msd detector option .
realization of such high accuracy would require a prior calibration of the polarimeter at a laser - back scattering facility .
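as a cross - check of this estimate , the short sketch below propagates an approximate crab pulsar rate through the quoted efficiency and analyzing power ; both the flux value and the sigma_p = sqrt(2/n)/a formula ( the standard uncertainty for a fitted cos 2 phi modulation ) are assumptions of the sketch rather than numbers stated explicitly in this text .

```python
# back-of-the-envelope polarization sensitivity; the photon rate is an
# approximate crab-pulsar flux above 100 mev and sigma_P ~ sqrt(2/N)/A is the
# standard uncertainty for a fitted cos(2*phi) modulation -- both assumptions.
import math

flux = 2.1e-2        # pulsar photons m^-2 s^-1 above 100 MeV (approximate)
area = 1.0           # m^2
t_obs = 3.15e7       # s, about one year
eff = 0.09           # useful pair efficiency at 100 MeV (from the text)
A = 0.10             # projected analyzing power (from the text)

n_pairs = flux * area * t_obs * eff
sigma_p = math.sqrt(2.0 / n_pairs) / A
print(f"expected pairs: {n_pairs:.2e}")
print(f"1-sigma accuracy on polarization degree: {sigma_p:.1%}")
```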
the detectors of the cell of the msd - based polarimeter include two parts the first for the measurement of the x and y coordinates of each pair component and the second for the measurement of the x and y coordinates of the production vertex .
the second part should be done using a two - dimensional readout msd because with a one - dimensional readout the most useful events will have only one coordinate of the vertex .
the first part of the cell could be realized with a one- or two - dimensional readout .
we consider below the two - dimensional readout for the third plane only .
the thickness of the silicon plate is 0.3 ( 1.0 ) mm , and the readout strip pitch is 50 ( 100 ) @xmath81 for the first two ( third ) planes .
two first msds are rotated by 90@xmath92 , so three planes allow determination of the coordinates in two - track events .
the third plane will serve as a converter for the next cell .
it provides both coordinates of the production vertex , see fig .
[ fig : cells ] for a three cell example .
the first two msds also serve as veto detector(s ) for the photon converted in the third msd .
a proposed 46 cell structure in a 300 cm long polarimeter leads to a cell length 6.5 cm , which allows optimal coverage of a wide range of photon energies .
a calorimeter will be used for crude ( 10 - 20% ) measurement of the photon energy ( combined energy of the @xmath93 pair ) .
the configuration proposed above called for a 1 mm thickness msd with two - sided readout strips , which is twice as large as the maximum currently available from industry .
we are nevertheless expecting that such an advance in technology could be made for the current project . in any case
the 1 mm converter could be replaced by two 0.5 mm converters .
the projected results of polarization measurement are shown in fig . [ fig : projected - result ] .
the photon angular resolution of the proposed system could be estimated from the msd spatial resolution and thickness of the cell as 1 mrad for 100 mev photon energy .
a @xmath0 ray polarimeter for astrophysics could be constructed using silicon msd technology .
each of 46 cells will include one msd with 2d readout of 1 mm thickness and 0.1 mm pitch and two msds with 1d readout of 0.3 mm thickness and 0.05 mm pitch . using a total of 138 m@xmath89 area of msd ( 46 cells ) and @xmath94 readout channels ( assuming a factor of 10 multiplexing ) the polarimeter would provide a device with 9 - 15% photon efficiency , a 0.10 analyzing power , and a 1 mrad angular resolution . in a year - long observation , the polarization of the photons from the crab pulsar
would be measured to 6% accuracy at an energy cut of 100 mev and @xmath315% accuracy at an energy cut of 1000 mev .
the authors are grateful to s.d . hunter for stimulating and fruitful discussion .
we would like to acknowledge contributions by v. nelyubin and s. abrahamyan in the development of the mc simulation .
this work is supported by nasa award nnx09av07a and nsf crest award hrd-1345219 .
t. young , phil . royal society london , * 92 * 12 ( 1802 ) .
a. michelson and e. morley , american journal of science * 34 * , 333 ( 1887 ) .
c. n. yang , phys . rev . * 77 * , 722 ( 1950 ) ; j. h. berlin and l. madansky , phys . rev . * 78 * , 623 ( 1950 ) .
c. aidala , s. bass , d. hasch , and g. mallot , rev . mod . phys . * 85 * , 655 ( 2013 ) , arxiv : hep - ph/1209.2803 .
p. f. bloser _ et al . _ , the mega project : science goals and hardware development , new astronomy reviews * 50 * , 619 ( 2006 ) .
d. bernard and a. delbart , nucl . instr . meth . a * 695 * , 71 ( 2012 ) ; d. bernard , nucl . instr . meth . a * 729 * , 765 ( 2013 ) , arxiv : astro - ph/1307.3892 .
f. lei , a.j . dean and g.l . hills , space science reviews * 82 * , 309 ( 1997 ) .
m.l . mcconnell and j.m . ryan , new astronomy reviews * 48 * , 215 ( 2004 ) .
m.l . mcconnell and p.f . bloser , chinese journal of astronomy and astrophysics * 6 * , 237 ( 2006 ) .
h. krawczynski _ et al . _ , astroparticle physics * 34 * , 550 ( 2011 ) .
w. hajdas and e. suarez - garcia , polarimetry at high energies , in " observing photons in space : a guide to experimental space astronomy " , eds . m.c.e . huber _ et al . _ , 599 ( springer , 2013 ) .
m. pohl , particle detection technology for space - borne astro - particle experiments , arxiv : physics/1409.1823 .
m. dovciak _ et al . _ , mnras * 391 * , 32 ( 2008 ) , arxiv : astro - ph/0809.0418 .
s. r. kelner , soviet journal of nuclear physics * 10 * , 349 ( 1970 ) ; s.r . kelner , yu.d . kotov , and v.m . logunov , soviet journal of nuclear physics * 21 * , 313 ( 1975 ) .
m. l. mcconnell _ et al . _ , in aas / solar physics division meeting 34 , bulletin of the american astronomical society , vol . 35 , p. 850 ( 2003 ) .
e. kalemci _ et al . _ , * 169 * , 75 ( 2007 ) , arxiv : astro - ph/0610771 .
a. morselli _ et al . _ , nuclear physics b , proc . supp . * 239 * , 193 ( 2013 ) , arxiv : astro - ph/1406.1071 .
_ et al . _ , astroparticle physics * 59 * , 18 ( 2014 ) .
d. bernard _ et al . _ , harpo : a tpc as a gamma - ray telescope and polarimeter , arxiv : astro - ph/1406.4830 ; d. bernard , harpo : a tpc as a high - performance @xmath0-ray telescope and a polarimeter in the mev - gev energy range , conseil scientifique du labex p2io , 17 december 2014 .
_ et al . _ , the astrophysical journal * 708 * , 1254 ( 2010 ) .
f. meddi _ et al . _ , publications of the astronomical society of the pacific , volume 124 , issue 195 , pp . 448 - 453 ( 2012 ) ; f. ambrosino _ et al . _ , journal of the astronomical instrumentation , volume 2 , issue 1 , i d . 1350006 ; f. ambrosino _ et al . _ , proceedings of the spie , volume 9147 , i d . 91478r , 10 pp .
r. buehler _ et al . _ , the astrophysical journal * 749 * , 26 ( 2012 ) .
h. olsen and l. c. maximon , phys . rev . * 114 * , 887 ( 1959 ) ; l. c. maximon and h. olsen , phys . rev . * 126 * , 310 ( 1962 ) .
b. wojtsekhowski , d. tedeschi , and b. vlahovic , nucl . instr . meth . a * 515 * , 605 ( 2003 ) .
v. boldyshev and y. peresunko , yad . fiz . * 14 * , 1027 ( 1971 ) , translation in the soviet journal of nuclear physics , * 14(5 ) * , 576 ( 1972 ) .
c. de jager _ et al . _ , eur . phys . j. a * 19 * , 275 ( 2004 ) ; arxiv : physics/0702246 . | a high - energy photon polarimeter for astrophysics studies in the energy range from 20 mev to 1000 mev is considered .
the proposed concept uses a stack of silicon micro - strip detectors which play the roles of both a converter and a tracker .
the purpose of this paper is to outline the parameters of such a polarimeter and to estimate the productivity of measurements .
our study , supported by a monte carlo simulation , shows that with a one - year observation period the polarimeter will provide 6% accuracy of the polarization degree for a photon energy of 100 mev , which would be a significant advance relative to the currently explored energy range of a few mev .
the proposed polarimeter design could easily be adjusted to the specific photon energy range to maximize efficiency if needed . |
the population in pakistan has a high risk of diabetes and coronary heart disease , and common risk factors related to the two conditions are present at early ages . this elevated risk amongst pakistanis is also seen after migration :
migrants from the indian subcontinent living in europe and america have higher rates of cardiovascular risk factors compared to the locals [ 6 - 10 ] .
few articles have compared the prevalence of risk factors in immigrants living in western societies with those still living in their homesteads in the subcontinent [ 11 - 14 ] .
two of these were comparisons of indians living in london with people living in india .
both showed that british - indians had unfavourable cardiovascular risk factor profile compared to those living in india [ 11 , 12 ] .
one team studied indians in australia and their relatives in india and found that women living in australia had a more desirable risk profile compared to those still living in india .
the researchers concluded that the lack of undesirable weight gain was the reason for the lower risk .
the population of the norwegian capital has been increasingly diversified during the last decades , and there is now a large norwegian - pakistanis population in oslo .
the prevalence of obesity and diabetes in oslo has proven to be very high especially amongst those from pakistan [ 8 , 16 ] .
it has been suggested that migration from developing countries to developed countries leads to these changes .
no study has so far compared pakistanis living in norway with pakistanis living in pakistan .
it is therefore necessary to find out whether there is an elevated prevalence of risk factors for cardiovascular disease in pakistanis residing in oslo compared to those living in the country of origin . in this paper , we compare two pakistani populations , one in norway and the other in pakistan , for cardiovascular risk factors .
most pakistanis living in norway are from an area called kharian , in punjab , and therefore we chose this area for an epidemiological study .
data were obtained from two population - based , cross - sectional surveys conducted in oslo , norway , between 2000 and 2002 with similar protocol .
the oslo health study ( hubro ) was a collaboration between the norwegian institute of public health , the university of oslo and the oslo municipality ( 2000 - 01 ) .
all oslo residents born in 1924 , 1925 , 1940 , 1941 , 1955 , 1960 , and 1970 were invited to a health survey . of these 18,770 ( 46% ) attended .
the second survey , the oslo immigrant health study , was conducted by the norwegian institute of public health and the university of oslo in 2002 . in this survey individuals born in turkey , iran , pakistan , sri lanka , or vietnam between 1942 and 1971
were invited to participate , and the response rate for pakistani immigrants was 31.7% . in the present analysis , we have only included participants born in pakistan ; of these ,
a total of 770 participants ( 415 men and 355 women ) were included from the norwegian material , while in pakistan 358 men and 872 women were included .
subjects were enrolled from 44 villages from this area , about 150 km from the capital islamabad .
this is primarily an agricultural community but has developed rapidly in the recent decades due to migration to the west .
all participants were 20 years or older and attended after fasting for 8 - 10 hours before the examinations .
only subjects aged between 30 and 61 were included in the current analysis in order to match the oslo sampling procedure .
the methods for the survey in pakistan are described in detail elsewhere . from the pakistani material , 1230 subjects were included .
weight and height were measured with a professional scale that has an attached height rod , so that both weight and height can be measured simultaneously .
normal weight was defined as bmi up to 24.9 kg / m2 , and overweight was bmi between 25 and 29.9 kg / m2 .
waist above 80 cm for women and 90 cm for men was labelled high .
waist - hip ratio cutoffs were set at 0.8 and 0.9 for females and males , respectively .
hdl levels below 0.9 and 1.0 mmol / l for males and females , respectively , were regarded as low . in norway , blood pressure was measured with an automatic device ( dinamap , criticon , tampa , fla , usa ) , while a standard sphygmomanometer was used in pakistan .
systolic pressure above 140 mmhg and diastolic pressure above 90 mmhg were classified as high .
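for convenience , the dichotomous cutoffs listed above can be collected into a small helper ; the thresholds are those stated in the text , while the function names are ours .

```python
def bmi_category(bmi):
    """BMI bands used in the text (kg/m2): normal up to 24.9, overweight 25-29.9, obese 30+."""
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def high_waist(waist_cm, sex):
    """Waist above 80 cm for women and 90 cm for men is labelled high."""
    return waist_cm > (80.0 if sex == "female" else 90.0)

def high_whr(whr, sex):
    """Waist-hip ratio cutoffs: 0.8 for females, 0.9 for males."""
    return whr > (0.8 if sex == "female" else 0.9)

def low_hdl(hdl_mmol_l, sex):
    """HDL below 0.9 mmol/L (males) or 1.0 mmol/L (females) is regarded as low."""
    return hdl_mmol_l < (0.9 if sex == "male" else 1.0)

def high_blood_pressure(systolic, diastolic):
    """Systolic above 140 mmHg or diastolic above 90 mmHg is classified as high."""
    return systolic > 140 or diastolic > 90
```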
student 's t - test was used to calculate p values when comparing two means . age - adjusted prevalences and means , obtained by direct standardization with the averaged age distribution of the two samples as the standard population , are presented in brackets or below the tables where applicable .
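a minimal numerical sketch of the two procedures just mentioned ( student 's t - test for comparing two means , and direct age standardization against a common standard population ) ; the age bands , prevalences , and sample sizes below are invented purely for illustration .

```python
import numpy as np
from scipy import stats

def directly_standardized_prevalence(age_specific_prev, standard_weights):
    """Direct standardization: weight age-specific prevalences by a common
    (standard) age distribution and sum."""
    w = np.asarray(standard_weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(np.asarray(age_specific_prev, dtype=float) * w))

# Invented age-specific obesity prevalences in three age bands (e.g., 30-39, 40-49, 50-61)
oslo_prev = [0.18, 0.24, 0.30]
pakistan_prev = [0.05, 0.08, 0.10]
standard = [0.40, 0.35, 0.25]   # averaged age distribution of the two samples

print(directly_standardized_prevalence(oslo_prev, standard))
print(directly_standardized_prevalence(pakistan_prev, standard))

# Student's t-test for comparing two means (e.g., mean weight), simulated data
rng = np.random.default_rng(1)
weight_oslo = rng.normal(80.0, 12.0, 400)
weight_pakistan = rng.normal(70.0, 11.0, 400)
t_stat, p_value = stats.ttest_ind(weight_oslo, weight_pakistan)
print(t_stat, p_value)
```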
four hundred and fifteen pakistani men and 355 women were included in norway , while in pakistan 358 men and 872 women were included .
the mean age in norway was 44.2 years for males and 42.4 for females ( table 1 ) . in pakistan , the mean age was 46.4 and 44.2 for males and females , respectively .
both genders had similar height in norway and pakistan , but their weight on the other hand was not similar .
pakistanis living in norway had significantly higher mean weight and bmi ( table 1 ) .
being overweight and obese , in terms of having a bmi between 25 and 30 and above 30 , was more commonly seen among pakistanis in norway ( table 2 ) .
more than one - fifth of the pakistani males in norway were obese , while only 7% of the males in pakistan had a bmi above 30 .
it was more common for males in norway to have high waist girth and whr compared to males in pakistan .
pakistani males in norway had higher waist circumference , as well as hip girth and waist - hip ratio ( whr ) , compared to males in pakistan .
women in pakistan had higher systolic and diastolic pressure compared to females in norway ( table 3 ) .
both males and females in norway had higher total cholesterol compared to their counterparts in pakistan .
systolic and diastolic blood pressure increased with increasing bmi for both genders in both norway and pakistan ( table 4 ) . not surprisingly , waist and whr increased with bmi , as did total cholesterol . with increasing bmi ,
hdl decreased in the norwegian - pakistanis but increased in those living in pakistan .
the highest standardized beta coefficient was seen for waist girth , with a standardized beta value of approximately 0.8 for all groups .
the standardized beta value for whr was 0.48 and 0.52 for males in norway and pakistan , respectively .
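since waist girth ( or whr ) is regressed on bmi alone here , the standardized beta is simply the slope of the fit after z - scoring both variables , which for a single predictor equals the pearson correlation ; a small sketch with simulated ( not the study 's ) data :

```python
import numpy as np

def standardized_beta(x, y):
    """Slope of the least-squares fit of y on x after z-scoring both variables;
    with a single predictor this equals the Pearson correlation coefficient."""
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return float(np.polyfit(xz, yz, 1)[0])

rng = np.random.default_rng(0)
bmi = rng.normal(27.0, 4.0, 500)                 # simulated BMI values
waist = 2.2 * bmi + rng.normal(0.0, 4.0, 500)    # simulated waist girth, cm
print(standardized_beta(bmi, waist))             # roughly 0.9 for this toy data
```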
however , the results obtained in pakistan ( n = 401 ) were similar to those in norway . in pakistan , 40.5% of the males and 2.8% of the females
were current smokers ; in norway , 34% of the males and 3.8% of the females were smokers .
none of the females in pakistan said they were previous smokers , while 2.2% of the females in norway said so . among the males in pakistan ,
7.4% said they were previous smokers ; among the norwegian - pakistanis , 18.5% were previous smokers .
we demonstrated a high prevalence of obesity and cardiovascular risk factors in both populations ; this is in line with earlier studies [ 11 , 12 ] .
obesity , overweight , and having high levels of lipids were more common in norway , while high blood pressure was seen more frequently in pakistan .
we believe that the two populations are comparable because the majority of the pakistanis living in norway actually migrated from this particular area in pakistan , an area called kharian in the district of gujrat .
therefore , it is reasonable to postulate that the populations are genetically and culturally comparable .
the differences we observe between the two populations could therefore be due to the effects of migration and changes in lifestyle from a low - income to a high - income country .
the differences in weight and bmi are of such a magnitude that they cannot be explained by possible measurement error .
this is particularly true for the males ; the weight difference is more than 10 kg , whilst it is almost 8 kg in women .
this difference is reflected in the bmi ; increased bmi in oslo among the pakistani population may only be explained by added weight in this population , since height remains the same in both populations .
the difference between the populations in waist and hip girth are also evident among the males .
the men living in norway have a waist circumference more than seven cm greater than that of the males in pakistan ; similarly , the hip girth is also more than six cm larger in the norwegian - pakistanis .
some women in pakistan might have been reluctant to remove their clothes for the measurement of the waist and hip girth even though same gender investigators did all the measurements .
women living in rural pakistan might also have had a higher number of pregnancies , which could have resulted in a higher waist , hip , and waist - hip ratio . on the other hand
, the whr did not increase as steeply with increasing bmi among the females in pakistan as it did amongst the pakistani females in norway .
one study has shown that expatriate indian women in australia did in fact have a better risk profile than their counterparts still living in india .
however , this is not surprising since obesity was highly increased among the pakistani population residing in oslo .
several studies have shown that people from south asia living in western societies have a relatively low level of physical activity [ 22 , 23 ] .
this might be the cause of the high level of adiposity among the pakistanis living in norway .
in addition , higher consumption of unhealthy fatty foods , which are readily available in oslo given the higher incomes , together with a sedentary lifestyle , may have contributed to the observed conditions .
the pakistanis living in norway have lower pressures , except for the systolic pressure in males .
we do not have data on use of antihypertensive drugs in pakistan , but we find it reasonable to believe that such medication might be less common than in norway .
due to low access to doctors , undiagnosed hypertension might be more common in rural pakistan compared to oslo .
earlier studies have shown that awareness of hypertension is low in pakistan , and few patients have had their blood pressure measured [ 24 , 25 ] .
it is also important to note that the blood pressure was measured differently in the two populations .
cautiousness should therefore be applied when comparing blood pressure in the two populations and interpreting these results .
high levels of hypertension have been reported earlier amongst pakistanis [ 3 , 4 ] , although , some large studies have reported considerably lower prevalence of hypertension [ 26 , 27 ] .
smoking is common amongst pakistani males both in norway and in pakistan ; women , however , are fortunately spared .
this pattern of smoking has been demonstrated in several studies that have looked at smoking habits amongst pakistanis in pakistan and abroad [ 2 , 4 , 5 ] .
our data demonstrate differences in cardiovascular risk factors in these two populations , possibly as a consequence of migration and related changes in lifestyle . more research is needed on the modification of lifestyle and food habits following migration .
nevertheless , pakistanis living in norway have proven to have higher levels of diabetogenic and cardiovascular risk factors and should therefore be treated as a high - risk group for both prevention and treatment . | objectives . previous studies have shown that the norwegian - pakistanis had a considerably higher prevalence of diabetes and obesity compared to norwegians .
we studied the additional risk of obesity , dyslipidemia , and hypertension among pakistanis in norway compared to pakistanis living in pakistan .
method .
770 norwegian - pakistani adults ( 53.9% men and 46.1% women ) born in pakistan from two surveys conducted in norway between 2000 and 2002 were compared with a sample of 1230 individuals ( 29.1% men and 70.9% women ) that participated in a survey in pakistan in 2006 .
results .
both populations had similar height , but norwegian - pakistanis had considerably higher mean weight .
of the norwegian - pakistanis , 56% of the males and 40% of the females had a bmi above 25 kg / m2 , as opposed to 30% and 56% in pakistan , for males and females , respectively .
norwegian - pakistanis had higher total cholesterol .
conclusion .
obesity and an unfavourable lipid profile were widely prevalent in both populations ; the highest level was recorded amongst those living in norway .
the increased risk for obesity and dyslipidemia may be ascribed to change of lifestyle after migration . |
nanochains of metals @xcite , as well as of carbon , semiconductors and organic materials @xcite have recently been the subject of experimental and theoretical studies .
similar chains of many other chemical elements and compounds have not been studied . because of the present interest in nanotechnology , such studies are important .
chains with particular properties are candidates for preparation of nanostructures with chosen applications .
it is also possible to deposit chains on various substrates and to obtain one - dimensional conductors and quantum confinement .
there are many crystalline phases of bulk silica , for example , quartz , tridymite , cristobalite , keatite , coesite , and stishovite @xcite .
in addition , amorphous sio@xmath2 , which is abundant in nature , has also been investigated and used in various technological applications .
these silica bulk phases have been studied by several experimental @xcite and theoretical @xcite methods .
silica is a very good electrical insulator .
macroscopic silica wires are used as waveguides in the visible and near - infrared spectral ranges .
silica films are often applied in optics , and are used as electric and thermal insulators in electrical devices @xcite .
sio@xmath2 substrates are important in microelectronics , optics and chemical applications
. therefore silica surfaces have also been investigated @xcite .
much less study has been devoted to silica nanostructures .
various cylindrical nanostructures of silica have recently been synthesized : nanowires , nanotubes , nanoflowers , bundles , and brush - like arrays @xcite .
their structural , mechanical , optical , and catalytic properties have been examined .
silica nanowires , with diameters ranging from ten to several hundred nanometers , have been produced using various experimental techniques .
they have been proposed for use as high - intensity light sources , near - field optical microscopy probes , and interconnections in integrated optical devices .
the properties of infinite silica chains have not been theoretically investigated .
however , several theoretical studies of silica clusters have been carried out using gaussian @xcite , gamess @xcite , siesta @xcite and dmol @xcite packages , as well as several other density functional theory ( dft ) programs @xcite .
nanotubes of sio@xmath3 , @xmath4 , have recently been studied using the vasp dft program @xcite .
all these computational studies of silica nanostructures have shown that their properties are often different from those of the bulk .
therefore , it is also important to study infinite silica nanochains where periodic boundary conditions are used along the axis .
these one - dimensional structures of silica are interesting from the theoretical point of view , as well as models of very long real nanowires .
they provide additional systems for investigating the structure and bonding in silica materials , and offer possibilities of designing new nanostructures .
it is possible to prepare such thin silica wires on the substrates .
the one that is the most interesting for applications is the assembly of silica chains on silicon surfaces and nanowires . in this work
, the structure , energetics and electronic properties of thin silica nanowires were investigated using a computational method .
infinite linear and zigzag chains , as well as a nanowire composed of periodically repeated si@xmath0o@xmath1 structural units , were constructed and optimized using a plane wave pseudopotential approach to the density functional theory .
the rest of the paper is organized as follows .
section [ sec:2 ] presents the method . in section [ sec:3 ]
the results and discussion are given .
conclusions are outlined in section [ sec:4 ] .
_ ab initio _ dft calculations @xcite within the plane - wave pseudopotential method were performed to study silica chains .
the pseudopotential approach has been very successful in describing the structural and electronic properties of various materials @xcite .
the abinit code was used @xcite .
the same method has already been applied to calculate various properties of bulk silica @xcite and the ( 0001 ) @xmath5-quartz surface @xcite . in this calculation
the generalized gradient approximation and the exchange - correlation functional of perdew , burke , and ernzerhof were applied @xcite .
the pseudopotentials of troullier and martins @xcite generated by the fritz haber institute code @xcite were used ; these pseudopotentials were taken from the abinit web page @xcite .
they were tested by doing calculations for the bulk @xmath5-quartz , and these results were compared with experiments @xcite .
it was found that the computationally optimized structural parameters of quartz were very close to the experimental ones ; the differences are below @xmath6 .
several properties of silica nanowires were also calculated using the local density approximation ( lda ) with the teter extended norm - conserving pseudopotentials taken from the abinit web page @xcite .
the results obtained using the teter pseudopotentials were compared with experiments for bulk quartz structure , and differences of @xmath6 have been obtained .
only minor quantitative differences were found between the lda and gga results for silica nanowires .
the calculations were performed with a kinetic - energy cutoff of @xmath7 hartree .
the wires were positioned in a supercell of side @xmath8 a.u . along the x and y directions .
the axis of the wires was taken along the z direction , and the periodic boundary conditions were applied .
the monkhorst - pack method with 15 k - points sampling along the z direction was used in the integration of the brillouin zone @xcite .
structural relaxation for silica nanowires was carried out by performing a series of self - consistent calculations and computing the forces on atoms .
the geometry optimizations were performed using the broyden method of minimization until the forces were less than @xmath9 ev / .
all atoms were allowed to relax without any imposed constraint .
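the supercell and k - point setup described above can be sketched as follows ; the lattice constants , atom positions , and code structure below are illustrative placeholders rather than the authors ' actual abinit input .

```python
import numpy as np

# Illustrative supercell for an infinite linear Si-O chain: large vacuum
# spacing along x and y, periodic repetition along z (placeholder values).
a_vac = 12.0    # assumed vacuum width, angstrom
c_rep = 3.5     # assumed Si-O-Si repeat along the wire axis, angstrom (2 x 0.175 nm)
cell = np.diag([a_vac, a_vac, c_rep])

# Two-atom unit cell (Si and O) placed on the wire axis.
positions = np.array([
    [a_vac / 2.0, a_vac / 2.0, 0.0],          # Si
    [a_vac / 2.0, a_vac / 2.0, c_rep / 2.0],  # O
])

def monkhorst_pack_z(n):
    """Monkhorst-Pack sampling along one reciprocal direction:
    k_i = (2i - n - 1) / (2n), i = 1..n, in reduced coordinates."""
    return np.array([(2.0 * i - n - 1.0) / (2.0 * n) for i in range(1, n + 1)])

# 15 k-points along z (as in the text), gamma-only in the vacuum directions.
kpts = [(0.0, 0.0, float(kz)) for kz in monkhorst_pack_z(15)]
print(len(kpts), kpts[0], kpts[-1])
```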
infinite si - o chains were investigated .
two , four , and six atoms in a unit cell of a chain were studied to explore a possible dimerization and the existence of a zigzag structure . in previous studies of silica clusters
it has been found that in stable structures there often exists a unit of two si@xmath2o@xmath2 rhombuses sharing one silicon atom .
this unit contains a tetrahedrally bonded si atom and therefore shows the structural feature most often present in the bulk of sio@xmath2 .
two adjacent rhombohedral rings in clusters are perpendicular to each other .
it was calculated in this work that an optimized infinite silica wire forms if a si@xmath0o@xmath1 unit is repeated periodically along a direction where the silicon atoms are positioned .
the si@xmath0o@xmath1 unit contains three whole si@xmath2o@xmath2 rhombuses .
infinite tubular nanostructures of silica , similar to the finite mgo nanotubes studied recently @xcite , are not stable because their oxygen atoms are in the 4-fold coordinated configurations .
however , calculations on silica clusters have shown that the oxygen atom prefers a lower coordination . in experimental studies of silica nanowires ,
much bigger structures having diameters @xmath10 nm and lengths up to tens of millimeters have been prepared @xcite .
it has been shown that these silica nanostructures synthesized in the laboratories are amorphous .
dft - based studies of such already fabricated silica nanowires are not feasible within current computational power .
the optimized distances and the binding energies of all nanowires are presented in table [ tab : table1 ] .
the optimized geometries of silica chains are shown in figure [ fig : fig1 ] .
no dimerization was found for the linear chain .
the zigzag chain is also stable and its energy is above that of the linear chain .
nonlinearity of the o - si - o bonds is less favorable in a situation where there are no additional oxygen atoms , as in the case of the bulk tetrahedral sio@xmath0 bonding .
the optimized structure of the si@xmath0o@xmath1 unit is shown in figure [ fig : fig2](a ) .
it is well known that the si - o distance in the bulk silica is most often about @xmath11 nm .
it was calculated here that a larger si - o distance of @xmath12 nm exists in a linear chain , @xmath13 nm in zigzag one , and @xmath14 nm in a si@xmath0o@xmath1 nanowire . in the zigzag chain
the angles are @xmath15 .
the width of the nanowire shown in figure [ fig : fig2](a ) is up to about @xmath16 nm . in the si@xmath0o@xmath1 wire the oxygen atoms are bonded to two silicon atoms and the silicon atoms are bonded to four oxygen atoms .
such sio@xmath0 tetrahedra are typical for bulk materials involving silicon and oxygen . in the rhombuses of the si@xmath0o@xmath1 wire
the si - o - si angles are @xmath17 and @xmath18 , while the o - si - o ones are @xmath19 and @xmath20 .
the o - si - o angle is @xmath21 when the oxygen atoms are in adjacent rhombuses .
thus , the coordination of the silicon atoms is distorted from an ideal tetrahedral geometry .
table 1 : structure | linear chain | zigzag chain | si@xmath0o@xmath1 nanowire
si - o distance a ( nm ) | 0.175 | 0.170 | 0.167
si - si distance l ( nm ) | 0.35 | 0.291 | 0.234 ; 0.235
binding energy e ( ev ) | -6.40 | -5.51 | -7.38
figure [ fig : fig1 ] also presents the bonding wells for the chains .
the minima are rather pronounced and show a substantial stability of these nanowires .
by contrast , it was not possible to obtain a similar figure for the si@xmath0o@xmath1 wire . even very small perturbations ( @xmath22 ) of the length along the wire axis destabilize the si@xmath0o@xmath1 wire . a small difference between the angles within one rhombus exists ( @xmath19 vs @xmath20 ) .
it was not possible to stabilize such a three - dimensional thin wire using a smaller si@xmath2o@xmath2 cell .
the si@xmath0o@xmath1 nanowire is at the border of instability .
however , it was also found that the calculation where the lda approximation to the dft theory with the teter extended norm - conserving pseudopotentials @xcite was used produces a similar optimized si@xmath0o@xmath1 infinite wire .
for example , in this lda approximation the si - si distance is @xmath23 nm , whereas the si - o distances are @xmath24 ; @xmath25 nm
. it should be possible to assemble silica chains on the surfaces , using various nanotubes and nanowires , or long channels in porous materials .
the role of the substrate is to increase the stability of very thin silica nanowires .
the band structure of silica chains is shown in figure [ fig : fig3 ] .
the plot of the electronic structure of a linear chain ( presented in figure [ fig : fig3](a ) ) shows that one band crosses the fermi level ; therefore this system is metallic .
the electronic structure of a zigzag chain is shown in figure [ fig : fig3](b ) .
this wire is an insulator .
the band plot in figure [ fig : fig2](b ) shows that the si@xmath0o@xmath1 nanowire is also an insulator .
when the number of neighbours in si - o nanowires increases , electronic behavior goes from metallic to insulating , as in the bulk . at the gamma point ,
the difference between the valence and conduction band is @xmath26 ev for the si@xmath0o@xmath1 nanowire , and @xmath27 ev for the zigzag chain .
tetrahedral sio@xmath0 clusters exist in the si@xmath0o@xmath1 nanowire .
this structure is similar to the fragments of the cristobalite bulk lattice .
the three - dimensional si@xmath0o@xmath1 wire behaves as an insulator , and a similar electronic behavior and a band gap value exist in the cristobalite crystal @xcite .
table 1 shows that the si - o and si - si distances are smaller in the zigzag chain than in the linear one .
this compression of bonds as a result of the rearrangement of atoms into the zigzag chain removes a crossing band from the fermi level , and an insulating behavior arises in this structure .
the si - o distance in the linear chain is larger than in the majority of silica bulk phases , as well as in the zigzag and si@xmath0o@xmath1 wires .
that decreases the extent of @xmath28 bonding between silicon and oxygen atoms in the linear wire .
the weak metallic behavior arises in the linear silica chain as a consequence of this weaker bonding and a small coordination .
atomic charges were calculated using the hirshfeld partitioning of the electron density @xcite .
the hirshfeld method ( or `` stockholder '' partitioning ) uses the charge density distribution to determine atomic charges in the molecule or nanostructure .
first , the reference state of the promolecule density is defined as @xmath29 where @xmath30 is the electron density of the isolated atom a placed at its position in the molecule .
the atomic charge is @xmath31 where @xmath32 is the atomic deformation density given by @xmath33 in equation ( 2 ) , @xmath34 is the relative contribution ( `` share '' ) of the atom a in the promolecule , whereas @xmath35 is the molecular deformation density .
the sharing factor is a weight that determines a relative contribution of the atom @xmath36 to the promolecule density in the point @xmath37 .
it is defined as @xmath38 the molecular deformation density ( used in equation ( 2 ) ) is @xmath39 where @xmath40 is the molecular electron density .
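written out explicitly , these are the standard hirshfeld ( stockholder ) relations ; they are reproduced here for readability in a form consistent with the definitions given above .

```latex
\rho^{\mathrm{pro}}(\mathbf{r}) = \sum_{A} \rho_{A}(\mathbf{r}), \qquad
\delta\rho(\mathbf{r}) = \rho^{\mathrm{mol}}(\mathbf{r}) - \rho^{\mathrm{pro}}(\mathbf{r}), \qquad
w_{A}(\mathbf{r}) = \frac{\rho_{A}(\mathbf{r})}{\rho^{\mathrm{pro}}(\mathbf{r})},
```
```latex
\delta\rho_{A}(\mathbf{r}) = w_{A}(\mathbf{r})\, \delta\rho(\mathbf{r}), \qquad
q_{A} = -\int \delta\rho_{A}(\mathbf{r})\, d\mathbf{r} .
```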
the hirshfeld partitioning is almost insensitive to the basis set and minimizes missing information @xcite .
the hirshfeld charges are presented in table [ tab : table2 ] .
the calculations show that for all silica wires the charge transfer occurs from si to o atoms .
this indicates ionic bonding .
all oxygen atoms get similar amounts of the electron density , regardless of the structure .
table 2 : structure | linear chain | zigzag chain | si@xmath0o@xmath1 nanowire
hirshfeld charge on si | 0.212 | 0.275 | 0.446
hirshfeld charge on o | -0.212 | -0.275 | -0.225
the character of the bonding
was also analysed using the electronic charge density . in figure
[ fig : fig4 ] the charge density isosurface plots are presented .
this visualization was performed by the xcrysden package @xcite .
the well - defined spherical charges are located and accumulated on the oxygen atoms .
similar charge density plots that show a predominantly ionic bonding have been , for example , obtained for bulk @xmath5-quartz @xcite .
three configurations of infinite silica nanowires were optimized and studied using _ ab initio _ dft calculations in the pseudopotential approximation . the structural properties of these wires were investigated .
it was found that a linear chain is energetically more favorable than a zigzag wire .
the calculations of the bonding wells showed that both chains are stable , whereas the infinite si@xmath0o@xmath1 wire is at the border of instability .
the hirshfeld charges were calculated and the results show that a similar transfer of a charge to oxygen atoms exists for all wires . it was found that the zigzag chain and the si@xmath0o@xmath1 wire are insulators , while a single state crosses the fermi level in the band plot of the linear chain .
the existence of a metallic state offers the possibility to use simple long silica chains in conducting nanodevices without doping .
it is possible to deposit and assemble these chains on various surfaces , nanotubes , or inside the long and wide pores of suitable bulk materials .
gonze x , beuken j m , caracas r , detraux f , fuchs m , rignanese g m , sindic l , verstraete m , zerah g , jollet f , torrent m , roy a , mikami m , ghosez ph , raty j y and allan d c , 2002 _ comput . mat .
sci . _ * 25 * 478 , http://www.abinit.org | thin nanowires of silicon oxide were studied by pseudopotential density functional electronic structure calculations using the generalized gradient approximation .
infinite linear and zigzag si - o chains were investigated . a wire composed of three - dimensional periodically repeated si@xmath0o@xmath1 units
was also optimized , but this structure was found to be of limited stability .
the geometry , electronic structure , and hirshfeld charges of these silicon oxide nanowires were computed .
the results show that the si - o chain is metallic , whereas the zigzag chain and the si@xmath0o@xmath1 nanowire are insulators . |
thioglycosides are
widely used as glycosyl donors in the synthesis
of complex oligosaccharides because their stability , ease of activation ,
and flexibility in the tuning of glycosyl coupling conditions .
numerous
thioglycosyl donors have been developed , resulting in both technological
advances and mechanistic insights for glycosyl activation .
for example , thioglycosides are often orthogonal
with respect to glycosyl donors that are activated by lewis acids , and their reactivities can be tuned to enable
programmable one - pot oligosaccharide syntheses .
thioglycosyl donors can also be activated at low temperatures
to generate highly reactive glycosyl triflates followed by coupling
with an acceptor .
however , thioglycosides are susceptible to side reactions , such
as aglycon transfer or cross - reactivity between the thiophilic promoter
and acceptor , and the influence of protecting
groups on donor reactivity ( i.e. , armed vs disarmed ) can sometimes
be counterproductive toward glycosyl coupling .
such limitations motivate
the search for new glycosyl donors and activation conditions . in this article
, we evaluate glycosyl dithiocarbamates ( dtcs ) as
activatable donors for glycoconjugate and oligosaccharide synthesis .
we have recently described the in situ generation of glycosyl dtcs
as intermediates in a modified version of glycal assembly ; here , we deliberately isolate glycosyl dtcs to
understand their reactivities further .
dtcs already have broad applicability
in organic synthesis , as ligands in coordination
chemistry , and for the functionalization
of metal surfaces .
an especially appealing quality of dtcs is that many
of them can be prepared in situ simply by adding amines to cs2 in polar solvents .
dtcs are excellent
candidates for electrophilic activation based
on their strong affinity for metals and
the relatively low oxidative potentials of thioamide species .
the latter is significant , as the activation barriers of thioglycosides
and related species have been shown to correlate with their oxidative
potentials .
surprisingly , glycosyl dtcs
have been largely overlooked as donors despite the use of closely
related glycosyl thioimidates in carbohydrate synthesis .
this may be partly due to earlier challenges
in the preparation of glycosyl dtcs by the nucleophilic substitution
of glycosyl halides with dtc salts or by the
dehydrative substitution of lactols ( hemiacetals ) under phase - transfer
conditions .
furthermore , although glycosyl
dtcs have been shown to produce disaccharides in good yield , the coupling
conditions involve excess activating agent and offer limited stereocontrol .
we find that glycosyl dtcs are efficiently prepared from glycals ,
and we demonstrate their use in the synthesis of oligosaccharides
using mild lewis acids .
a remarkable aspect of this study is
that glycosyl dtcs are highly
β-selective in the absence of predesignated auxiliary groups .
β-linked glycosides are typically formed using glycosyl donors
with an acyl group at the c2 position ( figure 1 , top ) .
however , acylated glycosyl donors can have variable reactivity :
in some cases , a 2-o - benzoate or pivaloate can enhance
donor reactivity , but in other cases , the electron - withdrawing
nature of acyl groups can be deactivating .
furthermore , donors with c2 acyl groups can form stabilized 1,2-dioxolenium
intermediates whose ambident nature can give rise to competing orthoester
byproducts , particularly when challenged with sterically hindered
acceptors .
such issues are neatly circumvented
when using glycosyl dtcs ( figure 1 , bottom ) .
with regard to the basis for β-selectivity , we present a series
of experiments to show that the c2 hydroxyl itself plays an essential
directing role in β-glycoside formation .
figure 1 . β-selective glycosylation
using ( i ) 2-o - acyl
glycosyl donors ( with potential formation of orthoester byproduct )
or ( ii ) glycosyl dithiocarbamates with free c2 hydroxyls .
at the outset , we presumed β-glycosyl
dtcs could be readily
prepared via sn2 epoxide ring opening of α-epoxyglycals ,
which are in turn prepared selectively by treating glycals with dimethyldioxirane
( dmdo ) . in our previous
studies with glycals and the closely related 4-deoxypentenosides ,
their corresponding epoxides reacted readily in thf with mildly basic
nucleophiles such as thiolates .
however , initial attempts
to treat tri - o - benzyl epoxyglucal 1 with
et2dtc diethylammonium salt ( prepared in situ from a 2:1
ratio of et2nh and cs2 ) resulted in little to
no glycosyl dtc formation .
the addition of lewis acids such as liclo4 , which has been reported to catalyze epoxyglycal ring opening
under aprotic conditions , did not lead
to significant improvements .
however , epoxide ring openings with dtc
salts in protic solvents proved to be highly effective .
thus , treatment
of 1 with stoichiometric amounts of dmdo produced α-epoxyglycal
with high stereoselectivity , with subsequent addition of various dialkyl - dtcs
in meoh giving β-glycosyl dtcs 2 - 5 in good to high overall yields ( table 1 ) .
the glycosyl dtcs were stable toward chromatographic separation and
could be stored at -20 c for months without any decomposition .
13c nmr chemical - shift analysis indicated the anomeric ( c1 )
and c = s thiocarbonyl signals to be at 90 - 92 and 192 - 198
ppm ; 1h nmr analysis of diethyl - dtc glycoside 2 revealed the c1 methine proton to be notably downfield (
6.10 ppm ; j = 10 hz ) , consistent with
previous literature .
( i ) dmdo ( 1.5
equiv ) , 0 c , dcm ; ( ii ) cs2 ( 4 equiv ) , r2nh ( 8 equiv ) , 4:1 thf / meoh , 0 c to rt ; [ rxn ] = 0.1 m. facioselectivity
of epoxidation was 10:1 α/β . isolated yield of β-glycosyl
dtc .
glycosyl n , n - diphenyl - dtc 6 was synthesized in
a similar fashion except that a dtc lithium
salt was generated from cs2 and ph2nli , with
the latter prepared from diphenylamine with li - dimsyl .
again , epoxide
ring openings proceeded most efficiently when performed in a mixture
of thf and meoh ( 82% yield ) ; other aprotic polar solvents produced
glycosyl dtc 6 in lower yields ( table 2 ) .
it is worth noting that epoxide
ring openings with dtcs were also efficient in aqueous thf , but reactions
in pure meoh or other alcohols gave poorer results ( entries 2 and
3 ) .
we did not observe significant amounts of solvolysis despite the
well - established sensitivity of epoxyacetals to alcohols under mildly
acidic conditions .
standard conditions : ( i ) dmdo ( 1.5
equiv ) , 0 c , dcm ; ( ii ) cs2 ( 4 equiv ) , ph2nli ( 2 equiv ) , 0 c to rt ; = 0.1 m. 10:1 /. isolated yield of -glycosyl
dtc . other alcohols
the efficiency of epoxide ring
opening under protic conditions
can be attributed to the combination of the high nucleophilicity of
dtc anion and the low activation barrier for reprotonation .
it is also worth mentioning that the α-mannosyl
dtc ( the expected ring - opening product from the minor β-epoxyglucal )
is never observed ; 1h nmr analysis of the crude reaction
mixture after dtc treatment revealed only β-glucosyl dtc without
any trace of mannosyl dtc .
this implies that the β-epoxide is
unreactive to the dtc anion and decomposes upon workup , a form of
chiral resolution favoring the α-epoxyglycal .
the practical
result is a clean isolation of β-glycosyl dtc products regardless
of the facioselectivity of glycal epoxidation .
glycosyl dtc 2 was converted into tetra - o - benzylglucosyl dtc ( 2a ) , which produced high - quality
crystals via slow evaporation of a toluene hexanes solution
at room temperature ( figure 2 ) .
x - ray crystallographic
analysis revealed a 4c1 chair conformation with the
dithiocarbamate adopting an exoanomeric ( least sterically encumbered )
conformation . the o5 - c1 - s bond angle ( 109.25 )
is essentially that of a tetrahedral sp3 carbon , with minimal
torsional strain and no evidence of secondary interactions between
the carbodithioate and the carbohydrate structure .
we thus assume
dtc activation to be driven by its affinity for the thiophilic agent .
x - ray
crystal structure of tetra - o - benzylglucosyl
dtc ( 2a ) , displayed as a capped - stick model ( left ) with
50% ellipsoids ( right )
. the straightforward
conversion of glycals into glycosyl dtc donors prompted us to investigate
their coupling with glycal acceptors , inspired by the seminal studies
by danishefsky and co - workers .
glycosyl
coupling conditions were systematically screened using n , n - diethyl - dtc donor 2 and 4,6-benzylidene - protected
acceptor 7 ( table 3 ) .
standard
lewis acids such as tmsotf did not produce disaccharide glycals at
low temperatures , and warming the reaction to 0 c resulted in
decomposition of 2 ( entry 1 ) .
thiophilic agents such as dimethyl(methylthio)sulfonium triflate ( dmtst ) gave complex mixtures , most likely because of
the sensitivity of the enol ether moieties in the acceptor and product
( entry 2 ) .
we then examined metal triflate salts , which have high
affinity for the dtc group and low reactivity toward the enol ether
functionality . zn(otf)2 and
agotf were previously used
for glycosyl dtc activation , but neither
gave satisfactory results ( entries 3 and 4 ) .
fortunately , cu(otf)2 proved to be highly effective and produced β-1,3-linked
disaccharide 8 as the major product ( entry 5 ) .
the highest
and most reproducible yields were obtained by ( i ) sequential addition
of acceptor 7 to preactivated donor 2 and
( ii ) using a weak base such as tri - tert - butylpyrimidine
( ttbp ) .
couplings performed in the presence of stronger bases like
et2npr ( entry 6 ) required
higher temperatures ( 0 c ) to reach completion , presumably because
of coordination between the amine and cu(otf)2 .
we note
that bronsted bases were not used in earlier reports involving glycosyl
dtc or thioimidate activation .
however , in the absence of base , the coupling reaction was compromised
by self - condensation of acceptor 7 into disaccharide 9 via ferrier rearrangement ( entry 7 ) .
lastly , selectivity
was maximized by using 1:1 dichloroethane / dichloromethane ( dce / dcm )
at -30 c ( entry 8 ) , but more polar solvents such as acetonitrile
or diethyl ether resulted in low yields .
standard
conditions : [ rxn ] = 0.15 m ; acceptor 7 ( 1.5
equiv ) , lewis acid ( 2 equiv ) ,
base ( 2 equiv ) , 4 å molecular sieves , dcm , -50 to -30
c .
disaccharide 9 was
isolated but not fully characterized ; h and c nmr spectra are available in the supporting
information . to determine
if the oxidation state of cu had any influence on
glycosyl dtc activation , we also performed couplings with cuotf(c6h6)0.5 , an air - sensitive
metal salt .
controlled addition of solid
cuotf at low temperature to the reaction mixture produced
similar results , often with slightly better isolated yields .
cuotf was likewise compatible with acid - sensitive functional
groups such as benzylidene acetals and enol ethers .
we therefore used
cuotf and cu(otf)2 interchangeably
for all subsequent glycosyl couplings . to determine whether
cu(otf)x - mediated
coupling could be influenced by the redox potential of the glycosyl
dtc donor , we compared the efficiencies of coupling donors 2 - 6 with acceptor 7 .
donors 2 - 6 were chosen on the basis of an earlier
electrochemical study that showed the redox potentials of n , n - disubstituted dtcs to be influenced
by their substituents , with e values
( versus sce ) ranging from ca .
these values suggested that glycosyl dtc 6 might be
more reactive than 2 - 4 if electron
transfer was involved .
n , n - diphenyl dtc 6 was the least reactive donor , and unreacted
donor was recovered after workup .
this series implies that dtc activation
is essentially driven by lewis acid base interactions ( table 4 ) and confirms et2-dtc derivative 2 to be the most efficient and most practical donor for glycosyl
couplings .
standard conditions : acceptor 7 ( 1.5
equiv ) , cu(otf)2 ( 2 equiv ) , ttbp ( 2 equiv ) ,
4 å molecular sieves , 1:1 dce / dcm , -50 to -30 c ;
[ rxn ] = 0.1 m. determined by 1h nmr
signal integration .
glycosyl dtc donor 2 was tested with a variety of acceptors and was found to
produce β-coupling products in good to high yields ( table 5 , entries 1 - 8 ) .
the glycosyl dtc couplings
were efficient with 1.2 - 1.5 equiv of acceptor and compatible
with acid - sensitive functional groups .
importantly , the coupling procedure
was simplified by performing three operations ( dmdo oxidation , dtc
ring opening , and glycosylation ) in sequence without workup or chromatographic
purification of the glycosyl dtc .
thus ,
dmdo oxidation of glycal 1 was followed immediately by
stoichiometric addition of diethyl - dtc salt in meoh to afford the
corresponding -glycosyl dtc 2 , which was then
dried by azeotropic distillation with toluene and subjected to cu(otf)x - mediated glycosyl couplings under the optimized
conditions above .
this one - pot procedure produced β-glycosides
such as 10 - 12 , β-1,6-linked
disaccharides 13 - 15 , and β-1,4-linked
disaccharides 16 and 17 , all with exclusive
β selectivity .
standard conditions : ( i ) dmdo ( 1.5
equiv ) , dcm , 0 c ; ( ii ) cs2 ( 1.05 equiv ) , et2nh ( 2.1 equiv ) , 4:1 thf / meoh , rt ; ( iii ) cu(otf)2 ( 2 equiv ) ,
ttbp ( 2 equiv ) , 4 å molecular sieves , 1:1 dce / dcm , -50
c , then acceptor ( 1.5 equiv ) , -30 c ; [ rxn ] = 0.1
m. isolated yield of the β
isomer ,
unless stated otherwise . also isolated 9% of the α isomer .
we also evaluated cu(otf)x - mediated
coupling with 4,6-benzylidene - protected glucal 18 and
galactal 20 ( table 5 , entries
9 and 10 ) .
the dtc donor derived from 18 was combined
with acceptor 7 but found to be significantly less reactive
relative to 2 ( cf .
table 3 ) ; the
reaction was warmed to 0 c before affording disaccharide 19 in 42% yield .
the lower reactivity and yield can be attributed
to the conformational constraint imposed by the 4,6-benzylidene acetal
( to be discussed below ) .
the dtc donor derived from galactal 20 was coupled with 3,4-di - o - benzyl glucal
( 22 ) to produce β-1,6-linked disaccharide 21 in 78% yield , albeit with moderate selectivity ( β/α ,
5:1 ) . in these limiting cases , issues of reactivity and stereoselectivity
are readily addressed by installing a c2 auxiliary such as a benzoate
onto the glycosyl dtc donor , which guarantees β selectivity
and improves the coupling yield by as much as 30% . having established an efficient and β-selective
glycosylation
via in situ generation of glycosyl dtcs , we applied this methodology
toward the reiterative synthesis of a β-1,6-linked tetrasaccharide
( scheme 1 ) .
first , glucal 1 was
converted into 2 and coupled with acceptor 22 using cuotf - mediated glycosylation to afford β-1,6-linked
disaccharide 13 in 69% overall yield . disaccharide 13
was then protected as benzyl ether 23 and subjected to dmdo oxidation and et2nh / cs2 ring opening to yield glycosyl dtc 24 with > 95%
selectivity .
cuotf - mediated glycosylation with a second round
of 22 yielded β,β-linked trisaccharide 25
in 63% isolated yield along with 7% of the α,β-linked
product .
by in situ conversion into -glycosyl dtc 27 and
coupling with 22 to obtain ,,-linked
tetrasaccharide 28 in 49% isolated yield as well as 10%
of the ,,-linked product .
overall , the synthesis
of tetrasaccharide 28 was accomplished in just 11 steps
and 19% overall yield from glucal donor 1 and 4.5 equiv
of glucal acceptor 22 .
it should be noted that the c2 protecting group
in disaccharides 23 and 26 was important
for high facioselectivity
during epoxidation .
dmdo oxidation of glycal 13 at 50
c resulted in a 3:1 ratio of - and -epoxides ,
whereas epoxidation of 23 and 26 at 50
c yielded the desired -epoxyglycals in > 20:1 /
selectivity .
we have observed similar effects in the dmdo oxidation
of a 4-deoxypentenosyl ( 4-dp ) disaccharide having a remote hydroxyl
group .
the glycosyl dtc couplings
proceed cleanly with high yields and
β-selectivities despite the presence of a free c2 hydroxyl on
the donor .
a survey of the literature reveals just a handful of examples
of β-selective coupling of glycosyl donors with free c2 hydroxyls
using either glycosyl phosphates or thioglycosides . in the latter case ,
β selectivity was attributed to in situ
generation of an α-oxonium intermediate ( i.e. , protonated epoxyglycal )
followed by sn2 ring opening at c1 to produce β-glycosides .
however , one must also consider the conformational strain of placing
the c2 hydroxyl of the glucopyranose in a pseudoaxial position prior
to epoxide ring closure . to test the putative
α-oxonium intermediate ,
we treated an α-epoxyglucal and
acceptor at low temperature with either tfoh or cu(otf)2 ( scheme 2a ) .
however , neither condition resulted
in the selective formation of β-glycosides , allowing us to eliminate
this as a possible intermediate . to determine whether the c2 hydroxyl was important for
β-selective
glycosylation
, we compared donor 2 ( with free c2 hydroxyl )
with 2-o - triethylsilyl ( tes ) ether 29 using isopropanol and cu(otf)2 for activation ( scheme 2b , c ) .
as expected , glycosylation with 2 was highly β-selective and yielded β-isopropyl glucoside 10 as the major product , whereas glycosylation with 29 produced 30 in equally high yield but with
poor stereoselectivity , suggesting an active role for the c2 hydroxyl
in stereoselective coupling .
we also considered whether
β selectivity could be attributed
to sn2-like reactivity of an α-glycosyl triflate
intermediate , which is known to exist at low temperatures following
glycosyl activation .
however , cu(otf)2-mediated glycosylation of the 4,6-benzylidene - protected
dtc donor derived from 18 produces β-coupling product
exclusively and is also slow relative to unconstrained donor 2 , requiring temperatures up to 0 c to reach completion
( table 5 , entry 9 ) .
in addition , it has been
shown that conformationally constrained donors with 4,6-benzylidene
acetals favor α-glycoside formation , as the increased stability
of the α-glycosyl triflate forces the coupling to proceed through
the more reactive β-glycosyl triflate .
we propose that cu(otf)x activation
of the glycosyl dtc promotes intramolecular addition of the c2 hydroxyl
to form a bicyclic , trans - fused orthodithiocarbamate , which rapidly
equilibrates via an oxocarbenium intermediate to a cis - fused bicyclic
system with reduced torsional strain , accompanied by a stabilizing
anomeric effect on the c - s axial bond ( scheme 3 ) .
the activated orthodithiocarbamate
is thus directed by the neighboring c2 hydroxyl , allowing the acceptor
to attack the exposed β-face for β-glycoside formation .
we note that a similar equilibration may be possible for glycosyl
phosphates and dithiophosphates .
additional evidence supporting the bicyclic orthodithiocarbamate
intermediate was obtained by attempts to alkylate glycosyl dtc 6 under basic conditions .
exposure of 6 to nah
or nahmds and bnbr in dmf produced the desired 2-o - benzyl ether 31 in low yields along with major side
product 32 , formed by the migration of the thiocarbamoyl
unit to the c2 hydroxyl and s - benzylation at c1 ( scheme 4 ) .
this necessarily implies the facile formation
of the bicyclic orthodithiocarbamate , which decomposes under anionic
conditions to the c2 thiocarbamate and c1 thiolate
prior to alkylation .
a similar migration has been observed in attempts
to alkylate glycosyl phosphoryldithioates , which produced a s - alkyl , 2-o - thiophosphoryl glycoside .
the β-selective
coupling was further investigated by low - temperature 13c nmr experiments using cuotf activation because of its diamagnetic
character .
glycosyl dtc 2 was treated at -50 c with cuotf(c6h6)0.5 in cd2cl2 and
minimal 2-butanone to produce a green heterogeneous mixture , which
caused the disappearance of the thiocarbonyl signal ( c7 ) at 190.7
ppm and a reduction or broadening of the remaining signals ( figure 3 ) .
warming the solution to -10 c resulted
in additional changes : ( i ) the appearance of two new peaks at 111.0
and 111.7 ppm , ( ii ) the loss of signal at 89.1 ppm ( c1 ) and the appearance
of two new peaks at 97.2 and 97.5 ppm , and ( iii ) the loss of signals
at 47.1 and 49.8 ppm ( methylene carbons of et2dtc ) and
the appearance of four new peaks between 23 and 33 ppm .
these signals
can be ascribed to the formation of orthodithiocarbamates ( c7 epimers ) .
we also observe a minor signal at 197 ppm , suggesting the coexistence
of a stabilized oxocarbenium species , possibly generated by reversible ligand dissociation from the 2cu complex .
13c nmr analysis of glycosyl dtc
activation using cuotf(c6h6)0.5 with a small amount of 2-butanone
( cd2cl2 , 125 mhz ) .
bottom , glycosyl dtc 2 at -50 c ; middle , 2 + cuotf at
-50 c ; and top , 2 + cuotf at -10
c .
a similar low - temperature 13c nmr study was performed
on tetra - o - benzylglucosyl dtc ( 2a ) .
at -10 c , new signals
could again be observed , but these
are not suggestive of the putative orthodithiocarbamate .
changes include
the replacement of thiocarbonyl signal at 191.9 with two peaks at
197.2 and 193 ppm ( the latter attributable to free cs2 ) ,
and the replacement of the c1 signal at 89.1 ppm with a new peak at
104.3 ppm ( figure 4 ) .
although the
latter value is close to the c1 chemical shift for glucosyl triflates , such species are known to decompose above -20 c , whereas
activated 2a appeared to be stable at -10 c .
cuotf complexes are more stable
than glycosyl triflates but retain sufficient reactivity for efficient
coupling ( cf . scheme 2 ) .
13c nmr analysis
of the glycosyl dtc - cuotf complex
using 2a and cuotf(c6h6)0.5 with a small amount of 2-butanone ( cd2cl2 , 125 mhz ) . bottom , glucosyl dtc 2a at -50
c ; top , 2a after treatment with cuotf at -50
c and warming to -10 c .
glycosyl dithiocarbamates can be prepared
efficiently from glycals
by in situ dmdo oxidation and dtc ring opening , enabling a one - pot
glycosyl coupling with various acceptors .
the cu(otf)x - mediated glycosylations are highly β-selective
despite the absence of a predesignated c2 acyl group for anchimeric
assistance .
we note that yields can be further improved by chromatography - free
acylation of the glycosyl dtc donor prior to glycosyl coupling .
glycosyl dtcs are useful in the reiterative assembly
of β-linked oligosaccharides , as demonstrated by an 11-step
synthesis of a β-1,6-linked tetrasaccharide from glucal 1 in 19% overall yield .
control experiments and low - temperature nmr
analysis suggest the involvement of a cis - fused bicyclic orthodithiocarbamate
in the β-selective coupling .
all starting materials and reagents
were obtained from commercial sources and used as received unless
otherwise noted .
all solvents used were freshly distilled
prior to use ; 4 molecular sieves were flame - dried under reduced
pressure .
1h and 13c nmr spectra were recorded on spectrometers operating at 300 , 400 ,
or 500 mhz and referenced to the solvent used ( 7.16 and 128.06 ppm
for c6d6 , 7.26 and 77.00 ppm for cdcl3 ) .
ft - ir spectra were acquired using an attenuated total reflectance
( atr ) module .
silica gel chromatography was performed in hand - packed
columns , and preparative tlc separations were performed with 0.5 mm
silica - coated plates .
tlc analysis was monitored with 0.25 mm silica - coated
plates ( g60f254 ) and detected by uv absorption at 254 nm
or by staining with p - anisaldehyde sulfuric
acid at 150 c .
dmdo solutions were prepared according to
our previous report . in a typical procedure ,
a glycal ( 0.25 mmol ) is dried by azeotropic distillation with toluene
and dissolved in ch2cl2 ( ca .
0.3 m ) , treated
at 0 c with a precooled solution of dmdo ( 0.22 m in ch2cl2 , 1.5 equiv ) , and stirred for 1 h or until the starting
material is completely consumed .
the reaction mixture is carefully
concentrated to dryness under reduced pressure , starting at -78
c with gradual warming to rt over 1 h until a white solid is
obtained .
the crude α-epoxyglycal is dissolved in degassed thf
( 2 ml ) and cooled to 0 c . in parallel ,
a degassed solution of
diethylamine ( 560 µl , 5.2 mmol ) in meoh ( 4.9 ml ) is cooled to
0 c under an argon atmosphere followed by dropwise addition
of cs2 ( 160 µl , 2.6 mmol ) and stirring at rt for
30 min to obtain a pale yellow solution of et2dtc ( 0.53
m in meoh ) .
a portion of this stock ( 0.49 ml , 1.05 equiv ) is added
dropwise at 0 c to the stirred epoxyglycal solution , which is
then warmed to rt for 2.5 h. the reaction mixture is concentrated
to dryness under reduced pressure and further dried by azeotropic
distillation with toluene ( 3 × 2 ml ) . the glycosyl dtc can be
the glycosyl dtc can be
used without further workup or purification . in a typical procedure ,
the crude glycosyl dtc ( 0.25 mmol )
is dried
by azeotropic distillation with toluene , dissolved in degassed 1:1
dce / dcm ( 1.7 ml ) , treated with ttbp ( 124 mg , 0.5 mmol ) and activated
4 å molecular sieves ( 200 mg ) , and stirred for 1 h at rt . the
reaction mixture
is cooled to -50 c for 15 min , treated
with anhydrous cu(otf)2 ( 184 mg , 2 equiv ) or
cuotf(c6h6)0.5 ( 126
mg , 2 equiv ) in one portion , and stirred for 10 min , during which
the color turns from pale yellow to brown to dark green . a solution
of glycosyl acceptor in degassed 1:1 dce / dcm ( 0.8 ml , 1.5 equiv ) is
cooled to -50 c , added dropwise to the reaction mixture
via cannula , stirred for 5 min , warmed to -30 c over
a period of 45 min , and stirred at -30 c for an additional
12 h. the dark green reaction mixture is warmed to -10 c over
a period of 3 h , then quenched with a saturated nahco3 solution
with vigorous stirring for 15 min at rt .
the yellow reaction mixture
is then passed through a pad of celite and washed with etoac ( 3 ×
5 ml ) prior to standard aqueous workup and purification by silica
gel chromatography .
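as a convenience when scaling these procedures , the quoted equivalents translate into masses and volumes by simple arithmetic ; the helper below is only an illustration , and the molecular weight used is an assumed value rather than one quoted in this paper .

```python
def mass_mg(n_mmol, mol_wt_g_per_mol):
    """Mass (mg) needed for n_mmol of a reagent of the given molecular weight."""
    return n_mmol * mol_wt_g_per_mol

def volume_ml(n_mmol, molarity):
    """Volume (mL) of a stock solution of the given molarity delivering n_mmol."""
    return n_mmol / molarity

scale = 0.25  # mmol of glycal, as in the typical procedures above

# DMDO solution, 1.5 equiv at 0.22 M  ->  about 1.7 mL
print(round(volume_ml(scale * 1.5, 0.22), 2))

# A promoter used at 2 equiv with an assumed MW of ~362 g/mol  ->  about 181 mg
print(round(mass_mg(scale * 2.0, 362.0), 0))
```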
tri - o - benzyl glucal 1 ( 91 mg , 0.22 mmol ) was subjected to
dmdo oxidation at 0 c for 1 h to provide α-epoxyglucal
in quantitative yield ( 10:1 , α/β ) .
the crude epoxyglucal
was subjected to dtc ring opening in thf with a freshly prepared solution
of et2dtc ( 0.88 mmol , 0.5 m in meoh ) as described above .
after workup , the pale yellow syrup was purified by silica gel chromatography
( neutralized with 1% et3n ) using a 5 - 40% etoac in
hexanes gradient with 1% et3n , with mixed fractions separated
by preparative tlc ( 30% etoac in hexanes with 1% et3n )
to afford glycosyl dtc 2 as a colorless syrup ( 111 mg ,
87% ) .
h nmr ( 500 mhz , c6d6 ) :
7.397.05 ( m , 15h ) , 6.15 ( d , 1h , j = 10.4
hz ) , 5.00 ( d , 1h , j = 11.5 hz ) , 4.89 ( d , 1h , j = 11.5 hz ) , 4.85 ( d , 1h , j = 11.5 hz ) ,
4.63 ( d , 1h , j = 11.5 hz ) , 4.42 ( d , 1h , j = 12.0 hz ) , 4.25 ( d , 1h , j = 12.0 hz ) , 4.00 ( dd ,
1h , j = 9.0 , 9.5 hz ) , 3.92 ( dd , 1h , j = 9.0 , 9.5 hz ) , 3.743.57 ( m , 6h ) , 3.183.02 ( m , 2h ) ,
2.65 ( br s , 1h ) , 0.96 ( t , 3h , j = 7.0 hz ) , 0.75 ( t ,
3h , j = 7.5 hz ) . c nmr ( 125 mhz , c6d6 ) : 192.6 , 139.7 , 139.4 , 139.0 , 128.6 ,
128.5 , 128.4 , 128.2 , 128.1 , 127.8 , 127.7 , 127.6 , 90.5 , 87.6 , 80.1 ,
77.8 , 75.5 , 74.8 , 73.7 , 73.5 , 69.1 , 49.7 , 47.0 , 12.6 , 11.6 . ir ( thin
film ) : 2869 , 1502 , 1451 , 1413 , 1354 , 1261 , 1202 , 1067 , 919 , 746 , 695
cm .
ms : m / z calcd for c32h39no5s2na [ m + na ] , 604.2167 ; found , 604.2158 .
compound 2 was also characterized as the 2-o - acetate
by treatment with ac2o ( 1 ml ) in pyridine ( 2 ml ) at rt
for 12 h followed by concentration and azeotropic distillation with
toluene ( 3 × 1 ml ) .
the 1h nmr and pfg - cosy spectrum
of 2-o - acetyl-2 confirmed the β
configuration ( j1,2 = 10.7 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.25 ( dd , 6h , j =
7.6 , 9.5 hz ) , 7.227.02 ( m , 9h ) , 6.30 ( d , 1h , j = 10.7 hz ) , 5.72 ( t , 1h , j = 10.7 hz ) , 4.72 ( dd ,
2h , j = 9.0 , 11.5 hz ) , 4.62 ( d , 1h , j = 11.7 hz ) , 4.55 ( d , 1h , j = 11.4 hz ) , 4.41 ( d ,
1h , j = 11.9 hz ) , 4.24 ( d , 1h , j = 11.9 hz ) , 3.89 ( m , 1h ) , 3.803.74 ( m , 2h ) , 3.72 ( m , 1h ) ,
3.703.58 ( m , 2h ) , 3.53 ( dt , 1h , j = 13.9 ,
7.0 hz ) , 3.00 ( q , 2h , j = 7.0 hz ) , 1.64 ( s , 3h ) ,
0.90 ( t , 3h , j = 7.0 hz ) , 0.66 ( t , 3h , j = 7.1 hz ) .
c nmr ( 120 mhz , c6d6 ) : 192.1 , 169.4 , 139.3 , 139.2 , 139.0 , 128.6 , 128.5 , 128.4 ,
128.3 , 128.1 , 128.1 , 127.9 , 127.8 , 127.7 , 127.7 , 127.6 , 88.5 , 85.6 ,
80.2 , 78.1 , 75.4 , 74.8 , 73.5 , 71.2 , 69.0 , 49.6 , 46.7 , 20.6 , 12.8 ,
11.5 .
ir ( thin film ) : 2876 , 1745 , 1492 , 1458 , 1413 , 1359 , 1268 , 1231 ,
1206 , 1149 , 1061 , 917 , 823 , 746 , 701 cm .
glycosyl dtc 2 ( 273 mg , 0.47 mmol ) was dissolved in dmf ( 5.8 ml ) , cooled
to -50 c under argon , and then treated with bnbr ( 280
µl , 2.35 mmol ) .
a 0.6 m solution of nahmds in toluene ( 1.6 ml ,
0.94 mmol ) was added dropwise to the reaction mixture , and the mixture
was stirred at -50 c for 5 min .
the reaction mixture
was allowed to warm to room temperature over a period of 2 h , quenched
at 0 c with saturated nh4cl ( 10 ml ) , and extracted
with et2o ( 3 × 10 ml ) .
the combined organic extracts
were washed with h2o ( 3 × 10 ml ) and brine ( 10 ml ) ,
dried over na2so4 , and concentrated under reduced
pressure .
after workup , the yellow syrup was purified by silica gel
chromatography ( neutralized with 1% et3n ) using a 5 - 60%
etoac in hexanes gradient with 1% et3n to afford glycosyl
dtc 2a as a colorless syrup ( 259 mg , 82% ) .
h nmr ( 500 mhz , c6d6 ) : 7.476.91
( m , 20h ) , 6.46 ( d , 1h , j = 10.1 hz ) , 4.88 ( d , 1h , j = 11.4 hz ) , 4.864.80 ( m , 2h ) , 4.79 ( s , 2h ) , 4.68
( d , 1h , j = 11.4 hz ) , 4.42 ( d , 1h , j = 11.9 hz ) , 4.24 ( d , 1h , j = 11.9 hz ) , 4.00 ( ddd ,
1h , j = 2.4 , 6.5 , 9.2 ) , 3.843.79 ( m , 2h ) ,
3.783.56 ( m , 5h ) , 3.13 ( dt , 1h , j = 7.2 ,
14.4 hz ) , 3.01 ( dt , 1h , j = 7.2 , 14.7 hz ) , 0.96 ( t ,
3h , j = 7.0 hz ) , 0.72 ( t , 3h , j =
7.1 hz ) .
c nmr ( 125 mhz , c6d6 ) :
192.3 , 139.3 , 139.2 , 138.9 , 138.8 , 128.3 , 128.2 , 127.8 , 127.6 ,
127.4 , 89.5 , 87.3 , 80.3 , 79.7 , 78.0 , 75.5 , 75.0 , 74.6 , 73.3 , 68.9 ,
49.4 , 46.5 , 12.4 , 11.4 .
ir ( thin film ) : 2931 , 2866 , 1489 , 1454 , 1417 ,
1356 , 1269 , 1207 , 1090 , 1070 , 916 , 829 , 735 , 698 cm .
m / z calcd for
c39h46no5s2 [ m + h ] , 672.2817 ; found , 672.2812 .
tri-o-benzyl glucal 1 ( 95 mg , 0.23 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h to produce the α-epoxyglucal in quantitative yield , followed by dtc ring opening with piperidinyl-dtc ( 0.92 mmol , 0.5 m in meoh ) as described above .
the reaction mixture was concentrated and purified by silica gel chromatography using a 5–50% etoac in hexanes gradient with 1% et3n to afford glycosyl dtc 3 as a colorless syrup ( 121 mg , 89% ) .
h nmr ( 500 mhz , c6d6 ) : 7.38
( d , 2h , j = 7.5 hz ) , 7.287.07 ( m , 13h ) , 6.20
( d , 1h , j = 10.3 hz ) , 5.02 ( d , 1h , j = 11.5 hz ) , 4.87 ( t , 2h , j = 10.6 hz ) , 4.64 ( d ,
1h , j = 11.4 hz ) , 4.43 ( d , 1h , j = 12.0 hz ) , 4.26 ( d , 1h , j = 12.0 hz ) , 4.05 ( t ,
2h , j = 9.5 hz ) , 3.94 ( t , 2h , j =
9.1 hz ) , 3.783.67 ( m , 4h ) , 3.433.18 ( m , 2h ) , 2.71
( br s , 1h ) , 1.16 ( br s , 2h ) , 0.98 ( br s , 4h ) .
c nmr ( 125
mhz , c6d6 ) : 192.2 , 139.5 , 139.2 , 139.0 ,
138.8 , 138.4 , 136.7 , 135.0 , 128.3 , 128.2 , 128.0 , 127.9 , 127.8 , 127.6 ,
90.3 , 87.3 , 79.9 , 77.6 , 75.2 , 74.5 , 73.4 , 73.3 , 69.0 , 52.6 , 51.1 ,
25.6 , 25.0 , 23.8 .
ir ( thin film ) : 2941 , 2852 , 1480 , 1421 , 1362 , 1244 ,
1222 , 1067 , 738 , 691 cm .
ms : m / z calcd for c33h39no5s2na [ m + na ] , 616.2167 ; found ,
616.2160 .
compound 3 was also characterized as the 2-o-acetate by treatment with ac2o ( 1 ml ) in pyridine ( 2 ml ) as described above .
the ¹h nmr and pfg-cosy spectra of 2-o-acetyl-3 confirmed the β configuration ( j1,2 = 10.8 hz ) .
h nmr ( 400 mhz , c6d6 ) : 7.476.83 ( m , 15h ) , 6.38 ( d ,
1h , j = 10.8 hz ) , 5.78 ( dd , 1h , j = 10.8 , 8.9 hz ) , 4.74 ( dd , 2h , j = 8.8 , 11.6 hz ) ,
4.63 ( d , 1h , j = 11.7 hz ) , 4.56 ( d , 1h , j = 11.4 hz ) , 4.42 ( d , 1h , j = 12.0 hz ) , 4.25 ( d ,
1h , j = 11.9 hz ) , 4.07 ( m , 1h ) , 3.93 ( t , 1h , j = 9.3 hz ) , 3.823.73 ( m , 2h ) , 3.72 ( d , 1h , j = 1.7 hz ) , 3.68 ( dd , 1h , j = 3.2 , 11.2
hz ) , 3.23 ( t , 1h , j = 11.4 hz ) , 3.09 ( d , 1h , j = 14.6 hz ) , 1.66 ( s , 3h ) , 1.220.99 ( m , 3h ) , 0.990.59
( m , 4h )
. tri-o-benzyl glucal 1 ( 95 mg , 0.23 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h to provide the α-epoxyglucal in quantitative yield , followed by dtc ring opening in thf with bn2dtc ( 0.92 mmol , 0.5 m solution in meoh ) as described above .
the pale yellow syrup was concentrated and purified by silica gel chromatography using a 5–50% etoac in hexanes gradient with 1% et3n to afford glycosyl dtc 4 as a colorless syrup ( 135 mg , 83% ) .
h nmr ( 500 mhz , c6d6 ) : 7.34 ( d , 2h , j = 7.3 hz ) , 7.287.26
( m , 3h ) , 7.23 ( d , 2h , j = 7.3 hz ) , 7.206.98
( m , 16h ) , 6.93 ( d , 2h , j = 6.5 hz ) , 6.15 ( d , 1h , j = 10.3 hz ) , 5.29 ( d , 1h , j = 14.8 hz ) ,
5.17 ( d , 1h , j = 14.8 hz ) , 4.91 ( d , 1h , j = 11.5 hz ) , 4.86 ( d , 1h , j = 11.4 hz ) , 4.81 ( d ,
1h , j = 11.5 hz ) , 4.62 ( d , 1h , j = 11.0 hz ) , 4.614.52 ( m , 2h ) , 4.42 ( d , 1h , j = 11.9 hz ) , 4.27 ( d , 1h , j = 11.9 hz ) , 3.90 ( t ,
1h , j = 9.2 hz ) , 3.82 ( dd , 1h , j = 8.6 , 10.3 hz ) , 3.783.60 ( m , 4h ) , 3.58 ( br s , 1h ) .
c nmr ( 125 mhz , c6d6 ) : 196.7 ,
139.4 , 139.2 , 138.8 , 135.7 , 128.9 , 128.7 , 128.3 , 128.3 , 128.2 , 128.1 ,
127.9 , 127.4 , 127.0 , 91.2 , 87.1 , 80.0 , 77.5 , 75.1 , 74.5 , 73.2 , 73.2 ,
68.8 , 56.4 , 54.1 .
ir ( thin film ) : 2928 , 1506 , 1451 , 1404 , 1341 , 1219 ,
1079 , 746 , 700 cm .
[α]d25 = +36 ( c 1.0 , ch2cl2 ) . hresi ms :
m / z calcd for c42h43no5s2na [ m + na ] , 728.2480 ; found , 728.2474 .
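for the specific rotations quoted in this section , the usual convention ( our gloss , not stated in this excerpt ) is

```latex
% assumed convention for the specific rotations quoted here (our gloss):
[\alpha]_D^{25} \;=\; \frac{100\,\alpha_{\mathrm{obs}}}{l \cdot c},
\qquad l \ \text{in dm}, \quad c \ \text{in g per 100 mL}
```

on this convention , the entry above ( +36 , c 1.0 , ch2cl2 ) corresponds to an observed rotation of roughly +0.36° in a 1 dm cell .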
compound 4 was also characterized as the 2-o-acetate by treatment with ac2o ( 1 ml ) in pyridine ( 2 ml ) at rt for 12 h as described above .
the ¹h nmr and pfg-cosy spectra of 2-o-acetyl-4 confirmed the β configuration ( j1,2 = 10.5 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.456.71 ( m , 25h ) , 6.30 ( d ,
1h , j = 10.5 hz ) , 5.70 ( dd , 1h , j = 9.0 , 10.5 hz ) , 5.30 ( d , 1h , j = 14.9 hz ) , 5.15
( d , 1h , j = 14.9 hz ) , 4.73 ( dd , 2h , j = 8.3 , 11.6 hz ) , 4.63 ( d , 1h , j = 11.7 hz ) , 4.584.50
( m , 2h ) , 4.514.42 ( m , 2h ) , 4.32 ( d , 1h , j = 12.0 hz ) , 3.87 ( t , 1h , j = 9.3 hz ) , 3.843.73
( m , 3h ) , 3.68 ( dd , 1h , j = 3.8 , 11.4 hz ) , 1.61 ( s ,
3h ) .
tri-o-benzyl glucal 1 ( 29 mg , 0.07 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h to provide the α-epoxyglucal in quantitative yield , followed by dtc ring opening in thf with ( i - pr)2dtc ( 0.28 mmol , 0.5 m in meoh ) as described above .
the pale yellow syrup was concentrated and purified by silica gel chromatography using a 5–30% etoac in hexanes gradient with 1% et3n ; mixed fractions were separated by preparative tlc ( 15% etoac in hexanes with 1% et3n ) to afford glycosyl dtc 5 as a pale yellow syrup ( 22 mg , 52% ) .
h nmr ( 500 mhz , c6d6 ) : 7.36 ( d , 2h , j = 7.5 hz ) , 7.277.19 ( m , 4h ) , 7.187.05
( s , 9h ) , 4.99 ( d , 1h , j = 11.5 hz ) , 4.87 ( d , 1h , j = 11.4 hz ) , 4.83 ( d , 1h , j = 11.5 hz ) , 4.62 ( d , 1h , j = 11.4 hz ) , 4.39 ( d , 1h , j = 11.9 hz ) ,
4.23 ( d , 1h , j = 11.9 hz ) , 4.03 ( br s , 1h ) , 3.91
( t , 1h , j = 9.4 hz ) , 3.743.61 ( m , 5h ) , 2.67
( br s , 1h ) , 1.700.67 ( m , 14h ) .
c nmr ( 100 mhz ,
c6d6 ) : 139.7 , 139.5 , 139.0 , 128.6 , 128.5 ,
128.3 , 128.2 , 128.1 , 127.8 , 127.7 , 127.6 , 87.6 , 80.1 , 77.8 , 75.4 ,
74.8 , 73.7 , 73.6 , 73.4 , 69.1 , 19.7 .
ir ( thin film ) : 2869 , 1455 , 1379 ,
1298 , 1198 , 1063 , 700 cm .
m / z calcd for c34h44no5s2 [ m + h ] , 610.2661 ; found ,
610.2673 .
compound 5 was also characterized as the 2-o-acetate by treatment with ac2o ( 1 ml ) in pyridine ( 2 ml ) at rt for 12 h as described above .
the ¹h nmr and pfg-cosy spectra of 2-o-acetyl-5 confirmed the β configuration ( j1,2 = 9.7 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.26 ( t , 5h , j =
7.8 hz ) , 7.227.01 ( m , 10h ) , 6.43 ( d , 1h , j = 10.5 hz ) , 5.76 ( t , 1h , j = 9.7 hz ) , 4.73 ( dd ,
2h , j = 9.0 , 11.5 hz ) , 4.62 ( d , 1h , j = 11.7 hz ) , 4.55 ( d , 1h , j = 11.4 hz ) , 4.40 ( d ,
1h , j = 11.9 hz ) , 4.25 ( d , 1h , j = 11.9 hz ) , 3.89 ( t , 1h , j = 9.3 hz ) , 3.813.69
( m , 3h ) , 3.65 ( dd , 1h , j = 3.4 , 11.2 hz ) , 1.65 ( s ,
3h ) , 1.680.39 ( m , 12h ) .
c nmr ( 100 mhz , c6d6 ) : 169.4 , 139.3 , 139.2 , 139.1 , 128.6 ,
128.5 , 128.3 , 128.2 , 128.1 , 127.8 , 127.7 , 127.6 , 85.7 , 80.2 , 78.2 ,
75.4 , 74.8 , 73.4 , 68.9 , 20.7 , 19.6 .
tri-o-benzyl glucal 1 ( 31 mg , 0.075 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h to provide the α-epoxyglucal ( 10:1 α:β ) in quantitative yield , followed by dtc ring opening in thf with li-(ph)2dtc ( 0.15 mmol ) dissolved in meoh .
the pale yellow syrup was purified by silica gel chromatography using a 5–50% etoac in hexanes gradient with 1% et3n ; mixed fractions were separated by preparative tlc ( 20% etoac in hexanes with 1% et3n ) to afford glycosyl dtc 6 as a colorless solid ( 42 mg , 82% ) .
h nmr ( 500 mhz , c6d6 ) : 7.30 ( d , 2h , j =
7.5 hz ) , 7.23 ( d , 2h , j = 7.4 hz ) , 7.21 ( d , 2h , j = 7.5 hz ) , 7.197.05 ( m , 13h ) , 7.00 ( dd , 4h , j = 9.5 , 10 hz ) , 6.91 ( dd , 2h , j = 7.5 ,
9.5 hz ) , 5.99 ( d , 1h , j = 10.1 hz ) , 4.84 ( dd , 2h , j = 11.0 , 11.5 hz ) , 4.77 ( d , 1h , j = 11.5
hz ) , 4.60 ( d , 1h , j = 11.4 hz ) , 4.40 ( d , 1h , j = 11.9 hz ) , 4.21 ( d , 1h , j = 11.9 hz ) ,
3.85 ( dd , 1h , j = 11.0 , 11.5 hz ) , 3.733.63
( m , 4h ) , 3.59 ( dd , 1h , j = 10.5 , 11.0 hz ) , 1.96 ( br
s , 1h ) .
c nmr ( 100 mhz , c6d6 ) :
198.3 , 139.6 , 139.4 , 138.9 , 129.7 , 128.6 , 128.5 , 128.5 , 128.3 ,
128.2 , 128.1 , 127.8 , 127.7 , 127.6 , 127.6 , 90.7 , 87.4 , 79.9 , 77.7 ,
75.3 , 74.7 , 73.5 , 72.9 , 69.1 .
ir ( thin film ) : 2877 , 1590 , 1489 , 1455 ,
1353 , 1041 , 755 , 700 cm .
m / z calcd for c40h39no5s2na [ m + na ] , 700.2167 ; found ,
700.2159 .
product 6 was also characterized as the 2-o-acetate by treatment with ac2o ( 1 ml ) in pyridine ( 2 ml ) at rt for 12 h as described above .
the ¹h nmr and pfg-cosy spectra of 2-o-acetyl-6 confirmed the β configuration ( j1,2 = 10.5 hz ) .
h nmr ( 300 mhz , cdcl3 ) : 7.307.04 ( m , 30h ) , 5.53 ( d , 1h , j = 10.5 hz ) , 5.03 ( t , 1h , j = 10.5 hz ) ,
4.69 ( d , 1h , j = 11.1 hz ) , 4.65 ( d , 1h , j = 10.2 hz ) , 4.57 ( d , 1h , j = 10.4 hz ) , 4.55(d ,
1h , j = 12.0 hz ) , 4.43(d , 1h , j =
9.0 hz ) , 4.39 ( d , 1h , j = 10.2 hz ) , 3.713.60
( m , 4h ) , 3.53 ( m , 1h ) , 1.79 ( s , 3h ) . glycosyl dtc 2 ( 58 mg , 0.1 mmol ) was subjected to cu(otf)2-mediated glycosylation with glucal acceptor 7 ( 35 mg , 0.15 mmol ) as previously described with a slight modification : glucal acceptor 7 was added into the reaction mixture in 1:1 dce / dcm without precooling because of its limited solubility .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove nonpolar byproducts , followed by etoac to collect the product , which was concentrated to dryness .
the pale yellow syrup was purified by preparative tlc ( 10% etoac in toluene with 1% et3n and then 15% etoac in toluene with 1% et3n ) to afford an inseparable mixture of 1,3-linked disaccharide glucals as a colorless syrup ( 47 mg , 70% , α/β 1:9 ) .
the α/β ratio was determined by peak integration of the 2-o-acetyl derivative .
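the α / β ratios reported for these inseparable mixtures are obtained from the relative ¹h integrals of well-resolved signals of the two anomers ( here , of the 2-o-acetyl derivative ) . a minimal sketch of the arithmetic ( the integral values are illustrative placeholders , not numbers taken from the spectra in this work ) :

```python
# Hedged sketch of the integration arithmetic; the integral values below are
# illustrative placeholders, not numbers taken from the spectra in this work.
def anomer_ratio(integral_alpha: float, integral_beta: float) -> str:
    """Express two anomeric 1H integrals as a normalized alpha:beta ratio."""
    smaller = min(integral_alpha, integral_beta)
    return f"alpha:beta = {integral_alpha / smaller:.0f}:{integral_beta / smaller:.0f}"

print(anomer_ratio(0.11, 1.00))  # alpha:beta = 1:9, the ratio reported here
```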
the diastereomeric mixture was recrystallized from 5% benzene in hexanes at −20 °c to afford the pure β-1,3-linked disaccharide glucal 8 as a white solid .
h nmr ( 500 mhz , c6d6 ) : 7.56 ( d , 1h , j = 7.2 hz ) ,
7.37 ( d , 1h , j = 7.3 hz ) , 7.29 ( d , 1h , j = 7.3 hz ) , 7.23 ( d , 1h , j = 7.2 hz ) , 7.197.07
( m , 16h ) , 6.10 ( dd , 1h , j = 1.2 , 6.1 hz ) , 5.31 ( s ,
1h ) , 5.06 ( d , 1h , j = 11.5 hz ) , 4.89 ( d , 1h , j = 11.3 hz ) , 4.86 ( dd , 1h , j = 1.9 , 6.1
hz ) , 4.80 ( d , 1h , j = 11.5 hz ) , 4.64 ( d , 1h , j = 7.6 hz ) , 4.57 ( d , 1h , j = 11.4 hz ) ,
4.53 ( d , 1h , j = 8.0 hz ) , 4.42 ( d , 2h , j = 4.2 hz ) , 4.10 ( dd , 1h , j = 5.2 , 10.4 ) , 3.99 ( dd ,
1h , j = 7.7 , 10.2 hz ) , 3.77 ( t , 1h , j = 8.4 hz ) , 3.733.57 ( m , 5h ) , 3.48 ( t , 1h , j = 10.4 hz ) , 3.40 ( dt , 1h , j = 10.0 , 3.5 hz ) , 2.76
( s , 1h ) .
c nmr ( 125 mhz , c6d6 ) :
144.7 , 139.8 , 139.3 , 139.1 , 137.9 , 129.2 , 128.6 , 128.5 , 128.5 ,
128.4 , 128.4 , 128.3 , 128.1 , 128.0 , 127.9 , 127.7 , 127.7 , 127.6 , 126.8 ,
102.8 , 101.8 , 101.4 , 84.8 , 78.5 , 77.8 , 76.0 , 75.0 , 75.0 , 74.4 , 73.6 ,
71.6 , 69.4 , 69.3 , 68.4 . ir
( thin film ) : 3515 , 2522 , 2862 , 1646 , 1498 ,
1453 , 1359 , 1229 , 1104 , 1064 , 755 , 701 cm⁻¹ . [α]d25 18.2 ( c 0.9 , ch2cl2 ) .
m / z calcd for c40h42o9na [ m + na ] , 689.2727 ; found , 689.2737 .
product 8 was also characterized as the 2-o-acetate ; the ¹h nmr and pfg-cosy spectra of 2-o-acetyl-8 confirmed the β configuration ( j1,2 = 8.5 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.62 ( d , 2h , j = 7.3 hz ) , 7.29
( t , 4h , j = 8.1 hz ) ,
7.197.02 ( m , 14h ) , 6.07 ( dd , 1h , j = 1.5 ,
6.0 hz ) , 5.45 ( t , 1h , j = 8.5 hz ) , 5.33 ( s , 1h ) ,
4.744.57 ( m , 5h ) , 4.53 ( dt , 1h , j = 7.0 , 2.0 hz ) , 4.44 ( m ,
1h ) , 4.42 ( s , 2h ) , 4.11 ( dd , 1h , j = 5.1 , 10.4 hz ) ,
4.03 ( dd , 1h , j = 7.3 , 10.3 hz ) , 3.69 ( dt , 1h , j = 5.0 , 10.2 hz ) , 3.65 ( dd , 1h , j = 1.9 ,
11.0 hz ) , 3.623.53 ( m , 3h ) , 3.48 ( t , 1h , j = 10.4 hz ) , 3.42 ( m , 1h ) , 1.71 ( s , 3h ) . tri - o - benzyl glucal 1 ( 104
mg , 0.25 mmol ) was subjected to
dmdo oxidation at 0 c for 1 h followed by dtc ring opening as
previously described . the crude glycosyl dtc 2 was then
subjected to cu(otf)2-mediated glycosylation with i - proh ( 29 l , 0.38 mmol ) as described in the general
procedure . after workup
, the crude mixture was passed through a plug
of silica gel ( neutralized with 1% et3n ) packed on a fritted
funnel and eluted with 1% etoac in toluene with 1% et3n
to remove nonpolar byproducts followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated
and repurified by silica gel chromatography using a 520% etoac
in hexanes gradient with 1% et3n to afford -i - pr glucoside 10 as a colorless syrup ( 87
mg , 71% ) and -i - pr glucoside 10 as a crystalline solid ( 11 mg , 9% ) .
major isomer 10 : h nmr ( 500 mhz , c6d6 ) :
7.42 ( d , 2h , j = 7.4 hz ) , 7.30 ( d , 2h , j = 7.5 hz ) , 7.24 ( d , 2h , j = 7.4 hz ) , 7.187.16
( m , 6h ) , 7.137.05 ( m , 3h ) , 5.13 ( t , 1h , j = 9.9 hz ) , 4.92 ( d , 1h , j = 11.3 hz ) , 4.89 ( d ,
1h , j = 11.6 hz ) , 4.55 ( d , 1h , j = 11.3 hz ) , 4.47 ( d , 1h , j = 12.5 hz ) , 4.42 ( d ,
1h , j = 12.5 hz ) , 4.22 ( d , 1h , j = 7.2 hz ) , 3.86 ( dt , 1h , j = 12.3 , 6.2 hz ) , 3.743.61
( m , 5h ) , 3.41 ( m , 1h ) , 2.41 ( br s , 1h ) , 1.21 ( d , 3h , j = 6.2 hz ) , 1.01 ( d , 3h , j = 6.1 hz ) .
c nmr ( 125 mhz , c6d6 ) : 140.1 , 139.7 ,
139.4 , 128.9 , 128.9 , 128.8 , 128.7 , 128.6 , 128.4 , 128.3 , 128.2 , 128.0 ,
128.0 , 102.1 , 85.3 , 78.4 , 76.0 , 75.4 , 73.8 , 71.9 , 69.9 , 24.2 , 22.4 .
ir ( thin film ) : 3468 , 2898 , 1464 , 1362 , 1109 , 1054 , 738 , 700 cm .
m / z calcd for c30h36o6na [ m + na ] , 515.2410 ; found , 515.2417 .
major isomer 10 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the β configuration ( j1,2 = 8.6 hz ) .
h nmr ( 300 mhz , cdcl3 ) :
7.406.96 ( m , 15h ) , 4.88 ( t , 1h , j = 8.6 hz ) , 4.72 ( d , 2h , j = 11.4 hz ) , 4.634.42
( m , 4h ) , 4.32 ( d , 1h , j1,2 = 8.0 hz ) ,
3.83 ( q , 1h , j = 6.1 hz ) , 3.723.50 ( m , 4h ) ,
3.41 ( m , 1h ) , 1.89 ( s , 3h ) , 1.16 ( d , 3h , j = 6.2
hz ) , 1.04 ( d , 3h , j = 6.1 hz ) .
h nmr ( 500 mhz , c6d6 ) :
7.41 ( d , 2h , j = 7.4 hz ) , 7.30 ( d , 2h , j = 7.5 hz ) , 7.26 ( d , 2h , j = 7.3 hz ) ,
7.187.07 ( m , 9h ) , 5.02 ( d , 1h , j = 11.4 hz ) ,
4.95 ( d , 1h , j = 11.2 hz ) , 4.85 ( d , 1h , j = 4.0 hz ) , 4.78 ( d , 1h , j = 11.4 hz ) , 4.60 ( d ,
1h , j = 11.2 hz ) , 4.45 ( d , 1h , j = 12.2 hz ) , 4.38 ( d , 1h , j = 12.2 hz ) , 4.05 ( ddd ,
1h , j = 1.7 , 4.5 , 10.0 hz ) , 3.87 ( t , 1h , j = 9.0 hz ) , 3.78 ( dr , 1h , j = 4.0 , 9.6
hz ) , 3.753.68 ( m , 3h ) , 3.66 ( dd , 1h , j =
1.8 , 10.7 hz ) , 1.88 ( d , 1h , j = 9.9 hz ) , 1.09 ( d ,
3h , j = 6.2 hz ) , 0.90 ( d , 3h , j =
6.1 hz ) .
c nmr ( 125 mhz , c6d6 ) :
139.8 , 139.4 , 139.1 , 128.6 , 128.5 , 128.3 , 128.2 , 128.0 , 128.0 ,
127.9 , 127.7 , 127.6 , 127.6 , 97.6 , 84.4 , 78.1 , 75.4 , 75.1 , 73.7 , 73.5 ,
71.5 , 70.6 , 69.6 , 23.4 , 21.7 .
ir ( thin film ) : 3497 , 2917 , 2857 , 1500 ,
1458 , 1363 , 1330 , 1129 , 1074 , 1025 , 741 , 702 cm .
m / z calcd for
c30h36 o6na [ m + na ] ,
515.2410 ; found , 515.2418 .
tri-o-benzyl glucal 1 ( 104 mg , 0.25 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h followed by dtc ring opening as previously described .
the crude glycosyl dtc 2 was subjected to cu(otf)2-mediated glycosylation with benzyl alcohol ( 39 μl , 0.38 mmol ) as described in the general procedure .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 0.5% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated and repurified by silica gel chromatography using a 5–30% etoac in hexanes gradient with 1% et3n to afford the β-benzyl glucoside 11 as a crystalline solid ( 85 mg , 63% ) and the α-benzyl glucoside 11 as a crystalline solid ( 12 mg , 9% ) .
major isomer 11 : h nmr
( 500 mhz , c6d6 ) : 7.39 ( d , 2h , j = 7.8 hz ) , 7.347.27 ( m , 4h ) , 7.267.22
( m , 2h ) , 7.217.13 ( m , 8h ) , 7.137.07 ( m , 4h ) , 5.05
( d , 1h , j = 11.5 hz ) , 4.90 ( d , 1h , j = 11.3 hz ) , 4.86 ( d , 1h , j = 11.9 hz ) , 4.82 ( d ,
1h , j = 11.5 hz ) , 4.55 ( d , 1h , j = 11.3 hz ) , 4.45 ( dd , 2h , j = 5.6 , 12.0 hz ) , 4.40
( d , 1h , j = 12.2 hz ) , 4.24 ( d , 1h , j = 7.7 hz ) , 3.763.67 ( m , 4h ) , 3.59 ( t , 1h , j = 8.9 hz ) , 3.37 ( ddd , 1h , j = 2.6 , 4.0 , 9.7 hz ) ,
2.09 ( br s , 1h ) .
c nmr ( 125 mhz , c6d6 ) : 139.7 , 139.4 , 139.0 , 138.1 , 128.7 , 128.6 , 128.5 , 128.5 ,
128.4 , 128.3 , 128.1 , 127.9 , 127.7 , 127.7 , 102.3 , 84.9 , 78.0 , 75.7 ,
75.7 , 75.1 , 75.0 , 73.6 , 71.0 , 69.5 .
ir ( thin film ) : 3481 , 2873 , 1502 ,
1455 , 1358 , 1117 , 1058 , 695 cm .
m / z calcd for c34h36o6na [ m + na ] , 563.2410 ; found , 563.2407 .
product 11 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the β configuration ( j1,2 = 9.2 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.30 ( t , 5h , j = 7.4 hz ) , 7.257.00
( m , 15h ) , 5.54 ( dd , 1h , j2,3 = 8.0 , j1,2 = 9.2 hz ) , 4.86 ( d , 1h , j = 12.4 hz ) , 4.71 ( dd , 2h , j = 7.8 , 11.3 hz ) , 4.64
( d , 1h , j = 11.6 hz ) , 4.53 ( d , 1h , j = 12.4 hz ) , 4.45 ( t , 3h , j = 11.5 hz ) , 4.39 ( m ,
1h ) , 3.723.60 ( m , 3h ) , 3.57 ( t , 1h , j = 8.6
hz ) , 3.41 ( m , 1h ) , 1.69 ( s , 3h ) .
h nmr
( 300 mhz , cdcl3 ) :
7.407.23 ( m , 18h ) , 7.13 ( dd , 2h , j = 2.8 ,
6.6 hz ) , 4.93 ( d , 1h , j = 3.6 hz ) , 4.82 ( dd , 2h , j = 2.6 , 10.9 hz ) , 4.75 ( d , 1h , j = 11.7
hz ) , 4.64 ( d , 1h , j = 12.0 hz ) , 4.584.43
( m , 3h ) , 3.86–3.56 ( m , 7h ) , 2.10 ( d , 1h , j = 8.8 hz ) . product 11 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the α configuration ( j1,2 = 3.7 hz ) .
h nmr ( 300
mhz , cdcl3 ) : 7.556.96 ( m , 20h ) , 5.11 ( d ,
1h , j = 3.7 hz ) , 4.89 ( dd , 1h , j = 3.7 , 10.1
hz ) , 4.854.78 ( m , 2h ) , 4.734.62 ( m , 2h ) , 4.564.44
( m , 3h ) , 4.06 ( t , 1h , j = 8.7 hz ) , 3.86 ( dq , 1h , j = 10.1 , 1.8 hz ) , 3.793.68 ( m , 2h ) , 3.62 ( dd , 2h , j = 1.9 , 10.7 hz ) , 2.00 ( s , 3h ) .
tri-o-benzyl glucal 1 ( 83 mg , 0.2 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h followed by dtc ring opening as previously described .
the crude glycosyl dtc 2 was subjected to cu(otf)2-mediated glycosylation with methyl 2,3,6-tri-o-benzyl-α-d-glucopyranoside ( 139 mg , 0.3 mmol ) as described in the general procedure to yield the desired 1,4-linked disaccharide .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated and repurified by silica gel chromatography using a 5–30% etoac in hexanes gradient with 1% et3n to afford the 1,4-β-linked disaccharide 16 as a colorless syrup ( 86 mg , 48% ) .
h nmr ( 500 mhz , c6d6 ) : 7.49
( d , 2h , j = 7.6 hz ) , 7.41 ( d , 2h , j = 7.5 hz ) , 7.36 ( d , 2h , j = 7.5 hz ) , 7.267.05
( m , 24h ) , 5.31 ( d , 1h , j = 11.5 hz ) , 5.10 ( d , 1h , j = 12.0 hz ) , 5.05 ( d , 1h , j = 11.5 hz ) ,
4.89 ( dd , 1h , j = 7.5 , 11.5 hz ) , 4.74 ( d , 1h , j = 8.0 hz ) , 4.634.21 ( m , 8h ) , 4.04 ( dd , 1h , j = 9.5 hz ) , 4.04 ( dd , 1h , j = 3.0 , 11.0
hz ) , 3.90 ( br s , 1h ) , 3.823.75 ( m , 2h ) , 3.70 ( dd , 1h , j = 1.5 , 11.0 hz ) , 3.63 ( t , 2h , j = 9.0
hz ) , 3.59 ( dd , 1h , j = 4.0 , 11.0 hz ) , 3.55 ( d , 1h , j = 9.5 hz ) , 3.52 ( dd , 1h , j = 3.5 , 8.5
hz ) , 3.39 ( d , 1h , j = 9.5 hz ) , 3.31 ( d , 1h , j = 2.5 hz ) , 3.13 ( s , 3h ) , 3.03 ( s , 1h ) . c
nmr ( 125 mhz , c6d6 ) : 140.2 , 139.6 , 139.2 ,
138.9 , 138.4 , 128.5 , 128.3 , 128.2 , 128.1 , 127.9 , 127.4 , 127.3 , 127.0 ,
103.5 , 98.3 , 84.9 , 80.7 , 80.4 , 77.6 , 77.5 , 75.9 , 75.7 , 74.9 , 74.7 ,
73.5 , 73.3 , 73.0 , 70.3 , 69.1 , 68.9 , 54.8 .
ir ( thin film ) : 3466 , 2919 ,
1725 , 1501 , 1455 , 1365 , 1274 , 1107 , 1061 , 741 , 704 cm .
m / z calcd for
c55h60o11na [ m + na ] ,
919.4033 ; found , 919.4038 .
product 16 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the β configuration ( j1,2 = 9.4 hz ) .
h nmr ( 500
mhz , c6d6 ) : 7.48 ( d , 2h , j = 7.4 hz ) , 7.35 ( d , 2h , j = 7.4 hz ) , 7.32 ( d , 2h , j = 7.2 hz ) , 7.28 ( d , 2h , j = 7.6 hz ) ,
7.24 ( d , 2h , j = 7.1 hz ) , 7.217.00 ( m , 20h ) ,
5.46 ( t , 1h , j = 9.4 hz ) , 5.27 ( d , 1h , j = 11.9 hz ) , 4.95 ( d , 1h , j = 11.9 hz ) , 4.88 ( d ,
1h , j = 8.0 hz ) , 4.74 ( d , 1h , j =
11.6 hz ) , 4.704.53 ( m , 5h ) , 4.514.39 ( m , 5h ) , 4.324.20
( m , 2h ) , 3.983.94 ( m , 2h ) , 3.71 ( dd , 1h , j = 2.5 , 9.5 hz ) , 3.69 ( d , 1h , j = 6.8 hz ) , 3.66
( dd , 1h , j = 1.5 , 11.0 hz ) , 3.593.51 ( m ,
3h ) , 3.42 ( dd , 1h , j = 3.4 , 8.9 hz ) , 3.12 ( s , 3h ) ,
1.69 ( s , 3h ) . tri-o-benzyl glucal 1 ( 42 mg , 0.1 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h and then subjected to dtc ring opening as previously described .
the crude glycosyl dtc 2 was subjected to cu(otf)2-mediated glycosylation with the acceptor ( 76 mg , 0.12 mmol ) as described in the general procedure .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated and repurified by preparative tlc ( 30% etoac in hexanes with 1% et3n and then 40% etoac in hexanes with 1% et3n ) to afford the 1,4-β-linked disaccharide 17 as a colorless syrup ( 60 mg , 56% ) .
h nmr ( 500 mhz , c6d6 ) : 7.99
( dd , 2h , j = 1.3 , 8.0 hz ) , 7.94 ( d , 2h , j = 6.8 hz ) , 7.53 ( d , 1h , j = 6.3 hz ) , 7.46 ( d , 1h , j = 6.4 hz ) , 7.43 ( d , 2h , j = 7.5 hz ) ,
7.337.04 ( m , 19h ) , 6.906.74 ( m , 2h ) , 6.34 ( t , 1h , j = 10.8 hz ) , 5.84 ( d , 1h , j = 8.4 hz ) ,
5.11 ( d , 1h , j = 11.6 hz ) , 4.92 ( d , 2h , j = 11.4 hz ) , 4.83 ( dd , 1h , j = 8.5 , 10.8 hz ) , 4.75
( d , 1h , j = 7.6 hz ) , 4.57 ( d , 1h , j = 11.3 hz ) , 4.39 ( t , 1h , j = 9.4 hz ) , 4.27 ( d ,
2h , j = 11.9 hz ) , 4.19 ( d , 1h , j = 11.9 hz ) , 4.12 3.95 ( m , 2h ) , 3.77 ( t , 1h , j = 9.2 hz ) , 3.73 3.46 ( m , 7h ) , 1.88 ( s , 3h ) , 1.20 ( d , 3h , j = 6.2 hz ) , 1.17 ( s , 9h ) , 0.97 ( d , 3h , j = 6.1 hz ) .
c nmr ( 125 mhz , c6d6 ) : 139.7 , 139.4 , 139.0 , 138.1 , 128.7 , 128.6 , 128.5 , 128.5 ,
128.4 , 128.3 , 128.1 , 127.9 , 127.7 , 127.7 , 102.3 , 84.9 , 78.0 , 75.7 ,
75.7 , 75.1 , 75.0 , 73.6 , 70.9 , 69.5 .
ir ( thin film ) : 3432 , 2936 , 1722 ,
1384 , 1229 , 1117 , 1041 , 763 , 695 cm .
hr - maldi ms : m / z calcd for c62h69 no13sina [ m + na ] , 1086.4434 ; found , 1086.4458 .
product 17 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the β configuration ( j1,2 = 8.6 hz ) .
h nmr ( 500 mhz , c6d6 ) : 7.97 ( d , 2h , j =
8.0 hz ) , 7.87 ( d , 2h , j = 8.0 hz ) , 7.56 ( d , 1h , j = 6.7 hz ) , 7.47 ( d , 1h , j = 6.8 hz ) ,
7.35 ( d , 3h , j = 7.2 hz ) , 7.31 ( d , 3h , j = 7.5 hz ) , 7.307.04 ( m , 15h ) , 6.886.76 ( m , 2h ) ,
6.28 ( t , 1h , j = 10.7 hz ) , 5.74 ( d , 1h , j = 8.5 hz ) , 5.39 ( t , 1h , j = 8.6 hz ) , 4.93 ( d , 1h , j1,2 = 8.1 hz ) , 4.79 ( dd , 2h , j = 6.3 , 15.3 hz ) , 4.68 ( d , 2h , j = 11.3 hz ) , 4.45
( d , 1h , j = 11.3 hz ) , 4.36 ( t , 1h , j = 9.5 hz ) , 4.26 ( d , 1h , j = 11.8 hz ) , 4.18 ( d ,
1h , j = 11.8 hz ) , 4.03 ( dd , 1h , j = 2.8 , 11.3 hz ) , 3.97 ( dt , 1h , j = 12.4 , 6.2 hz ) ,
3.88 ( d , 1h , j = 10.7 hz ) , 3.703.58 ( m , 5h ) ,
3.51 ( m , 1h ) , 1.85 ( s , 3h ) , 1.62 ( s , 3h ) , 1.19 ( d , 3h , j = 6.2 hz ) , 1.14 ( s , 9h ) , 0.98 ( d , 3h , j = 6.1 hz ) .
4,6-benzylidene glucal 18 ( 49 mg , 0.15 mmol ) was subjected to dmdo oxidation at 0 °c for 1 h followed by dtc ring opening as previously described .
the crude dtc glycosyl donor was subjected to cu(otf)2-mediated glycosylation with 4,6-benzylidene glucal 7 ( 54 mg , 0.23 mmol ) as described in the general procedure , except that acceptor 7 was added to the reaction mixture without precooling because of its low solubility .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow solid was concentrated and repurified by preparative tlc ( 5% etoac in dce with 1% et3n and then 10% etoac in dce with 1% et3n ) to afford the 1,3-β-linked disaccharide glucal 19 as a white solid ( 36 mg , 42% ) .
h nmr ( 500 mhz , 1:1 c6d6/cdcl3 ) : 7.567.39
( m , 5h ) , 7.31 ( d , 3h , j = 6.9 hz ) , 7.207.09
( m , 7h ) , 6.40 ( dd , 1h , j = 2 , 6.5 hz ) , 5.62 ( s , 1h ) ,
5.54 ( s , 1h ) , 4.94 ( d , 1h , j = 12.0 hz ) , 4.86 ( dd ,
1h , j = 1.9 , 6.1 hz ) , 4.79 ( d , 1h , j = 12.0 hz ) , 4.63 ( d , 1h , j = 7.5 hz ) , 4.60 ( d ,
1h , j = 7.0 hz ) , 4.38 ( dd , 1h , j = 5.0 , 10.5 hz ) , 4.24 ( dd , 1h , j = 5.0 , 10.5 hz ) ,
4.02 ( dt , 1h , j = 5.2 , 10.3 hz ) , 4.02 ( t , 1h , j = 7.5 hz ) , 3.94 ( dt , 1h , j = 5.0 , 10.0
hz ) , 3.85 ( t , 1h , j = 10 hz ) , 3.77 ( t , 1h , j = 10 hz ) , 3.713.61 ( m , 3h ) , 3.39 ( m , 1h ) , 2.57
( s , 1h ) . c nmr ( 125 mhz , 1:1 c6d6/cdcl3 ) : 145.1 , 139.2 , 138.1 , 137.7 , 129.4 , 129.1 ,
128.6 , 128.4 , 128.3 , 128.1 , 127.9 , 126.5 , 102.0 , 101.8 , 101.5 , 81.6 ,
80.5 , 78.4 , 74.6 , 74.4 , 73.1 , 69.2 , 69.0 , 68.4 , 67.0 .
ir ( thin film ) :
2923 , 1645 , 1451 , 1362 , 1235 , 1100 , 1016 , 695 cm .
m / z calcd
for c33h34o9na [ m + na ] , 597.2101 ; found , 597.2107 .
product 19 was also characterized as the 2-o-benzoate ; ¹h nmr and pfg-cosy of 2-o-benzoyl-19 confirmed the β configuration ( j1,2 = 7.8 hz ) .
h nmr ( 500
mhz , 5:2 c6d6/cdcl3 ) : 8.007.97
( m , 2h ) , 7.46 ( d , 2h , j = 7.6 hz ) , 7.42 ( d , 2h , j = 7.7 hz ) , 7.297.03 ( m , 11h ) , 7.026.83
( m , 3h ) , 5.89 ( d , 1h , j = 6.2 hz ) , 5.45 ( t , 1h , j = 7.8 hz ) , 5.21 ( d , 2h , j = 4.2 hz ) ,
4.75 ( d , 1h , j = 12.3 hz ) , 4.64 ( d , 1h , j = 12.2 hz ) , 4.57 ( d , 1h , j = 7.7 hz ) , 4.43 ( dd ,
1h , j = 1.2 , 6.2 hz ) , 4.32 ( d , 1h , j = 7.3 hz ) , 4.13 ( dd , 1h , j = 4.9 , 10.4 hz ) , 4.07
( dd , 1h , j = 5.1 , 10.5 hz ) , 3.76 ( dd , 1h , j = 7.4 , 10.2 hz ) , 3.70 ( t , 1h , j = 8.7
hz ) , 3.67 ( t , 1h , j = 9.0 hz ) , 3.61 ( dt , 1h , j = 10.3 , 5.2 hz ) , 3.56 ( t , 1h , j = 10.2
hz ) , 3.44 ( t , 1h , j = 10.3 hz ) , 3.27 ( dt , 1h , j = 5.0 , 9.4 hz ) .
tri-o-benzyl galactal 20 ( 64 mg , 0.15 mmol ) was subjected to dmdo oxidation at −50 °c for 12 h to provide the α-epoxygalactal ( 20:1 α/β ) in quantitative yield .
we note that dmdo oxidation of galactal 20 at 0 °c caused significant overoxidation , as identified by an aldehyde proton signal at 9.6 ppm in the ¹h nmr spectrum and by additional uv-active byproducts .
the α-epoxygalactal was subjected to dtc ring opening as previously described ; the crude dtc donor was then subjected to cuotf-mediated glycosylation with 3,4-di-o-benzyl-d-glucal 22 ( 75 mg , 0.23 mmol ) as described in the general procedure .
after workup , the pale yellow syrup was purified by silica gel chromatography using a 2–7% etoac in toluene gradient with 1% et3n to afford an inseparable mixture of 1,6-linked disaccharide glucals as a colorless syrup ( 92 mg , 78% ; α/β 1:5 ) ; the α/β ratio was determined by nmr peak integration of the 2-o-acetyl derivatives .
these were separated by preparative tlc ( 10% etoac in toluene with 1% et3n ) to afford 2-o-acetyl-β-disaccharide 21 ( j1,2 = 9.3 hz ) as the major product and 2-o-acetyl-α-disaccharide 21 ( j1,2 = 3.7 hz ) as the minor product .
h nmr ( 500 mhz , c6d6 ) : 7.307.05
( m , 25h ) , 6.25 ( d ,
1h , j = 6.2 hz ) , 5.93 ( t , 1h , j =
9.3 hz ) , 4.92 ( d , 1h , j = 11.6 hz ) , 4.73 ( d , 1h , j = 11.7 hz ) , 4.70 ( dd , 1h , j = 3.0 , 6.0
hz ) , 4.63 ( d , 1h , j = 11.5 hz ) , 4.52 ( d , 1h , j = 11.5 hz ) , 4.43 ( d , 1h , j = 8.0 hz ) ,
4.39 ( dd , 2h , j = 4.0 , 12.5 hz ) , 4.31 ( d , 1h , j = 12.0 hz ) , 4.264.28 ( m , 4h ) , 4.13 ( m , 1h ) , 4.07
( br s , 1h ) , 3.99 ( dd , 1h , j = 5.5 , 11.0 hz ) , 3.90
( t , 1h , j = 7.0 hz ) , 3.86 ( br s , 1h ) , 3.70 ( t , 1h , j = 8.0 hz ) , 3.56 ( dd , 1h , j = 5.5 , 9.0
hz ) , 3.38 ( t , 1h , j = 6.5 hz ) , 3.33 ( dd , 1h , j = 2.5 , 10.0 hz ) , 1.81 ( s , 3h ) .
c nmr ( 125
mhz , c6d6 ) : 169.0 , 144.8 , 139.3 , 138.8 ,
138.8 , 128.7 , 128.6 , 128.6 , 128.5 , 128.3 , 128.2 , 128.1 , 127.9 , 127.8 ,
127.7 , 127.5 , 102.2 , 100.0 , 81.0 , 76.8 , 75.2 , 74.8 , 74.6 , 74.0 , 73.6 ,
73.4 , 73.2 , 71.9 , 71.4 , 70.2 , 68.9 , 67.5 , 20.9 .
ir ( thin film ) : 3371 ,
2877 , 1763 , 1645 , 1510 , 1455 , 1358 , 1253 , 1071 , 734 , 691 cm .
m / z calcd for
c49h52o10na [ m + na ] ,
823.3458 ; found , 823.3466 .
h nmr ( 500
mhz , c6d6 ) : 7.357.06 ( m , 25h ) ,
6.18 ( dd , 1h , j = 0.7 , 6.1 hz ) , 5.84 ( dd , 1h , j = 3.7 , 10.5 hz ) , 5.52 ( d , 1h , j = 3.7
hz ) , 5.00 ( d , 1h , j = 11.5 hz ) , 4.83 ( d , 1h , j = 11.4 hz ) , 4.674.64 ( m , 2h ) , 4.57 ( d , 1h , j = 11.5 hz ) , 4.474.40 ( m , 3h ) , 4.334.24
( m , 4h ) , 4.104.08 ( m , 2h ) , 4.04 ( dd , 1h , j = 4.3 , 12.0 hz ) , 3.963.93 ( m , 2h ) , 3.79 ( t , 1h , j = 7.7 hz ) , 3.75 ( dd , 1h , j = 1.5 , 12.9
hz ) , 3.69 ( dd , 1h , j = 5.7 , 13.5 hz ) , 1.85 ( s , 3h ) .
c nmr ( 125 mhz , c6d6 ) : 169.9 ,
144.6 , 139.4 , 139.2 , 139.2 , 138.9 , 128.6 , 128.5 , 128.3 , 128.1 , 127.8 ,
127.5 , 100.3 , 97.8 , 77.4 , 77.1 , 76.1 , 75.2 , 75.1 , 73.8 , 73.6 , 72.4 ,
71.7 , 70.5 , 70.1 , 69.4 , 66.3 , 20.9 .
ir ( thin film ) : 2936 , 2848 , 1738 ,
1502 , 1447 , 1362 , 1244 , 1113 , 1063 , 750 , 695 cm .
m / z calcd for
c49h52o10na [ m + na ] ,
823.3458 ; found , 823.3462 .
the 1,6-β-linked disaccharide 13 ( 1.65 g , 2.1 mmol ) and tbai ( 214 mg , 0.6 mmol ) were dissolved in dmf ( 29 ml ) , cooled to 0 °c under argon , and treated with bnbr ( 500 μl , 4.2 mmol ) and a 60% dispersion of nah in mineral oil ( 336 mg , 8.4 mmol ) .
the reaction was stirred at rt for 12 h , quenched at 0 °c with saturated nh4cl ( 25 ml ) , and extracted with et2o ( 3 × 25 ml ) .
the combined organic extracts were washed with h2o ( 3 × 25 ml ) and brine ( 25 ml ) , dried over na2so4 , and concentrated under reduced pressure .
the crude product was purified by silica gel chromatography ( etoac in hexanes gradient with 1% et3n ) to afford the corresponding benzyl ether 23 as a white solid ( 1.83 g , 95% ) .
h nmr ( 400 mhz , c6d6 ) : 7.457.06
( m , 30h ) , 6.30 ( d , 1h , j = 6.0 hz ) , 5.16 ( d , 1h , j = 11.2 hz ) , 5.01 ( d , 1h , j = 11.2 hz ) ,
4.86 ( d , 1h , j = 11.4 hz ) , 4.83 ( d , 1h , j = 11.4 hz ) , 4.79 ( d , 1h , j = 11.3 hz ) , 4.74 ( dd ,
1h , j = 2.8 , 6.2 hz ) , 4.72 ( d , 1h , j = 12.2 hz ) , 4.58 ( t , 1h , j = 12.6 hz ) , 4.48 ( d ,
1h , j = 12.2 hz ) , 4.444.39 ( m , 3h ) , 4.36
( dd , 1h , j = 2.0 , 11.2 hz ) , 4.31 ( m , 1h ) , 4.19 ( t ,
1h , j = 5.9 hz ) , 4.12 ( br s , 1h ) , 3.96 ( dd , 1h , j = 5.8 , 11.3 hz ) , 3.93 ( d , 1h , j = 5.9
hz ) , 3.753.61 ( m , 5h ) , 3.32 ( dt , 1h , j =
9.5 , 2.5 hz ) , 2.11 ( br s , 1h ) .
c nmr ( 100 mhz , c6d6 ) : 144.4 , 139.2 , 139.1 , 139.0 , 139.0 ,
138.8 , 138.7 , 128.2 , 128.1 , 127.9 , 127.7 , 127.6 , 127.4 , 127.3 , 104.2 ,
99.8 , 84.7 , 82.2 , 77.9 , 76.6 , 75.2 , 75.1 , 74.7 , 74.6 , 74.5 , 73.2 ,
73.0 , 70.0 , 69.0 , 68.2 .
ir ( thin film ) : 2884 , 1651 , 1502 , 1349 , 1109 ,
1066 , 752 , 697 cm .
m / z calcd for c54h56o9na [ m + na ] , 871.3822 ; found , 871.3804 .
disaccharide glucal 23 ( 604 mg , 0.71 mmol ) was subjected to dmdo oxidation at −50 °c for 12 h to produce the α-epoxyglucal ( 20:1 , α/β ) in quantitative yield , followed by dtc ring opening as previously described .
the crude glycosyl dtc donor was then subjected to cuotf-mediated glycosylation with 3,4-di-o-benzyl glucal 22 ( 278 mg , 0.85 mmol ) as described in the general procedure .
after workup , the pale yellow syrup was purified by silica gel chromatography using a 5–20% etoac gradient in hexanes with 1% et3n ; the mixed fractions were further separated by preparative tlc ( 15% etoac in hexanes with 1% et3n ) to afford the β,β-1,6-linked trisaccharide glucal 25 as a white solid ( 537 mg , 63% ) and the α,β-1,6-linked trisaccharide glucal 25 as a minor product ( 56 mg , 7% ) .
major
,-isomer 25 : h nmr ( 500 mhz ,
c6d6 ) : 7.537.15 ( m , 40h ) , 6.38
( d , 1h , j = 6.5 hz ) , 5.18 ( dd , 2h , j = 6.0 , 11.5 hz ) , 5.10 ( d , 1h , j = 11.5 hz ) , 4.974.86
( m , 6h ) , 4.82 ( dd , 1h , j = 4.2 , 6.0 hz ) , 4.75 ( d ,
1h , j = 12.0 hz ) , 4.67 ( dd , 2h , j = 6.0 , 11.5 hz ) , 4.63 ( d , 1h , j = 7.5 hz ) , 4.59
( d , 1h , j = 12.5 hz ) , 4.54 ( d , 1h , j = 5.5 hz ) , 4.51 ( d , 1h , j = 7.0 hz ) , 4.424.40
( m , 3h ) , 4.35 ( d , 1h , j = 8.0 hz ) , 4.21 ( m , 1h ) ,
4.16 ( dt , 1h , j = 2.0 , 6.0 hz ) , 3.983.93
( m , 2h ) , 3.913.68 ( m , 8h ) , 3.57 ( t , 1h , j = 7.0 hz ) , 3.47 ( m , 1h ) , 3.48 ( dd , 1h , j = 7.0 ,
14.0 hz ) , 2.71 ( br s , 1h ) .
c nmr ( 100 mhz , c6d6 ) : 144.64 , 139.7 , 139.3 , 139.2 , 139.1 , 139.0 ,
128.6 , 128.5 , 128.5 , 128.3 , 128.1 , 127.9 , 127.8 , 127.6 , 104.6 , 103.8 ,
100.4 , 85.2 , 84.9 , 82.6 , 78.4 , 76.8 , 75.7 , 75.6 , 75.4 , 75.1 , 75.0 ,
74.9 , 73.6 , 73.6 , 70.4 .
ir ( thin film ) : 2920 , 1645 , 1506 , 1459 , 1345 ,
1063 , 733 , 700 cm .
m / z calcd for c74h78o14na [ m + na ] , 1213.5289 ; found , 1213.5292 .
minor ,-isomer 25 : h nmr ( 500 mhz , c6d6 ) : 7.507.04 ( m , 40h ) , 6.21 ( d , 1h , j = 8.5 hz ) , 5.12 ( d , 1h , j = 11.5 hz ) ,
4.98 ( dd , 2h , j = 9.0 , 10.0 hz ) , 4.894.72
( m , 7h ) , 4.70 ( dd , 1h , j = 2.5 , 6.0 hz ) , 4.644.61
( t , 2h , j = 9.5 hz ) , 4.56 ( d , 1h , j = 11.5 hz ) , 4.48 ( d , 1h , j = 12.0 hz ) , 4.444.40
( m , 3h ) , 4.33 ( d , 1h , j = 9.5 hz ) , 4.28 ( d , 1h , j = 11.5 hz ) , 4.12 ( dd , 2h , j = 3.0 , 8.0
hz ) , 4.06 ( dd , 1h , j = 5.0 , 11.0 hz ) , 3.96 ( m , 1h ) ,
3.943.87 ( m , 2h ) , 3.82 ( dt , 1h , j = 4.0 ,
9.5 hz ) , 3.733.65 ( m , 7h ) , 3.60 ( t , 1h , j = 8.5 hz ) , 3.35 ( dt , 1h , j = 2.5 , 9.5 hz ) , 2.05
( d , 1h , j = 9.5 hz ) .
c nmr ( 125 mhz ,
c6d6 ) : 144.7 , 139.6 , 139.5 , 139.3 , 139.1 ,
139.0 , 129.2 , 128.7 , 128.5 , 128.3 , 128.1 , 127.8 , 109.5 , 104.5 , 85.2 ,
82.6 , 78.4 , 78.2 , 75.6 , 74.9 , 74.7 , 74.1 , 73.6 , 71.3 , 71.1 , 70.4 ,
69.4 , 69.0 , 30.2 .
ir ( thin film ) : 3966 , 2911 , 1648 , 1498 , 1458 , 1362 ,
1081 , 1027 , 741 , 704 cm .
m / z calcd for c74h78o14na [ m + na ] , 1213.5289 ; found , 1213.5321 .
trisaccharide glycal 25 was also characterized as the 2-o-acetate ; ¹h nmr and pfg-cosy confirmed the β configuration ( j1,2 = 8.2 hz ) .
h nmr ( 400
mhz , cdcl3 ) :
7.656.78 ( m , 40h ) , 6.28 ( d , 1h , j = 6.2 hz ) , 5.50 ( t , 1h , j = 8.2 hz ) , 5.04 ( t , 1h , j = 11.6 hz ) , 4.924.58 ( m , 15h ) , 4.55 ( d , 1h , j = 12.0 hz ) , 4.46 ( d , 1h , j = 12.2 hz ) ,
4.39 ( d , 1h , j = 7.8 hz ) , 4.31 ( d , 1h , j = 12.2 hz ) , 4.26 ( d , 1h , j = 11.3 hz ) , 4.014.07
( m , 2h ) , 3.913.87 ( m , 2h ) , 3.813.74 ( m , 5h ) , 3.643.54
( m , 3h ) , 3.533.45 ( m , 2h ) , 1.72 ( s , 3h ) .
trisaccharide glucal 25 ( 186 mg , 0.16 mmol ) and tbai ( 12 mg , 0.03 mmol ) were dissolved in dmf ( 1.6 ml ) , cooled to 0 °c under argon , and then treated with bnbr ( 38 μl , 0.32 mmol ) and a 60% dispersion of nah in mineral oil ( 13 mg , 0.31 mmol ) .
the ice bath was removed , and the reaction mixture was stirred at rt for 12 h , quenched at 0 °c with saturated nh4cl ( 3 ml ) , and extracted with et2o ( 3 × 5 ml ) .
the combined organic extracts were washed with h2o ( 3 × 10 ml ) and brine ( 5 ml ) , dried over na2so4 , and concentrated under reduced pressure .
the resulting foamy solid was recrystallized from et2o in hexanes to afford benzyl ether 26 as white crystals ( 184 mg , 92% yield ) .
h nmr ( 500 mhz , c6d6 ) : 7.457.06 ( m , 45h ) , 6.31 ( d , 1h , j = 6.0 hz ) , 5.17 ( d , 1h , j = 11.0 hz ) ,
5.12 ( d , 1h , j = 11.0 hz ) , 5.00 ( d , 2h , j = 11.5 hz ) , 4.874.77 ( m , 6h ) , 4.74 ( d , 1h , j = 5.0 hz ) , 4.67 ( d , 1h , j = 12.0 hz ) , 4.614.54
( m , 4h ) , 4.50 ( d , 1h , j = 12.0 hz ) , 4.454.38
( m , 4h ) , 4.33 ( d , 1h , j = 11.5 hz ) , 4.29 ( d , 1h , j = 12.0 hz ) , 4.13 ( m , 1h ) , 4.09 ( br s , 1h ) , 3.93 ( dd , 1h , j = 5.5 , 11.0 hz ) , 3.89 ( t , 1h , j = 7.0
hz ) , 3.80 ( dd , 1h , j = 6.0 , 11.0 hz ) , 3.753.59
( m , 8h ) , 3.46 ( m , 1h ) , 3.40 ( m , 1h ) .
c nmr ( 125 mhz ,
c6d6 ) : 144.8 , 139.6 , 139.5 , 139.4 , 139.3 ,
139.2 , 139.0 , 128.6 , 128.5 , 128.4 , 128.3 , 128.1 , 128.0 , 127.9 , 127.7 ,
104.6 , 104.3 , 100.2 , 85.2 , 85.1 , 82.7 , 78.6 , 78.4 , 76.8 , 75.6 , 75.5 ,
75.3 , 75.1 , 74.9 , 74.8 , 73.6 , 73.4 , 70.3 , 69.5 , 68.9 , 68.4 .
ir ( thin
film ) : 2915 , 1510 , 1451 , 1366 , 1075 , 733 , 695 cm .
esi ms : m / z for c81h84o14na [ m + na ] , 1304.07 .
trisaccharide glycal 26 ( 111 mg , 0.087 mmol ) was subjected to dmdo oxidation at −50 °c for 12 h to produce the α-epoxyglucal ( 20:1 , α/β ) in quantitative yield , followed by dtc ring opening as previously described .
the crude glycosyl dtc donor was subjected to cuotf-mediated glycosylation with 3,4-di-o-benzyl glucal 22 ( 42 mg , 0.13 mmol ) as described in the general procedure .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated and repurified by silica gel chromatography ( etoac in hexanes gradient with 1% et3n ) to afford the desired β,β,β-1,6-linked tetrasaccharide 28 as a white solid ( 68 mg , 49% ) and the α,β,β-1,6-linked tetrasaccharide 28 as a minor product ( 14 mg , 10% ) .
major ,,-isomer 28 : h nmr ( 500 mhz , c6d6 ) : 7.606.91 ( m , 55h ) , 6.28 ( d , 1h , j = 5.9 hz ) , 5.185.03 ( m , 4h ) , 5.00 ( dd , 2h , j = 3.8 , 11.3 hz ) , 4.904.75 ( m , 6h ) , 4.764.66 ( m ,
2h ) , 4.634.20 ( m , 14h ) , 4.14 ( d , 1h , j =
6.6 hz ) , 4.05 ( m , 1h ) , 3.93 ( t , 1h , j = 7.5 hz ) ,
3.903.50 ( m , 16h ) , 3.433.37 ( m , 2h ) , 1.04 ( t , 1h , j = 7.1 hz ) .
c nmr ( 125 mhz , c6d6 )
144.7 , 139.7 , 139.5 , 139.4 , 139.3 , 139.3 , 139.2 , 139.0 , 128.7 ,
128.6 , 18.6 , 128.6 , 128.5 , 128.4 , 128.3 , 128.1 , 128.0 , 127.9 , 127.9 ,
127.7 , 127.7 , 127.6 , 104.7 , 103.7 , 100.4 , 85.2 , 85.2 , 84.9 , 82.8 ,
82.7 , 78.7 , 78.4 , 78.1 , 76.9 , 76.0 , 75.6 , 75.5 , 75.4 , 75.3 , 75.1 ,
75.1 , 75.0 , 74.9 , 74.8 , 73.7 , 73.6 , 70.5 , 69.5 , 69.2 , 68.9 , 68.4 .
ir ( thin film ) : 2924 , 1717 , 1464 , 1274 , 1058 , 809 , 738 , 695 cm .
minor ,,-isomer 28 : h nmr ( 500 mhz , c6d6 ) : 7.676.81 ( m , 55h ) , 6.21 ( d , 1h , j = 6.0 hz ) , 5.12 ( t , 2h , j = 10.7 hz ) ,
5.01 ( d , 2h , j = 11.2 hz ) , 4.96 ( d , 2h , j = 10.3 hz ) , 4.91 ( d , 1h , j = 3.6 hz ) , 4.904.71
( m , 7h ) , 4.714.61 ( m , 3h ) , 4.604.45 ( m , 4h ) , 4.43
( d , 2h , j = 11.8 hz ) , 4.394.24 ( m , 3h ) , 4.13
( m , 1h ) , 4.114.03 ( m , 2h ) , 3.983.89 ( m , 3h ) , 3.863.82
( m , 3h ) , 3.793.58 ( m , 12h ) , 3.48 ( ddd , 1h , j = 2.0 , 5.6 , 9.6 hz ) , 3.38 ( dt , 1h , j = 9.9 , 3.1
hz ) , 2.15 ( d , 1h , j = 9.2 hz ) .
c nmr
( 125 mhz , c6d6 ) : 144.7 , 139.7 , 139.6 ,
139.5 , 139.5 , 139.4 , 139.3 , 139.0 , 128.7 , 128.7 , 128.6 , 128.6 , 128.5 ,
128.4 , 128.3 , 128.1 , 128.0 , 127.9 , 127.7 , 127.6 , 104.8 , 104.3 , 100.4 ,
99.7 , 85.2 , 85.2 , 83.7 , 82.7 , 82.7 , 78.6 , 78.3 , 78.1 , 76.6 , 76.1 ,
75.6 , 75.4 , 75.3 , 75.0 , 74.9 , 74.8 , 74.7 , 73.9 , 73.6 , 71.4 , 70.4 ,
69.4 , 69.0 , 66.7 .
ir ( thin film ) : 2898 , 1746 , 1497 , 1451 , 1362 , 1236 ,
1101 , 1046 , 742 , 683 cm .
esi ms : m / z for c101h106o19na [ m + na ] , 1647 .
the tetrasaccharide glycals 28 and 28 were also characterized as the 2-o-acetates ; ¹h nmr and pfg-cosy confirmed the β configuration of the major isomer 28 ( j1,2 = 9.5 hz ) and the α configuration of the minor isomer 28 ( j1,2 = 3.6 hz ) .
h nmr ( 500 mhz , c6d6 ) :
7.736.64 ( m , 55h ) , 6.38 ( dd , 1h , j = 1.4 ,
6.2 hz ) , 5.56 ( dd , 1h , j = 8.0 , 9.5 hz ) , 5.315.01
( m , 3h ) , 5.014.83 ( m , 8h ) , 4.834.30 ( m , 18h ) , 4.214.16
( m , 2h ) , 4.053.62 ( m , 14h ) , 3.633.40 ( m , 3h ) , 1.80
( s , 3h ) .
h nmr ( 500 mhz , c6d6 ) :
7.526.96 ( m , 55h ) , 6.14 ( dd , 1h , j = 1.3 ,
6.1 hz ) , 5.49 ( d , 1h , j = 3.6 hz ) , 5.23 ( dd , 1h , j = 3.5 , 10.1 hz ) , 5.13 ( d , 2h , j = 11.4
hz ) , 5.01 ( d , 2h , j = 11.2 hz ) , 4.92 ( d , 1h , j = 11.6 hz ) , 4.894.71 ( m , 12h ) , 4.694.23
( m , 15h ) , 4.18 ( ddd , 1h , j = 1.8 , 5.1 , 10.1 hz ) ,
4.10 ( dt , 1h , j = 6.6 , 1.8 hz ) , 4.063.95
( m , 2h ) , 3.86 ( dd , 1h , j = 5.9 , 11.4 hz ) , 3.833.58
( m , 16h ) , 3.54 ( ddd , 1h , j = 1.9 , 5.9 , 9.7 hz ) , 3.39
( dt , 1h , j = 9.8 , 3.2 hz ) , 1.80 ( s , 3h ) .
glycosyl dtc 2 ( 116 mg , 0.2 mmol ) and imidazole ( 48 mg , 0.7 mmol ) were dissolved in thf ( 2 ml ) , cooled to 0 °c under argon , and treated with tescl ( 170 μl , 1 mmol ) .
the reaction mixture was stirred at room temperature for 12 h , quenched at 0 °c with saturated nahco3 ( 10 ml ) , and extracted with etoac ( 3 × 20 ml ) .
the combined organic extracts were washed with brine ( 10 ml ) , dried over na2so4 , and concentrated under reduced pressure .
after workup , the yellow syrup was purified by silica gel chromatography ( neutralized with 1% et3n ) using a 20–50% etoac in hexanes gradient with 1% et3n to afford 2-o-triethylsilyl glycosyl dtc 29 as a colorless syrup ( 115 mg , 83% ) .
h nmr ( 500 mhz , c6d6 ) : 7.35
( d , 2h , j = 7.5 hz ) , 7.23 ( d , 2h , j = 7.5 hz ) , 7.207.02 ( m , 11h ) , 6.22 ( d , 1h , j = 10.1 hz ) , 5.01 ( d , 1h , j = 11.8 hz ) , 4.82 ( d ,
1h , j = 11.8 hz ) , 4.73 ( d , 1h , j = 11.2 hz ) , 4.64 ( d , 1h , j = 11.2 hz ) , 4.40 ( d ,
1h , j = 11.9 hz ) , 4.21 ( d , 1h , j = 11.9 hz ) , 4.13 ( dd , 1h , j = 8.4 , 10.0 hz ) , 4.00
( t , 1h , j = 9.5 hz ) , 3.773.62 ( m , 6h ) , 3.29
( m , 1h ) , 3.09 ( m , 1h ) , 1.01 ( t , 9h , j = 8.0 hz ) ,
0.990.96 ( m , 3h ) , 0.81 ( t , 3h , j = 7.0 hz ) ,
0.73 ( q , 6h , j = 7.8 hz ) .
c nmr ( 125
mhz , c6d6 ) : 192.9 , 140.0 , 139.5 , 139.3 ,
128.8 , 128.8 , 128.7 , 128.6 , 128.5 , 128.4 , 128.2 , 128.1 , 127.9 , 127.7 ,
127.3 , 91.8 , 88.4 , 80.2 , 79.1 , 75.7 , 75.0 , 73.9 , 73.7 , 69.4 , 49.9 ,
46.9 , 13.0 , 11.9 , 7.7 , 6.3 .
ir ( thin film ) : 2936 , 2874 , 1489 , 1458 ,
1416 , 1351 , 1263 , 1203 , 1143 , 1070 , 1016 , 922 , 800 , 732 , 692 cm .
m / z calcd for c38h53no5s2sina [ m + na ] , 718.3032 ; found , 718.3039 .
2-o-triethylsilyl glycosyl dtc 29 ( 35 mg , 0.05 mmol ) was subjected to cu(otf)2-mediated glycosylation with i-proh ( 8 μl , 0.1 mmol ) as described in the general procedure .
after workup , the crude mixture was passed through a plug of silica gel ( neutralized with 1% et3n ) packed on a fritted funnel and eluted with 2% etoac in toluene with 1% et3n to remove byproducts , followed by etoac with 1% et3n to collect the product mixture .
the pale yellow syrup was concentrated and repurified by preparative tlc ( 5% etoac in hexanes with 1% et3n ) to afford the β-i-pr glucoside 30 as a colorless syrup ( 16 mg , 53% ) and the α-i-pr glucoside 30 as a colorless syrup ( 8 mg , 26% ) .
major isomer 30 : h nmr ( 500 mhz , c6d6 ) :
7.41 ( d , 2h , j = 7.7 hz ) , 7.30 ( d , 2h , j = 7.6 hz ) , 5.01 ( d , 1h , j = 12.0 hz ) , 4.96 ( d ,
1h , j = 12.0 hz ) , 4.79 ( d , 1h , j = 12.0 hz ) , 4.79 ( d , 1h , j = 11.9 hz ) , 4.54 ( d ,
1h , j = 11.0 hz ) , 4.46 ( d , 1h , j = 12.0 hz ) , 4.40 ( d , 1h , j = 12.0 hz ) , 4.32 ( d ,
1h , j = 7.5 hz ) , 4.04 ( q , 1h , j =
6.0 hz ) , 3.79 ( d , 1h , j = 8.5 hz ) , 3.72 ( d , 1h , j = 9.5 hz ) , 3.693.68 ( m , 2h ) , 3.59 ( t , 1h , j = 9.0 hz ) , 3.40 ( dt , 1h , j = 3.0 , 10.0
hz ) , 1.26 ( d , 3h , j = 6.0 hz ) , 1.12 ( d , 3h , j = 6.0 hz ) , 1.08 ( t , 9h , j = 8.0 hz ) ,
0.80 ( q , 6h , j = 8.0 hz ) .
c nmr ( 125
mhz , c6d6 ) : 139.5 , 139.0 , 138.8 , 128.3 ,
128.3 , 128.1 , 128.0 , 127.8 , 127.6 , 127.5 , 127.1 , 101.1 , 86.4 , 78.6 ,
75.9 , 75.4 , 75.2 , 74.7 , 73.3 , 70.0 , 69.3 , 23.7 , 21.2 , 7.11 , 5.50 ,
1.19 .
ir ( thin film ) : 2877 , 1506 , 1354 , 1117 , 1063 , 695 cm .
m / z calcd
for c36h50o6sina [ m + na ] , 629.3269 ; found , 629.3258 .
minor isomer 30 : h nmr ( 500 mhz , c6d6 ) : 7.37
( d , 2h , j = 7.1 hz ) , 7.32 ( d , 2h , j = 7.1 hz ) , 7.237.07 ( m , 11h ) , 5.00 ( d , 1h , j = 12.0 hz ) , 4.974.94 ( m , 2h ) , 4.86 ( d , 1h , j = 11.5 hz ) , 4.64 ( d , 1h , j = 11.0 hz ) , 4.48 ( d ,
1h , j = 12.5 hz ) , 4.40 ( d , 1h , j = 12.0 hz ) , 4.21 ( t , 1h , j = 9.0 hz ) , 4.18 ( ddd ,
1h , j = 1.5 , 4.0 , 10.0 hz ) , 3.913.84 ( m ,
3h ) , 3.81 ( dd , 1h , j = 4.0 , 10.5 hz ) , 3.70 ( dd , 1h , j = 2.0 , 10.5 hz ) , 1.22 ( d , 3h , j = 6.5
hz ) , 1.14 ( d , 3h , j = 6.0 hz ) , 0.98 ( t , 9h , j = 7.5 hz ) , 0.62 ( dd , 3h , j = 1.0 , 7.5
hz ) , 0.59 ( dd , 3h , j = 2.0 , 8.5 hz ) .
nmr ( 125 mhz , c6d6 ) : 140.3 , 139.8 , 139.5 , 128.9 ,
128.8 , 128.7 , 128.6 , 128.4 , 128.2 , 127.8 , 127.7 , 98.8 , 83.5 , 79.3 ,
76.0 , 75.5 , 74.9 , 73.9 , 71.8 , 70.9 , 70.1 , 24.0 , 22.2 , 7.5 , 5.8 . ir
( thin film ) : 2907 , 1510 , 1455 , 1371 , 1168 , 1105 , 999 , 742 , 691 cm .
m / z calcd for c36h50o6sina
[ m + na ] , 629.3274 ; found , 629.3285 . | in this article , we evaluate glycosyl dithiocarbamates ( dtcs ) with unprotected c2 hydroxyls as donors in β-linked oligosaccharide synthesis .
we report a mild , one-pot conversion of glycals into β-glycosyl dtcs via dmdo oxidation with subsequent ring opening by dtc salts , which can be generated in situ from secondary amines and cs2 .
glycosyl dtcs are readily activated with cu(i) or cu(ii) triflate at low temperatures and are amenable to reiterative synthesis strategies , as demonstrated by the efficient construction of a tri-β-1,6-linked tetrasaccharide .
glycosyl dtc couplings are highly β-selective despite the absence of a preexisting c2 auxiliary group .
we provide evidence that the directing effect is mediated by the c2 hydroxyl itself via the putative formation of a cis-fused bicyclic intermediate . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Expanding Exemptions to Enable More
Public Trust Act'' or the ``EXEMPT Act''.
SEC. 2. GENERAL EXEMPTIONS.
(a) Amendments.--Section 30113 of title 49, United States Code, is
amended--
(1) in subsection (b)(3)(B)--
(A) in clause (iii), by striking ``; or'' and
inserting a semicolon;
(B) in clause (iv), by striking the period at the
end and inserting ``; or''; and
(C) by adding at the end the following:
``(v) the exemption would make easier the
development or field evaluation of--
``(I) a feature of a highly
automated vehicle providing a safety
level at least equal to the safety
level of the standard for which
exemption is sought; or
``(II) a highly automated vehicle
providing an overall safety level at
least equal to the overall safety level
of nonexempt vehicles.''; and
(2) in subsection (c), by adding at the end the following:
``(5) if the application is made under subsection
(b)(3)(B)(v) of this section--
``(A) such development, testing, and other data
necessary to demonstrate that the motor vehicle is a
highly automated vehicle; and
``(B) a detailed analysis that includes supporting
test data, including both on-road and validation and
testing data showing (as applicable) that--
``(i) the safety level of the feature at
least equals the safety level of the standard
for which exemption is sought; or
``(ii) the vehicle provides an overall
safety level at least equal to the overall
safety level of nonexempt vehicles.''.
(b) Definitions.--Section 30102 of title 49, United States Code, is
amended--
(1) in subsection (a)--
(A) by redesignating paragraphs (1) through (13) as
paragraphs (2), (3), (4), (5), (8), (9), (10), (11),
(12), (13), (15), (16), and (17), respectively;
(B) by inserting before paragraph (2) (as so
redesignated) the following:
``(1) `automated driving system' means the hardware and
software that are collectively capable of performing the entire
dynamic driving task on a sustained basis, regardless of
whether such system is limited to a specific operational design
domain.'';
(C) by inserting after paragraph (5) (as so
redesignated) the following:
``(6) `dynamic driving task' means all of the real time
operational and tactical functions required to operate a
vehicle in on-road traffic, excluding the strategic functions
such as trip scheduling and selection of destinations and
waypoints, and including--
``(A) lateral vehicle motion control via steering;
``(B) longitudinal vehicle motion control via
acceleration and deceleration;
``(C) monitoring the driving environment via object
and event detection, recognition, classification, and
response preparation;
``(D) object and event response execution;
``(E) maneuver planning; and
``(F) enhancing conspicuity via lighting,
signaling, and gesturing.
``(7) `highly automated vehicle'--
``(A) means a motor vehicle equipped with an
automated driving system; and
``(B) does not include a commercial motor vehicle
(as defined in section 31101).''; and
(D) by inserting after paragraph (13) (as so
redesignated) the following:
``(14) `operational design domain' means the specific
conditions under which a given driving automation system or
feature thereof is designed to function.''; and
(2) by adding at the end the following:
``(c) Revisions to Certain Definitions.--
``(1) If SAE International (or its successor organization)
revises the definition of any of the terms defined in paragraph
(1), (6), or (14) of subsection (a) in Recommended Practice
Report J3016, it shall notify the Secretary of the revision.
The Secretary shall publish a notice in the Federal Register to
inform the public of the new definition unless, within 90 days
after receiving notice of the new definition and after opening
a period for public comment on the new definition, the
Secretary notifies SAE International (or its successor
organization) that the Secretary has determined that the new
definition does not meet the need for motor vehicle safety, or
is otherwise inconsistent with the purposes of this chapter. If
the Secretary so notifies SAE International (or its successor
organization), the existing definition in subsection (a) shall
remain in effect.
``(2) If the Secretary does not reject a definition revised
by SAE International (or its successor organization) as
described in paragraph (1), the Secretary shall promptly make
any conforming amendments to the regulations and standards of
the Secretary that are necessary. The revised definition shall
apply for purposes of this chapter. The requirements of section
553 of title 5 shall not apply to the making of any such
conforming amendments.
``(3) Pursuant to section 553 of title 5, the Secretary may
update any of the definitions in paragraph (1), (6), or (14) of
subsection (a) if the Secretary determines that materially
changed circumstances regarding highly automated vehicles have
impacted motor vehicle safety such that the definitions need to
be updated to reflect such circumstances.''. | Expanding Exemptions to Enable More Public Trust Act or the EXEMPT Act This bill authorizes the Department of Transportation to exempt highly automated vehicles from certain motor vehicle safety standards if an exemption would facilitate the development or field evaluation of the safety features of such vehicles. |
advances in laboratory technologies and data analysis methodologies are permitting the exploitation of complex experimental data sets in ways that were unthinkable just a few years ago ( 1–3 ) .
however , although the number of scientific articles containing relevant data is steadily increasing , the majority of published data is still not easily accessible for automated text processing systems .
in fact , the information is still buried within the articles rather than being summarized in computer readable formats ( 4 ) .
therefore , it is necessary to perform the additional step of annotating the experimental data in formats suitable for systematic consultation or computation .
this task is performed manually by curators of databases specialized in diverse biological domains , ranging from cellular phenotypes and tissue anatomy to gene function .
the importance and the critical role played by such themed biocuration efforts are evident from the multitude of databases reported over the years in the nar database special issue ( 5 ) and from the birth of dedicated journals such as database .
different models have been followed to generate annotations from the literature ( 6,7 ) . in the museum model ,
a relatively small group of specialized curators perform a particular literature curation effort , while in the jamboree model a group of experts meet for a short intensive annotation workshop .
when various research groups scattered at different locations share common research interests and they jointly organize into a collaborative decentralized annotation effort ( working from their own laboratories ) , the so - called cottage industry model is followed .
devoted expert curators produce quality annotations , but because manual curation is time - consuming and there is a limited number of curators , it is difficult to keep current with the literature .
potential alternatives inspired by successful efforts , such as wikipedia , are the open community model ( 8) and the author - based annotations model ( 9,10 ) . the first does not have major restrictions on the actual annotators , as the whole community can contribute to generate annotations . in some cases , qualified roles for the contributors have been proposed to guarantee a certain level of confidence in the annotations .
the idea behind author - based annotations is that the authors themselves provide minimal annotations of their own article during the writing or submission process , going beyond author - provided keywords for indexing purposes .
each of the manual literature curation models previously introduced here still faces the problem of the increasing volume of literature ( 11 ) .
therefore , some attempts have been made to generate annotations automatically using automated text mining .
databases constructed according to the automated text-mining model are limited by performance issues but can generate valuable results when manual annotations are lacking ( 12,13 ) . a hybrid approach , namely text-mining-assisted manual curation , wherein semi-automated literature mining tools are integrated into the biocuration workflow , represents a more promising solution ( 14,15 ) .
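as a concrete ( and deliberately simplistic ) illustration of the kind of text-mining assistance meant here , a curator-support tool might simply flag sentences that co-mention two protein names together with an interaction trigger word , leaving the actual annotation decision to the curator . the sketch below is our own toy example ; the protein list and trigger words are assumptions , not part of any cited system :

```python
# Hedged toy example of text-mining-assisted curation: flag sentences that
# co-mention two protein names plus an interaction trigger word, so a curator
# only reviews candidate sentences. Protein list and triggers are assumptions.
import re

PROTEINS = {"TP53", "MDM2", "BRCA1"}
TRIGGERS = {"interacts", "binds", "associates", "co-immunoprecipitates"}

def candidate_ppi_sentences(text: str):
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        low = sentence.lower()
        mentioned = {p for p in PROTEINS if p.lower() in low}
        if len(mentioned) >= 2 and any(t in low for t in TRIGGERS):
            yield sorted(mentioned), sentence.strip()

for pair, s in candidate_ppi_sentences("We show that MDM2 binds TP53 in vivo."):
    print(pair, "->", s)   # ['MDM2', 'TP53'] -> We show that MDM2 binds TP53 in vivo.
```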
controlled vocabularies have been fundamental for all of these diverse annotation types , from the purely manual ones to totally automatic annotations .
key tools in the annotation of experimental data are bio - ontologies , a well - defined set of logic relations and controlled vocabularies that permit an accurate description of the experimental findings ( 16 ) .
the biocreative initiative ( critical assessment of information extraction systems in biology ) ( 17,18 ) is a community - wide effort for the evaluation of text mining and information extraction systems applied to the biological domain .
its major purpose is to stimulate the development of software that can assist the biological databases in coping with the deluge of data generated by the omics era .
we provide here a general overview of the biocreative experience with biomedical ontologies . for the biocreative initiatives
, it was of particular importance that annotations chosen as part of a challenge task had been generated through a model followed by research groups employing expert curators using well - established biocuration workflows refined over years of manual literature curation .
in particular , we will focus on the attempts that have been made to automatically extract protein–protein interaction ( ppi ) data taking advantage of ontologies , and to associate ontology terms with the interactions .
the opportunity to decipher the mechanisms underlying cellular physiology from the analysis of molecular interaction networks has prompted the establishment of databases devoted to the collection of such data , with great attention to protein and genetic interactions ( 19–22 ) .
some of the major protein interaction databases ( 19–25 ) are now federated in the international molecular exchange ( imex ) consortium , whose primary goals are to minimize curation redundancy and to share the data in a common format .
all active imex members share the same data representation standard , the human proteome organisation proteomics standards initiative molecular interactions ( hupo psi - mi ) ( 26 ) .
the psi - mi provides the logic model and the controlled vocabulary for representation of molecular interactions .
not surprisingly , the members of the imex consortium themselves are the main contributors to the development and maintenance of the psi - mi ontology .
the psi - mi was introduced with the intent to facilitate data integration among databases specifically for the representation of binary or n - nary interactions .
it also allows in - depth annotation of the experimental set - up such as the experimental or biological role of the interactors , the experimental method employed for the detection of the interaction , the binding domain of the interactors , and the kinetics of the binding reaction , among other attributes ( the psi - mi ontology can be explored at the ebi ontology look - up service ) ( 27 ) .
the psi - mi is not restricted to the representation of physical interactions but permits the thorough annotation of genetic interactions and even experimental evidence of co - localization among molecules .
each attribute of the interaction is described by a rich controlled vocabulary which is organized in a well - defined hierarchy and continuously updated and maintained by the psi - mi workgroup . regrettably , despite the cooperative efforts of the imex databases , the annotation of interaction data from the biomedical literature , and in particular of the subset of interactions involving human genes and their products , remains far from complete .
the time - consuming nature of manual curation severely hampers the achievement of an exhaustive collection of molecular interactions .
the thorough annotation of the experimental data contained in a single scientific article can take anywhere from minutes to hours .
hence , any automated support that assists the database curators , be it in the selection of the relevant literature or in the identification and annotation of the interactions , is more than welcome by the database community .
figure 1 provides a schematic representation of the manual literature curation of psi - mi concepts for protein interaction annotation .
figure 1. this figure shows schematically how protein interaction data is annotated and/or marked up using ontologies .
systems such as myminer ( myminer.armi.monash.edu.au/links.php ) , have been used for text labeling and highlighting purposes in the context of the biocreative competition .
finding associations between textual expressions referring to experimental techniques used to characterize protein interactions and their equivalent concepts in the mi ontology is cumbersome in some cases when deep domain inference is required .
experienced curators are able to quickly navigate the term hierarchy to find the appropriate terms while novice annotators often need to search the ontology using method keywords as queries and consult associated descriptive information for potential candidate terms .
a number of initiatives have been started in order to facilitate the automated extraction of information from the biomedical literature and of ppi data in particular .
the structured digital abstracts developed by febs letters in collaboration with the mint database ( 20 ) , for instance , is a structured text appended to the classical abstract that can be easily parsed by text - mining tools .
each biological entity ( proteins ) and relationship between these entities is tagged with appropriate database identifiers , thus permitting an unambiguous interpretation of the data .
in recent years , we have witnessed a flourishing of ontologies that attempt to accurately represent the complexity of the biological sciences ( 28 ) .
hence , we now have ontologies describing a wide variety of biological concepts , spanning from clinical symptoms to molecular interactions .
they not only attempt to capture in a more formal way the meaning ( semantics ) of a particular domain based on community consensus ( 29 ) but are also a key element for database interoperability and querying , as well as knowledge management and data integration ( 30 ) .
some of these ontologies can now be integrated with other ontologies , broadening their descriptive potential ( 31 ) .
furthermore , the gene ontology ( go ) ( 32 ) has grown considerably over 10 years , counting now almost 35 000 terms , compared to the initial 5000 .
[ for a general introduction to the go annotation process refer to hill et al . ]
the increasing number of biological terms and concepts covered by these ontologies has prompted a growing interest in their potential for use in the development of methods for automatic data extraction from the biomedical literature .
however , while biomedical ontologies are indispensable in the daily practice of database curators , it remains to be established if text mining can really benefit from well - established ontologies .
in fact , while an analysis of the lexical properties of the go indicates that a large percentage of go terms are potentially useful for text mining tools ( 34 ) , other evidence suggests that many of the open biomedical ontologies ( 28 ) are not suitable for effective natural language processing applications ( 35 ) .
this discrepancy is due to the fact that often the relevant information is not present as natural language alone , but requires the interpretation of images or of the data reported in the articles . as a consequence , not every piece of information can be extracted from the text itself .
the results of the first biocreative challenge suggest that a combination of several factors can influence the performance of text mining systems in the extraction of go terms associated with defined genes , including the specificity of the terms and their go branch membership ( 36 ) .
ontologies that benefit from an iterative process of expansion and restructuring , based on direct observations ( analysis of scientific literature ) made by communities of active users , are more likely to become successful resources for text - mining purposes .
inclusion of such observations in the ontologies will dramatically increase their potential in the context of text mining . nevertheless , some popular text - mining - based applications , such as textpresso ( 37 ) , ncbo annotator ( 38 ) , geneways ( 39 ) , domeo ( 40 ) or pubonto ( 41 ) , rely on the usage of ontologies . these kinds of systems are currently exploring ontologies mainly as lexical resources of controlled vocabulary terms for text indexing or markup purposes .
they assist the end users in improving the detection of annotation - relevant information at a very general level .
efficiently handling complex terms and annotation types is thus still a challenge for such approaches , making the results of the biocreative tasks particularly interesting to better understand the comparison between manual and automated extractions .
adapting some of the methodologies that participated in biocreative into such technical frameworks could potentially capture previously missing annotation types or concepts .
the biocreative challenge was established in 2004 with the purpose of assessing the state - of - the - art of text - mining technologies applied to biological problems . although it is called a challenge , the primary aim of biocreative is not the competition itself .
instead , the ambition of biocreative is manifold : ( i ) to benchmark the performance of text mining applications , ( ii ) to promote communication between bioinformaticians , text miners , and database curators , ( iii ) to define shared training and gold - standard test data and ( iv ) to spur the development of high - performance suites . to date , four editions of biocreative have been organized , each consisting of two or more specific tasks ( table 1 ) .
each task was designed to test the ability of the systems to detect biological entities ( gene or proteins ) and/or to link them to stable database identifiers , and evaluate how efficiently facts or functional relations can be associated with the biological entities ( e.g. protein function and ppi ) .
figure 2 shows how these biocreative challenges have evolved over time in the context of related community efforts , resources and applications .
figure 2. historical view and timeline of the biocreative challenges in the context of other community efforts , textual resources ( corpora ) and applications developed in the area of biomedical text mining .
the upper bar shows the number of new records added to pubmed each year , expressed in thousands ( k ) .
pink squares , appearance of biomedical text mining methods ; green octagons , relevant ontologies , lexical resources and corpora ; yellow boxes , community challenges ; blue ovals , biomedical text mining applications .
table 1. summary of the biocreative editions related to the identification of ontology terms in articles

biocreative i , task 1 -- description : return evidence text fragments for protein - go - document triplets ; ontologies : go ; curators / databases : goa - ebi ; participants : 9 ; data / format : full - text articles , sgml format ; training : 803 articles ; test : 113 articles ; evaluation : three labels ( correct , general , wrong ) , % correct cases ; methods : term lookup , pattern matching / template extraction , term tokens ( information content of go words , n - gram models ) , part - of - speech of go words and machine learning ; result highlights : precisions from 46% to 80% , accuracy of 30% ; observation : limited recall , effect of go term length

biocreative i , task 2 -- description : predict go annotations derivable from a given protein - article pair ; ontologies : go ; curators / databases : goa - ebi ; participants : 6 ; data / format : full - text articles , sgml format ; training : 803 articles ; test : 99 articles ; evaluation : three labels ( correct , general , wrong ) , % correct cases ; methods : term lookup , pattern matching / template extraction , term tokens ( information content of go words , n - gram models ) , part - of - speech of go words and machine learning ; result highlights : precisions from 9% to 35% ; observation : limited recall , difference in performance depending on go categories , cellular component terms are easier

biocreative ii ims -- description : prediction of mi annotations from ppi - relevant articles ; ontologies : mi ontology ; curators / databases : mint and intact ; participants : 2 ; data / format : full - text articles , pdf and html format ; training : 740 articles ; test : 358 articles ; evaluation : precision , recall and f - score , with mapping to the parent terms ; methods : pattern matching , automatically generating variants of mi terms , handcrafted patterns ; result highlights : precision from 32% to 67% , best f - score of 48 ; observation : difficulties with very general method terms

biocreative iii ims -- description : prediction of mi annotations from ppi - relevant articles ( ranked with evidence passages ) ; ontologies : mi ontology ; curators / databases : biogrid and mint ; participants : 8 ; data / format : full - text articles , pdf format ; training : 2003 training articles and 587 development set articles ; test : 223 articles ; evaluation : precision , recall , f - score , ranked predictions ( auc ip / r ) ; methods : cross - ontology mapping , manual and automatic extension of method names , statistics of word tokens building terms ( mutual information , chi square ) , machine learning of training set articles ; result highlights : most between 30% and 80% , best f - score of 55 ; observation : difficulties in case of methods not specific to ppis , problems with recall
the first edition of the biocreative challenge ( 17 ) was geared to the needs of model organism database curators .
the first task was further divided into two subtasks : the recognition of gene mentions in the text ( 42 ) and the linking of identified proteins from yeast , fly and mouse in abstracts to model organism database identifiers ( 43 ) .
the second task challenged the participants to annotate human gene products , defined by their uniprotkb / swiss - prot accession codes ( 44 ) , with the corresponding go codes by mining full - text articles ( 36 ) . in particular , teams were asked to return the textual evidence for the go term assigned to a defined set of proteins .
figure 3 illustrates schematically the idea behind the associated annotation process where , for proteins described in a given paper , go annotation evidence had to be extracted .
the process illustrates the individual steps of the annotation process , covering the initial selection of relevant documents for go annotation of proteins , identification of proteins and their corresponding database identifiers , followed by the extraction of associations to go terms and the retrieval of evidence sentences / passages .
the participating teams had to provide the evidence passages for a given document - protein - go term triplet for one subtask , and to actually detect go - protein associations ( together with evidence passages ) for the other subtask . precision and recall
were the basic metrics employed to evaluate the performance of the systems during this biocreative challenge .
precision is the fraction of true positive ( tp ) cases , i.e. correct results , divided by the sum of tp and false positive ( fp ) cases . recall can be considered as the fraction of tp results divided by the sum of tp and false negative ( fn ) results , i.e. relevant cases missed by the system . to account for both of these measures , the f - measure , i.e. the harmonic mean of precision and recall , is used .
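to make these metrics concrete , here is a minimal sketch of how precision , recall and f - score could be computed from the evaluated counts ( the function and variable names are illustrative , not taken from any biocreative evaluation script ) :

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F-score from raw counts.

    tp: predictions confirmed as correct by the curators
    fp: predictions judged wrong
    fn: gold-standard annotations missed by the system
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# example: 80 correct predictions, 20 wrong ones, 120 missed annotations
print(precision_recall_f1(80, 20, 120))  # -> (0.8, 0.4, ~0.533)
```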
database curators had to manually evaluate the automatically extracted evidence passages to determine if they correctly supported the annotations , as exemplified in figure 4 ( 36 ) .
figure 4. example predictions of the go task of biocreative i. ( a ) here a correct prediction is shown , containing the information on the corresponding document , protein and go term as well as the supporting evidence text passages extracted automatically from the full - text article .
( b ) example prediction ( wrong ) showing a screen shot of the original evaluation interface developed at the time for this task ( based on apache / php ) .
the original evaluation application is not functional anymore and was implemented specifically for this task .
the database curators manually evaluated both the correctness of the protein as well as the go terms .
the first biocreative competition saw the participation of 27 teams and some of the text mining algorithms yielded encouraging results in the identification of the gene names and in linking them to database identifiers ( 80% precision / recall ) ( 43 ) .
the identification of gene mentions in sentences was addressed using machine - learning and natural language processing techniques and benefited from training and test data in the form of labeled text prepared by biologists . for linking ( normalizing ) genes mentioned in abstracts , an f - score of 0.92 could be reached in the case of yeast , while in the case of fly ( f - score of 0.82 ) and mouse ( f - score of 0.79 ) the performance was considerably lower due to less consistent nomenclature use and a high degree of ambiguity of gene names .
conversely , the results of the functional annotation task proved that the interpretation of complex biological data , and thus linking text to the go ontology , is extremely challenging for text mining tools .
the obtained results indicated that some categories of go , in particular , the terms expressing sub - cellular location provided by the cellular component ( cc ) branch seemed to be more amenable for text - mining strategies .
the task of extracting ppi data was introduced in the second edition of biocreative ( 45 ) .
several subtasks were defined : detecting the literature containing protein interaction data ( interaction article subtask , ias ) , identifying the interaction pairs and linking the interacting partners to uniprotkb / swiss - prot identifiers ( interaction pair subtask , ips ) , identifying the experimental methods employed to detect the interaction ( interaction method subtask , ims ) and retrieving the textual evidence of the interaction ( interaction sentences subtask , iss ) .
the ppi task was a collaborative effort with intact and mint , databases whose curators annotated the training and test sets used in the various tasks ( 46 ) .
the experimental methods are important to infer how likely it is that a given protein interaction actually occurs in vivo , and it is usually the cumulative evidence rather than a single experiment that defines the reliability of the interaction . at a practical level , for curators ,
it is fundamental to identify in the article if there are experimental techniques usually associated with the detection of protein interactions ( e.g. two hybrid , affinity purification technologies ) .
these facts motivated the introduction of the ims ( 45 ) . for the ims subtask ,
the two participating teams were asked to identify from the text the list of the experimental techniques employed for the detection of ppis , and their results were compared with a reference list generated by manual annotation .
the experimental interaction detection techniques allowed for this task consisted of a sub - graph specified in the psi - mi ontology .
the highest score for exact match precision was 48% , but if matching to parent terms in the ontology was allowed , the score rose to an encouraging 65% ( 45 ) .
this improved performance was obtained by considering as correct those predicted terms that , when compared to the manually annotated terms , were either an exact match or a direct parent concept based on the psi - mi ontology graph structure .
this result is due to the fact that some ontology terms are far too specific to match the vocabulary routinely used in the biomedical literature .
for instance , while coimmunoprecipitation ( mi:0019 ) is widely used in the scientific literature , its child terms anti bait coimmunoprecipitation ( mi:0006 ) and anti tag coimmunoprecipitation ( mi:0007 ) are not .
the two child terms are used for annotation by database curators to further indicate if the experiment has been conducted with an antibody recognizing the protein or a tag fused to the target protein , respectively .
the use of these terms is therefore largely limited to human curator interpretation of the literature rather than explicit text mentions of these terms .
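as an illustration of the relaxed scoring described above , the following sketch accepts a prediction if it matches the curated term exactly or if it is the direct parent of that term in the psi - mi hierarchy ; the small parent map is a hypothetical excerpt built from the example terms just mentioned , not the full ontology :

```python
# hypothetical excerpt of the PSI-MI is-a hierarchy: child -> direct parent
PARENT = {
    "MI:0006": "MI:0019",  # anti bait coimmunoprecipitation -> coimmunoprecipitation
    "MI:0007": "MI:0019",  # anti tag coimmunoprecipitation  -> coimmunoprecipitation
}

def is_acceptable(predicted, curated):
    """True if the prediction matches exactly or is the direct parent of the curated term."""
    return predicted == curated or PARENT.get(curated) == predicted

print(is_acceptable("MI:0019", "MI:0006"))  # True: parent-level match is accepted
print(is_acceptable("MI:0006", "MI:0019"))  # False: prediction is more specific than the annotation
```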
attempts that might be promising , particularly for terms that are lengthy and representative of complex concepts , could also consider the use of term definitions . in this respect , go term definitions had been exploited by piao et al .
the definitions of psi - mi terms have also been used for linking psi - mi terms to full - text articles by analyzing unigrams and character n - grams from the psi - mi definition and synonyms ( 48 ) .
several studies have been published in the biomedical domain with the purpose of quantifying through metrics how closely related two terms are in their meanings , i.e. their semantic similarity ( 49 ) .
this is an important issue not only for comparing text - mining results to manual annotations , but also for measuring consistency of manual annotations themselves in inter - annotator agreement studies or to determine the functional similarity between genes annotated with those terms .
a simple approach for measuring semantic similarity can be the calculation of the distance between two terms in the graph path underlying the ontology .
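a minimal sketch of this path - based idea , assuming the ontology is available as an is - a edge list ( the networkx library and the tiny toy graph below are illustrative choices , not the resources used by the cited studies ) :

```python
import networkx as nx

# toy is-a edges; in practice these would be loaded from an OBO file
edges = [("interaction detection method", "affinity technology"),
         ("affinity technology", "coimmunoprecipitation"),
         ("coimmunoprecipitation", "anti tag coimmunoprecipitation"),
         ("interaction detection method", "two hybrid")]
onto = nx.Graph(edges)  # undirected, so path length ignores edge direction

def path_similarity(term_a, term_b):
    """Simple similarity: 1 / (1 + shortest-path distance in the ontology graph)."""
    d = nx.shortest_path_length(onto, term_a, term_b)
    return 1.0 / (1.0 + d)

print(path_similarity("coimmunoprecipitation", "anti tag coimmunoprecipitation"))  # 0.5
print(path_similarity("coimmunoprecipitation", "two hybrid"))                      # 0.25
```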
semantic similarity calculations have been promising for resources like wordnet ( 50,51 ) , which is essentially a lexical database of english words together with their semantic relation types with practical usage for text analysis .
this resource differs therefore in scope from go or the psi - mi ontology , whose primary use is for annotation of gene products .
semantic similarity calculations have shown useful results to quantify functional similarity between gene products based on their go annotations ( 49 ) , but using them for directly quantifying the similarity between predicted and manually annotated terms in the context of biocreative remained problematic .
the ims task was replicated in the biocreative iii edition ( 5254 ) and saw increased participation , with eight teams .
the difference from the previous edition was that participants were asked to provide a list of interaction detection method identifiers for a set of full - text articles , ordered by their likelihood of having been used to detect the ppis described in each article and providing also a text evidence passage for the interaction method .
figure 5 shows a set of example predictions of various degrees of difficulty corresponding to biocreative iii submissions . the training and development set
were derived from annotations provided by databases compliant with the psi - mi annotation standards , while the biogrid and mint database curators carefully prepared the test set .
participating teams went beyond simple term look - up and many of them considered this task as a multi - class classification problem .
the best precision obtained by a submission for this task was 80.00% at a recall of 41.50% ( f - score of 51.508 ) ( 53 ) .
the highest f - score was 55.06 ( 62.46% precision with 55.17% recall ) ( 53 ) .
figure 5. representative predictions submitted for the mi task of biocreative iii of diverse degrees of difficulty for automated systems .
participating teams had to return the article identifier , the concept identifier for the interaction detection method according to the mi ontology , a rank , a confidence score as well as supporting text evidence passages extracted from the full - text article .
this figure provides colored highlights of original predictions to better grasp the output ; in red , the original term from the mi ontology and its synonyms .
as can be seen some cases are rather straightforward , and could be detected by direct term lookup , while others require generating lexical variants or even more sophisticated machine learning and statistical word analysis .
a common approach followed by participating teams was , in addition to pattern matching techniques , the use of various kinds of supervised machine learning techniques that explored a range of different features .
machine - learning methods tested included naive bayes multiclass classifiers [ team 65 , ( 55 ) ] , support vector machines [ svms ; teams 81 ( 56 ) and 90 ( 48 ) ] , logistic regression [ lr ; team 69 , ( 53 ) ] and nearest neighbors [ team 100 , ( 53 ) ] .
another common practice was based on dictionary extension approaches using manually added terms based on the training data inspection , the use of cross - ontology mapping based on medical subject headings ( mesh ) and unified medical language system ( umls ) terms as well as rule - based expansion of the original dictionary of method terms .
most participating teams explored statistical analysis of words , bigrams and collocations present in the training and development set articles .
exact and partial word tokens building the original method term lists were also exploited .
finally , pattern - matching techniques together with rule - based approaches combined with machine - learning classifiers could be successfully adapted for this task .
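as an illustration of the multi - class formulation mentioned above , here is a minimal sketch that maps evidence passages to interaction detection method identifiers with a bag - of - words classifier ( scikit - learn ; the toy training examples are invented , and the identifiers other than mi:0019 are given from memory and should be checked against the ontology ) :

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented toy examples: evidence passage -> PSI-MI interaction detection method identifier
passages = [
    "bait and prey constructs were tested in a yeast two hybrid screen",
    "the complex was recovered by coimmunoprecipitation with an anti-flag antibody",
    "binding was measured by surface plasmon resonance on a biacore chip",
]
labels = ["MI:0018", "MI:0019", "MI:0107"]

# TF-IDF unigrams/bigrams fed to a multi-class logistic regression classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(passages, labels)

print(clf.predict(["interaction confirmed by two hybrid assay in yeast"]))  # likely ['MI:0018']
```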
team 88 of biocreative iii ( 53 ) used a dictionary - based strategy to recover mentions of interaction method terms .
as finding exact mentions of method terms results generally in limited recall , team 70 ( 53 ) used approximate string searches for finding method mentions . another option to boost recall was followed by team 65 ( 55 ) , which considered sub - matches at the level of words and applied pattern - matching techniques .
such methods are suitable to handle multi - term words , which comprise an important fraction of the psi - mi terms .
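a minimal sketch of such a dictionary look - up with an approximate - match fall - back , using only the python standard library ( the mini - dictionary is an invented example and does not correspond to any team 's actual resource ) :

```python
import difflib

# invented mini-dictionary: surface form -> PSI-MI identifier
METHOD_DICT = {
    "two hybrid": "MI:0018",
    "coimmunoprecipitation": "MI:0019",
    "surface plasmon resonance": "MI:0107",
}

def lookup(mention, cutoff=0.8):
    """Exact dictionary hit first, then an approximate string match to boost recall."""
    if mention in METHOD_DICT:
        return METHOD_DICT[mention]
    close = difflib.get_close_matches(mention, METHOD_DICT.keys(), n=1, cutoff=cutoff)
    return METHOD_DICT[close[0]] if close else None

print(lookup("co-immunoprecipitation"))            # approximate match -> 'MI:0019'
print(lookup("yeast two-hybrid", cutoff=0.6))      # a looser cutoff is needed for this variant
```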
team 65 also used a corpus - driven approach to derive conditional probabilities of terms , while another team ( 56 ) complemented pattern matching with a sentence classification method relying on svms .
this type of machine learning method together with logistic regression was also tested by team 90 ( 48 ) , trying out many features , like the type and text of named entities , word proximity to the entities and information on where in a document these entities were mentioned .
they included features that covered term and lexicon membership properties and carried out a global analysis at the level of the documents as well as at the level of individual sentences .
a software that directly resulted from participation at the imt is the ontonorm framework ( 57 ) from team 89 ( 58 ) which integrated dictionary - based pattern - matching together with a binary machine - learning classification system and the calculation of mutual information and chi - squared scores of unigrams and bigrams relevant for method terms . according to an observation of team 100 ( 53 ) , how competitive a given strategy was depended heavily on the actual psi - mi term .
they therefore used a psi - mi term specific knowledge - based approach , applying for instance pattern - matching approaches for some terms , while others were detected through a nearest neighbors method .
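the word - level statistics mentioned above ( chi - square or mutual information of unigrams and bigrams with respect to a method class ) can be sketched as follows with scikit - learn ; the toy corpus and labels are invented placeholders :

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

# invented toy corpus: documents labelled 1 if they report a pull-down experiment
docs = ["gst pull down assay with purified proteins",
        "pull down experiments confirmed the interaction",
        "expression levels were measured by northern blot",
        "the transcript was detected by northern blot analysis"]
y = [1, 1, 0, 0]

vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(docs)
scores, _ = chi2(X, y)

# rank unigrams/bigrams by how strongly they discriminate the method class
top = sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])[:5]
print(top)  # n-grams such as 'pull down' and 'northern blot' should score highly
```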
the availability of text - mining tools can assist scientific curation in many ways , from the selection of the relevant literature to greatly facilitating the completion of a database entry ( saving a considerable amount of time ) .
furthermore , there is a lot of ferment in the area of ontology driven annotation of biomedical literature as witnessed by the beyond the pdf initiative ( 59 ) .
the whole biocreative experience highlighted that in order to obtain substantial advances in the development of text - mining methodologies , it is necessary to develop close collaboration among different communities : text miners , database curators and ontology developers .
in particular , such proximity instilled into the text - mining community a more mature comprehension of crucial biological questions ( e.g. gene species annotation ) and the necessity to make methods and results more easily accessible to biologists and database annotators ( e.g. user - friendly visualization tools ) .
what is crucial for text miners in the development of more efficient predictive algorithms is the availability of a large corpus of manually annotated training data .
ideally , such text - bound annotations should cover a variety of representative text phrases mapped to the same concept .
how feasible it is to generate large enough annotated text data sets for complex annotation types at various levels of granularity is still unclear .
this necessity prompted various initiatives to compile ad hoc curated data sets [ e.g. the genia corpus ( 60 ) ] .
unfortunately , such collections are usually created as a specific resource for natural language processing research but are not suitable for all applications .
another effort to provide syntactic and semantic text annotations of biomedical articles using various ontologies is the craft corpus initiative , which aims to provide concept annotations from six different ontologies including go and the cell type ontology ( cl ) ( 61 ) .
one of the merits of biocreative has been to permit the public deposition of annotated corpora .
biocreative has also been very effective in identifying the main areas of application , limitations and goals of text mining in the area of protein / gene function and interactions .
data sets routinely annotated by databases are ideal candidates for the compilation of large reference data sets .
unfortunately , databases do not capture the textual passages linked to the experimental evidence and this represents a significant hurdle to the development of text - mining suites .
in addition , it is still very hard to convince databases and publishers to provide access to text - bound annotations ( manual text labelling ) , and this also involves difficulties related to technical and organizational aspects . in this respect ,
the identification of the experimental methods ( as described by psi - mi ) linked to protein interactions can be an important resource facilitating the retrieval of protein interactions , but this requires an extra effort to increase the aliases of the dictionary and/or to identify the critical textual passages .
ideally , a strategy to effectively employ bio - ontologies in text - mining technologies would consist of an in - depth annotation of text passages associated with the ontology terms , thus creating an effective dictionary .
this could serve as valuable data for machine learning approaches as well as be useful for automatic term extraction techniques to enrich iteratively the lexical resources behind the original ontologies . on the other hand ,
there is a need to consider more closely the use of text - mining methods for the actual development and expansion of controlled vocabularies and ontologies , relying for instance on corpus - based term acquisition .
such an approach has shown promising results for the metabolomics ( 29 ) and animal behavior ( 62 ) domains , where term recognition and filtering methods using generic software tools have been explored . at the current stage , it is possible to say that the biocreative effort has successfully promoted the exploration of a set of sophisticated methods for the automatic detection of ontology concepts in the literature , some of which can generate promising results .
what is still missing is to determine more systematically which methods are more robust or competitive for particular types of concepts or terms , as well as to have more granular annotations at the level of labeling textual term evidence .
ultimately , the incorporation of concept recognition systems into text - mining tools will greatly depend on their availability and flexibility to handle more customized term lists and ontology relation types .
this work was supported by the national center for research resources ( ncrr ) and the office of research infrastructure programs ( orip ) of the national institutes of health ( nih ) ( 1r01rr024031 to m.t . )
( r24rr032659 to m.t . ) ; the biotechnology and biological sciences research council ( bb / f010486/1 to m.t . ) ; the canadian institutes of health research ( frn 82940 to m.t . ) ; the european commission fp7 program ( 2007 - 223411 to m.t . ) ; a royal society wolfson research merit award ( to m.t . ) ; the scottish universities life sciences alliance ( to m.t . ) ; projects bio2007 ( bio2007 - 666855 ) ( to m. k. and a.v . ) , consolider ( csd2007 - 00050 ) ( to m. k. and a.v . ) , microme ( grant agreement number 222886 - 2 ) ( to m. k. and a.v . ) . funding for open access charges : national institutes of health ( 1r01rr024031 ) .

there is an increasing interest in developing ontologies and controlled vocabularies to improve the efficiency and consistency of manual literature curation , to enable more formal biocuration workflow results and ultimately to improve analysis of biological data .
two ontologies that have been successfully used for this purpose are the gene ontology ( go ) for annotating aspects of gene products and the molecular interaction ontology ( psi - mi ) used by databases that archive protein protein interactions .
the examination of protein interactions has proven to be extremely promising for the understanding of cellular processes .
manual mapping of information from the biomedical literature to bio - ontology terms is one of the most challenging components in the curation pipeline .
it requires that expert curators interpret the natural language descriptions contained in articles and infer their semantic equivalents in the ontology ( controlled vocabulary ) . since
manual curation is a time - consuming process , there is strong motivation to implement text - mining techniques to automatically extract annotations from free text . a range of text mining strategies has been devised to assist in the automated extraction of biological data .
these strategies either recognize technical terms used recurrently in the literature and propose them as candidates for inclusion in ontologies , or retrieve passages that serve as evidential support for annotating an ontology term , e.g. from the psi - mi or go controlled vocabularies . here
, we provide a general overview of current text - mining methods to automatically extract annotations of go and psi - mi ontology terms in the context of the biocreative ( critical assessment of information extraction systems in biology ) challenge .
special emphasis is given to protein - protein interaction data and psi - mi terms referring to interaction detection methods .
understanding how the order flow affects the dynamics of prices in financial markets is of utmost importance , both from a theoretical point of view ( why and how prices move ? ) and for practical / regulatory applications ( i.e. trading costs , market stability , high frequency trading , ` tobin ' taxes , etc . ) .
the availability of massive data sets has triggered a spree of activity around these questions @xcite ( for a review see @xcite ) .
one salient ( and initially unexpected ) stylized fact is the long - memory of the order flow , i.e. the fact that buy / sell orders are extremely persistent , leading to a slowly decaying correlation of the sign of the order imbalance @xcite .
this immediately leads to two interesting questions : first , why is this so ?
is it the result of large `` metaorders '' being split in small pieces and executed incrementally , or is it due to herding or copy - cat trades , i.e. trades induced by the same external signal or by some traders following suit , hoping that the initial trade was informed about future price movements ?
second , how is it possible that a highly predictive order flow impacts the price in such a way that very little predictability is left in the time series of price changes ?
several empirical investigations , as well as order of magnitude comparisons between the typical total size of metaorders and the immediately available liquidity present in the order book , strongly support the `` splitting '' hypothesis @xcite .
since the metaorder has to be executed over some predefined time scale ( typically several days for stocks ) , the structure of the order flow is expected to be , in a first approximation , independent of the short term dynamics of the price and can be treated as exogenous , see below .
the idea then naturally leads to a class of so - called `` propagator '' models , where the mid - point price @xmath0 ( just before trade at time @xmath1 ) can be written as a linear superposition of the impact of all past trades , considered as given , plus noise @xcite : @xmath2 + m_{-\infty } \label{eqn : prom_price_process}\ ] ] where @xmath3 is the sign of trade at time @xmath4 ( @xmath5 for buy / sell market orders ) , @xmath6 is a noise term which models any price changes not induced by the trades ( e.g. limit orders / cancellations inside the spread , jumps due to news , etc . ) .
the function @xmath7 is called the `` propagator '' and describes the decay of impact with time .
the crucial insight of this formulation is precisely that this impact decay may counteract the positive auto - correlation of the trade signs and eventually lead to a diffusive price dynamics ( see @xcite and below ) .
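to make the mechanism explicit , here is a minimal numerical sketch of a propagator - type price process with persistent signs and a decaying kernel ( the power - law kernel , the sign - generation rule and all parameter values are illustrative assumptions , not calibrated quantities ) :

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta = 5000, 0.4
G = (1.0 + np.arange(T)) ** (-beta)           # assumed decaying propagator G(l) ~ l^-beta

# persistent +/-1 trade signs: each sign copies a past sign drawn at a fat-tailed lag
eps = np.ones(T)
for t in range(1, T):
    lag = min(t, int(rng.pareto(0.5)) + 1)    # crude long-memory mechanism (illustrative)
    eps[t] = eps[t - lag] if rng.random() < 0.8 else rng.choice([-1.0, 1.0])

noise = 0.05 * rng.standard_normal(T)         # "news" component not driven by trades
returns = G[0] * eps + noise                  # immediate impact of the current trade
for l in range(1, T):                         # decaying contribution of past trades
    returns[l:] += (G[l] - G[l - 1]) * eps[:-l]
price = np.cumsum(returns)
print(price[:5])
```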
although highly simplified , the above framework leads to an interesting approximate description of the price dynamics . still , many features are clearly missing , see @xcite :
* first , the above formalism posits that all market orders have the same impact , in other words @xmath8 only depends on @xmath9 and not on @xmath1 and @xmath4 separately , which is certainly very crude . for example , some market orders are large enough to induce an immediate price change , and are expected to impact the price more than smaller market orders . one furthermore expects that depending on the specific instant of time and the previous history , the impact of market orders is different .
* second , limit orders and cancellations should also impact prices , but their effect is only taken into account through the time evolution of @xmath7 itself , which phenomenologically describes how the flow of limit orders opposes that of market orders and reverts the impact of past trades .
* third , the model assumes a _ linear _ addition of the impact of past trades and neglects any non - linear effects which are known to exist . for example , the total impact of a metaorder of size @xmath10 is now well known to grow as @xmath11 , a surprising effect that can be traced to non - linearities induced by the deformation of the underlying supply and demand curve , see e.g. @xcite .
however , before abandoning the realm of linear models , it is interesting to see how far one can go within the ( possibly extended ) framework of propagator models , in order to address point 1 and 2 above .
the aim of this work is to explore generalised linear propagator models , in the spirit of @xcite , with a fully consistent description of the impact of different market events and of the statistics of the order flow . for the sake of readability ,
we have decided to present our results in two companion papers . in the present first part ( i ) , we investigate in detail two possible generalisations of eq .
[ eqn : prom_price_process ] above , where price - changing and non price - changing market orders are treated differently .
we show that separating these two types of events already leads to a significant improvement of the predictions of the model , in particular for large tick stocks .
we revisit the difference between the `` transient impact model '' ( tim ) and the `` history dependent impact model '' ( hdim ) introduced in @xcite , correct some misprints in that paper , and show that hdim is always ( slightly ) better than tim for small tick stocks , as expected intuitively .
we then turn to the modelling of the order flow in the companion paper ( ii ) , with in mind the necessity of keeping the linearity of the predictors of future order flow , as assumed in hdims .
the so - called mixed transition distribution ( mtd ) model is a natural framework for constructing a versatile time series model of events , with a broad variety of correlation structures @xcite .
the propagator model defined by eq .
[ eqn : prom_price_process ] above can alternatively be written in its differential form , where instead of the price process we consider the return process , @xmath12 : @xmath13 where @xmath14 . in the following
we will call this model transient impact model ( as in @xcite ) and we label the predicted values according to the above model with tim1 where the `` 1 '' refers to the fact that one propagator function , @xmath7 , characterizes the model .
empirical results show @xcite that for small ticks @xmath7 is a decreasing function with time , therefore the kernel @xmath15 is expected to be a negative function .
this means that the impact of a market order is smaller if it follows a sequence of trades of the same sign than if it follows trades of the opposite sign .
the authors of @xcite call this behaviour the `` asymmetric liquidity '' mechanism : the price impact of a type of order ( buy or sell ) is inversely related to the probability of its occurrence .
the reason for this mechanism is that liquidity providers tend to pile up their limit orders in opposition of a specific trend of market orders @xcite , whereas liquidity takers tend to reduce the impact of their trades by adapting their request of liquidity to the available volume during the execution of their metaorders @xcite . in order to calibrate the above model
, we can measure the empirical response function @xmath16 $ ] and the empirical correlation function of the order signs @xmath17 $ ] .
these two functions form a linear system of equations @xmath18c(n),\ ] ] whose solution is the propagator function @xmath7 , for @xmath19 .
an alternative method of estimation , which is less sensitive to boundary effects , uses the return process of eq .
[ eqn : prom_return_process ] , such that the associated response function @xmath20 $ ] and @xmath21 are related through : @xmath22 whose solution represents the values of the kernel @xmath23 .
the relation between @xmath24 and @xmath25 is : @xmath26 allowing to recover the response function from its differential form .
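a minimal sketch of the empirical quantities entering this calibration , assuming ` mid ` is the series of mid - prices just before each trade and ` eps ` the corresponding signs ; we read the response and correlation functions in the standard way , s(l) = e[ ( m_{t+l} - m_t ) eps_t ] and c(l) = e[ eps_t eps_{t+l} ] , since the exact formulas are hidden behind the @xmath placeholders :

```python
import numpy as np

def response_function(mid, eps, max_lag):
    """S(l) = E[(m_{t+l} - m_t) * eps_t], estimated by a sample average."""
    return np.array([np.mean((mid[l:] - mid[:-l]) * eps[:-l]) for l in range(1, max_lag + 1)])

def sign_correlation(eps, max_lag):
    """C(l) = E[eps_t * eps_{t+l}]."""
    return np.array([np.mean(eps[:-l] * eps[l:]) for l in range(1, max_lag + 1)])

# toy usage with synthetic data (see the simulation sketch above)
rng = np.random.default_rng(1)
eps = np.sign(rng.standard_normal(10000))
mid = np.cumsum(0.5 * eps + 0.1 * rng.standard_normal(10000))
S, C = response_function(mid, eps, 50), sign_correlation(eps, 50)
print(S[:3], C[:3])
```

the propagator itself then follows from solving the resulting linear system numerically , e.g. with a standard linear solver applied to the matrix built from c .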
once the propagator @xmath7 is calibrated on the data , the model is fully specified by the statistics of the noise @xmath27 . for simplicity , we will assume that @xmath27 has a low - frequency , white noise part of variance @xmath28 , describing any `` news '' component not captured by the order flow itself , and a fast mean - reverting component of variance @xmath29 describing e.g. high frequency activity inside the spread ( affecting the position of the mid - point @xmath0 ) or possible errors in the data itself .
once the model is fully calibrated on data , we examine its performance by considering the prediction of two quantities , namely the negative lag response function and the signature plot .
the former is the extension of the price response function , @xmath24 , to @xmath30 values , measuring the correlation between the present sign of the market order and the past price changes : @xmath31 .
\label{eqn : negativelag}\ ] ] @xmath32 , with @xmath33 , is fully specified by the model , independently of @xmath28 and @xmath29 .
naturally the one propagator model assumes a `` rigid '' order flow that does not adapt to price changes and leads to : @xmath34 where tim1 reminds us that this is the prediction according to the one propagator model .
empirically , however one expects that the order flow should be adapting to past price changes , and an upward movement of the price should attract more sellers ( and vice - versa ) . in section [ sec : prop_empirical_test ] we will compare the prediction of eq . to empirical results .
the second prediction of the propagator model concerns the scale - dependent volatility of price changes , or `` signature plot '' , defined as : @xmath35.\ ] ] using the propagator model , one finds the following exact expression : @xmath36 ^ 2 + 2 \psi(\ell)+ \frac{d_\mathrm{hf}}{\ell } + d_\mathrm{lf},\ ] ] where @xmath37 is the correlation - induced contribution to the price diffusion : @xmath38\left[g(\ell+m)-g(m)\right]c(m - n ) \\ & + \sum_{0 \leq n < \ell } \sum_{m > 0 } g(\ell - n ) \left[g(\ell+m)-g(m)\right ] c(m+n).\end{aligned}\ ] ] hence , once @xmath7 is known , the signature plot of the price process can be computed and compared with empirical data .
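the signature plot is straightforward to estimate directly from a price series ; a minimal sketch , using the definition d(l) = e[ ( m_{t+l} - m_t )^2 ] / l , which is how we read the formula above :

```python
import numpy as np

def signature_plot(mid, max_lag):
    """D(l) = E[(m_{t+l} - m_t)^2] / l : lag-dependent variance per unit lag."""
    return np.array([np.mean((mid[l:] - mid[:-l]) ** 2) / l for l in range(1, max_lag + 1)])

# on a pure random walk the signature plot is flat; trends make it increase with the lag,
# mean reversion makes it decrease
rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(20000))
print(signature_plot(walk, 5))
```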
the above model describes trades that impact prices , but with a time dependent , decaying impact function @xmath7 .
one can in fact interpret the same model slightly differently , by writing as an identity : @xmath39 this can be read as a model where the deviation of the realized sign @xmath40 from an expected level @xmath41 impacts the price linearly and permanently .
if @xmath41 is the best possible predictor of @xmath40 , then the above equation leads by construction to an exact martingale for the price process ( i.e. the conditional average of @xmath42 on all past information is zero ) @xcite . since the impact depends on the past history of order flow , following ref .
@xcite , we refer to the model on the left of eq .
[ eqn : hdim_return_process ] as the history dependent impact model and since only one type of past events is considered in the predictor we label it with hdim1 . when the best predictor is furthermore _ linear _ in the past order signs ( as in the right equation of eq .
[ eqn : hdim_return_process ] ) , then the tim1 defined by eq .
[ eqn : prom_return_process ] is _ equivalent _ to the hdim1 , eq .
[ eqn : hdim_return_process ] .
we will see below that as soon as one attempts to generalize the propagator model to multiple event types , tim and hdim become no longer equivalent .
when is the best predictor of the future price a linear combination of past signs , such that tim and hdim are equivalent when restricted to one type of market orders only ?
the answer is that this is true whenever the string of signs is generated by a so - called discrete autoregressive ( dar ) process ( see @xcite ) .
dar processes are constructed as follows ( our description here lays the ground for the more general mtd models described in the companion paper ) .
the sign at time @xmath1 is thought of as the `` child '' of a previous sign @xmath43 , where the distance @xmath44 is a random variable distributed according to a certain discrete distribution @xmath45 , with : @xmath46 if @xmath47 , the model is called dar(p ) , and involves only @xmath48 lags .
once the `` father '' sign is chosen , one postulates that : @xmath49 one can then show that in the stationary state , the signs @xmath50 are equiprobable , and the sign auto - correlation function @xmath21 obeys the following yule - walker equation : @xmath51 there is therefore a one - to - one relation between @xmath45 and @xmath21 .
note that in the empirical case where @xmath21 decays as a power - law @xmath52 with exponent @xmath53 , one can show that @xmath54 and @xmath55 .
now , from the very construction of the process , the conditional average of @xmath40 is given by : @xmath56 such that one can indeed identify the hdim1 with a tim1 , with : @xmath57 when @xmath58 , one finds as expected @xmath59 with @xmath60 @xcite .
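a minimal simulation sketch of a dar - like sign process ( the truncated power - law lag distribution and the copy probability are illustrative choices , since the exact copy rule is hidden behind an @xmath placeholder ) :

```python
import numpy as np

def simulate_dar(T, rho=0.8, p_max=200, tail=1.6, seed=0):
    """Illustrative DAR-like sign series: with probability rho the sign copies the sign
    observed K steps earlier (K drawn from a truncated power law), otherwise a fresh
    +/-1 sign is drawn. This is only a plausible reading of the construction above."""
    rng = np.random.default_rng(seed)
    lags = np.arange(1, p_max + 1)
    p_k = lags ** (-tail)
    p_k /= p_k.sum()
    eps = rng.choice([-1.0, 1.0], size=T)      # initial signs, overwritten once history exists
    for t in range(p_max, T):
        if rng.random() < rho:
            eps[t] = eps[t - rng.choice(lags, p=p_k)]
    return eps

eps = simulate_dar(50000)
# empirical sign autocorrelation at a few lags (should decay slowly)
print([round(float(np.mean(eps[:-l] * eps[l:])), 3) for l in (1, 10, 100)])
```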
in order to develop the idea that large market orders ( compared to the volume at the opposite best ) may have a different impact than small ones , we need to extend the above propagator model to different events @xmath61 , where we choose here two types of events @xmath61 defined as : @xmath62 we follow the general framework of @xcite , but here the definition of price changing events is different .
they refer to the total returns until the next transaction and they include the behaviour of liquidity takers and liquidity providers .
these different events are discriminated by using indicator variables denoted as @xmath63 .
the indicator , @xmath63 , is @xmath64 if the event at @xmath1 is of type @xmath65 and zero otherwise .
the time average of the indicator function is the unconditional probability of event @xmath65 , @xmath66 $ ] .
the usage of the indicator function simplifies the calculation of the conditional expectations , which will be intensively used in the following .
for example , if a quantity @xmath67 depends on the event type @xmath65 and the time @xmath1 , then its conditional expectation is @xmath68=\frac{\mathbb{e}[x_{\pi_t , t}i(\pi_t=\pi)]}{\mathbb{p}(\pi)}.\ ] ] by definition of the indicator function we have that @xmath69 at this stage , the natural generalisation of the tim is to write the return process as @xmath70 where @xmath71 .
therefore we call this model tim2 .
the resulting price process is a linear superposition of the decaying impact of different ( signed ) events : @xmath72+m_{-\infty}. \label{eqn : two_prom_price_process}\ ] ] which can be used to compute the signature plot @xmath73 of model ( see appendix [ app : tim_signature_plot ] ) .
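before turning to the calibration , here is a minimal sketch of how the per - event - type ( conditional ) response functions used below could be estimated from data ; the variable names and the toy data are ours , with ` price_changing ` flagging market orders that moved the mid - price :

```python
import numpy as np

def conditional_response(mid, eps, event_mask, max_lag):
    """S_pi(l) = E[(m_{t+l} - m_t) * eps_t * I(pi_t = pi)] / P(pi)."""
    p_pi = np.mean(event_mask)
    out = []
    for l in range(1, max_lag + 1):
        out.append(np.mean((mid[l:] - mid[:-l]) * eps[:-l] * event_mask[:-l]) / p_pi)
    return np.array(out)

# toy data: signs, mid prices and a flag telling which trades changed the price
rng = np.random.default_rng(3)
eps = np.sign(rng.standard_normal(20000))
price_changing = rng.random(20000) < 0.3
mid = np.cumsum(eps * np.where(price_changing, 1.0, 0.1) + 0.05 * rng.standard_normal(20000))

S_pc = conditional_response(mid, eps, price_changing.astype(float), 20)
S_npc = conditional_response(mid, eps, (~price_changing).astype(float), 20)
print(S_pc[:3], S_npc[:3])  # price-changing orders should show a larger response
```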
the tim2 can be calibrated very similarly to the tim1 above , by noting that the differential response function @xmath74=\frac{\mathbb{e}[r_{t+\ell}\cdot\epsilon_t i(\pi_t=\pi)]}{\mathbb{p}(\pi)},\ ] ] and the conditional correlation of order signs of a pair of events @xmath78 and @xmath79 , @xmath80}{\mathbb{p}(\pi_1)\mathbb{p}(\pi_2 ) } \label{eqn : conditional_corr_order_signs}\ ] ] are related through : @xmath81 ( note that this conditional correlation is not bounded in @xmath75 $ ] because we normalize the expectation in the numerator by the product @xmath76 rather than by the joint probability @xmath77 ; this choice is made to speed up the computations and we have verified that the difference is very small . ) we use these quantities to evaluate the conditional response function @xmath82 , the total impact function @xmath83 and the corresponding response function @xmath24 . as for the tim1 , once we have calibrated @xmath84 , we compute the predicted values of these response functions for negative lags , @xmath85 and @xmath86 , and the predicted signature plot @xmath87 . however , this is not the only generalisation of the propagator model .
in fact , the hdim formulation , eq . [ eqn : hdim_return_process ] , lends itself to the following , different extension : @xmath88 meaning that the expected sign for an event of type @xmath65 is a linear regression of past signed events , with an `` influence kernel '' @xmath89 that depends on both the past event type @xmath90 and the current event @xmath65 .
this model is the hdim2 .
it is clear that tims are actually special cases of hdims , with the identification : @xmath91 i.e. the influence kernel @xmath89 does not depend on the present event type @xmath65 : only the type of the past event @xmath90 matters .
the calibration of this model turns out to be more subtle and is discussed in appendix [ app : hdim_calibration ] ( where some errors and misprints appearing in the text of @xcite are corrected ) .
as above , we may ask when it is justified to consider that the expected sign for an event of type @xmath65 is a linear regression of past signed events .
this requires to generalize the dar model described in section [ sec : dar_model ] above to a multi - event framework
. this will precisely be the aim of part ii of this work , where we introduce mtds as a natural generalisation of dar for order book events .
much as for the simple propagator model , one can test the predictive power of the tim and hdim framework by comparing the conditional response functions for negative lags @xmath92 $ ] , @xmath93 with empirical data , as well as the signature plot @xmath73 of the price process . in the following section we will investigate the results of the estimation of the above models , and compare these predicted quantities with their empirical determination . our conclusion , in a nutshell , is that introducing two types of events substantially increases the performance of the propagator models and that perhaps expectedly the hdim fares better than tim , but only very slightly .
we have analysed the trading activity of the 50 most traded stocks at nyse and nasdaq stock exchanges , during the period february 2013 - april 2013 with a total of 63 trading days .
we have chosen a wide panel of stocks of different types in order to perform a deep analysis of the two markets .
we have considered only the trading activity in the period 9:30 - 15:30 in all the days under analysis , in order to reduce intraday patterns of activity , such as volume traded , average spread , etc .
in particular we try to avoid the trading activity just after the pre - auction and the closing period of the end of the trading day . after trimming the beginning and the end of each trading day , for each stock we concatenate the data on different trading days and carry out our analysis on these time series .
the tick size of all the stocks is 0.01 usd . in table
[ tab : data_stock ] we list the details of the stocks analysed . in particular , we have listed the volatility in basis points , the average daily traded amount in usd , the average bid - ask spread in ticks , and the average tick size - price ratio and we ranked the stocks by these values .
we can divide the sample in two different groups , which are the large and small tick stocks .
the bid - ask spread of a large tick stock is most of the times equal to one tick , whereas small tick stocks have spreads that are typically a few ticks .
we will emphasise in the following sections the very different behaviour of these two groups of stocks .
there exist also a number of stocks in the intermediate region between large and small tick stocks , which have the characteristics of both types . for the period studied , the stock of apple inc .
( aapl ) had on average a bid - ask spread of @xmath94 ticks , clearly making it a small tick stock . on the other hand , microsoft inc .
( msft ) , with an average bid - ask spread of @xmath95 ticks , is a good candidate for a large tick stock . to illustrate our empirical analysis , we chose to show results for these two stocks in the following .
table [ tab : data_stock ] . details of analysed stocks : rank by average traded daily amount ( m$ ) , volatility , rank by average spread over tick size , and by average tick size ( bp ) .
the top panels of fig .
[ fig : estimated_transient_impact_model ] show the estimation of the propagators @xmath7 for msft and aapl . for both large and small tick stocks
the decay of the propagator is slow , well above the noise level after 1000 transactions .
we can see that for msft ( as well as for other large tick stocks ) the propagator function first increases for a few time lags , and starts decreasing only after that .
thus , the derivative @xmath23 is positive for small lags , and since @xmath96 too , the market impact should be reinforced by a sequence of orders on the same side of the order book .
this should lead to violations of the market efficiency on short time scales .
this is a direct symptom of the inadequacy of the one - event propagator formalism for large ticks : in fact , we will see that the order flow can not be considered to be independent of the price changes in this case . after an uptick move
, there is a high probability that the next order will be in the opposite direction , restoring price efficiency .
this will be well captured by the two - event propagator below . for aapl and other small tick stocks
we only see a monotonic decay of the propagator .
the assumption of a rigid order flow , insensitive to price moves , will be approximately correct in that case ( see @xcite ) , the relaxation of the propagator alleviating the correlation of the signs .
we can already anticipate that the two - event propagator framework will be much more beneficial for large tick stocks than for small tick stocks .
[ the scale for values close to zero and bounded by the two vertical lines is linear , whereas outside this region the scale is logarithmic . ]
the bottom panels of fig .
[ fig : estimated_transient_impact_model ] show the price response for both positive and negative lags .
the dashed lines in the plots show the theoretical prediction of the one - event propagator model by using the estimated kernels . in the case of msft the measured response function for negative lags
@xmath44 is well above the prediction of the propagator model ( solid line ) , which , as we discussed , assumes a rigid order flow that does not depend on price changes .
as anticipated above , this means that in the data there exists an additional anti - correlation between past returns and the subsequent order flow , which is not captured by the model . a similar ,
though much weaker deviation can be seen in the case of aapl . in general , this effect is very pronounced in the case of large tick stocks , whereas in the case of small tick stocks it exists but is much weaker .
In fact, in Fig. [fig:anomalous_negative_stocks] we plot the ratio @xmath97 (normalised by the volatility per trade @xmath99) for @xmath98, ranking the stocks on the x-axis by their average spread. We observe that for small tick stocks (left part of the plot) the difference is relatively small, while for large tick stocks (right part of the plot) the prediction error on the negative-lag response of the TIM1 becomes quite large, especially for large lags @xmath44.
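A hedged sketch of the diagnostic behind this figure: for each stock, the difference between the measured negative-lag response and the TIM1 prediction, normalised by the volatility per trade, with stocks ranked by their average spread. The container layout and function names are illustrative, not taken from the text.

```python
import numpy as np

def negative_lag_error(r_emp, r_tim1, sigma_per_trade, lags=(1, 10, 100)):
    """Ratio (R_emp(-l) - R_TIM1(-l)) / sigma for a single stock.
    r_emp and r_tim1 map signed lags to response values; sigma_per_trade is
    the volatility per trade used for the normalisation."""
    return {l: (r_emp[-l] - r_tim1[-l]) / sigma_per_trade for l in lags}

def rank_by_spread(per_stock):
    """per_stock: dict name -> (average spread in ticks, error dict).
    Prints stocks from small tick (wide spread) to large tick (spread close
    to one tick), mimicking the left-to-right ordering described in the text."""
    for name, (spread, err) in sorted(per_stock.items(), key=lambda kv: -kv[1][0]):
        print(f"{name:6s} spread={spread:6.2f} ticks  errors={err}")
```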
Turning now to the signature plot @xmath73, we see in Fig. [fig:estimated_transient_impact_model_signature_plot] that small tick and large tick stocks behave very differently. For small tick stocks, @xmath73 increases with @xmath44 as soon as @xmath100, corresponding to a ``trend-like'' behaviour. The decreasing behaviour of @xmath73 for smaller lags corresponds to high-frequency activity within the spread, leading to a minimum in @xmath73. For large tick stocks this minimum is absent and one finds ``mean-reverting'' behaviour, with a steadily decreasing signature plot. The prediction of the one-event propagator model fares quite well at accounting for the trending behaviour of small tick stocks, provided the two extra fitting parameters @xmath28 and @xmath29 are optimised with OLS so as to minimise the distance between the empirical and the theoretical curves of the model. We note for example that choosing @xmath101 would underestimate (in the case of AAPL) the long-term volatility by a factor of two. For large tick stocks, however, the mean-reverting behaviour is completely missed.
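For concreteness, the signature plot can be estimated as in the following sketch, assuming the standard trade-time definition D(l) = E[(m_{t+l} - m_t)^2] / l; the extra fitting parameters @xmath28 and @xmath29 mentioned above are not involved here.

```python
import numpy as np

def signature_plot(midprice, lags):
    """Empirical signature plot D(l) = E[(m_{t+l} - m_t)^2] / l in trade time.
    A decreasing D(l) signals mean reversion, an increasing D(l) trending behaviour."""
    m = np.asarray(midprice, dtype=float)
    T = len(m)
    return {l: np.mean((m[l:] - m[:T - l]) ** 2) / l for l in lags if 0 < l < T}
```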
We now turn to propagator models that distinguish between price-changing and non price-changing market orders, and see how the situation for large tick stocks indeed greatly improves.
[Figure (fig:estimated_transient_impact_model_signature_plot): theoretical curves of the estimated TIM1 for MSFT (@xmath102 and @xmath103) and AAPL (@xmath104 and @xmath105).]
The aim of this section is to show that an extended propagator model allows us to reproduce satisfactorily the additional anti-correlations between past returns and subsequent order signs (revealed by the discrepancy between @xmath106 and @xmath107) by including an implicit coupling between past returns and order flow. We will also require that the signature plot @xmath73 is correctly accounted for, in particular for large tick stocks.
[Figure (fig:estimated_cond_corr): correlations of signed events measured on MSFT and AAPL data; the first subscript corresponds to the event that happened first chronologically. The scale for values of the correlations close to zero, bounded by the solid lines, is linear, whereas outside this region the scale is logarithmic.]
The extended version of the propagator model with two events @xmath71 can follow two routes, as discussed above. One is the TIM2, which can be estimated much like the one-event model, by solving the linear system of Eq. [eqn:extended_transient_estimation]. The second is the HDIM2, whose estimation involves determining the influence kernels @xmath108 for @xmath109, because @xmath110 by construction. The calibration requires estimating three-point correlation functions, or approximating them in terms of two-point correlations as detailed in Section [sec:hdim_estimation]; we will follow the latter approximation. Thus, the correlation @xmath111 of the different signed events, defined earlier, is an important input of the calibration for both generalised linear models. Note that the first subscript corresponds to the event that happened first chronologically.
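As an illustration of this input, the signed-event correlations can be estimated empirically as in the sketch below, assuming the normalisation by the unconditional event probabilities that appears later in the text; the 'c'/'nc' labels and variable names are illustrative.

```python
import numpy as np

def signed_event_correlation(sign, event, lags, types=("c", "nc")):
    """C_{pi1,pi2}(l) ~ E[ I(pi_t = pi1) eps_t * I(pi_{t+l} = pi2) eps_{t+l} ]
                        / ( P(pi1) P(pi2) )
    with the first index referring to the earlier event.  `sign` holds +/-1
    trade signs, `event` the corresponding 'c'/'nc' labels."""
    eps = np.asarray(sign, dtype=float)
    ev = np.asarray(event)
    T = len(eps)
    p = {pi: np.mean(ev == pi) for pi in types}   # unconditional event probabilities
    C = {}
    for pi1 in types:
        for pi2 in types:
            C[(pi1, pi2)] = {}
            for l in lags:
                a = eps[:T - l] * (ev[:T - l] == pi1)   # signed indicator at time t
                b = eps[l:] * (ev[l:] == pi2)           # signed indicator at time t+l
                C[(pi1, pi2)][l] = np.mean(a * b) / (p[pi1] * p[pi2])
    return C
```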
We start by showing its empirical estimation for the two typical stocks (see Fig. [fig:estimated_cond_corr]). For AAPL, all auto-correlation and cross-correlation functions have almost the same power-law decay and they are all positive. This is expected, since C and NC events are not radically different for small tick stocks. Note that the unconditional probability of price-changing market orders is @xmath112.
Correlation functions look similar for other small tick stocks too. For MSFT the curves reveal a different behaviour. For example, the @xmath113 auto-correlation has the familiar power-law shape, possibly due to order splitting. The @xmath114 correlation is also positive but decays faster. Note that it starts at @xmath115, which means that a C order immediately following an NC order is in the same direction with very high probability. This describes NC orders that leave a relatively small quantity at the best offer, which is then immediately ``eaten'' by the next market orders. Its relatively fast decay suggests that agents splitting their metaorders avoid being aggressive and almost exclusively send NC orders. The other two correlations, @xmath116 and @xmath117, both start negative and capture the effect we are interested in: after a price-changing event, it is highly likely that the subsequent order flow (either C or NC) will be in the other direction. Note however that @xmath118, and that it is exceedingly rare to observe a succession of two C events separated by a small lag. This type of behaviour is what one sees in general for large tick stocks.
The estimation procedure involves the empirical determination of the response function for positive lags, and allows us to calculate the theoretical prediction of the response function for negative lags, as well as the signature plot. Fig. [fig:estimated_response_function] shows the empirical response function for positive lags @xmath119 and negative lags @xmath106, together with the predicted response function @xmath120, according to the calibrated TIM2. In the case of large tick stocks the empirical curves are reproduced almost perfectly, whereas for small tick stocks some small deviation still persists. The improvement with respect to the TIM1 is quite remarkable. This can be seen from the comparison with the prediction of the response function for negative lags of the TIM1, @xmath121, also plotted in Fig. [fig:estimated_response_function].
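The empirical curves entering this comparison can be obtained along the lines of the following sketch, which conditions the response on the event type at time t; the definition used here is an assumed, deliberately simple form.

```python
import numpy as np

def conditional_response(midprice, sign, event, lags, types=("c", "nc")):
    """R_pi(l) = E[ eps_t * (m_{t+l} - m_t) | pi_t = pi ], for positive and
    negative lags; a sketch of the empirical quantities compared with the
    TIM2 prediction."""
    m = np.asarray(midprice, dtype=float)
    eps = np.asarray(sign, dtype=float)
    ev = np.asarray(event)
    T = len(m)
    R = {pi: {} for pi in types}
    for pi in types:
        for l in lags:
            k = abs(l)
            if l >= 0:
                mask = ev[:T - k] == pi
                x = eps[:T - k][mask] * (m[k:][mask] - m[:T - k][mask])
            else:
                mask = ev[k:] == pi
                x = eps[k:][mask] * (m[:T - k][mask] - m[k:][mask])
            R[pi][l] = x.mean() if x.size else np.nan
    return R
```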
[Figure: calibrated TIM2 for MSFT and AAPL. The scale for @xmath44 close to zero, bounded by the solid lines, is linear, whereas outside this region the scale is logarithmic. (Bottom panels) Signature plots, empirical and predicted by the calibrated TIM2; left: MSFT with @xmath122 and @xmath123, right: AAPL with @xmath124 and @xmath125.]
Let us now discuss the observed response functions for positive lags, and the resulting calibrated propagators for small tick stocks, as exemplified by AAPL, shown in Figs. [fig:estimated_resm_rc] and [fig:estimated_g_diff_rc] (right panels). The conditional response function @xmath126 after an event of type @xmath127 is a rigid shift of the @xmath128 curve. The reaction of market agents to the two types of events is therefore very similar. The shift is indeed due to the very definition of event types, which leads to a non-zero value of @xmath129, comparable to the average spread. Turning now to the conditional response function for negative lags, we observe a small deviation between the model and the empirical data: there exists an additional anti-correlation between past returns and future order signs which is not captured by the model. The curves @xmath130 and @xmath131 behave in a similar way, but in the latter case the anti-correlation is stronger than in the former.
The propagator functions @xmath132 can be fit by a power law, but the @xmath133 curves are non-monotonic (Fig. [fig:estimated_g_diff_rc]). Note that, as a result of the non-trivial structure of the correlation, the calibration of the TIM2 leads to @xmath134. This is inconsistent with the interpretation of the model, which would require @xmath135, and shows the theoretical limitations of the TIM framework. In the case of the HDIM framework, by construction, we have that @xmath136.
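Where the propagators are fitted by a power law, a minimal least-squares helper on log-log scale could look as follows; nothing in it is specific to the paper's estimation procedure.

```python
import numpy as np

def fit_power_law(lags, g_values):
    """Least-squares fit of G(l) ~ A * l**(-beta) on log-log scale.
    Returns (A, beta); assumes strictly positive lags and values."""
    x = np.log(np.asarray(lags, dtype=float))
    y = np.log(np.asarray(g_values, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), -slope
```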
The results of the estimation of the model for large tick stocks are completely different. Figs. [fig:estimated_resm_rc] and [fig:estimated_g_diff_rc] (left panels) show the results for MSFT. The @xmath128 curve is a positive and increasing function which starts, as expected, from zero and reaches a plateau for large lags. The @xmath126 curve starts from the value of the spread in basis points and slightly decreases, which means that the reaction of the market after price-changing events consists in a mean reversion of the price. For negative lags, the curve @xmath130 shows that if an event occurs that does not change the price, then for small lags the past returns are on average anti-correlated with the present order sign. The case of @xmath131 is quite interesting, because it shows that if a price-changing event occurs, then the past returns are on average anti-correlated with the present order sign. The propagator functions @xmath137 are almost constant, with different values: @xmath138 is equal to the spread, whereas @xmath133 is equal to zero. The fact that the two propagators are constant means that the price process in Eq. [eqn:two_prom_price_process] is simply a sum of non-zero price changes, all equal to the spread, and for which the impact is permanent.
Therefore, as noted in @xcite, the dynamics of the price is completely determined by the sequence of random variables @xmath139 and the temporal structure of their correlations. More precisely, if spread fluctuations can be neglected, the TIM2 leads to the following simple predictions: @xmath140 and
\begin{aligned}
\mathcal{R}^{\text{TIM2}}_{\pi}(\ell<0) &\approx -\sum_{0 < n \leq |\ell|} \sum_{\pi_1} \mathbb{P}(\pi_1)\, G_{\pi_1}(1)\, C_{\pi_1,\pi}(n) \nonumber \\
&= -\,G_{\mathrm{c}}(1) \sum_{0 < n \leq |\ell|} \mathbb{P}(\mathrm{c})\, C_{\mathrm{c},\pi}(n),
\label{eq:rmean}
\end{aligned}
and @xmath141. Note that both the empirical response for negative lags and the signature plot are now perfectly reproduced. The improvement over the TIM1 is quite remarkable.
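A small sketch of the negative-lag prediction of Eq. (rmean), neglecting spread fluctuations as in the text; it assumes that the signed-event correlations C_{c,pi}(n), the kernel value G_c(1) and the probability P(c) have already been estimated, for instance as in the earlier sketches.

```python
def tim2_negative_lag_response(lag, g_c1, p_c, C_c_pi):
    """Prediction of Eq. (rmean) for lag < 0:
    R_pi(lag) ~ -G_c(1) * sum_{0 < n <= |lag|} P(c) * C_{c,pi}(n).
    C_c_pi maps positive n to the signed-event correlation C_{c,pi}(n)."""
    return -g_c1 * p_c * sum(C_c_pi[n] for n in range(1, abs(lag) + 1))
```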
The calibration of the HDIM2 model requires the determination of the influence matrix @xmath142, which can be done from the empirical knowledge of the response matrices @xmath143, since @xmath144, where @xmath145 denotes the two-point correlation of signed events (normalised by $\mathbb{P}(\pi_1)\mathbb{P}(\pi_2)$) and
\begin{aligned}
C_{\pi,\pi_1,\pi_2}(k,\ell) &= \frac{\mathbb{E}\left[ I(\pi_{t-k}=\pi)\,\epsilon_{t-k} \cdot I(\pi_{t-\ell}=\pi_1)\,\epsilon_{t-\ell} \cdot I(\pi_t=\pi_2) \right]}{\mathbb{P}(\pi)\,\mathbb{P}(\pi_1)\,\mathbb{P}(\pi_2)}.
\end{aligned}
In practice, this equation is not convenient for estimating the model, because it requires the empirical determination of the three-point correlation functions @xmath146. Therefore, in @xcite the authors employed a Gaussian assumption, which leads to the factorization of the three-point correlation functions in terms of two-point correlation functions: @xmath147. The resulting formula for the signature plot @xmath148 is considerably more complicated; we report it for completeness in Appendix [app:hdim_calibration]. On purely theoretical grounds, HDIMs
are better founded than TIMs, and we have extended the above analysis to HDIMs as well. In the case of large tick stocks there is no gain over the TIM framework, since the influence kernels are found to be extremely small. Any gain is therefore only possible for small tick stocks.
We show the empirical determination of the two influence kernels @xmath149, as well as the resulting predicted response @xmath150, for AAPL in Fig. [fig:estimated_g_diff_hdim]. As can be seen, the estimated kernels differ depending on whether the sequence of events preceding the price-changing trade is composed of price-changing or non price-changing orders. We can argue that Eq. [eqn:hdim_tim_model], which neglects the role of the realised event, is too restrictive. It is worth commenting that, when statistically different from zero, the influence kernel @xmath151 is negative. Then, a sequence of price-changing orders on the same side as the final C trade is going to impact the market less than a C order preceded by a sequence of price-changing events of the opposite sign. Thus we see the same asymmetric liquidity mechanism described in @xcite. As a sole difference with the picture described in Section [sec:propagator], the influence kernel @xmath152 is positive for the very last NC event occurring before a price-changing event. This implies that the impact of the C market order is larger if it follows a sequence of NC trades whose last event occurs on the same side as the C event.
We see some further improvement over the TIM2 for the conditional response functions at negative lags. It seems that the HDIM2 performs slightly better than the TIM2 in capturing the excess anti-correlation, measured from the data, between past returns and future order signs. We also observe an improvement, albeit a marginal one, for the signature plot in Fig. [fig:estimated_g_diff_hdim]. We recall here that in the 6-event extension of the propagator model considered in @xcite, HDIMs appeared to fare slightly worse than TIMs for small tick stocks, for a reason that is still not well understood and that would deserve further scrutiny.
[Figure (fig:estimated_g_diff_hdim): (top panels) influence kernels of the HDIM2 calibrated on AAPL; (bottom panels) conditional response functions (blue markers), theoretical predictions of the HDIM2 (blue solid line) and of the TIM2 (red dashed lines) calibrated on AAPL data. The scale for @xmath44 close to zero, bounded by the vertical solid lines, is linear, whereas outside this region the scale is logarithmic. A further panel compares the empirical curves with the predictions obtained with @xmath153 and with those of the TIM2.]
The above study attempts to build the most accurate linear model of price dynamics based only on the observation of market orders. We have seen that treating all market orders on the same footing, as in the first version of the propagator model, leads to systematic discrepancies that increase with the tick size. For large tick sizes, the predictions of this simple framework are qualitatively erroneous, both for the price response at negative lags and for the diffusion properties of the price. This can be traced to the inability of the model to describe the feedback of price changes on the order flow, which is strong for large tick stocks. Generalising the model to two types of market orders, those which leave the price unchanged and those which lead to an immediate price change, considerably improves the predictive power of the model, in particular for large ticks, for which the above inadequacy almost entirely disappears, leading to a remarkable agreement between the model's predictions and empirical data. We have also seen that, although better justified theoretically, the ``history dependent'' impact models (HDIM) fare only slightly better than the ``transient'' impact models (TIM) when only two event types are considered.
Still, we are left with two important questions about the order flow itself, which we considered ``rigid'' in the above formalism, in the sense that it is entirely described by its correlation structure and does not explicitly react to past events (at variance with the price itself). It would be desirable to develop a more dynamic description of the order flow, for at least two reasons. One is that linear models are best justified in a context where the best predictor of the order flow is itself linear, as is the case for DAR processes for the sign of market orders. We therefore need to generalise DAR processes to a multi-event context, and see how well the corresponding so-called MTD models account for the statistics of the order flow, i.e. the string of @xmath154, @xmath155, @xmath156, @xmath157 events.
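For reference, a DAR(p) process for market-order signs (Jacobs and Lewis) can be simulated along the following lines; the parameter values are purely illustrative, and the multi-event (MTD) generalisation discussed here is not implemented.

```python
import numpy as np

def simulate_dar(T, rho, phi, p_plus=0.5, seed=0):
    """Simulate a DAR(p) sequence of +/-1 order signs:
    eps_t = V_t * eps_{t-A_t} + (1 - V_t) * Z_t,
    with V_t ~ Bernoulli(rho), A_t drawn from lags 1..p with weights phi,
    and Z_t an i.i.d. +/-1 innovation.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    phi = phi / phi.sum()
    order = len(phi)
    eps = np.empty(T, dtype=int)
    eps[:order] = rng.choice([1, -1], size=order)     # arbitrary initial signs
    for t in range(order, T):
        if rng.random() < rho:                        # copy a past sign
            a = 1 + rng.choice(order, p=phi)
            eps[t] = eps[t - a]
        else:                                         # fresh innovation
            eps[t] = 1 if rng.random() < p_plus else -1
    return eps

signs = simulate_dar(10_000, rho=0.8, phi=np.ones(50), seed=42)
print(np.mean(signs[1:] == signs[:-1]))               # persistence of the sign series
```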
The second reason is that the ``true'' impact of an additional market order, not present in the past time series, should include the mechanical contributions captured by the TIMs or HDIMs, but also the possible change of the order flow itself due to an extra order in the market, an effect clearly not captured by our assumption of a rigid order flow. We thus need to define and calibrate the equivalent of the influence kernels defined above, but for the order flow itself. This is what we do in the companion paper that follows.
We thank I. Mastromatteo, J. Donier, J. Kockelkoren and especially Z. Eisler for many inspiring discussions on these topics.
The exact expression of the diffusive curve @xmath73, given in @xcite, is:
\begin{aligned}
@xmath158^2 \\
&+ 2\sum_{0 \leq n < m < \ell} \sum_{\pi_1,\pi_2} \mathbb{P}(\pi_{1,2})\, G_{\pi_1}(\ell-n)\, G_{\pi_2}(\ell-m)\, C_{\pi_1,\pi_2}(m-n) \\
&+ 2\sum_{0 \leq n < m} \sum_{\pi_1,\pi_2} \mathbb{P}(\pi_{1,2}) \left[G_{\pi_1}(\ell+n)-G_{\pi_1}(n)\right]\left[G_{\pi_2}(\ell+m)-G_{\pi_2}(m)\right] C_{\pi_2,\pi_1}(m-n) \\
&+ \sum_{0 \leq n < \ell} \sum_{m>0} \sum_{\pi_1,\pi_2} \mathbb{P}(\pi_{1,2})\, G_{\pi_1}(\ell-n)\left[G_{\pi_2}(\ell+m)-G_{\pi_2}(m)\right] C_{\pi_2,\pi_1}(m+n),
\end{aligned}
where @xmath159.
Knowing the @xmath142's and using the factorization of three-point and four-point correlations in terms of two-point correlations, one can finally estimate the diffusion curve, which is given by the following approximate equation:
\begin{aligned}
@xmath160 \\
&+ 2\sum_{0<n<m}\sum_{\pi_1,\pi_2,\pi_3}\mathbb{P}(\pi_{1,2,3})\,\kappa_{\pi_1,\pi_3}(m)\,\kappa_{\pi_2,\pi_3}(n)\, C_{\pi_1,\pi_2}(m-n) \\
&+ 2\sum_{n>0}\sum_{\pi_1,\pi_2} G_{\pi_2}(1)\,\kappa_{\pi_1,\pi_2}(n)\, C_{\pi_1,\pi_2}(n) \bigg]\,\ell \\
&+ 2\sum_{0<n<\ell}\sum_{\pi_1,\pi_2}(\ell-n)\,\mathbb{P}(\pi_{1,2})\, G_{\pi_1}(1)\, G_{\pi_2}(1)\, C_{\pi_1,\pi_2}(n) \\
&+ 2\sum_{0<n<\ell}\sum_{i>0}\sum_{\pi_1,\pi_2,\pi_3}(\ell-n)\,\mathbb{P}(\pi_{1,2,3})\, G_{\pi_1}(1)\,\kappa_{\pi_2,\pi_3}(i)\, C_{\pi_1,\pi_2}(n+i) \\
&+ 2\sum_{0<n<\ell}\sum_{\substack{i>0 \\ i \neq n}}\sum_{\pi_1,\pi_2,\pi_3}(\ell-n)\,\mathbb{P}(\pi_{1,2,3})\, G_{\pi_1}(1)\,\kappa_{\pi_2,\pi_3}(i)\, C_{\pi_1,\pi_2}(n-i) \\
&+ 2\sum_{0<n<\ell}\sum_{\pi_1,\pi_2}(\ell-n)\,\mathbb{P}(\pi_{1,2})\, G_{\pi_1}(1)\,\kappa_{\pi_1,\pi_2}(n)\left[\Pi_{\pi_1,\pi_2}(n)+1\right] \\
&+ 2\sum_{0<n<\ell}\sum_{\substack{i,j>0 \\ j \neq n}}\sum_{\substack{\pi_1,\pi_2 \\ \pi_3,\pi_4}}(\ell-n)\,\mathbb{P}(\pi_{1,2,3,4})\,\kappa_{\pi_1,\pi_2}(i)\,\kappa_{\pi_3,\pi_4}(j)\, C_{\pi_1,\pi_3}(n+i-j)\left[\Pi_{\pi_2,\pi_4}(n)+1\right] \\
&+ 2\sum_{0<n<\ell}\sum_{i>0}\sum_{\pi_1,\pi_2,\pi_3}(\ell-n)\,\mathbb{P}(\pi_{1,2,3})\,\kappa_{\pi_1,\pi_2}(i)\,\kappa_{\pi_2,\pi_3}(n)\, C_{\pi_1,\pi_2}(i),
\end{aligned}
where @xmath159 and @xmath161 is normalised by $\mathbb{P}(\pi_1)\mathbb{P}(\pi_2)$ and shifted by $-1$.

Hasbrouck, J. (1988). Trades, quotes, inventory and information.
Journal of Financial Economics, 22, 229-252.
Hasbrouck, J. (1991). Measuring the information content of stock trades. Journal of Finance, 46, 179-207.
Jones, C. M., Kaul, G., and Lipson, M. L. (1994). Transactions, volume, and volatility. Review of Financial Studies, 7, 631-651.
Biais, B., Hillion, P., and Spatt, C. (1995). An empirical analysis of the limit order book and order flow in the Paris Bourse. Journal of Finance, 50, 1655-1689.
Dufour, A., and Engle, R. F. (2000). Time and the price impact of a trade. Journal of Finance, 55(6), 2467-2498.
Cont, R., Kukanov, A., and Stoikov, S. (2014). The price impact of order book events. Journal of Financial Econometrics, 12(1), 47-88.
Bacry, E., and Muzy, J. F. (2014). Hawkes model for price and trades high-frequency dynamics. Quantitative Finance, 14(7), 1147-1166.
Bouchaud, J.-P., Farmer, J. D., and Lillo, F. (2009). How markets slowly digest changes in supply and demand. In: Handbook of Financial Markets: Dynamics and Evolution (North-Holland: Amsterdam).
Bouchaud, J.-P., Gefen, Y., Potters, M., and Wyart, M. (2004). Fluctuations and response in financial markets: the subtle nature of ``random'' price changes. Quantitative Finance, 4(2), 176-190.
Lillo, F., and Farmer, J. D. (2004). The long memory of the efficient market. Studies in Nonlinear Dynamics & Econometrics, 8(3).
Lillo, F., Mike, S., and Farmer, J. D. (2005). Theory for long memory in supply and demand. Physical Review E, 71(6), 066122.
Tóth, B., Palit, I., Lillo, F., and Farmer, J. D. (2015). Why is equity order flow so persistent? Journal of Economic Dynamics and Control, 51, 218-239.
Bouchaud, J.-P., Kockelkoren, J., and Potters, M. (2006). Random walks, liquidity molasses and critical response in financial markets. Quantitative Finance, 6(02), 115-123.
Eisler, Z., Bouchaud, J.-P., and Kockelkoren, J. (2012). The price impact of order book events: market orders, limit orders and cancellations. Quantitative Finance, 12(9), 1395-1419.
Tóth, B., Lempérière, Y., Deremble, C., de Lataillade, J., Kockelkoren, J., and Bouchaud, J.-P. (2011). Anomalous price impact and the critical nature of liquidity in financial markets. Physical Review X, 1(2), 021006.
Mastromatteo, I., Tóth, B., and Bouchaud, J.-P. (2014). Agent-based models for latent liquidity and concave price impact. Physical Review E, 89(4), 042805.
Donier, J., Bonart, J., Mastromatteo, I., and Bouchaud, J.-P. (2015). A fully consistent, minimal model for non-linear market impact. Quantitative Finance, 15(7), 1109-1121.
Eisler, Z., Bouchaud, J.-P., and Kockelkoren, J. (2012). Models for the impact of all order book events. In Market Microstructure: Confronting Many Viewpoints (eds F. Abergel, J.-P. Bouchaud, T. Foucault, C.-A. Lehalle, and M. Rosenbaum), John Wiley & Sons Ltd, Oxford, UK.
Raftery, A. E. (1985). A model for high-order Markov chains. Journal of the Royal Statistical Society, Series B (Methodological), 528-539.
Berchtold, A. (1995). Autoregressive modeling of Markov chains. Statistical Modelling: Proceedings of the 10th International Workshop on Statistical Modelling, 19-26. Springer-Verlag.
Tóth, B., Eisler, Z., Lillo, F., Kockelkoren, J., Bouchaud, J.-P., and Farmer, J. D. (2012). How does the market react to your order flow? Quantitative Finance, 12(7), 1015-1024.
Taranto, D. E., Bormetti, G., and Lillo, F. (2014). The adaptive nature of liquidity taking in limit order books. Journal of Statistical Mechanics: Theory and Experiment, 2014(6), P06002.
Jacobs, P. A., and Lewis, P. A. (1978). Discrete time series generated by mixtures. I: Correlational and runs properties. Journal of the Royal Statistical Society, Series B (Methodological), 94-105.
Madhavan, A., Richardson, M., and Roomans, M. (1997). Why do security prices change? A transaction-level analysis of NYSE stocks. The Review of Financial Studies, 10(4), 1035-1064.

Market impact is a key concept in the study of financial markets, and several models have been proposed in the literature so far. The transient impact model (TIM) posits that the price at high-frequency time scales is a linear combination of the signs of the past executed market orders, weighted by a so-called propagator function. An alternative description, the history dependent impact model (HDIM), assumes that the deviation between the realised order sign and its expected level impacts the price linearly and permanently. The two models, however, should be extended, since prices are a priori influenced not only by the past order flow, but also by the past realisation of the returns themselves. In this paper, we propose a two-event framework, where price-changing and non price-changing events are considered separately. Two-event propagator models provide a remarkable improvement of the description of the market impact, especially for large tick stocks, where price-changing events are rare and very informative. Specifically, the extended approach captures the excess anti-correlation between past returns and subsequent order flow which is missing in one-event models. Our results document the superior performance of the HDIMs, even though only marginally so relative to the TIMs. This is somewhat surprising, because HDIMs are well grounded theoretically, while TIMs are, strictly speaking, inconsistent.
Authorities have arrested two eastern Ohio girls suspected of making social media threats against a West Virginia girl who accused two high school football players of raping her in a case that drew widespread attention.
Ohio Attorney General Mike DeWine said the girls arrested Monday posted threatening Facebook and Twitter comments on Sunday, the day the players were convicted in Steubenville (STOO'-behn-vihl). DeWine says the girls are being held in juvenile detention on allegations of aggravated menacing after an investigation by state and local authorities.
DeWine says he hopes the arrests end harassment of the alleged victim.
A judge sentenced the players to at least a year in juvenile prison. A grand jury will look into whether others broke the law by not speaking up after the attack last summer.
THIS IS A BREAKING NEWS UPDATE. Check back soon for further information. AP's earlier story is below.
The head football coach at Steubenville High School and the owners of a house where an infamous 12-minute video was filmed could be investigated as Ohio prosecutors look into how adults responded to allegations of rape last year.
One day after a judge convicted two high school football players of raping the 16-year-old girl in August, Steubenville's top official said she welcomed a new, wide-ranging probe into possible wrongdoing connected with the rape.
The announcement of the guilty verdict was barely an hour old Sunday when state Attorney General Mike DeWine said he was continuing his investigation and would consider charges against anyone who failed to speak up after the summertime attack. That group could include other teens, parents, school officials and coaches for the high school's beloved football team, which has won nine state championships.
According to trial testimony, one of the two football players said the coach knew about what happened and "took care of it."
The video, passed around widely online, depicted a student joking about the attack. "She is so raped right now," the boy says.
Investigators interviewed the owners of a Steubenville house where the video was filmed, which was also the same place a photograph was taken of the girl being carried by her ankles and wrists, DeWine's office confirmed Monday. That picture, Exhibit No. 1 at the trial, generated international outrage. There is no phone listing for the home.
Numerous students, including defendant Trenton Mays, referred to the girl as "dead" in text messages the night of the attacks, apparently in reference to her unconscious state. The girl, who acknowledged drinking, testified she had no memory of the assaults.
A grand jury will meet in mid-April to consider evidence gathered by investigators from dozens of interviews, including with the football program's 27 coaches, which include junior high, freshman and volunteer coaches.
Text messages introduced at trial suggested the head coach was aware of the rape allegation early on. Reno Saccoccia "took care of it," Mays said in one text introduced by prosecutors.
DeWine said coaches are among officials required by state law to report suspected child abuse. Saccoccia has not commented.
The case brought international attention to the small city of 18,000 and led to allegations of a cover-up to protect the Steubenville High School football team.
Steubenville city manager Cathy Davison said residents want to see justice done, and the city will be better off going forward because of the wider investigation.
"Football is important in Steubenville, but I think overall if you looked at the community in and of itself, it's the education process, the moral fiber of our community, and the heritage of our community, that is even more important," Davison told The Associated Press.
Steubenville schools Superintendent Mike McVey released a statement Monday reiterating his position that the district was waiting until the trial ended to take action. He declined to address the grand jury investigation.
"What we've heard so far is deeply disturbing," McVey's statement said. "At this time, we believe it is important to allow the legal process to play out in court before we as a school district make any decisions or take action against any of the individuals involved with this case."
It's unclear what could happen to the school's sports programs if coaches were charged. Sanctions against teams or programs typically involve violations of rules related to playing, such as improper recruiting of student-athletes or playing ineligible athletes, said Tim Stried, spokesman for the Ohio High School Athletic Association.
"The incident that happened was not during a contest, was not even at school. No playing rules were violated, and it didn't have anything to do with eligibility or recruiting," Stried said.
Mays and Ma'Lik Richmond were charged with penetrating the West Virginia girl with their fingers, first in the back seat of a moving car after a mostly underage drinking party on Aug. 11, and then in the basement of a house.
Mays, 17, and Richmond, 16, were sentenced to at least a year in juvenile prison for the rapes. Mays was ordered to serve an additional year for photographing the underage girl naked.
They can be held until they turn 21.
Special Judge Thomas Lipps recommended the boys be assigned to Lighthouse Youth Center-Paint Creek in Chillicothe. The Ohio Department of Youth Services contracts with the secure, residential center. Lipps said it had a strong program for treating juvenile sex offenders.
___
Andrew Welsh-Huggins can be reached on Twitter at https://twitter.com/awhcolumbus

Police have arrested two 16-year-old girls for allegedly posting social media death threats against the victim of the Steubenville rape case, WTOC reports. The threats were made on Facebook and Twitter, the AP reports; CBS News quotes one of them as reading, "you ripped my family apart, you made my cousin cry, so when I see you xxxxx, it's gone be a homicide." Police say they've linked one of the girls to one of the convicted rapists, Ma'Lik Richmond. The girls are being held in a juvenile facility and face charges of aggravated menacing. (A grand jury probe may lead to other arrests in the rape case.)
Story highlights
President Barack Obama says Russia's actions will have costs, consequences
Russia's ambassador to the United States blames Kiev for the current escalation
Ukraine says it will reinstate compulsory military service in the fall
Donetsk rebel leader: Up to 4,000 Russians are fighting; some are active servicemen
A top Ukrainian army officer said a "full-scale invasion" of his country was under way Thursday, as a U.S. official said up to 1,000 Russian troops had crossed Ukraine's southern border to fight alongside pro-Russian rebels.
U.S. officials said Russian troops were directly involved in the latest fighting, despite Moscow's denials.
Rebels backed by Russian tanks and armored personnel carriers fought Ukrainian forces on two fronts Thursday: southeast of rebel-held Donetsk, and along the nation's southern coast in the town of Novoazovsk, about 12 miles (20 kilometers) from the Russian border, said Mykhailo Lysenko, the deputy commander of the Ukrainian Donbas battalion.
[Photo gallery: NATO satellite images presented on Thursday, August 28, by Dutch Brig. Gen. Nico Tak, which NATO says show Russian combat forces engaged in military operations in or near Ukrainian territory, including self-propelled artillery units in firing positions near Krasnodon in eastern Ukraine, a military deployment site near Rostov-on-Don, and guns near Kuybyshevo, Russia, pointed toward Ukrainian territory.]
"This is a full-scale invasion," Lysenko said, referring to the fighting in the south.
Intelligence now indicates that up to 1,000 Russian troops have moved into southern Ukraine with heavy weapons and are fighting there, a U.S. official told CNN on Thursday.
[Photo gallery: scenes from the crisis in eastern Ukraine.]
NATO provided what it said is evidence: satellite images showing Russian troops engaged in military operations inside Ukraine.
"The images, captured in late August, depict Russian self-propelled artillery units moving in a convoy through the Ukrainian countryside and then preparing for action by establishing firing positions in the area of Krasnodon, Ukraine," NATO said in a news release.
Commercial satellite imagery shows the same, according to a British security source with detailed knowledge of UK intelligence estimates. One image that British intelligence has analyzed, dated Tuesday, shows 15 heavy trucks, at least seven armored vehicles and at least nine artillery positions.
Russia's military actions in eastern Ukraine "must cease immediately," British Prime Minister David Cameron said Thursday.
"I'm extremely concerned by mounting evidence that Russian troops have made large-scale incursions into South Eastern Ukraine, completely disregarding the sovereignty of a neighbor," Cameron said. "The international community has already warned Russia that such provocative actions would be completely unacceptable and illegal."
As the Russian presence grows, so does its influence over the separatist leadership in Ukraine, the British security source told CNN.
According to the source, the UK has determined that Russian artillery and rockets -- across the border and from within Ukraine -- have been fired against the Ukrainian military.
Two SA-22A gun/missile air defense systems were observed in separatist-controlled parts of Luhansk province on August 2, the source said. This system is not in Ukraine's inventory but is used by the Russian military.
Ukraine's National Defense and Security Council said that Russian forces were in full control of Novoazovsk as of Wednesday afternoon.
Russia's military fired Grad rockets into the town and its suburbs before sending in two convoys of tanks and armored personnel carriers from Russia's Rostov region, it said in a statement.
"Ukrainian troops were ordered to pull out to save their lives. By late afternoon both Russian convoys had entered the town. Ukraine is now fortifying nearby Mariupol to the west," the NDSC said.
A number of villages in the Novoazovsk, Starobeshiv and Amvrosiiv districts were also seized, it said.
The NDSC also warned that a rebel counterattack is expected in the area where Malaysia Airlines Flight 17 was shot down in July. Ukrainian and Western officials say they believe it was downed by rebels armed with Russian-made weapons.
Novoazovsk is strategically important because it lies on the main road leading from the Russian border to Ukraine's Crimea region, which Russia annexed in March. Separatist leaders in the Donetsk and Luhansk regions then declared independence from Kiev.
U.N. Security Council meets
As international concern mounted over the apparent escalation in fighting, the U.N. Security Council held an emergency meeting on Ukraine.
Samantha Power, the U.S. ambassador to the United Nations, accused Russia of lying.
"It has manipulated. It has obfuscated. It has outright lied. So we have learned to measure Russia by its actions and not by its words," Power said, calling for "serious negotiations."
"In the face of this threat, the cost of inaction is unacceptable," she warned.
According to Vitaly Churkin, Russia's ambassador to the United Nations, more than 2,000 people have been killed in the conflict and more than 800,000 have been displaced.
He blamed the current escalation on the "reckless policy" of Kiev.
"The Kiev authorities have torpedoed all political agreements on resolving the crisis," Churkin told the Security Council meeting. "The only thing we're seeing is a fight against dissent."
Ukraine's deputy ambassador to the United Nations, meanwhile, put his colleagues on alert.
"The world is challenged by a military-nuclear might, ignoring universal principles and craving absolute power," Oleksandr Pavlichenko said about Russia. "How many more red lines have to be crossed before this challenge is addressed?" he asked.
The latest flare-up comes despite a meeting between Ukrainian President Petro Poroshenko and Russian President Vladimir Putin in Belarus on Tuesday at which some progress appeared to have been made toward finding a diplomatic solution to the crisis.
Poroshenko canceled a planned trip to Turkey on Thursday "due to sharp aggravation of the situation in Donetsk region ... as Russian troops were brought into Ukraine," a statement from his office said.
In a Cabinet meeting, Ukrainian Prime Minister Arseniy Yatsenyuk said that Russia "has very much increased its military presence in Ukraine" and that tougher measures may be needed to curb Russia's support for the rebels.
"Unfortunately, the sanctions were unhelpful as to de-escalating the situation in Ukraine," he said, referring to the economic sanctions imposed by the United States and European Union against Russian individuals and companies.
Yatsenyuk suggested one way to halt "Russian aggression" could be to freeze all assets and ban all Russian bank transactions until Russia "pulls out all its military, equipment and agents" from Ukraine.
"Vladimir Putin has purposely started a war in Europe. It is impossible to hide from the fact," he said.
President Barack Obama, similarly, placed blame for the violence in Ukraine on Russia.
"The violence is encouraged by Russia. The separatists are trained by Russia; they are armed by Russia; they are funded by Russia," he told reporters in Washington.
Calling sanctions against Russia already in place "effective," he said it would face additional costs and consequences for its ongoing incursion.
"Russia is already more isolated than at any time since the end of the Cold War," Obama said.
U.N. Secretary General Ban Ki-moon said the flow of weapons is hindering efforts.
"This remains a key obstacle to the de-escalation of the situation on the ground, as arms and heavy weaponry reportedly continue to flow unabated into Ukraine from Russia," Ban said. "There is an urgent need to ensure a secure border between the two countries, with international verification."
U.S. ambassador: Russia is directly involved
U.S. Ambassador to Ukraine Geoffrey Pyatt also said Thursday that Russian soldiers were directly involved in the fighting, alongside the pro-Russian rebels.
"Russian-supplied tanks, armored vehicles, artillery and multiple rocket launchers have been insufficient to defeat Ukraine's armed forces, so now an increasing number of Russian troops are intervening directly in the fighting on Ukrainian territory," he said on Twitter.
"Russia has also sent its newest air defense systems including the SA-22 into eastern Ukraine and is now directly involved in the fighting."
Moscow denies supporting and arming the pro-Russian rebels. It has also repeatedly denied allegations by Kiev that it has sent troops over the border.
A Russian senator and the deputy head of the Committee on Defense and Security in Russia's upper house of Parliament, Evgeny Serebrennikov, dismissed the latest reports of a Russian incursion as untrue.
"We've heard many statements from the government of Ukraine, which turned out to be a lie. What we can see now is just another lie," he said to Russian state news agency RIA Novosti.
Russian lawmaker Leonid Slutsky also accused Kiev of lies, in comments to RIA Novosti.
"I can only say that there's no ground for claims like this, and the junta tries to lay its own fault at someone else's door," he said, referring to the Kiev government.
Moscow regards it as illegitimate because it took charge after Ukraine's pro-Russian President Viktor Yanukovych was ousted in February.
Rebel leader: 3,000 to 4,000 Russians in our ranks
However, the Prime Minister of the self-declared Donetsk People's Republic, Alexander Zakharchenko, acknowledged Thursday that there are current Russian servicemen fighting in the rebels' ranks in eastern Ukraine.
In his statement, televised on state-run Russia 24, Zakharchenko said the rebels have never concealed that many Russians are fighting with them. He said up until now there were 3,000 to 4,000 volunteers, some of whom are retired Russian servicemen.
Zakharchenko went on to reveal that the Russian servicemen currently fighting in their ranks are active, "as they came to us to struggle for our freedom instead of their vacations."
On Tuesday, Ukraine's Security Service said it had detained 10 Russian soldiers in Ukraine.
Russian state media cited a source in the Russian Defense Ministry as saying the soldiers had been patrolling the border and "most likely crossed by accident" at an unmarked point.
The NDSC said Thursday that Ukraine's Security Service detained another Russian serviceman who testified that his unit was supplying heavy military equipment to militants.
Separately, Ukraine announced Thursday that it would reinstate compulsory military service in the fall. Fresh recruits will not be sent to the country's area of conflict in the east, Mihaylo Koval, the deputy secretary of the National Defense and Security Council, told reporters.
Ukrainian volunteers retreat from Mariupol area
Pro-Kiev forces apparently already have engaged with rebel forces between Novoazovsk and Mariupol, the Sea of Azov port city 35 kilometers (22 miles) to the west that the country's security council said was being fortified.
A CNN crew north of Mariupol saw a ragged convoy of about 25 vehicles, some with their windows smashed out, belonging to pro-Kiev volunteer fighters heading away from the city Thursday afternoon.
The volunteers, including two from the country of Georgia, said they'd been involved in fighting in the Mariupol area but didn't provide details.
Earlier Thursday and farther north, the CNN crew was near Donetsk city, which Ukrainian forces have been trying to wrest from rebels for weeks. Heavy Ukrainian artillery fire targeted areas near Donetsk's southern suburbs amid a heavy downpour of rain.
The main highway 9 miles (15 kilometers) south of Donetsk was deserted. With return fire coming from Donetsk, villagers in the area said they'd been taking shelter indoors or underground, coming out only for an hour or two a day to get supplies.
The city of Luhansk, a rebel stronghold, has also been at the center of fighting for days, prompting a humanitarian crisis. The NDSC said it remained without water, power or phone connections Thursday. ||||| UNITED NATIONS (AP) — The U.N. Security Council is preparing to meet in emergency session on the growing crisis in Ukraine.
Diplomats said Thursday that the council will meet at 2 p.m. (1800 GMT) at the request of Lithuania.
Alarm has grown as a top NATO official said at least 1,000 Russian troops have poured into Ukraine with sophisticated equipment and have been in direct "contact" with Ukrainian soldiers, resulting in casualties.
Russian Ambassador to the U.N. Vitaly Churkin told reporters "You're at a loss" as he walked into a morning council session and gave no further comment.
UK Ambassador Mark Lyall Grant told reporters that "Russia will be asked to explain why Russia has its troops inside Ukraine. It's very clear that Russian regular troops are now in Ukraine." | – The UN Security Council is set to hold an emergency meeting this afternoon at the request of Lithuania to address what Ukraine is calling a Russian invasion across the border, reports the AP. At least 1,000 Russian troops were said to have entered Ukraine with "sophisticated equipment" and have already killed Ukrainian soldiers, according to a NATO official cited by the AP. Although Russia maintains it's not helping pro-Russian rebels on the front lines, the US ambassador to Ukraine tweeted that "Russian-supplied tanks, armored vehicles, artillery, and multiple rocket launchers have been insufficient to defeat Ukraine's armed forces, so now ... Russian troops are intervening directly in the fighting on Ukrainian territory," notes CNN. "Russia will be asked to explain [at the UN meeting] why Russia has its troops inside Ukraine," says the UK's ambassador to the UN, per the AP. |
anisotropies in the cosmic background radiation ( cbr ) are a strong potential source of information on the cosmological model .
unfortunately , anisotropy observations are hard and significant measurements were obtained only recently . as a matter of fact ,
the theory of cbr anisotropies is well understood ( see , e.g. , hu & sugiyama 1995 and references therein ) and public numerical codes allow one to calculate the expected anisotropies for a wide range of cases ( seljak & zaldarriaga 1996 ) .
it is then easy to see that cbr anisotropies depend on all the ingredients that define a cosmological model : the background metric , the substance mix and the primeval fluctuation spectrum .
several authors used available codes to predict cbr features for suitable ranges of model parameters . however , within the range of models consistent with the inflationary paradigm , not enough attention , in our opinion , has been devoted yet to mixed models .
anisotropies expected for them were calculated by ma & bertschinger ( 1995 ) , de gasperis et al . ( 1995 ) and dodelson et al . ( 1996 ) , but parameter choices were restricted to cases for which anisotropies only marginally differ from the standard cdm case . here we plan to extend the analysis to a wider set of mixed models , including those for which a greater discrepancy from standard cdm can be expected and , in particular , models with primeval spectral index @xmath0 and late derelativization of the hot component .
if hot dark matter ( hdm ) is made of massive @xmath13 s , there is a precise constraint between its density parameter ( @xmath9 ) and its derelativization redshift ( @xmath3 ) : @xmath14 ( see eqs . 2.3 - 2.4 below ; @xmath15 is the number of @xmath13 spin states ) .
hence , in order to have @xmath160.15 , @xmath3 cannot be lower than @xmath17 , even for @xmath18 . in order to have a lower @xmath3 and/or a greater @xmath9
, hdm must arise from the decay products of heavier particles .
in fact , decay products have extra kinetic energy arising from mother particle mass energy and therefore have a later @xmath3 .
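to make the thermal - neutrino case quoted above concrete , the short python sketch below evaluates the mass and derelativization redshift corresponding to a given @xmath9 . it is not taken from the paper : the relations used ( a relic neutrino temperature of ( 4/11)^{1/3} times 2.726 k , a density parameter of roughly m / 93 ev per massive species in units of h^2 , and derelativization when the mean momentum 3.15 t drops below the mass ) are standard textbook values assumed here purely for illustration .

# minimal sketch (standard relations assumed, not taken from the paper):
# omega_nu * h^2 ~ g_nu * m_nu / 93 ev, t_nu0 = (4/11)^(1/3) * t_cmb,
# derelativization when the mean momentum 3.15 * t_nu(z) falls to m_nu.
T_CMB_K = 2.726
K_B_EV = 8.617e-5                                    # boltzmann constant in ev/k
T_NU0_EV = (4.0 / 11.0) ** (1.0 / 3.0) * T_CMB_K * K_B_EV

def m_nu_ev(omega_nu, h=0.5, g_nu=1):
    """neutrino mass (ev) giving density parameter omega_nu for g_nu massive species."""
    return 93.0 * omega_nu * h * h / g_nu

def z_derel(m_ev):
    """redshift at which <p> ~ 3.15 t_nu falls below the mass."""
    return m_ev / (3.15 * T_NU0_EV) - 1.0

for g_nu in (1, 3):
    m = m_nu_ev(0.15, g_nu=g_nu)
    print(g_nu, round(m, 2), int(z_derel(m)))

with h = 0.5 , a single massive species and a density parameter of 0.15 this gives a mass of a few ev and a derelativization redshift of several thousand ; sharing the same density among three species lowers that redshift roughly threefold , which is the sense of the constraint discussed above .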
several authors considered such a scenario , assuming the decay of a heavier neutrino into a lighter one ( bond & efstathiou 1991 , dodelson et al .
1994 , white et al .
1995 , mcnally & peacock 1995 , ghizzardi & bonometto 1996 ) , and there are recent attempts at constraining it using cbr data ( hannestad 1998 ) .
however , a wider range of models gives rise to similar pictures , e.g. if metastable supersymmetric particles decay into lighter ones ( bonometto et al .
, 1994 ; borgani et al . 1996 ) . in a number of recent papers hdm arising from decays
was called @xmath21 , to stress its capacity to grant a later derelativization , which weakens its contribution to the formation of inhomogeneities .
cbr anisotropies were first detected by the cobe
dmr experiment ( smoot et al . 1992 ) .
the angular scales observed by cobe were rather wide ( @xmath22 ) and allowed one to inspect a part of the spectrum which is almost substance independent .
nevertheless , cobe measurements provide the normalization of density fluctuations outside the horizon , and fair constraints on the primeval spectral index @xmath5 ( bennet et al . 1996 ) .
more recent balloon - borne and ground - based experiments investigated cbr fluctuations on scales comparable with the horizon scale at recombination in the standard cdm model . on these scales one expects the first doppler peak and , unlike what happens on larger scales , anisotropies are related both to the spectral index @xmath5 and to the substance mix . in principle , these scales are the best ones to test mixed models , as on even smaller scales ( @xmath23 , see below ) the cbr spectrum can be distorted by other effects , like reionization and lensing .
the degree scale measurements currently available seem to have detected the doppler peak at a fair angular scale , but with an amplitude higher than expected in a standard cdm scenario with @xmath24 ( scott et al . 1995 , de bernardis et al . 1997 ) .
an analysis of such outputs seems to exclude low values of the density parameter @xmath25 ( hancock et al . 1998 , bartlett et al . 1998 ) .
here @xmath26 is the present average density in the universe and @xmath27 depends on the value of the hubble parameter @xmath28km@xmath29s@xmath30mpc@xmath30 . furthermore , lineweaver ( 1997 ) and lineweaver & barbosa ( 1998 ) point out that observed anisotropies are still too large to agree with cdm and @xmath31 unless @xmath32 .
alternative causes for such greater fluctuations can be @xmath0 or a cosmic substance comprising a substantial non cdm component . a number of large scale structure ( lss ) observables are obtainable from the linear theory of fluctuation growth .
a wide range of mixed models predict fair values for them .
tests of non linear features were also performed through n - body simulations ( see , e.g. , ghigna et al . 1997 and references therein ) .
although some technical aspects of mixed model simulations are still questionable , one can state that suitable dm mixtures allow one to fit lss data from 1 to 100 mpc , almost up to the scales covered by current cbr experiments , so that a simultaneous analysis of cbr anisotropies and lss allows complementary tests of the models .
it ought to be outlined that @xmath1 and a hot component have partially compensating effects on lss and , in a previous paper ( bonometto & pierpaoli 1998 , hereafter bp ; see also lucchin et al .
1996 , liddle et al .
1996 ) , we discussed how they could be combined to obtain a fit with lss data .
on cbr fluctuations , instead , they add their effects and a quantitative analysis is needed to see how far one can go from both @xmath24 and pure cdm .
most of this paper is focused on such analysis and on the tools needed to perform it .
in particular , let us outline that public codes , like cmbfast , can not predict cbr anisotropies for mixed models with a hot component of non thermal origin .
a part of this work , therefore , required a suitable improvement of current algorithms .
in this work only models with @xmath33 , @xmath34 and @xmath35 are considered .
the _ cosmic substance _ is fixed by the partial density parameters : @xmath36 for baryons , @xmath37 for cold dark matter ( cdm ) , @xmath9 and @xmath3 ( see below ) for hdm .
here we shall also distinguish between hdm made of massive @xmath13 s ( @xmath38 models ) and hdm arising from heavier particle decay ( @xmath21 models ) .
the relation between the nature of hdm and the amount of sterile massless components ( smlc hereafter ) needs also to be discussed ( in standard cdm , smlc is made by 3 massless @xmath13 s ) .
early deviations from homogeneity are described by the spectrum @xmath39 ( @xmath40 is the distance from the recombination band , as @xmath41 is the present horizon radius ) . here
@xmath42 and @xmath43 is the comoving length scale .
models with @xmath44 were considered . in appendix
a we review which kinds of inflationary models are consistent with such @xmath5 interval .
section 2 is dedicated to a brief discussion on the different kinds of hot dark matter that could lead to a mixed dark matter scenario . in section 3
we analyze the cbr spectrum of such models , distinguishing between the effects due to the smlc and those due to the actual phase
space distributions of the hot particles and discussing how current algorithms need to be modified to provide cbr anisotropies for volatile models .
we then perform an analysis of the parameter space : models are preselected according to lss constraints related to the linear theory .
this selection is based on bp results , whose criteria will be briefly reviewed ( section 4 ) .
bp results , however , were restricted to the case @xmath45 . here
we shall inspect a greater portion of the parameter space , by considering also models with @xmath46 , allowing for a substantial dependence of the cbr power spectrum on the baryon abundance .
section 5 is dedicated to a comparison of the cbr spectra with current available data , and to the final discussion .
the _ substance _ of mixed models can be classified according to its behaviour when galactic scales ( @xmath47@xmath48 ) enter the horizon . particles which are already non relativistic at that time are said to be _ cold _ : their individual masses or energy distributions do not affect cosmological observables .
_ tepid _ or _ warm _ components ( see , e.g. , pierpaoli et al . 1998 ) become non relativistic while galactic scales enter the horizon .
_ hot _ component(s) , instead , become non relativistic after the latter scale has entered the horizon .
neutrinos , if massive , are a typical hot component .
they were coupled to radiation for @xmath49kev . if their mass @xmath50 , their number density , at any @xmath51 , is @xmath52 ( after electron annihilation @xmath53 = ( 4/11)^{1/3 } t$ ] , where @xmath54 is radiation temperature ) and their momentum distribution ( normalized to unity ) reads @xmath55 + 1 } \eqno ( 2.1)\ ] ] also when @xmath56 . henceforth , when @xmath57 , their distribution is not _
thermal _ , although its shape was originated in thermal equilibrium .
notice that , for high @xmath58 , @xmath59 is cut off as @xmath60 . using the distribution ( 2.1 ) we can evaluate @xmath61 . if we define @xmath62 as the redshift for which @xmath63 , eq . ( 2.2 ) tells us that @xmath3 occurs when @xmath64 . in the following
we shall use the parameter @xmath65 , which normalizes @xmath3 at a value ( @xmath66 ) in its expected range . at @xmath67 ,
photons and cdm would have an equal density in a pure cdm model with @xmath68 and a present cbr temperature @xmath69k . hence , such a redshift is in the range where we expect relativistic and non relativistic components to have equal density ( equivalence redshift : @xmath70 ) ; besides photons and smlc , it is possible that hdm contributes to the relativistic component at @xmath70
. its value , in different models , is given by eq .
( 2.11 ) below . in general , we shall normalize @xmath71 at the above value ( which is well within 1 @xmath72 of the data ) and define @xmath73 .
for neutrinos of mass @xmath74 , @xmath75 while @xmath76 . let us now compare these features with those of a volatile model , where the hot component originates in the decay of heavier particles @xmath77 ( mass @xmath78 ) , decoupled since some early time @xmath79 ( see also pierpaoli & bonometto 1995 , and pierpaoli et al . 1998 ) .
if the temperature @xmath80 , such hot component may have a number density much smaller than massive neutrinos .
let @xmath81 be the heavy particle comoving number density at decoupling . at @xmath82
their comoving number density reads @xmath83 ( eq . 2.5 ) , with @xmath84 ( decay time ) . assuming a two body decay process @xmath85 , into a light ( _ volatile _ ) particle @xmath86 ( mass @xmath87 ) and a massless particle @xmath88 , it is shown that the volatile distribution , at @xmath89 , reads @xmath90 ( eq . 2.6 ) , where @xmath91 is given by eq . ( 2.7 ) , provided that @xmath77 s , before they decay , never attain a density exceeding that of the relativistic components ( which would cause a temporary matter dominated expansion ) . at high @xmath58 ,
the distribution ( 2.6 ) is cut off as @xmath92 , and this is true also if this temporary regime occurs .
in bp it is shown that , if the massless particle @xmath88 is a photon ( @xmath93 ) , such temporary expansion can never occur in physically relevant cases , which must however satisfy the restriction @xmath94 .
this limitation does not hold if @xmath88 is a massless scalar , as is expected to exist in theories where a global invariance is broken below a suitable energy scale ( examples of such particles are _ familons _ and _ majorons _ ) . using the distribution ( 2.6 ) , it is easy to see that the average @xmath95 and @xmath86 s will therefore become non relativistic when @xmath96 ; hence @xmath97 can be used in the distribution ( 2.6 ) , instead of eq .
( 2.7 ) .
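since eqs . ( 2.5 ) - ( 2.7 ) are not reproduced here , the following python lines only sketch the simplest version of the mechanism : an exponentially decaying parent comoving density and one volatile particle produced per decay . these functional forms are assumptions of this sketch , not the paper's expressions .

import numpy as np

# toy decay history: exponential decay of the parent, one volatile per decay
# (an assumption of this sketch; the paper's eq. 2.5 is not reproduced here).
def parent_comoving_density(t, n_d, tau):
    return n_d * np.exp(-t / tau)

def volatile_comoving_density(t, n_d, tau):
    return n_d * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 5.0, 6)      # time in units of the decay time tau
print(parent_comoving_density(t, 1.0, 1.0))
print(volatile_comoving_density(t, 1.0, 1.0))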
if @xmath88 s are sterile scalars and the decay takes place after bbns , they will contribute to smlc and affect cbr and lss just as extra massless neutrino states .
let us recall that , in the absence of @xmath77 decay , the ratio @xmath98 .
@xmath77 decay modifies it , turning @xmath15 into an effective value @xmath99 . in particular , @xmath88 s lower the _ equivalence _ redshift . for @xmath100 ,
also @xmath86 s are still relativistic at equivalence .
accordingly , the equivalence occurs at either @xmath101 in the former and latter case , respectively .
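the effect of the extra sterile component on the equivalence redshift can be illustrated with a short sketch . the numbers below are standard background values assumed only for illustration ( a photon density parameter of about 2.47e-5 / h^2 and a contribution of ( 7/8 ) ( 4/11 )^{4/3} ~ 0.227 of the photon density per massless neutrino - like species ) ; they are not taken from the paper's equations .

# sketch: equivalence redshift vs the effective number of relativistic species.
# assumed standard values: omega_gamma * h^2 = 2.47e-5, each neutrino-like
# species adding (7/8)*(4/11)^(4/3) ~ 0.227 of the photon density.
OMEGA_GAMMA_H2 = 2.47e-5
NU_FRACTION = 0.875 * (4.0 / 11.0) ** (4.0 / 3.0)

def z_equivalence(omega_m=1.0, h=0.5, n_eff=3.0):
    omega_rel_h2 = OMEGA_GAMMA_H2 * (1.0 + NU_FRACTION * n_eff)
    return omega_m * h * h / omega_rel_h2 - 1.0

for n_eff in (3.0, 6.9, 10.7, 16.0):
    print(n_eff, round(z_equivalence(n_eff=n_eff)))

raising the effective number of species from 3 to values like those listed in table 1 roughly halves the equivalence redshift , which is what boosts the first doppler peak in the volatile models discussed below .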
volatile models , as well as neutrino models , can be parametrized through the values of @xmath9 and @xmath102
. however , at given @xmath9 , the latter ones are allowed only for discrete @xmath102 values ( notice that such @xmath102 values are independent of @xmath103 and can be only marginally shifted by changing @xmath104 ) .
the former ones , instead , are allowed for a continuous set of @xmath102 values .
this can be seen in fig . 1 , which is taken from bp , where more details on volatile models can be found .
in fig . 1 we also show which models are consistent with lss constraints and cobe quadrupole data , for various @xmath0 .
such constraints will be briefly discussed in the next section . in general , they are fulfilled for a part of the allowed @xmath105 values .
fig . 1 also shows that there is a large set of mixed models with low @xmath3 which are allowed by lss data and are not consistent with hdm made of massive @xmath13 s .
in this article we shall show that a portion of this extra parameter space seems however forbidden by cbr constraints .
to describe the evolution of radiation anisotropies in an expanding universe , it is convenient to write the metric in the form @xmath106 ( eq . 3.1 ) in the conformal newtonian gauge . here
@xmath107 ( components of the vector @xmath108 ) are space coordinates , @xmath109 is the conformal time , @xmath110 is the scale factor and @xmath111 gives the spatial part of the metric tensor in the homogeneity limit .
the deviations from a pure friedmann metric , due to gravitational field inhomogeneities , are given by the _ potentials _ @xmath112 and @xmath113 .
in the presence of inhomogeneities , the temperature of radiation @xmath114 contains an anisotropy term , which can be thought of as a superposition of plane waves of wave numbers @xmath115 . with respect to a given direction @xmath116 , the amplitude of each @xmath115 mode can be expanded into spherical harmonics . for our statistical aims it is however sufficient to consider the anisotropy as a function of @xmath117 and to use the expansion @xmath118 , where @xmath119 are legendre polynomials ; its coefficients can be used to work out the angular fluctuation spectrum @xmath120 which , for a gaussian random field , completely describes the angular anisotropies . at the present time @xmath121 and for a comoving scale given by the wavenumber @xmath122
, we can compute @xmath123 by performing a time integral ( seljak and zaldarriaga 1996 ) , @xmath124 ( eq . 3.4 ) , over the source function @xmath125 , which depends upon the inhomogeneity evolution inside the last scattering band and from it to now .
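the statistical content of the angular spectrum defined above can be illustrated with a toy estimator ( this is not the line - of - sight integral of eq . 3.4 used by cmbfast ) : for a gaussian field the expansion coefficients are independent gaussian variables of variance equal to the spectrum , so averaging their squares over the available modes recovers it , up to cosmic variance . the input spectrum used below is an arbitrary assumption of the toy .

import numpy as np

# toy: draw gaussian multipole coefficients with variance cl and recover cl
# as the average of their squares over the 2l + 1 values of m.
rng = np.random.default_rng(0)

def estimate_cl(cl_true, l):
    a_lm = rng.normal(scale=np.sqrt(cl_true), size=2 * l + 1)
    return np.mean(a_lm ** 2)

for l in (2, 10, 100, 1000):
    cl_true = 1.0 / (l * (l + 1.0))   # arbitrary input spectrum for the toy
    print(l, cl_true, estimate_cl(cl_true, l))

the scatter of such an estimate is of order sqrt( 2 / ( 2l + 1 ) ) times the spectrum itself , which is why the low multipoles measured by cobe are intrinsically noisy while degree - scale measurements are much less affected by this sampling limit .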
the physics of microwave background anisotropies due to adiabatic perturbations has been deeply investigated in the last few years .
it has been shown that the characteristics of the peaks in the @xmath7 spectrum are related to the physics of acoustic oscillations of baryons and radiation between the entry of a scale into the horizon and the last scattering band , and to the history of photons from the last scattering surface to us .
background features , like the overall matter and radiation density content , @xmath103 and @xmath126 , have an influence both on the positions of the peaks and on their amplitude ; the latter , however , also depends strongly on the baryon content @xmath127 and more weakly on the characteristics of the hot component . in the following
we shall analyze in detail the angular spectrum of volatile models , outlining its peculiarities with respect to standard cdm and neutrino models . in order to do so
, we need to modify available public codes , like cmbfast , allowing them to deal with a hot component whose momentum distribution is ( 2.6 ) .
it should also be recalled that volatile and neutrino models , for given @xmath9 and @xmath102 , are expected to include a different amount of smlc . in neutrino models
smlc is less than in pure cdm and even vanishes if all @xmath13 s are massive ( unless extra smlc is added @xmath128 ) . in volatile models , instead , smlc is however more than in pure cdm , as scalar @xmath88 s are added on top of standard massless @xmath13 s .
several @xmath7 spectra of volatile models are presented in figs . 6 - 15 .
they show two main features , if compared with standard cdm : the first doppler peak is higher and the second and third doppler peaks are slightly shifted to the right . in principle , we expect volatile model spectra to differ from neutrino model spectra because of the momentum distribution of volatiles and the extra smlc they have to include . in the following , we shall try to disentangle these two effects . to this aim we coupled each volatile model with a @xmath129 neutrino case with identical @xmath9 and @xmath102 , but a greater number of neutrino degrees of freedom , so as to ensure equal high - redshift energy densities . in fig
. 2 we report the scale factor dependence of the energy densities @xmath130 of volatiles and of massive neutrinos in two coupled models . in the case shown , the two energy densities never differ in ratio by more than @xmath131 ; for different choices of the parameters the curve is just shifted to higher or lower redshifts according to the value of @xmath3 .
in more detail , fig . 2 shows that volatiles have a slower derelativization than neutrinos : the transition phase from the relativistic to the non relativistic regime starts earlier and goes on for a longer time .
this behaviour is related to the different shapes of the two distribution functions , and to the fact that the volatile one is smoother around @xmath132 , which corresponds to a value significantly smaller than its maximum , after which it is rapidly cutoff [ see eqs .
( 2.6 ) , ( 2.7 ) ] .
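the slow derelativization mentioned here can be reproduced , for the thermal case only , with a few lines of python : the mean energy per particle of a redshifted fermi - dirac distribution falls from well above the mass to roughly the mass over a couple of decades of expansion . the volatile distribution of eqs . ( 2.6 ) - ( 2.7 ) is not reproduced in this sketch , and the mass - to - temperature ratio used is an illustrative assumption .

import numpy as np

# mean energy per particle for a decoupled thermal (fermi-dirac) species, with
# comoving momentum q in units of its present temperature t0 (a = 1 today).
# the volatile distribution of eq. (2.6) is not reproduced here.
q = np.linspace(1e-4, 30.0, 4000)
weight = q ** 2 / (np.exp(q) + 1.0)

def mean_energy_over_mass(a, m_over_t0):
    energy = np.sqrt((q / a) ** 2 + m_over_t0 ** 2)
    return np.sum(energy * weight) / (m_over_t0 * np.sum(weight))

for a in (1e-5, 1e-4, 1e-3, 1e-2, 1.0):
    print(a, round(mean_energy_over_mass(a, m_over_t0=1.0e4), 3))

the transition between the two regimes is spread over roughly two decades of the scale factor , in line with the behaviour shown in fig . 2 .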
friedmann equations show that @xmath133 is approximately constant .
hence , once we know @xmath130 , we can perform a comparison between the conformal times of coupled volatile and @xmath129 neutrino cases .
it shows only a marginal discrepancy , as the @xmath134 in the volatile and @xmath129 neutrino cases are already very similar and , moreover , the hot component always contributes only a small fraction of the total energy density . on the contrary , if a similar comparison is performed between standard cdm and volatile models , big discrepancies are found , especially at high redshifts .
in fact , in the volatile cases the relativistic background is greater due to the contribution of the sterile component , and the conformal time is therefore smaller than in the cdm case ( see fig . 3 ) .
this implies visible effects on the position of the doppler peaks , which are due to the oscillatory phase with which the photon baryon fluid meets the last scattering band ( see hu & sugiyama 1995 ) .
the photon
baryon fluid oscillates as @xmath135 , where @xmath122 is the comoving scale and @xmath136 is the sound horizon ( @xmath137 , @xmath138 is the sound speed ) . given the photon
baryon ratio , @xmath139 follow a similar trend as @xmath140 .
since in volatile models @xmath140 is smaller than in cdm , so will be @xmath139 , and the peaks of the spectrum will appear in correspondence to higher @xmath122 ( i.e. higher @xmath141 ) values .
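a rough numerical check of this argument is sketched below : the comoving sound horizon is obtained by integrating the sound speed over the expansion history of an @xmath68 background , and increasing the relativistic content ( the role played here by the smlc ) shrinks it , moving the peaks to larger @xmath141 . the background values used ( photon density , 0.227 of it per extra species , recombination at z ~ 1100 ) are standard assumptions of the sketch , not the paper's parameters .

import numpy as np

# sketch: comoving sound horizon at recombination for a flat omega_m = 1 model.
# assumed values: omega_gamma * h^2 = 2.47e-5, 0.227 of the photon density per
# relativistic species, z_rec ~ 1100; r_s = integral of c_s da / (a^2 h(a)).
C_KM_S = 2.998e5

def sound_horizon_mpc(omega_b=0.05, omega_m=1.0, h=0.5, n_eff=3.0, z_rec=1100.0):
    h0 = 100.0 * h
    omega_g = 2.47e-5 / h ** 2
    omega_r = omega_g * (1.0 + 0.227 * n_eff)
    a = np.linspace(1e-8, 1.0 / (1.0 + z_rec), 200000)
    hub = h0 * np.sqrt(omega_m / a ** 3 + omega_r / a ** 4)
    baryon_load = 0.75 * (omega_b / omega_g) * a      # 3 rho_b / (4 rho_gamma)
    c_s = C_KM_S / np.sqrt(3.0 * (1.0 + baryon_load))
    return np.sum(c_s / (a ** 2 * hub)) * (a[1] - a[0])

for n_eff in (3.0, 10.0, 16.0):
    print(n_eff, round(sound_horizon_mpc(n_eff=n_eff), 1))

with more relativistic energy the expansion rate before recombination is higher , the sound horizon comes out smaller and , the distance to last scattering being nearly unchanged , the acoustic peaks move to somewhat larger @xmath141 , as stated above .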
this peak shift is a specific feature of volatile models ; in neutrino models the same effect plays a role , but it shifts the peaks in the opposite direction ( dodelson et al . 1996 ) . for a given @xmath5 , the height of the peaks
is fixed by ( i ) the ratio between baryon and photon densities , @xmath142 @xmath143 , and ( ii ) the ratio between matter and radiation densities . at fixed @xmath127 and @xmath103 the main reason for a higher doppler peak in volatile models ( with respect to cdm )
is the delayed matter radiation equivalence , for which both smlc and , possibly , volatiles can be responsible . in neutrino models without @xmath144@xmath145 smlc , only the possible delay due to late derelativizing @xmath13 s may exist .
this is why volatile @xmath7 spectra and standard neutrino ones look so different .
however , there is a tiny further contribution to the boost of the peak due to the free - streaming of the hot component .
several authors ( ma & bertschinger 1995 , dodelson et al .
1996 ) have shown that even in high @xmath3 neutrino models the doppler peaks are enhanced with respect to cdm , and in that case the free - streaming of the hot component is to be considered responsible for the enhancement . free streaming , in fact , causes a decay of the potential @xmath113 , which contributes as a forcing factor ( through @xmath146 ) in the equations whose solutions are the _ sonic _ oscillations of the photon baryon fluid , displacing their zero point and , hence , the phase with which they enter the last scattering band . in the standard neutrino case , this effect causes a variation of @xmath147 at most on the @xmath7 , and typically of @xmath148 on the first doppler peak . in principle one can expect that the different momentum distribution of volatiles may alter the free streaming behaviour .
such differences , if they exist , can be found by comparing volatile spectra with the @xmath129 neutrino ones .
the differences between the two spectra are shown in fig . 4 , and amount to @xmath149 at most .
although modest , this is another feature that characterizes volatile models with respect to neutrino ones . in comparison with such finely tuned predictions from theoretical models , currently available data are still affected by huge errorbars .
however , some features already seem evident from them . in figs . 6 - 15 we perform a comparison of model predictions with data and show that the doppler peak observed by the saskatoon experiment ( netterfield et al . 1997 ) exceeds the one expected in pure cdm once it is normalized to cobe data ( bennet et al . 1996 ) .
while it is evident that volatile models show a higher doppler peak , it is clear that a fit could also be reached by changing other parameters , e.g. by taking @xmath1 . in fig . 4 we show what happens in neutrino models if the spectrum is anti - tilted to @xmath150 and to @xmath151 . indeed , the first doppler peak is raised ( which is desirable ) , but the following peaks are raised as well , making the agreement with the results from the cat experiment ( scott et al . 1996 ) difficult . in section 5 similar considerations will be used in order to constrain the whole set of volatile models .
mixed model parameters can be constrained from particle physics and/or from lss . in this section we review a number of the latter constraints , which can be tested without discussing non linear evolution . in appendix a we discuss constraints on the spectral index @xmath5 arising from inflation . even without considering their non linear evolution , models can be constrained through the following prescriptions :
, models can be constrained through the following prescriptions : \(i ) the numerical constant @xmath152 , in the spectrum ( 1.1 ) , must give a value of @xmath153 consistent with the cobe quadrupole @xmath154 .
values of @xmath152 consistent with the @xmath154 values , for a given @xmath5 , within 3@xmath155 s , can be kept .
( ii ) the cobe quadrupole therefore fixes the normalization at small @xmath122 .
the first large @xmath122 test to consider , then , is the behaviour on the @xmath156mpc scale .
quite in general , the mass @xmath157 within a sphere of radius @xmath158 is @xmath159 ( eq . 4.1 ) ; therefore , the @xmath156mpc scale is a typical cluster scale . here
optical and x ray data are to be exploited to work out the mass variance @xmath160 , and models should fit such observational outputs .
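for reference , the mass scale quoted here can be checked with one line of algebra ; the sketch below uses the background density of a critical universe ( rho_crit = 2.775e11 h^2 msun / mpc^3 , an assumed standard value ) and shows that a sphere of radius 8 / h mpc contains roughly 6e14 / h msun for @xmath68 , i.e. a cluster mass .

import math

# mass inside a sphere of comoving radius r in the unperturbed background,
# m(r) = (4 pi / 3) * omega_m * rho_crit * r^3; rho_crit assumed below.
RHO_CRIT_H2 = 2.775e11        # msun / mpc^3, times h^2

def mass_in_sphere_msun(r_h_inv_mpc, omega_m=1.0, h=0.5):
    r_mpc = r_h_inv_mpc / h
    rho = omega_m * RHO_CRIT_H2 * h ** 2
    return (4.0 * math.pi / 3.0) * rho * r_mpc ** 3

print(f"{mass_in_sphere_msun(8.0):.2e}")   # ~ 1.2e15 msun for h = 0.5, i.e. ~ 6e14 msun/h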
optical data provide the cluster mass function through a virial analysis of galaxy velocities within clusters .
x ray determinations , instead , are based on observational temperature functions .
if clusters are substantially virialized and the intracluster gas is isothermal , the mass @xmath161 of a cluster can then be obtained , once the ratio @xmath162 between thermal or galaxy kinetic energy ( per unit mass ) and gravitational potential energy ( per unit mass ) is known .
values for @xmath163 s are currently obtained from numerical models .
henry & arnaud ( 1991 ) compiled a complete x ray flux
limited sample of 25 clusters which is still in use for such determinations .
assuming an isothermal gas , full virialization and @xmath164 , they estimated @xmath165 .
their error does not include the @xmath163 uncertainty .
various authors then followed analogous patterns ( see , e.g. , white et al .
1993 , viana & liddle 1996 ) .
recently eke et al .
( 1996 ) used navarro et al .
( 1995 ) cluster simulations to take @xmath166 with an error @xmath167 .
accordingly they found @xmath168 . by comparing the above results one can estimate that , to obtain @xmath169 , under the assumption of full virialization and purely isothermal gas , @xmath170 is needed .
an estimate of cluster masses independent of cluster models can be obtained by comparing optical and x ray data .
recent analyses ( girardi et al . 1998 ) seem to indicate values of @xmath171 . in our opinion , such outputs do not strengthen the case for a safe cluster mass determination , as they are more than 12 % below the navarro et al . ( 1995 ) ratio and might indicate a non equilibrium situation .
furthermore , it ought to be outlined that cluster mass determinations based on a pure virial equilibrium assumption conflict with the observed baryon abundances and would require cosmological models with @xmath1720.20 , in contrast with bbns constraints , if all dark matter is cdm and @xmath35 . if hdm is only partially bound in clusters and their masses are underestimated by @xmath17320@xmath174 , the latter conflict can be overcome .
( alternative ways out , of course , are that @xmath175 or @xmath176 . ) therefore , in order that data be consistent with mixed models , some mechanism should cause a slight but systematic underestimate of cluster masses . owing to such uncertainties , we can state that cluster data constrain @xmath160 within the interval 0.46 - 0.70 .
these constraints can also be expressed with direct reference to the cumulative cluster number density . defining the mass @xmath177 for which the top - hat mass variance @xmath178 ( here @xmath179 values from 1.55 to 1.69 can be considered ) , the press & schechter approach yields the number density @xmath180 \exp(-u^2/2 ) ( eq . 4.2 ) . a usual way to compare it with data amounts to taking @xmath181m@xmath182 and then considering @xmath183 for the above @xmath161 value . with a range of uncertainty comparable with the one discussed for @xmath160 , optical and x ray data converge towards a value of @xmath184 . hence viable models should have @xmath185 , for one of the above values of @xmath179 .
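the press & schechter recipe behind eq . ( 4.2 ) can be sketched as follows . the code integrates the standard differential mass function above a threshold mass , with a simple power - law mass variance normalized to an assumed @xmath160 ; the slope , normalization and threshold used are illustrative choices of this sketch , not values taken from the paper .

import numpy as np

# standard press & schechter differential mass function, integrated above m:
# dn/dm = sqrt(2/pi) * (rho/m^2) * nu * |dln sigma/dln m| * exp(-nu^2/2),
# with nu = delta_c / sigma(m). a power-law sigma(m) is assumed here.
RHO_CRIT_H2 = 2.775e11                      # msun / mpc^3, times h^2

def n_above(m_thr, omega_m=1.0, h=0.5, sigma8=0.6, delta_c=1.69, slope=0.25):
    rho = omega_m * RHO_CRIT_H2 * h ** 2
    m8 = (4.0 * np.pi / 3.0) * rho * (8.0 / h) ** 3        # mass in an 8/h mpc sphere
    m = np.geomspace(m_thr, 1e17, 2000)
    nu = delta_c / (sigma8 * (m / m8) ** (-slope))
    dndm = np.sqrt(2.0 / np.pi) * rho / m ** 2 * nu * slope * np.exp(-0.5 * nu ** 2)
    return np.sum(0.5 * (dndm[1:] + dndm[:-1]) * np.diff(m))

for s8 in (0.5, 0.6, 0.7):
    print(s8, f"{n_above(8.0e14, sigma8=s8):.2e}")         # number density per mpc^3

the steep dependence of the result on @xmath160 ( roughly an order of magnitude between 0.5 and 0.7 in this toy ) is what makes the cluster abundance such a restrictive normalization test .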
there is a slight difference between testing a model with respect to @xmath160 or to @xmath186 .
this amounts to the different impact that the slope of the transferred spectrum has on expected values .
observations , however , also constrain the observed spectral slope , as we shall detail at point ( iv ) .
( iii ) in order to have @xmath186 and @xmath160 consistent with observations , the @xmath152 interval obtained from the cobe quadrupole may have to be restricted .
the residual range of @xmath187 values can then be used to evaluate the expected density of high@xmath188 objects , which mixed models risk underproducing .
the most restrictive constraint comes from computing @xmath189 in damped lyman @xmath190 systems ( for a review see wolfe 1993 ) .
it can be shown that @xmath191 ( eq . 4.3 ) , where @xmath192 is the ( top hat ) mass variance ( for mass @xmath161 at redshift @xmath188 ) and @xmath190 is an efficiency parameter which should be @xmath193 .
more specifically , using such expression , one can evaluate @xmath194 .
then , taking @xmath195 , @xmath196 and @xmath197 we have a figure to compare with the observational value given by storrie
lombardi et al . ( 1995 ) : @xmath198 .
only models for which the predicted value of @xmath199 exceeds 0.5 , at least for a part of the allowed @xmath187 interval , are therefore viable . in turn , also for viable models , this may yield a further restriction on the @xmath187 interval .
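the backbone of estimates like eq . ( 4.3 ) is the press & schechter collapsed fraction ; a minimal sketch is given below , using the growth factor of an @xmath68 universe ( proportional to 1 / ( 1 + z ) ) and illustrative values of the mass variance and redshift , which are assumptions of the sketch rather than the paper's numbers .

import math

# collapsed mass fraction above mass m at redshift z in the press & schechter
# approach: f = erfc(delta_c / (sqrt(2) * sigma(m, z))), with
# sigma(m, z) = sigma(m, 0) / (1 + z) for omega_m = 1.
def collapsed_fraction(sigma_m0, z, delta_c=1.69):
    sigma_mz = sigma_m0 / (1.0 + z)
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma_mz))

for z in (3.0, 4.0):
    print(z, f"{collapsed_fraction(sigma_m0=3.0, z=z):.3e}")

multiplying such a fraction by a baryon abundance and by an efficiency of order @xmath190 gives the kind of @xmath189 estimate compared with the damped lyman @xmath190 data above .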
( iv ) models viable with respect to the previous criteria should also have a fair slope of the transferred spectrum .
its slope can be quantified through the _ extra power _ parameter @xmath200 ( @xmath201 are mass variances on the scales @xmath202mpc ) . using apm and abell / aco samples , peacock and dodds ( 1994 ) and borgani et al . ( 1997 ) obtained @xmath203 in the intervals 0.19 - 0.27 and 0.18 - 0.25 , respectively .
such intervals essentially correspond to @xmath204 s .
furthermore , the lower limit can be particularly sensitive to underestimates of non linear effects .
hence , models yielding @xmath205 outside the interval 0.13 - 0.27 are hardly viable .
one can also test models against bulk velocities reconstructed via potent from observational data .
this causes no constraint , at the 2@xmath155 level , on models which survived the previous tests . in bp a number of plots of the transferred spectra of viable models were shown against lcrs reconstructed spectral points ( lin et al . 1996 ) .
however , the previous constraints include most quantitative limitations , and models passing them fit the spectral data .
fig . 1 is taken from bp and reports the curves on the @xmath206 plane limiting areas where viable mixed models exist for various primeval @xmath5 values , if @xmath45 .
all models considered in the next sections , both for @xmath45 and @xmath207 , were previously found to satisfy the above constraints .
in this section we give the cbr spectra of the hot volatile models and compare them with available data , ranging from @xmath208 to @xmath209 .
we evaluated the spectra for several parameter choices allowed by lss constraints ( see fig . 1 ) .
significant examples of @xmath7 spectra are shown in figs . 6 - 15 , while the corresponding lss predictions are summarized in table 1 .
@xmath210   @xmath102   @xmath5   @xmath211   @xmath212     @xmath213   @xmath214
0.11        8           1         6.9         0.53 - 0.68   0.23        1.8 - 9.8
0.11        8           1.1       6.9         0.63          0.27        6.9
0.16        2.83        1         5           0.60 - 0.68   0.20        4.0 - 9.3
0.16        2.83        1.1       5           0.67          0.24        8.9
0.11        16          1.1       10.7        0.52 - 0.64   0.24        1.6 - 7.3
0.19        8           1.1       9.7         0.55 - 0.66   0.16        2.0 - 7.7
0.19        8           1.2       9.7         0.55 - 0.67   0.19        2.1 - 8.1
0.20        4           1.2       6.5         0.67          0.20        8.3
0.24        4           1.3       7.2         0.64 - 0.69   0.17        6.4 - 9.4
0.23        8           1.3       8.9         0.57 - 0.68   0.17        2.7 - 9.3

0.11        8           1         6.9         0.52 - 0.65   0.21        1.4 - 7.6
0.11        8           1.1       6.9         0.58 - 0.67   0.24        3.6 - 8.9
0.16        2.83        1         5           0.66 - 0.68   0.18        7.4 - 9.0
0.16        2.83        1.1       5           0.60 - 0.65   0.21        4.4 - 7.1
0.11        16          1.1       10.7        0.47 - 0.67   0.22        1.0 - 7.9
0.19        8           1.1       9.7         0.57 - 0.68   0.14        2.9 - 8.7
0.19        8           1.2       9.7         0.49 - 0.67   0.17        1.0 - 8.0
0.20        4           1.2       6.5         0.66 - 0.68   0.13        7.1 - 8.0
0.24        4           1.3       7.2         0.54 - 0.69   0.14        1.6 - 8.8
0.23        8           1.3       8.9         0.52 - 0.70   0.15        1.2 - 10

some parameter sets are compatible with neutrino hot dark matter , while models with a low @xmath3 and @xmath215 are obtainable with volatile hot dark matter . since the height of the first doppler peak is very sensitive to the baryon abundance , we considered two values of @xmath127 , namely 0.05 and 0.1 .
spectra are normalized to @xmath216 assuming no contribution of gravitational waves .
as is known , their contribution would raise the low - l tail of the @xmath7 spectrum , therefore reducing the gap between the sachs - wolfe _ plateau _ and the top of the first doppler peak .
models with @xmath217 systematically show a peak less pronounced than models with @xmath218 .
it is well known that models with a given @xmath103 and hot component show a lower doppler peak for smaller @xmath127 ; on top of that , here there is a further effect : lss constraints are often compatible with a part of the observational @xmath216 interval , and low @xmath127 models tend to be consistent with low @xmath216 values . while in fig . 5 we plot most of the available data , in figs . 6 - 15 we compare models with data from cobe ( tegmark 1996 ) , saskatoon ( netterfield et al .
1997 ) and cat ( scott et al .
1996 ) only . figs . 6 - 15 show a systematic trend : for a given large @xmath141 value , @xmath7 increases with both @xmath5 and @xmath219 . on the contrary , for a given large @xmath122 value , the matter fluctuation spectrum @xmath6 increases with @xmath5 but is damped for large @xmath102 , so that these two effects tend to compensate .
this is one of the reasons why lss constraints can be compatible with @xmath5 as high as 1.4 . on the contrary , figs . 14 - 15 show that cbr spectra already disfavour @xmath220 if @xmath221 ( @xmath222 ) is considered , no matter the value of @xmath127 .
volatile models with @xmath223 are largely outside the errorbars , and should be considered as scarcely viable . nevertheless , even for @xmath151 , volatile models allow a higher first doppler peak without raising the following ones , and therefore fit the data better than neutrino models . just like a large @xmath5 , a large @xmath102 also causes conflict with the data by itself . for example , fig . 10 shows that models with @xmath224 are disfavoured , even with low @xmath9 and @xmath150 .
as pointed out in section 2 , volatile models require a sterile component whose energy density is proportional to @xmath225 .
its effective number of degrees of freedom is linked to the equivalence redshift , which in turn affects both the shape parameter @xmath205 and the height of the first doppler peak .
dodelson et al .
( 1994 ) considered the matter power spectrum in the case of a @xmath109neutrino decay ( @xmath109cdm model ) , and found that even in that case the effective number of degrees of freedom @xmath226 is bigger than in standard cdm .
they outlined that , in order to lower @xmath205 at least down to 0.3 ( peacock & dodds 1994 ) , in a @xmath68 universe with @xmath24 , an equivalent number of massless neutrinos as high as 16 is needed .
white et al . ( 1995 ) , who also considered @xmath109cdm models but with a lighter neutrino , also pointed out that the predicted @xmath205 of these models is lower due to the high @xmath226 , and showed that a lower @xmath205 implies a higher first doppler peak .
their work , however , is only qualitative , and they do not infer any restriction on the parameter space using the data .
looking at the data , we found that , if @xmath150 , cbr data disfavour models with an equivalent number of neutrino species @xmath227 , as in figs . 10 , 12 and 15 .
models like the one shown in fig . 8 ( @xmath228 ) seem to fit the data better , although even lower @xmath211 ( @xmath229 ) , as provided by the model in fig . 9 , should not be disregarded .
keeping to @xmath24 , lss constraints already exclude very high @xmath225 values , so that a low @xmath211 is automatically ensured .
the models shown in figs . 6 - 7 seem to fit the data well , with a corresponding @xmath230 .
in this work we have analyzed mixed models from the point of view of both lss and cmb predictions .
we considered different hot dark matter components : the standard neutrino case and the volatile case , in which particles come from the decay of heavier ones .
first we tested the mixed models against available lss data , requiring fair predictions for @xmath160 , @xmath205 , dlas and @xmath186 .
this analysis shows that we must have @xmath231 .
this comes as no surprise , as mixed models with greater @xmath9 have not been considered for a long time .
the new result is that taking @xmath5 up to 1.4 does not ease the problems previously found for large @xmath9 . on the contrary , volatile models together with @xmath0
significantly widen the parameter space in the low @xmath3 direction and viable models even with @xmath232 can be found .
in fact , as far as @xmath6 is concerned , we found a nearly degenerate behaviour of the parameters @xmath5 and @xmath3 , as the damping on the high @xmath122 values due to low @xmath3 can be compensated by high @xmath5 .
cbr data , apparently , break the degeneracy . in section 3
we have shown that the cbr spectrum of volatile models is significantly different from standard cdm and also from neutrino models usually considered .
in fact , smlc and late @xmath3 volatiles cause a late @xmath70 and , hence , a higher first doppler peak .
minor effects are caused by the typical momentum distribution of volatiles .
these effects amount to 2 % at most in the @xmath7 spectrum , and only an accurate analysis of the results of future satellites , such as map and planck , could allow them to be detected .
cbr spectra of volatile models were then compared with available data from different experiments , namely those from cobe , saskatoon and cat experiments .
data on the cbr spectrum at large @xmath141 imply that temperature fluctuations as small as @xmath233 are being probed .
therefore , measurements of the cbr spectrum , for high @xmath141 values , still need to be treated with some reserve .
it seems however clear that recent observations tend to indicate a doppler peak higher than expected both for pure cdm and for mixed models with early derelativization , such as most neutrino models .
taking @xmath1 and/or late derelativization raises the doppler peak and affects the cbr spectrum at high @xmath141 .
the first question we tried to answer is how far we can and have to go from pure cdm and @xmath24 to meet current large @xmath141 data .
we found that volatile models could cure this discrepancy , while ensuring a viable scenario for structure formation . in turn ,
large @xmath141 data imply restrictions in the parameter space , complementary to the ones derived from lss , while a fit of such data requires only a slight departure from pure cdm and @xmath24 .
this allows us to say that mixed models are in very good shape .
for example , figs . 8 - 9 show the @xmath7 behaviour for @xmath150 and hdm ranging from 11 % to 16 % .
such models provide excellent fits to current data and , as explained in bp , are also in agreement with lss .
other models , for larger @xmath5 and @xmath9 or lower @xmath3 , show only a marginal fit with current observations .
hopefully , future data on high @xmath141 s will be more restrictive and allow safer constraints . at present ,
such models can not be ruled out , although they are more discrepant from pure cdm and @xmath24 than high @xmath141 data require . in our opinion , however , cbr data can already be said to exclude a number of models which fitted lss data . in general , models with @xmath234 and @xmath235 seem out of the range of reasonable expectations . altogether , three kinds of departures from cdm and zeldovich were considered in this work : large @xmath9 , low @xmath3 and @xmath1 .
large ( but allowed ) @xmath9 values , by themselves , do not ease the agreement of models with high @xmath141 data .
taking @xmath1 eases the agreement of models with data for @xmath236 , as is expected , but seems to raise the angular spectrum above the data for greater @xmath141 s .
taking low @xmath3 , instead , raises the doppler peak , but does not spoil the agreement with greater @xmath141 data .
current data , therefore , seem to support models with a limited amount of hdm or volatile material , possibly in association with @xmath5 slightly above unity , to compensate some effects on lss .
note that the analysis of this work is carried out keeping @xmath68 , allowing for no cosmological constant , and constraining the total density to be critical .
e.g. , raising @xmath103 would probably allow and require a stronger deviation from pure cdm and @xmath24 .
we plan to widen our analysis of the parameter space in the near future , also in connection with the expected arrival of fresh observational data on the cbr spectrum .
e.p . wishes to thank the university of milan for its hospitality during the preparation of this work .
bartlett j.g. , blanchard a. , le dour m. , douspis m. , barbosa d. , 1998 , astro-ph/9804158
bennet c.l. et al. , 1996 , apj 464 , l1
bond j.r. , efstathiou g. , 1991 , phys. lett. b265 , 245
bonometto s.a. , gabbiani f. and masiero a. , 1994 , phys. rev. d49 , 3918
bonometto s. , pierpaoli e. , 1998 , new astronomy ( in press )
borgani s. , moscardini l. , plionis m. , gorski k.m. , holzman j. , klypin a. , primack j.r. , smith c.c. and stompor r. , 1997 , new astr. 321 , 1
borgani s. , masiero a. , yamaguchi m. , 1996 , phys. lett. b386 , 189
copeland e.j. , liddle a.r. , stewart e.d. and wands d. , 1994 , phys. rev. d49 , 6410
de bernardis p. , balbi a. , de gasperis g. , melchiorri a. , 1997 , apj 480 , 1
de gasperis g. , muciaccia p.f. , vittorio n. , 1995 , apj 439 , 1
dodelson s. , gyuk g. , turner m.s. , 1994 , phys. rev. lett. 72 , 3754
dodelson s. , gates e. , stebbins a. , 1996 , apj 467 , 10
ghigna s. , borgani s. , tucci m. , bonometto s.a. , klypin a. , primack j.r. , 1997 , apj 479 , 580
ghizzardi s. and bonometto s.a. , 1996 , a&a 307 , 697
eke v.r. , cole s. , frenk c.s. , 1996 , mnras 282 , 263
linde a. , 1991a , phys. lett. b249 , 18
linde a. , 1991b , phys. lett. b259 , 38
girardi m. , borgani s. , giuricin g. , mardirossian f. , mezzetti m. , 1998 , astro-ph/9804188
hancock s. , rocha g. , lasenby a.n. , gutierrez c.m. , 1998 , mnras 294 , l1
hannestad s. , 1998 , astro-ph/9804075
henry j.p. , arnaud k.a. , 1991 , apj 372 , 410
hu w. , sugiyama n. , 1995 , phys. rev. d51 , 2599
liddle a.r. , lyth d.h. , schaefer r.k. , shafi q. , viana p.t.p. , 1996 , mnras 281 , 531
lin h. , kirshner r.p. , shectman s.a. , landy s.d. , oemler a. , tucker d.l. , schechter p. , 1996 , apj 471 , 617
lineweaver c. , 1997 , astro-ph/9702040
lineweaver c. , barbosa d. , 1998 , a&a 329 , 799
lucchin f. , colafrancesco s. , de gasperis g. , matarrese s. , mei s. , mollerach s. , moscardini l. and vittorio n. , 1996 , apj 459 , 455
navarro j.f. , frenk c.s. , white s.d.m. , 1995 , mnras 275 , 720
ma c.p. , bertschinger e. , 1995 , apj 455 , 7
mcnally s.j. , peacock j.a. , 1995 , mnras 277 , 143
netterfield c.b. , devlin m.j. , jarosik n. , page l. , wollack e.j. , 1997 , apj 474 , 47
peacock j.a. and dodds s.j. , 1994 , mnras 267 , 1020
pierpaoli e. and bonometto s.a. , 1995 , a&a 300 , 13
pierpaoli e. , coles p. , bonometto s.a. and borgani s. , 1996 , apj 470 , 92
pierpaoli e. , borgani s. , masiero a. , yamaguchi m. , 1998 , phys. rev. d57 , 2089
scott d. , silk j. , white w. , 1995 , science 268 , 829
scott p.f. et al. , 1996 , apj 461 , l1
seljak u. , zaldarriaga m. , 1996 , apj 469 , 437
smoot g.f. et al. , 1992 , apj 396 , l1
storrie-lombardi l.j. , mcmahon r.g. , irwin m.j. and hazard c. , 1995 , proc. eso workshop on qso a.l. ( astro-ph/9503089 )
tegmark m. , 1996 , apj 464 , l38
viana p.t.p. , liddle a.r. , 1996 , mnras 281 , 323
white s.d.m. , efstathiou g. , frenk c. , 1993 , mnras 262 , 1023
white m. , gelmini g. , silk j. , 1995 , phys. rev. d51 , 2669
zabludoff a.i. , huchra j.p. , geller m.j. , 1990 , apjs 74 , 1

* appendix a *
this section is a quick review of results in the literature , aiming to show that there is a wide class of inflationary models which predict @xmath0 , but @xmath237 . during inflation , quantum fluctuations of the _ inflaton _ field @xmath238 on the event horizon
give rise to density fluctuations .
their amplitude and power spectrum are related to the hubble parameter @xmath239 during inflation and to the speed @xmath240 of the _ slow rolling down _ process .
the critical quantity is the ratio @xmath241 , where @xmath239 and @xmath240 are taken when the scale @xmath242 is the event horizon .
it can be shown that @xmath243 and , if @xmath244 ( slowly ) decreases with time , we have the standard case of @xmath5 ( slightly ) below unity . such a decrease occurs if the downhill motion of @xmath238 is accelerated , while the opposite behaviour occurs if @xmath240 decreases while approaching a minimum . the basic reason why a potential yielding such a behaviour seems unappealing is that the very last stages of inflation should rather see a significant acceleration of the @xmath238 field , ending up in a regime of damped oscillations around the true vacuum , when reheating occurs .
however , the usual perspective can be reversed if the reheating does not arise when an initially smooth acceleration finally grows faster and faster , but is triggered by an abrupt first order phase transition , perhaps due to the break of the gut symmetry . before it and since the planck time , most energy resided in potential terms , so granting a vacuum dominated expansion .
this picture of the early cosmic expansion is the so called _
hybrid inflation _ , initially proposed by linde ( 1991a ) . a toy model to realize such scenario ( linde 1991b , 1994 ) is obtainable from the potential @xmath245 which depends on the scalar fields @xmath238 and @xmath246 , that we shall see to evolve slowly and fastly , respectively .
if the _ slow _ field is embedded in mass terms , the potential reads @xmath247 where @xmath248 eq .
( a.3 ) shows that @xmath249 has a minimum at @xmath250 , provided that @xmath251 . if @xmath252 , instead , the minimum is for @xmath253@xmath254 , yielding @xmath255 when @xmath256 .
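for reference , the toy potential usually quoted for linde's hybrid inflation ( and only plausibly what the placeholders above denote ; the symbols below are the standard literature ones , not a transcription of this paper's equations ) reads

\[ V(\sigma,\phi) \;=\; \frac{1}{4\lambda}\left(M^{2}-\lambda\sigma^{2}\right)^{2} \;+\; \frac{m^{2}}{2}\,\phi^{2} \;+\; \frac{g^{2}}{2}\,\phi^{2}\sigma^{2} . \]

in this notation the false vacuum has the rapidly evolving field @xmath246 equal to zero while the slowly rolling field exceeds a critical value , here M / g ; the constant term M^4 / ( 4 \lambda ) then drives the vacuum - dominated expansion , and once the slow field drops below M / g the @xmath246 = 0 configuration destabilizes and falls to the true vacuum with | \sigma | = M / \sqrt{\lambda} , as described in the text below .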
large @xmath238 values therefore require that @xmath246 vanishes , and then the potential @xmath257 gives a planck - time inflation , as @xmath249 goes from an initial value @xmath258 to a value @xmath259 .
the downhill motion of @xmath238 will decelerate as soon as the second term on the @xmath260 of eq . ( a.5 ) becomes negligible with respect to @xmath261 , which acts as a cosmological constant .
this regime breaks down when the critical value @xmath262 is attained
. then @xmath263 changes sign and the configuration @xmath264 becomes unstable .
we have then a transition to the true vacuum configuration @xmath253 , which reheats ( or heats ) the universe .
there are several constraints on the above picture , required in order that at least 60 e - foldings occur with @xmath265 and that fluctuations have a fair amplitude .
they are discussed in several papers ( see , @xmath20 , copeland et al . 1994 , and references therein ) and lead to the restriction @xmath266 .
it is fair to stress that _ hybrid _ inflation is not just one of the many possible variations on the inflationary theme . in spite of the apparent complication of the above scheme , it is an intrinsically simple picture and one of the few patterns which allow one to recover a joint particle - astrophysical picture of the very early universe . | we compute cbr anisotropies in mixed models with different hot components , including neutrinos or volatile hdm arising from the decay of heavier particles .
the cbr power spectra of these models exhibit a higher doppler peak than cdm , and the discrepancy is even stronger in volatile models when the decay gives rise also to a neutral scalar .
cbr experiments , together with large scale structure ( lss ) data , are then used to constrain the parameter space of mixed models , when values of the primeval spectral index @xmath0 are also considered . even if @xmath1 is allowed
, however , lss alone prescribes that @xmath2 .
lss can be fitted by taking simultaneously a low derelativization redshift @xmath3 ( down to @xmath4 ) and a high @xmath5 , while cbr data from balloon - borne experiments cause a severe selection on this part of the parameter space .
in fact , while late derelativization and @xmath1 have opposite effects on the fluctuation spectrum @xmath6 , they sum their action on the angular spectrum @xmath7 .
hence @xmath8 seems excluded by balloon - borne experiment outputs , while a good fit of almost all cbr and lss data is found for @xmath9 values between 0.11 and 0.16 , @xmath10 and @xmath11 in the range 2000 - 5000 .
a smaller @xmath5 is allowed , but @xmath3 should never be smaller than @xmath12 .
dark matter : decaying particles , dark matter : massive neutrinos , large scale structure of the universe , cosmic microwave background : anisotropies . |
He Earned a Total of $2,500 in Scholarships, One for Best Single Photograph
San Francisco (June 24, 2015) — Winning college journalists in the National Photojournalism Championships were announced recently by the William Randolph Hearst Foundation‘s Journalism Awards Program.
Timothy Tai, a Missouri School of Journalism photojournalism student, was one of the six finalists. He received a $1,500 scholarship for this placement.
The $1,000 award for Best Single Photograph went to Tai.
The Hearst Championships are the culmination of the 2014-15 Journalism Awards Program, which are held in 108 member colleges and universities of the Association of Schools of Journalism and Mass Communication with accredited undergraduate journalism programs.
The 29 winners from the 14 monthly competitions participated in the 55th annual Hearst Championships in San Francisco, completing rigorous on-the-spot assignments in writing, photography, radio, television and multimedia categories. Media professionals who judged the finalists’ work throughout the year and at the Championships chose the assignments. Winners were announced during the final awards ceremony on June 4 at the Westin St. Francis Hotel.
The photojournalism judges were: Sue Morrow, assistant multimedia director, Sacramento Bee, California; Jakub Mosur, freelance photographer, San Francisco; and Kenneth Irby, senior faculty, The Poynter Institute, St. Petersburg, Florida.
The William Randolph Hearst Foundation was established by its namesake in 1948 under California non-profit laws, exclusively for educational and charitable purposes. Since then, the Hearst Foundations have contributed nearly 1 billion dollars to numerous educational programs, health and medical care, human services and the arts in every state.
The Hearst Journalism Awards Program was founded in 1960 to foster journalism education through scholarships for outstanding college students. Since its inception, the program has distributed more than $12 million in scholarships and grants for the very best work by student journalists. ||||| Published on Nov 9, 2015
Update: Feb. 25, Melissa Click has been fired: http://www.nytimes.com/2016/02/26/us/...
University of Missouri student and ESPN photojournalist Tim Tai argues with students at University of Missouri, Monday, November 9, 2015. Just earlier, Tim Wolfe had resigned as the University of Missouri System president. Minutes later students created a human barrier between the ConcernedStudent tent village and about 50+ student and national journalists.
I took this video when I was covering the event as a citizen journalist.
The next day ConcernedStudent took down its media-free safe space, and issued flyers saying, "The media is important to tell our story and experiences at Mizzou to the world, let's welcome and thank them."
Seen in the Video: Assistant Director of Greek Life, Janna Basler; Assistant Professor Dr. Melissa Click; Storm Ervin, Concerned Student original 11 member.
http://www.nytimes.com/2015/11/10/us/...
Story Contact:
Videographer: Mark Schierbecker
http://www.MarkSchierbecker.com/
Mark@MarkSchierbecker.com ||||| The protests at the University of Missouri were assisted by dozens of players on the school’s football team who declared that they would boycott games until the school’s president stepped down. This important, complicated story can be explored using an impressive timeline published by Missouri’s student newspaper. Tai’s story is one footnote to this larger narrative.
First Amendment protections for photographers are vital. And I agree with my colleague, James Fallows, that Tai demonstrated impressive intellectual and emotional poise. But video of his encounter with protestors is noteworthy for another reason.
In the video of Tim Tai trying to carry out his ESPN assignment, I see the most vivid example yet of activists twisting the concept of “safe space” in a most confounding way. They have one lone student surrounded. They’re forcibly preventing him from exercising a civil right. At various points, they intimidate him. Ultimately, they physically push him. But all the while, they are operating on the premise, or carrying on the pretense, that he is making them unsafe.
It is as if they’ve weaponized the concept of “safe spaces.”
“I support people creating ‘safe spaces’ as a shield by exercising their freedom of association to organize themselves into mutually supporting communities,” Ken White wrote prior to this controversy. “But not everyone imagines ‘safe spaces’ like that. Some use the concept of ‘safe spaces’ as a sword, wielded to annex public spaces and demand that people within those spaces conform to their private norms.”
Yesterday, I wrote about Yale students who decided, in the name of creating a “safe space” on campus, to spit on people as they left a talk with which they disagreed. “In their muddled ideology,” I wrote, “the Yale activists had to destroy the safe space to save it.”
Here the doublethink reaches its apex: ||||| Students at the University of Missouri have received national attention in recent weeks for protesting against racism on campus, especially verbal vitriol directed at the student body president, who is black. The school’s football team refused to play until the school’s president, Timothy M. Wolfe, resigned, and yesterday Mr. Wolfe did just that, along with school chancellor Bowen R. Loftin.
The school’s campus also became embroiled in a debate about press freedom yesterday after Tim Tai, a student photographer and freelancer for ESPN, was blocked from taking photos by protesters chanting “Hey hey, ho ho, reporters have got to go.” Mr. Tai answered “I have a job to do,” and the protesters responded “We don’t care about your job.”
The student group that blocked Mr. Tai defended itself on Twitter, and some conservative factions have tried to paint him as a martyr. But most members of the news media applauded the young reporter’s grace under pressure:
Very impressed with the professionalism and fortitude @nonorganical, the #Mizzou photographer harassed by students, showed in that video. — Nick Confessore (@nickconfessore) November 10, 2015
–Saluting a brave young photographer who taught a lesson on the 1st Amendment. Tim Tai has a bright future. — PETER MAER (@petermaer) November 10, 2015
Just watched the video of Tim Tai (@nonorganical) being harassed at #Mizzou. Made me straight sick to my stomach. Solidarity, brotha. — Andy Greder (@andygreder) November 10, 2015
What happened to Tim Tai @nonorganical today happens to students reporters in ways small and large almost daily. But not nearly as publicly. — reedkath (@reedkath) November 10, 2015
Stand tall, Tim Tai. You did your profession proud. https://t.co/WEYrnbBCfH — timhoover (@timhoover) November 10, 2015
Thank you for both standing your ground and doing it in a respectful way today @nonorganical — Christine Jackson (@Cjax1694) November 9, 2015
Mr. Tai reacted by acknowledging his new celebrity, while trying to redirect the story to the issues at hand:
Wow. Didn't mean to become part of the story. Just trying to do my job. Thanks everyone for the support. — Tim Tai (@nonorganical) November 10, 2015
I'm a little perturbed at being part of the story, so maybe let's focus some more reporting on systemic racism in higher ed institutions. — Tim Tai (@nonorganical) November 10, 2015
Ironically, one of the most vocal protesters blocking Mr. Tai was Melissa Click, an assistant professor of mass media in the university’s School of Communications (and as such, a courtesy member of the School of Journalism). Ms. Click, whose research interests include “50 Shades of Grey readers” and “the impact of social media in fans’ relationship with Lady Gaga” according to her faculty page, yelled “Who wants to help me get this reporter out of here? I need some muscle over here.”
A social media movement to fire Ms. Click has sprouted up on Twitter, using the hashtag #MelissaClickMustGo. David Kurpius, dean of the Missouri School of Journalism, said in a statement Tuesday that the faculty would review Ms. Click’s special designation.
UPDATE: To their credit, protesters are treating the media more respectfully today—this flyer was distributed at the student campsite this morning: ||||| COLUMBIA, Mo. — A video that showed University of Missouri protesters restricting a student photographer’s access to a public area of campus on Monday has ignited discussions about press freedom.
Tim Tai, a student photographer on freelance assignment for ESPN, was trying to take photos of a small tent city that protesters had created on a campus quad. Concerned Student 1950, an activist group that formed to push for increased awareness and action around racial issues on campus, did not want reporters near the encampment.
Protesters blocked Mr. Tai’s view and argued with him, eventually pushing him away. At one point, they chanted, “Hey hey, ho ho, reporters have got to go.”
“I am documenting this for a national news organization,” Mr. Tai told the protesters, adding that “the First Amendment protects your right to be here and mine.” | – A University of Missouri photojournalist on freelance assignment for ESPN found himself in a confrontation with student activists Monday as they tried to ban him from the tent city they'd set up on campus in response to recent racial strife there, the New York Times reports. A tense video taken by Mark Schierbecker shows—despite demands by the Concerned Student 1950 group that Tim Tai leave them be in their "safe space"—a resolute Tai standing firm, noting his First Amendment rights and saying, "I have a job to do." As activists started walking, forcing him back, Schierbecker was left near the tents, where he was met by a woman now IDed as Melissa Click, an assistant professor of mass media, who, per the Washington Post, told him to "get out," appeared to grab for his camera, then called out to protesters, "Who wants to help me get this reporter out of here? I need some muscle over here." Gawker notes that just days earlier, Click had appealed to get the students' message "into the national media." Tai tweeted Monday evening, "Wow. Didn't mean to become part of the story. Just trying to do my job." His feed has been flooded with support for his "professionalism" during the incident, which Conor Friedersdorf of the Atlantic says activists provoked when they "weaponized" the idea of "safe spaces." Early Tuesday morning, Tai tried to turn the story back to the original issues, tweeting, "I'm a little perturbed at being part of the story, so maybe let's focus some more reporting on systemic racism in higher ed institutions." Meanwhile, the Observer notes that the hashtag #MelissaClickMustGo has popped up, while a Fusion reporter tweeted a photo of a "teachable moment" flier said to have been passed around Tuesday morning that welcomed the media as "important to tell our story" and having the same First Amendment rights as activists do. (Two major Mizzou administrators have stepped down because of the tensions on campus.) |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Technology Export Review Act''.
SEC. 2. ANNUAL REVIEW OF CONTROLLED ITEMS.
Section 4 of the Export Administration Act of 1979 (50 U.S.C. App.
2403) is amended by adding at the end the following:
``(h) Control List Review.--
``(1) In general.--In order to ensure that requirements for
validated licenses to export are periodically removed as goods
and technology become obsolete with respect to the specific
objectives of the export controls requiring such licenses, the
Secretary shall conduct periodic reviews of such controls
imposed under sections 5 and 6. The Secretary shall complete
such a review not later than 6 months after the date of the
enactment of this subsection, and not later than the end of
each 1-year period thereafter.
``(2) Review elements.--In conducting each review under
paragraph (1), the Secretary shall do the following with
respect to the export controls requiring a license described in
paragraph (1):
``(A) Objectives of controls.--The Secretary shall
identify the specific objectives of the export
controls, for the 12-month period following the
completion of the review, for each country or group of
countries for which a validated license is required.
When an objective of an export control is to defer the
development of a specific capability in such country or
group of countries, the Secretary shall specify for
what period of time the controls are expected to defer
such capability.
``(B) Quantity and performance.--The Secretary
shall estimate, for the 12-month period described in
subparagraph (A), the quantities and performance (as
specified in specific performance parameters on the
control list) of the goods and technology to which the
controls apply that must be obtained by each country or
group of countries for which a validated license is
required in order to defeat the objectives of the
export controls.
``(C) Availability to controlled destinations.--The
Secretary shall evaluate the effectiveness of the
export controls in achieving their specific objectives,
including explicit descriptions of the availability
from sources outside the United States, or from sources
inside the United States resulting from the inability
of the United States Government to effectively enforce
controls, during the 12-month period described in
subparagraph (A), to controlled countries of goods and
technology to which the export controls apply.
``(D) Economic impact.--The Secretary shall
evaluate the economic impact, during the 12-month
period described in subparagraph (A), of the export
controls on exporting companies, including estimates of
lost sales, loss in market share, and administrative
overhead.
``(3) Changes in controls.--
``(A) Changes.--After completing each review under
this subsection, the Secretary shall, if warranted by
the findings of the review and after consultation with
appropriate departments or agencies--
``(i) eliminate the requirement for an
export license for a particular good or
technology;
``(ii) make such a good or technology
eligible for delivery under a distribution
license or other license authorizing multiple
exports;
``(iii) eliminate a performance threshold
or other characteristic upon which the
requirement for a validated license for such a
good or technology is based; or
``(iv) increase the performance levels at
which an individual validated license for such
a good or technology is required, at which it
is eligible for delivery under a distribution
license, or at which special conditions or
security safeguard plans are imposed as a
condition of export.
``(4) Hearings.--The Secretary shall conduct public
hearings not less than once each year in order to solicit
information from all interested parties on all matters to be
addressed in each review conducted under this subsection.
``(5) Removal of controls on mass-market products.--
``(A) Mass-market products defined.--For the
purposes of this paragraph, the term `mass-market
product' means any good or technology sold, licensed,
or otherwise distributed as a discrete item and which
will have been distributed for end use outside the
United States in a quantity exceeding 100,000 units
over a 12-month period, as determined under
subparagraph (B).
``(B) Anticipatory review of mass-market
products.--Not later than--
``(i) 6 months after the date of the
enactment of this subsection, and
``(ii) the end of each 1-year period
thereafter,
the Secretary shall, in consultation with the
appropriate technical advisory committee, industry
groups, and producers, identify those items described
in subparagraph (A) (including products differentiated
on the control list according to specific performance
parameters) that will be distributed for end use
outside the United States in a quantity exceeding
100,000 units beginning on the applicable date
described in clause (i) or (ii). For purposes of this
paragraph, estimates of numbers of items that will be
distributed shall be based on reliable estimates
provided by producers of such items.
``(C) Action by the secretary.--Not later than 30
days after an item is determined by the Secretary under
subparagraph (B) to be a mass-market product, the
Secretary shall propose to any group of countries which
imposes export controls on the item cooperatively with
the United States the elimination of controls on the
item in accordance with the procedures of such group,
and shall publish a notice of such proposal in the
Federal Register.
``(6) Relationship to other provisions.--The requirements
of this subsection are in addition to any other requirements of
this Act. The Secretary may coordinate reviews under this
subsection with reviews conducted under section 5(c).''.
SEC. 3. EQUAL TREATMENT OF COMPONENTS.
Section 4 of the Export Administration Act of 1979 is amended by
adding at the end the following new subsection:
``(i) Treatment of Semiconductors.--The export control treatment
imposed under the authority of this Act upon semiconductor devices
shall be no more restrictive or burdensome to the exporter than the
export control treatment imposed under the authority of this Act upon
computer systems or telecommunications systems for which the
semiconductor devices serve or can serve as components.''. | Technology Export Review Act - Amends the Export Administration Act of 1979 to direct the Secretary of Commerce to conduct periodic reviews of national security and foreign policy export controls on goods and technology in order to ensure that requirements for validated licenses to export are periodically removed as such items become obsolete with respect to the objectives of such controls.
Requires the Secretary, if the review warrants, to: (1) eliminate the requirement for an export license for a particular good or technology; (2) make such item eligible for delivery under a distribution license or other license authorizing multiple exports; (3) eliminate a performance threshold upon which the license requirement for the item is based; or (4) increase the performance levels at which a license for such item is required, at which it is eligible for delivery under a distribution license, or at which special conditions or security safeguard plans are imposed as a condition of export.
Requires the Secretary to: (1) identify mass-market products (any good or technology sold, licensed, or distributed as a discrete item which will have been distributed for end use outside the United States in a quantity exceeding 100,000 units over a 12-month period); and (2) propose the elimination of export controls on such an item to any group of countries which imposes similar controls on it cooperatively with the United States.
Requires the export control treatment imposed under this Act upon semiconductor devices to be no more restrictive or burdensome to the exporter than controls imposed under such Act upon computer systems or telecommunications systems for which the semiconductor devices serve as components. |
formation of massive galaxies provides a critical test of theories of galaxy formation and evolution . before modern deep observations ,
the most massive galaxies known were local elliptical galaxies with no ongoing star formation . the classical model for these objects ( e.g. , @xcite ) was monolithic formation at high redshifts , followed by passive evolution . a more recent galaxy formation theory in the cold dark matter ( cdm )
paradigm @xcite predicts quite the opposite scenario : a galaxy - galaxy merging - tree model . in this scenario ,
small galaxies formed early in cosmic time , and massive galaxies were assembled later at much lower redshifts by a series of mergers .
observations of local ultra - luminous infrared galaxies ( ulirgs , @xmath5 ) @xcite detected by _ iras _ are consistent with the merger theory .
most local ulirgs have disturbed morphologies , consistent with being merging systems @xcite .
ulirgs in later stages of merging have @xmath6 light profiles @xcite . @xcite and @xcite measured local ulirg dynamical masses and found an average of @xmath7 .
these features are consistent with numerical simulation studies of galaxy mergers , indicating that local ulirgs are merging systems transforming gas - rich galaxies into @xmath8 elliptical galaxies @xcite .
the story is different at @xmath9 .
deep near - infrared surveys @xcite have identified apparently luminous passive galaxies already in place at @xmath10 , implying that they formed at even higher redshifts .
the existence of galaxies with @xmath11 at high redshifts may challenge the merger theory of forming such objects at lower redshifts .
however , @xcite used a semi - analytic model to show that significant numbers of @xmath12 galaxies were in place by @xmath2 but many also formed at lower redshifts ; that is , there was an overall `` downsizing '' trend for massive galaxies to form their stars early , but it is merely statistical , not absolute . possibly consistent with this is the fact that the contribution of lirgs and ulirgs to the total ir luminosity density is more than 70% at @xmath13 @xcite compared to a negligible percentage locally @xcite .
moreover , the redshift surveys for sub - millimeter galaxies ( smgs ) by @xcite reveal a rapidly evolving ulirg population at @xmath14 .
such strong evolution is also seen in ulirgs selected with _ bzk _
color and mips 24 flux at @xmath2 with their number density apparently 3 orders of magnitude higher than the local number density
. thus local ulirgs may well be the tail end of earlier intense activity .
the spitzer mips 24 band has been very effective in probing infrared emission from galaxies at redshifts up to @xmath15 @xcite .
@xcite , @xcite , @xcite , and @xcite argued that 24 emission from galaxies at @xmath16 is powered by both active galactic nuclei ( agn ) and star formation .
spectroscopic observations of a few 24 luminous smgs and lyman break galaxies ( lbgs ) at @xmath17 @xcite with the infrared spectrograph ( irs ) on spitzer support this view , showing both strong continua and emission features of polycyclic aromatic hydrocarbons ( pah ) in the rest - frame @xmath18 .
systematic infrared spectroscopic surveys of 24 luminous but optically faint sources @xcite reveal a dusty , @xmath19 agn population not detected in optical surveys .
most of these agns are ulirgs with power - law spectral energy distributions ( seds ) in the mid - infrared and deep silicate absorption at 9.7 @xcite .
@xcite observed a sample of x - ray agn with similar properties though generally less silicate absorption .
optically - faint radio sources are a mix of agn and starbursts @xcite but are predominantly agn . in general ,
optically - faint objects have weak infrared spectral emission features , and most objects are likely to be agn @xcite . however , not all 24 luminous objects at @xmath20 are agn - dominated . for example
, @xcite and @xcite also identified samples with an apparent 1.6 stellar peak in the irac 4.5 or 5.8 bands .
both samples show a very narrow redshift distribution due to the selection by the mips 24 band of strong 7.7 pah emission at @xmath1 .
irs spectroscopy of 24 -luminous smgs @xcite shows similar spectral features , namely strong pah emission in objects with a 1.6 stellar emission bump @xcite , indicating intensive star formation in both types of objects .
this paper presents an irs spectroscopic and multi - wavelength study of a ulirg sample at @xmath2 .
the sample comes from the all - wavelength extended groth - strip international survey ( aegis , @xcite ) , which consists of deep surveys ranging from x - ray to radio wavelengths .
selection of our sample catches a starburst - dominated phase of ulirgs with @xmath21l@xmath22 , which is very rare among local ulirgs . in this paper , we will study their properties including star formation , stellar masses , agn fractions , and contribution to the universe's star formation history .
section 2 describes the sample selection .
the irs spectroscopic results are presented in section 3 , and section 4 contains an analysis of stellar populations , star formation rate , and agn fraction .
section 5 summarizes our results .
all magnitudes are in the ab magnitude system unless stated otherwise , and notation such as `` [ 3.6 ] '' means the ab magnitude at wavelength 3.6 .
the adopted cosmology parameters are @xmath23 km s@xmath24 mpc@xmath24 , @xmath25 , @xmath26 .
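for quick reference , the sketch below evaluates the distance and angular scale implied by a flat cosmology of this kind using astropy ; the numerical values h0 = 70 km s^-1 mpc^-1 and omega_m = 0.3 are illustrative assumptions , since the paper's exact parameters sit behind placeholders .

```python
# minimal sketch: distance and angular scale at z ~ 1.9 for an assumed
# flat LambdaCDM cosmology (H0 = 70 km/s/Mpc, Om0 = 0.3); the paper's
# actual parameter values are hidden behind @xmath placeholders.
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

z = 1.9                                              # typical sample redshift
d_l = cosmo.luminosity_distance(z)                   # ~1.5e4 Mpc for these values
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)   # ~8.4 kpc/arcsec

print(d_l, scale)
```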
we wish to study the multi - wavelength properties of star - forming galaxies at @xmath2 .
there are many ways of using optical and nir colors to select such a sample .
the samples with the best spectroscopic confirmation are the @xmath27 color - selected bm / bx sources @xcite , which have estimated typical stellar masses of about @xmath28 @xcite .
the average 24 flux density for these sources is @xmath29 @xmath30jy @xcite , which suggests modest rest frame mid - ir luminosities , consistent with lirgs ( @xmath31 ) .
a different sample , based on near - infrared color selection , is the distant red galaxies ( drg , @xmath32 ) @xcite .
these galaxies are redder and dustier than the uv - selected bm / bx sources and are believed to be more massive than @xmath7 @xcite .
dusty drgs have estimated total infrared luminosity in the range @xmath33 @xcite .
a third sample @xcite uses _ bzk _ colors to select galaxies at @xmath34 ; massive _ bzk _ galaxies are mid - ir luminous . @xcite compared bm / bx , drgs , and _ bzk _ galaxies and found that _ bzk _ galaxies include most drgs and bm / bx galaxies . this comparison is nicely shown in fig .
9 of @xcite .
the specific irs targets for this program were selected from a 24 sample @xcite in the egs region .
these sources show a clump at the predicted colors for @xmath46 in figure [ f : cc ] , but redshifts were not known in advance .
individual targets were selected to have irac colors satisfying eqs . 1 and 2 and also @xmath47 mjy .
in addition , each candidate was visually examined in the deep subaru r - band image @xcite to avoid confused or blended targets . with these criteria ,
12 targets were selected in the 2 @xmath48 10 egs region for irs observation .
table 1 lists the sample galaxies . in this redshift range , most sources will be either ulirgs with total infrared luminosity @xmath49 ( total infrared luminosities for our sample are calculated with mips 24 , 70 , 160 and 1.1 mm flux densities and chary - elbaz @xcite sed models ; details are given in section [ s : lir ] ) or agns with high mid - ir luminosities . for convenience ,
the galaxy nicknames used in the _ spitzer _ database are used in this paper , but these do not follow proper naming conventions and should not be used as sole object identifiers .
proper names are also given in table 1 .
most of the previous irs surveys of ir luminous sources at @xmath2 have used rather different selection criteria .
table 2 summarizes the sample criteria for various other irs surveys . @xcite and @xcite used extreme optical - to-24 color to select dusty objects .
objects in these samples have much redder [ 3.6]-[8.0 ] irac colors than the majority of 24 sources ( fig .
[ f : cc ] ) and are mostly agns as shown by their strong power - law continua , but weak or absent pah emission features @xcite .
@xcite selected agn using similar criteria .
they also selected a separate starburst - dominated sample at @xmath2 based on the stellar 1.6 emission bump .
the exact criterion required the peak flux density to be at either 4.5 or 5.8 , thus rejects low - redshift galaxies and agn with strong power - law seds .
the resulting sample is very similar to ours though overall a bit redder ( fig .
[ f : cc ] ) .
all objects in the @xcite starburst sample show strong pah emission features .
irs observations of this sample are part of the gto program for the _ spitzer_/irac instrument team ( program id : 30327 )
. objects were observed only with the irs long - slit low - resolution first order ( ll1 ) mode , giving wavelength coverage @xmath50 with spectral resolution @xmath51 .
the wavelength coverage corresponds to @xmath52 in the rest - frame for galaxies at @xmath53 .
this wavelength range includes strong pah emission features at 7.7 , 8.6 , and 11.3 and silicate absorption from 8 to 13 ( peaking near 9.7 ) . detecting these features permits redshift measurement and study of dust properties .
total exposure time for each target was based on its 24 flux density .
mapping mode @xcite was used to place each object at 6 positions spaced 24 apart along the 168 irs slit .
this not only gives more uniform spectra for the target objects , rejecting cosmic rays and bad pixels , but also increases sky coverage for possible serendipitous objects around each target .
table 1 gives the target list and other parameters for the observations .
all data were processed with the spitzer science center pipeline , version 13.0 .
extraction of source spectra was done with both the smart analysis package @xcite and our customized software .
lack of irs coverage at @xmath54 for this sample is compensated with deep _ akari _ 15 imaging @xcite .
all objects except two outside the _ akari _ area are detected at 15 providing measurement of the continua at rest - frame @xmath55 for galaxies at @xmath53 .
figure [ f : spec ] presents the irs spectra .
pah emission features at 7.7 and 11.3 and silicate absorption peaking at 9.7 are detected from 10 sources , indicating a narrow redshift range of @xmath56 .
the pah emission features at 7.7 and 11.3 show pronounced variations in their profiles and peak wavelengths .
both 7.7 and 11.3 pah features have at least two components @xcite . for example
, the 7.7 pah feature has a blue component at 7.6 and a red component at wavelengths longward of 7.7 .
thus different types of pah spectral templates potentially yield different redshift measurements . to check this ,
we use two local mir spectral templates with different pah profiles , an average local starburst spectrum and an average local ulirg spectrum to determine redshifts .
both templates yield very similar redshifts ( table 3 ) .
the starburst template fits all objects better with a typical 2% redshift uncertainty .
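as an illustration of the template - redshifting step described above , the sketch below slides a rest - frame mid - ir template through a grid of trial redshifts and picks the redshift minimizing chi - square against the observed ll1 spectrum ; the function names , the simple linear interpolation , and the free overall scaling are assumptions for illustration , not the actual reduction pipeline .

```python
import numpy as np

def redshift_chi2(wave_obs, flux_obs, err_obs, wave_tpl, flux_tpl, z_grid):
    """Chi-square of a rest-frame template (freely rescaled) against an
    observed spectrum, for each trial redshift in z_grid."""
    chi2 = np.full(len(z_grid), np.inf)
    for i, z in enumerate(z_grid):
        # template shifted to the observed frame
        model = np.interp(wave_obs, wave_tpl * (1.0 + z), flux_tpl,
                          left=np.nan, right=np.nan)
        ok = np.isfinite(model)
        if ok.sum() < 10:          # require enough spectral overlap
            continue
        w = 1.0 / err_obs[ok] ** 2
        scale = np.sum(w * flux_obs[ok] * model[ok]) / np.sum(w * model[ok] ** 2)
        chi2[i] = np.sum(w * (flux_obs[ok] - scale * model[ok]) ** 2)
    return chi2

# usage: z_grid = np.arange(1.5, 2.5, 0.005)
#        z_best = z_grid[np.argmin(redshift_chi2(w, f, e, wt, ft, z_grid))]
```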
egs_b2 is identified at @xmath57 with pah emission features at 8.6 and 11.3 and the [ ne ii ] emission line at 12.81 .
redshift @xmath58 for egs12 is confirmed by detecting @xmath59 at 1.992 ( figure [ f : nir_spec ] ) in a nir spectrum taken with the moircs spectrograph on the subaru telescope @xcite .
the spectrum of egs_b6 , however , shows two emission lines at 27.7 and 31.1 that we are not able to identify consistently with any redshift .
egs_b6 is resolved to two objects 07 apart in the hst acs image @xcite , and an optical spectrum of this system shows two galaxies at @xmath60 and @xmath61 .
we therefore omit egs_b6 from the sample for further analysis .
the 24 images show several serendipitous objects in slits of all 12 targets , most of which are too faint to permit redshift identification .
only one source , egs24a , has @xmath62 mjy .
this object , found in the slit of egs24 , shows the silicate absorption feature at @xmath63 ( fig .
[ f : spec ] ) .
the redshift distribution of the sample ( fig .
[ f : zhist ] ) is very similar to that of the starburst - dominated ulirgs studied by @xcite , even though our limiting flux density at 24 is a factor of two fainter than theirs .
recently , @xcite used the same criteria to select a larger sample in the lockman hole region for irs observation , yielding a very similar redshift distribution .
the narrow distribution for starburst - dominated ulirgs is due to the selection of strong 7.7 pah emission by the mips 24 band at @xmath1 .
the peak of the redshift distributions for @xcite , @xcite , and our sample is at this redshift , confirming the selection effect .
on the other hand , luminous 24 sources with power - law sed have a much wider redshift range up to @xmath15 @xcite , but they will not pass our irac color criteria or the `` bump '' sed criterion in @xcite and @xcite .
the pah features visible in the individual spectra of the sample galaxies are even more prominent in the average spectrum for the sample , as shown in fig . [ f : stack_sed ] , which also stacks local starburst @xcite and ulirg samples for comparison .
the local ulirg sample is divided into seyfert , liner , and hii sub - samples according to their optical spectral classification @xcite .
pah emission features are found to have different feature profiles .
@xcite classified profiles of each pah emission feature , according to the peak wavelength , into 3 main classes : class a , b , and c. pah emission features are known to have more than one component in each feature .
for example , the 7.7 pah emission feature has two major components at 7.6 and 7.8 : class a is defined as 7.6 - dominated pah ; class b is 7.8 - dominated pah ; and class c is dominated by the red component , with the peak shifting beyond 7.8 .
the 7.7 pah in the local starburst spectrum appears to be more consistent with class a , with the peak at a wavelength shorter than 7.7 .
all local ulirg spectra have a typical class b pah profile , with a red wing extending beyond 8 . in section 3.1 , we already found that the starburst template fits each irs spectrum of our sample better than the ulirg template .
it is not surprising then that the average 7.7 pah profile of our sample is more similar to the average starburst spectrum , thus consistent with class a. another significant difference is that our ulirg sample has an average @xmath64 ratio ( @xmath64 and @xmath65 are the 7.7 and 11.3 pah emission luminosities , defined as @xmath66 ) about twice as high as that of local ulirgs but similar to that of local starbursts @xcite .
we also plot the average spectra of @xcite in figure [ f : stack_sed ] for comparison .
the average spectrum for strong pah objects in @xcite is more similar to that of local seyfert - type ulirgs , implying a dominant agn contribution in the spectra of their sample .
we conclude from irs stacking that the 7.7 pah profiles and @xmath64 ratios for the present sample are more consistent with those of local starburst galaxies than with those of local ulirgs .
pah emission features are a tracer of star formation , one of the energy sources powering ulirgs @xcite . in order to subtract the local continuum , we adopted the method used by @xcite , fitting the @xmath67 spectrum with three components : the pah emission features , a power - law continuum , and the silicate absorption .
an iterative fit determined the continuum for each object .
the initial input pah template was from the ngc 7714 irs spectrum after subtracting its power - law continuum .
the silicate absorption profile was from @xcite with central optical depth @xmath68 a free parameter .
the 7.7 and 11.3 pah line luminosities and equivalent widths for the local starburst sample @xcite , the local ulirg sample @xcite , and the present sample were derived the same way .
@xcite used a different method to derive the same parameters ; their method would give lower 7.7 pah flux densities and luminosities .
this is due to the complicated continuum at @xmath69 .
our 11.3 pah flux densities are consistent with theirs .
table 3 gives the results .
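a minimal sketch of the three - component decomposition described above ( pah template plus power - law continuum , attenuated by silicate absorption with a free central depth ) is given below ; the gaussian stand - in for the silicate profile , the choice to attenuate both components , and the parameter names are illustrative assumptions rather than the exact scheme of the paper .

```python
import numpy as np
from scipy.optimize import curve_fit

def silicate_tau(wave_um, tau97, width=1.4):
    """Toy silicate optical-depth profile centred at 9.7 um (a Gaussian
    stand-in for the tabulated profile adopted in the paper)."""
    return tau97 * np.exp(-0.5 * ((wave_um - 9.7) / width) ** 2)

def make_model(pah_template):
    """Build a fit function: scaled PAH template + power-law continuum,
    both attenuated by the silicate absorption."""
    def model(wave_um, a_pah, a_cont, alpha, tau97):
        continuum = a_cont * (wave_um / 8.0) ** alpha
        return (a_pah * pah_template + continuum) * np.exp(-silicate_tau(wave_um, tau97))
    return model

# wave_rest, flux, err : rest-frame spectrum on a common wavelength grid
# pah_tpl : continuum-subtracted starburst PAH template on the same grid
# popt, _ = curve_fit(make_model(pah_tpl), wave_rest, flux, sigma=err,
#                     p0=[1.0, 0.1, 1.0, 0.5])
```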
aegis @xcite and fidel @xcite provide a rich x - ray to radio data set to study the ulirg seds .
objects in our sample are measured at many key bands : all are detected in all four irac bands @xcite , all but two by _ akari _ at 15 @xcite , and all at 1.4 ghz @xcite .
most are also detected at 70 and 160 in the fidel survey @xcite .
only two objects , egs14 and egs_b2 , are detected in the _ chandra _ 200 ks x - ray imaging @xcite .
the flux densities in these key bands trace stellar mass , star formation rate , and agn activity .
objects in the present sample were also observed with mambo on iram , and most were detected at 1.2 mm @xcite .
table 4 gives the photometry , and the uv - to - radio seds are shown in figure [ f : sed ] .
the multi - wavelength photometry permits us to compare the present sample with the sub - millimeter galaxy population .
there is a small region covered by scuba in egs by @xcite , but no galaxies in the present sample are in the scuba region .
we fit fir seds for the sample and predict their 850 flux densities @xmath71 to be in the range @xmath72mjy ( table 4 ) .
these values are similar to the flux densities for smgs at the same redshifts @xcite .
the median @xmath71 for this sample is 4.5 mjy , compared with the median @xmath71 of 5.5 mjy for smgs at @xmath73 found by @xcite and 7.5 mjy by @xcite . in more detail ,
7 out of 12 objects in the present sample have @xmath71 fainter than 5 mjy , while the flux densities for most smgs in @xcite and @xcite are brighter than 5 mjy .
we therefore argue that this sample is part of a slightly faint smg population .
optical and radio morphologies of the galaxies provide important information on their assembly histories .
_ hst _ acs f814w imaging @xcite covers the central 1 @xmath48 10 region of the egs .
egs 1/4/b2 are outside the acs coverage , but rough optical morphologies are available from subaru @xmath74-band images .
optical images of each object are presented with their seds in figure [ f : sed ] .
most objects have irregular or clumpy morphologies in the rest - frame @xmath75 bands with a typical size of 1.5 arcsec , suggesting extended star formation in a region with a size of about 13 kpc .
the 1.4 ghz radio imaging of egs has a mean circular beam width of @xmath76 3.8 arcsec fwhm @xcite and is unable to resolve morphologies except in a few cases .
egs 23 and 24 show elongated radio morphologies aligned with their optical extent , indicating that the radio and rest - frame uv light are from the same extended star formation regions in both cases .
the spatial distribution of the stellar population is traced by the rest - frame optical imaging .
@xcite , @xcite , and @xcite have argued that uv - dominated star - forming galaxies at high redshifts have similar morphologies in the rest - frame uv and optical bands .
one outstanding property of galaxies in the present sample is their extremely red optical - nir color .
seven objects in the sample have observed @xmath77 , qualifying them as extremely red objects ( ero ) .
egs4 is the reddest with @xmath78 .
red colors like these are common among distant ulirgs ; examples include ero j164502 + 4626.4 ( = [ hr94 ] 10 or sometimes `` hr 10 '' ) at @xmath79 @xcite and cfrs 14.1157 at @xmath80 @xcite .
eros are commonly seen as counterparts to smgs @xcite .
the red optical - nir colors , corresponding to rest @xmath81 for our sample , indicate either dust extinction in these objects or high stellar mass .
the stellar population modeling in the next paragraph suggests objects in our sample have both heavy dust extinction and high stellar masses .
the heavy dust extinction seems hard to reconcile with the objects being detected in the acs 606w and 814w bands , which probe the rest - frame 1800 - 2600 for galaxies at @xmath2 .
the irregular and clumpy morphologies in figure [ f : sed ] suggest highly non - uniform dust extinction in the objects in our sample .
only two objects are undetected in the deep acs 814w image , probably due to higher column density of dust in the compact stellar and gas distribution .
stellar population modeling provides a way of determining physical parameters from the observational data , but it is very difficult to model stellar populations in ulirgs .
@xcite measured dynamical masses for a sample of local ulirgs with nir spectroscopy and found stellar masses in the range of @xmath82 with a mean of @xmath83 .
their local sample has a mean absolute k - band magnitude of @xmath84 after adopting a dust correction of @xmath85 mag .
the irac 8 flux densities of this sample correspond to a similar k - band magnitude range with @xmath86 if the same dust correction is used .
this suggests a similar mass range for our sample , @xmath87 .
ulirgs have a burst star formation history , very young stellar populations , and non - uniform dust distribution , all of which can introduce large uncertainties in modeling their stellar populations . on the other hand ,
stellar masses are the most robust property against variations in star formation history , metallicities , and extinction law in modeling stellar population @xcite .
we perform a stellar population analysis on the present sample , mainly to measure their stellar masses .
we fit galaxy seds with @xcite ( hereafter bc03 ) stellar population models with a salpeter imf and a constant star formation rate .
several groups @xcite have argued that a constant star formation rate provides a reasonable description of stellar population evolution for galaxies with ongoing star formation at high redshifts , such as lbgs , lyman - alpha emitters ( laes ) , and drgs . the stellar population age , dust reddening @xmath88 , stellar mass , and derived star formation rate from the model fitting
are listed in table 5 , and the model sed fits are shown in figure [ f : sed ] .
objects in this sample have estimated stellar masses with @xmath89 , similar to values found for local ulirgs @xcite , drgs , and _ bzk _ galaxies @xcite .
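the sketch below shows schematically how such a fit can be organized : a chi - square search over a pre - computed grid of constant - sfr model photometry , with a dust attenuation parameter and a free mass normalization . the grid layout , the simple screen - attenuation treatment , and the variable names are illustrative assumptions , not the actual fitting code behind table 5 .

```python
import numpy as np

def fit_stellar_pop(obs_flux, obs_err, model_grid, a_v_grid, k_lambda):
    """Schematic chi-square fit of broad-band photometry.

    model_grid : (n_age, n_band) model fluxes per unit stellar mass
                 (constant-SFR models of increasing age)
    a_v_grid   : trial V-band attenuations
    k_lambda   : A_lambda / A_V of the assumed attenuation curve at the
                 band effective wavelengths
    returns (best_age_index, best_a_v, best_mass, min_chi2)."""
    best = (None, None, None, np.inf)
    w = 1.0 / obs_err ** 2
    for i, model in enumerate(model_grid):
        for a_v in a_v_grid:
            reddened = model * 10.0 ** (-0.4 * a_v * k_lambda)
            mass = np.sum(w * obs_flux * reddened) / np.sum(w * reddened ** 2)
            chi2 = np.sum(w * (obs_flux - mass * reddened) ** 2)
            if chi2 < best[3]:
                best = (i, a_v, mass, chi2)
    return best
```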
two of the most - popular methods of estimating star formation rates of local galaxies use total infrared luminosity @xmath90 and radio luminosity @xmath91 @xcite .
the validity of these methods needs to be established at high redshift .
most objects in the present sample are detected at 70 , 160 , and 1.2 mm , permitting a direct measurement of total infrared luminosity @xmath90 @xcite . in practice , we derived @xmath90 by fitting sed templates @xcite to the observed 70 , 160 , and 1.2 mm flux densities ( fig .
[ f : sed ] ) .
all galaxies in the sample have @xmath49 ( table 4 ) , qualifying them as ulirgs .
egs14 and egs21 have @xmath92 and are thus hyperlirgs . all sample galaxies are also detected at 1.4 ghz @xcite .
we will be able to verify : 1 ) whether @xmath91 is correlated with @xmath90 for ulirgs at @xmath2 ; and 2 ) whether such a correlation at high redshifts is consistent with the local one @xcite .
figure [ f : lir ] plots the radio luminosity @xmath91 vs @xmath90 for this sample and a variety of local starburst and ulirg samples . the fir - radio ratios _ q _ for this sample , defined following @xcite , are given in table 4 with a mean @xmath93 .
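for reference , the conventional definition of the fir - radio parameter ( in the helou et al . 1985 / condon 1992 style , which is presumably the one intended by the stripped footnote above ) is

\[ q \;=\; \log_{10}\!\left[\frac{\mathrm{FIR}/(3.75\times10^{12}\,\mathrm{Hz})}{S_{1.4\,\mathrm{GHz}}}\right] , \]

with fir ( conventionally the 42 - 122 flux ) in w m^-2 and s_1.4ghz in w m^-2 hz^-1 ; local star - forming galaxies cluster around q of 2.3 - 2.4 , consistent with the local value quoted just below .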
@xcite measured @xmath90 using 350 850 and 1.2 mm flux densities and obtained a mean @xmath94 for smgs at @xmath17 .
both measurements yields _ q _ for ulirgs at @xmath2 close to , but smaller than the local value q=2.36 .
@xcite showed more clearly a trend in their agn dominated sample at @xmath34 : sources with strong pah emission have _ q _ in @xmath95 ; while all power - law sources have @xmath96 .
normally , radio excess is due to non - thermal emission from agns , but galaxy merging can also enhance the non - thermal synchrotron radiation @xcite .
merging processes are evident in our sample .
we will argue in the following paragraphs that agn activity may exist in most objects in the sample .
in fact , two x - ray sources , egs14 and egs_b2 , and the serendipitous power - law source egs24a show a higher radio excess ( lower _ q _ ) than the rest of the objects in the sample .
the two scenarios can be differentiated by their radio morphologies : agns are point sources and mergers , in most cases , are extended sources .
currently we can not determine which scenario is responsible for the radio excess , due to the low resolution of the 1.4 ghz radio images ( figure [ f : sed ] ) .
another measure used to estimate @xmath90 for local galaxies is the irac 8 luminosity , @xmath97 , though there is considerable debate about how reliable this method is .
@xmath98 is defined as @xmath99 where @xmath100 is the rest frame irac 8 flux density @xcite .
@xmath97 is found to be correlated with @xmath90 for local galaxies @xcite .
the mips 24 band directly measures the rest irac 8 flux densities for our sample .
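as a concrete illustration of this measurement , the sketch below converts an observed 24 flux density into a rest - frame 8 luminosity for a source at redshift near 2 , assuming the usual nu l_nu definition ; the cosmological parameters and the function name are assumptions for illustration .

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)      # illustrative parameters

def nu_l_nu_8um(s24_mjy, z):
    """Rest-frame 8um luminosity nu*L_nu (in L_sun) from the observed MIPS
    24um flux density, which samples rest-frame 8um for a source at z ~ 2."""
    s_nu = (s24_mjy * u.mJy).to(u.erg / u.s / u.cm**2 / u.Hz)
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    l_nu = 4.0 * np.pi * d_l**2 * s_nu / (1.0 + z)   # rest-frame L_nu
    nu_rest = (2.998e14 / 8.0) * u.Hz                # c [um/s] / 8 um
    return (nu_rest * l_nu).to(u.L_sun)

# e.g. nu_l_nu_8um(0.5, 1.9) -> a few times 1e11 L_sun for a 0.5 mJy source
```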
a galaxy's 8 flux density actually has two components ( aside from starlight , which can be subtracted if necessary ) : the 7.7 pah emission feature complex and a featureless continuum , coming from an agn or warm dust in the interstellar medium .
there are several models for the ir emission from galaxies , which convert @xmath97 to @xmath90 ( @xcite ; @xcite , hereafter ce01 and dh02 ) .
empirically , @xcite and @xcite found a correlation between @xmath97 and both @xmath91 and @xmath90 for star - forming galaxies . at the high luminosity end , local ulirgs deviate from this correlation with higher @xmath97/@xmath90 ratios ; such a trend was also seen by @xcite .
figure [ f : lum8 ] shows a correlation between @xmath97 and @xmath90 for all populations .
however , the @xmath101 relation for objects in our sample and the local ulirgs with high @xmath90 has a higher offset than that for the local starburst galaxies and the model prediction @xcite .
this indicates that , for a given @xmath90 , @xmath97 for objects in our sample and some of the local ulirgs is higher than the model prediction . thus objects in our sample have an 8 excess compared with the ce01 and dh02 model predictions .
the empirical @xmath101 relation of @xcite , derived with samples at various redshifts , matches local starburst galaxies , but predicts much higher @xmath97 for ulirgs and hyperlirgs .
the @xmath101 relation for our sample permits an estimate of @xmath90 for the same type of objects from 24 flux densities alone .
our irs spectra can be used to separate the pah from continuum in the ( rest ) 8 band , and each component s contribution to @xmath97 can be measured .
pah luminosity is thought to be a generally good tracer of star formation rate , but the @xmath102 ratio is known to be luminosity - dependent , decreasing at high luminosity @xcite .
figure [ f : lum77 ] shows @xmath102 versus @xmath90 . in this diagram , each population is well separated from the others . the average @xmath102 ratio for local ulirgs is seen to be lower than for local starburst galaxies .
the hyperlirgs in @xcite and @xcite have the lowest @xmath102 ratio .
in contrast , the present sample has the highest @xmath3 ratio , and the trend is the same for the 11.3 pah feature ( fig .
[ f : lum113 ] ) .
objects with such a high pah luminosity have been found neither locally nor in the mips 24 luminous sample at @xmath2 @xcite .
starburst galaxies were expected to have the highest @xmath3 , and @xmath3 was seen to decrease with increasing @xmath90 .
our sample shows a new ulirg population with much higher pah emissions at 7.7 and 11.3 .
we argue that the high @xmath3 ratio for our sample is generally compatible with an extrapolation of the @xmath103 relation for starburst galaxies . both @xmath65 and @xmath104 for local starburst galaxies
are strongly correlated with @xmath90 in fig .
[ f : lum77 ] and fig .
[ f : lum113 ] .
we fit both data sets and obtain the following relations : @xmath105 and @xmath106 .
both relations convert to @xmath107 and @xmath108 respectively , as plotted in fig .
[ f : lum77 ] and fig .
[ f : lum113 ] .
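the fitted relations quoted above are simple power laws in the log - log plane ; a sketch of how such a relation can be derived and then extrapolated is shown below , with array names assumed for illustration .

```python
import numpy as np

def fit_loglog(l_pah, l_ir):
    """Least-squares fit of log10(L_PAH) = a * log10(L_IR) + b,
    with both luminosities in L_sun."""
    a, b = np.polyfit(np.log10(l_ir), np.log10(l_pah), deg=1)
    return a, b

# a, b = fit_loglog(l77_local_starbursts, lir_local_starbursts)
# predicted L_7.7 for a ULIRG with L_IR = 1e12 L_sun:
# l77_pred = 10.0 ** (a * np.log10(1e12) + b)
```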
the @xmath109-@xmath90 relation for local starbursts predicts a higher @xmath109/@xmath90 ratio in the @xmath90 range of our sample .
our sample has a @xmath109/@xmath90 ratio close to this extrapolation compared with other ulirg populations , indicating starburst domination .
the deficit of pah emission in our sample most likely implies the existence of agn in our sample , though strong uv radiation from intense star - forming regions can also destroy pah . the mir spectral properties and @xmath3 of our sample are closer to those of local starburst galaxies , even though their @xmath90 differs by 2 orders of magnitude .
@xcite reached the same conclusion by comparing silicate absorption strength for their sample with those for local ulirg and starburst galaxies , and they propose six possible scenarios to explain the similarity between high redshift ulirgs and local starburst galaxies .
our multi - wavelength data set provides further constraints on the physical properties of our sample .
the acs i - band images ( figure [ f : sed ] ) show multi - clumpy morphologies extending to @xmath110 kpc size for most objects in our sample . at @xmath34 , the observed i - band probes the rest - frame nuv band and is thus sensitive to star formation .
local ulirgs , however , have much more compact morphologies in the galex nuv images .
the extended morphologies of our sample support both the gas - rich merging and starburst geometry scenarios proposed by @xcite . in this scenario , the silicate dust column density is reduced after the star - forming region is stretched to a large scale during merging .
the extended morphologies in the rest - frame nuv indicate extended star formation in our sample , and thus an extended distribution of pah emission .
in such an extended distribution , more pah can survive the strong uv radiation field from a central agn than in a compact distribution .
this scenario thus explains why @xmath3 is higher in our sample than in local ulirgs .
star forming galaxies at @xmath2 are found to generally have much less dust extinction than their local counterparts .
@xcite found that there is a correlation between @xmath111 and @xmath112 for star forming galaxies at @xmath2 , where @xmath113 is the monochromatic luminosity at 1600 .
this correlation has a higher offset than the local relation , indicating less dust extinction along the line of sight for galaxies at @xmath2 .
most objects in our sample lie on the @xmath114-@xmath112 relation for galaxies at @xmath2 ( figure [ f : lir_1600 ] ) .
@xcite argued that the dust distribution and star - forming regions become more compact in local galaxies .
we argue that the lower dust surface density and extended star - forming regions with high sfr permit the detection of both uv and pah emission from most objects in our sample . the star formation rate for a galaxy can be estimated from its fir and ultraviolet emission .
specifically , sfr is given as @xcite @xmath115 where @xmath116 is the monochromatic luminosity ( uncorrected for dust extinction ) at rest frame 280 nm @xcite .
the constant _ c _ is @xmath117 for the salpeter imf @xcite , and @xmath118 for the kroupa imf @xcite . in the following text , we will adopt the salpeter imf for all @xmath90-sfr conversions in this paper .
sfrs would be reduced by a factor of @xmath76 2 if we switched to the kroupa imf @xcite .
the 280 nm band shifts to the observed @xmath119 band at @xmath53 .
@xmath116 was calculated from the acs f814w magnitude if available or otherwise the cfht @xmath119 magnitude .
all objects in our sample have @xmath116 in the range @xmath120 , less than 1% of their @xmath90 . the star formation rate seen at rest - frame 280
nm is at most 20 yr@xmath24 , and most uv light produced by newborn stars is absorbed by dust and re - emitted in the infrared .
thus we omit the @xmath116 contribution in our sfr calculation .
total infrared luminosity , @xmath90 , of ulirgs may be partly powered by agns @xcite , thus using @xmath90 may over - estimate their sfr .
the pah emission only traces star formation , and is free of agn contamination .
we calculate sfr for our sample with their @xmath109 using the @xmath121 relation , established from local starburst galaxies shown in figure [ f : lum77 ] and figure [ f : lum113 ] .
results are given in table 5 .
star formation rates for our sample converted from @xmath90 using equation 3 are much higher , with an average @xmath122 yr@xmath24 . @xmath65 and @xmath104 ( table 5 ) give smaller star formation rates , in the range @xmath123 yr@xmath24 for most objects , which are quite consistent with the stellar population modeling results .
the discrepancy between the two star formation estimates may be due to : 1 ) part of the star formation occurring in regions with no pah , so that @xmath109 underestimates the sfr ; 2 ) @xmath90 containing an agn contribution , so that it overestimates the sfr .
it is very possible that both happen in one object simultaneously , namely that its agn destroys pah in the surrounding area where star formation occurs .
this would further increase the discrepancy , so the real sfr should be in between the two estimates .
our sample has both high star formation rates and high stellar masses , supporting galaxy formation in the `` downsizing '' mode . the star formation rates and stellar masses for our sample are consistent with the sfr - stellar mass relation obtained from _ bzk _ galaxies at @xmath2 ( figure [ f : lms ] ) .
@xcite showed that simulated galaxy populations taken from the millennium simulation lightcones of @xcite and @xcite failed to reproduce the sfr - stellar mass relation at @xmath124 , and thus underestimate the number of ulirgs at @xmath2 .
it has been long anticipated that ulirgs have a dominant contribution to the total infrared luminosity density , thus star formation rate density , at @xmath2 @xcite .
we use the @xmath125 method to calculate the total infrared luminosity density for our sample to be @xmath126 . the sample of @xcite with the same limiting flux yields a density of @xmath127 .
we argue that the difference is due to the cosmic variance , because these objects are massive galaxies and thus have a much stronger spatial correlation .
both densities are lower than ulirg @xmath90 density at @xmath128 , @xmath129 for all ulirgs @xcite .
most objects in our sample and those of @xcite have @xmath130 , while the major contribution to the @xmath90 density at @xmath2 comes from ulirgs with @xmath131 @xcite .
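for concreteness , a hedged sketch of how such a luminosity density can be computed is given below , assuming that the @xmath125 method referred to above is the usual 1/v_max estimator ; the cosmology , redshift limits , survey area and input luminosities are illustrative placeholders rather than the numbers used in this paper , and every source is assumed detectable over the full redshift shell .

import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative cosmology

def ir_luminosity_density(l_ir_lsun, z_min, z_max, area_deg2):
    """sum(l_i / v_max) in l_sun mpc^-3 ; all sources share one v_max here."""
    sky_fraction = area_deg2 / 41253.0                                   # fraction of the full sky
    shell = cosmo.comoving_volume(z_max) - cosmo.comoving_volume(z_min)  # mpc^3
    v_max = sky_fraction * shell.value
    return np.sum(np.asarray(l_ir_lsun)) / v_max

# illustrative call : three fake ulirgs over 0.35 deg^2 between z = 1.6 and 2.1
print(ir_luminosity_density([5e12, 3e12, 8e12], 1.6, 2.1, 0.35))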
one direct way of identifying an object as an agn is to measure its x - ray luminosity .
two objects in the sample , egs14 and egs_b2 , are in the main aegis - x catalog from the _ chandra _ 200 ks images @xcite .
their x - ray fluxes @xmath132 are @xmath133 and @xmath134 erg @xmath135 s@xmath24 , respectively . the calculated
x - ray luminosities @xmath136 @xcite are @xmath137 erg s@xmath24 for egs14 and @xmath138 erg s@xmath24 for egs_b2 .
hardness ratios are 0.45 and -0.30 , respectively .
therefore egs14 is a type 2 ( obscured ) agn , and egs_b2 is very close to a type 1 ( unobscured ) qso according to the x - ray luminosity and hardness ratios @xcite .
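the qualitative classification used here is simple enough to spell out ; the sketch below is a rough illustration and not the exact prescription of the cited x - ray papers : the luminosity threshold ( ~3e42 erg/s ) and the hardness - ratio cut separating obscured from unobscured sources are assumptions chosen only to mimic the argument above .

def hardness_ratio(hard_counts, soft_counts):
    """hr = (h - s) / (h + s) from hard - band and soft - band net counts."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

def classify_xray_source(l_x_erg_s, hr, l_x_agn=3e42, hr_obscured=-0.2):
    """very coarse agn / starburst split ; both thresholds are illustrative."""
    if l_x_erg_s < l_x_agn:
        return "star formation or very weak agn"
    return "obscured (type 2) agn" if hr > hr_obscured else "unobscured (type 1) agn / qso"

# e.g. a luminous source with hr = 0.45 lands on the obscured side of this toy cut ,
# while one with hr = -0.30 lands on the unobscured side
print(classify_xray_source(1e44, 0.45), "|", classify_xray_source(1e44, -0.30))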
in addition to egs14 and egs_b2 , egs1 has a low - significance x - ray counterpart . at the location of this source
there were 6.5 net soft band counts ( 10 counts total with an estimated 3.5 count background ) .
this gives a poisson probability of a false detection of @xmath139 .
the source was not detected in the hard band .
if the detection is real , egs1 has @xmath140 @xmath141 erg @xmath135 s@xmath24 with @xmath142 @xmath143 erg s@xmath24 , and thus qualifies as an agn .
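the false - detection probability quoted for egs1 is a poisson tail ; a minimal check with scipy is sketched below ( it does not reproduce the exact value of @xmath139 in the text , it only illustrates the calculation ) .

from scipy.stats import poisson

# probability that a background expectation of 3.5 counts fluctuates up to the
# 10 counts actually observed : p(n >= 10 | mu = 3.5) , of order a few times 1e-3
p_false = poisson.sf(9, mu=3.5)   # sf(9) = p(n > 9) = p(n >= 10)
print(p_false)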
the remaining 10 ulirgs are not detected in the current chandra observation .
stacking in the soft band gives 19.5 counts above an average background of 9.85 , corresponding to @xmath144 erg @xmath135 s@xmath24 or @xmath145 erg s@xmath24 at 2@xmath146 significance .
there was no detection in the hard band .
even if egs1 is added to the stacking , nothing shows up in the hard band , but the soft band detection significance rises to 3.2@xmath146 .
the mean flux is @xmath147 erg @xmath135 s@xmath24 or @xmath148 erg s@xmath24 .
this average x - ray luminosity represents either a very weak agn or strong star formation . using the relation of @xcite
, it corresponds to a star formation rate of 220 /yr , consistent with the sed and pah estimates .
however , we argue that the stacked x - ray signal comes from central point sources .
these objects have very extended and elongated morphologies in the rest - frame nuv band .
if the x - ray photons are from these star formation regions , stacking would not yield any signal unless they are aligned .
emission in the rest - frame 3 - 6 wavelength range is another indicator of agn activity @xcite .
the longer end of that range , which has minimal stellar and pah emission contamination , is ideal for detecting what is nowadays thought to be hot dust emission closely related to the agn accretion disk .
luminosity at these wavelengths ( @xmath149 ) can be converted to @xmath90 for qsos with the qso sed templates @xcite .
_ akari _ 15 photometry @xcite provides the best measurement of @xmath149 for our sample .
all galaxies within the _ akari _ coverage are detected except egs26 , for which the 3@xmath146 limiting flux density is @xmath150 @xmath30jy @xcite .
the _ akari _ 15 band is wide enough to include the 6.2 pah feature for objects with @xmath73 , but this feature is much weaker than the 7.7 feature .
thus the _ akari _ 15 band is a better measure of agn emission than the mips 24 band .
in fact , the @xmath151 ratio for our sample measures the continuum - to - pah ratio , and thus the agn fraction .
figure [ f : ratio ] shows this ratio versus redshift .
the ratios for the two known agns with _ akari _
coverage , egs14 and egs24a , are very close to the expected values for seyfert 2 s . egs11 and egs12 are similar to expectations for h ii - type ulirgs .
the flux ratios for the remaining objects in our sample show even more pah than starbursts , indicating starburst domination in these objects .
smgs @xcite have @xmath151 ratios very similar to those of the objects in the present sample , implying that the two samples share the same properties .
the smgs also show very strong pah features in their irs spectra .
this supports the argument that most objects in the present sample are part of the smg population , and are starburst dominated ulirgs .
a starburst dominated ulirg can still have a deeply dust - obscured agn .
many current theoretical models ( e.g. , @xcite ) suggest that such a dust - obscured agn can have a significant contribution to @xmath90 of a ulirg . a study of local ulirg irs spectra shows that on average 15% of @xmath90 comes from central dust - obscured agns @xcite .
@xcite argued that ulirg luminosity in @xmath152 is dominated by hot dust emission from agns .
most objects in the present sample are detected at 15 , thus permitting measurement of their rest - frame 5 luminosities , @xmath153 , which trace agn activity .
@xmath153 for the present sample is in the range @xmath154 ( table 4 ) . using @xmath155 from the @xcite qso
sed , we calculate that such a qso contribution is about 14% of @xmath90 for objects in our sample , consistent with the value for local ulirgs .
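the 14% figure follows from a one - line scaling argument ; the toy function below makes it explicit , assuming the @xcite qso template fixes a single ratio r_qso between the total infrared luminosity of a pure qso and its rest - frame 5 luminosity . the value of r_qso in the example is purely illustrative .

def agn_ir_fraction(l5_agn, l_ir_total, r_qso=3.0):
    """fraction of l_ir attributable to an agn whose rest - frame 5 micron luminosity
    is l5_agn , for an assumed template ratio r_qso = l_ir(qso) / l5."""
    return r_qso * l5_agn / l_ir_total

# e.g. with a template ratio of 3 , l5 = 5e11 and l_ir = 1e13 ( solar units )
# give an agn contribution of 15%
print(agn_ir_fraction(5e11, 1e13))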
the results for the present sample combined with others in table 1 show that high - redshift ulirgs have a diverse range of properties , and that different selection criteria pick out different populations . the combination of irac colors and mips 24 flux used here selects ulirgs with strong 7.7 pah in a rather narrow redshift range around @xmath156 .
this sample shows a starburst dominated stage in gas - rich , merger - powered ulirgs at @xmath2 . in this stage
, intensive star formation occurs in a very extended region with a typical scale of @xmath157 kpc , as indicated by their acs morphologies .
objects in this sample have higher total infrared luminosities than local ulirgs , but the @xmath3 ratios for the sample are higher than those of local ulirgs .
we argue that the high @xmath3 ratio is due to the extended pah distribution , which is less affected by strong uv emission from central agns .
most objects follow the same @xmath111-@xmath158 relation as bm / bx , drg and _ bzk _ galaxies , though they lie at the higher luminosity end .
stellar masses in this sample already exceed @xmath7 .
most stars must have formed prior to this stage .
the sfr - stellar - mass relation for this sample is also consistent with that for the other galaxy populations at @xmath2 , which is much higher than the theoretical model prediction .
only a few of the ulirgs in our sample show direct evidence to have agns with either high x - ray luminosities or hot dust emission in the mid - infrared .
several pieces of evidence show that weak agns exist in this starburst dominated ulirg sample : a systematically higher @xmath159 ratio than the local radio - fir relation , and an average x - ray emission of @xmath148 erg s@xmath24 from point sources .
agns contribute on average 15% of the total infrared luminosity for our sample .
this sample presents an early stage with very intensive star formation but weak or heavily obscured agns .
ulirgs in other samples at similar redshift but with different selection methods @xcite have higher total infrared luminosities and lower pah luminosities , indicating increasing agn activity and decreasing star formation at higher @xmath90 .
this work is based in part on observations made with the spitzer space telescope , which is operated by the jet propulsion laboratory , california institute of technology under a contract with nasa .
support for this work was provided by nasa through an award issued by jpl / caltech .
alexander , d. , et al .
2005 , , 632 , 736 alonso - herrero , a. , et al .
2006 , , 640 , 167 armus , l. , et al .
2007 , , 656 , 148 ashby , m. , et al .
2008 , in preparation barmby , p. , et al .
2008 , , in press barger , a. , et al .
1998 , nature , 394 , 248 barnes , j. , & hernquist , l. 1996 , , 471 , 115 bavouzet , n. , et al .
2008 , , 479 , 83 bell , e. , et .
2005 , , 625 , 23 beirao , p. , et al .
2006 , , 643 , 1 blumenthal , g. , et al .
1984 , nature , 311 , 517 brandl , b. , et al . 2006 , , 653 , 1129 bruzual , g. & charlot , s. 2003 , mnras , 344 , 1000 carilli , c. l. , & yun , m. s. 2000 , , 539 , 1024 carleton , n. p. , elvis , m. , fabbiano , g. , willner , s. p. , lawrence , a. , & ward , m. 1987 , , 318 , 595 cattaneo , a. , et al .
2008 , mnras , 389 , 567 chapman , s. , et al . 2002 , , 570 , 557 chapman , s. , et al .
2003 , nature,422,695 chapman , s. , et al .
2005 , , 622,772 chary , r. & elbaz , d. 2001 , , 556 , 562 chiar , j. e. , & tielens , a. g. g. m. 2006 , , 637 , 774 cole , s. , et al .
2000 , , 319 , 168 condon , j. j. 1992 , , 30 , 575 conselice , c. , et al .
2005 , , 620 , 564 conselice , c. , et al .
2006 , , 660 , 55 conselice , c. , et al . 2007a , , 660 , 55 conselice , c. 2007b , , 638 , 686 cox , t. j. , jonsson , p. , primack , j. r. , & somerville , r. s. 2006 , , 373 , 1013 croton , d. j. , et al .
2006 , , 365 , 11 daddi , e. , et al .
2007 , , 631 , 13 daddi , e. , et al . 2007a , , 670 , 156 daddi , e. , et al .
2007b , , 670 , 173 dale , d. & helou , g. 2002 , , 576 , 159 davis , m. , et al .
2007 , , 660 , 1 desai , v. , et al .
2007 , , 669 , 810 dey , a. , et al .
2008 , , 677 , 943 dickinson , m. , et al .
2007 , baas , 211 , 5216 van dokkum , p. , et al .
2004 , , 611,703 dubinski , j. , mihos , c. , & henquest , l. 1999 , , 526 , 607 egami , e. , et al .
2008 , in preparation elvis , m. , et al .
1994 , , 95 , 1 eggen , o. j. , et al . 1962 , , 136 , 748 elbaz , d. , et al .
2002 , , 381 , 1 elbaz , d. , et al .
2007 , , 468 , 33 farrah , d. , et al .
2007 , , 667 , 149 farrah , d. , et al .
2008 , , 677 , 957 frster schreiber , n. m. , et al .
2004 , , 616 , 40 franx , m. , et al .
2003 , , 587,79 frayer , d. , et al .
2004 , , 127 , 728 genzel , r. , et al .
1998 , , 498 , 579 genzel , r. , et al .
2001 , , 563 , 527 georgakakis , a. , et al .
2007 , , 660 , l15 glazebrook , k. , et al .
2004 , nature , 430 , 181 greve , t. , et al .
2005 , , 359 , 1165 gu , q. , et al .
2006 , , 366 , 480 higdon , s. , et al .
2004 , , 116 , 975 hines , d. , et al .
2007 , , 641 , 85 hopkins , p. , et al . 2006 , , 652 , 864 houck , j. , et al .
2005 , , 622 , l105 huang , j. , et al .
2004 , , 154 , 44 huang , j. , et al .
2005 , , 634,136 huang , j. , et al .
2007a , , 660 , 69 huang , j. , et al . 2007b , , 664 , 840 huang , j. , et al .
2009 , in preparation hughes , d. , et al .
1998 , nature , 394 , 241 i m , m. , et al . 2008 , in preparation ivison , r. , et al .
2007 , , 660 , 77 hony , s. , van kerckhoven , c. , peeters , e. , tielens , a. g. g. m. , hudgins , d. m. , & allamandola , l. j. 2001 , , 370 , 1030 hudgins , d. m. , & allamandola , l. j. 1999 , , 516 , l41 james , p. b. , et al . 1999 , , 309 , 585 jiang , l. , et al .
2006 , , 132 , 2127 kovacs , a. , et al .
2006 , , 650 , 592 kennicutt , r. c. 1998 , , 36,189 kim , d - c . & sanders , d. b. 1998 , , 119 , 41 kim , d - c . , et al .
2002 , , 143 , 277 kitzbichler , m. g. & white , s. d. m. 2007 , , 376 , 2 kormendy , j. & sanders , d. 1992 , , 390 , 73 labbe , i. , et al . 2005 , , 624 , 81 lai , k. , et al .
2007 , , 655 , 704 laird , e. , et al .
2008 , , submitted le floch , e. , et al .
2005 , , 632 , 169 le floch , e. , et al .
2007 , , 660 , l65 lotz , j. , et al .
2008 , 672 , 177 lu , n. , et al .
2003 , , 588 , 199 lutz , d. , et al .
2005 , , 625 , 83 magids , g. , et al .
2008 , , in press menendez - delmestre , k. , et al .
2007 , , 655 , l65 menendez - delmestre , k. , et al .
2008 , in prep mihos , c. & herquist , l. 1994 , 437 , 611 mihos , c. & herquist , l. 1996 , 464 , 641 mccarthy , p. , et al .
2004 , , 614 , 9 nandra , k. , et al .
2008 , , 660 , 11 nardini , e. , et al .
2008 , , submitted .
papovich , c. , et al .
2004 , , 154 , 70 papovich , c. , dickinson , m. , giavalisco , m. , conselice , c. j. , & ferguson , h. c. 2005 , , 631 , 101 papovich , c. , et al.2006 , , 640 , 92 papovich , c. , et al .
2007 , , 668 , 45 papovich , c. , et al .
2008 , , 676 , 206 peeters , e. , hony , s. , van kerckhoven , c. , tielens , a. g. g. m. , allamandola , l. j. , hudgins , d. m. , & bauschlicher , c. w. 2002 , , 390 , 1089 pope , a. , et al .
2006 , , 370 , 1185 pope , a. , et al . 2008 , , 675 , 1171 ranalli , p. et al .
2003 , , 399 , 39 reddy , n. , et al .
2005 , , 633 , 748 reddy , n. , et al .
2006 , , 653 , 1004 reddy , n. , et al .
2008 , , 675 , 48 rigby , j. , et al .
2008 , , 675 , 262 rigopoulou , et al . 1999 , , 118 , 2625 rigopoulou ,
2006 , , 648 , 81 rothberg , b. & joseph , r. d. 2004 , , 128 , 2098 sajina , a. , et al .
2007 , , 664 , 713 sajina , a. , et al .
2008 , , 683,659 sanders , d. , et al .
1988 , , 328 , l35 sanders , d. & mirabel , i. f. 1996 , , 34 , 749 shapley , a. , et al .
2001 , , 562 , 95 shapley , a. , et al .
2005 , , 626 , 698 shi , y. , et al .
2005 , , 629 , 88 shi , y. , et al .
2007 , , 669 , 841 smail , i. , et al .
1999 , , 308 , 1061 smith , j. d. , et al .
2007 , , 656 , 770 steidel , c. , et al .
2004 , , 604 , 534 storchi - bergmann , t. , et al . 2005 , , 624 , 13 szokoly , g. p. , et al .
2004 , , 155 , 271 tacconi , l. , et al .
2002 , , 580 , 73 teplitz , h. , et al .
2007 , , 659 , 941 valiante , e. , et al .
2007 , , 660 , 1060 van diedenhoven , b. , peeters , e. , van kerckhoven , c. , hony , s. , hudgins , d. m. , allamandola , l. j. , & tielens , a. g. g. m. 2004 , , 611 , 928 veilleux , s. , et al .
1995 , , 98 , 171 veilleux , s. , et al . 1999 , , 522 , 113 yasuyuki , w. , & masayuki , u. 2005 , , 618 , 649 yasuyuki , w. , et al .
2008 , , 677 , 895 webb , t. m. a. , et al .
2003 , , 597,680 webb , t. m. a. , et al . 2006 , , 636 , l17 weedman , d. w. , le floch , e. , higdon , s. j. u. , higdon , j. l. , & houck , j. r. 2006 , , 638 , 613 weedman , d. , et al . 2006 , , 653 , 101 weedman , d. w. , et al .
2006 , , 651 , 101 white , s. d. m. , & frenk , c. s. 1991 , , 379 , 52 windhorst , r. , et al .
2002 , , 143 , 113 wolf , c. , et al .
2005 , , 630 , 771 wu , h. , et al .
2005 , , 632 , 79 yan , l. , et al .
2005 , , 628 , 604 younger , j. , et al .
2008 , in preparation .
lcccccc nickname & egsirac & ra & dec & @xmath160 & cycles & exp time + & & & mjy & & s + egs1 & j142301.49 + 533222.4 & 14:23:01.50 & + 53:32:22.6 & 0.55 & 10 & 7314 + egs4 & j142148.49 + 531534.5 & 14:21:48.49 & + 53:15:34.5 & 0.56 & 10 & 7314 + egs10 & j141928.10 + 524342.1 & 14:19:28.09 & + 52:43:42.2 & 0.62 & 8 & 5851 + egs11 & j141920.44 + 525037.7 & 14:19:17.44 & + 52:49:21.5 & 0.59 & 8 & 5851 + egs12 & j141917.45 + 524921.5 & 14:19:20.45 & + 52:50:37.9 & 0.74 & 5 & 3657 + egs14 & j141900.24 + 524948.3 & 14:19:00.27 & + 52:49:48.1 & 1.05 & 3 & 2194 + egs23 & j141822.47 + 523937.7 & 14:18:22.48 & + 52:39:37.9 & 0.67 & 7 & 5120 + egs24 & j141834.58 + 524505.9 & 14:18:34.55 & + 52:45:06.3 & 0.66 & 7 & 5120 + egs24a & j141836.77 + 524603.9 & 14:18:36.77 & + 52:46:03.9 & 0.66 & 7 & 5120 + egs26 & j141746.22 + 523322.2 & 14:17:46.22 & + 52:33:22.4 & 0.49 & 11 & 8045 + egs_b2 & j142219.81 + 531950.3 & 14:22:19.80 & + 53:19:50.4 & 0.62 & 8 & 5851 + egs_b6 & j142102.68 + 530224.5 & 14:21:02.67 & + 53:02:24.8 & 0.72 & 6 & 4388 + lcl sample & 24 flux density & color criteria @xcite & @xmath1610.75 mjy & @xmath162 + @xcite & @xmath1610.9 mjy & @xmath163 and + & & @xmath164 + @xcite(agn ) & @xmath1611.0 mjy & @xmath165 erg @xmath135 s@xmath24 + @xcite(sb ) & @xmath1611.0 mjy & irac flux density peak at either 4.5 or 5.8 + this paper & @xmath1610.5 mjy & @xmath166-[4.5]<0.4 $ ] and + & & @xmath167-[8.0]<0.5 $ ] + lcccccc object & redshift@xmath168 & redshift@xmath169 & @xmath170 & @xmath171 & @xmath172 & @xmath173 & & & & & & egs1 & 1.95@xmath1740.03 & 1.90@xmath1740.02&11.23@xmath1740.03 & 2.38@xmath1740.22 & 10.18@xmath1740.15 & 1.68@xmath1740.26 egs4 & 1.94@xmath1740.03 & 1.88@xmath1740.02&10.89@xmath1740.06 & 0.57@xmath1750.07 & 9.82@xmath1740.37 & 0.17@xmath1740.07 egs10 & 1.94@xmath1740.02 & 1.94@xmath1740.01&11.33@xmath1740.02 & 2.39@xmath1740.12 & 10.04@xmath1740.31 & 0.26@xmath1740.10 egs11 & 1.80@xmath1740.02 & 1.80@xmath1740.01&11.02@xmath1740.05 & 0.79@xmath1740.10 & 10.25@xmath1740.12 & 1.19@xmath1740.16 egs12 & 2.01@xmath1740.03 & 2.02@xmath1740.03&11.37@xmath1740.02 & 1.46@xmath1740.08 & 10.61@xmath1740.11 & 1.28@xmath1740.55 egs14 & 1.87@xmath1740.06 & 1.86@xmath1740.03&11.33@xmath1740.04 & 1.13@xmath1740.09 & 10.63@xmath1740.10 & 2.98@xmath1740.35 egs21 & 3.01@xmath1740.03 & 3.00@xmath1740.03&11.73@xmath1740.06 & 1.59@xmath1740.10 & & egs23 & 1.77@xmath1740.02 & 1.77@xmath1740.01&11.15@xmath1740.04 & 1.45@xmath1740.12 & 10.54@xmath1740.05 & 1.08@xmath1740.08 egs24 & 1.85@xmath1740.03 & 1.85@xmath1740.01&11.25@xmath1740.03 & 2.24@xmath1740.18 & 10.56@xmath1740.07 & 0.36@xmath1740.08 egs26 & 1.77@xmath1740.03 & 1.78@xmath1740.02&11.16@xmath1740.03 & 2.61@xmath1740.20 & 10.42@xmath1740.06 & 1.12@xmath1740.18 egs_b2 & 1.59@xmath1740.01 & 1.60@xmath1740.01 & & & 10.45@xmath1740.04 & 0.30@xmath1740.04 lccccccccccccc object & @xmath176 & @xmath177 & @xmath178 & @xmath179 & @xmath180 & @xmath181 & @xmath182 & @xmath183 & @xmath184 & @xmath185 & @xmath186 & @xmath90 & q + & @xmath30jy & @xmath30jy & @xmath30jy & @xmath30jy & @xmath30jy & @xmath30jy & mjy & mjy & mjy & mjy & mjy & & + egs1 & 45.0@xmath1740.3&55.4@xmath1740.4&63.6@xmath1741.5&56.3@xmath1741.6 & & 554@xmath17435 & @xmath1871.5 & 12.1@xmath1748.9&3.3&1.86@xmath1740.50&0.069@xmath1740.010 & 12.72@xmath1740.15&2.25 + egs4 & 32.1@xmath1740.3&44.9@xmath1740.4&52.1@xmath1741.5&40.1@xmath1741.5&125@xmath17424 & 557@xmath17422&2.4@xmath1740.5 & @xmath18721.0 & 3.9&1.87@xmath1740.48&0.062@xmath1740.010 & 12.62@xmath1740.12&2.19 + 
egs10 & 21.8@xmath1740.3&23.0@xmath1740.3&34.0@xmath1741.4&28.9@xmath1741.5&77@xmath17428 & 623@xmath17435&4.2@xmath1740.7 & 45.5@xmath1748.7&5.2&1.65@xmath1740.69&0.085@xmath1740.014 & 12.83@xmath1740.10&2.33 + egs11 & 27.8@xmath1740.3&36.9@xmath1740.4&38.2@xmath1741.4&30.6@xmath1741.5&192@xmath17431 & 591@xmath17420&5.0@xmath1740.6 & @xmath18721.0 & 3.3&0.85@xmath1740.44&0.067@xmath1740.017 & 12.58@xmath1740.09&2.24 + egs12 & 27.7@xmath1740.3&31.0@xmath1740.3&38.3@xmath1741.4&33.2@xmath1741.5&196@xmath17425 & 743@xmath17423&3.9@xmath1740.6 & a & 5.4&1.58@xmath1740.47&0.036@xmath1740.010 & 12.77@xmath1740.07&2.59 + egs14 & 66.1@xmath1740.2&89.6@xmath1740.4&101.7@xmath1741.5&88.4@xmath1741.6&457@xmath17439 & 1053@xmath17441&3.8@xmath1740.6 & 76.7@xmath1749.6&6.4&4.54@xmath1740.68&0.316@xmath1740.023 & 13.18@xmath1740.06&1.95 + egs21 & 39.5@xmath1740.3&45.3@xmath1740.4&50.7@xmath1741.5&35.0@xmath1741.5&59@xmath17414 & 605@xmath17423&2.8@xmath1740.5 & 34.2@xmath1749.4&8.4&1.31@xmath1740.35&0.070@xmath1740.014 & 13.15@xmath1740.07&2.26 + egs23 & 47.8@xmath1740.3&60.6@xmath1740.4&69.1@xmath1741.5&51.3@xmath1741.5&132@xmath17429 & 665@xmath17418&3.7@xmath1740.4 & 62.4@xmath1748.7&4.5&1.81@xmath1740.40&0.119@xmath1740.015 & 12.79@xmath1740.08&2.08 + egs24 & 36.4@xmath1740.3&44.0@xmath1740.4&46.8@xmath1741.4&37.1@xmath1741.5&65@xmath17425 & 663@xmath17429&3.4@xmath1740.6 & 9.7@xmath1749.0 & 2.7&1.49@xmath1740.74&0.047@xmath1740.012 & 12.51@xmath1740.18&2.16 + egs26 & 31.7@xmath1740.3&43.3@xmath1740.4&47.1@xmath1741.4&34.0@xmath1741.5&58@xmath17420 & 492@xmath17416&1.5@xmath1740.5 & 21.6@xmath1748.4&4.5&1.14@xmath1740.36&0.097@xmath1740.017 & 12.49@xmath1740.15&2.15 + egs24a & 22.3@xmath1740.3&32.3@xmath1740.3&46.7@xmath1741.5&575@xmath1741.6&223@xmath17436 & 997@xmath17430&2.5@xmath1740.5 & 15.1@xmath1748.4&6.4&2.87@xmath1740.54&0.112@xmath1740.013 & 12.91@xmath1740.10&1.91 + egs_b2 & 94.0@xmath1740.2&124.8@xmath1740.4&115.0@xmath1741.5&117.1@xmath1741.6 & & 616@xmath17430&3.4@xmath1740.5 & 21.7@xmath1747.0&2.2&&0.151@xmath1740.009 & 12.34@xmath1740.14&1.80 + lcccc name & age & @xmath88 & @xmath188 & sfr + & gyr & & @xmath7 & yr@xmath24 + egs1 & 1.9 & 0.3 & 5 & 240 + egs4 & 1.4 & 0.7 & 5 & 320 + egs10 & 1.1 & 0.4 & 2 & 182 + egs11 & 2.0 & 0.7 & 4 & 196 + egs12 & 0.29 & 0.4 & 1 & 480 + egs14 & 0.26 & 0.6 & 3 & 1320 + egs23 & 1.1 & 0.6 & 5 & 400 + egs24 & 0.29 & 0.5 & 5 & 580 + egs26 & 1.8 & 0.6 & 4 & 220 + egs_b2 & 0.03 & 0.6 & 0.9 & 3800 + lcccc name & sfr(bc03 ) & sfr(7.7 ) & sfr(11.3 ) &
sfr(@xmath90 ) + & @xmath189 & & & + egs1 & 240 & 386@xmath17418 & 263@xmath17468 & 945@xmath174326 + egs4 & 320 & 226@xmath17421 & 142@xmath17490 & 750@xmath174207 + egs10 & 182 & 452@xmath17414 & 207@xmath174110 & 1217@xmath174280 + egs11 & 196 & 277@xmath17421 & 297@xmath17461 & 684@xmath174142 + egs12 & 480 & 481@xmath17415 & 549@xmath174103 & 1060@xmath174171 + egs14 & 1320 & 451@xmath17428 & 568@xmath17497 & 2724@xmath174376 + egs23 & 400 & 340@xmath17421 & 487@xmath17442 & 1110@xmath174204 + egs24 & 580 & 398@xmath17419 & 504@xmath17460 & 582@xmath174241 + egs26 & 220 & 346@xmath17416 & 390@xmath17441 & 556@xmath174192 + egs_b2 & 3800 & & 417@xmath17429 & 394@xmath174127 + | we analyze a sample of galaxies chosen to have @xmath0 and satisfy a certain irac color criterion .
irs spectra yield redshifts , spectral types , and pah luminosities , to which we add broadband photometry from optical through irac wavelengths , mips from 24 - 160 , 1.1 millimeter , and radio at 1.4 ghz .
stellar population modeling and irs spectra together demonstrate that the double criteria used to select this sample have efficiently isolated massive star - forming galaxies at @xmath1 .
this is the first starburst - dominated ulirg sample at high redshift with total infrared luminosity measured directly from fir and millimeter photometry , and as such gives us the first accurate view of broadband seds for starburst galaxies at extremely high luminosity and at all wavelengths .
similar broadband data are assembled for three other galaxy samples : local starburst galaxies , local agn / ulirgs , and a second 24-luminous @xmath2 sample dominated by agn .
@xmath3 for the new @xmath2 starburst sample is the highest ever seen , some three times higher than in local starbursts , whereas in agns this ratio is depressed below the starburst trend , often severely .
several pieces of evidence imply that agns exist in this starburst dominated sample ; two objects even host very strong agns , while they still have very strong pah emission .
the acs images show that most objects have very extended morphologies in the rest - frame uv band , and thus an extended distribution of pah molecules .
such an extended distribution prevents further destruction of pah molecules by the central agns .
we conclude that objects in this sample are ulirgs powered mainly by starbursts , and that the total infrared luminosity density contributed by this type of object is @xmath4 . |
SECTION 1. APPLICABILITY OF PUBLIC DEBT LIMIT TO FEDERAL TRUST FUNDS
AND OTHER FEDERAL ACCOUNTS.
(a) Protection of Federal Funds.--Notwithstanding any other
provision of law--
(1) no officer or employee of the United States may--
(A) delay the deposit of any amount into (or delay
the credit of any amount to) any Federal fund or
otherwise vary from the normal terms, procedures, or
timing for making such deposits or credits, or
(B) refrain from the investment in public debt
obligations of amounts in any Federal fund,
if a purpose of such action or inaction is to not increase the
amount of outstanding public debt obligations, and
(2) no officer or employee of the United States may
disinvest amounts in any Federal fund which are invested in
public debt obligations if a purpose of the disinvestment is to
reduce the amount of outstanding public debt obligations.
(b) Protection of Benefits and Expenditures for Administrative
Expenses.--
(1) In general.--Notwithstanding subsection (a), during any
period for which cash benefits or administrative expenses would
not otherwise be payable from a covered benefits fund by reason
of an inability to issue further public debt obligations
because of the applicable public debt limit, public debt
obligations held by such covered benefits fund shall be sold or
redeemed only for the purpose of making payment of such
benefits or administrative expenses and only to the extent cash
assets of the covered benefits fund are not available from
month to month for making payment of such benefits or
administrative expenses.
(2) Issuance of corresponding debt.--For purposes of
undertaking the sale or redemption of public debt obligations
held by a covered benefits fund pursuant to paragraph (1), the
Secretary of the Treasury may issue corresponding public debt
obligations to the public, in order to obtain the cash
necessary for payment of benefits or administrative expenses
from such covered benefits fund, notwithstanding the public
debt limit.
(3) Advance notice of sale or redemption.--Not less than 3
days prior to the date on which, by reason of the public debt
limit, the Secretary of the Treasury expects to undertake a
sale or redemption authorized under paragraph (1), the
Secretary of the Treasury shall report to each House of the
Congress and to the Comptroller General of the United States
regarding the expected sale or redemption. Upon receipt of such
report, the Comptroller General shall review the extent of
compliance with subsection (a) and paragraphs (1) and (2) of
this subsection and shall issue such findings and
recommendations to each House of the Congress as the
Comptroller General considers necessary and appropriate.
(c) Public Debt Obligation.--For purposes of this section, the term
``public debt obligation'' means any obligation subject to the public
debt limit established under section 3101 of title 31, United States
Code.
(d) Federal Fund.--For purposes of this section, the term ``Federal
fund'' means any Federal trust fund or Government account established
pursuant to Federal law to which the Secretary of the Treasury has
issued or is expressly authorized by law directly to issue obligations
under chapter 31 of title 31, United States Code, in respect of public
money, money otherwise required to be deposited in the Treasury, or
amounts appropriated.
(e) Covered Benefits Fund.--For purposes of subsection (b), the
term ``covered benefits fund'' means any Federal fund from which cash
benefits are payable by law in the form of retirement benefits,
separation payments, life or disability insurance benefits, or
dependent's or survivor's benefits, including (but not limited to) the
following:
(1) the Federal Old-Age and Survivors Insurance Trust Fund;
(2) the Federal Disability Insurance Trust Fund;
(3) the Civil Service Retirement and Disability Fund;
(4) the Government Securities Investment Fund;
(5) the Department of Defense Military Retirement Fund;
(6) the Unemployment Trust Fund;
(7) each of the railroad retirement funds and accounts;
(8) the Department of Defense Education Benefits Fund and
the Post-Vietnam Era Veterans Education Fund; and
(9) the Black Lung Disability Trust Fund.
SEC. 2. CONFORMING AMENDMENT.
(a) In General.--Subsections (j), (k), and (l) of section 8348
of title 5, United States Code, and subsections (g) and (h) of section
8438 of such title are hereby repealed.
(b) Retention of Authority To Restore Trust Funds With Respect
to Actions Taken Before Date of Enactment.--
(1) In general.--The repeals made by subsection (a) shall not
apply to the restoration requirements imposed on the Secretary
of the Treasury (or the Executive Director referred to in
section 8438(g)(5) of title 5, United States Code) with respect
to amounts attributable to actions taken under subsection
(j)(1) or (k) of section 8348, or section 8438(g)(1), of such
title before the date of the enactment of this Act.
(2) Restoration requirements.--For purposes of paragraph (1),
the term ``restoration requirements'' means the requirements
imposed by--
(A) paragraphs (2), (3), and (4) of subsection (j),
and subsection (l)(1), of section 8348 of such title,
and
(B) paragraphs (2), (3), (4), and (5) of subsection
(g), and subsection (h)(1), of section 8438 of such
title.
Passed the House of Representatives December 14, 1995.
Attest:
ROBIN H. CARLE,
Clerk. | Prohibits a U.S. officer or employee from: (1) delaying the deposit or credit of any amount into any Federal fund, otherwise varying from normal procedures for making deposits or credits, or refraining from investments in public debt obligations of amounts in such fund if the purpose of such action or inaction is to not increase the amount of outstanding public debt obligations; and (2) disinvesting amounts in any such fund which are invested in public debt obligations if a purpose is to reduce the amount of outstanding public debt obligations. Prescribes that during any period for which cash benefits or administrative expenses would not be payable from a covered benefits fund because of an inability to issue further public debt obligations due to the applicable public debt limit, such obligations held by a covered benefits fund will only be sold or redeemed for payment of: (1) such benefits; or (2) administrative expenses and only if cash assets of such fund are not available from month to month for the purpose of making such payments. Requires the Secretary of the Treasury to notify each House of the Congress and the Comptroller General not less than three days before an expected sale or redemption. |
Published 15 April 2015
The Liberal Democrat Party launched its 2015 election manifesto today (15 April) which included pledges to increase support for carers and invest in dementia research.
As well as this there was a commitment to continue to integrate health and social care systems and end the practice of 'care cramming' by care workers.
Alzheimer's Society's own election manifesto called on the new government to ensure everyone diagnosed with dementia has access to a Dementia Adviser, demonstrate leadership in creating a dementia-friendly society and ensure that people with dementia receive the same state support as people with comparable conditions.
Responding to the Liberal Democrat manifesto, Chief Executive of Alzheimer's Society, Jeremy Hughes said:
'Dementia is the biggest health and social care challenge facing the country. The Liberal Democrat commitment to be ambitious on dementia outcomes is welcome. 'The half a million family carers for people with dementia are the unsung heroes providing vital support to loved ones. The commitment to increase support and recognition of their role in caring will be widely welcomed. The commitment to double dementia research in the next Parliament is vital and must focus on prevention and care as well as seeking a cure.'
Further information ||||| Image copyright PA Image caption Those who were overweight had an 18% reduction in dementia, researchers found
Being overweight cuts the risk of dementia, according to the largest and most precise investigation into the relationship.
The researchers admit they were surprised by the findings, which run contrary to current health advice.
The analysis of nearly two million British people, in the Lancet Diabetes & Endocrinology, showed underweight people had the highest risk.
Dementia charities still advised not smoking, exercise and a balanced diet.
Dementia is one of the most pressing modern health issues. The number of patients globally is expected to treble to 135 million by 2050.
There is no cure or treatment, and the mainstay of advice has been to reduce risk by maintaining a healthy lifestyle. Yet it might be misguided.
'Surprise'
The team at Oxon Epidemiology and the London School of Hygiene and Tropical Medicine analysed medical records from 1,958,191 people aged 55, on average, for up to two decades.
Their most conservative analysis showed underweight people had a 39% greater risk of dementia compared with being a healthy weight.
But those who were overweight had an 18% reduction in dementia - and the figure was 24% for the obese.
"Yes, it is a surprise," said lead researcher Dr Nawab Qizilbash.
He told the BBC News website: "The controversial side is the observation that overweight and obese people have a lower risk of dementia than people with a normal, healthy body mass index.
"That's contrary to most if not all studies that have been done, but if you collect them all together our study overwhelms them in terms of size and precision."
Loss of tissue in a demented brain compared with a healthy one
Any explanation for the protective effect is distinctly lacking. There are some ideas that vitamin D and E deficiencies contribute to dementia and they may be less common in those eating more.
But Dr Qizilbash said the findings were not an excuse to pile on the pounds or binge on Easter eggs.
"You can't walk away and think it's OK to be overweight or obese. Even if there is a protective effect, you may not live long enough to get the benefits," he added.
Heart disease, stroke, diabetes, some cancers and other diseases are all linked to a bigger waistline.
Analysis
By James Gallagher, Health editor, BBC News website
These findings have come as a surprise, not least for the researchers themselves.
But the research leaves many questions unanswered.
Is fat actually protective or is something else going on that could be harnessed as a treatment? Can other research groups produce the same findings?
Clearly there is a need for further research, but what should people do in the meantime?
These results do not seem to be an excuse to eye up an evening on the couch with an extra slice of cake.
The Alzheimer's Society and Alzheimer's Research UK have both come out and encouraged people to exercise, stop smoking and have a balanced diet.
Dr Simon Ridley, of Alzheimer's Research UK, said: "These new findings are interesting as they appear to contradict previous studies linking obesity to dementia risk.
"The results raise questions about the links between weight and dementia risk. Clearly, further research is needed to understand this fully."
The Alzheimer's Society said the "mixed picture highlights the difficulty of conducting studies into the complex lifestyle risk factors for dementia".
Prof Deborah Gustafson, of SUNY Downstate Medical Center in New York, argued: "To understand the association between body mass index and late-onset dementia should sober us as to the complexity of identifying risk and protective factors for dementia.
"The report by Qizilbash and colleagues is not the final word on this controversial topic."
Dr Qizilbash said: "We would agree with that entirely." ||||| People who are underweight in middle-age – or even on the low side of normal weight – run a significantly higher risk of dementia as they get older, according to new research that contradicts current thinking.
The results of the large study, involving health records from 2 million people in the UK, have surprised the authors and other experts. It has been wrongly claimed that obese people have a higher risk of dementia, say the authors from the London School of Hygiene and Tropical Medicine. In fact, the numbers appear to show that increased weight is protective.
At highest risk, says the study, are middle-aged people with a BMI [body mass index] lower than 20 – which includes many in the “normal weight” category, since underweight is usually classified as lower than a BMI of 18.5.
These people have a 34% higher chance of dementia as they age than those with a BMI of 20 to just below 25, which this study classes as healthy weight. The heavier people become, the more their risk declines. Very obese people, with a BMI over 40, were 29% less likely to get dementia 15 years later than those in the normal weight category.
Prof Stuart Pocock, one of the authors, said: “Our results suggest that doctors, public health scientists, and policymakers need to rethink how to best identify who is at high risk of dementia. We also need to pay attention to the causes and public health consequences of the link between underweight and increased dementia risk which our research has established.
“However, our results also open up an intriguing new avenue in the search for protective factors for dementia – if we can understand why people with a high BMI have a reduced risk of dementia, it’s possible that further down the line, researchers might be able to use these insights to develop new treatments for dementia.”
Lead author Dr Nawab Qizilbash from Oxon Epidemiology told the Guardian that the message from the study was not that it was OK to be overweight or obese in middle-age. “Even if there were to be a protective effect in dementia, you may not live long enough to benefit because you are at higher risk from other conditions,” he said.
The study, published in the Lancet Diabetes and Endocrinology journal, looks only at data, correlating BMI with dementia diagnoses in general practice records and making allowances for anything that could skew the picture. “We haven’t been able to find an explanation,” said Qizilbash. “We are left with this finding which overshadows all the previous studies put together. The question is whether there is another explanation for it. In epidemiology, you are always left with the question of whether there is another factor.”
Many issues “related to diet, exercise, frailty, genetic factors, and weight change” could play a part, says the paper. There have been small studies that suggest a deficiency of vitamin E or vitamin D may play a part in dementia, but these are purely speculative, said Qizilbash.
But the study “opens up an avenue to look at the protective effects on dementia of diet, vitamins, weight change as well as frailty and potentially genetic influences”.
Others were cautious about the results, while acknowledging the scale of the study, which gives it greater power than previous pieces of research.
Dr Simon Ridley, from Alzheimer’s Research UK, said further work is needed. “This study doesn’t tell us that being underweight causes dementia, or that being overweight will prevent the condition,” he said.
“Many other studies have shown an association between obesity and an increased risk of dementia. These findings demonstrate the complexity of research into risk factors for dementia and it is important to note that BMI is a crude measure – not necessarily an indicator of health. It’s also not clear whether other factors could have affected these results.”
The best protection against dementia, he added, is “eating a healthy, balanced diet, exercising regularly, not smoking, and keeping blood pressure in check”.
Prof Deborah Gustafson from SUNY Downstate Medical Center in New York, USA, in a commentary with the study in the journal, writes: “To understand the association between BMI and late-onset dementia should sober us as to the complexity of identifying risk and protective factors for dementia. The report by Qizilbash and colleagues is not the final word on this controversial topic.”
| – The skinnier you are in middle age, the more likely you are to get dementia in your old age, according to British researchers who sound baffled by their own findings. In the largest study of its kind, the researchers looked at up to 20 years of medical records from almost 2 million people, average age 55, and found that people classed as underweight or on the low side of normal were 34% more likely to develop dementia than people considered to be at a normal weight. Contradictory to earlier—but much smaller—studies, the researchers found that the risk continued to decrease as weight increased, with overweight people 18% less likely to develop dementia, and obese people 29% less likely, the BBC reports. "The reasons for and public health consequences of these findings need further investigation," the researchers write in the Lancet. "We haven't been able to find an explanation," the lead researcher tells the Guardian. "We are left with this finding which overshadows all the previous studies put together. The question is whether there is another explanation for it. In epidemiology, you are always left with the question of whether there is another factor." He stresses that the study should not be used as an excuse for overeating, because even if obesity does have a protective effect, "you may not live long enough to benefit." In a statement, an Alzheimer's Society spokesman says that while the study shows the need for more research, people are still encouraged to keep their brains healthy by not smoking, exercising regularly, and eating a healthy, balanced diet. (Other researchers say this diet cuts the Alzheimer's risk by 53%.) |
when probing small distances inside a hadron with a fixed momentum scale @xmath2 one resolves its constituents quarks and gluons .
as one increases the energy of the scattering process , the parton densities seen by the probe grow . at some energy much bigger than the hard scale
, the gluon density has grown so large that non - linear effects become important .
one enters the saturation regime of qcd , a non - linear yet weakly - coupled regime that describes the hadron as a dense system of weakly interacting partons ( mainly gluons ) .
the transition to the saturation regime is characterized by the so - called saturation momentum @xmath3 . this is an intrinsic scale of the high - energy hadron which increases as @xmath4 decreases . @xmath5 but as the energy increases , @xmath6 becomes a hard scale , and the transition to saturation occurs when @xmath6 becomes comparable to @xmath7 . although the saturation regime is only reached when @xmath8 , observables are sensitive to the saturation scale already during the approach to saturation , when @xmath9 . this is especially true in the case of hard diffraction in deep inelastic scattering ( dis ) . both inclusive ( @xmath10 ) and diffractive ( @xmath11 )
dis are processes in which a photon ( of virtuality @xmath12 ) is used as the hard probe , and at small values of @xmath13 parton saturation becomes relevant .
the dipole picture naturally describes inclusive and diffractive events within the same theoretical framework .
it expresses the scattering of the virtual photon through its fluctuation into a color singlet @xmath14 pair ( or dipole ) of a transverse size @xmath15 .
the dipole is then what probes the target proton , seen as a color glass condensate ( cgc ) : a dense system of gluons that interact coherently .
therefore , despite its perturbative size , the dipole cross - section is comparable to that of a pion .
the same dipole scattering amplitude @xmath16 enters the formulation of the inclusive and diffractive cross - sections : @xmath17 where @xmath18 is the well - known @xmath19 wavefunction . to obtain the right - hand sides
, we have decomposed the dipole - size integration into three domains : @xmath20 , @xmath21 and @xmath22 , and used the dipole amplitude @xmath16 discussed below .
one sees that hard diffractive events ( @xmath23 ) are much more sensitive to saturation than inclusive events , as the contribution of small dipole sizes is suppressed and the dominant size is @xmath24 .
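a rough parametric estimate of these integrals makes the statement explicit ; the sketch below drops all constants , keeps only transverse photons , writes q for the virtuality and q_s for the saturation momentum , and assumes the usual aligned - jet behaviour of the photon overlap , |\psi|^2 \sim 1/(q^2 r^4) for r > 1/q , together with a dipole amplitude n \sim r^2 q_s^2 below saturation :

\sigma_{tot} \;\sim\; \int_{1/q^2}^{1/q_s^2} \frac{dr^2}{q^2 r^4}\; r^2 q_s^2
 \;\sim\; \frac{q_s^2}{q^2}\,\ln\frac{q^2}{q_s^2} ,
\qquad
\sigma_{diff} \;\sim\; \int_{1/q^2}^{1/q_s^2} \frac{dr^2}{q^2 r^4}\;\big(r^2 q_s^2\big)^2
 \;\sim\; \frac{q_s^2}{q^2} .

the logarithm in the first integral is built up uniformly over the semi - hard sizes between 1/q and 1/q_s , whereas the second integral is dominated by its upper limit r \sim 1/q_s , which is the statement made above .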
the good - and - walker picture of diffraction was originally meant to describe soft diffraction .
they express a hadronic projectile @xmath25 in terms of hypothetical eigenstates of the interaction with the target @xmath26 that can only scatter elastically : @xmath27 the total , elastic and diffractive cross - sections are then easily obtained : @xmath28 ^ 2\hspace{0.5 cm } \sigma_{diff}=\sum_n c_n^2 t_n^2\ .\label{gaw}\ ] ] it turns out that in the high energy limit , there exists a basis of eigenstates of the large@xmath29 qcd @xmath30-matrix : sets of quark - antiquark color dipoles @xmath31 characterized by their transverse sizes @xmath32 . in the context of deep inelastic scattering ( dis ) , we also know the coefficients @xmath33 to express the virtual photon in the dipole basis . for instance , the equivalent of @xmath34 for the one - dipole state is the photon wavefunction @xmath35 . this realization of the good - and - walker picture allows one to write down exact ( within the high - energy and large@xmath29 limits ) factorization formulae for the total and diffractive cross - sections in dis . they are expressed in terms of elastic scattering amplitudes of dipoles off the cgc : @xmath36 where @xmath37 is the total rapidity .
the average @xmath38 is an average over the cgc wavefunction that gives the energy dependence to the cross - sections .
the formulae are similar to ( [ gaw ] ) , with extra integrations over the dipoles ' transverse coordinates .
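for orientation , the counting behind eq . ( [ gaw ] ) can be written out explicitly ; these are just the standard good - and - walker relations , recalled here because the displayed equation above is compressed , with \langle a \rangle \equiv \sum_n c_n^2 a_n denoting an average over the interaction eigenstates :

\sigma_{tot} = 2\sum_n c_n^2\, t_n = 2\langle t\rangle , \qquad
\sigma_{el} = \Big(\sum_n c_n^2\, t_n\Big)^{2} = \langle t\rangle^{2} , \qquad
\sigma_{diff} = \sum_n c_n^2\, t_n^{2} = \langle t^{2}\rangle ,
\qquad\Longrightarrow\qquad
\sigma_{diff} - \sigma_{el} = \langle t^{2}\rangle - \langle t\rangle^{2} .

in words , the dissociative part of diffraction measures the dispersion of the eigenstate amplitudes , and vanishes only when all eigenstates scatter with the same amplitude .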
for instance , denoting the minimal rapidity gap by @xmath39 , the diffractive cross - section reads @xcite @xmath40 this factorization is represented in fig .
[ ddis ] . besides the @xmath12 dependence
, the probabilities to express the virtual photon in the dipole basis @xmath41 also depend on @xmath42 . starting with the initial condition @xmath43 , the probabilities can be obtained from the high - energy qcd rapidity evolution .
finally , the scattering amplitude of the n - dipole state @xmath44 is given by @xmath45 where @xmath46 is the scattering amplitude of the one - dipole state .
the rapidity evolution of the correlators @xmath47 should be obtained from the cgc non - linear equations ; one can then compute the diffractive cross - section . when taking @xmath48 in formula , one recovers the formula used for our previous estimates , which corresponds to restricting the diffractive final state to a @xmath14 pair . in practice
the description of hera data also requires a @xmath49 contribution .
within the high - energy and large@xmath29 limits , the scattering amplitudes off the cgc are obtained from the pomeron - loop equation @xcite derived in the leading logarithmic approximation in qcd .
this is a langevin equation which exhibits the stochastic nature @xcite of high - energy scattering processes in qcd .
its solution @xmath50 is an event - by - event dipole scattering amplitude , a function of @xmath51 and @xmath52 ( @xmath53 is a scale provided by the initial condition ) . the solution @xmath54 is characterized by a saturation scale @xmath6 which is a random variable whose logarithm is distributed according to a gaussian probability law @xcite .
the average value is @xmath55 and the variance is @xmath56 . the saturation exponent @xmath57 determines the growth of @xmath58 with rapidity , and the dispersion coefficient @xmath59 defines two energy regimes : the geometric scaling regime ( @xmath60 ) and the diffusive scaling regime ( @xmath61 ) .
evolving a given initial condition yields a stochastic ensemble of solutions @xmath62 from which one obtains the dipole correlators : @xmath63 where , in the right - hand side , @xmath64 is an average over the realizations of @xmath65 . indeed , both quantities @xmath66 and @xmath67 obey the same hierarchy of equations .
one obtains the following results for the dipole scattering amplitudes @xcite : @xmath68 all the scattering amplitudes are expressed in terms of @xmath69 , the amplitude for a single dipole , which features the following scaling behaviors : @xmath70 in the saturation region @xmath71 @xmath72 as the dipole size @xmath73 decreases , @xmath74 decreases towards the weak - scattering regime following the scaling laws ( [ gs ] ) or ( [ ds ] ) , depending on the value of @xmath75 , as shown in fig .
[ ploop ] . in the geometric scaling regime ( @xmath60 )
, the dispersion of the events is negligible and the averaged amplitude obeys ( [ gs ] ) . in the diffusive scaling regime ( @xmath61 ) , the dispersion of the events is important , resulting in the behavior ( [ ds ] ) .
when pomeron loops are not included in the evolution , only the geometric scaling regime appears .
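the interplay between the two regimes can be illustrated with a toy monte carlo of the event - by - event averaging described above ; everything in the snippet is an assumption made for illustration only ( a gaussian logarithm of the saturation scale with mean and variance growing linearly in rapidity , a crude single - event amplitude , and arbitrary parameter values ) , not a solution of the pomeron - loop equation .

import numpy as np

rng = np.random.default_rng(0)

def averaged_amplitude(r2_in_q0_units, y, lam=0.25, d=0.35, gamma=0.6, n_events=200_000):
    """toy <n> : average a crude single - event amplitude n = min(1, (r^2 q_s^2)^gamma)
    over a gaussian ensemble ln q_s^2 ~ normal(lam * y, d * y) (q_0 = 1 units)."""
    ln_qs2 = rng.normal(loc=lam * y, scale=np.sqrt(d * y), size=n_events)
    n_event = np.minimum(1.0, (r2_in_q0_units * np.exp(ln_qs2)) ** gamma)
    return n_event.mean()

# small d*y : <n> tracks (r^2 <q_s^2>)^gamma , i.e. geometric scaling ;
# large d*y : <n> is dominated by rare events with an unusually large q_s ,
# even for dipoles much smaller than the average saturation size ( diffusive scaling )
print(averaged_amplitude(1e-3, 5), averaged_amplitude(1e-6, 40))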
in the geometric scaling regime , instead of being a function of the two variables @xmath73 and @xmath76 , @xmath77 is a function of the single variable @xmath78 , up to inverse dipole sizes significantly larger than the saturation scale @xmath79 . this means that in the geometric scaling window in fig . [ ploop ] , @xmath77 is constant along lines parallel to the saturation line .
physically , they are lines along which the dipole sees a constant partonic density inside the proton . in dis
, this feature manifests itself via the so - called geometric scaling property . instead of being a function of @xmath12 and @xmath4 separately ,
the total cross - section is only a function of @xmath80 , up to large values of @xmath81 . similarly , the diffractive cross - section is only a function of @xmath82 and @xmath83 : @xmath84 experimental measurements are compatible with those predictions @xcite , with the parameters @xmath85 and @xmath86 for the average saturation scale @xmath87 . this determines the saturation exponent @xmath88 . hera probes the geometric scaling regime and one could expect the same of future measurements at an electron - ion collider . the estimates of section
i ( where one should now replace @xmath6 by @xmath58 ) are obtained in the geometric scaling regime : the total cross - section is dominated by semi - hard sizes ( @xmath89 ) while the diffractive cross - section is dominated by dipole sizes of the order of the hardest infrared cutoff in the problem : @xmath90 . in the diffusive scaling regime , up to values of @xmath91 much bigger than the average saturation scale @xmath92 , things change drastically : both inclusive and diffractive scattering are dominated by small dipole sizes , of order @xmath93 , yet saturation plays a crucial role .
cross - sections are dominated by rare events in which the photon hits a black spot that it sees at saturation at the scale @xmath94 . on average the scattering is weak ( @xmath95 ) , but saturation is the relevant physics .
our poor knowledge of the coefficient @xmath59 prevents a quantitative analysis ; still , the diffusive scaling regime has striking signatures .
for instance the inclusive and diffractive cross - sections do not feature any pomeron - like ( power - law type ) increase with the energy .
it is likely out of the reach of hera , and future studies in the context of @xmath96 collisions at the lhc are certainly of interest .
slides : + ` http://indico.cern.ch/contributiondisplay.py?contribid=280&sessionid=7&confid=9499 ` y. hatta , e. iancu , c. marquet , g. soyez and d. triantafyllopoulos , _ nucl . phys . _ * a773 * ( 2006 ) 95 . a.h .
mueller , a.i .
shoshi and s.m.h .
wong , _ nucl .
_ * b715 * ( 2005 ) 440 ; + e. iancu and d.n .
triantafyllopoulos , _ phys
. lett . _ * b610 * ( 2005 ) 253 . | following the good - and - walker picture , hard diffraction in the high - energy / small@xmath0 limit of qcd can be described in terms of eigenstates of the scattering matrix off a color glass condensate . from the cgc non - linear evolution equations ,
it is then possible to derive the behavior of diffractive cross - sections at small @xmath1 . i discuss recent results , in particular the consequences of the inclusion of pomeron loops in the evolution .
the helicobacter pylori ( h. pylori ) bacterium is responsible for 5.5% of all infection - associated cancers and is the major cause of gastric cancer as a consequence of chronic inflammation .
persistent gastric mucosa inflammation results in chronic gastritis and progresses through a multistep process to gastric atrophy , intestinal metaplasia , dysplasia , and finally carcinoma .
the clinical consequences of h. pylori infection are determined by bacterial virulence genes as well as by host genetic factors such as immune response genes , in addition to environmental factors [ 35 ] . among the bacterial products , the caga ( cytotoxin - associated gene a ) and vaca ( vacuolating cytotoxin ) proteins are the major virulence factors related to the severity of gastric lesions and cell responses [ 6 , 7 ] .
the gastric epithelium cells provide the first point of contact for h. pylori adhesion through interaction with toll - like receptors ( tlrs ) , responding to the infection by activating various signaling pathways .
tlrs are key regulators of both innate and adaptive immune responses , recognizing several microbial products , such as lipoproteins , peptidoglycans , and lipopolysaccharides ( lps ) .
the lps of h. pylori is recognized mainly by tlr4 , but also by tlr2 , which recognizes other forms that are structurally different from those recognized by tlr4 .
both tlr2 and tlr4 are activated after recognition of the bacteria , in cooperation with the adapter molecule myd88 , triggering the mitogen - activated protein kinase ( mapk ) signaling pathway . at this point
, there is a subsequent activation of the transcription factor nf-κb , which leads to the rapid expression of inducible nitric oxide synthase ( inos ) and proinflammatory cytokines , chemokines and their receptors , and interleukins [ 12 , 13 ] .
when these factors are stimulated , they initiate a marked inflammatory response of the mucosa , characterized as chronically active gastritis , and may acquire oncogenic potential [ 14 , 15 ] .
so far , the strategy for prevention of h. pylori - associated gastric cancer has been the eradication of these bacteria , regarded as a first - line therapy to reverse the preneoplastic lesions and prevent malignant progression .
however , treatment is not adopted for asymptomatic carriers in developing countries , due to its high cost .
h. pylori is susceptible to most antibiotics , although resistance has been common , and triple or quadruple therapy consisting of two antibiotics , a proton pump inhibitor , and bismuth has lately been used to eradicate the bacteria .
studies to evaluate changes in expression levels of genes involved in the recognition of the bacteria and the immune response of the host in patients infected by h. pylori are scarce , both before and after eradication treatment . moreover , there are no reports about the expression of tlr2 and tlr4 in gastric lesions before and after bacterial clearance .
therefore , the main goal of the present study was to evaluate , for the first time , the mrna and protein expression levels of tlr2 and tlr4 in h. pylori - infected chronic gastritis patients and the occurrence of changes in the expression levels of these receptors after successful h. pylori eradication therapy .
at first , 59 patients scheduled for upper endoscopy with positive histological and molecular diagnosis for h. pylori and not yet submitted to eradication therapy were enrolled prospectively between may 2010 and december 2012 from the gastro - hepatology outpatient clinic at the base hospital and the joão paulo ii hospital , both at são josé do rio preto , sp , brazil . from each patient ,
gastric biopsies of the antrum region were collected for histological analyses and molecular and immunohistochemical studies .
none of the individuals had taken any antibiotics , nonsteroidal anti - inflammatory drugs , or corticosteroids during the two months prior to endoscopy , nor did they take proton pump inhibitors or h2 antagonists in the 15 days preceding the procedure .
gastric biopsy specimens were examined histologically by a specialized pathologist for the presence of the bacteria and histopathologically classified as superficial chronic gastritis ( n = 45 ; mean age 44 years ; 19 females and 17 males ) , atrophic gastritis ( n = 8 ; mean age 50 years ; 3 females and 5 males ) , and atrophic gastritis with intestinal metaplasia ( n = 6 ; mean age 50 years ; 4 females and 2 males ) , according to the sydney system , constituting the so - called cg - hp+ group . of the 59 cg - hp+ patients , only 37 ( 63% ) concluded the treatment and were called completed treatment group , and 23/37 ( 62% ) of them had the bacteria eradicated , as evidenced by concordant histological and molecular h. pylori - negative diagnosis .
however , 14/37 ( 38% ) remained infected , showing a histological or molecular h. pylori - positive diagnosis ( table 1 ) .
four gastric biopsy specimens presented histologically normal h. pylori - negative gastric mucosa ( normal hp- group ) and were used as controls ( mean age 35.6 years ; 3 females and 1 male ) .
epidemiological data of patients and controls were collected using a standard interviewer - administered questionnaire , containing questions about smoking habits , alcohol intake , previous or ongoing treatment , use of medications , previous surgeries , and family history of cancer .
the cg - hp+ group was submitted to standard triple therapy consisting of amoxicillin ( 1 g ) , clarithromycin ( 500 mg ) , and omeprazole ( 20 mg ) , all given twice daily for seven days .
three months after treatment , the individuals underwent another endoscopy for collection of gastric biopsies of the antrum region .
immediately after collection , the biopsy specimens were placed into rnalater solution ( applied biosystems ) and stored at -20c until nucleic acid extraction .
the study protocol was approved by the local research ethics committee ( cep / ibilce / unesp number 030/10 ) , and written informed consent was obtained from all participating individuals .
dna / rna extraction from the gastric biopsies was performed according to the protocol accompanying the reagent trizol ( invitrogen ) and the concentrations were determined in a nanodrop nd1000 spectrophotometer ( thermo scientific ) .
firstly , multiplex pcr was performed , using 100 ng of dna in a final volume of 25 μl containing specific primers for the h. pylori genes urea and tsaa , besides the constitutive human cyp1a1 gene , according to our protocol described in a previous study .
molecular diagnosis was considered positive when at least one gene ( urea or tsaa ) had been amplified .
the h. pylori - positive samples were also subjected to pcr for investigation of polymorphisms in the sm regions of the gene vaca as previously described .
primers for the s alleles amplify an s1 fragment of 176 bp or an s2 fragment of 203 bp , while primers for the m alleles amplify an m1 fragment of 400 bp or an m2 fragment of 475 bp .
reverse transcription ( rt ) of total rna was performed using a high capacity cdna archive kit ( applied biosystems ) , in a total volume of 25 μl , according to the manufacturer 's protocol .
then , qpcr was carried out in a steponeplus real time pcr system 2.2.2 ( applied biosystems ) , using specific taqman probes for the target genes tlr2 ( assay id hs00610101_m1 , applied biosystems ) and tlr4 ( assay id hs01060206_m1 , applied biosystems ) and two reference genes , actb ( part number : 4352935e , applied biosystems ) and gapdh ( glyceraldehyde 3-phosphate dehydrogenase ) ( part number : 4352934e , applied biosystems ) , used as endogenous controls according to the manufacturer 's instructions .
all reactions were performed in triplicate in a final volume of 20 μl , using 100 ng/μl cdna and a blank to ensure the absence of contamination .
relative quantification ( rq ) of tlr2 and tlr4 mrna was obtained according to the model proposed by livak and schmittgen and normalized to the actb and gapdh reference genes and a pool of normal hp- samples .
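as a side note , the livak and schmittgen model referred to above is the familiar 2^(-ddct) calculation ; a minimal python sketch is given below , assuming the two reference genes are averaged and the pool of normal hp- samples serves as the calibrator . the ct values and the averaging convention are illustrative assumptions , not data from this study .

import numpy as np

def delta_delta_ct(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    # 2^-ddct relative quantification (livak & schmittgen)
    dct_sample = ct_target - np.mean(ct_refs)        # normalize to the reference genes
    dct_cal = ct_target_cal - np.mean(ct_refs_cal)   # same for the calibrator pool
    return 2.0 ** (-(dct_sample - dct_cal))          # rq relative to the calibrator

# illustrative ct values only
rq_tlr2 = delta_delta_ct(24.1, [18.9, 19.4], 24.9, [19.1, 19.6])
print(round(rq_tlr2, 2))   # ~1.5 for these made-up inputs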
immunohistochemical analysis was performed in 14 samples from the cg - hp+ group before and after bacteria eradication and four samples from the normal hp- group .
deparaffinized tissue slides were then submitted to antigen retrieval , using a high - temperature antigen - unmasking technique .
the sections were incubated with specific primary antibodies : rabbit polyclonal antibody anti - tlr2 ( 06 - 1119 , 1 : 50 dilution ; millipore ) and mouse monoclonal anti - tlr4 ( 76b357.1 , 1 : 200 dilution ; abcam ) .
then the slides were incubated with biotinylated secondary antibody ( picture max polymer detection kit , invitrogen ) for 30 minutes , following the manufacturer 's protocol .
immunostaining was done with 3,3-diaminobenzidine tetrahydrochloride ( dab ) containing 0.005% h2o2 , counterstained with hematoxylin .
placenta mucosa and appendix tissue were used , respectively , as positive controls for the tlr2 and tlr4 proteins .
the immunostaining was evaluated in the cytoplasm by densitometric analysis with an arbitrary scale going from 0 to 255 , performed with axio vision software under a zeiss - axioskop ii light microscope .
sixty equally distributed points were scored in each one of the regions , and the results were expressed as mean ± se .
the distribution of continuous data was evaluated using the d'agostino and pearson omnibus normality test or shapiro - wilk normality test .
data are presented as median and range , as mean ± standard deviation ( sd ) , or as frequencies , according to the data distribution .
student 's t - test for paired and unpaired data or the corresponding nonparametric tests , such as the mann - whitney test and the wilcoxon signed rank test , were used for comparisons between groups and to evaluate the association between relative gene expression and risk factors such as age , gender , smoking , drinking , and bacterial virulence genotypes .
the correlation between tlr2 and tlr4 mrna expression before and after eradication therapy was analyzed using spearman 's correlation . for protein expression ,
the means obtained from the densitometry analysis were compared before and after treatment and with the normal hp- group using anova followed by the bonferroni test . the level of significance was set at p < 0.05 .
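to make the statistical workflow above concrete , a minimal sketch using scipy is shown below ; the arrays are placeholders standing in for paired rq values ( before / after eradication ) and do not reproduce the study data .

import numpy as np
from scipy import stats

rq_before = np.array([1.2, 0.8, 2.5, 1.9, 1.1, 3.0, 0.9, 1.6])
rq_after  = np.array([1.0, 1.3, 2.1, 2.4, 0.7, 2.8, 1.2, 1.5])

# choose the paired t - test or the wilcoxon signed rank test based on normality
_, p_norm = stats.shapiro(rq_before - rq_after)
if p_norm > 0.05:
    _, p = stats.ttest_rel(rq_before, rq_after)
else:
    _, p = stats.wilcoxon(rq_before, rq_after)
print("paired comparison p =", round(p, 3))

# spearman correlation between two sets of rq values (e.g. tlr2 vs tlr4)
rho, p_rho = stats.spearmanr(rq_before, rq_after)
print("spearman rho =", round(rho, 2), "p =", round(p_rho, 3))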
table 2 shows the data regarding the relative expression levels of tlr2 and tlr4 mrna of 37 cg - hp+ patients who concluded the treatment ( completed treatment group ) , 23 cg - hp+ patients in which the bacteria were eradicated , allowing paired analysis before and after eradication therapy , and 14 cg - hp+ patients in which the bacteria were noneradicated .
the relative expression levels of tlr2 and tlr4 mrna , after normalization with the actb and gapdh reference genes and comparison with the h. pylori - negative normal mucosa , were significantly increased in all groups , either before or after treatment ( p < 0.05 ) .
considering all patients that completed the treatment , no significant change was found after treatment in the relative expression levels of either tlr2 or tlr4 mrna ( tlr2 = 1.55 and tlr4 = 1.64 ) in comparison to the same cases before the treatment ( tlr2 = 1.31 and tlr4 = 1.45 ) . in the group in which the bacteria were eradicated , heterogeneity of the relative expression levels of both tlr2 and tlr4 mrnas can be observed before and after the treatment ( figures 1(a ) and 1(b ) ) . however , no significant differences were observed for either gene when comparing the expression levels in this group before and after treatment ( p = 0.533 and p = 0.094 for tlr2 and tlr4 , resp . ) .
furthermore , a positive correlation between the rq values of tlr2 and tlr4 mrna before and after treatment considering only the eradicated patients was found ( before : r = 0.85 , p < 0.0001 ; after : r = 0.55 , p = 0.006 ) .
the influence of the caga and vaca bacterial genotypes on the gene expression levels , both before and after treatment ( table 3 ) , showed no significant difference between the caga+ and caga- genotypes ( p > 0.05 ) for either analyzed gene .
we also evaluated the association between relative expression levels of tlr2 and tlr4 mrna and the risk factors such as age , gender , smoking , drinking , and histological type of gastric lesion .
none of the factors investigated showed significant differences ( data not shown ) . in normal mucosa ,
the tlr2 and tlr4 protein expression was weak or absent , mainly in the foveolar epithelium ( figures 2(a ) and 2(b ) ) . nevertheless , the cg - hp+ samples collected before the treatment showed a cytoplasmatic , perinuclear , and focal immunostaining pattern , mostly in the basal area of the foveolar epithelium . a strong expression in the inflammatory cells
was also observed ( figures 2(c ) and 2(d ) ) . after the eradication of h. pylori , an immunostaining pattern similar to the one observed before the treatment was found for both tlr2 and tlr4 proteins ( figures 2(e ) and 2(f ) ) .
the mean optical densitometry values observed in the normal hp- group for tlr2 and tlr4 were 105.6 ± 2.7 and 101.4 ± 6.5 , respectively , while the cg - hp+ group before treatment presented significantly increased mean values for both tlr2 ( 151.7 ± 6.1 ) and tlr4 ( 132.2 ± 4.7 ) in comparison with the normal hp- group ( p = 0.020 and p = 0.007 , resp . ) .
after eradication of the bacteria , both tlr2 and tlr4 proteins showed a slight reduction in their mean optical densitometry values ( 136.1 ± 6.1 and 122.8 ± 5.8 , resp . ) .
however , there were no significant differences between these values before and after treatment ( p = 0.064 and p = 0.198 , resp . )
( figures 2(g ) and 2(h ) ) , confirming the findings regarding the mrna relative expression .
in this study we investigated for the first time the occurrence of alterations in the tlr2 and tlr4 mrna and protein expression in h. pylori - infected patients with chronic gastritis , before and after successful bacteria eradication treatment . our results did not reveal significant changes in the relative expression levels of either tlr2 or tlr4 mrna after treatment in eradicated patients , which was confirmed by immunohistochemistry .
moreover , the mrna expression of both receptors remained increased after eradication therapy compared to the normal hp- group , showing that the eradication of the bacteria did not normalize the expression of these receptors , at least under the conditions evaluated . additionally , we also observed a positive correlation between the mrna expression values of tlr2 and tlr4 confirming that h. pylori activates both receptors .
the lps of gram - negative bacteria is recognized mainly by tlr4 and also by tlr2 , activating signaling pathways that culminate in an inflammatory response .
it is believed that the interaction between bacterial virulence and a genetically susceptible host is associated with more severe chronic inflammation , which may , in the long run , lead to cancer . under normal physiological conditions ,
the expression of these receptors in the mucosa of the gastrointestinal tract is low due to the action of their antagonists , such as tollip ( toll - interacting protein ) and ppar ( peroxisome proliferator - activated receptor ) , which show higher levels in order to prevent inappropriate activation by nonpathogenic antigens [ 27 - 29 ] . in our study
, we observed a slightly increased expression of both tlr2 and tlr4 in cg - hp+ patients even after successful h. pylori eradication compared to the noninfected normal mucosa . in children infected with h. pylori , lagunes - servin et al .
( 2013 ) found an increase in the expression of the tlr2 , tlr4 , tlr5 , and tlr9 in the gastric epithelium compared with noninfected children and also an association with pro- and anti - inflammatory cytokines ( il-8 , tnf-α , and il-10 ) .
these findings confirm that h. pylori has the ability to increase the in vivo expression of tlrs by gastric epithelial cells early during infection in children , starting a chronic and balanced inflammatory process that will continue for decades , and so may contribute to the development of h. pylori - associated diseases later in adulthood .
in another study ( 2013 ) , it was observed that , considering the different tlrs of normal h. pylori - negative mucosa , the mrna of tlr5 was the most expressed , followed by those of tlr2 and tlr4 .
furthermore , these authors found tlr2 and tlr4 overexpression in intestinal metaplasia , independent of the h. pylori status , and in the dysplasia / cancer sequence .
moreover , upregulation of tlr2 and tlr4 mrna was also observed in h. pylori - associated normal mucosa .
these results were confirmed by immunohistochemical analyses , which found an increase in protein expression in h. pylori - infected normal mucosa , further increasing in intestinal metaplasia and dysplasia / carcinoma .
these findings suggest that progressive activation of these receptors , initially not only by h. pylori , but also by other pamps ( pathogen - associated molecular patterns ) or damps ( damage - associated molecular patterns ) , at later stages , may play an important role in gastric carcinogenesis and tumor progression .
upregulation of tlr4 expression and responsiveness to lps and h. pylori in gastric cell lines has also been reported [ 32 , 33 ] .
h. pylori infection induced both tlr4 mrna and protein expression in ags cells that were dependent on bacterial load and infection duration .
however , the transfection of ags cells with tlr4 sirna followed by the bacterial infection suppressed the expression of this receptor . moreover , the lps of h. pylori upregulates tlr4 expression via tlr2 signaling in mkn28 gastric cell lines through the mek1/2-erk1/2 map kinase pathway , also leading to an increase in cell proliferation .
conversely , previous studies [ 35 - 37 ] did not observe any relevant role of tlr4 in the cellular recognition of h. pylori in ags cells .
these controversial results may be due to differences in the lipid a structures produced by distinct h. pylori strains [ 38 - 40 ] .
therefore , the interaction of the bacteria with tlr2 should also be considered , mainly after the first contact with the gastric mucosa , triggering immunologic responses such as induction of il-8 and subsequent activation of nf-κb .
our study revealed no reduction of the transcript levels of tlr2 and tlr4 or their proteins 3 months after treatment , showing that the successful eradication of h. pylori does not change the expression of these receptors within a short period after the treatment . similarly , garza - gonzález et al .
( 2008 ) found no quantitative differences in the tlr4 and tlr5 mrna levels either , regardless of the presence or absence of h. pylori in gastric epithelial cell biopsies and ags cells , suggesting that the mrna levels of both receptors may not be influenced by the infection process , or at least not at the time points selected for analysis .
however , in our study , we observed higher levels of tlr2 and tlr4 mrna and of both proteins in h. pylori - infected mucosa compared to noninfected normal mucosa .
it should , however , be taken into consideration that the posttreatment time elapsed until biopsy collection may not have been sufficient for mucosal renewal and normalization of transcription levels .
moreover , alterations in mrna expression levels after h. pylori infection eradication therapy have been demonstrated , involving genes associated with cell damage , inflammation , proliferation , apoptosis , and intestinal differentiation [ 43 , 44 ] .
this study did not investigate the molecular mechanisms involved in the inflammatory cascade induced by h. pylori infection triggered by tlr4 and tlr2 .
therefore , further investigations are needed to clarify the possible involvement of the myd88 - mapk - nf-κb signaling pathway , as well as the role of ppars ( peroxisome proliferator - activated receptors ) in inhibiting the pathways that regulate the expression of proinflammatory genes and stress kinase pathways [ 31 , 45 , 46 ] , which suppress inflammation in h. pylori infection . when we compared the expression levels of tlr2 and tlr4 mrna with risk factors and bacterial virulence genotypes , we did not find any association .
the studies that assess the effects of caga and vaca virulence factors on the gene and protein expression are controversial .
our results evidenced that there were no quantitative differences in the mrna levels of these receptors regardless of caga and vaca status .
our results are in agreement with those of garza - gonzález et al . ( 2008 ) , which demonstrated that the mrna levels of tlr4 and tlr5 in gastric cells both in vivo and in vitro were not influenced by the vaca status , suggesting that this virulence factor may not be involved in the first steps of innate immune recognition of h. pylori .
another study evidenced downregulation of tlrs 2 and 5 and upregulation of tlr9 by h. pylori in human neutrophils regardless of the cagpai status and the integrity of the t4ss . in conclusion , we report a discrete increase in tlr2 and tlr4 mrna and protein expression in cg - hp+ patients before eradication therapy and the maintenance of this expression pattern even after treatment , suggesting that these receptors remain expressed in the gastric mucosa even after eradication of the bacteria , at least for the period evaluated .
therefore , considering the higher risk of malignant progression in patients infected by h. pylori for a long time , further investigations are needed to clarify the changes in the expression of other genes related with the inflammatory cascade induced by bacteria , such as those encoding cytokines and malignant transformation processes as well as the signaling pathways involved . | objective .
helicobacter pylori ( hp ) is recognized by tlr4 and tlr2 receptors , which trigger the activation of genes involved in the host immune response .
thus , we evaluated the effect of eradication therapy on tlr2 and tlr4 mrna and protein expression in h. pylori - infected chronic gastritis patients ( cg - hp+ ) , before and 3 months after treatment . methods . a total of 37 cg - hp+ patients were evaluated .
the relative quantification ( rq ) of mrna was assessed by taqman assay and protein expression by immunohistochemistry .
results . before treatment both tlr2 and tlr4 mrna in cg - hp+ patients were slightly increased ( tlr2 = 1.32 ; tlr4 = 1.26 ) in relation to hp - negative normal gastric mucosa ( p 0.05 ) .
after successful eradication therapy no significant change was observed ( tlr2 = 1.47 ; tlr4 = 1.53 ; p > 0.05 ) .
in addition , the caga and vaca bacterial genotypes did not influence the gene expression levels , and we observed a positive correlation between the rq values of tlr2 and tlr4 , both before and after treatment .
immunoexpression of the tlr2 and tlr4 proteins confirmed the gene expression results . conclusion . in conclusion , the expression of both tlr2 and tlr4 is increased in cg - hp+ patients regardless of caga and vaca status and this expression pattern is not significantly changed after eradication of bacteria , at least for the short period of time evaluated . |
Everyone knows someone who met their spouse online. A friend of mine whom I hadn’t seen in years told me recently that she, too, met her husband on an Internet dating site. They’re happily married, just moved into a new house, and are now talking about starting a family.
When I asked her if she thought online matchmaking was a better way than offline dating to find guys who were more compatible with her — and, therefore, better husband material — she laughed. “No, because I couldn’t stand him when I first met him,” she says of her husband. She thought he was full of himself and rude during their first encounter. It definitely wasn’t love at first sight, she said — that took a while.
In other words, according to my friend, Internet dating is just as unpredictable as the non-digital version. You never know how things are going to evolve until they do. But the benefit, she says, is that dating online gives you access to a lot more people than you’d ordinarily ever get to meet — and that’s how she connected with her future husband.
These observations have been borne out in a new study by social psychologists collaborating across the country. The extensive new study published in the journal Psychological Science in the Public Interest sought to answer some critical questions about online dating, an increasingly popular trend that may now account for 1 out of every 5 new relationships formed: fundamentally, how does online dating differ from traditional, face-to-face encounters? And, importantly, does it lead to more successful romantic relationships?
For their 64-page report, the authors reviewed more than 400 studies and surveys on the subject, delving into questions such as whether scientific algorithms — including those used by sites like eHarmony, PerfectMatch and Chemistry to match people according to similarities — can really lead to better and more lasting relationships (no); whether the benefits of endless mate choices online have limits (yes); and whether communicating online by trading photos and emails before meeting in person can promote stronger connections (yes, to a certain extent).
Overall, the study found, Internet dating is a good thing, especially for singles who don’t otherwise have many opportunities to meet people. The industry has been successful, of course — and popular: while only 3% of Americans reported meeting their partners online in 2005, that figure had risen to 22% for heterosexual couples and 6% for same-sex couples by 2007-09. Digital dating is now the second most common way that couples get together, after meeting through friends. But there are certain properties of online dating that actually work against love-seekers, the researchers found, making it no more effective than traditional dating for finding a happy relationship.
“There is no reason to believe that online dating improves romantic outcomes,” says Harry Reis, a professor of psychology at University of Rochester and one of the study’s co-authors. “It may yet, and someday some service might provide good data to show it can, but there is certainly no evidence to that right now.”
One downside to Internet dating has to do with one of its defining characteristics: the profile. In the real world, it takes days or even weeks for the mating dance to unfold, as people learn each other’s likes and dislikes and stumble through the awkward but often rewarding process of finding common ground. Online, that process is telescoped and front-loaded, packaged into a neat little digital profile, usually with an equally artificial video attached.
That leaves a) less mystery and surprise when singles meet face to face. That’s not necessarily a bad thing, as profiles can help quickly weed out the obviously inappropriate or incompatible partners (who hasn’t wished for such a skip button on those disastrous real-life blind dates?), but it also means that some of the pleasure of dating, and building a relationship by learning to like a person, is also diluted.
It also means that b) people may unknowingly skip over potential mates for the wrong reasons. The person you see on paper doesn’t translate neatly to a real, live human being, and there’s no predicting or accounting for the chemistry you might feel with a person whose online profile was the opposite of what you thought you wanted. Offline, that kind of attraction would spark organically.
The authors of the study note that people are notoriously fickle about what’s important to them about potential dates. Most people cite attractiveness as key to a potential romantic connection when surveying profiles online, but once people meet face to face, it turns out that physical appeal doesn’t lead to more love connections for those who say it is an important factor than for those who say it isn’t. Once potential partners meet, in other words, other characteristics take precedence over the ones they thought were important.
“You can’t look at a piece of paper and know what it’s like to interact with someone,” says Reis. “Picking a partner is not the same as buying a pair of pants.”
Making things harder, many sites now depend on — and heavily market — their supposedly scientific formulas for matching you with your soul mate based on similar characteristics or personality types. It may seem intuitively logical that people who share the same tastes or attitudes would be compatible, but love, in many cases, doesn’t work that way.
Some online dating sites, for example, attempt to predict attraction based on qualities like whether people prefer scuba diving to shopping, or reading to running, or whether they tend to be shy or more outgoing. But social science studies have found that such a priori predictors aren’t very accurate at all, and that the best prognosticators of how people will get along come from the encounters between them. In other words, it’s hard to tell whether Jim and Sue will be happy together simply by comparing a list of their preferences, perspectives and personality traits before they meet. Stronger predictors of possible romance include the tenor of their conversations, the subject of their discussions, or what they choose to do together.
“Interaction is a rich and complex process,” says Reis. “A partner is another human being, who has his or her own needs, wishes and priorities, and interacting with them can be a very, very complex process for which going through a list of characteristics isn’t useful.”
The authors also found that the sheer number of candidates that some sites provide their love-seeking singles — which can range from dozens to hundreds — can actually undermine the process of finding a suitable mate. The fact that candidates are screened via their profiles already sets up a judgmental, “shopping” mentality that can lead people to objectify their potential partners. Physical appearance and other intangible characteristics may certainly be part of the spark that brings two people together, but having to sift through hundreds of profiles may become overwhelming, forcing the looker to start making relationship decisions based on increasingly superficial and ultimately irrelevant criteria.
And remember, says Reis, “Online dating sites have a vested interest in your failure. If you succeed, the site loses two paying customers.”
Communicating online before meeting can help counter some of this mate-shopping effect, but it depends on how long people correspond electronically before taking things offline. A few weeks of email and photo exchanging serves to enhance people’s attraction when they finally meet, researchers found, but when the correspondence goes on too long — for six weeks — it skews people’s expectations and ends up lowering their attraction upon meeting. Over time, people start to form inflated or overly particular views about the other person, which leaves them at risk for being disappointed in the end.
Considering the many pitfalls, what accounts for the enduring popularity — and success — of online dating sites? Part of it may be the fact that singles who use online dating sites are a particularly motivated lot. Their desire to find a spouse and get married may make them more likely to actually find a life partner on the site, or believe that they have. And they’re also probably more likely to believe that the matchmaking algorithms that power so many sites really can find them that person who’s “meant to be.”
It also offers an attractive solution for an age-old problem for singles — where to meet potential mates. As more people delay marriage, either for financial or professional reasons, and with more people constantly moving around to find better jobs, disrupting their social networks, the easily accessed digital community of like-minded singles becomes a tantalizing draw.
Still, those who go online looking for love are left navigating a minefield of odds — not unlike dating in the non-digital realm. But at least there’s solace in matches like my friend’s. If there’s one thing online dating does better than any matchmaker or network of friends who are eager to set you up with that “someone who’s perfect for you,” it’s finding you lots and lots of candidates. “Like anything on the Internet, if you use online dating wisely, it can be a great advantage,” says Reis. You just have to accept that not all of your matches will be your Mr. or Ms. Right.
Alice Park is a writer at TIME. ||||| CHICAGO - Combing dating websites for that perfect love match can be very frustrating, and a group of U.S. psychology professors released a report on Monday explaining why there is no substitute for meeting face-to-face.
"Online dating is a terrific addition for singles to meet. That said, there are two problems," report author Eli Finkel, an associate professor of psychology at Northwestern University, said in an interview.
First, poring over seemingly endless lists of profiles of people one does not know, as on Match.com, does not reveal much about them. Second, it "overloads people and they end up shutting down," Finkel said.
He compared it to shopping at "supermarkets of love" and said psychological research shows people presented with too many choices tend to make lazy and often poor decisions.
The study's authors also questioned the algorithms employed by sites such as eHarmony.com to match people based on their interests or personality - comparing it to having a real estate agent of love.
While the algorithm may reduce the number of potential partners from thousands to a few, they have never met and may be as incompatible as two people meeting at random, Finkel said, adding the odds are no better than finding a relationship by strolling into any bar.
"Eighty years of relationship science has reliably shown you can't predict whether a relationship succeeds based on information about people who are unaware of each other," he said.
The algorithms are proprietary and were not shared with the researchers. "The assumption is they work. We reviewed the literature and feel safe to conclude they do not," he said.
He dismissed the dating websites' own studies on their success as unscientific, and said there are as yet no objective, data-driven studies of online dating. The researchers reviewed the literature on online dating and compared it to previous research.
Finkel said he and four psychology professors from other schools were enlisted by the Association for Psychological Science to write about the online dating industry, and the report was being published in the organization's journal, Psychological Science in the Public Interest.
Perhaps solving what Finkel termed the "original sins" of online dating are mobile dating websites such as Badoo.com and Zoosk.com. The sites offer some information about other members but more importantly allow participants visiting a museum, say, to ask others logged on nearby to meet up.
"There's no better way to figure out whether you're compatible with somebody than talking to them over a cup of coffee or a pint of beer," Finkel said.
(Reporting By Andrew Stern; Editing by Eric Beech) | – Online dating could help you find your perfect match—but your chances aren't any better than they'd be at a bar, a study suggests. You can't tell much about the people listed on sites like Match.com. Browsing such lists "overloads people and they end up shutting down," the psychology professor behind the study tells Reuters. It amounts to shopping at "supermarkets of love": When you have too many choices, you make bad decisions. Algorithms that sites like eHarmony use to match people probably don't help much, the researchers say. "Eighty years of relationship science has reliably shown you can't predict whether a relationship succeeds based on information about people who are unaware of each other," says the professor. In short, "there is no reason to believe that online dating improves romantic outcomes," a co-author tells Time. "It may yet, and someday some service might provide good data to show it can, but there is certainly no evidence to that right now.” |
in 1934 , baade and zwicky published a prophetic paper making a phenomenological connection between supernovae ( sns ) , the core - collapse of massive stars , and the formation of neutron stars ( then hypothetical ) ; all purely on grounds of energetics .
decades later their conjecture was first vindicated by the discoveries of young pulsars in the crab and vela supernova remnants ( snrs ) , and now in a handful of other galactic snr .
supernova remnants come in at least two distinct morphological types , i.e. , shells and plerions ( weiler & sramek 1988 ) , and a majority are the result of core collapse in massive progenitors ( the non - type ia sns ; van den bergh & tammann 1991 ) . the baade - zwicky picture , in its simplest interpretation , is somewhat problematic .
a majority of snr appear not to contain either central pulsars or pulsar plerions ( as pulsars are beamed , plerions ought to be more commonplace than pulsars in the interiors of shells ) .
the predominant `` hollowness '' of shell - remnants is ill - understood , and poses questions about the fate of _ most _ core collapses of massive stars .
this conundrum is nowhere more apparent than in the studies of the youngest snr , especially those of the historical supernovae ( strom 1994 ) .
of the eight historical supernovae which have expanded into full - blown snr , only the crab nebula has a pulsar .
there is weak plerionic activity ( but no beamed pulsars ) in two others ( sn 386 a.d . and sn 1181 a.d . ; see vasisht et al . ) .
it follows , therefore , that there is a need to give up our notions about the birth properties of young neutron stars , best typified by the crab pulsar : fast rotation ( @xmath4 s ) and a dipole field strength clustered around @xmath5 g. that neutron stars may be born in a fashion drastically different from the crab has become increasingly evident via recent x - ray studies .
preliminary evidence of this kind includes : the discovery of radio - quiet , cooling neutron stars ( vasisht et al . 1997 and
refs . therein ; gotthelf , petre & hwang 1997 ) in snr , the association of the exotic soft gamma - ray repeaters with young ( @xmath6 yr old ) snr ( see thompson & duncan 1995 ) , and observations of magnetically dominated plerions ( vasisht et al .
1996 ) . also , the slowly - spinning ( @xmath7 s ) , anomalous x - ray pulsar ( axp ) in the @xmath6 yr old snr ctb 109 , has been known for several years ( gregory & fahlman 1980 ) , although its nature is still widely debated ( mereghetti & stella 1995 ; van paradijs , taam & van den heuvel 1995 ; thompson & duncan 1996 ) .
this _ letter _ discusses 1e1841@xmath0045 , an unresolved einstein point - source discovered near the geometrical center of the shell - type snr kes73 ( kriss et al . ) .
the refined rosat hri location of the object is @xmath8 and @xmath9 ( @xmath10 at 90% confidence ; helfand et al . 1994 ) .
the snr shows no evidence for an extended plerionic core from either radio brightness morphology , polarization properties or spectral index distribution .
this suggests that kes73 , in spite of its inferred youth , lacks a bright radio plerion .
to date , no optical counterpart to the central x - ray source has been identified for 1e1841@xmath0045 .
herein , we present the discovery of @xmath11 s pulsed x - rays from 1e1841@xmath0045 and argue that the source is young and unusual . in our companion paper ( gotthelf & vasisht 1997 ; hereafter gv97 ) we present the results of imaging - spectroscopy of kes73 and the compact source .
kes73 was observed with the asca observatory ( tanaka , inoue & holt 1994 ) on 1993 , october 11 - 12 , as a performance and verification ( pv ) target .
data were acquired by the two gas imaging spectrometers ( gis2 and gis3 ) and collected with a photon time - of - arrival resolution of @xmath12 @xmath13s in medium bit - rate mode and @xmath14 @xmath13s in high bit - rate mode .
we used data made available in the asca public archive , screened with the standard rev1 processing to exclude time intervals corresponding to high background contamination , i.e. , from earth - block , bright earth , and saa passages .
an effective exposure of @xmath15 s was achieved with each detector and the on - source measured count rates for the gis2 and gis3 instruments were 1.55 ( gis2 ) and 1.66 ( gis3 ) counts s@xmath16 , respectively .
here , we concentrate on the gis data exclusively and present data from the two solid - state imaging spectrometers ( sis ) on - board asca , in our companion paper ( gv97 ) , which reports on spectral and imaging results .
we summarize the spectra pertinent to this paper below .
the spectrum of 1e1841@xmath0045 is fit by an absorbed , soft power - law of photon index @xmath17 ( @xmath18 ) .
the foreground absorption towards kes73 is found to be @xmath19 @xmath20 , and is consistent with the kinematic distance estimate of 7 kpc ( sanbonmatsu & helfand 1992 ) .
the power - law spectral normalization is found to be consistent with no long term spectral variation when compared to the count rates observed by the rosat hri , @xmath21 cps ( helfand et al . 1994 ) .
we deduce an unabsorbed model flux of @xmath22 erg @xmath20 s@xmath16 ( 0.5 - 10.0 kev ) yielding a source luminosity , @xmath23 erg s@xmath16 ; the snr distance is @xmath24 kpc .
the long term temporal variability of 1e1841@xmath0045 has been examined by helfand et al .
( 1994 ) who found no concrete evidence for flux changes on the 10 year baseline between einstein and rosat .
we examined the asca data for aperiodic variability by selecting source photons from a @xmath25 radius aperture and binning them on @xmath26 min ( asca orbital period ) , 10 min and 1 min durations .
the obtained light curves were @xmath27 tested against a uniform model , but no evidence for significant variations on these time - scales was found . a search for coherent pulsation from the central object was made by combining the two gis high time resolution datasets ( @xmath28 @xmath13s ) .
photons were selected from the entire snr region of @xmath29 , centered on the compact object from ( i ) the entire energy band ( 0.5 - 10.0 kev ) ( ii ) the hard band ( 2.5 - 10.0 kev ) .
the photon time - of - arrivals were barycentered and binned with resolutions of 488 @xmath13s , 32 ms and 0.5 s. the barycentering and binning procedures were tested on a series of datasets of the crab pulsar and psr 0540@xmath069 .
fourier transforms on the entire datasets were performed at each time resolution , interesting periodicities were harmonically summed and later folded at the period of interest .
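for readers unfamiliar with this kind of timing analysis , a toy version of the folding step ( an epoch - folding , chi - square search over trial periods ) is sketched below in python ; the simulated photon list , the trial - period grid , and the assumed 20% pulsed fraction are placeholders and do not represent the actual gis event files .

import numpy as np

def fold_chi2(times, period, nbins=12):
    # fold barycentered arrival times at a trial period and test the
    # binned phase profile against a uniform (unpulsed) model
    phases = np.mod(times, period) / period
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.sum() / nbins
    return np.sum((counts - expected) ** 2 / expected)   # chi^2 with nbins - 1 dof

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 8.0e4, 5000))                          # photon times [s]
keep = rng.uniform(size=t.size) < 0.8 + 0.2 * np.sin(2 * np.pi * t / 11.7667)
events = t[keep]                                                    # weakly pulsed toy signal

trial_periods = np.linspace(11.75, 11.78, 2000)
chi2 = np.array([fold_chi2(events, p) for p in trial_periods])
print("best trial period: %.5f s" % trial_periods[np.argmax(chi2)])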
a significant high - q x - ray modulation with no overtones is seen at a period @xmath30 s ( @xmath31 hz ; see figs 1 and 2 ) .
the modulation is obvious in all the above datasets and separately in either gis on - source time series ; it is not observed in off - source gis data , making it unlikely to be an instrumental artifact .
the period emerges with greatest significance in time - series which contain emission mainly from the central source ( hard - band , 2.5 - 10.0 kev ) , and is not significant in the soft energy band between 0.5 - 2.5 kev , where the nebula is dominant .
we suggest that the central object is weakly pulsed at 11.7667 s , possibly a neutron star spin period .
the pulsed luminosity is @xmath32 erg s@xmath16 , and the modulation level is about 35% of the steady flux from the compact source after subtraction of the estimated contribution of the background and the snr thermal component above 2.5 kev . with a period in hand ,
we have reanalyzed the 18-ks of rosat hri data obtained between march 16 - 18 , 1992 ( helfand et al .
1994 ) , selecting the few ( @xmath33 ) source and background photons available from the vicinity of 1e1841@xmath0045 .
we perform a conditional search in a small range of periods around 11.76 s using the folding technique ( 12 phase bins per fold ) ; the resulting periodogram is displayed in fig 1b .
we cautiously forward the suggestion of a peak - up at a barycentered period of 11.7645 s ( 4.0 @xmath34 ; which is the expected significance given the hri count rate and the asca derived modulation ) ; the observed fwhm of the periodogram excess of @xmath35 s is consistent with expectations , given the period and the time - span of the hri observations . also , the crest and trough in the hri profile ( fig 2b ) match those of the gis profile quite well .
a linear interpolation , assuming steady spin - down , gives a period derivative of @xmath36 s s@xmath16 .
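the arithmetic behind that interpolation is just the period difference divided by the epoch separation ; the sketch below uses the two barycentered periods quoted above and an assumed baseline of roughly 19 months , so the result is indicative only .

p_rosat = 11.7645           # s , rosat hri , 1992 march
p_asca = 11.766684          # s , asca gis , 1993 october
dt_sec = 574 * 86400.0      # assumed ~19 - month baseline , in seconds
p_dot = (p_asca - p_rosat) / dt_sec
print("pdot ~ %.1e s/s" % p_dot)   # of order 4e-11 s/s for these inputs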
our interpretation of 1e1841@xmath0045 is that of a young neutron star that was born during a supernova that now forms the snr kes73 . the kinematic distance to the snr of 6.7 - 7.5
kpc is consistent with its high foreground x - ray absorption , @xmath37 @xmath20 .
the small shell radius ( @xmath38 pc ) , along with intense radio and x - ray shell emission are characteristics of a young supernova remnant .
this notion is supported by our detection of a hot thermal ( @xmath39 kev ) x - ray continuum in the shocked gas .
helfand et al .
( 1994 ) have argued that the snr is likely to be in transition between free expansion and the adiabatic phases , whereby , the sedov age of @xmath40 yr must be an upper bound .
the sis spectra show the enhancement of mg over the species s , si and fe , along with possible evidence for highly absorbed emission from o and ne in the 0.5 - 0.9 kev range ( see gv97 ) .
during the course of their evolution , massive stars produce large quantities of o - group elements ( o , ne and mg ) which are ejected during the supernova ( see hughes et al . ) .
hence , there is spectral evidence that kes73 is young and still ejecta dominated .
on spectral grounds we also favor a type ii or ib origin for kes73 , i.e. , from a massive progenitor .
we consider it unlikely that the neutron star was born in an accretion - induced collapse of a heavy white dwarf ( lipunov & postnov 1985 ) .
the pulsar in 1e1841@xmath0045 has common properties with the peculiar x - ray pulsar , 1e2259 + 586 ( gregory & fahlman 1980 ; corbet et al .
1995 ) , and the soft gamma - ray repeaters ( thompson & duncan 1996 ) .
1e 2259 + 586 is a 7-s spin rotator , and coincides with a @xmath41 yr - old snr ( ctb 109 ) .
much like 1e1841@xmath0045 , it has a soft x - ray spectrum best represented by a blackbody at 0.45 kev with a non - thermal tail with @xmath42 , and @xmath43 erg s@xmath16 .
it has a history of nearly steady spin - down and no detected binary modulation ( iwasawa , koyama & halpern 1992 ) , optical companion , or quiescent radio emission .
there are four other x - ray pulsars , 4u0142 + 614 ( hellier 1994 ; israel et al .
1994 ) , 1e1048.1 - 5937 ( seward , charles & smale 1986 ) , rxj1838@xmath00301 ( schwentker 1994 ; possibly also associated with a @xmath41 yr old snr ) , and 1rxs j170849.0@xmath0400910 ( sugizaki et al . 1997 ) which have low luminosities ( @xmath44 erg s@xmath16 ) , periods of order 10-s that are steadily increasing , soft spectra and no detected companions or accretion disks . in all the above cases , spectra can be fit with soft power - laws with indices in the range 2.3 - 3.5 ( corbet et al . 1995 ) .
collectively , these have been grouped into a class called the braking x - ray pulsars ( mereghetti & stella 1995 ) or , alternatively , the anomalous x - ray pulsars ( van paradijs et al . 1995 ) .
we observe axps through a substantial distance in the galactic disk ( with foreground column densities @xmath45 @xmath20 ) , which suggests that they are not commonplace .
the rotational energy of 1e1841@xmath0045 is far too small to power its total x - ray emission of @xmath46 .
the maximum luminosity derivable just from spin - down is @xmath47 where @xmath48 is the moment of inertia of the neutron star , and @xmath49 yr is the snr age
. the mechanisms for powering the x - rays could then be either ( i ) accretion from a high - mass x - ray binary ( helfand et al .
1994 ) , a low mass companion ( mereghetti & stella 1995 ) , or a fossil disk ( van paradijs et al . 1995 ) , or a merged white dwarf ( paczyncski 1990 ) ( ii ) intrinsic energy loss , such as initial cooling or the decay of magnetic fields in a magnetic neutron star ( thompson & duncan 1996 ) . the strongest argument for accretion as
the source of energy is that the inferred accretion rate is just that required if the ns is close to its equilibrium spin period @xmath50 , with a field @xmath51 g typical for young pulsars ( see bhattacharya & van den heuvel 1991 ) @xmath52 however , only a pathological evolution scenario involving accretion could bring an energetic dipole rotator to its present rotation rate within @xmath53 yr . the strongest support for 1e1841@xmath0045 as an accretor will be from future identification of an infra - red ( large foreground @xmath54 mag . ) companion or an accretion disk . as in the case of other axps an infrared counterpart may not be easily identified ( coe , jones & lehto 1994 and refs . therein ) .
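for orientation , the equilibrium - period relation invoked above ( the placeholder equation in the text ) has the standard form obtained by equating the corotation radius with the magnetospheric radius ; the normalization quoted here is a fiducial estimate only , not the value used by the authors :

p_{\rm eq} \propto b^{6/7} \, \dot{m}^{-3/7} \sim {\rm a\ few\ s} \; ( b / 10^{12}\,{\rm g} )^{6/7} \, ( \dot{m} / 10^{16}\,{\rm g\,s^{-1}} )^{-3/7} ,

which simply makes explicit how the equilibrium period lengthens for stronger fields and lower accretion rates ; the precise coefficient depends on the magnetospheric - radius prescription ( e.g. ghosh & lamb 1979 ) .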
the pulsar has properties that may already preclude accretion as a power source : ( i ) high - mass neutron star binaries with a ns accreting from the companion wind sometimes go into low luminosity states with @xmath55 erg s@xmath16 , and have periods in the range 0.07 - 900 s. however , in general they display hard spectra ( @xmath56 ) and strong aperiodic variability on all time - scales ( nagase 1989 ) . if the x - ray source is indeed a high mass x - ray binary then the inferred @xmath57 could be the result of orbital doppler effects .
( ii ) neutron stars with disk accretion from a low mass companion or a fossil disk , with the latter having formed from sn debris or a thorne - zytkow phase ( van paradijs et al .
1995 ) , should display similar accretion noise in the light - curve .
( iii ) other axps show near steady spin - down on time - scales @xmath58 yr , although this is controversial ( baykal & swank 1996 ; corbet & mihara 1997 ) .
the long term torque behavior of 1e1841@xmath0045 will only be evident with future observations .
( iv ) finally , accretion models would have to be stretched in order to explain the slow rotation period ( inside a young snr ) and its associated spin - down time - scale , @xmath59 yr .
first , it is difficult for accretion torques to spin - down a pulsar to 12-s in @xmath60 yr from initial periods @xmath61 ms unless , of course , the pulsar were _ born a very slow rotator _ , which is quite interesting in its own right .
secondly , if the pulsar were rotating near its equilibrium period , as in the ghosh and lamb ( 1979 ) scenario , the spin - down time of @xmath62 yr is inconsistent with the implied accretion rate , @xmath63 m@xmath64 yr@xmath16 ( assuming the pulsar has a normal dipolar field @xmath65 g ) ; these usually lie in the range @xmath58 yr . a dipolar magnetic field vs. period scaling
can also be obtained under the assumption that the neutron star is isolated , and has undergone conventional pulsar spin - down from torques provided by a relativistic wind , as in the crab . for dipolar secular spin - down ,
the implied @xmath66 is enormous , @xmath67 such highly magnetized neutron stars ( with dipolar field strengths @xmath68 g ) or `` magnetars '' have been postulated by thompson & duncan ( 1995 and refs . therein ) to explain the action of soft gamma - ray repeaters .
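for completeness , the two standard dipole spin - down relations behind these statements are

b_{\rm dip} \simeq 3.2 \times 10^{19} \, ( p \dot{p} )^{1/2} \ {\rm g} , \qquad \tau_c \equiv p / ( 2 \dot{p} ) ;

taking p of about 11.77 s and a period derivative of order 4 x 10^-11 s s^-1 ( the value suggested by the two epochs discussed above ) gives a dipole field of order 7 x 10^14 g and a characteristic age of order 4 x 10^3 yr . these are order - of - magnitude estimates offered for orientation , not a re - derivation of the values quoted in the text .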
magnetars have magnetic flux densities that are a factor @xmath69 larger than the typical @xmath51 g fields supported by radio or x - ray pulsars and perhaps represent a tail of @xmath70-field distribution in ns .
after birth , they spin - down too rapidly to be easily detectable as radio pulsars , assuming that they are capable of radio pulsar action at all .
the dipole energy in the star 's exterior , a small fraction of the total magnetic energy , exceeds the rotational energy of the ns after roughly @xmath71 yr , where @xmath72 g. magnetism then quickly becomes the dominant source of free energy in an isolated magnetar . the derived spin - down age of the pulsar , @xmath73 yr , is consistent with the snr age ( ages inferred from the estimator @xmath74 are larger than the true age as they measure linear spin - down ) . the equivalent dipolar field is @xmath75 g. there is an intriguing possibility that the pulsar in kes73 was born as a magnetar @xmath76 yr ago , and has since spun down to the long period of 11.7-s due to rapid dipole radiation losses
. it could be unobservable as a radio pulsar due to period dependence of beaming ( kulkarni 1992 ) ; it is also possible the radio pulsar mechanism may operate differently or not at all above the quantum critical field , @xmath77 g. in a magnetar , the observed x - ray luminosity would be driven by the decay of the stellar b - field via diffusive processes , which set in at an age @xmath78 yr ( thompson & duncan 1996 ; we assume that diffusion of field lines through the crust by hall drift , and the core by ambipolar diffusion occur on time - scales @xmath79 yr and @xmath80 yr , respectively ; goldreich & reisenegger 1992 ) . magnetic field decay powers the star on average , at a steady rate of @xmath81 erg s@xmath16 for the first decay time , @xmath82 yr . decay in the core is likely to keep the stellar surface hot via release of internal magnetic free energy , while crustal decay is likely to set up a steady spectrum of alfven waves in the magnetosphere which can accelerate particles to produce the soft non - thermal tail observed in the pulsar spectrum ( gv97 ) .
such a neutron star may in time ( @xmath6 yr ) display the soft gamma - ray repeater phenomenon ( thompson & duncan 1996 ) . in conclusion , we reiterate that 1e1841 - 045 is in our estimation a young ( @xmath2 yr - old ) neutron star spinning at an anomalously slow rate of @xmath1-s , possibly with very strong torques on its rotation .
the claim of rapid spin - down is based on a weak detection of periodicity in the archival rosat hri data , and needs to be urgently tested in future observations .
whatever the final consensus on 1e1841@xmath0045 , it is a most unusual and exciting object , the understanding of whose nature should make us rethink important aspects about the birth process of neutron stars as a whole .
* acknowledgments : * first of all , we thank the heasarc archives for making the asca data available to us .
gv would like to thank shri kulkarni for discussions and for making the trip to gsfc possible , and to the lhea at gsfc for hosting him .
we thank the lhea for generous use of its facilities .
gv s research is supported by nasa and nsf grants .
evg s research is supported by nasa .
gv thanks david helfand for earlier discussions on kes 73 .
* fig . 1 * ( top ) a power spectrum of photons from both gis cameras displayed in the range 11.75 - 11.78 s. photons were selected from the hard - band ( 2.5 - 10.0 kev ) .
the main peak near 11.766684 s is the putative pulsar period .
the powerful side - lobe peaks are separated from the main peak by 0.00017 hz , the asca orbital period .
( bottom ) a periodogram of @xmath27 vs. period for the hri data peaks up at roughly 4@xmath34 , as is expected from the gis profile ( even though the energy range is different ) .
the peak - up period is 11.7645 s. the total number of hri counts for this was @xmath83 .
the search was done with twelve bins across the folding period .
* fig . 2 * ( top ) a normalized folded profile of the gis data ( including background ) , with 12 bins of resolution , and a folding period of 11.7667 s. the profile is roughly sinusoidal , with about @xmath84 35% modulation ( after accounting for the background ) .
the start epoch of folding is mjd @xmath85 .
( bottom ) a normalized folded profile of rosat hri data , with 12 bins of resolution , at a folding period of 11.7645 s. the start epoch of folding is mjd @xmath86 . | we report the discovery of pulsed x - ray emission from the compact source 1e 1841@xmath0045 , using data obtained with the _ advanced satellite for cosmology and astrophysics_. the x - ray source is located in the center of the small - diameter supernova remnant ( snr ) kes73 and is very likely to be the compact stellar - remnant of the supernova which formed kes73 .
the x - rays are pulsed with a period of @xmath1 s , and a sinusoidal modulation of roughly 30% .
we interpret this modulation to be the rotation period of an embedded neutron star , and as such would be the longest spin period for an isolated neutron star to - date .
this is especially remarkable since the surrounding snr is very young , at @xmath2 yr old .
we suggest that the observed characteristics of this object are best understood within the framework of a neutron star with an enormous dipolar magnetic field , @xmath3 g.
this was a cross - sectional study with 321 ( 190 men and 131 women ) participants waiting at medical centers in tehran .
they were recruited through random cluster sampling from medical centers ( private , governmental and charity ) , hospitals and clinics in the second , third and seventh districts of tehran .
the present study utilized the researcher - devised waiting anxiety questionnaire ( waq ) , spielberger 's stai , the burtner rating scale ( brs ) for personality type , and the epq ( adult form ) .
the waq has been devised on the basis of spielberger 's stai and possesses 20 three - choice items .
the questions of the questionnaire are concerned with cognitive , physiological , emotional and behavioral ( states or traits ) aspects of anxiety .
the participants replied to the questions on a three - point likert scale ( never , sometimes , often ) , scored from 0 to 2 , respectively . in all the items except items 3 , 11 , 15 and 18 , the
never choice showed absence of anxiety , the sometimes choice indicated medium anxiety , and the often choice showed high degrees of anxiety .
in addition to this questionnaire , the participants responded to spielberger 's stai , which evaluates individuals ' anxiety and uncertainty or assesses how they react to mental pressure .
the stai was developed by spielberger et al . ( 1970 ) and involves 40 questions , among which 20 assess latent anxiety and the other 20 are concerned with hidden anxiety .
this questionnaire has been utilized as the most common test to evaluate anxiety in different studies during the last 20 years .
it has also been adapted in iran ( 2003 ) and applied frequently in various studies , and its validity and reliability have been scrutinized numerous times ( 16 ) . the burtner rating scale ( brs ) was devised by burtner and freedman ( 1976 ) to assess types a and b ; it involves 14 items , and each item is comprised of two phrases .
a conventional cut - off point of 70 can be considered to separate types a and b because the range of scores ( 0 - 70 ) leans towards type b rather than type a ; its validity and reliability were investigated on 420 participants in iran by isfahan medical university ; its validity was reported to be 0.79 , and the test - retest reliability coefficient for this scale was reported to be 0.71 to 0.84 .
furthermore , its concurrent validity with the structured interview was 0.75 , and it was 0.70 with the scale of jenkins et al .
the 90-item eysenck personality questionnaire ( epq ) was developed to identify one s personality characters ( 19 ) .
epq targets three important personality dimensions : psychoticism ( p ) , extroversion ( e ) and neuroticism ( n ) ( 20 ) .
this tool contains 21 questions to identify the subject 's truthful compliance with the questions . the study data were analyzed by descriptive statistical methods , the pearson correlation coefficient , and principal factor analysis .
the present study utilized the researcher s devised waiting anxiety questionnaire , spielberger s stai , personality type , brs and epq ( adult form ) .
the waq has been devised on the basis of spielberger s stai which possessed 20 three - choice items .
the questions of the questionnaire are concerned with cognitive , physiological , emotional and behavioral ( states or traits ) aspects of anxiety .
the participants replied to the questions with three - point likert scale ( never , sometimes , often ) which were scored from 0 to 2 , respectively . in all the items except items 3 , 11 , 15 and 18 , the
never choice showed absence of anxiety , the sometimes choice indicated medium anxiety , and the often choice showed high degrees of anxiety .
in addition to this questionnaire , the participants responded to spielberger s stai which evaluates individuals anxiety and uncertainty or assesses how they react to mental pressure .
( 1970 ) and involves 40 questions among which 20 assess latent anxiety and the other 20 are concerned with hid anxiety .
this questionnaire has been utilized as the most common test to evaluate anxiety in different studies during the last 20 years .
( 2003 ) in iran and has been applied frequently in various researches , and its validity and reliability have been scrutinized for numerous times ( 16 ) burtner rating scale ( brs ) has been devised by burtner and freedman ( 1976 ) to assess type a and b ; it involves 14 items and each item is comprised of two phrases .
conventional point of 70 can be considered to separate type a and b because the range of scores ( 0 - 70 ) leans towards type b rather than type a ; its validity and reliability were investigated on 420 participants in iran by isfahan medical university ; its validity was reported 0.79 ; and the reliability coefficient of test- retest for this scale was reported to be 0.71 to 0.84 .
furthermore , its simultaneous validity with the organized interview was 0.75 , and it was 0.70 with the scale of genkinz et al .
the 90-item eysenck personality questionnaire ( epq ) was developed to identify one s personality characters ( 19 ) .
epq targets three important personality dimensions : psychoticism ( p ) , extroversion ( e ) and neuroticism ( n ) ( 20 ) .
this tool contains 21 questions to identify the subject s truthful compliance to the questions . the study data were analyzed by descriptive statistical methods , the pearson correlation coefficient and the method of principal factor analysis .
the study participants consisted of 321 individuals , 190 men and 131 women aged 19 to 45 ( mean age of 35.11 yrs ) . of all the participants , 207 were married ; of the 321 participants , 50 did not have a high school diploma , while 271 had tertiary education .
table 1 indicates the statistical characteristics of 20 items of the questionnaire , the general score , and the correlation of each item with the general score and the effect of omission of each item in alpha cronbach .
the mean of the 20 items ranged from 1.03 ( item 18 ) to 1.93 ( item 15 ) and their standard deviations ranged from 0.15 ( item 19 ) to 0.95 ( item 19 ) .
the mean and standard deviation of the whole questionnaire was 29.04 and 8.49 , respectively .
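as a rough illustration only ( the data , variable names and scores below are made up and are not the study s data ) , the item statistics described above , namely the correlation of each item with the total score and cronbach s alpha when an item is omitted , could be computed along these lines in python :

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) array of item scores (here 0, 1 or 2)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    def item_total_analysis(items):
        total = items.sum(axis=1)
        for i in range(items.shape[1]):
            r_item_total = np.corrcoef(items[:, i], total)[0, 1]
            alpha_if_deleted = cronbach_alpha(np.delete(items, i, axis=1))
            print(f"item {i + 1:2d}: r with total = {r_item_total:.2f}, "
                  f"alpha if item deleted = {alpha_if_deleted:.2f}")

    # illustrative data: 321 respondents answering 20 three-point items scored 0-2
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 3, size=(321, 20)).astype(float)
    print("cronbach alpha of the whole scale:", round(cronbach_alpha(responses), 2))
    item_total_analysis(responses)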
the content validity of the questionnaire was confirmed by 10 specialists ( psychologists and psychiatrists ) , and the split - half method was applied to evaluate reliability ; the guttman coefficient for the split - half method was 0.84 . to scrutinize test - retest reliability , 80 participants were randomly chosen , and they completed the waiting anxiety test again two weeks later . the coefficient of correlation between these two tests was 0.82 , which demonstrates the high reliability of the questionnaire . in order to analyze the internal validity and simultaneous validity of waiting anxiety in this study ,
the correlation of the waq with spielberger s stai , brs and epq ( adult form ) was calculated . moreover , the coefficient of correlation of the waq with the stai was 0.65 ; it was 0.78 with type a personality , 0.23 with psychoticism , 0.43 with neuroticism and -0.47 with extraversion , which was significant ( p < 0.0001 ) in all instances .
the value obtained for the kmo measure was higher than 0.7 , and the significance level of bartlett s sphericity test was less than 0.05 . therefore , the data in the present study can be considered suitable for factor analysis ( 21 ) .
therefore , four factors were extracted which confirm the factor analysis validity of waq in table 3 .
the reason for the utilization of this method in this study is that varimax rotation produces factors which show high correlations with smaller set of variables , whereas they reveal a weak correlation with another set of variables .
a factor loading higher than 0.46 was used as the criterion to select the items for each factor .
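a minimal sketch of the corresponding analysis pipeline ( illustrative only : the data are simulated , the kmo step is omitted , and in practice a dedicated package such as factor_analyzer is often used for kmo and bartlett s test ) :

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import FactorAnalysis

    def bartlett_sphericity(x):
        # chi-square test that the item correlation matrix is the identity matrix
        n, p = x.shape
        corr = np.corrcoef(x, rowvar=False)
        chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
        dof = p * (p - 1) / 2.0
        return chi2, stats.chi2.sf(chi2, dof)

    rng = np.random.default_rng(0)
    responses = rng.normal(size=(321, 20))   # placeholder for the 321 x 20 item scores

    chi2, p_value = bartlett_sphericity(responses)
    print(f"bartlett sphericity: chi2 = {chi2:.1f}, p = {p_value:.4f}")

    fa = FactorAnalysis(n_components=4, rotation="varimax")
    fa.fit(responses)
    loadings = fa.components_.T              # shape (20 items, 4 factors)
    for factor in range(loadings.shape[1]):
        kept = np.where(np.abs(loadings[:, factor]) > 0.46)[0] + 1
        print(f"factor {factor + 1}: items with loading above 0.46 -> {kept.tolist()}")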
the mentioned factors are as follows : items for physiology ( factor 1 ) : waiting in line upsets me so as breathing will be difficult for me .
while waiting , my heart beat increases . while waiting , my body temperature changes .
items for cognitive ( factor 2 ) : when i am waiting for a turn , i like to finish my work earlier than the other people .
when i am waiting , i often think that i have dropped behind my works .
i think that if i did not have to wait , i would visit the doctor more .
items for behavioral ( factor 3 ) : when i am waiting , i have to walk .
when i think i have to wait , i do not enter into the environment at all .
when i am standing in line , i should take care not to miss my turn by others .
items for emotional ( factor 4 ) : when i am in waiting room , i feel pleased .
when i am waiting to visit the doctor , i feel relaxed and comfortable . in the waiting room ,
the correlation analysis of the waq demonstrates a significant relation with such demographic variables as overall anxiety , gender and education , whereas no relation was observed with family anxiety and occupation .
t test was applied to analyze the relation between gender and waiting anxiety which revealed that gender posits a significant influence on waiting anxiety ( t = 2.045 , df = 314 , p = 0.05 ) .
with respect to education , it was indicated that individuals with tertiary education showed more waiting anxiety than those individuals who did not have a high school diploma ( t = -5.166 , df .
the relationship of overall anxiety with waiting anxiety demonstrated that those individuals experiencing general anxiety have higher waiting anxiety ( t = -3.928 , df = 314 , p = 0.001 ) .
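the group comparisons reported above are plain independent - samples t tests ; a minimal illustration ( group sizes follow the sample description , but the score values are invented ) :

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    waq_men = rng.normal(28.0, 8.5, size=190)     # invented waq totals for the 190 men
    waq_women = rng.normal(30.5, 8.5, size=131)   # invented waq totals for the 131 women

    t, p = stats.ttest_ind(waq_men, waq_women)
    print(f"gender effect on waiting anxiety: t = {t:.3f}, p = {p:.3f}")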
the findings suggest that waq is a valid and reliable questionnaire to be used in iranian waiting population .
the findings of this study reveal that the mean of participants in waq was 29.04 .
the correlation analysis between items of the waq and the total score indicated that each of them were highly correlated ( 0.49 to 0.78 ) with the total score .
with respect to reliability , it was indicated that all the items had almost the same role in the total score .
furthermore , omitting any single item did not increase alpha significantly ; therefore , changing or omitting the questionnaire s items did not seem essential .
the simultaneous validity of this questionnaire was confirmed by calculating the correlation of the waq with the stai , brs and epq ; the validity of the waiting anxiety construct was acceptable , as the correlation coefficient of the waq with the stai was 0.67 , which indicates the consistency of these two scales in scrutinizing anxiety .
it indicates that an individual s anxiety states while waiting are so similar to the states and traits of stressed individuals .
the correlation coefficient of waiting anxiety with personality type questionnaire was 0.78 which illuminates that individuals with type a personality have the tendency to anxiety while they are waiting .
in other words , symptoms of waiting anxiety exist more in individuals with type a personality and will increase in environments in which an individual is waiting . the correlation coefficients of waiting anxiety with the epq neuroticism and extraversion subscales were 0.43 and 0.48 , respectively , which were significant in all instances ( p < 0.001 ) , and indicate a high rate of anxiety while waiting in most individuals who have a stressful personality .
moreover , the significant negative correlation of extroversion with waiting anxiety shows less anxiety rate in extraverted individuals while waiting .
the consistency of neuroticism signs with waiting anxiety shows that individuals with neurotic personality reveal more symptoms of waiting anxiety .
the study results also revealed that overall anxiety affects waiting anxiety as individuals who have anxiety in different levels show more waiting anxiety , whereas it does not show any significant correlation with family anxiety and occupation .
the relation between education and waiting anxiety was investigated , and it was demonstrated that individuals with university education show higher waiting anxiety than individuals who did not complete high school . this suggests that educated individuals probably feel more time urgency ; their higher sense of rivalry and superiority leads them to experience more anxiety during waiting times .
therefore , its generalizability may not extend beyond this study . the effect of environmental factors on waiting anxiety can be investigated in future studies .
waiting environment is a crucial factor in creating anxiety , especially in medical places , and our findings suggest that the waq possesses appropriate validity and reliability to measure individuals anxiety during waiting time . | objective : this study aimed to develop and validate a questionnaire to measure waiting anxiety . methods : this was a cross - sectional study .
extensive review of literature and expert opinions were used to develop and validate the waiting anxiety questionnaire .
a sample of 321 participants was recruited through random cluster sampling ( n = 190 iranian men and n = 131 women ) .
the participants filled out the waq , spielberger s state - trait anxiety inventory ( stai ) , the burtner rating scale ( brs ) and the eysenck personality questionnaire ( epq ) for adults . results : internal consistency of the waq was revealed , meaning that all the 20 items were highly correlated with the total score .
the cronbach alpha equaled 0.83 for the waiting anxiety questionnaire . the pearson correlation coefficient of the questionnaire with the stai , brs and the extraversion and neuroticism subscales of the epq was 0.65 , 0.78 , -0.47 and 0.43 , respectively , which confirmed its convergent and divergent validity .
factor analysis extracting four cognitive , behavioral , sentimental and physiological factors could explain 67% of the total variance with an eigenvalue greater than 1 . conclusion : our findings suggest that the waq possesses appropriate validity and reliability to measure individuals anxiety during the waiting time .
we focus on the logarithmic sobolev inequality for unbounded spin systems on the d - dimensional lattice @xmath1 ( @xmath2 ) with quadratic interactions .
the aim of this paper is to prove that when the single site measure without interactions ( consisting only of the phase ) @xmath3 satisfies the log - sobolev inequality , then the gibbs measure of the associated local specification @xmath4 , with hamiltonian @xmath5 also satisfies a log - sobolev inequality , when the interactions @xmath6 are quadratic .
since the main condition about the phase measure does not involve the local specification @xmath7 nor the one site measure @xmath8 , we present a criterion for the infinite dimensional gibbs measure inequality without assuming or proving the usual dobrushin and shlosman s mixing conditions for the local specification as in @xcite and more recently in @xcite . as a matter of fact , in order to control the boundary conditions involved in the interactions
, we will make use of the u - bound inequalities introduced in @xcite to prove coercive inequalities in a standard non statistical mechanics framework . as a result
we prove the inequality for a variety of phases extending beyond the usual euclidean case , as well as involving measures with phase like @xmath9 that go beyond the typical convexity at infinity . for the investigation of criteria for the logarithmic sobolev inequality for the infinite dimensional gibbs measure of the local specification @xmath4
two main approaches have been developed .
the first approach is based on first proving that the measures @xmath4 satisfy a log - sobolev inequality with a constant uniform in the set @xmath10 and the boundary conditions @xmath11 .
then the inequality for the gibbs measure follows directly from the uniform inequality for the local specification .
criteria for the local specification to satisfy a log - sobolev inequality uniformly on the set @xmath10 and the boundary conditions have been investigated by @xcite , @xcite , @xcite , @xcite , @xcite , @xcite and @xcite .
similar results for the weaker spectral gap inequality have been obtained by @xcite .
the second approach focuses on obtaining the inequality for the gibbs measure directly , without first showing the stronger uniform inequality for the local specification .
such criteria on the local specification in the case of quadratic interactions for the infinite - dimensional gibbs measure on the lattice have been investigated by @xcite , @xcite and @xcite .
the problem of passing from the single site to infinite dimensional measure , in the case of quadratic interactions , is addressed by @xcite , @xcite and @xcite .
what has been shown is that when the one site measure @xmath8 satisfies a log - sobolev inequality uniformly on the boundary conditions , then in the presence of quadratic interactions the infinite gibbs measure also satisfies a log - sobolev inequality . for the single - site measure @xmath8 ,
necessary and sufficient conditions for the log - sobolev inequality to be satisfied uniformly over the boundary conditions @xmath11 , are also presented in @xcite , @xcite and @xcite .
the scope of the current paper is to prove the log - sobolev inequality for the gibbs measure without setting conditions neither on the local specification @xmath7 nor on the one site measure @xmath8 .
what we actually show is that in the presence of quadratic interactions , the gibbs measure always satisfies a log - sobolev inequality whenever the boundary free one site measure @xmath12 satisfies a log - sobolev inequality . in that way
we improve the previous results , since the log - sobolev inequality is determined by the phase @xmath13 of the simple measure without interactions @xmath14 on @xmath15 alone , for which a plethora of criteria and examples of good measures that satisfy the inequality exist .
we consider the @xmath16-dimensional integer lattice @xmath1 with the standard neighborhood notion , where two lattice points @xmath17 are considered neighbours if their lattice distance is one , i.e. they are connected with an edge , in which case we write @xmath18 .
we will also denote @xmath19 for the set of all neighbours of a node @xmath20 and @xmath21 the boundary of a set @xmath22 .
our configuration space is @xmath23 , where @xmath15 is the spin space .
we consider unbounded @xmath24-dimensional spin spaces @xmath15 with the following structure .
we shall assume that @xmath15 is a nilpotent lie group on @xmath25 with a hörmander system @xmath26 , @xmath27 , of smooth vector fields @xmath28 , @xmath29 , i.e. @xmath30 are smooth functions of @xmath31 .
the ( sub)gradient @xmath32 with respect to this structure is the vector operator @xmath33 .
we consider @xmath34 when these operators refer to a spin space @xmath35 at a node @xmath36 this will be indicated by an index @xmath37 . for a subset @xmath10 of @xmath1
we define @xmath38 and @xmath39 the spin space @xmath15 is equipped with a metric @xmath40 for @xmath41 .
for example , in the case of @xmath15 being a euclidean space then @xmath16 is the euclidean metric or if @xmath15 is the heisenberg group , then @xmath16 is the carnot - carathéodory metric .
we will consider examples and applications of the main theorem for both . in all cases , for @xmath42 we will conventionally write @xmath43 , for the distance of @xmath44 from @xmath45 @xmath46 where @xmath45 is a specific point of @xmath15 , for example the origin if @xmath15 is @xmath25 or the identity element of the group when @xmath15 is a lie group . furthermore , we assume that there exists a @xmath47 such that @xmath48 .
for instance , in the euclidean and the carnot - carathéodory metrics @xmath49 . a spin at a site @xmath50 of a configuration @xmath51 will be indicated by an index , i.e. we will write @xmath52 .
this takes values in @xmath35 which is an identical copy of the spin space @xmath15 . for a subset @xmath53
we will identify @xmath54 with the cartesian product of the @xmath35 for every @xmath55 .
the spin space @xmath15 is equipped with a natural measure .
for example , when @xmath15 is a group then we assume that the measure is one which is invariant under the group operation , for which we write @xmath56 . again , for any @xmath36 , we use a subscript to indicate the natural measure @xmath57 on @xmath35 . in the case of a euclidean space or the heisenberg group for instance ,
this is the lebesgue measure . for the product measure derived from the @xmath57 , @xmath58 we will write @xmath59
. the measures of the local specification @xmath60 for @xmath22 and @xmath61 , are defined as @xmath62 where @xmath63 is a normalization constant . the hamiltonian function @xmath64 has the form @xmath5 we call @xmath13 the phase and @xmath6 the interaction . in this work
we consider exclusively quadratic interactions @xmath6 , i.e. @xmath65 for some @xmath66 .
we will assume that there exists a @xmath67 such that @xmath68 and that @xmath69 .
for a function @xmath70 from @xmath71 into @xmath72 , we will conventionally write @xmath73 for the expectation of @xmath70 with respect to @xmath74 . for economy we will frequently omit the boundary conditions and
we will write @xmath75 instead of @xmath74 . the measures of the local specification obey the markov property @xmath76 we say that the probability measure @xmath77 on @xmath78 is an infinite volume gibbs measure for the local specifications @xmath79 if it satisfies the dobrushin - lanford - ruelle equation : @xmath80
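for orientation , the dobrushin - lanford - ruelle condition hidden in the placeholder above is usually written ( in one standard notation , which may differ slightly from the elided formula ) as

\[
\nu\big( \mathbb{E}^{\Lambda,\,\cdot}\, f \big) \;=\; \nu ( f )
\qquad \text{for every finite } \Lambda \subset \mathbb{Z}^{d} \text{ and all suitable } f .
\]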
we refer to @xcite , @xcite and @xcite for details . throughout the paper
we shall assume that we are in the case where @xmath77 exists ( uniqueness will be deduced from our results , see proposition [ 7prop2 ] ) .
furthermore , we will consider functions @xmath81 such that @xmath82 .
the main interest of the paper is the logarithmic sobolev inequality .
we say that a probability measure @xmath14 in @xmath15 satisfies the logarithmic sobolev inequality , if there exists a constant @xmath83 such that @xmath84 we notice two important properties for the log - sobolev inequality .
the first is that it implies the spectral gap inequalities , that is , there exists a constant @xmath85 such that @xmath86 the second is that both the log - sobolev inequality and the spectral gap inequality are retained under product measures .
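for orientation , the standard forms of the two inequalities just discussed ( the elided expressions above presumably express the same thing , up to the choice of ( sub)gradient and constants ) read

\[
\mathrm{Ent}_{\mu}\big(f^{2}\big) \;:=\; \mu\Big(f^{2}\log\frac{f^{2}}{\mu f^{2}}\Big) \;\le\; c\,\mu\,\|\nabla f\|^{2}
\qquad \text{( log - sobolev )} ,
\]
\[
\mathrm{Var}_{\mu}(f) \;:=\; \mu\big(f-\mu f\big)^{2} \;\le\; c_{0}\,\mu\,\|\nabla f\|^{2}
\qquad \text{( spectral gap )} .
\]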
proofs of these two assertions can be found in gross @xcite , guionnet and zegarlinski @xcite and bobkov and zegarlinski @xcite . under the spin system framework the log - sobolev inequality for
the local specification @xmath79 takes the form @xmath87 where the constant @xmath88 is now required uniformly on the subset @xmath10 and the boundary conditions @xmath89 . in the special case where @xmath90 then the constant is considered uniformly on the boundary conditions @xmath91 . the analogue log - sobolev inequality for the infinite volume gibbs measure @xmath77
is then defined as @xmath92 the aim of this paper is to show that the infinite volume gibbs measure @xmath77 satisfies the log - sobolev inequality ( [ lsg ] ) for an appropriate constant . as explained in the introduction , in the case of quadratic interactions , previous works concentrated in proving first the stronger ( [ lse ] ) for all @xmath22 , or assumed the log - sobolev inequality ( [ lse ] ) for the one site @xmath93 and then derived from these ( [ lsg ] ) .
our aim is to show that if we assume the weaker inequality ( [ ls ] ) for the phase measure @xmath94 , then in the presence of quadratic interaction this is sufficient to obtain directly the log - sobolev inequality for the gibbs measure ( [ lsg ] ) , without the need to assume or prove any of the stronger inequalities ( [ lse ] ) that require uniformity on the boundary conditions and/or the dimension of the measure .
the first result of the paper follows : [ theorem ] assume that the measure @xmath94 in @xmath95 satisfies the log - sobolev inequality and that the local specification @xmath96 has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then for @xmath67 sufficiently small the infinite dimensional gibbs measure in @xmath71 satisfies a log - sobolev inequality .
since the main hypothesis of the theorem refers just to the measure @xmath94 satisfying a logarithmic sobolev inequality , we can take all the probability measures from @xmath25 that satisfy a log - sobolev inequality and get measures on the statistical mechanics framework of spin systems on the lattice @xmath1 just by adding quadratic interactions as described in ( [ quadratic ] ) . from the plethora of theorems and criteria that have been developed for the euclidean @xmath25 for @xmath97 , among others in @xcite , @xcite , @xcite , @xcite and @xcite one can generalise these to the spin system framework just by applying them to the phase @xmath13 and
then add quadratic interactions @xmath6 . as a typical example , one can for instance obtain , for the euclidean space @xmath97 with @xmath16 the euclidean metric , @xmath98 and @xmath57 the lebesgue measure , the following example of measures : consider the phase @xmath99 for any @xmath100 and interactions @xmath101 .
then the associated gibbs measure satisfies a logarithmic sobolev inequality .
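to make the example concrete , one admissible choice ( written here only as an illustration ; the exact expressions and exponent range hidden in the placeholders above may differ ) is

\[
\phi(x_i) = |x_i|^{p}, \quad p \ge 2 , \qquad
V(x_i,x_j) = J_{ij}\, x_i\, x_j , \quad |J_{ij}| \le J ,
\]

so that , following the decomposition of the hamiltonian into phase and interaction given earlier ,

\[
H^{\Lambda,\omega}(x) \;=\; \sum_{i\in\Lambda} \phi(x_i) \;+\; \sum_{i\sim j} V(x_i,x_j) .
\]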
furthermore , as will be described in theorem [ theorem2 ] that follows , with additional assumptions on the distance and the gradient we can obtain results comparable to the ones obtained in @xcite for general metric spaces .
we consider general @xmath24-dimensional non compact metric spaces . for the distance @xmath16 and the ( sub)gradient @xmath32 , in addition to the hypothesis of theorem [ theorem ] we assume that @xmath102 for some @xmath103 , and @xmath104 outside the unit ball @xmath105 for some @xmath106 .
we also assume that the gradient @xmath32 satisfies the integration by parts formula . in the case of @xmath33 with vector fields
@xmath28 it suffices to request that @xmath30 is a function of @xmath107 not depending on the @xmath108-th coordinate @xmath109 . if @xmath56 is the @xmath110 - dimensional lebesgue measure , we assume that it satisfies the classical sobolev inequality ( c - s ) @xmath111 for positive constants @xmath112 , as well as the poincaré inequality on the ball @xmath113 , that is , there exists a constant @xmath114 such that @xmath115 the classical sobolev inequality ( c - s ) is for instance satisfied in the case of @xmath116 with @xmath16 being the euclidean distance , as well as in the case of the heisenberg group , with @xmath16 being the carnot - carathéodory distance .
the poincaré inequality on the ball for the lebesgue measure ( l - p ) is a standard result for @xmath117 ( see for instance @xcite , @xcite , @xcite and @xcite ) , while for @xmath118 one can look at @xcite . under this framework ,
if we combine our main result theorem [ theorem ] together with corollary 3.1 and theorem 4.1 from @xcite we obtain the following theorem . [ theorem2 ] assume the distance @xmath16 and the ( sub)gradient @xmath32 are such that ( d1)-(d2 ) as well as ( c - s ) and ( l - p ) are satisfied . let a probability measure @xmath119 , where @xmath56 is the lebesgue measure , such that @xmath120 is defined with a differentiable potential @xmath121 satisfying @xmath122 with @xmath100 and @xmath123 the conjugate of @xmath124 , and suppose that @xmath125 is a measurable function such that @xmath126 .
assume that the local specification @xmath96 has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then for @xmath67 sufficiently small the infinite dimensional gibbs measure satisfies a log - sobolev inequality .
an interesting application of the last theorem is the special case of the heisenberg group , @xmath127 .
this can be described as @xmath128 with the following group operation : @xmath129 @xmath127 is a lie group , and its lie algebra @xmath130 can be identified with the space of left invariant vector fields on @xmath127 in the standard way .
the vector fields @xmath131 form a jacobian basis , where @xmath132 denotes derivation with respect to @xmath133 . from this it is clear that @xmath134 satisfy the hörmander condition ( i.e. , @xmath134 and their commutator @xmath135 span the tangent space at every point of @xmath136 ) .
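for concreteness , in one common normalisation ( the formulas hidden behind the placeholders above may use an equivalent convention ) the group law and the jacobian basis read

\[
(x,y,z)\circ(x',y',z') \;=\; \Big(x+x',\; y+y',\; z+z'+\tfrac{1}{2}\,(x y' - y x')\Big) ,
\]
\[
X_{1} \;=\; \partial_{x} - \tfrac{y}{2}\,\partial_{z} , \qquad
X_{2} \;=\; \partial_{y} + \tfrac{x}{2}\,\partial_{z} , \qquad
[X_{1},X_{2}] \;=\; \partial_{z} ,
\]

with the sub - gradient built from these two fields and the carnot - carathéodory distance defined through horizontal curves .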
the sub - gradient is given by @xmath137 for more details one can look at @xcite . in @xcite a first example of a measure on the heisenberg group with a gibbs measure that satisfies a logarithmic sobolev inequality was presented . here , with the use of theorem [ theorem2 ]
we can obtain examples with a phase @xmath13 that is nowhere convex and include more natural quadratic interactions .
such an example that satisfies the conditions of theorem [ theorem2 ] , with a phase that goes beyond convexity at infinity , is the following : @xmath138 and @xmath139 , where @xmath140 denotes the group operation and @xmath141 the inverse with respect to this operation .
the proof of theorem [ theorem ] is divided into two parts presented on the next two propositions [ proposition ] and [ propubound ] . in the first one , we prove a weaker assertion , that the claim of theorem [ theorem ] is true under the conditions of theorem [ theorem ] together with the u - bound inequality ( [ ubound ] ) .
[ proposition ] assume that the measure @xmath94 satisfies the log - sobolev inequality and that the local specification @xmath96 has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
furthermore , assume that there exists a @xmath142 such that the following u - bound inequality is satisfied @xmath143 then for @xmath67 sufficiently small the infinite dimensional gibbs measure satisfies the log - sobolev inequality .
the proof of this proposition will be presented in section [ finalproof ] .
then theorem [ theorem ] follows from proposition [ proposition ] and the next proposition which states that the conditions of theorem [ theorem ] imply the u - bound inequality ( [ ubound ] ) of proposition [ proposition ] .
[ propubound ] assume that the measure @xmath94 satisfies the log - sobolev inequality and that the local specification @xmath96 has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then for @xmath67 sufficiently small the gibbs measure satisfies the u - bound inequality ( [ ubound ] )
. a few words about the structure of the paper .
since the proof of the main result presented in theorem [ theorem ] trivially follows from proposition [ proposition ] and proposition [ propubound ] , we concentrate on showing the validity of these two . for simplicity we will present the proof for the 2-dimensional lattice @xmath144 . at first , the proof of proposition [ propubound ]
will be presented in section [ sectionubound ] where the u - bound inequality ( [ ubound ] ) is shown to hold under the conditions of the main theorem .
the proof of proposition [ proposition ] will occupy the rest of the paper .
in particular , in section [ section4 ] a sweeping out inequality will be shown as well as a spectral gap type inequality for the one site measure . in section [ secls ] a second sweeping out inequality is proven . in section [ proof sec6 ] logarithmic sobolev type inequalities for the one site measure
as well as for the infinite product measure are proven . then in section [ spectralgap ] we present a spectral gap type inequality for the product measure directly from the log - sobolev inequality shown in the previous section . using this we show convergence to the gibbs measure as well as its uniqueness .
then at the final part of the section , in subsection [ finalproof ] , we put all the previous bits together to prove proposition [ proposition ] .
u - bound inequalities were introduced in @xcite in order to prove @xmath123 logarithmic sobolev inequalities . in this work
we use u - bound inequalities in order to control the quadratic interactions . in this section
we prove proposition [ propubound ] , that states that if the measure @xmath145 satisfies the log - sobolev inequality and the local specification has quadratic interactions then the u - bound inequality ( [ ubound ] ) is satisfied .
[ lemu1 ] if @xmath146 satisfies the log - sobolev inequality and the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) , then for any @xmath147 @xmath148 for positive constants @xmath149 and @xmath150 .
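the proof below uses the standard entropic ( variational ) inequality ; in one common form ( the constants in the elided expression may be arranged differently ) it states that for a probability measure @xmath14 , a non - negative function @xmath70 and any @xmath154 ,

\[
\mu(f\,g) \;\le\; \frac{1}{t}\Big( \mathrm{Ent}_{\mu}(f) \;+\; \mu(f)\,\log \mu\big(e^{\,t g}\big) \Big) , \qquad t>0 .
\]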
if we use the following entropic inequality ( see @xcite ) @xmath151 for any probability measure @xmath152 and @xmath153 , @xmath154 , we get @xmath155 for the first term on the right hand side of ( [ 3eq3 ] ) we can use theorem 4.5 from @xcite ( see also @xcite ) which states that when a measure @xmath14 satisfies the log - sobolev inequality then for any function @xmath156 such that @xmath157for @xmath158 we have @xmath159 for all @xmath160 sufficiently small . since @xmath161 from our hypothesis on @xmath162 , we obtain that for @xmath160 sufficiently small @xmath163 for some @xmath164 . from this and
the fact that @xmath14 satisfies the log - sobolev inequality ( [ ls ] ) with some constant @xmath88 , ( [ 3eq3 ] ) becomes @xmath165 if we substitute @xmath70 by @xmath166 , where we denoted @xmath167 , we get @xmath168 for the second term of the right hand side of ( [ 3eq4 ] ) we have @xmath169 if we substitute this on ( [ 3eq4 ] ) and divide both parts with @xmath170 we will get @xmath171 if we take the expectation with respect to the gibbs measure we obtain @xmath172 from our main assumption ( [ quadratic ] ) about the interactions , @xmath173 , we have that @xmath174 which leads to @xmath175 for @xmath67 sufficiently small so that @xmath176 and @xmath177 we obtain @xmath178and the lemma follows for appropriate constants @xmath149 and @xmath179 . in the next lemma
we show a technical calculation of an iteration that will be used .
[ lemu2]if for any @xmath147 @xmath180for some @xmath181 and some @xmath182 sufficiently small , and @xmath183 then @xmath184for any @xmath185 .
we will first show that for any @xmath186 there exists an @xmath182 such that @xmath187we will work by induction .
step 1 : the base step of the induction ( @xmath188 ) . from ( [ 3eq5 ] )
we have @xmath189if we use again ( [ 3eq5 ] ) to bound the last term we obtain @xmath190for @xmath191 small enough so that @xmath192 , @xmath193 and @xmath194 we have @xmath195since @xmath196 and @xmath197 .
this proves the base step .
step 2 : the induction step .
we assume that ( [ 3eq7 ] ) holds true for some @xmath198 , and we will show that it also holds for @xmath199 , that is @xmath200 to bound the left hand side of ( [ 3eq8 ] ) we can use again ( [ 3eq5 ] ) @xmath201 if we bound @xmath202 by ( [ 3eq7 ] ) we get @xmath203 for @xmath191 small enough such that @xmath204 , @xmath205 and @xmath206 we obtain @xmath207 which finishes the proof of ( [ 3eq7 ] ) .
we can now complete the proof of the lemma .
at first we can bound the second term on the right hand side of ( [ 3eq5 ] ) by ( [ 3eq7 ] ) . that gives @xmath208for @xmath191 sufficiently small so that @xmath209 .
if we use again ( [ 3eq7 ] ) to bound the third term on the right hand we have @xmath210 where above we used once more that @xmath209 . if we rearrange the terms we have @xmath211 if we continue inductively to bound the right hand side by ( [ 3eq7 ] ) and take under account ( [ 3eq6 ] ) , then for @xmath191 sufficiently small such that @xmath212 we obtain @xmath213 which proves the lemma .
we now prove the u - bound inequality of proposition [ propubound ] .
the proof of the proposition follows directly from lemma [ lemu1 ] and lemma [ lemu2 ] .
if one considers @xmath214 and @xmath215 then from ( [ induction ] ) of lemma [ lemu1 ] we see that condition ( [ 3eq5 ] ) is satisfied for @xmath216 and @xmath217 . furthermore , since by our hypothesis @xmath218 for some positive @xmath15 uniformly on @xmath20 and @xmath219 , we can choose @xmath67 sufficiently small so that for @xmath220 condition ( [ 3eq6 ] ) of lemma [ lemu2 ] to be also satisfied : @xmath221 since ( [ 3eq5 ] ) and ( [ 3eq6 ] ) are satisfied we can apply lemma [ lemu2 ] .
we then obtain @xmath222for @xmath67 small enough so that @xmath223 we get @xmath224which proves the proposition .
sweeping out inequalities for the local specification were introduced in @xcite , @xcite and @xcite to prove logarithmic sobolev inequalities . here
we prove a weaker version of them for the gibbs measure , similar to the ones used in @xcite and @xcite , where however , interactions higher than quadratic were considered .
[ 4lem1 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then , for @xmath67 sufficiently small , for every @xmath225 @xmath226 for some constant @xmath227 .
consider the ( sub)gradient @xmath228 .
we can then write @xmath229 if we denote by @xmath230 the density of the measure @xmath231 , then for every @xmath232 we have @xmath233 @xmath234 where above we bounded the coefficients @xmath235 by @xmath67 and we have denoted by @xmath236 the covariance of @xmath70 and @xmath156 . if we take expectations with respect to the gibbs measure @xmath77 and use the hölder inequality in both terms we obtain @xmath237 if we take the sum over all @xmath238 from @xmath239 to @xmath240 in the last inequality and take into account ( [ 4eq1 ] ) we get @xmath241 where above we used that the interactions are quadratic as in hypothesis ( [ quadratic ] ) .
this leads to @xmath242 in order to bound the third term on the right hand side of ( [ 4eq4 ] ) we can use proposition [ propubound ] @xmath243 since @xmath244 when @xmath245 and @xmath246 the last inequality takes the form @xmath247 for the fourth term on the right hand side of ( [ 4eq4 ] ) we can use again proposition [ propubound ] @xmath248 but @xmath249 when @xmath250 and @xmath246 . furthermore , since @xmath225 when @xmath251 , the @xmath252 s that neighbour @xmath20 will have distance from @xmath108 equal to @xmath253 when @xmath254 or @xmath45 when @xmath255 .
so ( [ 4eq6 ] ) becomes @xmath256 if we combine together ( [ 4eq4 ] ) , ( [ 4eq5 ] ) and ( [ 4eq7 ] ) we get @xmath257since @xmath258 and @xmath259 for every @xmath18 .
if we take the sum over all @xmath260 in both sides of the inequality we will obtain @xmath261 if we choose @xmath67 sufficiently small so that @xmath262 we get @xmath263 plugging the last one into ( [ 4eq8 ] ) and choosing @xmath67 small enough so that @xmath264 we obtain @xmath265 which finishes the proof for an appropriately chosen constant @xmath266 .
furthermore combining together ( [ 4eq7 ] ) and lemma [ 4lem1 ] , we obtain the following corollary .
[ 4cor2 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then , for @xmath67 sufficiently small , for every @xmath225 the following holds @xmath267 for some constant @xmath268 .
where in the above corollary we used again ( [ newjnew ] ) .
the next lemma shows the poincaré inequality for the one site measure @xmath269 on the ball . the proof follows closely the proof of a similar poincaré inequality on the ball in @xcite and the local poincaré inequalities from @xcite and @xcite . [ poincareball ] define @xmath270 and @xmath271 . for any @xmath272 the following poincaré type inequality on the ball holds @xmath273 for some positive constant @xmath274 , where @xmath275 .
denote @xmath276 where @xmath277 has density @xmath278 . since @xmath279 and @xmath280 we can bound @xmath281 .
this leads to @xmath282 if we use the invariance of the @xmath57 measure we can write @xmath283 if we use holder inequality and consider @xmath284 sufficiently large so that @xmath285 @xmath286 consider @xmath287\rightarrow m$ ] a geodesic from @xmath45 to @xmath288 such that @xmath289 .
then for @xmath290 we can write @xmath291 from the last inequality and ( [ 4eq11 ] ) we get @xmath292 we observe that for @xmath293 and @xmath294 we obtain @xmath295 so @xmath296 similarly , for @xmath293 and @xmath294 we calculate @xmath297 as well as @xmath298 so , we can write @xmath299 using again the invariance of the @xmath57 measure @xmath300 for @xmath301 , since @xmath302 and @xmath303 the hamiltonian is bounded by @xmath304 so @xmath305 which gives the following bound @xmath306 if we take into account that @xmath307 as well as @xmath308 we observe that @xmath309 is bounded from above , uniformly on @xmath11 , by a constant .
thus , we finally obtain that @xmath310 for some positive constant @xmath274 .
the next lemma gives a bound for the variance of the one site measure @xmath231 outside @xmath311 .
[ poincarenoball]assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small the following bound holds @xmath312 for any @xmath313 and @xmath314
. we can write @xmath315 since @xmath316 .
if we take the expectation with respect to the gibbs measure we get @xmath317 we can bound the first and second term on the right hand side from proposition [ propubound ] to get @xmath318 which leads to @xmath319 if we choose @xmath284 sufficiently large so that @xmath320 we finally obtain@xmath321 for some constant @xmath322 . we can now prove the spectral gap type inequality type inequality for the expectation with respect to the gibbs measure of the one site variance @xmath323 [ teleutspect1 ]
assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small , the following spectral gap type inequality holds @xmath324for some constant @xmath325 . for any @xmath326 we can bound the variance @xmath327 where we have again denoted @xmath328 setting @xmath329 and taking the expectation with respect to the gibbs measure in both sides of ( [ 4eq12 ] ) gives @xmath330 we can bound the first and the second term on the right hand side from lemma [ poincareball ] and lemma [ poincarenoball ] respectively
this leads to @xmath331 which proves the lemma for appropriate positive constant @xmath332 .
if we combine lemma [ teleutspect1 ] and corollary [ 4cor2 ] we also have [ 4cor6 ] assume that @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small , the following holds @xmath333 for some constant @xmath334 .
the following lemma provides the sweeping out inequality for the one site measure [ 4lem7 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small , for every @xmath225 @xmath335 for a constant @xmath336 .
combining lemma [ teleutspect1 ] and lemma [ 4lem1 ] together , for @xmath67 sufficiently small , we obtain the following , @xmath337 because @xmath338 and @xmath325 .
since @xmath339 that implies that every node @xmath252 which has distance @xmath24 from @xmath20 , i.e. @xmath340 , will have distance @xmath341 or @xmath199 from @xmath108
. so the last inequality becomes .
@xmath342and the lemma follows for @xmath343 .
define the following sets @xmath344 where @xmath345 refers to the distance of the shortest path ( number of vertices ) between two nodes @xmath20 and @xmath108 .
note that @xmath346 for all @xmath347 and @xmath348 .
moreover @xmath349 . in the next proposition
we will prove a sweeping out inequality for the product measures @xmath350 .
[ 4prop8 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) . then , for @xmath67 sufficiently small , the following sweeping out inequality is true @xmath351 for constants @xmath352 and @xmath353 .
we can write @xmath354 if we denote @xmath355 the neighbours of note @xmath20 as shown on figure [ fig1 ] , and use lemma [ 4lem7 ] we get the following @xmath356 we will compute the second term in the right hand side of ( [ 4eq14 ] ) . for @xmath188 , we have @xmath357 for @xmath358 , we distinguish between the nodes @xmath252 in @xmath359 which neighbour only one of the neighbours @xmath360 of @xmath361 , which are the @xmath362 , and these which neighbour two of the node in @xmath363 , which are the @xmath364 and @xmath365 neighbouring @xmath366 and @xmath367 respectively , as shown in figure [ fig1 ] .
we can then write @xmath368 to bound the second term on the right hand side of ( [ 4eq16 ] ) , for any @xmath369 neighbouring the node @xmath370 we use lemma [ 4lem7 ] @xmath371 which leads to @xmath372 since for nodes @xmath373 , the nodes @xmath191 such that @xmath374 have distance from @xmath20 equal to @xmath375 , @xmath24 or @xmath376 we get @xmath377 to bound the first term on the right hand side of ( [ 4eq16 ] ) , for example for @xmath378 neighbouring the nodes @xmath379 and @xmath380 we use again lemma [ 4lem7 ] @xmath381 the first term for @xmath382 on the sum of ( [ 4eq18 ] ) by lemma [ 4lem7 ] is bounded by @xmath383 the terms for @xmath188 on the sum of ( [ 4eq18 ] ) become @xmath384 the terms for @xmath358 on the sum of ( [ 4eq18 ] ) can be divided on those that neighbour @xmath385 and those that not @xmath386 for the second term on the right hand side of ( [ 4eq21 ] ) @xmath387 while for the first term on the right hand side of ( [ 4eq21 ] ) we can use lemma [ 4lem7 ] @xmath388 from ( [ 4eq21])-([4eq23 ] ) we get the following bound for the terms for @xmath358 on the sum of ( [ 4eq18 ] ) @xmath389 finally , for the terms for @xmath390 on the sum on the right hand side of ( [ 4eq18 ] ) , we get@xmath391 if we put ( [ 4eq19 ] ) , ( [ 4eq20 ] ) , ( [ 4eq24 ] ) and ( [ 4eq25 ] ) in ( [ 4eq18 ] ) we get @xmath392 for @xmath393 . exactly the same bound can be obtain for the other term , @xmath394 , on the first sum of the right hand side of ( [ 4eq16 ] ) .
gathering together , ( [ 4eq16 ] ) , ( [ 4eq17 ] ) and ( [ 4eq26 ] ) @xmath395 for @xmath396 .
furthermore , for every @xmath397 , @xmath398 finally , if we put ( [ 4eq15 ] ) and ( [ 4eq27 ] ) and ( [ 4eq28 ] ) in ( [ 4eq14 ] ) we obtain @xmath399 for constant @xmath400 . if we repeat the same calculation recursively for the first term on the right hand side of ( [ 4eq29 ] ) , then for @xmath401 and @xmath402 we will finally obtain @xmath403 for a constant @xmath404 . from the last inequality and ( [ 4eq13 ] ) we get @xmath405 for a constant @xmath406 for @xmath67 sufficiently small such that @xmath407 .
in this section we prove the second sweeping out relation .
we start by first proving in the next lemma the second sweeping out relation between two neighbouring nodes .
[ 5lem1]assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) . then , for @xmath67 sufficiently small , for every @xmath339 the following sweeping out inequality holds @xmath408for @xmath409 .
consider the ( sub)gradient @xmath410 .
we can then write @xmath411 then for every @xmath412 we can compute @xmath413 but from the relationship of lemma [ 4lem1 ] , if we put @xmath414 in @xmath70 , we have @xmath415 where again @xmath416 denotes the density of @xmath417 . for the second term we have @xmath418 while for the first term the following bound holds @xmath419 where above we used the cauchy - schwarz inequality . if we plug these in we get @xmath420 combining together ( [ 5eq2 ] ) and ( [ 5eq6 ] ) we obtain @xmath421 in order to calculate the second term on the right hand side of ( [ 5eq7 ] ) we will use the following lemma . [ 5lem2 ] for any probability measure @xmath14 the following inequality holds @xmath422 for some constant @xmath423 uniformly on the boundary conditions . without loss of generality
we can assume @xmath424 .
the proof of lemma [ 5lem2 ] can be found in @xcite . applying this bound to the second term in ( [ 5eq7 ] )
leads to @xmath425 . from the last inequality and ( [ 5eq7 ] ) we have @xmath426 + j^{2}\tilde c^{2 } \mathbb{e}^{j}\left[\vert f-\mathbb{e}^{j}f\vert^2 \mathbb{e}^{j}(x^k_i v(x_{j},x_{i}))^2\right ] . putting this in ( [ 5eq1 ] ) leads to @xmath427 + j^{2}\tilde c^{2 } \mathbb{e}^{j}\left[\vert f-\mathbb{e}^{j}f\vert^2 \mathbb{e}^{j } \|\nabla_i v(x_{j},x_{i})\|^2\right] . if we take the expectation with respect to the gibbs measure and bound @xmath428 by ( [ quadratic ] ) we get @xmath429 + kj^{2}\tilde c^{2 } \nu \left[\vert f-\mathbb{e}^{j}f\vert^2 d^2(x_j ) \right] + kj^{2}\tilde c^{2 } \nu \left[\vert f-\mathbb{e}^{j}f\vert^2 \mathbb{e}^{j}d^2(x_j)\right] . if we bound the second and third term on the right hand side by corollary [ 4cor6 ] we get @xmath430 . for the last term on the right hand side of ( [ 5eq8 ] ) we can write @xmath431 = \nu \left[\mathbb{e}^{j}(\vert f-\mathbb{e}^{j}f\vert^2 ) d^2(x_j)\right] and now apply the u - bound inequality ( [ ubound ] ) of proposition [ proposition ] : @xmath432 \leq c \nu(\mathbb{e}^{j}(\vert f-\mathbb{e}^{j}f\vert^2 ) ) + c \sum_{n=0}^{\infty } j^{n}\sum_{r : dist(r , j)=n } \nu\|\nabla_{r } ( \mathbb{e}^{j}\vert f-\mathbb{e}^{j}f\vert^2 ) ^{\frac{1}{2 } } \|^2 . in order to bound the variance on the first term on the right hand side we can use the spectral gap type inequality of lemma [ teleutspect1 ] : @xmath433 \leq c d_{4}\nu\|\nabla_{j } f \|^2 + c d_4\sum_{n=1}^{\infty } j^{n-1}\sum_{r : dist(r , j)=n } \nu \|\nabla_{r } f^2\| + c \sum_{n=0}^{\infty } j^{n}\sum_{r : dist(r , j)=n } \nu\|\nabla_{r } ( \mathbb{e}^{j}\vert f-\mathbb{e}^{j}f\vert^2 ) ^{\frac{1}{2 } } \|^2 . for @xmath382 the term of the second sum is zero , while for @xmath434 the nodes do not neighbour with @xmath108 , so we have @xmath435 . from the cauchy - schwarz inequality the last becomes @xmath436 . putting together ( [ 5eq9 ] ) and ( [ 5eq10 ] ) : @xmath437 \leq c j\sum _ { r\sim j } \nu\|\nabla_{r } ( \mathbb{e}^{j}\vert f-\mathbb{e}^{j}f\vert^2 ) ^{\frac{1}{2}}\|^2 + c d_{4}\nu\|\nabla_{j } f\|^2 + ( c d_4 + 2c ) \sum_{n=1}^{\infty } j^{n-1}\sum _ { dist(r , j ) = n } \nu\|\nabla_{r } f\|^2 . from ( [ 5eq8 ] ) and ( [ 5eq11 ] ) we get @xmath438 for a constant @xmath439 .
if we replace @xmath70 by @xmath440 in ( [ 5eq12 ] ) we get @xmath441 if we use lemma [ 4lem7 ] to bound the second and fourth term in the right hand side of the last inequality we obtain@xmath442 since @xmath20 and @xmath108 are neighbours and the @xmath252 s in the sum of the third term have distance less or equal to two from @xmath20 , we can write @xmath443 for a constant @xmath444 .
if we take the sum over all @xmath20 such that @xmath339 we get @xmath445 for @xmath67 sufficiently small so that @xmath446 we obtain @xmath447 if we use the last inequality to bound the last term on the right hand side of ( [ 5eq12 ] ) we obtain @xmath448 where above we used ( [ newjnew ] ) .
this finishes the proof for an appropriate constant @xmath449 . in the next proposition
we will extend the sweeping out relations of the last lemma from the two neighboring nodes to the two infinite dimensional disjoint sets @xmath450 and @xmath451 .
[ 5lem3]assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small , the following sweeping out inequality holds @xmath452 for @xmath453 and constants @xmath454 and @xmath455 .
the proof will follow the same lines of the proof of proposition [ 4prop8 ] .
if we denote @xmath456 the neighbours of note @xmath20 , then we can write @xmath457 we can use lemma [ 5lem1 ] to bound the last one @xmath458 we will compute the second term in the right hand side of ( [ 5eq15 ] ) . for @xmath188 , we have @xmath459 for @xmath358 , we distinguish between the nodes @xmath252 in @xmath359 which neighbour only one of the neighbours @xmath360 of @xmath361 , which are the @xmath362 , and these which neighbour two of the node in @xmath363 , which are the @xmath460 and @xmath365 neighbouring @xmath366 and @xmath367 respectively , as shown in figure [ fig1 ] . we can then write @xmath461 to bound the second term on the right hand side of ( [ 5eq17 ] ) , for any @xmath369 neighbouring a node @xmath370 we use lemma [ 5lem1 ] @xmath462 which leads to @xmath463 since for nodes @xmath373 , the nodes @xmath191 such that @xmath374 have distance from @xmath20 equal to @xmath464 or @xmath376 we get @xmath465 to bound the first term on the right hand side of ( [ 5eq17 ] ) , for example for @xmath378 neighbouring the nodes @xmath379 and @xmath380 we use again lemma [ 5lem1 ] @xmath466 the first term on the right hand side of ( [ 5eq19 ] ) by lemma [ 5lem1 ] is bounded by @xmath467 the term for @xmath188 in the sum in the second term on the right hand side of ( [ 5eq19 ] ) becomes @xmath468 the terms for @xmath358 on the sum of ( [ 5eq19 ] ) can be divided on those that neighbour @xmath364 and those that not @xmath469 for the second term on the right hand side of ( [ 5eq22 ] ) @xmath470 while for the first term on the right hand side of ( [ 5eq22 ] ) we can use lemma [ 5lem1 ] @xmath471 from ( [ 5eq22])-([5eq24 ] ) we get the following bound for the terms for @xmath358 on the sum of ( [ 5eq19 ] ) @xmath472 for every @xmath473 we have @xmath474 , which gives @xmath475since @xmath476 . putting ( [ 5eq20 ] ) , ( [ 5eq21 ] ) , ( [ 5eq25 ] ) and ( [ 5eq26 ] ) in ( [ 5eq19 ] ) leads to @xmath477 for @xmath478 .
the exact same bound can be obtained for the other term on the first sum of the right hand side of ( [ 5eq17 ] ) .
gathering together , ( [ 5eq17 ] ) , ( [ 5eq18 ] ) and ( [ 5eq19 ] ) leads to@xmath479 for @xmath480 .
furthermore , for every @xmath397 , @xmath481 finally , if we put ( [ 5eq16 ] ) and ( [ 5eq28 ] ) and ( [ 5eq29 ] ) in ( [ 5eq15 ] ) we obtain @xmath482 for constant @xmath483 .
if we repeat the same calculation recursively , for the first term on the right hand side of ( [ 5eq30 ] ) , then for @xmath484 and @xmath485 we will finally obtain @xmath486 for a constant @xmath487 . from the last inequality and ( [ 5eq14 ] ) we get @xmath488 for a constant @xmath489 for @xmath67 sufficiently small such that @xmath490 and @xmath491 the proof of the proposition follows for @xmath492 .
since the purpose of this paper is to prove the log - sobolev inequality for the infinite dimensional gibbs measure without assuming the log - sobolev inequality for the one site measure @xmath493 , but the weaker inequality for the measure @xmath494 , we will show in this section that when the interactions are quadratic we can obtain a weaker log - sobolev type inequality for the @xmath493 measure .
this will be the object of the next proposition .
[ 6prop1 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then , for @xmath67 sufficiently small , the one site measure @xmath493 satisfies the following log - sobolev type inequality @xmath495 for some positive constant @xmath496 .
assume @xmath497 .
we start with our main assumption that the measure @xmath498 satisfies the log - sobolev inequality for a constant @xmath499 , that is @xmath500 we will interpolate this inequality to create the entropy with respect to the one site measure @xmath501 in the left hand side . for this we will first define the function @xmath502 notice that @xmath503
. then inequality ( [ 6eq1 ] ) for @xmath504 gives @xmath505 denote by @xmath506 and @xmath507 the right and left hand sides of the above inequality , respectively .
if we use the leibniz rule for the gradient on the right hand side we have @xmath508 on the left hand side we form the entropy for the measure @xmath501 with hamiltonian @xmath509 .
@xmath510 since @xmath511 is non negative , the last equality leads to @xmath512 if we combine the above together we obtain @xmath513 if we take the expectation with respect to the gibbs measure in the last relationship we have @xmath514 from @xcite and @xcite the following estimate of the entropy holds @xmath515 for some positive constant @xmath516 .
if we take expectations with respect to the gibbs measure in the last inequality we get @xmath517 we can now use this to bound the second term on the right hand side .
then we will obtain @xmath518 if we take into account that we are considering quadratic interactions and bound @xmath6 and @xmath519 by ( [ quadratic ] ) we get @xmath520 where above we also used that @xmath521 .
we can bound the first term on the right hand side by lemma [ teleutspect1 ] and the third and the fourth term by corollary [ 4cor6 ] .
@xmath522 which finishes the proof of the proposition for @xmath523 .
we now prove a log - sobolev type inequality for the product measure @xmath524 for @xmath525 .
[ 6prop2 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then , for @xmath67 sufficiently small , the following log - sobolev type inequality for the product measures @xmath526 holds @xmath527for @xmath525 , and some positive constant @xmath528 .
we will prove proposition [ 6prop2 ] for @xmath529 , that is @xmath530 for @xmath531 . in the proof of this proposition we will use the following estimation . for any @xmath532 and
@xmath533 denote @xmath534 and @xmath535 from the calculations of the components of the sum of @xmath536 in the proof of proposition [ 5lem3 ] and the recursive inequality ( [ 5eq30 ] ) we can surmise that there exists an @xmath537 such that @xmath538 we start with the following enumeration of the nodes in @xmath451 as depicted in figure [ fig2 ] .
denote the nodes in @xmath451 closest to @xmath539 , that is the neighbours of @xmath539 and name them @xmath540 , with @xmath541 being any of the four and the rest named clockwise . then choose any of the nodes in @xmath451 of distance two from @xmath542 and distance three from @xmath539 , and name it @xmath543 and continue clockwise the enumeration with the rest of the nodes in @xmath451 of distance three form @xmath539 .
then the same for the nodes of @xmath451 of distance four from @xmath539 .
we continue with the same way with the nodes in @xmath451 of higher distances from @xmath539 , moving clockwise while we move away from @xmath539 . in this way the nodes in @xmath451 are enumerated in a spiral way moving clockwise away from @xmath539 . in that way we can write @xmath544 .
then the entropy of the product measure @xmath545 can be calculated by being expressed in terms of the entropies of single nodes in @xmath451 for which we have shown a log - sobolev type inequality in proposition [ 6prop1 ] .
@xmath546 to compute the entropies in the right hand side of ( [ 6eq9 ] ) we will use the log - sobolev type inequality for the one site measure @xmath547 from proposition [ 6prop1 ] .
@xmath548 where above we used that @xmath549 , since by the way the spiral was constructed it s elements do not neighbour with each other . for every @xmath108 that neighbours with at least one of the @xmath550 we have that @xmath551 which because of ( [ 6eq8 ] ) implies @xmath552 for every @xmath108 that does not neighbour with any of the @xmath550 we have that @xmath553 putting ( [ 6eq11 ] ) and ( [ 6eq12 ] ) in ( [ 6eq10 ] ) we obtain @xmath554 combining this with ( [ 6eq9 ] ) @xmath555 since in the last two sums above , for every @xmath556 , the terms @xmath557 and @xmath558 appear one time for every @xmath559 with a coefficient @xmath560 , the accompanying coefficient for any of these terms is @xmath561 since for every node @xmath108 , @xmath562 .
so by rearranging the terms in the last inequality we get @xmath563 for @xmath490 . for the first sum @xmath564
we finally get @xmath565
in proposition [ 6prop1 ] and proposition [ 6prop2 ] we showed a log - sobolev type inequality for the one site measure @xmath493 and then obtained a similar inequality through it for the product measures @xmath566 . in lemma [ teleutspect1 ] a spectral gap type inequality
was also shown for the one site measure @xmath493 for both cases . in the following proposition
the spectral gap type inequality of lemma [ teleutspect1 ] will be extended to the product measure @xmath567 . however , this does not happen through the spectral gap type inequality for the one site measure @xmath493 of lemma [ teleutspect1 ] but through the log - sobolev type inequality for @xmath566 of proposition [ 6prop2 ] .
[ 7prop1]assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] )
. then , for @xmath67 sufficiently small , the following spectral gap type inequality holds @xmath568for @xmath569 and some positive constant @xmath570 .
the proof of this proposition is based on the use of the log - sobolev type inequality for the product measures @xmath571 of proposition [ 6prop2 ] and the sweeping out relations for the same measures from proposition [ 4prop8 ] . the proof of this proposition is presented in lemma 7.1 of @xcite .
spectral gap inequalities have been associated with convergence to equilibrium and ergodic properties . in the next proposition
we will use the weaker spectral gap inequality for the product measures @xmath572 of proposition [ 7prop1 ] to show the a.e .
convergence of @xmath573 to the infinite dimensional gibbs measure @xmath77 , where @xmath574 is defined as follows @xmath575 . [ 7prop2 ] assume that the measure @xmath146 satisfies the log - sobolev inequality and that the local specification has quadratic interactions @xmath6 as in ( [ quadratic ] ) .
then , for @xmath67 sufficiently small , and @xmath576 as in ( [ 7eq1 ] ) , @xmath577 converges @xmath77-a.e . to the gibbs measure @xmath77 .
we will follow closely @xcite .
we will compute the variance of the @xmath578 with respect to the product measure @xmath579 for @xmath580 or @xmath239 when @xmath24 is odd or even respectively . for this we will use the spectral gap type inequality for the product measures @xmath572 presented in proposition [ 7prop1 ] .
@xmath581 where @xmath238 above is @xmath45 or @xmath239 if @xmath24 is odd or even respectively .
if we use the first sweeping out inequality of proposition [ 4prop8 ] we get @xmath582 where we recall @xmath583 .
if we apply proposition [ 4prop8 ] @xmath584 more times we obtain the following bound @xmath585 which converges to zero as @xmath24 goes to infinity , because @xmath586 . if we define the sets @xmath587 we can calculate @xmath588 by the chebyshev inequality . if we use ( [ 7eq2 ] ) to bound the last one we get @xmath589 and for @xmath67 sufficiently small such that @xmath590 ( recall that @xmath591 ) we have that @xmath592 . thus , @xmath593 converges @xmath594-almost surely by the borel - cantelli lemma .
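the passage from a summable variance bound to almost sure convergence used above is the standard chebyshev plus borel - cantelli argument ; a minimal sketch in generic placeholder notation ( not the paper's symbols ) is :

% generic sketch : if X_n has mean m_n and Var(X_n) <= C q^n with 0 < q < 1 ,
% then for any fixed eps > 0
\[
  \mathbb{P}\big( |X_n - m_n| > \varepsilon \big)
  \;\le\; \frac{\operatorname{Var}(X_n)}{\varepsilon^{2}}
  \;\le\; \frac{C\, q^{\,n}}{\varepsilon^{2}},
  \qquad
  \sum_{n \ge 1} \frac{C\, q^{\,n}}{\varepsilon^{2}} < \infty ,
\]
% so by the first borel - cantelli lemma the events { |X_n - m_n| > eps }
% occur only finitely often almost surely , i.e. X_n - m_n -> 0 a.s.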
furthermore , @xmath595 we will first show that @xmath596 is a constant , which means that it does not depend on variables on @xmath450 or @xmath451 .
we first notice that @xmath597 is a function on @xmath450 or @xmath451 when @xmath24 is odd or even respectively .
this implies that the limits @xmath598 do not depend on variables on @xmath450 and @xmath451 respectively .
however , since the two subsequences @xmath599 and @xmath600 converge to @xmath601 @xmath594a.e .
we conclude that @xmath602 which implies that @xmath596 is a constant . from
that we obtain that @xmath603 . since the sequence @xmath604 converges @xmath594-almost surely , the same holds for the sequence @xmath605 .
it remains to show that @xmath606 .
at first we show this for positive bounded functions @xmath70 . in this case
we have @xmath607 by the dominated convergence theorem and ( [ 7eq3 ] ) . on the other hand
, we also have @xmath608 where above we used the definition of the gibbs measure @xmath77 . from ( [ 7eq4 ] ) and ( [ 7eq5 ] ) we get that @xmath609 for bounded functions @xmath70 .
we now extend it to positive functions @xmath70 that are not necessarily bounded .
consider @xmath610 for any @xmath611 .
then @xmath612since @xmath613 is bounded by @xmath238 .
then since @xmath614 is increasing in @xmath238 , by the monotone convergence theorem we obtain @xmath615 . the assertions can then be extended to functions @xmath70 that are not necessarily positive by writing @xmath616 , where @xmath617 and @xmath618 . the proof of the main result will be based on the iterative method developed by zegarlinski in @xcite and @xcite ( see also @xcite and @xcite for similar applications ) .
we will start with a lemma that shows the iterative step .
assume @xmath576 as in ( [ 7eq1 ] ) .
for any @xmath620 , @xmath621 = \sum_{m=0}^{n-1} \mathcal{p}^{n - m - 1} [ ent_{{\mathbb{e}}^{\gamma_k}}(\mathcal{p}^m f) ] + \mathcal{p}^n f \log \mathcal{p}^n f , where @xmath238 above is @xmath45 or @xmath239 if @xmath24 is odd or even respectively .
one observes that for any @xmath622 @xmath623 - ( { \mathbb{e}}^\lambda g ) \log ( { \mathbb{e}}^\lambda g ) . the statement ( [ 7eq6 ] ) for @xmath188 can be trivially derived from ( [ 7eq7 ] ) if we put @xmath624 and @xmath625 . assuming ( [ 7eq6 ] ) is true for some @xmath626
, we prove it for @xmath199 .
apply ( [ 7eq7 ] ) with @xmath627 and @xmath577 in the place of @xmath156 , where @xmath238 above is @xmath45 or @xmath239 if @xmath24 is odd or even respectively : @xmath628 = ent_{{\mathbb{e}}^{\gamma_k}}(\mathcal{p}^n f) + ( { \mathbb{e}}^{\gamma_k } \mathcal{p}^n f ) \log ( { \mathbb{e}}^{\gamma_k } \mathcal{p}^n f ) = ent_{{\mathbb{e}}^{\gamma_k}}(\mathcal{p}^n f) + ( \mathcal{p}^{n+1 } f ) \log ( \mathcal{p}^{n+1 } f ) . using this , and applying @xmath629 to ( [ 7eq6 ] ) , we obtain ( [ 7eq6 ] ) for @xmath199 .
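the induction above rests on the standard decomposition of entropy under a conditional expectation ; as a hedged sketch in generic notation ( the paper's ( [ 7eq7 ] ) should be an identity of this type , but the @xmath tokens prevent confirming the exact symbols ) :

% generic sketch : if \mathbb{E}^{\lambda} is a conditional expectation
% compatible with \mu ( i.e. \mu \mathbb{E}^{\lambda} = \mu ) and g >= 0 ,
\[
  \operatorname{Ent}_{\mu}(g)
  \;=\; \mu\big[ \operatorname{Ent}_{\mathbb{E}^{\lambda}}(g) \big]
  \;+\; \operatorname{Ent}_{\mu}\big( \mathbb{E}^{\lambda} g \big),
\]
% total entropy = expected conditional entropy + entropy of the conditional
% expectation ; iterating this identity n times produces the telescoping sum
% appearing in ( [ 7eq6 ] ) .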
using proposition [ 7prop2 ] we have @xmath630 \to \nu[ f \log f ] and @xmath631 \log \nu[ f ] , @xmath77-a.e . from this and fatou's lemma ,
gives @xmath632 = \liminf_{n \to \infty} \sum_{m=0}^{n-1} \nu[ ent_{{\mathbb{e}}^{\gamma_k}}(\mathcal{p}^m f^{2}) ] ( [ 7eq8 ] ) , where we used the fact that @xmath77 is a gibbs measure to obtain the last equality .
if we use proposition [ 6prop2 ] to bound the first term of the first sum we have @xmath633 . similarly , for @xmath634 , we can use proposition [ 6prop2 ] and then we get @xmath635 \le \tilde c \nu \|\nabla_{\gamma_k } \sqrt{\mathcal{p}^m f^2}\|^2 \le \tilde c [ c_1 c_2^{m-1 } \nu \|\nabla_{\gamma_1 } f\|^2 + c_2^{m } \nu \|\nabla_{\gamma_0 } f\|^2 ] , where , for the last inequalities , we used proposition [ 5lem3 ] and induction . substituting these bounds in ( [ 7eq8 ] ) , we obtain ( recall that @xmath636 ) @xmath637 where @xmath638 is the largest of the two coefficients .
this ends the proof of the log - sobolev inequality for @xmath77 . | we assume one site measures without a boundary @xmath0 that satisfy a log - sobolev inequality . we prove that if these measures are perturbed with quadratic interactions , then the associated infinite dimensional gibbs measure on the lattice always satisfies a log - sobolev inequality .
furthermore , we present examples of measures that satisfy the inequality with a phase that goes beyond convexity at infinity . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Creating Access to Rehabilitation
for Every Senior (CARES) Act of 2013''.
SEC. 2. ELIMINATION OF MEDICARE 3-DAY PRIOR HOSPITALIZATION REQUIREMENT
FOR COVERAGE OF SKILLED NURSING FACILITY SERVICES IN
QUALIFIED SKILLED NURSING FACILITIES.
(a) In General.--Subsection (f) of section 1812 of the Social
Security Act (42 U.S.C. 1395d) is amended to read as follows:
``(f) Coverage of Extended Care Services Without a 3-Day Prior
Hospitalization for Qualified Skilled Nursing Facility.--
``(1) In general.--Effective for extended care services
furnished pursuant to an admission to a skilled nursing
facility that occurs more than 90 days after the date of the
enactment of the Creating Access to Rehabilitation for Every
Senior (CARES) Act of 2013, coverage shall be provided under
this part for an individual for such services in a qualified
skilled nursing facility that are not post-hospital extended
care services.
``(2) Continued application of certification and other
requirements and provisions.--The requirements of the following
provisions shall apply to extended care services provided under
paragraph (1) in the same manner as they apply to post-hospital
extended care services:
``(A) Paragraphs (2) and (6) of section 1814(a),
except that the requirement of paragraph (2)(B) of such
section shall not apply insofar as it relates to any
required prior receipt of inpatient hospital services.
``(B) Subsections (b)(2) and (e) of this section.
``(C) Paragraphs (1)(G)(i), (2)(A), and (3) of
section 1861(v).
``(D) Section 1861(y).
``(E) Section 1862(a)(18).
``(F) Section 1866(a)(1)(H)(ii)(I).
``(G) Subsections (d) and (f) of section 1883.
``(H) Section 1888(e).
``(3) Qualified skilled nursing facility defined.--
``(A) In general.--In this subsection, the term
`qualified skilled nursing facility' means a skilled
nursing facility that the Secretary determines--
``(i) subject to subparagraphs (B) and (C),
based upon the most recent ratings under the
system established for purposes of rating
skilled nursing facilities under the Medicare
Nursing Home Compare program, has an overall
rating of 3 or more stars or a score of 4 stars
or higher on the individual quality domain or
on the staffing quality domain; and
``(ii) is not subject to a quality-of-care
corporate integrity agreement (relating to one
or more programs under this Act) that is in
effect with the Inspector General of the
Department of Health and Human Services and
that requires the facility to retain an
independent quality monitor.
The Secretary may make a determination under clause
(ii) based upon the most current information contained
in the website of the Inspector General.
``(B) Waiver of ratings to ensure access.--The
Secretary may, upon application, waive the requirement
of subparagraph (A)(i) for a skilled nursing facility
in order to ensure access to extended care services
that are not post-hospital extended care services in
particular underserved geographic areas.
``(C) Grace period for correction of ratings.--In
the case of a skilled nursing facility that qualifies
as a qualified skilled nursing facility for a period
and that would be disqualified under subparagraph
(A)(i) because of a decline in its star rating, before
disqualifying the facility the Secretary shall provide
the facility with a grace period of 1 year during which
the facility seeks to improve its ratings based on a
plan of correction approved by the Secretary.
``(D) Holding beneficiaries harmless in case of
disqualification of a facility.--In the case of a
skilled nursing facility that qualifies as a qualified
skilled nursing facility for a period and that is
disqualified under subparagraph (A), such
disqualification shall not apply to or affect
individuals who are admitted to the facility at the
time of the disqualification.''.
(b) MedPAC Study of Cost of Implementation.--The Medicare Payment
Advisory Commission shall conduct a study of, and submit a report to
Congress and the Secretary of Health and Human Services on, the cost of
impact of the amendment made by subsection (a), no later than June 1,
2016. | Creating Access to Rehabilitation for Every Senior (CARES) Act of 2013 - Amends title XVIII (Medicare) of the Social Security Act with respect to coverage of extended care services without regard to the three-day prior hospitalization requirement (non-post-hospital extended care services). Restricts such coverage to non-post-hospital extended care services in a qualified skilled nursing facility. Directs the Medicare Payment Advisory Commission (MEDPAC) to study the cost of impact of this Act. |
SECTION 1. SHORT TITLE.
This Act shall be known as the ``Internet Gambling Study Act''.
SEC. 2. FINDINGS AND PURPOSE.
(a) Findings.--The Congress finds as follows:
(1) Gambling is regulated primarily by State and tribal
governments and Federal statutes governing the interstate
placement of wagers are outdated.
(2) Over the past decade, the number of Americans gambling
on the Internet has risen dramatically to several million,
accounting for over half of a multibillion dollar worldwide
market.
(3) Many observers and industry analysts believe that it is
impossible to stop the sale of most products or services over
the Internet.
(4) Congress recently approved the Unlawful Internet
Gambling Enforcement Act, which imposes civil and criminal
penalties for the acceptance of any financial instrument by
those engaged in the business of unlawful Internet gambling.
(5) Congress must focus on establishing safeguards against
gambling by minors, compulsive gambling, fraud, money
laundering, and other forms of abuse.
(6) Although interpretations of a recent ruling of the
World Trade Organization's appellate body differ, legal experts
agree that it calls into question whether certain Federal
and State gambling laws violate the commitments of the United
States under the General Agreement on Trade in Services.
(7) While only the United States and Antigua and Barbuda
are parties to that dispute, the ruling could have
ramifications for all interested parties, from the European
Union to Australia.
(b) Purpose.--The purpose of this Act is to provide for a detailed
examination by the National Research Council of the National Academy of
Sciences of the issues posed by the continued spread and growth of
interstate commerce with respect to Internet gambling, as well as the
impact of the Unlawful Internet Gambling Enforcement Act on Internet
gambling in the United States.
SEC. 3. COMPREHENSIVE STUDY OF INTERNET GAMBLING.
(a) Study Required.--
(1) In general.--The National Research Council of the
National Academy of Sciences shall enter into a contract to
conduct a comprehensive study of Internet gambling, including
the existing legal framework that governs such activities and
transactions and the impact of the Unlawful Internet Gambling
Enforcement Act on Internet gambling in the United States.
(2) Issues to be considered.--The study conducted under
paragraph (1) shall include--
(A) a review of existing Federal, State, tribal,
local, and international laws governing various forms
of wagering over the Internet, the effectiveness of
such laws, and the extent to which such provisions of
law conform or do not conform with each other;
(B) an assessment of the proliferation of Internet
gambling, including an analysis of its availability and
use within the United States;
(C) a determination of the impact of Internet
gambling on minors and compulsive gamblers and the
availability of regulatory and technological safeguards
to prevent or mitigate these impacts;
(D) a determination of the extent to which
terrorists and criminal enterprises are utilizing
Internet gambling for fraud and money laundering
purposes and the availability of regulatory and
technological safeguards to prevent or mitigate these
impacts;
(E) an assessment of the impact of the Unlawful
Internet Gambling Enforcement Act on the availability
and use within the United States of Internet gambling,
and on the adverse effects of Internet gambling
identified in subparagraphs (C) and (D);
(F) an assessment of recent technological
innovations and the practices of other nations and
international bodies that license and regulate Internet
gambling, and the practicality of using similar systems
to establish a legal framework in the United States;
(G) an analysis of the issues of federalism that
are presented by legislative and administrative
proposals designed to address the proliferation of
illegal Internet gambling, given the interstate and
international character of the Internet as a medium,
and the potential for State and tribal governments to
create a legal and regulatory framework for online
gambling within their jurisdictions or among those
jurisdictions where online gambling is legal;
(H) an assessment of the problems posed by
unregulated international Internet gambling to United
States interests and the potential means, if any, by
which the Federal Government may seek international
cooperation in addressing these concerns;
(I) an analysis of the potential impact of recent
World Trade Organization rulings regarding Internet
gambling and the long-term impact on existing and
future United States trade agreements under the General
Agreement on Trade in Services; and
(J) an analysis of the potential tax revenue that
could be generated by a legal, licensed, regulated
Internet gambling industry in the United States.
(b) Final Report.--The contract entered into under subsection (a)
shall require that the National Research Council submit to the
President, the Congress, State Governors, and Native American tribal
governments a comprehensive report on the Council's findings and
conclusions not later than 12 months after the date upon which the
contract is entered into. | Internet Gambling Study Act - Requires the National Research Council of the National Academy of Sciences to conduct a comprehensive study of Internet gambling, including the existing legal framework that governs such activities and transactions and the impact of the Unlawful Internet Gambling Enforcement Act on Internet gambling in the United States. |
Image caption: Peter has Huntington's disease and his siblings Sandy and Frank also have the gene
The defect that causes the neurodegenerative disease Huntington's has been corrected in patients for the first time, the BBC has learned.
An experimental drug, injected into spinal fluid, safely lowered levels of toxic proteins in the brain.
The research team, at University College London, say there is now hope the deadly disease can be stopped.
Experts say it could be the biggest breakthrough in neurodegenerative diseases for 50 years.
Huntington's is one of the most devastating diseases.
Some patients described it as Parkinson's, Alzheimer's and motor neurone disease rolled into one.
Peter Allen, 51, is in the early stages of Huntington's and took part in the trial: "You end up in almost a vegetative state, it's a horrible end."
Huntington's blights families. Peter has seen his mum Stephanie, uncle Keith and grandmother Olive die from it.
Tests show his sister Sandy and brother Frank will develop the disease.
The three siblings have eight children - all young adults, each of whom has a 50-50 chance of developing the disease.
Worse-and-worse
The unstoppable death of brain cells in Huntington's leaves patients in permanent decline, affecting their movement, behaviour, memory and ability to think clearly.
Peter, from Essex, told me: "It's so difficult to have that degenerative thing in you.
"You know the last day was better than the next one's going to be."
Huntington's generally affects people in their prime - in their 30s and 40s
Patients die around 10 to 20 years after symptoms start
About 8,500 people in the UK have Huntington's and a further 25,000 will develop it when they are older
Huntington's is caused by an error in a section of DNA called the huntingtin gene.
Normally this contains the instructions for making a protein, called huntingtin, which is vital for brain development.
But a genetic error corrupts the protein and turns it into a killer of brain cells.
The treatment is designed to silence the gene.
On the trial, 46 patients had the drug injected into the fluid that bathes the brain and spinal cord.
The procedure was carried out at the Leonard Wolfson Experimental Neurology Centre at the National Hospital for Neurology and Neurosurgery in London.
Doctors did not know what would happen. One fear was that the injections could have caused fatal meningitis.
But the first in-human trial showed the drug was safe, well tolerated by patients and crucially reduced the levels of huntingtin in the brain.
Image caption: Prof Sarah Tabrizi, from the UCL Institute of Neurology, led the trials.
Prof Sarah Tabrizi, the lead researcher and director of the Huntington's Disease Centre at UCL, told the BBC: "I've been seeing patients in clinic for nearly 20 years, I've seen many of my patients over that time die.
"For the first time we have the potential, we have the hope, of a therapy that one day may slow or prevent Huntington's disease.
"This is of groundbreaking importance for patients and families."
Doctors are not calling this a cure. They still need vital long-term data to show whether lowering levels of huntingtin will change the course of the disease.
The animal research suggests it would. Some motor function even recovered in those experiments.
Image caption: Sandy Sterne, Peter Allen, Hayley Allen, Frank Allen, Annie Allen and Dermot Sterne
Peter, Sandy and Frank - as well as their partners Annie, Dermot and Hayley - have always promised their children they will not need to worry about Huntington's as there will be a treatment in time for them.
Peter told the BBC: "I'm the luckiest person in the world to be sitting here on the verge of having that.
"Hopefully that will be made available to everybody, to my brothers and sisters and fundamentally my children."
He, along with the other trial participants, can continue taking the drug as part of the next wave of trials.
They will set out to show whether the disease can be slowed, and ultimately prevented, by treating Huntington's disease carriers before they develop any symptoms.
Prof John Hardy, who was awarded the Breakthrough Prize for his work on Alzheimer's, told the BBC: "I really think this is, potentially, the biggest breakthrough in neurodegenerative disease in the past 50 years.
"That sounds like hyperbole - in a year I might be embarrassed by saying that - but that's how I feel at the moment."
The UCL scientist, who was not involved in the research, says the same approach might be possible in other neurodegenerative diseases that feature the build-up of toxic proteins in the brain.
The protein synuclein is implicated in Parkinson's while amyloid and tau seem to have a role in dementias.
Off the back of this research, trials are planned using gene-silencing to lower the levels of tau.
Prof Giovanna Mallucci, who discovered the first chemical to prevent the death of brain tissue in any neurodegenerative disease, said the trial was a "tremendous step forward" for patients and there was now "real room for optimism".
But Prof Mallucci, who is the associate director of UK Dementia Research Institute at the University of Cambridge, cautioned it was still a big leap to expect gene-silencing to work in other neurodegenerative diseases.
She told the BBC: "The case for these is not as clear-cut as for Huntington's disease, they are more complex and less well understood.
"But the principle that a gene, any gene affecting disease progression and susceptibility, can be safely modified in this way in humans is very exciting and builds momentum and confidence in pursuing these avenues for potential treatments."
The full details of the trial will be presented to scientists and published next year.
The therapy was developed by Ionis Pharmaceuticals, which said the drug had "substantially exceeded" expectations, and the licence has now been sold to Roche.
||||| Lawyers are bringing a case against a London hospital trust that could trigger major changes to the rules governing patient confidentiality. The case involves a woman who is suing doctors because they failed to tell her about her father’s fatal hereditary disease before she had her own child.
The woman discovered – after giving birth – that her father carried the gene for Huntington’s disease, a degenerative, incurable brain condition. Later she found out she had inherited the gene and that her own daughter, now eight, has a 50% chance of having it.
The woman – who cannot be named for legal reasons – says she would have had an abortion had she known about her father’s condition, and is suing the doctors who failed to tell her about the risks she and her child faced. It is the first case in English law to deal with a relative’s claim over issues of genetic responsibility.
“This could really change the way we do medicine, because it is about the duty that doctors have to share genetic test results with relatives and whether the duty exists in law,” said Anna Middleton, head of society and ethics research at the Wellcome Genome Campus in Cambridge.
Experts say that as more is discovered about the genetic components of medical conditions, including cancer and dementia, doctors will come under increasing pressure to consider not only their patients’ needs but also those of relatives who may share affected genes. The case also raises questions over how much effort clinicians need to put into tracing relatives, and whether they will be sued if their attempts do not go far enough.
In effect, lawyers say the definition of a patient is facing change. In future, a patient may be not just the person who provided a genetic sample, but may be defined as also those affected by that genetic sample.
“The outcome is potentially very important,” said a spokesman for Fieldfisher, the London law firm representing the woman. “Should clinicians be legally obliged to consider the interests of anyone they are reasonably aware of who could be affected by genetic information – or is the protection afforded by current professional guidance enough?”
The woman’s father shot and killed his wife in 2007 and was convicted of manslaughter. Two years later, doctors at St George’s Hospital in south London found he had Huntington’s disease and asked him to tell his daughter about his condition and her risk of developing it. But he refused to do so because he thought she might abort the child she was carrying. The doctors accepted his decision.
In April 2010 the woman gave birth to a daughter. Four months later, she learned her father had Huntington’s disease. She was subsequently diagnosed as also having the disease. She has had to cope with the impact of the disease, and the knowledge that her daughter has a 50% chance of succumbing to it.
The woman decided to sue St George’s Healthcare NHS Trust, who she believed should have told her that she was at risk. Her lawyers claim the trust’s doctors had a duty of care to share the father’s diagnosis with her, even against his wishes. However, when the case went to the high court, concern was raised that allowing it to proceed could undermine the doctor-patient relationship, while doctors might also be overly burdened by having to assess whether or not to make disclosures to patients’ relatives. The woman’s claim was struck out.
However, the decision was overturned by the court of appeal last year. It accepted that doctors might face extra pressure in considering whether to inform third parties about a person’s diagnosis, but said it was not necessarily in the public interest that clinicians be protected from that. This month, the case of Patient ABC versus St George’s Healthcare Trust was set for trial in November next year.
However, the very fact that the court of appeal has decided this issue might be enshrined in law indicates that some changes in medical practice are now inevitable.
This is emphasised by geneticist Anneke Lucassen and bioethicist Roy Gilbar, who state in the Journal of Medical Genetics: “As genetics enters mainstream medical practice, knowing when it might be appropriate to alert relatives about heritable risks becomes an issue for medical practice in general.” In fact, in some circumstances doctors do sometimes share information with patients’ relatives at present.
But Middleton said: “Enshrining that in law actually gives doctors more protection, but how much effort should a clinician make in chasing up relatives? And those relatives might be unhappy to be tracked down and given unwelcome information – for example, that they possess a gene that predisposes them to breast cancer. You cannot take back that information once you have given it.”
Huntington’s disease is a fatal neurological disease first identified by US physician George Huntington in 1872. The late US folk singer Woody Guthrie was among those who have had the condition.
It is usually caused by a mutant gene inherited from a parent, although in a small number of cases the mutation appears to arise spontaneously. Symptoms usually start between 30 and 50 years of age, although they can begin much earlier or later, and include stumbling and clumsiness, depression, involuntary jerking of the limbs and mood swings.
There is no cure and it is usually fatal 15 to 20 years after it appears. Doctors are able to provide some treatments for its symptoms. | – A woman who inherited Huntington's Disease is suing a London hospital for not divulging that her father had the degenerative illness, the Guardian reports. Still unidentified, the woman says she would have aborted her child if she'd known, and now worries for the future of her 8-year-old daughter—who has a 50% chance of inheriting the incurable brain condition. It's the first time England has faced such a case of genetic responsibility: "This could really change the way we do medicine, because it is about the duty that doctors have to share genetic test results with relatives and whether the duty exists in law," says Anna Middleton, an ethics expert at Cambridge University. It's "unhappy" news, she notes, and "you cannot take back that information once you have given it." The woman who's suing has a dark backstory: Her dad murdered his wife in 2007 and was diagnosed with Huntington's disease two years later, but asked doctors at St George's Hospital in south London not to tell his pregnant daughter for fear she'd get an abortion. The doctors agreed, but in 2010 the woman received her own diagnosis, giving her a 50% of succumbing to Huntington's. So she sued St. George's Healthcare NHS Trust, a case that's now proceeding after being thrown out by a high court and reinstated on appeal. On the brighter side, the BBC reported last year that an experimental drug can correct the defect that causes Huntington's. "For the first time we have the potential, we have the hope, of a therapy that one day may slow or prevent Huntington's disease," says the study's lead researcher. (Another woman is suing a hospital for ignoring her purple bracelet.) |
an ambitious goal in cosmology is to understand how the universe evolved from its presumed beginning in the big bang to the familiar collection of stars and galaxies that we observe around us today .
the last decade has seen tremendous progress in understanding the large role that gravitational instability almost certainly played .
although we still do not have complete analytic understanding , reasonable analytic approximations for the growth of gravitationally driven perturbations are now known , and sophisticated n - body simulators and simulations are freely available for obtaining more precise or detailed information .
remarkable progress has also been made in observationally constraining the initial conditions that are required as input to the simulations or approximations .
the available data appear largely consistent with the idea that primordial fluctuations were gaussian ( e.g. bromley & tegmark 1999 ) with a power - spectrum similar to that of an adiabatic @xmath9cdm model over @xmath10 orders of magnitude in spatial scale ( e.g. numerous recent results from observations of the cosmic microwave background , reviewed most recently by scott 2000 ; white , efstathiou , & frenk 1993 ; croft et al .
1999 ) .
but gravitational instability is only half ( the easy half ! ) of the story .
it alone can not tell us how or when the stars that populate the universe today were formed .
presumably stars began to form within overdensities in the matter distribution as these overdensities slowly evolved from small ripples in the initial conditions into the large collapsed objects of today , but modeling this has proved tremendously difficult .
we can not easily model the formation of a single star ( see abel s contribution to these proceedings ) , let alone the ten billion stars in a typical galaxy .
even the most sophisticated theoretical treatments of galaxy formation rely on simplified `` recipes '' for associating the formation of stars with the gravitationally driven growth of perturbations in the underlying matter distribution .
the adopted recipes for star formation , although physically plausible , are by far the most uncertain component in theoretical treatments of galaxy formation . we will need to check them through observations of star - forming galaxies at high - redshift before we can be confident that our understanding of galaxy formation is reasonably correct .
these observations , and their implications , are the subject of my talk .
in the past 5 years several techniques have been shown effective for finding galaxies at @xmath1 .
i do not have space to list them all ; a partial list would include deep optical magnitude limited surveys ( e.g. cohen's contribution to these proceedings ) , narrow band surveys ( e.g. hu , cowie , & mcmahon 1998 ) , targeted surveys around known agn at @xmath1 ( e.g. hall & green 1998 , djorgovski et al . 1999 ) , @xmath2 m surveys ( e.g. ivison et al .
2000 ) , and color - selected surveys ( e.g. steidel et al . 1999 , adelberger et al . ) .
different selection techniques have different advantages and are optimized for answering different questions .
color - selected surveys , which detect numerous galaxies over large and ( hopefully ) representative volumes , are especially well suited for studying large scale structure at high redshift
. they will be the main focus of this review . in color - selected surveys ,
spectra are obtained only for objects with broad - band colors indicating that they are likely to lie at a given redshift .
the left panel of figure 1 illustrates why galaxies at certain redshifts have distinctive broad - band colors .
the right panel shows spectroscopic redshifts for galaxies satisfying various simple color selection criteria , demonstrating that color selection is a reasonably effective way of finding galaxies at a range of redshifts @xmath1 . at @xmath11
these galaxies were selected by exploiting the balmer - break ( adelberger et al .
2000 ) , at @xmath3 and @xmath12 by exploiting the lyman - break ( steidel et al .
1999 ) , and at @xmath13 by an approach similar to the `` uv drop in '' technique described by roukema in these proceedings .
the data in figure 1 represent only our own efforts ; many more galaxies at similar ( and higher ) redshifts have been found by other groups with a variety of techniques .
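purely as an illustration of how such color selection is implemented in practice , the sketch below applies simple two - color cuts ; the band names and numerical thresholds are hypothetical placeholders , not the actual criteria of steidel et al . ( 1999 ) or adelberger et al . ( 2000 ) .

import numpy as np

def lyman_break_candidates(u, g, r, r_limit=25.5):
    """Toy two-color selection of z~3 Lyman-break candidates.

    u, g, r : arrays of AB magnitudes in three hypothetical bands.
    The numerical cuts are illustrative placeholders, not published criteria.
    """
    u_minus_g = u - g            # large: flux decrement shortward of the break
    g_minus_r = g - r            # small: intrinsically blue far-UV continuum
    return ((r < r_limit) &                      # bright enough for spectroscopy
            (u_minus_g > 1.5) &
            (g_minus_r < 1.2) &
            (u_minus_g > g_minus_r + 1.0))       # stay away from the stellar/low-z locus

# example with a mock catalogue of five objects
u = np.array([26.8, 24.0, 27.5, 25.9, 23.5])
g = np.array([24.6, 23.6, 25.2, 24.8, 23.3])
r = np.array([24.3, 23.2, 24.4, 24.6, 23.1])
print(lyman_break_candidates(u, g, r))   # -> [ True False  True False False]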
although different strategies for finding galaxies at @xmath1 result in samples weighted towards different types of objects , there is nevertheless significant overlap between the galaxy populations that are found .
the left panel of figure 2 , showing the ly-@xmath14 equivalent width distribution of color - selected lyman - break galaxies at @xmath3 , illustrates the point .
about 20% of lyman - break galaxies have equivalent widths large enough to be detected in standard narrow - band searches for high redshift galaxies . at fixed continuum luminosity ,
narrow - band searches detect only the fraction of galaxies with the largest equivalent widths , and at fixed equivalent width , color - selected surveys detect only the fraction of galaxies with the brightest continua ; but the galaxies detected with these techniques appear to belong to the same underlying population .
similarly , although the @xmath11 balmer - break selection criteria of adelberger et al .
( 2000 ) are designed to select optically bright star - forming galaxies at @xmath5 , a substantial fraction of these galaxies ( the limited available data suggests 1 in @xmath15 , or @xmath16 per square arcmin to @xmath17 ) have the red optical - to - infrared colors @xmath18 that are often thought to be characteristic of extremely dusty or old galaxies at this redshift .
many of the same galaxies will therefore be found both by surveys for old or dusty galaxies at @xmath11 that exploit their expected large optical - to - infrared colors and by surveys for star - forming galaxies at @xmath11 that exploit the balmer break . finally , there is even some overlap between far - uv selected samples and far - ir selected samples of galaxies at high redshift , though these two selection strategies might have been expected _ a priori _ to find completely different populations of objects .
for example , the two @xmath19 mjy sources robustly identified with star - forming galaxies ( as opposed to agn ) at @xmath20 , smmj14011 + 0252 at @xmath21 ( ivison et al .
2000 ) and west - mmd11 at @xmath22 ( chapman et al .
2000 ) , have the relatively blue far - uv colors observed in optically selected galaxies at similar redshifts ; they are typical , aside from their unusually _ bright _ far - uv luminosities , of the kind of galaxies found in optical surveys .
the relationship between sub - mm selected and uv - selected high - redshift populations can be partially understood with plots like the right panel of figure 2 , which shows the inferred distribution of dust opacities among optically selected galaxies at @xmath3 ( adelberger et al . ) .
these dust opacities were estimated with a relationship between @xmath23 and far - uv spectral slope that is obeyed by starburst galaxies in the local universe ( e.g. meurer , heckman , & calzetti 1999 ) .
it is not known if high - redshift galaxies obey this relationship ; see 4 below .
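for concreteness , the local - starburst calibration of meurer , heckman , & calzetti ( 1999 ) can be written as an attenuation at 1600 angstroms of roughly a_1600 = 4.43 + 1.99 * beta ; the sketch below simply applies that published relation , subject to the caveat in the text that high - redshift galaxies may not obey it .

import numpy as np

def a1600_from_beta(beta):
    """Far-UV attenuation in magnitudes from the UV spectral slope beta,
    using the Meurer et al. (1999) local calibration
    A_1600 ~= 4.43 + 1.99*beta, floored at zero."""
    return np.clip(4.43 + 1.99 * np.asarray(beta, dtype=float), 0.0, None)

def dust_corrected_uv_luminosity(l_uv_observed, beta):
    """Scale the observed far-UV luminosity by the implied attenuation factor."""
    return l_uv_observed * 10.0 ** (0.4 * a1600_from_beta(beta))

# beta = -1.0 (a middling slope) implies A_1600 ~ 2.4 mag, i.e. roughly a
# factor ~9 upward correction to the far-UV luminosity.
print(a1600_from_beta([-2.2, -1.0, 0.0]))   # -> approximately [0.052 2.44 4.43]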
the majority of galaxies at this redshift appear to have middling dust opacities and are therefore far easier to detect in the optical than at @xmath2 m , but some galaxies , even in optical surveys , are so dusty that they would have been easier to detect with sub - mm rather than optical imaging .
although relatively dust - free galaxies appear to dominate high - redshift populations by number , it is unclear if they dominate by star - formation rate : dusty galaxies tend to have much larger star - formation rates and this compensates , to some unknown but probably large extent , for their smaller numbers .
i will discuss this further in 4 .
the large samples of star - forming galaxies at @xmath1 produced by color - selected surveys allow one to begin trying to fit star - forming galaxies into the larger context of structure formation in the universe .
the most obvious way to make a connection between the observed galaxies and perturbation in the underlying distribution of matter is to attempt to estimate the masses associated with individual galaxies by observing their velocity dispersions .
unfortunately this approach is surprisingly difficult . to begin with , it is hard to measure velocity widths for high - redshift galaxies .
the [ oii ] , h@xmath24 , and [ oiii ] nebular emission lines of galaxies at @xmath3 , for example , are redshifted into the bright sky of the near - ir , and as a result perhaps only 2 - 3 velocity widths can be measured per night even with an 8m - class telescope . a more serious problem is interpreting the velocity widths that have been measured .
the limited available data suggest that most lyman - break galaxies at @xmath3 have nebular line widths corresponding to @xmath25 70 - 80 km / s , for example ( pettini et al .
1998 , 2000 ) .
these velocity dispersions are far smaller than the circular velocities that were expected for lyman - break galaxies on a number of other grounds ( e.g. baugh et al .
1999 ; mo , mao , & white 1999 ) and this has led some to suggest that lyman - break galaxies are low mass `` satellite galaxies '' undergoing mergers ( e.g. somerville's and kolatt's contributions to these proceedings ) .
but because the baryons in these galaxies presumably cooled and collapsed farther than the dark matter before stars began to form , their nebular line widths are expected to be significantly smaller than the full circular velocity of the dark matter potential .
the exact size of the difference is not easy to calculate .
analytic attempts at the calculation ( e.g. mo , mao , & white 1999 ) rely on a large number of simplifying assumptions , but the real situation may not be so simple : lehnert & heckman ( 1996 ) and kobulnicky & gebhardt ( 2000 ) have presented evidence for a complicated relationship between nebular line widths and circular velocities in the local universe among late - type and starburst galaxies that are presumably the closest analogs to detected high - redshift galaxies .
the spatial distribution of high redshift galaxies provides an alternate way of making a connection between star - forming galaxies and perturbations in the underlying distribution of dark matter . for a given cosmogony the spatial distribution of matter at any redshift is straightforward to calculate with simulations or analytic approximations .
once large numbers of star - forming galaxies have been detected near a single redshift we can therefore ask , for example , what kinds of collapsed objects in the expected distribution of mass at that redshift have the same spatial distribution as the observed galaxies . in this way
we can attempt to place star - forming galaxies in the larger context of structure formation .
observational constraints on the spatial clustering of galaxies at @xmath1 will be the main subject of this section , starting at @xmath11 and moving later to @xmath3 .
attempts to measure clustering strength for galaxies at @xmath11 have been carried out by carlberg et al .
, le fevre et al . , and cohen et al .
; their results are reviewed in cohen's contribution to these proceedings . here
i will focus instead on previously unpublished results from a survey of star - forming galaxies at @xmath26 ( adelberger et al . ) .
this sample consists of several thousand photometric candidates with @xmath27 ; to date redshifts have been obtained for @xmath6 of them .
the mean redshift of the spectroscopically observed candidates is @xmath28 and the standard deviation is @xmath29 .
we currently have uniform spatial spectroscopic sampling in four @xmath30 and one @xmath31 square fields .
roughly 100 redshifts have been obtained in each ( figure 3 ) .
in the four @xmath30 fields , the variance of galaxy counts in cubes of comoving side length @xmath32 mpc ( @xmath33 , @xmath34 ) , estimated as described in adelberger et al .
1998 from the data in figure 3 , is @xmath35 . for a power - law spatial correlation function of the form
@xmath36 , which is consistent with the angular clustering of these galaxies , this variance corresponds to a comoving correlation length of @xmath37 mpc , or a variance of galaxy counts in spheres with radius @xmath38 mpc of @xmath39 .
the @xmath38 mpc variance is similar to the expected variance of mass at @xmath5 in spheres of the same size , estimated by evolving back to @xmath5 with linear theory the value of @xmath40 determined at @xmath41 from the abundance of galaxy clusters ( e.g. eke , cole , & frenk 1996 ) .
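for reference , the conversion between a power - law correlation function and the count variance in spheres used above is standard ( peebles 1980 ) ; a minimal sketch , assuming @xmath36 really is a pure power law over the relevant scales , is given below . it is not expected to reproduce the quoted numbers exactly , since those also involve the assumed cosmology , shot - noise corrections , and the cube - to - sphere conversion .

def sigma2_spheres(r0, gamma, radius):
    """Variance of counts in spheres of the given radius for a power-law
    correlation function xi(r) = (r/r0)**(-gamma) (Peebles 1980):
    sigma^2 = (r0/R)**gamma * 72 / (2**gamma * (3-g)*(4-g)*(6-g))."""
    prefactor = 72.0 / (2.0 ** gamma * (3 - gamma) * (4 - gamma) * (6 - gamma))
    return prefactor * (r0 / radius) ** gamma

# illustrative call with the slope quoted in the text
print(sigma2_spheres(r0=4.5, gamma=1.8, radius=8.0))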
balmer - break galaxies are evidently fairly unbiased tracers of mass fluctuations at @xmath11 ( for the @xmath33 , @xmath34 cosmology assumed throughout ) . evolving their clustering strength forward to @xmath42 with the linear prescription of tegmark & peebles ( 1998 ) suggests that these galaxies are likely the progenitors of galaxies with @xmath43 in the local universe , i.e. relatively normal galaxies .
this result could perhaps have been anticipated from the comoving abundance of balmer - break galaxies , which , at @xmath44 to @xmath45 , is similar to that of @xmath46 galaxies in the local universe .
the increase in clustering strength from galaxies in our sample at @xmath5 to galaxies at @xmath42 therefore appears to be relatively easy to understand ; it is almost exactly what one might have expected gravitational instability to have produced acting on a population of formed objects .
remarkably the same is not true at higher redshifts . rather than decreasing further , at redshifts
@xmath47 the observed clustering strength of detected star - forming galaxies begins to rise .
hints that star - forming galaxies at @xmath20 might be strongly clustered were first provided by targeted surveys of small and carefully selected volumes , often around known agn ( e.g. giavalisco , steidel , & szalay 1994 ; le fevre et al .
1996 ; francis et al . 1997
; see also later work by campos et al .
1999 and djorgovski et al .
further evidence came subsequently from the color - selected survey of lyman - break galaxies at @xmath3 ( e.g. steidel et al .
1998 ) . figure 4 shows the projected correlation function @xmath48 ( e.g. davis & peebles 1983 ) of galaxies in this sample .
the implied correlation length , neglecting systematic errors , is @xmath49 mpc comoving ( @xmath33 , @xmath34 ) . a similar estimate of the correlation length follows from the relative variance of lyman - break galaxy counts in cubes of comoving side - length @xmath50 mpc , @xmath51 ( adelberger et al . 2000 ; this value supersedes our previous estimate , which was based on a smaller data set ) .
gravitational instability acting on a population of galaxies that had @xmath52 comoving mpc at @xmath3 would not produce a population with @xmath53 comoving mpc ( similar to the observed @xmath54 of balmer - break galaxies ) at @xmath5 or a population with @xmath55 comoving mpc ( similar to normal local galaxies ) at @xmath41 .
this can be shown in a crude way by first assuming that the correlation function of lyman - break galaxies selected at @xmath3 would maintain a constant slope of @xmath56 at lower redshifts , and then using the linear approximation of tegmark & peebles ( 1998 ) to evolve the observed clustering of lyman - break galaxies at @xmath3 to lower redshifts .
limited space does not allow a more careful analysis or the consideration of cosmological models besides @xmath33 , @xmath34 , although both would affect our conclusions somewhat . in this simplified analysis
, we would expect former lyman - break galaxies to have @xmath55 comoving mpc at @xmath5 and @xmath57 comoving mpc at @xmath41 .
these correlation lengths are significantly larger than those of balmer - break galaxies at @xmath5 and of normal galaxies at @xmath41 , implying that lyman - break galaxies are unlikely to be the progenitors of either population .
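the ' galaxy - conserving ' linear evolution invoked here ( fry 1996 ; tegmark & peebles 1998 ) amounts to the bias minus one scaling inversely with the linear growth factor ; a minimal sketch , with a standard growth - factor integral for a flat lambda cosmology and illustrative numbers only , is :

import numpy as np
from scipy.integrate import quad

def growth_factor(z, om=0.3, ol=0.7):
    """Unnormalized linear growth factor D(z) for a flat LCDM cosmology:
    D(a) proportional to H(a) * integral_0^a da' / (a' H(a'))**3."""
    def hubble(a):
        return np.sqrt(om / a ** 3 + ol)   # H(a)/H0
    a = 1.0 / (1.0 + z)
    integral, _ = quad(lambda ap: 1.0 / (ap * hubble(ap)) ** 3, 1e-4, a)
    return hubble(a) * integral

def evolve_bias(b_initial, z_initial, z_final, **cosmo):
    """Galaxy-conserving bias evolution: b(z) - 1 = (b_i - 1) * D(z_i) / D(z)."""
    return 1.0 + (b_initial - 1.0) * growth_factor(z_initial, **cosmo) / growth_factor(z_final, **cosmo)

# a population with bias 4 at z=3 drifts to roughly b~2.6 by z=1 and b~2 by z=0
print(evolve_bias(4.0, 3.0, 1.0), evolve_bias(4.0, 3.0, 0.0))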
what are they instead ?
why is their spatial clustering so much stronger than we might naively have expected ?
a clue is provided by their number density , @xmath58 ( @xmath33 , @xmath34 ) to @xmath59 , which is about 5 times lower than the number density of normal galaxies in the local universe or of balmer - break galaxies with @xmath27 in our @xmath5 sample .
perhaps the relatively rare lyman - break galaxies are not progenitors of typical galaxies at @xmath60 but instead are special in some way .
one way in which they are special is that they are the uv - brightest galaxies ( and therefore presumably the most rapidly star - forming galaxies ) at @xmath3 .
semi - analytic calculations ( e.g. baugh et al . 1998 , kauffmann et al .
1999 ) suggest that the most rapidly star - forming galaxies at high redshift will reside within the most massive collapsed objects , rather than within typical collapsed objects , and so perhaps we can understand the clustering of lyman - break galaxies by trying to associate them with massive collapsed objects at @xmath3 instead of with galaxy populations at lower redshift . in a classic paper , kaiser ( 1984 )
showed that in hierarchical models the most massive collapsed objects at any redshift are more strongly clustered than the distribution of matter as a whole .
a formalism for estimating the clustering strength of collapsed objects as a function of mass was subsequently developed by many authors ; see ( e.g. ) mo & white ( 1996 ) .
remarkably the observed clustering of lyman - break galaxies is indistinguishable ( as far as we can tell ; see wechsler's contribution to these proceedings ) from the predicted clustering of the most massive collapsed objects at @xmath3 down to a similar abundance ( e.g. adelberger et al . ) .
this result suggests that there may indeed be a simple relationship between mass and star - formation rate in high redshift galaxies , as many semi - analytic models predicted ( e.g. baugh et al . 1998 ) .
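the kaiser ( 1984 ) picture has the explicit form worked out by mo & white ( 1996 ) ; a minimal sketch of that standard peak - background - split bias , with a user - supplied rms fluctuation sigma(M) and purely illustrative numbers , is :

import numpy as np

DELTA_C = 1.686  # linear spherical-collapse threshold

def halo_bias(sigma_m, growth, delta_c=DELTA_C):
    """Mo & White (1996) linear halo bias: b = 1 + (nu**2 - 1)/delta_c,
    with nu = delta_c / (sigma(M) * D(z)); sigma_m is the rms linear
    fluctuation on the halo mass scale extrapolated to z=0, and growth
    is the linear growth factor normalized to D(0)=1."""
    nu = delta_c / (np.asarray(sigma_m, dtype=float) * growth)
    return 1.0 + (nu ** 2 - 1.0) / delta_c

# a halo scale with sigma(M)=1 is a rare, strongly biased object at z=3
# (D ~ 0.32 for an Om=0.3 flat lambda cosmology) but only mildly biased at z=0
print(halo_bias(1.0, 0.32), halo_bias(1.0, 1.0))   # -> ~16.9 and ~2.1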
if the mass of a galaxy plays a dominant role in determining its star formation rate , then we might expect the star - formation rate distribution of @xmath3 galaxies to be related in a simple way to the distribution of masses of collapsed objects at @xmath3 . as a crude guess at the star - formation rate associated with a collapsed `` halo
, '' we can take the mass cooling rate in the halo for large masses and the mass cooling rate times a number proportional to @xmath61 for small masses where supernova feedback is important ( e.g. white & frenk 1991 ) . in this approximation , for cooling dominated by bremsstrahlung
, we would expect @xmath62 @xmath63 for large @xmath64 and @xmath62 @xmath65 for small @xmath64 .
figure 5 shows the slopes of our simplistic `` theoretical '' sfr distribution in these two limits . at each abundance
the slope of the mass function was estimated with the press - schechter ( 1974 ) approximation for an @xmath33 , @xmath34 , @xmath66 , @xmath67 , @xmath68 cosmogony ; changing the values of any of these parameters would change the mass function slope somewhat .
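the press - schechter ( 1974 ) approximation referred to here has a simple closed form ; the sketch below states it with a toy power - law sigma(M) standing in for the actual @xmath33 , @xmath34 spectrum , so the numbers are illustrative only .

import numpy as np

DELTA_C = 1.686  # linear spherical-collapse threshold

def press_schechter_dndlnm(mass, sigma_m, dlnsigma_dlnm, mean_density,
                           growth=1.0, delta_c=DELTA_C):
    """Press-Schechter comoving abundance per ln mass:
    dn/dlnM = sqrt(2/pi) * (rho_bar/M) * nu * |dln sigma/dln M| * exp(-nu**2/2),
    with nu = delta_c / (sigma(M) * D(z))."""
    nu = delta_c / (sigma_m * growth)
    return (np.sqrt(2.0 / np.pi) * (mean_density / mass) * nu
            * np.abs(dlnsigma_dlnm) * np.exp(-0.5 * nu ** 2))

# toy sigma(M) = (M/1e13)**(-0.25) and a mean comoving matter density of
# ~4e10 Msun/Mpc^3 (Om=0.3, h=0.7); D(z=3) ~ 0.32
m = np.logspace(11, 13, 3)
sigma = (m / 1e13) ** (-0.25)
print(press_schechter_dndlnm(m, sigma, -0.25, 4.0e10, growth=0.32))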
also shown in figure 5 is the observed `` dust - corrected '' luminosity function of these galaxies , estimated using the @xmath24@xmath23 correlation of meurer et al .
( 1999 ) as described in adelberger & steidel ( 2000 ) .
the observed slopes agree reasonably well with our naive expectations , but the star - formation rates at any abundance are unexpectedly high , many times larger than the expected cooling rate for halos of similar abundance .
( standard formulae can be used to derive star - formation rates from the far - uv luminosities shown in figure 5 ; see , e.g. , madau , pozzetti , & dickinson 1998 . )
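the standard conversion alluded to in the parenthesis is , for a salpeter imf , roughly sfr = 1.4e-28 times the far - uv luminosity density in erg / s / hz ( kennicutt 1998 ; the madau , pozzetti , & dickinson 1998 calibration is similar ) ; the one - line sketch below uses that coefficient , which shifts with the assumed imf and dust correction .

def sfr_from_uv(l_nu_uv, calibration=1.4e-28):
    """Star-formation rate in Msun/yr from the far-UV luminosity density
    L_nu in erg/s/Hz, with the Salpeter-IMF calibration ~1.4e-28."""
    return calibration * l_nu_uv

# L_nu = 1e29 erg/s/Hz (a bright, dust-uncorrected Lyman-break galaxy)
# corresponds to roughly 14 Msun/yr
print(sfr_from_uv(1e29))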
this has been taken as evidence that the star formation in lyman - break galaxies is fueled not by quiescent cooling but instead by the rapid cooling that would accompany a merger of two smaller galaxies ( e.g. somerville's and kolatt's contributions to these proceedings ) .
many uncertain steps lie behind the conclusion that the star formation rates in lyman - break galaxies far exceed their quiescent cooling rates , however .
the star - formation rates of lyman - break galaxies could be considerably lower than is usually deduced , for example , if we are wrong about the shape of the imf , about the magnitude of the required dust corrections , or even about the value of various cosmological parameters , while the cooling rates in lyman - break galaxies could be significantly higher than usual estimates if we are wrong about the baryon fraction , the metallicity , or the spatial distribution of the gas or dark matter in these galaxies
. it would be tremendously interesting if the star formation rates of lyman - break galaxies really were far higher than their quiescent cooling rates , but i do not think this has been conclusively shown .
in any case i will assume for now that lyman - break galaxies are strongly clustered because they reside within rare and massive collapsed objects at @xmath3 , and return to the question of how they might be related to the balmer - break galaxies observed at @xmath5 .
the calculation above showed that @xmath3 lyman - break galaxies with @xmath27 can not be the progenitors of @xmath11 balmer - break galaxies with @xmath27 , and this was hardly surprising since the abundance of the lyman - break galaxies is so much lower . but
suppose we had significantly deeper @xmath69 photometry , so that we could detect fainter lyman - break galaxies and reach an abundance similar to that of balmer - break galaxies at @xmath11 .
could this deeper population of lyman - break galaxies evolve into a population like the balmer - break galaxies by @xmath11 ?
if halo mass and star formation rate are related in the simple way described above , then the deep lyman - break population would be somewhat less strongly clustered than the current @xmath27 population , helping to remove the inconsistency between the observed @xmath54 of balmer - break galaxies and the expected @xmath54 of lyman - break galaxy descendants . a deep lyman - break population , with abundance @xmath70 times that of the @xmath27 sample , has in fact been detected in the hdf , and its correlation length @xmath54 ( which can not be measured very accurately because of the hdf's small size ) appears smaller than that of the brighter ground - based population ( giavalisco et al . 2000 )
; but it looks like this effect is not strong enough to remove the inconsistency in clustering strength for lyman - break galaxy descendants and balmer - break galaxies at @xmath11 .
we are left with the result that galaxies selected by the balmer - break technique at @xmath11 are probably not ( for the most part ) the descendants of those detected with the lyman - break technique at @xmath3 . because balmer - break galaxies appear to be representative of typical star - forming galaxies at @xmath11 ,
the simplest interpretation is that lyman - break galaxies have largely stopped forming stars by @xmath11 . are they perhaps instead passively evolving into the elliptical galaxies observed at lower redshifts ? in principle this sort of argument could provide a very stringent constraint on the star - formation rates of lyman - break galaxy descendants at @xmath5 , since ( for @xmath33 , @xmath34 ) galaxies with as little as @xmath71/yr of star formation would have bright enough uv continua ( in the absence of dust obscuration ) to be included in our balmer - break sample .
but i am still not convinced that the argument is completely robust ; firmly establishing that former @xmath3 lyman - break galaxies are not significantly forming stars at @xmath11 will require a much better understanding than is currently available of the clustering of fainter and more numerous lyman - break galaxies at @xmath3 and of what kinds of star - forming objects might not satisfy our `` balmer - break '' selection criteria at @xmath5 .
a full observational understanding of star formation at high redshift can only be achieved if we are able to detect most of the star formation in representative portions of the high redshift universe . but how can we be sure that our surveys are not missing a large fraction of the star formation at high redshift ?
the sad answer is that we can not .
there are too many ways that star formation could be hidden from our surveys for us ever to be sure that we have detected most of it .
surveys can not detect the star formation that occurs in objects below their flux and surface brightness limits , for example , and they can not directly detect the formation of the low mass stars that ( at least in the local universe ) dominate the total stellar mass . at best we can aim for a reasonably complete census of the formation of massive stars that occurs in objects above our flux and surface brightness limits .
if these limits are deep enough , and if the high - redshift imf is similar enough in all environments to what we have assumed , then this sort of sample can serve as an acceptable proxy for a true census of all star formation at high redshift .
what is the best way to produce a reasonably complete survey of massive star formation ?
massive stars emit most of their luminosity in the uv , and so naively we might choose a deep uv - selected survey .
it has become clear in recent years , however , that most of the uv photons emitted by stars in rapidly star - forming galaxies are promptly absorbed by dust , and as a result most of the luminosity produced by massive stars tends to emerge from these galaxies in the far - ir where dust radiates .
this is true in the local universe for a broad range of rapidly star - forming galaxies , from the famous class of `` ultra luminous infrared galaxies '' ( ulirgs , galaxies with @xmath72 ) to the much fainter uv - selected starbursts contained in the iue atlas ( e.g. meurer et al . 1999 ) ; and the recent detection of a large extragalactic far - ir background ( e.g. fixsen et al . 1998 ) suggests that it is likely to have been true at high redshifts as well .
two implications follow .
the first is that far - ir luminosities probably provide a better measurement of rapidly star - forming galaxies star - formation rates than do uv luminosities .
the second is that even very rapidly star - forming galaxies may not be detected in a uv selected survey if they are sufficiently dusty .
far - ir / sub - mm selected surveys should therefore in principle provide a much better census of massive star formation at any redshift than uv selected surveys .
@xmath2 m surveys are likely to provide an especially good census at @xmath4 , because favorable @xmath73-corrections make a galaxy that is forming stars at a given rate appear almost equally bright at @xmath2 m for any redshift in this range ( see hughes contribution to these proceedings ) , and consequently a flux - limited sample at @xmath2 m is nearly equivalent to a star - formation limited sample at @xmath4 .
this is exactly the kind of sample that detailed attempts to understand star formation at high redshift require .
so why then was most of this review devoted to uv - selected high redshift samples rather than the apparently superior @xmath2 m samples ?
the reason is that @xmath2 m observations are comparatively difficult .
the deepest @xmath2 m image taken with scuba ( the current state - of - the - art sub - mm bolometer array ) reached a depth of 2 mjy in a @xmath74 square arcminute region of the sky , for example , and five sources were detected ( hughes et al . 1998 ) .
in contrast a modern instrument on a 4m - class optical telescope can easily obtain photometry to a depth of @xmath75 jy over a @xmath76 square arcminute region , detecting thousands of galaxies at @xmath1 . even though optical surveys do not select star - forming galaxies in the optimal way ,
most of what we know about galaxies at high redshift comes ( and will continue to come for many years ) from these surveys .
probably the most important question for those interested in high - redshift galaxy formation is whether the large and detailed view of the high - redshift universe provided by uv - selected surveys is reasonably complete .
this is equivalent to asking whether the galaxies responsible for producing the @xmath2 m extragalactic background are bright enough in the rest - frame uv to be included in current optical surveys .
if they are not , then optically selected surveys will not be able to teach us much of value about high - redshift star formation despite the wealth of information they contain . the straightforward way to constrain the uv luminosities of the objects that produce
the @xmath2 m background is to observe the uv luminosities of known @xmath2 m sources .
this is difficult in practice , however , because relatively few @xmath2 m sources have been detected and each has a significant positional uncertainty due to scuba s large diffraction disk .
often several optical sources lie within a sub - mm error box , and a great deal of effort is required to determine which one is the true optical counterpart ( e.g. ivison et al . 2000 ) .
moreover only about 30% of the @xmath2 m background can be resolved into discrete sources with current technology , and so optical observations of detected @xmath2 m sources can not conclusively tell us about the optical luminosities of the galaxies responsible for producing most of the @xmath2 m background .
an alternate approach , taken by adelberger & steidel ( 2000 ) , is to estimate the contribution to the @xmath2 m background from known optically selected populations at high redshift and compare this expected background to the observed background to see if there is significant shortfall .
meurer et al .
( 1999 ) have shown that the far - ir luminosities of uv - selected starbursts in the local universe can be estimated to within a factor of @xmath77 from the starbursts uv luminosities and spectral slopes @xmath24 .
their @xmath24/far - ir relationship was the foundation of our calculation .
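as a concrete ( and hedged ) illustration of how such a @xmath24/far - ir relationship is used , the sketch below adopts the commonly quoted meurer et al . ( 1999 ) starburst calibration a_1600 = 4.43 + 1.99 beta ; the exact form and coefficients used in the calculation described here are not reproduced , so the numbers are indicative only .

import numpy as np

def a_1600(beta):
    # uv attenuation in magnitudes from the uv spectral slope beta
    # (assumed meurer et al. 1999 form; clipped at zero for very blue slopes)
    return np.maximum(4.43 + 1.99 * np.asarray(beta, dtype=float), 0.0)

def far_ir_from_uv(l_uv_observed, beta):
    # crude far-ir estimate: the uv energy absorbed by dust is assumed to be
    # re-radiated entirely in the far-ir
    a = a_1600(beta)
    l_uv_intrinsic = l_uv_observed * 10.0 ** (0.4 * a)
    return l_uv_intrinsic - l_uv_observed

# a galaxy with observed l_1600 = 1 (arbitrary units) and beta = -1
print(far_ir_from_uv(1.0, -1.0))   # ~8.5: far-ir output several times the escaping uv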
it is not known if high - redshift galaxies obey this relationship .
chapman et al . ( 2000 ) presented evidence that the @xmath2 m fluxes of @xmath3 lyman - break galaxies may be somewhat lower than the relationship would suggest , but the evidence is very marginal when the uncertainties in lyman - break galaxies predicted @xmath2 m fluxes are taken into proper account .
adelberger & steidel ( 2000 ) have checked in a number of other ways whether @xmath1 star - forming galaxies obey the relationship .
figure 6 , showing the predicted and observed @xmath78 m fluxes of balmer - break galaxies in the @xmath5 sample described above , is an example .
the predicted fluxes in figure 6 assume that these galaxies will follow both the @xmath24/far - ir correlation of meurer et al .
( 1999 ) and the correlation between 69@xmath79 m luminosity and far - ir luminosity observed in the ( star - forming ) ulirg sample of genzel et al .
( 1998 ) and rigopoulou et al .
( 2000 ) . the @xmath78 m data are from the iso lw3 observations of flores et al . ( 1999 ) . a large fraction of objects with predicted fluxes below the detection limit were not detected , and a large fraction of objects with predicted fluxes above the detection limit were detected .
this plot therefore provides some support for the notion that balmer - break galaxies at @xmath5 obey the @xmath24/far - ir relationship .
if we assume that all optically selected galaxies at @xmath1 obey this relationship , then we can make a crude estimate of their total contribution to the @xmath2 m background . in this calculation
i will assume further that the comoving star formation density in optically selected populations is constant for @xmath80 ( e.g. steidel et al . 1999 ) , that the dust seds of optically selected galaxies at @xmath80 are similar to those of starbursts and ulirgs in the local universe , and that the ( unknown ) luminosity and @xmath24 distributions of optically selected galaxies at @xmath80 are similar to those measured for lyman - break galaxies at @xmath3 .
( see adelberger & steidel 2000 for a more complete discussion . ) under these assumptions optically selected populations at @xmath80 would be expected to produce @xmath2 m number counts and background that are surprisingly close to the observations ( figure 7 ) .
although the overall agreement is good , within the substantial uncertainties , there appear to be significant differences at the brightest @xmath2 m fluxes .
optically selected galaxies can not easily ( it seems ) account for the large number of observed sources with @xmath81mjy , and indeed barger , cowie , & richards ( 1999 ) have shown that these sources tend to have extremely faint optical counterparts .
it is possible that these bright sources are associated with agn , rather than the star - forming galaxies we have included in our calculation , but in any case it appears that the bulk of the @xmath2 m background could have been produced by known optically selected populations at high redshift .
this claim rests on a large number of assumptions that could easily be wrong , but i am willing to bet $ 100 that when alma finally resolves the @xmath82 background we will discover that most of it is produced by galaxy populations already detected and studied in the rest - frame uv . any takers ?
i would like to thank the organizers for support and for considerable patience .
special thanks are due to my collaborators c. steidel , m. dickinson , m. giavalisco , m. pettini , and a. shapley for their contributions to this work .
super special thanks go to c.s . for his comments on an earlier draft .
adelberger , k. l. et al . 1998 , apj , 505 , 18
adelberger , k. l. & steidel , c. c. 2000 , apj , submitted
adelberger , k. l. et al . 2000 , aj , in preparation
barger , a. j. , cowie , l. l. , & sanders , d. b. 1999 , apjl , 518 , 5
barger , a. j. , cowie , l. l. , & richards , e. a. 1999 , in asp conf . ser . , photometric redshifts and the detection of high redshift galaxies , ed . r. weymann et al . ( san francisco : asp )
baugh , c. m. , cole , s. , frenk , c. s. , & lacey , c. g. 1998 , apj , 498 , 504
blain , a. w. , kneib , j .- p. , ivison , r. j. , & smail , i. 1999 , apjl , 512 , l87
bromley , b. c. & tegmark , m. 1999 , apjl , 524 , l79
campos , a. et al . 1999 , apjl , 511 , l1
chapman , s. et al . 2000 , mnras , submitted
croft , r. a. c. , weinberg , d. h. , pettini , m. , hernquist , l. , & katz , n. 1999 , apj , 520 , 1
davis , m. & peebles , p. j. e. 1983 , apj , 267 , 465
djorgovski , s. j. et al . 1999 , in asp conf . ser . , photometric redshifts and the detection of high redshift galaxies , ed . r. weymann et al . ( san francisco : asp )
eke , v. r. , cole , s. , & frenk , c. s. 1996 , mnras , 282 , 263
fixsen , d. j. et al . 1998 , apj , 508 , 123
flores , h. et al . 1999 , apj , 517 , 148
francis , p. j. , woodgate , b. e. , & danks , a. c. 1997 , apjl , 482 , 25
genzel , r. et al . 1998 , apj , 498 , 579
giavalisco , m. , steidel , c. c. , & szalay , a. s. 1994 , apjl , 425 , l5
giavalisco , m. et al . 2000 , apj , in preparation
hall , p. b. & green , r. f. 1998 , apj , 507 , 558
hu , e. m. , cowie , l. l. , & mcmahon , r. g. 1998 , apjl , 502 , l99
hughes , d. h. et al . 1998 , nature , 394 , 241
ivison , r. et al . 2000 , mnras , in press
kaiser , n. 1984 , apjl , 284 , l9
kauffmann , g. , colberg , j. m. , diafero , a. , & white , s. d. m. 1999 , mnras , 307 , 529
kobulnicky , h. a. & gebhardt , k. 2000 , aj , in press
le fevre , o. , deltorn , j. m. , crampton , d. , & dickinson , m. 1997 , apjl , 471 , 11
lehnert , m. d. & heckman , t. m. 1996 , apj , 472 , 546
madau , p. , pozzetti , l. , & dickinson , m. 1998 , apj , 498 , 106
mo , h. j. & white , s. d. m. 1996 , mnras , 282 , 347
mo , h. j. , mao , s. , & white , s. d. m. 1999 , mnras , 304 , 175
pettini , m. et al . 1998 , apj , 508 , 539
pettini , m. et al . 2000 , apj , in preparation
press , w. h. & schechter , p. 1974 , apj , 187 , 425
rigopoulou , d. et al . 1999 , aj , in press
scott , d. 2000 , in asp conf . ser . , cosmic flows , ed . s. courteau et al . ( san francisco : asp )
steidel , c. c. et al . 1998 , apj , 492 , 428
steidel , c. c. et al . 1999 , apj , 519 , 1
steidel , c. c. et al . 2000 , apj , in press
tegmark , m. & peebles , p. j. e. 1998 , apjl , 500 , l79
white , s. d. m. , efstathiou , g. , & frenk , c. s. 1993 , mnras , 262 , 1023
white , s. d. m. & frenk , c. s. 1991 , apj , 379 , 25

| the advent of 8m - class telescopes has made galaxies at @xmath0 relatively easy to detect and study .
this is a brief and incomplete review of some of the recent results to emerge from surveys at these redshifts . after describing different strategies for finding galaxies at @xmath1 , and the differences ( and similarities ) in the resulting galaxy samples , i summarize what is known about the spatial clustering of star - forming galaxies at @xmath1 .
optically selected galaxies are the main focus of this review , but in the final section i discuss the connection between optical and sub - mm samples , and argue that the majority of the @xmath2 m background may have been produced by known optically selected populations at high redshift . among the new results presented
are the dust - corrected luminosity function of lyman - break galaxies at @xmath3 , the estimated contribution to the @xmath2 m background from optically selected galaxies at @xmath4 , revised estimates of the spatial clustering strength of lyman - break galaxies at @xmath3 , and an estimate of the clustering strength of star - forming galaxies at @xmath5 derived from a new spectroscopic sample of @xmath6 galaxies with @xmath7 , @xmath8 . |
in 2011 the kepler mission announced 997 stars with a total of 1235 planetary candidates that show transit - like signatures ( borucki et al . 2011 ) .
it is estimated that more than 95% of these candidates are planets ( morton & johnson 2011 ) , although a much higher false positive rate of 35% for kepler close - in giant candidates is reported by santerne et al .
kepler has completed 3 years of observations and with a 4-year extension will accumulate 7.5 years of data for these host stars . searching the host stars for significant very long - term photometric variations on timescales of decades and for rare flare events
is important for understanding fully the environments of their planets and for constraining the habitability of exoplanets in general .
for example , a recent study by lecavelier des etangs et al .
( 2012 ) found increased evaporation of the atmosphere of the hot jupiter hd 189733b during a transit , which is probably related to an x - ray flare 8 h before the transit .
here we present the 100-year light curves of kepler planet - candidate host stars from the digital access to a sky century at harvard ( dasch ) project ( grindlay et al . 2009 , 2012 ) . about 150,000 kepler target stars , including the sample with planetary candidates , show low - level photometric variability due to modulation by spots and other manifestations of magnetic activity , e.g. , white light flares ( basri et al . ) .
none of these variabilities have amplitudes that would be detectable ( @xmath4 mag if only a single observation ) in the dasch data . here
we aim to complement the short - timescale and high - precision photometry by kepler with the much longer timescales available from dasch . the dasch project can reveal stars with rare flare events or extremely slow changes in luminosity .
it is of interest to search for both kinds of variability with dasch .
extremely large ( @xmath51mag ) flares could drive significant mass loss from the atmospheres of hot jupiters , and long term changes in host star luminosity could drive changes in planetary winds and spectral composition .
this paper provides dasch light curves for the initial sample of kepler planets presented by borucki et al .
in order to take advantage of the unprecedented kepler data on short timescales ( borucki et al .
2010 ) , which complements dasch data on long timescales , we have scanned and processed @xmath6 plates taken from the 1880s to 1990 in or covering part of the kepler field .
each plate covers 5@xmath740 degrees on a side , and most of them are blue - sensitive ( close to johnson b ) .
most plates have limiting magnitudes @xmath8 mag , while @xmath9 of them ( mc series ) are down to @xmath10 mag .
more details on the coverage and limiting magnitudes of the plates in the kepler field are described in tang et al .
( 2013 ) .
we used the kepler input catalog ( kic ; brown et al .
2011 ) for photometric calibration .
the typical relative photometric uncertainty is @xmath11 mag , as measured by the median light curve rms of stars ( laycock et al .
2010 ; tang et al . 2013
; see also table 1 and figure 3 in this paper ) . given that most stars are constant at the @xmath12 mag level ,
their light curve rms values are dominated by photometric uncertainty , and thus the median light curve rms represents the relative photometric uncertainty .
the typical absolute photometric uncertainty , as measured by the difference between our measurements and kic g band magnitudes , is @xmath13 mag , and up to @xmath14 mag for bright stars ( @xmath15 mag ) .
the larger absolute photometric uncertainty is caused by additional contributions from the uncertainties in kic g mag ( especially at the bright end ) , and the difference between the effective color of the plates ( close to johnson b ) and g band .
the magnitude uncertainties we provide in dasch light curves are defined as the scattering of ( @xmath16 ) for stars in local spatial bins with similar magnitudes , and thus represent the absolute photometric uncertainty ( tang et al . 2013 ) .
the typical absolute astrometric uncertainty ( per data point ) is @xmath17 , depending on plate scale ( laycock et al .
2010 ; los et al . 2011 ; servillat et al .
2011 ) .
example dasch light curves of 3 kepler hot jupiter host stars , as well as a binary are shown in figure 1 .
fainter stars have fewer dasch measurements , due to the smaller number of deeper plates available . for a given star , successive measurements are typically separated by days to months , and sometimes by years . therefore , a stellar flare , if it happened during a plate exposure , is expected to appear on a single plate only .
some fields do have multiple exposures which our pipeline is able to process individually ( see los et al . 2011 ) , and would allow stellar flares to be confirmed by successive measurements ; no such flares were seen . among the 4 stars plotted in fig . 1 , k10666592 ( hat - p-7 ; pál et al .
2008 ) , k10264660 ( kepler-14 ; buchhave et al . 2011 ) and k8191672 ( kepler-5 ; koch et al .
2010 ) are confirmed hot jupiter host stars .
k5122112 ( koi 552 ; borucki et al .
2011 ) was listed as a hot jupiter candidate , but later has been unveiled as a binary ( bouchy et al . 2011 ) .
all three planets have @xmath18 , @xmath19 au , and equilibrium temperature @xmath20 k. no variation is detected .
all the light curve data and plots are available at the dasch website . only good measurements are included .
we excluded blended images , measurements within @xmath21 mag of the limiting magnitude which are more likely to be contaminated by noise , images within the outer border of the plates whose width is 10% of the plate s minor - axis length ( annular bin 9 ; see laycock et al 2010 ) , and dubious points with image profiles different from neighbor stars and thus are suspected to be emulsion defects or dust .
stars with strong correlation between magnitude measurements and plate limiting magnitudes , or between magnitude measurements and plate astrometry uncertainties , are also excluded , which are very likely to be polluted by noise or blends .
more detailed descriptions can be found at tang et al .
( 2013 ) .
note that these kepler planet host stars have variations at the @xmath22 level in the kepler light curves , which are much smaller than the plotting symbols in figure 1 and our photometric accuracy . for comparison ,
the kepler light curves of the 4 example kepler planet - candidate host stars ( from q0 or q1 to q6 ) are also shown in figure 1 as green dots .
we used the pdc corrected flux , which are converted to magnitudes and shifted to the mean magnitudes of dasch light curves ( see http://keplergo.arc.nasa.gov/calibrationsn.shtml ) . among 997 host stars ,
261 stars have at least 10 good measurements on dasch plates , and 109 stars have at least 100 good measurements .
distributions of g band magnitudes for all the host stars and host stars with at least 10 or 100 dasch measurements , are shown in figure 2 .
we have at least 100 measurements for 70% ( 73 out of 104 ) of all host stars with @xmath0 mag , and 44% ( 100 out of 228 ) of all host stars with @xmath1 mag .
most of the stars brighter than @xmath23 mag that we lost ( i.e. , those with fewer than 100 good measurements ) were lost because of blending with neighbor stars , as expected in such a crowded field : 74% of them ( 23 out of 31 ) have bright neighbor stars with @xmath24 mag within 1 arcmin . for comparison , for @xmath0 stars with at least 100 good measurements ,
only 14% ( 10 out of 73 ) of them have @xmath24 mag neighbor stars within 1 arcmin .
we have carefully examined light curves of the 261 planet candidate host stars , and none of them showed variations ( flares or dips ) at the @xmath25 level , where @xmath26 here is our absolute photometric uncertainty with typical value of @xmath13 mag for @xmath27 objects , and up to @xmath28 mag for @xmath29 objects . nor did we see any long - term photometric trends .
we also compare the light curve rms of these host stars vs. other stars with similar magnitudes in the kepler field , as shown in figure 3 .
the raw light curve rms vs. g band magnitude for the 261 host stars with at least 10 dasch measurements are shown as blue dots .
the black solid line shows the median light curve rms of all the stars with at least 10 dasch measurements in the kepler field , and the red dashed lines and green dash - dotted lines show the @xmath30 and @xmath31 distributions , respectively .
the median , @xmath30 , and @xmath31 distributions of light curve rms are calculated in 0.5 magnitude bins , after three iterations of @xmath32-clipping , to exclude variable stars and dubious light curves contaminated by blending or plate defects .
stars with bright neighbors ( @xmath33 the flux of the star in kic g band ) within 30 are also excluded in the calculation of median , @xmath30 , and @xmath31 distributions , to avoid contamination from blending .
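the binned statistics described above can be computed with a short routine like the one below ; this is only a sketch of the procedure , and the clipping threshold ( 3 sigma ) and the percentile levels used for the 1 sigma and 2 sigma curves are assumptions , since the text does not spell them out .

import numpy as np

def binned_rms_stats(g_mag, lc_rms, bin_width=0.5, n_iter=3, clip=3.0):
    # median, ~1-sigma and ~2-sigma levels of the light-curve rms in
    # magnitude bins, after iterative sigma-clipping
    g_mag = np.asarray(g_mag, dtype=float)
    lc_rms = np.asarray(lc_rms, dtype=float)
    edges = np.arange(g_mag.min(), g_mag.max() + bin_width, bin_width)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        rms = lc_rms[(g_mag >= lo) & (g_mag < hi)]
        for _ in range(n_iter):                      # iterative clipping
            if rms.size < 3:
                break
            med, std = np.median(rms), np.std(rms)
            rms = rms[np.abs(rms - med) < clip * std]
        if rms.size:
            rows.append((0.5 * (lo + hi),
                         np.median(rms),
                         np.percentile(rms, 84.1),   # ~1 sigma
                         np.percentile(rms, 97.7)))  # ~2 sigma
    return np.array(rows)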
note the drop of rms for stars fainter than 14 mag is due to the fact that in general deeper plates are of better quality .
our plates are not a homogeneous sample .
stars with different magnitudes , even in the same region of the sky , are actually covered by different plates .
for example , a 12th mag star is detected on all the plates deeper than 12th mag , while a 15th mag star is only detected on plates deeper than 15th mag .
most of the deeper plates , such as mc series , are of much better quality compared with the shallow plates , which leads to the decrease of typical ( median , etc . )
light curve rms of fainter stars ( g@xmath514 mag ) .
none of the planet candidate host stars has a light curve rms more than @xmath31 greater than the median rms of stars of similar magnitudes .
there is one planet host star , i.e. kepler-21 ( kic 3632418 , @xmath34 ; howell et al .
2012 ) , with light curve rms close to the @xmath31 distribution .
further examination shows that some images are marginally contaminated by a @xmath35 mag neighbor star located at 51 away from the star , and thus its rms excess is dubious .
it has been suggested that the magnetic interaction between hot jupiters and their host stars should enhance stellar activity , and may lead to phase shifts of hot spots on the stellar chromosphere ( cuntz & shkolnik 2002 ; shkolnik et al .
2003 , 2005 ; kopp et al . 2011 ; poppenhaeger & schmitt 2011 ) . given the extremely long timescale covered by dasch , it is interesting to examine the light curves of stars hosting hot jupiters . among the 261 planet candidate host stars with at least 10 dasch measurements , 21 of them host hot jupiter planet candidates ( @xmath18 and @xmath36 au ) , and 9 of them have at least 100 dasch measurements ( 4 are shown in figure 1 as examples ) .
dasch light curve properties of these jupiter planet candidate host stars , including number of measurement , median magnitude and rms in the light curves , are listed in table 1 .
none of them showed variations at the @xmath25 level .
the kepler mission has now discovered more than 2000 planetary candidates ( only the first 1235 considered here ) and provided unparalleled precision light curves for their host stars .
here we complement that database with much longer timescale 100-year light curves from the dasch project . despite their inferior photometric accuracy ,
the dasch light curves sample such an extended period of time ( e.g. , tens of solar - like cycles of activity ) , that rare or very slow phenomena can be studied . from the statistical sample of the kepler planet host stars , limits on their long - term variations and rare flare events , such as the x - ray / euv flare from hd189733 ( lecavelier des etangs et al .
2012 ) , help us understand the planetary environments around main sequence stars and the habitability of exoplanets in general .
we note that the hd189733 system is not within the kepler field and so has not yet been scanned by dasch .
however , given the luminosity of the flare ( @xmath37 erg s@xmath38 ; lecavelier des etangs et al .
2012 ) , and the relatively luminous ( k1.5v ) host star , such a flare is beyond the detection limit of dasch .
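the statement that such a flare would be invisible on the plates can be checked with a one - line estimate of the brightening it would cause ; the numbers used below are purely illustrative and are not the ones quoted in the text .

import numpy as np

def flare_amplitude_mag(l_flare, l_star):
    # peak brightening, in magnitudes, of a star of band luminosity l_star
    # during a flare of band luminosity l_flare (same band assumed for both)
    return 2.5 * np.log10(1.0 + l_flare / l_star)

# illustrative only: a 1e29 erg/s optical flare on a star emitting ~3e32 erg/s
# in the plate bandpass brightens it by ~4e-4 mag, far below the ~0.1 mag
# plate-to-plate scatter of the dasch photometry
print(flare_amplitude_mag(1e29, 3e32))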
relatively larger optical flares would be expected for cooler stars with hot jupiter companions . because kepler mostly targets sun - like stars , no m dwarf plus hot jupiter system is in the sample of the 261 host stars we studied .
we have scanned and processed @xmath6 plates taken from the 1880s to 1990 in or covering part of the kepler field , and studied the light curves of 261 planet host stars that have at least 10 good measurements on dasch plates .
we find no photometric variations at the @xmath3 level .
all the light curve data and plots of planet hosts in this study are available at the dasch website . in addition , we have released all the dasch data in the kepler field to the public in dasch data release 1 .
dasch light curves over @xmath39100 year timescales will continue to provide unique constraints for planet host stars , as well as any other interesting objects in the kepler field .
we thank the anonymous referee for suggestions that have helped improve this paper .
we thank alison doane , jaime pepper , david sliski and robert j. simcoe at cfa for their work on dasch , and many volunteers who have helped digitize logbooks , clean and scan plates ( http://hea-www.harvard.edu/dasch/team.php ) .
this work was supported in part by nsf grants ast0407380 and ast0909073 and now also the _ cornel and cynthia k. sarosdy fund for dasch_.

grindlay , j. , tang , s. , simcoe , r. , laycock , s. , los , e. , mink , d. , doane , a. , & champine , g. 2009 , asp conference series , 410 , 101 , ed . w. osborn & l. robbins ( san francisco : astronomical society of the pacific )
los , e. , grindlay , j. , tang , s. , servillat , m. , & laycock , s. 2011 , in asp conf . ser . 422 , astronomical data analysis software systems xx , ed . i. n. evans , a. accomazzi , d. j. mink , & a. h. rots ( san francisco , ca : asp ) , 269 ( arxiv:1102.4871 )
servillat , m. , los , e. , grindlay , j. , tang , s. , & laycock , s. 2011 , in asp conf . ser . 422 , astronomical data analysis software systems xx , ed . i. n. evans , a. accomazzi , d. j. mink , & a. h. rots ( san francisco , ca : asp ) , 273 ( arxiv:1102.4874 )

table 1 : dasch light curve properties of the hot jupiter planet - candidate host stars ( @xmath40 is the number of good dasch measurements ) .

koi | kic | kic g ( mag ) | @xmath40 | lc median ( mag ) | lc rms ( mag ) | notes
1 | 11446443 | 11.74 | 570 | 11.72 | 0.109 | tres-2 ; o'donovan et al . ( 2006 )
2 | 10666592 | 10.94 | 1069 | 10.73 | 0.113 | hat - p-7 ; pál et al . ( 2008 )
13 | 9941662 | 9.7 | 1348 | 9.76 | 0.103 | kepler-13 ; shporer et al . ( 2011 )
18 | 8191672 | 13.88 | 120 | 13.63 | 0.099 | kepler-5 ; koch et al . ( 2010 )
20 | 11804465 | 13.78 | 62 | 13.68 | 0.153 | kepler-12 ; fortney et al . ( 2011 )
97 | 5780885 | 13.26 | 170 | 13.13 | 0.185 | kepler-7 ; latham et al . ( 2010 )
98 | 10264660 | 12.36 | 404 | 12.3 | 0.128 | kepler-14 ; buchhave et al . ( 2011 )
100 | 4055765 | 12.91 | 243 | 12.85 | 0.115 |
128 | 11359879 | 14.27 | 32 | 14.185 | 0.122 | kepler-15 ; endl et al . ( 2011 )
135 | 9818381 | 14.35 | 38 | 14.29 | 0.101 | kepler-43 ; bonomo et al . ( 2012 )
138 | 8506766 | 14.22 | 49 | 14.14 | 0.113 |
191 | 5972334 | 15.56 | 54 | 15.635 | 0.066 |
203 | 10619192 | 14.64 | 15 | 14.52 | 0.207 | kepler-17 ; désert et al . ( 2011 )
846 | 6061119 | 16.01 | 45 | 16.02 | 0.096 |
897 | 7849854 | 15.82 | 13 | 15.86 | 0.143 |
976 | 3441784 | 9.9 | 1407 | 9.73 | 0.097 |
1020 | 2309719 | 13.34 | 84 | 13.14 | 0.114 |
1452 | 7449844 | 13.84 | 79 | 13.81 | 0.116 |
1474 | 12365184 | 13.28 | 242 | 13.255 | 0.158 |
1540 | 5649956 | 16.17 | 45 | 16.36 | 0.108 |
1549 | 8053552 | 15.73 | 10 | 15.725 | 0.077 |

| we present 100 year light curves of kepler planet - candidate host stars from the digital access to a sky century at harvard ( dasch ) project .
261 out of 997 host stars have at least 10 good measurements on dasch scans of the harvard plates .
109 of them have at least 100 good measurements , including 70% ( 73 out of 104 ) of all host stars with @xmath0 mag , and 44% ( 100 out of 228 ) of all host stars with @xmath1 mag .
our typical photometric uncertainty is @xmath2 mag .
no variation is found at @xmath3 level for these host stars , including 21 confirmed or candidate hot jupiter systems which might be expected to show enhanced flares from magnetic interactions between dwarf primaries and their close and relatively massive planet companions . |
A California State Parks ranger was placed on leave after he was found intoxicated and asleep in his patrol car with a beer between his legs, officials said.
A passerby stumbled upon the ranger, Tyson Young, on the afternoon of Aug. 15 in Humboldt Redwoods State Park, which is off Highway 101 south of Eureka, according to the California Highway Patrol.
After trying unsuccessfully to wake up the ranger, the passerby snapped a picture of Young, who was sleeping soundly with a Keystone Light snugly tucked between his legs. The tipster photographer wished to remain anonymous, said the Lost Coast Outpost, which published the photo.
The man called 911, but Young reportedly woke up and left the scene. CHP officers then spotted Young’s vehicle near Myers Flat, pulled him over and arrested him on suspicion of drunken driving, said Officer Patrick Bourassa.
Young was cited and released to a State Parks supervisor, Bourassa said. He said the CHP has since forwarded the case to the Humboldt County district attorney with a recommendation to file charges.
Vicky Waters, a State Parks spokeswoman, said that Young, who has been with the agency for more than 10 years, was placed on administrative leave and had his peace-officer status revoked pending the results of the investigation. |||||
The California Highway Patrol is requesting that prosecutors file charges against a California State Parks law enforcement ranger arrested on suspicion of driving under the influence while on duty. CHP officer Patrick Bourassa said a citizen called police shortly before 3 p.m. on Aug. 15 to report that a ranger was possibly driving under the influence on Avenue of the Giants, near Weott. Bourassa said officers responded to the area, and located a state parks vehicle driving north. Officers pulled the vehicle over and contacted its driver, Tyson Young. “Young displayed objective signs of intoxication and was detained for a DUI investigation,” Bourassa said, adding that the ranger was transported to CHP headquarters where he was subsequently arrested. Bourassa said Young was ultimately cited and released to a state park supervisor. While the case remains under investigation, Bourassa said, “We will be requesting that charges be filed.” He said he could not provide any additional information at this time. California State Parks spokeswoman Vicky Waters confirmed there was an incident involving Young, who she described as a “tenured park employee,” but declined to provide any details. Waters said Young has been placed on paid administrative leave and that his state peace officer status has been suspended. In addition to cooperating fully with the criminal investigation, Waters said State Parks is also conducting an internal investigation into the incident. According to the California State Parks website, Young has served as the supervising ranger for Humboldt Redwoods and Richardson Grove state parks. The Humboldt County District Attorney's Office has yet to make a charging decision in the case. | – Nothing like cracking open a brew and falling asleep ... at the wheel of your patrol car. That's apparently why California State Parks Ranger Tyson Young was found dozing in his parked vehicle on Aug. 15 in Northern California, the San Francisco Chronicle reports. A passerby (who's staying anonymous) found him passed out and intoxicated on State Route 254 in Humboldt Redwoods State Park, officials say. Afraid Young was injured, the passerby called out to him and banged the hood, but no luck, Last Coast Outpost reports. "I shook him, really shook him," said the man. "And then I saw the beer between his legs." So the guy called 911 and snapped the ranger's photo while waiting (the Chronicle has the pic). Young came to and drove off, but the CHP soon pulled him over. "Young displayed objective symptoms of intoxication and was detained for a DUI investigation," said a CHP officer. Young was arrested for DUI and put on administrative leave during the investigation. "We do not tolerate the use of alcohol in the workplace," said a State Parks spokesperson. "We take matters like this very seriously." Young was the supervising ranger for two state parks, the North Coast Journal reports.
h. pylori eradication is strongly recommended in all patients with atrophic gastritis and peptic ulcer disease , but may also benefit subgroups of patients with dyspepsia , and patients who start with nsaid therapy [ 16 ] .
h. pylori eradication therapy is an important component of guidelines concerning these patients [ 7 , 8 ] .
currently , non - invasive management strategies and the widespread shortage in endoscopic capacity insure that many patients with h. pylori are managed without upper gastrointestinal endoscopy .
the american college of gastroenterology recommends that when an endoscopy is not performed , a serological test , which is the least expensive means of evaluating for evidence of h. pylori infection , should be done . when endoscopy is indicated , biopsy specimens can be taken for microscopic demonstration of the organism , culture , histology or urease testing . nowadays , in the netherlands
, biopsies are not routinely sent for culture and susceptibility testing of the infecting strain because of the high costs .
apart from patient compliance , resistance of helicobacter pylori to antibiotics can decrease the success of h. pylori eradication therapy .
regimens of choice for eradication of h. pylori should be guided by local antibiotic resistance rates . in the netherlands ,
the overall prevalence of resistance to clarithromycin and metronidazole was lower than in some surrounding countries , possibly due to restrictive use of antimicrobials [ 10 - 12 ] .
the advised treatment in the netherlands consists of a proton pump inhibitor ( ppi)-triple therapy for 7 days without prior susceptibility testing .
an increase in resistance rates to antimicrobial agents is , however , expected , because an increasing number of treated patients and an increasing consumption of antibiotics , in particular macrolides , have been observed in recent years .
the aim of the present study was firstly , to determine the efficacy of 7-day ppi - triple therapy for h. pylori in a well - defined group of patients with a rheumatic disease and serologic evidence of h. pylori infection who were on long - term nsaid therapy and secondly , to get insight in the prevalence of antibiotic resistance of h. pylori in the studied population .
this study was part of a placebo - controlled randomized clinical trial of which the clinical results have been described elsewhere , wherein we described that h. pylori eradication has no beneficial effect on the incidence of gastroduodenal ulcers or occurrence of dyspepsia in patients on long - term nsaid treatment . between may 2000 and june 2002 ,
patients with a rheumatic disease were eligible for inclusion if they were between 40 and 80 years of age , were positive for h. pylori on serological testing and were on long - term nsaid treatment .
forty - eight percent used a gastroprotective drug ( 7% h2 receptor antagonists [ h2ra ] , 37% proton pump inhibitors [ ppi ] , 7% misoprostol , 3% used a combination of these ) .
exclusion criteria were previous eradication therapy for h. pylori , known allergy for the study medication or presence of severe concomitant disease .
serologic testing for h. pylori igg - antibodies was performed with a commercial enzyme - linked immunosorbent assay ( pyloriset new eia - g , orion diagnostica , espoo , finland ) according to the manufacturer s instructions .
a serum sample was considered positive for igg antibodies to h. pylori if the test result was 250 international units ( iu ) .
this assay has been assessed in a population similar to that of the presented trial , and has proven a sensitivity and specificity in the netherlands of 98 - 100% and 79 - 85% , even in patients on acid suppressive therapy [ 15 - 17 ] .
the study protocol was approved by research and medical ethics committees of all participating centers and all patients gave written informed consent .
after stratification by concurrent use of gastroprotective agents ( proton pump inhibitors , h2 receptor antagonists or misoprostol , but not prokinetics , or antacids ) , patients were randomly assigned to receive either h. pylori eradication therapy with omeprazole 20 mg , amoxicillin 1000 mg , and clarithromycin 500 mg ( oac ) twice daily for 7 days or placebo .
patients with an allergy for amoxicillin were treated with omeprazole 20 mg , metronidazole 500 mg and clarithromycin 250 mg ( omc ) or placebo therapy twice daily for one week in a distinct stratum .
all study personnel and participants were blinded to treatment assignment for the duration of the study .
at the 2-week follow - up visit , unused study medication was returned and remaining tablets were counted in order to check compliance .
patients were considered to be noncompliant if they used 6 days or fewer ( 85% ) of the study medication .
three months after baseline , and additionally if clinically indicated , patients underwent endoscopy of the upper gastrointestinal tract .
four samples , two from the antrum and two from the corpus were used for histology .
the slides were scored independently by an experienced gastrointestinal pathologist and the investigator ( hdl ) , blinded to treatment assignment and clinical data , according to the updated sydney classification . in case of discrepant results ,
the remaining four biopsies were sent to a microbiological laboratory for culture and storage at -70 c .
a patient was considered h. pylori - negative when histology as well as culture was negative .
all isolated strains were assessed for susceptibility to clarithromycin , metronidazole , tetracycline and amoxicillin at the central laboratory . both biopsy specimens of corpus and antrum were streaked on columbia agar ( ca ) ( becton dickinson , cockeysville , md , usa ) with 10% lysed horse blood ( bio trading , mijdrecht , the netherlands ) , referred to as columbia agar plates , and on ca with h. pylori selective supplement ( oxoid , basingstoke , uk ) .
plates were incubated for 72 h at 37c in a micro - aerophilic atmosphere ( 5% o2 , 10% co2 , 85% n2 ) .
identification was carried out by gram s stain morphology , catalase , oxidase , and urea hydrolysis measurements .
mics of metronidazole , clarithromycin , tetracycline and amoxicillin were determined by e - test ( ab biodisk , solna , sweden ) on ca plates essentially as described by glupczynski et al . .
ca plates were inoculated with a bacterial suspension with a turbidity of a 3 mcfarland standard ( 2 10 cfu / ml ) .
clsi ( tentative ) breakpoints 2009 for susceptibility ( s ) and resistance ( r ) were applied ( metronidazole mic ≤ 8 mg / l ( s ) and ≥ 16 mg / l ( r ) , amoxicillin mic ≤ 0.5 mg / l ( s ) and ≥ 2 mg / l ( r ) ; tetracycline mic ≤ 2 mg / l ( s ) and ≥ 8 mg / l ( r ) , and clarithromycin mic ≤ 0.25 mg / l ( s ) and ≥ 1 mg / l ( r ) ) .
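for reference , interpreting the e - test mics against these breakpoints amounts to the small lookup below ; the breakpoint values are the ones quoted in the text , and treating values between the s and r cut - offs as intermediate is the usual convention rather than something stated explicitly here .

# clsi-style interpretation of e-test mics (mg/l) with the breakpoints quoted above
BREAKPOINTS = {                 # antibiotic: (susceptible if <=, resistant if >=)
    "metronidazole":  (8.0,  16.0),
    "amoxicillin":    (0.5,   2.0),
    "tetracycline":   (2.0,   8.0),
    "clarithromycin": (0.25,  1.0),
}

def interpret_mic(antibiotic, mic):
    s_max, r_min = BREAKPOINTS[antibiotic]
    if mic <= s_max:
        return "S"          # susceptible
    if mic >= r_min:
        return "R"          # resistant
    return "I"              # intermediate

print(interpret_mic("clarithromycin", 0.085))   # S
print(interpret_mic("metronidazole", 256.0))    # R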
measurements with a gaussian distribution were expressed at baseline as mean and sd , and measures with a non - gaussian distribution were expressed as the median and interquartile range ( iqr ; expressed as the net result of 75th percentile - 25th percentile ) . an additional analysis compared outcomes ( presence of h. pylori after h. pylori eradication therapy or placebo ) between strata ( patients on gastroprotective drugs ( n = 165 ) and patients not on gastroprotective drugs ( n = 182 ) ) by computing the homogeneity of the common odds ratio .
differences in the proportions of patients with susceptible and resistant h. pylori strains and between compliant and non - compliant patients were analyzed with 95% confidence intervals using the confidence interval analysis ( cia ) software for windows ( version 2.2.0 ) .
the level of significance was set at p < 0.05 , two sided .
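the cia software implements the recommended ( newcombe ) method for a difference of two proportions ; the sketch below uses the simpler normal - approximation interval instead , so its limits will differ slightly from the ones reported in the results .

from math import sqrt

def diff_proportion_ci(x1, n1, x2, n2, z=1.96):
    # wald-type 95% confidence interval for the difference of two proportions
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1.0 - p1) / n1 + p2 * (1.0 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# e.g. eradication success in fully compliant (about 124/136) versus
# non-compliant (8/16) patients, numbers rounded from the results section
print(diff_proportion_ci(124, 136, 8, 16))   # roughly (0.41, 0.16, 0.66)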
a total of 347 patients consented to be randomly assigned to eradication therapy ( 172 patients ) or placebo ( 175 patients ) .
h. pylori igg antibodies were present in all patients ( median titre 1689 [ iqr 700 - 3732 ] ) .
the treatment groups were similar in terms of demographic , rheumatic disease , nsaid and other drug use .
our eligibility criteria resulted in a study group with mainly inflammatory rheumatic diseases ( rheumatoid arthritis 61% , spondylarthropathy 8% , psoriatic arthritis 7% , osteoarthritis 9% , other 15% ) .
the most commonly used nsaids were diclofenac ( 29% ) , naproxen ( 18% ) , and ibuprofen ( 13% ) , most at full therapeutic doses ( median relative daily dose 1 [ iqr 0.51 ] ) .
the mean age was 60 years ( sd 10 ) , 61% was female .
twenty - two patients had a known allergy for amoxicillin and received metronidazole instead ( 10 patients ) or placebo ( 12 patients ) .
of these 347 patients , data on culture and histology of 304 patients were available ( table 1 ) .
in two cases only culture data were available and in one case only histology result was available ; all three cases met the criteria for h. pylori - positivity and were found in the placebo group , but for clarity purposes were left out of table 1 .
a total of 32 patients ( with no significant differences between eradication and placebo groups ) refused the 3-month endoscopy , withdrew informed consent , or could not undergo endoscopy because of adverse events .
seven patients used anticoagulant therapy ruling out biopsy sampling according the protocol , and in one patient no biopsy specimens could be obtained because of discomfort requiring early completion of the procedure .
table 1 : results of culture and histology on h. pylori in the eradication group ( n = 152 ) and the placebo group ( n = 152 ) , 304 patients in total ( h. pylori negative : 9 ( 6% ) , 132 ( 87% ) , 22 ( 15% ) , 32 ( 21% ) ) .
at follow - up after 3 months , 79% ( 120/152 ; 95% ci 72 - 85% ) of the patients in the placebo group were h. pylori - positive by histology or culture of biopsy specimen . in the eradication group , this number was 13% ( 20/152 ; 95% ci 9 - 20% ) ( table 1 ) .
patients in the placebo group who were h. pylori negative at 3 months as assessed by culture and histology had significantly lower titers of anti - h. pylori igg antibodies at baseline than those who were h. pylori culture - and/or histology - positive ( mean difference 1582 , 95% ci 527 to 2637 , p = 0.004 ) . there were no differences between strata according to the use of gastroprotective drugs for the presence of h. pylori by culture and/or histology ( p = 0.454 ) .
compliance was 89% in patients in the eradication group and 98% in the placebo group with the assigned regimen ( p < 0.001 ) .
in the eradication group , h. pylori could not be demonstrated in 91% of patients with full compliance ( n = 136 ) . in patients who did not take all 7 days of eradication therapy (
n = 16 ) , h. pylori was found in 50% ( difference of 41% ; 95% ci 18 - 63% ) .
a total of 105 clinical isolates of h. pylori were available for susceptibility testing ( one isolate per patient ; 95 isolates from the placebo group , ten from the eradication group ) from the six participating laboratories in the netherlands .

table 2 : antibiotic resistance of h. pylori isolates .

antibiotic | placebo group , n = 95 | eradication group , n = 10 ( a )
clarithromycin | 4% | 20%
metronidazole | 19% | 30%
tetracycline | 2% ( b ) | 0
amoxicillin | 1% ( b ) | 0

( a ) all patients in the eradication group who were still h. pylori positive were assigned oac ( omeprazole 20 mg , amoxicillin 1000 mg , and clarithromycin 500 mg ) and not omc ( omeprazole 20 mg , metronidazole 500 mg and clarithromycin 250 mg ) . ( b ) intermediate susceptible .

in the placebo group ( n = 95 ) , resistance was found in 4% ( 4/95 ) to clarithromycin ( mic 1 mg / l ) , in 1% ( 1/95 ) and 2% ( 2/95 ) intermediate susceptibility to amoxicillin ( mic 1 mg / l ) and tetracycline ( mic 4 mg / l ) , respectively , and in 19% ( 18/95 ) resistance to metronidazole . amongst these 95 isolates , two were resistant to metronidazole in combination with intermediate susceptibility to tetracycline , and one strain was resistant to metronidazole and clarithromycin . the placebo group had an mic90 for clarithromycin of 0.085 mg / l , for metronidazole of > 256 mg / l , and for tetracycline of 0.341 mg / l . one h. pylori strain was resistant to clarithromycin and metronidazole , and intermediate susceptible to tetracycline and amoxicillin . no difference was found in h. pylori resistance rates between men and women ( p = 0.217 ) or between patients who used gastroprotective agents and those who did not ( p = 0.25 ) . in the eradication group , two strains were resistant to clarithromycin and three to metronidazole .
we report the results of a study on the efficacy of a test - and - treat strategy for h. pylori in rheumatology patients in the netherlands who were positive for anti - h. pylori igg antibodies .
the main findings in the studied patient population were : ( 1 ) a 7-day ppi - triple eradication therapy either with clarithromycin or metronidazole was efficacious with eradication rates of 87% ( 95% ci 80 - 91% ) without prior testing for susceptibility of the infecting strain ; ( 2 ) in 21% of the patients in the placebo group , the positive h. pylori serology test could not be confirmed by positive culture or histology ; ( 3 ) compliance was an important factor for successful eradication of h. pylori ; and ( 4 ) prevalence of antibiotic resistance in h. pylori was low .
the main reason for not performing endoscopy at baseline was that this was not feasible in everyday rheumatology practice ; therefore , serology was done to test for h. pylori .
the reliability of serological kits for h. pylori infection has been widely confirmed , contributing to the reputation of serology as a simple , minimally invasive and inexpensive diagnostic and screening test . the best available serology test at the time of the study was the pyloriset new eia - g from orion diagnostica , espoo , finland , with a specificity of 79 - 91% as assessed in previous studies in the netherlands , including patients on acid suppressive therapy [ 22 - 24 ] .
this specificity correlates well with our finding that in the placebo group , h. pylori could not be confirmed by culture or histology in 21% of the igg positive patients .
ppi usage ( in this study 37% of the population ) can result in false negative invasive and non - invasive diagnostic tests , such as culture , histology and 13-c urea breath , and should be stopped two weeks before testing .
13-c urea breath tests have better accuracy ( > 90% ) , but the serology test used in this study was less expensive and in all study centres easily available . on the other hand , we must not overlook that conditions during transport of biopsies are critical for successful isolation of h. pylori .
possibly , the antibacterial effect of nsaids , as has been suggested in in vitro studies , might also partly explain a false positive rate of serology of 21% [ 2729 ] .
however , in a randomized clinical trial of 122 patients , aspirin in combination with a standard 7-day course oac eradication was not significantly different compared to the standard therapy . based on culture and histology findings we conclude that one fifth of our patients were treated superfluously , with possible risk of side - effects of the eradication medication .
resistance to antibiotics in h. pylori is of particular concern because it is one of the major determinants in the failure of eradication regimens .
resistance rates for metronidazole and clarithromycin found in this study were similar to those previously observed in other studies in the netherlands in the years 1997 - 98 and 1997 - 2002 [ 11 , 12 , 30 , 31 ] .
to our knowledge , there are no recent data available on h. pylori antibiotic primary resistance rates in the netherlands .
in addition , in this study , compliance played a crucial role in the success of eradication of h. pylori : treatment failure was as high as 50% in the non - compliant group of patients . possibly , the high number of tablets that has to be consumed during h. pylori eradication therapy is a contributing factor for non - compliance in this group of elderly patients who were also on other medications . in conclusion , a serology - driven test - and - treat strategy for eradication of h. pylori with a 7-day ppi - triple therapy is successful in the majority of patients .
success of eradication is , also in this group of rheumatology patients , to a great extent determined by compliance . | the treatment of choice of h. pylori infections is a 7-day triple - therapy with a proton pump inhibitor ( ppi ) plus amoxicillin and either clarithromycin or metronidazole , depending on local antibiotic resistance rates .
the data on efficacy of eradication therapy in a group of rheumatology patients on long - term nsaid therapy are reported here .
this study was part of a nationwide , multicenter rct that took place in 20002002 in the netherlands .
patients who tested positive for h. pylori igg antibodies were included and randomly assigned to either eradication ppi - triple therapy or placebo .
after completion , follow - up at 3 months was done by endoscopy and biopsies were sent for culture and histology . in the eradication group 13% ( 20/152 , 95% ci 9 - 20% ) and in the placebo group
79% ( 123/155 , 95% ci 72 - 85% ) of the patients were h. pylori positive by histology or culture .
h. pylori was successfully eradicated in 91% of the patients who were fully compliant to therapy , compared to 50% of those who were not ( difference of 41% ; 95% ci 18 - 63% ) .
resistance percentages found in isolates of the placebo group were : 4% to clarithromycin , 19% to metronidazole , 1% to amoxicillin and 2% to tetracycline . |
the realization of quantum communications relies on setting up entanglement of high fidelity between two far - away physical systems . in practice photons
are primarily used for the carrier of entanglement .
if one tries to establish entanglement by sending photons directly to a remote place , however , the range of communication will be limited by their absorption losses in the transmission channel . a solution to
the problem is quantum repeater @xcite .
there have been two categories of physical approaches to realizing quantum repeater .
one is dlcz protocol @xcite and its developments @xcite , which generate and connect entangled pairs of atomic ensembles over short distances through the coupling of single photon and collective atomic excitation modes .
the other is qubus or hybrid repeaters @xcite involving the operations on both qubits and continuous variable ( cv ) states . in the past years
the developmental works have improved the efficiency of the first type of quantum repeaters by several orders of magnitude over the original dlcz protocol .
a recent theoretical analysis @xcite , however , indicates that the minimum average time for distributing an entangled pair over @xmath0 km by the quickest quantum repeater scheme of the dlcz - type should be still more than a half minute .
moreover , the phase noise caused by birefringence and polarization mode dispersion on the traveling single photons could damage the quality of the generated pairs . with the qubus repeater protocols , on the other hand
, the operation efficiency can be quickly improved at the cost of the fidelity of the generated pairs , and the irremovable decoherence effect on the cv state qubus in transmission channel is simply from photon absorption loss . in this work we present a new qubus repeater scheme combining some features of dlcz - type repeaters .
the resources ( local input qubit state and memory space ) required in the scheme are flexible , and long distance entanglement with high quality can be quickly realized with such flexibility .
fig . 1 ( caption ) : the interaction of a coherent beam with a single photon state in the kerr medium generates an extra phase @xmath1 on the coherent beam . the 50/50 beam splitter transforms two coherent states @xmath2 , where @xmath3 or @xmath4 , to @xmath5 . a response of the photodiode d indicates that the two beams @xmath6 and @xmath7 are different with @xmath8 .

we start with the purification of the photon sources used in our scheme .
the output of a realistic single - photon source in a certain mode is approximated by a mixture of single photon fock state @xmath9 and vacuum @xmath10 , @xmath11 , where @xmath12 is the efficiency of the source ( see , e.g. , @xcite ) . to sift the vacuum component out of the mixture
, we apply a quantum non - demolition ( qnd ) measurement module illustrated in fig .
1 . in the module ,
one of the laser beams in coherent state @xmath13 interacts with @xmath14 through a proper cross - kerr nonlinearity , e.g. , the electromagnetically induced transparency ( eit ) medium , picking up a phase shift @xmath15 to @xmath16 if the single photon is present .
the two coherent states after the interaction are compared with a 50/50 beam splitter and a photodiode .
any response of the photodiode indicates that the coherent states are different as @xmath17 . if the coherent beam amplitude satisfies @xmath18 , e.g. , the error probability @xmath19 will be as low as @xmath20 . with the light - storage cross - phase modulation ( xpm ) technique ,
e.g. , it is possible to realize a considerably large @xmath15 at a single photon level @xcite . in our scheme
we only need a very small xpm phase shift @xmath21 , which is matched by the sufficiently intense coherent beams @xmath13 , to lower the losses of the photonic modes in the xpm process to a negligible level while achieving a close - to - unit @xmath22 in operation . before the end of this section , we discuss the photon detectors used in the qnd modules . here
we only need threshold photon detector whose operation is described by the positive - operator - valued measure ( povm ) elements @xmath23 and @xmath24 , which respectively correspond to registering no photon and registering photon . the parameters @xmath25 and @xmath26 are photon detection efficiency and average dark count during detecting photons , respectively .
since the mean dark count can be made small ( a realistic detector could have @xmath27 ) , the state of the photon source and the coherent beams in the qnd module will collapse to @xmath28 , the tensor product of a pure single photon state and that of the two beams @xmath29 , as the detector in the qnd module registers a response . here @xmath30 and @xmath31 . given a very large amplitude @xmath32 of the input coherent beams , the dominant part of the photon number poisson distribution of @xmath33 will be far from the small photon numbers , and a realistic detector , even with a low photon detection efficiency @xmath25 , could obtain the output of eq . ( [ detection ] ) almost with certainty . such a detector can be a simple photodiode .
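to see why a weak xpm phase can still be read out reliably when the probe beams are intense , consider the dark output port of the 50/50 beam splitter : for two coherent beams of amplitude alpha that differ only by a phase theta , the dark port carries a mean photon number of |alpha|^2 ( 1 - cos theta ) , so an ideal threshold detector misses the phase shift with probability exp ( - |alpha|^2 ( 1 - cos theta ) ) . this standard expression is stated here as an assumption , since the paper's own symbols are not reproduced ; the sketch below also folds in a finite detection efficiency .

import numpy as np

def miss_probability(alpha_abs, theta, efficiency=1.0):
    # probability that the photodiode at the dark port registers nothing
    # although the single-photon-induced phase shift theta is present;
    # the dark port carries the coherent state |alpha (1 - e^{i theta})/sqrt(2)>
    mean_photons = alpha_abs ** 2 * (1.0 - np.cos(theta))
    return np.exp(-efficiency * mean_photons)

# a small xpm phase matched by an intense probe beam
print(miss_probability(1000.0, 0.01, efficiency=0.5))   # ~exp(-25), negligible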
next , we respectively process two purified single photons at two different locations a and b with a linear optical circuit , as shown in fig .
2 which outlines the setup to generate the elementary links .
the purpose of the procedure is to transform an input photon pair in an arbitrary rank - four mixed state , @xmath34 , to one with the linear combination of only two bell states as the basis .
the pure state components @xmath35 as the eigenvectors of @xmath36 are the linear combinations of bell states @xmath37 , with @xmath38 and @xmath39 respectively representing the horizontal and the vertical polarization , and @xmath40 the eigenvalues of @xmath36 .
its basis vectors @xmath41 are called even parity and @xmath42 odd parity , respectively . then , with two polarization beam splitters ( pbs ) , we convert any input into the ports @xmath43 and @xmath44 in fig . 2 to a which - path space .
the polarization of the photon components on both path @xmath45 and @xmath46 can be transformed to h with half wave plates ( hwp or @xmath47 ) for the time being .
the circuits a and b are constructed with 50/50 beam splitters, totally reflecting mirrors and the proper phase shifters, and any of the pure state components in a general input @xmath36 will be mapped to a superposition of four bipartite states over the pairs of output ports @xmath48, @xmath49, @xmath50 and @xmath51 (see appendix). over ports @xmath52 and @xmath53 (@xmath54 and @xmath55), the output is a linear combination of @xmath56 and @xmath57; over the other two pairs of ports @xmath58 and @xmath51, on the other hand, it is a linear combination of the other fixed set of bell states @xmath59.
if we project out the photonic components over one of the four pairs of the output ports , the resulting state will be in a subspace with a linear combination of such two bell states ( one even parity but the other odd parity ) as the basis vector .
such projection can be done by two qnd modules ( see appendix ) .
for example , by projecting out the components on @xmath52 and @xmath53 merged from the tracks @xmath60 and @xmath61 ( at both locations ) with hwp and pbs , we will realize the following non - unitary transformations of the basis vectors : @xmath62 a simple input state @xmath63 will be correspondingly transformed to @xmath64 with the common constant neglected . with this example
, we will demonstrate how to obtain an approximate bell state by separating the even and the odd parity sectors of the output.
[fig. 2 caption (partially recovered): ... of a photon pair state in eq. ([1]), which is sent to the a1-b1 terminals, is transformed to a superposition @xmath65 over four pairs of output ports. each of these states is a fixed linear combination of two bell states (one even parity, the other odd). through the kerr media, two coherent pulse trains running between the output ports interact with the single photons at locations a and b in the order specified here (the first coherent state is coupled to the @xmath38 mode and the second to the @xmath39 mode at location b, and they are coupled to the photons at location a in the opposite way), and separate the even and the odd parity components of the photon pairs. the detection of a local qnd module heralds which single-photon modes are coupled to the traveling coherent beams and mapped to quantum memory. upon receiving a successful result of the @xmath66 module and the measurement result of the qnd module from location a, transmitted through a classical communication channel, the system operator at location b, together with the measurement of his/her local qnd module, determines the generation and the type of an elementary link between the a2-b2 terminals.]
meanwhile, as illustrated in fig. 2, we let two identical coherent beams @xmath67 interact with the photonic modes going to track @xmath53 through two xpm operations @xmath68 and @xmath69 (@xmath70 is the nonlinear intensity, @xmath71 the interaction time, and @xmath72 the number operator of the corresponding mode) realized by kerr nonlinearities, evolving the state (projected out over tracks @xmath52 and @xmath53 here) from the input @xmath73 to @xmath74, where @xmath75 and @xmath76.
if one also considers the losses of the photonic modes in the xpm processes , the above unitary operations will be replaced by non - unitary quantum operations leading to a mixed state similar to the form in eq .
( [ d ] ) below . in our case
only a very small @xmath15 should be generated within a short interaction time @xmath71 of the coherent beams and the single photon , so the generated state has a close to unit fidelity with the pure state in eq .
( [ a ] ) @xcite and the xpm processes can be well approximated by the unitary operations .
the coherent states in the above equation are transmitted through lossy optical fiber to location a , while the single photon modes @xmath77 and @xmath78 are temporarily stored in quantum memory . over a segment of fiber with the loss rate @xmath79 of the coherent beams ,
their losses can be modeled by a beam splitter of transmission @xmath80 for the distance @xmath81. under this decoherence the initial state involving photon b and the coherent beams in eq. ([a]) decoheres, after the beams are sent to location a, to @xcite @xmath82 where @xmath83 and @xmath84. to suppress the undesired component @xmath85 effectively, we can set the parameters such that @xmath86. with qubus coherent beams satisfying @xmath87, for instance, the fidelity @xmath88 with @xmath89 will be larger than @xmath90.
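to get a feel for the trade-off just described, the following sketch models the fiber as a beam splitter of transmission exp(-L0/Latt), as stated in the text, and evaluates the standard factor exp[-(1 - eta)|alpha|^2 (1 - cos theta)] by which loss suppresses the coherence between the two qubus branches. the paper's actual parameter condition and fidelity bound are hidden behind the placeholders, so the values below are purely illustrative.

```python
import numpy as np

def fiber_transmission(distance_km: float, att_length_km: float = 20.0) -> float:
    """Beam-splitter model of fiber loss: transmission eta = exp(-L / L_att)."""
    return np.exp(-distance_km / att_length_km)

def qubus_coherence_factor(alpha: float, theta: float, eta: float) -> float:
    """Magnitude of the factor multiplying the coherence between the |alpha>
    and |alpha e^{i theta}> branches after a loss channel of transmission eta
    (standard coherent-state result; overall phases are ignored)."""
    return np.exp(-(1.0 - eta) * abs(alpha) ** 2 * (1.0 - np.cos(theta)))

# Illustrative, hypothetical parameters: a larger |alpha * theta| makes the
# QND comparison easier to resolve but worsens the loss-induced dephasing,
# which is why the text keeps the XPM phase shift small.
L0, L_att = 10.0, 20.0            # km
eta = fiber_transmission(L0, L_att)
alpha = 300.0
for theta in (0.005, 0.01, 0.02):
    d = qubus_coherence_factor(alpha, theta, eta)
    print(f"eta = {eta:.3f}  alpha = {alpha:.0f}  theta = {theta:.3f}  "
          f"coherence factor ~ {d:.3e}")
```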
after the coherent beams are transmitted to location a , we use the same kerr nonlinearities to interact the beams with the single photon modes there as shown in fig .
2 , realizing the state ( the undesired contribution from @xmath85 component is eliminated effectively by the setting , @xmath91 is the state with @xmath92 replaced by @xmath93 in eq .
([a]), and the indexes a, b are neglected too) @xmath94. then two phase shifters of @xmath95 and one 50/50 beam splitter are applied to transform the state to @xmath96. owing to the parameter setting that eliminates the decoherence effect, the detection rate of the first beam is very low if we measure it directly.
for example , in the case of @xmath97 and the elementary link distance @xmath98 km ( the attenuation length of the coherent beams is assumed to be @xmath99 km ) , the intensity of @xmath100 will be as low as @xmath101 , which could be hardly detected in reality .
here we propose an indirect measurement approach, applying an extra qnd module (denoted as @xmath66 in fig. 2) of the type in fig. 1. in module @xmath66 the weak output coherent beam interacts with one of the bright beams @xmath102, realizing the following state (@xmath103): @xmath104 the dominant non-vacuum component of the weak beam is a single photon, which validates the above approximation. to distinguish all possible output states @xmath10 and @xmath105 (@xmath106) generated by the vacuum and non-vacuum components of the weak beam, we can use a sufficiently large @xmath107 so that the overlaps of their photon-number poisson distributions are negligible for number-resolving detection. in our setting only a simple photodiode is necessary, because states with @xmath108 occur negligibly often. by any response of the photodiode in module @xmath66, together with the detection results of the two qnd modules in fig. 2, an approximate @xmath109 with fidelity larger than @xmath110 is therefore created between the ports @xmath52 and @xmath53. counting the possibilities over all four pairs of output ports, we obtain the following success probability of realizing an approximate bell state (@xmath21): @xmath111 this is the total probability of the odd parity sector minus that of the vacuum component of the first coherent state in it, which gives no response of the photodiode in the @xmath66 module. the pre-factor @xmath112 is due to the equal proportions of the even and odd parity sectors from eq. ([parity]) and the other similar relations.
[fig. 3 caption (partially recovered): ..., the probability of entangling a photon pair by a single try, and @xmath88, the fidelity of the entangled pair. the distances between the repeater stations are chosen as @xmath113 km, @xmath114 km, @xmath115 km, @xmath116 km and @xmath117 km.]
the efficiency of generating high-fidelity elementary links is low, as shown in fig. 3.
this can be overcome by the repeated operations with the frequency @xmath118 .
the input photons at location a and b can be in arbitrary states of eq .
( [ 1 ] ) even if @xmath40 and @xmath35 are unknown ( see appendix ) .
it is therefore convenient to speed up the entangling operations with a continuous supply of single photons, including processed ones that can be recycled after a failure event (the losses of the photons in the xpm processes are neglected). two pulsed laser beams with repetition rate @xmath119, as commonly generated in mode-locked systems, realize an approximate bell state within the average time @xmath120, the sum of the detection and the communication time (counted after the arrival of the first pulse), where @xmath121. such repetitious operations should be matched by a quantum memory space @xmath122 (@xmath123 is the least integer equal to or larger than @xmath124) for the processed temporary single-photon modes at location b, and a single-photon source with repetition rate @xmath119 is also necessary for starting such operations there. the memory modes of the successful events are preserved, while those of the failures are stored only for a time @xmath125.
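the bookkeeping in this paragraph can be illustrated with a rough estimate. the exact expressions for the average time and the memory space are hidden behind the placeholders, so the sketch below only assumes the generic structure stated in the text: attempts succeed with some probability per pulse, pulses arrive at repetition rate @xmath119, a classical signal must cross the elementary-link distance, and every pulse processed while waiting for that signal occupies a temporary memory mode. all numbers are hypothetical.

```python
import math

def elementary_link_estimate(p0: float, rep_rate_hz: float, l0_km: float,
                             c_fiber_km_s: float = 2.0e5):
    """Back-of-envelope estimate (NOT the paper's exact formula):
    - average generation time      ~ 1/(p0 * f) + t_comm
    - temporary memory modes needed ~ ceil(f * t_comm)
    where t_comm = l0 / c is the one-way classical signalling time."""
    t_comm = l0_km / c_fiber_km_s
    avg_time = 1.0 / (p0 * rep_rate_hz) + t_comm
    # round() guards against floating-point jitter before taking the ceiling
    memory_modes = math.ceil(round(rep_rate_hz * t_comm, 9))
    return avg_time, memory_modes

# Hypothetical numbers: 1% success per try, 10 km links, various pulse rates.
for f in (1e4, 1e5, 1e6):
    t, m = elementary_link_estimate(p0=0.01, rep_rate_hz=f, l0_km=10.0)
    print(f"f = {f:8.0e} Hz  ->  avg link time ~ {t * 1e3:7.3f} ms, "
          f"temporary memory modes ~ {m}")
```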
to eliminate the difference of the phases of two qubus coherent pulse trains gained in travel , we transmit them through the same fiber within a very short time interval ( ref .
@xcite presents an experimental study on the elimination of single photon phase noise in this way ) .
then the two pulse trains , which are controlled by optical switch ( pockels cell ) at the starting and terminal point as in fig .
2 , act as the reference of each other to fulfill the purpose .
given the high quality elementary links generated between the relay stations , we will connect them to doubled distances level by level through entanglement swapping .
the local bell state measurements here could be implemented by two - photon hong - ou - mandel type interference @xcite .
the success probability @xmath126 of a connection attempt , which is determined by those of retrieving two single photons from memory ( with the efficiency @xmath127 for taking one photon out of a memory unit of two modes ) and detecting both of them with the single photon detection efficiency @xmath25 , is the same for all levels . on average ,
@xmath128 pairs at the @xmath129-th level are needed to realize an @xmath130-th level entangled pair with certainty and, between each two neighboring stations separated by the distance @xmath81, the total number of elementary links required for such deterministic connection over the distance @xmath131 is therefore @xmath128^n. we adopt the following connection strategy: (1) generate that many elementary pairs between each two neighboring relay stations; (2) perform all local bell state measurements iteratively at each connection level; (3) communicate the measurement results to the two link ends iteratively at every connection level. summing up all these durations gives

@xmath132^n + \sum_{k=1}^{n} 2^{k-1}\frac{l_0}{c} + \sum_{k=0}^{n-1}\left[\frac{1}{p_c}\right]^{n-k}\tau_0 + \frac{l_0}{c} \;=\; t_0\left[\frac{1}{p_c}\right]^{\log_2 l/l_0} + \frac{\left[\frac{1}{p_c}\right]^{\log_2 l/l_0 + 1} - \left[\frac{1}{p_c}\right]}{\left[\frac{1}{p_c}\right] - 1}\,\tau_0 + \frac{l}{c}

as the average time for distributing an entangled pair over the distance @xmath133, where @xmath134 and @xmath135 is the time for a local bell state measurement. the last term @xmath136 on the left-hand side of the above expression is the time for the first coherent pulse to arrive at location a. there are @xmath137 bell state measurement results, including failures. @xmath138 is an upper bound because elementary pair generation can be performed simultaneously with connection.
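a short script makes the scaling of the closed-form expression above concrete. the leading factor hidden behind @xmath132 is taken here to be the elementary-link time t0 times the per-level repetition factor, which matches the visible right-hand side; t0, tau0, p_c and the distances below are illustrative choices, not the paper's.

```python
import math

def repeater_time(L_km: float, L0_km: float, p_c: float,
                  t0_s: float, tau0_s: float, c_km_s: float = 2.0e5) -> float:
    """Average time to distribute one long-distance pair, following the
    closed-form expression quoted in the text:
        T ~ t0 * m^n + (m^(n+1) - m) / (m - 1) * tau0 + L / c,
    with m = ceil(1/p_c) and n = log2(L / L0) nesting levels."""
    n = math.log2(L_km / L0_km)
    m = math.ceil(1.0 / p_c)
    swap_overhead = ((m ** (n + 1) - m) / (m - 1) * tau0_s) if m > 1 else n * tau0_s
    return t0_s * m ** n + swap_overhead + L_km / c_km_s

# Hypothetical parameters: 10 km elementary links nested up to 1280 km.
# p_c is modelled as (memory retrieval efficiency * detector efficiency)^2
# for the two-photon Bell measurement, as described in the text.
p_c = (0.9 * 0.9) ** 2
T = repeater_time(L_km=1280.0, L0_km=10.0, p_c=p_c, t0_s=1.5e-4, tau0_s=1e-6)
print(f"p_c = {p_c:.3f},  total time ~ {T:.3f} s "
      f"(two-way classical signalling alone: {2 * 1280.0 / 2.0e5:.4f} s)")
```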
the final fidelity of a connected pair is @xmath139 in terms of the elementary link fidelity @xmath88. with a close-to-unit initial fidelity (@xmath140, @xmath141), @xmath142 decreases only slowly over distances @xmath133 of the order of a thousand kilometers.
[fig. 4 caption (partially recovered): ... km, @xmath143, and @xmath144 km/s in optical fiber, neglecting the time for local bell state measurements. by the connection strategy we present, @xmath138 scales linearly if @xmath145 and gets close to the two-way classical communication time as the pulse train repetition rate @xmath119 is increased. the classical communication line is shown for comparison.]
from fig.
4 we see that @xmath138 can be reduced by adjusting @xmath119 and @xmath146 through two independent mechanisms .
the two theoretical extreme values of @xmath119 respectively correspond to @xmath147 and @xmath148 , where @xmath149 is the photodiode dead time of the qnd modules in fig . 2 , and the necessary quantum memory spaces for the two cases are @xmath150 and @xmath151 $ ] , respectively .
we provide the following table of @xmath138 and @xmath152 for distributing an entangled pair with fidelity larger than @xmath153 over the distance of @xmath0 km (@xmath98 km, @xmath143, @xmath154 as in the previous example, and @xmath155; the classical communication time over the distance is @xmath156 s): [table content not preserved in this extraction] the table demonstrates the trade-off between the required quantum memory coherence time and quantum memory space for the system.
even with the least temporary - mode memory space of @xmath46 per half station , such long - distance entanglement could be realized in @xmath157 minutes .
we should mention another scenario that the system can perform: first generating elementary pairs of medium fidelity with a larger probability per try, and then purifying the generated rank-two mixed states by local operations and classical communication @xcite before entanglement swapping. without this entanglement purification step, the implementation of the scheme is much simpler, and high efficiency can be achieved by repetitious operations with flexible input photon states. the cost to pay for this efficiency is a large memory space for the temporary modes.
a number of recently developed multi - mode memories ( see , e.g. , @xcite ) could meet such requirement of the system in practical operations .
b. h. thanks c. f. wildfeuer and j. p. dowling for material about kerr nonlinearity, y.-f. chen and i. a. yu for discussions on experimental feasibility, and c. simon for comments.
this work is supported in part by the petroleum research fund and psc - cuny award .
since local operations and classical communication alone cannot increase the entanglement of a bipartite system, we should perform a non-local operation on the distant photons by means of two qubus coherent beams. moreover, if a bell state is to be realized out of a photon pair in an arbitrary rank-four mixed state of eq. ([1]), applying the non-unitary transformations (those in eq. ([parity]) and the similar ones over the other pairs of output ports) to the input photon pair state is essential. by these non-unitary transformations,
all pure state components @xmath35 of an input photon pair state are probabilistically mapped to a one - dimensional subspace with a linear combination of two fixed bell states of the different parities as the basis vector .
then , the traveling qubus beams will separate the different parity components as in sec .
( [ section4 ] ) and realize a bell state in an ideal situation .
we here apply a general method to implement non - unitary transformation on single photon states by unitary operation in extended space and projection to subspace @xcite . in
what follows , we illustrate the design that realizes such non - unitary maps . in fig .
2 , after the state of an input photon pair is converted to a which - path space by two pbs , its bell - state basis vectors will be transformed to the following : @xmath158 where @xmath159 and @xmath160 are the creation operators of the which - path photonic modes at location a and b , respectively .
any pure bi - photon state as the linear combination of @xmath41 and @xmath161 is correspondingly transformed to @xmath162 where @xmath163 are the linear combination coefficients .
we here use @xmath164 to represent a bi-photon pure state over the paths numbered @xmath165, @xmath166, @xmath167, @xmath168 at location a and the paths numbered @xmath169, @xmath170, @xmath167, @xmath130 at location b. the indexes in the sets @xmath171 and @xmath172 are in ascending order, and the numbers in the two sets can be different.
[fig. 5 caption (partially recovered): ... of eq. ([a2]). the state of a single photon is converted to a which-path space by pbs; the mirrors m and the 50/50 beam splitters then transform the @xmath46-dimensional single photon state to a @xmath157-dimensional one. it is a part of the integrated circuits a or b in fig.]
each photon of a pair converted to the which - path space is sent into a linear optical circuit performing a unitary operation @xmath173 in the extended @xmath174-dimensional space of the @xmath175-dimensional bi - photon state ( @xmath176 in eq .
( [ a2 ] ) means a @xmath175 identity matrix ) .
fig. 5 shows the local circuit performing this unitary operation. for technical simplicity
, the unitary operations in the extended spaces of a bipartite state can be represented by a vector - operator duality notation as in @xcite ( the vector - operator duality description of all involved transformations of photon pair states here is given in @xcite ) .
we could also simply understand this transformation as one acting on a @xmath174 bi - photon state @xmath177 with the zero coefficient components @xmath178 in the extended dimensions .
the operation @xmath179 on such an extended state @xmath180 gives @xmath181. the results of the operation @xmath179 on all @xmath182 take the same form as the above.
it is not necessary to process the local input photons at location a and b simultaneously , since @xmath183 .
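the extended-space-plus-projection recipe used in this appendix is an instance of a general construction: embed a contraction (an operator with singular values at most one) as the upper-left block of a larger unitary and post-select on the original subspace. the sketch below demonstrates that generic construction with numpy; it is not the specific circuit of fig. 5 or the operators @xmath179, @xmath185, etc. (which are hidden behind placeholders), and the target operator is an arbitrary stand-in.

```python
import numpy as np

def psd_sqrt(m: np.ndarray) -> np.ndarray:
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.conj().T

def halmos_dilation(a: np.ndarray) -> np.ndarray:
    """Embed a contraction A (largest singular value <= 1) as the upper-left
    block of a unitary twice its size:
        U = [[A, (I - A A^+)^(1/2)], [(I - A^+ A)^(1/2), -A^+]]
    Acting with U on (psi, 0) and keeping the first block returns A psi."""
    d = a.shape[0]
    i = np.eye(d)
    top = np.hstack([a, psd_sqrt(i - a @ a.conj().T)])
    bottom = np.hstack([psd_sqrt(i - a.conj().T @ a), -a.conj().T])
    return np.vstack([top, bottom])

# An arbitrary non-unitary map on a 4-dimensional (two-photon-sized) space,
# rescaled to be a strict contraction; a hypothetical stand-in for the
# parity-sorting maps of the text, NOT the actual circuit of fig. 5.
rng = np.random.default_rng(7)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
a *= 0.9 / np.linalg.svd(a, compute_uv=False)[0]

u = halmos_dilation(a)
assert np.allclose(u @ u.conj().T, np.eye(8), atol=1e-10)   # U is unitary

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
extended = np.concatenate([psi, np.zeros(4)])   # pad with zeros in the extra dims
projected = (u @ extended)[:4]                  # project back onto the subspace

print("post-selection success probability:", np.linalg.norm(projected) ** 2)
print("matches A|psi> (unnormalized):", np.allclose(projected, a @ psi, atol=1e-10))
```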
the second pair of unitary operations on the states spanned by @xmath184 are the identical @xmath174 unitary operators , @xmath185 with the submatrix @xmath186 to respectively act on the single photon modes at both locations .
the consecutive operations of @xmath179 and @xmath187 on the bell - state basis vectors @xmath182 realize the states @xmath188 for @xmath189 and @xmath46 , and the states @xmath190 for @xmath191 and @xmath157 .
@xmath192 is implemented by three totally reflecting mirrors and the phase shifters creating @xmath193 .
the third pair of local unitary operations is simply @xmath194 ( @xmath195 represents the transpose ) . with the fourth pair of the local unitary operations ,
we extend the space of the bi - photon states further to @xmath196 dimension .
the corresponding operator is an @xmath196 unitary matrix @xmath197 constructed with the @xmath174 projection operators @xmath198. under all four successive local unitary operations, the bell-state basis vectors are transformed to @xmath199, with the states @xmath200, @xmath201, @xmath202 and @xmath203 respectively defined in eqs. ([e]) and ([o]) for the even and odd parity bell-state basis vectors. in fig.
2 , the indexes @xmath204 , @xmath157 , @xmath137 and @xmath205 are changed to @xmath206 , @xmath207 , @xmath167 , and all circuits performing the four consecutive unitary transformations are represented by the integrated circuits a and b. the components of the circuits are just 50/50 beam splitters , totally reflecting mirrors and the appropriate phase shifters . after the photonic modes are converted back to polarization space as in fig .
2 , we will obtain the following transformations of the bell - state basis vectors by substituting @xmath200 , @xmath201 , @xmath202 and @xmath203 in eqs .
( [ e ] ) and ( [ o ] ) into eq .
([basis]): @xmath208 let us look at the effect of the system in fig. 2 on any one of the pure state components, @xmath209, in eq. ([1]). without loss of generality, we let the beams @xmath210 and @xmath211 in the qnd modules couple respectively to the @xmath52 and @xmath53 modes, and two traveling coherent beams denoted as @xmath212 and @xmath213
are also coupled to the @xmath52 and @xmath53 modes during the entangling operation shown in fig. 2. the qnd modules here are a slightly modified version of the one in fig. 1, with a coherent beam coupling to both the @xmath38 and @xmath39 modes in turn. without considering the losses of the qubus beams @xmath212 and @xmath213 in travel, the state from the input @xmath35 (following eq. ([bell])) evolves under the interaction with these coherent beams into the state of eq. ([output]). if we adopt a sufficiently large @xmath215 of the coherent beams, the overlap of @xmath10 and @xmath216 will be small enough, and only one qnd module at each location will be necessary.
then , after the interaction between @xmath210 , @xmath211 and the local single photon modes , the responses of both qnd modules at two locations project out the component @xmath217 proportional to @xmath218 defined in eq .
( [ bell ] ) , as seen from eq .
([output]), and the other response and no-response patterns project out the similar components @xmath219, @xmath220, and @xmath221, respectively. also, given photon-number-resolving detection (a fock state projector) on the first traveling coherent beam finally output at location a and a sufficiently large @xmath222, @xmath56 or @xmath42 could be obtained between @xmath52 and @xmath53, depending on the projection onto fock states. from a general input state @xmath36 in eq. ([1]), the structures of the bi-photon states projected out by the qnd modules depend only on the output ports; e.g., between @xmath52 and @xmath53, those from all pure state components @xmath35 are proportional to @xmath218 of eq. ([bell]). it implies that the bi-photon state finally output through the detections of the coherent beams will be a fixed bell state independent of the input @xmath36, which therefore need not be known for the operations. were there no loss of the traveling coherent beams, a bell state would be realized with certainty by each entangling attempt, following the operation order in fig. 2 and collecting the possibilities from all four pairs of output ports. with which single photon modes the traveling beams should interact at one location is controlled by the detection result of the local qnd module. without such control by classically feedforwarded measurement results, we can simply use two groups of coherent beams running between @xmath223 and @xmath224, respectively, to fulfill the same purpose at the price of some success probability per try. due to the unavoidable losses of the traveling beams in optical fiber
, we should adopt the setting discussed in sec .
[ section4 ] to maintain the high fidelity of the entangled bi - photon state .
the relation between the efficiency and the fidelity in generating an entangled pair is then given in fig .
3 . in this case
, the approximate bell state generated between @xmath52 and @xmath53 will be @xmath109 because of the opposite phases of the coherent states @xmath225 and @xmath226 in eq. ([k]), and only a photodiode will be necessary for the detection.
finally , we provide a geometric interpretation for the involved operations here . viewed from an extended space , the vectors @xmath35 of eq .
( [ 1 ] ) are situated in a four - dimensional hyperplane spanned by @xmath227 .
the pairs of the local unitary transformations from @xmath179 to @xmath228 in the extended space rotate this hyperplane to a new position with the orthonormal vectors @xmath229 ( @xmath230 ) defined in eq .
( [ bell ] ) being the basis of the rotated hyperplane .
the circuits implementing these unitary operations @xmath231 are properly designed with a permutation symmetry in eq. ([path]): @xmath232 and @xmath233 are invariant while @xmath234 and @xmath235 change sign under the permutation of the indexes @xmath45 and @xmath46, so that the new basis vectors @xmath236 will be linear combinations of two bell states of different parities. under all detection patterns of the two qnd modules, every @xmath35 rotated to the hyperplane spanned by @xmath229 is definitely projected onto one of the four lines in the direction of a @xmath236. the @xmath236 are separable because local operations cannot increase the entanglement of a bipartite system.
the parity gate with the operation of two qubus beams in fig . 2 then separates the different parity components in a @xmath236 to realize a bell state in the ideal situation with no loss of the qubus beams in travel . | we present a quantum repeater protocol that generates the elementary segments of entangled photons through the communication of qubus in coherent states .
the input photons at the repeater stations can be in arbitrary states to save the local state preparation time for the operations .
the flexibility of the scheme accelerates the generation of the elementary segments ( close to the exact bell states ) to a high rate for practical quantum communications .
the entanglement connection to long distances is simplified and sped up , possibly realizing an entangled pair of high quality within the time in the order of that for classical communication between two far - away locations . |
Baby Gammy, one-year-old at centre of Thai surrogacy scandal, granted Australian citizenship
Baby Gammy, who was at the centre of a surrogacy scandal in Thailand, has been granted Australian citizenship.
Born with Down syndrome via a surrogate mother in Thailand, Gammy was left behind while his healthy twin sister Pipah was taken home by their Australian parents, Wendy and David Farnell.
The surrogate mother of the twins, 21-year-old Thai woman Pattaramon Chanbua, said she applied for Australian citizenship for Gammy because she wanted to safeguard his future, not because she wanted to travel to Australia.
Gammy, who turned one on December 23 last year, has been granted citizenship and is now also eligible for an Australian passport, although that is a separate process that has not begun.
It is unclear whether he could also be eligible for Australian welfare benefits.
The Pattaramon family recently moved into a new home in Chonburi, 90 kilometres south of Bangkok, which was purchased from donated funds.
More than $240,000 was raised after Baby Gammy's plight became public.
The Farnells were heavily criticised last year for leaving one infant behind, prompting robust discussion about laws and regulations surrounding international surrogacy arrangements.
The Department for Child Protection began proceedings in the Family Court after it was revealed David Farnell, 56, had 22 child sex convictions, including indecent dealing with young girls.
The Farnell family retained custody of Pipah, subject to strict court conditions.
||||| Despite being rejected by his Australian biological father, Thai-born baby Gammy can become an Australian citizen.
[Photo caption: Baby Gammy, with Thai surrogate mother Pattaramon Chanbua, has a biological Australian father. Photo: AP]
Bangkok: Gammy, the baby at the centre of Thailand's surrogacy scandal, has been granted Australian citizenship.
The 12-month-old baby with Down syndrome will now be eligible for Australian services such as healthcare. He will also be eligible to apply for an Australian passport.
Gammy, who was born with a heart condition, was abandoned by his Australian biological parents, David and Wendy Farnell, last year, prompting Thai authorities to shut down the country's then booming surrogacy industry.
Help at hand: Gammy has received donations of more than $240,000 to help pay for hospital treatment.
The baby's surrogate mother, Pattaramon Chanbua, said she applied for Australian citizenship because she wanted to safeguard Gammy's future, not because she wanted to travel to Australia.
But there are not expected to be any restrictions on Ms Pattaramon travelling to Australia with Gammy.
Mr and Mrs Farnell have been allowed by West Australian authorities to keep Gammy's twin sister, Pipah, with strict conditions, despite Mr Farnell's previous convictions for child sex offences.
Gammy was automatically eligible to become an Australian citizen because Mr Farnell's sperm was used, making him the biological parent.
Ms Pattaramon bitterly criticised the Farnells after they left Thailand with Pipah, saying she was still owed money by the Bunbury couple.
Gammy was critically unwell at the time.
When Fairfax Media revealed Gammy's plight, people around the world rushed to donate more than $240,000.
The money is managed by Australian charity Hands Across the Water, which recently provided a new house for Ms Pattaramon's family in Chonburi, 90 kilometres south of Bangkok.
Gammy turned one on December 23.
He has been regularly visiting a Thai hospital, with his bills paid by donated money through Hands Across the Water.
Legislation has been drafted by Thailand's military junta that will ban surrogacy except involving family members, with penalties of up to 10 years' jail for violators of the law. ||||| [Image caption: Baby Gammy's case made headlines around the world and provoked debate over surrogacy]
Baby Gammy, who was born with Down's syndrome to a surrogate mother in Thailand, has been granted Australian citizenship, local media report.
Gammy was left behind while his twin sister Pipah went home with Australian parents David and Wendy Farnell last year.
The case sparked intense debate over international surrogacy agreements.
Surrogate mother Pattaramon Chanbua said she sought Australian citizenship to safeguard Gammy's future.
Gammy, who turned one in December, is eligible for Australian citizenship because David Farnell is his biological father.
He will now have access to healthcare in Australia and is eligible for an Australian passport.
Baby aborted
The Farnells faced heavy criticism for leaving one baby behind and taking the other. Besides Down's syndrome Gammy has a congenital heart condition.
[Image caption: Ms Chanbua said later she had not allowed Gammy to be taken by the Australian parents]
Ms Chanbua, the 21-year-old surrogate mother, claimed that the Farnells wanted Gammy aborted when they found out he had Down's syndrome, but that was against her Buddhist beliefs.
In a TV interview, the Farnells said after Gammy was born, they wanted to bring both infants home.
Ms Chanbua told the Associated Press that she had then not allowed Gammy to go with them.
It was later revealed that David Farnell had child sex convictions, prompting Australia's Department of Child Protection to launch an investigation in August.
The Farnells retain custody of Pipah but with strict court conditions, according to Australian media reports.
Gammy's case drew donations from around the world which are being managed by an Australian charity and have been used to pay for his hospital bills and a new home for Ms Chanbua's family. | – The baby who last year made international headlines in a surrogacy controversy is now a 1-year-old Australian citizen, though he still lives in Thailand. Gammy, who has Down syndrome and a heart condition, can now access Australian health care, the BBC reports. He has been a regular patient at a Thai hospital, the Sydney Morning Herald reports; donations have covered the costs. Gammy was eligible for Australian citizenship because his biological father, David Farnell, is Australian. He can now also get an Australian passport. Farnell and his wife, Wendy, sparked the controversy when they took Gammy's twin sister home from a surrogate mother in Thailand, but Gammy stayed. The surrogate, Pattaramon Chanbua, applied for Australian citizenship for the boy; she says it was an effort to protect him for the future rather than a means for her to go to Australia. Even so, it now appears she'll be able to do so with Gammy without facing restrictions. Global donors have raised some $240,000 for the baby; that allowed Chanbua's family to move to a new home in Thailand, Australia's ABC reports. |
[Image caption: Rodrigo Duterte came to power in 2016 promising a crackdown on drug dealers]
Philippines President Rodrigo Duterte has said he plans to withdraw his country from the International Criminal Court (ICC) after it began examining the country's drugs war.
"It is apparent that the ICC is being utilised as a political tool against the Philippines," Mr Duterte said.
He also condemned "baseless" attacks by the UN.
The ICC in February began examining alleged crimes committed during the controversial anti-drugs crackdown.
ICC chief prosecutor Fatou Bensouda said the court would be looking at reports of extrajudicial killings.
'Outrageous attacks'
Mr Duterte said he would leave the ICC "immediately", but the court says the process takes a year after an official notice of withdrawal.
A statement from the Philippine administration said the ICC inquiry was "in violation of due process".
The president also condemned "baseless, unprecedented and outrageous attacks" on him and his administration by the UN.
[Video caption: Mr Duterte has compared himself to Adolf Hitler in the past]
"The acts allegedly committed by me are neither genocide nor war crimes. The deaths occurring in the process of legitimate police operations lacked the intent to kill."
The statement contradicts some of Mr Duterte's previous comments about the drugs war, including his willingness to "slaughter" drug addicts and dealers.
There has been growing international pressure on Mr Duterte about his country's war on drugs, which has caused the deaths of thousands.
Police claim they have killed around 4,000 drugs suspects, while rights groups suggest the figure could be far higher.
Ms Bensouda first said she was "deeply concerned" about reports of extrajudicial killings in October 2016, less than four months after Mr Duterte assumed office on a pledge to crack down on drug dealers.
And last month, as the ICC announced its preliminary inquiry, the UN Human Rights Council questioned the Philippines' human rights record and called on the country to accept a UN special rapporteur.
Harry Roque, a spokesperson for President Duterte, said in response that the ICC lacked jurisdiction over the case, calling the ICC a "court of last resort".
The court can only intervene when national authorities cannot or will not act. It has no police force of its own, and must rely on local powers to arrest and bring suspects to them.
While in theory withdrawal would not stop the court's inquiry into alleged crimes committed while the Philippines was a member, it could prove difficult to make local authorities co-operate.
There are currently 123 parties to the ICC, including the Philippines. The US has not ratified the treaty, while countries like China, India and Turkey have never signed it.
On Monday, local media reported that the Philippine Senate had filed a resolution saying the country's withdrawal from international treaties would only be valid with its consent.
The country's constitution states that adoption of an international treaty cannot be revoked without the support of both president and Senate. ||||| President accuses ICC of crusade against him after it opened inquiry into his war on drugs
Rodrigo Duterte is to withdraw the Philippines from the international criminal court after it opened a crimes against humanity investigation into his brutal war on drugs.
In a lengthy statement, the Philippines president accused the ICC and the UN of a crusade against him, denouncing what he described as “baseless, unprecedented and outrageous attacks on my person”.
“I therefore declare and forthwith give notice, as president of the republic of the Philippines, that the Philippines is withdrawing its ratification of the Rome statute [the treaty that established the ICC] effective immediately,” said Duterte.
The ICC announced last month it was investigating allegations that Duterte had committed crimes against humanity in his war on drugs, which has killed an estimated 8,000 people since he took office in May 2016.
Duterte initially said he welcomed the chance to defend his name. But on Wednesday he said the ICC had shown a “brazen ignorance of the law” and claimed that the Rome statute was fraudulently implemented in the Philippines to begin with and therefore not “effective or enforceable”.
Philippine politicians met the announcement with scorn and anger. Congressman Antonio Tino said the move was “utterly self-serving and driven by sheer panic at the prospect of a trial before the ICC for crimes against humanity related to his murderous war on drugs”. Tino added: “Saving his own skin has taken precedence over the long-term commitment made by the Philippines state to human rights.”
Kabataan party representative Sarah Elago said it showed that “Duterte intends to impose his fascist and tyrannical tendencies even against international critics”.
“Only the guilty become too eager to run away from prosecution,” Elago added. “If indeed he wants to prove his innocence, what better platform than a court?”
Relations between the Philippines and the international community have become increasingly antagonistic in recent weeks. Last week, the department of justice included a UN special rapporteur on a list of people declared to be communist terrorists. In response, the UN high commissioner for human rights, Zeid Ra’ad Al Hussein, said Duterte “needs to submit himself to some sort of psychiatric examination”.
[Video: UN official says Philippine president needs psychiatric evaluation]
In his statement on Monday, Duterte said Hussein’s comments were clear evidence of “international bias” and that the ICC was “being utilised as a political tool against the Philippines”.
He also described the ICC’s inquiry – which involves looking into a 77-page report submitted to it last year that allegedly documents Duterte’s crimes against humanity going back to 1988 when he was mayor of Davao – as “unduly and maliciously created”.
When the Philippines ratified the Rome statute in 2011 – nine years after it came into force – it was seen as a big step forward for human rights in Asia. The country’s withdrawal will be seen as a blow for international accountability in the region. The ICC, based in The Hague, is the world’s only permanent international tribunal that looks into war crimes and crimes against humanity.
Duterte has made his contempt for the ICC well-known in the past, calling it “bullshit”, “hypocritical” and “useless”, but in his statement on Wednesday, he went further, accusing the court of violating its own due process and depriving him of the right of innocence until proven guilty.
Should the ICC’s preliminary inquiries find evidence of crimes against humanity, the Philippines’ sudden withdrawal from the statute would not protect Duterte from being put on trial. A country’s withdrawal from the ICC takes effect a year after the UN has received the application and article 127 of the Rome statute specifies that “withdrawal shall not affect any cooperation with the court in connection with criminal investigations”.
James Gomez, Amnesty International’s south-east Asia director, described Duterte’s move as misguided and deeply regrettable. “Powerful individuals in the Philippines are more interested in covering up their own potential accountability for killings than they are in ensuring justice for the many victims of the country’s brutal war on drugs’,” Gomez said.
There are 139 countries signed up to the Rome statute, but with some powerful exceptions. The US signed the treaty in 2000 but never ratified it, citing concerns over sovereignty, similarly with Russia. Israel signed it for a short period but also never ratified it into law.
Should the UN accept Duterte’s withdrawal, it would make the Philippines only the second country to withdraw from the Rome statute, following Burundi in 2017. South Africa attempted to leave in 2016, but its withdrawal was revoked by the UN. ||||| FILE - In this Wednesday, Dec. 20, 2017, file photo, Philippine President Rodrigo Duterte addresses the troops during the 82nd anniversary celebration of the Armed Forces of the Philippines in suburban... (Associated Press)
FILE - In this Wednesday, Dec. 20, 2017, file photo, Philippine President Rodrigo Duterte addresses the troops during the 82nd anniversary celebration of the Armed Forces of the Philippines in suburban Quezon city northeast of Manila, Philippines. Duterte said Wednesday that his country is withdrawing... (Associated Press)
FILE - In this Wednesday, Dec. 20, 2017, file photo, Philippine President Rodrigo Duterte addresses the troops during the 82nd anniversary celebration of the Armed Forces of the Philippines in suburban Quezon city northeast of Manila, Philippines. Duterte said Wednesday that his country is withdrawing... (Associated Press) FILE - In this Wednesday, Dec. 20, 2017, file photo, Philippine President Rodrigo Duterte addresses the troops during the 82nd anniversary celebration of the Armed Forces of the Philippines in suburban... (Associated Press)
MANILA, Philippines (AP) — President Rodrigo Duterte announced Wednesday that the Philippines is withdrawing its ratification of a treaty that created the International Criminal Court, where he is facing a possible complaint over thousands of suspects killed in his anti-drug crackdown.
Critics expressed shock at Duterte's decision, saying he was trying to escape accountability and fearing it could foster an even worse human rights situation in the country. Others called the move a foreign policy blunder that could embolden China to scoff at Manila's victory in an international arbitration case against Beijing over contested territories.
An ICC prosecutor announced last month that she was opening a preliminary examination into possible crimes against humanity over alleged extrajudicial killings in Duterte's drug crackdown, angering the president.
Duterte said Wednesday that the court cannot have jurisdiction over him because the Philippine Senate's ratification in 2011 of the Rome Statute that established the court was never publicized as required by law. He called the failure to make the ratification public a "glaring and fatal error."
Thousands of mostly poor drug suspects have been killed under Duterte's drug crackdown. He argued Wednesday that the killings do not amount to crimes against humanity, genocide or similar atrocities.
"The so-called war against drugs is lawfully directed against drug lords and pushers who have for many years destroyed the present generation, specially the youth," Duterte said in a 15-page statement explaining his legal position.
"The deaths occurring in the process of legitimate police operation lacked the intent to kill," Duterte said. "The self-defense employed by the police officers when their lives became endangered by the violent resistance of the suspects is a justifying circumstance under our criminal law, hence, they do not incur criminal liability."
Duterte also invoked presidential immunity from lawsuits, which he said prevents the ICC from investigating him while he is in office. The president renewed his verbal attacks against U.N. human rights officials who have expressed alarm over the massive killings.
He said the U.N. expert on extrajudicial killings, Agnes Callamard, had without any proof "pictured me as a ruthless violator of human rights" who was directly responsible for extrajudicial killings. He also criticized ICC prosecutor Fatou Bensouda, who announced last month that she is opening a preliminary examination into the killings.
Last Friday, the United Nations' human rights chief, Zeid Ra'ad al-Hussein, suggested that Duterte "needs to submit himself to some sort of psychiatric evaluation" over his "unacceptable" remarks about some top human rights defenders.
Zeid demanded that the Human Rights Council, which counts the Philippines among its 47 member countries, "must take a strong position" on the issue, and insisted "these attacks cannot go unanswered."
Duterte has acknowledged his rough ways and tough approach to crime, but suggested many Filipinos have come to accept him.
He has lashed out at European governments, saying they should "go to hell" for imposing conditions on financial aid.
Opposition Rep. Carlos Isagani Zarate called Duterte's move to withdraw the country from the Rome Statute a "grave setback to human rights and accountability."
It is "intended to escape accountability by present and even future officials for crimes committed against the people and humanity," Zarate said.
Another opposition lawmaker, Tom Villarin, said Duterte's action "would have unprecedented repercussions on our international standing as a sovereign state."
Villarin said it could also embolden China, which has refused to comply with an international arbitration ruling that invalidated its vast territorial claims in the South China Sea under a 1982 U.N. treaty. The Philippines filed and largely won the arbitration case. | – Philippine President Rodrigo Duterte has had a bumpy relationship with the International Criminal Court. He once described it as "bulls---," then in February said he welcomed its preliminary investigation into allegations of crimes against humanity during Duterte's war on drugs. Now, he says he'll pull the Philippines from the court altogether, "effective immediately," though the BBC reports that formally withdrawing from the ICC is a year-long process. "It is apparent that the ICC is being utilized as a political tool against the Philippines," he said, while blasting the "baseless, unprecedented, and outrageous attacks" directed at him by the UN. The Guardian looks at the recent-most bad blood between the two that apparently spurred those comments. Duterte's government put a UN special rapporteur on a list of communist terrorists, leading the body's commissioner for human rights to say last week that Duterte needs "some sort of psychiatric examination." Should the Philippines successfully withdraw its ratification of the treaty that created the ICC, it would be only the second country to do so, after Burundi. But that wouldn't protect Duterte from a potential trial, as the treaty states "withdrawal shall not affect any cooperation with the court in connection with criminal investigations." Still, the BBC notes it could make the Philippines less cooperative. Speaking of cooperation, the country's Senate on Monday flagged the constitutional provision that requires the Senate to agree to the revocation of any international treaty. Duterte contends the Senate failed to publicize its 2011 ratification of the treaty as required by law, reports the AP. |
lead ( pb ) is a major environmental pollutant , and is highly toxic to the human body . since
the earliest recorded times
the metal has been
smelted , applied as a cosmetic , painted on buildings , and glazed on ceramic pots ( 1 ) . on the other hand , lead may be the oldest recognized chemical toxin ( 2 ) .
signs and symptoms of lead toxicity depend on blood lead level ( bll )
and age , and children are much more vulnerable to lead toxicity than adults .
anorexia ,
lethargy , vomiting , and colicky abdominal pain are the most common symptoms of lead
poisoning in children at bll of more than 40 μg / dl . blood lead of more than 80 μg / dl can cause encephalopathic symptoms in children .
irritability , lethargy , and ataxia may be early
signs of acute encephalopathy , and convulsions and coma are the hallmarks .
most
cases were caused by accidental ingestion of lead products such as fishing sinkers , and some
of them were fatal with bll of more than 200 μg / dl ( 4 ) . apart from accidental ingestion , lead is absorbed daily from air and foods .
lead use and
environmental pollution increased dramatically during the 20th century , especially with use
of lead as a gasoline additive .
therefore , bll of people was generally high in the 20th
century . however , owing to various governmental measures , for example banning of leaded gasoline
use , serious lead poisoning cases have been decreasing recently almost all over the world .
many studies in recent years , however , have revealed the considerable harmfulness of
low - level lead exposure to children .
lead exposure is even now a public health problem ,
especially for children . in this paper , the effects of lead , mainly on growth and development of children , are reviewed .
since the 1940s , many studies have been conducted to measure bll of children and adults for
the purpose of screening for chronic lead poisoning and/or evaluating the effects of
relatively low levels of blood lead on human health .
bll is usually measured by atomic
absorption spectrometry or inductively coupled plasma - mass spectrometry .
both are accurate methods , and they give essentially the same results for identical samples . it was reported that the mean bll of u.s . children aged 1 to 5 yr were 13.7 μg / dl for non - hispanic whites and 20.2 μg / dl for non - hispanic blacks in 1976 ( 5 ) . the bll of adult men and women in switzerland in 1984 were reported as 12.2 and 8.5 μg / dl , respectively ( 6 ) .
it had long been considered that blood lead of less than 20 μg / dl was almost harmless to
the human body , because no clinical or biochemical effects had been recognized in such a
condition until about two decades ago . since then , however , many studies have revealed that
much lower levels of blood lead can adversely affect human health , especially childhood
growth and development ( 7 , 8) .
scientific understanding of the health effects of lead has flourished
over the past two decades .
advances in this area of research have spawned a series of
efforts by governmental agencies to enhance the protection of public health from the adverse
effects of lead ( 9 ) . in switzerland , for example ,
leaded gasoline was predominantly used before the 1980s , but unleaded gasoline use has been
encouraged since the late 1980s . as a result , the mean bll of adults in switzerland
decreased remarkably in only nine years , from 12.2 to 6.8 μg / dl in men and from 8.5 to 5.2 μg / dl in women ( 6 ) .
decreases of bll of children have
also been observed in many industrialized countries ( 5 ) . in swedish children , for example , a dramatic decrease of bll was found during
the period 1978–2001 , from about 6 to 2 μg / dl , which was considered to reflect the
beneficial effect of gradual withdrawal of leaded gasoline ( 10 ) .
a safety level of blood lead , however , does not exist , and lead exposure is still a serious
health problem especially for fetuses and children .
many studies have demonstrated that lead
exposure causes intellectual and behavioral impairment in children .
lead intoxication is
still a hazard in the industrialized countries , and especially pregnant women and children
are at high risk .
lead is absorbed from the
gastrointestinal tract ( approximately 5–10% of the ingested dose in adults and as high as 40% in children ) and the lungs ( approximately 50–70% of the inhaled dose ) .
absorbed lead is temporarily in
the bloodstream and about 95% of the lead exists in the red blood cells .
the lead in
erythrocytes has a half - life of 35 days and distributes into soft tissue or bone stores
( 3 ) .
bones are the major depository for lead in the body , with about 90% of the total body lead
burden existing in the skeleton .
the half - life of lead in bone is 20–30 yr . some
equilibration between bone and blood lead does occur . up to 70% of blood lead
may derive
from the bones , and during pregnancy and lactation more lead is mobilized from bone stores
( 11 ) .
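a short calculation puts the two half-lives quoted above side by side. it assumes simple first-order (exponential) elimination, which is an idealization of the real multi-compartment kinetics described here, and the bone value of 25 yr is just the midpoint of the quoted 20–30 yr range.

```python
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of an initial lead burden left after t_days, assuming simple
    first-order elimination (an idealization of multi-compartment kinetics)."""
    return 0.5 ** (t_days / half_life_days)

BLOOD_HALF_LIFE = 35.0            # days, as quoted in the text
BONE_HALF_LIFE = 25.0 * 365.0     # days (midpoint of the quoted 20-30 yr range)

for years in (1, 5, 25):
    t = years * 365.0
    print(f"after {years:2d} yr: blood compartment {fraction_remaining(t, BLOOD_HALF_LIFE):.1e} "
          f"remaining, bone compartment {fraction_remaining(t, BONE_HALF_LIFE):.2f} remaining")
```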
lead crosses the placenta , possibly by both passive diffusion and active transport , and
accumulates in the fetus ( 12 ) . it is suggested that
lead might induce abortions , prematurity , and also some minor anomalies , for example ,
hemangiomas , minor skin anomalies , hydrocele and undescended testicles ( 13 ) .
therefore , maternal lead burden is a serious health
problem for fetuses and neonates .
a considerable number of studies have been conducted to evaluate the influence of maternal
lead burden on fetal and postnatal growth , and many of them have demonstrated the
unfavorable effects of fetal lead exposure .
bellinger et al . investigated the relationship between prenatal low - level lead exposure and
fetal growth in a sample of 4,354 pregnancies in which the mean umbilical cord bll was 7.0
μg / dl , and found that infants with bll greater than or equal to 15 μg / dl had significantly higher risk of low birth weight ( less than 2,500 g ) than those with bll less than 5 μg / dl
( 14 ) .
one study collected maternal and umbilical cord blood specimens from 50 consecutive
mother - infant pairs from hospital delivery departments in three russian and three norwegian
communities , and measured the bll .
the corresponding maternal bll were 2.9 μg / dl in russians and 1.2 μg / dl in norwegians .
both levels are relatively low , and maternal and cord bll were
strongly correlated .
they found that maternal bll was a negative explanatory variable for
birth weight , with or without adjustment for gestational age ( 15 ) .
another study investigated the effect of maternal lead burden on growth of breast - fed
newborns ( 16 ) .
in lactating women , lead is
transferred from bone to the bloodstream and then to breast milk .
the authors measured lead
levels among mother - infant pairs in umbilical cord blood at birth and maternal and infant
venous blood at one month postpartum , and the maternal bone lead levels with a
cd k - x - ray fluorescence instrument .
they reported that the mean maternal and
infant bll at one month of age were 9.7 and 5.6 μg / dl , respectively , and the mean maternal bone lead levels were 10.1 and 15.2 μg of lead per one gram of bone mineral for the tibia
and patella , respectively .
they showed that infant bll were inversely associated with weight
gain , with an estimated decline of 15.1 g per 1 μg / dl of blood lead .
furthermore , weight
gain of exclusively breast - fed infants was shown to decrease significantly with increasing
levels of maternal bone lead .
the multivariate regression analysis predicted a 3.6 g
decrease in weight at one month of age per 1 μg of lead per gram bone mineral increase in
maternal patella lead levels ( 16 ) .
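because the two regression slopes quoted above are easy to misread, the small sketch below spells out the arithmetic: it simply applies the reported coefficients to the study's mean exposure levels. it is an interpretation aid only, not a re-analysis, and the original models adjusted for covariates.

```python
# Reported regression slopes from the breast-fed-infant study cited above (ref. 16).
WEIGHT_LOSS_PER_BLOOD_LEAD = 15.1   # g of one-month weight gain per 1 ug/dl infant BLL
WEIGHT_LOSS_PER_BONE_LEAD = 3.6     # g of weight at one month per 1 ug Pb per g bone mineral

def blood_lead_effect(infant_bll_ugdl: float) -> float:
    """Predicted reduction in one-month weight gain (g) from the blood-lead slope."""
    return WEIGHT_LOSS_PER_BLOOD_LEAD * infant_bll_ugdl

def bone_lead_effect(patella_lead_ug_per_g: float) -> float:
    """Predicted reduction in weight at one month (g) from the maternal patella slope."""
    return WEIGHT_LOSS_PER_BONE_LEAD * patella_lead_ug_per_g

# Example values: the study means quoted in the text.
print(f"infant BLL 5.6 ug/dl   -> about {blood_lead_effect(5.6):.0f} g less weight gain")
print(f"patella lead 15.2 ug/g -> about {bone_lead_effect(15.2):.0f} g lower weight")
```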
another study ,
however , could not confirm adverse effects of prenatal lead exposure on either neonatal size
or subsequent growth ( 17 ) .
there have been some clinical trials attempting to reduce the bll of lactating women .
hernandez - avila et al . conducted a randomized trial among lactating women in mexico city to
evaluate the effect of calcium supplementation ( 1.2 g of elemental calcium daily ) on the
decrease of bll .
they reported that calcium supplementation was effective at reducing lead
in lactating women who had high bone lead levels , with an estimated reduction in mean blood
lead of 16.4% ( 18 ) .
further studies are required to
assess the clinical effects of calcium supplementation on the bll of lactating women .
there have been many studies investigating the relationship between bll and the growth of
children .
one study examined about 2,700 children aged 7 yr and younger in the second
national health and nutrition examination survey ( nhanes ii ) in the united states , and found
an inverse correlation between bll in the range of 5 to 35 μg / dl and body height .
they
concluded that low - level lead exposure could impair the somatic growth of children ( 19 ) .
another study also found negative relationships between growth parameters and bll in greek children aged 6–9 yr , an increase in bll of 10 μg / dl being associated with a decrease of 0.86 cm in height , 0.33 cm in head circumference
and 0.40 cm in chest circumference ( 7 ) .
another study evaluated the relationship between somatic growth and lead exposure in italian adolescents aged 11–13 yr . the mean bll of the boys and the girls were 8.5 and 7.0 μg / dl , respectively .
significantly negative relationships were found
between bll and stature in 13-yr - old boys and 12-yr - old girls
. also , negative relationships
between bll and serum concentrations of gonadotropins ( lh , fsh ) were found , but only in
boys , with bll higher than 9.0 μg / dl .
the authors suggested that even for low lead exposure ,
this heavy metal might affect linear growth and gonadotropin secretion of adolescents ( 20 ) . in animal experiments ,
a dose - dependent decrease in
hypothalamic gonadotropin - releasing hormone and somatostatin was found in lead - treated
guinea pigs and their fetuses ( 21 ) .
recently wu et al . assessed measures of puberty in girls in relation to bll to determine
whether sexual maturation might be affected by current environmental lead exposure , using
data from the third national health and nutrition examination survey ( nhanes iii ) in the
united states .
they found a negative relationship between bll and attainment of menarche or
stage 2 pubic hair , which remained significant in logistic regression even after adjustment
for race , age , family size , residence in metropolitan area , poverty income ratio , and body
mass index .
they concluded that higher bll were significantly associated with delayed
attainment of menarche and pubic hair among u.s . girls .
selevan et al . analyzed the relations between bll and pubertal development among girls aged
8–18 yr , including three ethnic groups , non - hispanic white , non - hispanic african - american
and mexican - american , also using data from nhanes iii .
they reported that bll of 3 μg / dl
were associated with significant delays in breast and pubic hair development in
african - american and mexican - american girls , but not in white girls for reasons which were
not clear .
they suggested that environmental exposure to lead might delay growth and
pubertal development in girls , although confirmation was warranted in prospective studies
( 23 ) .
it has been suggested that children who are exposed to cigarette smoke have higher bll than
children who are not ( 24 , 25 ) .
we measured bll of japanese children and evaluated the effects of
passive smoking on bll , and found that passive smoking increased bll of preschool children .
the mean bll of preschool children who were exposed to cigarette smoke in their homes was 4.15 μg / dl , and that of those whose families never smoked was 3.06 μg / dl , a statistically significant difference .
we
also found that passive smoking did not increase the bll of school children . regarding
the
reasons for the difference in the effects of passive smoking on the two groups of children ,
we speculate that preschool children might spend more time with their parents and might have
more contact with cigarette smoke than school children , and additionally , young infants have
limited ability to excrete lead from the body because of immaturity of the renal function
( 26 ) .
another study investigated the bll of a total of 4,391 non - hispanic white , non - hispanic black , and mexican - american children of the united states aged 1 to 7 yr , using data from
nhanes iii .
they reported that the mean bll of the children who had smoking families was 4.36 μg / dl and that of the children who did not was 3.29 μg / dl ( 27 ) , very similar to the values reported in our study ( fig . 1 ) .
stromberg et al . also found a significant effect of parental smoking habits on the
bll of swedish children , an 18% increase on average ( 10 ) .
fig . 1 . blood lead levels ( mean ± standard deviation ) of children who are exposed to cigarette smoke in their homes ( ps(+) ) , and those who are not ( ps(−) ) , in japan ( 26 ) and the u.s.a . ( 27 ) .
the adverse effect of passive smoking on children 's health is a universal problem worldwide .
children should be protected from cigarette smoke for the purpose of avoiding the
risk of increased bll which might adversely affect their intellectual development and
physical growth . | lead is highly toxic to the human body and children are much more vulnerable to lead
toxicity than adults .
many studies have revealed that relatively low levels of blood lead
can adversely affect human health , especially childhood growth and development .
blood lead
levels ( bll ) of children and adults have been decreasing recently almost all over the
world , but a safety level for blood lead does not exist , and lead exposure is still a
serious health problem especially for fetuses and children .
maternal lead burden causes
fetal lead exposure and increases the risk of abortions , prematurity , low birth weight ,
and some minor anomalies .
infant bll are inversely associated with weight gain . a negative
relationship between somatic
growth and bll in children has been revealed .
it has been
suggested that lead exposure causes decrease of gonadotropin secretion of adolescents and
delay of pubertal development .
several studies have revealed that children who are exposed
to cigarette smoke have higher bll than children who are not .
children should be protected
from cigarette smoke for the purpose of avoiding the risk of increased bll which might
adversely affect their intellectual development and physical growth . |
big detectors at high energy colliders require the detection of electrons ( @xmath2 ) and muons ( @xmath3 ) with high efficiency , high purity and high precision for the reconstruction of the decays @xmath4 , @xmath5 , @xmath6 , @xmath7 , for the identification of @xmath8 lepton decays to @xmath2 and @xmath3 , for searches for lepton number violation , for the positive tagging of events with missing neutrinos , and for the isolation of event samples with supposed decays of supersymmetric or other massive states decaying partly to leptons .
the muon system of the 4th concept , fig . [ fig:4th ] , achieves almost absolute muon identification for isolated tracks .
we achieve excellent @xmath11 separation using three independent measurements : ( a ) energy balance from the tracker through the calorimeter into the muon spectrometer , ( b ) a unique separation of the radiative component from the ionization component in the dual - readout calorimeter ; and , ( c ) a measurement of the neutron content in the dual - readout fiber calorimeter .
the central tracking system has a resolution of about @xmath12 ( @xmath13 ( gev / c)@xmath14 ) , and the muon spectrometer in the annulus between the solenoids , with a b = 1.5 t field , has a resolution of about @xmath15 ( @xmath16 ( gev / c)@xmath14 ) .
a muon of momentum @xmath17 which radiates energy @xmath18 in the volume of the calorimeter , measured with a resolution of @xmath19 , will have a momentum - energy balance constraint of @xmath20 / p , which yields a rejection of about 30 for a 100 gev muon radiating 20 gev . a non - radiating muon penetrating the mass of a fiber dual - readout calorimeter will leave a signal in the scintillating fibers equivalent to the @xmath21 of the muon , which in dream is about 1.1 gev .
there will be no cherenkov signal since the cherenkov angle is larger than the capture cone angle of the fiber .
a radiating muon will add equal signals to both the scintillating and cherenkov fibers and , therefore , the difference of the scintillating ( @xmath22 ) and cherenkov ( @xmath23 ) signals is @xmath24 , independent of the degree of radiation .
the distributions of @xmath25 vs. @xmath26 for 20 gev ( 20 gev left in the h4 beam at 20 gev / c , so these data are from 40 gev / c @xmath27 ) and 200 gev @xmath28 and @xmath27 are shown in fig . [ fig : mu - pi ] , in which , for an isolated track , the @xmath10 rejection against @xmath3 is about @xmath0 at 20 gev and @xmath29 at 200 gev .
the distribution of @xmath25 has a mean that is very nearly 1.1 gev , as expected , and the radiative events are evident at larger @xmath26 .
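a toy version of the s - c separation just described may help : for a muon the scintillation signal is the roughly 1.1 gev ionization deposit plus any radiated energy , while the cherenkov signal contains essentially only the radiated energy , so s - c stays near 1.1 gev ; for a pion both signals sample the full hadronic shower and s - c is large and broad . the smearing values and the pion response model below are invented for illustration and are not the dream detector response .

```python
# Toy illustration of the dual-readout S - C muon/pion separation.
# The 1.1 GeV muon dE/dx is taken from the text; resolutions and the
# pion response model are invented for illustration only.
import random

MU_DEDX = 1.1  # GeV, scintillation-only ionization deposit of a muon (from the text)

def muon_event(p_gev, rad_fraction=0.2, smear=0.1):
    """S and C signals for a muon radiating a random fraction of its energy."""
    e_rad = random.uniform(0.0, rad_fraction) * p_gev
    s = random.gauss(MU_DEDX + e_rad, smear)   # ionization + radiated energy
    c = random.gauss(e_rad, smear)             # Cherenkov sees only the radiation
    return s, c

def pion_event(p_gev, res=0.3):
    """S and C signals for a pion: both sample the full hadronic shower."""
    s = random.gauss(p_gev, res * p_gev ** 0.5)
    c = random.gauss(0.7 * p_gev, res * p_gev ** 0.5)  # crude suppression of C for hadrons
    return s, c

def rejection(p_gev, cut=2.0, n=100_000):
    """Pion rejection and muon efficiency for an S - C < cut (GeV) muon selection."""
    mu_pass = pi_pass = 0
    for _ in range(n):
        s, c = muon_event(p_gev)
        mu_pass += (s - c) < cut
        s, c = pion_event(p_gev)
        pi_pass += (s - c) < cut
    return mu_pass / n, n / max(pi_pass, 1)

if __name__ == "__main__":
    for p in (20, 200):
        eff, rej = rejection(p)
        print(f"{p} GeV: toy muon efficiency ~ {eff:.2f}, toy pion rejection ~ {rej:.0f}")
```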
the dream collaboration has succeeded in the measurement of neutron content in hadronic showers event - by - event in the dream module by summing the scintillating channels of the module in three radial rings and digitizing the pmt output at 1.25 ghz .
these data , now being analyzed , show clearly the long - time neutron component in hadron showers that is absent in electromagnetic showers ( and also absent in the cherenkov fibers of the dream module for both @xmath2 and @xmath30 ) .
we expect to estimate a neutron fraction , @xmath31 , for each event in the same way we estimate @xmath32 for each event , and to be able to reject localized hadronic activity in the calorimeter with factors of 10 - 50 .
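a schematic of how a neutron fraction can be extracted from the digitized time structure : the neutron component delivers its scintillation light over tens of nanoseconds , while the prompt component arrives within a few nanoseconds , so the fraction of the integrated waveform beyond a late - time cut serves as an estimator of @xmath31 . the pulse shapes and time constants in the sketch are placeholders , not the measured dream waveforms .

```python
# Schematic estimate of the neutron fraction from a digitized scintillation
# waveform; pulse shapes and time constants are placeholders.
import math

DT_NS = 0.8  # sampling interval corresponding to 1.25 GHz digitization

def toy_waveform(n_samples=256, prompt=1.0, neutron=0.25,
                 tau_prompt=5.0, tau_neutron=25.0):
    """Prompt + slow (neutron-like) exponential components, arbitrary units."""
    t = [i * DT_NS for i in range(n_samples)]
    wf = [prompt / tau_prompt * math.exp(-ti / tau_prompt) +
          neutron / tau_neutron * math.exp(-ti / tau_neutron) for ti in t]
    return t, wf

def neutron_fraction(t, wf, t_cut_ns=20.0):
    """Fraction of the integrated signal arriving after t_cut_ns."""
    total = sum(wf) * DT_NS
    late = sum(w for ti, w in zip(t, wf) if ti > t_cut_ns) * DT_NS
    return late / total

if __name__ == "__main__":
    t, wf = toy_waveform()
    print(f"estimated late-time (neutron-like) fraction: {neutron_fraction(t, wf):.2f}")
```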
any simple product of these three rejection factors , or any estimate of the muon efficiency and purity , gives an optimistic result that will clearly be limited by tracking efficiencies , overlapping shower debris in the calorimeter , or track confusion either in the main tracker or the muon spectrometer before these beam test numbers are reached .
nevertheless , we expect excellent muon identification . | muons can be identified with high efficiency and purity and reconstructed with high precision in a detector with a dual - readout calorimeter and a dual solenoid to return the flux without iron .
we show cern test beam data for the calorimeter and calculations for the magnetic fields and the track reconstruction . for isolated tracks ,
the rejection of pions against muons ranges from @xmath0 at 20 gev / c to @xmath1 at 300 gev / c . |
The political battle over whether to raise the U.S. debt ceiling will continue to rage in the halls of Congress as the country today hit its debt limit of $14.294 trillion and the stakes get higher.
The Republican leadership is facing increasing pressure from freshman members of Congress -- many of whom campaigned on the platform of cutting the deficit -- and the conservative Tea Party movement, even as they work with the Democratic leadership to avert a default on the debt that some economists warn could tank the U.S. economy.
Treasury Secretary Timothy Geithner has given a deadline of Aug. 2, the rough date until which his department's emergency measures can stave off a crisis. The debt ceiling would need to be raised by roughly $2 trillion.
While the markets are adopting a wait-and-see approach, given that the final deadline for default is more than two months away, many people are skeptical about whether the House GOP can garner the support it needs from its freshman class.
Many of these members, buoyed by the Tea Party, argue they will not agree to raising the debt limit if their demands of major spending reductions are not met. Members of the Tea Party argue that claims about the repercussions of not raising the limit are over-hyped and simply "fear mongering."
"By raising it gives these guys cover to not make the tough decisions that need to be made," said Amy Kremer, chairwoman of the Tea Party Express. "What's coming out of the White House is a bunch of fear tactics that we're going to default on our debt and all this other stuff and its completely not true. There's enough revenue coming in from tax payments to pay our debt, pay Medicaid, pay Social Security, pay Medicare and still have money left over.
"Our message has not only been no, but hell no" to raising the debt limit, Kremer told ABC News.
The Tea Party message has resonated widely through Congress, especially because a large chunk of the freshman class rode to victory on the back of Tea Party support and any political moves contrary to their wishes could be suicidal.
But even those who say they realize the negative consequences of not raising the debt ceiling are hesitant about voting to do so unless significant spending cuts are made.
"America's caught between two unsatisfactory outcomes and we're wrestling to find a third," Rep. Mo Brooks, R-Ala., a freshman congressman, said. "The two that we are caught in between are unsustainable deficits ... [and] the debt ceiling issue, [which] if not properly addressed, will have definite and immediate adverse economic consequences. What I believe we have to do is address the debt ceiling in a way that satisfactorily addresses the unsustainable deficits."
Brooks said there hasn't been substantive progress made on budget and deficit negotiations, and he has little faith that the Senate -- including some Republican members -- and the White House will "understand the magnitude of the problem posed by unsustainable deficits."
"It's a damned if you do, damned if you don't problem," he told ABC News. "What difference does it make if you're given a choice between two poisons. Either way you're dead."
Business leaders have raised alarms on the possibly dire consequences of not raising the ceiling. More than 60 trade associations sent a joint letter to congressional leaders last week to urge them to raise the limit.
In a House hearing Thursday, executives of four major corporations also sounded warning bells.
"I think it would be devastating to the world economy, not just to the U.S. economy and not just to UTC [United Technologies Corp.] if Congress failed to raise the debt limit," Greg Hayes, chief financial officer of United Technologies, said.
"The full faith and credit of the U.S. government is the basis upon which the entire world financial system revolves around. If we think that the problems back in 2008, with the Lehman crisis, were devastating, a default by the U.S. government would have repercussions beyond anything we saw in 2008 and 2009."
Business lobby groups and local business leaders have fanned out across Capitol Hill to hold listening sessions with Congress members. ||||| Usually natural allies, the tea party and the business lobby are at odds over if and how to raise the national debt limit.
With Monday’s bond sale, the US Treasury maxed out on the current national debt ceiling limit of $14.294 trillion, stepping up pressure on Congress to raise the debt limit by Aug. 2, when Treasury officials estimate that funds will run out.
The prospect of a government default is sharpening lines of demarcation between Washington's business establishment, which wants Congress to simply raise the debt limit, and many tea party-backed lawmakers, who don't.
It's an uncomfortable schism for the two sides, and it has the powerful US Chamber of Commerce and other big business interests working overtime to try to come to terms with those lawmakers – led by the GOP freshmen and other tea party politicians – disinclined to raise the debt limit absent strong curbs on future spending.
After all, the two are usually allies on legislative matters. The tea party movement calls for lower taxes and less regulation – interests shared by corporate America. Tea party candidates called for defunding President Obama's health-care reform on grounds that it creates uncertainty for businesses and stifles job creation. Tea party supporters are also more likely to back private enterprise – and trust it to benefit the nation – than are most Americans, polls show.
But a rift over the debt ceiling sets up a deeper clash between these two primarily Republican camps. The tea party is all about chopping the size and scope of government, something that doesn't sit all that well with businesses that depend on government spending and contracts – especially if it means taking the country to the brink of a default over the national debt.
The White House hopes, of course, to turn this situation to its advantage. Mr. Obama and Treasury Secretary Timothy Geithner are reaching out to top business leaders to help them make their case for a prompt congressional vote to raise the debt limit, before international creditors get anxious. Besides the US Chamber of Commerce, the National Association of Manufacturers and the Financial Services Forum have been talking since January with the House freshman class to urge compromise.
"Raising the debt limit has traditionally always been a partisan-vote exercise – whatever party has the White House, it's their burden to find the votes," says R. Bruce Josten, the Chamber's executive vice president for government affairs. "But this time, I will bet you it will require 50-50 [effort from Democrats and Republicans] – or near 50-50 – to get it done. We are in a whole new realm of negotiation."
Lobbying lawmakers
On Friday, the Chamber released a letter to Congress from a coalition of trade groups urging a “yes” vote on raising the debt ceiling no later than Aug. 2. “Failure to raise the debt by that time would create uncertainty and fear, and threaten the credit rating of the United States,” wrote Mr. Josten.
He added that Congress must also address the growing debt and the nation’s finances: “It is imperative that any path to deficit reduction focus on growing the economy and the tax base and cutting spending, especially mandatory spending, rather than shortsighted tax increases,” he added.
Grass-roots tea party groups, meanwhile, are pushing to hold conservative lawmakers to their pledges to vote against raising the ceiling – or to do so only if Congress enacts new spending limits.
"Most of the grass-roots movement is becoming impatient with the big business community's insistence that the federal government must remain a trough in which we can sop up our goodies," says former House majority leader Richard Armey, a Texas Republican who helped launch the tea party movement as chairman of FreedomWorks.
Republicans oppose raising the debt limit by a 70 percent to 8 percent margin and want their members of Congress to vote against the measure, according to a Gallup poll released on Friday. Democrats favor raising the ceiling by a 33 percent to 26 percent margin. Independents oppose raising the debt ceiling by a 47 percent to 19 percent margin, the poll concludes.
Another poll, released Monday and focusing on the impact of default, finds that most Americans see the prospect of a default on the national debt as “disastrous,” but Republicans and tea-party supporters less so. Only 49 percent of Republicans and 43 percent of strong tea-party supporters said that default would be “disastrous,” while 38 percent of Republicans and 43 percent of tea-party supporters said that consequences would not be serious, according to the Politico-George Washington University battleground poll.
In contrast with big business groups, the National Federation of Independent Business (NFIB), representing 350,000 small businesses, has opposed a big federal stimulus and is focused on cutting deficits and reining in the debt.
"Our members are very concerned about the deficit and the debt. They absolutely want Congress to figure out these issues," says chief executive officer and president Dan Danner, noting that 25 NFIB members were elected to Congress in 2010. "Certainly, small-business owners on Main Street, with 10 or 15 employees, are a lot different than General Motors or General Electric. The stimulus has not worked for small business."
Counting the votes
Getting an accurate count of where lawmakers stand on the debt ceiling is fast becoming a cottage industry in Washington. The US Chamber of Commerce cites estimates that no more than 27 GOP House freshmen are committed to voting against raising the debt limit, period. Budget analyst Stanley Collender of Qorvis Communications writes that "there could be at least 80 GOP votes against raising it, no matter what the impact might be on interest rates, stock prices, and the economy."
Conservative groups aim to hold Republican lawmakers to their campaign pledges to stand firm on the debt ceiling. "The debt ceiling ... is a crisis that Republicans should not waste," says former Rep. Chris Chocola, an Indiana Republican who is now president of the Club for Growth, best known for financing primary challengers to Republicans viewed as not conservative enough. "They need to change the fundamental rules – and it needs to be more than spending cuts," he says.
The most efficient way to rein in government spending, he adds, is a balanced budget amendment to the Constitution, along the lines proposed by freshman Sen. Mike Lee (R) of Utah, a member of the Senate Tea Party Caucus. "This will be the most important vote they take in this Congress," says Mr. Chocola.
In the Senate, all 47 Republicans are committed to a balanced budget amendment, along with some Democrats – but probably not enough to get the required 66 votes.
"The Chamber of Commerce has a legitimate point of view.... But we should not be so eager to jump on the bandwagon that bad things will happen if we don't raise the debt limit without asking what bad things will happen if we do raise it," says Senator Lee. "If we raise the debt ceiling, we have to ask what we are doing to make sure we do not face this problem again." | – A rift between big business and the Tea Party movement is growing as the very real prospect of America defaulting on its debt looms, the Christian Science Monitor finds. The federal government hit the $14.3 trillion debt ceiling yesterday, and while Tea Party-backed lawmakers are sticking to their guns on cutting spending, their former allies among the business establishment want the government to simply raise the ceiling before funds run out. The US Chamber of Commerce—representing many powerful business interests that rely on government contracts—is urging Republican lawmakers to seek compromise, a position the White House hopes to exploit as it searches for the votes to raise the debt ceiling. But Tea Partiers are digging in for major spending cuts. "Our message has not only been no, but hell no" to raising the debt limit," Tea Party Express chairwoman Amy Kremer tells ABC. |
after stainless steel implants of the glenoid fossa were attempted for the correction of ankylosis in the 1960s , total temporomandibular joint ( tmj ) replacement -- which minimizes foreign body reaction and consists of highly biocompatible materials such as cr - co - mo alloy , titanium , and ultra - high - molecular - weight polyethylene -- began to be applied1 - 3 .
us food and drug administration - approved products are now used worldwide , and their long - term follow - up data are constantly reported4,5 .
nonetheless , concerns about early failure and the long - term stability of tmj prostheses , compared with knee or hip joint prostheses , remain after the multiple complications that followed the installation of proplast - teflon ( vitek inc .
, houston , tx , usa ) tmj implants in the united states in the 1980s6 .
meanwhile , the use of stock prosthesis ( biomet microfixation , jacksonville , fl , usa ) was approved in korea in 2012 .
a report of two cases of total tmj replacement using stock prosthesis for patients who had undergone treatment of osteochondroma and ankylosis of tmj is presented below .
a 50-year - old female patient who had no specific medical history visited the hospital with facial asymmetry ( chin deviation to the right side ) , limitation of mouth opening , and jaw pain that had persisted for two years .
the clinical examination showed limitation of mouth opening ( approximately 20 mm ) along with tmj pain on the left side and deviation of chin top to the right side .
the radiography and computed tomography showed signs of considerable osseous proliferation extended to the cranial base and multiple radiopaque lesions suspected as phlebolith.(fig .
she underwent condylectomy , and her chin top -- which deviated to the right side -- was also reduced to the midsagittal plane .
active physical therapy was conducted , and clinically favorable mouth opening ( approximately 32 mm ) was finally achieved in four months.(fig . 2 )
two years after surgery , however , she complained of limitation of mouth opening and discomfort during mastication .
the x - ray revealed trauma from occlusion ( tfo ) in the maxillary molar area on the ipsilateral side as well as progressive osteoarthritis on the previous resection area .
stock tmj prosthesis consists of fossa and condylar components , each of which requiring preauricular and retromandibular approaches .
after condylectomy was performed to secure spaces for prosthesis via the preauricular approach , retromandibular incision was conducted .
bone trimming of uneven surface on articular eminence was followed by the installation of fossa component , and subsequent positioning of the condylar component was accomplished .
after the optimal occlusion of the patient was maintained , fixation of the mandibular component was done with titanium screw7.(fig . 3 )
since the tmj replacement , the patient had not suffered from remarkable postoperative complications , including ipsilateral facial nerve weakness , paresthesia , pain , and dysfunction .
she began light mouth opening exercise two weeks after the surgery and subsequently performed active exercise .
finally , she was able to maintain approximately 30 mm mouth opening , favorable mastication capability , and alleviation of tmj pain from two months postoperatively.(table 1 )
a 21-year - old male patient -- who had undergone closed reduction for the fracture of bilateral mandibular condyle caused by a traffic accident five years ago -- visited the hospital with limitation of mouth opening and pain on the left tmj .
he also had no specific medical history and showed limitation of mouth opening ( approximately 16 mm ) accompanying tmj pain and crepitus on the left side along with troubled lateral excursion .
the computed tomography image revealed signs of fibrous ankylosis and heterotopic bone formation on the postero - lateral surface of the left condyle head and temporal bone.(figs . 4 , 5 ) the plan of combined gap arthroplasty and total tmj replacement was finally established .
gap arthroplasty was performed , including condylectomy and excision of the 40 × 30 × 30 mm lesion of fibrous and bony ankylosis in the left tmj.(fig . 6a )
after a normal range of jaw movement ( approximately 30 mm ) was secured , retromandibular incision was done .
bone trimming of the articular eminence was consequently conducted , followed by the installation of the prosthetic components7.(fig . 6b - d )
postoperatively , the patient began light mouth opening exercise followed by subsequent active exercise , and achieved approximately 40 mm mouth opening in 2 months ( figs . 7 , 8) and substantial alleviation of pain on the ipsilateral joint , and has maintained up to 40 mm mouth opening .
condylar reconstruction plates , condylar prostheses , costochondral graft , and distraction osteogenesis have been executed as reconstruction procedures for mandibular defects1 .
in marked contrast to autograft , the use of alloplastic material has neither morbidity of donor site nor requirement of prolonged maxillo - mandibular fixation and substantially reduces the time of operation and hospitalization .
other advantages are the high stability of occlusion , relatively higher predictability , and extensive reconstruction of joint defects .
indications of alloplastic total tmj replacement include ankylosis of tmj with severe anatomic alteration , failure of reconstruction with autogenous graft , severe inflammatory disease such as rheumatoid arthritis , and idiopathic condylar resorption8 - 11 . in case 2 ,
the 21-year - old patient opted for alloplastic reconstruction considering the higher risk of re - ankylosis , morbidity of donor site and severe anatomic alteration of tmj rather than combined gap arthroplasty and autogenous reconstruction . for decades
, literature has reported constant pain on tmj or penetration of mandibular prosthesis into the cranial base in rare cases wherein only the condylar head is reconstructed with prosthesis .
thus , total joint replacement has been widely performed in the usa and europe of late .
three of the major products are as follows : tmj implants ( tmj implants inc . , golden , co , usa ) , tmj concepts ( tmj concepts inc . , ventura , ca , usa ) , and biomet microfixation .
tmj concepts is based on computer aided design / computer aided manufacturing ( cad / cam ) technology , and only the customized type is available , whereas tmj implants and biomet microfixation are both available in the customized type and stock type12 .
stock prosthesis has obtained approval from the ministry of food and drug safety in korea in july 2012 .
relatively more literature with long - term follow - up surveys has been published on customized prostheses than on stock prostheses .
mercuri et al.4 reported statistically significant results with regard to the decrease in pain score as well as the improvement of mandibular function and diet consistency during an average of 11.4 years ' follow - up survey conducted among 61 patients ( 41 bilateral replacements and 20 unilateral replacements ) after replacement using customized prosthesis . in the case of extensive congenital bone defect ,
customized prosthesis is likely to be more desirable than stock prosthesis and is available with an efficient design for posterior stop after orthognathic surgery13 .
meanwhile , some reports suggested that the titanium backing of the fossa component , osseointegrated with the temporal bone , may be unfavorable for re - operation14 . in terms of stock prosthesis ,
leandro et al.5 , who reported a 10 years ' follow - up survey , suggested that there had been remarkable improvement on the jaw function , pronunciation , diet , and pain according to an average of 3.5 years ' follow - up survey ( minimum of 1 year , maximum of 10 years ) among 300 patients ( 201 unilateral replacements and 99 bilateral replacements ) who underwent total tmj replacement using stock prosthesis . on the other hand ,
westermark15 reported more than 34 mm mouth opening , improved masticatory activity , and reduced pain as a result of a 2 to 8 years ' follow - up survey among 12 patients ( 5 unilateral replacements and 7 bilateral replacements ) who underwent replacement using stock prosthesis10 .
stock prosthesis minimizes the need for a complex process of three - dimensional model production and offers cheaper cost of operation than customized prosthesis .
nonetheless , concerns remain about the possibility of a fine gap between the host bone and the prosthesis , removal of intact bone for fitting the components , and perforation of surrounding anatomical structures , due to the lack of support to prevent posterior displacement of the fossa component13 .
postoperative complication and short - term failure of alloplastic total tmj replacement may be related to infection , loosening of screw or prosthesis , fracture of prosthesis , metal allergy , post - surgical neuroma , and re - ankylosis of surrounding bone . in the long - term aspects ,
the lifespan of tmj prosthesis and large number of progress data have not been established to date .
integration of clinical follow - up data was also achieved only in the last two decades12 .
nevertheless , alloplastic total tmj replacement is considered a reliable procedure in terms of safety and durability of material , based on several results of recent studies , e.g. , absence of giant cell reaction found in proplast - teflon implant , advance verification of the same material in existing literatures on orthopedic joint replacement , and favorable results of follow - up survey in previous literature4 - 6,11,15,16 .
in these cases , the patients underwent unilateral tmj replacement ; no surgical modality was conducted on contralateral tmj . in a similar case report , guarda - nardini et al.17 introduced post - surgical protocol including the combination of passive and active exercises combined with the injection of hyaluronate on the contralateral non - operated tmj .
meanwhile , contralateral excursion and preservation of ipsilateral excursion had a limitation due to the detachment of the lateral pterygoid muscle during surgery.(table 1 ) according to voiner et al.18 , unilateral tmj replacement showed a wider range of mandibular motion than bilateral tmj replacement , and the difference was statistically significant .
looking closely into the treatment progress in case 1 , at the 1-year postoperative follow - up the outcome of tmj replacement was favorable with regard to pain , jaw function , and quality of life , but the range of postoperative mouth opening was not sufficiently improved in contrast to the preoperative value .
such tendency is attributed to the lack of efforts to maintain periodic active mouth opening exercise ; the chronic condition of masticatory muscle and contralateral tmj contributed to tmj dysfunction .
the report of these two cases is in accordance with the abovementioned indications suggested by literature , confirming its suitability and short - term stability in terms of improvement of the jaw function , pain , diet , and , ultimately , quality of life .
although there are some mechanical and morphological limitations compared to the customized tmj prosthetic system , stock tmj prosthetic systems provide cost- and time - effective means of joint reconstruction , and successful outcomes can be expected with skilled operative techniques and proper case selections . | alloplastic total replacement of the temporomandibular joint ( tmj ) was developed in recent decades . in some conditions ,
previous studies suggested the rationale behind alloplastic tmj replacement rather than reconstruction with autogenous grafts .
currently , three prosthetic products are available and approved by the us food and drug administration . among these products , customized prostheses are manufactured via a computer aided design / computer aided manufacturing ( cad / cam ) system for a customized design , whereas stock - type prostheses are provided in various sizes and shapes . in this report , two patients ( a 50-year - old female who had undergone condylectomy for the treatment of osteochondroma extending to the cranial base on the left condyle , and a 21-year - old male diagnosed with left temporomandibular ankylosis ) were treated with alloplastic total tmj replacement using a stock prosthesis .
the follow - up results of a favorable one - year , short - term therapeutic outcome were obtained for the alloplastic total tmj replacement using a stock - type prosthesis . |
approximately , 1.2% of all meningiomas of the central nervous system affect the spine , being relatively rare compared to those in the intracranial compartment .
it is important to emphasize that to reach the definitive diagnosis of extra - axial soft tissue lesions and properly manage the patient , histopathological examination , and frozen section biopsy are required .
an 18-year - old adult male presented at the neurosurgery outpatient department with chief complaints of gradually progressing weakness and diminished sensation in both the lower limbs of 1 year duration .
there was no history of fever , trauma , or any chronic illness . on physical examination , spastic paraplegia and loss of all sensory modalities below the d7 dermatome were found . on gadolinium enhanced magnetic resonance imaging of the dorsal spine ,
an extradural spinal lesion extending from the midbody of d7 to the midbody of d9 vertebra [ figure 1 ] , hypointense on t1 and t2 [ figure 2 ] with homogeneous contrast enhancement [ figure 3 ] , was seen posterior and lateral to the spinal cord , compressing the dura anteriorly and extending into the left d8 - d9 neural foramen , with features suggestive of either neurofibroma or meningioma .
the patient was planned for elective surgery and underwent d7d9 laminectomy and total excision of the lesion .
intraoperatively , it was a vascular lesion which was adherent to the dura mater [ figure 4 ] .
the patient improved in the postoperative period and regained grade 5/5 power in both lower limbs , sensations , and bowel and bladder function improved at 6 months follow - up .
figure 1 : mri of the dorsal spine showing an extradural spinal lesion extending from the midbody of d7 to the midbody of d9 vertebra ( plain and contrast ) .
figure 2 : mri of the dorsal spine showing an extradural spinal lesion hypointense on t1 and t2 images .
figure 3 : mri of the dorsal spine showing an extradural spinal lesion with homogeneous contrast enhancement .
figure 4 : intraoperative photograph showing a vascular lesion adherent to the dura mater .
figure 5 : ( a ) histopathological examination showing sheets of meningothelial cells with a focally whorling pattern ( ×40 ) . ( b ) magnified view of the tumour showing syncytial sheets of meningothelial cells , with psammoma bodies , on histopathological examination ( ×200 ) .
spinal meningiomas comprise approximately 1.2% of all the meningiomas and 25% of all the spinal cord tumors .
these tumors show female predominance with a male : female ratio of 1:9 , with no sex predilection in children . approximately , in 10% of cases , an extradural component is seen but an exclusively extradural meningioma is quite uncommon .
these tumors are attributed to the presence of ectopic or separated arachnoid tissue around the periradicular nerve root sleeve , where the spinal leptomeninges merge directly into the dura mater .
it is pertinent to be aware of the fact that totally extradural spinal meningiomas , especially the en plaque variety may mimic metastatic disease .
this can be ruled out by an intraoperative histology , i.e. , frozen section histopathology , which is a must for optimal surgical decision - making . in this case ,
intraoperative frozen section enabled us to correctly identify the pathology and perform the near - total resection .
after the intraoperative diagnosis of meningioma is confirmed for an extradural spinal lesion , gross total resection of the tumor including extensions into the bone or the paraspinal space should be conducted .
this approach is likely to give the best results since the prognosis of this tumor depends on the extent of resection .
intraoperatively , we were able to strip off the tumor from the spinal dura , without excising the dura to expose the shining white dura underneath [ figure 6 ] , as these extradural spinal meningiomas arise from the dural root sleeve and not from the external surface of the spinal dura .
most meningiomas are benign , well - circumscribed , slow growing tumors , and behave mostly according to the pathological ( who ) grading and usually follow an uneventful clinical course . however , the who grade ii ( atypical ) and grade iii ( anaplastic ) tumors can behave aggressively clinically and histologically .
bony involvement and paraspinal extent are responsible for the worse prognoses due to difficult removal .
figure 6 : intraoperative photograph showing shining - white dura after stripping off the tumor .
to conclude , meningiomas should be included in the differential diagnosis of extradural intraspinal masses .
although benign , these tumors require long - term clinical and radiological follow - up to diagnose recurrence .
| meningiomas are benign in nature and arise from the arachnoid cells .
they are mostly situated in the intracranial compartment , whereas spinal meningiomas are rare .
approximately , in 10% of cases , an extradural component is seen but an exclusively extradural meningioma is quite uncommon . however , who grade ii ( atypical ) and grade iii ( anaplastic ) tumors can behave aggressively .
we report a case of purely extradural psammomatous meningioma affecting the dorsal spine in an adult male . although uncommon , meningiomas should be included in the differential diagnosis of extradural intraspinal masses .
by considering non equilibrium systems with a long - term stationary state that possess a spatio - temporally fluctuating intensive quantity , more general statistics can be formulated , called superstatistics @xcite .
selecting the temperature as the fluctuating quantity among the various available intensive quantities , a formalism was developed in @xcite to deduce the entropies associated with the boltzmann factors @xmath2 arising from their corresponding assumed @xmath3 distributions . following this procedure , the boltzmann - gibbs entropy and the tsallis entropy @xmath4 , corresponding to the gamma distribution @xmath5 and depending on a constant parameter @xmath6 , were obtained . for the log - normal , @xmath7-distribution and other distributions
it is not possible to get a closed analytic expression for their associated entropies , and the calculations were performed numerically utilizing the corresponding @xmath2 in each case .
all these @xmath3 distributions , and the boltzmann factors @xmath8 obtained from them , depend on a constant parameter @xmath6 ( the @xmath7-distribution actually depends on a second constant parameter as well ) .
consequently the associated entropies depend on @xmath6 .
an extensive discussion exists in the literature analyzing the possible viability of these kinds of models to explain several physical phenomena @xcite . in previous works
@xcite we proposed a generalized gamma distribution depending on a parameter @xmath9 and calculated its associated boltzmann factor .
we were able to find an entropy that depends only on this parameter @xmath9 . by means of maximizing the entropy ,
@xmath9 was identified with the probability distribution .
furthermore , by considering the corresponding generalization of the von neumann entropy in @xcite , it was shown that this same modified von neumann entropy can be found by means of a generalized replica trick @xcite . since the fundamental results of boltzmann @xcite , obtained in the frame of dilute gases
, it has been known that the @xmath1-theorem is one of the cornerstones of statistical mechanics and thermodynamics . considering a system of @xmath10 hard spheres , all of the same size and with no interaction among them other than collisions , he showed that the function @xmath11 , with @xmath12 denoting the ensemble average , satisfies @xmath13 , which , in the frame of a local or global equilibrium , encodes the first microscopic basis for the second law of thermodynamics . in general
, the function @xmath1 dictates the evolution of an arbitrary initial state of a gas into local equilibrium , with the subsequent arrival at thermodynamic equilibrium .
several extensions to the h - theorem are known .
a quantum version of the theorem was given by pauli in the early 1920s @xcite , and the first special relativistic version was presented by marrot @xcite , with later modifications within the special relativistic frame introduced by several authors @xcite .
more recently , efforts have been made to develop an h - theorem that takes into account other characteristics , like frictional dissipation @xcite , leading to a modification of the classical non - increasing behavior of the @xmath1 function , or a non - extensive quantum version @xcite imposing a restrictive interval for the @xmath6 parameter .
in this work we follow the route of starting with a generalized @xmath1 function which satisfies the @xmath1 theorem ; from it , an entropy as a function of volume and temperature is obtained . using this entropy
, broad thermodynamic information can be obtained . in the spirit of the original work of boltzmann ,
an ideal gas is considered for the thermodynamic analysis .
some thermodynamic response functions are calculated , presenting deviations with respect to the conventional extensive quantities . using the thermodynamic response functions and some approximations , we show that a modified , non - trivial equation of state can be obtained . moreover ,
a universal correction function emerges from this analysis for all the thermodynamic quantities .
we will first , in section ii , propose @xmath0 distributions that do not depend on an arbitrary constant parameter , but instead on a parameter @xmath9 that can be identified with the probability associated with the microscopic configuration of the system @xcite .
we will calculate the associated boltzmann factors .
it will be shown that for small variance of the fluctuations a universal behavior is exhibited by these different statistics .
it should be noted that by replacing @xmath9 with @xmath14 in these @xmath0 distributions , another family of boltzmann factors arises with the same correction terms but with alternating signs in the correction terms .
we will not consider here these similar cases .
in section iii , a relevant result is obtained ; in particular for the @xmath1 associated with the gamma @xmath5 distribution we will show a corresponding generalized @xmath1-theorem . in section
iv a calculation of the modified entropy , arising from the @xmath1-function , as a function of temperature and volume for an ideal gas is given .
thermodynamic response functions like heat capacity , and ratio of isothermal compressibility and thermal expansion coefficient are calculated and relative deviations of the usual behavior are discussed . finally , some simulations results are given redifining the distribution probability using the generalized statistics .
for the square - well and lennard - jones fluids , internal energies and heat capacities are given using both the standard boltzmann - gibbs statistics and generalized probability of this work .
section v is devoted to present our conclusions .
we begin by assuming a gamma ( or @xmath15 ) distributed inverse temperature @xmath16 depending on @xmath9 , a parameter to be identified with the probability associated with the microscopic configuration of the system by means of maximizing the associated entropy . as the boltzmann - factor
is given by @xmath17 , we may write this @xmath9 - dependent gamma distribution as @xmath18 where @xmath19 is the average inverse temperature .
integration over @xmath16 yields the generalized boltzmann factor @xmath20 as shown in @xcite , this kind of expression can be expanded for small @xmath21 to get @xmath22 . \label{4 } we now follow the same procedure for the log - normal distribution , which can be written in terms of @xmath9 as f ( \beta ) = \frac{1}{\beta \left [ 2 \pi \ln ( p_l + 1 ) \right ]^{1/2 } } \exp \left \ { - \frac { \left [ \ln \frac{\beta ( p_l+1)^{1/2}}{\beta_0 } \right ]^2}{2 \ln ( p_l+1 ) } \right \ } , \label{5 } the generalized boltzmann factor can be obtained to leading order , for small variance of the inverse temperature fluctuations , @xmath24 . \label{6 }
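the closed expressions referred to above are hidden behind the @xmath placeholders , so the sketch below only illustrates the generic superstatistics construction : a generalized boltzmann factor is obtained by numerically integrating b(e) = ∫ f(β) e^{-βe} dβ for a gamma - distributed and a log - normal - distributed inverse temperature with the same mean β0 and the same small relative variance . the mean / relative - variance parameterization is a generic assumption , not the p_l - dependent distributions of the paper ; the point it illustrates is that for small variance the different f(β) give nearly the same factor , as stated in the text .

```python
# Generic superstatistics sketch: B(E) = ∫ f(beta) exp(-beta E) d beta,
# compared with the ordinary factor exp(-beta0 E).  The gamma / log-normal
# parameterizations (mean beta0, relative variance var_rel) are a generic
# choice, not the p_l-dependent distributions of the paper.
import math
import numpy as np

def gamma_pdf(beta, beta0, var_rel):
    """Gamma distribution with mean beta0 and relative variance var_rel."""
    k = 1.0 / var_rel            # shape
    theta = beta0 * var_rel      # scale, so that <beta> = beta0
    return beta ** (k - 1.0) * np.exp(-beta / theta) / (theta ** k * math.gamma(k))

def lognormal_pdf(beta, beta0, var_rel):
    """Log-normal distribution with mean beta0 and relative variance var_rel."""
    s2 = math.log(1.0 + var_rel)        # variance of ln(beta)
    mu = math.log(beta0) - 0.5 * s2     # so that <beta> = beta0
    return np.exp(-(np.log(beta) - mu) ** 2 / (2.0 * s2)) / (beta * math.sqrt(2.0 * math.pi * s2))

def generalized_boltzmann(E, pdf, beta0=1.0, var_rel=0.05):
    """Numerically integrate B(E) = ∫ f(beta) exp(-beta E) d beta (Riemann sum)."""
    beta, dbeta = np.linspace(1e-6, 10.0 * beta0, 200_000, retstep=True)
    return float(np.sum(pdf(beta, beta0, var_rel) * np.exp(-beta * E)) * dbeta)

if __name__ == "__main__":
    for E in (0.5, 1.0, 2.0):
        bg = math.exp(-E)  # ordinary Boltzmann factor with beta0 = 1
        g = generalized_boltzmann(E, gamma_pdf)
        ln = generalized_boltzmann(E, lognormal_pdf)
        print(f"E={E}: BG={bg:.4f}  gamma={g:.4f}  log-normal={ln:.4f}")
```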
we consider , particularly , the case in which one of these constant parameters is chosen as @xmath25 .
for this value of the constant parameter we define a @xmath7-distribution as a function of the inverse temperature and @xmath9 as @xmath26 once more the associated boltzmann factor cannot be evaluated in closed form , but for small variance of the fluctuations we obtain the series expansion @xmath27 . \label{8 } as shown in @xcite one can obtain in closed form the entropy corresponding to ( eqs .
[ 2 ] , [ 3 ] ) resulting in @xmath28 where @xmath29 is the conventional constant and @xmath30 .
the expansion of ( eq .
[ 9 ] ) gives @xmath31 given that the boltzmann factors ( eqs .
4 , 6 , 8 ) coincide up to the second term for the gamma @xmath5 , log - normal and @xmath7-distributions , for sufficiently small @xmath21 the entropy ( eq . [ 10 ] ) corresponds to all these distributions up to the first term that modifies the usual entropy .
we expect at least this modification to the entropy for several possible @xmath0 distributions .
the usual h - theorem is established for the @xmath1 function defined as @xmath32 the essence of this theorem is to ensure that , for any initial state , a gas that satisfies the boltzmann equation approaches a local equilibrium state , which means @xmath13 .
the new @xmath1 function can be written as @xmath33 considering the partial time derivative ( because in general the gas is not homogeneous ) , we have @xmath34 e^{f \ln f } \ , \frac{\partial f}{\partial t } . using the mean value theorem for integrals , and noting that the factor @xmath35 is always positive , it follows from the conventional h - theorem that the variation of the new h - function with time satisfies @xmath36 this is a very interesting result : other possible generalizations , for which the multiplying factor appearing in the integral is positive definite , will preserve the corresponding h - theorem .
it is well known that the boltzmann equation , under very general considerations , admits solutions with global existence and exponential or polynomial decay to maxwellian states @xcite . the simplest maxwellian state among the five - parameter family is given by @xmath37 where @xmath38 , @xmath39 , with @xmath40 planck 's constant .
since this state is reached by the system under very general conditions , we propose the new h function defined as @xmath41 by expanding and doing the corresponding integration , we obtain the exact relation @xmath42 in order to gain insight into the new contributions of the generalized h - function to the thermodynamics of a system , we keep only the first terms of the previous series , @xmath43 . \label{h0 } in the classical limit , and for the system not far from equilibrium , we have the proportionality between the @xmath1 function and the entropy , @xmath44 , from which expression ( [ h0 ] ) multiplied by @xmath45 gives the entropy ( to the corresponding order ) , where the ideal contribution is given by @xmath46 and the rest of the terms correspond to the corrections to the thermodynamics of the system .
the extensivity property is broken by these new terms .
we notice that the sackur - tetrode expression for the entropy of the ideal gas @xmath47 can be recovered by an ad - hoc fixing term as it was originally proposed by gibbs . with the knowledge of the entropy as a function of volume and temperature it is possible to calculate some of the response functions for the system @xmath48 @xmath49 where @xmath50 is the thermal expansion coefficient and @xmath51 is the isothermal compressibility . for an ideal gas @xmath52 , @xmath53 , therefore , using the equation of state @xmath54 , where @xmath55 is the boltzmann constant and @xmath56 is the number of moles .
we obtain @xmath57 and @xmath58 . the first contribution gives us the well - known result for a classical ideal gas .
it is remarkable that for both quantities the relative deviation from ideality has the same functional form .
in fact , such a deviation can be expressed in terms of only one variable ( see figure 1 ) , @xmath59 @xmath60 . it can be observed that the usual behavior is obtained for very low densities or very high temperatures ; deviations are expected for very low temperatures or very high densities . in particular ,
high densities could be achieved with very small volumes ( for fixed @xmath10 ) , i.e. , for confined systems .
it is a well - known thermodynamic result that an equation of state ( pressure as a function of volume and temperature ) can be obtained if the response functions @xmath50 and @xmath51 are given , @xmath61 . in terms of the new variables @xmath62 , the change in pressure can be written as @xmath63 , and the response functions @xmath64 must be expressed in terms of @xmath65 .
the quotient @xmath66 is known exactly , but the compressibility is not .
it is possible to obtain an approximate equation of state by considering an isochoric process , for which an exact expression can be given ( up to the considered order in our treatment ) . even within this coarse approximation , a remarkably similar expression to those previously obtained for the heat capacity and the expansion coefficient emerges .
we can conjecture that the exact relative deviation for the pressure and the rest of the thermodynamic variables is given by the function @xmath67 , up to the considered order in the expansion , resembling the universal behavior found for the distribution functions .
[ figure captions : internal energy and constant - volume heat capacity results for a system of @xmath68 spherical particles of diameter @xmath69 interacting with a square - well potential of energy depth @xmath70 and attractive range @xmath71 at a supercritical temperature , and for a system of @xmath74 particles interacting with a lennard - jones potential of energy depth @xmath70 and size parameter @xmath69 at a subcritical and a supercritical temperature , comparing the boltzmann - gibbs ( bg ) and generalized statistics ( gp ) results ; for the bg lennard - jones cases , predictions from the equation of state of johnson et al . @xcite are shown as continuous and dashed lines . ]
in order to explore the consequences of redefining the distribution probability using a generalized statistics , we performed canonical ensemble monte carlo computer simulations for a fluid composed of spherical particles interacting via two different models , the square - well ( sw ) and lennard - jones ( lj ) potentials . in the first case we considered @xmath79 particles of diameter @xmath69 interacting via a sw pair potential with an attractive range @xmath80 and energy depth @xmath70 ; for the second model we used @xmath81 particles .
results were obtained for the internal energy @xmath82 and the constant - volume heat capacity @xmath83 , using both the standard boltzmann - gibbs statistics and the generalized probability of this work @xcite ; see figures 2 and 3 for the sw system at temperature @xmath84 , and figures 4 and 5 for the lj system at temperatures @xmath75 and @xmath85 . in the first case we are studying a supercritical temperature , whereas in the second case a comparison is made between a subcritical and a supercritical case .
in all the cases we observe that the generalized probability introduces an effective repulsive interaction . the effect on the thermodynamic properties is equivalent to increasing the repulsive force between molecules , and there is a reduction in the values of the internal energy . for the case of the heat capacity ,
the effects of modifying the probability are more noticeable near the critical density or at higher densities , where the gp values of @xmath86 are clearly greater than for the boltzmann - gibbs statistics .
the procedure to define an effective potential @xmath87 is by assuming that the generalized boltzmann factor in equation ( 4 ) can be mapped onto a classical boltzmann factor , @xmath88 an interesting consequence of this effective - repulsive potential behavior is the possibility to map the thermodynamic properties of a gp system onto a bg one by modifying the potential parameters , like the diameter @xmath69 and range @xmath89 of the sw fluid .
this type of mapping has been explored in the past in order to define the equivalence of thermodynamic properties between systems defined by different pair potentials @xcite , that now can be applied by mapping a gp and a bg potential - system .
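the mapping described above amounts to defining the effective potential @xmath87 through minus the logarithm of the generalized boltzmann factor evaluated on the bare pair potential . the sketch below illustrates this step with a stand - in q - exponential weight , used only because eq . ( 4 ) is not reproduced here ; the functional form of the weight and all numerical values are assumptions for illustration .

    import numpy as np

    # sketch of the mapping T * (-ln b_g(v(r))) = v_eff(r) discussed in the text.
    # 'generalized_factor' is a stand-in q-exponential, not the factor of eq. (4).

    def sw_potential(r, lam=1.5, eps=1.0):
        # square-well pair potential (sigma = 1); a large finite number stands in for the hard core
        v = np.zeros_like(r)
        v[r < lam] = -eps
        v[r < 1.0] = 1e6
        return v

    def generalized_factor(v, T, q=1.1):
        # stand-in q-exponential weight; reduces to exp(-v/T) as q -> 1
        x = np.maximum(1.0 - (1.0 - q) * v / T, 0.0)
        safe = np.where(x > 0.0, x, 1.0)
        return np.where(x > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

    def effective_potential(r, T, q=1.1):
        # v_eff defined through exp(-v_eff/T) = b_g(v(r))
        bg = generalized_factor(sw_potential(r), T, q)
        return -T * np.log(np.clip(bg, 1e-300, None))

    r = np.linspace(0.9, 2.0, 200)
    print(effective_potential(r, T=1.5)[:5])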
if we think about the inverse problem , i.e. , which pair potential reproduces the experimental values of @xmath82 or @xmath90 of a real substance , it is possible that part of the information is hidden in the statistical probability function used to obtain it , i.e. , one may be considering either a bg statistics with a specific potential model or a gp statistics with a modified pair potential .
based on a non - extensive statistical mechanics generalization of the entropy that depends only on the probability , we show that the first term correcting the usual entropy also arises from several @xmath0 distributions .
we also construct the corresponding @xmath1-function and demonstrate that a generalized @xmath1-theorem is fulfilled .
furthermore , expressing this @xmath1 function as a function of the simplest maxwellian state we find , up to a first approximation , some modified thermodynamic quantities for an ideal gas , showing that a generic correction term appears , resembling the universal behavior found for the distribution functions .
several simulation results are presented for internal energies and heat capacities for the square - well and lennard - jones potential .
the simulation results support the theoretical results ( for the ideal gas ) showing that an effective repulsive interaction is obtained with the new formalism .
further research has to be done to elucidate the complete scenario proposed in this work .
+ we thank our supportive institutions .
o. obregón was supported by conacyt projects no .
257919 and 258982 , promep and ug projects .
j. torres - arenas was supported by conacyt project no .
152684 and universidad de guanajuato project no .
740/2016 .
wilk , g. ; wlodarczyk , z. interpretation of the nonextensivity parameter q in some applications of tsallis statistics and levy distributions .
_ * 2000 * , _ 84 _ , 2770 - 2773 + sakaguchi , h. j. fluctuation dissipation relation for a langevin model with multiplicative noise . _
. japan _ * 2001 * , _ 70 _ , 3247 - 3250 .
+ jung , s. ; swinney , h.l .
superstatistics in taylor couette flow , university of austin * 2002 * preprint . | we consider a previously proposed non - extensive statistical mechanics in which the entropy depends only on the probability ; this was obtained from a @xmath0 distribution and its corresponding boltzmann factor .
we show that the first term correcting the usual entropy also arises from several @xmath0 distributions ; we also construct the corresponding @xmath1 function and demonstrate that a generalized @xmath1-theorem is fulfilled . furthermore , expressing this @xmath1 function as a function of the simplest maxwellian state we find , up to a first approximation , some modified thermodynamic quantities for an ideal gas . in order to gain some insight about the behavior of the proposed generalized statistics , we present some simulation results for the case of square - well and lennard - jones potentials , showing that an effective repulsive interaction is obtained with the new formalism .
vertical banded gastroplasty ( vbg ) was one of the most commonly performed bariatric surgeries in the last decade .
however , in the following years the operation did not achieve optimum results as it was associated with long - term weight gain and some mechanical complications .
later , long - term studies reported that the rate of conversion surgeries after open vbg ranged from 49.7 to 56% [ 2 , 3 ] . over the past years
, rygb has been the most commonly performed conversion surgery after failed open vbg as it achieves good long - term results in weight loss .
however , it is associated with a high rate of complications and long - term metabolic side effects . as a primary bariatric surgery , minigastric bypass , which was first described by rutledge ,
was found to achieve excellent results with short operative duration and low rates of postoperative complications [ 5 , 6 ] . this study aims to compare lmgb and lrygb as conversion surgeries after failed open vbg with respect to indications and operative and postoperative outcomes .
sixty patients ( 48 females and 12 males ) presenting with failed vbg , an average bmi of 39.7 kg / m2 ranging between 26.5 kg / m2 and 53 kg / m2 , and a mean age of 38.7 years ranging between 24 and 51 years were enrolled in this prospective randomized study .
patients were admitted at the bariatric unit , department of general surgery , el demerdash hospital , at ain shams university in cairo , egypt , from december 2013 to december 2015 .
approval from the ethical committee of the faculty of medicine at ain shams university was obtained to conduct this study .
all patients enrolled in this study were suffering from failed vbg , that is , weight loss of less than 50% of the excess body weight in 2 years and/or having vbg related complications such as stomal stenosis with persistence vomiting , resistant stomal ulcers , intractable bleeding , severe reflux esophagitis , pouch dilation or staple line disruption , gg fistula with weight regain , or poor control of obesity - associated comorbidities .
patients with severe debilitating nutritional deficiency , large incisional hernias , history of personality disorder , drug or alcohol addiction , or advanced malignancy were excluded from our study .
additionally , patients who were contraindicated for laparoscopy or general anesthesia ( e.g. , having major medical comorbidity such as cardiac patients ) or refused the laparoscopic procedure were also excluded . before the operation ,
assessment of patients ' general conditions , mental statuses , and obesity - associated comorbidities such as diabetes , hypertension , or cardiovascular diseases was performed , in addition to nutritional assessment for vitamin b12 , calcium , magnesium , iron and protein , fat , and carbohydrate body composition .
full preoperative work - up including blood chemistries , ultrasonography , barium meal , and upper endoscopy was performed for all patients .
all patients provided written informed consent before the operation after they were given a full and clear explanation of the benefits , risks , and long - term consequences of the conversion to bypass surgery . during the week prior to surgery , patients were instructed to eat a high protein diet and perform regular exercises , while during the day before the operation , they were allowed to take only clear fluids .
intraoperatively , patients were intubated in a supine position and pneumoperitoneum was established through a 10 mm umbilical visiport .
one 5 mm trocar was placed under xiphoid process for the insertion of the liver retractor , 12 and 15 mm trocars were placed on the right and left middle clavicular lines few millimeters above the umbilicus , respectively , for the surgeon instruments , and another 5 mm trocar was placed on the left anterior axillary line for assistance .
an oral ryle 's tube was inserted to deflate the stomach and facilitate the dissection . as a first step
, we tried to separate the stomach wall from the left lobe of the liver and overlying omentum in an attempt to identify the site of the mesh .
if the bougie passed easily and freely without gastric outlet obstruction , the mesh was not removed and the operation was continued as mgb , in which the first transverse staple line was placed at the level of the incisura and vertical stapling was then placed on the previous vbg staple line ( in this case , the pouch was usually not dilated ) .
if gastric outlet obstruction was found and did not allow the bougie to pass , the mesh was attempted to be removed without injuring the gastric wall . in case
we succeeded and the bougie was passed easily , mgb was performed as described above .
if we failed , the 1st transverse reload was to be taken just above the mesh and proceeded vertically to the angle of hiss .
if the vertical length of the gastric pouch was long ( enough to take 3 reloads , each of size 60 mm ) , the operation was continued as mgb : after bypassing 180 cm of intestine from the ligament of treitz in an antecolic fashion , a loop gastroenterostomy was performed . in cases where the vertical length of the pouch was short ( less than 3 reloads , each of size 60 mm ) , the operation was continued as rygb , where the biliopancreatic limb was 70 cm and the alimentary limb was 150 cm , using a linear stapler to create an end - to - side gastrojejunostomy and a side - to - side jejunojejunostomy . both enterotomies were closed by v - lock sutures and the mesenteric defects were closed by nonabsorbable prolene 2/0 .
then , a tube drain with a size of 22 was inserted routinely and was removed 2 to 3 days after the operation .
patients were kept nil by mouth for 48 hours , followed by low - caloric clear liquids for 1 week and low - caloric semisolid food for 24 weeks postoperatively .
patients were discharged from the hospital on the 3rd day , after a gastrografin study was performed .
patients were followed up once every week for one month and then once every month for one year to monitor their postoperative outcome as regards general health condition , bmi , and complication .
sixty patients ( 48 females and 12 males ) complaining of failed vbg with a mean age of 38.7 years ( ranging from 24 to 51 years ) and an average bmi of 39.7 kg / m2 ( ranging from 26.5 kg / m2 to 53 kg / m2 ) were enrolled in this study ( table 1 ) . in the current study ,
70% of our patients were complaining from failing to achieve satisfactory weight loss or having weight regain after open vbg , while the remaining 30% were complaining from other vbg complications such as persistent vomiting , reflux esophagitis , or attacks of bleeding .
mgb with long gastric pouch was successfully performed in 39 cases and mesh was removed in 15 cases .
the mean duration of intervention was 145 min ( ranging from 125 to 235 min ) and the mean length of hospital stay was 4.7 days ( ranging from 4 to 18 days ) .
the mean bmi decreased to 30.1 kg / m2 ( ranging from 24.8 kg / m2 to 41.5 kg / m2 ) after 1 year of the operation .
one case had leakage after 2 days of the operation and upon performing reexploration , an iatrogenic injury in the ascending limb of omega loop in the mgb was found .
rygb with short gastric pouch was performed in 21 cases with a mean duration of operation of 185 min ( ranging from 130 to 312 min ) and a mean length of hospital stay of 6.2 days ( ranging from 5 to 7 days ) .
the mean bmi decreased to 29.8 kg / m2 ( ranging from 24.3 to 40.3 kg / m2 ) after 1 year of the operation .
one case had anastomotic stenosis in the gastrojejunostomy in the 8th month after the operation which was improved after balloon dilatation .
another case had intestinal obstruction and upon reexploration , hernia through the mesenteric defect was found .
in the past decade , vertical band gastroplasty was amongst the preferred bariatric surgeries for weight loss without being associated with metabolic side effects .
however , the procedure did not provide satisfactory long - term weight loss results as more than 20% of patients regained their weight after surgery .
weight regain after failed vbg was attributed to staple line disruption , pouch dilation , and the switch in patients ' eating habits to become sweet eaters . according to a study performed by van gemert et al .
, up to 56% of patients who underwent vbg would require revisional surgery over a period of 12 years .
rygb was the revisional surgery of choice after failed vbg since it can achieve good results in weight loss and permits correction of comorbidities . however , revisional lrygb is a technically difficult procedure and is associated with higher morbidities and mortalities .
lrygb is considered technically difficult because of the high anastomosis near the esophagogastric junction , which necessitates the complete release of the upper stomach , a highly difficult and risky step . moreover
, the high anastomosis near the esophagogastric junction can be under tension and may cause fistula formation .
developments made in laparoscopic revisional bariatric surgeries led to the emergence of lmgb as a safer substitute for lrygb , as it does not require the complete release of the upper stomach : the anastomosis is performed more inferiorly , so it is enough to create a retrogastric tunnel for the stapler under direct vision , guided by the bougie .
moreover , lmgb is superior in that it involves a single anastomosis with a better blood supply to the gastric tube , decreasing the risk of leakage .
this study subsequently addresses whether lmgb is a legitimate revisional procedure for all cases with failed vbg .
we found that long pouch was successfully created after the spontaneous passage of the bougie through the stoma which occurred in 18 cases or after the removal of the mesh which occurred in 15 cases or due to the presence of dilation in the upper gastric pouch which is commonly associated with stomal stenosis as found in 6 cases .
this enabled us to convert vbg into mgb in 65% of our patients , while in 35% of the patients long pouch could not be created and the vertical length of the pouch was less than 3 reloaded of size 60 mm and therefore vbg was converted into rygb to avoid reflux esophagitis .
this indicates that not all cases with failed vbg can be converted into mgb and sometimes it is much better for the patients to convert into rygb .
this decision should be taken intraoperatively . the mean operative time and mean postoperative hospital stay in the cases converted to mgb were 145 min and 4.7 days , respectively , which were significantly shorter in comparison to cases converted into rygb where the mean operative time and mean postoperative hospital stay were 185 min and 6.2 days , respectively .
one year after the operation , there was no significant difference between the postoperative mean bmi of cases converted into mgb ( 30.1 kg / m2 ) and that of cases converted into rygb ( 29.8 kg / m2 ) , indicating that both procedures have similar weight loss efficiencies .
there was a significant decrease in the rate of complications after mgb in comparison to rygb which was 2.5% and 9.5% , respectively .
after mgb , there was only one case out of the 39 cases that had leakage , which was due to a traumatic injury from hard grasping of the intestinal loop and not to leakage from the gastrojejunostomy anastomosis , while after rygb one case had an internal hernia and one case had stomal stenosis . in a study performed by gonzalez et al . , the rates of anastomotic strictures and leaks were reported to be relatively high after revisional lrygb .
additionally , another study performed by gagné et al . stated that strictures are a common complication after revisional lrygb , occurring because of proximal gastric pouch mucosal thickening or distal pouch ischemia due to chronic inflammation from the vertical staple line .
therefore we can state that mgb is a simple procedure that is associated with a short operative time and a low rate of complications .
however , mgb may not be applicable in all cases with failed vbg and therefore rygb may be needed in such cases .
lmgb is a safe and feasible revisional bariatric surgery after failed vbg and can achieve early good weight loss results similar to those of lrygb .
however , the decision to convert to lap rygb or mgb should be taken intraoperatively depending mainly on the actual intraoperative pouch length . | background .
long - term studies have reported that the rate of conversion surgeries after open vbg ranged from 49.7 to 56% . this study aims to compare lmgb and lrygb as conversion surgeries after failed open vbg with respect to indications and operative and postoperative outcomes .
methods . sixty patients ( 48 females and 12 males ) presenting with failed vbg , with an average bmi of 39.7 kg / m2 ranging between 26.5 kg / m2 and 53 kg / m2 , and a mean age of 38.7 ranging between 24 and 51 years were enrolled in this study .
operative and postoperative data was recorded up to one year after the operation .
results .
mgb is a simple procedure that is associated with short operative time and low rate of complications .
however , mgb may not be applicable in all cases with failed vbg and therefore rygb may be needed in such cases .
conclusion .
lmgb is a safe and feasible revisional bariatric surgery after failed vbg and can achieve early good weight loss results similar to those of lrygb . however , the decision to convert to lap rygb or mgb should be taken intraoperatively depending mainly on the actual intraoperative pouch length . |
electric response of a high temperature superconductor ( htsc ) under magnetic field has been a subject of extensive experimental and theoretical investigation for years .
a magnetic field in these layered , strongly type - ii superconductors creates magnetic vortices , which , if not pinned by inhomogeneities , move and let the electric field penetrate the mixed state .
the dynamic properties of fluxons appearing in the bulk of a sample are strongly affected by the combined effect of thermal fluctuations , anisotropy ( dimensionality ) and the flux pinning @xcite .
thermal fluctuations in these materials are far from negligible and in particular are responsible for existence of the first - order vortex lattice melting transition separating two thermodynamically distinct phases , the vortex solid and the vortex liquid . magnetic field and reduced dimensionality due to pronounced layered structure ( especially in materials like bi@xmath1sr@xmath4cacuo@xmath5 ) further enhance the effect of thermal fluctuations on the mesoscopic scale
. on the other hand the role of pinning in high-@xmath0 materials is reduced significantly compared to the low temperature one , leading to smaller critical currents . at elevated temperatures the thermal depinning @xcite
further diminishes effects of disorder .
linear response to electric field in the mixed state of these superconductors has been thoroughly explored experimentally and theoretically over the last three decades .
these experiments were performed at very small voltages in order to avoid effects of nonlinearity .
deviations from linearity , however , are interesting in their own right .
these effects have also been studied in low-@xmath0 superconductors experimentally @xcite and theoretically @xcite and recently experiments were extended to htsc compounds @xcite . since thermal fluctuations in the low-@xmath0 materials are negligible compared to the inter - vortex interactions , the moving vortex matter is expected to preserve a regular lattice structure ( for weak enough disorder ) . on the other hand ,
as mentioned above , the vortex lattice melts in htsc over large portions of their phase diagram , so the moving vortex matter in the region of vortex liquid can be better described as an irregular flowing vortex liquid . in particular
the nonlinear effects will also be strongly influenced by the thermal fluctuations .
a simpler case of a zero or very small magnetic field in the case of strong thermal fluctuations was in fact comprehensively studied theoretically @xcite , albeit in linear response only
. in any superconductor there exists a critical region around the critical temperature @xmath6 , in which the fluctuations are strong ( the ginzburg number characterizing the strength of thermal fluctuations is just @xmath7 for low @xmath0 , while @xmath8 for htsc materials ) . outside the critical region and for small electric fields ,
the fluctuation conductivity was calculated by aslamazov and larkin @xcite by considering ( noninteracting ) gaussian fluctuations within bardeen - cooper - schrieffer ( bcs ) and within a more phenomenological ginzburg - landau ( gl ) approach . in the framework of the gl approach ( restricted to the lowest landau level approximation ) , ullah and
dorsey @xcite computed the ettingshausen coefficient by using the hartree approximation .
this approach was extended to other transport phenomena like the hall conductivity @xcite and the nernst effect @xcite .
the fluctuation conductivity within linear response can be applied to describe sufficiently weak electric fields , which do not perturb the fluctuation spectrum @xcite . physically , at an electric field that is able to accelerate the paired electrons over a distance of the order of the coherence length @xmath9 , so that they change their energy by a value corresponding to the cooper pair binding energy , the linear response is already inapplicable @xcite .
the resulting additional field dependent depairing leads to deviation of the current - voltage characteristics from the ohm s law .
the non - ohmic fluctuation conductivity was calculated for a layered superconductor in an arbitrary electric field considering the fluctuations as noninteracting gaussian ones @xcite .
the fluctuations suppression effect of high electric fields in htsc was investigated experimentally for the in - plane paraconductivity in zero magnetic field @xcite , and a good agreement with the theoretical models @xcite was found . in this paper
the nonlinear electric response of the moving vortex liquid in a layered superconductor under magnetic field perpendicular to the layers is studied using the time dependent gl ( tdgl ) approach .
the layered structure is modeled via the lawrence - doniach discretization in the magnetic field direction . in the moving vortex liquid
the long range crystalline order is lost due to thermal fluctuations and the vortex matter becomes homogeneous on a scale above the average inter - vortex distances .
although sometimes motion tends to suppress the fluctuations , they are still a dominant factor in flux dynamics .
the tdgl approach is an ideal tool to study a combined effect of the dissipative ( overdamped ) flux motion and thermal fluctuations conveniently modeled by the langevin white noise .
the interaction term in dynamics is treated in gaussian approximation which is similar in structure to the hartree - fock one .
theoretically the nonlinear effects in htsc have been addressed @xcite .
however the results of ref .
are different from ours and we will sketch the differences below .
firstly the model of ref . , is physically different from ours .
the authors in ref .
take the two quantities , the layer distance and the layer thickness in the lawrence - doniach model for htsc , to be equal ( apparently not the case in htsc ) , while we consider them as two independent parameters .
another difference is that we use the so - called self - consistent gaussian approximation to treat the model , while ref . used the hartree - fock approximation .
a main contribution of our paper is an explicit form of the green function incorporating all landau levels .
this allows us to obtain explicit formulas without the need to cut off higher landau levels . in ref .
, a nontrivial matrix inversion ( of infinite matrices ) or cutting off the number of landau levels is required .
note that the exact analytical expression for the green function of the linearized tdgl equation in a dc field can even be generalized to an ac field .
the method is very general , and it allows us to study transport phenomena beyond linear response in type - ii superconductors , like the nernst effect and the hall effect .
the renormalization of the models is also different from ref . .
one of the main results of our work is that the conductivity formula is independent of the uv cutoff ( unlike in ref . ) , as it should be , since the standard @xmath10 theory is renormalizable .
furthermore , the gaussian approximation used in this paper is consistent to leading order with perturbation theory ; see ref . , in which it is shown that this procedure preserves the correct ultraviolet ( uv ) renormalization ( it is rg invariant ) . without electric field the issue was comprehensively discussed in the textbook by kleinert @xcite .
one can use the hartree - fock procedure only when uv issues are unimportant .
we can also show that , if there is no electric field , the result obtained using the tdgl model and the gaussian approximation leads to the same equation as the thermodynamic gaussian approximation .
the paper is organized as follows .
the model is defined in sec . ii .
the vortex liquid within the gaussian approximation is described in sec . iii . the i - v curve and the comparison with experiment
are described in sec . iv , while sec . v contains conclusions .
to describe fluctuations of the order parameter in layered superconductors , one can start with the lawrence - doniach expression of the gl free energy of the 2d layers with a josephson coupling between them : @xmath11 , where @xmath12 is the order parameter effective thickness and @xmath13 the distance between layers labeled by @xmath14 .
the lawrence - doniach model approximates paired electrons dos by homogeneous infinitely thin planes separated by distance @xmath13 .
while discussing thermal fluctuations , we have to introduce a finite thickness , otherwise the fluctuations will not allow the condensate to exist ( mermin - wagner theorem ) .
the thickness is of course smaller than the distance between the layers ( otherwise we would not have layers ) .
the order parameter is assumed to be non - zero within @xmath12 .
the effective cooper pair mass in the @xmath15 plane is @xmath16 ( disregarding for simplicity the anisotropy between the crystallographic @xmath17 and @xmath18 axes ) , while along the @xmath19 axis it is much larger , @xmath20 . for simplicity
we assume @xmath21 , @xmath22 , although this temperature dependence can be easily modified to better describe the experimental coherence length .
the `` mean field '' critical temperature @xmath23 depends on the uv cutoff , @xmath24 , of the mesoscopic ( `` phenomenological '' ) gl description , specified later .
this temperature is higher than measured critical temperature @xmath0 due to strong thermal fluctuations on the mesoscopic scale .
the covariant derivatives are defined by @xmath25 where the vector potential describes constant and homogeneous magnetic field @xmath26 and @xmath27 is the flux quantum with @xmath28 .
the two scales , the coherence length @xmath29 and the penetration depth , @xmath30 define the gl ratio @xmath31 , which is very large for htsc . in this case of strongly type - ii superconductors
the magnetization is by a factor @xmath32 smaller than the external field for magnetic field larger than the first critical field @xmath33 , so that we take @xmath34 .
the electric current , @xmath35 , includes both the ohmic normal part @xmath36 and the supercurrent @xmath37 . since we are interested in a transport phenomenon , it is necessary to introduce a dynamics of the order parameter .
the simplest one is a gauge - invariant version of the `` type a '' relaxational dynamics @xcite . in the presence of thermal fluctuations , which on the mesoscopic scale are represented by a complex white noise @xcite
, it reads : @xmath38 , where @xmath39 is the covariant time derivative , with @xmath40 being the scalar electric potential describing the driving force in a purely dissipative dynamics .
the electric field is therefore directed along the @xmath41 axis and consequently the vortices are moving in the @xmath42 direction . for magnetic fields that are not too low , we assume that the electric field is also homogeneous @xcite . the inverse diffusion constant @xmath43 , controlling the time scale of dynamical processes via dissipation , is real , although a small imaginary ( hall ) part is also generally present @xcite .
the variance of the thermal noise , determining the temperature @xmath44 , is taken to be that of gaussian white noise : @xmath45 . throughout most of the paper we use the coherence length @xmath46 as a unit of length and @xmath47 as a unit of the magnetic field .
the dimensionless boltzmann factor in these units is : @xmath48 , where the covariant derivatives in dimensionless units in landau gauge are @xmath49 @xmath50 with @xmath51 , and the order parameter field was rescaled : @xmath52 .
the dimensionless fluctuation strength coefficient is @xmath53 , where the ginzburg number is defined by @xmath54 . note that here we use the standard definition of the ginzburg number , different from that in ref . .
the relation between the parameters of the two models , the lawrence - doniach and the 3d anisotropic gl model , is @xmath55 , @xmath56 , where @xmath57 is the anisotropy parameter . in analogy to the coherence length and the penetration depth , one can define a characteristic time scale . in the superconducting phase a typical `` relaxation ''
time is @xmath58 .
it is convenient to use the following unit of the electric field and the dimensionless field : @xmath59 . the tdgl eq .
( [ tdgl_i ] ) written in dimensionless units reads @xmath60 @xmath61 , while the gaussian white - noise correlation takes the form @xmath62 . the covariant time derivative in dimensionless units is @xmath63 , with @xmath64 being the vortex velocity , and the thermal noise was rescaled as @xmath65 @xmath66 .
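before turning to the gaussian approximation , it may help to see how a relaxational langevin dynamics of this type is realized numerically . the following is a zero - dimensional toy sketch ( a single complex mode , no gradients , no magnetic field and no layer index ) integrated with an euler - maruyama step ; the coefficients and the noise strength are illustrative assumptions , and the sketch is not the discretization used for any result in this paper .

    import numpy as np

    # toy sketch: relaxational ("type a") langevin dynamics for a single complex mode,
    #   d(psi)/d(tau) = -(eps + |psi|^2) * psi + zeta(tau),
    # with complex white noise of total strength 2*omega.  the spatial gradients, the
    # magnetic field and the layer index of the full tdgl model are all dropped; this
    # only shows how the white noise enters an euler-maruyama update.

    def run_langevin(eps=0.1, omega=0.05, dt=1e-3, n_steps=200_000, seed=0):
        rng = np.random.default_rng(seed)
        psi = 0.0 + 0.0j
        acc = 0.0                          # accumulator for <|psi|^2>
        for _ in range(n_steps):
            noise = np.sqrt(omega * dt) * (rng.standard_normal() + 1j * rng.standard_normal())
            psi += -dt * (eps + abs(psi) ** 2) * psi + noise
            acc += abs(psi) ** 2
        return acc / n_steps               # crude estimate of the toy "superfluid density"

    print(run_langevin())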
the dimensionless current density is @xmath67 , where @xmath68 , with @xmath69 being the unit of the current density .
consistently the conductivity will be given in units of @xmath70 .
this unit is close to the normal state conductivity @xmath71 in dirty limit superconductors @xcite . in general
there is a factor @xmath72 of order one relating the two : @xmath73 .
thermal fluctuations in vortex liquid frustrate the phase of the order parameter , so that @xmath74 . therefore the contributions to the expectation values of physical quantities like the electric current come exclusively from the correlations , the most important being the quadratic one @xmath75 .
in particular , @xmath76 is the superfluid density .
a simple approximation which captures the most interesting fluctuation effects is the gaussian approximation , in which the cubic term in the tdgl eq .
( [ tdgl2l ] ) , @xmath77 , is replaced by a linear one , @xmath78 @xmath79 , leading to the `` renormalized '' value of the coefficient of the linear term : @xmath80 , where the constant is defined as @xmath81 .
the average @xmath82 is expressed via the parameter @xmath83 below and will be determined self - consistently together with @xmath83 .
it differs slightly from the well known hartree - fock procedure , in which the coefficient of the linearized term is generally different ( see @xcite for details ) . due to the discrete translation invariance in the field direction @xmath84 , it is convenient to work with the fourier transform with respect to the layer index : @xmath85 , and a similar transformation for @xmath86 . in terms of fourier components the tdgl eq .
( [ tdgl3 ] ) becomes @xmath87 $ + \varepsilon \} \, \psi _ { k_{z}}(\mathbf{r},\tau ) = \overline{\zeta _ { k_{z}}}(\mathbf{r},\tau ) $ ( eq . ( [ tdgl_kz1 ] ) ) . the noise correlation is @xmath88 . the relaxational linearized tdgl equation with a langevin noise , eq .
( [ tdgl_kz1 ] ) , is solved using the retarded ( @xmath89 for @xmath90 ) green function ( gf ) @xmath91 : @xmath92 . the gf satisfies @xmath93 $ + \varepsilon \} \, g_{k_{z}}(\mathbf{r},\mathbf{r}^{\prime } , \tau -\tau ^{\prime } ) = \delta ( \mathbf{r}-\mathbf{r}^{\prime } ) \, \delta ( \tau -\tau ^{\prime } ) $ ( eq . ( [ gfdef ] ) ) and is computed in appendix a. the thermal average of the superfluid density ( density of cooper pairs ) is @xmath94 , where @xmath95 $ \, e^{-\left ( 2\varepsilon -b+v^{2}\right ) \tau } \, e^{-2\tau /d^{2}} \, i_{0}\left ( 2\tau /d^{2}\right ) $ ( eq . ( [ f ] ) ) . here @xmath96 is the modified bessel function . the first pair of multipliers in eq .
( [ f ] ) is independent of the inter - plane distance @xmath97 and exponentially decreases for @xmath98 , while the last pair of multipliers depends on the layered structure .
the expression ( [ expect.v ] ) is divergent at small @xmath99 , so an uv cutoff @xmath24 is necessary for regularization . substituting the expectation value into the `` gap equation '' , eq .
( [ gap.eq1 ] ) , the latter takes the form @xmath100 . in order to absorb the divergence into a `` renormalized '' value @xmath101 of the coefficient @xmath102 , it is convenient to make an integration by parts in the last term for small @xmath24 : @xmath103 $ \, \frac{d}{d\tau } \left [ \frac{ f(\varepsilon , \tau ) } { \cosh ( b\tau ) } \right ] - \ln ( b\tau _ { c } ) $ . physically the renormalization corresponds to a reduction of the critical temperature by the thermal fluctuations from @xmath23 to @xmath0 .
the thermal fluctuations occur on the mesoscopic scale .
the critical temperature @xmath0 is defined at @xmath104 and @xmath105 , and at low magnetic field , less than @xmath106 ( for a typical high-@xmath0 superconductor , @xmath107 , @xmath108 ) , the superconductor is in the meissner phase , @xmath109 , leading to @xmath110 ( eq . ( [ tmf ] ) ) , where @xmath111 is the euler constant , and eq .
( [ gap.eq2 ] ) can be rewritten as @xmath112 $ \, \frac{d}{d\tau } \left [ \frac{ f(\varepsilon , \tau ) } { \cosh ( b\tau ) } \right ] + \frac{\omega t}{\pi s}\left\{ \gamma _ { e } - \ln ( bd^{2} ) \right\} $ ( eq . ( [ gapequation ] ) ) , where @xmath113 , @xmath114 , and @xmath115 , where @xmath116 ( @xmath23 is now replaced by @xmath0 ) .
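the gap equation above determines @xmath83 self - consistently . as a purely numerical illustration , the sketch below shows the damped fixed - point iteration one would typically use to solve an equation of this form ; the right - hand side is treated as a user - supplied callable implementing eq . ( [ gapequation ] ) , and the toy function used in the example is an assumption , not the actual right - hand side .

    import numpy as np

    # minimal sketch of the self-consistent solution of a gap equation of the form
    #   eps = rhs(eps; t, b, v).
    # 'rhs' is a placeholder callable that the reader supplies from eq. (gapequation);
    # the damped fixed-point loop below is generic and not specific to this paper.

    def solve_gap(rhs, t, b, v, eps0=1.0, mix=0.5, tol=1e-10, max_iter=10_000):
        # damped fixed-point iteration: eps <- (1 - mix) * eps + mix * rhs(eps)
        eps = eps0
        for _ in range(max_iter):
            new = (1.0 - mix) * eps + mix * rhs(eps, t, b, v)
            if abs(new - eps) < tol:
                return new
            eps = new
        raise RuntimeError("gap equation did not converge")

    # toy example with a made-up rhs, only to show the calling convention
    toy_rhs = lambda eps, t, b, v: 0.1 + 0.3 * np.log1p(eps) + 0.05 * b + 0.01 * v ** 2
    print(solve_gap(toy_rhs, t=0.9, b=0.1, v=0.2))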
the formula is cutoff independent . in terms of energy uv cutoff @xmath117 , introduced for example in @xcite
, the cutoff time @xmath24 can be expressed as @xmath118 this is obtained by comparing a thermodynamic result for a physical quantity like superfluid density with the dynamic result ( see appendix b ) .
the temporal uv cutoff used here is completely equivalent to the standard energy or momentum cutoff @xmath117 used in thermodynamics ( in which the time dependence does not appear ) .
physically one might think about momentum cutoff as more basic and this would be universal and independent of particular time dependent realization of thermal fluctuations ( tdgl with white noise in our case ) .
roughly ( in physical units ) @xmath119 . in the next section
we will discuss the estimate of @xmath23 using this value for the following reason . for high-@xmath0 materials
ordinary bcs theory is invalid and the coherence length is of the order of the lattice spacing ( the cutoff becomes microscopic ) , and therefore the energy cutoff is of order @xmath120 . except for the formula used to calculate @xmath23 , all other formulas in this paper are independent of the energy cutoff .
the supercurrent density , defined by eq .
( [ current ] ) , can be expressed via the green 's functions as : @xmath121 . performing the integrals , one obtains : @xmath122 , where the function @xmath123 was defined in eq .
( [ f ] ) .
consequently the contribution to the conductivity is @xmath124 .
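numerically , the supercurrent above is a one - dimensional integral over the scaled time of an integrand built from the function @xmath123 . the sketch below shows how such an integral and the corresponding nonlinear conductivity @xmath124 would be evaluated ; the integrand is a placeholder callable standing in for the explicit expression , and the toy integrand in the example is an assumption used only to demonstrate the calling convention .

    import numpy as np
    from scipy.integrate import quad

    # sketch: the supercurrent density is a time integral
    #   j_s(v) = integral_0^infinity d(tau) K(tau; eps, b, v),
    # with the kernel K built from the function f(eps, tau) of eq. (f).
    # 'integrand' is a placeholder standing in for the explicit expression.

    def supercurrent(integrand, eps, b, v, tmax=200.0):
        val, _ = quad(integrand, 0.0, tmax, args=(eps, b, v), limit=400)
        return val

    def conductivity(integrand, eps, b, v):
        # nonlinear fluctuation conductivity sigma = j_s / v (dimensionless units)
        return supercurrent(integrand, eps, b, v) / v

    # toy integrand with the expected exponential decay, for illustration only
    toy = lambda tau, eps, b, v: v * np.exp(-(2 * eps + v ** 2) * tau) / np.cosh(b * tau)
    print(conductivity(toy, eps=0.05, b=0.1, v=0.3))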
the conductivity expression ( eq .
27 ) is not divergent when expressed as a function of renormalized @xmath0 ( the real transition temperature ) , so it is independent of the cutoff .
this is considered in detail in section iii.b and is indeed different from ref . .
in physical units the current density reads @xmath125 ( eq . ( [ finalcurrent ] ) ) .
this is the main result of the present paper .
we also considered the conductivity expression in 2d in linear response , which matches the linear - response conductivity expression derived in our previous work @xcite : @xmath126 ( eq . ( [ conduc2dlinear ] ) ) , where @xmath127 is the polygamma function . in this section
we use physical units , while the dimensionless quantities are denoted with bars .
we compare with the experimental results of i. puica _ et al . _ @xcite , obtained from resistivity and hall effect measurements on optimally doped yba@xmath1cu@xmath2o@xmath128 ( ybco ) films of thickness @xmath129 nm and @xmath130 k. the distance between the bilayers used in the calculation is @xmath131 , as in ref . .
the number of bilayers is @xmath129 , large enough to be described by the lawrence - doniach model without taking care of boundary conditions . in order to compare the fluctuation conductivity with experimental data in htsc
, one cannot use the bcs expression for the relaxation time @xmath132 , which may be suitable for low-@xmath0 superconductors . instead of this
, we use the factor @xmath72 as a fitting parameter .
[ figure 1 : resistivity fitted with the fitting parameters given in the text ; the dashed line is the theoretical value of the resistivity in linear response with the same parameters . ] the comparison is presented in fig . 1 .
the resistivity @xmath133 @xmath134 curves were fitted to eq .
( [ rho ] ) with the normal - state conductivity measured in ref . to be @xmath135 @xmath136 @xmath137 .
the parameters we obtain from the fit are : @xmath138 t ( corresponding to @xmath139 ) , the ginzburg - landau parameter @xmath140 , the order parameter effective thickness @xmath141 , and the factor @xmath142 , where we take @xmath143 for optimally doped ybco in ref . . using those parameters
, we obtain @xmath144 ( corresponding to @xmath145 ) .
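the fit just described is a standard nonlinear least - squares problem in a handful of parameters . the sketch below shows such a fit with scipy 's curve_fit ; the model function is a placeholder ( not eq . ( [ rho ] ) itself ) and the data arrays are made up , so both are assumptions used only to illustrate the fitting step , not the procedure actually applied to the data of ref . .

    import numpy as np
    from scipy.optimize import curve_fit

    # sketch of fitting measured resistivity rho(T) to a theoretical expression.
    # 'rho_model' is a placeholder wrapper; in a real fit it would evaluate eq. (rho).
    # the data arrays are made up for illustration only.

    def rho_model(T, hc2, d_eff, k_factor):
        return 1.0 / (1.0 + k_factor * np.exp(-(T - 87.0) / d_eff) + 1e-3 * hc2)

    T_data = np.linspace(80.0, 100.0, 41)           # made-up temperatures (K)
    rho_data = rho_model(T_data, 190.0, 2.0, 5.0)   # made-up "measurements"
    rho_data += 0.01 * np.random.default_rng(1).standard_normal(T_data.size)

    popt, pcov = curve_fit(rho_model, T_data, rho_data, p0=[150.0, 1.5, 3.0])
    print("fitted (hc2, d_eff, k_factor):", popt)
    print("1-sigma errors:", np.sqrt(np.diag(pcov)))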
the order parameter effective thickness @xmath12 can be taken to be equal to the layer distance of the superconducting cuo planes ( see ref . ) plus the coherence length @xmath146 due to the proximity effect : @xmath147 @xmath148 @xmath149 , roughly in agreement in magnitude with the fitted value of @xmath150 .
we will now estimate @xmath23 for this sample .
for the underdoped ybco , the radius of the fermi surface of ybco was measured in ref .
, @xmath151 @xmath152 , while the effective mass is @xmath153 .
we will assume that @xmath154 . the fermi energy for underdoped ybco of ref .
is @xmath155 and is roughly the same for the optimal ybco studied in this paper .
the cutoff time in physical units is then , according to eq .
( [ tcutoff ] ) , @xmath156 s. equation ( [ tmf ] ) then gives @xmath157 k.
[ figure 2 : i - v curves for magnetic fields 0.04 ( 1 ) , 0.1 ( 2 ) , 0.4 ( 3 ) , 1.0 ( 4 ) at temperature @xmath158 . ]
[ figure 3 : i - v curves for temperatures 0.2 ( 1 ) , 0.3 ( 2 ) , 0.4 ( 3 ) , 1.0 ( 4 ) at magnetic field @xmath159 . ]
using the parameters specified above we plot several theoretical i - v curves . as expected , the i - v curve shown in figs .
2 and 3
has two linear portions , the flux flow part for @xmath160 and the normal ohmic part for @xmath161 . in the crossover region , @xmath162 ,
the i - v curve becomes nonlinear due to the destruction of superconductivity ( the normal area inside the vortex cores increases to fill all the space ) . in fig .
2 the i - v curves are shown for different magnetic fields , at a fixed temperature @xmath163 . at a given electric field , as the magnetic field increases , the supercurrent decreases . when the magnetic field reaches @xmath164 , the i - v curve becomes linear . in fig .
3 the i - v curves are shown for different temperatures , at a fixed magnetic field @xmath165 . at a given electric field , as the temperature increases , the supercurrent decreases . when the temperature reaches @xmath0 , the i - v curve becomes linear . with decreasing temperature the crossover becomes steeper .
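the i - v curves of figs . 2 and 3 are generated by sweeping the dimensionless electric field at fixed temperature and magnetic field and evaluating the current density . the sketch below shows such a sweep ; the superconducting contribution j_s_model is a placeholder standing in for eq . ( [ finalcurrent ] ) , and its functional form , together with all numbers , is an assumption for illustration only .

    import numpy as np

    # sketch: sweep the (dimensionless) electric field v and record the total current
    # j = sigma_n * v + j_s(v), which interpolates between the flux-flow line at small v
    # and the normal ohmic line at large v.  'j_s_model' is a placeholder for eq. (finalcurrent).

    def j_s_model(v, eps, b):
        # placeholder superconducting contribution that dies out at large v
        return v / (eps + b + v ** 2)

    def iv_curve(v_values, eps, b, sigma_n=1.0):
        return np.array([sigma_n * v + j_s_model(v, eps, b) for v in v_values])

    v_values = np.linspace(0.01, 2.0, 50)
    for b in (0.04, 0.1, 0.4, 1.0):                 # field values quoted in the caption of fig. 2
        j = iv_curve(v_values, eps=0.05, b=b)
        print(f"b = {b:4.2f}:  j(first) = {j[0]:.3f},  j(last) = {j[-1]:.3f}")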
we quantitatively studied the transport in a layered type - ii superconductor in a magnetic field in the presence of strong thermal fluctuations on the mesoscopic scale , beyond linear response . while in the normal state the dissipation involves unpaired electrons , in the mixed phase it takes the form of flux flow .
time dependent ginzburg - landau equations with a thermal noise describing the thermal fluctuations are used to describe the vortex - liquid regime and arbitrary flux flow velocities .
we avoid assuming the lowest landau level approximation , so that the approach is valid for arbitrary values of the magnetic field not too close to @xmath33 .
our main objective is to study layered high-@xmath0 materials , for which the ginzburg number characterizing the strength of thermal fluctuations is exceptionally high ; in the moving vortex matter the crystalline order is lost and the matter becomes homogeneous on a scale above the average inter - vortex distances .
this ceases to be the case at very low temperature at which two additional factors make the calculation invalid .
one is the validity of the gl approach ( strictly speaking valid not far from @xmath166 ) and the other is the effect of quenched disorder .
the latter becomes insignificant at elevated temperatures due to a very effective thermal depinning .
although sometimes motion tends to suppress fluctuations , they are still a dominant factor in flux dynamics . the nonlinear term in dynamics is treated using the renormalized gaussian approximation .
the renormalization of the critical temperature is calculated and is strong in layered high @xmath0 materials .
the results were compared to the experimental data on htsc .
our resistivity results are in good qualitative and even quantitative agreement with experimental data on yba@xmath1cu@xmath2o@xmath128 in strong electric fields .
let us compare the present approach with the widely used london approximation .
since we have not neglected higher landau levels , as is very often done in similar studies @xcite , our results should be applicable even for relatively small fields in which the london approximation is valid and used .
there is no contradiction since the two approximations have a very large overlap of applicability regions for strongly type - ii superconductors .
the gl approach for constant magnetic induction works for @xmath167 , while the london approach works for @xmath168 .
similar methods can be applied to other electric transport phenomena like the hall conductivity and thermal transport phenomena like the nernst effect .
the results , at least in 2d , can be in principle compared to numerical simulations of langevin dynamics .
efforts in this direction are under way .
we are grateful to i. puica for providing details of the experiments and to f. p. j. lin and b. ya .
shapiro for discussions and encouragement . b. d. tinh thanks peking university for its hospitality and d. li thanks national chiao tung university for its hospitality .
this work is supported by nsc of r.o.c . #
98 - 2112-m-009 - 014-my3 and moe atu program , and d. li is supported by national natural science foundation of china ( grant number 90403002 and 10974001 ) .
in this appendix we outline the method for obtaining the green function in strong electric field for the linearized equation of tdgl ( [ gfdef ] ) .
the green function is a gaussian , @xmath169 $ \, g_{k_{z}}\left ( x , y , \tau ^{\prime \prime } \right ) $ ( eq . ( [ ansatz ] ) ) , where @xmath170 with @xmath171 . here @xmath172 is the heaviside step function , and @xmath173 and @xmath174 are coefficients . substituting the ansatz ( [ ansatz ] ) into eq .
( [ gfdef ] ) , one obtains the following conditions : @xmath175 $ {} + \frac{1}{\beta } + \frac{\partial _ { \tau } c}{c} = 0 $ ( eq . ( [ cond1 ] ) ) and @xmath176 ( eq . ( [ cond2 ] ) ) . eq .
( [ cond2 ] ) determines @xmath174 , subject to an initial condition @xmath177 , @xmath178 , while eq .
( [ cond1 ] ) determines @xmath173 : @xmath179 , which contains the factor $ \sinh ^{-1}\left ( \frac{b\tau ^{\prime \prime } } { 2 } \right ) $ ( eq . ( [ c.fun ] ) ) .
the normalization is dictated by the delta - function term in the definition of the green 's function , eq .
( [ gfdef ] ) .
from tdgl , we obtain in the case @xmath105 : @xmath180 . the superfluid density at @xmath109 and @xmath104 can be obtained by taking @xmath18 and @xmath181 to the zero limit in the above equation : @xmath182 . performing the integration by parts , one obtains @xmath183 . in the case without external electric field ( or @xmath184 ) , the equation obtained from tdgl should approach the thermodynamic result . in the thermodynamic method , we evaluate the partition function @xmath185 , where @xmath186 is defined in eq .
( [ boltz ] ) .
the superfluid density in the thermodynamic approach at the phase transition point is @xmath187 , where @xmath188 .
we also remark that in the thermodynamic approach , if we use the gaussian approximation , we obtain exactly the same equation as eq .
( [ gapequation ] ) without electric field , derived from tdgl after using eq .
( [ cutoffrelation ] ) .
for a review , see g. blatter , m. v. feigelman , v. b. geshkenbein , a. i. larkin , and v. m. vinokur , rev .
mod . phys . * 66 * , 1125 ( 1994 ) ; y. yeshurun , a. p. malozemoff , and a. shaulov , rev .
mod . phys . * 68 * , 911 ( 1996 ) . | the time - dependent ginzburg - landau approach is used to investigate nonlinear response of a strongly type - ii superconductor .
the dissipation takes a form of the flux flow which is quantitatively studied beyond linear response . thermal fluctuations , represented by the langevin white noise , are assumed to be strong enough to melt the abrikosov vortex lattice created by the magnetic field into a moving vortex liquid and marginalize the effects of the vortex pinning by inhomogeneities .
the layered structure of the superconductor is accounted for by means of the lawrence - doniach model .
the nonlinear interaction term in dynamics is treated within gaussian approximation and we go beyond the often used lowest landau level approximation to treat arbitrary magnetic fields .
the i - v curve is calculated for arbitrary temperature and the results are compared to experimental data on high-@xmath0 superconductor yba@xmath1cu@xmath2o@xmath3 . |
[Media caption: Radovan Karadzic listened intently as the verdict and sentence was read out]
Former Bosnian Serb leader Radovan Karadzic has been convicted of genocide and war crimes in the 1992-95 Bosnian war, and sentenced to 40 years in jail.
UN judges in The Hague found him guilty of 10 of 11 charges, including genocide over the 1995 Srebrenica massacre.
Karadzic, 70, is the most senior political figure to face judgement over the violent collapse of Yugoslavia.
His case is being seen as one of the most important war crimes trials since World War Two.
He had denied the charges, saying that any atrocities committed were the actions of rogue individuals, not the forces under his command.
The trial, in which he represented himself, lasted eight years.
[Media caption: Chairman of the Presidency of Bosnia Bakir Izetbegovic said it was a verdict of "tremendous importance"]
The current president of the Bosnian Serb Republic, Milorad Dodik, condemned the verdict.
"The West has apportioned blame to the Serbian people and that guilty cliche was imposed on all the decision-makers, including in this case today... Karadzic," he said at a ceremony to commemorate the anniversary of the start of Nato air strikes against Yugoslavia in 1999.
"It really hurts that somebody has decided to deliver this verdict in The Hague exactly today, on the day when Nato decided to bomb Serbia... to cause so much catastrophic damage and so many casualties," Mr Dodik added.
At the scene: Paul Adams, BBC News, The Hague
Radovan Karadzic had said no reasonable court would convict him. But listening to Judge Kwon, it was hard to see how any reasonable court could not convict him.
Mr Karadzic listened intently, the corners of his mouth pulled down in a look of permanent disgust and, just perhaps, disbelief. By the end of an hour and 40 minutes, it was obvious what was coming.
There's a strong sense of satisfaction here that one of the chief architects of Bosnia's bloody dismemberment has finally been found guilty. The court's work is almost done.
But all eyes now will be on the fate of Karadzic's main general, Ratko Mladic. His name came up a great deal during Judge Kwon's summation, particularly in regard to the massacre of Srebrenica.
It will be astonishing if Gen Mladic doesn't face a similar verdict and sentence.
Meanwhile, some relatives of victims expressed disappointment at the outcome.
"This came too late," said Bida Smajlovic, whose husband was killed at Srebrenica.
"We were handed down a verdict in 1995. There is no sentence that could compensate for the horrors we went through or for the tears of only one mother, let alone thousands," she was quoted as saying by Reuters news agency.
Karadzic's lawyer said he would appeal, a process that could take several more years.
[Image caption: Many Bosnians have been following the trial closely]
"Dr Karadzic is disappointed and astonished. He feels that he was convicted on inference instead of evidence and will appeal [against] the judgement," Peter Robinson told journalists.
Karadzic faced two counts of genocide.
He was found not guilty of the first, relating to killing in several Bosnian municipalities.
But he was found guilty of the second count relating to Srebrenica, where Bosnian Serb forces massacred more than 7,000 Bosnian Muslim men and boys.
"Karadzic was in agreement with the plan of the killings," Judge O-Gon Kwon said.
[Media caption: What happened at Srebrenica? Explained in under two minutes]
The massacre happened in July 1995 when Srebrenica, an enclave besieged by Bosnian Serb forces for three years, was overrun. The bodies of the victims were dumped in mass graves.
Karadzic was also found guilty of crimes against humanity relating to the siege and shelling of the city of Sarajevo over several years which left nearly 12,000 people dead.
The judge said he had significantly contributed to a plan which emanated from the leadership and whose primary purpose was to spread terror in the city.
Charges
Genocide
Count 1 - genocide (in municipalities of Bratunac, Foca, Klyuc, Prijedor, Sanski Most, Vlasenica and Zvornik) - not guilty
Count 2 - genocide (in Srebrenica) - guilty
Crimes against humanity
Count 3 - persecutions - guilty
Count 4 - extermination - guilty
Count 5 - murder - guilty
Count 7 - deportation - guilty
Count 8 - inhumane acts (forcible transfer) - guilty
Violations of the laws or customs of war
Count 6 - murder - guilty
Count 9 - terror (in Sarajevo) - guilty
Count 10 - unlawful attacks on civilians (in Sarajevo) - guilty
Count 11 - taking hostage of UN observers and peacekeepers - guilty
Mr Karadzic was also found guilty of orchestrating a campaign known as "ethnic cleansing" of non-Serbs from the territory of the breakaway Bosnian Serb republic, in which hundreds of thousands were driven from their homes.
He would only be expected to serve two-thirds of his sentence. His time spent in detention - slightly more than seven years - will count towards the total.
Top UN human rights official Zeid Ra'ad al-Hussein welcomed the verdict as "hugely significant".
He said the trial "should give pause to leaders across Europe and elsewhere who seek to exploit nationalist sentiments and scapegoat minorities for broader social ills".
At least 100,000 people in total died during fighting in the Bosnian war. The conflict lasted nearly four years before a US-brokered peace deal brought it to an end in 1995.
Gen Ratko Mladic, who commanded Bosnian Serb forces, is also awaiting his verdict at The Hague.
[Image caption: Radovan Karadzic in August 1995]
Karadzic Timeline
1945: Born in Montenegro
1960: Moves to Sarajevo
1968: Publishes collection of poetry
1971: Graduates in medicine
1983: Becomes team psychologist for Red Star Belgrade football club
1990: Becomes president of Serbian Democratic Party
1990s Political leader of Bosnian Serbs
2008: Arrested in Serbia
2009: Trial begins at The Hague
2016: Guilty verdict, sentenced to 40 years
|||||
The former Bosnian Serb leader Radovan Karadžić has been found guilty of genocide over the 1995 massacre in Srebrenica and sentenced to 40 years in jail.
The key verdict from a United Nations tribunal in The Hague was delivered 18 months after a five-year trial of Karadžić, accused of being one of the chief architects of atrocities during the 1992-95 Balkans war.
The 70-year-old, who insisted his actions were aimed at protecting Serbs during the Bosnian conflict, was found guilty of 10 out of the 11 charges he faced at the international criminal tribunal for the former Yugoslavia.
Prosecutors said that Karadžić, as political leader and commander-in-chief of Serb forces in Bosnia, was responsible for some the worst acts of brutality during the war, including the 44-month deadly siege of Sarajevo and the 1995 massacre of more than 8,000 Bosnian men and boys in the Srebrenica enclave.
Speaking after the verdicts, Serge Brammertz, the tribunal’s chief prosecutor, said: “Moments like this should also remind us that in innumerable conflicts around the world today, millions of victims are now waiting for their own justice. This judgement shows that it is possible to deliver it.”
The presiding ICTY judge delivering the ruling, O-Gon Kwon, cleared Karadžić of one charge: responsibility for genocide in attacks on other towns and villages where Croats and Bosnians were driven out.
On Srebrenica, Kwon said: “On the basis of the totality of the evidence, the [ICTY] finds that the accused shared the expanded common purpose of killing the Bosnian Muslim males of Srebrenica and that he significantly contributed to it.”
Karadžić was the only person with the power to intervene and protect those being killed, Kwon said. “Far from that,” he said, “the accused ordered Bosnian male detainees to be transferred elsewhere to be killed. With full knowledge of the ongoing killing, Karadžić declared a state of war in Srebrenica.”
Karadžić’s other convictions were for five counts of crimes against humanity and four of war crimes, including taking UN peacekeepers hostage, deporting civilians, murder and attacks on combatants.
During the 100-minute verdict and sentencing, Karadžić sat impassively, not in the dock but on the defence bench, as he opted throughout the five-year trial to act as his own lead counsel.
He smiled and nodded to one or two familiar faces from the Serb press in the gallery, but hardly glanced at the public gallery, which was packed with survivors and victims’ family members, mostly women grieving lost sons and husbands. They obeyed the tribunal instructions to stay quiet throughout the proceedings, though there was a quiet grunt of disappointment when Karadžić was acquitted of one of the genocide charges.
The only time Karadžić appeared nervous was when he stood to receive sentence, his arms stiff by his side. His lawyer said he would appeal.
Outside the tribunal, there was anger that Karadžić did not receive a life sentence. “Is the tribunal not ashamed? Do the Bosnian Muslims and Bosnian Croats not have a right to justice? He got 40 years. That’s not enough,” said Kada Hotic, one of the bereaved mothers from Srebrenica.
The verdicts are the most significant moment in the 23-year existence of the ICTY, and among the last it will deliver. Set up in 1993, the court has so far indicted 161 suspects. Of those, 80 were convicted and sentenced, 18 acquitted, 13 sent back to local courts and 36 had the indictments withdrawn or died.
The former psychiatrist and charismatic politician, still with his characteristic bouffant hairstyle, is the most senior Balkans leader to face judgment at the ICTY. The former Serbian president Slobodan Milošević died in his cell in The Hague in 2006 before judges could deliver their verdicts on his trial.
Apart from Karadžić, three suspects remain on trial, including his military chief, Ratko Mladić and Serb ultranationalist Vojislav Šešelj. Eight cases are being appealed and two defendants are to face retrials. The judgment in Šešelj’s case is scheduled for next Thursday.
Karadžić was indicted along with Mladić in 1995 but evaded arrest until he was captured in Belgrade, Serbia, in 2008. At the time he was posing as New Age healer Dr Dragan Dabic, and was disguised by a thick beard and shaggy hair.
More than 20 years after the guns fell silent in Bosnia, Karadžić is still considered a hero in Serb-controlled parts of the country, and the verdict is unlikely to help reconcile the enduring deep divisions in Bosnia and the region. ||||| (CNN) Radovan Karadzic, nicknamed the "Butcher of Bosnia," was sentenced to 40 years in prison Thursday after being found guilty of genocide and other crimes against humanity over atrocities that Bosnian Serb forces committed during the Bosnian War from 1992 to 1995.
A special U.N. court in The Hague, Netherlands, found the 70-year-old guilty of genocide over his responsibility for the Srebrenica massacre, in which more than 7,000 Bosnian Muslim men and boys were executed by Bosnian Serb forces under his command.
Karadzic, former leader of the breakaway Serb Republic in Bosnia, is the highest-ranking political figure to have been brought to justice over the bitter ethnic conflicts that erupted with the collapse of the former Yugoslavia.
After the verdict, thousands of Serbian ultranationalist supporters of Karadzic took to the streets of the Serbian capital of Belgrade, carrying images of the former leader and saying he was being punished for being a Serb.
On the streets of Belgrade, people voiced mixed reactions to the sentence.
"He was given 40 years, did not get a life? So it's a disaster," one man said.
Another said, "They should charge other people, not Radovan Karadzic. He defended Serbian people, sacrificed himself for Serbian people, but authorities in Serbia sent him to Hague."
Prosecutor Serge Brammertz said in a statement that the verdict and sentence "will stand against continuing attempts at denying the suffering of thousands and the crimes committed in the former Yugoslavia."
"Moments like this should also remind us that in innumerable conflicts around the world today, millions of victims are now waiting for their own justice," he added. "This judgment shows that it is possible to deliver it."
U.N. Secretary General Ban Ki-moon hailed the verdicts as a "historic" result for the people of the former Yugoslavia and for international criminal justice, while the U.N. high commissioner for human rights, Zeid Ra'ad Al Hussein, said they exposed Karadzic as "the architect of destruction and murder on a massive scale."
Karadzic, a former psychiatrist, was found guilty of 10 of the 11 charges against him, including extermination, persecution, forcible transfer, terror and hostage taking.
In a statement, the tribunal said it found Karadzic had committed the crimes through his participation in four "joint criminal enterprises," including an overarching plot from October 1991 to November 1995 "to permanently remove Bosnian Muslims and Bosnian Croats from Bosnian Serb-claimed territory."
The trial was heard by the International Criminal Tribunal for the former Yugoslavia -- an ad hoc court the United Nations established to prosecute serious crimes committed during the conflicts in the former Yugoslavia.
John Dalhuisen, Amnesty International's director for Europe and Central Asia, said the results confirmed Karadzic's "command responsibility for the most serious crimes under international law carried out on European soil since the Second World War."
The Croatian government hailed the verdicts Thursday -- which came at the end of an eight-year trial -- as welcome but long overdue, calling them "the minimum, for which the victims and their families unfortunately waited too long."
Genocide in Srebrenica
In July 1995, tens of thousands of Bosnian Muslims had sought refuge in the spa town of Srebrenica -- designated a U.N. "safe area" -- as the Bosnian Serb army marched toward them.
But with only about 100 lightly equipped Dutch peacekeepers there for protection, the town was overrun by Serb forces.
Delivering the verdicts, presiding Judge O-Gon Kwon said the tribunal found that about 30,000 Bosnian Muslim women, children and elderly men had been removed to Muslim-held territory by Bosnian Serb forces acting on Karadzic's orders.
Karadzic's forces then detained the Muslim men and boys in a number of locations before taking them to nearby sites, where they were executed by the thousands.
The tribunal found that Karadzic was the only person within the Serb Republic with the power to intervene to prevent them being killed, but instead he had personally ordered that detainees be transferred elsewhere to be killed.
It found he shared with other Bosnian Serb leaders the intent to kill every able-bodied Bosnian Muslim male from Srebrenica -- which amounted "to the intent to destroy the Bosnian Muslims in Srebrenica," the tribunal said in a statement.
Civilians targeted in Sarajevo
Other charges against Karadzic stemmed from the infamous siege of Sarajevo, from 1992 to 1995, during which more than 11,000 people died.
The judge said Bosnian Serb forces had consistently and deliberately targeted civilians in Sarajevo, acts that constituted war crimes and crimes against humanity.
"Sarajevo civilians were sniped while fetching water, walking in the city, and when using public transport. Children were sniped at while playing in front of their houses, walking with their parents or walking home from school," the judge said.
He said Karadzic was "consistently informed" about the targeting of civilians, had allowed it to intensify and used it to exert pressure in pursuit of his political goals.
The judge said the sniping attacks on the civilian population, which instilled extreme fear among the city's residents, could not have occurred without Karadzic's support, and the only reasonable inference was that the former Serb leader had intended murder, unlawful attacks on civilians and terror.
U.N. peacekeepers taken hostage
The tribunal also found Karadzic guilty of taking U.N. peacekeepers hostage in May and June 1995, with the judge calling him a "driving force" behind a plot to put the hostages in key military and other strategic locations to deter NATO airstrikes on the targets.
The judge said the U.N. personnel were also threatened during their detention, with the goal of bringing a halt to the strikes altogether.
Karadzic was found not guilty on one of the counts of genocide, relating to crimes against Bosnian Muslims and Croats in "municipalities" throughout Bosnia-Herzegovina.
The tribunal found that Serb forces had killed, raped, forcibly displaced and tortured the other ethnic groups in the municipalities, and found Karadzic guilty of persecution, extermination, deportation, forcible transfer and murder in relation to crimes committed there.
However, the judge said, the court was unable to identify or infer genocidal intent, and therefore couldn't establish beyond a reasonable doubt that genocide had occurred there.
Bizarre path to justice
Karadzic, who had denied the charges against him -- blaming any war crimes committed on rogue elements -- has the right to appeal.
He is also entitled to credit for the time he has spent in custody since his arrest in July 2008.
His road to The Hague has been a long one, marked by bizarre twists. He went into hiding in 1996 and was not arrested until 12 years later. When he emerged, he was heavily disguised by a white beard, long hair and spectacles.
Radovan Karadzic used a disguise of a beard and glasses while in hiding.
Serb officials revealed that Karadzic had been hiding in plain sight -- working in a clinic in Belgrade, the capital of Serbia, under a false identity as a "healer."
He had also managed to publish a book of poetry during his time on the run.
He was extradited to The Hague to face charges and pleaded not guilty. He initially tried to represent himself, leading to delays in his trial, but eventually was forced to accept an attorney.
Thursday's verdict comes more than a year after the end of his trial in 2014. The 500-day trial included evidence from 586 witnesses and more than 11,000 exhibits. | – A UN tribunal has found Radovan Karadzic, aka the "Butcher of Bosnia," guilty of genocide, war crimes, and crimes against humanity and sentenced the 70-year-old to 40 years in prison. The International Criminal Tribunal for the former Yugoslavia found the former Bosnian Serb leader "criminally responsible" for the 3.5-year siege of Sarajevo that killed 12,000 and for the slaying of 8,000 Muslim men and boys at Srebrenica during the Bosnian war, reports the Guardian. The New York Times says the atrocities "were part of the most severe war crimes since World War II." The tribunal has previously convicted and sentenced 80 people; three others are on trial, including Karadzic's military chief. Karadzic had pleaded not guilty to 11 charges, including two counts of genocide, noting he had tried to protect Serbs and was a "true friend to Muslims," per the Times. But after a 491-day trial, judge O-Gon Kwon said Thursday that Karadzic was "consistently informed" about Bosnian Serb forces targeting civilians in Sarajevo and "in agreement with the plan of the killings" at Srebrenica, report the BBC and CNN. He was found guilty of all charges but one: a genocide charge related to a campaign to expel Bosnian Muslims and Croats from traditionally-Serb areas. However, he was convicted of persecution, extermination, deportation, forcible transfer, and murder in that case. |
Liam Neeson made a surprise appearance in the cold open to scold Vladimir Putin for moving troops into Crimea: "I hate it when things are taken."
Lena Dunham brought nudity, Scandal and more Girls to Saturday Night Live.
The show's money shot came when the first-time host debuted Girl, a spiritual sequel to SNL's much-loved Girls parody from last year's season opener. In Girl, Dunham and Girls costar Adam Driver (Taran Killam) told the story of Adam and Eve -- Lena Dunham style.
Dunham's Eve pestered Adam to define their relationship ("are we like, man and wife?") and became angry on feminist grounds when he reassured her that she was part of him ("Literally. God made you from my rib, kid."). Vanessa Bayer reprised her role as Shoshanna, playing the snake in the Garden of Eden story and convincing Eve to eat the infamous apple.
Eve's response when God scolded her for the sin?
"Can you please not apple shame me right now? Seriously, I know I committed original sin, but at least it's original."
SNL totally hit it out of the park with its cold open, where Liam Neeson made a surprise appearance as himself to help President Obama (Jay Pharoah) condemn Russian President Vladimir Putin for moving troops into Crimea.
"Recently, I got a very disturbing call. Crimea had been taken. I hate it when things are taken," Neeson said, before speaking into the camera to address Putin directly.
"Mr. Putin. Vladimir. I've never met you. I don't have experience in international diplomacy. But what I do have is a very particular set of skills. Skills that would make me a nightmare for someone like you," Neeson said, channeling his Taken character. "By which I mean I'm an actor. In Hollywood. With a lot of connections."
New castmember Sasheer Zamata had her biggest role on SNL yet, playing Kerry Washington's Scandal character Olivia Pope -- an interesting choice as Washington helped SNL mock the show's lack of diversity back in November.
In the sketch, Olivia assigned impossibly difficult tasks to her team -- who all were unfazed by the demands. That's except for the newest member (Dunham), who got hung up on the logistics and asked endless questions about how they were going to accomplish these impossible tasks.
Dunham's opening monologue got really awkward when castmembers -- emboldened by her racy HBO show -- kept wanting to tell her about their sex lives.
SNL also had plenty of fun with the Oscars, with Killam playing newly crowned best actor winner Matthew McConaughey. Wearing a white tux, McConaughey spoke in gibberish in the style of his much-talked-about Oscars speech: "Don't congratulate me. Congratulate the man I was a week ago. Congratulate the man I'm chasing. Congratulate the man who never existed."
Later, Pharoah starred in Pimpin' Pimpin' Pimpin With Katt Williams, where the 4/20 friendly comedian welcomed Liza Minnelli (Dunham) to ask " How did you not whoop Ellen's ass?" when the Oscars host joked she was a transvestite, and to tell Harrison Ford (Killam) the actor's earring really wasn't working for him: "Seeing Indiana Jones with an earring is like seeing Darth Vader in Uggs." The kicker? Noel Wells reprised her role as Dunham.
The episode also saw cameos from Fred Armisen (playing one of Putin's friends growing up) and Jon Hamm as the guest on a teen girls talk show.
SNL is taking a few weeks off, but returns March 29 with host Louis C.K.
SNL airs at 11:30 p.m. ET/PT on NBC.
Email: Aaron.Couch@THR.com
Twitter: @AaronCouch ||||| The hype that surrounds the announcement of an interesting “SNL” host like Lena Dunham is always odd. Usually the extra noise will come from the fans of that host, as opposed to people who watch “SNL” on a weekly basis. And then the show starts and … well, there’s Lena Dunham dressed as a teenager for a sketch like any other host would be doing. And Dunham was fine as a host – some of her sketches were funny, some weren’t; like pretty much any other show – though I suspect that her performance will be maddeningly over-analyzed because it’s Lena Dunham and that kind of thing seems to happen to her. (Oh, see, Nikki Finke has already done just that, tweeting, “One of the worst hosts of one of worst SNL shows.” I mean, that’s insanity.) Anyway, let's take a look at this week's Scorecard, shall we?
Sketch of the Night
”Ohh Child” (Killam, Thompson, Strong, Dunham, Wheelan) First of all, it really is impossible to get a good car sing-along going when GPS is constantly interrupting the song. I’m not always a fan of the tacked-on sketch ending that has nothing to do with the rest of the sketch – in this case, the joke of Dunham constantly being interrupted by the GPS turns into the fact that that the foursome is going to kill Brooks Wheelan. But, whatever, this one worked.
Score: 9.0
The Good
”What’s Poppin’” (Thompson, Pharoah, O’Brien, Bryant, Strong, Dunham) Aidy Bryant’s sad delivery of the line, “Hey, my flute amp,” may have been the funniest non-McConaughey moment of the entire show. And it’s great when Mike O’Brien gets something on the air – it’s just a spectacle of weird and this certainly qualifies. Also, “Tim” is a fantastic rap name.
Score: 7.5
”Weekend Update” (Strong, Jost, Killam, Bayer, Armisen) First, Taran Killam’s Matthew McConaughey was a highlight of the show. Killam nails McConaughey’s manic digressions and wisely doesn’t overdo the more easily parodied cartoonish elements of McConaughey’s persona. And, look, I’m a fan of Armisen and Bayer’s “friends of a tyrant” characters, but with a crowded enough cast already (I wrote about this problem this week), it was a little odd seeing Armisen pop back up for a character that wasn’t 100 percent necessary to see again.
Colin Jost was better than last week, but for whatever reason he's not being allowed to do something that would show off his personality. (Since he's the co-head writer, perhaps this is his own decision.) He reminds me of a backup quarterback who has just entered the game and has been told to just hand the ball off to the running back until he feels comfortable. Well, eventually he's going to have to throw a pass downfield.
Score: 7.0
”Scandal” (Zamata, Dunham, Bennett, Pharoah, McKinnon, Strong, Killam) So … people who love “Scandal” seemed to really like this sketch. I do not watch “Scandal” so I had pretty much no idea what was going on. Regardless, there were still a couple of funny jokes in there for people like me.
Score: 7.0
”Cold Open: Obama Ukraine Address” (Pharoah, Neeson) Liam Neeson is really starting to own this whole “I’m Mr. Tough Guy” persona. I think part of Neeson believes that Putin might see this sketch and actually think twice about his actions. Actually, at this point, Neeson might be right in thinking this way.
Score: 6.5
”Biblical Movie” (Dunham, Killam) There was little chance that we were going to get through the night without seeing Taran Killam's Adam Driver – which is good, because Killam does a great Adam Driver. I mean, I get it, "SNL" had to do some sort of "Girls" parody at some point in the evening (or they didn't have to, I guess) and this was fine. Though, this feels more like one of those sketches that I'm supposed to like – hey, it skewered a contemporary example of popular culture! – than a sketch that I actually do like.
Score: 6.5
The Bad
”What Are You Even Doing” (Pedrad, Dunham, Moynihan, Mooney, Hamm) Well, Jon Hamm showed up, so that’s fun. You know, I get the feeling that his look of “What am I doing here?” wasn’t 100 percent acting, in that, “Of all of the sketches I could be used for, this is the one you choose?” (Kind of incredibly, all of the cameo appearances aside, Jon Hamm hasn’t hosted “SNL” since October of 2010.) I didn’t love this sketch, but I hope they try it again at some point. It just feels like a recurring sketch with a lot of potential that isn’t quite there yet. (Well, except for Bobby Moynihan, who looks like he’s been playing that part for ten years.)
Score: 5.5
”The Katt Williams Show” (Pharoah, Wheelan, Dunham, Killam, Wells) Yeah, I kind of had a feeling that with Dunham hosting that it would be a rough night for Noël Wells. And, here, she got to do her Lena Dunham impression, which just seemed a little odd. Dunham was fine as Liza Minnelli – she perhaps hammed it up a bit too much, but it’s not like Dunham is known for her ability to do impressions, so good on her for even attempting this. Taran Killam’s unfocused Harrison Ford is, sadly, about right. But, in the end, this all just felt like “an excuse to do impressions.”
Score: 5.0
”Lena Dunham Monologue” (Dunham, Bayer, Bryant, Moynihan, McKinnon) Dunham seemed nervous at first – which is fair! – then seemed to settle into her monologue. The problem is the concept of the cast revealing their sex secrets to Dunham went nowhere and actually made little sense.
Score: 5.0
”Concert Tickets” (Bennett, Mooney, Wheelan) Honestly, this just feels like a lesser version of some of the other shorts that Bennett and Mooney have put on throughout the season. It’s like, here’s our quirky concept (in this case, Will Smith tickets); here’s our monotone banter; here’s where we talk to a normal person who is confused by all of this (in this case, Brooks Wheelan). I like Bennett and Mooney and these two have come the closest out of all of the new cast members in actually making a real impact on the show, I just wish they’d do something new.
(Not online due to song rights issues.)
Score: 3.0
The Ugly
”Jewelry Party” (Strong, O’Brien, Bryant, Dunham, Pedrad, Bayer) Boy, this was a dud. It’s like someone decided that there needed to be a sketch about “issues,” but forgot to add any comedy. Then, at the last minute, someone realized there wasn’t any comedy so it was decided that Cecily Strong would do “a voice.” It was really weird: Instead of satirizing the goofy concept of “men’s rights,” they put poor Mike O’Brien in the sketch and he comes off as a nice guy (it’s impossible for O’Brien not to come off as a nice guy) while everyone tells him he’s awful. Where’s the joke? It was interesting to see “SNL” get somewhat political, but this feels like a huge missed opportunity.
Score: 1.5
Average Score for this Show: 5.77
· Lady Gaga 6.06
· Melissa McCarthy 6.03
· Edward Norton 5.91
· Paul Rudd 5.90
· Drake 5.82
· Jimmy Fallon 5.80
· Lena Dunham 5.77
· John Goodman 5.76
· Josh Hutcherson 5.75
· Jonah Hill 5.73
· Bruce Willis 5.68
· Kerry Washington 5.60
· Jim Parsons 5.51
· Tina Fey 5.35
· Miley Cyrus 5.20
Mike Ryan is senior writer for Huffington Post Entertainment. You can contact him directly on Twitter. Click below for this week's "SNL," Not Ready For Primetime Podcast featuring Mike Ryan and Hitfix's Ryan McGee.
If you would like to subscribe to the podcast, you can do that here. | – Lena Dunham took to the stage of Saturday Night Live last night, working her way through an Opening Monologue interrupted by castmembers oversharing about their sex lives, and spoofing Girls with a Garden-of-Eden themed movie. "Can you please not apple shame me right now?" Dunham's Eve asks God. "Seriously, I know I committed original sin, but at least it's original." The show "hit it out of the park with its Cold Open," writes Aaron Couch at the Hollywood Reporter, with a Liam Neeson cameo in which he schools Jay Pharoah's Barack Obama on how to handle Vladimir Putin. Other highlights included an improved turn from Colin Jost at the "Weekend Update" desk, a spoof on Matthew McConaughey's weird Oscars speech, and a breakout turn from newcomer Sasheer Zamata as Scandal's Olivia Pope. Mike Ryan has his scorecard over at the Huffington Post, in which he concludes Dunham, though much hyped, did "fine." |
Researcher Claudia Fugazza demonstrates the “do-as-I-do” method with her dog. (Photo by Mirko Lui)
Do you remember what you did last year on Thanksgiving? If so, that’s your episodic memory at work — you’re remembering an experience that happened at a particular time, in a particular place, maybe with particular people, and probably involving particular emotions.
Humans have episodic memory, and that’s pretty easy to prove, because we can use our words to describe the past events we recall. Demonstrating that animals have it is much more difficult.
But now researchers in Hungary say they’ve found evidence that dogs have episodic-like memory (they added the “like” because they acknowledge they cannot get inside a dog’s head to absolutely confirm this), specifically when it comes to remembering what their owners do. Even more interesting is that they can remember these things even when they don’t know they’ll have to remember them.
To determine this, the researchers put 17 pet dogs through a multistep training process designed to first make them memorize an action, then trick them into thinking they wouldn’t need to do it. The dogs’ performance was described in a study published Wednesday in Current Biology.
First the dogs were trained in what is known as the “do as I do” method. It involves a dog’s owner demonstrating an action — say, touching a traffic cone or an umbrella — and then telling the dog to “Do it!” The pups’ successful imitations were rewarded by treats. Once they had mastered that trick, the owners switched things up on them. They performed an action, but instead of asking the dogs to imitate it, the humans told the pets to lie down. After several rounds of that, all the dogs eventually were lying down spontaneously — a sign, the authors wrote, that they’d lost any expectation that they were going to be told to imitate, or “Do it!”
“We cannot directly investigate what is in the dog’s mind,” lead author Claudia Fugazza, an ethologist at the University of Eotvos Lorand in Budapest, said in an interview. “So we have to find behavioral evidence of what they expect or not.”
Next, the owners switched things up on the dogs yet again. They’d do the action, and the dogs would lie down, and then the humans would totally violate the poor pooches’ expectations by waiting one minute and saying, “Do it!” The owners made the same command after waiting an interval of one hour.
This was the test: Had the dogs tucked the memory of their owners’ actions somewhere in their mind, and could they dig it out?
After the one-minute interval, about 60 percent of the dogs imitated the human action, even though they probably didn’t expect to be asked to. After the one-hour wait, about 35 percent imitated the action. Here’s a video demonstration:
A research group in Budapest ran a series of tests on dogs to see if they could remember certain actions after a 1 minute period of time. (Claudia Fugazza, Ákos Pogány, and Ádám Miklós)
“What’s lovely about the study is the way it shows dogs remembering an action that they’d seen at a later time — without doing it themselves,” Alexandra Horowitz, who runs the Dog Cognition Lab at Barnard, wrote in an email. “It speaks to what might be on their mind: that they are remembering episodes that they witness, not just things that they are the subjects of.”
Fugazza and colleagues had previously carried out a variation on this study that didn’t involve messing with the dogs’ expectations. In that one, the dogs were not taught to lie down, but just to “Do it!” — which the researchers say means the dogs expected to be told to imitate. The canine participants in that study aced that test, with nearly all imitating the human actions even after a one-hour delay.
The dogs’ much lower success in the current study “also suggests they were really using their episodic-like memory, because episodic memory in humans is known to decay faster, too,” Fugazza said.
Horowitz said she was perplexed, though, by the fact that more than one-third of the dogs studied didn’t imitate even after just one minute. “We wouldn’t expect some dogs to remember past events and others not,” if episodic-like memory is an ability that can be generalized to a species, she said.
Clive Wynne, a behavioral scientist who directs the Canine Science Collaboratory at Arizona State University, also expressed doubt that the results clearly demonstrated episodic memory, or something like it, in dogs.
“Maybe a lot of experience of “Do it” has led the dogs to always pay at least some attention to what the human does in case they are asked to copy it,” he said. “I can think of lots of not very exciting explanations for these findings.”
The authors say the results provide the first demonstration of non-humans remembering complex events without practicing them during a waiting period, and that the findings provide groundwork for more research on episodic memory in dogs and other animals. Fugazza said she thinks the study also shows dogs are more like us than we might believe.
“I think every dog owner knows that dogs remember events. What is new and important is that dogs can remember events even if those events do not seem to be important,” she said.
“Dogs probably pay more attention to us than we think and observe us more than we think,” Fugazza added. “If dogs could talk, what would they say?”
||||| Dogs can remember what their owners have been up to, say researchers probing the nature of canine memory.
A team from Hungary have discovered that dogs are able to recall their owner’s actions, even when they were not specifically instructed to do so, suggesting that dogs, like humans, have what is known as “episodic memory” – memories linked to specific times and places.
“I think that dog owners more or less suspect, at least, that dogs can remember events from the past - what is novel is the type of memory they can use for doing so,” said Claudia Fugazza, lead author of the study from the MTA-ELTE Comparative Ethology Research Group. “This study shows that they can use a type of memory that allows them to recall and remember events that were not known to be important.”
To probe the nature of doggy memory, Fugazza and colleagues employed 17 dogs of various breeds that were used to being trained to copy their owner’s movements.
In the first step of the study, the dogs were exposed to six different objects and watched as their owner carried out a previously unseen action with one of three of the items, such as climbing on a chair or touching an umbrella. The dogs were then commanded to mimic the action with the words “do it!”.
In the second step, the dogs were trained to lie down after seeing their owner interacting with one of the six objects. The owners then carried out an unfamiliar action with one of the three items used in the first step. In response the dogs lay down, expecting a command to do so – but instead, after a delay, they were unexpectedly given the “do it!” command. The test was carried out twice for each dog, using different actions, once with a one minute delay and once with an hour’s delay.
The results, published in the journal Current Biology, reveal that while the dogs were more likely to imitate their owners when expected to do so, they were also able to imitate actions when the command was sprung upon them.
While 94.1% of dogs successfully mimicked their owner when expecting to do so, 58.8% correctly copied their owner when unexpectedly asked to “do it!” a minute later, and 35.3% correctly copied their owner when unexpectedly given the command an hour later.
The authors note that the rapid drop-off in success rates over time, together with evidence that the command was unexpected, shows that the dogs were recalling events that had not been imbued with importance – suggesting that they were relying on a type of episodic memory. The conclusion, they add, is backed up by the dogs’ ability to mimic actions despite having never physically done them before.
“Traditionally episodic memory has been linked to self-awareness but as we do not know whether dogs are self-aware we call it episodic-like memory,” said Fugazza.
Laurie Santos, an expert in canine cognition from Yale University who was not involved in the research, praised the design of the study and said the work offered new insights into canine memory.
But, she added, it was not clear whether dogs are capable of remembering events with the same level of detail and context that humans do. “When I think of my last holiday dinner, there’s a richness to that where I remember where I was, and when, and who I was with and so on,” she said. “It’s not yet clear from the current study if dogs have that richness, but the paper is a nice step to starting to test these important questions.” ||||| Your Dog Remembers Every Move You Make
(Image: Mirko Lui/Cell Press)
You may not remember what you were doing a few minutes ago. But your dog probably does.
A study of 17 dogs found they could remember and imitate their owners' actions up to an hour later. The results, published Wednesday in Current Biology, suggest that dogs can remember and relive an experience much the way people do.
That's probably not a big surprise to people who own dogs, says Claudia Fugazza, an author of the study and an animal behavior researcher at Eotvos Lorand University in Budapest. Fugazza owns a Czechoslovakian Wolfdog named Velvet.
"Most dog owners at least suspected that dogs can remember events and past experiences," she says.
But demonstrating this ability has been tricky.
Fugazza and her colleagues thought they might be able to test dogs' memory of events using a training method she helped develop called "Do As I Do." It teaches dogs to observe an action performed by their owner, then imitate that action when they hear the command: "Do it."
Do As I Do Training This video shows episodic-like memory in dogs, using the "Do As I Do" method.
"If you ask a dog to imitate an action that was demonstrated some time ago," Fugazza says, "then it is something like asking, 'Do you remember what your owner did?' "
In the study, a trained dog would first watch the owner perform some unfamiliar action. In one video the team made, a man strides over to an open umbrella on the floor and taps it with his hand as his dog watches.
Then the dog is led behind a partition that blocks a view of the umbrella. After a minute, the dog is led back out and lies on a mat. Finally, the owner issues the command to imitate: "Do it."
The dog responds by trotting over to the umbrella and tapping it with one paw.
In the study, dogs were consistently able to remember what their owners had done, sometimes up to an hour after the event.
The most likely explanation is that the dogs were doing something people do all the time, Fugazza says. They were remembering an event by mentally traveling back in time and reliving the experience.
Even so, the team stopped short of concluding that dogs have full-fledged episodic memory.
"Episodic memory is traditionally linked to self-awareness," Fugazza says, "and so far there is no evidence of self awareness in dogs and I think there is no method for testing it."
For a long time, scientists thought episodic memory was unique to people. But over the past decade or so, researchers have found evidence for episodic-like memory in a range of species, including birds, monkeys and rats.
Dogs have been a special challenge, though, says Victoria Templer, a behavioral neuroscientist at Providence College.
"They're so tuned into human cues, which can be a good thing," Templer says. "But it also can be a disadvantage and make it very difficult, because we might be cuing dogs when we're totally unaware of it."
The Budapest team did a good job ensuring that dogs were relying on their own memories without getting any unwitting guidance from their owners, says Templer, who wasn't involved in the study.
She says the finding should be useful to scientists who are trying to understand why episodic memory evolved in people. In other words, how has it helped us survive?
One possibility, Templer says, is that we evolved the ability to relive the past in order to imagine the future.
So when we're going to meet a new person, she says, we may use episodic memories of past encounters to predict how the next one might go.
"If I can imagine that I'm going to interact with some individual and that might be dangerous, I'm not going to want to interact with them," she says.
And that could help make sure the genes that allow episodic memories get passed along to the next generation. | – As researcher Claudia Fugazza tells NPR, "most dog owners at least suspected" their furry friends remember the times they've shared together. Now a study published Tuesday in Current Biology offers some scientific evidence to back that feeling up. Fugazza and her team used a training method she developed call "Do As I Do" to get dogs to mimic human actions, such as touching an umbrella. Their method is detailed in this video. In short, researchers showed dogs could replicate an action they weren't specifically trained to do, weren't shown was important, and hadn't ever physically done themselves, the Guardian reports. More than a third of dogs tested were able to replicate a human's action an hour later. And while Fugazza won't go so far as to say dogs have episodic memory—memory tied to a specific time and place—she does conclude they have "episodic-like memory." "Episodic memory is traditionally linked to self-awareness," Fugazza tells NPR. "So far there is no evidence of self awareness in dogs, and I think there is no method for testing it." However, the fact that only some of the dogs tested could replicate a human's action, even just a minute later, could be evidence that they aren't actually remembering things, the Washington Post reports. One expert says there are "lots of not very exciting explanations" for the findings in the study. (Video of polar bear petting dog goes viral. Now the bad news.) |
||||| Global warming is not expected to end anytime soon, despite what Breitbart.com wrote in an article published last week .
Though we would prefer to focus on our usual coverage of weather and climate science, in this case we felt it important to add our two cents — especially because a video clip from weather.com (La Niña in Pacific Affects Weather in New England ) was prominently featured at the top of the Breitbart article. Breitbart had the legal right to use this clip as part of a content-sharing agreement with another company, but there should be no assumption that The Weather Company endorses the article associated with it.
The Breitbart article – a prime example of cherry picking, or pulling a single item out of context to build a misleading case – includes this statement: "The last three years may eventually come to be seen as the final death rattle of the global warming scare."
In fact, thousands of researchers and scientific societies are in agreement that greenhouse gases produced by human activity are warming the planet’s climate and will keep doing so.
Along with its presence on the high-profile Breitbart site, the article drew even more attention after a link to it was retweeted by the U.S. House Committee on Science, Space, and Technology .
The Breitbart article heavily references a piece that first appeared on U.K. Daily Mail’s site.
Here’s where both articles went wrong:
CLAIM : "Global land temperatures have plummeted by one degree Celsius since the middle of this year – the biggest and steepest fall on record."
TRUTH : This number comes from one satellite-based estimate of temperatures above land areas in the lower atmosphere. Data from the other two groups that regularly publish satellite-based temperature estimates show smaller drops, more typical of the decline one would expect after a strong El Niño event.
Temperatures over land give an incomplete picture of global-scale temperature. Most of the planet – about 70 percent – is covered by water, and the land surface warms and cools more quickly than the ocean. Land-plus-ocean data from the other two satellite groups, released after the Breitbart article, show that Earth’s lower atmosphere actually set a record high in November 2016.
CLAIM : "It can be argued that without the El Niño (and the so-called "Pacific Blob") 2014-2016 would not have been record warm years." (David Whitehouse, Global Warming Policy Foundation, quoted by Breitbart)
TRUTH : NOAA data show that the 2014-16 El Niño did not even begin until October 2014. It was a borderline event until mid-2015, barely above the El Niño threshold. El Niño clearly added to the strength of the record global warmth observed since late 2015. However, if the El Niño spike is removed, 2016 is still the warmest year on record and 2015 the second warmest , according to climate scientist Zeke Hausfather (Berkeley Earth).
CLAIM : "Many think that 2017 will be cooler than previous years. Myles Allen of Oxford University says that by the time of the next big United Nations climate conference, global temperatures are likely to be no warmer than the Paris COP in 2015. This would be a strange thing to happen if, as some climate scientists have claimed, recent years would have been a record even without the El Niño." (David Rose, U.K. Daily Mail, quoted by Breitbart)
TRUTH : There is nothing unusual about a drop in global surface temperatures when going from El Niño to La Nina. These ups and downs occur on top of the long-term warming trend that remains when the El Niño and La Niña signals are removed. If there were no long-term trend, then we would see global record lows occurring during the strongest La Niña events. However, the last year to see global temperatures hit a record low was 1911, and the most recent year that fell below the 20th-century average was 1976.
For an even deeper dive on the science, we recommend the blog by our experts .
Finally, to our friends at Breitbart: The next time you write a climate change article and need fact checking help, please call. We're here for you. I'm sure we both agree this topic is too important to get wrong.
||||| The Weather Channel has been known to publish rubbish articles like “Woman Hit By Waves During Selfie” and “Before the Bikini: Vintage Beach Photos.” But even the channel’s thirsty editors can brutally own the internet’s bullshit vendors. A brand new segment calls out the climate change-denying reporting at Breitbart, and it’s two minutes of burns.
Reporter Kait Parker posted a compelling video today that condemns the conservative website for misrepresenting her own climate report as well as climate change data in general.
“Last week, Breitbart-dot-com published a story claiming global warming is nothing but a scare, and global temperatures were actually falling,” Parker says in her new video retort. “The problem is, they used a completely unrelated video about La Niña with my face in it to attempt to back their point.”
Parker is referencing a now-famous Breitbart story titled “Global Temperatures Plunge. Icy Silence From Climate Alarmists” that was published last week. The story was riddled with errors and erroneous scientific data. But what made matters worse is that the US House Committee on Space, Science, and Technology tweeted the story out as evidence of climate change denial.
The story alone wouldn’t have been so bad—Breitbart’s track record on science news is abysmal—but when the US House Committee tweeted out the link, it lent credibility to the story that it simply didn’t deserve. Scientists and politicians voiced their frustrations to no avail. And Parker defended her story with good old fashioned facts, helping climate change skeptics understand what’s really happening to planet Earth as outlets like Breitbart mislead millions of readers.
Then, the Weather Channel launched an all-out assault against Breitbart News, and the lies it’s spreading about climate change. The team published a line-by-line retort to the false Breitbart post, explaining how it got the facts wrong.
The skewered climate change article is hardly the only one of its ilk to grace the pages of Breitbart.com, though. The conservative website has been promoting misinformation about climate change for years, having published shady science stories with headlines like, “Rebutting Climate Alarmists With Simple Facts,” “Climate Change: The Hoax That Costs Us $4 Billion a Day,” and “Climate Change: The Greatest-Ever Conspiracy Against The Taxpayer.”
We’ve reached out to Breitbart for comment on the takedown and will update this post if we hear back. ||||| Global land temperatures have plummeted by one degree Celsius since the middle of this year – the biggest and steepest fall on record.
But the news has been greeted with an eerie silence by the world’s alarmist community. You’d almost imagine that when temperatures shoot up it’s catastrophic climate change which requires dramatic headlines across the mainstream media and demands for urgent action. But that when they fall even more precipitously it’s just a case of “nothing to see here”.
The cause of the fall is a La Nina event following in the wake of an unusual strong El Nino.
As David Rose reports:
Big El Ninos always have an immense impact on world weather, triggering higher than normal temperatures over huge swathes of the world. The 2015-16 El Nino was probably the strongest since accurate measurements began, with the water up to 3C warmer than usual. It has now been replaced by a La Nina event – when the water in the same Pacific region turns colder than normal. This also has worldwide impacts, driving temperatures down rather than up. The satellite measurements over land respond quickly to El Nino and La Nina. Temperatures over the sea are also falling, but not as fast, because the sea retains heat for longer. This means it is possible that by some yardsticks, 2016 will be declared as hot as 2015 or even slightly hotter – because El Nino did not vanish until the middle of the year. But it is almost certain that next year, large falls will also be measured over the oceans, and by weather station thermometers on the surface of the planet – exactly as happened after the end of the last very strong El Nino in 1998. If so, some experts will be forced to eat their words.
Yes indeed. I recommend this sober assessment of the situation written earlier this month by Dr. David Whitehouse, science editor of the Global Warming Policy Foundation.
With 2016 being predicted as a record warm year it is interesting to speculate on what the El Nino’s contribution will be, which is, in a word, everything. It can be argued that without the El Nino (and the so-called “Pacific Blob”) 2014-2016 would not have been record warm years.
He calls the cooling a “reality check”, noting:
Many think that 2017 will be cooler than previous years. Myles Allen of Oxford University says that by the time of the next big United Nations climate conference global temperatures are likely to be no warmer than the Paris COP in 2015. This would be a strange thing to happen if, as some climate scientists have claimed, recent years would have been a record even without the El Nino.
The last three years may eventually come to be seen as the final death rattle of the global warming scare. Thanks to what’s now recognised as an unusually strong El Nino, global temperatures were driven to sufficiently high levels to revive the alarmist narrative – after an unhelpful pause period of nearly 20 years – that the world had got hotter than ever before.
It resulted in a slew of “Hottest Year Evah” stories from the usual suspects. As I patiently explained at the time – here, here, and here – this wasn’t science but propaganda. If you’re a reader of Breitbart or one of the sceptical websites this will hardly have come as news to you. But, of course, across much of the mainstream media – and, of course, on all the left-leaning websites – these “Hottest Year Evah” stories were relayed as fact. And, inevitably, were often cited by a host of experts on Twitter as proof that evil deniers are, like, anti-science and totally evil and really should be thrown in prison for sacrificing the future of the world’s children by promoting Big-Oil-funded denialism.
This is why there is such an ideological divide regarding climate change between those on the left and those on the right. The lefties get their climate information from unreliable fake news sites like Buzzfeed.
Just recently, I had to school my former Telegraph colleague Tom Chivers, now of Buzzfeed, with a piece titled Debunked: Another Buzzfeed ‘Hottest Year Evah’ Story.
Perhaps I’m wrong: I don’t actually look at Buzzfeed, except when they’re doing something worthwhile like “Five Deadliest Killer Sharks” or “Ten Cutest Kitten Photos”. But I’ve a strong suspicion they haven’t yet covered this 1 degree C temperature fall because, well why would they? It just wouldn’t suit their alarmist narrative. | – The world is getting warmer—and Breitbart.com just got burned. The Weather Channel laid into the conservative site Tuesday for using one of its videos in what it says was a misleading article, which claimed the "last three years may eventually come to be seen as the final death rattle of the global warming scare." The channel says the Breitbart story is a "prime example of cherry picking, or pulling a single item out of context to build a misleading case," noting that "thousands of researchers and scientific societies are in agreement that greenhouse gases produced by human activity are warming the planet's climate and will keep doing so" and rebutting the scientific claims made in the article; the piece was retweeted by the House Committee on Space, Science, and Technology. Weather Channel scientist Kait Parker rips the Breitbart story apart in a video retort, reports Gizmodo, which notes that the site has been publishing "skewed" climate change stories for years. Breitbart published a story claiming "global warming is nothing but a scare, and global temperatures were actually falling," Parker says. "The problem is, they used a completely unrelated video about La Niña with my face in it to attempt to back their point." She goes on to say: "Cherry picking and changing the facts will not change the future, not the fact—note: fact, not opinion—that the Earth is warming." The Weather Channel says Breitbart should get in touch the next time it needs help fact-checking a climate change article, because "this topic is too important to get wrong." |
NEW ORLEANS — A new website ranks popular restaurant chains in the United States based on the healthiness of their food, and aims to make it easier for people to find healthy options when they eat out.
The website, called Grellin, uses nutrition information from meals at 100 of the nation's restaurant chains, and ranks the restaurants based on the proportion of their meals that qualify as "healthy." The researchers call this percentage the "Grellin grade."
Some of the restaurants that ranked as the healthiest include Au Bon Pain, Rubio's and Subway, which had more than 50 percent of their menu items meet the criteria for healthiness.
Dr. Lenard Lesser, of the Palo Alto Medical Foundation Research Institute, said the website was inspired by restaurant inspection grades, which people may use to determine if they want to eat at a certain restaurant.
"We hope that people tend to go towards restaurants that have a higher percentage of their menu that is healthy," and that restaurants with more healthy items get more customers, Lesser told Live Science. The website was announced today (Nov. 19) here at the annual meeting of the American Public Health Association.
To rank the restaurants, the researchers turned to a database called MenuStat to collect published nutrition information on menu items.
Then, the researchers gave food items a nutrition score out of 100, based on their nutrient content and the percentage of fruits, nuts and vegetables that the item had. Menu items that scored above 64 on nutrient content, and had fewer than 700 calories, were considered healthy.
The researchers created an algorithm to score menu items, but some items did not have enough information in the database to be scored, so the researchers turned to crowdsourcing. Participants underwent a brief training to learn how much fruit, vegetables and nuts were in certain foods, and then rated the fruit, vegetable and nut content of menu items, Lesser said.
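To make the ranking procedure concrete, here is a rough sketch of the scoring logic as described above. It is illustrative only: the function and field names are invented for this example rather than taken from the actual Grellin or MenuStat code, and only the thresholds (a nutrition score above 64 and fewer than 700 calories) and the percentage-based grade come from the researchers' description.

```python
# Illustrative sketch of the ranking logic described in the article; not the
# researchers' actual code. Only the thresholds (score > 64, fewer than 700
# calories) and the percentage-based "Grellin grade" come from the description.

def is_healthy(item):
    # An item counts as healthy if its 0-100 nutrition score exceeds 64
    # and it has fewer than 700 calories.
    return item["nutrition_score"] > 64 and item["calories"] < 700

def grellin_grade(menu):
    # Grade = percentage of scoreable menu items that meet the healthy criteria.
    scoreable = [item for item in menu if "nutrition_score" in item]
    if not scoreable:
        return None  # insufficient data; such chains are shown with a dash
    healthy = sum(1 for item in scoreable if is_healthy(item))
    return round(100 * healthy / len(scoreable))

# Toy example: one healthy item out of two scoreable items gives a grade of 50.
menu = [
    {"name": "garden salad", "nutrition_score": 80, "calories": 350},
    {"name": "double burger", "nutrition_score": 40, "calories": 950},
]
print(grellin_grade(menu))  # 50
```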
Here are the 10 healthiest restaurant chains (with their scores), according to the site:
1. (tie) Au Bon Pain and Rubio's (57)
3. Subway (54)
4. Bruegger's Bagels (48)
5. Cosi (47)
6. Panera Bread (44)
7. (tie) Jersey Mike's Subs and In-N-Out Burger (39)
9. Panda Express (37)
10. El Pollo Loco (36)
The Grellin website not only lets people see which restaurants are ranked as the healthiest, but also lets consumers view which menu items are healthy at any given restaurant. The website also gives each menu item a "run score," which corresponds to how many hours someone would have to run to burn off the calories in that food, Lesser said.
Some restaurants did not provide enough information to receive a Grellin grade, and these restaurants are marked on the website with a dash, in no particular order.
The website is currently limited to chain restaurants, but the researchers are looking to expand the rankings to include nonchain restaurants, Lesser said.
Follow Rachael Rettner @RachaelRettner. Follow Live Science @livescience, Facebook & Google+. Original article on Live Science. ||||| Starting in 1996, Alexa Internet has been donating their crawl data to the Internet Archive. Flowing in every day, these data are added to the Wayback Machine after an embargo period. | – A newly launched website called Grellin has crunched nutritional data from the menus of chain restaurants across the US to name the healthiest. Here are the top 10, with the number, or "Grellin grade," reflecting the proportion of meals that quality as "healthy," explains LiveScience: No. 1 (tie): Au Bon Pain (57) No. 1 (tie): Rubio's (57) No. 3: Subway (54) No. 4: Bruegger's Bagels (48) No. 5: Cosi (47) No. 6: Panera Bread (44) No. 7 (tie): Jersey Mike's Subs 39) No. 7 (tie): In-N-Out Burger (39) No. 9: Panda Express (37) No. 10: El Pollo Loco (36) Click for the full rankings. |
the issue of phase separation is currently one of the main topics of research in strongly correlated electron systems.@xcite phase separation ( ps ) fully develops in various manganese binary oxides , but there is also evidence of the key role played by clustered states in high tc superconductors.@xcite yet , after several years of intense experimental and theoretical research in this area , the true nature of the phase separated state observed in manganites is still controversial .
some phenomenological models point to the strain between the coexisting phases as the main reason for the appearance of phase separated states.@xcite in addition , there is a lot of theoretical evidence in favor of the role of intrinsic disorder in the stabilization of the phase separated state .
these models are based mainly on the double exchange theory , with a fundamental role played by electron - phonon coupling.@xcite khomskii and coworkers have also pointed out the tendency to ps of double exchange hamiltonians when elastic interactions are included.@xcite the presence of quenched disorder can lead to a rough landscape of the free energy densities , triggering the formation of clustered states , which are induced by phase competition.@xcite moreo et al .
obtained phase separated states in a monte - carlo simulation of a random - field ising model when disorder is included in the coupling and exchange interactions.@xcite similar results were obtained by burgy and coworkers , using a uniform distribution of exchange interactions.@xcite as shown for the case of first order transitions , quenched impurities can lead , under certain circumstances , to a spread of local transition temperatures ( where local means over length scales of the order of the correlation length ) leading to the appearance of a clustered state with the consequent rounding of the first order transition.@xcite typical mechanisms to include disorder are chemical@xcite and structural.@xcite despite the intense effort towards a microscopic understanding of the manganites,@xcite many macroscopic features of the phase separated state , including its thermodynamic properties and dynamic behavior , still remain to be studied in greater detail .
one of the most interesting features of the phase separated state is the entwining between its dynamic and static properties .
some of the phase separated manganites display slow relaxation features that hide experimentally the real equilibrium thermodynamic state of the system.@xcite this is why the construction of phase diagrams is currently focused on the dynamic properties of the phase separated state , with regions of the phase diagrams nominated as frozen or dynamic ps,@xcite strain glass or strain liquid@xcite or , directly , ascribed as spin glass phases.@xcite among the challenging issues that have not yet been addressed is an understanding of the phase separated state in terms of its thermodynamic properties .
an analysis of the behavior of phase separated systems based on the probable free energies functional has been schematically realized,@xcite but without the corresponding measurements supporting the proposed scenario . in the present study we attempt to construct the thermodynamic potentials of the fm and non - fm phases of a ps manganite , through calorimetric and magnetic measurements .
the experiments were carried out in a polycrystalline sample of la@xmath4pr@xmath5ca@xmath6mno@xmath3 ( @xmath7 ) , a prototypical phase separated system in which the substitution of la by pr produces an overwhelming effect on its physical properties.@xcite we took advantage of the fact that in the mentioned compound homogeneous phases can be obtained at low temperatures in long time metastable states , which allows us to measure separately the specific heat of each phase , co or fm , in the low temperature region ( between 2k and 60k ) . with this data it is possible to write an expression for the difference between the gibbs energies of the homogeneous phases as a function of temperature and magnetic field .
the needed constants to link the thermodynamic potentials of both phases were obtained from indirect measurements based on the static and dynamic behavior of the system .
the phase separated state is modeled through the hypothesis that the free energy densities are spread over the sample volume , and that its non - equilibrium features are governed by hierarchical cooperative dynamics . within this framework
it is possible to construct a phenomenological expression for the free energy of the phase separated state based on experimental data , which is able to describe consistently the behavior of the system as a function of temperature and applied magnetic field .
the measurements were made on a polycrystalline sample of la@xmath0pr@xmath8ca@xmath9mno@xmath3 .
details of material preparation were previously published.@xcite both magnetization and specific heat results were obtained with a quantum design ppms system .
magnetization data was measured with an extraction magnetometer , as a function of temperature , applied magnetic field , and elapsed time .
all temperature dependent data was measured with a cooling and warming rate of 0.8 k / min .
specific heat data was measured with a relaxation method , between 2 and 60 k.
following the @xmath10 phase diagram of the compound,@xcite after zero field cooling ( zfc ) the sample reaches the low temperature state mainly in the co phase , a state that has been described as frozen ps@xcite or strained glass.@xcite this frozen state can be released by the application of a moderate magnetic field ( @xmath11=2.2t),@xcite above which the compound transforms into the fm phase in an abrupt metamagnetic transition.@xcite after this step transition the sample remains in this homogeneous fm state until a temperature around 70 k , even after the magnetic field is removed .
these facts were used to perform the measurements of the specific heat @xmath12 of each phase ( @xmath13co or fm ) between 2k and 60k , each one considered as homogeneous in this temperature range under specific field conditions . in fig .
1 the plot of @xmath14 vs @xmath15 is shown for measurements performed while warming after zfc with different procedures : under zero field ( co phase ) , and under different fields @xmath11 after a field sweep 0 - 9t-@xmath11 , for @xmath11=0 , 1 and 2 t ( fm phase ) .
the data of the co phase and that of the fm are clearly distinguished .
also , the results obtained for the fm phase for the fields employed are practically identical , which is a signature that the fm phase obtained after the application of 9 t remains homogeneous until the highest temperature investigated .
besides , the fact that the specific heats of the fm phase are almost independent of @xmath11 is a signal that , in the range of temperature investigated , there is no significant field dependent contribution to the entropy of the fm phase , indicating that the magnetization is saturated for all fields .
the data obtained were fitted using standard models for the co and fm phases.@xcite the small upturn observed at low temperatures corresponds to the onset of the ordering of the magnetic moments of the pr atoms.@xcite the thermodynamic gibbs potential @xmath16 of each phase may be written as @xmath17 where the superscript @xmath18 indicates the phase ( co or fm ) , and @xmath19 and @xmath20 are the enthalpy and the entropy , respectively . from the specific heat data we could construct both e and s in the usual way:@xcite @xmath21 @xmath22 where @xmath23 is the magnetization of the phase ( we assume @xmath23=0 for the co phase ) and
@xmath24=2k is the lowest temperature reached in the measurements . as the specific heat of the fm phase
was found almost independent of @xmath11 , we consider the zeeman term in eq .
( 1 ) as the only dependence of the gibbs potentials of the fm phase with @xmath11 , so that @xmath25 . no dependence with @xmath11
is considered for the co phase , since there is no way to perform the measurements on the co phase under an applied @xmath11 due to its instability against the application of @xmath11 in the temperature upturn .
the terms @xmath26 and @xmath27 are respectively the values of the enthalpy and the entropy at the initial temperature @xmath24 . since we are interested in the energy difference between the phases involved , we have taken @xmath28 and @xmath29 as reference values , leaving @xmath30 and @xmath31 as the constants to be determined . in order to determine @xmath32 we followed the previously published experimental data on the abrupt magnetic transition from the co to the fm phase , which happens at low temperatures ( below 6k ) , under a magnetic field of around 2.2 t.@xcite
this transition is accompanied by a sudden increase of the sample temperature , which reaches a value around 30k after the transition . due to the rapidity of the process , it is plausible to consider that the enthalpy remains constant at the transition point , so that the following relation is fulfilled : @xmath33 where @xmath34 is the saturation magnetization of the fm phase ( @xmath34=3.67@xmath35 /mn=20.5 j / mol t ) and @xmath11=2.2 t .
this calculation yields @xmath36 , indicating that the homogeneous fm phase has lower free energy at low temperatures even for @xmath11=0 , as suggested in previous work.@xcite in order to fully construct the thermodynamic potentials we have to determine the remaining constant @xmath31 which , at this point , is what controls the transition temperature between the homogeneous phases , provided it is a positive quantity , as expected for the difference of entropy between the fm and co phases due to the excess of configurational entropy of the latter.@xcite in fig .
2 the obtained thermodynamic potentials of the homogeneous phases are displayed , assuming a value @xmath37 ; this value was obtained by adjusting the @xmath38 curve of fig . 4b , as explained below .
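since the explicit forms of eqs . ( 1 ) - ( 3 ) are elided in this copy , the construction just described can be illustrated with a minimal numerical sketch that assumes only the standard relations e(t ) = e(t0 ) + the integral of c dt , s(t ) = s(t0 ) + the integral of ( c / t ) dt , and g = e - t s with a zeeman term for the fm phase . the specific - heat arrays below are placeholders , and the enthalpy - balance expression used to fix the offset is one plausible reading of the relation quoted in the text , not the authors actual data or code :

import numpy as np
from scipy.integrate import cumulative_trapezoid

# hypothetical specific-heat data (J mol^-1 K^-1) for each homogeneous phase,
# on a common temperature grid between T0 = 2 K and 60 K (placeholders only).
T = np.linspace(2.0, 60.0, 200)
c_co = 0.5 * T + 2e-4 * T**3          # CO phase: illustrative functional form
c_fm = 0.6 * T + 2e-4 * T**3          # FM phase: illustrative functional form

M_SAT = 20.5      # saturation magnetization of the FM phase (J mol^-1 T^-1)
H_STEP = 2.2      # field of the abrupt low-temperature CO -> FM transition (T)
T_INI, T_FIN = 6.0, 30.0   # sample temperature before / after that transition (K)

def e_and_s(c):
    """E(T) - E(T0) and S(T) - S(T0) by trapezoidal integration of C and C/T."""
    return (cumulative_trapezoid(c, T, initial=0.0),
            cumulative_trapezoid(c / T, T, initial=0.0))

de_co, ds_co = e_and_s(c_co)
de_fm, ds_fm = e_and_s(c_fm)

# assumed enthalpy balance across the fast (quasi-adiabatic) transition: the
# Zeeman energy gained at 2.2 T heats the sample from ~6 K to ~30 K.  this is
# an illustrative reading, not the paper's exact eq. (3); it fixes the offset
# de0 = E_fm(T0) - E_co(T0).
de0 = (np.interp(T_INI, T, de_co) + M_SAT * H_STEP
       - np.interp(T_FIN, T, de_fm))

ds0 = 0.3   # S_fm(T0) - S_co(T0); treated as the remaining fit parameter

def delta_g(temp, field):
    """G_fm(T,H) - G_co(T,0); negative wherever the FM phase is more stable."""
    g_fm = (de0 + np.interp(temp, T, de_fm)
            - temp * (ds0 + np.interp(temp, T, ds_fm))
            - M_SAT * field)
    g_co = np.interp(temp, T, de_co) - temp * np.interp(temp, T, ds_co)
    return g_fm - g_co

print(delta_g(np.array([5.0, 30.0, 60.0]), field=0.0))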
the plot indicates that the homogeneous fm state has a lower energy than the co state for all fields at low temperatures , while the co phase is the stable state at high temperatures , with field - dependent transition temperatures ranging from 30k for @xmath11=0 to 60k for @xmath11=2 t .
the results presented above give us an insight into the behavior of the system under the hypothesis that no phase separation occurs , i.e. , they describe the thermodynamics of the homogeneous equilibrium phases .
in addition , in order to obtain a phase separated state from the thermodynamic data , an appropriate model needs to include a priori the existence of the phase separated state .
however , one needs to be careful when comparing the predictions of the model with experimental results .
it is well established@xcite that the phase separated state is characterized by slow dynamics , which implies that equilibrium is hardly reached in laboratory times .
the equilibrium properties must be linked with the measured data and therefore a dynamic treatment is needed . in the discussion that follows we perform a qualitative analysis within a framework where both static and dynamic properties are treated on a phenomenological basis .
it is well known that the physical properties of the lpcmo system change dramatically near @xmath39=0.32 , @xcite revealing the extreme sensitivity of the system to small variations in the mean atomic radius of the perovskite a - site .
this can be due to the effects that quenched disorder introduced by chemical substitution has on the local properties , @xcite or else to the role played by `` martensitic - type '' accommodation strains originated by the volume differences between the fm and co unit cells .
@xcite on one hand , the inclusion of disorder in a random - field ising model leads to a spatially inhomogeneous transition temperature , from the paramagnetic disordered phase to the `` ordered '' phase , characterized by the appearance of clustered states.@xcite this fact implies a spatial dependence of the quadratic coefficient in a landau - type expansion .
on the other hand , strain induced by the shape - constrained transformation between the co and the fm phases could lead to the phase separated state through the frustration of long - range interactions .
@xcite in the latter view , the properties of a specific compound are governed by an `` effective '' pr concentration , which controls the capability of the system to accommodate the anisotropic strain.@xcite these two alternative pictures are not mutually exclusive ; the clustered states induced by disorder are enhanced if correlated disorder is included in the model,@xcite a feature that mimics the cooperative effects of the jahn - teller distortion , in a similar way as elastic interactions are able to induce long scale phase separation in phenomenological models @xcite ( in this last case a renormalized fourth order term is responsible for the introduction of spatial inhomogeneities ) . additionally , local variations of the atomic composition can couple with the anisotropic strain , triggering the formation of the phase separated state.@xcite these facts can be qualitatively described by introducing non - uniform free energy densities for the co and fm phases .
the simplest form is a uniform distribution of these densities over the sample volume , with mean values equal to the free energy of the homogeneous phases . within the hypothesis that precursor effects of phase separation are due to variations of the local composition
, we follow the theory of imry and wortis @xcite describing the effects of disorder on a system displaying a first order transition in order to estimate the width of the free energy distributions .
following their ideas , and considering that disorder affects mainly the free energy density of the fm phase ( @xmath40 is nearly constant as a function of pr content @xmath39 [ ref . @xcite ] ) , we can write an expression for the local free energy density @xmath41 depending on the fluctuations of composition @xmath42 : @xmath43 where @xmath42 is taken over length scales comparable with the correlation length , which is around 1 nm for microscopic clusters in pr@xmath44ca@xmath45mno@xmath3 .
@xcite with the gaussian distribution proposed in ref . @xcite , @xmath42 can be as high as 0.1 over nanometer length scales ; the development of micrometer sized domains would require the consideration of elastic interactions .
for the sake of simplicity we take a uniform distribution for @xmath42 over the sample volume . with these assumptions ,
the free energy density of the fm phase as a function of @xmath46 ( the volume coordinate is normalized to 1 ) is written as : @xmath47 where @xmath48 is the parameter controlling the width of the free energy functional , which can be estimated from eq .
( 4 ) as @xmath49 , taking into account that @xmath50 [ ref.@xcite ] and assuming @xmath51 in this way , the equilibrium fm fraction @xmath52 at the given @xmath53 and @xmath11 is obtained as : @xmath54 this expression could be used for determining the parameters @xmath55 and @xmath56 , using for instance the @xmath57 data at different temperatures . however , as stated before , the global behavior of the system at low temperatures is characterized by out - of - equilibrium features , so the true values for @xmath52 are not easily accessible experimentally . in order to circumvent the fact that the thermodynamic equilibrium state is not reached experimentally , an alternative approach is to consider that the response of the system within the phase separated regime as a function of temperature can be qualitatively described within a model of cooperative hierarchical dynamics , using an activated functional form with state - dependent energy barriers.@xcite the time evolution of the fm fraction @xmath46 is given by : @xmath58 where @xmath59 represents a fixed relaxation rate and @xmath60 is a ( field dependent ) energy barrier scale .
this model is similar to that employed to describe vortex dynamics in high @xmath61 superconductors@xcite and is based on dynamic scaling for systems with logarithmic relaxations.@xcite the interplay between the dynamic behavior and the equilibrium state of the system is given by the functional form of the effective energy barriers @xmath62 , which diverge as the system approaches equilibrium .
this fact represents the main difference with respect to a pure superparamagnetic behavior , and predicts the existence of state - dependent blocking temperatures @xmath63 at which , for any given fm fraction @xmath64 , the system becomes blocked , in the sense that the rate of change of the fm fraction is lower than , for instance , the detectable rate @xmath65 , estimated as @xmath66 10@xmath67 for a conventional data measurement that takes 30 sec.@xcite
this gives the following relation for @xmath63 : @xmath68 through eq .
( 8) it is possible to obtain an experimental estimation of the factor @xmath69 which governs the interplay between the dynamics of the system and the measurement procedure .
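the functional forms of eqs . ( 7 ) - ( 9 ) are also elided in this copy , so the sketch below only illustrates the logic described in the text : an activated rate with a state - dependent barrier that diverges as the fm fraction approaches its equilibrium value , a blocking temperature set by the smallest detectable rate , and a measured fraction that stops growing once the blocking temperature exceeds the working temperature . the barrier form and all numerical values are assumptions made for illustration :

import numpy as np

LOG_RATE_RATIO = np.log(1e9)   # ln(Gamma_0 / R_min); illustrative value only

def barrier(x, x_eq, e_b):
    """assumed state-dependent barrier, diverging as x -> x_eq (a stand-in for
    the elided barrier entering eq. (7))."""
    return e_b * x / max(x_eq - x, 1e-12)

def blocking_temperature(x, x_eq, e_b):
    """T_b(x): temperature below which a state with FM fraction x no longer
    evolves on the experimental time scale (analogue of eq. (8)); k_B = 1."""
    return barrier(x, x_eq, e_b) / LOG_RATE_RATIO

def measured_fraction(temp, x_eq, e_b, steps=4000):
    """FM fraction reached before the system blocks: grow x toward x_eq until
    T_b(x) exceeds the working temperature (a crude analogue of eq. (9))."""
    x = 0.0
    dx = x_eq / steps
    while x < x_eq and blocking_temperature(x, x_eq, e_b) <= temp:
        x += dx
    return x

# low temperature: the system freezes with only part of the equilibrium FM
# phase formed; higher temperature: blocking matters less and x approaches x_eq.
print(measured_fraction(temp=5.0,  x_eq=1.0, e_b=30.0))
print(measured_fraction(temp=40.0, x_eq=1.0, e_b=30.0))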
figure 3 shows @xmath57 data for selected temperatures , and the experimentally obtained values of @xmath70 as a function of @xmath11 , determined through low - temperature measurements between 6 and 32 k. the values of @xmath11 are those at which the system reaches the state with @xmath71 .
the main assumption is that each point of the @xmath57 curve corresponds to a blocked state compatible with the measurement procedure . as indicated in the inset of figure 3 the relation @xmath72 holds , with @xmath73 .
we will show later that the equilibrium state at low temperatures is fully fm , so eq . ( 8) is a direct measurement of the field dependence of the factor @xmath74 . within the dynamic model adopted ,
this factor is independent of the particular value of @xmath75 chosen for its determination . with the assumption that this relation holds in the whole temperature range where phase separation occurs , we can write a simple equation relating the equilibrium state with the experimentally accessible state : @xmath76 . through this equation it is possible to make the link between the equilibrium state of the system , which can be obtained through the free energies functional , and the experimental data obtained through both @xmath57 and @xmath77 measurements . the upper panel of fig .
4 sketches the temperature evolution of @xmath52 for different magnetic fields , obtained from eq .
( 6 ) with @xmath37 and @xmath78 .
it shows that , with the set of parameters employed , the low temperature state of the system is fully fm for moderate fields . however , the accessible fm fraction after a zfc procedure is small for @xmath11 @xmath79 2 t , due to the weight of the blocking term in eq . ( 9 ) .
this is why , besides the fact that the difference between the free energies of the co and the fm states increases as temperature is lowered , the magnetic field needed to induce the co - fm transition also increases , a fact that at first sight could be interpreted as a reentrance of the co state . as the temperature is raised above 25k the influence of the blocking term decreases ; in this temperature region the main factor determining the field needed to induce the co - fm transition lies in the field dependence of the equilibrium fraction .
[ figure 4 : a ) equilibrium fm fraction @xmath52 as a function of temperature , for the indicated fields . b ) fm fraction @xmath46 as a function of @xmath11 obtained through @xmath57 data ( open symbols ) , and calculated with eq . ( 9 ) ( solid line ) for @xmath80k . the fm equilibrium fraction @xmath52 obtained from eq . ( 6 ) is also displayed ( dashed line ) . c ) field needed to make the system half fm , as a function of @xmath53 , from @xmath57 measurements ( open symbols ) and calculated through eq . ( 9 ) ( solid line ) . the temperature dependence of the field needed to make the equilibrium state of the system fully fm is also displayed ( dashed line ) . ]
the middle and lower panels of figure 4 show the comparison of experimental data with the results obtained through eq .
( 9 ) . in fig .
4b , @xmath57 data at 40 k normalized by its saturation value and the corresponding calculated values are displayed , showing the good agreement between them .
figure 4c shows measured and calculated values of the field @xmath81 at which @xmath82 , as a function of temperature ; this is the field needed to make half the system fm .
also shown is the temperature dependence of the field needed to make the equilibrium state of the system fully fm . as can be seen
, the calculated curve for @xmath81 reproduces the experimental behavior , with a minimum around 30k .
this minimum signals the crossover from the blocked regime at low temperatures ( frozen ps ) to the coexistence regime at higher temperatures ( dynamic ps ) . in the frozen ps regime
the stable state of the system is homogeneous fm for all fields needed to induce the growth of the fm phase to a value @xmath83 ; the presence of the co phase is only explained by the slow growth of the fm phase against the unstable co due to the energy barriers . in the dynamic ps regime , with the influence of the blocking term diminished , the equilibrium state is true ps for moderate fields , and the effect of @xmath11 is mainly to unbalance the amounts of the coexisting phases .
figure 5 shows the @xmath84 state diagram obtained from eqs .
( 6 ) and ( 9 ) for a field @xmath85 t , which displays both dynamic and static properties of the model system .
a line in the phase diagram divides it in two major regions , depending on the equilibrium fm fraction @xmath52 .
a point above the line indicates that the system has an excess of fm phase ; below the line the state of the system is characterized by the presence of metastable co regions .
each of these regions is in turn divided into two others , labeled @xmath86 @xmath87 or @xmath88 @xmath89 , indicating that if the system is in a state within such a region it is able to evolve toward equilibrium within the measurement time .
the regions labeled as @xmath90 indicate that the system is blocked , and no evolution is expected within the time window of the experiment .
data points @xmath91t@xmath92 obtained from @xmath57 and @xmath77 measurements are also shown . the @xmath57 data were obtained in a field sweep after zfc to the target temperature , and the @xmath77 data in a field warming run after zfc to 2k .
the data extracted from the @xmath57 measurements gives information on the @xmath46 values for which the system becomes blocked at each temperature , for the specific field and for the characteristic measuring time , indicating the frontier between the dynamic and frozen fm regions .
the data obtained from @xmath77 measurements coincides with that of @xmath57 in the low temperature region , where @xmath93 . as the temperature
is increased , the system gets into a region for which @xmath94 with a fm fraction greater than the lower limit for @xmath75 , so it remains blocked without changes in the magnetic response , a fact characterized by the plateau observed in @xmath77 for the high temperature region .
this non - equilibrium phase diagram was constructed for the particular measurement procedure employed for the acquisition of @xmath57 and @xmath77 data . a modification in this procedure ( for instance , by changing the time spent at each measurement point ) will result in a change in the factor @xmath65 , with the consequent change of the boundaries in the phase diagram .
for example , the effect of increasing the measuring time by one order of magnitude is shown by dashed lines in the phase diagram of fig . 5 .
[ figure 5 : state diagram in the @xmath84 plane , for @xmath95 t , resulting from eqs . ( 6 ) and ( 9 ) . regions labeled dynamic indicate that the system can evolve in the measured time ; those labeled frozen indicate that the system is blocked . data obtained from @xmath57 measurements at different temperatures ( up triangles ) and from @xmath77 at @xmath11=1.3 t after zfc ( down triangles ) are also shown . the dashed lines show the new phase diagram boundaries if the measuring time is increased by one order of magnitude . ]
in conclusion , we presented a thermodynamic phenomenological model for a global description of the phase separated state of manganites .
the construction starts with the calculation of the free energies of the homogeneous fm and co phases .
the free energies obtained turned out to be very close in value : the difference was of the order of the magnetic energy for intermediate fields in the whole temperature range investigated .
the phase separated state is introduced by considering a uniform spread of the free energy density of the fm phase , and the dynamic behavior is included within a scenario in which the evolution of the system is determined through cooperative hierarchical dynamics with energy barriers that diverge as the system approaches equilibrium .
the main success of the model is to provide an understanding of the response of the phase separated state when both temperature and magnetic field are varied , being able to reproduce the dynamic and static properties of the system under study .
the same methodology can also be applied to other compounds sharing similar phase diagrams @xcite and properties , especially those displaying abrupt field induced transitions at low temperatures.@xcite the key factors to determine the free energies of the homogeneous phases are the possibility to measure the specific heat of each phase separately , taking advantage of the existence of blocked states , and the measurement of the temperature reached by the compound under study after the co - fm abrupt transition at low temperature .
this last value and the field at which the abrupt transition occurs are the key parameters to determine the homogeneous ground state of the system at zero applied field . | we present a phenomenological model based on the thermodynamics of the phase separated state of manganites , accounting for its static and dynamic properties . through calorimetric measurements on la@xmath0pr@xmath1ca@xmath2mno@xmath3
the low temperature free energies of the coexisting ferromagnetic and charge ordered phases are evaluated .
the phase separated state is modeled by free energy densities uniformly spread over the sample volume .
the calculations contemplate the out of equilibrium features of the coexisting phase regime , to allow a comparison between magnetic measurements and the predictions of the model .
a phase diagram including the static and dynamic properties of the system is constructed , showing the existence of blocked and unblocked regimes which are characteristics of the phase separated state in manganites . |
Today is Yosemite National Park’s birthday. The vast California park, to which 1.3 million visitors flock annually, is celebrating 123 years of sequoias, waterfalls and breathtaking vistas that stretch over 760,000 acres. This year’s anniversary is particularly notable, as the park survived last month’s record-setting fire more or less intact.
Fittingly, today’s Google Doodle was designed in Yosemite’s honor. Unfortunately, if you click on the doodle, and then on one of the first pages that shows up in your search results, you get this: “Because of the federal government shutdown, all national parks are closed and National Park Service webpages are not operating.”
Happy birthday, Yosemite. ||||| One of the last remaining veterans of World War II in Congress called Thursday afternoon for his colleagues to end the government shutdown, suggesting that members of the legislative body who have appeared at the World War II memorial this week are not doing enough to support those who served in the conflict.
“If this Congress truly wishes to recognize the sacrifice and bravery of our World War II veterans and all who’ve come after, it will end this shutdown and re-open our government now,” John Dingell, (D-Mich.), said in a joint statement issued with former Republican Sen. Bob Dole of Kansas, a combat-injured veteran of World War II.
The Department of Veterans Affairs warned this week that its progress in cutting the backlog of disability claims by 30 percent over the last six months is likely to be reversed by the shutdown. The department said that it is no longer able to pay overtime to claims processors, an initiative begun in May that officials say was supposed to continue until November.
“The current shutdown has slowed the rate at which the government can process veterans’ disability claims and, as the VA has stated, it is negatively impacting other services to our nation’s veterans,” Dingell and Dole said in their statement.
Dingell and Rep. Ralph Hall, (R-Tex.), are the only veterans of World War II remaining in Congress.
Several veterans groups, including the American Legion, have issued statements calling for an end to the shutdown.
“The American Legion wants Congress to stop its bickering and stop making America’s veterans suffer for its own lack of political resolve in the face of this national crisis,” the organization said Thursday. | – The World War II Memorial in Washington is currently barricaded, thanks to the government shutdown, but that didn't stop a group of WWII vets from entering it today. The group, part of an honor flight program from Mississippi, had chartered an airplane and made plans too far in advance to change the trip. US Park Police allowed their bus to stop ("I'm a veteran myself," says one), and members of Congress led them into the memorial—where they shouted and "surged" in as the barricades were moved, the Washington Post reports. In other strange shutdown news, today's Google Doodle honors Yosemite National Park's 123rd birthday ... but, as Salon points out, all national parks are currently closed, so if you click on the doodle and then on one of the top search results, you'll see the following message instead of the park's website: "Because of the federal government shutdown, all national parks are closed and National Park Service webpages are not operating." Click for more on the shutdown. |
copd was the tenth most common cause of disease - related deaths in japan in 2014 , and it was found that the number of copd - related deaths is increasing.1 an epidemiological survey in a japanese population revealed that the prevalence of physician - diagnosed copd in japan is increasing , and ~8.6% of patients aged 40 years are suffering from copd .
an estimated population of 5.3 million japanese are now at risk of developing copd ; however , only a small proportion ( 9.4% ) were diagnosed with copd.2,3 decline in quality of life due to breathlessness is the major challenge in the management of copd and demands sustained improvement in lung function.4 long - acting bronchodilators of different pharmacological classes , either as monotherapy or in combination , are now the preferred choice and proven cornerstone for treating compromised airflow in patients with copd.4 in situations where a single bronchodilator fails to provide the desired effect , both the global initiative for chronic obstructive lung disease ( gold ) strategy and the japanese respiratory society ( jrs ) guidelines recommend the use of a fixed - dose combination of a long - acting β2-agonist ( laba ) and a long - acting muscarinic antagonist ( lama ) for the management of symptomatic patients with copd.4,5 a fixed - dose laba / lama combination , indacaterol ( ind)/glycopyrronium ( gly ) , was evaluated in 14 controlled trials as maintenance treatment for patients with copd .
outcomes of these studies demonstrated better efficacy of ind / gly in terms of improving lung function and quality of life , with a comparable safety profile , versus placebo,6–8 lama , open - label tiotropium ( tio),9 and the laba / inhaled corticosteroid ( ics ) combination salmeterol / fluticasone ( 50/500 μg ) in patients with moderate - to - severe copd .
most of these trials have been conducted in caucasian patients ; however , two studies , shine and arise , evaluated the efficacy of ind / gly in a japanese patient subgroup as well.10,11 in the overall population of the shine study , ind / gly showed superior improvements in lung function and health status compared with placebo , ind , gly , and open - label tio with a similar safety profile in patients with copd.10 here , we report the efficacy and safety of ind / gly in the japanese subgroup of patients from the shine study .
shine was a 26-week , multicenter , randomized , double - blind , parallel - group , placebo- and active - controlled study ( www.clinicaltrial.gov identifier nct01202188).10 after the eligibility assessments , patients were randomized ( 2:2:2:2:1 ) to receive once daily ( od ) ind / gly 110/50 μg , ind 150 μg , gly 50 μg , open - label tio 18 μg , or matching placebo .
ind / gly , ind , gly , and placebo were administered via the breezhaler device , whereas tio was delivered via the handihaler device .
men and women aged 40 years with moderate - to - severe stable copd and a smoking history of 10 pack - years who had a postbronchodilator forced expiratory volume in 1 second ( fev1 ) 30% and < 80% of predicted normal value and a postbronchodilator fev1/forced vital capacity ratio < 0.70 were included in the study .
the key exclusion criterion was copd exacerbation that required treatment with antibiotics , systemic steroids ( oral or intravenous ) , or hospitalization during the 6 weeks prior to screening or before randomization . during the study , salbutamol was permitted as rescue medication .
the study was performed in accordance with the declaration of helsinki , good clinical practice guidelines , and all applicable regulatory requirements and was approved by the ministry of health , labor and welfare , japan , as well as all relevant national and local ethics review boards ( table s1 ) .
the primary end point of the study was trough fev1 ( mean of fev1 values measured at 23 hours 15 minutes and 23 hours 45 minutes ) after ind / gly administration at week 26 versus ind and gly . other key end points included peak fev1 , area under the curve for fev1 from 5 minutes to 4 hours ( fev1 auc5 min4 h ) , and trough fev1 throughout the study period .
nonspirometric analysis included the transition dyspnea index ( tdi ) focal score and st george s respiratory questionnaire ( sgrq ) total score at week 26 .
safety was assessed by monitoring adverse events ( aes ) and serious aes over the 26-week study period .
data were analyzed using a linear mixed model , which included fev1 as the dependent variable , treatment as a fixed effect , the variables ( fev1 , ics use , fev1 reversibility components , and smoking status ) at baseline as covariates , and centre as a random effect .
the estimated treatment differences were presented as least - squares mean with standard errors and the associated 95% confidence interval .
the secondary efficacy variables , which included peak fev1 , fev1 auc5 min4 h , trough fev1 over 26 weeks , tdi focal score , and sgrq total score , were also analyzed using the same mixed model as specified for the analysis of the primary variable , with the appropriate baseline measurement included as a covariate.10 in addition , the proportions of patients who achieved a clinically important improvement in the sgrq and tdi focal score were analyzed using logistic regression as specified for the analysis of the primary variable , with the appropriate baseline measurement included as a covariate .
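the analysis described above ( a linear mixed model with treatment as a fixed effect , baseline covariates , and centre as a random effect , plus logistic regression for responder rates ) can be sketched as follows . this is a minimal illustration , not the trial s actual analysis code ; the file name and column names ( trough_fev1 , treatment , baseline_fev1 , ics_use , reversibility , smoking , centre , tdi_responder , baseline_tdi ) are hypothetical :

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical per-patient analysis dataset; file and column names are
# invented for illustration only.
df = pd.read_csv("shine_japanese_subgroup.csv")

# linear mixed model: trough FEV1 at week 26 as the dependent variable,
# treatment as a fixed effect, baseline FEV1 / ICS use / reversibility /
# smoking status as covariates, and centre as a random (intercept) effect.
mixed = smf.mixedlm(
    "trough_fev1 ~ treatment + baseline_fev1 + ics_use + reversibility + smoking",
    data=df,
    groups=df["centre"],
).fit()
print(mixed.summary())

# logistic regression for the proportion of patients achieving a clinically
# important improvement (for example, TDI focal score >= 1 unit).
responders = smf.logit("tdi_responder ~ treatment + baseline_tdi", data=df).fit()
print(responders.summary())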
of the total 2,144 patients randomized in the shine study , 182 ( 8.5% ) were japanese and randomized to receive ind / gly 110/50 μg ( n=42 ) , ind 150 μg ( n=41 ) , gly 50 μg ( n=40 ) , open - label tio 18 μg ( n=40 ) , and matching placebo ( n=19 ) .
the baseline demographic and clinical characteristics of the patients were comparable across all the groups ( table 1 ) .
more than 90% of the patients in the japanese cohort were men aged > 68 years , and their mean percentage of predicted postbronchodilator fev1 was ~60% , with only 19% reversibility of fev1 after bronchodilation .
more than 75% of patients had moderate - to - severe copd according to the gold 2009 guidelines , and almost 10% of patients in the japanese subgroup were having a history of copd exacerbation in the previous year and ~25% were using ics at baseline . in total , 160 patients ( 87.9% ) completed the study ; the major reasons for discontinuation from the study were aes and withdrawal of consent by the patients ( figure 1 ) .
for the primary efficacy end point of the shine study , ind / gly achieved a 190 ml improvement in trough fev1 from baseline in japanese patients .
improvement in lung function was greater in japanese patients treated using ind / gly compared with ind ( 90 ml ) , gly ( 100 ml ) , tio ( 90 ml ) , and placebo ( 280 ml ; figure 2).10 the spirometric profile in terms of trough fev1 with ind / gly was superior compared with tio and placebo on day 2 and week 26 , with treatment differences ranging from 90 ml to 290 ml ( figure 3 ) .
ind / gly showed greater improvements in peak fev1 from 5 minutes to 4 hours compared with ind , gly , tio , and placebo at week 26 , with treatment differences ranging from 100 ml to 390 ml ( all p<0.01 ; figure 4 ) .
in addition , an improvement in lung function was also observed with ind / gly in terms of fev1 auc5 min4 h compared with ind , gly , tio , and placebo , at the end of study , with treatment differences ranging from 90 ml to 380 ml ( all p<0.05 ; figure 5 ) .
dyspnea control in terms of improvement in tdi focal score in the japanese subgroup was comparable across all treatment groups .
treatment differences between ind / gly versus placebo were 0.67 units and versus ind , gly , tio were 0.69 units , 0.03 units , and 0.65 units , respectively
. a higher proportion of patients achieved a clinically meaningful improvement in the tdi focal score ( 1 unit ) with ind / gly versus ind ( odds ratio [ or ] , 1.60 ) , gly ( or , 1.01 ) , tio ( or , 1.16 ) , and placebo ( or , 1.15 ) .
improvement in health status assessed via reduction in the sgrq total score was similar across all treatment groups in the japanese patient subgroup .
treatment differences in the sgrq total score between ind / gly vs placebo , ind , gly , and tio were 1.31 units , 0.48 units , 3.75 units , and 0.96 units , respectively .
the proportion of patients achieving a clinically meaningful improvement in the sgrq total score ( 4-unit reduction ) was highest with ind / gly compared to ind ( or , 1.69 ) , gly ( or , 3.13 ) , tio ( or , 2.36 ) , and placebo ( or , 0.86 ) .
the overall incidence of aes was similar across the five treatment groups , and the lowest incidence was reported for patients treated with ind / gly ( 21 ( 50% ) ; ind , 27 ( 65.9% ) ; gly , 24 ( 60.0% ) ; tio , 31 ( 77.5% ) ; and placebo , 12 ( 63.2% ) ; table 2 ) .
copd exacerbations were found to be the frequent aes in all treatment groups , with their lowest occurrence in patients treated with ind / gly ( ~12% ) .
serious aes were comparable across all treatment groups . moreover , no cardiovascular aes , major adverse cardiac events , cerebro - cardiovascular aes , abnormal changes from baseline in qtc on resting ecg ( obtained using bazett s formula [ qtcb ] or fridericia s formula [ qtcf ] ; < 10 milliseconds ) or heart rate , or imbalances in hematocrit values were observed in the japanese subgroup .
the aim of this post hoc analysis was to explore the efficacy and safety profile of ind / gly in the japanese subgroup of the shine study . based on the overall results of the shine study , the efficacy of ind / gly was superior to that of placebo , with comparable safety.10 the results of this analysis support our claim that ind / gly improves lung function and health status in patients with moderate - to - severe copd with compromised airflow according to the gold guidelines , irrespective of ethnicity .
the fourth edition of the jrs guidelines , which is customized for the japanese scenario , recommends lama / laba for the management of moderate - to - severe copd with symptoms and the use of ics in combination with laba or lama only if the condition does not improve.4,5,12 the gold strategy , which considers diverse ethnicity as well as various copd phenotypes , differs from the jrs guidelines , which are specific for the japanese population , in terms of prescribing ics.4,5 the difference in the recommendations may be due to the fact that japanese patients with copd experience fewer copd exacerbations compared with caucasians.12 in the japanese subgroup of the shine study , we observed greater improvements in lung function with ind / gly versus ind , gly , and tio monotherapies , although the improvement in quality of life was comparable across all groups .
the results of this subgroup analysis were similar to those observed in the entire study population.10 the small sample size is perhaps responsible for the lower baseline fev1 observed in the tio group ( 1.22 l ) compared with the ind / gly group ( 1.36 l ) , which was at least partially responsible for the 90 ml treatment difference in trough fev1 between ind / gly and tio . however , baseline fev1 was used as a covariate in the statistical analysis , and the treatment effect was adjusted to account for the baseline difference in fev1 between the ind / gly and tio groups .
the aforementioned findings were similar to those reported in another study , wherein baseline fev1 affected the treatment efficacy of a bronchodilator , which showed a smaller change from baseline fev1 value in patients with lower baseline fev1.13 in addition to the observed improvements in trough fev1 , our study also demonstrated rapid bronchodilation with ind / gly in terms of fev1 auc5 min4 h compared to the monotherapy components , tio , and placebo . in previous studies , spirometric outcomes of ind / gly treatment showed a comparable reduction in the rate of copd exacerbations , along with superior improvement in lung function versus its monotherapy components , tio , and salmeterol / fluticasone and without considerable variations in the tdi focal score and sgrq total score among different treatment groups.9,11,14,15 in our study , the results of the japanese subgroup analysis showed comparable improvements in the tdi focal score and sgrq total score , which is consistent with the results of a previous study.14 a higher proportion of patients from the ind / gly group achieved minimal clinically important differences for tdi and sgrq compared with the other treatment groups .
moreover , the number of patients in the japanese subgroup was too small to detect significant improvements in the tdi focal score and sgrq total score .
the shine study demonstrated that safety profile of ind / gly in japanese patients with copd was consistent with the overall study population.10 the most frequently reported ae was copd exacerbation in the japanese subgroup , and the occurrence was ~12% , which was lower compared with tio ( 25% ) and placebo ( ~32% ) .
furthermore , the incidence of exacerbations was lower in the japanese subgroup than in the overall study population , in which exacerbations were reported in ~39% of patients .
this fact was supported by the results of a previous study that demonstrated that ind / gly was superior in preventing moderate - to - severe copd exacerbations compared with tio , with concomitant improvements in lung function and health status.9 a pooled analysis of 14 randomized clinical trials with data from 11,404 patients with copd demonstrated that ind / gly did not increase the risk of investigated safety end points and had a comparable safety profile as its monocomponents , tio , and placebo.16
based on the results of this subgroup analysis , we can conclude that the efficacy and safety of ind / gly in the japanese population were similar to that in the overall study population of the shine study ; thus , ind / gly may be considered as a treatment option for the management of japanese patients with moderate - to - severe copd .
| background : copd - related deaths are increasing in japan , with ~5.3 million people at risk . methods : the shine was a 26-week , multicenter , randomized , double - blind , parallel - group study that evaluated safety and efficacy of indacaterol ( ind)/glycopyrronium ( gly ) 110/50 μg once daily ( od ) compared with gly 50 μg od , ind 150 μg od , open - label tiotropium ( tio ) 18 μg od , and placebo .
the primary end point was trough forced expiratory volume in 1 second ( fev1 ) at week 26 . other key end points included peak fev1 , area under the curve for fev1 from 5 minutes to 4 hours ( fev1 auc5 min4 h ) , transition dyspnea index focal score , st george s respiratory questionnaire total score , and safety . here , we present efficacy and safety of ind / gly in the japanese subgroup . results : of 2,144 patients from the shine study , 182 ( 8.5% ) were japanese and randomized to ind / gly ( n=42 ) , ind ( n=41 ) , gly ( n=40 ) , tio ( n=40 ) , or placebo ( n=19 ) .
improvement in trough fev1 from baseline was 190 ml with ind / gly and treatment differences versus ind ( 90 ml ) , gly ( 100 ml ) , tio ( 90 ml ) , and placebo ( 280 ml ) along with a rapid onset of action at week 26 .
ind / gly showed an improvement in fev1 auc5 min4 h versus all comparators ( all p<0.05 ) .
all the treatments were well tolerated and showed comparable effect on transition dyspnea index focal score and st george s respiratory questionnaire total score .
the effect of ind / gly in the japanese subgroup was consistent with that in the overall shine study population . conclusion : ind / gly demonstrated superior efficacy and comparable safety compared with its monocomponents , open - label tio , and placebo and may be used as a treatment option for the management of moderate - to - severe copd in japanese patients .
a 5-year - old cat was examined for vomiting and anorexia of 2 days duration .
histopathologic evaluation of ante - mortem ultrasound - guided needle biopsies of the right kidney was consistent with proliferative , necrotizing and crescentic glomerulonephritis with fibrin thrombi , proteinaceous and red blood cell casts , and moderate multifocal chronic - active interstitial nephritis . owing to a lack of clinical improvement , the cat was euthanized .
post - mortem renal biopsies were processed for light microscopy , transmission electron microscopy and immunofluorescence .
this revealed severe focal proliferative and necrotizing glomerulonephritis with cellular crescent formation , podocyte injury and secondary segmental sclerosis .
ultrastructural analysis revealed scattered electron - dense deposits in the mesangium , and immunofluorescence demonstrated positive granular staining for lambda light chains , consistent with immune complex - mediated glomerulonephritis .
severe diffuse acute tubular epithelial injury and numerous red blood cell casts were also seen .
to our knowledge , this is the first report of naturally occurring proliferative , necrotizing and crescentic immune complex glomerulonephritis in a cat .
a 5-year - old spayed female domestic shorthair cat was evaluated for vomiting and anorexia of 2 days duration .
the cat was housed indoors with 10 other cats , had no previous relevant medical problems and only received monthly topical selamectin .
initial blood chemistry tests revealed markedly elevated concentrations of blood urea nitrogen ( 162 mg / dl ; reference interval [ ri ] 15–32 mg / dl ) , creatinine ( 13.7 mg / dl ; ri 1.0–2.0 mg / dl ) and phosphorus ( 14.5 mg / dl ; ri 3.0–6.6 mg / dl ) . at that time , the albumin concentration was within the ri ( 2.5 g / dl ; ri 2.4–3.8 g / dl ) with a mild elevation in aspartate aminotransferase ( 53 u / l ; ri 1–37 u / l ) .
a complete blood count revealed a moderate non - regenerative anemia with a packed cell volume of 23% ( ri 31.7–48.0% ) with no reticulocytes observed .
urine obtained via cystocentesis on the day of initial evaluation demonstrated minimal concentration ( urine specific gravity 1.013 ) with 3 + protein and rare cocci and rods observed in the sediment ; no casts were observed .
a urine protein : creatinine ratio was elevated at 1.81 ( normal < 0.20 ) .
ultrasound showed that the right kidney was 4.5 cm long and the left kidney was 4.2 cm long .
testing for feline leukemia virus and feline immunodeficiency virus ( elisa snap fiv / felv combo test ; idexx laboratories ) , dirofilaria antigen and ehrlichia canis , anaplasma phagocytophilum and borrelia burgdorferi ( snap 4dx plus test ; idexx laboratories ) antibodies was negative .
the cat received supportive care and antibiotic therapy consisting of intravenous fluids , ampicillin ( 22 mg / kg iv q8h , ampicillin sodium injection , powder , for solution ; sandoz ) , enrofloxacin ( 5 mg / kg iv q24h , baytril ; bayer ) , buprenorphine ( 0.0125 mg / kg iv q6h , buprenex [ buprenorphine hydrochloride ] injectable ; reckitt benckiser pharmaceuticals ) and ondansetron ( 0.2 mg / kg iv q8h , novaplus ondansetron injectable ; fresenius kabi usa ) .
aluminum hydroxide ( aluminum hydroxide liquid ; rugby laboratories ) was administered with food ( 11 mg / kg q8h ) .
maropitant ( 1 mg / kg iv q24h , cerenia [ maropitant citrate ] injectable ; zoetis ) and mirtazapine ( 1.875 mg po q72h , mirtazapine tablet , film coated ; aurobindo pharma ) were later added to the treatment regimen in the face of persistent nausea and anorexia . throughout hospitalization
the cat developed glucosuria ( 2 + ) despite a normal plasma glucose measurement ( 123 mg / dl ; ri 67–168 ) and hypoalbuminemia ( albumin 2.2 g / dl ; ri 2.4–3.8 g / dl ) . to rule out renal lymphoma as the cause of azotemia
, a fine - needle aspirate of the right kidney was obtained 2 days after presentation .
ultrasound - guided needle biopsies of the left kidney were obtained on day 4 of hospitalization .
results were consistent with a necrotizing and proliferative glomerulonephritis ( gn ) of unknown etiology .
full necropsy was declined but the owner consented to the collection of renal tissue ; wedge samples of the right kidney were obtained immediately post mortem and submitted to the international veterinary renal pathology service .
samples were serially sectioned at a thickness of 3 μm and stained with hematoxylin and eosin , periodic acid
schiff and masson s trichrome stains , and congo red and jones methenamine silver methods .
histopathology revealed diffuse segmental endocapillary hypercellularity with fibrinoid necrosis of glomerular tufts ( figure 1 ) .
[ figure 1 : serial sections of a glomerulus stained with ( a ) hematoxylin and eosin , ( b ) periodic acid schiff method and ( c ) masson s trichrome . there is a segmental fibrinocellular crescent ( * ) and moderate hypercellularity in the mesangial and endocapillary compartments . ( d ) red blood cell casts ( arrows ) in collecting ducts of the renal medulla ( hematoxylin and eosin ; magnification 200 ) . ]
on transmission electron microscopy ( tem ) ( figure 2a ) , glomerular capillary loops were lined by swollen endothelial cells and contained inflammatory cells , fibrin and cell debris .
there were irregular intramembranous and rare subendothelial electron - dense deposits associated with interposed mesangial cells .
there was severe tubular injury characterized by epithelial cell necrosis with loss of nuclei , degenerative changes with surface blebbing , brush border loss and regeneration with anisokaryosis .
[ figure 2 : transmission electron microscopy of ( a ) a glomerulus reveals scattered electron - dense deposits in mesangial and subendothelial zones . direct immunofluorescence using antibodies against ( b ) immunoglobulin ( ig)g , ( c ) igm and ( d ) iga . staining for igg was equivocal whereas there was distinct granular staining with antibodies against igm and iga . ]
renal tissue was kept in michel s buffer for 10 days prior to submission to the international veterinary renal pathology service .
once received , these samples were embedded in optimal cutting temperature ( oct compound ; tissue - tek ) and frozen .
serial sections of tissue were stained with polyclonal caprine antibodies against feline igg , igm and iga ( caprine anti - cat igg , igm and iga polyclonal antibodies ; bethyl laboratories ) , as well as polyclonal rabbit antibodies against human kappa and lambda light chains ( llc ) and complement ( c)1q ( rabbit anti - human lambda light chain , kappa light chain and c1q polyclonal antibodies ; dako north america ) , which are known to cross - react with the feline proteins .
evaluation of these samples revealed weak positive granular staining for iga , igm and llc within the mesangium ( figure 2b–d ) . staining for igg was equivocal ; staining for c1q and kappa light chains was negative .
taken together , the pathologic evaluation demonstrated an immune complex - mediated , proliferative , necrotizing and crescentic gn with iga- and igm - dominant immune complexes .
gn is a common cause of kidney disease in dogs and humans ; however , it is uncommonly documented in cats .
glomerular injury disturbs the glomerular filtration barrier , resulting in proteinuria and subsequent tubulointerstitial damage .
many different types of glomerulopathies have been reported in dogs , including amyloidosis , membranoproliferative glomerulonephritides and viral - associated glomerulopathies . many are hypothesized to be secondary to systemic diseases , including neoplastic , infectious and non - infectious inflammatory disorders .
recently , a large retrospective study reported the pathologic lesions of 501 dogs that were biopsied for the clinical indication of proteinuria .
approximately half of all dogs evaluated in that study had immune complex gn ( icgn ) .
histologic lesions consistent with icgn and confirmed with tem were observed in only one animal in one study of chronic kidney disease in 60 cats . in humans ,
crescents develop when glomerular capillaries rupture and blood , fibrin and inflammatory cells are released into bowman s space .
human crescentic gn can have many causes , including types of icgn , pauci - immune gn and anti - gbm disease .
human icgns that often cause crescents are lupus nephritis and iga nephropathy ( igan ) . no matter what the underlying disease process is , if more than half of the sampled glomeruli have crescents , the prognosis is worse .
advanced diagnostics such as tem and immunofluorescence ( if ) are required in human nephropathology to distinguish between the possible causes of crescentic gn .
crescentic gn in animals has been rarely reported and the most commonly affected animals are pigs and sheep with abnormalities in complement pathways .
red cell cylindruria is rare in veterinary species but common in humans with active crescentic gn . in humans ,
one study that evaluated renal pathology in patients with isolated microscopic hematuria demonstrated igan to be the most common diagnosis ; only 6.4% of patients had normal kidney structure .
other differential diagnoses for red blood cell casts include systemic lupus erythematosus and membranous gn . in cats , membranous gn , characterized by deposition of immune complexes along the subepithelial aspect of the gbm ,
is the most common form of gn ; other forms appear to be less common .
the prevalence , historical and diagnostic findings , as well as the clinical course of other forms of feline glomerulopathy are poorly characterized .
histopathology in this case revealed segmental necrosis of glomerular tufts with crescent formation . in humans
, these features can be observed in pauci - immune gn , anti - gbm gn and icgn .
anti - gbm gn has strong linear staining with igg , llc and c3 along capillary walls , and does not involve immune complex formation ; therefore , tem does not reveal electron - dense deposits .
notably , immune complexes are comprised of one or multiple immunoglobulin types together with an antigen . in humans
lupus nephritis is igg dominant but there is often concomitant iga , igm and complement components c3 and c1q .
iga is dominant or co - dominant in igan . using the diagnostic algorithms for humans
the if revealed granular labeling with iga , igm and llc in mesangial zones , which agrees with where most of the electron - dense deposits were identified ultrastructurally . in our experience and that of others , igm labeling is often non - specific .
furthermore , there is no association between igm deposition and crescentic or necrotizing gn in humans , dogs or cats , whereas there is an association between iga deposition and crescentic gn in humans .
unfortunately , the if sample was held in michel s buffer for 10 days ; the suggested maximum time is 5 days .
it is possible that negative igg staining was due to the prolonged storage in this medium .
therefore , this is a case of proliferative and crescentic icgn with iga dominant deposits , with the caveat that the igg staining might have been negatively affected by the prolonged storage in michel s buffer . of note , the histopathologic phenotype of necrotizing and crescentic lesions can be seen in human igan . to our knowledge , this is the first report of naturally occurring proliferative , necrotizing and crescentic gn with mesangial immune complex deposits observed in the cat .
this cat was one of a number of related cats that all demonstrated proteinuria and hematuria .
other affected cats in that report lacked glomerular abnormalities on light microscopy . if and tem were not performed in any of these cats ; therefore , whether any of these cats had icgn is unknown .
an experimental model of icgn , serum sickness , utilized intravenous administration of human serum albumin to cats to induce icgn .
information extrapolated from the serum sickness model of pgn would suggest that the cat in this report likely developed an immune response to an unknown circulating antigen . the mesangial immune complex deposits and positive iga if support the hypothesis that this cat had naturally occurring icgn .
infectious disease screening and urine culture were negative ; however , owing to the acute nature of the disease it is possible that convalescent antibody titers or a necropsy might have confirmed infection .
erythrocytic casts are rarely reported in veterinary species but can be common in humans with gn , depending on the type of glomerulopathy . in humans ,
erythrocytic casts have a sensitivity and specificity of 12.2% and 100.0% , respectively , in diagnosing glomerular source of hematuria .
the lack of red blood cell casts in the urinalysis of this cat , but presence on biopsy , might suggest that this phenomenon is underdiagnosed in this species .
possible explanations for the disparity between the urinalysis and biopsy findings might include tubular obstruction , as well as decreased glomerular filtration rate and urine output yielding lower excretion of casts . the urine was analyzed promptly in this cat , suggesting that cast dissolution due to prolonged storage is unlikely .
overall , the injury present in both the glomeruli and tubules made this cat s disease difficult to manage medically with traditional supportive care .
extracorporeal renal replacement therapy would have helped correct this cat s uremia ; however , it was declined .
immuno - suppressive therapy appears warranted based on the histopathology results ; however , a predictable response is unknown . because interstitial fibrosis was mild and tubular regeneration
was observed , it is possible that dialytic support and immunosuppressive therapy might have led to improvement in renal function in this case . | case summary : a 5-year - old cat was examined for vomiting and anorexia of 2 days duration .
azotemia , hyperphosphatemia and hypoalbuminemia were the main biochemical findings .
serial analyses of the urine revealed isosthenuria , proteinuria and eventual glucosuria .
hyperechoic perirenal fat was detected surrounding the right kidney by ultrasonography .
histopathologic evaluation of ante - mortem ultrasound - guided needle biopsies of the right kidney was consistent with proliferative , necrotizing and crescentic glomerulonephritis with fibrin thrombi , proteinaceous and red blood cell casts , and moderate multifocal chronic - active interstitial nephritis .
owing to a lack of clinical improvement , the cat was eventually euthanized .
post - mortem renal biopsies were processed for light microscopy , transmission electron microscopy and immunofluorescence .
this revealed severe focal proliferative and necrotizing glomerulonephritis with cellular crescent formation , podocyte injury and secondary segmental sclerosis .
ultrastructural analysis revealed scattered electron - dense deposits in the mesangium , and immunofluorescence demonstrated positive granular staining for light chains , consistent with immune complex - mediated glomerulonephritis .
severe diffuse acute tubular epithelial injury and numerous red blood cell casts were also seen . relevance and novel information : to our knowledge , this is the first report of naturally occurring proliferative , necrotizing and crescentic immune complex glomerulonephritis in a cat .
windings of random walk trajectories are of considerable interest in various fields of condensed matter physics , from polymers and dna to superconductors and bose - einstein condensates ( see e.g. @xcite and references therein ) . in this paper , we consider the influence of winding on the internal degrees of freedom of particles dynamics .
note that winding dependent degrees of freedom are not exotic .
the most obvious of these is , of course , the spin , which is of particular interest in view of the recent spintronic surge of activity . because of the spin - orbit interaction , the spin
is directly related to the winding of particle trajectories .
accordingly , our analysis is intimately related to the so - called spin hall effect in electron transport @xcite as well as to other types of spin transport , like , for example , photon ( light ) propagation in disordered media ( since photons also have non - zero spin ) .
other degrees of freedom , both classical and quantum , can also be winding dependent , as e.g. geometric phases ( the berry phase ) .
thus , the results of the present work are applicable not only to spin transport but also to any situation that can be reduced to random trajectories in a confined geometry with the dependence of certain characteristics on winding properties .
the paper is organized as follows .
section 2 presents the concept of winding and the corresponding terminology .
section 3 deals with the spin degree of freedom as directly dependent on a winding angle .
section 4 contains numerical simulations in the case of diffusion in a rectangular 2d domain with periodic boundary conditions in the longitudinal direction and hard walls in the transverse direction .
section 5 proposes a simple theoretical model of classical random walks with so - called `` soft walls '' , where the geometric constraints are described by an external confining potential in the transverse direction .
section 6 contains a brief discussion of the results .
let us introduce the winding angle @xmath0 as the vector sum of all the turnings of a particle trajectory @xmath1 .
the angle is called the _ total curvature _ in the literature ( see e.g. @xcite ) .
it is given by the integral of the differential curvature over a natural parameter ( the local time of the trajectory ) .
the differential curvature @xmath2 of the trajectory is defined as @xmath3 where the symbol `` @xmath4 '' denotes differentiation with respect to @xmath5 and the symbol @xmath6 denotes the vector product .
correspondingly , @xmath0 has the form @xcite @xmath7 note that the total curvature of a closed planar trajectory , equal to the `` winding number '' or `` turning number '' multiplied by @xmath8 , is a homotopy invariant .
it is worth mentioning that the term `` _ _ winding angle _ _ '' is also used in polymer physics ( see e.g. @xcite ) where , however , it rather refers to @xmath9 thus , @xmath10 is the winding angle of the radius vector of the trajectory in configuration space , whereas @xmath0 yields an analogous quantity but in velocity space .
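as an illustration of how the velocity - space winding angle @xmath0 can be accumulated for a sampled trajectory , a minimal numerical sketch is given below ; the helper name and the use of numpy are our own illustrative choices , not part of the original work .

```python
import numpy as np

def total_curvature(points):
    """Accumulated turning angle (total curvature) of a sampled 2D trajectory.

    points : (N, 2) array of successive positions.  The signed angle between
    consecutive step vectors is summed, a discrete analogue of integrating
    the differential curvature over the natural parameter of the path.
    """
    v = np.diff(points, axis=0)                    # step ("velocity") vectors
    cross = v[:-1, 0] * v[1:, 1] - v[:-1, 1] * v[1:, 0]
    dot = np.einsum('ij,ij->i', v[:-1], v[1:])
    return np.sum(np.arctan2(cross, dot))          # signed turn at each vertex
```

for a closed planar curve this sum is @xmath8 times the turning number , consistent with the homotopy invariance mentioned above .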
the distribution of @xmath10 s for brownian trajectories has been studied for a long time ( see e.g. @xcite ) , while that of @xmath0 s has not been analyzed so far . on the other hand
, it seems that @xmath0 is a more appropriate quantity for certain problems , for example those including spin ( spin hall effect , etc ) , as we will argue below .
we consider for simplicity the 2d case .
generalizations to 3d and tensors are fairly simple , but the resulting formulas are more involved and less intuitive . in the planar case
the winding angle vector @xmath0 is normal to the plane , hence it suffices to consider @xmath11 , the unique non - zero component of @xmath0 .
the transport of the winding angle is described by the average current density @xmath12 where the symbol @xmath13 denotes averaging over a set of brownian trajectories , or , in the discrete case , of random walks . in the case of an infinite plane ( or a plane with periodic boundary conditions ) and of equilibrium ,
the particle density does not depend on the coordinates and the average @xmath14 vanishes identically as well as the average of the winding current density ( [ tok ] ) .
if , however , a confined planar geometry is considered , then @xmath14 will still be vanishing but , as we will show below , this will not be the case for the winding current , because of the geometrical constraints .
namely , due to the mere existence of boundaries , there will exist trajectories whose contribution to the winding will not be canceled by that of other trajectories . as a result , there will necessarily appear an anti - parallel edge current with @xmath15 and @xmath16 and @xmath17 will not identically vanish anymore .
these surviving edge currents can be viewed as analogs of spin edge currents in the spin hall effect @xcite .
before considering the winding currents , it is useful to discuss briefly an important example of spin - orbit interaction where the spin degree of freedom depends on the winding angle @xmath11
. it is known that in electronic transport in the presence of charged impurities the electron spin depends on the winding of the electron trajectories .
this dependence manifests itself , for instance , as the so called _ skew scattering _ in the `` _ _ extrinsic _ _ '' spin hall effect @xcite .
however , in the literature on the subject , the concept of trajectory winding is mostly implicit .
recall that in electronic transport impurities interact with electrons and thus the scattering angle , hence the deviation of the electron trajectory from a ballistic straight line , depends , in relativistic theory , on the spin .
more generally , it is known that if , for any reason ( not necessarily related to electrons scattering on charged impurities ) the trajectory of a relativistic particle is not a straight line , the curving of the trajectory changes the spin degree of freedom of the particle .
this purely kinematic relativistic effect is due to the successive non - parallel lorentz transformations ( _ boosts _ ) that lead to the so - called wigner rotation of the spin , or the thomas - wigner precession .
the variation of the precession angle of a classical rotating moment resulting from an infinitesimal turning of the velocity has been obtained by thomas @xcite @xmath18 where @xmath19 is a vector of the precession angle , @xmath20 is the velocity , @xmath21 its variation , and @xmath22 is the speed of light .
( [ eq : wigner_rotation ] ) gives the leading relativistic approximation for the precession angle .
it is worth mentioning that the exact relativistic expression is still a matter of controversy ( see the recent articles @xcite ) .
note also that the dirac equation allows for a more precise study of this effect .
however the classical result ( [ eq : wigner_rotation ] ) of thomas is sufficiently accurate to describe well the spin - orbit splitting in atoms @xcite .
it has played an important role in the genesis of the quantum theory of radiation ( see e.g @xcite ) . comparing ( [ eq : thetadef ] ) and ( [ eq : wigner_rotation ] )
, we find a simple relation between the moment precession angle and the winding angle of the trajectory @xmath23 where @xmath24 is the trajectory turning angle .
note that , in many cases , the precession angle during the observation time is essentially equal to the winding angle multiplied by a small parameter @xmath25 ( e.g. the case of fermions at temperatures below fermi temperature , or any particles in the case of elastic scattering ) .
let us now argue that the spatial curvature of a trajectory is equivalent to the action of a magnetic field on a spin degree of freedom .
indeed , it is known that an homogeneous magnetic field in the rest frame of a classical spin results in its larmor precession @xcite @xmath26 where @xmath27 is the spin in angular momentum units , @xmath28 is the magnetic field , @xmath29 is the gyro - magnetic factor , and @xmath30 is the mass .
thus , an effective magnetic field with a larmor frequency equal to the frequency @xmath31 of thomas - wigner precession ( @xmath32 ) has the form @xmath33 . the spin - field interaction energy is then @xmath34 . hence , the direction and the magnitude of the effective magnetic field are intimately related to the winding angle .
returning to random walks of electrons in solids ( which can result from electrons scattering on charged impurities , see e.g. @xcite ) , we note that in this case it follows from ( [ eq - heff ] ) that the effective magnetic field of the lorentz transformation of the impurity electric field can be included in the wigner - thomas precession and vice versa , thus forming a complete spin - orbit interaction .
we also note that , leaving aside electrons in solids , the above discussion can refer in general to the diffusion of any particles with non - zero spin or magnetic moment in disordered media , for example spin 1 photons ( whose motion in a medium with random refractive index can be described as a diffusion process @xcite ) , as well as in foams , metamaterials , dense plasma , etc ...
(see e.g. @xcite and references therein ) .
let us now turn to numerical simulations which do support the existence of winding edge currents .
we consider the strip of width @xmath35 along the horizontal axis and of length @xmath36 along the vertical axis on a planar square lattice of period 1 . in the longitudinal direction
periodic boundary conditions are imposed at @xmath37 . in the transverse direction reflecting ( hard ) walls enclose the strip in a finite width @xmath38 .
we consider @xmath39 independent particles whose trajectories are random walks on the lattice . the diffusion time step @xmath40 is assumed to be unity and random hoppings are allowed only to the nearest neighbor sites .
let @xmath41 and @xmath42 be respectively the position and the winding angle of the @xmath43th particle after @xmath44 jumps .
it is clear that each next turn ( left or right ) increases or decreases @xmath45 by @xmath46 .
we assume , for the sake of definiteness , that a backward scattering does not change the winding angle of the trajectory .
the time averaged winding current @xmath47 at a point @xmath48 of the lattice is after @xmath49 steps ( i.e. during the observation time @xmath50 ) @xmath51 where @xmath52 is the kronecker delta .
we analyzed a wide range of parameters @xmath53 , @xmath54 , @xmath49 , and @xmath55 .
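a schematic reproduction of this hard - wall simulation is sketched below ; the strip sizes , particle and step counts and the helper name are illustrative assumptions , not the exact parameters of the study , while the turning rules ( @xmath46 per left or right turn , no change on backward scattering ) follow the text .

```python
import numpy as np

def winding_current_map(Lx=16, Ly=100, n_part=500, n_steps=50_000, seed=0):
    """Toy hard-wall strip: periodic along y, reflecting walls at x = 0, Lx-1.

    Each left/right turn changes the winding angle by +/- pi/2; straight and
    backward steps leave it unchanged.  The returned array accumulates the
    time-averaged winding current theta * (step vector) on every visited site.
    """
    rng = np.random.default_rng(seed)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    pos = np.column_stack((rng.integers(0, Lx, n_part), rng.integers(0, Ly, n_part)))
    last = steps[rng.integers(0, 4, n_part)]
    theta = np.zeros(n_part)
    j = np.zeros((Lx, Ly, 2))
    for _ in range(n_steps):
        new = steps[rng.integers(0, 4, n_part)]
        out = (pos[:, 0] + new[:, 0] < 0) | (pos[:, 0] + new[:, 0] >= Lx)
        new[out, 0] *= -1                          # reflect off the hard walls
        turn = np.arctan2(last[:, 0] * new[:, 1] - last[:, 1] * new[:, 0],
                          np.einsum('ij,ij->i', last, new))
        theta += np.where(np.isclose(np.abs(turn), np.pi), 0.0, turn)
        pos[:, 0] += new[:, 0]
        pos[:, 1] = (pos[:, 1] + new[:, 1]) % Ly
        np.add.at(j, (pos[:, 0], pos[:, 1]), theta[:, None] * new)
        last = new
    return j / n_steps
```

in such a toy run the bulk current averages to small fluctuations around zero , while currents of opposite sign build up within one lattice spacing of the two walls .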
typical results are shown in fig.[figure : simulation - jvec ] for @xmath56 , @xmath57 , and strip sizes @xmath58 . to display the 2d vector field @xmath17
, we use a 2d palette as in fig.[figure : palette ] .
the palette center corresponds to zero vector field and its vertical and horizontal edges correspond to the range of the components @xmath59 and @xmath60 of @xmath61 , respectively .
the palette is divided in four sectors of different colors with a radial variation of the color intensity .
this allows to distinguish the approximate direction and magnitude of the current .
the numerical simulations for @xmath61 are shown in fig.[figure : jplus ] . in the bulk of the sample
only small - scale fluctuations of the winding current can be seen .
however , things differ strongly from the bulk in the narrow neighborhoods of the walls : winding currents do exist along the edges .
they materialize as almost continuous and narrow lines on the left and right edges of the sample ( blue and red , according to the direction ) .
the non vanishing longitudinal component of the winding current , averaged over @xmath62 , is displayed in fig.[figure : jsum ] .
it is concentrated within one lattice site from the edges and directed downward near the left edge and upward near the right edge .
fig.[figure : jmodulus ] displays the absolute value of the current density @xmath63 and how it concentrates on the edges with an increase of the simulation time @xmath49 .
since the system is in equilibrium , i.e. , external fields are absent and the particles are distributed uniformly , winding currents can only originate from the confining boundaries and the chaotic kinetic energy of the random walk trajectories .
thus , these currents are by nature persistent .
let us consider a simple model where the effect of winding currents is explicit .
it consists of planar continuous brownian trajectories described by an ornstein - uhlenbeck process in a potential @xmath64 applied along the @xmath65 axis and infinitely growing as @xmath66 .
the potential plays the role of the _ soft walls _ confining the brownian particles , and thereby encodes the geometric constraints . as in the previous section ,
periodic boundary conditions at @xmath67 are assumed in the longitudinal direction .
the stochastic ( langevin ) equations of motion are @xmath68 where @xmath69 is the coordinate of the particle , @xmath70 is its velocity , @xmath30 is the mass of the particle , @xmath71 is the viscosity and @xmath72 is the random force assumed to be the gaussian white noise , i.e. @xmath73 . since we are interested in the winding of trajectories of the above random process , we consider the joint probability density of the coordinate , velocity and winding angle @xmath11 , @xmath74 . the @xmath75 integration of ( [ distbis ] ) yields the density @xmath76 satisfying the usual kramers equation @xcite ( fokker - planck equation with external potential ) @xmath77 where @xmath78 is defined in ( [ wnc ] ) and the standard summation convention over repeated indices is understood .
it is easy to check that the probability density @xmath79 in ( [ kreq ] ) converges as @xmath80 to a stationary maxwell - boltzmann distribution in @xmath81 and @xmath20 with a @xmath82 normalization factor in the longitudinal direction @xmath83 ( see ( [ pst ] ) ) .
note that one could also use reflecting boundary conditions or a weak confining potential in this direction .
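a minimal euler - maruyama integration of these langevin equations is sketched below ; the time step , the parameter values and the harmonic default for the confining force are our own illustrative assumptions , while the winding angle is accumulated as the turning of the velocity vector , following the definition used above .

```python
import numpy as np

def simulate_soft_wall(n_steps=200_000, dt=1e-3, m=1.0, gamma=1.0, D=1.0,
                       dUdx=lambda x: x, seed=1):
    """Euler-Maruyama sketch of the soft-wall Ornstein-Uhlenbeck model.

    m dv = (-U'(x) e_x - m*gamma*v) dt + dW, with <dW_i dW_j> = D*dt*delta_ij.
    The winding angle is accumulated as d(theta) = (v x dv) / |v|^2, i.e. the
    turning of the velocity vector along the trajectory.
    """
    rng = np.random.default_rng(seed)
    r = np.zeros(2)
    v = rng.normal(size=2)
    theta = 0.0
    for _ in range(n_steps):
        force = np.array([-dUdx(r[0]), 0.0])       # confining force along x only
        dW = rng.normal(scale=np.sqrt(D * dt), size=2)
        dv = (force - m * gamma * v) / m * dt + dW / m
        theta += (v[0] * dv[1] - v[1] * dv[0]) / (v @ v + 1e-12)
        v = v + dv
        r = r + v * dt
    return r, v, theta
```

averaging such trajectories over an ensemble and binning theta * velocity on a spatial grid gives a numerical counterpart of the winding current studied analytically below .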
the fokker - planck equation for @xmath84 can be obtained as follows .
let us denote @xmath85 , then @xmath86 and @xmath87 . eq . ( [ eq : langevin ] ) then implies @xmath88\langle \xi _ { j}\mathfrak{\delta } \rangle . to deal with the correlation functions @xmath89 , we use the furutsu - novikov formula ( see e.g. @xcite , chapter 10 and @xcite , chapter 5 ) , valid for any functional @xmath90 of the @xmath91-correlated gaussian random functions @xmath92 and @xmath93 : @xmath94\rangle = \frac{d}{2}\big\langle \frac{\delta } { \delta \xi _ { j}(t)}r_{t}[\xi ] \big\rangle . we obtain after integration by parts @xmath95 . to find the variational derivatives of @xmath96 and @xmath97 with respect to @xmath98 , we apply the operation @xmath99 to the integrated langevin equations , then use the principle of causality and take the limit @xmath100 .
this yields @xmath101 and @xmath102 . the variational derivative of @xmath75 in ( [ eq : maincor ] ) is calculated using ( [ eq : thetadef ] ) : @xmath103 , and finally @xmath104 . thus , the correlation functions @xmath89 are expressed via the partial derivatives of @xmath105 : @xmath106 . we assume for the sake of simplicity that the diffusion coefficient @xmath78 does not depend on the velocity and we take into account the relation @xmath107 .
we obtain the fokker - planck equation @xmath108 . in the case at hand , where the potential @xmath64 depends only on the transverse coordinate @xmath81 ( modeling the soft walls of a deep valley ) , the fokker - planck equation becomes @xmath109 . eq .
( [ eq : kinetic ] ) differs from the kramers equation by the terms on the right - hand side .
the physical meanings of the first and fourth terms are the most interesting .
it is already clear here and will be even more clear below that the former is the current in the @xmath83 direction and the latter is responsible for the diffusive spreading of @xmath75 .
we also note that after integration over @xmath75 , ( [ eq : kinetic ] ) coincides with the kramers equation ( [ kreq ] ) .
( [ eq : kinetic ] ) allows us to study the transfer of arbitrary @xmath11-dependent quantities as well as non - equilibrium situations corresponding to arbitrary initial conditions . in the particular case of the winding current ( [ tok ] ) , which is linear in @xmath11
, the steady - state current can be found without actually solving the equation . indeed
, it is clear that the transverse component @xmath59 of the mean current density @xmath17 vanishes . to find the average longitudinal component @xmath110
let us multiply ( [ eq : kinetic ] ) by @xmath111 and integrate over @xmath20 and @xmath11 .
we obtain after integration by parts @xmath112 . the probability density @xmath113 approaches for large time the stationary distribution @xmath114 , where @xmath115 is a normalization constant .
it follows then from ( [ eq : jhydro ] ) and ( [ pst ] ) that for @xmath116 the current @xmath110 relaxes to its stationary form @xmath117 , where @xmath118 is the steady state ( boltzmann ) probability distribution in the transverse direction .
we obtain finally for the local average density of winding current per unit length in the longitudinal direction @xmath119 , where @xmath120 is the number of particles per unit length in the longitudinal direction .
the average winding current density ( [ cut ] ) is proportional to the gradient of the potential and to the local density of diffusing particles , which is determined by the temperature and the height of the potential at a given point @xmath65 .
one can easily see that the currents are antiparallel on opposite walls .
the gradient of the potential and the boltzmann distribution also determine the currents width .
particular examples of currents profiles are shown in figure [ figure : profile - linear ] for an harmonic potential @xmath121 and a trapezoidal potential @xmath122 , where @xmath123 is the heaviside step function [ @xmath124 for @xmath125 , and @xmath126 for @xmath127 ] .
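the stationary profile can be evaluated directly from the potential ; a short sketch is given below , where the overall prefactor ( mass , viscosity and particle line density ) is absorbed into a single constant n0 as an assumption of this illustration .

```python
import numpy as np

def current_profile(U, x, kT=1.0, n0=1.0):
    """Stationary winding-current profile j_y(x) ~ U'(x) * exp(-U(x)/kT).

    The current follows the gradient of the confining potential weighted by
    the Boltzmann occupation of the transverse coordinate, so for a symmetric
    potential it is odd in x: antiparallel currents on the two soft walls.
    """
    boltz = np.exp(-U(x) / kT)
    boltz /= np.trapz(boltz, x)                    # normalised transverse density
    return n0 * np.gradient(U(x), x) * boltz

x = np.linspace(-5.0, 5.0, 501)
j_harmonic = current_profile(lambda s: 0.5 * s ** 2, x)
j_trapezoid = current_profile(lambda s: np.clip(np.abs(s) - 3.0, 0.0, None), x)
```

for the harmonic case the current is spread over the whole thermal width of the valley , while for the trapezoidal potential it is confined to the sloping regions where the gradient is non - zero .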
note that formula ( [ eq : mainresult ] ) can be interpreted ( up to a small relativistic factor ) as describing the diffusion of magnetic moments in an effective magnetic field @xmath128 , thereby corresponding to the setting of the spin hall effect in the potential @xmath64 , as already mentioned above .
we recall that in our approach there is no interaction , and no magnetic ( or any other type ) moments .
we are only dealing with distributions of trajectories winding , hence the above effect can be viewed as purely dynamical ( winding hall effect ) .
assume finally without loss of generality that @xmath129 .
then the spatial average of @xmath130 in ( [ cut ] ) over @xmath131 ( or @xmath132 ) is @xmath133 . this can be viewed as a manifestation of the weak dependence of our results on the concrete form of the confining potential ( considering that @xmath134 is the effective planar density of particles ) .
we demonstrated that random walks or continuous brownian motions in a confined planar geometry generate dissipationless persistent currents of degrees of freedom associated with the winding of their trajectories .
this is a purely edge effect where the currents are formed in a close neighborhood of the boundaries . in the case of the soft walls , where the geometric confinement is provided by an infinitely growing transverse potential ,
the channel width of the current is determined by the gradient of the potential and the thermal energy of the particles . in the case of reflecting boundaries ( hard - walls ) ,
the channel width is just a single diffusion step , according to the numerical simulations . in the microscopic world , the spin and geometric phase are examples of degrees of freedom which depend on the winding .
note that it is commonly believed that in the spin hall effect the dissipationless edge currents arise in the conditions of the quantum _ intrinsic _ regime , where dissipation and chaos are absent because of quantization . since
, however , the above _ extrinsic _ spin hall effect corresponds to an ordinary diffusion , thus to a sufficiently high dissipation , one would think that persistent spin currents should not occur .
however , as we have shown , this is not the case simply because of the chaotic kinetic energy of the random trajectories of the particles .
similar effects may occur in purely mechanical systems where relativistic or quantum phenomena are absent .
for example , the torque of a compass arrow in a moving car is also a degree of freedom which `` feels '' the curvature of the car trajectory .
thus , the effect studied here can be of interest beyond the scope of phenomena associated with the spin hall effect .
in the case of the ornstein - uhlenbeck model with soft walls we have obtained an analytical expression for the persistent current of the winding via the boltzmann distribution in an arbitrary transverse confining potential .
the kinetic equation takes into account the winding of particles and , in principle , allows to explore the joint probability distributions in the single - particle phase space of the degrees of freedom with an arbitrary dependence on the winding .
this work was supported by the ukrainian branch of the french - russian poncelet laboratory and the grant 23/12-n of the national academy of sciences of ukraine .
stphane ouvry would like to thank the b. verkin institute for low temperatures and engineering for hospitality during the initial stages of the work .
figure 3 . the winding current density in the harmonic and trapezoidal potentials and the profile @xmath136 in arbitrary dimensionless units . on the profile charts ,
the left y - axis is @xmath130 and the right y - axis is @xmath64 . | we discuss persistent currents for particles with internal degrees of freedom .
the currents arise because of winding properties essential for the chaotic motion of the particles in a confined geometry .
the currents do not change the particle concentrations or thermodynamics , similar to the skipping orbits in a magnetic field . keywords : winding , diffusion , stochastic process , spin , spintronics , spin hall effect
recent studies have shown that the power spectrum of the stock market fluctuations is inversely proportional to the frequency on some power , which points to self - similarity in time for processes underlying the market @xcite .
our knowledge of the random and/or deterministic character of those processes is however limited .
one rigorous way to sort out the noise from the deterministic components is to examine in details correlations at @xmath8 scales through the so called master equation , i.e. the fokker - planck equation ( and the subsequent langevin equation ) for the probability distribution function ( @xmath9 ) of signal increments @xcite .
this theoretical approach , so called solving the inverse problem , based on @xmath10 statistical principles @xcite , is often the first step in sorting out the @xmath11 model(s ) . in this paper
we derive the fpe directly from the experimental data of two financial indices and two exchange rate series , in terms of a drift @xmath6 and a diffusion @xmath7 coefficient .
we would like to emphasize that the method is model independent .
the technique allows examination of long and short time scales _ on the same footing_. the so found analytical form of both drift @xmath6 and diffusion @xmath7 coefficients has a simple physical interpretation , reflecting the influence of the deterministic and random forces on the examined market dynamics processes . placed into a langevin equation
, they could allow for some first step forecasting .
we consider the daily closing price @xmath12 of two major financial indices , nikkei 225 for japan and nasdaq composite for us , and daily exchange rates involving currencies of japan , us and europe , @xmath0/@xmath1 and @xmath1/@xmath2 , from january 1 , 1985 to may 31 , 2002 . data series of nikkei 225 ( 4282 data points ) and nasdaq composite ( 4395 data points ) are downloaded from the yahoo web site ( @xmath13 ) .
the exchange rates of @xmath0/@xmath1 and @xmath1/@xmath2 are downloaded from @xmath14 and both consists of 4401 data points each .
data are plotted in fig .
1(a - d ) .
the @xmath1/@xmath2 case was studied in @xcite for the 1992 - 1993 years .
see also [ 6 ] , [ 8 - 10 ] and [ 11 ] for some related work and results on such time series signals , some on high frequency data , and for different time spans .
[ figure 1 : ( a ) nikkei 225 , ( b ) nasdaq , ( c ) @xmath0/@xmath1 and ( d ) @xmath1/@xmath2 exchange rates for the period from jan . 01 , 1985 till may 31 , 2002 . ]
to examine the fluctuations of the time series at different time delays ( or time lags ) @xmath15 we study the distribution of the increments @xmath16 .
therefore , we can analyze the fluctuations at long and short time scales on the same footing .
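such increment distributions can be tabulated directly from a daily closing - price array ; the short sketch below ( bin count , lags and variable names are illustrative choices , not taken from the original analysis ) shows the procedure .

```python
import numpy as np

def increment_pdf(x, tau, bins=51):
    """Normalised histogram of increments dx(t) = x(t + tau) - x(t)."""
    dx = x[tau:] - x[:-tau]
    density, edges = np.histogram(dx, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, density

# e.g. for a price series `close`, compare several delay times (trading days):
# curves = {tau: increment_pdf(close, tau) for tau in (1, 2, 4, 8, 16, 32)}
```

plotting these histograms on a logarithmic scale for increasing @xmath15 makes the development of the fat tails visible .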
results for the probability distribution functions ( pdf ) @xmath17 are plotted in fig .
2(a - d ) . note that while the pdfs of one - day time delays ( circles ) for all time series studied have similar shapes , the pdf for longer time delays shows fat tails as in @xcite of the same type for nikkei 225 , @xmath0/@xmath1 and @xmath1/@xmath2 , but is different from the pdf for nasdaq .
[ figure 2 : pdf of ( a ) nikkei 225 , ( b ) nasdaq , ( c ) @xmath0/@xmath1 and ( d ) @xmath1/@xmath2 from jan . 01 , 1985 till may 31 , 2002 for different delay times . each pdf is displaced vertically to enhance the tail behavior ; symbols and the time lags @xmath15 are in the insets . the discretisation step of the histogram is ( a ) 200 , ( b ) 27 , ( c ) 0.1 and ( d ) 0.008 respectively . ]
more information about the correlations present in the time series is given by joint pdfs , which depend on @xmath18 variables , i.e. @xmath19 .
we started to address this issue by determining the properties of the joint pdf for @xmath20 , i.e. @xmath21 . the symmetrically tilted character of the joint pdf contour levels ( fig .
3(a - c ) ) around an inertia axis with slope 1/2 points to the statistical dependence , i.e. a correlation , between the increments in all examined time series .
[ figure 3 : joint pdf contour levels of ( a ) nikkei 225 , ( b ) nasdaq closing price signal and ( c ) @xmath0/@xmath1 and ( d ) @xmath1/@xmath2 exchange rates for @xmath22 and @xmath23 . contour levels correspond to @xmath24 from center to border . ]
the conditional probability function is @xmath25 for @xmath26 . for any @xmath27 @xmath28 @xmath29 @xmath28 @xmath30 ,
the chapman - kolmogorov equation is a necessary condition for a markov process , one without memory but governed by probabilistic conditions @xmath31 . the chapman - kolmogorov equation , when formulated in @xmath32 form , yields a master equation , which can take the form of a fokker - planck equation @xcite : for @xmath33 , @xmath34 p(\delta x , \tau ) in terms of a drift @xmath6(@xmath35,@xmath36 ) and a diffusion coefficient @xmath7(@xmath35,@xmath36 ) ( thus values of @xmath36 represent @xmath37 , @xmath38 ) .
the coefficient functional dependence can be estimated directly from the moments @xmath39 ( known as kramers - moyal coefficients ) of the conditional probability distributions : @xmath40 @xmath41 for @xmath42 .
the functional dependence of the drift and diffusion coefficients @xmath6 and @xmath7 for the normalized increments @xmath35 is well represented by a line and a parabola , respectively .
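a schematic of the conditional - moment estimate is sketched below ; the binning , the minimum count per bin and the normalisation conventions ( which differ between authors by factorial factors and by the direction of the cascade in @xmath4 ) are assumptions of this sketch rather than the exact procedure of the paper .

```python
import numpy as np

def kramers_moyal(x, tau1, tau2, bins=41, min_count=10):
    """Drift D1 and diffusion D2 from conditional moments of increments.

    dx1 and dx2 are increments of the same series at lags tau1 < tau2 sharing
    the starting time t; within each bin of dx1, the mean and mean square of
    (dx2 - dx1), divided by (tau2 - tau1), estimate the first two
    Kramers-Moyal coefficients as functions of the increment value.
    """
    dx2 = x[tau2:] - x[:-tau2]
    dx1 = (x[tau1:] - x[:-tau1])[:len(dx2)]
    dtau = tau2 - tau1
    edges = np.linspace(dx1.min(), dx1.max(), bins + 1)
    centres, d1, d2 = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (dx1 >= lo) & (dx1 < hi)
        if sel.sum() < min_count:
            continue
        diff = dx2[sel] - dx1[sel]
        centres.append(0.5 * (lo + hi))
        d1.append(diff.mean() / dtau)
        d2.append(0.5 * (diff ** 2).mean() / dtau)
    return np.array(centres), np.array(d1), np.array(d2)
```

fitting a straight line to the estimated drift and a parabola to the estimated diffusion then gives polynomial coefficients of the kind reported in table 1 .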
the values of the polynomial coefficients are summarized in table 1 and fig . 4 .
[ figure 4 : drift @xmath6 and diffusion @xmath7 coefficients for the pdf evolution equation ( 3 ) ; @xmath35 is normalized with respect to the value of the standard deviation @xmath43 of the pdf increments at delay time 32 days : ( a , b ) nikkei 225 and ( c , d ) nasdaq closing price signal , ( e , f ) @xmath0/@xmath1 and ( g , h ) @xmath1/@xmath2 exchange rates . ]
[ figure : conditional pdf for two values of @xmath15 , @xmath44 days , @xmath45 day , for nasdaq . contour levels correspond to @xmath46=-0.5,-1.0,-1.5,-2.0,-2.5 from center to border ; data ( solid line ) and solution of the chapman kolmogorov equation integration ( dotted line ) ; ( b ) and ( c ) data ( circles ) and solution of the chapman kolmogorov equation integration ( plusses ) for the corresponding pdf at @xmath47 = -50 and + 50 . ]
[ table 1 : values of the polynomial coefficients defining the linear and quadratic dependence of the drift and diffusion coefficients @xmath48 and @xmath49 for the fpe ( 3 ) of the normalized data series ; @xmath43 represents the normalization constant equal to the standard deviation of the @xmath15=32 days pdf . ]
the leading coefficient ( @xmath50 ) of the linear @xmath6 dependence has approximately the same values for all studied signals , thus the same deterministic noise ( drift coefficient ) .
note that the leading term ( @xmath51 ) of the functional dependence of diffusion coefficient of the nasdaq closing price signal is about twice the leading , i.e. second order coefficient , of the other three series of interest .
this can be interpreted as if the stochastic component ( diffusion coefficient ) of the dynamics of nasdaq is twice larger than the stochastic components of nikkei 225 , @xmath0/@xmath1 and @xmath1/@xmath2 .
a possible reason for such a behavior may be related to the transaction procedure on the nasdaq
. our numerical result agrees with that of ref .
@xcite if a factor of ten is corrected in the latter ref . for @xmath51 . the validity of the chapman - kolmogorov equation has also been verified .
a comparison of the directly evaluated conditional pdf with the numerical integration result ( 2 ) indicates that both pdf s are statistically identical .
the more pronounced peak for the nasdaq is recovered ( see fig .
an analytical form for the pdf s has been obtained by other authors @xcite but with models different from more classical ones @xcite .
the present study of the evolution of japan and us stock as well as foreign currency exchange markets has allowed us to point out the existence of deterministic and stochastic influences .
our results confirm those for high frequency ( 1 year long ) data @xcite .
the markovian nature of the process governing the pdf evolution is confirmed for such long range data as in @xcite for high frequency data .
we found that the stochastic component ( expressed through the diffusion coefficient ) for nasdaq is substantially larger ( twice ) than for nikkei 225 , @xmath0/@xmath1 and @xmath1/@xmath2 .
this could be attributed to the electronic nature of executing transactions on nasdaq , therefore to different stochastic forces for the market dynamics .
silva ac , yakovenko vm ( 2002 ) comparison between the probability distribution of returns in the heston model and empirical data for stock indices , cond-mat/0211050 ; dragulescu aa , yakovenko vm ( 2002 ) probability distribution of returns in the heston model with stochastic volatility , cond-mat/0203046 | the evolution of the probability distributions of japan and us major market indices , nikkei 225 and nasdaq composite index , and @xmath0/@xmath1 and @xmath1/@xmath2 currency exchange rates is described by means of the fokker - planck equation ( fpe ) . in order to distinguish and quantify the deterministic and random influences on these financial time series
we perform a statistical analysis of their increments @xmath3 distribution functions for different time lags @xmath4 . from the probability distribution functions at various @xmath4 , the fokker - planck equation for @xmath5
is explicitly derived .
it is written in terms of a drift and a diffusion coefficient .
the kramers - moyal coefficients are estimated and found to have a simple analytical form , thus leading to a simple physical interpretation for both drift @xmath6 and diffusion @xmath7 coefficients .
the markov nature of the indices and exchange rates is shown and an apparent difference in the nasdaq @xmath7 is pointed out . * key words . * econophysics ; probability distribution functions ; fokker - planck equation ; stock market indices ; currency exchange rates |
spider bite is endemic in parts of north america , mexico , tropical belt of africa and europe and can cause serious systemic manifestations .
loxosceles spiders belong to the family loxoscelidae / sicariidae . of the 13 species of loxosceles
loxosceles reclusa , or the brown recluse spider , is the spider most commonly responsible for this injury .
dermonecrotic arachnidism refers to the local skin and tissue injury as a result of spider - bite .
loxoscelism is the term used to describe the systemic clinical syndrome caused by envenomation from the brown spiders .
cutaneous manifestations occur in around 80% cases around the site of bite , predominantly the lower limbs .
the initial cutaneous manifestation is that of an erythematous halo with edema around the bite site .
the erythematous margin around the site continues to enlarge peripherally , secondary to gravitational spread of the venom into the tissues .
this gradually gives way to vesicles and finally a dark eschar or a necrotic ulcer .
mild systemic effects such as fever , malaise , pruritus and exanthema are common , whereas intravascular hemolysis and coagulation , sometimes accompanied by thrombocytopenia and renal failure , occur in approximately 16% of those who receive the bite . rarely ( < 1% of the cases of suspected l. reclusa bites with a higher incidence in south american loxoscelism ) , recluse venom may cause hemolysis , disseminated intravascular coagulation , which can lead to serious injury and possibly death .
although india is a home to a diverse array of arachnids , according to the latest updated list of spider species found in india , loxosceles rufescens is the only member of the loxosceles genus described in india .
systemic envenomation ( especially renal failure ) from loxosceles bite has been rarely described from india .
a 23-year - old male was bitten by a spider in the dorsal aspect of his right forearm .
he developed a painful blister with red margins in the distal forearm , around the site of bite , which subsequently turned into a necrotic lesion .
he had excruciating pain and erythematous swelling of the right hand and around the bite site .
initial blood urea was 133 mg / dl , creatinine 3.4 mg / dl , serum na 131.6 meq / l and k 4.67 meq / l .
hemoglobin was 6.2 g / dl , total leucocyte count 12,000/cmm ( n84l12e2m2b0 ) with normal platelet counts .
he was referred to our hospital for worsening renal function and deteriorating skin lesions . on arrival
there was blackish discoloration of the right distal forearm and hands and a gravitational pattern of involvement from the bite site down into the hands [ figure 1 ] .
radial pulse was palpable and he had preserved sensation in the fingers of his right hand .
though the spider was not brought , it was identified to be the brown recluse spider ( loxosceles spp ) based on the description and on showing representative pictures .
[ figure 1 : gravitational pattern of dermonecrosis following the bite of brown recluse spider ( loxosceles spp ) , day 5 . ] investigations revealed hemoglobin of 4.3 gm / dl with normal leucocyte and platelet counts with a reticulocyte count of 9.1% .
he had advanced renal failure with normal electrolytes , serum urea 206 mg / dl and creatinine 6.6 mg / dl .
liver function tests were normal with mild unconjugated hyperbilirubinemia ( total bilirubin was 2.4 mg / dl with unconjugated bilirubin of 1.9
/ l , lactate dehydrogenase ( ldh ) 588 u / l ( normal range : < 200
u / l ) and serum haptoglobin was 18 mg / dl ( normal ) .
he was discharged after 3 weeks with a daily urine output of 1.8 l , serum creatinine of 1.8 mg / dl , hemoglobin 9.2 gm / dl and ldh of 180 mg / dl .
renal function had normalized ( serum creatinine 0.9 mg / dl , normal urine microscopy ) and ulcer had healed with desquamation of the involved skin [ figure 2 ] . [ figure 2 : healed skin lesion of the same patient at 6 weeks . ]
intravascular hemolysis was evidenced by low hemoglobin level , elevated reticulocyte counts with elevated unconjugated bilirubin , high serum ldh and low serum haptoglobin levels .
the likely etiology of renal failure in our case is hemolysis leading to acute tubular injury .
there was no evidence of myonecrosis / rhabdomyolysis as evidenced by normal serum cpk and urine myoglobin levels .
although the spider was not brought by the patient for identification , the features were typical of loxoscelism , especially the gravitational pattern of dermonecrosis .
most of the case series of loxoscelism have documented dermonecrosis with a few cases of hemolysis and rare cases of renal injury .
wound debridement , elevation , application of ice and immobilization of the affected area may help ameliorate the extent of cutaneous damage .
dapsone has been recommended by some authorities to treat dermonecrosis on account of its leucocyte inhibiting properties .
patients exhibiting signs of systemic toxicity should be admitted and evaluated for evidence of coagulopathy and renal failure .
it has been found that sphingomyelinase activity of the loxosceles toxin induces activation of an endogenous metalloproteinase , which then cleaves glycophorins thus rendering it susceptible to complement mediated lysis .
one study observed the transfer of complement - dependent hemolysis to other cells , suggesting that the loxosceles toxins can act on multiple cells .
this observation can explain the relatively significant extent of hemolysis observed in patients with inoculation of small amounts of the toxin ( max 30 g ) .
loxoscelism causes necrotic dermatologic injury through a unique enzyme ; sphingomyelinase d. loxosceles toxin has also been shown to have hyaluronidase , alkaline phosphatase and esterase activity .
these cause degradation of the extracellular matrix and contribute to the spread of the toxin in tissue compartment .
the dermatohistopathology of loxosceles bites include dermal edema , thickening of blood vessel endothelium , leukocyte infiltration , intravascular coagulation , vasodilatation , destruction of blood vessel walls and hemorrhage .
renal injury in loxoscelism has been attributed to pigmentary nephropathy due to hemoglobin or myoglobin , secondary to hemolysis or rhabdomyolysis .
however , in the absence of effective reporting systems , many fatal or near - fatal envenomations go unreported .
careful clinical and entomological studies should be done to look into this neglected disease entity .
it should be borne in mind that a case presenting with acute dermal inflammation or ulceration in a gravitational pattern , along with features of hemolysis or rhabdomyolysis or , in rare instances , acute kidney injury could be due to loxoscelism and it is a close mimicker of hemotoxic snake bite . | spiders of the loxosceles species can cause dermonecrosis and acute kidney injury ( aki ) .
hemolysis , rhabdomyolysis and direct toxin - mediated renal damage have been postulated .
there are very few reports of loxoscelism from india .
we report a case of aki , hemolysis and a gravitational pattern of ulceration following the bite of the brown recluse spider ( loxosceles spp ) . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Learning Opportunities With Creation
of Open Source Textbooks (LOW COST) Act of 2009''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) The College Board reported that for the 2007 through
2008 academic years each student spent an estimated $805 to
$1,229 on college books and supplies depending on the type of
institution of higher education a student attended.
(2) The gross margin on new college textbooks is currently
22.7 percent according to the National Association of College
Stores.
(3) In a recent study, the Government Accountability Office
found that college textbook prices have risen at twice the rate
of annual inflation over the last two decades.
(4) An open source material project that would make high
quality educational materials freely available to the general
public would drop college textbook costs and increase
accessibility to such education materials.
(5) College-level open source course work materials in
math, physics, and chemistry represent a high-priority first
step in this area.
(6) The scientific and technical workforce at Federal
agencies, national laboratories, and federally supported
university-based research programs could make a valuable
contribution to this effort.
(7) A Federal oversight role in the creation and
maintenance of standard, publicly vetted textbooks is desirable
to ensure that intellectual property is respected and that
public standards for quality, educational effectiveness, and
scientific accuracy are maintained.
SEC. 3. OPEN SOURCE MATERIAL REQUIREMENT FOR FEDERAL AGENCIES.
(a) In General.--Not later than 1 year after the date of the
enactment of this Act, the head of each agency that expends more than
$10,000,000 in a fiscal year on scientific education and outreach shall
use at least 2 percent of such funds for the collaboration on the
development and implementation of open source materials as an
educational outreach effort in accordance with subsection (b).
(b) Requirements.--The head of each agency described in subsection
(a) shall, under the joint guidance of the Director of the National
Science Foundation and the Secretary of Energy, collaborate with the
heads of any of the agencies described in such subsection or any
federally supported laboratory or university-based research program to
develop, implement, and establish procedures for checking the veracity,
accuracy, and educational effectiveness of open source materials that--
(1) contain, at minimum, a comprehensive set of textbooks
or other educational materials covering topics in college-level
physics, chemistry, or math;
(2) are posted on the Federal Open Source Material Website;
(3) are updated prior to each academic year with the latest
research and information on the topics covered in the textbooks
or other educational materials available on the Federal Open
Source Material Website; and
(4) are free of copyright violations.
SEC. 4. GRANT PROGRAM.
(a) Grants Authorized.--From the amounts appropriated under
subsection (d), the Director and the Secretary shall jointly award
grants to eligible entities to produce open source materials in
accordance with subsection (c).
(b) Application.--To receive a grant under this section, an
eligible entity shall submit an application to the Director and the
Secretary at such time, in such manner, and containing such information
as the Director and Secretary may require.
(c) Uses of Grant.--An eligible entity that receives a grant under
this section shall use such funds--
(1) to develop and implement open source materials that
contain educational materials covering topics in college-level
physics, chemistry, or math; and
(2) to evaluate the open sources materials produced with
the grant funds awarded under this section and to submit a
report containing such evaluation to the Director and
Secretary.
(d) Authorization of Appropriations.--There are authorized to be
appropriated $15,000,000 to carry out this section for fiscal year 2010
and such sums as necessary for each succeeding fiscal year.
SEC. 5. REGULATIONS.
The Director and the Secretary shall jointly prescribe regulations
necessary to implement this Act, including redistribution and
attribution standards for open source materials produced under this
Act.
SEC. 6. DEFINITIONS.
In this Act:
(1) Director.--The term ``Director'' means the Director of
the National Science Foundation.
(2) Eligible entity.--The term ``eligible entity'' means an
institution of higher education, nonprofit or for-profit
organization, Federal agency, or any other organization that
produces the open source materials described in section 4(c).
(3) Federal open source material website.--The phrase
``Federal Open Source Material Website'' means the website
where the head of each agency described in section 3 shall post
the open source materials pursuant to such section, which shall
be made available free of charge to, and may be downloaded,
redistributed, changed, revised or otherwise altered by, any
member of the general public.
(4) Institution of higher education.--The term
``institution of higher education'' means an institution of
higher education as defined in section 101 of the Higher
Education Act of 1965 (20 U.S.C. 1001).
(5) Open source materials.--The term ``open source
materials'' means materials that are posted on a website that
is available free of charge to, and may be downloaded,
redistributed changed, revised or otherwise altered by, any
member of the general public.
(6) Secretary.--The term ``Secretary'' means the Secretary
of Energy. | Learning Opportunities With Creation of Open Source Textbooks (LOW COST) Act of 2009 - Requires each federal agency that expends more than $10 million in a fiscal year on scientific education and outreach to use at least 2% of such funds for collaboration on the development and implementation of open source materials as an educational outreach effort.
Directs such agencies, under the joint guidance of the Director of the National Science Foundation (NSF) and the Secretary of Energy (DOE), to collaborate with each other or with any federally supported laboratory or university-based research program to develop, implement, and establish procedures for checking the veracity, accuracy, and educational effectiveness of open source materials that: (1) contain a comprehensive set of textbooks or other educational materials covering topics in college-level physics, chemistry, or math; (2) such agencies post on a Federal Open Source Material Website, which shall be available to the public without charge; (3) are updated prior to each academic year with the latest research and information ; and (4) are free of copyright violations.
Requires the Director and the Secretary to award joint grants to eligible entities to: (1) develop and implement such open source materials; and (2) evaluate and report to the Director and Secretary on the materials produced. |
A former top aide to Gov. Chris Christie of New Jersey revealed Monday that she would not hand over documents in response to a subpoena from a legislative panel investigating the controversial closing of lanes at the George Washington Bridge last fall, citing her Fifth Amendment right against self-incrimination.
The former aide, Bridget Anne Kelly, informed the panel, through a letter from her lawyer, Michael Critchley, that in addition to the Fifth, she was also invoking the Fourth Amendment in defense of her privacy. The letter said that the panel’s request “directly overlaps with a parallel federal grand jury investigation.” It also contended that giving the committee “unfettered access” to her diaries, calendars and electronic devices could “potentially reveal highly personal confidential communications” unrelated to the bridge scandal.
The decision by Ms. Kelly, who had been a deputy chief of staff and a key cog in Mr. Christie’s political operation, was reported online Monday evening by The Record of northern New Jersey. And it unfolded as Mr. Christie was fielding questions during his regular radio program, “Ask the Governor,” on New Jersey 101.5 FM.
Asked about Ms. Kelly’s decision on the air, Mr. Christie said that “it doesn’t tell me anything” and that he respected her constitutional rights.
Assemblyman John S. Wisniewski and State Senator Loretta Weinberg, the Democratic leaders of the panel, issued a statement saying that they had received the letter and that they “are reviewing it and considering our legal options with respect to enforcing the subpoena.”
Ms. Kelly looms as a pivotal figure in the scandal. She is the official who wrote an email in August saying, “Time for some traffic problems in Fort Lee,” to another Christie ally, David Wildstein at the Port Authority of New York and New Jersey. Mr. Wildstein responded, “Got it,” and together, they were intimately involved in the lane closings, which occurred over four days in September.
Mr. Christie — who repeated on Monday night his insistence that he did not know about the scheme beforehand — later fired Ms. Kelly, and also cut ties with her boss, Bill Stepien, who had been Mr. Christie’s campaign manager in 2009 and 2013.
Still, both Mr. Stepien and Ms. Kelly have now invoked their Fifth Amendment rights, as Mr. Wildstein did during a legislative hearing. With some 18 other subpoenas issued by the State Legislature — which is controlled by Democrats — also outstanding, it is possible that others may also follow suit.
Separately, federal prosecutors are also looking into the bridge scandal, as well as allegations of undue influence in a Hoboken development proposal. Mr. Christie said Monday on the radio program that his office would cooperate with a subpoena issued by the United States attorney’s office in Newark — the office he headed before becoming governor in 2010 — “on a rolling basis.”
“That’s fine,” Mr. Christie said, about the federal subpoena.
Ms. Kelly’s announcement came several days after one of her subordinates, Christina Genovese Renna, the director of the state’s Intergovernmental Affairs, submitted her resignation. Ms. Renna, 32, resigned on Friday, the same day Mr. Wildstein’s lawyer said in a letter that “evidence exists” contradicting the governor’s account about when he learned of the lane closings.
In response to questions about the timing of her departure, Ms. Renna said in a statement released on Sunday that she had been considering leaving since shortly after the election. Her lawyer, Henry E. Klingeman, suggested in an email on Monday that as the investigation by the committee, and a preliminary inquiry by federal prosecutors, moves forward, a decision by his client to leave would be more fraught.
Ms. Renna received a subpoena because on Sept. 12, the fourth and last day of the lane closings, she sent an email from her personal account to Ms. Kelly’s. Ms. Renna wrote that Evan Ridley, a staff member in Intergovernmental Affairs, which was responsible for maintaining a relationship with Fort Lee officials, had received a call from that borough’s mayor, Mark Sokolich. Mr. Sokolich, the email said, was “extremely upset” about the closings, which were causing such severe backup in Fort Lee that “first responders are having a terrible time maneuvering the traffic.”
“Evan told the fine mayor he was unaware that the toll lanes were closed, but he would see what he could find out,” Ms. Renna wrote. Ms. Kelly forwarded Ms. Renna’s email to Mr. Wildstein, who asked Ms. Kelly to call him.
Before joining the Christie administration, Ms. Renna worked as a lobbyist for the Chamber of Commerce, Southern New Jersey, where Debra P. DiLorenzo, the president and chief executive, called her “an exemplary member of our staff.”
Ms. Renna declined to be interviewed for this article, but something about her political outlook might be gleaned from a letter an idealistic-sounding young Christina Genovese wrote to a local newspaper a decade ago.
In it, she complained that few of her “peers have opinions — or even care — about politics” and that “even fewer are registered to vote” as the presidential election approached.
“Our country has given us the opportunity to have a voice in our future,” she wrote, ending her letter with the question, “Why not take it?” ||||| Gov. Chris Christie, addressing the Bridgegate scandal in a live radio interview Monday night, said he wants the people of New Jersey to know, “I had nothing to do with this” and “I’m going to fix it.” He also took a swipe at critics he said were engaging in “a game of gotcha.”
Governor Christie in studio for Ask the Governor on February 3 (Mel Evans, Pool/Associated Press)
Christie also disclosed that his office has been subpoenaed by the U.S. Attorney’s office in the case and is cooperating fully.
In his first appearance on the Townsquare Network’s Ask the Governor program since Dec. 23, Christie was asked by host Eric Scott about the series of explosive developments that have followed, including revelations of emails showing his deputy chief of staff involved in the disruptive closing of approach lanes to the George Washington Bridge last September as apparent political retribution and his own decision to fire her “because she lied” about the matter.
“I had nothing to do with this,” Christie said, repeating earlier assertions about the lane closings several times Monday night. “No knowledge, no authority, no planning– nothing to do with this before this decision was made to close these lanes by the Port Authority.”
He added, “While I’m disappointed in what happened here, I’m going to fix it.” He said his office has begun providing documents subpoenaed by the joint state legislative committee investigating the scandal and will do the same in compliance with the subpoenas from the U.S. Attorney. He also referred to the independent law firm he has brought in to conduct an internal investigation, saying: “I can’t wait for them to be finished so I can get the full story here.”
Asked to be specific about when he first became aware of the lane closings and the massive traffic disruptions they caused in the Fort Lee area last September, Christie referred to reports in the Wall Street Journal about the surprise and outrage expressed by Patrick Foye, the New York-appointed executive director of the Port Authority.
Foye, after an angry exchange of emails with his New Jersey counterpart, Bill Baroni, then the PA’s deputy director, ordered the Fort Lee approach lanes reopened and vowed “to get to the bottom” of the closings.
Christie said the stories about Foye’s complaints prompted him to ask his staff to look into the dispute and produced the explanation that the lane closings were part of a traffic study, an explanation rejected by Foye at the time and since undermined by the release of other documents.
Most damaging among those documents was an email exchange last Aug. 13, in which Christie’s deputy chief of staff, Bridget Anne Kelly, wrote David Wildstein, a Christie appointee at the Port Authority: “Time for some traffic problems in Fort Lee” and Wildstein replied, “Got it.” Other emails between the two suggested the lane closings were an act of political retribution against Fort Lee’s mayor for failing to join other Democratic mayors in endorsing Republican Christie for re-election as governor.
Those emails became public earlier this month, forcing a dramatic two-hour press conference Jan. 8, in which Christie announced he had just fired Kelly “because she lied” about having no knowledge of the lane closing plan.
Christie insisted Monday night he still did not know whether “political shenanigans” had inspired a traffic study or whether an actual traffic study became an opportunity for such “shenanigans.”
Wildstein, who resigned his Port Authority post in December, created a new ripple of trouble for the governor last Friday. A letter from his attorney, Alan L. Zegas, to the Port Authority was published online by The New York Times and quickly picked up by other media. In it, Zegas referred to “evidence” he said would contradict Christie’s claims that he had no knowledge of the lane closings until after they occurred.
Christie Monday night avoided any direct reference to Wildstein. His office had initially responded Friday by saying the letter did not contradict his assertion that he had no advance involvement or knowledge of the lane closings. A second response, emailed to Politico on Saturday, attacked Wildstein directly, saying: “Bottom line – David Wildstein will do and say anything to save David Wildstein.” The email also took aim at “David Wildstein’s past,” including newspaper accounts describing the governor’s former high school classmate and Port Authority appointee as “tumultuous” and even referring to an incident in which Wildstein “was publicly accused by his high school social studies teacher of deceptive behavior.”
But the governor did aim an aside at mounting media coverage and political attacks accompanying the ongoing Bridgegate investigations, saying that while he was cooperating fully with the probes, he refused to let Bridgegate “dominate” his attention.
“I can’t afford to allow this to dominate my time the way it dominates the time of some folks in the media and some partisans,” Christie said. He said he was limiting his own focus on Bridgegate to determining what needed to be corrected, adding, “All this other stuff is just a game of gotcha — when did I first learn about this or that.”
Of Kelly, who is reportedly resisting subpoenas from the joint legislative committee, citing her constitutional rights against self-incrimination, Christie Monday said, “I know everything I needed to know from an employment standpoint from Bridget Kelly when she didn’t tell me the truth and I fired her.”
Wildstein appeared before the State Assembly Transportation Committee initially investigating Bridgegate on Jan. 8 but declined to give testimony, citing his constitutional protections against self-incrimination. Zegas, who represented him at the hearing, has repeatedly said his client would only discuss the lane closing plan if given immunity from prosecution.
Also last Friday, another key Bridgegate figure, speaking through his attorney, said he would not comply with a legislative subpoena. Kevin Marino, the attorney for former Christie campaign manager Bill Stepien, sent the joint state Senate and Assembly Bridgegate committee a letter citing Stepien’s constitutional protections against self-incrimination and illegal search and seizure.
The committee co-chairs, State Sen. Loretta Weinberg (D-Teaneck) and Assemblyman John Wisniewski (D-Sayreville), later released a statement saying, “We just received Mr. Marino’s letter this afternoon. We are reviewing it and considering our legal options with respect to enforcing the subpoena.”
Wisniewski and Weinberg said they had also received a copy of the Wildstein letter from Zegas and were reviewing that as well.
Monday was the deadline for subpoenaed documents to be delivered to the joint legislative committee. Twenty subpoenas were issued by the panel, but several recipients asked for extensions and an undisclosed number of those requests have been granted, the co-chairs said. None of the documents received were made public Monday.
“The committee has begun receiving material responsive to its subpoenas, with more responses expected in the near future in a cooperative effort with subpoena recipients,” wrote Weinberg and Wisniewski. “Numerous extensions have been granted to subpoena recipients, as is typical in such situations. ... No documents will be released today. The committee will announce its next step as soon as that course is decided.”
Christie said Monday night that his office had not requested an extension and had begun providing documents to the committee.
Kevin McArdle also contributed to this report.
Monday, February 3, 2014 – Ask The Governor audio, Segments 1–5 (embedded audio player removed) ||||| Feds seek files from Christie's office; ex-aide Kelly won't turn over documents in response to subpoena
STAFF WRITERS
The Record
RECORD FILE PHOTO Former Christie deputy chief of staff Bridget Anne Kelly.
Federal prosecutors investigating the George Washington Bridge lane closures have demanded documents from Governor Christie’s office, he said Monday, a development that puts him at the opposite end from the kind of probe he once led as the state’s hard-charging U.S. attorney.
Christie acknowledged the subpoenas during a radio interview Monday evening, as news broke that his former deputy chief of staff, Bridget Anne Kelly, would not turn over documents in response to a subpoena issued by state lawmakers in a parallel investigation. An attorney for Kelly — who wrote the message “Time for some traffic problems in Fort Lee” — cited Kelly’s constitutional protection against self-incrimination. She joins Christie’s campaign manager as the second person to put up a roadblock to an ongoing legislative probe.
But the fact that federal prosecutors sent a subpoena to Christie’s office signaled that the more high-stakes federal investigation had taken a serious turn for the governor, who was considered a presidential contender only a few weeks ago.
Christie emphatically told listeners of his monthly radio show that he didn’t know about the lane closures beforehand and pledged to get to the bottom of them as he cooperates with subpoenas from both the legislative panel and the U.S. Attorney’s Office.
“Before these lanes were closed, I knew nothing about it,” he said on “Ask the Governor” on NJ 101.5 FM on Monday night. “I didn’t plan it. I didn’t authorize it. I didn’t approve it. I knew nothing about it.”
Christie spoke just two hours after the deadline for 18 individuals, his campaign and his office to respond to legislative subpoenas seeking emails, text messages and other documents related to the lane closures, which many Democrats believe were retribution against the Fort Lee mayor for not endorsing the Republican governor, who won reelection in a landslide last year.
Several individuals asked for extensions. But Michael Critchley Sr., an attorney for Kelly, notified the legislative panel Monday evening that she would not turn over documents. Kelly joins Christie’s former campaign manager, Bill Stepien, in invoking her constitutional right against self-incrimination. Stepien’s attorney said Friday that he would not turn over documents.
The information requested by the legislative panel, Critchley wrote Monday, “directly overlaps with a parallel federal grand jury investigation.” The letter also cites her right to privacy. In a brief phone interview, Critchley said his client had not received a subpoena from federal prosecutors.
Providing the committee with “unfettered access to, among other things, Ms. Kelly’s personal diaries, calendars and all of her electronic devices amounts to an inappropriate and unlimited invasion of Ms. Kelly’s personal privacy and would also potentially reveal highly personal confidential communications completely unrelated to the reassignment of access lanes to the George Washington Bridge,” Critchley wrote.
“I would hope they would share any information they have that would let me get to the bottom of it, but on the other hand, they have constitutional rights like everybody else and have the right to exercise them. There’s nothing I can do about that,” Christie said when asked on the radio program about her refusal to comply with the subpoena.
Christie, who said he fired Kelly because she lied to him, has also said he did not ask her why she apparently ordered the lanes closed.
The governor said he is cooperating with subpoenas from both the legislative panel and the U.S. Attorney’s Office. On the radio show, he said his office began turning over documents to the Joint Legislative Select Committee on Investigations on Monday and will do so on a rolling basis as they are located.
An attorney for his campaign said earlier in the day that it had received an extension while it seeks approval from the state Election Law Enforcement Commission to use campaign funds to pay for legal bills and to hire a document retention firm. The legislative panel also granted an extension to Christina Genovese Renna, who served as director of intergovernmental affairs under Kelly until she resigned Friday.
Christie also used the radio show to dispute a former political appointee’s assertion that he knew about the closures when they were happening, and he said he hired a high-powered law firm to carry out a swift investigation so he can get answers.
“I can’t wait for them to be finished so I can get the full story here,” he said.
Christie didn’t rule out that he might have heard about traffic but said he didn’t know there was a problem until Patrick Foye, executive director of the Port Authority, sent an internal email, which was leaked to the press, questioning the closures.
Though the governor spoke at length about the September traffic jam, most callers to the show were seeking information on other issues, and Christie worked to put the incident behind him, saying he met with Democratic Senate President Stephen Sweeney and Assembly Speaker Vincent Prieto for an hour and a half Monday to talk about their agenda for the year.
“They and I understand that our job is to run the state of New Jersey,” he said.
There was no mention on the show of criticism from Environment New Jersey that Christie’s administration pushed for a natural gas pipeline through the Pinelands, a protected area, because Genovese Renna’s husband works for the company. A spokesman for the governor called the idea “ludicrous,” and a company spokesman said Renna, president of South Jersey Industries, had nothing to do with the utility subsidiary responsible for the project.
In a joint statement Assemblyman John Wisniewski, D-Middlesex, and Senate Majority Leader Loretta Weinberg, D-Teaneck, who lead the legislative committee, said Monday that “numerous extensions have been granted to subpoena recipients.” Weinberg said she did not know who was given extensions, and a spokesman for Wisniewski declined to provide additional information.
“No documents will be released today,” the statement said. “The committee will announce its next step as soon as that course is decided.”
In an interview, Weinberg said the committee was discussing the decisions by Kelly and Stepien to invoke the Fifth.
“It’s frustrating when we’re trying to find the truth of the situation that started with the governor saying he was going to cooperate and urge others to do the same,” Weinberg said. “Obviously, we’ll have to keep plugging away.”
Four Republican committee members — Assemblywomen Holly Schepisi of River Vale and Amy Handlin of Monmouth County, Assemblyman Michael Patrick Carroll of Morris County and Sen. Kevin O’Toole of Cedar Grove — sent a letter to the committee leaders Monday seeking equal access to documents and information.
Schepisi said Monday that she learned that the committee’s special counsel, Reid Schar, had met with the U.S. Attorney’s Office after reading about the meeting in The Record over the weekend. Wisniewski and Weinberg released a statement from Schar about the Friday meeting to the media on Saturday.
Schepisi said all of the members of the committee — and not just the leaders — should be receiving regular updates for the sake of “transparency, openness, fairness and ensuring that our committee is not abusing power as it’s investigating abuse of power.”
Weinberg said she thought Schar’s statement went to all members of the committee and that the Republicans would get equal access to the documents once they come in.
The lane closures have shaken up the governor’s inner circle. Christie fired Kelly and cut ties with Stepien after he called the Fort Lee mayor an “idiot” in an email. Wildstein and Bill Baroni, whom Christie named deputy executive director of the Port Authority, have both resigned. Wildstein and Baroni were also subpoenaed.
In a letter Friday, Wildstein’s attorney said “evidence exists” that Christie knew about the closures when they happened. Christie denied the allegation Monday after his staff sent an email attacking Wildstein’s credibility Saturday.
The committee is also seeking documents from Port Authority Chairman David Samson and Christie’s incoming chief of staff, Regina Egea, who oversaw independent agencies including the Port Authority.
When asked if Samson sought an extension, a spokeswoman for his attorneys referred questions to the legislative committee. Genovese Renna’s attorney, Henry Klingeman, said she was granted an extension and plans to comply with the subpoenas.
Email: hayes@northjersey.com or boburg@northjersey.com ||||| Story highlights Fort Lee mayor: "I take him at his word but it would appear ... a lot of folks don't"
Christie sticks to story that he first heard about lane closures from media reports
Governor again says he was told initially the traffic mess was part of a traffic study
CNN poll shows Christie support in potential presidential race slides
Embattled New Jersey Gov. Chris Christie forcefully stood by his account that he only found out about notorious traffic lane closures at the George Washington Bridge last year after they appeared in the media and that he knew absolutely nothing about a suggested political motive behind them.
"The answer is still the same," Christie said in a radio interview on Monday night, adding later that he can't wait to get the "full story" behind the scandal that has rocked his administration and, for now, has clouded any potential presidential run in 2016.
"The fact of the matter is I've been very clear about this. Before these lanes were closed, I knew nothing about them. I didn't plan it. I didn't authorize it. I didn't approve it. I knew nothing about it," he said in a studio appearance for a live call-in show hosted by New Jersey 101.5.
The fresh response came amid a new allegation from a former top adviser caught up in the scandal, David Wildstein, that "evidence exists" that Christie knew about the closures and resulting traffic gridlock over five days in Fort Lee in real time, which would, if true, contradict his account of events.
On CNN's "Piers Morgan Live," Fort Lee Mayor Mark Sokolich said he believed Christie but thought the governor should sign a sworn statement backing up his claims.
"I take him at his word but it would appear from the polls that a lot of folks don't," Sokolich said.
Why is this important?
Christie's recollection ultimately may be critical in answering why the bridge lanes overseen by the Port Authority of New York and New Jersey were closed in the first place and who authorized it for sure -- and whether any laws were broken.
E-mails and political figures in New Jersey have suggested the gridlock was a bit of orchestrated political payback for the Fort Lee mayor, who did not endorse Christie for re-election last November.
A state legislative committee is investigating as is the Justice Department, which would be interested if there was any abuse of power. Both have subpoenaed Christie's office for documents, and he said his office is complying. Christie's office also has hired a private law firm to investigate.
And why is Wildstein important?
It has been suggested in the e-mails released by state legislative investigators in New Jersey that Wildstein, a top Christie appointee at the Port Authority, carried out the closures. He also has been subpoenaed, refused to answer questions from legislative investigators, and he's got a lawyer.
For his part, Christie has fired a top aide linked to what has metastasized into a political scandal coming on the heels of a successful re-election and prior to a possible White House bid. Others have left their jobs as the scandal unfolded, including Wildstein.
Former deputy chief of staff Bridget Ann Kelly, whose e-mail to Wildstein -- "Time for some traffic problems in Fort Lee" -- weeks before the gridlock occurred led to her firing by Christie in January, refused on Monday on constitutional grounds to comply with a state legislative subpoena to turn over documents, a source with knowledge of the matter told CNN's Chris Frates.
State lawmakers leading that investigation said they are reviewing the matter and "considering our legal options with respect to enforcing the subpoena."
Christie, in the radio interview, repeated what he said at a January news conference -- challenged by Wildstein in a letter written by his lawyer to the Port Authority on Friday -- about the timeline around when he became aware of the traffic mess. He also denied having any knowledge of a suggested political motive.
Christie blasts Wildstein
"The first time that this really came into my consciousness, as an issue" was when an e-mail from Port Authority Executive Director Pat Foye "was leaked to the media and reported on."
Foye was the person, according to e-mails, who started asking questions about the lane closures and ordered them reopened.
That's when Christie said he asked his chief of staff and his chief counsel to "look into this and see what's going on here."
He said any reference to the bridge situation prior to this wouldn't have meant anything to him because he wasn't clued into the fact that there was a problem.
Afterward, Christie said again that he was told the "Port Authority was engaged in a traffic study," which has now been called into serious question.
He also stressed that "nobody has said I knew anything about this before it happened, and I think that's the most important question."
A question of evidence
Christie's appearance follows steps by his office over the weekend to strike in an unusually personal way against Wildstein, a one-time high school classmate of the governor in Livingston, N.J.
"Bottom line - David Wildstein will do and say anything to save David Wildstein," a statement released by the governor's office said.
On Monday, Christie's office also planned to send to friends and allies a list of tweets and stories aiming to put the focus on The New York Times' handling of the disclosure by Wildstein, according to CNN's Jake Tapper.
The Times broke the story, saying Wildstein "had the evidence to prove" Christie knew about the lane closures. The newspaper quickly revised its lead to simply reflect what the letter written by Wildstein's attorney actually said: that "evidence exists," not that Wildstein was in possession of it.
The letter never disclosed the evidence.
The letter also didn't suggest that Christie had knowledge of what his people might have been up to -- political or otherwise.
The scandal and another allegation of strong-arm political tactics by Christie administration officials over Superstorm Sandy aid have generated a wave of negative political fallout for a governor overwhelmingly re-elected in November and considered a top-tier Republican presidential hopeful in 2016.
Christie's swagger and straight-shooting style had him riding high in the polls as late as December.
He topped other potential GOP 2016 White House hopefuls in various surveys. But those numbers have faded as the scandal has intensified, according to a new CNN/ORC International survey.
Christie trails Hillary Clinton by 16 percentage points in a hypothetical presidential match-up, a turnaround from December when he was up by 2 points.
Christie to appear at CPAC | – A former Chris Christie aide at the center of the Fort Lee traffic scandal has refused to surrender subpoenaed documents; Bridget Anne Kelly is invoking the Fifth Amendment, protecting her from self-incrimination. Kelly, who penned the infamous email calling for "traffic problems in Fort Lee," is also pointing to the Fourth Amendment's privacy protections. Allowing an investigative panel to view her private documents could "potentially reveal highly personal confidential communications" not connected to the scandal, says a letter from her lawyer. The letter also notes that the subpoena "directly overlaps with a parallel federal grand jury investigation," the New York Times reports. The heads of the panel are "considering our legal options with respect to enforcing the subpoena," they say in a statement. Meanwhile, federal prosecutors have subpoenaed Chris Christie's own office, the Record reports. In a radio interview last night, Christie discussed the scandal with New Jersey 101.5. "I didn’t plan it. I didn’t authorize it. I didn’t approve it. I knew nothing about it," he said, per the Record. He said his office was providing subpoenaed documents and he was working to "fix" what had gone wrong. "All this other stuff is just a game of gotcha—when did I first learn about this or that," he said. Fort Lee Mayor Mark Sokolich, for his part, tells CNN he takes Christie "at his word, but it would appear from the polls that a lot of folks don't." |
as the number of total hip arthroplasties ( thas ) performed increases , so does the number of required revisions .
impaction bone grafting with wagner sl revision stem is a good option for managing bone deficiencies arising from aseptic osteolysis .
we studied the results of cementless diaphyseal fixation in femoral revision after total hip arthroplasty and whether there was spontaneous regeneration of bone stock in the proximal femur after the use of wagner sl revision stem ( zimmer , warsaw , in , usa ) with impaction bone grafting .
we performed 53 hip revisions using impaction bone grafting and wagner sl revision stems in 48 patients ( 5 cases were bilateral ) for a variety of indications ranging from aseptic osteolysis to periprosthetic fractures .
the mean harris hip score was 42 before surgery and improved to 86 by the final followup evaluation at a mean point of 5.5 years . of the 44 patients
, 87% ( n=39 ) had excellent results and 10% ( n=5 ) had good results .
short term results for revision tha with impaction bone grafting and wagner sl revision stems are encouraging .
however , it is necessary to obtain long term results through periodic followup evaluation , as rate of complications may increase in future .
severe proximal femoral bone loss is a formidable problem in reconstructive hip surgery.1 the results of surgery using a cemented revision femoral component are poor compared with those using a primary component.23 use of cemented components in revision surgery for femoral loosening without biologic reconstruction of deficient bone stock carries a high risk of loosening.245 there are various techniques for the biological reconstruction of the proximal femur . because the amount of autogenous bone graft is limited , allograft is widely used .
when the proximal femoral shaft is sufficiently stable , the exeter technique ( impaction grafting ) can be employed.6 other authors prefer massive allografts combined with a long stem prosthesis.7 one of the methods to address this issue is by using diaphyseal fitting cementless stem which does not rely on proximal femoral bone stock for primary fixation.8 also , with the possibility of second or third revision in future , restoration of bone stock is thought to be desirable . in 1987 , wagner presented a technique in which a cementless long stem prosthesis was fixed in the diaphysis and he reported excellent spontaneous osseous regeneration.8 we studied the results of cementless diaphyseal fixation stem in femoral revision after total hip arthroplasty using a wagner sl revision stem ( zimmer , warsaw , in , usa ) with impaction bone grafting .
we retrospectively reviewed 53 revision total hip arthroplasties in 48 patients performed at our institution using the wagner sl revision stem with impaction bone grafting between july 1999 and december 2008 .
history of hip pain at rest and/or night pain or painful range of motion was noted .
blood erythrocyte sedimentation rate ( esr ) and c reactive protein level ( crp ) were assessed in all the cases .
preoperative hip aspiration was reserved for the cases which had high levels of esr or crp or if prior hip arthroplasty failed within first 5 years of index hip arthroplasty .
radiologic analysis was done to determine the areas of osteolysis and to assess the amount of bone loss on the femoral and the acetabular side .
femoral bone defects were classified according to the system of paprosky et al.9 preoperative templating of femur was done in all cases to get an idea of minimum length and diameter of the stem required for optimal bone fixation in the diaphysis . in all cases , vertical offset was measured and restored peroperatively to address the issue of limb length discrepancy .
we used impaction bone grafting and wagner sl revision stems for femoral revision and bone grafting with uncemented cups for acetabular revision in all cases .
the choice of stem was diaphyseal fitting wagner sl revision stem which is made of a titanium aluminum niobium alloy with a rough - blasted surface .
the shaft of the prosthesis has a conus angle of 2° and eight longitudinal ridges arranged in a circle around the stem.3 the stem is available in lengths of 190 - 385 mm .
cementless anchoring of the stem is achieved after implantation in a conically reamed femoral shaft .
if there are larger defects in the proximal part of the femur , stable stem fixation can be achieved only distally in the diaphyseal part of the femur .
the head is available in diameters of 22 , 28 and 32 mm . in all the cases
, we used a straight stem ( as curved stem was not available during the study period ) .
two different designs of wagner stem were used in the study period . in the initial 11 cases , we used a standard design wagner prosthesis with 34 mm horizontal offset and in the remaining 41 cases , an increased offset design with 44 mm horizontal offset was used .
this difference was due to the availability of increased offset stem during the latter part of the study period .
advantage of using a diaphyseal fitting stem is that it by passes the proximal femoral osteolytic area , completely relying for fixation on diaphysis which is not affected by aseptic osteolysis .
we used a mixture of autograft and allograft in all cases , as autograft alone is often not sufficient to fill bone defects .
autografts were harvested from the posterior iliac crest in the lateral position before the beginning of actual revision surgery .
a commercially available bone mill was used to harvest the proper graft size from allografts .
the source of allografts was fresh - frozen femoral heads preserved in the bone bank after retrieval from hemireplacement arthroplasty in patients with fractured femoral necks .
a dedicated instrument set with the option of sequentially increasing diameter impaction broach was used for impaction bone grafting .
this was achieved by creating an open door osteotomy with the base on the anterolateral aspect of the femoral shaft , keeping the width of the osteotomy to less than one third of the femoral diameter .
the location of the osteotomy corresponded to the middle and lower third of the stem for better access to the distal cement plug and easy removal of the cement mantle around the stem .
the open door osteotomy was fixed with tensioned cerclage wires of minimum 20 gauge stainless steel wire loop in all the cases .
the stem was inserted into the medullary canal and was driven into position with a few strikes of a mallet .
the prosthesis was advanced until the required stability was achieved and the prosthesis did not move any further . for the last 2 cm , the prosthesis drops only 1 mm with each forceful blow from a 2-lb mallet . a clue
that the prosthesis has reached its final seating is the change in the sound of the mallet blow .
the conically reamed osseous bed in the medullary cavity should ideally be 100 mm long , with a minimum length of 70 mm.10 the diameter of the stems ranged from 14 to 19 mm ( mean , 16 mm ) [ figure 1 ] .
for many of the patients , we used stems that were 265 mm long and had a diameter of 16 mm .
of 53 femoral stems , 32 were 265 mm long , 18 were 225 mm and 3 were 305 mm . in 10 ,
the diameter was 14 mm ; in another 10 , 15 mm ; in 28 , 16 mm ; in 4 , 18 mm ; and in 1 , 19 mm .
it is important for the tip of the stem to extend into the intact medullary canal at least 7 cm distal to the end of the previous prosthetic bed . in choosing the diameter ,
it is important to remember that the reaming removes a thin layer of bone and the sharp longitudinal ribs cut slightly into the bone during insertion .
therefore , the outline of the stem on the template must overlap the inner outline of the cortex in the region of middle third of the stem by 1 mm on each side .
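as a rough illustration of the two templating rules just described ( about 1 mm of cortical overlap per side over the middle third of the stem , and a tip extending at least 7 cm into intact canal ) , a minimal python sketch is given below ; the function name , inputs and return values are illustrative conveniences and not part of the operative protocol .

def check_stem_template(stem_diameter_mm, inner_cortex_diameter_mm, tip_extension_cm):
    """rough templating checks described in the text (illustrative only).

    stem_diameter_mm         : templated stem outline diameter at the middle third
    inner_cortex_diameter_mm : inner cortical diameter at the same level
    tip_extension_cm         : distance the stem tip extends into intact canal
                               distal to the end of the previous prosthetic bed
    """
    overlap_each_side_mm = (stem_diameter_mm - inner_cortex_diameter_mm) / 2.0
    adequate_overlap = overlap_each_side_mm >= 1.0    # ~1 mm of overlap per side
    adequate_tip_fixation = tip_extension_cm >= 7.0   # >= 7 cm of intact diaphysis
    return adequate_overlap, adequate_tip_fixation

# example : a 16 mm stem templated against a 14 mm inner cortical diameter ,
# with 8 cm of intact canal beyond the old prosthetic bed
print(check_stem_template(16.0, 14.0, 8.0))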
figure 1 : anteroposterior x - ray left hip ( a ) preoperative , showing failed tha ( b ) immediate postoperative , showing long wagner stem in situ ( c , d ) 5 years followup , showing well incorporated impaction graft and long wagner stem in situ
vertical offset was restored by deciding the final position of the stem in the last 1 cm in such a way that the distance between the proximal tip of prosthesis and superior portion of lesser trochanter equaled the preoperatively measured vertical offset distance .
all the patients were given prophylactic low molecular weight heparin to prevent deep vein thrombosis .
the indomethacin ( 75 mg ) was given for 6 weeks to all patients to prevent heterotrophic ossification .
as soon as the effects of anesthesia wore off after surgery , usually by evening , we had the patients begin static quadriceps and abductor - strengthening exercises and foot - pump exercises .
full weight bearing was allowed from the third postoperative day at the earliest to as late as 6 weeks after surgery , depending on the type of bone defect and the stability of reconstruction . in type 2
bone defects , immediate weight bearing was permitted ; in type 3 bone defects or in cases involving fracture of the greater trochanter , full weight bearing was delayed up to a maximum of 6 weeks .
any patient having persistent discharge from the wound after the first week of surgery was considered as early postoperative infection and early surgical intervention was done in all such cases .
for the first year after revision surgery , patients were examined monthly ; after that , they were examined every 6 months or till the time when radiographic and clinical findings show incorporation of impaction graft .
they were monitored for improvement in harris hip score , as well as for any complications .
of the 48 patients , 4 died from causes unrelated to surgery ; they had functioning hip joints , which were considered
the remaining 44 patients ( 39 had unilateral revision surgery and 5 had bilateral ) were available for complete clinicoradiologic analysis .
femoral component subsidence and migration were analyzed by measuring the vertical subsidence of component ( from tip of the greater trochanter to shoulder of the prosthesis ) according to the method of callaghan et al.11 allografts were assessed for incorporation into the host bone as evidenced by trabecular bridging of the host graft interface .
a clear reduction of density or breakdown of the transplanted bone was defined as bone resorption .
since it is impossible to see bone growing into opaque metal surfaces on radiographs , the process was identified by the gradual changes in the appearance of periprosthetic bone ( bone remodeling ) .
signs indicating successful bone ingrowth included narrowing of the intramedullary canal around the diaphyseal portion of the implant and atrophy of the bone around the proximal part of the stem .
signs of failed bone ingrowth included widening of the intramedullary space , formation of a demarcation line within the space and hypertrophy of the proximal bone , particularly around the lesser trochanter .
any signs of movement of the stem within the canal also indicated that biologic fixation has not occurred .
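the subsidence bookkeeping used above reduces to simple arithmetic on two radiographic measurements ; a minimal python sketch is shown below , with the 5 mm and 10 mm cut - offs taken from the groupings reported in the results section ( the function and example values are illustrative and not part of the study protocol ) .

def classify_subsidence(postop_distance_mm, followup_distance_mm):
    """vertical stem subsidence, measured from the tip of the greater trochanter
    to the shoulder of the prosthesis and compared between the immediate
    postoperative and follow-up radiographs (method of callaghan et al.)."""
    subsidence_mm = followup_distance_mm - postop_distance_mm
    if subsidence_mm < 5.0:
        group = "< 5 mm"
    elif subsidence_mm <= 10.0:
        group = "5 - 10 mm"
    else:
        group = "> 10 mm"
    return subsidence_mm, group

# example : the measured distance increases from 22 mm to 26 mm at followup
print(classify_subsidence(22.0, 26.0))  # -> (4.0, '< 5 mm')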
the average age of the patients at the time of surgery was 59 years ( range 44 - 68 years ) .
out of 53 cases , 43 had unilateral revision surgery and 5 had bilateral revision surgery .
30 patients underwent revision surgery because of painful aseptic loosening , 11 for periprosthetic fractures with aseptic loosening , 3 for a broken femoral stem , 3 for septic loosening and 1 for a traumatic comminuted fracture of the proximal one third of the femoral shaft with hip dislocation 6 months after the primary uncemented total hip arthroplasty ( tha ) .
indications for index hip replacement surgery were as follows : 39 patients underwent index tha for avascular necrosis , 7 for posttraumatic arthritis and 2 for septic arthritis .
the average time span from index to revision surgery was 7.5 years ( range , 6 months-18 years ) .
the revised stems were cemented charnley ( n=32 ) , muller ( n=13 ) , isoelastic total hip arthroplasty ( n=3 ) , cemented long stems ( n=2 ) , cemented bipolar hip arthroplasty ( n=2 ) and a cementless spotorno stem ( n=1 ) .
bone defects were classified as per paprosky et al.,9 1 patients had type 2 defects and 37 patients had type 3 defects ( 32 with type 3a and 5 with type 3b ) .
the femoral component was revised in all cases , whereas the acetabulum was revised in 39 .
regarding the intraoperative complications , 12 patients had an inadvertent fracture of the greater trochanter during surgery while undergoing dislocation or cement removal and were fixed with cerclage wiring .
contributing factors to fracture were relatively older age of the patients and profound stress shielding and osteopenia .
seven patients developed early postoperative infection ; all were managed with early operative intervention ; six of them were cured and one needed implant removal .
one patient fell 7 days after surgery , during rehabilitation , which caused breakage of the wire used for fixation of the greater trochanter and resulted in stem rotation and was managed by surgical reintervention . 23 ( 48.91% ) of our patients had subsidence of < 5 mm and 2 patients ( 4.16% ) had subsidence of > 10 mm [ figure 2 ] .
the osteointegration of grafts into the host bone was noted within 9 - 18 months ( average 15 months ) of surgery .
we did not have any case of periprosthetic fracture , sciatic or femoral nerve palsy , or heterotrophic ossification .
there were no instances of graft rejection , progressive osteolysis , or rerevision of wagner stems .
the mean preoperative harris hip score of 42 points ( range 22 - 52 points ) had improved to 86 points ( range 74 - 94 points ) by the final followup evaluation [ figure 3 ] .
results were excellent in 87% of our cases ( 37 patients or 42 hips ) and good in 10% ( 5 patients or 5 hips ) .
figure 2 : anteroposterior x - ray right hip joint ( a ) preoperative showing subsidence ; ( b ) immediate postoperative showing long wagner stem in situ ; ( c ) 8 months followup , showing subsidence
figure 3 : x - ray left hip joint anteroposterior view ( a ) preoperative , showing implant failure ( b ) 6 year followup , showing incorporation of impaction graft and implant in situ
the idea of impaction bone grafting was originally conceived in 1975 by hastings and parker to overcome the bone loss seen in patients with protrusio acetabuli secondary to rheumatoid arthritis.12 three years later , mccollum and nunley showed the potential of morselized allograft to treat bone stock deficiency in protrusio acetabula.13 in 1983 , roffman et al .
reported the survival of bone chips under a layer of bone cement in an animal study.14 the graft appeared viable and new bone formed along the cement interface .
mendes et al . further developed the technique for use in primary hip arthroplasty with cement by reinforcing protrusio acetabuli with bone chips and mesh.15 they monitored eight patients for up to 6 years .
there were no revisions and histologic examinations confirmed bone graft incorporation . in 1984 , slooff et al .
modified the technique and described it as impaction bone grafting.16 the defect was contained by mesh and then bone graft was tightly packed in before an acetabular cup was inserted into the pressurized cement .
impaction bone grafting of the proximal part of the femur was initially developed by ling et al . in 1991 and reported by gie et al . in 1993.6
the efficacy of these techniques has been extensively supported by results from animal studies as well as histologic,1718 radiographic and biomechanical studies.1920 we used a modified slooff technique for impaction bone grafting , employing a mixture of autograft and allograft in patients selected for revision tha with a wagner sl revision stem .
the average age of patients at the time of surgery in our study was 59 years ( range 44 - 68 years ) .
males outnumbered the females in revision hip surgeries , as the most common indication for primary total hip arthroplasty is avascular necrosis of femur which is more common in males .
thirty two of 53 revised cases had cemented charnley stem , as this is the most commonly used stem at many centers in india .
aseptic osteolysis with or without periprosthetic fracture was the number one cause of revision surgery .
out of 53 cases , 12 had periprosthetic fracture , one within 6 months of surgery .
the patient who needed revision surgery at 6 months was the one with a traumatic comminuted fracture of the proximal one third of the femoral shaft and hip dislocation 6 months after undergoing primary uncemented tha with a cls spotorno stem .
the reason for the high number of periprosthetic fractures on presentation was that a number of patients ( 22.5% ; 12 out of 53 ) missed followup examinations after the index tha with their primary surgeons and sought treatment only after fractures occurred . in our series ,
the reason for choosing this revision technique over others is that it conserves bone and is a more biological surgery , allowing restoration of bone defects in view of the possibility that second or third revisions might be required .
this technique is universal , meaning that it can be used with any type of bone defect often encountered in revision tha .
the use of impaction bone grafting in addition to the use of a wagner sl revision stem allows consistent incorporation of the bone graft in defects [ figure 1d ] .
the high number of immediate postoperative infections may be due to our very low tolerance in terms of wound discharge of any kind in the early postoperative period , long duration of surgery , previous hip surgery , increased blood loss and the use of allograft as a source of bone graft .
patients were declared cured only after minimum 6 months of close clinical and serological monitoring at regular intervals , with all parameters consistently being negative for infection .
this approach of ours resolved the issue in six patients ; in the other patient , the implant was removed because of recurrent infection .
this was a case of septic osteolysis in which a two stage revision was done . in the rest 2 cases ,
we had 3 patients out of 53 cases who had dislocation in the early postoperative period i.e. within 6 months .
in one patient , the dislocation was complicated by the dislodgement of the liner from the uncemented cup and was treated by changing the liner . the second patient had dislocation on the fourth day after surgery and was treated with closed reduction and bed rest for 4 weeks ; he had no further episodes of dislocation during his 3-year followup period .
the other patient had three dislocations in the first year after surgery and needed a change in the inclination of his acetabular cup .
we treated the patient by changing the component 's orientation and refixing the greater trochanter using the same wagner stem ; the patient was prescribed bed rest .
the patient then developed a superficial infection , which we treated with surgical debridement and 6 weeks of parenteral antibiotics .
femoral component subsidence and migration were analyzed by measuring the vertical subsidence of component ( from tip of the greater trochanter to shoulder of the prosthesis ) , according to the method of callaghan et al.11 the reason for the high incidence of subsidence ( within 5 mm in 23 patients ) may be our aggressive rehabilitation protocol which allows the patient full weight bearing from practically the third postoperative day .
high failure rates after cemented revision tha have led to the promotion of uncemented long stem femoral prostheses.21 uncemented femoral components have several advantages : the difficulties and complications associated with cement removal are eliminated , bone loss may be reduced and implant removal is relatively easier .
a review of the literature has shown lower rates of repeat revision after revision arthroplasties that use an uncemented femoral component.22 most such repeat revisions have been performed within the first few postoperative months and have been necessary because a suboptimal stem size was used .
we believe that our results are comparable with those of arthroplasties employing an extensively porous - coated chromium cobalt stem.222324 krishnamurthy et al.,23 in a series involving 297 extensively coated chromium
cobalt stems , noted a mechanical failure rate of only 2.4% at a mean followup point of 8.3 years .
new bone formation has been observed to occur regularly after femoral revision with the wagner sl revision stem.8252627 however , it is necessary to be aware of the limitations of the qualitative assessment of the bone formation on plain radiographs . in our experience , mechanical stability and careful removal of cement , scar and granulation tissue
are the essential preconditions for spontaneous restoration of the bone stock of the proximal femur . in difficult cases , a transfemoral approach is helpful ; but when this approach is used , the blood supply of the osseous lid must be preserved and detachment of the muscles must be avoided .
femoral bone restoration associated with the use of wagner sl revision stem may be due to the proximal transmission of force because of the conical shape of the prosthesis , the higher elasticity of the titanium alloy and the good histocompatibility of the rough - blasted surface.10 in our patients , at a mean followup point of 5.5 years after revision tha using impaction bone grafting and wagner sl revision stems , we observed a 97.9% survival rate for the stems , with rerevision of stems or stem removal for any cause counted as the endpoint or failure .
we conclude that short term results for revision tha with impaction bone grafting and wagner sl revision stems are encouraging
. however , it is necessary to obtain long term results through periodic followup evaluation because the rate of complications , such as femoral osteolysis , aseptic loosening , periprosthetic fracture and late infection , may increase with time . | background : as the number of total hip arthroplasties ( thas ) performed increases , so do the number of required revisions .
impaction bone grafting with wagner sl revision stem is a good option for managing bone deficiencies arising from aseptic osteolysis .
we studied the results of cementless diaphyseal fixation in femoral revision after total hip arthroplasty and whether there was spontaneous regeneration of bone stock in the proximal femur after the use of wagner sl revision stem ( zimmer , warsaw , in , usa ) with impaction bone grafting . materials and methods : we performed 53 hip revisions using impaction bone grafting and wagner sl revision stems in 48 patients ( 5 cases were bilateral ) for a variety of indications ranging from aseptic osteolysis to periprosthetic fractures .
the average age was 59 years ( range 44 - 68 years ) .
there were 42 male and 6 female patients .
four patients died after surgery for reasons unrelated to surgery .
44 patients were available for complete analysis . results : the mean harris hip score was 42 before surgery and improved to 86 by the final followup evaluation at a mean point of 5.5 years .
of the 44 patients , 87% ( n=39 ) had excellent results and 10% ( n=5 ) had good results .
the stem survival rate was 98% ( n=43 ) . conclusion : short term results for revision tha with impaction bone grafting and wagner sl revision stems are encouraging
. however , it is necessary to obtain long term results through periodic followup evaluation , as rate of complications may increase in future . |
recently , simulations of compressing turbulent plasma demonstrated a sudden dissipation mechanism , which may enable a new paradigm for fast ignition inertial fusion @xcite .
a plasma with initial ( turbulent ) flow is compressed on a timescale that is much faster than the dissipation time of the flow .
this amplifies the turbulent kinetic energy ( tke ) in the flow , for an ideal gas with subsonic flows . in a very rapid three dimensional adiabatic compression the energy in the flow scales at the same rate as the temperature . as the temperature increases
, the plasma viscosity , which starts small , grows , because it scales as @xmath0 .
the viscosity first dissipates the smaller scales in the flow , which do not contain much energy .
eventually , as the compression continues , the energy - containing ( largest ) scales become viscous , and at this time all the tke very suddenly dissipates into temperature . by initially putting most of the plasma energy in tke , it may be possible to keep the plasma comparatively cool up until the sudden dissipation event , at which point it would ignite fusion or produce a burst of x - rays @xcite . however , in addition to the temperature , the plasma charge state , @xmath1 , factors strongly into the viscosity , @xmath2 .
laser and magnetically driven fusion experiments typically compress deuterium and tritium , with @xmath3 , so that , ignoring contaminants from the shell , the charge state is constant during the compression .
in contrast , compression experiments designed to produce x - rays use a variety of higher @xmath1 materials , which increase in ionization state during the compression .
this increase in ionization state has the effect of slowing the viscosity growth .
consider , for example , a neon gas - puff z - pinch @xcite that starts with @xmath4 ev and @xmath5 , and finishes with @xmath6 ev and @xmath7 .
the temperature increase causes a growth in the viscosity by a factor of @xmath8 , while the ( mean ) ionization state growth reduces the viscosity by a factor of @xmath9 , drastically cutting the overall viscosity increase . in the present work , as in ref .
@xcite , we consider a plasma temperature that increases due to the 3d adiabatic compression of an ideal gas in a box of side length @xmath10 , going as @xmath11 .
the ( mean ) ionization state , @xmath1 , is treated as having some dependence on @xmath10 ( i.e. , the amount of compression ) as well .
this dependence is treated as fittable with some power , @xmath12 .
then , defining @xmath13 , the viscosity can be written @xmath14 in this model , regarding the ionization state as a function of l is equivalent to regarding it as a function of t ( @xmath15 ) , because @xmath16 .
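a hedged reconstruction of how these relations combine , written in symbols introduced here because the original ones are not reproduced above : if the temperature scales adiabatically ( adiabatic index 5/3 ) as t \propto l^{-2} , the mean charge is fit as z \propto l^{-n} with n \ge 0 , and the viscosity follows the t^{5/2}/z^4 scaling , then

\[
\mu \;\propto\; \frac{T^{5/2}}{Z^{4}} \;\propto\; L^{-(5-4n)} \;\propto\; T^{\,5/2-2n} ,
\]

so a single effective temperature exponent ( call it \beta = 5/2 - 2n here ) summarizes the net viscosity dependence on compression : n = 0 recovers the ionization - free value 5/2 , stronger ionization during compression lowers the exponent , and a large enough fit exponent can even make it negative , consistent with the range discussed below .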
ionization processes in z - pinch @xcite and laser driven plasmas are not simply temperature dependent , depending on density and more complex processes ( e.g. shock dynamics ) . however ,
if the ( mean ) ionization state for a given experiment can be reasonably fit to @xmath10 as described , then the net effect in the present model is that the overall temperature dependence of the viscosity can be treated as some power other than 5/2 .
we expect @xmath17 , reflecting the assumption that the charge state increases under increasing compression ( or temperature ) .
for a rough estimate of a possible value for @xmath18 and therefore @xmath19 , consider that the first 26 ionization states of krypton ( covering 13 ev - 1200 ev ) can be fit with @xmath20 .
this corresponds to @xmath21 , and @xmath22 . since the ionization state can be higher at a given temperature than one would predict purely based on comparing the temperature to the ionization energies , one expects based on this example that a wide range of @xmath19 is possible in experiments , possibly including negative values .
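the following python sketch illustrates the bookkeeping behind such estimates ; the scaling is the same hedged t^{5/2}/z^4 form used above , and the numbers in the example are placeholders rather than values taken from the text .

```python
# hedged illustration: net viscosity amplification during a compression,
# assuming an unmagnetized-plasma scaling mu ~ T**2.5 / Z**4
# (coulomb-logarithm variation neglected). all inputs are placeholders.
import math

def viscosity_ratio(T0, T1, Z0, Z1):
    """Return mu1/mu0 for initial (T0, Z0) and final (T1, Z1)."""
    heating_factor = (T1 / T0) ** 2.5      # growth from the temperature rise
    ionization_factor = (Z0 / Z1) ** 4     # reduction from the rising charge state
    return heating_factor * ionization_factor

def effective_exponent(T0, T1, Z0, Z1):
    """Effective power beta such that mu1/mu0 = (T1/T0)**beta."""
    return math.log(viscosity_ratio(T0, T1, Z0, Z1)) / math.log(T1 / T0)

if __name__ == "__main__":
    # placeholder numbers only -- substitute experiment-specific values
    T0, T1 = 10.0, 1000.0   # temperatures in eV
    Z0, Z1 = 3.0, 9.0       # mean charge states
    print("viscosity amplification:", viscosity_ratio(T0, T1, Z0, Z1))
    print("effective temperature exponent:", effective_exponent(T0, T1, Z0, Z1))
```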
note that , if the adiabatic index of the compression is smaller than the value of @xmath23 assumed here , this also weakens the scaling of the viscosity with compression , effectively lowering @xmath19 ( @xmath19 is defined so that @xmath24 ) .
we consider initially turbulent plasma undergoing rapid , constant velocity , 3d isotropic compression and described by the same model as in davidovits and fisch @xcite , but with general @xmath19 rather than @xmath25 .
this model is described briefly in sec . [ sec : model and energy ] , and a derivation is given in the appendix sec . [ sec : derivation ] .
we show that there will be an eventual sudden dissipation when @xmath26 . for identical initial conditions , starting viscosity , and compression velocity , lower @xmath19 cases show larger tke growth and later sudden dissipation ( when @xmath19 is still @xmath27 ) .
additionally , lower @xmath19 cases can show tke growth under compression rates that would lead to the tke damping in higher @xmath19 cases .
for @xmath28 , the tke reaches a statistical steady state under constant velocity compression , for any compression rate above a threshold which we determine .
when @xmath29 , it seems there is no sudden dissipation , with the tke increasing indefinitely instead .
there are a number of implications of these results .
the plasma in magnetically driven @xcite or laser driven @xcite compressions can be turbulent .
there can be substantial reduction in viscosity growth due to increasing ionization state for a gas - puff z - pinch .
to the extent the turbulence generation mechanism(s ) of a given compression approach is insensitive to @xmath1 , the present results show that , for a fixed rapid compression rate , a larger increase in @xmath1 ( weaker viscosity growth ) is expected to correspond to larger tke growth .
furthermore , increases in @xmath1 can make the difference between growing or decaying tke .
note that while these gas - puff z - pinches appear to have substantial non - radial tke even at stagnation @xcite , turbulence in the hot spot of ignition shots at the national ignition facility ( nif ) is expected to be dissipated by high viscosity @xcite .
the much higher temperatures in these hot spots create this high viscosity , but they are assisted by fuel of @xmath3 , to the extent it is not contaminated by mix .
our results demonstrate that even moderate reductions in the effective power @xmath19 from @xmath30 can cause large differences in tke growth , and can determine whether for a given amount of compression ( @xmath31 ) one can reach the dissipation regime .
the analysis in this work is carried out for 3d compressions , and as such is not strictly applicable to 2d compressions such as those in z - pinches . in a 2d compression , the relative scaling of the tke with compression , compared to the temperature is different ( if the temperature growth is still assumed to be adiabatic and isotropic , the latter now being a larger assumption , since the plasma is driven anisotropically ) .
nevertheless , the intuition developed here may still be useful ; all else being equal , more ionization enhances tke growth under rapid compression by weakening the viscosity growth .
the present work also neglects magnetic field effects , in line with other studies of turbulence in 3d compressions @xcite .
this , too , limits the applicability to z - pinch compressions , though there are also many instances in which the magnetic field need not dominate the dynamics in a z - pinch @xcite .
the evolving ionization state during compression may be exploitable to optimize tke growth before sudden dissipation , and to control the timing of the dissipation .
if the ions in the compression become maximally ionized , then the viscosity change reverts to being dominated by temperature , while tke growth up to this point will be larger than without ionization .
mixes of ion species open up a wide range of control possibilities for the viscosity dependence on compression , but also introduce other complications ( e.g. species separation ) , and are beyond the scope of the present work .
however , the prospect of controlling ion charge state and thereby viscosity appears to enlarge considerably the parameter space of opportunities for optimizing both the energy and pulse length in a sudden dissipation resulting in x - ray emission .
the structure of the paper is as follows .
section [ sec : model and energy ] gives a brief description of the model , and discusses the energy equation for the turbulence , which is used in sec . [ sec : analysis ] to show some analytic results and to describe the general phenomenology . to go along with this analysis , the results from numerical simulations of compressing turbulence with ionization are displayed in figs . [ fig : kevsl_twobetas ] , [ fig : kevsl_beta1 ] and [ fig : vary_beta ] and discussed in the captions and sec . [ sec : simulations ] .
section [ sec : discussion ] discusses implications of the results and caveats associated with them .
some secondary calculations associated with secs . [ sec : model and energy ] and [ sec : analysis ] are contained in the appendix , and referenced at the appropriate point .
the model used here follows previous work by wu @xcite and others @xcite , and is the same as that in davidovits and fisch @xcite , allowing for a general power @xmath19 for the viscosity dependence on temperature . for completeness
a derivation is given in the appendix section [ sec : derivation ] .
the essence of the model is as follows .
it describes the 3d , isotropic compression of homogeneous turbulence in the limit where the turbulence mach number goes to zero .
compression is achieved through an imposed background flowfield .
the effect of the flow is that a cube , of initial side length @xmath32 , will shrink in time but remain a cube .
the side length of the box as a function of time will be @xmath33 where @xmath34 is the ( constant ) velocity of each side of the cube . in the low mach limit
, density fluctuations are ignored , and the density increases in time as one would expect for the compression ,
@xmath35 the temperature of the compressing plasma is that for adiabatic compression of an ideal gas , @xmath36 the viscosity dependence on @xmath10 ( alternatively , @xmath37 ) is given by eq .
( [ eq : mu ] ) .
the evolution of the initially turbulent flow is solved in a frame that moves along with the background flow , on a domain that extends from @xmath38 $ ] in each dimension and has periodic boundary conditions .
the energy in the turbulence in this frame is the same as in the lab frame . in this frame , after using eqs .
( [ eq : mu],[eq : mean_density_solution],[eq : temperature_solution ] ) to write the density , temperature and viscosity dependence in terms of @xmath10 , the navier - stokes equation for the turbulence is @xmath39 the initial kinematic viscosity is @xmath40 , and @xmath41 .
the energy density in the fluctuating flow , calculated in the moving frame is @xmath42 .
the total energy is then @xmath43 . since @xmath44 ( see appendix sec . [ sec : derivation ] ) , this total energy is the same as the total energy in the lab frame ( in the lab frame , the density increases , but the volume to be integrated decreases in a manner that balances it ) .
the time evolution of the energy density is @xmath45 equation ( [ eq : moving_momentum ] ) is used to write this energy equation explicitly . in fourier ( @xmath46 , wavenumber )
space , since the flow is assumed to be homogeneous and isotropic , it is @xmath47 with @xmath48 a nonlinear term that includes the effects of the pressure and @xmath49 terms ( see , e.g. mccomb @xcite ) .
the effect of @xmath48 is to transfer energy between wavenumbers ( modes ) , conservatively .
integrated over the whole of @xmath46 space , it vanishes .
the total energy is @xmath50 . in the moving frame , @xmath51 is fixed , given by the initial size of the compressing system , e.g. a capsule ( although the current model uses periodic boundaries ) . in principle structures can be arbitrarily small , so @xmath52 , but practically @xmath53 will be zero above some @xmath46 .
the evolution of the total energy is , @xmath54 .

( figure caption , [ fig : kevsl_beta1 ] ) as in fig . [ fig : kevsl_twobetas ] , but for @xmath55 , at a lower reynolds number , and with a logarithmic scale for @xmath10 . no eventual sudden dissipation is observed , even after extreme amounts of compression . after an initial growth phase , the turbulent kinetic energy ( tke ) saturates and fluctuates around the mean level predicted by eq . ( [ eq : e_steady_ub ] ) . this theoretically predicted mean level of the tke is shown as a dotted line for each compression velocity . note eq . ( [ eq : e_steady_ub ] ) must be written in the same velocity normalization as the figure before being applied .

( figure caption , [ fig : vary_beta ] ; cf . [ fig : kevsl_beta1 ] ) the turbulent kinetic energy ( tke ) for the same initial condition compressed at two different rates and different values of @xmath19 , showing the effect of varying the amount of ionization during compression ( @xmath19 ) . the red , solid lines use a compression time that is half the initial turbulent decay time , while the blue , dashed lines use a compression time that is the same as the initial turbulent decay time . for a given compression rate ( @xmath56 or @xmath57 ) , the tke is larger at every stage of the compression when @xmath19 is lower ( when there is more ionization during compression ) . for the case when @xmath58 , the tke purely decays when @xmath59 ( the plasma case with no ionization ) . ionization during compression can cause this to no longer be the case ; when @xmath19 decreases to 1.5 or 1.0 , the tke either grows before dissipating , or grows without dissipating .
since @xmath60 , the energy is guaranteed to decrease if the coefficient of @xmath53 in eq .
( [ eq : integrated_energy ] ) is negative for all @xmath61 $ ] ; conversely , it is guaranteed to increase if the coefficient is positive for all @xmath46 where @xmath62 .
( however , this latter condition is difficult to work with , since for @xmath63 there is always damping and as the energy increases @xmath48 will tend to move energy to higher @xmath46 ) .
these conditions are sufficient , but not necessary .
the guaranteed decrease condition requires that for every mode @xmath64 the left hand side is largest for @xmath65 , and trends to 0 as @xmath63 .
when @xmath66 , the right hand side of eq .
( [ eq : decrease_condition ] ) starts at 1 at @xmath67 and increases towards @xmath68 as @xmath69 . at some time
the condition will be satisfied for all @xmath46 , when @xmath70 . thus , the energy will always decay eventually for fixed @xmath71 when @xmath66 .
this is not to say that the energy can not decrease before this condition is satisfied . when @xmath28 , the right hand side of eq .
( [ eq : decrease_condition ] ) is 1 . in this case
there is no time dependence in the condition for guaranteed energy decrease . if eq .
( [ eq : decrease_condition ] ) is initially satisfied for all @xmath46 , the energy will purely decay , with no initial growth phase .
otherwise , a fixed range of wavenumbers have a net positive coefficient for @xmath53 in eq .
( [ eq : integrated_energy ] ) , while the rest have a net negative coefficient ( ignoring the nonlinearity ) .
the wavenumber cutoff between these two regions is given by equality in eq .
( [ eq : decrease_condition ] ) , @xmath73 the width of wavenumbers with a net forcing ( linearly ) is @xmath74 .
since the range of net forced wavenumbers is fixed , it might be expected that the energy will reach a ( statistical ) steady state .
this is the case , and it can be shown ( see appendix sec . [ sec : beta_1_steady ] ) that the statistically steady state energy is @xmath75 . further , the spectrum itself , @xmath53 , converges to a statistical steady state @xmath76 in simulations @xcite . while the steady state energy is independent of the viscosity @xmath77 ( alternatively , the initial reynolds number ) , the details of the energy spectrum of the saturated turbulence will not be .
also , as already mentioned , if the initial viscosity is too large , there is no steady state and the energy will purely decay .
this steady state energy can be rewritten in terms of @xmath78 by using eq .
( [ eq : k_cutoff ] ) , @xmath79 . once the sign of the coefficient of @xmath53 at a given @xmath46 in eq .
( [ eq : integrated_energy ] ) is being considered ( rather than the sign of _ all _ coefficients ) , the nonlinearity can not be ignored .
thus , @xmath80 is not necessarily a true ( statistically steady state ) cutoff between net forced and damped modes , but rather the linear cutoff . when @xmath29 , the right hand side of eq .
( [ eq : decrease_condition ] ) trends to 0 as time increases , and an increasing number of shorter wavelength modes will have a net forcing ( ignoring the nonlinearity ) .
this means that @xmath78 trends to infinity as @xmath69 . with the rather large caveats that in this case the problem is not an equilibrium one , and that the nonlinearity has been ignored in looking at the number of modes with a net forcing
, the result from the equilibrium case that the steady state energy is proportional to the number of linearly forced modes suggests that the energy for @xmath29 continually increases for late times ( after any initial transients are erased ) under constant compression . note that neutral gas , compared to plasma with no ionization , has a weak dependence of viscosity on temperature , with studies of compressing gas turbulence using values that fall in the @xmath29 case ( e.g. @xmath82 @xcite ) .
turbulence closure models in these works , which include the evolution of the tke in a neutral gas under compression , give a continually increasing tke when evaluated for an initially rapid , constant velocity , 3d compression , consistent with the suggestion here .
in section [ sec : analysis ] we showed that : for @xmath83 , the tke should always eventually damp , even with continued constant velocity compression ( which represents an ever increasing compressive force ) ; and when @xmath72 , the tke will either purely decay or reach a steady state under continued compression .
we also suggested that the energy always increases under continued constant velocity compression when @xmath81 ( if the compression is initially rapid , if not , the energy may decrease for some period before eventually increasing ) .
these represent different regimes of the viscosity dependence on compression - with little to no ionization during compression , @xmath19 will be near the ionization free value , @xmath84 , and the sudden viscous dissipation phenomenon will still be possible . if substantial ionization occurs during a phase of the compression , then @xmath19 may be significantly reduced from @xmath30 , and the viscous dissipation of the tke will be prevented . in order to get a better sense of the effect of decreasing @xmath19 , we perform direct numerical simulation of compressing turbulence for a few values of @xmath19 . the scaled form of the momentum equation , eq .
( [ eq : scaled_momentum ] ) , is simulated with periodic boundary conditions using the spectral code dedalus @xcite .
results are then translated back into the lab frame using the appropriate rescaling .
initial conditions are generated using the forcing method of lundgren @xcite .
all simulations are carried out on a @xmath85 fourier grid , which is dealiased to @xmath86 .
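as a hedged aside on the dealiasing step mentioned above ( the grid size is not reproduced here , and this is not the dedalus implementation , only a sketch of the standard 2/3 rule ) :

```python
# hedged sketch of a 2/3-rule dealiasing mask for a pseudospectral solver on an
# n**3 periodic fourier grid; illustrative only, not taken from the paper's code.
import numpy as np

def dealias_mask(n):
    """Boolean mask keeping modes with integer wavenumber below 2/3 of Nyquist."""
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers 0..n/2-1, -n/2..-1
    keep_1d = np.abs(k) < (2.0 / 3.0) * (n / 2.0)
    kx, ky, kz = np.meshgrid(keep_1d, keep_1d, keep_1d, indexing="ij")
    return kx & ky & kz

# usage: zero the aliased modes of a spectral field u_hat of shape (n, n, n)
# u_hat *= dealias_mask(n)
```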
simulations are done for three different values of @xmath19 , @xmath30 , @xmath87 , and @xmath88 . of note
is that for @xmath89 , the forcing term drops out of eq .
( [ eq : scaled_momentum ] ) , and it is simply the usual navier - stokes equation .
this means that a single decaying turbulence simulation can give results for all compression velocities ( at one initial reynolds number ) .
figures [ fig : kevsl_twobetas ] , [ fig : kevsl_beta1 ] and [ fig : vary_beta ] and their captions describe results from these simulations . the simulations in fig . [ fig : kevsl_twobetas ] are carried out with an initial reynolds number of 600 . those for fig . [ fig : kevsl_beta1 ] are carried out with an initial reynolds number of 100 , which is necessary so that the turbulence remains fully resolved at saturation . those in fig . [ fig : vary_beta ] also use an initial reynolds number of 100 , again to keep the @xmath55 case fully resolved at saturation .
the present model ignores many effects that do or may play an important role during compression of plasmas .
the suggested @xmath28 cutoff between eventually dissipating and perpetually growing tke need not hold true in a more complete model .
non - ideal equation of state effects are neglected .
it should be noted that only constant velocity compressions were considered ; compressions with time - dependent velocities would also change the cutoff .
boundary effects , which are ignored , would be expected to become increasingly important as the amount of compression increased .
the manner in which the ionization is accounted for neglects , among other effects , the energy required to achieve the ionization .
if this energy comes at the expense of the temperature , and the true rate of temperature increase is less than @xmath90 , this would alter @xmath19 , but the general idea remains the same .
subsonic compressions have been assumed , which is not necessarily the case for current compression experiments , nor is it the regime in which schemes utilizing the sudden dissipation effect would likely be operated . because the compressions are subsonic , the feedback of the dissipated tke into the temperature is also neglected , which is expected to only make the sudden dissipation , once it happens , even more sudden . as previously discussed , magnetic effects have also been neglected .
although this may be reasonable for 3d compressions , and certain regimes of 2d z - pinch compressions , to expand the study for general 2d compression ( z - pinch ) applicability will require the inclusion of magnetic effects .
the inclusion of magnetic fields through a magnetohydrodynamic ( mhd ) model will introduce a number of new considerations .
if there is a strong background magnetic field , the plasma conditions can be highly anisotropic , and the intuition from the present discussion may be difficult to apply . with magnetic fields included , turbulent energy can be stored in fluctuations of the magnetic field , and turbulent dissipation can occur through the plasma diffusivity , @xmath91 . continuing to assume the incompressible limit ,
then , in addition to the reynolds number , the magnetic reynolds number @xmath92 and the magnetic prandtl number @xmath93 are also important for characterizing any turbulence .
the plasma magnetic diffusivity scales with ion charge state and plasma temperature as @xmath94 .
then , the magnetic prandtl number has a temperature and charge state scaling of @xmath95 .
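a hedged sketch of where such a scaling can come from ( the paper's exact expression is not reproduced here ; a spitzer - like magnetic diffusivity and the ion viscosity scaling quoted earlier are assumed , and density factors and the coulomb logarithm are suppressed ) :

\[
\eta \;\propto\; \frac{Z}{T^{3/2}} , \qquad
\nu \;\propto\; \frac{T^{5/2}}{Z^{4}}
\quad\Longrightarrow\quad
P_m \;=\; \frac{\nu}{\eta} \;\propto\; \frac{T^{4}}{Z^{5}} ,
\]

so heating drives the magnetic prandtl number up strongly while ionization pulls it back down .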
the behavior of mhd turbulence is influenced by the relative values of @xmath96 , @xmath97 , and @xmath98 ; they affect whether magnetic fluctuations will grow ( see e.g. @xcite ) , the saturated ratio of turbulent magnetic energy compared to kinetic energy , and the steady state ratio of viscous dissipation to dissipation through the magnetic diffusivity ( see e.g. @xcite ) . from the considerations in the present work ,
it is clear that , depending on the amount of ionization during compression , a range of behaviors for the dimensionless quantities is possible .
considering the limit of no ionization , and assuming the scalings @xmath99 , one has that : the viscosity increases with compression ( and the reynolds number decreases ) , the magnetic diffusivity decreases ( and the magnetic reynolds number increases ) , and the magnetic prandtl number increases . at high magnetic prandtl number and large magnetic reynolds number ( but assuming the reynolds number is still large enough for turbulent flow ) ,
the small scale dynamo is effective , so that magnetic perturbations can grow up quickly and saturate , while the ratio of kinetic dissipation to magnetic dissipation appears to grow large @xcite .
investigations of these effects are a subject of current research and debate , and are typically carried out in steady state , whereas the sudden dissipation effect relies on dynamics far from steady state . as such , specific investigations of mhd effects on sudden dissipation are needed before making predictions . a simple model of the impact of radiation , the effects of which
have been neglected in the preceding discussion , is included in the appendix sec .
[ sec : radiation ] .
this model consists of a temperature equation that includes mechanical heating and radiative cooling due to optically thin electron bremsstrahlung . with no radiation ,
the mechanical heating gives the @xmath100 adiabatic temperature scaling .
when the bremsstrahlung is included , it is shown that the temperature can still track closely with the adiabatic result for a large amount of the compression , provided the initial ratio of the radiation term to the mechanical heating term is very small . then , the results in the present work will not be significantly modified .
the ratio of the radiation term to the mechanical heating term , @xmath101 , can be written as @xmath102 . for details , see appendix sec . [ sec : radiation ] . here , @xmath103 is the compression time , @xmath104 the density , @xmath37 the temperature , @xmath105 the ion mass number , and @xmath1 the ion charge state .
these considerations are a subset of the usual power balance requirements for inertial confinement experiments ( see , for example , lindl @xcite ) . in both cases
, it is desirable to operate in parameter regimes where the temperature increases under compression when radiation effects are included . from the perspective of radiation
, the presence and quantity of the hydrodynamic motion does not modify the potential operating regimes as compared to compression schemes without hydrodynamic motion .
the same will be true with the inclusion of line - radiation , important for high - z plasmas .
however , the operating regimes where the temperature increases under compression will be modified by the turbulence in ( at least ) two ways that are neglected in this work .
first , before any sudden dissipation event , there will be some level of viscous dissipation of hydrodynamic motion into temperature .
when the hydrodynamic energy is large compared to the thermal energy , this heating may somewhat relax the operating regimes where the temperature increases under compression .
on the other hand , a second neglected effect , turbulent heat transport , represents a cooling effect that opposes this heating effect .
once the sudden dissipation event is triggered , in the supersonic case with the feedback of dissipated tke into temperature included , the temperature should rise rapidly .
taking into account radiation will then be important for modeling the sudden dissipation event itself , which occurs over a small time interval so that the plasma volume hardly changes . despite these deficiencies ,
the present work serves to highlight the sensitivity of tke growth under compression to changes in the viscosity scaling with compression , in which ionization can play a strong role .
thus , even modest contamination of a low - z plasma with higher - z constituents may have substantial hydrodynamic implications , as , say , atomic mix in an icf hotspot .
finally , this sensitivity to the ionization state suggests that the possibilities for control of tke growth and sudden dissipation for x - ray production are now significantly expanded .
this expansion of possibilities comes in part from the prospect of considering a wide range of ion - species mixes .
although outside the scope of the present work , it can be anticipated that using a variety of mixtures could enable detailed and controlled shaping of the x - ray emission pulse .
this work was supported by doe through contracts no . de-ac02-09ch1-1466 and nnsa 67350-9960 ( prime @xmath106 doe de-na0001836 ) , by dtra hdtra1-11-1-0037 , and by nsf contract no . phy-1506122 .
although essentially identical models have been discussed elsewhere @xcite , for the sake of completeness , and to explain some details , we present a derivation here . start with the continuity and momentum equations for compressible navier - stokes , @xmath107 the stokes hypothesis has been used , that the second viscosity coefficient , often denoted @xmath108 , is @xmath109 .
this form of the rate of strain tensor is consistent with the braginskii result @xcite .
the unknowns are rewritten as two parts , @xmath110 where @xmath111 is given , and the subscript 0 indicates ensemble averaged quantities , while prime quantities have 0 ensemble average .
the prime quantities are assumed to be statistically homogeneous , and ultimately the equations governing their evolution will have no explicit spatial dependence , allowing the use of periodic boundary conditions . for the prime quantities to be homogeneous , it can be shown ( see , e.g. blaisdell @xcite ) that the flow @xmath111 must be of the form , @xmath112 for this work , only pure ( no shear ) , isotropic compressions are considered , so that @xmath113 with @xmath114 the kronecker delta . when @xmath115 , this enforced `` background '' flow is compressive . with these assumptions , the continuity equation is @xmath116 taking an ensemble average gives an equation for @xmath117 .
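a hedged reconstruction of the form referred to above ( the explicit expression is not reproduced here ; the symbols a_{ij} and a ( t ) are introduced for illustration ) : homogeneity of the fluctuations requires a background velocity linear in position , and the no - shear , isotropic case restricts its gradient to a multiple of the identity ,

\[
\bar{v}_i(\mathbf{x},t) \;=\; A_{ij}(t)\,x_j , \qquad
A_{ij}(t) \;=\; a(t)\,\delta_{ij} ,
\]

with a ( t ) < 0 corresponding to compression .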
denoting the average as @xmath118 , then by definition @xmath119 .
also , @xmath120 because ensemble averages , such as @xmath121 , are assumed to be homogeneous .
the equation for @xmath117 is then , @xmath122 it can be shown @xcite that only for @xmath123 can the homogeneous turbulence constraint be satisfied . dropping the second term in eq .
( [ eq : mean_continuity ] ) accordingly , the density is @xmath124 ( eq . [ eq : mean_density_exp ] ) . the fluctuating density is determined by eq .
( [ eq : continuity ] ) , which can be simplified by canceling the terms that sum to 0 according to eq .
( [ eq : mean_continuity ] ) .
it is @xmath125 for incompressible fluctuating ( non - background ) flow , we assume that the flow @xmath126 is low mach , so that sound waves can be neglected and the density perturbation @xmath127 can be ignored .
then , the fluctuating continuity equation reduces to the divergence free constraint on the prime velocity , @xmath128 with @xmath111 as given , @xmath129 , and @xmath117 depending only on time , the momentum equation is @xmath130 the ensemble averaged momentum equation is @xmath131 in arriving at @xmath132 , the viscosity @xmath133 is assumed to be independent of space . the mean momentum equation , eq .
( [ eq : mean_momentum ] ) , says @xmath134 is quadratic in @xmath135 , unless @xmath136 , in which case @xmath134 is independent of @xmath135 .
since @xmath137 sets the time dependence of the background flow ( the rate of compression ) , this means only for one particular background flow can @xmath134 be independent of @xmath135 . for the purposes of this work , we consider temperature dependent viscosity , @xmath138 .
the equation of state relates the pressure , density and temperature , @xmath139 .
this is @xmath140 , which becomes , after taking the ensemble average , @xmath141 in order to have @xmath142 , so that @xmath143 is independent of space , we must take @xmath144 so that @xmath145
. then , eqs .
( [ eq : mean_density_exp],[eq : mean_eos],[eq : flow_condition ] ) and the condition for an adiabatic compression together determine @xmath37 and @xmath134 .
subtracting eq .
( [ eq : mean_momentum ] ) from eq .
( [ eq : momentum ] ) gives the equation governing the fluctuating flow , @xmath146 the explicit spatial dependence can be removed by transforming coordinates .
transforming as @xmath147 yields , @xmath148 then , if the condition @xmath149 is satisfied , the explicit spatial dependence is removed from the moving frame momentum equation , eq .
( [ eq : moving_partway ] ) and it becomes , @xmath150 together , the conditions eq .
( [ eq : alpha_condition ] ) and eq .
( [ eq : flow_condition ] ) say that @xmath151 .
consistent with this , define @xmath152 then @xmath153 and the background flow ( @xmath154 ) is such that a cube of initial side length @xmath155 , placed in the flow at @xmath156 , will remain a cube and shrink in time at a constant rate while having a side length of @xmath157 .
using @xmath137 from eq .
( [ eq : a ] ) in eq .
( [ eq : mean_density_exp ] ) gives the expected density dependence , eq .
( [ eq : mean_density_solution ] ) . using the viscosity , density , and temperature solutions , eqs .
( [ eq : mu],[eq : mean_density_solution],[eq : temperature_solution ] ) , in the moving frame momentum equation , eq . ( [ eq : moving_non_l ] ) gives the model equation eq .
( [ eq : moving_momentum ] ) . the independent variables in eq .
( [ eq : moving_momentum ] ) can be rescaled , and some time dependent coefficients eliminated .
this is useful for simulations , and can be an aid in analysis . using the scalings , @xmath158 in eq .
( [ eq : moving_momentum ] ) gives , @xmath159 the standard nondimensionalization has been used , so that @xmath160 . equation [ eq : momentum_scalings ] has four independent powers of @xmath161 , and three undetermined scaling factors , @xmath162 , @xmath91 , and @xmath163 , so that the time dependence can be eliminated from all but one term .
one specific choice takes @xmath164 to eliminate the forcing term ( with @xmath165 in the coefficient ) , and then the time dependence of two other terms can be eliminated . the choice where the forcing term and all time dependence but the viscosity 's are eliminated has been discussed by cambon et al . @xcite . choosing instead to eliminate the time dependence of all but the forcing term fixes the remaining scaling factors . a steady state solution ( @xmath168 ) to the total energy equation , eq .
( [ eq : integrated_energy ] ) , when @xmath28 , would mean @xmath169 where @xmath170 is the mean dissipation in steady state . when @xmath28 , the scaled momentum equation in the moving frame , eq .
( [ eq : scaled_momentum ] ) is the usual navier - stokes equation with a time independent forcing .
this equation has been studied in the context of a forcing scheme for isotropic fluid turbulence , where the term @xmath171 is added as an alternative to band - limited wavenumber space forcings @xcite .
numerical simulations by rosales and meneveau @xcite show that , in steady state , solutions have a characteristic length scale , @xmath172 , where @xmath173 is the domain size .
accounting for definitions and the scalings in eqs .
( [ eq : scalings ] ) , this relationship between @xmath174,@xmath32 , and @xmath175 allows us to solve for @xmath174 , @xmath176 then eqs . ( [ eq : equilibrium_condition ] ) and ( [ eq : epsilon_ss ] ) can be solved for @xmath177 , yielding eq .
( [ eq : e_steady_ub ] ) in section [ sec : analysis_beta1 ] .
given here is a simple accounting of the effects of radiation without straying far from the present model .
an optically thin plasma , with a single ion species of a single ( time dependent ) charge state @xmath1 is assumed .
the power density of electron bremsstrahlung emitted from an optically thin plasma , assuming @xmath178 and @xmath179 , is @xmath180 = c_b \, ( t [ \mathrm{ev} ] )^{1/2} \, n^2 z^3 ( eq . [ eq : bremsstrahlung ] ) . the bremsstrahlung constant is @xmath181 .
the internal energy equation for the isotropically compressed plasma , including the mechanical work and bremsstrahlung terms only , and continuing to assume that the adiabatic index @xmath182 , is @xmath183 here @xmath184 is the boltzmann constant , @xmath185 can be found from eq .
( [ eq : l_def ] ) , and @xmath186 is the total number density , @xmath187 .
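as a hedged sketch of the balance being described ( the paper's equation is not reproduced here ; the form below is written per unit volume for a monatomic ideal gas in a box of side l ( t ) , and omits the ionization - energy term that the text treats separately ) :

\[
\frac{3}{2}\, n\, k_B \frac{dT}{dt}
\;=\;
-\,3\, n\, k_B\, T\, \frac{1}{L}\frac{dL}{dt}
\;-\; P_{\mathrm{brem}}(n, T, Z) ,
\]

with the first term on the right the mechanical ( pdv ) heating , which alone reproduces t \propto l^{-2} , and the second the optically thin bremsstrahlung loss of eq . ( [ eq : bremsstrahlung ] ) .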
consistent with the spirit of the model described in sec .
[ sec : model and energy ] and the appendix sec .
[ sec : derivation ] , the density is taken to be @xmath188 ( see eq .
( [ eq : mean_density_solution ] ) ) . then , if the bremsstrahlung term in eq .
( [ eq : internal_energy ] ) is ignored , the solution is @xmath189 , as in eq .
( [ eq : temperature_solution ] ) . rewriting eq .
( [ eq : internal_energy ] ) as an equation for the normalized temperature , @xmath190 , as a function of the compression , while assuming that the charge state @xmath1 is a function of temperature , gives @xmath191 the first term in eq .
( [ eq : modified_t ] ) gives the mechanical heating ( adiabatic heating when taken alone ) , while the second term represents bremsstrahlung cooling .
the last term is associated with the energy needed to bring newly ionized electrons to the temperature @xmath37 .
if the charge state increases with temperature , and the temperature increases with compression ( with decreasing @xmath161 ) then it is a cooling term ( acts to decrease the temperature ) .
it should not , however , be taken as an accurate accounting of this energy .
our primary focus is comparing the radiation and adiabatic compression terms .
the relative size of the radiation term is set by the compression time , @xmath192 , and the initial radiation time , @xmath193 = 3.01 \times 10^{-9} \, a_i \, t_{\mathrm{kev}}^{1/2} / \rho_{\mathrm{g/cc}} . the ratio @xmath194 , multiplied by the charge state coefficient @xmath195 , gives the ratio of the bremsstrahlung cooling to the mechanical heating for any set of density , temperature , charge state , and ion mass number @xmath105 . to solve for the temperature evolution as a function of compression ,
one evaluates the ratio at the initial temperature and density , as in eq .
( [ eq : modified_t ] ) , and solves that equation . for an arbitrary function @xmath196 , the temperature will have some dependence on @xmath10 , which can be used instead of the adiabatic relation @xmath197 in the model described in secs . [
sec : model and energy ] , [ sec : derivation ] .
generally this will break the ability to reach a nicely scaled equation for the sake of simulation , eq .
( [ eq : scaled_momentum ] ) . to give a simple example , consider the case where the charge state takes a simple power law relation with the temperature , @xmath198 approximating @xmath199 , eq . ( [ eq : modified_t ] ) can be reduced to @xmath200 where the prefactor is due to the last term in eq .
( [ eq : modified_t ] ) ( the energy required to bring newly ionized electrons to temperature @xmath201 ) , and will result in an effectively lower adiabatic index . however , this is not a radiation effect , and will be ignored for the discussion here .
equation [ eq : general_phi_t ] can be solved analytically , and the solution takes a particularly simple form for @xmath202 , which has behavior that is qualitatively similar to the solutions for other @xmath203 .
when @xmath202 , and ignoring the prefactor on the derivative , the solution to eq .
( [ eq : general_phi_t ] ) is @xmath204 . for small initial compression time to radiation time ( @xmath205 ) , the temperature tracks very closely with @xmath206 , up until the radiation becomes important , @xmath207 , for this @xmath208 case .
then , the model for turbulence behavior with ionization discussed in this work will be unmodified up until the point where the radiation becomes important .
provided that one starts the compression with a small initial @xmath194 , this can hold for large compression ratios .
note that in this case , the temperature is no longer a state - function of compression , since it depends also on the compression rate .
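to make the competition concrete , here is a hedged numerical sketch built on the per - volume balance written above ; the normalization of the cooling term through a single parameter ( the initial cooling - to - heating ratio ) and the power - law charge state are choices made here , not taken from the text .

```python
# hedged numerical sketch: temperature versus compression for a constant-velocity
# 3d compression with adiabatic heating and optically thin bremsstrahlung cooling.
# the normalization and the charge-state power law are assumptions of this sketch.
import numpy as np

def temperature_history(ratio0, phi, x_final=0.1, steps=20000):
    """Integrate dT/dx from x = L/L0 = 1 down to x_final (forward Euler).

    ratio0 : initial bremsstrahlung-cooling / mechanical-heating ratio
    phi    : assumed charge-state power law, Z/Z0 = T_hat**phi
    """
    xs = np.linspace(1.0, x_final, steps)
    dx = xs[1] - xs[0]                       # negative step: the box shrinks
    T = np.empty_like(xs)
    T[0] = 1.0                               # T_hat = T / T0
    for i in range(steps - 1):
        x, Th = xs[i], T[i]
        heating = -2.0 * Th / x              # alone gives the adiabatic T ~ x**-2
        cooling = 2.0 * ratio0 * np.sqrt(Th) * x**-3 * Th**(3.0 * phi)
        T[i + 1] = Th + (heating + cooling) * dx
    return xs, T

# usage (placeholder parameters): a small ratio0 tracks the adiabatic curve deep
# into the compression, a larger ratio0 departs from it earlier.
# xs, T = temperature_history(ratio0=1e-3, phi=0.5)
```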
when the temperature tracks closely with @xmath206 , @xmath208 corresponds to the @xmath209 case , for which simulation results are included in figs .
[ fig : kevsl_twobetas ] and [ fig : vary_beta ] .
radiation considerations are discussed further in sec .
[ sec : discussion ] .
references ( recoverable identifiers only ) : doi 10.1103/physrevlett.116.105004 ; doi 10.1103/physrevlett.107.105001 ; doi 10.1103/physrevlett.72.3827 ; doi 10.1103/physrevlett.98.115001 ; doi 10.1103/physrevlett.111.035001 ; doi 10.1103/physrevlett.109.075004 ; doi 10.1103/physreve.89.053106 ; http://adsabs.harvard.edu/abs/1992ejmf...11..683c ; doi 10.1007/s10494-014-9535-7 ; doi 10.1146/annurev.fl.19.010187.002531 ; doi 10.1017/s0022112090002075 ; doi 10.1063/1.2047568 ; doi 10.1007/978-3-642-77674-8_19 ; http://stacks.iop.org/1367-2630/9/i=8/a=300 ; doi 10.1103/physrevlett.98.208501 ; http://stacks.iop.org/0004-637x/791/i=1/a=12 ; doi 10.1063/1.4826315 | turbulent plasma flow , amplified by rapid 3d compression , can be suddenly dissipated under continuing compression .
this effect relies on the sensitivity of the plasma viscosity to the temperature , @xmath0 .
the plasma viscosity is also sensitive to the plasma ionization state .
we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression , and demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed .
additionally , it is shown that , compared to cases with no ionization , ionization during compression is associated with larger increases in turbulent energy , and can make the difference between growing and decreasing turbulent energy . |
Editor’s note: Jamal Khashoggi is a Saudi journalist and author, and a columnist for Washington Post Global Opinions. Khashoggi’s words should appear in the space above, but he has not been heard from since he entered a Saudi consulate in Istanbul for a routine consular matter on Tuesday afternoon.
Read the Washington Post editorial: Where is Jamal Khashoggi? ||||| [Image caption: The blank column appears online and in the print edition]
The Washington Post has printed a blank column in support of its missing Saudi contributor Jamal Khashoggi.
Mr Khashoggi - a critic of Saudi Crown Prince Mohammed bin Salman - has not been seen since visiting the Saudi consulate in Istanbul on Tuesday.
Saudi Arabia says he left the building but Turkey says he may still be inside.
The newspaper said it was "worried" and called on Mr bin Salman to "welcome constructive criticism from patriots such as Mr Khashoggi".
In an editorial, it asked that the crown prince do "everything in his power" to let the journalist work.
Previously an adviser to senior Saudi officials, Mr Khashoggi moved abroad after his Saudi newspaper column was cancelled and he was allegedly warned to stop tweeting his criticisms of the crown prince's policies.
The 59-year-old commentator has been living in self-imposed exile in the US and working as a contributor to the Washington Post.
What happened on Tuesday?
Mr Khashoggi went to the Istanbul consulate to obtain official divorce documents so that he could marry his Turkish fiancée, Hatice.
He left his phone with Hatice outside the consulate and asked her to call an adviser to Turkish President Recep Tayyip Erdogan if he did not return.
[Media caption: Jamal Khashoggi: Saudi Arabia needs reform, but one-man rule is "bad" for the kingdom]
Hatice said she waited for Mr Khashoggi outside the consulate from about 13:00 (10:00 GMT) until after midnight and did not see him leave. She returned when the consulate reopened on Wednesday morning.
What do Saudi Arabia and Turkey say?
Turkey has said it believes he remains inside the building, while a Saudi official said Mr Khashoggi filled out his paperwork and then "exited shortly thereafter".
On Thursday the official Saudi Press Agency cited the consulate as saying it was working with Turkish authorities to probe Mr Khashoggi's disappearance "after he left the consulate building".
The US state department has also requested information about Khashoggi's whereabouts and expressed concern about his safety.
The BBC's Mark Lowen says the mystery threatens to deepen the strains in the relationship between Turkey and Saudi Arabia.
Turkey has taken the side of Qatar over its blockade by Saudi Arabia and other neighbours, and Turkey's rapprochement with Iran has riled the government in Riyadh, our correspondent adds.
Why might Saudi Arabia want Khashoggi?
He is one of the most prominent critics of the crown prince, who has unveiled reforms praised by the West while carrying out an apparent crackdown on dissent, which has seen human and women's rights activists, intellectuals and clerics arrested, and waging a war in Yemen that has triggered a humanitarian crisis.
A former editor of the al-Watan newspaper and a short-lived Saudi TV news channel, Mr Khashoggi was for years seen as close to the Saudi royal family and advised senior Saudi officials.
After several of his friends were arrested, his column was cancelled by the Al-Hayat newspaper and he was allegedly warned to stop tweeting, Mr Khashoggi left Saudi Arabia for the US, from where he wrote opinion pieces for the Washington Post and continued to appear on Arab and Western TV channels.
"I have left my home, my family and my job, and I am raising my voice," he wrote in September 2017. "To do otherwise would betray those who languish in prison. I can speak when so many cannot." | – The Washington Post has printed one of the most unusual op-ed pieces in its history. It consists of a byline—that of Jamal Khashoggi—and then is followed by blank space. In both its print and online versions, the space where Khashoggi's column should be has been left deliberately empty, reports the BBC. It's the newspaper's way of protesting the journalist's disappearance in Saudi Arabia. Khashoggi, a frequent critic of Saudi Crown Prince Mohammed bin Salman, has not been seen since Tuesday. He visited the Saudi consulate in Istanbul, Turkey, on that day, and while the Saudis say he left the building, Turkish officials say they don't think he did. The Post also has an editorial about the situation in which it asks the crown prince to make sure that Khashoggi is OK. The piece notes that the de facto Saudi leader is trying to modernize his nation's image by doing away with its authoritarian old ways. "If he is truly committed to this, he will welcome constructive criticism from patriots such as Mr. Khashoggi," says the editorial. "And he will do everything in his power to ensure that Mr. Khashoggi is free and able to continue his work." |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``State Sponsors of Terrorism Review
Enhancement Act''.
SEC. 2. MODIFICATIONS OF AUTHORITIES THAT PROVIDE FOR RESCISSION OF
DETERMINATIONS OF COUNTRIES AS STATE SPONSORS OF
TERRORISM.
(a) Foreign Assistance Act of 1961.--Section 620A of the Foreign
Assistance Act of 1961 (22 U.S.C. 2371) is amended--
(1) in subsection (c)(2)--
(A) in the matter preceding subparagraph (A), by
striking ``45 days'' and inserting ``90 days''; and
(B) in subparagraph (A), by striking ``6-month
period'' and inserting ``24-month period'';
(2) by redesignating subsection (d) as subsection (e);
(3) by inserting after subsection (c) the following:
``(d) Disapproval of Rescission.--No rescission under subsection
(c)(2) of a determination under subsection (a) with respect to the
government of a country may be made if the Congress, within 90 days
after receipt of a report under subsection (c)(2), enacts a joint
resolution described in subsection (f)(2) of section 40 of the Arms
Export Control Act with respect to a rescission under subsection (f)(1)
of such section of a determination under subsection (d) of such section
with respect to the government of such country.'';
(4) in subsection (e) (as redesignated), in the matter
preceding paragraph (1), by striking ``may be'' and inserting
``may, on a case-by-case basis, be''; and
(5) by adding at the end the following new subsection:
``(f) Notification and Briefing.--Not later than--
``(1) ten days after initiating a review of the activities
of the government of the country concerned within the 24-month
period referred to in subsection (c)(2)(A), the President,
acting through the Secretary of State, shall notify the
Committee on Foreign Affairs of the House of Representatives
and the Committee on Foreign Relations of the Senate of such
initiation; and
``(2) 20 days after the notification described in paragraph
(1), the President, acting through the Secretary of State,
shall brief such committees on the status of such review.''.
(b) Arms Export Control Act.--Section 40 of the Arms Export Control
Act (22 U.S.C. 2780) is amended--
(1) in subsection (f)--
(A) in paragraph (1)(B)--
(i) in the matter preceding clause (i), by
striking ``45 days'' and inserting ``90 days'';
and
(ii) in clause (i), by striking ``6-month
period'' and inserting ``24-month period''; and
(B) in paragraph (2)--
(i) in subparagraph (A), by striking ``45
days'' and inserting ``90 days''; and
(ii) in subparagraph (B), by striking ``45-
day period'' and inserting ``90-day period'';
(2) in subsection (g), in the matter preceding paragraph
(1), by striking ``may waive'' and inserting ``may, on a case-
by-case basis, waive'';
(3) by redesignating subsection (l) as subsection (m); and
(4) by inserting after subsection (k) the following new
subsection:
``(l) Notification and Briefing.--Not later than--
``(1) ten days after initiating a review of the activities
of the government of the country concerned within the 24-month
period referred to in subsection (f)(1)(B)(i), the President,
acting through the Secretary of State, shall notify the
Committee on Foreign Affairs of the House of Representatives
and the Committee on Foreign Relations of the Senate of such
initiation; and
``(2) 20 days after the notification described in paragraph
(1), the President, acting through the Secretary of State,
shall brief such committees on the status of such review.''.
(c) Export Administration Act of 1979.--
(1) In general.--Section 6(j) of the Export Administration
Act of 1979 (50 U.S.C. App. 2405(j)), as continued in effect
under the International Emergency Economic Powers Act, is
amended--
(A) in paragraph (4)(B)--
(i) in the matter preceding clause (i), by
striking ``45 days'' and inserting ``90 days'';
and
(ii) in clause (i), by striking ``6-month
period'' and inserting ``24-month period'';
(B) by redesignating paragraphs (6) and (7) as
paragraphs (7) and (8), respectively; and
(C) by inserting after paragraph (4) the following
new paragraphs:
``(5) Disapproval of Rescission.--No rescission under paragraph
(4)(B) of a determination under paragraph (1)(A) with respect to the
government of a country may be made if the Congress, within 90 days
after receipt of a report under paragraph (4)(B), enacts a joint
resolution described in subsection (f)(2) of section 40 of the Arms
Export Control Act with respect to a rescission under subsection (f)(1)
of such section of a determination under subsection (d) of such section
with respect to the government of such country.
``(6) Notification and Briefing.--Not later than--
``(A) ten days after initiating a review of the activities
of the government of the country concerned within the 24-month
period referred to in paragraph (4)(B)(i), the President,
acting through the Secretary and the Secretary of State, shall
notify the Committee on Foreign Affairs of the House of
Representatives and the Committee on Foreign Relations of the
Senate of such initiation; and
``(B) 20 days after the notification described in paragraph
(1), the President, acting through the Secretary and the
Secretary of State, shall brief such committees on the status
of such review.''.
(2) Regulations.--The President shall amend the Export
Administration Regulations under subchapter C of chapter VII of
title 15, Code of Federal Regulations, to the extent necessary
and appropriate to carry out the amendment made by paragraph
(1). | State Sponsors of Terrorism Review Enhancement Act This bill amends the Foreign Assistance Act of 1961, the Arms Export Control Act, and the Export Administration Act of 1979, with respect to the rescission of a determination of a country as a state sponsor of terrorism, to require that the President has submitted to Congress a report justifying such rescission 90 days (currently 45 days) prior to the rescission taking effect, which certifies that the government concerned has not provided support for international terrorism during the preceding 24 months (currently 6 months). No such rescission under the Foreign Assistance Act of 1961 or the Export Administration Act of 1979 may be made if Congress, within 90 days after receipt of such a presidential report, enacts a joint resolution pursuant to the Arms Export Control Act prohibiting such rescission. |
the challenge in diagnosis is that there is no single definitive clinical symptom or sign .
initial treatment approach to the patients with suspected acute bacterial meningitis depends on rapid diagnostic evaluation and emergent antimicrobial and adjunctive therapy .
once there is a suspicion , lumbar puncture should be performed immediately to determine whether the cerebrospinal fluid ( csf ) findings are consistent with clinical diagnosis .
ankylosing spondylitis is a complex , potentially debilitating disease that is insidious in onset and progresses to radiological sacroiliitis over several years . in the advanced stage of the disease , the affected tissue is gradually replaced by fibrocartilage and then becomes ossified . in 1940 , taylor described a modified para - median lumbosacral approach through the l5-s1 space .
the l5-s1 space is least likely to be obliterated by pathological processes such as degeneration and excessive scarring . here
lumbar puncture was successfully performed with taylor 's approach after it failed with the conventional approach .
a 42-year - old gentleman , weighing around 50 kg , presented with the history of headache , fever ( up to 102 f ) , and altered level of consciousness of 1-day duration . on examination , he was confused and neck stiffness was present .
he was febrile and tachycardic . with the suspicion of bacterial meningitis , empiric antibiotic therapy with ceftriaxone 2 g was initiated . a recent radiograph of the lumbosacral spine revealed bilateral sacroiliitis , calcification of anterior and posterior longitudinal ligaments with syndesmophytes and bamboo spine [ figure 1 ] . local examination of the lumbar spine revealed loss of lumbar lordosis [ figure 2 ] .
multiple attempts for lumbar puncture in left lateral position at various levels ( l2 - 3 and l3 - 4 ) , with both midline and para - median approach were carried out by experienced anesthesiologists , but it failed .
after eight failed attempts , lumbar puncture was successfully performed with the taylor 's approach .
after infiltration with local anesthetic agent , 25 gauge quincke spinal needle was inserted at a point 1 cm medial and 1 cm caudal to the lowest prominence of posterior superior iliac spine , located immediately anterior to skin dimple [ figure 2 ] .
the needle was directed in a cephalo - medial direction towards the l5-s1 space and turbid csf was obtained in first attempt .
csf analysis was highly suggestive of bacterial meningitis and the culture report revealed streptococcus pneumoniae .
ceftriaxone was administered 2 g 12 hourly for 14 days and dexamethasone was continued 6 hourly for 4 days at the dose of 7.5 mg .
x - ray of lumbosacral spine showing bilateral sacroiliitis , calcification of anterior and posterior longitudinal ligaments with syndesmophytes and bamboo spine .
needle insertion point and direction is marked by black arrows patient lying in left lateral position showing loss of lumbar lordosis .
posterior superior iliac spine is marked by white arrow and the site of skin puncture for lumbar puncture by taylor 's apporach is marked by black arrow .
the outcome of the patients is improved by prompt antibiotic treatment . delay in antibiotic therapy worsens the prognosis : the odds of an unfavorable outcome may increase by up to 30% per hour of treatment delay .
after administration of antibiotics , the chance of a positive csf culture decreases with time , but it is likely to remain positive within 4 h. ankylosing spondylitis is a chronic rheumatic disease causing chronic inflammation , bone destruction and aberrant bone repair . in the late stage of the disease , lumbar puncture is technically challenging in these patients due to reduced articular mobility of the spine , obliteration of interspinous spaces , midline ossification of the interspinous ligament and difficulty in proper patient positioning .
taylor 's approach can provide a reliable alternative to midline approach for lumbar puncture by targeting the l5-s1 interlaminar space , which is the lowest and widest available space , which is least affected by arthritic and degenerative changes .
use of ultrasound for lumbar puncture has been shown to reduce the risk of failure as well as the number of needle insertions and redirections .
ultrasound guidance has been shown to be useful in obstetric and nonobstetric populations with difficult surface anatomic landmarks . however , it is operator dependent and is not routinely available in all places . in our case , taylor 's approach succeeded after the conventional approach had failed . lumbar puncture with taylor 's approach can be helpful for obtaining a csf sample for diagnostic evaluation in patients with deformity of the spine , when the conventional technique for lumbar puncture fails .
| meningitis and encephalitis are neurological emergencies .
as the clinical findings lack specificity , once suspected , cerebrospinal fluid ( csf ) analysis should be performed and parenteral antimicrobials should be administered without delay .
lumbar puncture can be technically challenging in patients with ankylosing spondylitis due to ossification of ligaments and obliteration of interspinous spaces . here , we present a case of ankylosing spondylitis where attempts for lumbar puncture by conventional approach failed .
csf sample was successfully obtained by taylor 's approach . |
Omaha police said three officers had no choice but to respond the way they did to an armed robbery at a fast-food restaurant Tuesday night that left the suspect and a television crew member dead.
At a news conference Wednesday, Omaha Police Chief Todd Schmaderer identified the armed robbery suspect as Cortez Washington, 32, a parolee from Missouri. He was struck by gunfire and pronounced dead at the Nebraska Medical Center.
One "Cops" television crew member accompanying responding officers was also struck by gunfire and later died. He was identified as Bryce Dion, 38.
"It was an officer's round that struck Mr. Dion," Schmaderer said.
Authorities said the incident began as an armed robbery at the Wendy's restaurant near 43rd and Dodge streets.
"He pointed (a gun) at me and told me to get all the money," said Roxanna Galloway, an assistant manager at Wendy's who witnessed the shooting.
Detective Darren Cunningham, who had just responded to a robbery at a nearby Little Caesar's Pizza, called for backup from the Wendy's around 9:20 p.m. Tuesday, police said.
Officers Brooks Riley and Jason Wilhelm, who were accompanied by a two-member crew with "Cops," immediately responded.
"I was typing in my numbers to get into the register and open it," Galloway said. "That's when the cops came inside."
Police said the three officers entered the restaurant, and three witnesses confirmed that Washington was holding a handgun and fired twice at Cunningham and Riley.
"After he ran outside and the cops ran after him, the cameraman from the 'Cops' show that was back in the lobby, he ended up coming up to the front. In between the doors was his partner," Galloway said.
Cunningham, Riley and Wilhelm returned fire, striking Washington. Washington was able to flee the restaurant, but he collapsed outside.
Dion was also struck by a single gunshot and collapsed in the east doorway as Washington fled.
"I am glad the cops came when they did," Galloway said. "(At) that point it was technically like I was a hostage."
Both Washington and Dion died from their injuries.
Schmaderer said the shooting was captured on video by the production crew, and it has been entered into evidence. Schmaderer said he has personally reviewed the video and determined "officers had no choice but to respond in the manner in which they did."
The chief would not specify the number of rounds fired by officers; however, he said he didn't feel it was excessive.
It was later determined the gun Washington had was an Airsoft pistol that fires plastic pellet bullets. Schmaderer said it looked like and functions like an actual firearm.
A grand jury will review the evidence and investigate the actions of the officers, Schmaderer said.
Washington, originally from Kansas City, Kansas, had previously served time for four criminal convictions, including drug possession, property damage and two counts of fleeing from police.
The three officers involved in the shooting have been placed on paid administrative leave, as is standard policy in officer-involved shootings.
Langley Productions, which produces the show, issued the following statement Wednesday:
"We are deeply saddened and shocked by this tragedy and our main concern is helping his family in any way we can. Bryce Dion was a long term member of the “COPS” team and very talented and dedicated person. We mourn his passing. An investigation is ongoing and we are cooperating with local authorities."
Dion is believed to be the first member of the "Cops" production staff killed in the 25-year history of the television show.
Dion is credited on the Internet Movie Database (IMDb) as a sound mixer on a variety of television series.
Shows he's worked on include "Extreme Weight Loss," "Undercover Stings," "Container Wars," "Trading Spaces" and "Real Vice Cops."
"Cops" has been filming in Omaha for much of the summer. At the time the filming was announced, Schmaderer said, "I am proud of the department and want the professionalism of our officers on display for the city and world to see."
Schmaderer described Dion as a friend to the officers that he'd been embedded with.
"This is as if we lost one of our own," Schmaderer said. "That is the grieving process we're going through right now."
Asked whether "Cops" would stop production in Omaha, the police chief said, "We haven't gotten that far." The investigation is ongoing, he said, and is relying in part on the "Cops" team's footage.
"Mr. Dion paid the ultimate price for his service -- to provide the footage of the real-life dangers that law enforcement officers face on a daily basis to television viewers throughout the world," Schmaderer said. ||||| OMAHA, Neb. (AP) — Police who opened fire while disrupting a robbery at a fast-food restaurant in Omaha struck and killed a crew member with the "Cops" television show and the suspect, who was carrying a pellet gun, authorities said Wednesday.
Crime lab technicians sweep a Wendy's in Omaha, Neb., for evidence, Wednesday, Aug. 27, 2014. Police say a suspect has been killed and a crew member with the "Cops" television show has been wounded in... (Associated Press)
In this Tuesday, Aug. 26, 2014 photo, Omaha police are on the scene of an officer-involved shooting in Omaha. A suspect was shot dead and a crewman with the "Cops" television show was wounded. (AP Photo/The... (Associated Press)
A crime lab technician carries a metal detector past bullet holes in the windows of a Wendy's in Omaha, Neb., Wednesday, Aug. 27, 2014. Police say a suspect has been killed and a crew member with the... (Associated Press)
A crime lab technician walks past bullet riddled windows at a Wendy's in Omaha, Neb., Wednesday, Aug. 27, 2014. Police say a suspect has been killed and a crew member with the "Cops" television show has... (Associated Press)
Police Chief Todd Schmaderer said witnesses and officers thought the robbery suspect's Airsoft handgun was real, but that it fires only plastic pellets. He said the suspect fired from the pellet gun before officers returned fire. The suspect was struck by gunfire, but fled outside of the restaurant before collapsing.
Officers continued firing on the suspect as he exited the restaurant, and that was when the "Cops" crew member was also struck, said Schmaderer.
Schmaderer said he believes the three officers involved acted properly during the attempted robbery Tuesday at a Wendy's in midtown Omaha.
Schmaderer said 38-year-old Bryce Dion who worked for Langley Productions and the suspect, 32-year-old Cortez Washington, were killed.
Schmaderer said video captured by another crewman of the "Cops" reality television show shows the chaotic situation in the restaurant. He said police would not release the video but that it will be part of the grand jury investigation into the shooting.
All three officers are on leave.
Langley Productions is based in Santa Monica, California.
"We are deeply saddened and shocked by this tragedy and our main concern is helping his family in any way we can," spokeswoman Pam Golum said. "Bryce Dion was a long term member of the 'COPS' team and very talented and dedicated person."
"Cops" is a reality TV show that depicts law enforcement officers in action. According to its website, the show has been filmed in at least 140 U.S. cities and three foreign counties. | – Another high-profile police shooting is in the news, this time because a crew member of the reality TV show Cops got killed while filming it. Omaha police also killed a robbery suspect at a Wendy's restaurant during last night's shooting, reports the Omaha World-Herald. It all unfolded as officers responded to a robbery in progress at the restaurant. Three officers entered the Wendy's and opened fire when the suspect fired his weapon at them, says the police chief. The suspect's weapon turned out to be a pellet gun. As the wounded suspect fled outside, the officers continued shooting and struck Cops crew member Bryce Dion, who was there filming, reports AP. "Officers had no choice but to respond in the manner in which they did," said chief Todd Schmaderer, who reviewed Cops' video of the shooting, reports KETV. Both Dion, 38, and suspect Cortez Washington, 32, died of their injuries at the hospital. The TV show has been filming in the city for most of the summer. "Bryce Dion was a long term member of the Cops team and very talented and dedicated person," says a statement from Langley Productions. "We mourn his passing." A grand jury will investigate the shooting. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Territorial Consultation and
Notification Act of 1994''.
SEC. 2. FINDINGS.
The Congress finds the following:
(1) Article IV, Section 3, Clause 2 of the Constitution,
also known as the territorial clause, grants Congress plenary
authority to provide for the governance of the United States
territories, including the determination of status.
(2) The President and all executive branch officials should
closely consult with Congress on territorial matters.
(3) Congress has the responsibility to promote the progress
of the people of the territories toward self-government
consistent with the principle of self-determination as defined
in the United Nations Charter, and this requires that the
Congress have regular and reliable information with respect to
the views of the voters in the territories on political status
issues.
(4) The majority view of the voters in the territories can
be acquired by Congress most effectively and directly through
periodic plebiscites which are recognized by the people as the
opportunity to freely express their wishes.
(5) Under Federal statutes approved by Congress, limited
self-government has been authorized for each of the United
States territories, and all persons born in the territories are
native born citizens of the United States pursuant to the law.
(6) The decade of the 1990s has been declared by the United
Nations as the ``Decade to Eradicate Colonialism''.
(7) In the November 4, 1993, plebiscite, a majority of
Puerto Rican voters for the first time voted against their
current status as a United States territory and supported
significant changes in the political and legal relationship
between the United States and Puerto Rico.
SEC. 3. REFERENDUMS ON TERRITORIAL STATUS.
(a) In General.--All territories of the United States shall conduct
referendums on the sentiments of their citizens regarding territorial
status at least every five years.
(b) Report of Results to Congress.--Within 30 days after the date
results of an election held under subsection (a) are certified, the
Governor of the territory concerned shall submit a report of such
results to the President and to the Speaker of the House of
Representatives and the President of the Senate, who shall refer the
report to the appropriate committees.
(c) Report by Appropriate Committees of Congress.--Within 180
calendar days after the report described in subsection (b) is referred,
each committee to whom the report is referred may submit a report to
the Speaker of the House of Representatives or the President of the
Senate, as the case may be, in which the results of the election are
evaluated and recommendations (if any) are made for changes to the laws
or policies of the United States.
(d) Implementation of Change in Status.--Within one year after a
vote under subsection (a) in which a change regarding the territorial
status has been approved, the President shall develop and report to the
committees of Congress specified in subsection (a) the plans of the
President for implementing the change in status.
SEC. 4. REPORT ON IMPACT OF POLICY AND REGULATORY MATTERS ON THE STATUS
OF UNITED STATES TERRITORIES.
The President shall submit annually to the Committee on Energy and
Natural Resources of the Senate and the Committee on Natural Resources
of the House of Representatives a report on all policy and regulatory
matters impacting the status of United States territories.
SEC. 5. NOTICE OF REGULATORY CHANGE AFFECTING THE STATUS OF UNITED
STATES TERRITORIES.
No regulation that affects the status of United States territories
may take effect until after 90 days after such regulation has been
submitted to the Committee on Energy and Natural Resources of the
Senate and the Committee on Natural Resources of the House of
Representatives.
SEC. 6. REPORT BY THE UNITED STATES REPRESENTATIVE TO THE UNITED
NATIONS ON MATTERS PERTAINING TO UNITED STATES
TERRITORIES.
Within 180 days after the date of enactment of this Act, the United
States Representative to the United Nations shall submit a report to
the Senate Committee on Foreign Relations and the House Committee on
Foreign Affairs. The report shall include the following:
(1) A description of any issues formally considered by the
United Nations during the past two years relating to the status
of United States territories.
(2) A description of any such issues that are expected to
receive formal consideration in the United Nations in the next
year.
SEC. 7. DEFINITION OF UNITED STATES TERRITORIES.
For the purposes of this Act, the term ``United States
territories'' means the Commonwealth of Puerto Rico, the Commonwealth
of the Northern Mariana Islands, American Samoa, Guam, and the Virgin
Islands. | Territorial Consultation and Notification Act of 1994 - Requires all U.S. territories to conduct referendums on the sentiments of their citizens regarding territorial status at least every five years. Provides for reports on the results of such referendums and requires the President, after a vote in which a change regarding territorial status has been approved, to report to specified congressional committees on plans for implementing such change.
Directs the President to report annually to specified congressional committees on all policy and regulatory matters affecting the status of U.S. territories. Bars a regulation that affects such status from taking effect until 90 days after the regulation has been submitted to such committees.
Requires the U.S. Representative to the United Nations to report to specified congressional committees on issues formally considered by the United Nations during the past two years relating to the status of U.S. territories and on any such issues that are expected to receive formal consideration in the next year.
null | monolayers of transition
metal dichalcogenides are interesting
materials for optoelectronic devices due to their direct electronic
band gaps in the visible spectral range . here , we grow single layers
of mos2 on au(111 ) and find that nanometer - sized patches
exhibit an electronic structure similar to their freestanding analogue .
we ascribe the electronic decoupling from the au substrate to the
incorporation of vacancy islands underneath the intact mos2 layer .
excitation of the patches by electrons from the tip of a
scanning tunneling microscope leads to luminescence of the mos2 junction and reflects the one - electron band structure of
the quasi - freestanding layer . |
the 89 week old male osborne - mendel ( om ) and s5b / pl ( s5b ) rats used in these studies were bred in pennington biomedical research center breeding colonies .
rats were individually housed in an aaalac approved animal facility on a 12/12h light / dark cycle ( lights on at 0700 ) with food and water available .
animals were given access to a pelleted high fat ( 55% kcal from fat)/low carbohydrate ( 21% kcal from carbohydrate ) diet or a pelleted low fat ( 10% kcal from fat)/high carbohydrate ( 66% kcal from carbohydrate ) ( 2527 ) .
all procedures were approved by the pennington biomedical research center institutional animal care and use committee .
om and s5b rats ( 89 weeks old ) were fed either high fat or low fat diet for 2 weeks prior to sacrifice ( sacrificed at 1011 weeks of age ) .
food intake was measured daily , body weight was measured weekly and an index of body fat was determined at the time of sacrifice by measurement of the retroperitoneal and epididymal fat pads ( ( fat pad weight ( g)/body weight ( g))*100 ) . for real - time pcr
a one inch section of the ileum was removed , thoroughly cleaned and the enterocytes were removed by gentle scraping with a clean metal spatula .
excised enterocytes were immediately frozen on dry ice and stored at -80c until further processing .
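the index of body fat defined above is a simple ratio ; a minimal sketch of the calculation ( the function and variable names are illustrative , not from the paper ) :

```python
def adiposity_index(retroperitoneal_g: float, epididymal_g: float, body_weight_g: float) -> float:
    # ((retroperitoneal + epididymal fat pad weight (g)) / body weight (g)) * 100
    return (retroperitoneal_g + epididymal_g) / body_weight_g * 100.0

# example: 8 g + 6 g of fat pads in a 400 g rat -> 3.5
print(adiposity_index(8.0, 6.0, 400.0))
```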
rna was isolated from enterocytes using tri - reagent ( molecular research ctr , cincinnati , oh usa ) and rneasy minikit procedures ( qiagen , valencia , ca usa ) and based on previous experiments ( 27 ) .
briefly , enterocytes were homogenized in tri - reagent using a motorized tissue homogenizer , chloroform was added to the lysate , and the mixture was centrifuged ( 12,000g ) in phase lock tubes to separate rna .
ethanol ( 70% ) was added to the upper aqueous phase , which was filtered by centrifugation ( 8000g ) . following multiple washes , rna was eluted .
reverse transcription ( rt ) was conducted using the high - capacity cdna reverse transcriptase kit ( applied biosystems , foster city , ca , usa ) . for rt , 2.0 µg of rna from each sample
was added to random primers ( 10x ) , dntp ( 25x ) , multiscribe reverse transcriptase ( 50 u/µl ) and rt buffer ( 10x ) and incubated in a thermal cycler ( ptc-100 , mj research , inc , watertown , ma , usa ) for 10 min at 27c , then for 120 min at 37c .
taqman gene expression assays ( applied biosystems ) were used to assess levels of preproglucagon and the housekeeping gene , 18s . for real - time pcr , taqman universal pcr master mix ( applied biosystems ) , gene expression assay , and rt product ( 10ng ) were added to a 384 well plate .
the cycling parameters consisted of an initial 2 min incubation at 50 c , followed by 10 min at 95 c , then 15 sec at 95 c , and a 1 min annealing / extension step at 60 c ( 40 cycles ) .
the quantity of prepro - glucagon mrna levels were based on a standard curve and normalized to 18s levels ( abi prism 7900 sequence detection system , applied biosystems ) .
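a minimal sketch of the standard - curve quantification and 18s normalization described above ( the ct values and dilution series below are invented for illustration , not the study 's data ) :

```python
import numpy as np

# standard curve: ct is linear in log10(input quantity) across a serial dilution series
std_log_qty = np.array([2.0, 1.0, 0.0, -1.0])    # log10(quantity) of the standards
std_ct = np.array([18.0, 21.4, 24.8, 28.2])      # ct measured for each standard
slope, intercept = np.polyfit(std_ct, std_log_qty, 1)

def quantity_from_ct(ct: float) -> float:
    return 10 ** (slope * ct + intercept)

# normalize the target (preproglucagon) to the 18s housekeeping gene for one sample
preproglucagon = quantity_from_ct(26.0)
s18 = quantity_from_ct(12.0)
print(preproglucagon / s18)  # relative preproglucagon mrna level
```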
om and s5b rats ( 89 weeks old ) were fed a high fat diet for 2 weeks prior to sacrifice .
all rats were subjected to a 24h fast . at the end of the 24h fast , half of the rats were given access to high fat diet for 2h , and at the end of this 2h period all animals were sacrificed .
animals were extensively handled during this experiment to habituate them to the experimental procedures . at the time of sacrifice , trunk blood was collected to ensure adequate amounts of plasma for future assays .
dpp - iv inhibitor was added to each sample ( 10 µl dpp - iv inhibitor per 1 ml blood ) , and samples were placed on ice .
blood was centrifuged at 3000 rpm for 10 minutes at 4c and plasma was removed from tubes and stored at -80c until processing . a glp-1 ( active , 7 - 36 amide ) elisa kit ( linco research , st . charles , mo , usa ) was used to assess plasma glp-1 in this experiment .
the glp-1 assay was conducted as described in the protocol provided with the elisa kit . briefly , on day 1 , plasma and buffer were added to the elisa plate and the plate was incubated overnight . on day 2 , following a series of washes , detection conjugate and substrate were added to each well as described in the protocol and the plate was read on a fluorescence plate reader with an excitation / emission wavelength of 355nm/460 nm .
individual values were determined from a standard curve based on standards provided by the manufacturer .
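sample concentrations are read off the kit 's standard curve ; a hedged sketch of that step using a four - parameter logistic fit ( the standard concentrations and fluorescence readings below are placeholders , not kit values ) :

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration, d: response at saturation,
    # c: inflection point, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # glp-1 standards (pM), illustrative
std_rfu = np.array([120.0, 260.0, 480.0, 850.0, 1600.0, 2300.0])
params, _ = curve_fit(four_pl, std_conc, std_rfu, p0=[50.0, 1.0, 20.0, 3000.0], maxfev=10000)

def conc_from_rfu(rfu, a, b, c, d):
    # invert the 4pl curve to convert a sample fluorescence reading into a concentration
    return c * (((a - d) / (rfu - d)) - 1.0) ** (1.0 / b)

print(conc_from_rfu(700.0, *params))
```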
om and s5b rats ( 89 weeks old ) were fed either the high fat or the low fat diet for 4 weeks prior to testing . during this period ,
all animals were habituated to periods of fasting , in which food was removed for 24 hours .
animals were also habituated to intraperitoneal injection procedures and during habituation , received an intraperitoneal injection of saline ten minutes prior to the return of food .
animals were tested using a latin square design in which each rat received each dose of ex- 4 .
this design was used to control for any carryover effects from the drug . prior to testing ,
rats were subjected to a 24h fast immediately prior to administration of varying doses of ex-4 ( 1 µg / kg , 5 µg / kg , 10 µg / kg ; sigma - aldrich ) or saline .
ten minutes following ex-4 administration , fresh food ( high fat or low fat ) was returned to the rat .
food intake was measured at 1h , 2h , 4h , and 24h following refeeding .
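the latin square dosing described above can be generated with a simple cyclic shift , so that every rat receives every dose and each dose appears once per test session ( the dose labels below follow the doses listed above ) :

```python
doses = ["saline", "1 ug/kg", "5 ug/kg", "10 ug/kg"]

# cyclic latin square: row i is the dose sequence rotated by i positions
latin_square = [doses[i:] + doses[:i] for i in range(len(doses))]

for group, order in enumerate(latin_square, start=1):
    print(f"group {group}: " + " -> ".join(order))
# each column (test session) also contains every dose exactly once,
# which balances simple carryover effects across the design
```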
in experiment 1 , a mixed anova was conducted to assess differences in daily food intake ( kcal ) and weekly body weight changes ( strain x diet x time ) .
a between subjects anova ( strain x diet ) was used to compare the index of body adiposity ( ( retroperitoneal + epididymal fat pad weight ( g)/body weight ) * 100 ) and preproglucagon mrna levels in the enterocytes of the ileum in high fat and low fat fed om and s5b rats .
bonferroni post - hoc tests were used to assess differences between individual groups . in experiment 2 ,
individual sample values were analyzed by a between subjects anova ( strain x nutritional status ) . in experiment 3 , high fat food intake ( kcal ) and low fat food intake ( kcal ) following ex-4 administration were analyzed by a mixed anova ( time x ex-4 concentration x strain ) .
an additional mixed anova was conducted for each strain ( time x ex-4 concentration ) .
bonferroni post - hoc tests were used to compare saline to each dose of ex-4 , when a significant main effect or interaction was detected . a significance level of p<.05 was used for all analyses .
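a minimal sketch of the between - subjects strain x diet anova on the adiposity index ( statsmodels formula api ; the data frame below is a placeholder , not the study 's data ) :

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# one row per rat: strain (om / s5b), diet (high_fat / low_fat), adiposity index
df = pd.DataFrame({
    "strain": ["om", "om", "s5b", "s5b"] * 5,
    "diet": ["high_fat", "low_fat"] * 10,
    "adiposity": [5.1, 3.2, 2.4, 2.1, 5.4, 3.0, 2.6, 2.0, 4.9, 3.3,
                  2.5, 2.2, 5.2, 3.1, 2.3, 2.0, 5.0, 3.4, 2.7, 2.1],
})

model = smf.ols("adiposity ~ C(strain) * C(diet)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the strain x diet interaction
```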
food intake was measured daily for 2 weeks prior to sacrifice . a strain x day x diet interaction ( f(13,442)=2.04 , p<.02 , figure 1 ) and a strain x diet interaction ( f(1,34)=24.79 , p<.0001 ) were detected .
post - hoc analyses revealed that om rats consumed significantly more high fat diet ( kcal ) than low fat diet ( kcal ) throughout the experiment ( p<.05 ) .
s5b rats consumed significantly more high fat diet than low fat diet on most days ( days 1 , 2 , 3 , 4 , 7 , 9 , 10 , 11 , 12 , and 14 ; p<.05 ) .
om rats consumed more high fat diet than s5b rats throughout the experiment , except for days 1 and 14 , while om rats consumed more low fat diet than s5b rats on days 1 , 10 and 12 ( p<.05 ) .
main effects for strain and diet for week 1 were detected ( f(1,34)=82.31 , p<.0001 ; f(1,34)=43.90 , p<.0001 , respectively ; see figure 2a ) and a strain x diet interaction was detected for week 2 ( f(1,34)=20.28 , p<.0001 ) .
om rats consuming high fat diet gained more weight than om rats consuming low fat diet and s5b rats consuming high fat diet gained more weight than s5b rats consuming low fat diet ( p<.05 ) .
an index of body fat was determined at the time of sacrifice by weighing the retroperitoneal and epididymal fat pads .
a strain x diet interaction was detected for body fat index in these animals ( f(1,34)=21.95 , p<.0001 ; see figure 2b ) .
post - hoc tests revealed that om rats had a higher index of body fat than s5b rats and om rats consuming a high fat diet had a higher index of body fat than om rats consuming a low fat diet ( p<.05 ) .
preproglucagon mrna levels were measured in ileal enterocytes of om and s5b rats following 2 weeks access to either a high fat or a low fat diet .
a between - subjects anova revealed a main effect for strain ( f(1,35 ) = 16.07 , p < .0005 ; see figure 3 ) .
post - hoc analyses indicated significantly higher levels of preproglucagon mrna in the ileal enterocytes of om rats fed a low fat diet compared to s5b rats fed a low fat diet ( p<.05 ) .
additionally , the consumption of a high fat diet increased preproglucagon mrna levels in both om and s5b rats ( p<.05 ) .
there was a 20.5 ± 0.15% vs. 93.3 ± 0.25% ( mean ± sem ) increase in preproglucagon mrna levels in high fat fed om and s5b rats compared to low fat fed om and s5b rats , respectively ( t(17)=2.39 , p<.05 ) . circulating active glp-1 levels in om and s5b rats
were measured following either a 24h fast or a 24h fast followed by 2h access to a high fat diet .
as expected , a significant main effect was detected for nutritional status ( f(1,13)=18.5 , p<.001 ; see figure 4 ) .
post - hoc analyses revealed that refeeding for 2h with a high fat diet increased circulating active glp-1 levels in om and s5b rats ( p<.05 ) .
no differences were detected between the response to the high fat meal in om and s5b rats .
food intake was measured following ex-4 administration in om and s5b rats fed either a high fat or a low fat diet .
several significant interaction effects were detected between time following injection , dose of ex-4 and rat strain ( time x ex-4 x strain , f(9,147 ) = 4.38 , p<.0001 ; time x ex-4 , f(9,147 ) = 3.94 , p < .0001 ; time x strain , f(3,147 ) = 3.27 , p<.02 ; ex-4 x strain , f(3,49 ) = 11.18 , p<.00001 ) .
post - hoc tests revealed that 1h , 2h and 4h following injection , obesity - resistant s5b rats receiving 1 µg / kg , 5 µg / kg and 10 µg / kg ex-4 ate significantly less high fat and low fat diet than saline - treated control s5b rats ( p<.05 ; see figure 5a ) . additionally , post - hoc analyses revealed decreases in low fat food intake in obesity - prone om rats at 5 µg / kg and 10 µg / kg of ex-4 ( p<.05 ; see figure 5b ) compared to saline - treated control om rats at 1h , 2h and 4h .
high fat food intake was decreased in om rats receiving 5 µg / kg and 10 µg / kg ex-4 at 1h following administration and in om rats receiving 10 µg / kg ex-4 at 2h following administration ( p<.05 ; see figure 5b ) .
high fat and low fat food intake were also measured at 24h following administration of ex-4 .
s5b rats ate less low fat diet at 24h when administered 1 µg / kg , 5 µg / kg and 10 µg / kg ex-4 compared to saline .
high fat diet intake in s5b rats was reduced at 24h following administration of 5 µg / kg and 10 µg / kg ex-4 ( see figure 6a , p<.05 ) .
twenty - four hour intake of low fat diet was decreased in om rats following administration of 5 µg / kg and 10 µg / kg ex-4 .
however , in high fat fed rats , only 10 µg / kg ex-4 decreased high fat food intake in om rats compared to saline - injected controls ( see figure 6b , p<.05 ) .
om rats are prone to diet - induced obesity , whereas s5b rats are resistant to diet - induced obesity .
therefore , these rat strains have been used to examine individual differences in the response to a high fat diet . when given ad libitum access to a high fat diet , om rats consume more high fat diet , gain more weight , and gain more body adiposity than s5b rats ( 18 ) . in the current experiments , we were particularly interested in the individual differences in the response to dietary fat by the gi tract in om and s5b rats .
previous studies indicate that om rats are less responsive to the satiating effects of intraduodenal infusions of sodium linoleate and intralipid than s5b rats ( 2 ) .
this suggests that the satiation response to fatty acids is blunted in obesity - prone om rats .
one possible mechanism mediating the decreased satiation response to fatty acids in om rats is the hormone , glp-1 .
the current experiments were conducted to assess the role of glp-1 in the intake of a high fat diet in an established animal model of obesity . in experiment 1 ,
om and s5b rats were fed either a high fat diet ( 55% kcal from fat ) or a low fat diet ( 10% kcal from fat ) for 14 days .
daily food intake , weekly body weight and an index of body adiposity were assessed . as expected om rats consumed more high fat diet , than low fat diet , and om rats consumed more high fat diet than s5b rats ( see figure 1 ) .
om rats given access to a high fat diet were hyperphagic throughout the experiment . as shown in figure 2a
, om rats fed the high fat diet gained more weight than om rats fed the low fat diet ( 73.9 ± 2.8 g vs. 45.5 ± 2.4 g , mean ± sem , respectively ) .
s5b rats fed the high fat diet gained more weight than s5b rats fed the low fat diet ( 40.0 ± 1.9 g vs. 31.6 ± 1.7 g , mean ± sem , respectively ) . based on our index of body fat ,
om rats had a higher level of body fat than s5b rats ( see figure 2b ) , which was exaggerated by consumption of the high fat diet . using real - time pcr , preproglucagon mrna levels in the ileum
were measured in om and s5b rats fed the low fat or high fat diets . based on the previous report ( 2 ) that om rats exhibited a decreased satiety response to intraduodenal infusions of fatty acids , we expected preproglucagon mrna levels to be similar or even lower in om than s5b rats and that these two strains would exhibit a differential response to the high fat diet .
our data suggest that preproglucagon mrna levels in the ileal enterocytes were higher in om rats compared to s5b rats and that high fat diet increased preproglucagon mrna expression in both om and s5b rats ( see figure 3 ) .
the relative increase in preproglucagon expression in response to the high fat diet differed between strains .
s5b rats exhibited a greater increase in preproglucagon expression in response to high fat diet ( 93.3 ± 0.25% , mean ± sem ) than om rats ( 20.5 ± 0.15% , mean ± sem ) .
one possible explanation for these results is that the precursor protein , preproglucagon , can be processed into several different biological peptides including , glp-1 , glp-2 , glucagon , glucose - dependent insulinotropic peptide ( gip ) , glicentin - related pancreatic peptide ( grpp ) and glicentin , which is later cleaved into oxyntomodulin and grpp ( 2830 ) .
the enzymes , prohormone convertase 1 and 2 , cleave proglucagon into different products depending on the tissue ( 31 ) . in the pancreas
, glucagon is the major product and in the brain and intestine , glp-1 , glp-2 and oxyntomodulin are the major products ( 32 ) .
therefore , our data suggest that the precursor for glp-1 , glp-2 and oxyntomodulin is elevated in om rats and is increased to differing degrees by a high fat diet in om and s5b rats .
experiment 2 was conducted to determine if circulating levels of glp-1 differed between om and s5b rats fed a high fat diet and to determine if the meal - initiated release of glp-1 differed between the two strains .
glp-1 levels are increased within minutes of consuming a meal , remain elevated for 3 hours and provide a negative feedback signal to the brain , which leads to meal termination .
our data suggest that om and s5b rats exhibit a similar meal - initiated increase in circulating glp-1 following 2h access to a high fat diet ( see figure 4 ) .
circulating glp-1 binds to glp-1 receptors in the central nervous system and the peripheral nervous system ( 9;15;2123 ) .
the data from experiments 1 and 2 , suggest that obesity - prone om rats do not exhibit deficits in preproglucagon mrna expression or in meal - initiated glp-1 release , compared to obesity - resistant s5b rats .
therefore , it is possible that a dysregulation of central and peripheral glp-1 receptors mediates the decreased satiation response to fatty acids that has been found in om rats , compared to s5b rats . to begin to examine this possibility , we conducted experiment 3 . in experiment 3 , exendin-4 ( ex-4 ) , a glp-1 receptor agonist ,
was administered to 24h fasted om and s5b rats fed either a high fat or a low fat diet . it was hypothesized that om rats would be less sensitive to the satiating effects of ex-4 .
the data generated in experiment 3 suggest that ex-4 administration dose - dependently decreased high fat and low fat food intake in fasted s5b rats for up to 24h following administration ( see figure 5a ) .
as hypothesized , om rats were less sensitive to the satiating effects of ex-4 than s5b rats .
ex-4 administration transiently decreased high fat food intake in fasted om rats ( see figure 5b ) . unlike s5b rats ,
only the two highest doses of ex-4 had any effect on either low fat or high fat food intake in om rats .
at the two highest doses of ex-4 , s5b rats had an almost complete suppression of food intake .
the current findings that obesity - prone om rats exhibit higher levels of preproglucagon mrna in the ileal enterocytes than s5b rats , exhibit a similar release of glp-1 following a meal as s5b rats , but are less sensitive to the satiating effects of the glp-1 receptor agonist , ex-4 , than s5b rats , suggest that dysregulation of glp-1 receptors may mediate the decreased satiation response to fatty acids in om rats .
this decreased satiation response would then lead to the increased consumption of fatty acids / high fat diets , which subsequently leads to obesity in obesity - prone om rats .
therefore , deficits in glp-1 signaling are likely a mechanism by which om rats become obese and a mechanism that requires further investigation . | background : osborne - mendel ( om ) rats are prone to obesity when fed a high fat diet , while s5b / pl ( s5b ) rats are resistant to diet - induced obesity when fed the same diet .
om rats have a decreased satiation response to fatty acids infused in the gastrointestinal tract , compared to s5b rats .
one possible explanation is that om rats are less sensitive to the satiating hormone , glucagon - like peptide 1 ( glp-1 ) .
glp-1 is produced in the small intestine and is released in response to a meal .
the current experiments examined the role of glp-1 in om and s5b rats . methods : experiment 1 examined preproglucagon mrna expression in the ileum of om and s5b rats fed a high fat ( 55% kcal ) or low fat ( 10% kcal ) diet .
experiment 2 investigated the effects of a 2h high fat meal following a 24h fast in om and s5b rats on circulating glp-1 ( active ) levels .
experiment 3 examined the effects of exendin-4 ( glp-1 receptor agonist ) administration on the intake of a high fat or a low fat diet in om and s5b rats . results : preproglucagon mrna levels were increased in the ileum of om rats compared to s5b rats and were increased by high fat diet in om and s5b rats .
om and s5b rats exhibited a similar meal - initiated increase in circulating glp-1 ( active ) levels .
exendin-4 dose - dependently decreased food intake to a greater extent in s5b rats , compared to om rats .
the intake of low fat diet , compared to the intake of high fat diet , was more sensitive to the effects of exendin-4 in these strains . conclusions : these results suggest that though om and s5b rats have similar preproglucagon mrna expression in the ileum and circulating glp-1 levels , om rats are less sensitive to the satiating effects of glp-1 .
therefore , dysregulation of the glp-1 system may be a mechanism through which om rats overeat and gain weight . |
Media and the left going crazy over Ted Nugent comments Wednesday, Apr 18, 2012 at 3:31 PM EDT
Did Ted Nugent really threaten the life of the President or was he just trying to galvanize the crowd at the NRA to vote and encourage other conservatives to vote? Glenn, a friend of Nugent, offered his opinion on radio this morning and wondered why no one had come to his defense. After all, should the Secret Service be investigating Nuge when real threats are being made by radicals like Louis Farrakhan? Watch the clip from radio this morning to get Glenn’s reaction to the controversy – and don’t miss his interview with Nugent HERE! ||||| Ted Nugent said he will meet with the Secret Service on Thursday to discuss his controversial comments about President Barack Obama.
"I will be as polite and supportive as I possibly can be, which will be thoroughly," Nugent told Glenn Beck on Wednesday.
During a National Rifle Association convention last weekend, Nugent said, "If Barack Obama becomes the president in November, I will either be dead or in jail by this time next year."
The U.S. Secret Service said on Tuesday that it was aware of Nugent's comments and would investigate.
"The bottom line is I've never threatened anybody's life in my life," Nugent said on Beck's radio show. "I don't threaten, I don't waste breath threatening. I just conduct myself as a dedicated 'We the people' activist because I've saluted too many flag-draped coffins to not appreciate where the freedom comes from."
The gun-loving "Cat Scratch Fever" singer has not apologized for the incendiary talk but added: "I'm not trying to diminish the seriousness of this, because if the Secret Service are doing it they are serious."
On Wednesday, a defiant Nugent sounded off on the backlash.
"This is the Saul Alinsky 'Rules for Radicals' playbook," Nugent said Wednesday on a CNN radio show. "The Nazis and the Klan hate me. I'm a black Jew at a Nazi Klan rally. There are some power-abusing, corrupt monsters in our federal government who despise me because I have the audacity to speak the truth--to identify the violations of our federal government--in particular Eric Holder, the President and Tim Geithner."
| – As if the Secret Service didn't have enough to deal with just now, agents are meeting today with shoot-from-the-lip rocker Ted Nugent. They're concerned about his comments that he'll be "dead or in jail" if President Obama wins reelection, apparently indicating some kind of violence. During the meeting he intends to be as "polite and supportive as I possibly can be, which will be thoroughly," Nugent told Glenn Beck yesterday. "The bottom line is I've never threatened anybody's life in my life," Nugent added. "I don't threaten, I don't waste breath threatening. I just conduct myself as a dedicated 'We the people' activist." Nugent hasn't apologized for the apparently threatening comments, made at a National Rifle Association rally. But he is taking the Secret Service concern seriously. "I'm not trying to diminish the seriousness of this, because if the Secret Service are doing it they are serious," he said.
CHENNAI: Scientists have tracked down a drug-resistant superbug that infects patients and causes multiple organ failure to Indian hospitals, but doctors here see in it the germ of a move to damage the country's booming medical tourism industry. The 'superbug', resistant to almost all known antibiotics, has been found in UK patients treated in Indian hospitals. Named after the Indian capital, it is a gene carried by bacteria that causes gastric problems, enters the blood stream and may cause multiple organ failure leading to death. "India also provides cosmetic surgery for Europeans and Americans, and it is likely the bacteria will spread worldwide," scientists reported in The Lancet Infectious Diseases journal on Wednesday.

While the study has the medical world turning its focus on infection control policies in Indian hospitals, the Indian Council of Medical Research has alleged a bias in the report and said it is an attempt to hurt medical tourism in the country that is taking away huge custom from hospitals in the West. "Such infections can flow in from any part of the world. It's unfair to say it originated from India," said ICMR director Dr VM Katoch. Katoch has reasons to fume, as the superbug NDM-1 (New Delhi metallo-beta-lactamase) is named after the national capital, where a Swedish patient was reportedly infected after undergoing a surgery in 2008. Since then there have been several cases reported in the UK, and in 2009 the health protection agency in the UK issued an alert on the 'gram negative' bacterial infection that is resistant to even the most powerful and reserved class of antibiotics called carbapenems.

In a joint study led by Chennai-based Karthikeyan Kumarasamy, pursuing his PhD at University of Madras, and UK-based Timothy Walsh from the department of immunity, infection and biochemistry, department of medicine, Cardiff University, researchers sought to examine whether NDM-1 producing bacteria were prevalent in South Asia and Britain. "We saw them in most of the hospitals in Chennai and Haryana. We estimate that the prevalence of this infection would be as high as 1.5%," Kumarasamy told TOI. "We found the superbug in 44 patients in Chennai, and 26 in Haryana, besides 37 in the UK and 73 in other places across India, Pakistan and Bangladesh," he said. What makes the superbug more dangerous is its ability to jump across different bacterial species. So far, it has been found in two commonly seen bacteria, E coli and K pneumoniae. "We have found that the superbug has the potential to get copied and transferred between bacteria, allowing it to spread rapidly. If it spreads to an already hard-to-treat bacterial infection, it can turn more dangerous," Kumarasamy said.

Senior doctors working in infection control said India lacks policies on antibiotics, infection control and registries for hospital-acquired infections. By the ICMR director's own admission, India cannot scientifically fight back allegations of being the source of such superbugs, as the country does not have a registry of such hospital-acquired infections. "Two in every five patients admitted to hospitals acquire infections. This extends the patient's stay in the hospital, increases the expenses and causes side-effects," said Dr Dilip Mathai, head of the department of internal medicine, Christian Medical College, Vellore. For a long time, India has been seeing Extended Spectrum Beta-Lactamases (ESBL), which are enzymes that have developed a resistance to antibiotics like penicillin.
ESBL enzymes are most commonly produced by two bacteria - E coli and K pneumoniae, the two bacteria in which the new superbug has been found. "These were treated by a reserved class of antibiotics called carbapenems. We have seen at least 3% of people infected with this do not react to these reserved drugs," he said. Public health experts say globalisation has allowed bacteria to spread rapidly across the world and India, as a medical hub, should be geared for the challenge. Katoch, who is also the secretary, department of medical research, agrees. "At present, we don't have any system in place. There are neither rules for hospitals nor a registry to record hospital-acquired infections. We are now in the process of forming a cell that will activate a registry and issue guidelines for an integrated surveillance system," he said. ||||| Image caption: NDM-1 has been found in E.coli bacteria
A new superbug that is resistant to even the most powerful antibiotics has entered UK hospitals, experts warn.
They say bacteria that make an enzyme called NDM-1 have travelled back with NHS patients who went abroad to countries like India and Pakistan for treatments such as cosmetic surgery.
Although there have only been about 50 cases identified in the UK so far, scientists fear it will go global.
Tight surveillance and new drugs are needed, says Lancet Infectious Diseases.
NDM-1 can exist inside different bacteria, like E.coli, and it makes them resistant to one of the most powerful groups of antibiotics - carbapenems.
These are generally reserved for use in emergencies and to combat hard-to-treat infections caused by other multi-resistant bacteria.
And experts fear NDM-1 could now jump to other strains of bacteria that are already resistant to many other antibiotics.
Ultimately, this could produce dangerous infections that would spread rapidly from person to person and be almost impossible to treat.
At least one of the NDM-1 infections the researchers analysed was resistant to all known antibiotics.
Similar infections have been seen in the US, Canada, Australia and the Netherlands and international researchers say that NDM-1 could become a major global health problem.
Infections have already been passed from patient to patient in UK hospitals.
The way to stop NDM-1, say researchers, is to rapidly identify and isolate any hospital patients who are infected.
Normal infection control measures, such as disinfecting hospital equipment and doctors and nurses washing their hands with antibacterial soap, can stop the spread.
And currently, most of the bacteria carrying NDM-1 have been treatable using a combination of different antibiotics.
Analysis: The Indian health ministry and the medical fraternity are yet to see the Lancet report but doctors in India say they are not surprised by the discovery of the new superbug. "There is little drug control in India and an irrational use of antibiotics," Delhi-based Dr Arti Vashisth told the BBC. Doctors say common antibiotics have become ineffective in India partly because people can buy them over the counter and indulge in self-medication. They also take small doses and discontinue treatment. Gastroenterologist Vishnu Chandra Agarwal says in the past year he has come across many patients with E.coli infections who have not responded to regular antibiotics. "In about a dozen cases, I have used a chemical - furadantin - to treat my patients. And it has worked. It makes them horribly nauseous, but it works," he says.
But the potential of NDM-1 to become endemic worldwide is "clear and frightening", say the researchers in The Lancet Infectious Diseases paper.
The research was carried out by experts at Cardiff University, the Health Protection Agency and international colleagues.
Dr David Livermore, one of the researchers and who works for the UK's Health Protection Agency (HPA), said: "There have been a number of small clusters within the UK, but far and away the greater number of cases appear to be associated with travel and hospital treatment in the Indian subcontinent.
"This type of resistance has become quite widespread there.
"The fear would be that it gets into a strain of bacteria that is very good at being transmitted between patients."
He said the threat was a serious global public health problem as there are few suitable new antibiotics in development and none that are effective against NDM-1.
The Department of Health has already put out an alert on the issue, he said.
"We issue these alerts very sparingly when we see new and disturbing resistance."
Travel history
The National Resistance Alert came in 2009 after the HPA noted an increasing number of cases - some fatal - emerging in the UK.
The Lancet study looked back at some of the NDM-1 cases referred to the HPA up to 2009 from hospitals scattered across the UK.
At least 17 of the 37 patients they studied had a history of travelling to India or Pakistan within the past year, and 14 of them had been admitted to a hospital in these countries - many for cosmetic surgery.
For some of the patients the infection was mild, while others were seriously ill, and some had blood poisoning.
A Department of Health spokeswoman said: "We are working with the HPA on this issue.
"Hospitals need to ensure they continue to provide good infection control to prevent any spread, consider whether patients have recently been treated abroad and send samples to HPA for testing.
"So far there has only been a small number of cases in UK hospital patients. The HPA is continuing to monitor the situation and we are investigating ways of encouraging the development of new antibiotics with our European colleagues."
The Welsh Assembly Government said it would be "fully considering" the report.
"The NHS in Wales is used to dealing with multi-resistant bacteria using standard microbiological approaches, and would deal with any new bacteria in a similar way," said a spokesperson. | – A new superbug has emerged in UK hospitals, probably brought back to the country by people who traveled to India or Pakistan for cosmetic surgery, the BBC reports. About 50 cases have been cataloged in the UK, but researchers fear the newly christened NDM-1 (named after New Delhi, notes the Times of India, which quotes Indian officials not at all happy about it) could evolve into a global heath problem. Scattered cases have been reported in the US, Canada, and Australia as well. "The fear would be that it gets into a strain of bacteria that is very good at being transmitted between patients," and go global, says a doctor with the UK's Health Protection Agency. NDM-1 is especially scary to doctors because it confers resistance to the strongest-known group of antibiotics. The bad news comes along with good news on a related front: The CDC says the superbug known as MRSA appears to be retreating, reports the Washington Post. |
progeria is a genetic disorder rarely encountered and is characterized by features of premature aging .
it is also known as " hutchinson - gilford progeria syndrome " ( 1 ) .
although signs and symptoms vary in age of onset and severity , they are remarkably consistent overall . children with hgps usually appear normal at birth .
a characteristic facies, with receding mandible, narrow nasal bridge and pointed nasal tip, develops .
death occurs as a result of complications of severe atherosclerosis , either cardiac disease ( myocardial infarction ) or cerebrovascular disease ( stroke ) , generally between ages 6 and 20 yr .
we report here a 4-yr-old boy with apparently typical hutchinson-gilford progeria syndrome carrying the g608g lmna mutation .
a 4-yr - old boy was referred to the department of pediatrics with short stature and sclerodermatous skin on 2 september 2010 .
he was the first child born to non - consanguinous parents with no significant family history .
he was born at full term with a birth weight of 3.35 kg by spontaneous vaginal delivery .
growth retardation, hair loss, and alteration of skin color on the abdominal region began at about 1 yr of age. on physical examination, his length was 88 cm and his weight 11.5 kg, both less than the 3rd percentile, and his head circumference was 52 cm. he had generalized indurated and shiny skin associated with decreased subcutaneous fat, especially on the abdomen. his hair was fine and sparse and his scalp veins were easily visible. his anterior fontanelle was still patent (horizontal diameter 1.7 cm, vertical diameter 1.4 cm). he had craniofacial disproportion for his age due to micrognathia, prominent eyes, scant eyelashes and a small nose. his bone age was 3 yr. he was neurologically intact with normal motor and mental development. on serological and hormonal evaluation, all values were within the normal range.
an echocardiogram did not show concentric left ventricular hypertrophy or increased left ventricular pressure, but it showed calcification of the aortic and mitral valves (fig. 2). hypertrophy of the internal layer of the internal carotid artery, suggesting atherosclerosis, was found by carotid doppler sonography (fig. 2). he is on low-dose aspirin to prevent thromboembolic episodes and on regular follow-up.
we obtained his dna from white blood cells in peripheral blood, and sequencing was performed as follows: samples were extracted using a mg tissue kit. the pcr reaction was performed with 20 ng of genomic dna as the template in a 30 µl reaction mixture using ef-taq (solgent, korea) as follows: activation of taq polymerase at 95°c for 2 min, then 35 cycles of 95°c for 1 min, 55-63°c, and 72°c for 1 min each, finishing with a 10 min step at 72°c. the amplification products were purified with a multiscreen filter plate (millipore corp., bedford, ma, usa). the dna samples containing the extension products were added to hi-di formamide (applied biosystems, foster city, ca, usa). the mixture was incubated at 95°c for 5 min, followed by 5 min on ice, and then analyzed on an abi prism 3730xl dna analyzer (applied biosystems, foster city, ca, usa).
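for orientation only, the cycling programme described above can be written out as a simple data structure; the short python sketch below is an editorial illustration, not part of the study's methods, and the 1-minute annealing hold is an assumption since the text gives only the annealing range (55-63°c).

```python
# Illustrative transcription of the cycling programme described in the text.
# The annealing hold time is an assumption; everything else follows the text.
pcr_program = {
    "initial_denaturation": {"temp_c": 95, "minutes": 2},
    "cycles": 35,
    "per_cycle": [
        {"step": "denaturation", "temp_c": 95, "minutes": 1},
        {"step": "annealing", "temp_c": (55, 63), "minutes": 1},  # assumed hold time
        {"step": "extension", "temp_c": 72, "minutes": 1},
    ],
    "final_extension": {"temp_c": 72, "minutes": 10},
}

def total_minutes(program: dict) -> float:
    """Rough lower bound on block time, ignoring ramping between temperatures."""
    per_cycle = sum(step["minutes"] for step in program["per_cycle"])
    return (program["initial_denaturation"]["minutes"]
            + program["cycles"] * per_cycle
            + program["final_extension"]["minutes"])

print(f"approximate cycling time: {total_minutes(pcr_program):.0f} min")  # ~117 min
```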
gene study showed the typical g608g (ggc->ggt) point mutation at exon 11 of the lmna gene. we planned to carry out gene studies for his family members but could not because of their refusal.
after the diagnosis, he has regularly visited our clinic for routine laboratory tests, echocardiography, and carotid doppler sonography.
in addition , with our help , he was enrolled in the progeria research foundation ( prf ) in usa and is waiting for farnesyltransferase inhibitors ( ftis ) for clinical trials .
progeria was described for the first time in 1886 , by hutchinson , and ratified by gilford , in 1904 .
it occurs sporadically , with an incidence of 1 in 8 million live births and there are approximately 150 cases described in the medical literature .
it predominates in males with a ratio of 1.5:1, and caucasians show a greater susceptibility, accounting for 97% of cases (3). this is the first case in korea to be diagnosed as hutchinson-gilford progeria syndrome by gene study.
there is variability in the onset and rate of progression of disease among children, although the final phenotype in these patients is remarkably similar, underscoring the identical common mutation that leads downstream to similar pathobiology (10). in this syndrome, the average life span is 13 yr (range 7-27 yr), although patients occasionally survive to the age of 45 yr. death is mainly due to cardiovascular complications such as myocardial infarction or congestive heart failure.
the evidence supporting de novo point mutations in the lamin a (lmna) gene as the causative factor has been increasing (11).
the most common hgps - associated mutation , gly608gly , causes 150 nucleotides encoded in exon 11 to be spliced out of the final mrna and results in a protein that lacks 50 amino acids .
this protein , progerin , retains its c - terminal caax motif but lacks sequences that are required for complete processing and is , therefore , stably farnesylated ( 12 ) .
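the two numbers in the preceding sentences can be checked directly: ggc and ggt both encode glycine (hence "g608g", silent at the protein level yet activating a cryptic splice donor), and an in-frame loss of 150 nucleotides removes exactly 50 codons. the python sketch below is a minimal editorial illustration of that arithmetic, not part of the case report.

```python
# Glycine codons of the standard genetic code (only glycine is needed here).
GLYCINE_CODONS = {"GGT", "GGC", "GGA", "GGG"}

wild_type, mutant = "GGC", "GGT"
# Both codons encode glycine, so the G608G change is synonymous at the
# protein level; its effect is on splicing, not on the encoded residue.
assert wild_type in GLYCINE_CODONS and mutant in GLYCINE_CODONS
print(f"G608G: {wild_type} -> {mutant}, both glycine (synonymous)")

# The cryptic splice site removes 150 nucleotides from the exon 11 transcript.
# 150 is divisible by 3, so the reading frame is preserved and the internally
# truncated protein (progerin) lacks 150 / 3 = 50 amino acids.
deleted_nt = 150
assert deleted_nt % 3 == 0
print(f"{deleted_nt} nt removed in frame -> {deleted_nt // 3} amino acids lost")
```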
lamin a is a protein meshwork lining the nucleoplasmic face of the inner nuclear membrane and represents an important determinant of interphase nuclear architecture ( 13 ) .
progerin apparently acts in a dominant - negative manner on the nuclear function of cell types that express lamin a ( 14 ) .
the clinical manifestations are divided into major criteria and signs, which usually present as follows: the major criteria are a bird-like face (which develops at around 6 months to one year of age), alopecia, prominent veins on the scalp, large eyes, micrognathia, abnormal and slow dentition, a pear-shaped chest, short clavicles, bow legs (coxa valga), short upper limbs with prominent joints, low stature and weight with normal bone age, incomplete sexual maturation, reduction of adipose tissue, and adequate psychomotor development with normal intelligence.
diagnosis is essentially clinical with major criteria appearing during the first and second years of life ( 15 ) .
cutaneous manifestations appear earliest, followed by those of the skeletal and cardiovascular systems. in our case, the cardiovascular system showed calcification of the aortic and mitral valves and hypertrophy of the internal layer of the internal carotid artery, suggesting atherosclerosis.
skeletal abnormalities include osteolysis , osteoporosis , dystrophic clavicles , coxa valga , " horse riding " stance , thinning of cranial bones , delayed closure of cranial sutures and anterior fontanelle .
prognosis is detrimental to the health of the patient and life expectancy is around 13 yr .
the main mortality factors are cardiovascular diseases (75%), such as acute myocardial infarction. despite advances in cardiovascular surgery, the survival rate remains low because of the high capacity of the disease to reproduce atheromatous plaques.
low - dose aspirin is recommended as prophylaxis to prevent atherosclerotic changes ( 15 ) .
a vigorous research effort in the pharmaceutical industry has identified and developed a number of small-molecule compounds that potently and selectively inhibit farnesyltransferase (ftase). studies in vitro and in mice suggest a possible role for the use of farnesyltransferase inhibitors (ftis) in progeria (15).
farnesylation , a posttranslational modification involving the addition of a 15-carbon isoprene moiety , was implicated as a potential anticancer target when it was discovered that the oncoprotein ras , which has been estimated to be involved in up to 30% of all human cancers , required farnesylation for its function .
two of these drugs (lonafarnib [sch66336] from schering-plough, kenilworth, nj, and tipifarnib [r115777] from johnson & johnson, new brunswick, nj) have entered phase iii trials and have been well tolerated, including in trials involving children (16).
similar to ras, the lamin a precursor is also farnesylated, with farnesylation serving as a required step to insert prelamin a into the nuclear membrane as well as to allow for the two downstream cleavage steps which complete the processing of lamin a (18). for this patient, we are waiting for ftis for clinical trials. with the knowledge that the single c-to-t base change seen in nearly all cases of hgps created a cryptic splice site and, thus, deleted the normal second endoproteolytic cleavage site in the lamin a processing pathway, it was hypothesized that progerin was forced to retain its farnesyl group and, therefore, could not dissociate itself from the nuclear membrane. with other members of the nuclear lamina also potentially becoming trapped in complexes with the mislocalized progerin, a mechanistic connection between this permanently farnesylated state and the striking nuclear blebbing and disrupted nuclear architecture seen in hgps cells was proposed, and the possibility of preventing or reversing this phenotype through ftis was raised.
detailed study of hgps and lmna mutations may also advance our understanding of the process of aging .
why do lmna mutant cells enter senescence earlier than normal cells? (19). in summary, we found a new patient with typical hutchinson-gilford progeria syndrome carrying the g608g mutation in the lmna gene.
this is the first case diagnosed as hutchinson - gilford progeria syndrome by gene study in korea . | hutchinson - gilford progeria syndrome ( hgps ) is a rare condition originally described by hutchinson in 1886 .
death results from cardiac complications in the majority of cases and usually occurs at an average age of thirteen years.
a 4-yr old boy had typical clinical findings such as short stature , craniofacial disproportion , alopecia , prominent scalp veins and sclerodermatous skin .
this abnormal appearance began at the age of 1 yr. on serological and hormonal evaluation, all values were within the normal range.
he was neurologically intact with motor and mental development .
an echocardiogram showed calcification of aortic and mitral valves .
hypertrophy of internal layer at internal carotid artery suggesting atherosclerosis was found by carotid doppler sonography .
he is on low dose aspirin to prevent thromboembolic episodes and on regular follow up .
gene study showed the typical g608g (ggc->ggt) point mutation at exon 11 of the lmna gene.
this is a rare case of hutchinson - gilford progeria syndrome confirmed by genetic analysis in korea . |
Some of the bruises found on actress Natalie Wood's body may have occurred before she drowned in the waters off California more than 30 years ago, according to a newly released coroner's report on one of Hollywood's most mysterious deaths.
Officials on Monday released an addendum to Wood's 1981 autopsy that cites unexplained bruises and scratches on Wood's face and arms as significant factors that led officials to change her death certificate last year from drowning to "drowning and other undetermined factors."
Officials were careful about their conclusions because they lacked several pieces of evidence.
The renewed inquiry came after the boat's captain, Dennis Davern, told "48 Hours Mystery" and the "Today" show that he heard Wood and her actor-husband Robert Wagner arguing the night of her disappearance and believed Wagner was to blame for her death.
Wood, 43, was on a yacht with Wagner, co-star Christopher Walken and the boat captain before somehow ending up in the water.
Sheriff's spokesman Steve Whitmore said the newly released autopsy report does not change the status of the investigation, which remains open.
Whitmore said Wagner is not considered a suspect in Wood's death.
Conflicting versions of what happened on the yacht have contributed to the mystery of how the actress died. Wood, Wagner and Walken had all been drinking heavily before the actress disappeared. The original detective on the case, Wagner and Walken have all said they considered her death an accident.
Bruises on Wood's arms, a scratch on her neck and superficial abrasions to her face may have occurred before Wood ended up in the waters off Catalina Island in November 1981, but coroner's officials wrote they could not definitely determine when the injuries occurred.
"The location of the bruises, the multiplicity of the bruises, lack of head trauma, or facial bruising support bruising having occurred prior to entry in the water," says the report, written by Chief Medical Examiner Dr. Lakshmanan Sathyavagiswaran. "Since there are unanswered questions and limited additional evidence available for evaluation, it is opined by this Medical Examiner that the manner of death should be left as undetermined."
Officials also considered that Wood wasn't wearing a life jacket and had no history of suicide attempts and didn't leave a note as reasons to amend the report and the death certificate.
The newly released report also says there are conflicting statements about when the boat's occupants discovered Wood was missing. The report estimates her time of death was around midnight, and she was reported missing at 1:30 a.m.
Wagner wrote in a 2008 memoir that he and Walken argued that night. He wrote that Walken went to bed and he stayed up for a while, but when he went to bed, he noticed that his wife and a dinghy attached to the yacht were missing. ||||| Actress Natalie Wood in 1963 / AP Photo
(CBS/AP) LOS ANGELES - The Los Angeles Coroner's Office is expected to release a new report Monday in the drowning of actress Natalie Wood more than 30 years ago.
Sources told CBS News the review of the 1981 coroner's report questions the findings that led investigators to conclude Wood's death was an accident.
Sources say the new report concludes that the bruising on the actress' wrists, knees, and ankles could be more consistent with injuries from an assault than they were from struggling to climb back on a boat.
Conflicting versions of what happened on the yacht shared by Wood, her actor-husband Robert Wagner and their friend, actor Christopher Walken, have contributed to the mystery of how the actress died on Thanksgiving weekend in 1981. Authorities have previously said Wagner is not a suspect in his wife's death.
Investigators re-opened the case in Nov. 2011, after the boat's captain, Dennis Davern, told "48 Hours Mystery" and the "Today" show that he heard Wagner and Wood arguing the night of her disappearance and believed Wagner was to blame for her death.
Wagner wrote in a 2008 memoir that he and Walken argued that night. He wrote that Walken went to bed and he stayed up for a while, but when he went to bed, he noticed that his wife and a dinghy attached to the yacht were missing.
Wood was nominated for three Academy Awards during her lifetime. Her death has remained one of Hollywood's most enduring mysteries. The original detective on the case, Wagner, Walken and until recently, the coroner's office, have all said they considered Wood's death an accident.
| – Natalie Wood's body had bruises and scratches that she likely suffered before drowning in the Pacific Ocean on a November night in 1981, according to a new coroner's report. "The location of the bruises, the multiplicity of the bruises, lack of head trauma, or facial bruising support bruising having occurred prior to the entry into the water," wrote Dr. Lakshmanan Sathyavagiswaran in his report. He refused to rule out "non-volitional, unplanned entry into the water" but left her cause of death "undetermined," the LA Times reports. The actor's drowning was considered accidental until Dennis Davern, the pilot of the boat she shared with actor-husband Robert Wagner, gave a TV interview blaming Wagner for her death, CBS reports. The LA County Sheriff's Department reopened the investigation in 2011, which led to today's report. But a lack of evidence leaves open the chance that Wood was hurt trying to tie a dinghy to the boat. Her actual injuries: superficial abrasions to the face, a scratch on her neck, and bruises on her arms, reports the AP. (According to unearthed audiotapes, Wagner actually pushed Wood off the boat.) |
caffeine is one of the most commonly consumed drugs throughout the world and has minimal health risks ( 10 ) .
since caffeine can be found in many foods and drinks , its effect on the body and cardiovascular system should not be overlooked .
caffeine has been shown to improve mood , enhance psychomotor performance , enhance cognitive performance in healthy volunteers ( 24 ) and has diverse physiological effects including central nervous system stimulation ( 14 ) .
caffeine intake has consistently been shown to increase systolic blood pressure through its effects on systemic vascular resistance ( 2 ) .
caffeine is also seen as an adenosine antagonist where it stimulates the central nervous system ( 16 ) by preventing the repressor effects on arousal ( 23 ) and neurotransmitter release ( 18 ) .
caffeine is an antagonist competitor for these adenosine receptors , acting on these receptors in many areas , including whole body peripheral circulation and at the brain cortex ( 5 ) .
evidence has shown that caffeine has a broad scope of pharmacologic effects and has been positively associated with cardiac stimulation, arrhythmias, and coronary heart disease (24). in 2004 the world anti-doping agency removed caffeine from the banned substances list (31), and the national collegiate athletic association set a high urinary threshold of 15 µg/ml to test positive for caffeine (17).
this loosening of regulations increases the need for deeper investigation on the effects of caffeine during exercise and sport performance .
as athletes commonly use caffeine as an ergogenic aid (10), it is important to know caffeine's effect on blood pressure during and after exercise. at rest, caffeine has been shown to have a stimulatory effect on systolic and diastolic blood pressure (8).
dosing caffeine relative to body weight is common practice in exercise science research. to induce improvements in exercise performance, a dose of 3-6 mg·kg⁻¹ taken prior to the performance seems to be sufficient (9). an article by sung assessed the effects of 3.3 mg·kg⁻¹ of caffeine on blood pressure during exercise in normotensive healthy young men. sung found that during maximal exercise, 44% of the participants had a significant increase in systolic and/or diastolic blood pressure during the caffeine trial when compared to the placebo trial (27).
this is more than twice the number showing such a response to exercise alone ( 27 ) .
this suggests that caffeine has an effect on blood pressure during exercise in men , suggesting an increased sympathetic response .
however , there has been a lack of research on the effects of caffeine on women during exercise and sport performance ( 13 ) .
the influence of caffeine on blood pressure and blood pressure recovery in women has not been well documented .
the purpose of this study is to determine how caffeine affects maximal blood pressure and blood pressure recovery after maximal exercise in physically active college - aged females .
it is hypothesized that caffeine will raise the participant s maximal blood pressure and extend the period of time it takes for blood pressure to recover from maximal work .
all participants signed a written informed consent form approved by an institutional review board and had descriptive variables (i.e. age, height, weight, medical history, caffeine usage, and a list of medications they were currently taking) recorded.
the participants had to be classified as low risk individuals by the american college of sports medicine s ( acsm ) guidelines . to be considered low risk by the acsm , participants had less than two risk factors for cardiovascular disease and no signs , symptoms , or diagnoses of pulmonary , cardiac , or metabolic diseases ( 19 ) .
furthermore, individuals that had no risk factors, had a body mass index < 25 kg/m², and were younger than 45 years old were considered to have normal fasting glucose levels according to acsm and to be at low risk without the need for a blood draw.
finally , all participants completed a survey determining how much physical activity they complete according to the acsm s criteria .
participants took part in two maximal exercise tests after consuming either 1) a placebo or 2) a dose of 3.3 mg·kg⁻¹ of body weight of caffeine. the caffeine was in powder form and was dissolved in 10 oz. of unsweetened cranberry juice to mask its bitter taste.
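as a concrete illustration of the body-mass-relative dosing scheme, the sketch below computes the absolute caffeine dose for a given body mass; the 60 kg example is chosen only because it is close to the group mean reported in the abstract and does not describe any individual participant.

```python
def caffeine_dose_mg(body_mass_kg: float, relative_dose_mg_per_kg: float = 3.3) -> float:
    """Absolute caffeine dose (mg) for a body-mass-relative dosing scheme."""
    return body_mass_kg * relative_dose_mg_per_kg

# Example: a 60 kg participant (close to the reported group mean of ~60.3 kg)
# would receive roughly 198 mg of caffeine, dissolved in 10 oz of juice.
print(f"{caffeine_dose_mg(60):.0f} mg")  # -> 198 mg
```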
this dosage was used in the sung study and showed significant results in healthy normotensive men (27). in the 72 hours leading up to each test, participants were asked to refrain from consuming food and drinks containing caffeine; prior work found that 72 hours of abstention is sufficient to avoid the effects of caffeine tolerance (25). in the 12 weeks between trials, participant caffeine consumption was not controlled, except for the 72-hour period before testing. during each trial, participants' resting blood pressure (i.e. in a sitting position with feet flat on the floor and back supported) was taken just prior to consuming the 10 oz. beverage. the one-hour wait time was the same whether the participant drank the placebo or the caffeine beverage.
after the one-hour wait, participants performed a maximal treadmill exercise test using the bruce protocol to 85% of their age-predicted maximum heart rate, estimated using the gellish equation (19).
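the age-predicted maximum referred to here is commonly computed with the gellish regression cited through the acsm guidelines, hrmax ≈ 206.9 - 0.67 × age; the sketch below applies that published formula and takes 85% of the result as the test-termination target. the age value is illustrative (the group mean from the abstract), not data from any single participant.

```python
def gellish_hr_max(age_years: float) -> float:
    """Age-predicted maximal heart rate (bpm) from the Gellish et al. regression."""
    return 206.9 - 0.67 * age_years

def target_hr(age_years: float, fraction: float = 0.85) -> float:
    """Test-termination heart rate as a fraction of the age-predicted maximum."""
    return fraction * gellish_hr_max(age_years)

# Illustrative value close to the mean participant age (~23.5 yr):
age = 23.5
print(f"HRmax ~ {gellish_hr_max(age):.0f} bpm, 85% target ~ {target_hr(age):.0f} bpm")
# -> HRmax ~ 191 bpm, 85% target ~ 162 bpm
```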
each participant completed the two maximal treadmill exercise tests using the bruce protocol. during each test, the participant's rating of perceived exertion (rpe) and heart rate were recorded during the last 15 seconds of each minute, and blood pressure was assessed during the last minute of each stage.
heart rate was monitored continuously to determine when the participant was at 85% of their age predicted maximum heart rate .
blood pressure was immediately taken post - exercise and every 2 minutes during the active recovery phase at 2.0 mph and 0% grade until the following two criteria were met : 1 ) blood pressure was within 20 mmhg of their resting values and 2 ) 6 minutes had elapsed post - exercise .
after the 6-minute period , participants began the passive recovery phase by sitting down until their blood pressure was within 10 mmhg of their resting blood pressure .
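the two stopping rules just described (active recovery ends once blood pressure is within 20 mmhg of rest and at least 6 minutes have elapsed; passive recovery ends once it is within 10 mmhg of rest) can be expressed as a small decision helper. the function below is a hypothetical editorial sketch of that logic using systolic pressure, not code used in the study.

```python
def recovery_phase_complete(phase: str,
                            current_sbp: float,
                            resting_sbp: float,
                            minutes_post_exercise: float) -> bool:
    """Return True when the stated criteria for ending a recovery phase are met.

    Active phase : SBP within 20 mmHg of rest AND at least 6 minutes elapsed.
    Passive phase: SBP within 10 mmHg of rest.
    """
    elevation = current_sbp - resting_sbp
    if phase == "active":
        return elevation <= 20 and minutes_post_exercise >= 6
    if phase == "passive":
        return elevation <= 10
    raise ValueError(f"unknown phase: {phase!r}")

# Hypothetical numbers: 6 minutes post-exercise, resting SBP 116 mmHg.
print(recovery_phase_complete("active", 138, 116, 6))  # False (22 mmHg above rest)
print(recovery_phase_complete("active", 134, 116, 6))  # True  (18 mmHg above rest, >= 6 min)
```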
the amount of time it took participants to recover from near maximal exercise was recorded in minutes , as well as the amount of time it took participants to reach 85% of their age predicted maximum heart rate .
a repeated measures anova was used to assess whether maximal and recovery systolic blood pressures and rpes were significantly altered following caffeine use.
the amount of time it took to reach 85% maximal heart rate and exercise recovery was also assessed in minutes using a repeated measures anova .
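for readers who want to run the same kind of analysis on their own data, the sketch below fits a one-within-factor repeated-measures anova (condition: placebo vs. caffeine) on total recovery time with statsmodels. the data frame is entirely fabricated to make the example runnable and is not the study's data set.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Fabricated long-format data: one row per participant per condition.
data = pd.DataFrame({
    "subject":   list(range(1, 16)) * 2,
    "condition": ["placebo"] * 15 + ["caffeine"] * 15,
    "total_recovery_min": [
        8.0, 9.5, 7.5, 10.0, 8.5, 9.0, 7.0, 11.0, 8.0, 9.5, 10.5, 8.5, 9.0, 7.5, 10.0,
        10.5, 12.0, 10.0, 12.5, 11.0, 11.5, 9.5, 13.5, 10.5, 12.0, 13.0, 11.0, 11.5, 10.0, 12.5,
    ],
})

# One within-subject factor (condition) -> repeated-measures ANOVA.
result = AnovaRM(data, depvar="total_recovery_min",
                 subject="subject", within=["condition"]).fit()
print(result)
```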
measured values of descriptive characteristics of participants are presented in table 1. exercise test characteristics of participants are presented in table 2. the participants' maximal systolic blood pressure (sbp) and diastolic blood pressure (dbp) were not significantly different between the caffeine and placebo trials; however, in the caffeine trial 60% of participants had a higher sbp and 40% had a higher dbp when compared to the placebo trial. the caffeine trial had significantly longer active recovery (f = 12.923, p = 0.003, η² = 0.48), passive recovery (f = 6.167, p = 0.026, η² = 0.306), and total recovery periods (f = 16.205, p = 0.001, η² = 0.537) than the placebo trial. the time it took participants to reach their age-predicted maximum heart rate was significantly longer in the caffeine trial than in the placebo trial (f = 7.445, p = 0.017, η² = 0.364).
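the effect sizes quoted above are (partial) eta-squared values; for a two-level within-subject factor they can be recovered from the f statistic as η² = f·df_effect / (f·df_effect + df_error). the check below assumes df_effect = 1 and df_error = 14 (fifteen participants, two conditions), which is an assumption rather than something stated in the text; it reproduces the reported active-recovery value.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Partial eta-squared recovered from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Assumed design: 15 participants, one within-subject factor with two levels
# -> df_effect = 1, df_error = 14 (assumption, not stated in the text).
for label, f in [("active recovery", 12.923),
                 ("passive recovery", 6.167),
                 ("total recovery", 16.205)]:
    print(f"{label}: eta^2 = {partial_eta_squared(f, 1, 14):.3f}")
# active recovery -> 0.480, matching the reported 0.48
```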
determining the effects of caffeine intake on near maximal blood pressure and blood pressure recovery would help provide information for female athletes , researchers , and practitioners to better understand how caffeine affects blood pressure .
this study examined caffeine's hemodynamic influences during exercise in physically active, college-aged females who were classified by acsm criteria as low risk. while caffeine use did not increase systolic (sbp) or diastolic blood pressure (dbp) at near-maximal exercise, the total recovery time from near-maximal exercise was, on average, 2.6 minutes longer in the caffeine trial.
this pattern of results is in contrast with an article by sung ( 27 ) .
the study reported a significant increase in sbp and dbp ( p < 0.02 ) during maximal exercise trial with caffeine consumption compared to a non - caffeine trial ( 27 ) .
current results indicate that caffeine ingestion prior to near-maximal exercise had no significant effect on near-maximal sbp and dbp at the 0.05 significance level.
consequently , the hypothesis that caffeine would increase sbp and dbp at near maximal exercise was not supported . however , the study by sung ran a maximum exercise test to exhaustion .
two other studies also found non-significant differences in sbp and dbp after exercise following caffeine consumption: one used a submaximal exercise test in a population of normotensive active men (6), and the other used a submaximal cycle ergometry session in a population of male and female cyclists (7).
based on the current results, systolic blood pressure recovery was significantly longer under caffeine during the active, passive, and total recovery periods compared with the placebo trial.
souza et al . found similar results in a resistance training study assessing the effects of caffeine on hemodynamics after a resistance training session .
they also found that sbp and dbp were significantly elevated for nine hours post - exercise ( 26 ) .
current results also indicate that participants took longer to reach 85% of their age-predicted maximum heart rate in the caffeine trial. a systematic review by the university of connecticut found that across 21 research studies, caffeine improved sport-specific endurance performance by 0.3% to 17.3% (9). the present study showed a 2.7% increase in aerobic endurance.
furthermore, piha also found that heart rate responses were lower under the influence of caffeine in ten healthy participants (20).
the piha study monitored heart rate responses following standing up and after an isometric handgrip test ( 20 ) .
further research is needed in this area to determine whether caffeine lowers heart rate responses at near maximal and maximal exercise .
current results showed that rpe was not significantly different between the caffeine trial and placebo trial .
however , several other studies found that there was a significant reduction in rpe after caffeine consumption ( 11 , 12 , 29 ) . these studies each tested one of the different aspects of health - related fitness ( i.e. muscular strength , muscular endurance , cardiovascular endurance , flexibility , body composition ) . however , there are studies that have found that caffeine does not significantly reduce rpe ( 3 , 28 ) .
one study in particular found that rpe was not significantly altered after caffeine ingestion during intense exercise in active women ( 1 ) .
a source of variation in the effects of caffeine across these studies may be the tolerance level of habitual users .
it has been found that caffeine can show significant effects on cognition and hemodynamics in habitual users after abstinence ( 15 , 21 , 22 , 30 ) .
our participants abstained from caffeine for 72 hours before each exercise trial ( 25 ) .
results from this study were limited by the near-maximal nature of the exercise test; a maximal test to exhaustion may have elicited different blood pressure responses.
additionally , research has shown that habitual caffeine users who abstain from caffeine for at least seven days will optimize the ergogenic effect ( 9 ) .
seven days of caffeine abstinence would have been optimal , but we still had significant results with the 72-hour washout period that was deemed sufficient in previous research ( 25 ) .
further research is also needed to determine the effects of caffeine on maximal blood pressure and blood pressure recovery in a sedentary female population and to determine the effects of caffeine on heart rate responses at near-maximal exercise. in summary, current results suggest that caffeine significantly increases active and passive recovery time following near-maximal exercise in active females.
the responses of recovery time to caffeine may have clinical and practical significance to those who use caffeine before exercise .
the current study also provides evidence that caffeine has an ergogenic effect on endurance and near maximal heart rate responses .
overall, this study shows that a caffeine dose of 3.3 mg·kg⁻¹ has the potential to lengthen the amount of time it takes to reach near-maximal heart rate and also increases the amount of time it takes for blood pressure to recover from near-maximal exercise. | the purpose of this study is to determine how caffeine affects exercise blood pressure (bp) and active and passive recovery bp after vigorous intensity exercise in physically active college-aged females.
fifteen physically active, acsm-stratified low-risk females (age (y): 23.53 ± 4.07, weight (kg): 60.34 ± 3.67, height (cm): 165.14 ± 7.20, bmi (kg/m²): 22.18 ± 1.55) participated in two bruce protocol exercise tests. before each test participants consumed 1) a placebo or 2) 3.3 mg·kg⁻¹ of caffeine at least one hour before exercise in a counterbalanced, double-blinded fashion.
after reaching 85% of their age - predicted maximum heart rate , bp was taken and participants began an active ( i.e. walking ) recovery phase for 6 minutes followed by a passive ( i.e. sitting ) recovery phase .
bp was assessed every two minutes in each phase .
recovery times were assessed until active and passive bp equaled 20 mmhg and 10 mmhg above resting , respectively .
participants completed each test 12 weeks apart.
maximal systolic and diastolic blood pressures were not significantly different between the two trials . active recovery , passive recovery , and total recovery times were all significantly longer during the caffeine trial than the placebo trial . furthermore , the time to reach age - predicted maximum heart rate was significantly shorter in the placebo trial than the caffeine trial .
while caffeine consumption did not significantly affect maximal blood pressure , it did affect active and passive recovery time following vigorous intensity exercise in physically active females .
exercise endurance also improved after consuming caffeine in this population . |