Chemical pathways of peptide degradation. X: effect of metal-catalyzed oxidation on the solution structure of a histidine-containing peptide fragment of human relaxin.
To elucidate the major degradation products of the metal-catalyzed oxidation of (cyclo S-S) AcCys-Ala-X-Val-Gly-CysNH2 (X = His, cyclic-His peptide), which is a fragment of the protein relaxin, and the effect of this oxidation on its solution structure. The cyclic-His peptide and its potential oxidative degradation products, cyclic-Asp peptide (X = Asp) and cyclic-Asn peptide (X = Asn), were prepared by using solid phase peptide synthesis and purified by preparative HPLC. The degradation of the cyclic-His peptide was investigated at pH 5.3 and 7.4 in an ascorbate/cupric chloride/oxygen [ascorbate/Cu(II)/O2] system in the absence or presence of catalase (CAT), superoxide dismutase (SOD), isopropanol, and thiourea. The oxidation of the cyclic-His peptide was also studied in the presence of hydrogen peroxide (H2O2). All reactions were monitored by reversed-phase HPLC. The main degradation product of the cyclic-His peptide formed at pH 7.4 in the presence of ascorbate/Cu(II)/O2 was isolated by preparative HPLC and identified by 1H NMR and electrospray mass spectrometry. The complexation of Cu(II) with the cyclic-His peptide was determined with 1H NMR. The solution structure of the cyclic-His peptide in the presence and absence of Cu(II) at pH 5.3 and 7.4 and the solution structure of the main degradation product were determined using circular dichroism (CD). CAT and thiourea were effective in stabilizing the cyclic-His peptide to oxidation by ascorbate/Cu(II)/O2, while SOD and isopropanol were ineffective. Cyclic-Asp and cyclic-Asn peptides were not observed as degradation products of the cyclic-His peptide oxidized at pH 5.3 and 7.4 in an ascorbate/Cu(II)/O2 system. The main degradation product formed at pH 7.4 was the cyclic 2-oxo-His peptide (X = 2-oxo-His). At pH 5.3, numerous degradation products were formed in low yields, including the cyclic 2-oxo-His peptide. The cyclic 2-oxo-His peptide appeared to have a different secondary structure than did the cyclic-His peptide as determined by CD. 1H NMR results indicate complexation between the cyclic-His peptide and Cu(II). CD results indicated that the solution structure of the cyclic-His peptide in the presence of Cu(II) at pH 5.3 was different than the solution structure observed at pH 7.4. H2O2 and superoxide anion radical (O(*-)2) were deduced to be the intermediates involved in the ascorbate/Cu(II)/O2-induced oxidation of cyclic-His peptide. H2O2 degradation by a Fenton-type reaction appears to form secondary reactive-oxygen species (i.e., hydroxyl radical generated within complex forms or metal-bound forms of hydroxyl radical) that react with the peptide before they diffuse into the bulk solution. CD results indicate that different complexes are formed between the cyclic-His peptide and Cu(II) at pH 5.3 and pH 7.4. These different complexes may favor the formation of different degradation products. The apparent structural differences between the cyclic-His peptide and the cyclic 2-oxo-His peptide indicate that conformation of the cyclic-His peptide was impacted by metal-catalyzed oxidation. |
MediCaring: development and test marketing of a supportive care benefit for older people.
To develop an alternative healthcare benefit (called MediCaring) and to assess the preferences of older Medicare beneficiaries concerning this benefit, which emphasizes more home-based and supportive health care and discourages use of hospitalization and aggressive treatment. To evaluate the beneficiaries' ability to understand and make a choice regarding health insurance benefits; to measure their likelihood of changing from traditional Medicare to the new MediCaring benefit; and to determine the short-term stability of that choice. Focus groups of persons aged 65+ and family members shaped the potential MediCaring benefit. A panel of 50 national experts critiqued three iterations of the benefit. The final version was test marketed by discussing it with 382 older people (men ≥75 years and women ≥80 years) in their homes. Telephone surveys a few days later, and again 1 month after the home interview, assessed the potential beneficiaries' understanding and preferences concerning MediCaring and the stability of their responses. Focus groups were held in community settings in New Hampshire; Washington, DC; Cleveland, OH; and Columbia, SC. Test marketing occurred in New Hampshire; Cleveland, OH; Columbia, SC; and Los Angeles, CA. Focus group participants were persons more than 65 years old (11 focus groups), healthcare providers (9 focus groups), and family decision-makers (3 focus groups). Participants in the in-home informing (test marketing group) were persons older than 75 years who were identified through contact with a variety of services. Demographics, health characteristics, understanding, and preferences were measured. Focus group beneficiaries between the ages of 65 and 74 generally wanted access to all possible medical treatment and saw MediCaring as a need of persons older than themselves. Those older than age 80 were mostly in favor of it. Test marketing participants understood the key points of the new benefit: 74% generally liked it, and 34% said they would take it now. Preferences were generally stable at 1 month. In multivariate regression, those preferring MediCaring were wealthier, more often white, more often living in senior housing, and using more homecare services. However, they were not more often in poor health or needing ADL assistance. Older persons aged more than 80 years can understand a health benefit choice; most liked the aims of a new supportive care benefit, and 34% would change immediately from Medicare to a supportive care benefit such as MediCaring. These findings encourage further development of special programs of care, such as MediCaring, that prioritize comfort and support for the old old. |
Normal ocular features, conjunctival microflora and intraocular pressure in the Canadian beaver (Castor canadensis).
The aim of the study was to assess the ocular features, normal conjunctival bacterial and fungal flora, and intraocular pressure (IOP) in the Canadian beaver (Castor canadensis). Sixteen apparently healthy beavers with no evidence of ocular disease were live-trapped in regions throughout Prince Edward Island. The beavers were sedated with intramuscular ketamine (12-15 mg/kg). Two culture specimens were obtained from the ventral conjunctival sac of both eyes of 10/16 beavers for aerobic and anaerobic bacterial and fungal identification. The anterior ocular structures of all beavers were evaluated using a transilluminator and slit lamp biomicroscope. Palpebral fissure length (11/16 beavers) and horizontal and vertical corneal diameters (10/16 beavers) were measured. IOPs were measured in both eyes of 11/16 beavers using applanation tonometry. Both eyes of 3/16 beavers and one eye of 1/16 beavers were dilated using topical tropicamide prior to sedation to effect timely maximal dilation. Culture specimens and IOPs were not evaluated in these four animals. Indirect ophthalmoscopy was performed on 7/8 eyes of these four beavers. Conjunctival specimens from all eyes cultured positively for one or more isolates of aerobic bacteria. The most common isolate was Micrococcus spp. (five beavers; 9/20 eyes). Other isolates included a Gram-positive coccobacillus-like organism (four beavers; 7/20 eyes), Aeromonas hydrophila (three beavers; 4/20 eyes), Staphylococcus spp. (three beavers; 4/20 eyes), Gram-positive bacilli (one beaver; 2/20 eyes), Enterobacter spp. (two beavers; 2/20 eyes), Streptococcus spp. (two beavers; 2/20 eyes), aerobic diphtheroids (one beaver; 1/20 eyes), and Pseudomonas spp. (one beaver; 1/20 eyes). Clostridium sordellii (one beaver; 1/20 eyes) and Peptostreptococcus spp. (one beaver; 1/20 eyes) were the sole anaerobic bacteria isolated. All conjunctival specimens were negative for growth of fungi. Ophthalmic examinations revealed that the normal beaver eye and ocular adnexa include dorsal and ventral puncta, a vestigial third eyelid, and a circular pupil. Average palpebral fissure length was 9.36 mm (SD = 1.00) for both eyes. Mean horizontal and vertical corneal diameters of both eyes were 9.05 mm (SD = 0.64) and 8.45 mm (SD = 0.69), respectively. Mean IOPs for the right and left eyes were 17.11 mmHg (SD = 6.39) and 18.79 mmHg (SD = 5.63), respectively. Indirect ophthalmoscopic examinations revealed normal anangiotic retinas. Gram-positive aerobes were most commonly cultured from the conjunctival sac of normal beavers, with Micrococcus spp. predominating. The overall mean IOP in ketamine-sedated beavers was 17.95 mmHg. The beaver, an amphibious rodent, has an anangiotic retina. |
On the impact of smoothing and noise on robustness of CT and CBCT radiomics features for patients with head and neck cancers.
We investigated the characteristics of radiomics features extracted from planning CT (pCT) and cone beam CT (CBCT) image datasets acquired for 18 oropharyngeal cancer patients treated with fractionated radiation therapy. Images were subjected to smoothing, sharpening, and noise to evaluate changes in features relative to baseline datasets. Textural features were extracted from tumor volumes, contoured on pCT and CBCT images, according to the following eight different classes: intensity-based histogram features (IBHF), gray-level run length (GLRL), Laws' textural information (LAWS), discrete orthonormal Stockwell transform (DOST), local binary pattern (LBP), two-dimensional wavelet transform (2DWT), two-dimensional Gabor filter (2DGF), and gray-level co-occurrence matrix (GLCM). A total of 165 radiomics features were extracted. Images were post-processed prior to feature extraction using a Gaussian noise model with different signal-to-noise ratios (SNR = 5, 10, 15, 20, 25, 35, 50, 75, 100, and 150). Gaussian filters with different cutoff frequencies (varied discretely from 0.0458 to 0.7321 cycles/mm) were applied to image datasets. The effect of noise and smoothing on each extracted feature was quantified using the mean absolute percent change (MAPC) between the respective values on post-processed and baseline images. The Fisher method for combining Welch P-values was used for tests of significance; the combined P-value is denoted p̄_Fisher below. Three comparisons were investigated: (a) baseline pCT versus modified pCT (with a given filter applied); (b) baseline CBCT versus modified CBCT; and (c) baseline and modified pCT versus baseline and modified CBCT. Features extracted from CT and CBCT image datasets were robust to low-pass filtering (MAPC = 17.5%, p̄_Fisher = 0.93 for CBCT and MAPC = 7.5%, p̄_Fisher = 0.98 for pCT) and noise (MAPC = 27.1%, p̄_Fisher = 0.89 for CBCT, and MAPC = 34.6%, p̄_Fisher = 0.61 for pCT). Extracted features were significantly impacted (MAPC = 187.7%, p̄_Fisher < 0.0001 for CBCT, and MAPC = 180.6%, p̄_Fisher < 0.01 for pCT) by the LOG filter, which is classified as a high-pass filter. Features most impacted by low-pass filtering were LAWS (MAPC = 11.2%, p̄_Fisher = 0.44), GLRL (MAPC = 9.7%, p̄_Fisher = 0.70) and IBHF (MAPC = 21.7%, p̄_Fisher = 0.83) for the pCT datasets, and LAWS (MAPC = 20.2%, p̄_Fisher = 0.24), GLRL (MAPC = 14.5%, p̄_Fisher = 0.44), and 2DGF (MAPC = 16.3%, p̄_Fisher = 0.52) for the CBCT image datasets. For pCT datasets, features most impacted by noise were GLRL (MAPC = 29.7%, p̄_Fisher = 0.06), LAWS (MAPC = 96.6%, p̄_Fisher = 0.42), and GLCM (MAPC = 36.2%, p̄_Fisher = 0.48), while LBP (MAPC = 5.2%, p̄_Fisher = 0.99) was found to be relatively insensitive to noise. For CBCT datasets, GLRL (MAPC = 8.9%, p̄_Fisher = 0.80) and LAWS (MAPC = 89.3%, p̄_Fisher = 0.81) features were impacted by noise, while the LBP (MAPC = 2.2%, p̄_Fisher = 0.99) and DOST (MAPC = 13.7%, p̄_Fisher = 0.98) features were noise insensitive. Apart from 15 features, no significant differences were observed for the remaining 150 textural features extracted from baseline pCT and CBCT image datasets (MAPC = 90.1%, p̄_Fisher = 0.26). Radiomics features extracted from planning CT and daily CBCT image datasets for head/neck cancer patients were robust to low-power Gaussian noise and low-pass filtering, but were impacted by high-pass filtering.
Textural features extracted from CBCT and pCT image datasets were similar, suggesting interchangeability of pCT and CBCT for investigating radiomics features as possible biomarkers for outcome. |
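As a rough illustration of the perturbation-and-comparison step described in this abstract, the sketch below adds Gaussian noise at a chosen SNR to an image and computes the mean absolute percent change (MAPC) of a feature vector against its baseline. It is a minimal Python/NumPy sketch under assumed conventions (SNR taken as mean absolute signal over the noise standard deviation; feature extraction itself is not shown), not the authors' implementation.

```python
import numpy as np

def add_gaussian_noise(image, snr, rng=None):
    """Add zero-mean Gaussian noise so that mean(|image|)/sigma equals the target SNR.

    This SNR definition is an assumption for illustration; the abstract does not
    specify how its SNR values (5-150) map to the noise standard deviation.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.mean(np.abs(image)) / snr
    return image + rng.normal(0.0, sigma, size=image.shape)

def mapc(modified_features, baseline_features):
    """Mean absolute percent change (MAPC) between modified and baseline feature values."""
    modified = np.asarray(modified_features, dtype=float)
    baseline = np.asarray(baseline_features, dtype=float)
    return 100.0 * np.mean(np.abs((modified - baseline) / baseline))

# Toy usage with made-up values standing in for the 165 extracted features.
baseline = np.array([1.2, 3.4, 0.8])
noisy = np.array([1.3, 3.1, 0.9])
print(f"MAPC = {mapc(noisy, baseline):.1f}%")
```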
Invasive versus conservative strategy in patients aged 80 years or older with non-ST-elevation myocardial infarction or unstable angina pectoris (After Eighty study): an open-label randomised controlled trial.
Non-ST-elevation myocardial infarction (NSTEMI) and unstable angina pectoris are frequent causes of hospital admission in the elderly. However, clinical trials targeting this population are scarce, and these patients are less likely to receive treatment according to guidelines. We aimed to investigate whether this population would benefit from an early invasive strategy versus a conservative strategy. In this open-label randomised controlled multicentre trial, patients aged 80 years or older with NSTEMI or unstable angina admitted to 16 hospitals in the South-East Health Region of Norway were randomly assigned to an invasive strategy (including early coronary angiography with immediate assessment for percutaneous coronary intervention, coronary artery bypass graft, and optimum medical treatment) or to a conservative strategy (optimum medical treatment alone). A permuted block randomisation was generated by the Centre for Biostatistics and Epidemiology with stratification on the inclusion hospitals in opaque concealed envelopes, and sealed envelopes with consecutive inclusion numbers were made. The primary outcome was a composite of myocardial infarction, need for urgent revascularisation, stroke, and death and was assessed between Dec 10, 2010, and Nov 18, 2014. An intention-to-treat analysis was used. This study is registered with ClinicalTrials.gov, number NCT01255540. During a median follow-up of 1·53 years of participants recruited between Dec 10, 2010, and Feb 21, 2014, the primary outcome occurred in 93 (40·6%) of 229 patients assigned to the invasive group and 140 (61·4%) of 228 patients assigned to the conservative group (hazard ratio [HR] 0·53 [95% CI 0·41-0·69], p=0·0001). Five patients dropped out of the invasive group and one from the conservative group. HRs for the four components of the primary composite endpoint were 0·52 (0·35-0·76; p=0·0010) for myocardial infarction, 0·19 (0·07-0·52; p=0·0010) for the need for urgent revascularisation, 0·60 (0·25-1·46; p=0·2650) for stroke, and 0·89 (0·62-1·28; p=0·5340) for death from any cause. The invasive group had four (1·7%) major and 23 (10·0%) minor bleeding complications whereas the conservative group had four (1·8%) major and 16 (7·0%) minor bleeding complications. In patients aged 80 years or more with NSTEMI or unstable angina, an invasive strategy is superior to a conservative strategy in the reduction of composite events. Efficacy of the invasive strategy was diluted with increasing age (after adjustment for creatinine and effect modification). The two strategies did not differ in terms of bleeding complications. Norwegian Health Association (ExtraStiftelsen) and Inger and John Fredriksen Heart Foundation. |
Spatial patterns of spinal cord [14C]-2-deoxyglucose metabolic activity in a rat model of painful peripheral mononeuropathy.
Spatial patterns of spinal cord glucose metabolic activity were examined in unanesthetized rats with painful peripheral mononeuropathy produced by sciatic nerve ligation (chronic constrictive injury, CCI). Spinal cord metabolic activity was assessed 10 days after nerve ligation by using the fully quantitative [14C]2-deoxyglucose technique. This technique allows simultaneous examination of both neural activity inferred from local glucose utilization and its spatial distribution in multiple spinal regions previously implicated in nociceptive processing. Rats used in the experiment exhibited thermal hyperalgesia to radiant heat applied to the hind paw ipsilateral to nerve ligation and behaviors indicative of spontaneous pain. Sciatic nerve ligation produced a significant increase in spinal cord metabolic activity in four sampling regions (laminae I-IV, V-VI, VII and VIII-IX) of lumbar segments compared to sham-operated rats. The pattern of altered metabolic activity in CCI rats presented 3 distinct features. (1) The spinal cord grey matter both ipsilateral and contralateral to nerve ligation exhibited substantial increases in metabolic activity compared to sham-operated rats. (2) This increase in metabolic activity was somatotopically specific, i.e., higher metabolic rates were observed on the side ipsilateral to nerve ligation than on the contralateral side, and higher metabolic rates were seen in the medial portion of the ipsilateral spinal cord dorsal horn than in the lateral portion. The peak metabolic activity occurred in laminae V-VI of CCI rats, a region involved in nociceptive processing. (3) The increase in spinal cord metabolic activity of CCI rats extended from lumbar segment L1 to L5 in all 4 sampling regions. The substantial increase in metabolic activity in both the ipsilateral and contralateral spinal cord that occurs over an extensive rostro-caudal area in CCI rats may represent a unique pattern of spinal cord metabolic activity distinct from that observed in rats exposed to acute thermal pain. This pattern of spinal cord neural activity in CCI rats may reflect possible radiation of neuropathic pain. In addition, the procedure of curare-induced paralysis in a separate group of CCI rats did not change the extent and patterns of metabolic activity seen in non-paralyzed CCI rats, reflecting a minimal influence of the afferent feedback from flexor motor reflexes on spinal cord metabolic activity following sciatic nerve ligation. This chronic increase in spinal cord neural activity in the absence of overt peripheral stimulation suggests a spinal cord hyperactive state and may account for behaviors suggestive of spontaneous pain in CCI rats.(ABSTRACT TRUNCATED AT 400 WORDS) |
Effect of quebracho-chestnut tannin extracts at 2 dietary crude protein levels on performance, rumen fermentation, and nitrogen partitioning in dairy cows.
Our objective was to determine the effects of a tannin mixture extract on lactating cow performance, rumen fermentation, and N partitioning, and whether responses were affected by dietary crude protein (CP). The experiment was conducted as a split-plot with 24 Holstein cows (mean ± standard deviation; 669±55kg of body weight; 87±36 d in milk; 8 ruminally cannulated) randomly assigned to a diet of [dry matter (DM) basis] 15.3 or 16.6% CP (whole plot) and 0, 0.45, 0.90, or 1.80% of a tannin mixture in three 4×4 Latin squares within each level of CP (sub-plot). Tannin extract mixture was from quebracho and chestnut trees (2:1 ratio). Dietary CP level did not influence responses to tannin supplementation. A linear decrease in DM intake (25.5 to 23.4kg/d) was found, as well as a linear increase in milk/DM intake (1.62 to 1.75) and a trend for a linear decrease in fat-and-protein-corrected milk (38.4 to 37.1kg/d) with increasing levels of tannin supplementation. In addition, there was a negative linear effect for milk urea N (14.0 to 12.9mg/dL), milk protein yield (1.20 to 1.15kg), and concentration (2.87 to 2.83%). Furthermore, the change in milk protein concentration tended to be quadratic, and predicted maximum was 2.89% for a tannin mixture fed at 0.47% of dietary DM. Tannin supplementation reduced ruminal NH3-N (11.3 to 8.8mg/dL), total branched-chain volatile fatty acid concentration (2.97 to 2.47mol/100mol), DM, organic matter, CP, and neutral detergent fiber digestibility. Dietary tannin had no effect on intake N (587±63g/d), milk N (175±32g/d), or N utilization efficiency (29.7±4.4%). However, feeding tannin extracts linearly increased fecal N excretion (214 to 256g/d), but reduced urinary N (213 to 177g/d) and urinary urea N (141 to 116g/d) excretion. Decreasing dietary CP did not influence milk production, but increased N utilization efficiency (milk N/N intake; 0.27 to 0.33), and decreased milk urea N (15.4 to 11.8mg/dL), ruminal NH3-N (11.0 to 9.3mg/dL), apparent digestibility of DM (66.1 to 62.6%), organic matter (68.2 to 64.3%), and CP (62.9 to 55.9%), as well as urinary N excretion (168 vs. 232g/d). Results of this study indicated beneficial effects of 0.45% tannin extract in the diet on milk protein content. Increasing tannin extract levels in the diet lowered urinary N excretion, but had detrimental effects on DM intake, milk protein content, milk protein yield, and nutrient digestibility. |
A reevaluation of midbrain and diencephalic projections to the inferior olive in rat with particular reference to the rubro-olivary pathway.
Projections from the midbrain and caudal diencephalon to the inferior olivary nucleus (ION) in the rat were investigated by using anterograde and retrograde tracing techniques. Particular attention was directed to studying the projection from the red nucleus (RN) to the ION. Tritiated leucine was stereotaxically injected into the RN in six animals. The injection sites and the portion of the medulla containing the ION were processed for autoradiography. In three cases, the injections were largely confined to the RN. In these instances, no terminal labeling was noted in the ION. In the other three animals, the injections spread beyond the RN to include, in one or another of the rats, such areas as the mesencephalic reticular formation (MRF) between the RN and the periaqueductal gray (PAG), the region around the fasciculus retroflexus (FR), the prerubral area, the MRF dorsolateral to the RN, or the region dorsal to the medial geniculate body. In these animals, terminal labeling was observed ipsilaterally in the ION, chiefly in the caudal one-half of the medial accessory olive and in the dorsal lamella and lateral bend of the principal olive. Retrograde transport of lectin-conjugated horseradish peroxidase (WGA-HRP) or the fluorescent dye fast blue (FB), injected into the ION, was used to localize regions in the midbrain and caudal diencephalon containing neurons which project to the inferior olive. Following WGA-HRP injections into the ION, retrogradely labeled neurons were found in the ipsilateral ventral PAG, the MRF between the PAG and the RN, the MRF dorsolateral to the RN, and the area dorsal to the medial geniculate body. Large numbers of heavily labeled neurons were found in the area surrounding the FR. No labeled cells were noted in the RN or the nucleus of Darkschewitsch (ND). Subsequent to FB injections into the ION, fluorescent neurons were observed in the same regions which contained HRP-labeled neurons in the experiments using the enzyme as a tracer. Additionally, very small numbers of FB-labeled neurons were found in the RN and the ND. These results indicate that, in the rat, descending input from the midbrain and caudal diencephalon to the ION arises chiefly from a medially located column of neurons extending from the level of the rostral RN to the region of the FR. Little, if any, input to the ION is derived from the ND or RN in this species, unlike the cat, in which a substantial projection to the ION arises from the ND and the RN. |
[Effect of hydrogen-rich water on the CD34 expression in lesion boundary brain tissue of rats with traumatic brain injury].
To observe the effect of hydrogen-rich water on CD34 expression and angiogenesis in lesion boundary brain tissue of rats with traumatic brain injury (TBI). A total of 54 adult male Sprague-Dawley (SD) rats were divided into three groups using a random number table: a sham-operated group (sham group), a trauma group (TBI group), and a trauma + hydrogen-rich water group (TBI+HW group). The rats in each group were further subdivided into 1-, 3-, and 7-day subgroups according to the time point after trauma, with 6 rats in each subgroup. The TBI model was reproduced using a modified Feeney free-fall impact method, and the rats in the sham group underwent craniotomy without brain impact. The rats in the TBI+HW group received an intraperitoneal injection of hydrogen-rich water (5 mL/kg) after TBI model reproduction and then once a day until sacrifice, while the rats in the sham and TBI groups received the same volume of normal saline. Neurological severity scores (NSS) for neurologic deficits were calculated at the corresponding time points, after which the rats were sacrificed and brain tissue was harvested within 3 mm of the lesion boundary. After hematoxylin-eosin (HE) staining, the pathological changes in lesion boundary brain tissue were observed under a light microscope. CD34+ cells were detected by immunohistochemical analysis and used as a marker to count newly formed capillary sprouts around the injured brain tissue. The protein expression of CD34 was determined by Western blotting. NSS scores at all time points in the sham group were 0. NSS scores in the TBI and TBI+HW groups decreased over time after TBI, and the decrease was more pronounced in the TBI+HW group: scores at 3 days and 7 days were significantly lower than those of the TBI group (3 days: 8.67±0.52 vs. 11.56±1.94; 7 days: 7.33±0.52 vs. 8.17±0.98; both P < 0.05). Under the light microscope, the brain tissue of rats in the sham group was normal. After injury, pathological changes in lesion boundary brain tissue in the TBI group were characterized by obvious hemorrhagic necrosis, severe brain edema, extensive degeneration and necrosis of nerve cells, and inflammatory cell infiltration; these changes were most obvious at 3 days. The edema area in the TBI+HW group was slightly smaller than that in the TBI group, and the surrounding edema was slightly reduced. Immunohistochemistry showed that only a very small number of newly formed capillaries were found in the sham group. The number of newly formed capillaries in lesion boundary brain tissue gradually increased over time in the TBI group. The increase was greater in the TBI+HW group, which had significantly more capillaries than the TBI group at 3 days and 7 days after injury (cells/HP: 10.59±1.88 vs. 8.61±1.22 at 3 days, 23.20±3.16 vs. 17.01±2.64 at 7 days, both P < 0.05). Western blotting showed that the expression of CD34 protein at all time points in the TBI group was significantly increased as compared with that of the sham group. The expression of CD34 protein at 1 day and 3 days in the TBI+HW group was slightly, but not significantly, increased as compared with that of the TBI group, but it was significantly up-regulated at 7 days after injury and was significantly higher than that of the TBI group (gray value: 1.36±0.36 vs. 0.74±0.08, P < 0.05).
Hydrogen-rich water promotes the homing of CD34+ cells to the site of injured tissue in rats with TBI, is involved in angiogenesis, and improves clinical outcomes during brain functional recovery. |
Pharmacologic validation of human tumor clonogenic assays based on pleiotropic drug resistance: implications for individualized chemotherapy and new drug screening programs.
Because of the bias toward successful cloning of human tumor cells from more advanced malignancies, alternative approaches to clinical correlations of drug resistance are needed to determine the validity of the human tumor clonogenic assay (HTCA) as a clinically useful test. Capitalizing on the prevalence of clinical drug resistance among these advanced malignancies, we have taken an independent approach to testing the validity of HTCAs based upon pharmacologic principles rather than tumor response. A database of results from drug sensitivity/resistance testing in 1,777 HTCAs has been examined retrospectively for specimens exhibiting either the MDR1 or Topo-II pleiotropic drug resistance phenotype. Twenty specimens were identified as MDR1 based upon test results showing resistance to adriamycin, vinca alkaloid, and etoposide. Test results with mitomycin-c confirmed the MDR1 phenotype in eight out of nine of these specimens. Seven out of eight of the confirmed MDR1 samples were resistant to either cis-platinum or alkylating agents or to both. There was no significant difference in the 5-fluorouracil resistance of these MDR1 specimens and the database as a whole, demonstrating the specific nature of this drug resistance phenotype in vitro. One specimen, a squamous carcinoma of the lung, was mitomycin-c sensitive, even though it exhibited all the other drug resistances characteristic of the MDR1 phenotype. Six specimens with the Topo-II phenotype were identified based upon resistance to adriamycin and etoposide with sensitivity to vinca alkaloids. Surprisingly, the Topo-II phenotype showed a strong association with increased cis-platinum resistance and a weaker one with decreased 5-fluorouracil resistance. Thus, 26/30 (87%) of analyzable specimens showed some form of clinically characterized multidrug resistance, illustrating how easily one can obtain 90% accuracy in predicting clinical drug resistance with HTCAs that are heavily biased by a disproportionate number of successful cloning assays with advanced malignancies. The data analysis also shows that prediction of adriamycin resistance based on lack of Topoisomerase II expression will not be very accurate, in contrast to a previous claim. Until cell culture technology can facilitate frequent successes in the cloning of early detected, drug-sensitive lesions, this bias will remain in HTCA databases, and studies comparing HTCA results with clinical response will continue to be uninformative. However, the in vitro identification of pleiotropic drug resistance phenotypes exactly analogous to those previously observed in patients provides pharmacologic validation at least for the prediction of drug resistance as measured by current HTCAs using suprapharmacologic drug concentrations. |
The nurse's role in preventive care in the field of community nursing.
According to published reports from the WHO, health care is undergoing a transformation that reflects the increasing importance of community care based on social, group, and individual needs. Community health care is provided by multidisciplinary teams, with nurses occupying irreplaceable positions. Nurse competencies constitute significant potential in the area of community based preventive care as well as the more traditional roles in treatment and recovery. Data was obtained from health care professionals and the public through a structured interview. The study population included 1,007 physicians, 1,005 nurses and 2,022 laypersons. Respondents were selected randomly with the aid of quotas. The parameters for the selection of health care workers (nurses and physicians) were constructed based on registration data from the Institute of Health Information and Statistics. Layperson selection was based on data from the Czech Statistical Office. The Statistical Analysis of Social Data program (version 1.4.4) was used to process the data, which was in the form of 1st and 2nd degree contingency tables. The dependence level was determined based on χ2 and other testing criteria (according to the character of the signs). The results show that respondents perceive the concept of a "community nurse" as a nurse working independently in local neighborhoods and communities. Results also showed that work in senior care, followed by home care, and care for chronically ill patients were the most preferred. A role for nurses in health care education centers was only supported by 13.1% of physicians, 13.8% of nurses, and 6.8% of laypersons. The results also reveal that community nursing is perceived by both health care professionals and laypersons as fieldwork (i.e. work not based in a hospital or clinic environment), yet, at the same time, it was perceived as work that dealt with people needing health care. The results also reflect the opinion that the establishment of an independent nurse in the workplace (in the form of preventive care) could lead to an increase in the quality of care for employees (65.7% of physicians and 70.8% of nurses), an improvement in workplace health education (33% of physicians and 34.7% of nurses) and would provide support for healthy work environments (31.4% of physicians and 30.4% of nurses). Our results lead us to conclude that the health care system in the Czech Republic needs to better utilize the potential of trained nurses in the field of community health care. Additionally, steps need to be taken to increase job opportunities and staffing for nurses wanting to work in community health and preventive care. |
[TRANSPLANTATION OF HUMAN AMNIOTIC EPITHELIAL CELLS IN TREATMENT OF HEPATIC FIBROSIS IN IMMUNE RATS].
To observe the survival, migration, and effect of human amniotic epithelial cells (hAECs) on hepatic fibrosis in immune rats so as to provide an experimental basis for clinical treatment with hAECs. Sixty-four 10-week-old male Sprague Dawley rats (weighing 220-280 g) were randomly divided into 4 groups, with sixteen rats in each group. A rat hepatic fibrosis model was induced in groups A, B, and C; hepatic fibrosis rats were injected with 4 × 10^6 hAECs in group A and with normal saline in group B, no treatment was given in group C, and group D served as the control group. After 2 weeks of transplantation, the expression of the human Alu gene repeat sequence was detected by a DNA-PCR method and human leucocyte antigen G (HLA-G) by immunohistochemical staining in heart, liver, spleen, kidney, lung, and brain in group A, and the percentage of positive expression was then compared between organs other than the spleen. Semi-quantitative analysis of liver fibrosis was performed on HE-stained sections according to the Chevallier semi-quantitative histological liver fibrosis scoring system, immunohistochemical staining for TGF-β1 was used to record an immunohistochemical score (ISH), and the concentrations of aspartate transaminase (AST), alanine aminotransferase (ALT), and albumin (ALB) were determined to analyze hepatic fibrosis. The Alu gene repeat sequence and HLA-G could be detected in liver, heart, brain, lung, and kidney in group A; the percentage of positive expression in the liver was significantly higher than that in the other organs (P < 0.05). The histological semi-quantitative score of group A (10.47 ± 3.20) was significantly lower than that of groups B and C (13.84 ± 3.46 and 13.85 ± 3.16) (P < 0.05), but no significant difference was found between groups B and C (P > 0.05). The ISH scores in groups A, B, C, and D were 3.60 ± 1.50, 5.38 ± 2.60, 5.50 ± 2.40, and 1.87 ± 1.36, respectively; scores in groups A, B, and C were significantly higher than in group D, and the score in group A was significantly lower than in groups B and C (P < 0.05), but there was no significant difference between groups B and C (P > 0.05). The concentrations of ALT and AST in groups A, B, and C were significantly higher than those in group D, and those in group A were significantly lower than in groups B and C (P < 0.05), but there was no significant difference between groups B and C (P > 0.05). The concentration of ALB in groups A, B, and C was significantly lower than that in group D, and that in group A was significantly higher than in groups B and C (P < 0.05), but there was no significant difference between groups B and C (P > 0.05). hAECs can survive in immune rats after intrasplenic transplantation and migrate to the liver, heart, brain, lung, and kidney, with the liver showing the greatest migration. The transplantation of hAECs in immune rats with cirrhosis can alleviate hepatic fibrosis and improve the serum indexes of liver function. |
The evaluation of amifostine for mucosal protection in patients with advanced loco-regional squamous cell carcinomas of the head and neck (SCCHN) treated with concurrent weekly carboplatin, paclitaxel, and daily radiotherapy (RT).
Concurrent chemotherapy and radiation have improved the outcome for patients presenting with locally advanced squamous cell carcinomas of the head and neck (SCCHN). These improvements have come at the cost of increased treatment-related toxicities. We previously reported the results of a phase II trial examining the role of concurrent carboplatin, paclitaxel, and daily radiotherapy (RT) in SCCHN. In an attempt to decrease these side effects, we conducted a prospective phase II trial evaluating the role of amifostine (Ethyol, MedImmune Oncology, Inc, Gaithersburg, MD) in patients treated with this concurrent chemoRT scheme. From April 2002 to September 2004, 19 patients with stage III-IV SCCHN were enrolled on a prospective phase II trial. Treatment consisted of daily RT delivered to 70.2 Gy (1.8 Gy/fx) with amifostine 500 mg IV (<1 hour before RT), and concurrent weekly carboplatin (100 mg/m2) and paclitaxel (40 mg/m2). Median age was 58.5 years (range, 48 to 70 years); the male-to-female ratio was 83%:17%; Caucasian versus other was 61%/39%. Tumor characteristics based on histology were: primary cancers of the oropharynx (55.6%); supraglottic larynx (16.7%); hypopharynx (16.7%); oral cavity (5.6%); and unknown primaries (5.6%). All patients presented with locally advanced, unresectable disease: T4 (50%), T3 (27.8%), and advanced nodal disease (N2b-N3) (78%). Toxicities were measured weekly during treatment and at each follow-up visit. Disease response to therapy was determined 2 months after completion of therapy. Seventeen patients were evaluable for response and survival at 2 months following completion of RT. Eighty-four percent completed the prescribed radiation treatment, and 84% of patients received more than six cycles of chemotherapy. The median number of missed chemotherapy cycles was 1.5 (range, 0 to 5 cycles). Fifty-six percent of patients received more than 90% of prescribed amifostine doses, with chemoRT-related toxicity being the most common reason for withholding the dose (77%). The median number of missed amifostine doses was three (range, 0 to 30 doses). Grade 3 toxicities associated with therapy were: mucositis and dysphagia (40% of patients each), dehydration (27%), xerostomia (20%), and dermatitis (20%); 53% of patients experienced grade 3 leukopenia, while grade 3/4 neutropenia developed in 20%/13%. No grade 4/5 nonhematologic toxicities were encountered. Forty percent of patients completed RT without unscheduled treatment breaks secondary to treatment-related toxicity. Median treatment-break time was 5 days (range, 0 to 20 days). Clinical complete response at both the primary site of disease and the neck was achieved in 75% of patients 2 months following completion of RT. Weekly carboplatin and paclitaxel administered concurrently with definitive RT and daily amifostine is well tolerated, with over 85% of patients completing therapy with acceptable toxicity. The addition of amifostine appears to decrease treatment-related toxicity without impacting efficacy. |
Sleep disorders: disorders of arousal? Enuresis, somnambulism, and nightmares occur in confusional states of arousal, not in "dreaming sleep".
In summary, the classical sleep disorders of nocturnal enuresis, somnambulism, the nightmare, and the sleep terror occur preferentially during arousal from slow-wave sleep and are virtually never associated with the rapid-eye-movement dreaming state. Original data are reported here which indicate that physiological differences from normal subjects, of a type predisposing the individual to a particular attack pattern, are present throughout the night. The episode, at least in the case of enuresis, appears to be simply a reinforcement of these differences to a clinically overt level. A number of features are common to all four sleep disorders. These had been shown previously to be attributable to the arousal itself. New data obtained by means of evoked potential techniques suggest that these common symptoms of the confusional period that follows non-REM sleep are related to alterations of cerebral reactivity, at least of the visual system. The symptoms which distinguish the individual attack types (that is, micturition, prolonged confusional fugues, overt terror) appear to be based upon physiological changes present throughout sleep which are markedly accentuated during arousal from slow-wave sleep. These changes may in some way be related to diurnal psychic conflicts. But, to date, it has proved impossible to demonstrate potentially causal psychological activity, dreaming or other forms of mental activity, or even a psychological void in sleep just preceding the attacks. The presence of all-night or even daytime predisposing physiological changes and the difficulty in obtaining any solid evidence of a preceding psychological cause explain, no doubt, why the results of efforts to cure the disorders at the moment of their occurrence (for example, by conditioning procedures in nocturnal enuresis) have been far from satisfactory. I stress the points that the attacks are best considered disorders of arousal and that the slow-wave sleep arousal episode which sets the stage for these attacks is a normal cyclic event. Indeed it is the most intense recurrent arousal that an individual regularly experiences. The most fruitful possibilities for future research would appear to be more detailed studies of those physiological changes that predispose individuals to certain types of attacks when they undergo intense arousal or stress; the reversal of these changes by psychological or pharmacological means; and more refined investigations of the physiological and psychological characteristics of the process of cyclic arousal from non-REM sleep. |
Quantitative testing in spinal cord injury: overview of reliability and predictive validity.
The objective of this study was to identify commonly used physiological outcome measures and summarize evidence on the reliability and predictive validity of quantitative measures used in monitoring persons with spinal cord injury (SCI). A systematic search of PubMed through January 5, 2012, was conducted to identify publications using common outcome measures in persons with SCI and studies that were specifically designed to evaluate the reliability and predictive validity of selected quantitative measures. Quantitative measures were defined as tests that quantify sensory and motor function, such as amount of force or torque, as well as thresholds, amplitudes, and latencies of evoked potentials that might be useful in studies and monitoring of patients with SCI. Reliability studies reporting intraclass correlation coefficients (ICCs) or weighted κ coefficients were considered for inclusion. Studies explicitly evaluating correlation between measures and specific functional outcomes were considered for predictive validity. From a total of 121 potentially relevant citations, 6 studies of reliability and 4 studies of predictive validity for quantitative tests met the inclusion criteria. In persons with incomplete SCI, ICCs for both interrater and intrarater reliability of electrical perceptual threshold (EPT) were ≥ 0.7 above the sensory level of SCI, but reliability was lower below the sensory level. Intraclass correlation coefficients for interrater and intrarater reliability of the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP) components ranged from 0.84 to 0.98. For electromyography, the ICC was consistently high for within-day tests. The overall quality of the majority of reliability studies was poor, owing to the potential for selection bias and small sample sizes. No classic validation studies were found for the selected measures, and evidence regarding the predictive validity of the measures was limited. Somatosensory evoked potentials (SSEPs) may be correlated with ambulatory capacity, as well as the Barthel Index and motor index scores, but this correlation was limited for evaluation of bladder function recovery in 3 studies that assessed the correlation between baseline or initial SSEPs and a specific clinical outcome at a later follow-up time. All studies used convenience samples and the overall sample quality was low. Evidence on the reliability and validity of the quantitative measures selected for this review is limited, and the overall quality of existing studies is poor. There is some evidence for the reliability of the EPT, dermatomal SSEPs, and the GRASSP to suggest that they may be useful in longitudinal studies of patients with SCI. There is a need for high-quality studies of reliability, responsiveness, and validity for quantitative measures to monitor the level and degree of SCI. |
Whole-brain functional connectivity during script-driven aggression in borderline personality disorder.
Intense anger and anger-related aggression are frequently reported by patients with borderline personality disorders (BPD). Recent results suggest that anger-related aggression and its control is associated with a complex interplay of different neural systems in BPD. To further investigate this, we complement standard activation and seed-based connectivity analyses by examining whole-brain changes in functional connectivity during anger and reactive aggression in BPD. We reanalyzed functional MRI data from 33 women with BPD, all of them fulfilling BPD criterion 8, "anger proneness", according to DSM-IV, and 30 healthy women. Subjects performed a script-driven imagery task consisting of four phases: baseline, anger-induction by a narrative of interpersonal rejection, a narrative of directing physical aggression towards others, and relaxation. We used a data-driven, spatially constrained spectral clustering approach to parcellate the brain into 200 regions. For each script-phase and subject, we computed the full connectivity matrix using wavelet coefficient correlations in the 0.05-0.10 Hz range. We calculated the individual increase in connectivity from baseline to the anger-induction and physical aggression phases by subtracting the corresponding connectivity matrices per subject, as well as the increase and decrease from the anger-induction to the aggression phase. We then applied permutation-based sampling to determine a combined threshold on the strength of individual connections and the size of the discovered networks for these difference matrices. We discovered a single, large network showing a significantly stronger increase in connectivity from baseline to the aggression phase in female patients with BPD compared to healthy women. This network consisted of regions in the anterior and posterior cingulate cortex, precuneus, dorsomedial prefrontal cortex, superior and middle temporal gyrus, hippocampus, insula, ventrolateral and dorsolateral prefrontal cortex, superior parietal lobe, thalamus, precentral and postcentral gyrus, caudate, pallidum, cerebellum, middle occipital lobe, lingual gyrus, calcarine sulcus, and fusiform gyrus. Hub regions with highest node centrality were found in the right caudate and left thalamus. We found no significant differences for the increase of connectivity from baseline to anger-induction, as well as for the increase or decrease from the anger-induction to the aggression phase. We identified a large network showing a significantly stronger increase in connectivity from baseline to the aggression phase in female patients with BPD compared to healthy women. The regions constituting this network belong to four previously described functional networks: The frontoparietal cognitive control network, the extended default mode network, the visual system, and the motor system. This stronger increase in connectivity between regions of different functional brain systems associated with cognitive control of behavior, socio-affective and self-referential thinking, as well as salience processing and emotion regulation, visual perception, and action is mediated via hubs in the thalamus and caudate, i.e., core components of the thalamocorticostriatal motor loop essential for action selection and initiation. 
These findings suggest increased interaction of prefrontal cognitive control processes with thalamocorticostriatal action-selection processes in female patients with BPD during the processing of aggressive action impulses, which are facilitated by states of high emotional salience and associated processes of self-referential and social processing, and ineffective emotion regulation. |
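For readers who want to see the shape of the phase-contrast connectivity computation described in this abstract, here is a minimal Python/NumPy/SciPy sketch. It substitutes Pearson correlations of 0.05-0.10 Hz band-pass-filtered regional time series for the study's wavelet coefficient correlations, and simply subtracts the baseline matrix from the aggression-phase matrix per subject; the parcellation, the spatially constrained spectral clustering, and the permutation-based thresholding are not shown. All function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(timeseries, fs, low=0.05, high=0.10, order=3):
    """Band-pass filter each region's time series (rows = time points, columns = regions)."""
    nyquist = fs / 2.0
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, timeseries, axis=0)

def connectivity(timeseries, fs):
    """Region-by-region correlation matrix of the band-limited signals."""
    return np.corrcoef(bandpass(timeseries, fs), rowvar=False)

def connectivity_increase(baseline_ts, aggression_ts, fs):
    """Per-subject increase in connectivity from the baseline to the aggression phase.

    Pearson correlation stands in for the wavelet coefficient correlations used in
    the study (an assumption made for this sketch).
    """
    return connectivity(aggression_ts, fs) - connectivity(baseline_ts, fs)
```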
Neurodevelopmental outcome of severe neonatal hemolytic hyperbilirubinemia.
We recruited 128 neonates with hyperbilirubinemia over a 5-year period (1995-2000) to study the short- and long-term effects of hemolytic hyperbilirubinemia on the auditory brainstem pathway and neurodevelopmental status. These children were divided into two groups: (1) a hemolytic group (n = 29; ABO incompatibility [n = 19], Rh incompatibility [n = 1], glucose-6-phosphate dehydrogenase deficiency [n = 8] and both ABO incompatibility and glucose-6-phosphate dehydrogenase deficiency [n = 1]) and (2) a nonhemolytic group (n = 99). All received phototherapy. Exchange transfusions were performed for four (13.8%) in the hemolytic group and three (3%) in the nonhemolytic group. The brainstem auditory evoked potential was recorded at a mean age of 3.2 months in the hemolytic group and 3.1 months in the nonhemolytic group. Serial brainstem auditory evoked potential assessments were performed until 2 years of age (3 in the hemolytic group and 18 in the nonhemolytic group). All had regular physical, neurologic, visual, and auditory evaluation until 3 years of age. The rate of exchange transfusion was significantly higher in the hemolytic group than in the nonhemolytic group (P < .05). Brainstem auditory evoked potential abnormalities at the initial assessment occurred in three (10.4%) in the hemolytic group (all related to ABO incompatibility) and nine (9.1%) in the nonhemolytic group. At 2 years, the brainstem auditory evoked potential returned to normal except in three cases with a slightly increased hearing threshold (one [3.5%] in the hemolytic group at 60 dB nHL and two [2%] in the nonhemolytic group at 50 dB nHL]). There were no significant differences in the rate of brainstem auditory evoked potential abnormalities at the initial or subsequent assessments between both groups. All except five cases had a normal neurodevelopmental outcome at 3 years (three [two with ABO incompatibility and one with glucose-6-phosphate dehydrogenase deficiency] in the hemolytic group [10.4%] and two [2%] in the nonhemolytic group). All had mild motor delay and hypotonia, which returned to normal at 3 years. The rate of abnormal neurodevelopmental outcome was higher in the hemolytic group than in the nonhemolytic group, although with no significant difference between both groups (P = .08). All five cases in both groups with abnormal neurodevelopment had a normal brainstem auditory evoked potential at the initial assessment. There was no relationship between the abnormal initial brainstem auditory evoked potential and the final neurodevelopmental outcome. The toxic effect of hyperbilirubinemia on the auditory brainstem pathway and neurodevelopmental status in our cohort was transient. The prognosis of neonatal hemolytic hyperbilirubinemia in our Chinese cohort is excellent, possibly owing to an aggressive early-intervention approach. |
Unique Complications of Percutaneous Endoscopic Lumbar Discectomy and Percutaneous Endoscopic Interlaminar Discectomy.
Percutaneous endoscopic discectomy (PED) includes 2 main procedures: percutaneous endoscopic lumbar discectomy (PELD) and percutaneous endoscopic interlaminar discectomy (PEID), both of which are minimally invasive surgical procedures that effectively deal with lumbar degenerative disorders. Because of the challenging learning curve for the surgeon and the individual characteristics of each patient, preventing and avoiding complications is difficult. The most common complications, such as nucleus pulposus omission, nerve root injury, dural tear, visceral injury, nerve root induced hyperalgesia or burning-like nerve root pain, postoperative dysesthesia, posterior neck pain, and surgical site infection, are difficult to avoid; however, more focus on these issues perioperatively may be in order. Additionally, unique and unexpected complications can also occur, such as retroperitoneal hematoma (RPH), intraoperative seizures, and thrombophlebitis, among others. We aim to delineate unique complications during PED and accumulate strategies to prevent significant morbidity and improve surgical techniques. A retrospective cohort study of patients undergoing PEID or PELD from October 2014 to January 2016. Affiliated hospitals of Qingdao University. Patients with lumbar disc herniation (LDH) who underwent PEID and PELD were retrospectively analyzed. Complications were recorded and analyzed pre and postoperatively. We assessed clinical outcomes using the visual analog scale (VAS) and Oswestry Disability Index (ODI) and classified the results into "excellent," "good," "fair," or "poor" based on the modified MacNab criteria. All of the patients were followed for more than one year to evaluate their recovery from complications. From October 2014 to January 2016, 426 patients with LDH underwent PEID (106 cases) or PELD (320 cases). Common complications and occurrence rates were as follows: the incomplete removal of herniated discs was 1.4% (6/426), recurrence 2.8% (12/426), nerve root injury 1.2% (5/426), dural tear 0.9% (4/426), and nerve root induced hyperalgesia or burning-like nerve root pain 2.3% (10/426); no posterior neck pain or surgical site infection occurred. Unique complications included: passage of the working channel through the spinal canal into the disc space (one case), super-elastic nerve hook caught by exiting nerve root (one case), epidural hematoma (one case), radicular artery injury and massive bleeding (one case) which was revised by micro-endoscopic discectomy, and intraoperative seizure (one case). No serious consequences occurred after active medical intervention, and most patients had good recovery by 3 months postoperatively with physical therapy. The main limitations of this study are the retrospective study design, limited case number, and short follow-up period. PEDs are effective and minimally invasive methods for the surgical treatment of LDH, causing fewer complications due to the very minimal operational trauma for the muscle-ligament complex and stability of the spine. Nevertheless, because of the difficult learning curve for surgeons, lack of experience with the requisite surgical techniques, and enhanced clinical responsibility, a variety of problems may occur. Especially concerning are the unique complications mentioned here, which potentially lead to severe injury for the patient and require diligent preventive measures. Unique complications, epidural, hematoma, interlaminar, transforaminal, PEID, PELD. |
Vitamin E for Alzheimer's disease and mild cognitive impairment.
Vitamin E is a dietary compound that functions as an antioxidant scavenging toxic free radicals. Evidence that free radicals may contribute to the pathological processes of cognitive impairment including Alzheimer's disease (AD) has led to interest in the use of Vitamin E in the treatment of Alzheimer's disease and Mild Cognitive Impairment (MCI). To assess the efficacy of Vitamin E in the treatment of Alzheimer's disease and prevention of progression of Mild Cognitive Impairment to Alzheimer's disease. The Cochrane Dementia and Cognitive Improvement Group's Specialized Register was searched on 8 January 2007 using the following terms: "Vitamin E", vitamin-E, alpha-tocopherol. The CDCIG Register contains records from major health care databases and ongoing trial databases and is updated regularly. All unconfounded, double-blind, randomized trials in which treatment with Vitamin E at any dose was compared with placebo for patients with Alzheimer's disease or Mild Cognitive Impairment were included. Two reviewers independently applied the selection criteria, assessed study quality, and extracted and analysed the data. For each outcome measure, data were sought on every patient randomized. Where such data were not available, an analysis of patients who completed treatment was conducted. Only 2 studies met the inclusion criteria. The primary outcome used in the AD study was survival time to the first of 4 endpoints: death, institutionalisation, loss of 2 out of 3 basic activities of daily living and severe dementia (defined as a global Clinical Dementia Rating of 3). The investigators reported the total numbers in each group who reached the primary endpoint within two years for participants completing the study ("completers"). There appeared to be some benefit from Vitamin E, with fewer participants reaching an endpoint - 58% (45/77) of completers compared with 74% (58/78) - a Peto odds ratio of 0.49, 95% confidence interval 0.25 to 0.96. However, more participants taking Vitamin E suffered a fall (12/77 compared with 4/78; odds ratio 3.07, 95% CI 1.09 to 8.62). It was not possible to interpret the reported results for specific endpoints or for secondary outcomes of cognition, dependence, behavioural disturbance and activities of daily living. The primary outcome used in the MCI study, which had 769 participants (257 in the Vitamin E group and 259 in the placebo group; a third donepezil group of 253 was not included in this review), was the time to progression from MCI to possible or probable AD. A total of 214 of the 769 participants had progression to dementia, with 212 being classified as having possible or probable AD. There was no significant difference in the probability of progression from MCI to AD between the Vitamin E group and the placebo group. There was no significant difference between the placebo group and the Vitamin E group in adverse events. Five subjects died in each group, and 72 discontinued treatment in the Vitamin E group and 66 in the placebo group. There is no evidence of efficacy of Vitamin E in the prevention or treatment of people with AD or MCI. More research is needed to identify the role of Vitamin E, if any, in the management of cognitive impairment. |
The degree of extramural spread of T3 rectal cancer: an appeal to the American Joint Committee on Cancer.
The T3 category of the TNM classification includes over 60% of all rectal tumours and encompasses greater variance in cancer-specific end-points than any other T category. The most recent edition of the cancer staging handbook of the American Joint Committee on Cancer (AJCC), dated 2010, does not divide T3 tumours into subgroups that reflect cancer-specific outcome more sensitively. The original aim of the present study was to review the literature to assess the influence of the degree of extramural extent of T3 rectal cancer on local recurrence and survival. An article written by the authors was accepted for publication but was withdrawn immediately after they became aware of the publication of the 4th edition of the TNM Supplement by the Union for International Cancer Control, dated 2012, which was not accessible by the search system used. This article dealt with the subdivision of the T3 category, although this was not included in the most up-to-date AJCC guidelines and was stated to be 'entirely optional'. Medline, PubMed and Cochrane Library searches were performed to identify all studies that investigated the degree of extramural spread and its relationship to survival and local recurrence. Twenty-two studies were identified, of which 12 assessed the degree of histopathological extramural spread measured in millimetres. In 18 of the 22 studies, the degree of extramural spread was a statistically significant prognostic factor for survival and local recurrence. Analysis of the studies indicated that the subdivision of category T3 rectal cancer into two subgroups, with extramural spread of ≤ 5 mm or more than 5 mm, resulted in markedly different survival and local recurrence rates. The data were insufficient to allow validation of any finer subdivision. Measurement of the extent of extramural spread by MRI before any treatment agreed with the histopathological measurement in the surgical specimen to within 1 mm. The extent of extramural spread in T3 rectal cancer measured in millimetres is a powerful prognostic factor. A subdivision of T3 into T3a (≤ 5 mm) and T3b (more than 5 mm) appears to give the greatest discrimination of local recurrence and survival. Preoperative T3 subdivision by MRI has the same sensitivity as histopathological examination of the resected specimen. Given the clinical need for the pretreatment classification of the T3 category for oncological management planning, the evidence strongly indicates that the subdivision of the T3 category by MRI should be formally considered as part of the TNM staging system for rectal cancer. |
Association of the Mandatory Medicare Bundled Payment With Joint Replacement Outcomes in Hospitals With Disadvantaged Patients.
Medicare's Comprehensive Care for Joint Replacement (CJR) model rewards or penalizes hospitals on the basis of meeting spending benchmarks that do not account for patients' preexisting social and medical complexity or high expenses associated with serving disadvantaged populations such as dual-eligible patients (ie, those enrolled in both Medicare and Medicaid). The CJR model may have different implications for hospitals serving a high percentage of dual-eligible patients (termed high-dual) and hospitals serving a low percentage of dual-eligible patients (termed low-dual). To examine changes associated with the CJR model among high-dual or low-dual hospitals in 2016 to 2017. This cohort study comprised 3 analyses of high-dual or low-dual hospitals (n = 1165) serving patients with hip or knee joint replacements (n = 768 224) in 67 treatment metropolitan statistical areas (MSAs) selected for CJR participation and 103 control MSAs. The study used Medicare claims data and public reports from 2012 to 2017. Data analysis was conducted from February 1, 2019, to August 31, 2019. The CJR model holds participating hospitals accountable for the spending and quality of care during care episodes for patients with hip or knee joint replacement, including hospitalization and 90 days after discharge. The primary outcomes were total episode spending, discharge to institutional postacute care facility, and readmission within the 90-day postdischarge period; bonus and penalty payments for each hospital; and reductions in per-episode spending required to receive a bonus for each hospital. In total, 1165 hospitals (291 high-dual and 874 low-dual) and 768 224 patients with joint replacement (494 013 women [64.3%]; mean [SD] age, 76 [7] years) were included. An episode-level triple-difference analysis indicated that total spending under the CJR model decreased at high-dual hospitals (by $851; 95% CI, -$1556 to -$146; P = .02) and low-dual hospitals (by $567; 95% CI, -$933 to -$202; P = .003). The size of decreases did not differ between the 2 groups (difference, -$284; 95% CI, -$981 to $413; P = .42). Discharge to institutional postacute care settings and readmission did not change among both hospital groups. High-dual hospitals were less likely to receive a bonus compared with low-dual hospitals (40.3% vs 59.1% in 2016; 56.9% vs 76.0% in 2017). To receive a bonus, high-dual hospitals would be required to reduce spending by $887 to $2231 per episode, compared with only $89 to $215 for low-dual hospitals. The study found that high- and low-dual hospitals made changes in care after CJR implementation, and the magnitude of these changes did not differ between the 2 groups. However, high-dual hospitals were less likely to receive a bonus for spending cuts. Spending benchmarks for CJR would require high-dual hospitals to reduce spending more substantially to receive a financial incentive. |
[Influence of non-occupational sources on the levels of biomarkers of internal dose for use in biological monitoring of occupational exposure to extremely low concentrations of benzene].
To study how traditional (t,t-muconic acid, t,t-MA, and S-phenylmercapturic acid, SPMA) and new (urinary benzene) urinary biomarkers of internal dose can help to exclude an occupational source of exposure to extremely low concentrations of benzene, and to analyse the influence that non-occupational sources of exposure, such as cigarette smoking and urban pollution, can have on the levels of these biomarkers. Assessment was made of 6 workers employed at a groundwater purification plant polluted by benzene (exposed) and 6 administrative clerks employed at the same plant (controls); both groups included smokers and non-smokers. Environmental monitoring (fixed and personal samplings lasting 8 hours) and biological monitoring (determinations of t,t-MA, SPMA, urinary benzene, and urinary creatinine so as to apply suitable adjustments) were performed in exposed workers on 10 successive days, including rest days (background exposure), and in controls only once. Airborne benzene was always below the limit of detection of the analytical method in both fixed and personal samplings performed on exposed workers and controls during working days, whereas personal samplings performed on exposed workers during rest days showed benzene concentrations even higher than 5 microg/m3, the limit value for ambient air quality. Concentrations of t,t-MA, SPMA and urinary benzene did not differ between exposed workers, whether studied on working or rest days, and controls, and were largely within the reference value range for the Italian population. All biomarkers of internal dose examined in the study showed significantly higher values in smokers than in non-smokers. In the latter, SPMA was always below the limit of detection, while urinary benzene was above the limit of detection in 60.0% and 87.5% of the determinations performed on working and rest days, respectively. In situations of occupational exposure to extremely low doses of benzene, or of absence of exposure, an integrated environmental and biological monitoring approach, involving the determination of SPMA and/or urinary benzene together with a careful evaluation of the factors determining non-occupational exposure to the toxicant, seems indispensable in order to exclude the presence of occupational exposure. In these particular situations of occupational exposure to benzene, the interpretation of the results of environmental and biological monitoring should consider not only the TLV or BEI but also the limit value for ambient air quality and the reference values for the general population, since benzene can exert genotoxic and carcinogenic effects even at extremely low concentrations of the toxicant. |
Septic arthritis in childhood.
The purpose of the present study was to determine whether there was a difference between septic arthritis (SA) combined with osteomyelitis and SA alone with regard to clinical and laboratory findings, such as symptoms on admission, age, sex, joint involvement and isolated micro-organisms, and whether there was a relationship between age and joint involvement in SA. In addition, we also aimed to determine the prognostic factors in SA. The clinical and laboratory findings of 40 patients who were diagnosed with SA in our hospital were reviewed retrospectively. The diagnosis of SA was made according to the following criteria: immediate joint fluid aspiration (culture and Gram's stain positive, leukocyte count markedly elevated and glucose level low), positive blood culture and positive cultures from other possible sites of infection. Of the 40 patients, 22 were boys and 18 were girls, and the male to female ratio was 1.2/1. Patient ages ranged from 6 months to 14 years (mean (+/- SD) 8.44 +/- 4.18 years). The most frequently observed symptoms were fever (52.5%), arthralgia (50%) and joint swelling (45%). Thirty-four patients (85%) had only one joint involved and six patients (15%) had more than one joint involved. In total, arthritis was diagnosed in 49 joints. The joints diagnosed as having arthritis were the following: knee (n = 18), hip (n = 12), ankle (n = 12), elbow (n = 3), shoulder (n = 2), wrist (n = 1) and interphalangeal joint (n = 1). Of the 40 patients, 21 (52.5%) had SA alone and 19 (47.5%) had arthritis together with osteomyelitis. While arthritis was diagnosed in 27 joints in the group of patients with SA alone, it was diagnosed in 22 joints in the group of patients with SA combined with osteomyelitis; in the latter, an increase was not observed in the number of joints involved. Joint fluid culture was positive in 22 (55%) patients; growth of Staphylococcus aureus was observed in 20 cases, and Pseudomonas aeruginosa and Staphylococcus epidermidis were isolated in one patient each. In contrast, in one patient, arthritis occurred during meningococcal meningitis (in this patient, Gram-negative diplococci were isolated from a cerebrospinal fluid culture). Patients with SA combined with osteomyelitis and those with SA alone were compared for symptoms on admission, the history of trauma and antibiotic use, sex, age, fever, joint involvement, anemia, leukocytosis and micro-organisms isolated from joint fluid and blood; there were no significant differences in these parameters between the two groups (P > 0.05). In addition, we found that there was no relationship between age and joint involvement in SA and no effect of micro-organisms on mortality. Three of 40 patients died; the mortality rate was 7.3%. Of the three patients who died, two had SA alone and one had SA combined with osteomyelitis. The primary disease was sepsis in these three patients; S. aureus was cultured from blood in two patients, and Gram-positive cocci were observed on examination of the joint fluid in the other patient. We would like to emphasize that SA was mono-articular in 85% of patients, frequently localized in the knee, hip and ankle; joint fluid culture was positive in 55% of patients; bacteria were isolated from one or more cultures of blood, joint fluid, pus or bone in 70% of patients; and the most commonly isolated micro-organism was S. aureus.
In addition, it must be pointed out that children younger than 2 years of age with fever, a positive trauma history and/or abnormal joint findings should be carefully examined for SA because the rate of SA was lower (7.5%) than expected in this age group. We also found that the mortality of SA was not influenced by age, joint involvement or bacterial agents, and there was no significant difference in symptoms on admission, the history of trauma and antibiotic use, sex, age, fever, joint involvement, anemia, leukocytosis or micro-organisms isolated from joint fluid and blood between patients with SA alone and those with SA combined with osteomyelitis. |
[Late results of myocardial revascularization in patients with coronary artery endarterectomy].
The aim of this study was to evaluate long-term results after myocardial revascularization in patients with diffuse and distal coronary disease, and to compare this procedure with the classical approach--indirect myocardial revascularization (revascularization without endarterectomy). This retrospective study was conducted over a period of three years and includes patients operated on between January 1, 1985 and December 31, 1990 at the University Clinic of Cardiovascular Surgery, Novi Sad. A total of 500 patients were included and divided into two groups. The investigated group consisted of 251 patients with endarterectomy and the control group of 249 patients without endarterectomy. Other parameters (age, gender, preoperative hemodynamic parameters, etc.) were practically the same. Postoperative mortality (PM) during the first 30 postoperative days was 4.64% in the investigated group and 1.97% in the control group (total PM = 2.66%). The main causes of death were cardiac (3.74%), and the rest were respiratory, renal and cerebral. The highest postoperative mortality according to the localization of endarterectomy was for the left anterior descending artery (LAD) at the position of the first septal artery (36.36%). The follow-up study included 500 operated patients. The mean follow-up period was 9 years (0-13 years). Cumulative survival curves and postoperative myocardial infarction curves made by the Wilcoxon (Gehan) and Kaplan-Meier methods showed no statistically significant difference between groups after 13 years of follow-up. A lower incidence of new angina was found in the investigated group (p < 0.01). Most patients showed good physical condition, good tolerance of the stress test (Bruce protocol) and no significant impairment of ejection fraction. Despite its long history and development, endarterectomy of the coronary arteries is one of the most controversial methods in cardiac surgery. Application of this method has been very restricted, mostly because of its complexity and the very variable results from one institution to another. Endarterectomy of the first septal artery has the highest operative risk, but it is the method of choice for full revascularization of this region. Despite higher operative mortality, the immediate and long-term results of this study show that endarterectomy of the coronary arteries is a method with very acceptable operative risk. Endarterectomy is a good and effective method for direct myocardial revascularization in cases of diffuse coronary disease. It is the best procedure for revascularization of the septum. The number of endarterectomies and low ejection fraction are independent predictors of early and long-term mortality. Endarterectomy is also a method of choice in patients with low ejection fraction and a poor coronary bed. Frequent and repeated application of angioplasty, the higher incidence of diffuse and distal coronary disease and the shortage of donors for heart transplantation will increase the application of this method. In the future we expect further improvement and complete affirmation of endarterectomy of the coronary arteries. |
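The cumulative survival curves mentioned above are product-limit (Kaplan-Meier) estimates. As a rough illustration of how such a curve is built from follow-up times and event indicators, the sketch below uses entirely hypothetical data; it is not the study's dataset, and the Wilcoxon (Gehan) between-group comparison is not reproduced.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up in years; events: 1 = death observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, survival, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t, deaths, n_at_t = times[order[i]], 0, 0
        while i < len(order) and times[order[i]] == t:   # group ties at the same time
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
    return curve

# Hypothetical follow-up (years) and event indicators, for illustration only
times  = [0.1, 1.0, 2.5, 4.0, 4.0, 6.0, 8.0, 9.0, 11.0, 13.0]
events = [1,   0,   1,   1,   0,   0,   1,   0,   0,    0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:5.1f} y   S(t) = {s:.2f}")
```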
Tei index for prenatal diagnosis of acute fetal hypoxia due to intermittent umbilical cord occlusion in an animal model.
To study the effectiveness of the pulsatility index for veins of the ductus venosus (DV-PIV) and the Tei index in a prospective assessment of fetal hypoxic-ischemic brain damage in a near-term ovine fetus model with intermittent umbilical cord occlusion (UCO). Twelve fetal sheep were studied; umbilical cord occlusion was performed in the experimental group animals by complete inflation of an occluder cuff for 90 s every 30 min for approximately 2.5 h. Fetal arterial blood was sampled 5 min before the first umbilical cord occlusion, at approximately 60 s into the first umbilical cord occlusion, and 3 min after each occlusion for blood gases, pH, neuron-specific enolase (NSE) and S100B. Doppler measurements and Doppler echocardiographic examinations were performed 5 min before the first umbilical cord occlusion and 3 min after each successive occlusion. In experimental group animals, UCO caused a large decline in arterial PaO(2) (to approximately 7.70 mmHg, p < 0.01), a modest decline in pH (to approximately 7.24, p < 0.01), and a modest rise in PaCO(2) (to approximately 53.31 mmHg, p < 0.01), with a return more or less to baseline after occluder release; these changes were significant compared with the control animals (all p < 0.01), with cumulative changes in response to repetitive cord occlusions. The DV-PIV waveforms, the right ventricular (RV) and left ventricular (LV) Tei indices, and the serum levels of NSE and S100B increased with cord occlusions (all p < 0.05) and were significantly higher than in the control animals (all p < 0.05), with cumulative changes in response to repetitive cord occlusions. The RV and LV Tei indices were significantly correlated with PaO(2) (r = - 0.684, p < 0.01 and r = - 0.725, p < 0.01), PaCO(2) (r = 0.682, p < 0.01 and r = 0.780, p < 0.01), pH (r = - 0.538, p < 0.01 and r = - 0.681, p < 0.01), NSE (r = 0.653, p < 0.01 and r = 0.687, p < 0.01), and S100B (r = 0.606, p < 0.01 and r = 0.640, p < 0.01). Significant but weaker correlations were also present between DV-PIV and the parameters considered. Umbilical cord occlusion during the latter part of pregnancy, sufficient to cause significant hypoxemia and acidosis, results in a significant increase in DV-PIV, the RV and LV Tei indices, and the serum levels of NSE and S100B. There was a strong correlation between the RV and LV Tei indices and blood gases, pH, NSE and S100B during hypoxia. Therefore, the Tei index might be an easy and useful quantitative parameter for assessing fetal hypoxia-ischemia. |
Image simulation and a model of noise power spectra across a range of mammographic beam qualities.
The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems over a range of beam qualities. Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting, at each spatial frequency, a quadratic function of the NPS against E. A quantum noise correction factor, which was dependent on beam quality, was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat-field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference between the modeled and measured NPS to generally within 4%. The use of the quantum noise correction improved the conversion of ASEh images to CRc images but made no difference for the conversion to CSI images. A practical method for estimating the NPS at any dose and over a range of beam qualities for mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms. |
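The quadratic decomposition described above - a constant electronic term, a quantum term linear in E and a structure term quadratic in E, fitted independently at each spatial frequency - can be sketched as follows. The function names, the synthetic exposures and coefficients, and the example value of the beam-quality-dependent quantum correction are assumptions made for illustration only.

```python
import numpy as np

def fit_noise_components(E, nps):
    """Fit NPS(E) = e + q*E + s*E**2 at each spatial frequency.
    E   : 1-D array of absorbed energies per unit area (one per flat-field exposure)
    nps : 2-D array, shape (len(E), n_freq), measured NPS per exposure and frequency"""
    n_freq = nps.shape[1]
    electronic, quantum, structure = (np.empty(n_freq) for _ in range(3))
    for k in range(n_freq):
        s, q, e = np.polyfit(E, nps[:, k], deg=2)   # highest power first
        electronic[k], quantum[k], structure[k] = e, q, s
    return electronic, quantum, structure

def predict_nps(E, electronic, quantum, structure, quantum_correction=1.0):
    """Model the NPS at a new exposure; the beam-quality-dependent correction scales the quantum term."""
    return electronic + quantum_correction * quantum * E + structure * E ** 2

# Synthetic demonstration with made-up coefficients
E = np.array([5.0, 10.0, 20.0, 40.0])                 # arbitrary exposure units
freqs = np.linspace(0.1, 5.0, 50)                     # mm^-1
nps = 1e-8 + (2e-9 / (1 + freqs)) * E[:, None] + 5e-11 * E[:, None] ** 2
e_fit, q_fit, s_fit = fit_noise_components(E, nps)
print(predict_nps(15.0, e_fit, q_fit, s_fit, quantum_correction=1.04)[:5])
```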
Conservation, fiber digestibility, and nutritive value of corn harvested at 2 cutting heights and ensiled with fibrolytic enzymes, either alone or with a ferulic acid esterase-producing inoculant.
The aim of this study was to determine the effects of the use of a fibrolytic enzyme product, applied at ensiling either alone or in combination with a ferulic acid esterase-producing bacterial additive, on the chemical composition, conservation characteristics, and in vitro degradability of corn silage harvested at either conventional or high cutting height. Triplicate samples of corn were harvested to leave stubble of either a conventional (15cm; NC) or high (45cm; HC) height above ground. Sub-samples of chopped herbage were ensiled untreated or with a fibrolytic enzyme product containing xylanases and cellulases applied either alone (ENZ) or in combination with a ferulic acid esterase-producing silage inoculant (ENZ+FAEI). The fibrolytic enzyme treatment was applied at 2mL of enzyme product/kg of herbage dry matter (DM), and the inoculant was applied at 1.3×10(5) cfu/g of fresh herbage. Samples were packed into laboratory-scale silos, stored for 7, 28, or 70 d, and analyzed for fermentation characteristics, and samples ensiled for 70 d were also analyzed for DM losses, chemical composition, and in vitro ruminal degradability. After 70 d of ensiling, the fermentation characteristics of corn silages were generally unaffected by cutting height, whereas the neutral detergent fiber, acid detergent fiber, and ash concentrations were lower and the starch concentration greater for silages made with crops harvested at HC compared with NC. After 70 d of ensiling, the acetic acid, ethanol concentrations, and the number of yeasts were greater, and the pH and neutral detergent fiber concentrations were lower, in silages produced using ENZ or ENZ+FAEI than the untreated silages, whereas ENZ+FAEI silages also incurred higher DM losses. No effect of additive treatment was observed on in vitro degradability indices after 48h ruminal incubation. The use of a fibrolytic enzyme product, either alone or in combination with a ferulic acid esterase-producing inoculant, at ensiling did not improve corn silage fermentation or its nutritive value and resulted in some negative effects on these parameters. The effects of using a fibrolytic enzyme product at ensiling, either alone or in combination with a ferulic acid esterase-producing inoculant, did not differ between corn harvested at either NC or HC. Silage made from HC had a greater starch content and lower fiber content than NC silage, whereas cutting height did not affect the in vitro digestibility indices. |
Effect of integrated responsive stimulation and nutrition interventions in the Lady Health Worker programme in Pakistan on child development, growth, and health outcomes: a cluster-randomised factorial effectiveness trial.
Stimulation and nutrition delivered through health programmes at a large scale could potentially benefit more than 200 million young children worldwide who are not meeting their developmental potential. We investigated the feasibility and effectiveness of the integration of interventions to enhance child development and growth outcomes in the Lady Health Worker (LHW) programme in Sindh, Pakistan. We implemented a community-based cluster-randomised effectiveness trial through the LHW programme in rural Sindh, Pakistan, with a 2 × 2 factorial design. We randomly allocated 80 clusters (LHW catchments) of children to receive routine health and nutrition services (controls; n=368), nutrition education and multiple micronutrient powders (enhanced nutrition; n=364), responsive stimulation (responsive stimulation; n=383), or a combination of both enriched interventions (n=374). The allocation ratio was 1:20 (ie, 20 clusters per intervention group). The data collection team were masked to the allocated intervention. All children born in the study area between April, 2009, and March, 2010, were eligible for enrolment if they were up to 2·5 months old without signs of severe impairments. Interventions were delivered by LHWs to families with children up to 24 months of age in routine monthly group sessions and home visits. The primary endpoints were child development at 12 and 24 months of age (assessed with the Bayley Scales of Infant and Toddler Development, Third Edition) and growth at 24 months of age. Analysis was by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT007159636. 1489 mother-infant dyads were enrolled into the study, of whom 1411 (93%) were followed up until the children were 24 months old. Children who received responsive stimulation had significantly higher development scores on the cognitive, language, and motor scales at 12 and 24 months of age, and on the social-emotional scale at 12 months of age, than did those who did not receive the intervention. Children who received enhanced nutrition had significantly higher development scores on the cognitive, language, and social-emotional scales at 12 months of age than those who did not receive this intervention, but at 24 months of age only the language scores remained significantly higher. We did not record any additive benefits when responsive stimulation was combined with nutrition interventions. Responsive stimulation effect sizes (Cohen's d) were 0·6 for cognition, 0·7 for language, and 0·5 for motor development at 24 months of age; these effect sizes were slightly smaller for the combined intervention group and were low to moderate for the enhanced nutrition intervention alone. Children exposed to enhanced nutrition had significantly better height-for-age Z scores at 6 months (p<0·0001) and 18 months (p=0·02) than did children not exposed to enhanced nutrition. Longitudinal analysis showed a small benefit to linear growth from enrolment to 24 months (p=0·026) in the children who received the enhanced nutrition intervention. The responsive stimulation intervention can be delivered effectively by LHWs and positively affects development outcomes. The absence of a major effect of the enhanced nutrition intervention on growth shows the need for further analysis of mediating variables (eg, household food security status) that will help to optimise future nutrition implementation design. UNICEF. |
[Survival status of stage IV non-small cell lung cancer patients after radiotherapy--a report of 287 cases].
Patients with stage IV non-small cell lung cancer (NSCLC) usually need radiotherapy and have good responses, particularly those with brain or bone metastases. This study aimed to evaluate the influence of radiotherapy on the survival of stage IV NSCLC patients. Clinical data of 287 patients with stage IV NSCLC were retrospectively analyzed. The whole brain was treated with two parallel-field irradiation; bone metastases were treated with single local-field irradiation. Primary tumors, regional lymph nodes and other distant metastases were treated by conventional fractionation radiotherapy or 3-dimensional conformal radiotherapy. Whole-brain and bone radiotherapy was delivered with a total dose of 40 Gy in 20 fractions in 4 weeks or with a total dose of 30 Gy in 10 fractions in 2 weeks. The median dose for primary tumors and regional lymph nodes was 50 Gy (20-70 Gy), and the median dose for other distant metastases was 46 Gy (40-60 Gy). The median survival time of the 287 patients was 9 months (8-10 months). The 1- and 2-year overall survival rates were 30.2% and 8.9%. The median survival time was significantly longer in the patients who received chemotherapy than in those who did not (10 months vs. 8 months, P = 0.049). In the patients with brain, bone, or other distant metastases, the median survival time was 8, 9, and 10 months, respectively; the 1-year survival rates were 24.8%, 28.7%, and 37.5%, respectively; the 2-year survival rates were 6.7%, 7%, and 15.3%, respectively. By univariate analysis, histological type and patients' age were prognostic factors of NSCLC. The median survival time was significantly longer in adenocarcinoma patients than in squamous cell carcinoma patients and other carcinoma patients (10 months vs. 7 and 9 months, P = 0.046), longer in patients aged < or = 60 years than in those aged > 60 years (11 months vs. 8 months, P = 0.012), and longer in the patients with only bone metastases than in the patients with concomitant other distant metastases (10 months vs. 6 months, P = 0.033), but there was no significant difference between the patients with only brain metastases and those with concomitant other distant metastases (9 months vs. 8 months, P = 0.374). Radiotherapy for primary tumors and lymph nodes, complications of other chronic diseases, and irradiation dose and pattern had no effect on survival. Histological type and patients' age may affect the efficacy of radiotherapy for stage IV NSCLC. The irradiation patterns of 40 Gy in 20 fractions in 4 weeks or 30 Gy in 10 fractions in 2 weeks have no effect on the survival of patients with brain or bone metastases. |
Regulation of glucagon-like peptide-1-(7-36) amide, peptide YY, and neurotensin secretion by neurotransmitters and gut hormones in the isolated vascularly perfused rat ileum.
Neurotensin (NT), peptide YY (PYY), and several peptides derived from proglucagon are promptly released from endocrine cells of the distal part of the gut after oral ingestion of a meal, thus suggesting that release of these peptides is partly under neural and/or hormonal control. Our previous studies conducted with a model of isolated vascularly perfused rat colon showed that colonic L cells are highly responsive to several transmitters of the gut and to the hormonal peptide GIP. To test the possibility that hormones produced by the proximal small intestine or transmitters of the enteric nervous system may also modulate the secretory activity of the ileal L cells, various intestinal regulatory peptides and neurotransmitters were administered intraarterially for 30 min in the isolated vascularly perfused rat ileum preparation. The secretory activity of the ileal N cells was comparatively assessed. The release of NT, PYY, and glucagon-like peptide-1 (GLP-1) in the portal effluent was measured with specific RIAs. The muscarinic cholinergic agonist bethanechol at a concentration of 10(-4) M provoked a biphasic release of PYY, GLP-1, and NT, consisting of an early peak followed by a sustained response. Similarly, bombesin (10(-7) M) induced a marked biphasic release of PYY and GLP-1. In contrast, the NT response was essentially monophasic, characterized by an early peak secretion. Tetrodotoxin did not modify the bombesin-induced release of PYY, GLP-1, and NT. The beta-adrenergic agonist isoproterenol at a concentration of 10(-6) M induced a transient rise in portal PYY and GLP-1 concentrations, whereas the effect on NT release was clearly biphasic. Calcitonin gene-related peptide (5 x 10(-8) M) induced a dramatic rise in PYY, GLP-1, and NT immunoreactivities in the portal effluent (peaks at 600%, 500%, and 550% of the basal values, respectively, 4 min after the start of infusion). Intraarterial infusion of GIP over the concentration range 0.5-3 nM evoked a significant increase in the portal concentration of the three peptides only at the threshold concentration of 3 nM. Secretin (50 pM) or cholecystokinin (50 pM) did not affect the release of ileal hormones. In conclusion, ileal L and N cells respond to a variety of transmitters of the gut. The pattern of peptide release depends on the cell type studied. The two cosynthesized peptides, PYY and GLP-1, appear to be cosecreted under the conditions of the present study. |
Comparison of the long-term reproducibility of the walk test and of exercise peak oxygen consumption in patients with preserved exercise capacity.
Short-term and long-term reproducibility of the cardiopulmonary (CPX) exercise test have been established. Although short- and mid-term reproducibility of the walk test has been ascertained, its long-term reproducibility has not been studied extensively. The aim of the study was to examine the long-term reproducibility of the distance walked in an allotted time and to check the stability of the relationship between walked distance and exercise peak VO2 (pVO2). Forty-six subjects (33 men; 57 ± 14 years), referred for functional capacity assessment, were studied twice by CPX and walking test. On the same day, CPX was performed on a bicycle or a treadmill and the walk test in a corridor, as required by specific guidelines. We performed a 12-minute walk test, and the distance covered in six minutes was systematically recorded. A free interval of 1.5 hours was observed between the exercise tests. The distance walked in the allotted time and pVO2 were analysed. Reproducibility was assessed according to Bland and Altman plots and the intra-class correlation coefficient (ICC). The relationship between distance ambulated and pVO2 was analysed by the Spearman correlation coefficient. The time interval between the two evaluations was 290 ± 10 days. During this interval, no change was recorded in the drug regimen of the subjects receiving treatment. BMI remained stable for the entire study population (28 ± 5 kg/m2). The distances walked were respectively 522 ± 83 and 527 ± 76 m in six minutes, and 1033 ± 182 and 1041 ± 153 m in 12 minutes. pVO2 was 21 ± 7 and 22 ± 7 ml/kg/min (all p = NS). The walk test was reproducible in the long term, regardless of the modality (6- or 12-minute walk), as shown by the Bland-Altman plots and the high ICC of .89. Spearman's rho coefficient between distance ambulated and pVO2 was modest and remained stable over time whatever the allotted time: Spearman's r = .54; p = .0011 (1st evaluation) and Spearman's r = .51; p = .0019 (2nd evaluation) between 6-minute distance walked and pVO2. The walking distance in an allotted time seems highly reproducible in the long term. Its relationship with pVO2 remains stable over time. As a first step, it could be of value for the repeated assessment of patients' exercise capacity. Further evaluation in a larger population is needed to confirm our results and its usefulness in clinical practice. |
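As a rough illustration of the two reproducibility statistics used above, the sketch below computes Bland-Altman limits of agreement and a two-way random-effects, absolute-agreement, single-measurement intra-class correlation coefficient, ICC(2,1), for simulated test-retest walk distances. The simulated data and the choice of the ICC(2,1) form are assumptions; the abstract does not state which ICC variant was used.

```python
import numpy as np

def bland_altman(x1, x2):
    """Mean bias and 95% limits of agreement between two measurement sessions."""
    diff = x2 - x1
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data: array of shape (n_subjects, n_sessions)."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 6-minute walk distances (m) at the two evaluations, for illustration only
rng = np.random.default_rng(0)
true = rng.normal(520, 80, 46)
session1 = true + rng.normal(0, 25, 46)
session2 = true + rng.normal(5, 25, 46)
print(bland_altman(session1, session2))
print(icc_2_1(np.column_stack([session1, session2])))
```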
Explainable variation in renal transplant outcomes: a comparison of standard and expanded criteria donors.
In 2002, OPTN/UNOS altered kidney allocation rules to allow patients to be listed separately to receive kidneys from expanded criteria donors (ECD). Our aim was to quantify the short- and long-term impacts of 21 prognostic factors on recipients of ECD as well as recipients of living (LD) and deceased standard criteria (SCD) donors. A factor's impact depends on both the risk and diversity of its effects. Using OPTN/UNOS Registry data from 1996-2003, we have analyzed kidney-only, adult-recipient grafts for factor effects among 35,878 LD, 47,941 SCD and 10,399 ECD transplants. During an early risk period, all 94,218 recipients were followed through one year, and, in the late risk period, 85,270 recipients whose grafts survived beyond one year were followed for 5 years post-transplant. Impact was measured by determining a factor's percentage of assignable variation in one- and 5-year graft failure rates. Scores for 21 factors were estimated via generalized logistic models, which contained a random component for transplant center. The assignable variation associated with a given factor was computed as the factor score variance multiplied by the square of the corresponding regression coefficient. Impacts were heterogeneous with regard to posttransplant period and donor type. The top 5 factors influencing one-year graft survival rates were as follows: * For LD grafts - pretransplant dialysis time (14% of the variation in short-term outcomes), recipient age (13%), body mass (12%), PRA (10%) and induction therapy (10%). * For SCD grafts - donor age (24%), recipient age (12%), pretransplant dialysis time (12%), HLA-DR matching (6%) and pretransplant medical condition (6%). * For ECD grafts - donor age (18%), pre-transplant dialysis time (10%), recipient age (10%), pretransplant medical condition (10%) and recipient body mass (6%). Ranking long-term outcomes demonstrated the following top 5 influential factors: * For LD grafts - donor age (28% of the variation in long-term outcomes), recipient race (15%), age (15%), transplant year (13%) and recipient sex (11%). * For SCD grafts - donor age (35%), recipient race (23%), transplant year (15%), recipient sex (8%) and age (5%). * For ECD grafts - donor age (33%), recipient sex (20%), race (15%), transplant year (8%) and recipient's original disease (5%). Donor age was the dominant factor governing the survival rates among deceased donor kidney transplants. Advancing donor age was still the major risk factor for SCD transplant failure despite setting aside all donors 60 and up, and a large fraction of 50-59 year-old donors, from this group. Current ECD/SCD definitions warrant review and possible revision. |
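The assignable-variation calculation described above - the factor score variance multiplied by the square of the corresponding regression coefficient, then expressed as a percentage of the total across factors - can be sketched as follows. The factor names, score distributions and coefficients are invented for illustration and are not the registry estimates.

```python
import numpy as np

def assignable_variation(scores, coefficients):
    """Per-factor assignable variation, Var(score) * beta**2, as a percentage of the total.
    scores       : dict mapping factor name -> array of factor scores across transplants
    coefficients : dict mapping factor name -> fitted regression coefficient"""
    raw = {name: np.var(vals, ddof=1) * coefficients[name] ** 2
           for name, vals in scores.items()}
    total = sum(raw.values())
    return {name: 100.0 * v / total for name, v in raw.items()}

# Illustrative (made-up) factor scores and coefficients for three factors
rng = np.random.default_rng(1)
scores = {
    "donor_age":      rng.normal(45, 15, 1000),
    "dialysis_years": rng.exponential(3, 1000),
    "recipient_age":  rng.normal(50, 13, 1000),
}
coefficients = {"donor_age": 0.03, "dialysis_years": 0.10, "recipient_age": 0.02}
print(assignable_variation(scores, coefficients))
```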
Confirmation of E. coli among other thermotolerant coliform bacteria in paper mill effluents, wood chips screening rejects and paper sludges.
Paper sludges are solid waste materials generated by paper production, which have been characterized for their chemical contents. Some are rich in wood fiber and are a good carbon source, for example the primary and de-inking paper sludges. Others are made rich in nitrogen and phosphorus by pressing the activated sludge resulting from the biological water treatments together with the primary sludge, yielding the combined paper sludge. Still, in the absence of sanitary effluents, very few studies have addressed the characterization of their coliform microflora. Therefore, this study investigated the thermotolerant coliform population of one paper mill effluent, two paper mill sludges and wood chips screening rejects using chromogenic media. For the first series of analyses, the medium used was Colilert broth, and positive tubes were selected to isolate bacteria in pure culture on MacConkey agar. In a second series of analyses, double selective media, based on β-galactosidase and β-glucuronidase activities, were used to isolate bacteria. First, thermotolerant coliforms were detected in low numbers in most water effluents, showing that the entrance of the thermotolerant coliforms occurs early in the industrial process. Also, large numbers of thermotolerant coliforms, i.e., 7,000,000 MPN/g sludge (dry weight; d.w.), were found in combined sludges. From this first series of isolations, bacteria were purified on MacConkey medium and identified as Citrobacter freundii, Enterobacter sp, E. sakazakii, E. cloacae, Escherichia coli, Klebsiella pneumoniae, K. pneumoniae subsp. rhinoscleromatis, K. pneumoniae subsp. ozaenae, K. pneumoniae subsp. pneumoniae, Pantoea sp, Raoultella terrigena and R. planticola. Second, the presence of thermotolerant coliforms was measured at more than 3,700-6,000 MPN/g (d.w.) sludge, whereas E. coli was detected at from 730 to more than 3,300 MPN/g (d.w.) sludge. Thermotolerant coliform bacteria and E. coli were sometimes detected in large quantities in wood chips screening rejects. Also, indigenous E. coli were able to multiply in the combined sludge, and inoculated E. coli isolates were often able to multiply in wood chips and combined sludge media. In this second series of isolations, API20E and Biolog identified most isolates as E. coli, but others remained unidentified. The 16S rDNA sequences confirmed that most isolates were likely E. coli, a few were Burkholderia spp, and 10% of the isolates remained unidentified. This study points out that coliform bacteria are introduced by the wood chips into the water effluents, where they can survive through the primary clarifier and regrow in combined sludges. |
Modular Design of Peptide- or DNA-Modified AIEgen Probes for Biosensing Applications.
Fluorophore probes are widely used for bioimaging in cells, tissues, and animals as well as for monitoring multiple biological processes in complex environments. Such imaging properties allow scientists to directly visualize pathological events and cellular targets. Conventional fluorescent molecules have been developed over several decades and have achieved great successes, but their emission is often weakened or quenched at high concentrations owing to the aggregation-caused quenching (ACQ) effect, which reduces the efficiency of their applications. In contrast to the ACQ effect, aggregation-induced emission (AIE) luminogens (AIEgens) display much higher fluorescence in aggregated states and possess various advantages such as low background, long-term tracking ability, and strong resistance to photobleaching. Therefore, AIEgens are employed as unique fluorescent molecules and building blocks for biosensing applications targeting ions, amino acids, carbohydrates, DNAs/RNAs, peptides/proteins, cellular organelles, cancer cells, bacteria, and so on. Quite a few of the above biosensing tasks are accomplished by modular peptide-modified AIEgen probes (MPAPs) or modular DNA-modified AIEgen probes (MDAPs) because of the multiple capabilities of peptide and DNA modules, including solubility, biocompatibility, and recognition. Meanwhile, both electrostatic interactions and coupling reactions can provide efficient methods to construct different MPAPs and MDAPs, ultimately resulting in a large variety of biosensing probes. These probes excel at detecting nucleic acids or proteins and at imaging a broad range of biomolecules. For example, under modular design, peptide modules possessing versatile recognition abilities enable MPAPs to detect numerous targets, such as integrin αvβ3, aminopeptidase N, MMP-2, MPO, H2O2, and so forth; MDAPs can allow the imaging of mRNA in cells and tissue chips, suggesting diagnostic functions of MDAPs in clinical samples. Modular design offers a novel strategy to generate AIEgen-based probes and expedites research on functional biomacromolecules. In this vein, we review here the progress on MPAPs and MDAPs over the most recent 10 years and highlight the modular design strategy as well as their advanced biosensing applications, briefly covering two aspects: (1) detection and (2) imaging. By the use of MPAPs/MDAPs, multiple bioanalytes can be efficiently analyzed at low concentrations and directly visualized through high-contrast and luminous imaging. Compared with MPAPs, MDAPs remain limited in number because of the difficulty of synthesizing long DNA strands. In future work, multifunctional DNA sequences are needed to explore a variety of MDAPs for diverse biosensing purposes. At the end of this Account, some deficiencies and challenges are mentioned to bring more attention to, and thereby accelerate, the development of AIEgen-based probes. |
Finding an optimal method for imaging lymphatic vessels of the upper limb.
Lymphoscintigraphy involves interstitial injection of radiolabelled particulate materials or radioproteins. Although several variations in the technique have been described, their place in clinical practice remains controversial. Traditional diagnostic criteria are based primarily on lymph node appearances but in situations such as breast cancer, where lymph nodes may have been excised, these criteria are of limited use. In these circumstances, lymphatic vessel morphology takes on greater importance as a clinical endpoint, so a method that gives good definition of lymphatic vessels would be useful. In patients with breast cancer, for example, such a method, used before and after lymph node resection, may assist in predicting the development of breast cancer-related lymphoedema. The aim of this study was to optimise a method for the visualisation of lymphatic vessels. Subcutaneous (sc) and intradermal (id) injection sites were compared, and technetium-99m nanocolloid, a particulate material, was compared with (99m)Tc-human immunoglobulin (HIG), which is a soluble macromolecule. Twelve normal volunteers were each studied on two occasions. In three subjects, id (99m)Tc-HIG was compared with sc (99m)Tc-HIG, in three id (99m)Tc-nanocolloid was compared with sc (99m)Tc-nanocolloid, in three id (99m)Tc-HIG was compared with id (99m)Tc-nanocolloid and in three sc (99m)Tc-HIG was compared with sc (99m)Tc-nanocolloid. Endpoints were quality of lymphatic vessel definition, the time after injection at which vessels were most clearly visualised, the rate constant of depot disappearance ( k) and the systemic blood accumulation rate as measured by gamma camera imaging over the liver or cardiac blood pool. Excellent definition of lymphatic vessels was obtained following id injection of either radiopharmaceutical, an injection route that was clearly superior to sc. Differences between radiopharmaceuticals were less clear, although after id injection, (99m)Tc-HIG gave images that were marginally but significantly better than those given by (99m)Tc-nanocolloid. Image quality correlated inversely with time after injection at which the best image was obtained, consistent with the notion that good vessel definition was dependent on a "narrow" bolus width. k was approximately three times higher after id injection than after sc injection but it was not significantly different between radiopharmaceuticals for either injection route. Intradermal (99m)Tc-HIG gave a cardiac blood pool signal that, over the first 60 min, increased about five times faster than that with sc (99m)Tc-HIG, but no clear difference was observed in the rate of increase in hepatic activity between id (99m)Tc-nanocolloid and sc (99m)Tc-nanocolloid. We conclude that id injection provides rapid access of radiotracers to lymphatic vessels, which is ideal for imaging lymphatic vessel morphology. (99m)Tc-HIG is marginally superior to nanocolloid for this purpose and, in drainage basins from which lymph nodes have been excised, is not handicapped by a potentially inferior ability, compared with radiocolloid, to image lymph nodes. |
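The rate constant of depot disappearance (k) reported above is conventionally obtained from a monoexponential fit of decay-corrected injection-site counts against time; the abstract does not describe the fitting procedure, so the log-linear least-squares approach, the sampling times and the counts in the sketch below are assumptions for illustration only.

```python
import numpy as np

def depot_rate_constant(times_min, counts):
    """Monoexponential depot clearance: fit ln(counts) = ln(A0) - k*t by least squares
    and return k (per minute) together with the implied clearance half-time."""
    slope, _intercept = np.polyfit(times_min, np.log(counts), deg=1)
    k = -slope
    return k, np.log(2) / k

# Hypothetical decay-corrected counts over the injection depot
t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0])        # minutes after injection
counts = 1e5 * np.exp(-0.012 * t) * np.random.default_rng(2).normal(1.0, 0.02, t.size)
k, t_half = depot_rate_constant(t, counts)
print(f"k = {k:.4f} min^-1, half-time = {t_half:.0f} min")
```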
Effects of recommendations to follow the Dietary Approaches to Stop Hypertension (DASH) diet v. usual dietary advice on childhood metabolic syndrome: a randomised cross-over clinical trial.
The effects of the Dietary Approaches to Stop Hypertension (DASH) eating plan on childhood metabolic syndrome (MetS) and insulin resistance remain to be determined. The present study aimed to assess the effects of recommendations to follow the DASH diet v. usual dietary advice (UDA) on the MetS and its features in adolescents. In this randomised cross-over clinical trial, sixty post-pubescent adolescent girls with the MetS were randomly assigned to receive either the recommendations to follow the DASH diet or UDA for 6 weeks. After a 4-week washout period, the participants were crossed over to the alternate arm. The DASH group was recommended to consume a diet rich in fruits, vegetables and low-fat dairy products and low in saturated fats, total fats and cholesterol. UDA consisted of general oral advice and written information about healthy food choices based on healthy MyPlate. Compliance was assessed through the quantification of plasma vitamin C levels. In both the groups, fasting venous blood samples were obtained at baseline and at the end of each phase of the intervention. The mean age and weight of the participants were 14.2 (SD 1.7) years and 69 (SD 14.5) kg, respectively. Their mean BMI and waist circumference were 27.3 kg/m2 and 85.6 cm, respectively. Serum vitamin C levels tended to be higher in the DASH phase than in the UDA phase (860 (SE 104) v. 663 (SE 76) ng/l, respectively, P= 0.06). Changes in weight, waist circumference and BMI were not significantly different between the two intervention phases. Although changes in systolic blood pressure were not statistically significant between the two groups (P= 0.13), recommendations to follow the DASH diet prevented the increase in diastolic blood pressure compared with UDA (P= 0.01). We found a significant within-group decrease in serum insulin levels (101.4 (SE 6.2) v. 90.0 (SE 5.5) pmol/l, respectively, P= 0.04) and a non-significant reduction in the homeostasis model assessment for insulin resistance score (P= 0.12) in the DASH group. Compared with the UDA group, the DASH group experienced a significant reduction in the prevalence of the MetS and high blood pressure. Recommendations to follow the DASH eating pattern for 6 weeks among adolescent girls with the MetS led to reduced prevalence of high blood pressure and the MetS and improved diet quality compared with UDA. This type of healthy diet can be considered as a treatment modality for the MetS and its components in children. |
Angiotensin II receptor antagonists and heart failure: angiotensin-converting-enzyme inhibitors remain the first-line option.
(1) Some angiotensin-converting-enzyme inhibitors (ACE inhibitors) reduce mortality in patients with heart failure (captopril, enalapril, ramipril and trandolapril), and in patients with recent myocardial infarction and heart failure or marked left ventricular dysfunction (captopril, ramipril and trandolapril). (2) Angiotensin II receptor antagonists, otherwise known as angiotensin receptor blockers, have haemodynamic effects similar to ACE inhibitors, but differ in their mechanism of action and certain adverse effects. (3) Five clinical trials have evaluated angiotensin II receptor antagonists (candesartan, losartan and valsartan) in terms of their effect on mortality and on the risk of clinical deterioration in patients with symptomatic heart failure, but without severe renal failure, hyperkalemia or hypotension. In these trials, candesartan and valsartan were used at much higher doses than those recommended for the treatment of arterial hypertension. (4) In patients with heart failure who were not taking an angiotensin II receptor antagonist or an ACE inhibitor at enrollment, no significant difference was found between losartan and captopril in terms of mortality or the risk of clinical deterioration. (5) In patients with heart failure who had stopped taking an ACE inhibitor because of adverse effects, candesartan had no effect on mortality as compared with placebo, but it did reduce the risk of clinical deterioration (3 fewer hospitalisations per year per 100 patients). However, candesartan was associated with adverse effects such as renal failure and hyperkalemia, especially in patients who had experienced these same adverse effects while taking an ACE inhibitor. (6) In patients with heart failure who were already taking an ACE inhibitor, adjunctive candesartan or valsartan treatment did not influence mortality in comparison to the addition of a placebo. Adding candesartan or valsartan reduced the risk of hospitalisation (between 1 and 3 fewer hospitalisations per year per 100 patients), but increased the risk of renal failure and hyperkalemia. (7) In patients with heart failure and incapacitating dyspnea despite ACE inhibitor + diuretic combination therapy, there are no trials comparing the addition of an angiotensin II receptor antagonist versus spironolactone. Adjunctive spironolactone therapy prevents 5 to 6 deaths per year per 100 patients in this setting. (8) In patients with heart failure who do not have markedly altered cardiac contractility, candesartan appears to have no clinical advantages over placebo. (9) In some of these trials, mortality was higher with angiotensin II receptor antagonist therapy than with placebo among patients who were already taking a betablocker. (10) Two trials have compared an angiotensin II receptor antagonist with an ACE inhibitor in patients with recent myocardial infarction who had heart failure or an altered left ventricular ejection fraction, but who did not have hypotension or severe renal failure. However, there are no placebo-controlled randomised trials assessing the effects of angiotensin II receptor antagonists on mortality. (11) In patients with recent myocardial infarction, these trials showed no difference in mortality between angiotensin II receptor antagonist treatment (losartan or valsartan) and captopril. They did not rule out the possibility that these angiotensin II receptor antagonists are moderately less effective than captopril. 
Adding valsartan to ongoing captopril therapy did not reduce mortality or morbidity as compared with placebo, but did increase the risk of adverse effects. (12) Overall, these trials confirm the advantage of angiotensin II receptor antagonists over ACE inhibitors with respect to some adverse effects (cough, skin rash, etc.). However, the two drug classes share certain serious adverse effects such as hyperkalemia, renal failure and hypotension. In one trial, angioedema was less frequent with angiotensin II receptor antagonist therapy (one less case per 500 patients). |
One-year tolerability and efficacy of sumatriptan nasal spray in adolescents with migraine: results of a multicenter, open-label study.
The objective of this study was to determine the 1-year tolerability and efficacy of sumatriptan nasal spray (NS) at doses of 5, 10, and 20 mg for the treatment of acute migraine in adolescents. This was a prospective, multicenter, open-label, 1-year, multiple-attack study. Adolescents (aged 12-17 years) with a > or =6-month history of migraine with or without aura, 2 to 8 moderate or severe migraines per month, and a typical migraine duration of > or =4 hours were eligible for participation. After initial treatment with sumatriptan 10 mg, the dose could be adjusted down to 5 mg or up to 20 mg at the investigator's discretion to optimize tolerability or efficacy. Patients could treat an unlimited number of moderate or severe migraine attacks, provided there was a 24-hour headache-free period between treated attacks and a 2-hour period between doses of sumatriptan NS. A second dose of sumatriptan NS was available for headache recurrence 2 to 24 hours after initial treatment; no more than 2 doses could be used within a 24-hour period. Adverse events, vital signs, electrocardiographic and physical findings, and laboratory variables were assessed. Headache response (reduction of moderate/severe predose pain to mild/no pain) and pain-free response (reduction of moderate/severe predose pain to no pain) were reported by patients 2 hours after dosing. A total of 437 patients treated > or =1 migraine; 3272 total attacks were treated, with 3675 drug exposures (mean, 1.1 dose/attack). Patients had a mean age of 14.1 years, 91% were white, and 53% were female. Seven patients used the 5-mg dose; meaningful conclusions concerning this dose could not be made. Drug-related adverse events were reported in 33% of attacks with the 10-mg dose and 31% with the 20-mg dose; most were related to taste disturbance. Adverse events did not increase with a second dose or over time. Four percent (16/437) of patients withdrew due to drug-related adverse events. One serious adverse event, a facial-nerve ischemic event (10-mg dose), was considered drug related. No drug-related changes in vital signs or electrocardiographic findings were observed. Headache response 2 hours after dosing was reported by 76% of patients taking the 10-mg dose and 72% of those taking the 20-mg dose. Pain-free response 2 hours after dosing was reported by 43% and 40% of patients in the 10- and 20-mg groups, respectively. Based on these results, sumatriptan NS at doses of 10 and 20 mg was well tolerated and effective in the 1-year treatment of multiple migraine attacks in adolescents. |
Does de-escalation of antibiotic therapy for ventilator-associated pneumonia affect the likelihood of recurrent pneumonia or mortality in critically ill surgical patients?
Ventilator-associated pneumonia (VAP) is a leading cause of mortality in critically ill patients. Although previous studies have shown that de-escalation therapy (DT) of antibiotics may decrease costs and the development of resistant pathogens, minimal data have shown its effect in surgical patients or in any patients with septic shock. We hypothesized that DT for VAP was not associated with an increased rate of recurrent pneumonia (RP) or mortality in a high acuity cohort of critically ill surgical patients. All surgical intensive care unit (SICU) patients from January 2005 to May 2007 with VAP diagnosed by quantitative bronchoalveolar lavage with a positive threshold of 10,000 CFU/mL were identified. Data collected included age, gender, Acute Physiologic and Chronic Health Evaluation Score III (A3), type of bacterial or other pathogen, antibiotics used for initial and final therapy, mortality, RP, and appropriateness of initial therapy (AIT). Patients were designated as receiving AIT, DT, or escalation of antibiotic therapy based on microbiology for their VAP. One hundred thirty-eight of 1,596 SICU patients developed VAP during the study period (8.7%). For VAP patients, the mean Acute Physiologic and Chronic Health Evaluation III score was 82.7 points with a mean age of 63.8 years. The RP rate was 30% and did not differ between patients receiving DT (27.3%) and those who did not receive DT (35.1%). Overall mortality was 37% (55% predicted by A3 norms) and did not differ between those receiving DT (33.8%) or not (42.1%). The most common pathogens for primary VAP were methicillin-resistant Staphylococcus aureus (14%), Escherichia coli (11%), and Pseudomonas aeruginosa (9%) whereas P. aeruginosa was the most common pathogen in RP. The AIT for all VAP was 93%. De-escalation of therapy occurred in 55% of patients with AIT whereas 8% of VAP patients required escalation of antibiotic therapy. The most commonly used initial antibiotic choice was vancomycin/piperacillin-tazobactam (16%) and the final choice was piperacillin-tazobactam (20%). Logistic regression demonstrated no specific parameter correlated with development of RP. Higher A3 (Odds ratio, 1.03; 95% confidence interval, 1.01-1.05) was associated with mortality whereas lack of RP (odds ratio, 0.31; 95% confidence interval, 0.12-0.80), and AIT reduced mortality (odds ratio, 0.024; 95% confidence interval, 0.007-0.221). Age, gender, individual pathogen, individual antibiotic regimen, and the use of DT had no effect on mortality. De-escalation therapy did not lead to RP or increased mortality in critically ill surgical patients with VAP. De-escalation therapy was also shown to be safe in patients with septic shock. Because of its acknowledged benefits and lack of demonstrable risks, de-escalation therapy should be used whenever possible in critically ill patients with VAP. |
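The mortality model described above reports exponentiated logistic-regression coefficients as odds ratios with 95% confidence intervals. The sketch below illustrates that general recipe on simulated data using statsmodels; the variable names (a3, rp, ait, died) and all numbers are illustrative assumptions, not the study's dataset or its actual model specification.

```python
# Sketch: odds ratios with 95% CIs from a logistic regression for hospital
# mortality, in the spirit of the analysis above. Data are simulated; the
# column names (a3 = APACHE III-like score, rp = recurrent pneumonia,
# ait = appropriate initial therapy) are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "a3":  rng.normal(83, 20, n),     # severity score
    "rp":  rng.integers(0, 2, n),     # recurrent pneumonia (0/1)
    "ait": rng.integers(0, 2, n),     # appropriate initial therapy (0/1)
})
# Simulated outcome: higher severity raises mortality risk, AIT is protective.
lin = -3.0 + 0.03 * df["a3"] + 0.4 * df["rp"] - 1.2 * df["ait"]
df["died"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = sm.add_constant(df[["a3", "rp", "ait"]])
fit = sm.Logit(df["died"], X).fit(disp=0)

# Exponentiated coefficients are odds ratios; exponentiated confidence
# limits give the 95% CIs (the form of the figures quoted in the abstract).
table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
})
print(table.round(3))
```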
Automatic detection of lung nodules in CT datasets based on stable 3D mass-spring models.
We propose a computer-aided detection (CAD) system which can detect small-sized (from 3 mm) pulmonary nodules in spiral CT scans. A pulmonary nodule is a small lesion in the lungs, round-shaped (parenchymal nodule) or worm-shaped (juxtapleural nodule). Both kinds of lesions have a radio-density greater than lung parenchyma, thus appearing white on the images. Lung nodules may indicate lung cancer, and their early detection arguably improves the patient survival rate. CT is considered the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes full analysis difficult, leading to omission of nodules by the radiologist. We developed an advanced computerized method for the automatic detection of internal and juxtapleural nodules on low-dose, thin-slice lung CT scans. This method consists of an initial selection of a list of nodule candidates, the segmentation of each candidate nodule, and the classification of the features computed for each segmented nodule candidate. The presented CAD system aims to reduce the number of omissions and to decrease the radiologist's scan examination time. Our system locates both internal and juxtapleural nodules with the same scheme. For a correct volume segmentation of the lung parenchyma, the system uses a Region Growing (RG) algorithm and an opening process to include the juxtapleural nodules. The segmentation and extraction of suspected nodular lesions from CT images by a lung CAD system is a difficult task. To solve this key problem, we use a new Stable 3D Mass-Spring Model (MSM) combined with a spline-curve reconstruction process. Our model represents concurrently the characteristic gray-value range, the directed contour information, and shape knowledge, which leads to a much more robust and efficient segmentation process. To distinguish the real nodules among nodule candidates, an additional classification step is applied; furthermore, a neural network is applied to reduce the false positives (FPs) after a double-threshold cut. The system performance was tested on a set of 84 scans made available by the Lung Image Database Consortium (LIDC) and annotated by four expert radiologists. The detection rate of the system is 97% with 6.1 FPs/CT. A reduction to 2.5 FPs/CT is achieved at 88% sensitivity. We presented a new 3D segmentation technique for lung nodules in CT datasets, using deformable MSMs. The result is an efficient segmentation process able to converge, identifying the shape of the generic ROI, after a few iterations. Our results show that the use of the 3D MSM and the feature-analysis-based FP reduction process constitutes an accurate approach to the segmentation and classification of lung nodules. |
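The pipeline above starts from a Region Growing segmentation of the lung parenchyma before candidate detection and MSM-based nodule segmentation. The following minimal sketch shows a generic 6-connected 3D region growing on a toy volume; the HU thresholds, seed, and toy data are assumptions for illustration and are not the published system's parameters (the morphological opening used to recover juxtapleural nodules is omitted).

```python
# Minimal sketch of 3D region growing for lung parenchyma segmentation.
# Thresholds, seed choice, and connectivity are illustrative assumptions.
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low=-1000, high=-400):
    """Grow a 6-connected region from `seed` over voxels whose HU value
    lies in [low, high] (air-like densities typical of lung parenchyma)."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x]:
            continue
        if not (low <= volume[z, y, x] <= high):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask

# Toy volume: a block of "lung" (-800 HU) inside "soft tissue" (+40 HU).
vol = np.full((40, 40, 40), 40.0)
vol[5:35, 5:35, 5:35] = -800.0
lung_mask = region_grow_3d(vol, seed=(20, 20, 20))
print(lung_mask.sum(), "voxels segmented")   # 30*30*30 = 27000
```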
Living Under the Constant Threat of Ebola: A Phenomenological Study of Survivors and Family Caregivers During an Ebola Outbreak.
Ebola is a highly infectious disease that is caused by viruses of the family Filoviridae and transmitted to humans by direct contact with animals infected from unknown natural reservoirs. Ebola virus infection induces acute fever and death within a few days in up to 90% of symptomatic individuals, causing widespread fear, panic, and antisocial behavior. Uganda is vulnerable to future Ebola outbreaks. Therefore, the survivors of Ebola and their family caregivers are likely to continue experiencing related antisocial overtones, leading to negative health outcomes. This study articulated the lived experiences of survivors and their family caregivers after an Ebola outbreak in Kibale District, Western Uganda. Eliciting a deeper understanding of these devastating lifetime experiences provides opportunities for developing and implementing more compassionate and competent nursing care for affected persons. Ebola survivors and their family caregivers were recruited using a purposive sampling method. Twelve (12) adult survivors and their family caregivers were recruited and were interviewed individually between May and July 2013 in Kibale, a rural district in Western Uganda close to the border of the Democratic Republic of the Congo, where Ebola virus was first discovered in 1976. Oral and written informed consent was obtained before all in-depth interviews, and the researchers adhered to principles of anonymity and confidentiality. The interviews were recorded digitally, and data analysis employed Wertz's Empirical Psychological Reflection method, which is grounded in descriptive phenomenology. Living under the constant threat of Ebola is experienced through two main categories: (a) defining features of the experience and (b) responding to the traumatizing experience. Five themes emerged in the first category: (a) fear, ostracism, and stigmatization; (b) annihilation of sufferer's actualities and possibilities; (c) the lingering nature of the traumatic experience; (d) psychosomatic manifestations; and (e) the inescapable nature of the experience. The second category was composed of two themes: (a) seeking self-preservation and protection and (b) transcending victimhood and becoming empowered. Living under the constant threat of Ebola is experienced as distressing in the physical, social, and psychological realms. In the future, prompt treatment and nursing care are recommended to minimize deaths and to reduce the widespread terror, anxiety, ostracism, and stigmatization that affected individuals and families face. Furthermore, it is recommended that the resilience of survivors and caregivers be increased to facilitate their better coping with the rampant antisocial overtones that they are likely to experience because of their association with Ebola. |
Clinical evaluation of oral chronic graft-versus-host disease.
Oral chronic graft-versus-host disease (cGVHD) is a significant and serious complication following allogeneic hematopoietic stem cell transplantation (HSCT). The purpose of this study was to characterize the distribution, type, and extent of lesions and their correlation with patient-reported symptoms such as pain and discomfort. The effect of time since transplantation on these measures was also assessed. Consecutive patients with oral cGVHD referred to the Center for Oral Disease at Brigham and Women's Hospital, Boston, MA, were evaluated over a 2-year period. Subjective data included the responses to 4 targeted symptom questions (yes/no) and a visual analog scale pain score (0-10). Objective data included the location and extent of reticulation, erythema, and ulcerations using a previously published scoring system as well as time since HSCT. Multiple linear regression analyses were performed using SAS. We evaluated 27 patients, for a total of 79 clinic visits (median 2, range: 1-8). The median time since HSCT was 18 months (range: 5-157 months). The buccal and labial mucosa and tongue were the sites of 93% of all ulcerations, 72% of all erythematous lesions, and 76% of all reticular lesions, and were the most frequently affected sites. The gingiva, floor of mouth, and hard and soft palate were infrequently affected. Although uncommon, ulceration of the soft palate was the objective finding most highly correlated with increased pain (P < .0001), and there was a generalized significant trend for increased pain scores with increased extent of ulceration. Overall, 95% of pain scores were <or=5 (scale from 0-10, range: 0-7), with 40% reporting a score of zero. However, 80% admitted to avoiding certain foods because of mouth pain. After controlling for the presence and extent of ulcerations, we found that time since HSCT was inversely related to the pain score (P < .04). There was a statistically significant inverse relationship between the overall presence of ulceration and time since HSCT. We found that oral cGVHD most frequently affects the buccal and labial mucosa and the tongue. The functional impact was significant, as most patients had to restrict oral intake because of discomfort. Both the signs and symptoms associated with oral cGVHD tend to decrease over time. The association between ulceration of the soft palate and patient-reported pain highlights the significance of the location of involvement and the need for targeted approaches to therapy. Our findings, in large part, support the recently introduced National Institutes of Health response criteria for oral cGVHD, which is critical for the conduct of effective and meaningful research in this field; however, prospective application in clinical and investigative settings is necessary for evaluating its utility and efficacy in practice. |
Effect of estriol treatment on the menstrual cycle and prolactin secretion.
In order to study the biological effects of estriol in women, 20 mg estriol was administered daily to 7 young women. Plasma luteinizing hormone (LH), estradiol (E2), progesterone (Pg) and prolactin (Prl) were measured during a treatment and a control cycle every second or third day. In addition, 3, 6 or 20 mg estriol was administered in a single dose to 5 women, and plasma Prl and unconjugated and conjugated estriol (E3) were measured over 24 h at 2-3 h intervals. In 2 experiments with 20 mg E3, blood samples were taken more frequently, over 6 h. When 20 mg E3 was administered daily, 2 of the 7 young women had anovulatory cycles. The mean plasma E2 was lower during the follicular and ovulatory phases (P less than 0.025) and mean plasma LH was higher (P less than 0.005) during the luteal phase, when E3 was given. Because of the 2 anovulatory cycles, the mean Pg value during the luteal phase was lower (P less than 0.05) during treatment. There was a slight decrease in mean Prl in 5 out of 6 women (P less than 0.0005), but in only 1 woman was this decrease substantial (from a mean value of 27.6 to 18.9 ng/ml; P less than 0.01). When 6 or 20 mg E3 was administered orally in the morning, a significant negative correlation (P less than 0.01) between plasma Prl and unconjugated E3 was found. The correlation coefficient was highest (r = -0.74) with 6 mg E3. When 3 mg was administered, no obvious effect on Prl secretion was seen. However, when results from all experiments with identical time schedules were pooled (two with 3 mg, two with 6 mg and one with 20 mg E3) and the mean values for plasma Prl were calculated and compared with the mean values obtained in 7 control experiments, it was found that E3 administration in the morning almost abolishes the Prl rise during the following night. There was a statistically significant (P less than 0.0125) decrease in the difference between the maximum value during the night and the minimum value during the day. The minimum value was significantly higher (P less than 0.01) and the maximum value significantly lower (P less than 0.025) after E3 treatment, compared to the control values. It is concluded that long-term administration of 20 mg E3 usually has only a slight but significant decreasing effect on mean plasma Prl concentration measured in the morning, before the next dose is taken. (ABSTRACT TRUNCATED AT 400 WORDS) |
Time course of competence in phytochrome-controlled appearance of nuclear-encoded plastidic proteins and messenger RNAs.
The phytochrome-controlled expression of genes coding for plastidic proteins was studied in mustard (Sinapis alba L.) seedling cotyledons in continuous red (R) and far-red (FR) light, i.e. under steady-state conditions with regard to phytochrome, and in darkness over a time span of 8 d after sowing (25° C). (i) The time courses of the levels of the Calvin-cycle enzymes ribulose-1,5-bisphosphate carboxylase (RuBPCase) and NADP-dependent glyceraldehyde-3-phosphate dehydrogenase (NADP-GPD) were found to be optimum curves. The time at which the optimum (peak) occurred was - independent of fluence rate - the same in R (strong phytochrome action, chlorophyll accumulation and photosynthesis) and FR (strong phytochrome action but no significant chlorophyll accumulation and no photosynthesis). The starting point (first detectable increase of enzyme level) was also endogenously fixed and not affected by light. However, the two enzymes differed insofar as the peak was at 4 d after sowing for RuBPCase activity and 4.5 d for GPD. Western blots of the small (SSU) and large (LSU) subunits of RuBPCase showed that enzyme activity and protein levels were correlated. It was concluded that a dramatic change of competence towards phytochrome had occurred and that this change was endogenous. This conclusion was confirmed by short-term induction experiments. In constant darkness (D) the low enzyme levels were saturation rather than optimum curves, presumably because enzyme turnover was lacking. (ii) The time course of accumulation of membrane components showed that chlorophyll and LHCP (light-harvesting chlorophyll a/b-binding protein of photosystem II) levels were closely correlated in R until 6 d after sowing. Thereafter the levels remained constant. The accumulation of membrane components was not related to the accumulation of Calvin-cycle enzymes. (iii) Time courses of the levels of translatable mRNAs, particularly SSU mRNA and LHCP mRNA, were determined. In the case of SSU the maximum mRNA level was found in R, FR and D around 3 d. This was compatible with the in-situ protein accumulation rate. Induction experiments with FR showed that accumulation of SSU mRNA followed the same rise and fall (peak at 3 d) as would be expected from the time course of mRNA levels and from enzyme-induction experiments. In the case of LHCP mRNA the peak was between 3 and 4 d in R, and was not well correlated with in-situ protein accumulation. Translatable LHCP mRNA was also formed in FR and in D - with a peak between 3 and 4 d - although LHCP protein was not detectable under these circumstances (because of the lack of chlorophyll). The data indicate that competence of gene expression towards phytochrome is determined endogenously. However, in the case of LHCP its appearance is not only limited by mRNA but also depends on the availability of chlorophyll. |
High rates of early HCV reinfection after DAA treatment in people with recent drug use attended at mobile harm reduction units.
The World Health Organization recently called for the elimination of hepatitis C virus (HCV) and has identified people who inject drugs (PWID) as a key target population. Clinical trials analyzing currently available all-oral regimens have demonstrated a high degree of efficacy in this population, with a relatively low reinfection rate. There is an urgent need to confirm these data in a harm reduction and active consumption setting. The primary aim of this study was to evaluate the HCV reinfection rate in people with recent drug use followed at low-threshold mobile harm reduction units. We included people with recent drug use (smoked or injected heroin/cocaine in the previous 6 months) who received HCV treatment and were attended at two low-threshold mobile harm reduction units over 19 months. Sustained virologic response was assessed 12 weeks after therapy (SVR12). The incidence density of HCV reinfection was defined as the number of reinfections per 100 person-years (PY) using person-time of observation and was stratified by drug consumption at initiation of HCV treatment. Cox proportional hazard regression analysis was used to assess factors associated with reinfection. During the study period, 160 people who used drugs in the past 6 months completed HCV therapy. 122 (73.9%) and 88 (53.3%) reported injecting drug use in the 6 months and 30 days prior to HCV treatment, respectively. The overall SVR12 was 68% in the ITT analysis (reinfection = failure) and 90.7% in the modified intent-to-treat analysis (considering reinfections as response and removing people who were missing SVR data). The cohort at risk for reinfection (n = 121) included 47 (39.2%) people who initiated HCV treatment with recently reported abstinence. Reinfection was identified in 10 persons (8.3%), and the median time to reinfection was 7.2 (IQR 4.2-18) months. Total follow-up time at risk was 101.1 PY (median 0.6 years, IQR 0.3-1.3). The overall incidence of reinfection was 9.8 per 100 PY (95% CI 4.7-18.2). The incidence of reinfection was higher amongst those who had injected drugs in the previous 6 months (16.7 [95% CI 8.0-30.7] per 100 PY) and in the previous 30 days (18.9 [95% CI 8.1-37.2] per 100 PY). In the adjusted analysis, only injecting drug use in the month prior to initiation of HCV therapy was associated with reinfection (aHR 8.7, 95% CI 1.0-73.6; p = 0.04). High efficacy of HCV treatment was found in people with recent drug use attended and followed at low-threshold mobile harm reduction units. The high rate of early HCV reinfections in this setting should prompt surveillance for reinfection at intervals of 7 months or less after the end of treatment. |
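The incidence density reported above (reinfections per 100 person-years with a 95% CI) can be reproduced almost exactly from the counts in the abstract. The sketch below uses the standard chi-square-based exact Poisson interval; whether the authors used this exact method is an assumption.

```python
# Sketch: incidence density with an exact Poisson 95% CI.
# events and person-years are taken from the abstract; the exact
# chi-square-based interval is an assumed (standard) choice of method.
from scipy.stats import chi2

events = 10            # HCV reinfections observed
person_years = 101.1   # total follow-up time at risk

rate = events / person_years * 100
lower = chi2.ppf(0.025, 2 * events) / 2 / person_years * 100
upper = chi2.ppf(0.975, 2 * (events + 1)) / 2 / person_years * 100
print(f"{rate:.1f} per 100 PY (95% CI {lower:.1f}-{upper:.1f})")
# -> ~9.9 per 100 PY (95% CI 4.7-18.2); the abstract reports 9.8,
#    the small difference presumably reflecting rounding of person-time.
```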
[Efficacy of PCR-microwell plate hybridization method (Amplicor Mycobacterium) for detection of M. tuberculosis, M. avium and/or M. intracellulare in clinical specimens].
Recently, a new kit to detect and identify mycobacteria in clinical specimens was developed by Japan Roche Co. Limited. The new method is based on amplification of mycobacterial DNA in clinical specimens by PCR and hybridization of the amplified DNA by a microwell plate hybridization method; this is the "Amplicor Mycobacteria" (Roche) kit, abbreviated AMP-M. A cooperative study was organized with 15 tuberculosis hospitals and institutions throughout Japan, and 349 clinical specimens from newly admitted tuberculosis patients and/or suspects were collected during July and August, 1993. All the specimens were examined by smear microscopy (Ziehl-Neelsen's staining), culture on Ogawa egg media, culture on variant 7H9 liquid media, and AMP-M. Excluding 25 specimens for which the species of mycobacteria could not be identified because of contamination, inability to grow on the transplanted solid media, and so on, the results for 324 specimens, consisting of 167 specimens from previously untreated cases and 157 specimens from previously treated cases, were analysed. The main results were as follows. 1. Of 70 smear-positive specimens from previously untreated cases, 59 (84.3%) were culture positive on Ogawa media, 61 (87.1%) on 7H9 media, and 66 (94.3%) were positive by AMP-M. Of 97 smear-negative specimens, the corresponding positive results were 20 (20.6%), 22 (22.7%) and 27 (27.8%), respectively. AMP-M showed the highest positive rate in both groups. 2. The sensitivity and specificity of AMP-M in previously untreated cases were calculated by assuming that a positive result on Ogawa and/or variant 7H9 media is "positive". The sensitivity was 95.8% (68/71) and the specificity was 94.8% (91/96) for M. tuberculosis in previously untreated cases. The sensitivity and specificity for M. avium and M. intracellulare were all 100%, although the numbers observed were small. 3. So-called false positives of AMP-M were observed in 5 cases out of 96 that were culture negative on both Ogawa and variant 7H9 media. However, all 5 cases were positive by repeated AMP-M, 3 became culture positive later, and the other 2 showed clinical findings consistent with tuberculosis. Hence, the authors considered the false positive rate of the AMP-M method to be very low in previously untreated cases. 4. Of 86 smear-positive cases with a history of previous chemotherapy, the numbers positive by culture on Ogawa media, by culture on variant 7H9 media and by AMP-M were 64 (74.4%), 77 (89.5%) and 85 (98.8%), respectively. In the smear-negative cases, the corresponding positive results were 10 out of 71 (14.1%), 13 (18.3%) and 24 (33.8%), respectively. |
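The sensitivity and specificity quoted in point 2 follow directly from the counts given (68/71 and 91/96). A minimal sketch of the calculation, with a generic helper function written for illustration:

```python
# Sketch: sensitivity and specificity of AMP-M against the culture-based
# reference, using the counts given in the abstract. The helper function
# is a generic illustration, not part of the study.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 68 of 71 culture-positive specimens were AMP-M positive;
# 91 of 96 culture-negative specimens were AMP-M negative.
sens, spec = sensitivity_specificity(tp=68, fn=3, tn=91, fp=5)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")   # 95.8%, 94.8%
```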
Speech perception in children with cochlear implants: effects of lexical difficulty, talker variability, and word length.
The present results demonstrated that all 3 factors --lexical difficulty, stimulus variability, and word length--significantly influenced spoken word recognition by children with multichannel cochlear implants. Lexically easy words were recognized significantly better than lexically hard words, regardless of talker condition or word length of the stimuli. These results support the earlier findings of Kirk et al(12) obtained with live-voice stimulus presentation and suggest that lexical effects are very robust. Despite the fact that listeners with cochlear implants receive a degraded speech signal, it appears that they organize and access words from memory relationally in the context of other words. The present results concerning talker variability contradict those previously reported in the literature for listeners with normal hearing(7,11) and for listeners with mild-to-moderate hearing loss who use hearing aids.(14) The previous investigators used talkers and word lists different from those used in the current study and found that word recognition declined as talker variability increased. In the current study, word recognition was better in the multiple-talker condition than in the single-talker condition. Kirk(15) reported similar results for postlingually deafened adults with cochlear implants who were tested on the recorded word lists used in the present study. Although the talkers were equally intelligible to listeners with normal hearing in the pilot study, they were not equally intelligible to children or adults with cochlear implants. It appears that either the man in the single-talker condition was particularly difficult to understand or that some of the talkers in the multiple-talker condition were particularly easy to understand. Despite the unexpected direction of the talker effects, the present results demonstrate that children with cochlear implants are sensitive to differences among talkers and that talker characteristics influence their spoken word recognition. We are conducting a study to assess the intelligibility of each of the 6 talkers to listeners with cochlear implants. Such studies should aid the development of equivalent testing conditions for listeners with cochlear implants. There are 2 possible reasons the children in the present study identified multisyllabic words better than monosyllabic words. First, they may use the linguistic redundancy cues in multisyllabic words to aid in spoken word recognition. Second, multisyllabic words come from relatively sparse lexical neighborhoods compared with monosyllabic tokens. That is, multisyllabic words have fewer phonetically similar words, or neighbors, competing for selection than do monosyllabic stimuli. These lexical characteristics most likely contribute to the differences in identification noted as a function of word length. The significant lexical and word length effects noted here may yield important diagnostic information about spoken word recognition by children with sensory aids. For example, children who can make relatively fine phonetic distinctions should demonstrate only small differences in the recognition of lexically easy versus hard words or of monosyllabic versus multisyllabic stimuli. In contrast, children who process speech using broad phonetic categories should show much larger differences. That is, they may not be able to accurately encode words in general or lexically hard words specifically. 
Further study is warranted to determine the interaction between spoken word recognition and individual word encoding strategies. |
[Comparative studies on activities of antimicrobial agents against causative organisms isolated from patients with urinary tract infections (1997). I. Susceptibility distribution].
The frequencies of isolation and susceptibilities to antimicrobial agents were investigated on 560 bacterial strains isolated from patients with urinary tract infections (UTIs) in 9 hospitals during the period of June 1997 to May 1998. Of the above bacterial isolates, Gram-positive bacteria accounted for 29.3%, and a majority of them were Enterococcus faecalis. Gram-negative bacteria accounted for 70.7%, and most of them were Escherichia coli. The susceptibilities of several of the isolated bacterial species to antimicrobial agents were as follows: 1. Enterococcus faecalis: Ampicillin (ABPC) showed the highest activity against E. faecalis isolated from patients with UTIs. Its MIC90 was 1 microgram/ml. Imipenem (IPM) and vancomycin (VCM) were also active, with MIC90s of 2 micrograms/ml. The other drugs had low activities, with MIC90s of 16 micrograms/ml or above. 2. Staphylococcus aureus including MRSA: VCM and arbekacin (ABK) showed the highest activities against both S. aureus and MRSA isolated from patients with UTIs. Their MIC90s were 1 microgram/ml. The other drugs except minocycline (MINO) had low activities, with MIC90s of 32 micrograms/ml or above. More than half of the S. aureus strains (including MRSA) showed high susceptibilities to gentamicin (GM) and MINO, with MIC50s of 0.25 or 0.5 microgram/ml. 3. Enterobacter cloacae: IPM showed the highest activity against E. cloacae. The MICs for all strains were equal to or lower than 1 microgram/ml. The MIC90s of ciprofloxacin (CPFX) and tosufloxacin (TFLX) were 1 microgram/ml, the MIC90s of amikacin (AMK) and ofloxacin (OFLX) were 4 micrograms/ml, and the MIC90 of GM was 16 micrograms/ml. The proportion of E. cloacae strains with low susceptibility to quinolones decreased in 1997 compared with 1996, but the other drugs were not as active in 1997 as in 1996. 4. Escherichia coli: All drugs except penicillins were active against E. coli, with MIC90s of 8 micrograms/ml or below. In particular, flomoxef (FMOX), cefmenoxime (CMX), cefpirome (CPR), cefozopran (CZOP), IPM, CPFX and TFLX showed the highest activities against E. coli, with MIC90s of 0.125 microgram/ml or below. 5. Klebsiella pneumoniae: K. pneumoniae was susceptible to almost all the drugs except penicillins. Carumonam (CRMN) had the strongest activity, with MICs for all strains equal to or lower than 0.125 microgram/ml. FMOX, CPR, CZOP, CPFX and TFLX were also active, with MIC90s of 0.125 microgram/ml or below. The MIC90s of the quinolones improved in 1997 compared with 1996. 6. Proteus mirabilis: Almost all the drugs except ABPC and MINO showed high activities against P. mirabilis. CMX, ceftazidime (CAZ), latamoxef (LMOX), CPR, cefixime (CFIX), cefpodoxime (CPDX) and CRMN showed the highest activities against P. mirabilis; their MICs for all strains were equal to or lower than 0.125 microgram/ml. CPFX and TFLX were also active, with MIC90s of 0.125 microgram/ml or below. 7. Pseudomonas aeruginosa: The MIC90 of GM was 8 micrograms/ml, and the MIC90s of AMK, IPM and meropenem (MEPM) were 16 micrograms/ml. The other drugs were not very active against P. aeruginosa, with MIC90s of 32 micrograms/ml or above. The MIC90s of the quinolones were lower in 1997 than in 1996. 8. Serratia marcescens: IPM showed the highest activity against S. marcescens. Its MIC90 was 2 micrograms/ml. GM was also active, with an MIC90 of 4 micrograms/ml. The MIC90s of the other drugs were 16 micrograms/ml or above. 
The MIC50 of CRMN was 0.125 microgram/ml or below, and the MIC50s of CPR and CZOP were 0.25 microgram/ml. |
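MIC50 and MIC90 values like those tabulated above are read off the distribution of isolate MICs as the lowest tested concentration inhibiting at least 50% or 90% of strains. A minimal sketch with a hypothetical MIC list (not data from this survey):

```python
# Sketch: MIC50/MIC90 as the lowest concentration inhibiting >=50% or >=90%
# of isolates. The MIC list below is hypothetical, not data from this survey.
import math

def mic_percentile(mics, pct):
    ordered = sorted(mics)
    k = math.ceil(pct / 100 * len(ordered))   # smallest count covering >= pct of strains
    return ordered[k - 1]

mics = [0.125, 0.125, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 1,
        1, 1, 2, 2, 4, 4, 8, 16, 32, 64]      # micrograms/ml, 20 isolates
print("MIC50 =", mic_percentile(mics, 50), "microgram/ml")   # 1
print("MIC90 =", mic_percentile(mics, 90), "microgram/ml")   # 16
```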
Final report on the safety assessment of disperse Blue 7.
Disperse Blue 7 is an anthraquinone dye used in cosmetics as a hair colorant in five hair dye and color products reported to the Food and Drug Administration (FDA). Hair dyes containing Disperse Blue 7, as "coal tar" hair dye products, are exempt from the principal adulteration provision and from the color additive provision in sections 601 and 706 of the Federal Food, Drug, and Cosmetic Act of 1938 when the label bears a caution statement and "patch test" instructions for determining whether the product causes skin irritation. Disperse Blue 7 is also used as a textile dye. The components of Disperse Blue 7 reportedly include Disperse Turquoise ALF Granules, Disperse Turquoise LF2G, Reax 83A, Tamol SW, and Twitchell Oil. No data were available that addressed the acute, short-term, or chronic toxicity of Disperse Blue 7. A mouse lymph node assay used to predict the sensitization potential of Disperse Blue 7 was negative. Although most bacterial assays for genotoxicity were negative in the absence of metabolic activation, consistently positive results were found with metabolic activation in Salmonella strains TA1537, TA1538, and TA98, which were interpreted as indicative of point mutations. Studies using L5178Y mouse lymphoma cells appeared to confirm this mutagenic activity. Mammalian assays for chromosome damage, however, were negative, and animal tests found no evidence of dominant lethal mutations. Case reports describe patients patch tested with Disperse Blue 7 to determine the source of apparent adverse reactions to textiles. In most patients, patch tests were negative, but there are examples in which the patch test for Disperse Blue 7 was positive. In general, anthraquinone dyes are considered frequent causes of clothing dermatitis. The Cosmetic Ingredient Review Expert Panel determined that there was a paucity of data regarding the safety of Disperse Blue 7 as used in cosmetics. The following data are needed in order to arrive at a conclusion on the safety of Disperse Blue 7 in cosmetic products: (1) methods of manufacture, including clarification of the relationship between Disperse Blue 7 and Disperse Turquoise ALF and Disperse Turquoise LF2G mixed with Reax 83A, Tamol SW, and Twitchell Oil; (2) analytical methods by which Disperse Blue 7 is measured; (3) impurities; (4) concentration of use as a function of product type; (5) confirmation that this is a direct hair dye; and (6) clarification of genotoxicity study results (e.g., Disperse Turquoise ALF and Disperse Turquoise LF2G were genotoxic in bacteria - what is the specific relation to Disperse Blue 7? Disperse Blue 7 at 60% purity was genotoxic in bacteria - is the other 40% the inert Reax 83A, Tamol SW, and Twitchell Oil?). Until such data are provided, the available data are insufficient to support the safety of Disperse Blue 7 as a hair dye ingredient in cosmetic formulations. |
Living lobar lung transplantation.
A constant awareness of the risk to the living donors must be maintained with any live-donor organ transplantation program, and comprehensive short- and long-term follow-up should be strongly encouraged to maintain the viability of these potentially life-saving programs. There has been no perioperative or long-term mortality following lobectomy for living lobar lung transplantation, and in the authors' series the perioperative risks associated with donor lobectomy are similar to those seen with standard lung resection. These risks might increase if the procedure were offered on an occasional basis and not within a well-established program. Further long-term outcome data, similar to data for live-donor renal and liver transplantation, are needed. Therefore, the authors still favor performing living lobar lung transplantation only for the patient with a clinically deteriorating condition. They believe that prospective donors should be informed of the morbidity associated with donor lobectomy and the potential for mortality, as well as of potential recipient outcomes in regard to life expectancy and quality of life after transplantation. A major question regarding lobar lung transplantation that has remained unanswered during the last decade is when a potential recipient is too ill to justify placing two healthy donors at risk of donor lobectomy. Recipient age, gender, indication for primary transplant, prehospitalization status, preoperative steroid usage, relationship of donor to recipient, and the presence or absence of rejection episodes postoperatively do not seem to influence overall mortality. Patients receiving mechanical ventilation preoperatively and those undergoing retransplantation after either a previous cadaveric or lobar lung transplantation have significantly elevated odds ratios for postoperative death. The authors therefore recommend caution in these subgroups of patients. This experience is similar to the cadaveric experience, in which intubated patients have higher 1-year mortality and patients undergoing retransplantation have decreased 3- and 5-year survival. A similar experience with a smaller number of lobar transplants has been reported by the Washington University group. Despite the high-risk patient population, this alternative procedure has been life saving in severely ill patients who would die or become unsuitable recipients before a cadaveric organ becomes available. Although cadaveric transplantation is preferable because of the risk to the donors, living lobar lung transplantation should continue to be used under properly selected circumstances. Although there have been no deaths in the donor cohort, a risk of death between 0.5% and 1% should be quoted pending further data. These encouraging results are important if this procedure is to be considered as an option at more pulmonary transplant centers in view of the institutional, regional, and intra- and international differences in the philosophical and ethical acceptance of the use of organs from live donors for transplantation. |
Effect of dietary calcium and phosphorus levels on the total tract digestibility of innate and supplemental organic and inorganic microminerals in a corn-soybean meal based diet of grower pigs.
The effects of Ca and P (CaP) levels and micromineral sources on mineral digestibility were evaluated in growing pigs. Treatments consisted of 2 levels of CaP and 3 trace mineral (TM) treatments arranged as a 2 × 3 factorial in a randomized complete block design with 8 replicates. The CaP levels evaluated were: 1) 0.65% Ca and 0.55% P [standard CaP (Std CaP)], and 2) 1.00% Ca and 0.85% P (High CaP). The TM treatments were: 1) Basal, without supplemental TM, 2) Basal supplemented with organic TM, and 3) Basal supplemented with inorganic TM. Both the organic and inorganic TM premixes added 15 mg Cu, 150 mg Fe, 10 mg Mn, 0.3 mg Se, and 140 mg Zn/kg diet. Diets were formulated using corn-soybean meal with a Ca to P ratio of 1.18 in both CaP treatments. Barrows with an initial BW of 45 kg were acclimated to stainless steel metabolism crates where diets were fed for 14 d before a 10-d collection period. Pigs within replicates were fed equivalent amounts of feed at 0800 and 1600 h each day, with water provided free choice. Total feces, urine, and feed orts were collected daily. Essential macro- and microminerals were analyzed by inductively coupled plasma analysis. Increasing dietary CaP decreased the digestibility of Ca and Zn. Phosphorus digestibility did not change when the P inclusion level increased from 0.55 to 0.85% total P. The High CaP level resulted in lower urinary excretion of most minerals as the dietary CaP level increased, with the decreases significant for Cu (P < 0.05) and Mn (P < 0.05) but not for the other minerals. The apparent total tract digestibility (ATTD) for each of the experimental variables was statistically analyzed and averaged for the experiment. Although there were few statistically significant differences for individual minerals, digestibility generally declined when the High CaP diet was fed, averaging approximately 3% lower than when the Std CaP level was fed. Organic TM averaged approximately 5% greater digestibility than the inorganic microminerals, with the differences between minerals within each source relatively consistent. These results indicate that CaP level had the greatest effect on mineral digestibility, that organic microminerals had a greater digestibility than inorganic minerals, and that the innate microminerals had an average apparent digestibility of 45%. |
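The mineral digestibility values discussed above are apparent total tract digestibilities, i.e. the fraction of ingested mineral not recovered in feces. A minimal sketch of that calculation; the intake and fecal figures are hypothetical, chosen only to illustrate a 45% ATTD like the average reported for the innate microminerals:

```python
# Sketch: apparent total tract digestibility (ATTD) of a mineral.
# The intake and fecal excretion values are hypothetical.
def attd(intake_mg, fecal_mg):
    """ATTD (%) = (mineral intake - fecal mineral excretion) / intake * 100."""
    return (intake_mg - fecal_mg) / intake_mg * 100

print(f"ATTD of Zn = {attd(intake_mg=280.0, fecal_mg=154.0):.0f}%")   # 45%
```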
[Drawing up guidelines for the monitoring of the physical health of patients with severe mental illness].
Having a mental illness has been, and remains even now, a strong barrier to effective medical care. Most mental illnesses, such as schizophrenia, bipolar disorder, and depression, are associated with undue medical morbidity and mortality. This represents a major health problem, with a lifespan 15 to 30 years shorter than that of the general population. Based on these facts, a workshop was convened by a panel of specialists - psychiatrists, endocrinologists, cardiologists, internists, and pharmacologists from several French hospitals - to review the information relating to comorbidity and mortality among patients with severe mental illness, the risks of antipsychotic treatment for the development of metabolic disorders and, finally, cardiovascular disease. The French experts strongly agreed on the following points: patients with severe mental illness have a higher rate of preventable risk factors such as smoking, addiction, poor diet, and lack of exercise; the recognition and management of morbidity are made more difficult by barriers related to the patients, the illness, the attitudes of medical practitioners, and the structure of healthcare delivery services; and improved detection and treatment of comorbid medical illness in people with severe mental illness will have significant benefits for their psychosocial functioning and overall quality of life. GUIDELINES FOR INITIATING ANTIPSYCHOTIC THERAPY: Based on these elements, the French experts propose guidelines for practising psychiatrists when initiating and maintaining therapy with antipsychotic compounds. The aim of the guidelines is practical and concerns the detection of medical illness at the first episode of mental illness, the management of comorbidity together with other specialists and the family practitioner, and follow-up built around some key points. The guidelines are divided into two major parts. The first part provides a review of mortality and comorbidity of patients with severe mental illness: the increased morbidity and mortality are primarily due to premature cardiovascular disease (myocardial infarction, stroke...). The cardiovascular events are strongly linked to non-modifiable risk factors such as age, gender, and personal and/or family history, but also to crucial modifiable risk factors, such as overweight and obesity, dyslipidemia, diabetes, hypertension and smoking. Although these classical risk factors exist in the general population, epidemiological studies suggest that patients with severe mental illness have an increased prevalence of these risk factors. The causes of increased metabolic and cardiovascular risk in this population are strongly related to poverty and limited access to medical care, but also to the use of psychotropic medication. It also provides a review of the major published consensus guidelines for metabolic monitoring of patients treated with antipsychotic medication, which have recommended stringent monitoring of metabolic status and cardiovascular risk factors in psychiatric patients receiving antipsychotic drugs. There have been six attempts, all published between 2004 and 2005: Mount Sinai, Australia, ADA-APA, Belgium, United Kingdom, and Canada. Each guideline had specific, somewhat discordant, recommendations about which patients and drugs should be monitored. However, there was agreement on the importance of baseline monitoring and of follow-up for the first three to four months of treatment, with subsequent ongoing reevaluation. 
There was agreement on the utility of the following tests and measures: weight and height, waist circumference, blood pressure, fasting plasma glucose, fasting lipid profile. In the second part, the French experts propose guidelines for practising psychiatrists when initiating and maintaining therapy with antipsychotic drugs: the first goal is identification of risk factors for development of metabolic and cardiovascular disorders: non modifiable risk factors: these include: increasing age, gender (increased rates of obesity, diabetes and metabolic syndrome are observed in female patients treated with antipsychotic drugs), personal and family history of obesity, diabetes, heart disease, ethnicity as we know that there are increased rates of diabetes, metabolic syndrome and coronary heart disease in patients of non European ethnicity, especially among South Asian, Hispanic, and Native American people. Modifiable risk factors: these include: obesity, visceral obesity, smoking, physical inactivity, and bad diet habits. Then the expert's panel focussed on all the components of the initial visit such as: family and medical history; baseline weight and BMI should be measured for all patients. Body mass index can be calculated by dividing weight (in kilograms) by height (in meters) squared; visceral obesity measured by waist circumference; blood pressure; fasting plasma glucose; fasting lipid profiles. These are the basic measures and laboratory examinations to do when initiating an antipsychotic treatment. ECG: several of the antipsychotic medications, typical and atypical, have been shown to prolong the QTc interval on the ECG. Prolongation of the QTc interval is of potential concern since the patient may be at risk for wave burst arrhythmia, a potentially serious ventricular arrhythmia. A QTc interval greater than 500 ms places the patient at a significantly increased risk for serious arrhythmia. QTc prolongation has been reported with varying incidence and degrees of severity. The atypical antipsychotics can also cause other cardiovascular adverse effects with, for example, orthostatic hypotension. Risk factors for cardiovascular adverse effects with antipsychotics include: known cardiovascular disease, electrolyte disorders, such as hypokaliemia, hypomagnesaemia, genetic characteristics, increasing age, female gender, autonomic dysfunction, high doses of antipsychotics, the use of interacting drugs, and psychiatric illness itself. In any patient with pre-existing cardiac disease, a pre-treatment ECG with routine follow-up is recommended. Patients on antipsychotic drugs should undergo regular testing of blood sugar, lipid profile, as well as body weight, waist circumference and blood pressure, with recommended time intervals between measures. Clinicians should track the effects of treatment on physical and biological parameters, and should facilitate access to appropriate medical care. In order to prevent or limit possible side effects, information must be given to the patient and his family on the cardiovascular and metabolic risks. The cost-effectiveness of implementing these recommendations is considerable: the costs of laboratory tests and additional equipment costs (such as scales, tape measures, and blood pressure devices) are modest. The issue of responsibility for monitoring for metabolic abnormalities is much debated. However, with the prescription of antipsychotic drugs comes the responsibility for monitoring potential drug-induced metabolic abnormalities. 
The onset of metabolic disorders will call for specific treatments. Coordinated action by psychiatrists, general practitioners, endocrinologists, cardiologists, nurses, dieticians, and the family is certainly a key determinant in ensuring optimal care for these patients. |
Intricate interaction between store-operated calcium entry and calcium-activated chloride channels in pulmonary artery smooth muscle cells.
Ca(2+)-activated Cl(-) channels (Cl(Ca)) represent an important excitatory mechanism in vascular smooth muscle cells. Active accumulation of Cl(-) by several classes of anion transporters results in an equilibrium potential for this ion about 30 mV more positive than the resting potential. Stimulation of Cl(Ca) channels leads to membrane depolarization, which enhances Ca(2+) entry through voltage-gated Ca(2+) channels and leads to vasoconstriction. Cl(Ca) channels can be activated by distinct sources of Ca(2+) that include (1) mobilization from intracellular Ca(2+) stores (ryanodine or inositol 1,4,5-trisphosphate [InsP(3)]) and (2) Ca(2+) entry through voltage-gated Ca(2+) channels or reverse-mode Na(+)/Ca(2+) exchange. The present study was undertaken to determine whether Ca(2+) influx triggered by store depletion (store-operated calcium entry, SOCE) activates Cl(Ca) channels in rabbit pulmonary artery (PA) smooth muscle. Classical store depletion protocols involving block of sarcoplasmic reticular Ca(2+) reuptake with thapsigargin (TG; 1 microM) or cyclopiazonic acid (CPA; 30 microM) led to a consistent nifedipine-insensitive contraction of intact PA rings and a rise in intracellular Ca(2+) concentration in single PA myocytes that required the presence of extracellular Ca(2+). In patch clamp experiments, TG or CPA activated a time-independent nonselective cation current (I(SOC)) that (1) reversed between -10 and 0 mV; (2) displayed the typical "N"-shaped current-voltage relationship; and (3) was sensitive to the I(SOC) blocker SKF-96365 (50 microM). In double-pulse protocol experiments, the amplitude of I(SOC) was varied by altering membrane potential during an initial step that was followed by a second constant step to +90 mV to register the Ca(2+)-activated Cl(-) current, I(Cl(Ca)). The niflumic acid-sensitive, time-dependent I(Cl(Ca)) at +90 mV increased in proportion to the magnitude of the preceding hyperpolarizing step, an effect attributed to graded membrane potential-dependent Ca(2+) entry through I(SOC) and confirmed in dual patch clamp and Fluo-5 experiments recording membrane current and free intracellular Ca(2+) concentration simultaneously. Reverse-transcription polymerase chain reaction (RT-PCR) experiments confirmed the expression of several molecular determinants of SOCE, including transient receptor potential canonical (TRPC) 1, TRPC4, and TRPC6; stromal interaction molecule (STIM) 1 and 2; and Orai1 and 2, as well as the novel and probable molecular candidates thought to encode Cl(Ca) channels, transmembrane protein 16A (TMEM16A, Anoctamin 1 [ANO1]) and 16B (Anoctamin 2 [ANO2]). Our preliminary investigation provides new evidence for a Ca(2+) entry pathway consistent with store-operated Ca(2+) entry signaling that can activate Ca(2+)-activated Cl(-) channels in rabbit PA myocytes. We hypothesize that this mechanism may be important in the regulation of membrane potential, Ca(2+) influx, and tone in these cells under physiological and pathophysiological conditions. |
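The statement that active Cl(-) accumulation places the Cl(-) equilibrium potential roughly 30 mV positive to the resting potential follows from the Nernst equation. A minimal sketch of that calculation; the ion concentrations and resting potential are illustrative textbook-style values for vascular smooth muscle, not measurements from this study:

```python
# Sketch: Nernst potential for Cl- in a vascular smooth muscle cell.
# Concentrations (140 mM outside, 50 mM inside) and the -55 mV resting
# potential are assumed values, not measurements from this study.
import math

R, F, T = 8.314, 96485.0, 310.0    # J/(mol K), C/mol, ~37 degrees C

def nernst_mV(conc_out_mM, conc_in_mM, z):
    """E = (RT/zF) * ln([out]/[in]), returned in millivolts."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

e_cl = nernst_mV(140.0, 50.0, z=-1)          # about -27 mV
resting = -55.0
print(f"E_Cl ~ {e_cl:.0f} mV, i.e. ~{e_cl - resting:.0f} mV positive "
      f"to an assumed resting potential of {resting:.0f} mV")
```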
The interactions between hypothalamic-pituitary-adrenal axis activity, testosterone, insulin-like growth factor I and abdominal obesity with metabolism and blood pressure in men.
To examine potential interactions between abdominal obesity and endocrine, metabolic and hemodynamic perturbations. A subgroup of 284 men from a population sample of 1040 at the age of 51 y. Anthropometric measurements included body mass index (BMI, kg/m2), waist/hip circumference ratio (WHR) and abdominal sagittal diameter (D). Endocrine measurements were a modified, low-dose (0.5 mg) dexamethasone suppression test (Dex), testosterone (T) and insulin-like growth factor I (IGF-I). Overnight fasting values of blood glucose, serum insulin, triglycerides, and total, low and high density lipoprotein cholesterol, as well as resting heart rate and blood pressure, were also determined. Arbitrary subdivisions of the men were performed to obtain subgroups with low T and IGF-I values (lowest decile; cut-offs < or = 13.13 nmol/l and < or = 128.80 microg/l, respectively) and normal or blunted Dex. Significant relationships between BMI, WHR or D and abnormal metabolic and hemodynamic factors, usually with the exception of total and low density lipoprotein cholesterol, were then found in subgroups with different endocrine profiles. These included men with a blunted Dex test with low T or IGF-I values, as well as men with a normal Dex test and low or normal T or IGF-I values. In addition, a group with isolated low Dex suppression, as well as another group without endocrine abnormalities, showed such relationships. These findings suggest that, in men, obesity factors are associated with metabolic and hemodynamic complications with or without the presence of perturbations of hypothalamic-pituitary-adrenal (HPA) axis regulation or low T or growth hormone secretion. In order to generate hypotheses concerning the nature of the impact of the endocrine perturbations in abdominal obesity and its metabolic complications, path analyses were performed, testing different models. These models included the endocrine measurements (Dex test, T and IGF-I), the WHR and D (representing abdominal distribution of fat), BMI (representing obesity), as well as insulin and triglyceride values (representing metabolic perturbations). The results showed a satisfactory fit (goodness-of-fit index: 0.945-1.0) for the path diagrams Dex --> T/IGF-I --> WHR or D --> insulin --> triglycerides, with an additional direct input of blunted Dex on insulin values (see Figure 1). With BMI as determinant, essentially the same results were found, with the addition of a direct pathway between Dex and BMI as well as between IGF-I/T and insulin (Figure 2). There was no evidence for pathways where WHR or BMI determined endocrine variables. The results suggest that abdominal obesity, with or without endocrine abnormalities, exerts a major impact on abnormalities in metabolic and hemodynamic variables. Abdominal obesity seems to be dependent on endocrine abnormalities, which in turn show direct or indirect relationships to the metabolic and circulatory variables, including a direct pathway between HPA-axis perturbations and accumulation of total body fat as indicated by the BMI. It is therefore suggested that endocrine perturbations are followed by obesity and by storage of an elevated proportion of fat in visceral depots, followed by metabolic and hemodynamic abnormalities. This statistical evidence is supported by evidence of mechanistic links in previous studies, suggesting the possibility of causal relationships. 
The results also indicate that there are subgroups of abdominal obesity and its associated metabolic and hemodynamic abnormalities, which might reflect the contribution of different pathogenetic factors. |
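The path analysis described above (Dex --> T/IGF-I --> WHR or D --> insulin --> triglycerides, with a direct Dex --> insulin input) can be approximated by a chain of standardized regressions, one per arrow. The sketch below does exactly that on simulated data; it is not the authors' structural-equation fit and does not compute their goodness-of-fit index, and all variable relationships in the simulation are assumptions.

```python
# Sketch: path-style analysis approximated as standardized OLS regressions
# along the chain Dex -> T/IGF-I -> WHR -> insulin -> triglycerides.
# The simulated data and coefficients are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 284
dex = rng.normal(size=n)                     # dexamethasone suppression (blunted = low)
t_igf = -0.4 * dex + rng.normal(size=n)      # testosterone / IGF-I axis
whr = -0.3 * t_igf + rng.normal(size=n)      # waist/hip ratio
insulin = 0.5 * whr + 0.2 * (-dex) + rng.normal(size=n)
trigly = 0.4 * insulin + rng.normal(size=n)

df = pd.DataFrame({"dex": dex, "t_igf": t_igf, "whr": whr,
                   "insulin": insulin, "trigly": trigly})
df = (df - df.mean()) / df.std()             # standardize -> path coefficients

def path_coef(y, xs):
    fit = sm.OLS(df[y], sm.add_constant(df[xs])).fit()
    return fit.params[xs]

print(path_coef("t_igf", ["dex"]))
print(path_coef("whr", ["t_igf"]))
print(path_coef("insulin", ["whr", "dex"]))  # includes the direct Dex -> insulin input
print(path_coef("trigly", ["insulin"]))
```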
Total knee arthroplasty after previous knee surgery: expected interval and the effect on patient age.
With more than 650,000 knee arthroscopies and 175,000 anterior cruciate ligament reconstructions performed annually in the United States, patients presenting for total knee arthroplasty are increasingly likely to have had previous knee surgery. The purpose of this study was to assess the prevalence of previous knee surgery in patients undergoing total knee arthroplasty and to test the hypothesis that patients with previous knee surgery undergo total knee arthroplasty at a younger age. All patients undergoing primary total knee arthroplasty over the study period who consented to enroll in a prospective total joint registry were reviewed. Inclusion criteria included a diagnosis of osteoarthritis or posttraumatic arthritis. Of 1372 patients in the registry, 1286 met inclusion criteria. Twenty-nine percent had a history of knee surgery, and significantly more men (39%) than women (24%) had a history of knee surgery (p < 0.0001). Patients with previous knee surgery were significantly younger (p < 0.0001) at total knee arthroplasty; the mean age (and standard deviation) was 59 ± 10 years for patients with previous knee surgery compared with 66.6 ± 10.4 years for patients without previous knee surgery. Patients with a history of ligament reconstruction underwent total knee arthroplasty at a significantly younger age (p < 0.0001) than patients with a history of other knee surgery; the mean age (and standard deviation) was 50.2 ± 9.1 years for patients with a history of ligament reconstruction and 59.9 ± 9.6 years for patients with a history of other knee surgery. Among patients who had not undergone previous knee surgery, women underwent total knee arthroplasty at a significantly younger age (p < 0.001) than men; the mean age (and standard deviation) was 65.4 ± 10.3 years for women and 69.3 ± 10 years for men. However, there was no difference in age between the sexes in those with previous knee surgery; the mean age (and standard deviation) was 58.6 ± 10.1 years for women and 59.6 ± 9.8 years for men. The average interval (and standard deviation) from previous knee surgery to total knee arthroplasty is 13.1 ± 12.6 years, longer in men (17.7 ± 13.8 years) than in women (9.1 ± 9.8 years) (p < 0.0001). Patients with previous knee surgery undergo total knee arthroplasty at a significantly younger age than patients without previous knee surgery, especially men and patients with a history of ligament reconstruction. This may be a factor in the rising demand for total knee arthroplasty. Future investigation to identify those at risk for early total knee arthroplasty after knee surgery and to develop methods to delay or to prevent the need for future total knee arthroplasty in these patients is warranted. Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence. |
Swedish use and validation of Valpar work samples for patients with musculoskeletal neck and shoulder pain.
The study describes what an assessment of work capacity involves. In an assessment of work capacity, the demands of performing a specific job, as defined in "The Dictionary of Occupational Titles" (DOT), are compared with the patient's ability to perform a job, which is defined using seven variables: general educational level, specific vocational preparation, aptitudes, areas of interest, temperament, physical demands, and environmental conditions. Work capacity can be assessed either by observing the patient at work in the workplace or during simulated work, for example by using the work samples of the Valpar system. At the Department of Rehabilitation Medicine, Karolinska Sjukhuset, two of the standardized work samples, VCWS 8 "Simulated Assembly" and VCWS 9 "Whole Body Range of Motion", have been used to improve the assessment of the patient's work capacity. VCWS 8 measures "a person's ability to perform assembly work requiring repetitive physical manipulation", and VCWS 9 measures "a person's ability to move the trunk, neck, arms, hands and fingers as they relate to the functional performance of a job". The validation of the work samples for Swedish use was carried out on a group of patients (n = 97) with musculoskeletal neck and shoulder pain who participated in a rehabilitation programme. VCWS 8 was performed by eighty-five patients and VCWS 9 by sixty-nine patients. The mean score for the patients who completed VCWS 8 was 83.1% of the industrial standard level (measured according to MTM, Methods-Time Measurement), for which the lowest limit for an acceptable performance is 87.5%. This means that they did not meet the demands of this specific job. In contrast, the patients who performed VCWS 9 reached a mean score of 108.6%, which exceeds the industrial standard requirement (87.5%). This unexpected result may perhaps be explained by the fact that the patients' occupational areas of interest coincided with the areas of interest covered by VCWS 8 and VCWS 9 in only 32% of the cases. A retrospective comparison between being on sick leave or not and the patients' ability to perform a work sample and reach the minimum industrial standard requirement showed no association. Many factors may have influenced the results, such as the patient's motivation, other psychosocial circumstances, and the occupational therapists' decision to use only two of the many available work samples. The conclusion is that an assessment of a patient's work capacity should be performed prospectively and should take into account the demands of the job according to the DOT, after which the appropriate work sample should be chosen for the individual patient. If this is done correctly, the Valpar work samples are valid and of great help in a medical assessment and in decisions concerning continued sick leave or vocational counselling. |
Release of incompletely processed proinsulin is the cause of the disproportionate proinsulinemia of NIDDM.
The production of insulin from proinsulin involves cleavage of intact proinsulin into proinsulin conversion intermediates by the processing enzymes PC2 and PC3 before fully processed insulin is produced. Intact proinsulin and these conversion intermediates are measured in many immunoreactive insulin (IRI) assays, and therefore contribute to the absolute IRI measurement. The proportion of basal IRI made up of proinsulin (PI)-like molecules (PI/IRI) is increased in NIDDM. Whether stimulated IRI levels are similarly made up of disproportionately increased PI/IRI, or whether the relative proportions of proinsulin and its conversion intermediates are altered, has not been evaluated. An index of the efficiency of proinsulin processing within the pancreatic beta-cell can be obtained by measuring PI/IRI immediately following acute stimulation of beta-cell secretion, and then determining the proportion of intact proinsulin and proinsulin conversion intermediates contributing to circulating proinsulin-like molecules. In this study, we determined the PI/IRI levels under basal and arginine-stimulated conditions in 17 healthy and 16 NIDDM subjects; high-performance liquid chromatography (HPLC) was also performed in a subset of these subjects to measure the relative contribution of intact proinsulin and its conversion intermediates to total proinsulin-like molecules. In NIDDM subjects, levels of both basal (44.6 +/- 9.6 vs. 9.3 +/- 1.5 pmol/l; P = 0.0007) and stimulated (64.0 +/- 12.7 vs. 19.8 +/- 2.8 pmol/l; P = 0.001) proinsulin-like molecules were higher than in healthy subjects. Although IRI was higher in NIDDM than in control subjects under basal conditions (106 +/- 19 vs. 65.1 +/- 8.1 pmol/l; P = 0.05), it was lower in NIDDM than in control subjects following stimulation (increment: 257 +/- 46 vs. 416 +/- 51 pmol/l; P = 0.03). PI/IRI ratios were increased in NIDDM subjects under both basal (43.3 +/- 5.0 vs. 14.0 +/- 1.3%; P < 0.0001) and stimulated (increment: 10.1 +/- 2.1 vs. 2.5 +/- 0.2%; P = 0.0006) conditions, compatible with the release of a disproportionately increased amount of proinsulin-like products. HPLC analysis revealed that, in the stimulated state, intact proinsulin made up 40.1 +/- 6.7% of proinsulin-like molecules in NIDDM individuals (n = 9) and 30.1 +/- 5.6% in healthy subjects (n = 7; NS). The remainder of the proinsulin-like molecules comprised the des-31,32-split proinsulin conversion intermediate. The increase in PI/IRI in NIDDM under basal and especially under stimulated conditions suggests that proinsulin conversion is indeed perturbed in this disorder. Because the relative proportions of intact and des-31,32-split proinsulin are similar in both healthy and NIDDM subjects, the orderly cleavage of proinsulin at its two junctions appears preserved. However, at the time of exocytosis, the secretory granule in the islet of NIDDM subjects contains an increased proportion of incompletely processed proinsulin, presumably reflecting a slower rate of conversion or a reduced residence time of the granules in the beta-cell. |
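For clarity about how the ratios above are formed, they can be written out explicitly. The following is a sketch of the definitions implied by the text; the symbols PI, IRI, and the Δ (increment) notation are introduced here as shorthand and are not taken from the original assay descriptions.

```latex
% Basal ratio: proinsulin-like molecules as a fraction of total immunoreactive insulin
\[ \left(\frac{\mathrm{PI}}{\mathrm{IRI}}\right)_{\mathrm{basal}}
   = \frac{[\text{proinsulin-like molecules}]_{\mathrm{basal}}}{[\mathrm{IRI}]_{\mathrm{basal}}} \times 100\% \]
% Stimulated ratio: computed on the increments above baseline after acute stimulation
\[ \left(\frac{\mathrm{PI}}{\mathrm{IRI}}\right)_{\mathrm{stim}}
   = \frac{\Delta[\text{proinsulin-like molecules}]}{\Delta[\mathrm{IRI}]} \times 100\% \]
```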
[History and development trend of minimally invasive surgery for colorectal cancer in China].
Over the past 20 years, the use of laparoscopic surgery, the mainstay of minimally invasive surgery for colorectal cancer, has changed tremendously. After an initial period of exploration and a phase of rapid development in which procedures were standardized and promoted, minimally invasive surgery for colorectal cancer has now reached a high-level plateau. The unique advantages of laparoscopy, a high-definition and magnified view, together with the establishment of the key technical points of laparoscopic complete mesocolic excision (CME) and improvements in surgical instruments and methods, have made operative technique accurate and standardized with respect to identifying the correct plane, performing high ligation of vessels, and protecting nerves during lymph node dissection in colorectal surgery. At present, the laparoscopy-assisted approach remains the most widely used method for reconstruction of the gastrointestinal (GI) tract. In rectal cancer surgery, the common reconstruction methods are the double-stapling technique, hand-sewn coloanal anastomosis, and laparoscopic reconstruction based on natural orifice specimen extraction (NOSE). Most GI reconstructions, whether stapled or hand-sewn, are still performed extracorporeally by delivering the colon through the small incision used to extract the specimen. Although totally laparoscopic reconstruction requires a smaller incision than traditional reconstruction, it is more expensive. The preferred laparoscopic approach is mainly the medial approach, but with a deeper understanding of CME, total mesorectal excision (TME), and the principles underlying the medial approach, new approaches have been derived, such as the total medial, hybrid medial, and caudal approaches for right hemicolectomy and the cephalad-medial approach for rectal cancer. With the introduction of natural orifice transluminal endoscopic surgery (NOTES), transanal TME and transanal-transabdominal TME have emerged, although experience with them remains preliminary. Current interest in operating platforms is focused on 3D laparoscopic surgery and robotic surgery, while the minimally invasive techniques that have developed most in recent years include single-incision laparoscopy and combined endoscopic-laparoscopic surgery. Even as we remain confident about the future of laparoscopic gastrointestinal surgery, it is worth slowing down to consider some questions seriously: Which issues and challenges should we focus on? Where is innovation needed? With the help of rapidly developing technology, a new disruptive revolution in minimally invasive surgical techniques may be about to emerge. |
Genome exposure and regulation in mammalian cells.
A method of measurement of exposed DNA (i.e. hypersensitive to DNase I hydrolysis) as opposed to sequestered (hydrolysis-resistant) DNA in isolated nuclei of mammalian cells is described. While cell cultures exhibit some differences in behavior from day to day, the general pattern of exposed and sequestered DNA is satisfactorily reproducible and agrees with results previously obtained by other methods. The general pattern of DNA hydrolysis exhibited by all cells tested consists of a curve which at first rises sharply with increasing DNase I and then becomes almost horizontal, indicating that roughly half of the nuclear DNA is highly sequestered. In 4 cases where transformed cells (Raszip6, CHO, HL60 and PC12) were compared, each with its more normal homolog (3T3, and the reverse-transformed versions of CHO, HL60 and PC12, achieved by dibutyryl cyclic AMP [DBcAMP], retinoic acid, and nerve growth factor [NGF], respectively), the transformed form displayed less genome exposure than the nontransformed form at every DNase I dose tested. When Ca++ was excluded from the hydrolysis medium in both the Raszip6-3T3 and the CHO-DBcAMP systems, the normal cell forms lost their increased exposure, reverting to that of the transformed forms. Therefore Ca++ appears necessary for maintenance of the DNA in the more highly exposed state characteristic of the nontransformed phenotype. LiCl increases the DNA exposure of all transformed cells tested. Dextran sulfate and heparin each can increase the DNA exposure of several different cancers. Colcemid prevents the increase of exposure of CHO by DBcAMP, but it must be administered before or simultaneously with the latter compound. Measurements on mouse biopsies reveal large differences in exposure among different normal tissues. Thus, the exposure of adult liver cells was greater than that of adult brain, but both fetal liver and fetal brain had significantly greater exposure than their adult counterparts. Exposure in normal human fibroblasts, as revealed by in situ nick translation, shows a nuclear distribution pattern around the periphery, around the nucleoli and in punctate positions in the nuclear interior in parts of both the S and G1 phases of the cell cycle. The same exposure pattern is duplicated by the pattern of DNA synthesis in S-phase cells. It would appear that these nuclear regions represent positions of special activity. The previously proposed theory of genome regulation in mammalian cells is supported by these findings. The theory proposes that: a) gene activity requires exposure of the given locus followed by action of transcription factors on the exposed genes; b) the fiber system of the cell (cytoskeleton, nuclear fibers, and extracellular fibers) is required for normal exposure; c) active sites for gene expression and replication consist of the nuclear periphery, where differentiation genes in particular are exposed; the nucleoli, where at least some housekeeping genes are exposed; and possibly also punctate regions in the interior; d) noncoding sequences play a critical role in genome regulation, possibly including the transport of loci to be activated to appropriate locations for exposure, transcription and replication. 
Cancer cells have lost specific differentiation gene activities, at least sometimes because of mutation of appropriate exposure genes; at least some protooncogenes and tumor suppressor genes are responsible for exposure and transport of specific differentiation gene loci to their appropriate exposure sites in the nucleus and for inducing exposure. |
Silica-based solid phases for affinity chromatography: Effect of pore size and ligand location upon biochemical productivity.
The influence of pore size and surface chemistry upon the productivity in affinity chromatography of three silica-based solid phases, Sorbsil C-200, C-500, and C-1000 (40-60 microm particle diameter and corresponding pore diameters of 20, 50, and 100 nm), was studied using three model ligand/biomolecule systems of varying molecular masses. These studies revealed two unique parameters of the matrix, biochemical productivity and maximum physical capacity, as generically essential in the successful design and operation of productive affinity chromatography systems. Biochemical productivity, the molar ratio of the amount of product recovered per unit volume of adsorbent to the ligand concentration, utilized the expected stoichiometry of binding of the two molecules to assess the efficacy of the adsorbent. This parameter, determined by equilibrium binding in batch suspensions and by saturation binding capacities and recoveries in fixed beds, yielded the optimum ligand concentration required for maximal performance. Maximum physical capacity, the capacity of the adsorbent to accommodate the biomolecules, was calculated from pore and molecular dimensions assuming that there was no steric hindrance to access. Using an immobilized human-IgG (Hu-IgG)/anti-Hu-IgG monoclonal antibody (MCAB) system, in which both the ligand and the product are of the same size (150 kDa), it was shown that the physical capacity of C-200 was only 16% of the theoretically expected amount. This capacity increased to 70 and 90% of the expected value with C-500 and C-1000, respectively, as the steric hindrance to protein penetration induced by pore dimensions decreased. The distribution of immobilized Hu-IgG within individual particles, visualized by immunofluorescence and immunogold labeling, showed that the ligand was restricted to the peripheral 3 microm of the C-200 particles (12% of the radius). In contrast, it was present throughout the C-1000 particles, indicating that there was no hindrance to access in this solid phase. The C-200 was suitable for use in the small ligand/biomolecule system studied (immobilized trypsin inhibitor binding trypsin; 22.1 and 23.3 kDa, respectively), for which more than 60% of the maximum physical capacity was available for interactions. The C-500 proved satisfactory for the Hu-IgG/MCAB model system but showed steric limitations when an immobilized anti-beta-galactosidase MCAB (anti-beta-gal) was used to purify a larger product (beta-galactosidase; 460 kDa). The binding capacity and overall productivity of Hu-IgG- and anti-beta-gal-C-1000 were equivalent to those of Sepharose CL-4B. Selection of matrices with pore sizes appropriate to the dimensions of the ligand and product was, therefore, important. Finally, the Sorbsil silicas packed easily into beds and were used successfully with conventional chromatography equipment for low-pressure affinity chromatography. They therefore offer an ideal alternative to silica-based high-performance liquid affinity chromatography and soft-gel supports. |
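To make the two matrix parameters above a little more concrete, the following Python sketch shows one way they could be computed. The function names, the 30% packing fraction, the pore-volume figure, and the example numbers are all illustrative assumptions rather than values or methods taken from the study.

```python
# Hedged sketch: biochemical productivity and maximum physical capacity
# of a porous affinity adsorbent. All numeric values are illustrative only.

import math

def biochemical_productivity(product_recovered_umol, bed_volume_ml, ligand_umol_per_ml):
    """Molar ratio of product recovered per unit adsorbent volume to the ligand concentration."""
    product_per_ml = product_recovered_umol / bed_volume_ml
    return product_per_ml / ligand_umol_per_ml  # mol product per mol immobilized ligand

def max_physical_capacity(pore_diameter_nm, molecule_diameter_nm, pore_volume_ml_per_ml):
    """Crude upper bound on molecules accommodated per mL of adsorbent, assuming
    spherical molecules fill the pore volume with no steric hindrance to access."""
    if molecule_diameter_nm >= pore_diameter_nm:
        return 0.0  # molecule cannot enter the pore at all
    molecule_volume_nm3 = (math.pi / 6.0) * molecule_diameter_nm ** 3
    pore_volume_nm3 = pore_volume_ml_per_ml * 1e21  # 1 mL = 1e21 nm^3
    packing_fraction = 0.3  # assumed loose packing inside the pores
    return packing_fraction * pore_volume_nm3 / molecule_volume_nm3

# Example: an IgG-sized molecule (~11 nm diameter) in 20 nm vs. 100 nm pores
for pore_nm in (20, 100):
    n_max = max_physical_capacity(pore_nm, molecule_diameter_nm=11, pore_volume_ml_per_ml=0.5)
    print(f"{pore_nm} nm pores: ~{n_max:.2e} molecules per mL of adsorbent")

# Example: 0.8 umol of product recovered from 1 mL of bed carrying 1 umol/mL of ligand
print("biochemical productivity:",
      biochemical_productivity(product_recovered_umol=0.8, bed_volume_ml=1.0, ligand_umol_per_ml=1.0))
```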
Predicting 1-Year Statin Adherence Among Prevalent Users: A Retrospective Cohort Study.
Attempts to predict who is at risk of future nonadherence have largely focused on predictions at the time of therapy initiation; however, these users are only a small proportion of all patients on therapy at any point in time. Methods to predict nonadherence for established medication users, which have not been previously described in the literature, would be helpful to guide efforts to enhance the use of evidence-based therapies. To test approaches for adherence prediction among prevalent statin users, namely the use of short-term filling behavior, investigator-specified predictors from medical and pharmacy administrative claims, and the empirical selection of potential predictors using the high-dimensional propensity score variable selection algorithm. Medical and prescription claims data from a large national health insurer were used to create a cohort of patients who filled statin medication prescriptions in January 2012. We defined 6 groups of adherence predictors and estimated 10 main models to predict medication adherence in the full cohort. The same was done for the population stratified based on the days supply of the index statin prescription (≤ 30 days vs. > 30 days). The study cohort consisted of 93,777 individuals, 58.4% of whom were adherent to statins during follow-up. The use of 3 pre-index adherence predictors alone achieved a c-statistic of 0.70. Investigator-specified and empirically selected pharmacy, medical, and demographic variables did substantially worse (0.57-0.60). The use of 3 indicators of post-index adherence achieved a higher c-statistic than the best-performing model using pre-index information (0.74 vs. 0.72). The addition of 3 pre-index adherence predictors further improved discrimination (0.78). This analysis demonstrated the ability to predict adherence among medication users using filling behavior before and immediately after an index prescription fill. This work was supported by an unrestricted grant from CVS Health to Brigham and Women's Hospital. Shrank, Brennan, and Matlin were employees and shareholders of CVS Health at the time of manuscript preparation; they report no financial interests in products or services that are related to the subject of the manuscript. Franklin has received consulting fees from Aetion. Choudhry has received grants from the National Heart, Lung, and Blood Institute, PhRMA Foundation, Merck, Sanofi, AstraZeneca, and MediSafe. Spettell is an employee of, and shareholder in, Aetna. The other authors have nothing to disclose. Krumme, Choudhry, Tong, and Franklin contributed to the study design, interpretation of results, and manuscript drafting. Tong prepared and analyzed the data. Isaman, Spettell, Shrank, Brennan, and Matlin provided interpretation of results and critical manuscript revisions. |
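As a rough illustration of the kind of modeling described above (a small set of filling-behavior indicators fed into a classifier and compared on the c-statistic), here is a short Python sketch on synthetic data. The feature construction, the logistic-regression model, and all numbers are assumptions for illustration, not the authors' actual specification.

```python
# Hedged sketch of adherence prediction from filling-behavior indicators.
# Synthetic data; feature names and model choice are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Three pre-index and three post-index filling indicators (e.g., gap > 30 days,
# proportion of days covered in a short window, number of fills).
X = rng.binomial(1, 0.5, size=(n, 6)).astype(float)
# Outcome: adherent during 1-year follow-up, loosely driven by the indicators.
logit = -0.5 + X @ np.array([0.4, 0.3, 0.3, 0.8, 0.7, 0.6])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pre_only = slice(0, 3)      # columns 0-2: pre-index indicators
pre_and_post = slice(0, 6)  # all six indicators

for name, cols in [("pre-index only", pre_only), ("pre + post index", pre_and_post)]:
    model = LogisticRegression().fit(X_train[:, cols], y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test[:, cols])[:, 1])
    print(f"{name}: c-statistic = {auc:.2f}")
```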
Indicators for the evaluation of diet quality.
The role of diet quality and physical activity in reducing the progression of chronic disease is becoming increasingly important. Dietary Quality Indices or Indicators (DQIs) are algorithms that aim to evaluate the overall diet and categorize individuals according to the extent to which their eating behaviour is "healthy". These predefined indexes assess dietary patterns based on current nutrition knowledge and were developed primarily for nutritional epidemiology, to assess dietary risk factors for non-communicable diseases. DQIs fall into three major categories: a) nutrient-based indicators; b) food/food group based indicators; and c) combination indexes, which make up the vast majority of DQIs and often include a measure of diet variety within and across food groups, a measure of adequacy, i.e. nutrients (compared with requirements) or food groups (quantities or servings), a measure of nutrients/foods to consume in moderation, and an overall balance of macronutrients. The Healthy Eating Index (HEI), the Diet Quality Index (DQI), the Healthy Diet Indicator (HDI) and the Mediterranean Diet Score (MDS) are the four 'original' diet quality scores that have been referred to and validated most extensively. Several indexes have been adapted and modified from these originals. In particular, many variations on the MDS have been proposed, including different alternate MDS versions and the Mediterranean Diet Adherence Screener (MEDAS). The primary data sources for DQIs are individual dietary data collection tools, namely 24-h quantitative intake recalls, dietary records and food frequency questionnaires. Nutrients found in many scores are total fat, saturated fatty acids (SFA), the ratio of monounsaturated fatty acids to SFA, or the ratio of SFA to polyunsaturated fatty acids. Cholesterol, protein content and quality, complex carbohydrates, mono- and disaccharides, dietary fibre and sodium are also found in various scores. All DQIs, except those that contain only nutrients, include fruits and vegetables as components; additional attributes are legumes or pulses, and nuts and seeds. Meat and meat products, namely red and processed meat, poultry, and milk and dairy products are also included in many scores. Other foods contained in some DQIs, e.g. the MDS, are olive oil and fish. Nowadays, there is interest in defining not only DQIs but also healthy life indices (HLIs), which give information on behaviours associated with specific patterns and, beyond dietary habits, include physical activity, rest and selected socio-cultural habits. The Mediterranean Lifestyle (MEDLIFE) index has recently been created based on the current Spanish Mediterranean food guide pyramid; it includes the assessment of food consumption directly related to the Mediterranean diet, physical activity and rest, and other relevant cultural information. However, a global HLI should consider, based on the Iberoamerican Nutrition Foundation (FINUT) Pyramid of Healthy Lifestyles, in addition to food groups and nutrients, selected items on food safety (e.g. consumption of processed foods, food handling, preparation and storage, and access to drinking water), selected food habits (including alcoholic beverage and salt consumption patterns, purchase of seasonal and local foods, home cooking and conviviality), as well as patterns of physical activity, sedentary and rest habits, and selected sociocultural habits, particularly those related to food selection, religious beliefs and socializing with friends. |
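To illustrate what "algorithm" means in this context, here is a deliberately simplified, hypothetical scoring sketch in Python. The components, targets, and point values are invented for illustration and do not reproduce the scoring rules of the HEI, DQI, HDI, or MDS.

```python
# Hedged sketch: a toy diet-quality score combining adequacy and moderation
# components, in the spirit of (but not identical to) indices such as the HEI.

def score_component(intake, target, higher_is_better=True, max_points=10):
    """Linear score from 0 to max_points against a target intake."""
    ratio = intake / target if target else 0.0
    if higher_is_better:
        return max_points * min(ratio, 1.0)
    # Moderation component: full points at or below target, zero at double the target.
    return max_points * max(0.0, min(1.0, 2.0 - ratio))

def toy_diet_quality_index(day):
    """day: dict of daily intakes; the keys and targets are illustrative assumptions."""
    score = 0.0
    score += score_component(day["fruit_servings"], target=2)
    score += score_component(day["vegetable_servings"], target=3)
    score += score_component(day["fibre_g"], target=25)
    score += score_component(day["sodium_mg"], target=2000, higher_is_better=False)
    score += score_component(day["saturated_fat_pct_energy"], target=10, higher_is_better=False)
    return score  # 0-50 in this toy version

example_day = {
    "fruit_servings": 1,
    "vegetable_servings": 3,
    "fibre_g": 18,
    "sodium_mg": 2600,
    "saturated_fat_pct_energy": 13,
}
print(f"toy DQI = {toy_diet_quality_index(example_day):.1f} / 50")
```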